#AAAI2022 workshops round-up 1: AI to accelerate science and engineering, interactive machine learning, and health intelligence

28 March 2022


As part of the 36th AAAI Conference on Artificial Intelligence (AAAI2022), 39 different workshops were held, covering a wide range of different AI topics. We hear from the organisers of three of the workshops, who tell us their key takeaways from their respective events.

AI to accelerate science and engineering (AI2ASE)
Organisers: Aryan Deshwal, Cory Simon, Jana Doppa, Syrine Belakaria and Yolanda Gil.

  • Professor Max Welling talked about how simulating molecules and their properties is the next frontier for AI. Molecules are the fundamental building blocks in many real-world applications, from healthcare (drug discovery) to environmental and climate science (carbon capture).
  • Professor Carla Gomes talked about how to incorporate prior knowledge to reduce the dependency on large amounts of data, which are often unavailable in scientific applications where experiments are expensive. Professor Lobato talked about creative ways of using deep generative models for molecule optimization and synthesis.
  • The oral presentations covered some critical real-world applications, including protein design, star formation histories, and ellipsometry. The best paper award went to “Generative Structured Normalizing Flow Gaussian Processes Applied to Spectroscopic Data” from Los Alamos National Laboratory, which developed a new probabilistic learning method that generates spectroscopic data conditioned on a given chemical composition, based on data collected by the ChemCam instrument onboard the Mars rover Curiosity.

Interactive machine learning
Organisers: Elizabeth Daly, Öznur Alkan, Stefano Teso and Wolfgang Stammer.

  • AAAI 2022 hosted the first workshop on interactive machine learning, an area of growing interest as the AI community realizes that machines can indeed be trained very effectively to meet a specified objective, which raises the question of whether they have been given the right objective. The workshop brought together speakers and authors who are tackling how technology can help users and AI systems negotiate learnt objectives.
  • Simone Stumpf shared lessons learnt from several years of pioneering work in this space, emphasizing the difficulties of dealing with real users, who need to see the impact of their feedback, while also navigating noisy feedback. Andreas Holzinger brought forward the perspective of human-centered AI and trust, highlighting its importance for many real-world domains such as healthcare, farming and forest monitoring. Finally, Cynthia Rudin presented her latest research on interpretable neural networks, showing that leveraging deep learning does not need to come at a cost to interpretability.
  • Themes that emerged throughout the day included the challenges of designing explanations that facilitate feedback, and how such feedback can be incorporated and evaluated by the user to instill trust, particularly when expert or domain knowledge differs from the data presented to an AI system. The workshop highlighted many questions and challenges for the research community to consider when helping users and AI solutions align on a common objective and view of the world.

Health intelligence (W3PHIAI-22)
Organisers: Martin Michalowski, Arash Shaban-Nejad, Simone Bianco, Szymon Wilk, David Buckeridge and John Brownstein.

  • Eran Halperin, SVP of AI and Machine Learning at Optum Labs and a professor in the departments of Computer Science, Computational Medicine, Anesthesiology, and Human Genetics at UCLA, gave a keynote talk on using whole-genome methylation patterns as a biomarker for electronic health record (EHR) imputation. Dr Halperin showed that methylation provides better imputation performance than genetic or EHR data alone. This approach uses a new tensor deconvolution of bulk DNA methylation to obtain cell-type-specific methylation, which is in turn used for imputation.
  • Irene Chen from the Massachusetts Institute of Technology (MIT) gave a keynote describing how to leverage machine learning towards equitable healthcare. Dr Chen demonstrated how to adapt disease progression modeling to account for differences in access to care. She also examined how to address algorithmic bias in supervised learning using cost-based metrics of discrimination. Finally, she discussed how to rethink the entire machine learning pipeline through an ethical lens in order to build algorithms that serve the entire patient population.
  • Michal Rosen-Zvi, Director of AI for Accelerated HC&LS Discovery at IBM Research, gave the final keynote, presenting promising results on accelerating biomarker discovery in multimodal data from cancer patients. Dr Rosen-Zvi described advances in the development of computational tools to assess cancer heterogeneity in general, with a focus on AI for imaging in particular.
  • The workshop consisted of a rich program of presentations covering topics including natural language processing, prediction, deep learning, computer vision, knowledge discovery, and COVID. It also included a hackallenge (hackathon + challenge) focused on developing automated methods to diagnose dementia on the basis of language samples. In this event, teams developed standardized analysis pipelines for two publicly available datasets (the hackathon): the Pittsburgh and Wisconsin Longitudinal Study corpora, provided by the DementiaBank consortium. Additionally, each team provided a sound basis for meaningful comparative evaluation in the context of a shared task (the challenge).


AIhub is dedicated to free high-quality information about AI.


©2021 - Association for the Understanding of Artificial Intelligence
