AIhub.org
 

# AAAI2022 workshops round-up 1: AI to accelerate science and engineering, interactive machine learning, and health intelligence


28 March 2022




As part of the 36th AAAI Conference on Artificial Intelligence (AAAI2022), 39 different workshops were held, covering a wide range of different AI topics. We hear from the organisers of three of the workshops, who tell us their key takeaways from their respective events.


AI to accelerate science and engineering (AI2ASE)
Organisers: Aryan Deshwal, Cory Simon, Jana Doppa, Syrine Belakaria and Yolanda Gil.

  • Professor Max Welling talked about how simulating molecules and their properties is the next frontier for AI. Molecules are the fundamental building blocks in many real-world applications, from healthcare (drug discovery) to environmental and climate science (carbon capture).
  • Professor Carla Gomes talked about how to incorporate prior knowledge to reduce the dependency on large amounts of data, which is infeasible to collect in many scientific applications where experiments are expensive. Professor Lobato talked about creative ways of using deep generative models for molecule optimization and synthesis.
  • The oral presentations covered some critical real-world applications, including protein design, star formation histories, and ellipsometry. The best paper award went to “Generative Structured Normalizing Flow Gaussian Processes Applied to Spectroscopic Data” from Los Alamos National Laboratory, which developed a new probabilistic learning method to generate spectroscopic data conditioned on a given chemical composition, based on data collected by the ChemCam instrument onboard the Mars rover Curiosity.

Interactive machine learning
Organisers: Elizabeth Daly, Öznur Alkan, Stefano Teso and Wolfgang Stammer.

  • AAAI 2022 hosted the first workshop on Interactive Machine Learning, an area of growing interest as the AI community realizes that machines can indeed be trained very effectively to meet a specified objective, which raises the question of whether they have been given the right objective. The workshop brought together speakers and authors who are tackling how technology can help users and AI systems negotiate learnt objectives.
  • Simone Stumpf shared lessons learnt after several years pioneering in this space, emphasizing the difficulties of dealing with real users who need to see the impact of their feedback, while also navigating noisy feedback. Andreas Holzinger brought forward the perspective of human-centred AI and trust, highlighting its importance in real-world domains such as healthcare, farming and forest monitoring. Finally, Cynthia Rudin presented her latest research on interpretable neural networks, showing that leveraging deep learning does not need to come at a cost to interpretability.
  • Themes that emerged throughout the day included the challenges of designing explanations that facilitate feedback, and how this feedback can be incorporated and evaluated by the user to instill trust, particularly when expert or domain knowledge differs from the data presented to an AI system. The workshop highlighted many questions and challenges for the research community to consider when helping users and AI solutions align on a common objective and view of the world.

Health intelligence (W3PHIAI-22)
Organisers: Martin Michalowski, Arash Shaban-Nejad, Simone Bianco, Szymon Wilk, David Buckeridge and John Brownstein.

  • Eran Halperin, SVP of AI and Machine Learning at Optum Labs and a professor in the departments of Computer Science, Computational Medicine, Anesthesiology, and Human Genetics at UCLA, gave a keynote talk on using whole-genome methylation patterns as a biomarker for electronic health record (EHR) imputation. Dr Halperin showed that methylation provides better imputation performance than genetic or EHR data. The approach uses a new tensor deconvolution of bulk DNA methylation to obtain cell-type-specific methylation, which is in turn used for imputation.
  • Irene Chen from the Massachusetts Institute of Technology (MIT) gave a keynote describing how to leverage machine learning towards equitable healthcare. Dr Chen demonstrated how to adapt disease progression modeling to account for differences in access to care. She also examined how to address algorithmic bias in supervised learning with cost-based metrics of discrimination. Finally, she discussed how to rethink the entire machine learning pipeline through an ethical lens in order to build algorithms that serve the entire patient population.
  • Michal Rosen-Zvi, Director of AI for Accelerated HC&LS Discovery at IBM Research, gave the final keynote talk, presenting promising results from the acceleration of biomarker discovery in multimodal data of cancer patients. Dr Rosen-Zvi described advances in the development of computational tools to assess cancer heterogeneity in general, with a focus on AI for imaging in particular.
  • The workshop consisted of a rich program of presentations covering topics including natural language processing, prediction, deep learning, computer vision, knowledge discovery, and COVID. It also included a hackallenge (hackathon + challenge) focused on developing automated methods to diagnose dementia on the basis of language samples. Teams developed standardized analysis pipelines for two publicly available datasets, the Pittsburgh and Wisconsin Longitudinal Study corpora, provided by the DementiaBank consortium (the hackathon). Each team also provided a sound basis for meaningful comparative evaluation in the context of a shared task (the challenge).




AIhub is dedicated to free high-quality information about AI.




            AIhub is supported by:





 







 












© 2025 Association for the Understanding of Artificial Intelligence