AIhub.org
 

#AAAI2022 workshops round-up 1: AI to accelerate science and engineering, interactive machine learning, and health intelligence


28 March 2022




As part of the 36th AAAI Conference on Artificial Intelligence (AAAI2022), 39 workshops were held, covering a wide range of AI topics. We hear from the organisers of three of the workshops, who tell us their key takeaways from their respective events.


AI to accelerate science and engineering (AI2ASE)
Organisers: Aryan Deshwal, Cory Simon, Jana Doppa, Syrine Belakaria and Yolanda Gil.

  • Professor Max Welling talked about how simulating molecules and their properties is the next frontier for AI. Molecules are the fundamental building blocks in many real-world applications, from healthcare (drug discovery) to environmental and climate science (carbon capture).
  • Professor Carla Gomes talked about how to incorporate prior knowledge to reduce the dependency on large amounts of data, which are not available in many scientific applications where experiments are expensive. Professor Hernández-Lobato talked about creative ways of using deep generative models for molecule optimization and synthesis.
  • The oral presentations covered some critical real-world applications, including protein design, star formation histories, and ellipsometry. The best paper award went to “Generative Structured Normalizing Flow Gaussian Processes Applied to Spectroscopic Data” from Los Alamos National Laboratory. The paper develops a new probabilistic learning method that generates spectroscopic data conditioned on a given chemical composition, based on data collected by the ChemCam instrument onboard the Mars rover Curiosity.

Interactive machine learning
Organisers: Elizabeth Daly, Öznur Alkan, Stefano Teso and Wolfgang Stammer.

  • AAAI 2022 hosted the first workshop on Interactive Machine Learning, an area of growing interest as the AI community realizes that machines can indeed be trained very effectively to meet a specified objective, which raises the question of whether they have been given the right objective. The workshop brought together speakers and authors who are tackling how technology can help users and AI systems negotiate learnt objectives.
  • Simone Stumpf shared lessons learnt from several years of pioneering work in this space, emphasizing the difficulties of dealing with real users who need to see the impact of their feedback, while also navigating noisy feedback. Andreas Holzinger brought forward the perspective of human-centered AI and trust, highlighting its importance in many real-world domains such as healthcare, farming and forest monitoring. Finally, Cynthia Rudin presented her latest research on interpretable neural networks, showing that leveraging deep learning does not need to come at a cost to interpretability.
  • Themes that emerged throughout the day included the challenges of designing explanations to facilitate feedback, and how that feedback can be incorporated and evaluated by the user to instill trust, particularly when expert or domain knowledge differs from the data presented to an AI system. The workshop highlighted many questions and challenges for the research community to consider when helping users and AI solutions align on a common objective and view of the world.

Health intelligence (W3PHIAI-22)
Organisers: Martin Michalowski, Arash Shaban-Nejad, Simone Bianco, Szymon Wilk, David Buckeridge and John Brownstein.

  • Eran Halperin, SVP of AI and Machine Learning at Optum Labs and a professor in the departments of Computer Science, Computational Medicine, Anaesthesiology, and Human Genetics at UCLA, gave a keynote talk on using whole-genome methylation patterns as a biomarker for electronic health record (EHR) imputation. Dr Halperin showed that methylation provides better imputation performance than genetic or EHR data. The approach uses a new tensor deconvolution of bulk DNA methylation to obtain cell-type-specific methylation, which is in turn used for imputation.
  • Irene Chen from the Massachusetts Institute of Technology (MIT) gave a keynote describing how to leverage machine learning for equitable healthcare. Dr Chen demonstrated how to adapt disease progression modeling to account for differences in access to care. She also examined how to address algorithmic bias in supervised learning with cost-based discrimination metrics. Finally, she discussed how to rethink the entire machine learning pipeline through an ethical lens to build algorithms that serve the entire patient population.
  • Michal Rosen-Zvi, Director of AI for Accelerated HC&LS Discovery at IBM Research, gave the final keynote talk, presenting promising results on accelerating biomarker discovery in multimodal data from cancer patients. Dr Rosen-Zvi described advances in the development of computational tools to assess cancer heterogeneity in general, focusing on AI for imaging in particular.
  • The workshop featured a rich program of presentations covering topics including natural language processing, prediction, deep learning, computer vision, knowledge discovery, and COVID. It also included a hackallenge (hackathon + challenge) focused on developing automated methods to diagnose dementia on the basis of language samples. Teams developed standardized analysis pipelines for two publicly available datasets (the hackathon): the Pittsburgh and Wisconsin Longitudinal Study corpora, provided by the DementiaBank consortium. Each team also provided a sound basis for meaningful comparative evaluation in the context of a shared task (the challenge).




AIhub is dedicated to free high-quality information about AI.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack






 















©2026.02 - Association for the Understanding of Artificial Intelligence