AIhub.org
 

#AAAI2022 workshops round-up 1: AI to accelerate science and engineering, interactive machine learning, and health intelligence

by
28 March 2022




As part of the 36th AAAI Conference on Artificial Intelligence (AAAI2022), 39 different workshops were held, covering a wide range of different AI topics. We hear from the organisers of three of the workshops, who tell us their key takeaways from their respective events.


AI to accelerate science and engineering (AI2ASE)
Organisers: Aryan Deshwal, Cory Simon, Jana Doppa, Syrine Belakaria and Yolanda Gil.

  • Professor Max Welling talked about how simulating molecules and their properties is the next frontier for AI. Molecules are the fundamental building blocks in many real-world applications, from healthcare (drug discovery) to environmental and climate sciences (carbon capture).
  • Professor Carla Gomes talked about how to incorporate prior knowledge to reduce the dependency on large amounts of data, which are often infeasible to collect in scientific applications where experiments are expensive. Professor Lobato talked about creative ways of using deep generative models for molecule optimization and synthesis.
  • The oral presentations covered some critical real-world applications, including protein design, star formation histories, and ellipsometry. The best paper award went to “Generative Structured Normalizing Flow Gaussian Processes Applied to Spectroscopic Data” from Los Alamos National Laboratory, which developed a new probabilistic learning method to generate spectroscopic data conditioned on a given chemical composition, based on data collected by the ChemCam instrument onboard the Mars rover Curiosity.

Interactive machine learning
Organisers: Elizabeth Daly, Öznur Alkan, Stefano Teso and Wolfgang Stammer.

  • AAAI 2022 hosted the first workshop on interactive machine learning, an area of growing interest as the AI community realizes that machines can be trained very effectively to meet a specified objective, which raises the question of whether they have been given the right objective. The workshop brought together speakers and authors who are tackling how technology can help users and AI systems negotiate learnt objectives.
  • Simone Stumpf shared lessons learnt from several years of pioneering work in this space, emphasizing the difficulties of dealing with real users, who need to see the impact of their feedback, while also navigating noisy feedback. Andreas Holzinger brought forward the perspective of human-centered AI and trust, highlighting its importance for many real-world domains such as healthcare, farming and forest monitoring. Finally, Cynthia Rudin presented her latest research on interpretable neural networks, showing that leveraging deep learning does not need to come at a cost to interpretability.
  • Themes that emerged throughout the day included the challenges of designing explanations that facilitate feedback, and how that feedback can be incorporated and evaluated by the user to instill trust, particularly when expert or domain knowledge differs from the data presented to an AI system. The workshop highlighted many questions and challenges for the research community to consider when helping users and AI solutions align on a common objective and view of the world.

Health intelligence (W3PHIAI-22)
Organisers: Martin Michalowski, Arash Shaban-Nejad, Simone Bianco, Szymon Wilk, David Buckeridge and John Brownstein.

  • Eran Halperin, SVP of AI and machine learning at Optum Labs and a professor in the departments of Computer Science, Computational Medicine, Anaesthesiology, and Human Genetics at UCLA, gave a keynote talk on using whole-genome methylation patterns as a biomarker for electronic health record (EHR) imputation. Dr Halperin showed that methylation gives better imputation performance than genetic or EHR data. The approach uses a new tensor deconvolution of bulk DNA methylation to obtain cell-type-specific methylation, which is in turn used for imputation.
  • Irene Chen from the Massachusetts Institute of Technology (MIT) gave a keynote describing how to leverage machine learning towards equitable healthcare. Dr Chen demonstrated how to adapt disease progression modeling to account for differences in access to care. She also examined how to address algorithmic bias in supervised learning with cost-based metrics of discrimination. Finally, she discussed how to rethink the entire machine learning pipeline with an ethical lens, building algorithms that serve the entire patient population.
  • Michal Rosen-Zvi, Director of AI for Accelerated HC&LS Discovery at IBM Research, gave the final keynote talk, presenting promising results from the acceleration of biomarker discovery in multimodal data of cancer patients. Dr Rosen-Zvi described advances in the development of computational tools to assess cancer heterogeneity in general, focusing on AI for imaging in particular.
  • The workshop featured a rich program of presentations covering topics including natural language processing, prediction, deep learning, computer vision, knowledge discovery, and COVID. It also included a hackallenge (hackathon + challenge) focused on developing automated methods to diagnose dementia on the basis of language samples. Teams developed standardized analysis pipelines for two publicly available datasets (the hackathon): the Pittsburgh and Wisconsin Longitudinal Study corpora, provided by the DementiaBank consortium. Each team also provided a sound basis for meaningful comparative evaluation in the context of a shared task (the challenge).




AIhub is dedicated to free high-quality information about AI.




AIhub is supported by:












©2024 - Association for the Understanding of Artificial Intelligence


 











