AIhub.org
 

#AAAI2022 workshops round-up 1: AI to accelerate science and engineering, interactive machine learning, and health intelligence

by
28 March 2022




As part of the 36th AAAI Conference on Artificial Intelligence (AAAI2022), 39 workshops were held, covering a wide range of AI topics. We hear from the organisers of three of the workshops, who tell us the key takeaways from their respective events.


AI to accelerate science and engineering (AI2ASE)
Organisers: Aryan Deshwal, Cory Simon, Jana Doppa, Syrine Belakaria and Yolanda Gil.

  • Professor Max Welling talked about how simulating molecules and their properties is the next frontier for AI. Molecules are the fundamental building blocks in many real-world applications, from healthcare (drug discovery) to environmental and climate sciences (carbon capture).
  • Professor Carla Gomes talked about how to incorporate prior knowledge to reduce the dependency on large amounts of data, which are often unavailable in scientific applications where experiments are expensive. Professor Lobato talked about creative ways of using deep generative models for molecule optimization and synthesis.
  • The oral presentations covered some critical real-world applications, including protein design, star formation histories, and ellipsometry. The best paper award went to “Generative Structured Normalizing Flow Gaussian Processes Applied to Spectroscopic Data” from Los Alamos National Laboratory, which developed a new probabilistic learning method to generate spectroscopic data conditioned on a given chemical composition, based on data collected by the ChemCam instrument onboard the Mars rover Curiosity.

Interactive machine learning
Organisers: Elizabeth Daly, Öznur Alkan, Stefano Teso and Wolfgang Stammer.

  • AAAI 2022 hosted the first workshop on interactive machine learning, an area of growing interest as the AI community realizes that machines can indeed be trained very effectively to meet a specified objective, which raises the question of whether they have been given the right objective. The workshop brought together speakers and authors who are tackling how technology can help users and AI systems negotiate learnt objectives.
  • Simone Stumpf shared lessons learnt from several years of pioneering work in this space, emphasizing the difficulties of dealing with real users, who need to see the impact of their feedback, while also navigating noisy feedback. Andreas Holzinger brought forward the perspective of human-centered AI and trust, looking at its importance for many real-world domains such as healthcare, farming and forest monitoring. Finally, Cynthia Rudin presented her latest research on interpretable neural networks, showing that leveraging deep learning does not need to come at a cost to interpretability.
  • Themes that emerged throughout the day included the challenges of designing explanations to facilitate feedback, and how this feedback can be incorporated and evaluated by the user to instill trust, particularly when expert or domain knowledge differs from the data presented to an AI system. The workshop highlighted many questions and challenges for the research community to consider when helping users and AI solutions align on a common objective and view of the world.

Health intelligence (W3PHIAI-22)
Organisers: Martin Michalowski, Arash Shaban-Nejad, Simone Bianco, Szymon Wilk, David Buckeridge and John Brownstein.

  • Eran Halperin, SVP of AI and Machine Learning at Optum Labs and a professor in the departments of Computer Science, Computational Medicine, Anesthesiology, and Human Genetics at UCLA, gave a keynote talk on using whole-genome methylation patterns as a biomarker for electronic health record (EHR) imputation. Dr Halperin showed that methylation provides better imputation performance than genetic or EHR data. This approach uses a new tensor deconvolution of bulk DNA methylation to obtain cell-type-specific methylation, which is in turn used for imputation.
  • Irene Chen from the Massachusetts Institute of Technology (MIT) gave a keynote describing how to leverage machine learning for equitable healthcare. Dr Chen demonstrated how to adapt disease progression modeling to account for differences in access to care. She also examined how to address algorithmic bias in supervised learning with cost-based metrics. Finally, she discussed how to rethink the entire machine learning pipeline through an ethical lens to build algorithms that serve the entire patient population.
  • Michal Rosen-Zvi, Director of AI for Accelerated HC&LS Discovery at IBM Research, gave the final keynote talk, presenting promising results from the acceleration of biomarker discovery in multimodal data from cancer patients. Dr Rosen-Zvi described advances in the development of computational tools to assess cancer heterogeneity in general, focusing on AI for imaging in particular.
  • The workshop consisted of a rich program of presentations covering topics including natural language processing, prediction, deep learning, computer vision, knowledge discovery, and COVID. It also included a hackallenge (hackathon + challenge) focused on developing automated methods to diagnose dementia on the basis of language samples. In this challenge, teams developed standardized analysis pipelines for two publicly available datasets (the hackathon): the Pittsburgh and Wisconsin Longitudinal Study corpora, provided by the DementiaBank consortium. Each team also provided a sound basis for meaningful comparative evaluation in the context of a shared task (the challenge).




AIhub is dedicated to free high-quality information about AI.










©2024 - Association for the Understanding of Artificial Intelligence


 











