#AAAI2023 workshops round-up 2: health intelligence and privacy-preserving AI

21 March 2023




As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 workshops were held, covering a wide range of AI topics. We continue our round-up of these workshops with summaries from the organisers of two of them, who tell us the key takeaways from their respective events.


Health Intelligence (W3PHIAI)

Organisers: Arash Shaban-Nejad, Martin Michalowski, Simone Bianco, Szymon Wilk, David L. Buckeridge, John S. Brownstein.

This workshop included contributions spanning theory, methods, and systems for web-based healthcare, with a focus on applications in population and personalized health. The main takeaways from the workshop were as follows:

  • Explainability in AI-based health applications is gaining traction. While methodological advances have been made to improve diagnosis, prognosis, and treatment, there is a need to explain the decisions being made in the context of care. Several interesting statistical, textual, and visual methods were presented, with compelling applications to health.
  • The identification of biomarkers to describe, define, and predict biological age is a very active area of research. Aging affects organisms differently, and chronological age does not always coincide with biological age. As part of the workshop’s hackathon dedicated to this problem, novel ideas were presented on how to use multimodal health data to predict biological age.
  • The papers presented at the workshop spanned a range of topics. Past editions were more heavily focused on deep learning methods; this iteration saw a more diverse set of research, on topics including natural language processing, explainability, classification, and fairness/ethics, among others.

Privacy Preserving AI (PPAI)

Organisers: Ferdinando Fioretto, Catuscia Palamidessi, Pascal Van Hentenryck.

This workshop focussed on the theoretical and practical challenges of designing privacy-preserving AI systems, including multidisciplinary components such as policy, legal issues, and the societal impact of privacy in AI. The three main takeaways from the event were:

  • Privacy and policy in AI applications
    During the workshop, the invited talk by Kobbi Nissim, a renowned privacy expert, provided valuable insights into the compliance of machine learning (ML) systems with existing privacy legal requirements. The discussion revolved around the need for collaborations between AI practitioners, privacy experts, and policy makers. Participants also debated the effectiveness of current privacy-preserving techniques in meeting legal requirements and the potential consequences of non-compliance.
  • Differential privacy for data release tasks
    The workshop also included a comprehensive discussion on the role of differential privacy in data release tasks. The value of synthetic dataset generators was a major topic, as these generators can create datasets that maintain the statistical properties of the original data while ensuring privacy. Participants also highlighted the need to compare traditional data anonymization techniques, such as k-anonymity and cell suppression, with differentially private algorithms (a minimal sketch contrasting these approaches follows this list).
  • Challenges in auditing differentially private ML models
    Another key discussion point during the workshop was the challenge of auditing differentially private ML models. The discussion emphasized the difficulties in verifying the privacy guarantees of such models and the potential trade-offs between privacy and model performance. The need for standardized metrics, tools, and methodologies to assess the privacy-preserving properties of AI models was also underscored.
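
To make that comparison concrete, below is a minimal illustrative sketch, not taken from the workshop, contrasting the two approaches: a k-anonymity check on a released table versus a count released through the Laplace mechanism, a standard differentially private algorithm. The column names, toy records, and epsilon value are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: contrasts a k-anonymity check with a differentially
# private (Laplace-mechanism) count release. Data and parameters are made up.
import numpy as np
import pandas as pd

def is_k_anonymous(df, quasi_identifiers, k=5):
    """True if every combination of quasi-identifier values appears at least k times."""
    return df.groupby(quasi_identifiers).size().min() >= k

def dp_count(true_count, epsilon=1.0, rng=None):
    """Release a count with Laplace noise calibrated to sensitivity 1 (epsilon-DP)."""
    rng = rng if rng is not None else np.random.default_rng()
    return float(true_count) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Toy table of hypothetical health records.
records = pd.DataFrame({
    "age_band":  ["30-39", "30-39", "40-49", "40-49", "40-49"],
    "zip3":      ["100",   "100",   "200",   "200",   "200"],
    "diagnosis": ["A",     "B",     "A",     "A",     "C"],
})

print(is_k_anonymous(records, ["age_band", "zip3"], k=2))               # True: each group has >= 2 rows
print(dp_count(int((records["diagnosis"] == "A").sum()), epsilon=0.5))  # noisy count of diagnosis "A"
```

The two guarantees work at different levels: k-anonymity constrains the released table itself, while differential privacy perturbs each query answer with noise calibrated to its sensitivity, which is part of what makes a direct comparison between the two families of techniques non-trivial.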

You can read the first round-up in our series of AAAI workshop summaries here.


