
#AAAI2023 workshops round-up 2: health intelligence and privacy-preserving AI


21 March 2023




As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 workshops were held, covering a wide range of AI topics. We continue our round-up with summaries from the organisers of two of these workshops, who share their key takeaways from their respective events.


Health Intelligence (W3PHIAI)

Organisers: Arash Shaban-Nejad, Martin Michalowski, Simone Bianco, Szymon Wilk, David L. Buckeridge, John S. Brownstein.

This workshop included contributions spanning theory, methods, and systems for web-based healthcare, with a focus on applications in population and personalized health. The main takeaways from the workshop were as follows:

  • Explainability in AI-based health applications is gaining traction. While advances have been made methodologically to improve diagnosis, prognosis, and treatment, there is a need to explain decisions being made in the context of care. Several interesting statistical, textual, and visual methods were presented with compelling applications to health.
  • The identification of biomarkers to describe, define, and predict biological age is a very active topic of research. Aging affects organisms differently, and chronological age does not always coincide with biological age. As part of the workshop’s hackathon dedicated to this problem, novel ideas were presented on how multimodal health data can be used to predict biological age.
  • The set of papers presented at the workshop spanned a range of topics. Past workshops were more heavily focused on deep learning methods. This iteration saw a more diverse set of research presented, on topics including natural language processing, explainability, classification, and fairness/ethics.

Privacy Preserving AI (PPAI)

Organisers: Ferdinando Fioretto, Catuscia Palamidessi, Pascal Van Hentenryck.

This workshop focussed on both the theoretical and practical challenges related to the design of privacy-preserving AI systems, including multidisciplinary components, such as policy, legal issues, and societal impact of privacy in AI. The three main takeaways from the event were:

  • Privacy and policy in AI applications
    During the workshop, the invited talk by Kobbi Nissim, a renowned privacy expert, provided valuable insights into the compliance of machine learning (ML) systems with existing privacy legal requirements. The discussion revolved around the need for collaborations between AI practitioners, privacy experts, and policy makers. Participants also debated the effectiveness of current privacy-preserving techniques in meeting legal requirements and the potential consequences of non-compliance.
  • Differential privacy for data release tasks
    The workshop also included a comprehensive discussion on the role of differential privacy in data release tasks. The value of synthetic dataset generators was a major topic, as these generators can create datasets that maintain the statistical properties of the original data while ensuring privacy. Participants also highlighted the need to compare traditional data anonymization techniques, such as k-anonymity and cell suppression, with differentially private algorithms.
  • Challenges in auditing differentially private ML models
    Another key discussion point was the challenge of auditing differentially private ML models. The discussion emphasized the difficulties in verifying the privacy guarantees of such models and the potential trade-offs between privacy and model performance. The need for standardized metrics, tools, and methodologies to assess the privacy-preserving properties of AI models was also underscored.
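To make the differential privacy ideas above concrete, here is a minimal sketch of the classic Laplace mechanism for privately releasing a numeric query, such as a count. This is an illustration only, not a method presented at the workshop; the function name and the toy dataset are our own.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Adds Laplace noise with scale sensitivity/epsilon, the standard
    mechanism for private release of numeric query answers.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately release a counting query over a toy dataset.
# A count has sensitivity 1: adding or removing one person changes it by at most 1.
ages = np.array([34, 45, 29, 61, 52, 40, 38])
true_count = int(np.sum(ages > 40))  # exact answer: 3
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0)
print(f"true count: {true_count}, private release: {private_count:.2f}")
```

Smaller values of epsilon add more noise and give stronger privacy, which is exactly the privacy/utility trade-off the workshop discussions highlighted.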

You can read the first round-up in our series of AAAI workshop summaries here.





AIhub is dedicated to free high-quality information about AI.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack



