As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 workshops were held, covering a wide range of AI topics. We continue our round-up of these workshops with summaries from the organisers of two of the events, who tell us their key takeaways.
Health Intelligence (W3PHIAI)
Organisers: Arash Shaban-Nejad, Martin Michalowski, Simone Bianco, Szymon Wilk, David L. Buckeridge, John S. Brownstein.
This workshop included contributions spanning theory, methods and systems for web-based healthcare, with a focus on applications in population and personalized health. The main takeaways from the workshop were as follows:
- Explainability in AI-based health applications is gaining traction. While advances have been made methodologically to improve diagnosis, prognosis, and treatment, there is a need to explain decisions being made in the context of care. Several interesting statistical, textual, and visual methods were presented with compelling applications to health.
- The identification of biomarkers to describe, define, and predict biological age is a very active topic of research. Aging affects organisms differently, and chronological age does not always coincide with biological age. As part of the workshop’s hackathon dedicated to this problem, novel ideas were presented on how to use multimodal health data to predict biological age.
- The set of papers presented at the workshop spanned a range of topics. Past workshops were more heavily focused on deep learning methods. This iteration of the workshop saw a more diverse set of research presented, on topics including natural language processing, explainability, classification, and fairness/ethics.
Privacy Preserving AI (PPAI)
Organisers: Ferdinando Fioretto, Catuscia Palamidessi, Pascal Van Hentenryck.
This workshop focussed on both the theoretical and practical challenges related to the design of privacy-preserving AI systems, including multidisciplinary aspects such as policy, legal issues, and the societal impact of privacy in AI. The three main takeaways from the event were:
- Privacy and policy in AI applications
During the workshop, the invited talk by Kobbi Nissim, a renowned privacy expert, provided valuable insights into the compliance of machine learning (ML) systems with existing privacy legal requirements. The discussion revolved around the need for collaborations between AI practitioners, privacy experts, and policy makers. Participants also debated the effectiveness of current privacy-preserving techniques in meeting legal requirements and the potential consequences of non-compliance.
- Differential privacy for data release tasks
The workshop also included a comprehensive discussion on the role of differential privacy in data release tasks. The value of synthetic dataset generators was a major topic, as these generators can create datasets that maintain the statistical properties of the original data while ensuring privacy. Participants also highlighted the need to compare traditional data anonymization techniques, such as k-anonymity and cell suppression, with differentially private algorithms (a minimal illustrative sketch appears after this list).
- Challenges in auditing differential privacy ML models
Another key discussion point during the workshop was the challenges faced when auditing differentially private ML models. The discussion emphasized the difficulties in verifying the privacy guarantees of differential privacy models and the potential trade-offs between privacy and model performance. The need for standardized metrics, tools, and methodologies to assess the privacy-preserving properties of AI models was also underscored.
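To make the last two points more concrete, the sketch below shows the Laplace mechanism, one of the basic building blocks of differentially private data release, applied to a simple count query, followed by a crude empirical check of the guarantee in the spirit of the auditing discussion. The data, the `laplace_count` helper, and all parameter values are hypothetical illustrations, not material from the workshop; real data-release pipelines (e.g. synthetic dataset generators) and audits are considerably more involved.

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.
    A counting query has sensitivity 1: adding or removing one record changes
    the answer by at most 1, so noise with scale 1/epsilon suffices."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)

# Hypothetical sensitive data: ages of 1,000 patients in a cohort.
ages = rng.integers(20, 95, size=1000)
true_count = int(np.sum(ages > 65))
eps = 0.5

print("noisy release:", round(laplace_count(true_count, eps, rng), 1))

# Crude empirical check: release the count many times for two neighbouring
# datasets (counts differing by one record) and verify that a simple threshold
# distinguisher gains no more advantage than exp(eps) allows.
n_trials = 100_000
releases_d  = true_count + rng.laplace(scale=1.0 / eps, size=n_trials)
releases_d2 = (true_count + 1) + rng.laplace(scale=1.0 / eps, size=n_trials)
threshold = true_count + 0.5
p = np.mean(releases_d  > threshold)   # "attack" success rate on dataset D
q = np.mean(releases_d2 > threshold)   # success rate on neighbouring dataset D'
print("empirical ratio:", round(q / p, 2), "<= exp(eps) =", round(np.exp(eps), 2))
```

The ratio printed at the end should stay below exp(eps) (up to sampling error); auditing tools discussed in this space typically turn such distinguishing experiments into statistical lower bounds on the effective epsilon of a trained model or release mechanism.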
You can read the first round-up in our series of AAAI workshop summaries here.
AIhub is dedicated to free high-quality information about AI.