AIhub.org
 

Interview with Lea Demelius: Researching differential privacy


by Lucy Smith
25 March 2025




In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the latest interview, we hear from Lea Demelius, who is researching differential privacy.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I am studying at the University of Technology Graz in Austria. My research focuses on differential privacy, which is widely regarded as the state-of-the-art for protecting privacy in data analysis and machine learning. I investigate the trade-offs and synergies that arise between various requirements for trustworthy AI – in particular privacy and fairness, with the goal of advancing the adoption of responsible machine learning models and shedding light on the practical and societal implications of integrating differential privacy into real-world systems.
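For readers unfamiliar with the mechanics, the core idea of differential privacy can be illustrated with the classic Laplace mechanism: a query's true answer is perturbed with noise calibrated to the query's sensitivity and a privacy parameter epsilon. This is a minimal sketch in plain Python, not the machinery actually used to train deep models; the function names are my own.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = random.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: adding/removing one person changes a counting
    # query by at most `sensitivity`, so noise with scale
    # sensitivity / epsilon gives epsilon-differential privacy.
    return true_count + laplace_noise(sensitivity / epsilon)
```

Smaller epsilon means stronger privacy but noisier answers — the privacy–utility trade-off mentioned above in its simplest form.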

Could you give us an overview of the research you’ve carried out so far during your PhD?

At the beginning of my PhD, I conducted an extensive literature analysis on recent advances of differential privacy in deep learning. After a long and thorough review process, the results are now published in ACM Computing Surveys. I also worked on privacy and security in AI in general, focusing on data protection during computations, which includes not only differential privacy but also other privacy-enhancing technologies such as homomorphic encryption, multi-party computation, and federated learning.

Over the course of my literature analysis, I came across an intriguing line of research that showed that differential privacy has a disparate impact on model accuracy. While the trade-off between privacy and utility – such as overall accuracy – is a well-recognized challenge, these studies show that boosting privacy protections in machine learning models with differential privacy impacts certain sub-groups disproportionately, raising a significant fairness concern. This trade-off between privacy and fairness is far less understood, so I decided to address some open questions I identified. In particular, I examined how the choice of a) metrics and b) hyperparameters affect the trade-off.
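The mechanism behind that disparate impact is easiest to see in DP-SGD, the standard algorithm for training deep models with differential privacy: each example's gradient is clipped to a fixed norm before calibrated Gaussian noise is added, and examples from under-represented sub-groups often have larger gradients, so clipping distorts their contribution more. A minimal sketch of one DP-SGD step, in plain Python with hypothetical function names:

```python
import math
import random

def clip(grad, max_norm):
    # Scale a per-example gradient down so its L2 norm is at most max_norm.
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_sgd_step(per_example_grads, max_norm, noise_multiplier):
    # One noisy gradient-averaging step as in DP-SGD:
    # clip each example's gradient, sum, add Gaussian noise, average.
    clipped = [clip(g, max_norm) for g in per_example_grads]
    dim = len(per_example_grads[0])
    n = len(per_example_grads)
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    noisy = [s + random.gauss(0.0, noise_multiplier * max_norm) for s in summed]
    return [x / n for x in noisy]
```

Examples whose gradients exceed `max_norm` are scaled down before averaging, which is exactly where the accuracy loss can fall unevenly across sub-groups.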

Is there an aspect of your research that has been particularly interesting?

I love that my research challenges me to think in a highly technical and logical way – differential privacy, after all, is a rigorous mathematical framework – while also keeping the broader socio-technological context in mind, considering both the societal implications of technology and society’s expectations around privacy and fairness.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

I plan to continue exploring the trade-offs between (differential) privacy, utility and fairness. Additionally, I am interested in practical attacks on sensitive data and machine learning models, as they can both inform our decisions on how to balance these trade-offs and reveal new vulnerabilities that require novel solutions. In the long term, I might broaden my research to other aspects of trustworthy AI, such as explainability or robustness, where there are also interesting trade-offs and potential synergies to investigate.

What made you want to study AI?

At first, my motivation was primarily a fascination with the novel technologies and opportunities that AI brought with it. I was eager to understand how machine learning models work and how they can be improved. But I quickly realized that for my PhD, that would not be enough for me. I want to work on something that not only fascinates me, but also benefits society. Given the widespread use of AI models today, I believe it is crucial to develop technical solutions that enhance the trustworthiness of AI, such as improving privacy and fairness, but also robustness and explainability. With my research, I aim to contribute to the adoption of machine learning systems that are more aligned with ethical principles and societal needs.

What advice would you give to someone thinking of doing a PhD in the field?

Your advisor and team are key. Make sure that you can work well together: keeping up in a field as fast-paced as AI is so much easier as a team. And keep in mind that every PhD journey is different – there are so many influencing factors, both in and out of your control – so avoid comparing your journey too closely with those of others.

Could you tell us an interesting (non-AI related) fact about you?

I love singing, especially with others. I founded an a cappella ensemble when I moved to Graz and I am part of the Styrian youth choir. My family also loves to sing whenever we all get together.

About Lea

Lea Demelius is a PhD student at the University of Technology, Graz (Austria) and the Know Center Research GmbH. Her research revolves around responsible AI, with a focus on differential privacy and fairness. She is particularly interested in the trade-offs and synergies that arise between various requirements for trustworthy AI, as well as the practical and societal implications of integrating differential privacy into real-world systems.





Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:








©2024 - Association for the Understanding of Artificial Intelligence


 











