
2024 AAAI / ACM SIGAI Doctoral Consortium interviews compilation


20 December 2024





Each year, a small group of PhD students is chosen to participate in the AAAI/SIGAI Doctoral Consortium. This initiative gives the students an opportunity to discuss and explore their research interests and career objectives in an interdisciplinary workshop, together with a panel of established researchers. During 2024, we met with some of the students to find out more about their research and the doctoral consortium experience. They also shared their advice for prospective PhD students. Here, we collate those interviews.


Interview with Changhoon Kim: Enhancing the reliability of image generative AI


Changhoon Kim completed his PhD in Computer Engineering at Arizona State University. His primary research centered on the creation of trustworthy and responsible machine learning systems. He has devoted recent years to the development of user-attribution methods for generative models. His research extends to various modalities, including image, audio, video, and multi-modal generative models.


Interview with Fiona Anting Tan: Researching causal relations in text

Fiona Anting Tan is a PhD student at the National University of Singapore and a recipient of the President’s Graduate Fellowship (PGF). Her research interests span natural language processing and reasoning, with a focus on extracting causal relationships from text for various applications. She has had multiple research internships and attachments with companies including Amazon, American Express, Panasonic and Changi Airport Group.


Interview with Elizabeth Ondula: Applied reinforcement learning


Elizabeth Ondula is a PhD student in Computer Science at the University of Southern California in Los Angeles. She graduated as an Electrical Engineer from the Technical University of Kenya. She is a member of the autonomous networks research group, and her research focuses on applied reinforcement learning in public health settings. Her other research interests include understanding decision processes for large language model (LLM)-based multi-agent systems and applying graph neural networks to autonomous exploration.


Interview with Célian Ringwald: Natural language processing and knowledge graphs


Célian Ringwald is a PhD student at the Université Côte d’Azur in the Inria Wimmics team. His research focuses on the combination of large language models with knowledge graphs for relation extraction. He graduated from Université Lyon 2 with a Master’s degree in Data Science and worked for three years for a startup specialising in NLP applications. Having a passion for research, he pursued a second Master’s degree, in Digital Humanities, which led him to his PhD topic.


Interview with Aaquib Tabrez: Explainability and human-robot interaction


Aaquib Tabrez is a PhD candidate at the University of Colorado Boulder in the Computer Science Department, where he is advised by Professor Brad Hayes. Aaquib previously received a B.Tech. in Mechanical Engineering from NITK Surathkal (India). He works at the intersection of explainability and human-robot interaction, aiming to leverage and enhance multimodal human-machine communication for value alignment and fostering appropriate trust within human-robot teams.


Interview with Raffaele Galliera: Deep reinforcement learning for communication networks


Raffaele Galliera is a PhD student in Artificial Intelligence as part of the Intelligent Systems and Robotics joint program between The University of West Florida (UWF) and the Institute for Human and Machine Cognition (IHMC). He holds degrees in Computer Science and Electronics Engineering from the University of Ferrara and a Master’s in Computer Science from UWF. His research mainly focuses on (multi-agent) reinforcement learning to optimize communication tasks and the deployment of learned policies and strategies in communication protocols.


Interview with Amine Barrak: Serverless computing and machine learning

Amine Barrak is a PhD candidate in Software Engineering at the University of Quebec, specializing in the integration of serverless computing with distributed machine learning training. During his Master’s degree, which he received from Polytechnique Montreal, his focus was on security vulnerability changes in software code.


Interview with Bálint Gyevnár: Creating explanations for AI-based decision-making systems


Bálint Gyevnár is a PhD student at the University of Edinburgh supervised by Stefano Albrecht, Shay Cohen, and Chris Lucas. He focuses on building socially aligned explainable AI technologies for end users, drawing on research from multiple disciplines including autonomous systems, cognitive science, natural language processing, and the law. His goal is to create safer and more trustworthy autonomous systems that everyone can use and trust appropriately regardless of their knowledge of AI systems.


Interview with Mike Lee: Communicating AI decision-making through demonstrations


Mike Lee is currently a postdoc in the Robotics Institute at Carnegie Mellon University, supported by Professor Henny Admoni and Professor Reid Simmons. He received his PhD in robotics from Carnegie Mellon University and his bachelor’s in mechanical and aerospace engineering from Princeton University. Mike researches explainable AI, and his PhD thesis explored how AI agents can teach their decision making to humans by selecting and showing demonstrations of AI behavior in key contexts.


Interview with Salena Torres Ashton: Causality and natural language


Salena Torres Ashton will complete her PhD in Information Science at the University of Arizona in 2025. She researches causality and formal semantics in natural language, examining their effects on how people phrase questions. Salena currently teaches “Introduction to Machine Learning” at the University of Arizona, and works as a data science mentor for Posit Academy. Before attending graduate school, Salena worked for over 25 years as a professional genealogist and holds a BA in History.


Interview with Sukanya Mandal: Developing a cognitive digital twin framework for smart cities


Sukanya Mandal is a PhD student at Dublin City University, Ireland, researching the development of a privacy-preserving federated learning (PPFL)-based cognitive digital twin (CDT) framework for smart cities. This interdisciplinary project combines knowledge graphs, cognitive digital twins, and privacy-preserving federated learning. Sukanya has a decade of industry experience in AI and data science.


Interview with Yuan Yang: Working at the intersection of AI and cognitive science


Yuan Yang works in the intersecting area of AI and cognitive science, focusing on how AI can help understand human core cognitive abilities (e.g., fluid intelligence, visual abstract reasoning, analogy making, and mental imagery), and conversely, how such understanding can facilitate the development of AI. He has a PhD in computer science from Vanderbilt University, and recently joined the College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, as an associate professor.


Interview with Pulkit Verma: Towards safe and reliable behavior of AI agents


Pulkit Verma is a Postdoctoral Associate at the Interactive Robotics Group at the Massachusetts Institute of Technology, where he works with Professor Julie Shah. His research concerns the safe and reliable behavior of taskable AI agents. He investigates the minimal set of requirements in an AI system that would enable a user to assess and understand the limits of its safe operability. He received his PhD in Computer Science from the School of Computing and Augmented Intelligence, Arizona State University, where he worked with Professor Siddharth Srivastava.





Lucy Smith is Senior Managing Editor for AIhub.

