Each year, a small group of PhD students is chosen to participate in the AAAI/SIGAI Doctoral Consortium. This initiative gives the students an opportunity to discuss and explore their research interests and career objectives in an interdisciplinary workshop, together with a panel of established researchers. During 2024, we met with some of the students to find out more about their research and the Doctoral Consortium experience. They also shared their advice for prospective PhD students. Here, we collate the interviews.
Changhoon Kim completed his PhD in Computer Engineering at Arizona State University. His primary research centered on the creation of trustworthy and responsible machine learning systems. He has devoted recent years to the development of user-attribution methods for generative models. His research extends to various modalities, including image, audio, video, and multi-modal generative models.
Fiona Anting Tan is a PhD student at the National University of Singapore and a recipient of the President’s Graduate Fellowship (PGF). Her research interests span natural language processing and reasoning, with a focus on extracting causal relationships from text for various applications. She has had multiple research internships and attachments with companies including Amazon, American Express, Panasonic and Changi Airport Group.
Elizabeth Ondula is a PhD student in Computer Science at the University of Southern California in Los Angeles. She graduated in Electrical Engineering from the Technical University of Kenya. She is a member of the Autonomous Networks Research Group, and her research focuses on applied reinforcement learning in public health settings. Her other research interests include understanding decision processes for large language model (LLM)-based multi-agent systems and applying graph neural networks to autonomous exploration.
Célian Ringwald is a PhD student at the Université Côte d’Azur in the Inria Wimmics team. His research focuses on combining large language models with knowledge graphs for relation extraction. He graduated from Université Lyon 2 with a Master’s degree in Data Science and worked for three years at a startup specialising in NLP applications. Driven by a passion for research, he then pursued a second Master’s degree in Digital Humanities, which led him to his PhD topic.
Aaquib Tabrez is a PhD candidate at the University of Colorado Boulder in the Computer Science Department, where he is advised by Professor Brad Hayes. Aaquib previously received a B.Tech. in Mechanical Engineering from NITK Surathkal (India). He works at the intersection of explainability and human-robot interaction, aiming to leverage and enhance multimodal human-machine communication for value alignment and fostering appropriate trust within human-robot teams.
Raffaele Galliera is a PhD student in Artificial Intelligence as part of the Intelligent Systems and Robotics joint program between The University of West Florida (UWF) and the Institute for Human and Machine Cognition (IHMC). He holds degrees in Computer Science and Electronics Engineering from the University of Ferrara and a Master’s in Computer Science from UWF. His research mainly focuses on (multi-agent) reinforcement learning to optimize communication tasks and the deployment of learned policies and strategies in communication protocols.
Amine Barrak is a PhD candidate in Software Engineering at the University of Quebec, specializing in the integration of serverless computing with distributed machine learning training. During his Master’s degree (which he received from Polytechnique Montreal), his focus was on security vulnerability changes in software code.
Bálint Gyevnár is a PhD student at the University of Edinburgh supervised by Stefano Albrecht, Shay Cohen, and Chris Lucas. He focuses on building socially aligned explainable AI technologies for end users, drawing on research from multiple disciplines including autonomous systems, cognitive science, natural language processing, and the law. His goal is to create safer autonomous systems that everyone can use and trust appropriately, regardless of their knowledge of AI.
Mike Lee is currently a postdoc in the Robotics Institute at Carnegie Mellon University, supported by Professor Henny Admoni and Professor Reid Simmons. He received his PhD in Robotics from Carnegie Mellon University and his bachelor’s in mechanical and aerospace engineering from Princeton University. Mike researches explainable AI, and his PhD thesis explored how AI agents can teach their decision-making to humans by selecting and showing demonstrations of AI behavior in key contexts.
Salena Torres Ashton will complete her PhD in Information Science at the University of Arizona in 2025. She researches causality and formal semantics in natural language, examining their effects on how people phrase questions. Salena currently teaches “Introduction to Machine Learning” at the University of Arizona, and works as a data science mentor for Posit Academy. Before attending graduate school, Salena worked for over 25 years as a professional genealogist and holds a BA in History.
Sukanya Mandal is a PhD student at Dublin City University, Ireland, researching the development of a privacy-preserving federated learning (PPFL)-based cognitive digital twin (CDT) framework for smart cities. This interdisciplinary project combines knowledge graphs, cognitive digital twins, and privacy-preserving federated learning. Sukanya has a decade of industry experience in AI and data science.
Yuan Yang works in the intersecting area of AI and cognitive science, focusing on how AI can help understand human core cognitive abilities (e.g., fluid intelligence, visual abstract reasoning, analogy making, and mental imagery), and conversely, how such understanding can facilitate the development of AI. He has a PhD in Computer Science from Vanderbilt University, and recently joined the College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, as an associate professor.
Pulkit Verma is a Postdoctoral Associate at the Interactive Robotics Group at the Massachusetts Institute of Technology, where he works with Professor Julie Shah. His research concerns the safe and reliable behavior of taskable AI agents. He investigates the minimal set of requirements in an AI system that would enable a user to assess and understand the limits of its safe operability. He received his PhD in Computer Science from the School of Computing and Augmented Intelligence, Arizona State University, where he worked with Professor Siddharth Srivastava.