
2024 AAAI/ACM SIGAI Doctoral Consortium interviews compilation


by Lucy Smith
20 December 2024





Each year, a small group of PhD students are chosen to participate in the AAAI/SIGAI Doctoral Consortium. This initiative provides an opportunity for the students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. During 2024, we met with some of the students to find out more about their research and the doctoral consortium experience. They also shared their advice for prospective PhD students. Here, we collate the interviews.


Interview with Changhoon Kim: Enhancing the reliability of image generative AI


Changhoon Kim completed his PhD in Computer Engineering at Arizona State University. His primary research centered on the creation of trustworthy and responsible machine learning systems. He has devoted recent years to the development of user-attribution methods for generative models. His research extends to various modalities, including image, audio, video, and multi-modal generative models.


Interview with Fiona Anting Tan: Researching causal relations in text

Fiona Anting Tan is a PhD student at the National University of Singapore and a recipient of the President’s Graduate Fellowship (PGF). Her research interests span natural language processing and reasoning, with a focus on extracting causal relationships from text for various applications. She has had multiple research internships and attachments with companies including Amazon, American Express, Panasonic and Changi Airport Group.


Interview with Elizabeth Ondula: Applied reinforcement learning


Elizabeth Ondula is a PhD student in Computer Science at the University of Southern California in Los Angeles. She graduated with a degree in Electrical Engineering from the Technical University of Kenya. She is a member of the autonomous networks research group, and her research focuses on applied reinforcement learning in public health settings. Her other research interests include understanding decision processes for large language model (LLM)-based multi-agent systems and applying graph neural networks to autonomous exploration.


Interview with Célian Ringwald: Natural language processing and knowledge graphs


Célian Ringwald is a PhD student at the Université Côte d’Azur in the Inria Wimmics team. His research focuses on the combination of large language models with knowledge graphs for relation extraction. He graduated from Université Lyon 2 with a Master’s degree in Data Science and worked for three years for a startup specialising in NLP applications. Having a passion for research, he pursued a second Master’s degree, in Digital Humanities, which led him to his PhD topic.


Interview with Aaquib Tabrez: Explainability and human-robot interaction


Aaquib Tabrez is a PhD candidate at the University of Colorado Boulder in the Computer Science Department, where he is advised by Professor Brad Hayes. Aaquib previously received a B.Tech. in Mechanical Engineering from NITK Surathkal (India). He works at the intersection of explainability and human-robot interaction, aiming to leverage and enhance multimodal human-machine communication for value alignment and fostering appropriate trust within human-robot teams.


Interview with Raffaele Galliera: Deep reinforcement learning for communication networks


Raffaele Galliera is a PhD student in Artificial Intelligence as part of the Intelligent Systems and Robotics joint program between The University of West Florida (UWF) and the Institute for Human and Machine Cognition (IHMC). He holds degrees in Computer Science and Electronics Engineering from the University of Ferrara and a Master’s in Computer Science from UWF. His research mainly focuses on (multi-agent) reinforcement learning to optimize communication tasks and the deployment of learned policies and strategies in communication protocols.


Interview with Amine Barrak: Serverless computing and machine learning

Amine Barrak is a PhD candidate in Software Engineering at the University of Quebec, specializing in the integration of serverless computing with distributed machine learning training. During his Master’s degree (which he received from Polytechnique Montreal), his focus was on security vulnerability changes in software code.


Interview with Bálint Gyevnár: Creating explanations for AI-based decision-making systems


Bálint Gyevnár is a PhD student at the University of Edinburgh supervised by Stefano Albrecht, Shay Cohen, and Chris Lucas. He focuses on building socially aligned explainable AI technologies for end users, drawing on research from multiple disciplines including autonomous systems, cognitive science, natural language processing, and the law. His goal is to create safer, more trustworthy autonomous systems that everyone can use and trust appropriately, regardless of their knowledge of AI.


Interview with Mike Lee: Communicating AI decision-making through demonstrations


Mike Lee is currently a postdoc in the Robotics Institute at Carnegie Mellon University, supported by Professor Henny Admoni and Professor Reid Simmons. He received his PhD in robotics from Carnegie Mellon University and his bachelor’s in mechanical and aerospace engineering from Princeton University. Mike researches explainable AI, and his PhD thesis explored how AI agents can teach their decision making to humans by selecting and showing demonstrations of AI behavior in key contexts.


Interview with Salena Torres Ashton: Causality and natural language


Salena Torres Ashton will complete her PhD in Information Science at the University of Arizona in 2025. She researches causality and formal semantics in natural language, examining their effects on how people phrase questions. Salena currently teaches “Introduction to Machine Learning” at the University of Arizona, and works as a data science mentor for Posit Academy. Before attending graduate school, Salena worked for over 25 years as a professional genealogist and holds a BA in History.


Interview with Sukanya Mandal: Developing a cognitive digital twin framework for smart cities


Sukanya Mandal is a PhD student at Dublin City University, Ireland, researching the development of a privacy-preserving federated learning (PPFL)-based cognitive digital twin (CDT) framework for smart cities. This interdisciplinary project combines knowledge graphs, cognitive digital twins, and privacy-preserving federated learning. Sukanya has a decade of industry experience in AI and data science.


Interview with Yuan Yang: Working at the intersection of AI and cognitive science


Yuan Yang works in the intersecting area of AI and cognitive science, focusing on how AI can help understand human core cognitive abilities (e.g., fluid intelligence, visual abstract reasoning, analogy making, and mental imagery), and conversely, how such understanding can facilitate the development of AI. He has a PhD in computer science from Vanderbilt University, and recently joined the College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, as an associate professor.


Interview with Pulkit Verma: Towards safe and reliable behavior of AI agents


Pulkit Verma is a Postdoctoral Associate at the Interactive Robotics Group at the Massachusetts Institute of Technology, where he works with Professor Julie Shah. His research concerns the safe and reliable behavior of taskable AI agents. He investigates the minimal set of requirements in an AI system that would enable a user to assess and understand the limits of its safe operability. He received his PhD in Computer Science from the School of Computing and Augmented Intelligence, Arizona State University, where he worked with Professor Siddharth Srivastava.





Lucy Smith is Senior Managing Editor for AIhub.



