Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we meet three AAAI doctoral consortium participants, find out how machine learning can help monitor bird flocks, and cover the 38th AAAI conference.
The AAAI/SIGAI Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. We’re meeting the participants in a series of interviews to find out about their research, PhD life, and why they decided to study AI. This month, we caught up with Fiona Anting Tan, Elizabeth Ondula, and Célian Ringwald.
Fiona Anting Tan’s research concerns text mining for causal relations, and is split into three parts: extraction, representation, and application. We talked about these three aspects in this interview.
In our chat with Elizabeth Ondula, we found out more about her work applying reinforcement learning in different domains, notably to develop and evaluate public health policies and decisions in epidemic scenarios.
We also heard from Célian Ringwald who introduced his work on natural language processing and knowledge graphs, specifically extraction of targeted information from texts.
This month saw the 38th Annual AAAI Conference take place in Vancouver from 20-27 February. The event included invited talks, tutorials, workshops, and a technical programme. We summarised what the attendees got up to in these two posts: #AAAI2024 in tweets: part one | #AAAI2024 in tweets: part two. Stay tuned for more coverage, including summaries of the invited talks and blog posts from some of the participants.
Shortly before the conference began, the winners of a number of prestigious awards were announced. These were officially presented during an awards ceremony at the conference on 24 February. You can find out who won what here.
During the opening ceremony, the AAAI 2024 outstanding paper winners were revealed. There were three winners this year:
In work presented at the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), Kshitiz, Sonu Shreshtha, Ramy Mounir, Mayank Vatsa, Richa Singh, Saket Anand, Sudeep Sarkar and Sevaram Mali Parihar developed and applied computer vision techniques and datasets for non-invasive monitoring and analysis of migratory bird flocks in their natural habitats. In this interview, Kshitiz tells us more about this research.
Data Like is a new project from Isabella Bicalho-Frazeto and Ndane Ndazhaga. Their mission is to amplify the voices of women in data, providing a platform for their stories, experiences, and perspectives. As part of this initiative, Isabella and Ndane are publishing a series of interviews with women in the field. In their first interview, they spoke to Pratibha V Shambhangoudar about how she came to pursue a career in technology.
Back in April 2023, we collected some of the articles, opinion pieces, videos and resources relating to large language models, and other generative models. We periodically update this list to include the latest resources, and we’re now on the fourth iteration. Check it out here.
On 6 February, UK Research and Innovation (UKRI) announced that £100m will be invested in nine new AI hubs in the UK. Three of the hubs will focus on foundational mathematics and computational research, and the other six will be tasked with exploring AI for science, engineering and real-world data.
Towards the end of January, details were released about the US National Artificial Intelligence Research Resource (NAIRR) pilot. The NAIRR is a concept for a national infrastructure that connects US researchers to the resources they need to participate in AI research. The NAIRR pilot will run for two years, and will broadly support fundamental, translational and use-inspired AI-related research, with particular emphasis on societal challenges.
In this video, Sasha Rush presents a tutorial on large language models (LLMs). Structured in five parts (the “five formulas”), the talk covers the following concepts: generation, memory, efficiency, scaling, and reasoning.
Our resources page
Seminars in 2024
AI around the world focus series
UN SDGs focus series
New voices in AI series