2025 has seen another busy 12 months in the world of artificial intelligence. Throughout the year we’ve reported on some of the larger stories, and some of the lesser-covered happenings, in our regular monthly digests. We look back through the archives and pick out one or two stories from each of our digests.
In January, AI startup DeepSeek released DeepSeek R1, a reasoning model designed for good performance on logic, maths, and pattern-finding tasks. The company also launched six smaller versions of R1 that are small enough to run locally on laptops. In Wired, Zeyi Yang reported on who is behind the startup, whilst Tongliang Liu (in The Conversation) looked at how DeepSeek achieved its results with a fraction of the cash and computing power of its competitors.
We caught up with Erica Kimei to find out about her research on greenhouse gas emissions from agriculture, specifically ruminant livestock. Erica combines machine learning and remote sensing technology to monitor and forecast such emissions. This interview formed part of our series highlighting members of the AfriClimate AI community.
During 2025 we’ve chatted to 21 AAAI/SIGAI Doctoral Consortium participants to find out more about their research and PhD life. We launched this series in February with two great interviews, hearing from Kunpeng Xu, a final-year PhD student at Université de Sherbrooke, and Kayla Boggess, who is studying for her PhD at the University of Virginia.
AIhub ambassador Kumar Kshitij Patel caught up with Nisarg Shah at the International Joint Conference on Artificial Intelligence (IJCAI). In an insightful interview, they discussed Nisarg’s research, the role of theory in machine learning research, fairness and safety guarantees, regulation, conference reviews, and advice for those just starting out on their research journey.
The Association for the Advancement of Artificial Intelligence (AAAI) has published a report on the Future of AI Research. The report, which was announced by outgoing AAAI President Francesca Rossi during the AAAI 2025 conference, covers 17 different AI topics and aims to clearly identify the trajectory of AI research in a structured way. You can read the document in full here. As part of this project, members of the report team are taking part in a series of video panel discussions covering selected chapters.
A survey published by the Ada Lovelace Institute and the Alan Turing Institute reveals attitudes towards AI among the general public in the UK. It follows a previous survey, carried out in 2022 (before the release of ChatGPT and other LLM-based chatbots) and published in 2023. The survey found that public awareness of different AI uses varies widely: while 93% of respondents had heard of driverless cars and 90% of facial recognition in policing, only 18% were aware of the use of AI for welfare benefits assessments. You can read more about the key findings here.
On 14 May, House Republicans added a provision to the Budget Reconciliation Bill that would place a ten-year moratorium on states enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems”. On 22 May, the House passed the bill and sent it to the Senate. Although the Senate voted almost unanimously to remove the moratorium from the bill, in December Trump signed an Executive Order which seeks to prevent states from regulating AI.
Citizen science platforms have increased in popularity, fueling the rapid development of biodiversity foundation models. However, such data are inherently biased. In their work DivShift: Exploring Domain-Specific Distribution Shifts in Large-Scale, Volunteer-Collected Biodiversity Datasets, which won the AAAI outstanding paper award (AI for social alignment track), Elena Sierra, Lauren Gillespie and Moises Exposito Alonso tackled the challenge of quantifying the impacts of these biases on deep learning model performance. In this blog post, Elena and Lauren told us more.
RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots, AI and automation. The annual RoboCup event, where teams from across the globe gather to compete in a number of leagues, took place this year in Brazil, from 15-21 July. In the lead up to the event, we spoke to Marco Simões, one of the General Chairs of RoboCup 2025 and President of RoboCup Brazil, to hear about plans for the week, some new initiatives, and how RoboCup has grown in Brazil over the past ten years.
We also caught up with Ana Patrícia Magalhães, lead organizer of RoboCupJunior 2025 to find out more about this element of RoboCup, which is designed to introduce school children to the main competition.
We spoke to Evana Gizzi, Artificial Intelligence Research Lead at NASA Goddard Space Flight Center, about the NASA Onboard Artificial Intelligence Research (OnAIR) platform. This open-source software pipeline and cognitive architecture tool has been designed to aid space research and missions. Evana told us about some of the particular challenges of deploying AI-based solutions in space, and how the tool has been used so far. This work was presented at IAAI 2025, which was co-located with AAAI 2025.
The AIhub coffee corner captures the musings of AI experts over a short conversation. In our August edition, Sanmay Das, Tom Dietterich, Sabine Hauert, Sarit Kraus, and Michael Littman tackled the topic of agentic AI, discussing recent developments, and lessons learned from the decades of research in the autonomous agents and multiagent systems community.
Should AI continue to be driven by a single paradigm, or does real progress lie in combining the strengths of many? Luc De Raedt has spent much of his career addressing this question. Through pioneering work that bridges logic, probability, and machine learning, he has helped shape the field of neurosymbolic AI. AIhub ambassador Liliane-Caroline Demers sat down with Luc at IJCAI 2025 to find out more.
October was a busy month on the conference front. Over in Madrid, researchers gathered for the Conference on Artificial Intelligence, Ethics, and Society (AIES). The event featured two keynote talks, panel discussions and poster sessions. The organisers also experimented with a slightly different format for the contributed talks: all speakers in a session gave their talks, then took part in a joint discussion on common themes, before the floor was opened to questions from the audience. During the opening ceremony, the winners of the best papers were announced. You can get a flavour of what participants got up to in our round-up from social media.
The following week, Bologna hosted the 28th European Conference on Artificial Intelligence (ECAI-2025), co-located with the 14th Conference on Prestigious Applications of Intelligent Systems (PAIS-2025). During the official opening, the organisers highlighted the winners of this year’s outstanding paper awards. They also presented the EurAI Distinguished Service Award, which this year was awarded jointly to Frank van Harmelen and former AIhub trustee Carles Sierra.
Amazon delivery drivers, alongside warehouse workers and other people in the Amazon delivery supply chain, are subject to intense surveillance, including through the use of AI-powered tools. In this report from The DAIR Institute, authors Adrienne Williams, Alex Hanna and Sandra Barcenas investigate how workplace technology enables Amazon to “steal wages, hide labour, intensify poor working conditions, and evade responsibility”.
Sony AI have released a dataset that establishes a new benchmark for AI ethics in computer vision models. The research behind the dataset, named Fair Human-Centric Image Benchmark (FHIBE), has been published in Nature. FHIBE is the first publicly available, globally diverse, consent-based human image dataset (containing over 10,000 human images) for evaluating bias across a wide variety of computer vision tasks. We sat down with project lead, Alice Xiang, Global Head of AI Governance at Sony Group and Lead Research Scientist for AI Ethics at Sony AI, to discuss the project and the broader implications of this research.