Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we explore kernel representation learning for time series, learn about fairness in machine learning, and tackle bad practice in the publication world.
During 2024, we spoke to thirteen of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research and PhD life. Following the success of that series, we’re back in 2025 to talk to this year’s cohort. We began the series with two great interviews, hearing from Kunpeng Xu, a final-year PhD student at Université de Sherbrooke, and Kayla Boggess, who is studying for her PhD at the University of Virginia.
The 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025) kicks off today with two days of tutorials and labs. The main technical programme begins on Thursday. You can find out more about the events taking place here. Our Senior Managing Editor, Lucy Smith, is attending the conference, so please do reach out if you’d like to chat about communicating your research. We’ll also be running a science communication introductory training session on Wednesday 26 February – find out more here. Keep an eye out for our AAAI 2025 content – new articles will be published here.
AIhub ambassador Kumar Kshitij Patel caught up with Nisarg Shah at the International Joint Conference on Artificial Intelligence (IJCAI). In an insightful interview, they discussed Nisarg’s research, the role of theory in machine learning research, fairness and safety guarantees, regulation, conference reviews, and advice for those just starting out on their research journey.
In our February Coffee Corner discussion, AIhub trustees tackled the topic of bad practice in the sphere of publication. They talked about different aspects of bad practice they’ve encountered, and what can be done about it.
At the Artificial Intelligence (AI) Action Summit in Paris, Commission President Ursula von der Leyen launched InvestAI, an initiative with the aim of mobilising €200 billion for investment in AI. The Confederation of Laboratories for Artificial Intelligence Research in Europe (CAIRNE) has welcomed the investment. You can read their full response here.
As part of a collaboration between Better Images of AI and Cambridge University’s Diversity Fund, Hanna Barakat was commissioned to create a digital collage series depicting diverse representations of AI learning and education at Cambridge. In this blog post, she talks about her artistic process and her reflections on contributing to the collection. Hanna shares her thoughts on the challenges of creating images that communicate AI histories, and the inherent contradictions that arise when engaging in this work.
AI-wise it was a busy start to the month in the UK, with the government publishing two reports and announcing millions of pounds of new investments. Writing in Real World Data Science, Brian Tarran picks out some key takeaways.
The Association for the Advancement of Artificial Intelligence (AAAI) has announced that Michael Wooldridge (University of Oxford, UK) will be the next President-Elect. Four Executive Councillors have also been elected to serve three-year terms: Pin-Yu Chen (IBM Research), Bistra Dilkina (University of Southern California), Sriraam Natarajan (University of Texas at Dallas), and Rosina Weber (Drexel University).
In their recent article, Suffering is Real. AI Consciousness is Not, David McNeill and Emily Tucker dissect what’s behind a recent open letter claiming that future AI systems could become conscious and be “caused to suffer”. The authors conclude their essay by pointing out that “fantasizing about the potential future suffering of a chatbot is one way to deny that difficult truth at a moment in history when to actually become conscious of the suffering that so many human beings are now enduring requires real courage”.
Earlier this month, The Guardian reported on an AI chatbot designed to waste scammers’ time. The bot, given the persona of a grandmother who chats about knitting patterns and recipes for scones, was rolled out for a brief period to demonstrate what could be done to counter scammers.
Our resources page
Our events page
Seminars in 2025
AAAI/ACM SIGAI Doctoral Consortium interview series
AI around the world focus series
New voices in AI series