AIhub monthly digest: February 2025 – kernel representation learning, fairness in machine learning, and bad practice in the publication world


by Lucy Smith
25 February 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we explore kernel representation learning for time series, learn about fairness in machine learning, and tackle bad practice in the publication world.

Launching our 2025 interview series with Doctoral Consortium participants

During 2024, we spoke to thirteen of the AAAI/ACM SIGAI Doctoral Consortium participants to find out more about their research and PhD life. Following the success of that series, we’re back in 2025 to talk to this year’s cohort. We began the series with two great interviews, hearing from Kunpeng Xu, a final-year PhD student at Université de Sherbrooke, and Kayla Boggess, who is studying for her PhD at the University of Virginia.

AAAI 2025 is underway

The 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025) kicks off today with two days of tutorials and labs. The main technical programme begins on Thursday. You can find out more about the events taking place here. Our Senior Managing Editor, Lucy Smith, is attending the conference, so please do reach out if you’d like to chat about communicating your research. We’ll also be running a science communication introductory training session on Wednesday 26 February – find out more here. Keep an eye out for our AAAI 2025 content – new articles will be published here.

Interview with Nisarg Shah: Understanding fairness in AI and machine learning

AIhub ambassador Kumar Kshitij Patel caught up with Nisarg Shah at the International Joint Conference on Artificial Intelligence (IJCAI). In an insightful interview, they discussed Nisarg’s research, the role of theory in machine learning research, fairness and safety guarantees, regulation, conference reviews, and advice for those just starting out on their research journey.

Bad practice in the publication world

In our February Coffee Corner discussion, AIhub trustees tackled the topic of bad practice in the sphere of publication. They talked about different aspects of bad practice they’ve encountered, and what can be done about it.

EU launches InvestAI initiative

At the Artificial Intelligence (AI) Action Summit in Paris, Commission President Ursula von der Leyen launched InvestAI, an initiative with the aim of mobilising €200 billion for investment in AI. The Confederation of Laboratories for Artificial Intelligence Research in Europe (CAIRNE) has welcomed the investment. You can read their full response here.

The paradoxes of depicting diversity in AI history

As part of a collaboration between Better Images of AI and Cambridge University’s Diversity Fund, Hanna Barakat was commissioned to create a digital collage series to depict diverse images about the learning and education of AI at Cambridge. In this blog post, she talks about her artistic process and reflections upon contributing to this collection. Hanna provides her thoughts on the challenges of creating images that communicate about AI histories and the inherent contradictions that arise when engaging in this work.

£10m for UK regulators to ‘jumpstart’ AI capabilities

AI-wise it was a busy start to the month in the UK, with the government publishing two reports and announcing millions of pounds of new investments. Writing in Real World Data Science, Brian Tarran picks out some key takeaways.

Michael Wooldridge to be next AAAI President-Elect

The Association for the Advancement of Artificial Intelligence (AAAI) has announced that Michael Wooldridge (University of Oxford, UK) will be the next President-Elect. Four members have also been elected to the Executive Council for three-year terms: Pin-Yu Chen (IBM Research), Bistra Dilkina (University of Southern California), Sriraam Natarajan (University of Texas at Dallas), and Rosina Weber (Drexel University).

Suffering is real. AI consciousness is not

In their recent article Suffering is Real. AI Consciousness is Not, David McNeill and Emily Tucker dissect what’s behind a recent open letter claiming that future AI systems could become conscious and be ‘caused to suffer’. The authors conclude their essay by pointing out that “fantasizing about the potential future suffering of a chatbot is one way to deny that difficult truth at a moment in history when to actually become conscious of the suffering that so many human beings are now enduring requires real courage”.

“AI granny” driving scammers up the wall

Earlier this month, The Guardian reported on an AI chatbot designed to waste scammers’ time. The bot, which was given the persona of a grandmother who chats about knitting patterns and recipes for scones, was rolled out for a brief period to show what could be done to counter scammers.


Our resources page
Our events page
Seminars in 2025
AAAI/ACM SIGAI Doctoral Consortium interview series
AI around the world focus series
New voices in AI series





Lucy Smith is Senior Managing Editor for AIhub.