AIhub monthly digest: January 2025 – artists’ perspectives on GenAI, biomedical knowledge graphs, and ML for studying greenhouse gas emissions


by Lucy Smith
29 January 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about artists’ perspectives on generative AI, learn how to explain neural networks using logic, and find out about using machine learning for studying greenhouse gas emissions.

Using ML for studying greenhouse gas emissions from livestock

We caught up with Erica Kimei to find out about her research studying greenhouse gas emissions from agriculture, specifically from ruminant livestock. Erica combines machine learning and remote sensing technology to monitor and forecast such emissions. This interview is the latest in our series highlighting members of the AfriClimate AI community.

Interview with Yuki Mitsufuji: Improving AI image generation

We spoke to Yuki Mitsufuji, Lead Research Scientist at Sony AI, to find out more about two pieces of research that his team presented at the Conference on Neural Information Processing Systems (NeurIPS 2024). The works tackle different aspects of image generation – single-shot novel view synthesis and high-speed generation – introducing the models GenWarp and PaGoDA respectively.

Understanding artists’ perspectives on generative AI art

In a recent study, Juniper Lovato, Julia Zimmerman, and Jennifer Karson gathered opinions on generative AI directly from artists, exploring the artists' nuanced perspectives on how the technology both empowers and challenges their work. You can find out more in this blog post, where the authors highlight some of the main findings from their study.

Generating a biomedical knowledge graph question answering dataset

In work presented at the European Conference on Artificial Intelligence (ECAI 2024), Xi Yan, Patrick Westphal, Jan Seliger, and Ricardo Usbeck generated a biomedical knowledge graph question answering (KGQA) dataset. In this blog post, Xi Yan provides some background on the challenges around biomedical knowledge graphs, and explains how the team went about addressing these.
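
To make the task concrete, here is a minimal, hypothetical sketch of how question–query pairs for a KGQA dataset can be generated from templates over a toy graph. The triples, predicates, and template wording below are invented for illustration; they are not drawn from the authors' dataset.

# Hypothetical illustration: generating KGQA examples from templates.
# The toy triples and predicates are invented, not from the ECAI 2024 dataset.

TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("ibuprofen", "treats", "inflammation"),
]

# Each predicate maps to a question template and a SPARQL-style query template.
TEMPLATES = {
    "treats": ("Which conditions does {s} treat?",
               "SELECT ?o WHERE {{ :{s} :treats ?o }}"),
    "interacts_with": ("Which drugs does {s} interact with?",
                       "SELECT ?o WHERE {{ :{s} :interacts_with ?o }}"),
}

def generate_dataset(triples):
    """Yield question/query/answer examples from the toy graph."""
    for s, p, o in triples:
        if p in TEMPLATES:
            question_tpl, query_tpl = TEMPLATES[p]
            yield {"question": question_tpl.format(s=s),
                   "query": query_tpl.format(s=s),
                   "answer": o}

if __name__ == "__main__":
    for example in generate_dataset(TRIPLES):
        print(example)

Each generated item pairs a natural-language question with the structured query that answers it over the graph, which is the basic shape of data a KGQA system is trained and evaluated on.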

Explaining neural networks using logic

Alessio Ragno writes about work on Transparent Explainable Logic Layers, which contributes to the field of explainable AI by developing a neural network that can be directly translated into logic rules. By embedding logic into the structure of the network, Alessio and colleagues aim to make its predictions interpretable in a way that feels intuitive and trustworthy to people.
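
For a flavour of the general idea (an illustrative sketch of logic-interpretable units, not the authors' actual Transparent Explainable Logic Layers), consider a single thresholded neuron over Boolean inputs: with suitably constrained weights, its behaviour can be read off directly as a logic rule.

# Illustrative sketch only: a thresholded neuron whose weights make it
# equivalent to the logic rule (x1 AND x2). This is not the TELL method itself.
import itertools

def logic_neuron(x1, x2):
    """Fires (returns 1) only when both Boolean inputs are 1."""
    w1, w2, bias = 1.0, 1.0, -1.5  # threshold is crossed only at (1, 1)
    return int(w1 * x1 + w2 * x2 + bias > 0)

# The truth table confirms the unit can be read as the rule x1 AND x2.
for x1, x2 in itertools.product([0, 1], repeat=2):
    print(f"x1={x1}, x2={x2} -> neuron={logic_neuron(x1, x2)}, x1 AND x2={x1 & x2}")

Reading network weights off as rules in this way is what makes such architectures directly translatable into logic.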

DeepSeek – the talk of the tech town

This month, AI startup DeepSeek released DeepSeek R1, a reasoning model designed for strong performance on logic, maths, and pattern-finding tasks. The company has also released six smaller versions of R1 that can run locally on laptops. In Wired, Zeyi Yang reports on who is behind the startup, whilst Tongliang Liu (in The Conversation) looks at how DeepSeek has achieved its results with a fraction of the cash and computing power of its competitors.

Artificial Intelligence Journal funding call

The Editorial Board of Artificial Intelligence Journal (AIJ) issues funding calls twice a year for activities which “support the promotion and dissemination of AI research”. The latest call opened on 15 January, with a closing date of 15 February 2025. You can find out more about the fund, and how to apply, here.

New playbook on creating images of AI

A recent project has focussed on providing people with the sources and knowledge necessary to create their own images of AI. The Archival Images of AI project has been exploring how existing images – especially those from digital heritage collections – can be remixed and reused to create new images, particularly to represent AI in more compelling ways. You can download their playbook, which gives guidance on image creation and representation.

Public competition for better images of AI – winners announced

At the end of 2024, Better Images of AI launched a public competition with Cambridge Diversity Fund calling for images that “reclaimed and recentred the history of diversity in AI education at the University of Cambridge”. The winners of that competition have now been announced, with the first place prize awarded to Reihaneh Golpayegani for the image “Women and AI”. Janet Turra received the commendation prize for her image “Ground Up and Spat Out”.


Our resources page
Our events page
Seminars in 2024
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows 2024 interview series
AI around the world focus series
New voices in AI series




Lucy Smith is Senior Managing Editor for AIhub.






