AIhub monthly digest: January 2025 – artists’ perspectives on GenAI, biomedical knowledge graphs, and ML for studying greenhouse gas emissions


by Lucy Smith
29 January 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about artists’ perspectives on generative AI, learn how to explain neural networks using logic, and find out about using machine learning for studying greenhouse gas emissions.

Using ML for studying greenhouse gas emissions from livestock

We caught up with Erica Kimei to find out about her research studying gas emissions from agriculture, specifically ruminant livestock. Erica combines machine learning and remote sensing technology to monitor and forecast such emissions. This interview is the latest in our series highlighting members of the AfriClimate AI community.

Interview with Yuki Mitsufuji: Improving AI image generation

We spoke to Yuki Mitsufuji, Lead Research Scientist at Sony AI, to find out more about two pieces of research that his team presented at the Conference on Neural Information Processing Systems (NeurIPS 2024). These works tackle different aspects of image generation, namely single-shot novel view synthesis and high-speed generation, introducing the models GenWarp and PaGoDA respectively.

Understanding artists’ perspectives on generative AI art

In a recent study, Juniper Lovato, Julia Zimmerman, and Jennifer Karson gathered opinions on generative AI directly from artists, exploring the artists' nuanced perspectives on how the technology both empowers and challenges their work. You can find out more in this blog post, where the authors highlight some of the main findings from their study.

Generating a biomedical knowledge graph question answering dataset

In work presented at the European Conference on Artificial Intelligence (ECAI 2024), Xi Yan, Patrick Westphal, Jan Seliger, and Ricardo Usbeck generated a biomedical knowledge graph question answering dataset. In this blog post, Xi Yan provides some background to the challenges around biomedical knowledge graphs, and explains how the team went about addressing these.

Explaining neural networks using logic

Alessio Ragno writes about work on Transparent Explainable Logic Layers, which contributes to the field of explainable AI by developing a neural network that can be directly transformed into logic. By embedding logic into the structure of a neural network, Alessio and colleagues aim to make its predictions interpretable in a way that feels intuitive and trustworthy to people.

DeepSeek – the talk of the tech town

This month, AI startup DeepSeek released DeepSeek R1, a reasoning model designed for good performance on logic, maths, and pattern-finding tasks. The company has also released six smaller versions of R1 that can run locally on laptops. In Wired, Zeyi Yang reports on who is behind the startup, whilst Tongliang Liu (in The Conversation) looks at how DeepSeek has achieved its results with a fraction of the cash and computing power of its competitors.

Artificial Intelligence Journal funding call

The Editorial Board of Artificial Intelligence Journal (AIJ) issues funding calls twice a year for activities which “support the promotion and dissemination of AI research”. The latest call opened on 15 January, with a closing date of 15 February 2025. You can find out more about the fund, and how to apply, here.

New playbook on creating images of AI

A recent project has focussed on providing people with the sources and knowledge necessary to create their own images of AI. The Archival Images of AI project has been exploring how existing images – especially those from digital heritage collections – can be remixed and reused to create new images, particularly to represent AI in more compelling ways. You can download their playbook, which gives guidance on image creation and representation.

Public competition for better images of AI – winners announced

At the end of 2024, Better Images of AI launched a public competition with Cambridge Diversity Fund calling for images that “reclaimed and recentred the history of diversity in AI education at the University of Cambridge”. The winners of that competition have now been announced, with the first place prize awarded to Reihaneh Golpayegani for the image “Women and AI”. Janet Turra received the commendation prize for her image “Ground Up and Spat Out”.


Our resources page
Our events page
Seminars in 2024
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows 2024 interview series
AI around the world focus series
New voices in AI series



Lucy Smith is Senior Managing Editor for AIhub.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack






©2026 - Association for the Understanding of Artificial Intelligence