
AIhub monthly digest: January 2025 – artists’ perspectives on GenAI, biomedical knowledge graphs, and ML for studying greenhouse gas emissions


by Lucy Smith
29 January 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about artists’ perspectives on generative AI, learn how to explain neural networks using logic, and find out about using machine learning for studying greenhouse gas emissions.

Using ML for studying greenhouse gas emissions from livestock

We caught up with Erica Kimei to find out about her research studying greenhouse gas emissions from agriculture, specifically from ruminant livestock. Erica combines machine learning and remote sensing technology to monitor and forecast such emissions. This interview is the latest in our series highlighting members of the AfriClimate AI community.

Interview with Yuki Mitsufuji: Improving AI image generation

We spoke to Yuki Mitsufuji, Lead Research Scientist at Sony AI, to find out more about two pieces of research that his team presented at the Conference on Neural Information Processing Systems (NeurIPS 2024). These works tackle different aspects of image generation: single-shot novel view synthesis and high-speed generation, introducing the models GenWarp and PaGoDA respectively.

Understanding artists’ perspectives on generative AI art

In a recent study, Juniper Lovato, Julia Zimmerman, and Jennifer Karson gathered opinions on generative AI directly from artists, exploring the artists' nuanced perspectives on how generative AI both empowers and challenges their work. You can find out more in this blog post, where the authors highlight some of the main findings from their study.

Generating a biomedical knowledge graph question answering dataset

In work presented at the European Conference on Artificial Intelligence (ECAI 2024), Xi Yan, Patrick Westphal, Jan Seliger, and Ricardo Usbeck generated a biomedical knowledge graph question answering dataset. In this blog post, Xi Yan provides some background to the challenges around biomedical knowledge graphs, and explains how the team went about addressing these.

Explaining neural networks using logic

Alessio Ragno writes about work on Transparent Explainable Logic Layers, which contributes to the field of explainable AI by developing a neural network that can be directly transformed into logic. By embedding logic into the structure of a neural network, Alessio and colleagues aim to make its predictions interpretable in a way that feels intuitive and trustworthy to people.

DeepSeek – the talk of the tech town

This month, AI startup DeepSeek released DeepSeek R1, a reasoning model designed for good performance on logic, maths, and pattern-finding tasks. The company has also released six smaller versions of R1 that can run locally on laptops. In Wired, Zeyi Yang reports on who is behind the startup, whilst Tongliang Liu (in The Conversation) looks at how DeepSeek has achieved its results with a fraction of the cash and computing power of its competitors.

Artificial Intelligence Journal funding call

The Editorial Board of the Artificial Intelligence Journal (AIJ) issues funding calls twice a year for activities which “support the promotion and dissemination of AI research”. The latest call opened on 15 January, with a closing date of 15 February 2025. You can find out more about the fund, and how to apply, on the AIJ website.

New playbook on creating images of AI

A recent project has focussed on providing people with the sources and knowledge necessary to create their own images of AI. The Archival Images of AI project has been exploring how existing images – especially those from digital heritage collections – can be remixed and reused to create new images, particularly to represent AI in more compelling ways. You can download their playbook, which gives guidance on image creation and representation.

Public competition for better images of AI – winners announced

At the end of 2024, Better Images of AI launched a public competition with Cambridge Diversity Fund calling for images that “reclaimed and recentred the history of diversity in AI education at the University of Cambridge”. The winners of that competition have now been announced, with the first place prize awarded to Reihaneh Golpayegani for the image “Women and AI”. Janet Turra received the commendation prize for her image “Ground Up and Spat Out”.


Elsewhere on AIhub, you can also explore:
Our resources page
Our events page
Seminars in 2024
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows 2024 interview series
AI around the world focus series
New voices in AI series



Lucy Smith is Senior Managing Editor for AIhub.




©2025.05 - Association for the Understanding of Artificial Intelligence


 











