AIhub monthly digest: January 2025 – artists’ perspectives on GenAI, biomedical knowledge graphs, and ML for studying greenhouse gas emissions


by Lucy Smith
29 January 2025




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about artists’ perspectives on generative AI, learn how to explain neural networks using logic, and find out about using machine learning for studying greenhouse gas emissions.

Using ML for studying greenhouse gas emissions from livestock

We caught up with Erica Kimei to find out about her research on greenhouse gas emissions from agriculture, specifically from ruminant livestock. Erica combines machine learning with remote sensing technology to monitor and forecast these emissions. This interview is the latest in our series highlighting members of the AfriClimate AI community.

Interview with Yuki Mitsufuji: Improving AI image generation

We spoke to Yuki Mitsufuji, Lead Research Scientist at Sony AI, to find out more about two pieces of research that his team presented at the Conference on Neural Information Processing Systems (NeurIPS 2024). These works tackle different aspects of image generation – single-shot novel view synthesis and high-speed generation – introducing the models GenWarp and PaGoDA respectively.

Understanding artists’ perspectives on generative AI art

In a recent study, Juniper Lovato, Julia Zimmerman, and Jennifer Karson gathered opinions on generative AI directly from artists, exploring artists' nuanced perspectives on how the technology both empowers and challenges their work. You can find out more in this blog post, where the authors highlight some of the main findings from their study.

Generating a biomedical knowledge graph question answering dataset

In work presented at the European Conference on Artificial Intelligence (ECAI 2024), Xi Yan, Patrick Westphal, Jan Seliger, and Ricardo Usbeck generated a biomedical knowledge graph question answering dataset. In this blog post, Xi Yan provides some background on the challenges around biomedical knowledge graphs, and explains how the team went about addressing them.

Explaining neural networks using logic

Alessio Ragno writes about work on Transparent Explainable Logic Layers, which contributes to the field of explainable AI by developing a neural network that can be directly transformed into logic. By embedding logic into the structure of a neural network, Alessio and colleagues aim to make its predictions interpretable in a way that feels intuitive and trustworthy to people.

DeepSeek – the talk of the tech town

This month, AI startup DeepSeek released DeepSeek R1, a reasoning model designed for strong performance on logic, maths, and pattern-finding tasks. The company has also released six smaller versions of R1 that are compact enough to run locally on laptops. In Wired, Zeyi Yang reports on who is behind the startup, whilst Tongliang Liu (in The Conversation) looks at how DeepSeek has achieved its results with a fraction of the cash and computing power of its competitors.

Artificial Intelligence Journal funding call

The Editorial Board of Artificial Intelligence Journal (AIJ) issues funding calls twice a year for activities which “support the promotion and dissemination of AI research”. The latest call opened on 15 January, with a closing date of 15 February 2025. You can find out more about the fund, and how to apply, here.

New playbook on creating images of AI

A recent project has focussed on providing people with the sources and knowledge necessary to create their own images of AI. The Archival Images of AI project has been exploring how existing images – especially those from digital heritage collections – can be remixed and reused to create new images, particularly to represent AI in more compelling ways. You can download their playbook, which gives guidance on image creation and representation.

Public competition for better images of AI – winners announced

At the end of 2024, Better Images of AI launched a public competition with Cambridge Diversity Fund calling for images that “reclaimed and recentred the history of diversity in AI education at the University of Cambridge”. The winners of that competition have now been announced, with the first place prize awarded to Reihaneh Golpayegani for the image “Women and AI”. Janet Turra received the commendation prize for her image “Ground Up and Spat Out”.


Our resources page
Our events page
Seminars in 2024
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows 2024 interview series
AI around the world focus series
New voices in AI series




Lucy Smith is Senior Managing Editor for AIhub.





