Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about artists’ perspectives on generative AI, learn how to explain neural networks using logic, and find out about using machine learning for studying greenhouse gas emissions.
We caught up with Erica Kimei to find out about her research on greenhouse gas emissions from agriculture, specifically from ruminant livestock. Erica combines machine learning and remote sensing technology to monitor and forecast such emissions. This interview is the latest in our series highlighting members of the AfriClimate AI community.
We spoke to Yuki Mitsufuji, Lead Research Scientist at Sony AI, to find out more about two pieces of research that his team presented at the Conference on Neural Information Processing Systems (NeurIPS 2024). These works tackle different aspects of image generation – single-shot novel view synthesis and high-speed generation – introducing the models GenWarp and PaGoDA respectively.
In a recent study, Juniper Lovato, Julia Zimmerman, and Jennifer Karson gathered opinions on generative AI directly from artists, exploring the artists' nuanced perspectives on how the technology both empowers and challenges their work. You can find out more in this blog post, where the authors highlight some of the main findings from their study.
In work presented at the European Conference on Artificial Intelligence (ECAI 2024), Xi Yan, Patrick Westphal, Jan Seliger, and Ricardo Usbeck generated a biomedical knowledge graph question answering dataset. In this blog post, Xi Yan provides some background to the challenges around biomedical knowledge graphs, and explains how the team went about addressing these.
Alessio Ragno writes about work on Transparent Explainable Logic Layers, which contributes to the field of explainable AI by developing a neural network that can be directly transformed into logic. By embedding logic into the structure of a neural network, Alessio and colleagues aim to make its predictions interpretable in a way that feels intuitive and trustworthy to people.
This month, AI startup DeepSeek released DeepSeek R1, a reasoning model designed to perform well on logic, maths, and pattern-finding tasks. The company has also released six smaller versions of R1, compact enough to run locally on laptops. In Wired, Zeyi Yang reports on who is behind the startup, whilst Tongliang Liu (in The Conversation) looks at how DeepSeek has achieved its results with a fraction of the cash and computing power of its competitors.
The Editorial Board of Artificial Intelligence Journal (AIJ) issues funding calls twice a year for activities which “support the promotion and dissemination of AI research”. The latest call opened on 15 January, with a closing date of 15 February 2025. You can find out more about the fund, and how to apply, here.
A recent project has focussed on providing people with the sources and knowledge necessary to create their own images of AI. The Archival Images of AI project has been exploring how existing images – especially those from digital heritage collections – can be remixed and reused to create new images, particularly to represent AI in more compelling ways. You can download their playbook, which gives guidance on image creation and representation.
At the end of 2024, Better Images of AI launched a public competition with Cambridge Diversity Fund calling for images that “reclaimed and recentred the history of diversity in AI education at the University of Cambridge”. The winners of that competition have now been announced, with the first place prize awarded to Reihaneh Golpayegani for the image “Women and AI”. Janet Turra received the commendation prize for her image “Ground Up and Spat Out”.
Our resources page
Our events page
Seminars in 2024
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows 2024 interview series
AI around the world focus series
New voices in AI series