Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we learn about a framework to evaluate diversity in datasets, find out how banks may strategically mitigate their risk from fraud in real-time payment systems, and hear about the AfriClimate AI workshop at the Deep Learning Indaba.
In their paper Measure Dataset Diversity, Don’t Just Claim It, Jerone Andrews and colleagues propose using measurement theory from the social sciences as a framework to improve the collection and evaluation of diverse machine learning datasets. We spoke to Jerone about this work, which won a best paper award at ICML 2024.
Real-time payments offer a fast processing time (of around 10 seconds), allowing for near-immediate receipt of funds. However, these systems are a target for fraud. In this interview, Katherine Mayo tells us about an agent-based analysis of real-time payments, and what this reveals in terms of bank strategies for mitigating fraud.
The International Joint Conference on Artificial Intelligence (IJCAI) doctoral consortium provides a group of PhD students the opportunity to present their work and hear from established researchers in the field. The chairs of this year’s consortium, Anita Raja and Jihie Kim, pick out some of their highlights in this summary.
AfriClimate AI is a grassroots community dedicated to harnessing the power of AI for a sustainable, prosperous and climate-resilient Africa. Conversations at the Deep Learning Indaba in 2023 sparked the formation of AfriClimate AI, and the team behind this initiative returned to the Indaba this year to hold a workshop. You can read a summary of this event here.
Last year, we started a collection of articles, opinion pieces, videos and resources relating to large language models and other generative models. We periodically update this list to include the latest resources, and we’re now on the fifth iteration. Check it out here.
The series of CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe) All Questions Answered (AQuA) sessions continued, with the latest panel tackling the topic of “AI for citizens”. You can find the recording of the event here.
This week saw the release of two complementary reports from the US Government: the Global AI Research Agenda and the AI in Global Development Playbook. These documents are designed to guide future research on artificial intelligence and its use in advancing the UN Sustainable Development Goals.
In an article titled Artifice and Intelligence, Emily Tucker explains why the Center on Privacy & Technology at Georgetown Law has stopped using the terms “artificial intelligence,” “AI,” and “machine learning” in their institutional vocabulary. Instead, they will aim to more accurately describe each individual technology within the context of the specific case in which it arises.
In a recent report from the Ada Lovelace Institute, Maili Raven-Adams and Andrew Strait assess the potential, risks and appropriate role of AI-powered genomic health prediction in the UK health system.
In this YouTube video, Sebastian Raschka gives a three-hour coding workshop which takes viewers through the process of building a large language model. This tutorial is aimed at coders interested in understanding the building blocks of LLMs, how they work, and how to code them from the ground up in PyTorch.
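To give a flavour of the kind of building block such a workshop covers, here is a minimal sketch of single-head scaled dot-product self-attention in PyTorch. This is purely illustrative and not code from the tutorial; the class name, dimensions and example inputs are hypothetical.

```python
# Minimal single-head self-attention block -- illustrative sketch only,
# not taken from the workshop; names and dimensions are hypothetical.
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Scaled dot-product self-attention over a sequence of token embeddings."""

    def __init__(self, embed_dim: int):
        super().__init__()
        # Linear projections producing queries, keys and values
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, seq_len, embed_dim)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Attention scores, scaled by sqrt(embed_dim) for numerical stability
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        weights = torch.softmax(scores, dim=-1)
        return weights @ v  # weighted sum of value vectors


# Example: a batch of 2 sequences, 8 tokens each, 32-dimensional embeddings
attn = SelfAttention(embed_dim=32)
out = attn(torch.randn(2, 8, 32))
print(out.shape)  # torch.Size([2, 8, 32])
```

The full workshop builds up from components like this one to a complete, trainable language model.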
Our resources page
Our events page
Seminars in 2024
AAAI/ACM SIGAI Doctoral Consortium interview series
AAAI Fellows 2024 interview series
AI around the world focus series
New voices in AI series