AIhub monthly digest: October 2023 – probabilistic logic shields, a responsible journalism toolkit, and what the public think about AI


by Lucy Smith
31 October 2023




Welcome to our October 2023 monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, find out about recent events, and more. This month, we talk AI, bias, and ethics with Aylin Caliskan, learn more about probabilistic logic shields, knowledge bases, and sparse reward tasks, and find out why everyone should learn a little programming.

Talking AI, bias, and ethics with Aylin Caliskan

AIhub ambassador Andrea Rafai met with Aylin Caliskan at this year’s International Joint Conference on Artificial Intelligence (IJCAI 2023), where she was giving an IJCAI Early Career Spotlight talk, and asked her about her work on AI, bias, and ethics. In this interview they discuss topics including bias in generative AI tools and the associated research and societal challenges.

Safe reinforcement learning via probabilistic logic shields

In their IJCAI article, Safe Reinforcement Learning via Probabilistic Logic Shields, which won a distinguished paper award at the conference, Wen-Chi Yang, Giuseppe Marra, Gavin Rens and Luc De Raedt provide a framework to represent, quantify, and evaluate safety. They define safety using a logic-based approach rather than a numerical one; Wen-Chi tells us more in this blog post.
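To give a rough flavour of the shielding idea, here is a minimal Python sketch (an illustration, not the authors' implementation): a probabilistic shield reweights the base policy's action probabilities by the probability that each action is safe. In the paper those safety probabilities come from a probabilistic logic program; here they are a hypothetical hand-given vector.

import numpy as np

def shield_policy(action_probs, safety_probs):
    """Reweight a policy's action distribution by each action's probability
    of being safe, then renormalise. Illustrative only: in the paper the
    safety probabilities are derived from a probabilistic logic program."""
    weighted = np.asarray(action_probs, dtype=float) * np.asarray(safety_probs, dtype=float)
    total = weighted.sum()
    if total == 0:  # no action judged safe: fall back to the base policy
        return np.asarray(action_probs, dtype=float)
    return weighted / total

# Hypothetical example: the risky middle action is down-weighted before sampling.
base_policy = [0.2, 0.5, 0.3]   # pi(a | s)
safety = [0.9, 0.1, 0.8]        # P(safe | s, a)
print(shield_policy(base_policy, safety))  # ~[0.38, 0.11, 0.51]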

Interview with Maurice Funk – knowledge bases and querying

Maurice Funk, and co-authors Balder ten Cate, Jean Christoph Jung and Carsten Lutz, were also recognised with an IJCAI distinguished paper award for their work SAT-Based PAC Learning of Description Logic Concepts. In this interview, Maurice tells us more about knowledge bases and querying, why this is an interesting area for study, and their methodology and results.

Multi-agent sparse reward tasks

The 26th European Conference on Artificial Intelligence (ECAI 2023) took place at the beginning of October in Krakow, Poland. Xuan Liu, winner of an outstanding paper award at the conference, told us about her work on selective learning for sample-efficient training in multi-agent sparse reward tasks. You can also find out what participants got up to at the conference in our round-up.

Insights into RoboCupJunior

In July this year, 2500 participants congregated in Bordeaux for RoboCup2023. The competition comprises a number of leagues, and among them is RoboCupJunior, which is designed to introduce RoboCup to school children, with the focus being on education. There are three sub-leagues: Soccer, Rescue and OnStage. Marek Šuppa serves on the Executive Committee for RoboCupJunior, and he told us about the competition this year and the latest developments in the Soccer league.

AAAI Fall Symposium

This year’s AAAI Fall Symposium, held at the end of October, comprised seven symposia. We attended virtually and covered the plenary talk by Patrícia Alves-Oliveira on human-robot interaction design, part of the symposium on Artificial Intelligence for Human-Robot Interaction (AI-HRI).

Code to Joy: Why Everyone Should Learn a Little Programming

Code to Joy: Why Everyone Should Learn a Little Programming is a new book from Michael Littman, Professor of Computer Science at Brown University and a founding trustee of AIhub. We spoke to Michael about what the book covers, what inspired it, and how we are all familiar with many programming concepts in our daily lives, whether we realize it or not.

USA AI executive order

On 30 October, President Biden issued an Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence”. A fact sheet from the White House states that the order “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

What do the public think about AI?

An evidence review from the Ada Lovelace Institute examines what the public think about AI, and highlights the importance of meaningfully involving people in decision-making when it comes to AI safety and governance.

Protein universe atlas

With a new interactive resource from the Swiss Institute of Bioinformatics and the University of Basel, you can navigate through catalogued natural proteins. The Protein Universe Atlas is a sequence similarity network and contains around 53 million unique protein sequences.

The artificiality of alignment

In an essay entitled The Artificiality of Alignment, Jessica Dai asks how we are actually “aligning AI with human values”. She writes: “For all the pontification about cataclysmic harm and extinction-level events, the current trajectory of so-called ‘alignment’ research seems under-equipped — one might even say misaligned — for the reality that AI might cause suffering that is widespread, concrete, and acute.”

How AI reduces the world to stereotypes

In this article for Rest of World, Victoria Turk writes about an analysis of 3,000 AI images which shows that generative AI systems have tendencies toward bias, stereotypes, and reductionism when it comes to national identities.

AI and responsible journalism toolkit

The Leverhulme Centre for the Future of Intelligence at the University of Cambridge has put together a responsible journalism toolkit. The resource aims to empower journalists, communicators, and researchers to report responsibly on AI risks and capabilities.


Our resources page
Forthcoming and past seminars for 2023
AI around the world focus series
UN SDGs focus series
New voices in AI series




Lucy Smith is Senior Managing Editor for AIhub.



