AIhub monthly digest: October 2023 – probabilistic logic shields, a responsible journalism toolkit, and what the public think about AI

by Lucy Smith
31 October 2023




Welcome to our October 2023 monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, find out about recent events, and more. This month, we talk AI, bias, and ethics with Aylin Caliskan, learn more about probabilistic logic shields, knowledge bases, and sparse reward tasks, and find out why everyone should learn a little programming.

Talking AI, bias, and ethics with Aylin Caliskan

AIhub ambassador Andrea Rafai met with Aylin Caliskan at this year’s International Joint Conference on Artificial Intelligence (IJCAI 2023), where she was giving an IJCAI Early Career Spotlight talk, and asked her about her work on AI, bias, and ethics. In this interview they discuss topics including bias in generative AI tools and the associated research and societal challenges.

Safe reinforcement learning via probabilistic logic shields

In their IJCAI article, Safe Reinforcement Learning via Probabilistic Logic Shields, which won a distinguished paper award at the conference, Wen-Chi Yang, Giuseppe Marra, Gavin Rens and Luc De Raedt provide a framework to represent, quantify, and evaluate safety. They define safety using a logic-based approach rather than a numerical one; Wen-Chi tells us more in this blog post.

Interview with Maurice Funk – knowledge bases and querying

Maurice Funk and co-authors Balder ten Cate, Jean Christoph Jung and Carsten Lutz were also recognised with an IJCAI distinguished paper award for their work SAT-Based PAC Learning of Description Logic Concepts. In this interview, Maurice tells us more about knowledge bases and querying, why this is an interesting area of study, and their methodology and results.

Multi-agent sparse reward tasks

The 26th European Conference on Artificial Intelligence (ECAI 2023) took place at the beginning of October in Krakow, Poland. Xuan Liu, winner of an outstanding paper award at the conference, told us about her work on selective learning for sample-efficient training in multi-agent sparse reward tasks. You can also find out what participants got up to at the conference in our round-up.

Insights into RoboCupJunior

In July this year, 2,500 participants congregated in Bordeaux for RoboCup 2023. The competition comprises a number of leagues, and among them is RoboCupJunior, which is designed to introduce RoboCup to school children, with a focus on education. There are three sub-leagues: Soccer, Rescue and OnStage. Marek Šuppa serves on the Executive Committee for RoboCupJunior, and he told us about this year's competition and the latest developments in the Soccer league.

AAAI Fall Symposium

This year's AAAI Fall Symposium, which took place at the end of October, comprised seven symposia. We were able to attend virtually, and covered the plenary talk by Patrícia Alves-Oliveira on human-robot interaction design, which was part of the symposium on Artificial Intelligence for Human-Robot Interaction (AI-HRI).

Code to Joy: Why Everyone Should Learn a Little Programming

Code to Joy: Why Everyone Should Learn a Little Programming is a new book from Michael Littman, Professor of Computer Science at Brown University and a founding trustee of AIhub. We spoke to Michael about what the book covers, what inspired it, and how we are all familiar with many programming concepts in our daily lives, whether we realize it or not.

USA AI executive order

On 30 October, President Biden issued an Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence”. A fact sheet from the White House states that the order “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

What do the public think about AI?

An evidence review from the Ada Lovelace Institute examines what the public think about AI, and highlights the importance of meaningfully involving people in decision-making when it comes to AI safety and governance.

Protein universe atlas

With a new interactive resource from the Swiss Institute of Bioinformatics and the University of Basel, you can navigate through catalogued natural proteins. The Protein Universe Atlas is a sequence similarity network and contains around 53 million unique protein sequences.

The artificiality of alignment

In an essay entitled The Artificiality of Alignment, Jessica Dai asks whether we are actually "aligning AI with human values": "For all the pontification about cataclysmic harm and extinction-level events, the current trajectory of so-called 'alignment' research seems under-equipped — one might even say misaligned — for the reality that AI might cause suffering that is widespread, concrete, and acute."

How AI reduces the world to stereotypes

In this article for Rest of World, Victoria Turk writes about an analysis of 3,000 AI-generated images, which shows that generative AI systems tend towards bias, stereotypes, and reductionism when it comes to national identities.

AI and responsible journalism toolkit

The Leverhulme Centre for the Future of Intelligence at the University of Cambridge has put together a responsible journalism toolkit. The resource aims to empower journalists, communicators, and researchers to report responsibly on AI risks and capabilities.


Finally, a reminder of some of our other pages and series:
Our resources page
Forthcoming and past seminars for 2023
AI around the world focus series
UN SDGs focus series
New voices in AI series


Lucy Smith, Managing Editor for AIhub.



