AIhub monthly digest: March 2023 – plant disease diagnosis, logic for trustworthy AI, and neurosymbolic approaches


by Lucy Smith
28 March 2023




Welcome to our March 2023 monthly digest, where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we learn about a mobile phone app to help farmers diagnose plant diseases, find out more about neurosymbolic approaches, and hear how scientific information changes as it is covered by different media.

Diagnosing plant diseases: an interview with Ernest Mwebaze

Ernest Mwebaze and his team have developed a mobile application for farmers to help diagnose diseases in their cassava crops. As part of our focus series on AI around the world, we spoke to Ernest to find out more about this project, how it developed, and plans for further work. Read the interview here.

Neurosymbolic approaches

Ever wondered how the complementary features of symbolic and deep-learning methods can be combined? Using a crime scene analogy, Lauren Nicole DeLong and Ramon Fernández Mir explain all in this blogpost, focusing on neurosymbolic approaches for reasoning on graph structures.

Modelling information change in scientific communication

With the general public relying principally on mainstream media outlets for their science news, accurate reporting is of paramount importance. Overhyping, exaggeration and misrepresentation of research findings erode trust in science and scientists. In her invited talk at AAAI 2023, Isabelle Augenstein presented some of her work relating to the communication of scientific research, and how information changes as it is reported by different media. Read our summary of the talk here.

Logic for trustworthy rational robots

Learning-based solutions are efficient, but are they trustworthy enough to be embedded in a robot cooperating with or assisting humans? In this blogpost, Daniele Meli explores this question, and reviews logic programming as a route to trustworthy autonomous (and cooperative) robotic systems.

AAAI 2023 workshops

As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 different workshops were held, covering a wide range of topics. We heard from the organisers of four of these workshops, who told us their key takeaways from their respective events. These were split into two articles: 1) #AAAI2023 workshops round-up 1: AI for credible elections, and responsible human-centric AI, and 2) #AAAI2023 workshops round-up 2: health intelligence and privacy-preserving AI.

AI UK

Hosted by the Alan Turing Institute, AI UK is a two-day conference that showcases artificial intelligence and data science research, development, and policy in the UK. This year, the event took place on 21 and 22 March, and we covered the panel discussion session on the role and impact of science journalism.

AAAI policy on use of AI systems in producing publications

AAAI have updated their publication policy to deal with AI systems: “It is AAAI’s policy that any AI system, including Generative Models such as Chat-GPT, BARD, and DALL-E, does not satisfy the criteria for authorship of papers published by AAAI and, as such, also cannot be used as a citable source in papers published by AAAI”. Read the policy in full here.

Stochastic parrots day

On Friday 17 March, a virtual event took place to commemorate the 2nd anniversary of the paper, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. The article’s co-authors and various guests reflected on what has happened in the last two years, the current state of the large language model landscape, and what the future has in store. During the event, attendees shared recommended reading and these suggestions are compiled here.

More on large language models

The latest in the CLAIRE All Questions Answered (AQuA) series saw the panel focus their one-hour session on ChatGPT and large language models. You can catch the recording in full here.

Also well worth a listen is this episode of the Radical AI podcast, where hosts Dylan and Jess chat to Emily M. Bender and Casey Fiesler about the ethical considerations of ChatGPT, bias and discrimination, and the importance of algorithmic literacy in the face of chatbots.

Federal Trade Commission business guidance

A blog post from the US Federal Trade Commission (FTC) (Keep your AI claims in check) highlights that it will be keeping an eye out for companies who exaggerate the use of AI in their products and those who don't carry out sufficient risk analysis. A follow-up post (Chatbots, deepfakes, and voice clones: AI deception for sale) focuses on synthetic media and generative AI products with respect to possible fraud and deception, and urges companies to seriously consider the potential use cases of their products before release.

AI Art: How artists are using and confronting machine learning

This video from the Museum of Modern Art (MoMA) sees three artists (Kate Crawford, Trevor Paglen, and Refik Anadol) talk about how machine learning algorithms have affected the art world, new approaches to artmaking, and the future for art in the age of AI.


Our resources page
Forthcoming and past seminars for 2023
AI around the world focus series
UN SDGs focus series
New voices in AI series



Lucy Smith is Senior Managing Editor for AIhub.



©2026 Association for the Understanding of Artificial Intelligence