
AIhub monthly digest: March 2023 – plant disease diagnosis, logic for trustworthy AI, and neurosymbolic approaches

by Lucy Smith
28 March 2023




Welcome to our March 2023 monthly digest, where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we learn about a mobile phone app to help farmers diagnose plant diseases, find out more about neurosymbolic approaches, and hear how scientific information changes as it is covered by different media.

Diagnosing plant diseases: an interview with Ernest Mwebaze

Ernest Mwebaze and his team have developed a mobile application for farmers to help diagnose diseases in their cassava crops. As part of our focus series on AI around the world, we spoke to Ernest to find out more about this project, how it developed, and plans for further work. Read the interview here.

Neurosymbolic approaches

Ever wondered how the complementary features of symbolic and deep-learning methods can be combined? Using a crime scene analogy, Lauren Nicole DeLong and Ramon Fernández Mir explain all in this blogpost, focusing on neurosymbolic approaches for reasoning on graph structures.

Modelling information change in scientific communication

With the general public relying principally on mainstream media outlets for their science news, accurate reporting is of paramount importance. Overhyping, exaggeration and misrepresentation of research findings erode trust in science and scientists. In her invited talk at AAAI 2023, Isabelle Augenstein presented some of her work relating to the communication of scientific research, and how information changes as it is reported by different media. Read our summary of the talk here.

Logic for trustworthy rational robots

Learning-based solutions are efficient, but are they trustworthy enough to be embedded in a robot cooperating with or assisting humans? In this blogpost, Daniele Meli explores this question, and reviews logic programming as a route to trustworthy autonomous (and cooperative) robotic systems.

AAAI 2023 workshops

As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 different workshops were held, covering a wide range of topics. We heard from the organisers of four of these workshops, who told us their key takeaways from their respective events. These were split into two articles: 1) #AAAI2023 workshops round-up 1: AI for credible elections, and responsible human-centric AI, and 2) #AAAI2023 workshops round-up 2: health intelligence and privacy-preserving AI.

AI UK

Hosted by the Alan Turing Institute, AI UK is a two-day conference that showcases artificial intelligence and data science research, development, and policy in the UK. This year, the event took place on 21 and 22 March, and we covered the panel discussion session on the role and impact of science journalism.

AAAI policy on use of AI systems in producing publications

AAAI have updated their publication policy to address the use of AI systems: “It is AAAI’s policy that any AI system, including Generative Models such as Chat-GPT, BARD, and DALL-E, does not satisfy the criteria for authorship of papers published by AAAI and, as such, also cannot be used as a citable source in papers published by AAAI”. Read the policy in full here.

Stochastic parrots day

On Friday 17 March, a virtual event took place to commemorate the second anniversary of the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. The article’s co-authors and various guests reflected on what has happened in the last two years, the current state of the large language model landscape, and what the future has in store. During the event, attendees shared recommended reading, and these suggestions are compiled here.

More on large language models

The latest in the CLAIRE All Questions Answered (AQuA) series saw the panel focus their one-hour session on ChatGPT and large language models. You can catch the recording in full here.

Also well worth a listen is this episode of the Radical AI podcast, where hosts Dylan and Jess chat to Emily M. Bender and Casey Fiesler about the ethical considerations of ChatGPT, bias and discrimination, and the importance of algorithmic literacy in the face of chatbots.

Federal Trade Commission business guidance

A blog post from the USA Federal Trade Commission (FTC) (Keep your AI claims in check) highlights that they will be keeping an eye out for companies that exaggerate the use of AI in their products, and those that don’t carry out sufficient risk analysis. A follow-up post (Chatbots, deepfakes, and voice clones: AI deception for sale) focuses on synthetic media and generative AI products with respect to possible fraud and deception, and urges companies to seriously consider the potential use cases of their products before release.

AI Art: How artists are using and confronting machine learning

This video from the Museum of Modern Art (MoMA) sees three artists (Kate Crawford, Trevor Paglen, and Refik Anadol) talk about how machine learning algorithms have affected the art world, new approaches to artmaking, and the future for art in the age of AI.


Our resources page
Forthcoming and past seminars for 2023
AI around the world focus series
UN SDGs focus series
New voices in AI series



Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:


