AIhub monthly digest: March 2023 – plant disease diagnosis, logic for trustworthy AI, and neurosymbolic approaches

by Lucy Smith
28 March 2023




Welcome to our March 2023 monthly digest, where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we learn about a mobile phone app to help farmers diagnose plant diseases, find out more about neurosymbolic approaches, and hear how scientific information changes as it is covered by different media.

Diagnosing plant diseases: an interview with Ernest Mwebaze

Ernest Mwebaze and his team have developed a mobile application for farmers to help diagnose diseases in their cassava crops. As part of our focus series on AI around the world, we spoke to Ernest to find out more about this project, how it developed, and plans for further work. Read the interview here.

Neurosymbolic approaches

Ever wondered how the complementary features of symbolic and deep-learning methods can be combined? Using a crime scene analogy, Lauren Nicole DeLong and Ramon Fernández Mir explain all in this blogpost, focusing on neurosymbolic approaches for reasoning on graph structures.

Modelling information change in scientific communication

With the general public relying principally on mainstream media outlets for their science news, accurate reporting is of paramount importance. Overhyping, exaggeration and misrepresentation of research findings erode trust in science and scientists. In her invited talk at AAAI 2023, Isabelle Augenstein presented some of her work on the communication of scientific research and on how information changes as it is reported by different media. Read our summary of the talk here.

Logic for trustworthy rational robots

Learning-based solutions are efficient, but are they trustworthy enough to be embedded in a robot cooperating with or assisting humans? In this blogpost, Daniele Meli explores this question, and reviews logic programming as a route to trustworthy autonomous (and cooperative) robotic systems.

AAAI 2023 workshops

As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 different workshops were held, covering a wide range of topics. We heard from the organisers of four of these workshops, who told us their key takeaways from their respective events. These were split into two articles: 1) #AAAI2023 workshops round-up 1: AI for credible elections, and responsible human-centric AI, and 2) #AAAI2023 workshops round-up 2: health intelligence and privacy-preserving AI.

AI UK

Hosted by the Alan Turing Institute, AI UK is a two-day conference that showcases artificial intelligence and data science research, development, and policy in the UK. This year, the event took place on 21 and 22 March, and we covered the panel discussion session on the role and impact of science journalism.

AAAI policy on use of AI systems in producing publications

AAAI has updated its publication policy to address the use of AI systems: “It is AAAI’s policy that any AI system, including Generative Models such as Chat-GPT, BARD, and DALL-E, does not satisfy the criteria for authorship of papers published by AAAI and, as such, also cannot be used as a citable source in papers published by AAAI”. Read the policy in full here.

Stochastic parrots day

On Friday 17 March, a virtual event took place to commemorate the second anniversary of the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. The article’s co-authors and various guests reflected on what has happened in the last two years, the current state of the large language model landscape, and what the future has in store. During the event, attendees shared recommended reading, and these suggestions are compiled here.

More on large language models

The latest in the CLAIRE All Questions Answered (AQuA) series saw the panel focus their one-hour session on ChatGPT and large language models. You can catch the recording in full here.

Also well worth a listen is this episode of the Radical AI podcast, where hosts Dylan and Jess chat to Emily M. Bender and Casey Fiesler about the ethical considerations of ChatGPT, bias and discrimination, and the importance of algorithmic literacy in the face of chatbots.

Federal Trade Commission business guidance

A blog post from the US Federal Trade Commission (FTC) (Keep your AI claims in check) highlights that it will be keeping an eye out for companies that exaggerate the use of AI in their products or fail to carry out sufficient risk analysis. A follow-up post (Chatbots, deepfakes, and voice clones: AI deception for sale) focuses on synthetic media and generative AI products with respect to possible fraud and deception, and urges companies to seriously consider the potential use cases of their products before release.

AI Art: How artists are using and confronting machine learning

This video from the Museum of Modern Art (MoMA) sees three artists (Kate Crawford, Trevor Paglen, and Refik Anadol) talk about how machine learning algorithms have affected the art world, new approaches to artmaking, and the future for art in the age of AI.


Our resources page
Forthcoming and past seminars for 2023
AI around the world focus series
UN SDGs focus series
New voices in AI series




Lucy Smith, Managing Editor for AIhub.




AIhub is supported by:


©2024 - Association for the Understanding of Artificial Intelligence

