Welcome to our March 2023 monthly digest, where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we learn about a mobile phone app to help farmers diagnose plant diseases, find out more about neurosymbolic approaches, and hear how scientific information changes as it is covered by different media.
Ernest Mwebaze and his team have developed a mobile application to help farmers diagnose diseases in their cassava crops. As part of our focus series on AI around the world, we spoke to Ernest to find out more about this project, how it developed, and plans for further work. Read the interview here.
Ever wondered how the complementary features of symbolic and deep-learning methods can be combined? Using a crime scene analogy, Lauren Nicole DeLong and Ramon Fernández Mir explain all in this blogpost, focusing on neurosymbolic approaches for reasoning on graph structures.
With the general public relying principally on mainstream media outlets for their science news, accurate reporting is of paramount importance. Overhyping, exaggeration and misrepresentation of research findings erode trust in science and scientists. In her invited talk at AAAI 2023, Isabelle Augenstein presented some of her work relating to the communication of scientific research, and how information changes as it is reported by different media. Read our summary of the talk here.
Learning-based solutions are efficient, but are they trustworthy enough to be embedded in a robot cooperating with or assisting humans? In this blogpost, Daniele Meli explores this question, and reviews logic programming as a route to trustworthy autonomous (and cooperative) robotic systems.
As part of the 37th AAAI Conference on Artificial Intelligence (AAAI2023), 32 workshops were held, covering a wide range of topics. We heard from the organisers of four of these workshops, who told us their key takeaways from their respective events. These were split into two articles: 1) #AAAI2023 workshops round-up 1: AI for credible elections, and responsible human-centric AI, and 2) #AAAI2023 workshops round-up 2: health intelligence and privacy-preserving AI.
Hosted by the Alan Turing Institute, AI UK is a two-day conference that showcases artificial intelligence and data science research, development, and policy in the UK. This year, the event took place on 21 and 22 March, and we covered the panel discussion session on the role and impact of science journalism.
AAAI have updated their publication policy to deal with AI systems: “It is AAAI’s policy that any AI system, including Generative Models such as Chat-GPT, BARD, and DALL-E, does not satisfy the criteria for authorship of papers published by AAAI and, as such, also cannot be used as a citable source in papers published by AAAI”. Read the policy in full here.
On Friday 17 March, a virtual event took place to mark the second anniversary of the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. The article’s co-authors and various guests reflected on what has happened over the last two years, the current state of the large language model landscape, and what the future has in store. During the event, attendees shared recommended reading, and these suggestions are compiled here.
The latest in the CLAIRE All Questions Answered (AQuA) series saw the panel focus their one-hour session on ChatGPT and large language models. You can catch the recording in full here.
Also well worth a listen is this episode of the Radical AI podcast, where hosts Dylan and Jess chat to Emily M. Bender and Casey Fiesler about the ethical considerations of ChatGPT, bias and discrimination, and the importance of algorithmic literacy in the face of chatbots.
A blog post from the US Federal Trade Commission (FTC) (Keep your AI claims in check) highlights that the commission will be keeping an eye out for companies that exaggerate the use of AI in their products and those that don’t carry out sufficient risk analysis. A follow-up post (Chatbots, deepfakes, and voice clones: AI deception for sale) focusses on synthetic media and generative AI products with respect to possible fraud and deception, and urges companies to seriously consider the potential use cases of their products before release.
This video from the Museum of Modern Art (MoMA) sees three artists (Kate Crawford, Trevor Paglen, and Refik Anadol) talk about how machine learning algorithms have affected the art world, new approaches to artmaking, and the future for art in the age of AI.
Our resources page
Forthcoming and past seminars for 2023
AI around the world focus series
UN SDGs focus series
New voices in AI series