AIhub.org

Considering the risks of using AI to help grow our food

14 April 2022



Image: harvester in a wheat field

Artificial intelligence (AI) is on the cusp of driving an agricultural revolution, and helping confront the challenge of feeding our growing global population in a sustainable way. But researchers warn that using new AI technologies at scale holds huge risks that are not being considered.

Imagine a field of wheat that extends to the horizon, being grown for flour that will be made into bread to feed cities’ worth of people. Imagine that all authority for tilling, planting, fertilising, monitoring and harvesting this field has been delegated to artificial intelligence: algorithms that control drip-irrigation systems, self-driving tractors and combine harvesters, clever enough to respond to the weather and the exact needs of the crop. Then imagine a hacker messes things up.

A new risk analysis, published in the journal Nature Machine Intelligence, warns that the future use of artificial intelligence in agriculture comes with substantial potential risks for farms, farmers and food security that are poorly understood and under-appreciated.

“The idea of intelligent machines running farms is not science fiction. Large companies are already pioneering the next generation of autonomous ag-bots and decision support systems that will replace humans in the field,” said Dr Asaf Tzachor of the University of Cambridge’s Centre for the Study of Existential Risk (CSER), first author of the paper.

“But so far no-one seems to have asked the question ‘are there any risks associated with a rapid deployment of agricultural AI?’” he added.

Despite the huge promise of AI for improving crop management and agricultural productivity, the authors say that potential risks must be addressed responsibly, and that new technologies must be properly tested in experimental settings to ensure they are safe and secure against accidental failures, unintended consequences, and cyber-attacks.

In their research, the authors compiled a catalogue of risks that must be considered in the responsible development of AI for agriculture – and ways to address them. In it, they raise the alarm about cyber-attackers potentially disrupting commercial farms that use AI, by poisoning datasets or by shutting down sprayers, autonomous drones, and robotic harvesters. To guard against this they suggest that ‘white hat hackers’ help companies uncover any security failings during the development phase, so that systems can be safeguarded against real hackers.
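To make the data-poisoning risk concrete, here is a minimal sketch (our illustration, not from the paper) of one simple defence a farm-management system could apply: rejecting incoming sensor readings that deviate implausibly from recent history before they ever reach a model's training data. The sensor values and thresholds below are assumptions chosen for illustration.

```python
# Minimal sketch (hypothetical values): screening incoming sensor readings
# for implausible outliers before adding them to a model's training data.
# A poisoned or faulty feed often shows up as statistically extreme inputs.
from statistics import mean, stdev

def is_plausible(value: float, history: list[float], max_sigma: float = 4.0) -> bool:
    """Flag readings more than max_sigma standard deviations from recent history."""
    if len(history) < 10:  # too little history to judge; accept cautiously
        return True
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value == mu
    return abs(value - mu) / sigma <= max_sigma

# Example: soil-moisture readings (% volumetric water content)
history = [31.2, 30.8, 31.5, 30.9, 31.1, 30.7, 31.3, 31.0, 30.6, 31.4]
print(is_plausible(31.6, history))   # True  - ordinary fluctuation
print(is_plausible(95.0, history))   # False - likely faulty or injected
```

A simple screen like this is no substitute for the security testing the authors call for, but it illustrates the kind of guardrail that can be built in during development rather than bolted on after an attack.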

In a scenario involving accidental failure, the authors suggest that an AI system programmed only to deliver the best crop yield in the short term might ignore the environmental consequences of achieving this, leading to overuse of fertilisers and soil erosion in the long term. Over-application of pesticides in pursuit of high yields could poison ecosystems; over-application of nitrogen fertiliser would pollute the soil and surrounding waterways. The authors suggest involving applied ecologists in the technology design process to ensure these scenarios are avoided.
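The failure mode described here is essentially a mis-specified objective. The toy sketch below, with entirely made-up coefficients rather than anything from the paper, shows how an optimiser that maximises yield alone recommends a far higher nitrogen rate than one whose objective also prices in runoff damage.

```python
import math

# Toy sketch (hypothetical, made-up coefficients): how the choice of
# objective shapes a recommended nitrogen application rate. Yield follows
# a saturating (diminishing-returns) curve; runoff damage grows with rate.
def yield_t_per_ha(n_kg: float) -> float:
    return 8.0 * (1 - math.exp(-0.02 * n_kg))

def runoff_damage(n_kg: float) -> float:
    return 0.0002 * n_kg ** 2

rates = range(0, 401, 10)  # candidate N rates, kg/ha
best_yield_only = max(rates, key=lambda n: yield_t_per_ha(n))
best_with_penalty = max(rates, key=lambda n: yield_t_per_ha(n) - runoff_damage(n))

print(f"yield-only objective recommends: {best_yield_only} kg N/ha")
print(f"penalised objective recommends:  {best_with_penalty} kg N/ha")
```

Because the yield curve saturates while the damage term keeps growing, the penalised objective settles on a much lower rate; this is the kind of trade-off that the applied ecologists the authors mention would help encode.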

Autonomous machines could improve the working conditions of farmers, relieving them of manual labour. But without inclusive technology design, socioeconomic inequalities that are currently entrenched in global agriculture – including discrimination by gender, class, and ethnicity – will remain.

“Expert AI farming systems that don’t consider the complexities of labour inputs will ignore, and potentially sustain, the exploitation of disadvantaged communities,” warned Tzachor.

Various ag-bots and advanced machinery, such as drones and sensors, are already used to gather information on crops and support farmers’ decision-making: detecting diseases or insufficient irrigation, for example. And self-driving combine harvesters can bring in a crop without the need for a human operator. Such automated systems aim to make farming more efficient, saving labour costs, optimising production, and minimising loss and waste. This increases revenues for farmers, but also deepens reliance on agricultural AI.
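As a flavour of how such decision support works at its simplest, the following sketch (hypothetical zone names and agronomic thresholds, not from any cited system) turns raw soil-moisture telemetry into per-zone irrigation alerts; real systems layer weather forecasts and crop models on top of logic like this.

```python
# Minimal sketch (hypothetical thresholds): turning soil-moisture telemetry
# into per-zone irrigation alerts, the simplest form of decision support.
WILTING_POINT = 18.0     # % moisture below which the crop is stressed
FIELD_CAPACITY = 34.0    # % moisture above which water is being wasted

readings = {"zone_a": 16.5, "zone_b": 27.0, "zone_c": 36.2}

for zone, moisture in readings.items():
    if moisture < WILTING_POINT:
        action = "irrigate now"
    elif moisture > FIELD_CAPACITY:
        action = "pause irrigation"
    else:
        action = "no action"
    print(f"{zone}: {moisture:.1f}% -> {action}")
```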

However, small-scale growers, who cultivate the majority of farms worldwide and feed large swathes of the so-called Global South, are likely to be excluded from AI-related benefits. Marginalisation, poor internet penetration rates, and the digital divide might prevent smallholders from using these advanced technologies, widening the gaps between commercial and subsistence farmers.

With an estimated two billion people afflicted by food insecurity, including some 690 million malnourished and 340 million children suffering micronutrient deficiencies, artificial intelligence technologies and precision agriculture promise substantial benefits for food and nutritional security in the face of climate change and a growing global population.

“AI is being hailed as the way to revolutionise agriculture. As we deploy this technology on a large scale, we should closely consider potential risks, and aim to mitigate those early on in the technology design,” said Dr Seán Ó hÉigeartaigh, Executive Director of CSER and co-author of the new research.

This research was funded by Templeton World Charity Foundation, Inc.

Reference

Asaf Tzachor, Medha Devare, Brian King, Shahar Avin and Seán Ó hÉigeartaigh. Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities. Nature Machine Intelligence, February 2022.




University of Cambridge



