AIhub.org
 

Considering the risks of using AI to help grow our food


14 April 2022



Image: harvester in a wheat field

Artificial intelligence (AI) is on the cusp of driving an agricultural revolution, and helping confront the challenge of feeding our growing global population in a sustainable way. But researchers warn that using new AI technologies at scale holds huge risks that are not being considered.

Imagine a field of wheat that extends to the horizon, being grown for flour that will be made into bread to feed cities’ worth of people. Imagine that all authority for tilling, planting, fertilising, monitoring and harvesting this field has been delegated to artificial intelligence: algorithms that control drip-irrigation systems, self-driving tractors and combine harvesters, clever enough to respond to the weather and the exact needs of the crop. Then imagine a hacker messes things up.

A new risk analysis, published in the journal Nature Machine Intelligence, warns that the future use of artificial intelligence in agriculture comes with substantial potential risks for farms, farmers and food security that are poorly understood and under-appreciated.

“The idea of intelligent machines running farms is not science fiction. Large companies are already pioneering the next generation of autonomous ag-bots and decision support systems that will replace humans in the field,” said Dr Asaf Tzachor from the University of Cambridge’s Centre for the Study of Existential Risk (CSER), first author of the paper.

“But so far no-one seems to have asked the question ‘are there any risks associated with a rapid deployment of agricultural AI?’” he added.

Despite the huge promise of AI for improving crop management and agricultural productivity, the authors say its potential risks must be addressed responsibly, and new technologies properly tested in experimental settings, to ensure they are safe and secure against accidental failures, unintended consequences and cyber-attacks.

In their research, the authors have come up with a catalogue of risks that must be considered in the responsible development of AI for agriculture – and ways to address them. In it, they raise the alarm about cyber-attackers potentially causing disruption to commercial farms using AI, by poisoning datasets or by shutting down sprayers, autonomous drones, and robotic harvesters. To guard against this they suggest that ‘white hat hackers’ help companies uncover any security failings during the development phase, so that systems can be safeguarded against real hackers.

In a scenario associated with accidental failure, the authors suggest that an AI system programmed only to deliver the best crop yield in the short term might ignore the environmental consequences of achieving this, leading to overuse of fertilisers and soil erosion in the long term. Over-application of pesticides in pursuit of high yields could poison ecosystems; over-application of nitrogen fertiliser would pollute the soil and surrounding waterways. The authors suggest involving applied ecologists in the technology design process to ensure these scenarios are avoided.

Autonomous machines could improve the working conditions of farmers, relieving them of manual labour. But without inclusive technology design, the socioeconomic inequalities that are currently entrenched in global agriculture – including gender, class, and ethnic discrimination – will remain.

“Expert AI farming systems that don’t consider the complexities of labour inputs will ignore, and potentially sustain, the exploitation of disadvantaged communities,” warned Tzachor.

Various ag-bots and advanced machinery, such as drones and sensors, are already used to gather information on crops and support farmers’ decision-making: detecting diseases or insufficient irrigation, for example. And self-driving combine harvesters can bring in a crop without the need for a human operator. Such automated systems aim to make farming more efficient, saving labour costs, optimising production, and minimising loss and waste. This can increase revenues for farmers, but it also deepens reliance on agricultural AI.

However, small-scale growers who cultivate the majority of farms worldwide and feed large swaths of the so-called Global South are likely to be excluded from AI-related benefits. Marginalisation, poor internet penetration rates, and the digital divide might prevent smallholders from using advanced technologies, widening the gaps between commercial and subsistence farmers.

With an estimated two billion people afflicted by food insecurity, including some 690 million malnourished and 340 million children suffering micronutrient deficiencies, artificial intelligence technologies and precision agriculture promise substantial benefits for food and nutritional security in the face of climate change and a growing global population.

“AI is being hailed as the way to revolutionise agriculture. As we deploy this technology on a large scale, we should closely consider potential risks, and aim to mitigate those early on in the technology design,” said Dr Seán Ó hÉigeartaigh, Executive Director of CSER and co-author of the new research.

This research was funded by Templeton World Charity Foundation, Inc.

Reference

Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities
Asaf Tzachor, Medha Devare, Brian King, Shahar Avin and Seán Ó hÉigeartaigh
Nature Machine Intelligence, February 2022.




University of Cambridge



