Considering the risks of using AI to help grow our food


14 April 2022



Image: a harvester in a wheat field.

Artificial intelligence (AI) is on the cusp of driving an agricultural revolution and of helping confront the challenge of feeding our growing global population sustainably. But researchers warn that deploying new AI technologies at scale carries huge risks that are not being considered.

Imagine a field of wheat that extends to the horizon, being grown for flour that will be made into bread to feed cities’ worth of people. Imagine that all authority for tilling, planting, fertilising, monitoring and harvesting this field has been delegated to artificial intelligence: algorithms that control drip-irrigation systems, self-driving tractors and combine harvesters, clever enough to respond to the weather and the exact needs of the crop. Then imagine a hacker messes things up.

A new risk analysis, published in the journal Nature Machine Intelligence, warns that the future use of artificial intelligence in agriculture comes with substantial potential risks for farms, farmers and food security that are poorly understood and under-appreciated.

“The idea of intelligent machines running farms is not science fiction. Large companies are already pioneering the next generation of autonomous ag-bots and decision support systems that will replace humans in the field,” said Dr Asaf Tzachor of the University of Cambridge’s Centre for the Study of Existential Risk (CSER), first author of the paper.

“But so far no-one seems to have asked the question ‘are there any risks associated with a rapid deployment of agricultural AI?’” he added.

Despite the huge promise of AI for improving crop management and agricultural productivity, the authors say potential risks must be addressed responsibly, and new technologies properly tested in experimental settings, to ensure they are safe and secure against accidental failures, unintended consequences, and cyber-attacks.

In their research, the authors have compiled a catalogue of risks that must be considered in the responsible development of AI for agriculture, along with ways to address them. They raise the alarm about cyber-attackers potentially disrupting commercial farms that use AI, whether by poisoning datasets or by shutting down sprayers, autonomous drones, and robotic harvesters. To guard against this, they suggest that ‘white hat hackers’ help companies uncover any security failings during the development phase, so that systems can be safeguarded against real hackers.

In one scenario of accidental failure, the authors suggest that an AI system programmed only to deliver the best crop yield in the short term might ignore the environmental consequences of achieving this, leading to overuse of fertilisers and soil erosion in the long term. Over-application of pesticides in pursuit of high yields could poison ecosystems; over-application of nitrogen fertiliser would pollute the soil and surrounding waterways. The authors suggest involving applied ecologists in the technology design process to ensure such scenarios are avoided.
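To make the objective-misspecification point concrete, here is a minimal, purely illustrative sketch (not taken from the paper; the functions, coefficients and units are hypothetical): an optimiser that scores candidate fertiliser rates on yield alone drifts to the maximum permitted rate, while one that also penalises nitrogen run-off settles on a lower rate.

# Purely illustrative, not from the paper: toy scoring functions for candidate
# fertiliser rates, with hypothetical coefficients and units.

def yield_only_score(fertiliser_kg_per_ha: float) -> float:
    # Toy yield response that keeps rising with input, so "apply the maximum" looks optimal.
    return 5.0 + 0.02 * fertiliser_kg_per_ha  # tonnes per hectare

def externality_aware_score(fertiliser_kg_per_ha: float) -> float:
    # Same toy yield, minus a penalty for nitrogen run-off that grows with over-application.
    runoff_penalty = 0.0001 * fertiliser_kg_per_ha ** 2
    return yield_only_score(fertiliser_kg_per_ha) - runoff_penalty

candidates = range(0, 401, 50)  # candidate rates in kg N per hectare
print(max(candidates, key=yield_only_score))         # picks 400, the maximum allowed
print(max(candidates, key=externality_aware_score))  # picks 100, a lower rate

The point is not the numbers but the shape of the objective: unless environmental costs are represented in what the system optimises, the “best” action it finds can be the one that degrades soil and water over time.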

Autonomous machines could improve farmers’ working conditions, relieving them of manual labour. But without inclusive technology design, the socioeconomic inequalities currently entrenched in global agriculture – including gender, class, and ethnic discrimination – will persist.

“Expert AI farming systems that don’t consider the complexities of labour inputs will ignore, and potentially sustain, the exploitation of disadvantaged communities,” warned Tzachor.

Various ag-bots and advanced machinery, such as drones and sensors, are already used to gather information on crops and to support farmers’ decision-making: detecting diseases or insufficient irrigation, for example. Self-driving combine harvesters can bring in a crop without the need for a human operator. Such automated systems aim to make farming more efficient, saving labour costs, optimising production, and minimising loss and waste, which increases farmers’ revenues as well as their reliance on agricultural AI.

However, small-scale growers, who cultivate the majority of farms worldwide and feed large swathes of the so-called Global South, are likely to be excluded from AI-related benefits. Marginalisation, poor internet penetration rates, and the digital divide might prevent smallholders from using these advanced technologies, widening the gaps between commercial and subsistence farmers.

With an estimated two billion people afflicted by food insecurity, including some 690 million malnourished and 340 million children suffering micronutrient deficiencies, artificial intelligence technologies and precision agriculture promise substantial benefits for food and nutritional security in the face of climate change and a growing global population.

“AI is being hailed as the way to revolutionise agriculture. As we deploy this technology on a large scale, we should closely consider potential risks, and aim to mitigate those early on in the technology design,” said Dr Seán Ó hÉigeartaigh, Executive Director of CSER and co-author of the new research.

This research was funded by Templeton World Charity Foundation, Inc.

Reference

Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities
Asaf Tzachor, Medha Devare, Brian King, Shahar Avin and Seán Ó hÉigeartaigh
Nature Machine Intelligence, February 2022.




University of Cambridge



