Considering the risks of using AI to help grow our food


14 April 2022



Image: a harvester in a wheat field.

Artificial intelligence (AI) is on the cusp of driving an agricultural revolution, and helping confront the challenge of feeding our growing global population in a sustainable way. But researchers warn that using new AI technologies at scale holds huge risks that are not being considered.

Imagine a field of wheat that extends to the horizon, being grown for flour that will be made into bread to feed cities’ worth of people. Imagine that all authority for tilling, planting, fertilising, monitoring and harvesting this field has been delegated to artificial intelligence: algorithms that control drip-irrigation systems, self-driving tractors and combine harvesters, clever enough to respond to the weather and the exact needs of the crop. Then imagine a hacker messes things up.

A new risk analysis, published in the journal Nature Machine Intelligence, warns that the future use of artificial intelligence in agriculture comes with substantial potential risks for farms, farmers and food security that are poorly understood and under-appreciated.

“The idea of intelligent machines running farms is not science fiction. Large companies are already pioneering the next generation of autonomous ag-bots and decision support systems that will replace humans in the field,” said Dr Asaf Tzachor of the University of Cambridge’s Centre for the Study of Existential Risk (CSER), first author of the paper.

“But so far no-one seems to have asked the question ‘are there any risks associated with a rapid deployment of agricultural AI?’” he added.

Despite the huge promise of AI for improving crop management and agricultural productivity, potential risks must be addressed responsibly, and new technologies must be properly tested in experimental settings to ensure they are safe and secure against accidental failures, unintended consequences, and cyber-attacks, the authors say.

In their research, the authors present a catalogue of risks that must be considered in the responsible development of AI for agriculture, along with ways to address them. They raise the alarm about cyber-attackers potentially disrupting commercial farms that use AI, by poisoning datasets or by shutting down sprayers, autonomous drones, and robotic harvesters. To guard against this, they suggest that ‘white hat hackers’ help companies uncover security failings during the development phase, so that systems can be safeguarded against real attackers.

In a scenario involving accidental failure, the authors suggest that an AI system programmed only to deliver the best crop yield in the short term might ignore the environmental consequences of achieving this, leading to overuse of fertilisers and soil erosion in the long term. Over-application of pesticides in pursuit of high yields could poison ecosystems; over-application of nitrogen fertiliser would pollute the soil and surrounding waterways. The authors suggest involving applied ecologists in the technology design process to ensure these scenarios are avoided.
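To make this failure mode concrete, here is a minimal, hypothetical sketch (not taken from the paper) of how an objective that rewards yield alone differs from one that also penalises fertiliser over-application. The yield curve, penalty coefficient, and nitrogen limit are illustrative assumptions, not agronomic values.

import math

# Hypothetical illustration only: the yield response, the penalty term,
# and the 120 kg/ha "safe limit" are made-up assumptions, not values
# from the paper or from agronomy.

def expected_yield(nitrogen_kg_per_ha: float) -> float:
    """Toy diminishing-returns yield response to nitrogen fertiliser (tonnes/ha)."""
    return 10.0 * (1.0 - math.exp(-0.02 * nitrogen_kg_per_ha))

def environmental_cost(nitrogen_kg_per_ha: float, safe_limit: float = 120.0) -> float:
    """Toy penalty for nitrogen applied beyond an assumed safe limit."""
    excess = max(0.0, nitrogen_kg_per_ha - safe_limit)
    return 0.05 * excess  # expressed in yield-equivalent units

candidate_rates = range(0, 401, 10)  # candidate nitrogen rates, kg/ha

# A short-sighted objective simply maximises yield, so it picks the highest rate.
yield_only = max(candidate_rates, key=expected_yield)

# An objective with an environmental term trades a little yield for less runoff.
with_penalty = max(candidate_rates,
                   key=lambda n: expected_yield(n) - environmental_cost(n))

print(f"Yield-only objective applies {yield_only} kg/ha of nitrogen")
print(f"Penalised objective applies {with_penalty} kg/ha of nitrogen")

The point is qualitative: which application rate the system chooses depends entirely on what its objective rewards, which is why the authors argue for involving ecologists at the design stage.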

Autonomous machines could improve the working conditions of farmers, relieving them of manual labour. But without inclusive technology design, socioeconomic inequalities that are currently entrenched in global agriculture – including gender, class, and ethnic discrimination – will remain.

“Expert AI farming systems that don’t consider the complexities of labour inputs will ignore, and potentially sustain, the exploitation of disadvantaged communities,” warned Tzachor.

Various ag-bots and advanced machinery, such as drones and sensors, are already used to gather information on crops and support farmers’ decision-making: detecting diseases or insufficient irrigation, for example. And self-driving combine harvesters can bring in a crop without the need for a human operator. Such automated systems aim to make farming more efficient, saving labour costs, optimising production, and minimising loss and waste. This increases revenues for farmers, but it also deepens their reliance on agricultural AI.

However, small-scale growers who cultivate the majority of farms worldwide and feed large swaths of the so-called Global South are likely to be excluded from AI-related benefits. Marginalisation, poor internet penetration rates, and the digital divide might prevent smallholders from using advanced technologies, widening the gaps between commercial and subsistence farmers.

With an estimated two billion people afflicted by food insecurity, including some 690 million malnourished and 340 million children suffering micronutrient deficiencies, artificial intelligence technologies and precision agriculture promise substantial benefits for food and nutritional security in the face of climate change and a growing global population.

“AI is being hailed as the way to revolutionise agriculture. As we deploy this technology on a large scale, we should closely consider potential risks, and aim to mitigate those early on in the technology design,” said Dr Seán Ó hÉigeartaigh, Executive Director of CSER and co-author of the new research.

This research was funded by Templeton World Charity Foundation, Inc.

Reference

Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities
Asaf Tzachor, Medha Devare, Brian King, Shahar Avin and Seán Ó hÉigeartaigh
Nature Machine Intelligence, February 2022.




University of Cambridge



