Using data for the public good: the roles of clear governance, good data and trustworthy institutions


24 July 2020



Clear Light Bulb Planter on Grey Rock. Photographer: Singkham

By Roger Taylor, Chair of the Centre for Data Ethics and Innovation

Failure to use data effectively means we cannot deal with the most pressing issues that face us today, such as discrimination. Addressing this requires institutions that are fit to enable responsible use of data and technology for the public good, engaging civil society and the public as well as industry and government.

The Royal Society’s Data Governance Explainer (PDF) brings welcome clarity to a complex landscape. It builds upon another Royal Society report published in 2017, written in partnership with the British Academy, that helped set the wheels in motion within the government to create the Centre for Data Ethics and Innovation (CDEI). Since then, the UK institutional landscape has seen an expansion of organisations that are working together to better understand how we effectively and responsibly adopt data-driven technologies and artificial intelligence.

One of my ambitions for the CDEI is that we continue to work closely with others. We play an essential role in creating a UK data governance environment which enables data to be shared at scale in a trustworthy way for the benefit of the public. This helpful explainer provides a stocktake of the landscape, but it also helps us spot the outstanding challenges. Three issues in particular strike me as important.

Our data protection laws are not sufficient for ethical innovation

Data protection laws are designed to protect individuals and prevent the misuse of personal information. This is essential. But at the same time we also need ways to ensure that data is being shared for the public good at a much greater scale than is currently happening. Access to data is going to be invaluable when tackling some of the biggest societal challenges, whether it is data about the spread of pandemics, data about the biases in institutional decision making, data about energy use, or data about the spread of information and misinformation.

Last week the CDEI published its AI Barometer (PDF). It draws on the expertise of over 120 panellists from industry, academia, civil society and government, and is a first-of-its-kind analysis that maps the most pressing risks and opportunities of AI and data use as they arise in different sectors. It also helps to identify the barriers to achieving responsible adoption of technology.

Two of the most significant barriers identified were lack of clarity about the regulatory and governance environment; and lack of access to high quality data. The two issues are linked. Widespread uncertainty about both the ethics and legality of data use is preventing innovation. Despite the extensive work of the ICO to help to clarify the legal complexities of GDPR, organisations can still struggle with the interaction between data protection, common law, and other more specific legal duties. Without clear legal, regulatory and governance frameworks that enable the use of data for public benefit, our society and our economy will be weakened. At the CDEI we see these barriers in action in the frequent requests we receive for advice from organisations who want to use data for public benefit but lack the necessary clarity on what good governance looks like.

Data governance matters for the most pressing social issues of the day

The lack of clear and effective governance to better enable the use of data for public good is affecting our ability to deal with the most pressing issues that face us today, whether that is discrimination and injustice, global warming, ageing populations or the current pandemic.

The AI Barometer set out the key opportunities for AI and data-driven technologies and found that many of these depend on the availability and use of complex data sets, often about individual behaviour. These opportunities are related both to protecting individual rights and to creating a sustainable and safe society. We need to find ways to increase access to data to fuel innovation which are fair and which empower individuals and protect their privacy. Techniques to do this exist – and the UK has real expertise in this area – but we need to do more to develop these, widen understanding and increase implementation.

The importance of data governance is evident in the in-depth review the CDEI is currently undertaking into algorithmic bias, the conclusions of which will be published in the summer. Tackling bias in institutional decision-making – whether in policing, financial services, recruitment or local government – may require analysis of complex data by organisations that are competent to monitor and adjust their behaviour in the light of evidence of discrimination. Failure to use data effectively can allow discrimination to go unchecked. One area our review will address is the need for appropriate data governance regimes that enable monitoring and mitigation of racial bias and other forms of discrimination, and ensure transparency and accountability.

The CDEI also carried out a review into micro-targeting. This enormously powerful technology can be used to manipulate people and spread misinformation. It can also be used to increase equality of opportunity, encourage responsible health behaviour and deliver more effective educational resources. In discussions with the public (PDF) as part of the review, a clear message was the desire for regulation that prevents harm while still enabling the technology to be used for benefit. Our data governance regime is currently not delivering on this effectively.

Trusted and trustworthy institutions are needed

Good governance relies on trustworthy institutions that the public can be confident will allow data to be used for public benefit, and not against them. It is neither fair nor plausible to put the responsibility for ethical data governance solely onto individuals: we cannot expect people always to be able to determine whether each proposed use of data about them is beneficial or harmful.

Transparency and accountability are essential characteristics of good governance and key to establishing trustworthy institutions. Many things can undermine public trust in data use: a lack of transparency and clarity about how exactly data is being used and the impact it has had or will have; over-claiming the benefits of data use; 'tech solutionism' that sees technology as the answer to everything; and ineffective protection or loss of personal data by companies and governments.

In developing institutions that are fit to enable responsible use of data and technology in the UK, we will need to engage civil society and the public as well as industry and government. The Explainer is welcome not only because it clearly sets out where we are, but hints at where we need to get to.

You can download the Royal Society’s new publication The UK data governance landscape: explainer (PDF).

For another perspective on the UK’s data governance landscape and the Royal Society’s Data Governance Explainer, please see Getting data right: governance for people and society.

This article was originally published on The Royal Society blog.

About the author

Roger Taylor is Chair of the Centre for Data Ethics and Innovation. He has worked as an entrepreneur, a regulator and a writer.




The Royal Society is a Fellowship of many of the world's most eminent scientists and is the oldest scientific academy in continuous existence.



