Bug bounties for algorithmic harms? – a report from the Algorithmic Justice League


09 February 2022




Image from the report "Bug bounties for algorithmic harms?". Credit: AJL.

Researchers from the Algorithmic Justice League (AJL) have released a report which takes a detailed look at bug bounty programmes (BBPs) and how these could be used to address various kinds of socio-technical problems, including algorithmic harm.

BBPs are mechanisms that incentivize hackers to identify and report cybersecurity vulnerabilities. Hundreds of companies and organizations regularly use BBPs to buy security flaws from hackers. Now, BBPs have been adopted to address a wider spectrum of socio-technical harms and risks beyond security bugs.

However, as report authors Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini note, the conditions under which BBPs might constitute appropriate mechanisms for addressing socio-technical concerns remain relatively unexamined.

To compile their report, the authors interviewed BBP experts and practitioners, reviewed the existing literature, and analysed historical and present-day approaches to vulnerability disclosure. The team pursued three main lines of enquiry, considering how BBPs might be used to:

  • Foster and nurture participation and community among researchers
  • Shape field development by fostering the development of resources and methods
  • Drive transparency and accountability across the industry

The five key takeaways from the report are as follows:

  1. Prepare to include socio-technical concerns. Only a few companies and organisations have expanded their current programmes to include socio-technical issues, and no clear best practices have emerged. The report provides recommendations on how to shape BBPs for algorithmic harm discovery and mitigation.
  2. Look across the lifecycle. Bug bounties are just one tool for enhancing cybersecurity. Likewise, BBPs for algorithmic harm will need to be accompanied by other mechanisms in order to assess and act on reports of such harms.
  3. Nurture the community of practice. There is a sense of community within bug bounty platforms, with organisations and members sharing educational materials, resources and tools. The authors caution against approaches that exclude contributors from fields outside of computer science.
  4. Intentionally develop a diverse, inclusive community. Successfully deploying BBPs for algorithmic harms will require serious effort to recruit and retain diverse communities of researchers and community advocates, and to ensure fair compensation for work.
  5. Foster and protect participatory, adversarial research, and guarantee some form of public disclosure. Greater protection for third-party algorithmic harms research is needed.

You can find the full PDF version of the report at https://ajl.org/bugs. This includes more background information, findings and recommendations pertaining to the five key takeaways, interviews with experts, and a case study of Twitter's recent bias bounty pilot.

Report citation

Kenway, Josh, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini. Bug Bounties For Algorithmic Harms? Lessons from Cybersecurity Vulnerability Disclosure for Algorithmic Harms Discovery, Disclosure, and Redress. Washington, DC: Algorithmic Justice League. January 2022. Available at https://ajl.org/bugs.




Lucy Smith is Senior Managing Editor for AIhub.
