Bug bounties for algorithmic harms? – a report from the Algorithmic Justice League


by Lucy Smith
09 February 2022




Image from the report “Bug bounties for algorithmic harms?” Credit: AJL.

Researchers from the Algorithmic Justice League (AJL) have released a report which takes a detailed look at bug bounty programmes (BBPs) and how these could be used to address various kinds of socio-technical problems, including algorithmic harm.

BBPs are mechanisms that incentivize hackers to identify and report cybersecurity vulnerabilities. Hundreds of companies and organizations regularly use BBPs to buy security flaws from hackers. Now, BBPs have been adopted to address a wider spectrum of socio-technical harms and risks beyond security bugs.

However, as report authors Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini note, the conditions under which BBPs might constitute appropriate mechanisms for addressing socio-technical concerns remain relatively unexamined.

To compile their report, the authors interviewed BBP experts and practitioners, reviewed the existing literature, and analysed historical and present-day approaches to vulnerability disclosure. The team pursued three main lines of enquiry, considering how BBPs might be used to:

  • Foster and nurture participation and community among researchers
  • Shape field development by fostering the development of resources and methods
  • Drive transparency and accountability across the industry

The five key takeaways from the report are as follows:

  1. Prepare to include socio-technical concerns. Only a few companies and organisations have expanded their existing programmes to include socio-technical issues, and no clear best practices have emerged. The report provides recommendations for how to shape BBPs for algorithmic harm discovery and mitigation.
  2. Look across the lifecycle. Bug bounties are just one tool for enhancing cybersecurity. Likewise, BBPs for algorithmic harm will need to be accompanied by other mechanisms in order to assess and act on reports of such harms.
  3. Nurture the community of practice. There is a sense of community within bug bounty platforms, with organisations and members sharing educational materials, resources, and tools. The authors caution against approaches that exclude researchers from fields outside computer science.
  4. Intentionally develop a diverse, inclusive community. Successfully deploying BBPs for algorithmic harms will require serious effort to recruit and retain diverse communities of researchers and community advocates, and to ensure fair compensation for work.
  5. Foster and protect participatory, adversarial research, and guarantee some form of public disclosure. Greater protection for third-party algorithmic harms research is needed.

You can find the full PDF version of the report at https://ajl.org/bugs. This includes more background information, findings and recommendations pertaining to the five key takeaways, interviews with experts, and a case study of Twitter’s recent bias bounty pilot.

Report citation

Kenway, Josh, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini. Bug Bounties For Algorithmic Harms? Lessons from Cybersecurity Vulnerability Disclosure for Algorithmic Harms Discovery, Disclosure, and Redress. Washington, DC: Algorithmic Justice League. January 2022. Available at https://ajl.org/bugs.




Lucy Smith is Senior Managing Editor for AIhub.



