Stanford HAI 2021 fall conference: four radical proposals for a better society


by Lucy Smith
11 November 2021




This year’s Stanford HAI fall conference took place virtually on 9-10 November. It comprised a discussion of four policy proposals that respond to the issues and opportunities created by artificial intelligence, the premise being that each proposal poses a challenge to the status quo. The proposals were presented to panels of experts, who debated the merits of, and issues surrounding, each one.

The event was recorded and you can watch both days’ sessions on YouTube. Day one covered proposals 1 and 2, and day two focussed on proposals 3 and 4.

Day one

Proposal 1: Middleware could give consumers choices over what they see online

Middleware is software that rides on top of an existing internet or social media platform such as Google, Facebook or Twitter and can modify the presentation of underlying data. This proposal suggests outsourcing content moderation to a layer of competitive middleware companies that would offer users the ability to tailor their search and social media feeds to suit their personal preferences.

Taking part in this discussion were:
Francis Fukuyama (Freeman Spogli Institute for International Studies)
Ashish Goel (Stanford University)
Kate Starbird (University of Washington)
Katrina Ligett (Hebrew University)
Renee DiResta (Stanford Internet Observatory)

Read more here.

Proposal 2: Universal Basic Income to offset job losses due to automation

The proposal is to give every American adult $1,000 a month to avert an economic crisis.

Taking part in this discussion were:
Andrew Yang (Venture for America)
Darrick Hamilton (The New School Milano)
Mark Duggan (Stanford Institute for Economic Policy Research)
Juliana Bidadanure (Stanford University)

Read more here.

Day two

Proposal 3: Data cooperatives could give us more power over our data

To address the power imbalance between data producers and corporations that profit from our data, scholars propose creating data cooperatives to act as fiduciary intermediaries.

Taking part in this discussion were:
Divya Siddarth (Microsoft)
Pamela Samuelson (UC Berkeley)
Sandy Pentland (MIT)
Jennifer King (Stanford University)

Read more here.

Proposal 4: Third-party auditor access for AI accountability

A proposal for legal protections and regulatory involvement to support organizations that uncover algorithmic harm.

Taking part in this discussion were:
Deborah Raji (UC Berkeley)
DJ Patil (Devoted Health)
Cathy O’Neil (Columbia University)
Fiona Scott Morton (Yale University)

Read more here.

These four proposals were chosen following a public consultation last spring. The organisers received nearly 100 suggestions. In addition to the four discussed at the event, there are six more that the team at Stanford HAI would like to highlight. You can find out more about these here.




Lucy Smith is Senior Managing Editor for AIhub.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack





