
2023 landscape – a report from the AI Now Institute


12 April 2023




Image taken from the report cover. Credit: Amba Kak and Sarah Myers West.

The AI Now Institute has released its 2023 annual report. It focuses on the concentration of power in the tech industry and highlights a set of approaches to confront this, with the authors suggesting both policy reforms and non-regulatory interventions. The report aims to provide strategic guidance to inform future work and to ensure that the technology serves the public, not industry.

The specific themes covered in the report are outlined in the executive summary, which you can read here.

Read the report in full

The full report can be read here.
The pdf version can be downloaded here.

The report is authored by Amba Kak and Sarah Myers West, with research and editorial contributions from Alejandro Calcaño, Jane Chung, Kerry McInerney and Meredith Whittaker.

Cite as: Amba Kak and Sarah Myers West, “AI Now 2023 Landscape: Confronting Tech Power”, AI Now Institute, April 11, 2023.

About the AI Now Institute

The AI Now Institute was founded in 2017 and produces diagnosis and policy research to address the concentration of power in the tech industry. Find out more here.



Lucy Smith is Senior Managing Editor for AIhub.



