Jamillah Knowles & Digit / Pink Office / Licensed under CC-BY 4.0
2025 marked a pivotal shift in AI from testing to deployment, as generative and agentic systems became essential in key sectors worldwide. This feature highlights the major AI ethics and policy developments of 2025 and concludes with a forward-looking perspective on the ethical and policy challenges likely to shape 2026.
One significant change in 2025 was the enforcement of AI principles and regulations laid out over the past few years. In the US, Trump overturned Biden's 2023 Executive Order on Safe, Secure, and Trustworthy AI, which had expanded safety requirements for models and increased reporting duties for developers. This marked a real shift in attitude toward AI, with the US prioritizing deregulation and fast innovation over responsible AI (Squire Patton Boggs, 2025), and it may create friction with the EU, which is taking a more stringent approach. Over the course of 2025, the EU AI Act began requiring organizations to categorize systems by risk level, prepare oversight plans, conduct red-team tests, and publish transparency information (European Commission, 2024). High-risk systems, such as those used in employment, credit, education, and public services, now face strict conformity assessments and ongoing post-market monitoring. The ACM U.S. Technology Policy Committee (ACM USTPC, 2025) issued several official recommendations advocating clear documentation practices, rigorous evaluation standards, and effective oversight to ensure public accountability.
In 2025, AI safety infrastructure grew rapidly. Safety, once discussed primarily in conceptual terms, has evolved into a structured engineering discipline. The rise of third-party evaluation centers and independent auditing processes reflects a growing understanding that safety assessments must go beyond static benchmarks. In this vein, the UK proposed the AI Growth Lab, a sandbox where new AI models can be tested under real-world conditions, with temporary regulatory modifications to enable effective research (DSIT, 2025). Benchmarks for assessing deception, persuasion, and long-term planning were widely adopted by leading laboratories, including OpenAI, Google, Anthropic, Moonshot AI, and Alibaba (Stanford CRFM, 2025). The ACM USTPC emphasized explainability as essential for fairness, arguing that black-box systems undermine both scientific integrity and democratic oversight. Its guidance influenced policy discussions across healthcare, finance, and critical infrastructure, where transparency became a necessary condition for deployment. These developments echo themes from the forthcoming AI and Ethics Handbook (Medsker, 2026), which argues that safety must consider the socio-technical context, not just model-level testing.
One of the most notable changes in 2025 was the growing recognition that declining to deploy GenAI can be ethically justified. Ethical deployment is now understood to rest not only on regulation but also on essential AI literacy: understanding system limits, social context, and human judgment. This perspective places the primary responsibility on institutions, not individual users, to establish clear governance, provide proper oversight, and determine when AI should not be used at all. It challenges narratives of technological inevitability and underscores the importance of human judgment in AI-driven systems (Duarte et al., 2026).
Agentic AI systems achieved significant advances in 2025. For example, Epic's Cognitive Automation Agent, a new platform that independently manages clinical workflows in the US, is among the first of its kind: it can initiate actions without clinician prompting (Epic, 2025). The increasing capabilities of agentic AI raise critical questions about oversight, predictability, and moral responsibility. Deployments like this mean that AI ethics evaluations will increasingly focus on the actions such systems take, rather than on the predictions they make. To address these concerns, the USTPC emphasized the importance of clear responsibility, strong monitoring systems, and transparent governance structures (ACM USTPC, 2025).
A look back at 2025 would not be complete without discussing the new practice of “vibe coding,” in which developers generate, refine, and debug code through iterative interaction with LLMs. While often framed as a productivity boost, the approach effectively delegates significant design and implementation decisions to AI agents, raising familiar questions about accountability, security, and oversight in automated decision-making. These unanswered questions have not slowed its popularity: the practice became so ubiquitous that Collins Dictionary named it Word of the Year for 2025.
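To make the workflow concrete, here is a minimal sketch of one plausible shape of a vibe-coding loop. The `LLMClient` object and its `ask(prompt)` method are hypothetical stand-ins for whatever assistant a developer actually uses; no particular vendor's API is implied.

```python
# A minimal sketch of a "vibe coding" loop. `client.ask(prompt) -> str`
# is a hypothetical stand-in for a real coding assistant's API.
import subprocess
import tempfile

def vibe_code(client, task: str, max_rounds: int = 5) -> str:
    """Ask the model for code, run it, and feed failures back until
    the generated self-tests pass or the round limit is hit."""
    prompt = f"Write a Python script that {task}. Include asserts that test it."
    for _ in range(max_rounds):
        code = client.ask(prompt)  # model drafts (or redrafts) the code
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        # Run the draft; a nonzero exit code means a failed assert or a crash.
        result = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:
            return code  # the "vibes" are good: tests pass, accept the draft
        # Feed the failure back verbatim and let the model try again.
        prompt = (
            f"This code failed:\n{code}\n"
            f"Error output:\n{result.stderr}\nPlease fix it."
        )
    raise RuntimeError("no passing version within the round limit")
```

Note what the loop never inspects: the design choices inside `code`. Acceptance is purely behavioral (did the tests pass?), which is precisely the accountability gap the paragraph above describes.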
The global election cycles of 2024–2025 put significant pressure on information ecosystems. Highly convincing deepfakes, synthetic news, fraudulent political ads, and automated persuasion tools spread widely, and AI-based scams rose with them. In July, for example, an unknown actor used Marco Rubio's voice and writing style to contact five senior US government officials (Gedeon, 2025). AI impersonations of pop stars reportedly scammed fans out of $5.3 billion for concert tickets and VIP experiences that did not exist (Dilts Marshall, 2025). These incidents sparked widespread calls for regulation and accelerated a decline in digital trust. Governments, platforms, and researchers adopted provenance metadata, watermarking, and digital-signature technologies to verify content (C2PA, 2024). However, the effectiveness of these protections varies by platform and situation, since watermarks can be stripped or altered. The World Economic Forum called for robust security protocols to combat deepfakes (Colman, 2025).
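Provenance schemes of the kind cited above work by binding signed metadata to content so that tampering becomes detectable. The sketch below illustrates that general idea using only Python's standard library; it is a toy scheme, not the C2PA specification, which relies on X.509 certificates and structured manifests rather than a shared secret.

```python
# Illustrative content-provenance check: an HMAC signature over a media
# file plus its metadata. Toy scheme only; real standards such as C2PA
# use certificate-based signatures and a structured manifest.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical publisher key

def sign_content(media: bytes, metadata: dict) -> str:
    payload = media + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_content(media: bytes, metadata: dict, signature: str) -> bool:
    expected = sign_content(media, metadata)
    return hmac.compare_digest(expected, signature)

media = b"...image bytes..."
meta = {"creator": "Example Newsroom", "tool": "camera", "ai_generated": False}
sig = sign_content(media, meta)

assert verify_content(media, meta, sig)             # untouched content passes
assert not verify_content(media + b"x", meta, sig)  # any edit breaks the check
```

The sketch also makes the paragraph's caveat visible: an adversary who strips the metadata and signature entirely defeats the check, which is why these protections depend on platforms treating unsigned content with suspicion.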
A significant digital trust challenge emerging around 2025 involves the intersection of children's rights and democratic integrity, as generative AI chatbots increasingly mediate social, educational, and political information for young users. The USTPC warned that chatbots interacting with minors can deploy manipulative tactics and spread misinformation at scale, posing broader democratic risks (ACM USTPC, 2025). UNICEF shared this concern, emphasizing that AI systems affecting children must be designed with explicit safeguards, transparency, and accountability to prevent exploitation and undue influence (ACM USTPC, 2025; UNICEF, 2024).
In 2025, debates over the ethics and legality of AI training data intensified, with more lawsuits over large-scale web scraping, unauthorized use of copyrighted materials, and biometric data collection. Reddit and the BBC both took legal action against Perplexity AI (McMahon, 2025; Reuters, 2025). Several governments moved to require greater transparency about the sources, composition, and legal basis of training datasets. Courts and regulators made uneven progress on whether training generative AI models on copyrighted works qualifies as fair use, and late-year disputes between major publishers and AI developers highlighted unresolved questions about who is entitled to compensation (Nawotka, 2025). Depending on how these cases resolve, the only lawful generative AI systems in the US may be those trained on public-domain or licensed works (Samuelson, 2024). Meanwhile, the EU and the UK moved toward obligations for developers to document training data sources and justify the inclusion of copyrighted or sensitive material (UK IPO, 2024), requirements expected to expand significantly during 2026–2027.
AI became more deeply integrated into critical infrastructure, yet the underlying systems remain fragile. An AWS service disruption on October 20 was a stark reminder: services that millions of people rely on went down, from payment platforms like PayPal to consumer technology such as smart beds (Newman, 2025). The outage highlighted the fragility of an ecosystem controlled by only a few actors (Gkritsi & Haeck, 2025).
Copilots and agentic assistants altered workflows in administrative work, accounting, journalism, law, and STEM research. Productivity rose rapidly but unevenly: some industries saw major gains, while others faced displacement risks or widening skill gaps. In March, for example, JPMorgan rolled out its own coding assistant for its software engineers, boosting productivity by 10–20 percent (Reuters, 2025). Meanwhile, the jobs market remained uncertain, with workers scrambling to stay employable, either working with AI or competing against it, amid over 200,000 layoffs in the global tech sector (Mzekandaba, 2026). The USTPC's recommendations on AI literacy, reskilling, and fair workforce development emphasized that increased national investment in education and training is vital for reducing disparities (ACM USTPC, 2025).
Google released a report on the energy use of its Gemini model, estimating that a single prompt emits 0.03 g of carbon dioxide, consumes 0.26 ml (about five drops) of water, and uses roughly as much energy as watching TV for nine seconds (Vahdat & Dean, 2025). These numbers scale up quickly, given that Gemini powered the 5.9 trillion Google searches conducted in 2025 (Kumar, 2025). Even so, this reportedly represents a 44-fold decrease in carbon footprint per prompt, thanks to efficiency gains at data centers. Data centers currently account for 4.4 percent of US energy demand, and projections suggest this share will triple by 2028 (O'Donnell & Crownhart, 2025). Other studies documented significant energy and water use in training and running large-scale models (Morrison et al., 2025), and policymakers and industry leaders responded by calling for sustainable AI practices, efficiency incentives, and broader environmental disclosure requirements (Conference Board, 2025).
Gloria Mendoza / The Environmental Impact of Data Centers in Vulnerable Ecosystems / Licensed under CC-BY 4.0
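A back-of-the-envelope calculation shows how quickly those per-prompt figures compound at search scale. The sketch below treats every one of the 5.9 trillion searches as a single Gemini prompt, which is a simplifying upper-bound assumption:

```python
# Back-of-the-envelope scaling of Google's per-prompt estimates
# (Vahdat & Dean, 2025) to 2025 search volume (Kumar, 2025).
# Upper-bound assumption: every search invokes Gemini exactly once.
PROMPTS = 5.9e12            # Google searches in 2025
CO2_G_PER_PROMPT = 0.03     # grams of CO2 per prompt
WATER_ML_PER_PROMPT = 0.26  # milliliters of water per prompt

co2_tonnes = PROMPTS * CO2_G_PER_PROMPT / 1e6      # grams -> metric tonnes
water_liters = PROMPTS * WATER_ML_PER_PROMPT / 1e3  # ml -> liters

print(f"{co2_tonnes:,.0f} tonnes of CO2")    # ~177,000 tonnes
print(f"{water_liters:,.0f} liters of water") # ~1.5 billion liters
```

Even at five drops of water and a small fraction of a gram of CO2 per prompt, the annual totals under this assumption reach roughly 177,000 tonnes of CO2 and about 1.5 billion liters of water.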
Despite advances in dataset auditing and fairness tools, algorithmic discrimination remained a significant challenge in 2025. Models used for hiring, credit, public benefits, and education continued to mirror historical inequalities (Buolamwini and Gebru, 2018; Barocas, Hardt, and Narayanan, 2019). The USTPC reaffirmed its long-standing position, calling for a pause on facial recognition deployments in high-risk settings where civil rights impacts are foreseeable (ACM USTPC, 2025). Policymakers increasingly adopted risk-based frameworks that combine socio-technical analysis with technical audits.
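As a concrete example of the technical-audit half of those frameworks, the sketch below computes a disparate-impact ratio on invented hiring data. The four-fifths threshold is a common rule of thumb from US employment guidance; real audits combine many such metrics with legal and socio-technical context.

```python
# A minimal disparate-impact check of the kind used in technical audits.
# All data here is invented for illustration.

# (group, hired) outcomes from a hypothetical screening model
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 30 + [("B", False)] * 70

def selection_rate(group: str) -> float:
    """Fraction of applicants in `group` that the model selects."""
    total = sum(1 for g, _ in outcomes if g == group)
    hired = sum(1 for g, h in outcomes if g == group and h)
    return hired / total

# Ratio of the lower selection rate to the higher one
ratio = selection_rate("B") / selection_rate("A")
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50

# The four-fifths rule of thumb flags ratios below 0.8 for review
if ratio < 0.8:
    print("Flag for socio-technical review")
```

The point of pairing such a metric with socio-technical analysis, as the frameworks above do, is that a single number can flag a disparity but cannot explain or justify it.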
If 2025 marked the year AI regulation became operational, 2026 could be when autonomy, sovereignty, and sustainability take center stage. Three pressures are likely to dominate: governing increasingly automated systems, managing economic and workforce disruption, and confronting the environmental and infrastructural limits of large-scale AI. As AI becomes deeply integrated into economic and social systems, collaboration among governments, research institutions, civil society, and organizations such as ACM USTPC and SIGAI will be essential to developing trustworthy and fair systems.
Economic concerns are increasingly prominent in public discourse and policymaking, with widespread changes taking place as AI displaces many white-collar tasks. Growing questions about whether expectations for AI have peaked or whether parts of the sector are experiencing a speculative bubble have raised the possibility of a market correction with broader economic consequences (Georgieva, 2025; International Monetary Fund, 2025). Such a shift could affect not only the AI technology sector itself but also investment patterns, labor markets, and innovation strategies across the wider economy.
The ethics landscape is also evolving. By 2026, “AI-generated” labels may give way to verifiable provenance signals that can be shared across platforms (Hancock and Bailenson, 2024). Regulators could strengthen enforceable duties around verification, rapid takedown, and auditing, especially as deepfakes increasingly affect high-stakes domains such as health, finance, and education. Agentic systems, automated decision-making tools, and practices such as AI-assisted “vibe coding” are likely to become more prominent as organizations move quickly from pilot projects to real-world deployment. Data from this rapid adoption may spur debates on accountability, security, and testing practices, particularly in high-risk environments where iterative human oversight is vital but inconsistently applied. International action on likeness and consent rights may accelerate, making unauthorized use of synthetic identities a distinct civil offense.
We might see 2026 become the year when restraint becomes a strategic necessity rather than just a moral choice. Over the past year, AI capabilities advanced faster than institutions, labor markets, and governance frameworks could adapt, exposing a growing gap between reassuring narratives and real-world results. While many providers emphasized responsibility through process and compliance, only a few leaders publicly acknowledged systemic risks and called for explicit limits on deployment. Among the most notable was Dario Amodei, whose call for clear rules and responsible scaling framed ethics as a core engineering constraint rather than an after-the-fact safeguard (Amodei, 2024). His contribution shaped subsequent discourse.
Major questions include when and where generative AI should be used, who benefits from its use, and under which conditions it should be restricted. A widely articulated perspective emphasizes that GenAI should assist, not replace, human judgment, with accountability firmly placed on institutions rather than automated systems. Concerns remain that generative AI can reinforce biases, exacerbate epistemic injustice, and centralize power, underscoring the need for transparency, contestability, and data rights. Ongoing debates about training data reveal unresolved issues related to consent, labor, and compensation. Furthermore, environmental impacts and dual-use risks such as misinformation, surveillance, and security threats have become key ethical challenges (Radanliev, 2025; Janssen, 2025).
Societal responses to these questions are becoming more visible. Communities are expressing a “digital backlash” against algorithmic technologies, as seen in protests over data center projects, student-led petitions, app deletions, industry open letters, and academic position papers (Guest et al., 2025). Educators, technologists, policymakers, artists, labor unions, and community groups increasingly oppose AI systems perceived as harmful, exploitative, environmentally damaging, or socially unjust. Supporting resistance, refusal, reclamation, and reimagining AI remains an essential ethical goal, even as some “responsible” AI narratives suggest opposition is futile (Duarte et al., 2025).
The coming year will test whether global AI governance can keep pace with innovation while protecting democratic values, social trust, and human well-being. Ultimately, 2026 should reveal whether adherence to emerging frontier and general-purpose AI standards effectively influences real-world behavior or merely becomes a box-checking exercise (Clark, 2024). We will also observe whether trust frameworks unify globally or fragment into regional systems with incompatible rules and technologies. Addressing these challenges will require active participation from the AI community through groups like ACM USTPC and SIGAI, as well as engagement from the broader research ecosystem in venues such as AI and Ethics and AI Matters. Together, these efforts are essential for guiding responsible innovation and ensuring AI advancements benefit society.
ACM USTPC (Association for Computing Machinery US Technology Policy Committee). USTPC Policy Products, 2025.
Amodei, Dario. AI Is Getting More Powerful. We Need Clear Rules, 2024.
Barocas, Solon, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. 2019.
Buolamwini, Joy, and Timnit Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. 2018. Proceedings of Machine Learning Research 81: 77–91.
C2PA. Coalition for Content Provenance and Authenticity. 2024.
CAC. Interim Measures for the Management of Generative Artificial Intelligence Services. 2023.
CISA. Widespread IT Outage Due to CrowdStrike Update, 2024.
Clark, Jack, Gillian Hadfield, Percy Liang, et al. Frontier AI Regulation: Managing Emerging Capabilities and Systemic Risks. 2024.
Colman, B. Why detecting dangerous AI is key to keeping trust alive in the deepfake era. 2025.
The Conference Board. AI and Sustainability 2025: Corporate Perspectives on Environmental Impact. 2025.
Dilts Marshall, E. Taylor Swift, Sabrina Carpenter impersonators scam fans out of $5.3 billion in 2025. 2025.
DSIT and Kendall, L. New blueprint for AI regulation could speed up planning approvals, slash NHS waiting times, and drive growth and public trust. 2025.
Duarte, Tania, Ismael Kherroubi Garcia, Anshur, et al. Resisting, Refusing, Reclaiming, Reimagining: Charting Challenges to Narratives of AI Inevitability. 2025.
Duarte, Tania, Kathryn Conrad, Ismael Kherroubi Garcia. The Emergence of Critical AI Literacy. 2026.
Epic Systems Corp. Epic Building Out Agentic AI, Broadens Focus Beyond EHRs. 2025.
European Commission. Artificial Intelligence Act: Regulatory Framework for Trustworthy AI. 2024.
Gedeon, J. AI scammer posing as Marco Rubio targets officials in growing threat. 2025.
Georgieva, Kristalina. IMF and Bank of England Warn AI-Led Market Bubble Could Burst. 2025.
Gkritsi, E., and Haeck, P. Amazon cloud outage fuels call for Europe to limit reliance on US tech. 2025.
Guest, Olivia, et al. Against the Uncritical Adoption of ‘AI’ Technologies in Academia. 2025.
Hancock, Jeffrey T., and Jeremy N. Bailenson. The Future of Digital Trust: Deepfakes, Authenticity, and the Crisis of Verifiable Media. 2024. Journal of Online Trust and Safety 3 (1): 1–22.
International Monetary Fund. World Economic Outlook: Navigating Global Divergence. 2025.
Janssen, M. Responsible Governance of Generative AI: Conceptualizing AI as a Complex Adaptive System. 2025. Policy & Society 44 (1): 38–5
Kumar, N. How many Google searches per day. 2025.
MacIntyre, John, and Larry R. Medsker, eds. AI and Ethics (journal). Springer Nature, 2020.
McMahon, L. BBC threatens AI firm with legal action over unauthorised content use. 2025.
Medsker, Larry R., ed. AI and Ethics Handbook. Forthcoming, 2026.
METI (Ministry of Economy, Trade and Industry, Japan). G7 Hiroshima Process: Guiding Principles for Advanced AI. 2023.
Morrison, Jacob, Clara Na, Jared Fernandez, Tim Dettmers, Emma Strubell. Holistically Evaluating the Environmental Impact of Creating Language Models. 2025.
Mzekandaba, S. 2025 global tech sector layoffs surpass 200k. 2026.
Nawotka, E. New lawsuit against AI companies seeking more money. 2025.
Newman, Lily Hay. The Long Tail of the AWS Outage. 2025.
NIST (National Institute of Standards and Technology). AI Risk Management Framework (AI RMF 1.0). 2023.
O’Donnell, J., and Crownhart, C. We did the math on AI’s energy footprint. Here’s the story you haven’t heard. 2025.
OECD. OECD Framework for the Classification of AI Systems. 2023.
Patterson, David, et al. The Carbon Footprint of Large AI Models. 2023. Communications of the ACM 66 (10): 58–71.
Radanliev, P., et al. AI Ethics: Integrating Transparency, Fairness, and Privacy. 2025. Journal of Information, Communication and Ethics in Society 23 (2): 1–18.
Reuters. JPMorgan Engineers’ Efficiency Jumps as Much as 20% from Using Coding Assistant. 2025.
Reuters. Reddit sues Perplexity for scraping data to train AI system. 2025.
Samuelson, Pamela. Generative AI Meets Copyright Law. 2024. Communications of the ACM 67 (11): 20–23.
Squire Patton Boggs. Key insights on President Trump’s new AI Executive Order and policy & regulatory implications. 2025.
Stanford CRFM. Holistic Evaluation of Language Models. 2025.
UK Intellectual Property Office. Guidance on Copyright and AI Model Training. 2024.
UNICEF. Policy Guidance on AI for Children. 2024.
Vahdat, A. and Dean, J. How much energy does Google’s AI use? We did the math. 2025.
White House. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. 2023.
Wikipedia contributors. Delta Air Lines v. CrowdStrike. 2024.
Williams, Hannah Murphy. JPMorgan Tests AI Code-Writing Agents to Automate Software Development Tasks. 2024.