AIhub.org
 

How to avoid hype when promoting your AI research


25 October 2021




Hype around AI sets inflated expectations of the technology, drives unnecessary fears, and detracts from the meaningful discussions that need to happen now about the technology actually being developed today.

The AIhub trustees have compiled a handy guide to help you avoid hype when communicating your research. Here are their 10 tips:

1. Be specific about the science and achievements

What problem is your research trying to solve? Provide context.

2. Don’t make exaggerated claims

Avoid unnecessary superlatives such as “general”, “best”, or “first” unless you can provide supporting context.

3. Be clear about the limitations of your experiments

Did your demonstration require external instruments that made the real world “more digital” (for example, external sensors/motion capture)?

4. Explain how things work

What data was used, what type of algorithms, what hardware? Be upfront about the computational cost.

5. Has your research been validated by the community?

Does the community support your findings, through peer-reviewed research or other means?

6. Make your headline catchy but accurate

Prioritise scientific accuracy.

7. Keep any debates scientific

Don’t bring personalities/personal attacks into the debate.

8. Don’t anthropomorphize

Avoid anthropomorphism unless the subject of the research is people.

9. Use relevant images

Use images from your research to illustrate your news. Avoid generic or stereotypical AI images (such as imaginary robots from science fiction).

10. Be open and transparent

Disclose conflicts of interest and/or funding, especially if industry or personal interests are involved.

You can find all of the guidelines in this PDF document.





AIhub is dedicated to free high-quality information about AI.




©2026.02 - Association for the Understanding of Artificial Intelligence