 

How to avoid hype when promoting your AI research


25 October 2021




Hype around AI creates inflated expectations of the technology, drives unnecessary fears, and detracts from the meaningful discussions that need to happen now about the technology actually being developed today.

The AIhub trustees have compiled a handy guide to help you avoid hype when communicating your research. Here are their 10 tips:

1. Be specific about the science and achievements

What problem is your research trying to solve? Provide context.

2. Don’t make exaggerated claims

Try to avoid unnecessary superlatives such as “general”, “best”, or “first” unless you can provide supporting context.

3. Be clear about the limitations of your experiments

Did your demonstration require external instruments that made the real world “more digital” (for example, external sensors/motion capture)?

4. Explain how things work

What data was used, what types of algorithms, and what hardware? Be upfront about the computational cost.

5. Has your research been validated by the community?

Does the community support your findings, through peer-reviewed research or other means?

6. Make your headline catchy but accurate

Prioritise scientific accuracy.

7. Keep any debates scientific

Don’t bring personalities or personal attacks into the debate.

8. Don’t anthropomorphise

Avoid anthropomorphism unless the subject of the research is people.

9. Use relevant images

Use images from your research to illustrate your news. Avoid generic or stereotypical AI images (such as imaginary robots from science fiction).

10. Be open and transparent

Disclose conflicts of interest and/or funding, especially if industry or personal interests are involved.

You can find all of the guidelines in this PDF document.


