AIhub.org
How to avoid hype when promoting your AI research


25 October 2021




Hype around AI sets inflated expectations, drives unnecessary fears, and detracts from the meaningful discussions that need to happen now about the technology actually being developed today.

The AIhub trustees have compiled a handy guide to help you avoid hype when communicating your research. Here are their 10 tips:

1. Be specific about the science and achievements

What problem is your research trying to solve? Provide context.

2. Don’t make exaggerated claims

Try to avoid unnecessary superlatives such as “general”, “best”, or “first” unless you can provide supporting context.

3. Be clear about the limitations of your experiments

Did your demonstration require external instruments that made the real world “more digital” (for example, external sensors/motion capture)?

4. Explain how things work

What data was used, what type of algorithms, what hardware? Be upfront about the computational cost.

5. Has your research been validated by the community?

Does the community support your findings, through peer-reviewed research or other means?

6. Make your headline catchy but accurate

Prioritise scientific accuracy.

7. Keep any debates scientific

Don’t bring personalities/personal attacks into the debate.

8. Don’t anthropomorphize

Avoid anthropomorphism unless the subject of the research is people.

9. Use relevant images

Use images from your research to illustrate your news. Avoid generic or stereotypical AI images (such as imaginary robots from science fiction).

10. Be open and transparent

Disclose conflicts of interest and/or funding, especially if industry or personal interests are involved.

You can find all of the guidelines in this PDF document.





AIhub is dedicated to free high-quality information about AI.




            AIhub is supported by:






 







 












©2025 - Association for the Understanding of Artificial Intelligence