AIhub.org

How to avoid hype when promoting your AI research


25 October 2021




Hype around AI sets inflated expectations of the technology, drives unnecessary fears, and detracts from the meaningful discussions that need to happen now about the technology actually being developed today.

The AIhub trustees have compiled a handy guide to help you avoid hype when communicating your research. Here are their 10 tips:

1. Be specific about the science and achievements

What problem is your research trying to solve? Provide context.

2. Don’t make exaggerated claims

Try to avoid unnecessary superlatives such as “general”, “best”, or “first” unless you can provide supporting context.

3. Be clear about the limitations of your experiments

Did your demonstration require external instruments that made the real world “more digital” (for example, external sensors/motion capture)?

4. Explain how things work

What data was used, what type of algorithms, what hardware? Be upfront about the computational cost.

5. Has your research been validated by the community?

Does the community support your findings, through peer-reviewed research or other means?

6. Make your headline catchy but accurate

Prioritise scientific accuracy.

7. Keep any debates scientific

Don’t bring personalities/personal attacks into the debate.

8. Don’t anthropomorphise

Avoid anthropomorphism unless the subject of the research is people.

9. Use relevant images

Use images from your research to illustrate your news. Avoid generic or stereotypical AI images (such as imaginary robots from science fiction).

10. Be open and transparent

Disclose conflicts of interest and/or funding especially if industry or personal interests are involved.

You can find all of the guidelines in this PDF document.





AIhub is dedicated to free high-quality information about AI.





©2026.02 - Association for the Understanding of Artificial Intelligence