AIhub – leading organisations form new charity to advance public understanding of artificial intelligence


15 April 2021




Artificial Intelligence (AI) promises to impact our everyday lives. Empowering everyone to learn about AI, and how it’s used, will be instrumental to making sure the benefits are shared more broadly across society. AIhub.org aims to enable everyone to know about AI, learn about the latest research, and get involved. Our content is free, high-quality, fair and impartial. In a bid to reduce hype and report accurately, all information is produced by those working directly in the field, without filter or intermediary. Our contributors are AI experts from across the globe representing the many subdisciplines that comprise the field.

Our journey began five years ago when executive trustees Sabine Hauert and Tom Dietterich met at AAAI and began discussing how to improve the public understanding of artificial intelligence. From that conversation, they set out to build AIhub.org to connect the AI community to the public. One year on from our official launch, and with the support of leading international AI organisations (AAAI, ACM SIGAI, AIJ/IJCAI, CLAIRE, EurAI/AICOMM, ICML, NeurIPS, and RoboCup), we are pleased that “The Association for the Understanding of Artificial Intelligence”, which manages AIhub.org, has recently been awarded charity status in the UK.

According to Tom Dietterich, executive Trustee, “Much of the information about AI that is presented to the public is sensationalized, either because it reflects story lines from science fiction or because research institutions and scientists are trying to gain attention. AIhub.org provides dispassionate, fair coverage of the advances – and the shortcomings – of emerging AI technologies.” Dietterich adds that “Another advantage of AIhub is that it covers the full range of AI methods, not just the latest neural network results.”

The aims of the charity are to “Advance the education of the public in the field of artificial intelligence, including the science, applications, and ethics of artificial intelligence, and by the free dissemination of authoritative, high-quality research and useful, accessible information about such research”.

Sabine Hauert, who is also an executive Trustee, points out: “I’m excited we’ve gained charity status; it shows that we’re aiming to do good and really reflects our educational purpose,” adding, “In all my conversations with the public, there is a clear thirst to learn about technologies such as AI, but it’s not always obvious where to go for useful, truthful information that is accessible. I’m hoping AIhub can serve this role.”

In 2020, we published over 230 posts, written by over 100 contributors, from 20 countries across the world. At the end of last year, we launched a focus series covering the use of AI to help meet the UN sustainable development goals. Each month we tackle a different goal, and so far we’ve published articles on good health and well-being, climate action and quality education.

Managing editor Lucy Smith notes: “It’s been great to cover the field of AI over the past year, through research summaries, interviews, podcasts, and conference reports. In my role, I get to see a vast amount of research spanning the breadth of AI. There is so much going on in the field at the moment, as evidenced by our monthly digests which give a flavour of the latest happenings in the AI world.”

Appointed to represent each supporting organisation, the trustees who have guided us to this point are all leading experts in their respective fields: Kamalika Chaudhuri (USA), Sanmay Das (USA), Tom Dietterich (USA), Stephen José Hanson (USA), Sabine Hauert (UK), Holger Hoos (NL), Michael Littman (USA), Michela Milano (IT), Carles Sierra (ES), Oskar von Stryk (DE), Zhi-Hua Zhou (CN).

Trustee Holger Hoos, who represents CLAIRE, said: “The public image of AI is quite heavily influenced by hype, exaggeration and science fiction. Especially as AI technologies are more broadly deployed, it’s very important to make available unbiased, high-quality information on AI – what it is (and isn’t), what it can and can’t do, what it should and shouldn’t do. That’s precisely what AIhub is about, and it’s great to have so many of the world’s leading AI experts involved in it. CLAIRE is therefore very happy to work closely with AIhub on its mission for AI for Good and AI for All.”

Check here for ways in which you can contribute stories, podcasts, videos, tutorials, and more, or watch this short video about us.

Click here to sign up to our weekly newsletter.

You can follow us on Twitter, Facebook and LinkedIn.



