GPT-3 in tweets


by
26 August 2020



AIhub | Tweets round-up
Since OpenAI released GPT-3, you have probably come across examples of impressive and/or problematic content that people have used the model to generate. Here we summarise the outputs of GPT-3 as seen through the eyes of the Twittersphere.

GPT-3 can generate impressive examples, such as these.

However, caution is needed when using the model. Although it can produce good results, it is important to be aware of its limitations.

GPT-3 has been shown to replicate offensive and harmful phrases and concepts, as the examples in the following tweets demonstrate.

This harmful concept generation is not limited to English.

It is important to note that GPT-2 suffered from similar problems, as highlighted in this EMNLP paper by Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng.

GPT-3 should indeed be used with caution.

Nedjma Ousidhoum is a postdoc at the University of Cambridge.