AIhub.org

GPT-3 in tweets


by Nedjma Ousidhoum
26 August 2020




AIhub | Tweets round-up
Since OpenAI released GPT-3, you have probably come across examples of impressive and/or problematic content that people have used the model to generate. Here we summarise the outputs of GPT-3 as seen through the eyes of the Twitter-sphere.

GPT-3 is able to generate impressive examples, such as those shown in the tweets below.

However, caution is needed when using the model. Although it can produce good results, it is important to be aware of the limitations of such a system.

GPT-3 has been shown to replicate offensive and harmful phrases and concepts, like the examples presented in the following tweets.

This harmful concept generation is not limited to English.

It is important to note that GPT-2 had similar problems. These were highlighted in the EMNLP 2019 paper "The Woman Worked as a Babysitter: On Biases in Language Generation" by Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng.

GPT-3 should indeed be used with caution.

 




Nedjma Ousidhoum is a postdoc at the University of Cambridge.



