GPT-3 in tweets

26 August 2020


AIhub | Tweets round-up
Since OpenAI released GPT-3, you have probably come across examples of impressive and/or problematic content that people have used the model to generate. Here we summarise the outputs of GPT-3 as seen through the eyes of the Twitter-sphere.

GPT-3 is able to generate impressive examples, such as these.

However, caution is needed when using the model: although it can produce good results, it is important to be aware of its limitations.

GPT-3 has been shown to replicate offensive and harmful phrases and concepts, like the examples presented in the following tweets.

This harmful concept generation is not limited to English.

It is important to note that GPT-2 had similar problems. This EMNLP 2019 paper by Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng pointed out the issue.

GPT-3 should indeed be used with caution.


Nedjma Ousidhoum is a PhD candidate in NLP at HKUST.



©2021 - Association for the Understanding of Artificial Intelligence