 

GPT-3 in tweets

by Nedjma Ousidhoum
26 August 2020




Since OpenAI released GPT-3, you have probably come across examples of impressive and/or problematic content that people have used the model to generate. Here we summarise GPT-3's outputs as seen through the eyes of the Twittersphere.

GPT-3 can generate impressive outputs, such as the examples shared in these tweets.

However, caution is needed when using the model: although it can produce good results, it is important to be aware of the limitations of such a system.

GPT-3 has been shown to replicate offensive and harmful phrases and concepts, like the examples presented in the following tweets.

This harmful concept generation is not limited to English.

It is important to note that GPT-2 had similar problems, as pointed out in the EMNLP 2019 paper "The Woman Worked as a Babysitter: On Biases in Language Generation" by Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng.

GPT-3 should indeed be used with caution.

 




Nedjma Ousidhoum is a PhD candidate in NLP at HKUST.






©2021 - Association for the Understanding of Artificial Intelligence