AIhub.org
 

GPT-3 in tweets


26 August 2020




Since OpenAI released GPT-3, you have probably come across examples of impressive and/or problematic content that people have used the model to generate. Here we summarise GPT-3's outputs as seen through the eyes of the Twittersphere.

GPT-3 is able to generate impressive examples, such as these.

However, caution is needed when using the model. Although it can produce good results, it is important to be aware of the limitations of such a system.

GPT-3 has been shown to replicate offensive and harmful phrases and concepts, like the examples presented in the following tweets.

This harmful concept generation is not limited to English.

It is important to note that GPT-2 had similar problems. This EMNLP 2019 paper, "The Woman Worked as a Babysitter: On Biases in Language Generation", by Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng pointed out the issue.

GPT-3 should indeed be used with caution.

 




Nedjma Ousidhoum is a postdoc at the University of Cambridge.




©2025.05 - Association for the Understanding of Artificial Intelligence