AIhub.org
AlphaFold advances protein folding research


03 December 2020



Protein PCMT1 PDB, by Emw CC BY-SA 3.0, via Wikimedia Commons.

The grand challenge of protein folding hit the news this week with the announcement that the latest version of DeepMind’s AlphaFold system had predicted protein structures with very high accuracy in CASP’s 2020 experiment (CASP14).

Proteins are large, complex molecules, and the shape of a particular protein is closely linked to the function it performs. The ability to accurately predict protein structures would enable scientists to gain a greater understanding of how they work and what they do.

Protein folding is explained in this video from DeepMind:

How AlphaFold works

This new version of AlphaFold builds on the initial system, which you can read about in this paper. The associated code is available here. In that first version, the team trained a neural network to make accurate predictions of the distances between pairs of amino acid residues (the beads in the protein chain), which conveyed information about the structure. Using this information, they constructed a potential of mean force that could accurately describe the shape of a protein. The resulting potential could then be optimized by simple gradient descent.
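To make the idea concrete, here is a deliberately simplified toy version of that last step. AlphaFold’s real potential was built from predicted distance *distributions* plus torsion and physics-based terms; the sketch below instead uses a plain least-squares potential over a single target distance per residue pair, and the function names (`distance_potential`, `fold`) are illustrative, not from DeepMind’s code. It only shows the general pattern: define a differentiable potential whose minimum matches the predicted distances, then descend its gradient to recover 3D coordinates.

```python
import numpy as np

def distance_potential(x, target):
    """Toy potential: sum of squared deviations between the current
    pairwise distances of coordinates x (n, 3) and the predicted
    target distance matrix (n, n)."""
    diff = x[:, None, :] - x[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(x), k=1)  # each pair counted once
    return float(np.sum((dist[iu] - target[iu]) ** 2))

def potential_grad(x, target):
    """Analytic gradient of distance_potential with respect to x."""
    diff = x[:, None, :] - x[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, 1.0)          # avoid divide-by-zero on i == j
    coef = 2.0 * (dist - target) / dist  # d/dx of (|xi-xj| - tij)^2
    np.fill_diagonal(coef, 0.0)
    return (coef[:, :, None] * diff).sum(axis=1)

def fold(target, n_steps=2000, lr=0.01, seed=0):
    """Start from random coordinates and run plain gradient descent."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(target.shape[0], 3))
    for _ in range(n_steps):
        x -= lr * potential_grad(x, target)
    return x
```

Because the potential is a smooth function of the coordinates, off-the-shelf gradient descent is enough to drive it downhill; the real system’s potential was likewise simple enough to optimize this way, without a learned structure module.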

In version two, the team implemented new deep learning architectures. They created an attention-based neural network system, trained end-to-end, that attempts to interpret the structure of the spatial graph that represents the protein, while reasoning over the implicit graph that it’s building. The system uses evolutionarily related sequences, multiple sequence alignment (MSA), and a representation of amino acid residue pairs to refine this graph.

The system was trained on around 170,000 protein structures from the publicly available Protein Data Bank, together with large databases of protein sequences of unknown structure.

About CASP

Critical Assessment of protein Structure Prediction (CASP) is a community-wide experiment for protein structure prediction that has taken place every two years since 1994. CASP provides an independent mechanism for the assessment of methods of protein structure modelling.

For the 2020 experiment, the organisers posted sequences of unknown protein structures for modelling from May to August this year. Protein models from various research groups around the world were then collected and evaluated as the experimental coordinates became available.

The main metric used by CASP to measure the accuracy of predictions is the Global Distance Test (GDT), which ranges from 0 to 100. GDT can be thought of, approximately, as the percentage of amino acid residues within a threshold distance of the correct position. This year, the AlphaFold system achieved a median score of 92.4 GDT overall across all targets. For the very hardest protein targets, AlphaFold achieved a median score of 87.0 GDT. This is a significant increase in accuracy compared to previous years: the best median GDT in 2018, achieved by the first version of AlphaFold, was under 60, and in 2016 it was around 40.
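The "percentage within a threshold" intuition can be sketched in a few lines. The standard GDT_TS score averages that percentage over four distance cutoffs (1, 2, 4, and 8 Å); the real calculation also searches over superpositions of the two structures, which this hedged sketch skips by assuming the predicted and reference coordinates are already optimally superposed. The function name `gdt_ts` is ours, not CASP's code.

```python
import numpy as np

def gdt_ts(pred, ref, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """Approximate GDT_TS: for each cutoff, the percentage of residues
    whose predicted position lies within that cutoff of the reference
    position; the final score is the mean over the four cutoffs.
    pred and ref are (n, 3) coordinate arrays, assumed superposed."""
    d = np.linalg.norm(pred - ref, axis=-1)   # per-residue deviation, in Å
    return float(np.mean([(d <= c).mean() * 100.0 for c in cutoffs]))
```

For example, a prediction in which every residue sits exactly 3 Å from its true position passes only the 4 Å and 8 Å cutoffs, giving a score of 50; a perfect prediction scores 100.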

You can find the abstracts from all of the participating groups from 2020 here.

Find out more:

DeepMind’s blog post: “AlphaFold: a solution to a 50-year-old grand challenge in biology”.

Nature paper published on the first version of AlphaFold: “Improved protein structure prediction using potentials from deep learning”.




Lucy Smith is Senior Managing Editor for AIhub.
