Artificial intelligence in 2020: the AIhub roundup


by Lucy Smith
27 December 2020



As 2020 draws to a close we look back on some of the notable research developments, awards, conferences and policy in the world of artificial intelligence.

Research developments

In February it was reported that MIT researchers had used a machine-learning algorithm to identify a powerful new antibiotic compound. In laboratory tests, the drug, called halicin, killed many disease-causing bacteria, including some strains that had been resistant to all existing antibiotics.

Progress in the field of AI in healthcare continued apace during 2020. Much of this work is about providing clinicians with extra tools in their armoury, and involving healthcare workers in the development of these systems from the outset is key. Examples of research in this space include breast cancer screening, deep phenotyping of Parkinson’s disease, and brain tumour diagnosis.

The COVID-19 outbreak led many researchers to shift the focus of their research, with the machine learning community coming together to launch numerous initiatives. A recent paper by Mihaela van der Schaar et al. details how artificial intelligence and machine learning can help healthcare systems respond to COVID-19.

One of the bigger AI stories of the year was the release of GPT-3, OpenAI’s latest language model. The main difference between this version and previous language models is its sheer size: the full version of GPT-3 has a whopping 175 billion parameters, ten times more than any previous non-sparse language model. The authors bagged themselves one of the NeurIPS 2020 best paper awards for their work.

GPT-3 has certainly divided opinion. On one hand, it can produce some impressive results; as well as text generation, there are examples of it being used for tasks such as creating webpage layouts. On the other hand, the vast amount of power required to train it is a serious environmental concern, and it has been shown to spit out a lot of worrying text.
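To give a flavour of the few-shot prompting idea behind GPT-3, here is a minimal sketch in Python. GPT-3 itself is only accessible through OpenAI’s API, so the much smaller, openly available GPT-2 model (loaded via the Hugging Face transformers library) stands in here; the translation prompt is purely illustrative, and GPT-2 will not come close to GPT-3’s few-shot performance.

```python
# Minimal sketch of few-shot prompting with a causal language model.
# GPT-2 stands in for GPT-3 here; the prompt and model choice are
# illustrative assumptions, not the setup used in the GPT-3 paper.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# "Few-shot" means the task is specified entirely in the prompt, via a
# handful of worked examples, with no gradient updates to the model.
prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese => fromage\n"
    "plush giraffe =>"
)

result = generator(prompt, max_new_tokens=10, do_sample=False)
print(result[0]["generated_text"])
```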

Natural language processing is indeed an area that saw intense research effort during 2020. Two of the most-starred repositories on GitHub were Transformers (building on the architecture introduced in 2017) and BERT (released in 2018), with researchers adapting these pretrained models to their individual projects.
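As a rough illustration of how researchers adapt these pretrained models, the sketch below loads a BERT checkpoint with a fresh classification head using the Hugging Face transformers library. The checkpoint name, two-class setup and toy sentences are assumptions for illustration; a real project would fine-tune the model on its own labelled data.

```python
# Minimal sketch of adapting a pretrained BERT model to a downstream
# classification task. The checkpoint name, label count and example
# sentences are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # adds a freshly initialised classification head
)

# Tokenise a toy batch and run a forward pass; in practice this would sit
# inside a fine-tuning loop over the project's labelled data.
inputs = tokenizer(
    ["AlphaFold predicted protein structures with high accuracy.",
     "The conference moved to a virtual format this year."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([2, 2]): one score per class, per sentence
```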

In November, an exciting development in the study of protein folding was announced. DeepMind’s AlphaFold system had predicted protein structures with very high accuracy in CASP’s 2020 experiment – the recognised benchmark for this problem. Proteins are large, complex molecules, and the shape of a particular protein is closely linked to the function it performs. The ability to accurately predict protein structures is a significant development as it has the potential to aid in drug design, or in finding enzymes that break down industrial waste, for example.

Awards

2020 saw the launch of the AAAI Squirrel AI Award, a $1 million prize given to honour individuals whose work in the field has had a transformative impact on society. The recipient of the first award was Regina Barzilay, for her work using machine learning to develop antibiotics and other drugs, and to detect and diagnose breast cancer at an early stage.

Here are some of the best paper award winners from conferences throughout the year.
Association for the Advancement of Artificial Intelligence (AAAI)
WINOGRANDE: An Adversarial Winograd Schema Challenge at Scale, Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

A Distributed Multi-Sensor Machine Learning Approach to Earthquake Early Warning, Kévin Fauvel, Daniel Balouek-Thomert, Diego Melgar, Pedro Silva, Anthony Simonet, Gabriel Antoniu, Alexandru Costan, Véronique Masson, Manish Parashar, Ivan Rodero, Alexandre Termier

International Conference on Machine Learning (ICML)
On Learning Sets of Symmetric Elements, Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya
You can read an interview with Haggai here.

Tuning-free Plug-and-Play Proximal Algorithm for Inverse Imaging Problems, Kaixuan Wei, Angelica I Aviles-Rivero, Jingwei Liang, Ying Fu, Carola-Bibiane Schönlieb, Hua Huang

Neural Information Processing Systems (NeurIPS)
No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium, Andrea Celli, Alberto Marchesi, Gabriele Farina, Nicola Gatti

Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nyström Method, Michal Derezinski, Rajiv Khanna, Michael W. Mahoney

Language Models are Few-Shot Learners, Tom B. Brown et al.

ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
Fairness and Utilization in Allocating Resources with Uncertain Demand, Kate Donahue, Jon Kleinberg

What does it mean to ‘solve’ the problem of discrimination in hiring?, Javier Sanchez-Monedero, Lina Dencik, Lilian Edwards

Computer Vision and Pattern Recognition (CVPR)
Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild, Shangzhe Wu, Christian Rupprecht, Andrea Vedaldi

Empirical Methods in Natural Language Processing (EMNLP)
Digital Voicing of Silent Speech, David Gaddy, Dan Klein

Policy

There were notable developments in policy and regulation, with many nations giving serious thought to AI strategy, funding and regulation, often as part of a larger digital and data-driven framework.

In early January 2020, the United States Office of Science and Technology Policy released draft guidance on the regulation of AI applications, which it proposed federal agencies adhere to when drawing up new AI regulations for the private sector. This was followed, in February, by the American AI Initiative annual report. In a boost to research, funding for seven new AI research institutes was announced in August.

In Europe, the EU released a white paper on AI (entitled On Artificial Intelligence – A European approach to excellence and trust). This document forms part of the EU’s digital strategy, which aims to make the AI and data transformation work for people and businesses, while helping to achieve the target of a climate-neutral Europe by 2050. The European Commission also considered the specific case of autonomous vehicles – they published this report in September. In 2020, the EU funded four ICT-48 networks of AI excellence, called AI4Media, ELISE, HumanE-AI-Net, and TAILOR. These complement other EU efforts including CLAIRE, AI4EU, ELLIS and EurAI, as well as two thematically related public-private partnerships (BDVA and euRobotics).

In the summer, Amazon, Microsoft and IBM all announced that they would (for the time being) stop selling facial recognition technology to police forces. The Gender Shades project, and organisations campaigning for equitable and accountable AI systems, such as the Algorithmic Justice League, were instrumental in forcing this rethink from tech companies. You can read a primer on facial recognition technologies, by Joy Buolamwini, Vicente Ordóñez, Jamie Morgenstern, and Erik Learned-Miller, here.

Conferences – the year we went virtual

As the pandemic spread, conferences were forced to rethink their format, heading into the virtual world. Although it is hard to recreate certain aspects of a physical event – such as serendipitous meetings at poster sessions – virtual events have many benefits, and perhaps 2020 will prove to be the year that changed the nature of the conference forever. It is likely that, even when physical events can return, many conferences will retain a significant virtual element.




Lucy Smith is Senior Managing Editor for AIhub.