It’s been another interesting year in the world of artificial intelligence. We’ve seen large language models grow even larger, conferences return to physical events, a raft of new policy developments, and machine learning techniques applied across the arts. Buckle up and join us for the ride as we review the year just gone.
Research into both fundamental and applied aspects of artificial intelligence and machine learning continues apace. Here are just some of the interesting research developments that have caught our eye over the course of 2022:
Yue Ma and colleagues used machine-learning techniques to identify antimicrobial peptides encoded by the genome sequences of microbes in the human gut.
Lily Xu’s research focuses on advancing AI methods in machine learning and game theory to address environmental challenges, particularly wildlife conservation through poaching prevention. See our interview with Lily here.
Christopher Franz and colleagues applied an algorithm for solving two-player games to the problem of synthesis planning of new target molecules.
Rose Nakasi and her team have developed a machine-learning method to detect malaria parasites in blood samples. They are working on transferring their method to other microscopically diagnosable diseases.
Thom S. Badings and colleagues proposed a method to compute safe controllers for autonomous systems in safety-critical settings under unknown stochastic noise.
Paula Arguello, Jhon Lopez, Carlos Hinojosa and Henry Arguello have addressed the problem of privacy preservation in computer vision and image processing by jointly designing the camera lens and the machine learning algorithm.
2022 was the year that a number of conferences returned to an in-person format, with most also providing a virtual element. For many PhD students, this year provided their first experience of a physical meeting. Our AIhub trustees were on hand to provide some useful hints and tips for getting the most out of a conference.
One of the events that particularly benefitted from the return to a physical edition was RoboCup, with the teams once again able to experience the excitement of seeing their robots battle it out on the football pitch (or in one of the many other leagues that form RoboCup). We spoke to Jasper Güldenstein about the Humanoid League competition this year.
In July, we were lucky enough to be able to travel to IJCAI/ECAI, where we gave a tutorial on science communication. You can catch all of our coverage of the conference here. Also, be sure to check out our coverage from the events we attended virtually: AAAI2022, ICML2022, and ICLR2022.
A notable launch this year came in the form of Lanfrica, an online resource centre that catalogues, archives and links African language resources. We caught up with the team behind Lanfrica (Chris Emezue, Handel Emezue and Bonaventure Dossou) to find out more about the project, what inspired them to begin, and the potential that Lanfrica offers the AI community and beyond. The team also run a series of talks which provide a platform for anyone to share their efforts in natural language processing, with a particular focus on low-resource languages.
In our 2021 round-up, we reported the launch of the Distributed AI Research Institute (DAIR), founded by Timnit Gebru. In early December this year, the Institute held a celebratory anniversary event, where they shared their work on: quantifying wage theft through surveillance, analysing the impacts of spatial apartheid, developing language technology for Ge’ez-based languages, and re-imagining our technological futures. You can watch the talks here.
It was a bumper year for AI policy developments. Starting in North America, the Canadian government tabled a draft law on artificial intelligence. Meanwhile, the US government released a blueprint for an AI bill of rights.
On 1 March, new regulations for AI came into force in China. The rules cover algorithms that set prices, control search results, and make recommendations.
We now move our attention to Europe, where the European Commission released a proposal for an Artificial Intelligence Liability Directive (AILD). It forms the next step in the development of a legal framework for AI, following on from the 2020 white paper and the 2021 proposed legal framework.
In June, the Spanish government, in conjunction with the EU, announced the launch of an AI Sandbox to implement responsible AI with a human-centric approach. October saw the unveiling of the UK AI Standards Hub. The aim of the Hub is to help stakeholders across industry, government, civil society, and academia understand, use, and develop standards.
If you are interested in keeping abreast of the latest policy developments, the EuropeanAI newsletter from Charlotte Stix is a great place to start. The Digital Constitutionalist also digs into a range of topics from the digital world, including AI regulation and policy.
The 2022 AI Index Report was published in March. Compiled by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), it tracks, summarises and visualises data relating to artificial intelligence.
In June, the World Economic Forum released a blueprint for equity and inclusion in artificial intelligence. Partnership on AI shared their views on the report here.
It’s hard to ignore the large language model (LLM) releases that have hit our screens over the course of the past 12 months. In January, the Google team behind LaMDA, a family of transformer-based neural language models specialised for dialogue, released the details of the model. Galactica, from Meta, was released in November. It was designed with the aim of allowing users to explore the literature, ask scientific questions, and write scientific code. ChatGPT, the latest offering from OpenAI, is a chatbot optimized for conversational dialogue. OpenAI claim that the dialogue format “makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” Although some impressive results have been shared, both the Galactica and ChatGPT models are prone to generating factually incorrect, yet extremely convincing, responses. The shortcomings, and potential harms, of LLMs are considered in this article by Abeba Birhane and Deborah Raji.
April saw the publication of a series of articles investigating how AI is enriching a powerful few by dispossessing communities that have been dispossessed before. This MIT Technology Review series was authored by Karen Hao, Heidi Swart, Andrea Paola Hernández and Nadine Freischlad and comprises the following articles: South Africa’s private surveillance machine is fueling a digital apartheid | How the AI industry profits from catastrophe | The gig workers fighting back against the algorithms | A new vision of artificial intelligence for the people.
We love a good podcast, and were delighted to hear about the launch of the GRACE (Global Review of AI Community Ethics) podcast. Hosted by Harriett Jernigan, the podcast has so far featured interviews with Brandeis Marshall and Nakeema Stefflbauer. We’ve also been informed and entertained by The Good Robot Podcast, The Radical AI Podcast, The Machine Ethics Podcast, Computing Up, and AI The New Sexy.
At the end of March, it was reported that an AI system from French startup NukkAI had beaten eight world champions at bridge. This has been heralded as an important step, as in bridge (as opposed to games such as chess and Go) players work with incomplete information. The system uses a hybrid of rules-based and deep learning methods. You can watch the recording of the livestream of the challenge in full here.
The story of the Mayflower autonomous ship even had staunch landlubbers dusting off their sea shanties. After 40 days and 5,600km at sea, the vessel completed its Atlantic crossing, reaching Halifax, Nova Scotia on 5 June. You can see some photos and commentary on the team’s Instagram page.
Regular readers of the AIhub monthly digests will know that we’re big fans of the AI Song Contest. This year was bigger than ever with 46 teams participating. The winner this year was Yaboi Hanoi with อสุระเทวะชุมนุม (“Asura Deva Choom Noom”) – Enter Demons & Gods. You can listen to the winning song here. The other entries are available here.
We finish our round-up with some of the awards that were presented this year.
The AAAI Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity recognizes positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways. Cynthia Rudin was the 2022 award winner, and she spoke about her work at AAAI2022.
The ACM SIGAI Industry Award for Excellence in Artificial Intelligence recognises the transfer of original academic research into AI applications. This year, the award was presented to the team behind Sony’s Gran Turismo Sophy project. They developed reinforcement learning methods that won a Gran Turismo team event against some of the world’s best human players.
The ACM/SIGAI Autonomous Agents Research Award recognises years of research and leadership in the field of robotics and multi-agent systems. This year, the recipient was Maria Gini who, for many years, has been at the forefront of robotics and multi-agent systems.
Four IJCAI awards were bestowed this year, and these were as follows:
IJCAI-22 Award for Research Excellence: Stuart Russell
IJCAI-22 Computers and Thought Award: Bo Li
IJCAI-22 John McCarthy Award: Michael Littman
IJCAI-22 Donald E. Walker Distinguished Service Award: Bernhard Nebel
Luc Steels was awarded the EurAI Distinguished Service Award 2022 for his pioneering and groundbreaking work in the field, as well as for his foundational contributions to AI organization in the European Union.
There were many interesting research articles that received best paper awards at the various conferences throughout the year. Here are links to just some that were presented over the course of the last 12 months: AAAI, FAccT, ICIP, ICLR, ICML, IJCAI, IROS, NAACL, NeurIPS.