Welcome to our March 2021 monthly digest. Our digests are designed to keep you up-to-date with the latest happenings in the AI world. You can catch up with any AIhub stories you may have missed, get the low-down on recent conferences, and generally immerse yourself in all things AI.
This month, our attention turned to education, and we considered both the use of AI in teaching, and the teaching of AI. Carles Sierra wrote about team formation techniques in education, describing how AI methods can be used to facilitate collaborative learning.
Organised as an independent event within the AAAI conference, the symposium on Educational Advances in Artificial Intelligence (EAAI) seeks to improve the teaching and training of AI practitioners. We heard from the co-chairs of this year’s symposium, who provided an overview of the sessions that comprised the event.
We also covered a couple of the invited talks from AAAI/EAAI. Daphne Koller spoke about digital learning, discussing the different motivations for online learning, what we know about effective learning, digital platforms, and using data to improve learning.
In his presentation, Michael Wooldridge focussed on some of the lessons he has learned over the years regarding how to talk to the public about AI.
If you haven’t seen it already, then take a look at our climate action round-up, a summary of our focus series on this topic and the field in general.
The next topic we’ll be featuring on AIhub is UN SDG number 14: “Life below water”. We are still open for contributions, so please do get in touch if you’d like to take part.
You may have heard about the Mayflower autonomous ship that is due to launch on 19 April, from Plymouth, UK. The ship has been designed to spend long durations at sea, to carry scientific equipment, and to make its own decisions about how to optimize its route and mission. Keep watch for updates from this mission in our focus series.
Climate action and education were just two of the topics that featured heavily in the recent AI UK event hosted by The Alan Turing Institute. There were interesting sessions about the AI Council’s UK AI roadmap, and about addressing issues such as the skills gap, fairness and diversity. You can read our round-up of the first day of proceedings here.
If you want to keep up-to-date with the latest seminars, don’t forget to check out the AIhub list of online seminar providers. Events from these organisers are all free to attend. We also post a monthly article listing the specific seminars for the month ahead. If that’s not enough, we’ve collated a list of events, including links to recordings, going back to June 2020.
AIhub ambassador Anil Ozdemir interviewed Michael Milford, Director of the QUT Centre for Robotics, Queensland University of Technology. He spoke about his research developing better navigation and positioning systems for robots and autonomous vehicles. His approach uses a combination of traditional algorithmic methods, modern deep learning, and biologically-inspired design.
This month saw the publication of the 2021 AI Index Report. Compiled by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), this report tracks, summarises and visualises data relating to artificial intelligence. The full version comprises a whopping 222 pages and includes chapters covering research and development, AI education, ethical challenges, and policy.
In this article, hot off the press from The New Yorker, the brilliant Ted Chiang writes about the much-hyped notion of a superintelligent AI explosion and why it’s extremely unlikely to happen soon, if at all.
Struggling to keep up with the machine learning literature? Well, Robert Lange could help you out. Once a week he creates what he calls a Machine Learning collage for an interesting paper he has read, and publishes it on his Twitter feed. In this example he summarises a recent arXiv paper by Anirudh Goyal et al: Coordination Among Neural Modules Through a Shared Global Workspace.
In Dodrio: Exploring Transformer Models with Interactive Visualization, Zijie J. Wang, Robert Turko and Duen Horng Chau present an open-source interactive visualization tool to help NLP researchers and practitioners analyze attention mechanisms in transformer-based models with linguistic knowledge. They’ve even created a little video to demonstrate the features. Dodrio is available here.
Too often, the conversations around the challenges of accessing and sharing African data are driven by non-African stakeholders. In Narratives and Counternarratives on Data Sharing in Africa, Rediet Abebe et al discuss issues arising from power imbalances and explore avenues for addressing them when sharing data generated on the continent.
In M6: A Chinese Multimodal Pretrainer, Junyang Lin et al report on their construction of the largest dataset for multimodal pretraining in Chinese. The dataset consists of over 1.9TB of images and 292GB of text covering a wide range of domains.
We’ve made a short video to explain more about us. It gives a flavour of who we are, what we do, and how we can help researchers promote their research to a wider audience.