Welcome to our May 2022 monthly digest, where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we chat to our latest new voice in AI, interview an award winner, hear about the RoboCup virtual humanoid competition, and check out a music video created with the help of AI.
We’re pleased to announce that we will be giving a tutorial on science communication for AI researchers at IJCAI-ECAI 2022. The conference runs from 24-29 July, and our one-hour-45-minute session will take place on Monday 25 July. You can find out more information here.
Alessandra Rossi is a member of both the technical and organising committees for the RoboCup Humanoid League. We spoke to her about the Humanoid League Virtual Season, which concluded with the grand final of the virtual soccer competition and a three-day workshop. This virtual league was designed as a complement to the physical league, allowing the teams to try out new approaches and keep in touch throughout the year.
We attended the International Conference on Learning Representations (ICLR), which featured eight invited talks on topics ranging from reinforcement learning to connectomics, from societal considerations to interpretability. You can read our summaries of two of the talks, from Pushmeet Kohli and Been Kim, who spoke about AI for science, and a language for AI respectively:
#ICLR2022 invited talk round-up 1: AI for science – protein structure prediction
#ICLR2022 invited talk round-up 2: Beyond interpretability
We also managed to catch up with some of the ICLR outstanding paper award winners. In this interview, X.Y. Han, Vardan Papyan, and David Donoho tell us about their work on the neural collapse phenomenon.
The ACM SIGAI Industry Award for Excellence in Artificial Intelligence (AI) recognises individuals or teams who have transferred original academic research into AI applications in ways that demonstrate the power of AI techniques. Nominations for this year’s award are due by 31 May. Find out more here.
The Information Commissioner’s Office (ICO) in the UK has fined facial recognition database company Clearview AI Inc more than £7.5m for using images of people that were scraped from websites and social media. Clearview AI collected the data to create a global online database, with one of the resulting applications being facial recognition. The company has also been ordered to delete the personal data it holds on UK residents, and to stop obtaining and using personal data that is publicly available on the internet. Find out more here.
In the latest episode of Computing Up, Michael Littman and Dave Ackley chat to Ellie Pavlick (Brown University) about new large AI language models. Topics range from what is and isn’t known about the models (and by them), to whether and how much we should be scared of them, to what “traditional” sciences like linguistics bring to artificial intelligence research and engineering.
From 26-28 May, Spain played host to the conference JornadasDAR, where participants discussed democracy, algorithms and resistance, with a view to seeking a democratic, decolonial and human rights-based approach to artificial intelligence. The first day of the event was held entirely online, and the morning session was recorded in full. You can watch both the original version in Spanish and the English translation.
In this article about AI-biometry from the Ada Lovelace Institute blog, Mona Sloane writes about how the histories of pseudoscience and biometry have been intimately entangled, how the principles of oppression have become normalised through discriminatory technologies, and the dubious use of AI-biometry in recruitment.
YouTuber DoodleChaos (Mark Robbins) used the VQGAN+CLIP text-to-art generator to help create a music video, using the lyrics of the song as input. Mark explains how he did this here, and notes that additional human input was required to achieve the desired effect.