Welcome to our November 2022 monthly digest, where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we hear from researchers who’ve developed an AI system for live music accompaniment and improvisation. We also find out more about counterfactual explanations for reinforcement learning, planning robust frictional multi-object grasps, and social bias in knowledge graphs, amongst other things.
Olga Vechtomova and Gaurav Sahu envisioned and developed a system, LyricJam Sonic, that uses AI to create a real-time generative stream of music based on an artist’s own catalogue of studio recordings. The purpose is to inspire the artist with potentially unexpected combinations of sounds. You can read more about their work in this blog post. You can try out LyricJam Sonic yourself, via this interactive web app.
Benjamin Böhm, Tomáš Peitl and Olaf Beyersdorff won an IJCAI 2022 distinguished paper award for their article QCDCL with Cube Learning or Pure Literal Elimination – What is Best? In this blog post, they explain their work, in which they study the reasoning power of various algorithms that decide two-player games.
In their paper FAIR-FATE: Fair Federated Learning with Momentum, Teresa Salazar, Miguel Fernandes, Helder Araujo, and Pedro Henriques Abreu develop a fairness-aware federated learning algorithm which aims to achieve group fairness while maintaining classification performance. In this interview, Teresa tells us more about their work.
Angelie Kraft and Ricardo Usbeck conducted a critical analysis of literature concerning biases at different steps of a knowledge graph lifecycle. We interviewed Angelie, who told us more about knowledge graphs, how social biases become embedded in them, and what researchers can do to mitigate this.
Wisdom Agboh and colleagues have leveraged neural networks and fundamental robot grasping theorems to build an efficient robot system that grasps multiple objects at once. Find out more about their work, and watch videos of the robot in action, in this interview with Wisdom.
In their recent paper, Jasmina Gajcin and Ivana Dusparic study counterfactual explanations for reinforcement learning. In this interview, Jasmina told us more about counterfactuals and some of the challenges of implementing them in reinforcement learning settings.
Mixed-precision neural networks are neural networks with varying precision across layers, kernels or weights. Mariam Rakka, Mohammed E. Fouda, Pramod Khargonekar and Fadi Kurdahi have reviewed recent frameworks in the literature that address mixed-precision neural network training. Here, they tell us more about mixed-precision neural networks and the main findings from their survey.
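To give a flavour of the idea, here is a minimal sketch (our own illustration, not taken from the survey) of per-layer precision: each layer's weights are uniformly quantized to a different hypothetical bit-width, so sensitive layers keep more precision while others are compressed more aggressively.

```python
import numpy as np

def quantize(weights, bits):
    """Uniformly quantize weights onto a signed grid with `bits` bits."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 levels for 8 bits
    scale = np.max(np.abs(weights)) / levels
    return np.round(weights / scale) * scale

rng = np.random.default_rng(0)
# Hypothetical network: three layers with assigned precisions 8, 4 and 2 bits.
layer_weights = [rng.standard_normal((4, 4)) for _ in range(3)]
layer_bits = [8, 4, 2]

for w, b in zip(layer_weights, layer_bits):
    q = quantize(w, b)
    err = np.mean(np.abs(w - q))
    print(f"{b}-bit layer: mean abs quantization error = {err:.4f}")
```

Running this shows the trade-off the survey is concerned with: the lower the bit-width, the larger the quantization error, which is why choosing precisions per layer (or per kernel, or per weight) matters.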
Hosted by Eleanor Drage and Kerry Mackereth, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. In this episode, Eleanor and Kerry talk to Lorraine Daston about the exorcism of emotion in rational science (and AI). You can catch this, and all past episodes here.
The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) took place in Kyoto last month. Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kroeger, and Ken Goldberg bagged the best paper award with their work SpeedFolding: Learning Efficient Bimanual Folding of Garments. As well as the best paper award, there were various other paper prizes given out at the conference. Find out more about the winners, and see them presenting their work, here.
Climate Change AI (CCAI) have announced funding totalling USD 1.2M for projects at the intersection of AI and climate change. The program will allocate grants of up to USD 150K for research projects of one year in duration. The deadline for submissions is 1 March 2023, and you can find out how to apply here.
In this report, researchers at the Minderoo Centre for Technology and Democracy propose a sociotechnical audit as a tool to help outside stakeholders evaluate the ethics and legality of police use of facial recognition. They applied this audit to three British police deployments and found that all three deployments failed to meet the minimum ethical and legal standards for the governance of facial recognition technology.
Finally, a call from us to get involved with science communication. We are recruiting AIhub ambassadors to help us write about the latest news, research, conferences, and more, in the field of artificial intelligence. If you’re interested, you can find out more here.