
AIhub monthly digest: August 2022 – cross-lingual transfer, philosophy of cognitive science, and #DLIndaba


by Lucy Smith
30 August 2022




Welcome to our August 2022 monthly digest, where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we continue our conference coverage, chat to winners of best paper awards, and listen to some interesting podcasts.

New voices in AI

In the latest episode of New voices in AI, host Joe Daly talks to Dimitri Coelho Mollo about his work on philosophy, cognitive science and AI.

Faithfully reflecting updated information in text

Wouldn’t it be handy to be able to automatically update information in an outdated article? Well, Robert Logan, Alexandre Passos, Sameer Singh and Ming-Wei Chang designed an algorithm to do just that in their paper FRUIT: Faithfully Reflecting Updated Information in Text. This work won them a best new task award at NAACL 2022 (Annual Conference of the North American Chapter of the Association for Computational Linguistics). In this interview, Robert tells us about their methodology, the main contributions of the paper, and ideas for future work.

Evaluating cross-lingual transfer

Dan Malkin, Tomasz Limisiewicz and Gabriel Stanovsky received an outstanding new method paper award at NAACL 2022 for their work A balanced data approach for evaluating cross-lingual transfer: mapping the linguistic blood bank. We spoke to Dan, who told us about multilingual models, the cross-lingual transfer phenomenon, and how the choice of pretraining languages affects downstream cross-lingual transfer.

ICML invited talks

The 39th International Conference on Machine Learning (ICML 2022) took place in Baltimore last month. There were four invited talks at ICML 2022, which we summarised in two posts:
#ICML2022 invited talk round-up 1: towards a mathematical theory of ML and using ML for molecular modelling
#ICML2022 invited talk round-up 2: estimating causal effects and drug discovery and development

Mihaela van der Schaar on machine learning for medicine

The 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022) took place from 23-29 July in Vienna. As part of the conference there were eight fascinating invited talks. In this post, we summarise the presentation by Mihaela van der Schaar, who talked about some of the opportunities for machine learning in medicine.

Deep Learning Indaba returns

After a two-year pandemic-enforced break, the Deep Learning Indaba returned this year, taking place from 21-26 August. This annual meeting of the African machine learning community has the mission of strengthening machine learning across Africa. Find out what the attendees got up to in our tweet round-up of the event.

Oriel FeldmanHall on reinforcement learning

In the latest episode of Computing Up, Oriel FeldmanHall (Brown University) joins hosts Michael and Dave in a wide-ranging discussion, starting with what reinforcement learning does and doesn't mean. She then turns the tables to ask what computer scientists do and don't get wrong about the mind, the brain, and learning in general.

Radical AI podcast: Should the government use AI?

How does the government use algorithms? How do algorithms impact social services, policing, and other public services? And where does Silicon Valley fit in? In the latest episode of the Radical AI podcast, hosts Dylan and Jess interview Shion Guha about how governments adopt algorithms to enforce public policy.

Watch the talks from the ACM Conference on Fairness, Accountability, and Transparency

For those who weren’t able to attend the ACM FAccT conference, the organisers have made videos of all of the keynote talks, panel discussions, tutorials, and research talks available on YouTube. You can find the links to the playlists here.

How does the proposed UK AI regulation compare to EU and Canadian policies?

In this edition of her EuropeanAI newsletter, Charlotte Stix presents an analysis of the UK’s proposal for AI regulation compared to the EU’s AI Act and Canada’s Artificial Intelligence and Data Act. You can find the archive of her newsletters, which cover topics relating to AI governance, here.


Our resources page
Forthcoming and past seminars 2022
Articles in our UN SDGs focus series
New voices in AI series




Lucy Smith is Senior Managing Editor for AIhub.



