Affinity group round-up from NeurIPS 2022


by Joe Daly
08 December 2022



Collage of logos from affinity groups: Indigenous in AI, Women in Machine Learning, Global South in AI, LatinX in AI, Black in AI, Queer in AI, and North Africans in ML

It was a busy month for affinity groups at NeurIPS, with workshops from Black in AI, Queer in AI, LatinX in AI, Indigenous in AI, Global South in AI, Women in ML, and North Africans in ML. These workshops give researchers the opportunity to share their work, find support and make connections, and raise awareness of issues affecting their communities.

Here are some of our highlights from the workshops.

David Adelani presented his work on cross-lingual transfer – taking a model trained in one language and applying it to other languages. Transferring from one language to another can be tricky, especially when the two use different structures or scripts. His work focused on named entity recognition (identifying things such as people, place names, dates and organizations) and explored which factors improve cross-lingual transfer for African languages. Key aspects included the creation of a diverse dataset, MasakhaNER, and the choice of language used for the transfer. (If you would like to learn more about David’s work, see also our interview with him.)

A Queer in AI panel hosted by Talia Ringer, with Jon Cardoso-Silva, Martin Mundt (he/him), Sara Beery, and Shamini Kothari (she/they), shared experiences of being queer faculty members. The conversation covered how queerness influences their work and roles, whether or not to be out in academia, their hopes for the future, and how these experiences may differ across the globe.

At the Indigenous in AI workshop, Mason Grimshaw hosted a panel with Michael Running Wolf, Caroline Running Wolf, and Shawn Tsosie. They shared their experience of running the first Lakota AI Code Camp, where, over three weeks, students went from no coding or AI experience to building an app that identifies local plants. The panelists talked about the importance of giving students something to take home, discussing ethics, and making the content engaging.

The first Global South in AI workshop featured two keynote talks, including one from Jazmia Henry, who spoke about conversational bots and how bias in these AIs can be reduced by taking an operational approach. This means addressing not just the data collected, but also how the model is built and how it is maintained after deployment. After talking through what went wrong with the infamous Tay (the chatbot trained on its interactions with the internet), she gave examples of ways to experiment with such systems and of how to use constraints to reduce bias.

If you would like to find out more about the workshops, see the NeurIPS website.

We will keep amplifying the work of affinity groups. If you know a group that would like their work to be featured, let us know at aihuborg@gmail.com.

Find out more about the groups and their upcoming activities:

Black in AI

Black Women in AI
Are hosting a discussion on the future of sports and AI in a Twitter Space on 8th December

Disability in AI

Global South in AI

Indigenous in AI

LatinX in AI
Have an upcoming social at EMNLP 2022, 8th December

New in ML

Queer in AI

Women in ML





Joe Daly, Engagement Manager for AIhub



