AIhub monthly digest: November 2023 – deconstructing sentiment analysis, few-shot learning for medical images, and Angry Birds structure generation

29 November 2023




Welcome to our November 2023 monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, find out about recent events, and more. This month, we deconstruct sentiment analysis, find out about few-shot learning in medical imaging, investigate rare events, and look forward to our science communication training session at NeurIPS.

A critical survey towards deconstructing sentiment analysis: Interview with Pranav Venkit and Mukund Srinath

In their paper The Sentiment Problem: A Critical Survey towards Deconstructing Sentiment Analysis, Pranav Venkit, Mukund Srinath, Sanjana Gautam, Saranya Venkatraman, Vipul Gupta, Rebecca Passonneau and Shomir Wilson present a review of the sociotechnical aspects of sentiment analysis. In this interview, Pranav and Mukund tell us more about sentiment analysis, how they went about surveying the literature, and recommendations for researchers in the field.

Few-shot learning for medical image analysis

Deep learning models employed in medical imaging are limited by the lack of annotated images. Few-shot learning techniques, where models learn from a small number of examples, can help overcome this scarcity. In a systematic review of the field, Eva Pachetti and Sara Colantonio investigate the state of the art. You can read a summary of their findings in their blog post.

Utilizing generative adversarial networks for stable structure generation in Angry Birds

Matthew Stephenson and Frederic Abraham won the best artifact award at AIIDE 2023 for their work Utilizing Generative Adversarial Networks for Stable Structure Generation in Angry Birds. In this blogpost, they tell us about their GAN approach, which teaches itself how to design Angry Birds structures.

Exploring computer game states

Thoroughly testing video game software by hand is difficult. AI agents that can automatically explore different game functionalities are a promising alternative. In this blogpost, Sasha Volokh writes about an approach for automatically determining the possible actions in computer game states, work that won him the best student paper award at AIIDE 2023.

AIhub coffee corner: Regulation of AI

The AIhub coffee corner captures the musings of AI experts over a short conversation. Three years ago, our trustees sat down to discuss AI and regulation. A lot has happened since then, both on the technological development front and on the policy front, so we thought it was time to tackle the topic again. Read the discussion here.

The impact of counterfactual explanations’ directionality on user behavior

In their work For Better or Worse: The Impact of Counterfactual Explanations’ Directionality on User Behavior in xAI, Ulrike Kuhl, André Artelt and Barbara Hammer investigate counterfactual explanations in explainable artificial intelligence. In this interview, Ulrike tells us more about their study, and highlights some of their interesting and surprising findings.

The power of collaboration: power grid control with multi-agent reinforcement learning

Power grid operation is a complex task with many possible actions. Erica van der Sar, Alessandro Zocca and Sandjai Bhulai have introduced a multi-agent reinforcement learning method to help optimise decision making. Find out more in this blogpost.

Pre-trained language models for music captioning and query response

In his blogpost Pre-trained language models for music captioning and query response, Yinghao Ma writes about work towards a system that provides descriptions of the music you are listening to.

Surveying rare event prediction

In their blogpost A comprehensive survey on rare event prediction, Chathurangi Shyalika, Ruwan Wickramarachchi and Amit Sheth review the rare event prediction literature and highlight open research questions and future directions in the field.

ACM/SIGAI Autonomous Agents Research Award – 2024 award call

The 2024 ACM SIGAI Autonomous Agents Research Award is open for nominations. This award is made for excellence in research in the area of autonomous agents and is intended to recognize researchers whose current work is an important influence on the field. The deadline for nominations is 15 December 2023. You can find out more here.

New UK Centres for Doctoral Training

UK Research and Innovation has announced investment in 12 Centres for Doctoral Training (CDTs) in AI based at 16 universities. A total of £117 million has been awarded to the 12 CDTs, which will train the next generation of AI researchers from across the UK.

Synthetic Beat Brigade wins 2023 AI Song Contest

The 2023 AI Song Contest award show, held on 4 November in A Coruña, Spain, saw team Synthetic Beat Brigade take the honour of best song with their entry “How would you touch me?” You can listen to the song, and find out more about the team and their creative process here. You can listen to all of the entries to the contest here.

Science communication introduction at NeurIPS

If you are attending this year’s Conference on Neural Information Processing Systems (NeurIPS 2023) then please do consider attending our short introduction to science communication for AI researchers. In this session you will learn how to turn your research articles into blog posts, how to use social media to promote your work, and how to avoid hype when writing about your research. This will be held on Monday 11 December and comprises two parts: 1) 12:45 – 13:45 Talk: science communication for AI researchers – introductory training, 2) 14:00 – 16:00 Open drop-in session for one-on-one support. Find out more here.


Our resources page
Forthcoming and past seminars for 2023
AI around the world focus series
UN SDGs focus series
New voices in AI series



Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:






©2024 - Association for the Understanding of Artificial Intelligence


 











