
AIhub monthly digest: June 2024 – network resource allocation, protein structure prediction, and a Ge’ez-Amharic-English dataset

by Lucy Smith
27 June 2024




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about a Ge’ez-Amharic-English dataset, meet AAAI Fellow Mausam, and learn about network resource allocation.

Meeting AAAI Fellow Professor Mausam

Each year the AAAI recognizes a group of individuals who have made significant, sustained contributions to the field of artificial intelligence by appointing them as Fellows. Over the course of the next few months, we’ll be talking to some of the 2024 AAAI Fellows. In the first interview in the series, we met Professor Mausam and found out about his research, career path, mentorship, and why it is important to add some creative pursuits to your life.

Interview with Henok Biadglign Ademtew: Creating an Amharic, Ge’ez and English parallel dataset

African languages are not well-represented in natural language processing (NLP). This is in large part due to a lack of resources for training models. Henok Biadglign Ademtew and Mikiyas Girma Birbo have created an Amharic, Ge’ez, and English parallel dataset to help advance research into low-resource languages. We spoke to Henok about this project, the creation of the dataset, and some of the challenges faced.

An iterative refinement model for PROTAC-induced structure prediction

Proteolysis targeting chimeras (PROTACs) are small molecules that trigger the breakdown of traditionally “undruggable” proteins by binding simultaneously to their targets and degradation-associated proteins. In this blogpost, Bo Qiang, Wenxian Shi, Yuxuan Song and Menghua Wu write about their work on PROTAC-induced structure prediction.

Learning programs with numerical reasoning

Inductive logic programming is a form of program synthesis that can learn explainable programs from small numbers of examples. However, current approaches struggle to learn programs with numerical values. In this blogpost, Céline Hocquette writes about her work introducing a novel approach to dealing with these numerical values.

Interview with Tianfu Wang: A reinforcement learning framework for network resource allocation

We heard from Tianfu Wang about work addressing resource allocation problems using a reinforcement learning framework, specifically in the domain of network virtualization. The work has implications for applications such as network management, cloud computing, and 5G networks, where efficient resource allocation is critical.

IJCAI 2024 awards

The winners of three International Joint Conferences on Artificial Intelligence (IJCAI) awards have been announced, with this year's recipients of the three distinctions now confirmed.

AI Fringe London

The AI Fringe returned for a second year on 5 June. The event was designed to complement the AI Seoul Summit, which was co-hosted by the UK and South Korean governments. If you missed the livestream, you can catch the recordings of the half-day event here.

Public voices in AI

Early June saw the launch of a Public Voices in AI Fund, financed by UK Research and Innovation. This will support projects that “seek to ensure that uses of AI are informed by the voices of people underrepresented in or negatively impacted by AI”. The fund is part of the wider Public Voices in AI project which aims to ensure that public views and voices are front and centre in all uses of AI.

International Conference on Web and Social Media

The 18th International Conference on Web and Social Media (ICWSM) took place from 3-6 June in Buffalo, USA. The conference cuts across many disciplines including network science, machine learning, computational linguistics, sociology, communication, and political science. In this round-up, we took a look at what the participants got up to at the event.

Developing an LLM: Building, Training, Finetuning

In a one-hour explainer video, “Developing an LLM: Building, Training, Finetuning”, Sebastian Raschka covers the development cycle of LLMs, from architecture and pretraining to the different stages of finetuning.


Our resources page
Seminars in 2024
AAAI/ACM SIGAI Doctoral Consortium interview series
AI around the world focus series
UN SDGs focus series
New voices in AI series



Lucy Smith, Managing Editor for AIhub.




