 

AIhub monthly digest: June 2024 – network resource allocation, protein structure prediction, and a Ge’ez-Amharic-English dataset

by Lucy Smith
27 June 2024




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about a Ge’ez-Amharic-English dataset, meet AAAI Fellow Mausam, and learn about network resource allocation.

Meeting AAAI Fellow Professor Mausam

Each year the AAAI recognizes a group of individuals who have made significant, sustained contributions to the field of artificial intelligence by appointing them as Fellows. Over the course of the next few months, we’ll be talking to some of the 2024 AAAI Fellows. In the first interview in the series, we met Professor Mausam and found out about his research, career path, mentorship, and why it is important to add some creative pursuits to your life.

Interview with Henok Biadglign Ademtew: Creating an Amharic, Ge’ez and English parallel dataset

African languages are not well-represented in natural language processing (NLP). This is in large part due to a lack of resources for training models. Henok Biadglign Ademtew and Mikiyas Girma Birbo have created an Amharic, Ge’ez, and English parallel dataset to help advance research into low-resource languages. We spoke to Henok about this project, the creation of the dataset, and some of the challenges faced.
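For readers who want a feel for how such a three-way parallel corpus might be used, here is a minimal sketch that reads hypothetical tab-separated sentence triples (Ge’ez, Amharic, English) into translation pairs. The file layout is an assumption for illustration only; the authors’ released dataset may be structured differently.

```python
# A minimal sketch of reading a three-way parallel corpus, assuming
# (hypothetically) a tab-separated file with columns: Ge'ez, Amharic, English.
# The authors' actual release may use a different format.
import csv

def load_parallel_corpus(path):
    """Yield (geez, amharic, english) sentence triples from a TSV file."""
    with open(path, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        for row in reader:
            if len(row) == 3:
                yield tuple(cell.strip() for cell in row)

# Example: build Ge'ez -> English sentence pairs for a translation model.
# pairs = [(gez, eng) for gez, _, eng in load_parallel_corpus("parallel.tsv")]
```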

An iterative refinement model for PROTAC-induced structure prediction

Proteolysis targeting chimeras (PROTACs) are small molecules that trigger the breakdown of traditionally “undruggable” proteins by binding simultaneously to their targets and degradation-associated proteins. In this blogpost, Bo Qiang, Wenxian Shi, Yuxuan Song and Menghua Wu write about their work on PROTAC-induced structure prediction.

Learning programs with numerical reasoning

Inductive logic programming is a form of program synthesis that can learn explainable programs from small numbers of examples. However, current approaches struggle to learn programs with numerical values. In this blogpost, Céline Hocquette writes about her work introducing a novel approach to dealing with these numerical values.
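As a toy illustration of the numerical-reasoning challenge (and not the method introduced in the post), the snippet below learns a single threshold rule of the form f(x) :- x >= T from positive and negative examples by searching over candidate values drawn from the examples themselves, which is exactly the kind of numerical constant that purely symbolic search struggles to enumerate.

```python
# Toy illustration (not the approach from the paper): learn a single numerical
# threshold "f(x) :- x >= T" from positive and negative examples by testing
# candidate values of T taken from the observed examples.
def learn_threshold(positives, negatives):
    """Return a threshold T such that all positives >= T and all negatives < T,
    or None if no such threshold exists among the observed values."""
    for t in sorted(set(positives)):
        if all(p >= t for p in positives) and all(n < t for n in negatives):
            return t
    return None

print(learn_threshold([7.2, 9.5, 8.1], [3.0, 5.4]))  # prints 7.2
```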

Interview with Tianfu Wang: A reinforcement learning framework for network resource allocation

We heard from Tianfu Wang about work addressing resource allocation problems using a reinforcement learning framework, specifically in the domain of network virtualization. The work has implications for applications such as network management, cloud computing, and 5G networks, where efficient resource allocation is critical.
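To give a flavour of how resource allocation can be cast as a reinforcement-learning problem, here is a minimal toy environment (not Tianfu Wang’s framework): the state is the remaining capacity of each physical node, an action places the current virtual request on a node, and the reward favours feasible placements.

```python
# A minimal sketch of resource allocation as an RL environment. All numbers and
# the reward scheme are illustrative assumptions, not the interviewed work.
import random

class ToyAllocationEnv:
    def __init__(self, capacities, demands):
        self.capacities = list(capacities)   # CPU capacity per physical node
        self.demands = list(demands)         # CPU demand per virtual request
        self.reset()

    def reset(self):
        self.remaining = list(self.capacities)
        self.step_idx = 0
        return tuple(self.remaining)

    def step(self, action):
        demand = self.demands[self.step_idx]
        if self.remaining[action] >= demand:
            self.remaining[action] -= demand
            reward = 1.0                     # feasible placement
        else:
            reward = -1.0                    # infeasible placement
        self.step_idx += 1
        done = self.step_idx == len(self.demands)
        return tuple(self.remaining), reward, done

# Random policy rollout, as a placeholder for a learned policy.
env = ToyAllocationEnv(capacities=[10, 8, 6], demands=[4, 5, 3, 6])
state, done, total = env.reset(), False, 0.0
while not done:
    state, reward, done = env.step(random.randrange(3))
    total += reward
print("episode return:", total)
```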

IJCAI 2024 awards

The winners of three International Joint Conferences on Artificial Intelligence (IJCAI) awards have been announced.

AI Fringe London

The AI Fringe returned for a second year on 5 June. The event was designed to complement the AI Seoul Summit, which was co-hosted by the governments of the UK and South Korea. If you missed the livestream, you can catch up on the recordings of the half-day event here.

Public voices in AI

Early June saw the launch of a Public Voices in AI Fund, financed by UK Research and Innovation. This will support projects that “seek to ensure that uses of AI are informed by the voices of people underrepresented in or negatively impacted by AI”. The fund is part of the wider Public Voices in AI project which aims to ensure that public views and voices are front and centre in all uses of AI.

International Conference on Web and Social Media

The 18th International Conference on Web and Social Media (ICWSM) took place from 3-6 June in Buffalo, USA. The conference cuts across many disciplines including network science, machine learning, computational linguistics, sociology, communication, and political science. In this round-up, we took a look at what the participants got up to at the event.

Developing an LLM: Building, Training, Finetuning

In a one-hour explainer video, “Developing an LLM: Building, Training, Finetuning”, Sebastian Raschka covers the development cycle of LLMs, from architecture and pretraining to the different stages of finetuning.
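As a rough companion sketch (not code from the video), the snippet below illustrates one point the development cycle hinges on: pretraining and finetuning share the same next-token objective, differing mainly in the data and the starting weights. The tiny model and random token IDs are placeholders.

```python
# Sketch of a next-token training step shared by pretraining and finetuning.
# The tiny model and random batch are illustrative placeholders.
import torch
import torch.nn as nn

vocab_size, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(token_ids):
    """One next-token prediction step: predict token t+1 from token t."""
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = model(inputs)                                  # (batch, seq, vocab)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# "Pretraining" on generic text and "finetuning" on task-specific text would
# reuse exactly this loop with different batches and starting weights.
batch = torch.randint(0, vocab_size, (8, 16))
print(train_step(batch))
```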


Our resources page
Seminars in 2024
AAAI/ACM SIGAI Doctoral Consortium interview series
AI around the world focus series
UN SDGs focus series
New voices in AI series





Lucy Smith, Managing Editor for AIhub.



