
AIhub monthly digest: June 2024 – network resource allocation, protein structure prediction, and a Ge’ez-Amharic-English dataset

by Lucy Smith
27 June 2024




Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about a Ge’ez-Amharic-English dataset, meet AAAI Fellow Mausam, and learn about network resource allocation.

Meeting AAAI Fellow Professor Mausam

Each year the AAAI recognizes a group of individuals who have made significant, sustained contributions to the field of artificial intelligence by appointing them as Fellows. Over the course of the next few months, we’ll be talking to some of the 2024 AAAI Fellows. In the first interview in the series, we met Professor Mausam and found out about his research, career path, mentorship, and why it is important to add some creative pursuits to your life.

Interview with Henok Biadglign Ademtew: Creating an Amharic, Ge’ez and English parallel dataset

African languages are not well-represented in natural language processing (NLP). This is in large part due to a lack of resources for training models. Henok Biadglign Ademtew and Mikiyas Girma Birbo have created an Amharic, Ge’ez, and English parallel dataset to help advance research into low-resource languages. We spoke to Henok about this project, the creation of the dataset, and some of the challenges faced.

An iterative refinement model for PROTAC-induced structure prediction

Proteolysis targeting chimeras (PROTACs) are small molecules that trigger the breakdown of traditionally “undruggable” proteins by binding simultaneously to their targets and degradation-associated proteins. In this blogpost, Bo Qiang, Wenxian Shi, Yuxuan Song and Menghua Wu write about their work on PROTAC-induced structure prediction.

Learning programs with numerical reasoning

Inductive logic programming is a form of program synthesis that can learn explainable programs from small numbers of examples. However, current approaches struggle to learn programs with numerical values. In this blogpost, Céline Hocquette writes about her work introducing a novel approach to dealing with these numerical values.

Interview with Tianfu Wang: A reinforcement learning framework for network resource allocation

We heard from Tianfu Wang about work addressing resource allocation problems using a reinforcement learning framework, specifically in the domain of network virtualization. The work has implications for applications such as network management, cloud computing, and 5G networks, where efficient resource allocation is critical.

IJCAI 2024 awards

The winners of three International Joint Conferences on Artificial Intelligence (IJCAI) awards have been announced for this year.

AI Fringe London

The AI Fringe returned for a second year on 5 June. The event was designed to complement the AI Seoul Summit, which was co-hosted by the UK and South Korean governments. If you missed the livestream, you can catch up with the recordings of the half-day event.

Public voices in AI

Early June saw the launch of a Public Voices in AI Fund, financed by UK Research and Innovation. This will support projects that “seek to ensure that uses of AI are informed by the voices of people underrepresented in or negatively impacted by AI”. The fund is part of the wider Public Voices in AI project which aims to ensure that public views and voices are front and centre in all uses of AI.

International Conference on Web and Social Media

The 18th International Conference on Web and Social Media (ICWSM) took place from 3 to 6 June in Buffalo, USA. The conference cuts across many disciplines, including network science, machine learning, computational linguistics, sociology, communication, and political science. In this round-up, we took a look at what the participants got up to at the event.

Developing an LLM: Building, Training, Finetuning

In a one-hour explainer video, “Developing an LLM: Building, Training, Finetuning”, Sebastian Raschka covers the development cycle of LLMs, from architecture and pretraining to the different stages of finetuning.


Our resources page
Seminars in 2024
AAAI/ACM SIGAI Doctoral Consortium interview series
AI around the world focus series
UN SDGs focus series
New voices in AI series



Lucy Smith, Managing Editor for AIhub.




