by   -   November 27, 2020

By Larry Medsker

AI Policy Matters is a regular column in the ACM SIGAI AI Matters newsletter featuring summaries and commentary based on postings that appear twice a month in the AI Matters blog.

by   -   November 26, 2020

We are excited to announce that next month we will be launching the AIhub focus issue on “AI for Good”, which will concentrate specifically on the UN Sustainable Development Goals (SDGs). Each month we will pick a different goal and highlight work in that area.

by   -   November 25, 2020

Asian conference on machine learning logo

Last week, the 12th Asian Conference on Machine Learning (ACML) was held virtually. The event had been due to take place in Thailand, but instead moved online and the organisers decided to make all content freely available. You can watch all of the invited talks, tutorials, workshops, and video presentations of the contributed papers. Also, find out who won the conference awards.

by   -   November 24, 2020
network architecture
The network architecture proposed in this work

Ionut Schiopu and Adrian Munteanu received a Top Viewed Special Session Paper Award at the IEEE International Conference on Image Processing (ICIP 2020) for their paper “A study of prediction methods based on machine learning techniques for lossless image coding”. Here, Ionut Schiopu tells us more about their work.

by   -   November 23, 2020

Ryan Calo
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Ryan Calo about robot regulation.

by   -   November 20, 2020

By Marvin Zhang

Imagine that you are building the next-generation machine learning model for handwriting transcription. Based on previous iterations of your product, you have identified a key challenge for this rollout: after deployment, new end users often have different and unseen handwriting styles, leading to distribution shift. One solution to this challenge is to learn an adaptive model that can specialize and adjust to each user’s handwriting style over time. This solution seems promising, but it must be balanced against concerns about ease of use: requiring users to provide feedback to the model may be cumbersome and hinder adoption. Is it possible instead to learn a model that can adapt to new users without labels?
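One way to make this question concrete is label-free test-time adaptation. The sketch below is a toy illustration, not the method from the post itself: it adapts a linear classifier to a new user's unlabeled data by minimizing the entropy of the model's own predictions, so no labels are needed. All names, sizes, and hyperparameters here are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def prediction_entropy(W, X):
    """Mean entropy of the classifier's predicted distributions (no labels)."""
    p = softmax(X @ W)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))

def adapt(W, X_unlabeled, lr=0.1, steps=30, eps=1e-5):
    """Gradient descent on prediction entropy over the new user's unlabeled data.

    Numerical gradients are used here for brevity; a real system would
    use automatic differentiation.
    """
    W = W.copy()
    for _ in range(steps):
        grad = np.zeros_like(W)
        base = prediction_entropy(W, X_unlabeled)
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                W[i, j] += eps
                grad[i, j] = (prediction_entropy(W, X_unlabeled) - base) / eps
                W[i, j] -= eps
        W -= lr * grad
    return W

rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 3))        # toy "pretrained" weights
X_new = rng.normal(size=(20, 4))    # new user's unlabeled samples
W1 = adapt(W0, X_new)
# After adaptation, the model is more confident on this user's data,
# without ever seeing a label.
```

Entropy minimization is only one possible self-supervised adaptation signal; the point of the sketch is that the adaptation loop touches nothing but the user's unlabeled inputs.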

by   -   November 19, 2020

Fly AI report | AIhub

By Benjamin Cramet and Sylvie Grand-Perret

The first Fly AI report provides an overview of the many ways that artificial intelligence is already applied in the industry and assesses its potential to transform the sector.

by   -   November 18, 2020

AIhub coffee corner

The AIhub coffee corner captures the musings of AI experts over a 30-minute conversation. This month, we discuss AI education.

by   -   November 17, 2020

ODI summit logo
Last week the Open Data Institute (ODI) hosted their annual summit. This year the event was held virtually and included keynote talks, panel discussions and expo booths. The summit brought together people from a range of sectors to discuss the future of data. Topics covered included the use of data in innovation, climate studies, health, policy making, and more.

by   -   November 16, 2020

space junk | AIhub

By Tanya Petersen

EPFL researchers are at the forefront of developing some of the cutting-edge technology for the European Space Agency’s first mission to remove space debris from orbit.

by   -   November 13, 2020

AI seminars

Here you can find a list of the AI-related seminars that are scheduled to take place between now and the end of December 2020. We’ve also listed recent past seminars that are available for you to watch. All events detailed here are free and open for anyone to attend virtually.

by   -   November 12, 2020

In recent years, graphs and their associated spectral decompositions have emerged as a unified representation for image analysis and processing. This area of research is broadly categorized under Graph Signal Processing (GSP), an emerging field that has produced algorithms across a range of topics (including neural networks – Graph Convolutional Networks). In this post, we will focus on the problem of representing images using graphs.
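As a minimal sketch of this framing (the construction below is illustrative, not taken from the post): pixels become graph nodes, 4-neighbour edges are weighted by intensity similarity, and the eigendecomposition of the resulting graph Laplacian provides the spectral basis that GSP methods operate in.

```python
import numpy as np

def image_to_laplacian(img, sigma=0.1):
    """Build the combinatorial graph Laplacian L = D - A of a grid graph
    over the image, with edge weights decaying in intensity difference."""
    h, w = img.shape
    n = h * w
    A = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    j = rr * w + cc
                    wgt = np.exp(-((img[r, c] - img[rr, cc]) ** 2) / sigma)
                    A[i, j] = A[j, i] = wgt
    D = np.diag(A.sum(axis=1))
    return D - A

img = np.linspace(0, 1, 16).reshape(4, 4)   # toy 4x4 "image"
L = image_to_laplacian(img)
evals, evecs = np.linalg.eigh(L)            # graph Fourier basis
# For a connected graph the smallest eigenvalue is 0; the remaining
# eigenvectors play the role of frequencies for signals on the image.
```

Projecting the pixel intensities onto `evecs` gives a "graph Fourier transform" of the image, which is the starting point for the spectral methods mentioned above.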

by   -   November 11, 2020

AIhub | Tweets round-up
We bring you a selection of some interesting and popular tweets about AI from October.

by   -   November 10, 2020

AIhub ambassador

Are you a PhD student or researcher with an interest in science communication? We are recruiting AIhub ambassadors to help us write about the latest news, research, conferences, and more, in the field of artificial intelligence and machine learning.

by   -   November 9, 2020
Figure 1: An encoder-decoder generative model of translation pairs, which helps to circumvent the limitation discussed before. There is a global distribution \mathcal{D} over the representation space \mathcal{Z}, from which sentences of language L_i are generated via decoder D_i. Similarly, sentences could also be encoded via E_i to \mathcal{Z}.

By Han Zhao and Andrej Risteski

Despite the recent improvements in neural machine translation (NMT), training a large NMT model with hundreds of millions of parameters usually requires large-scale parallel corpora, on the order of millions or even billions of aligned sentences for supervised training (Arivazhagan et al.). While it might be possible to automatically crawl the web to collect parallel sentences for high-resource language pairs, such as German-English and French-English, it is often infeasible or expensive to manually translate large amounts of sentences for low-resource language pairs, such as Nepali-English, Sinhala-English, etc. To this end, the goal of multilingual universal machine translation, or universal machine translation (UMT) for short, is to learn to translate between any pair of languages using a single system, given pairs of translated documents for only some of these languages.
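The encoder-decoder picture in Figure 1 can be caricatured in a few lines. The toy below is illustrative only (linear maps standing in for trained encoders and decoders, not the authors' system): each language i has an encoder E_i into a shared representation space Z and a decoder D_i out of it, so a single system translates between any pair by routing through Z.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_lang, dim_z = 6, 4

# Toy linear "encoders" into the shared space Z, one per language.
# Each "decoder" is the pseudo-inverse of its encoder, so encoding
# then decoding round-trips within a language.
E = {i: rng.normal(size=(dim_z, dim_lang)) for i in (1, 2)}
D = {i: np.linalg.pinv(E[i]) for i in (1, 2)}

def translate(x, src, tgt):
    """Translate a 'sentence' vector by routing through the shared space Z."""
    z = E[src] @ x      # encode source language into Z
    return D[tgt] @ z   # decode from Z into the target language

x = rng.normal(size=dim_lang)       # a "sentence" in language 1
y = translate(x, src=1, tgt=2)      # its "translation" into language 2
# The source sentence and its translation encode to the same point of Z:
# that shared point is what lets one system cover every language pair.
```

With N languages this needs only N encoder-decoder pairs rather than one model per pair, which is the structural appeal of the shared-representation view, independent of how the encoders are actually trained.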
