We are excited to announce that next month we will be launching the AIhub focus issue on “AI for Good”, which will concentrate specifically on the UN Sustainable Development Goals (SDGs). Each month we will pick a different goal and highlight work in that area.
Last week saw the virtual running of the 12th Asian Conference on Machine Learning (ACML). The event had been due to be held in Thailand, but instead went online and the organisers decided to make all content freely available. You can watch all of the invited talks, tutorials, workshops, and video presentations of the contributed papers. Also, find out who won the conference awards.
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Ryan Calo about robot regulation.
Imagine that you are building the next-generation machine learning model for handwriting transcription. Based on previous iterations of your product, you have identified a key challenge for this rollout: after deployment, new end users often have different and unseen handwriting styles, leading to distribution shift. One solution to this challenge is to learn an adaptive model that can specialize and adjust to each user’s handwriting style over time. This solution seems promising, but it must be balanced against concerns about ease of use: requiring users to provide feedback to the model may be cumbersome and hinder adoption. Is it possible instead to learn a model that can adapt to new users without labels?
Last week the Open Data Institute (ODI) hosted their annual summit. This year the event was held virtually and included keynote talks, panel discussions and expo booths. The summit brought together people from a range of sectors to discuss the future of data. Topics covered included the use of data in innovation, climate studies, health, policy making, and more.
Here you can find a list of the AI-related seminars that are scheduled to take place between now and the end of December 2020. We’ve also listed recent past seminars that are available for you to watch. All events detailed here are free and open for anyone to attend virtually.
In recent years, graphs and their associated spectral decompositions have emerged as a unified representation for image analysis and processing. This area of research falls broadly under Graph Signal Processing (GSP), an emerging field that has produced algorithms across a range of topics (including neural networks, in the form of graph convolutional networks). In this post, we will focus on the problem of representing images using graphs.
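To make the image-as-graph idea concrete, here is a minimal sketch (not from the post itself) of one common construction: each pixel becomes a node, edges connect 4-neighbours, and edge weights reflect intensity similarity. The function name `image_to_graph` and the Gaussian similarity bandwidth `sigma` are illustrative choices, not part of the original article.

```python
import numpy as np

def image_to_graph(img, sigma=0.1):
    """Represent a grayscale image as a weighted graph.

    Each pixel is a node; edges connect 4-neighbours (right/down),
    weighted by a Gaussian similarity of pixel intensities.
    Returns the adjacency matrix W and the combinatorial graph
    Laplacian L = D - W, the object GSP methods decompose spectrally.
    """
    h, w = img.shape
    n = h * w
    W = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c  # flatten (row, col) to a node index
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    j = rr * w + cc
                    weight = np.exp(-((img[r, c] - img[rr, cc]) ** 2) / sigma**2)
                    W[i, j] = W[j, i] = weight  # undirected graph
    D = np.diag(W.sum(axis=1))  # degree matrix
    L = D - W
    return W, L

# Tiny 2x2 example: four pixels, four neighbour edges.
img = np.array([[0.0, 0.1],
                [0.9, 1.0]])
W, L = image_to_graph(img)
```

The eigenvectors of `L` then play the role that Fourier basis functions play for regular signals, which is what makes the spectral view of images possible.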
Are you a PhD student or researcher with an interest in science communication? We are recruiting AIhub ambassadors to help us write about the latest news, research, conferences, and more, in the field of artificial intelligence and machine learning.
Despite recent improvements in neural machine translation (NMT), training a large NMT model with hundreds of millions of parameters usually requires a large-scale collection of parallel corpora, on the order of millions or even billions of aligned sentences, for supervised training (Arivazhagan et al.). While it may be possible to automatically crawl the web to collect parallel sentences for high-resource language pairs, such as German-English and French-English, it is often infeasible or expensive to manually translate large numbers of sentences for low-resource language pairs, such as Nepali-English and Sinhala-English. To this end, the goal of multilingual universal machine translation, or simply universal machine translation (UMT), is to learn to translate between any pair of languages using a single system, given pairs of translated documents for only some of those languages.