May 29, 2020


Researchers have developed Morpheus, a model for generating pixel-level morphological classifications of astronomical sources. It can analyze astronomical image data pixel by pixel to identify and classify all of the galaxies and stars in large data sets from astronomy surveys.

May 28, 2020


By Xue Bin (Jason) Peng

Whether it’s a dog chasing after a ball, or a monkey swinging through the trees, animals can effortlessly perform an incredibly rich repertoire of agile locomotion skills. But designing controllers that enable legged robots to replicate these agile behaviors can be a very challenging task. The superior agility seen in animals, as compared to robots, might lead one to wonder: can we create more agile robotic controllers with less effort by directly imitating animals?

May 27, 2020


By Sarah Wild

Dr Luciano Zuccarello grew up in the shadow of Mount Etna, an active volcano on the Italian island of Sicily. Farms and orchards ring the lower slopes of the volcano, where the fertile soil is ideal for agriculture. But the volcano looms large in the life of locals because it is also one of the most active volcanoes in the world.

More than 29 million people globally live within 10 km of a volcano, and understanding volcanoes’ behaviour – and being able to predict when they are going to erupt or spew ash into the air – is vital for safeguarding people’s wellbeing.

May 26, 2020
Artistic impression of schematic experimental set-up for photon counting. The section in the light grey box corresponds to the thermal light part of the experiment and the section in the dark grey box corresponds to the coherent light part.

The identification of light sources is very important for the development of photonic technologies such as light detection and ranging (LiDAR) and microscopy. Typically, a large number of measurements is needed to classify light sources such as sunlight, laser radiation, and molecule fluorescence: identification has required collecting photon statistics or performing quantum state tomography. In recently published work, researchers have used a neural network to dramatically reduce the number of measurements required to discriminate thermal light from coherent light at the single-photon level.
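The statistical distinction the classifier exploits can be illustrated with a toy sketch (not the authors' model, and all parameter values here are illustrative): coherent light has Poissonian photon-number statistics, while thermal light follows a Bose-Einstein (geometric) distribution with the same mean but larger variance. A simple learned classifier, here plain logistic regression standing in for the paper's neural network, can separate the two from only a handful of photon-number measurements per sample.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_n = 1.5      # assumed mean photon number (illustrative)
n_meas = 10       # measurements per sample; the point is keeping this small
n_samples = 2000

# Coherent light: Poissonian photon-number statistics, Var(n) = n̄.
coherent = rng.poisson(mean_n, size=(n_samples, n_meas))
# Thermal light: Bose-Einstein (geometric) statistics with the same mean,
# Var(n) = n̄ + n̄².  NumPy's geometric has support {1, 2, ...}, so shift by 1.
thermal = rng.geometric(1.0 / (1.0 + mean_n), size=(n_samples, n_meas)) - 1

X = np.vstack([coherent, thermal]).astype(float)
y = np.concatenate([np.zeros(n_samples), np.ones(n_samples)])

# Per-sample mean and variance as features: the means match by construction,
# so the variance is what carries the thermal-vs-coherent signal.
feats = np.column_stack([X.mean(axis=1), X.var(axis=1)])
F = np.column_stack([np.ones(len(feats)), feats])

# Logistic regression trained by gradient descent (stand-in for the NN).
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-F @ w))
    w -= 0.05 * F.T @ (p - y) / len(y)

acc = ((1.0 / (1.0 + np.exp(-F @ w)) > 0.5) == y).mean()
print(f"train accuracy with {n_meas} measurements per sample: {acc:.2f}")
```

With only ten photon counts per sample the variance estimate is noisy, so accuracy is well below what full photon statistics would give, which is exactly the regime where a better-trained discriminator pays off.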

May 25, 2020
Qualitative results for five different cases from the test set. The top row shows the image slice, the second row shows the ground-truth segmentation, and the bottom row shows the predicted segmentation given by the CNN model.

Researchers have developed an algorithm that can detect and identify different types of brain injuries. The team, from the University of Cambridge, Imperial College London and CONICET, have clinically validated and tested their method on large sets of CT scans and found that it could successfully detect, segment, quantify and differentiate different types of brain lesions.

May 22, 2020


The virtual International Conference on Learning Representations (ICLR) was held on 26-30 April and included eight keynote talks. In part two of our round-up we summarise the final four presentations. Courtesy of the conference organisers, you can watch the talks in full and see the question and answer sessions.

May 21, 2020


By Adam Gleave

Deep reinforcement learning (RL) has achieved superhuman performance in problems ranging from data center cooling to video games. RL policies may soon be widely deployed, with research underway in autonomous driving, negotiation and automated trading. Many potential applications are safety-critical: automated trading failures caused Knight Capital to lose USD 460M, while faulty autonomous vehicles have resulted in loss of life.


By Yuan Yang

In recent years, we have witnessed the success of modern machine learning (ML) models. Many of them have led to unprecedented breakthroughs in a wide range of applications, such as AlphaGo beating a human world champion or the introduction of autonomous vehicles.

There has been continuous effort, both from industry and academia, to extend such advances to solving real-life problems. However, converting a successful ML model into a real-world product is still a nontrivial task.

May 19, 2020


The current paradigm of artificial intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are seemingly small design decisions that led to a subtle reframing of some of the field’s original goals and are now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI.

Far from being a series of separate problems, recent cases of unexpected effects of AI are the consequences of those very choices that enabled the field to succeed, which is why they will be difficult to solve. Research at the University of Bristol has considered three of these choices, investigating their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability.

May 18, 2020
Professor Cecilia Mascolo at the University of Cambridge, UK, hopes her coronavirus sounds app could provide the data needed to build a quick and cheap Covid-19 screening test in the future. Image credit – Salvatore Scellato

By Richard Gray

A recording of a cough, the noise of a person’s breathing or even the sound of their voice could be used to help diagnose patients with Covid-19 in the future, according to Professor Cecilia Mascolo, co-director of the centre for mobile, wearable systems and augmented intelligence at the University of Cambridge, UK.

Prof. Mascolo has developed a sound-collecting app to help train machine learning algorithms to detect the tell-tale sounds of coronavirus infection. She hopes the app, created as part of a project called EAR, might eventually lead to new ways of diagnosing respiratory diseases and help in the global fight against coronavirus.

May 15, 2020


The AIhub coffee corner captures the musings of AI experts over a 30-minute conversation. In light of the recent EU white paper on AI and the US proposed guidance for regulation, our experts discuss how far regulation should go.

May 14, 2020

The virtual International Conference on Learning Representations (ICLR) was held on 26-30 April and included eight keynote talks, with a wide range of topics covered. In this post we summarise the first four presentations. Courtesy of the conference organisers, you can watch the talks in full and see the question and answer sessions too.

May 13, 2020


The semiconductor industry as we know it is facing a critical roadblock that will lead to the end of Moore’s law. As transistors continue to shrink, quantum effects increasingly disrupt their operation. As such, the development of “beyond CMOS” devices has begun.

May 12, 2020
In unsupervised meta-learning, the agent proposes its own tasks, rather than relying on tasks proposed by a human.

By Benjamin Eysenbach (Carnegie Mellon University) and Abhishek Gupta (UC Berkeley)

The history of machine learning has largely been a story of increasing abstraction. In the dawn of ML, researchers spent considerable effort engineering features. As deep learning gained popularity, researchers then shifted towards tuning the update rules and learning rates for their optimizers. Recent research in meta-learning has climbed one level of abstraction higher: many researchers now spend their days manually constructing task distributions, from which they can automatically learn good optimizers. What might be the next rung on this ladder? In this post we introduce theory and algorithms for unsupervised meta-learning, where machine learning algorithms themselves propose their own task distributions. Unsupervised meta-learning further reduces the amount of human supervision required to solve tasks, potentially inserting a new rung on this ladder of abstraction.
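The idea of an algorithm proposing its own task distribution can be made concrete with a toy sketch (not the authors' method; the anchor-based task construction and all hyperparameters are illustrative). Here tasks are invented from unlabeled data by labelling points according to which of two randomly chosen anchor points they are nearer, and a Reptile-style meta-update learns an initialization across these self-proposed tasks, with no human supervision involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled data: two blobs, but the learner never sees class labels.
data = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])

def propose_task():
    """Self-proposed task: pick two random anchor points and label every
    datum by which anchor it is closer to. No human labels are used."""
    anchors = data[rng.choice(len(data), 2, replace=False)]
    d = np.linalg.norm(data[:, None] - anchors[None], axis=2)
    return data, (d[:, 0] > d[:, 1]).astype(float)

def inner_train(w, X, y, steps=20, lr=0.1):
    """Adapt a logistic classifier to one proposed task by gradient descent."""
    Xb = np.column_stack([np.ones(len(X)), X])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w = w - lr * Xb.T @ (p - y) / len(y)
    return w

# Reptile-style meta-training: nudge the shared initialization toward the
# weights adapted on each self-proposed task.
meta_w = np.zeros(3)
for _ in range(200):
    X, y = propose_task()
    adapted = inner_train(meta_w.copy(), X, y)
    meta_w += 0.1 * (adapted - meta_w)

print("meta-learned initialization:", meta_w)
```

The point of the sketch is the division of labour: `propose_task` plays the role a human task designer would normally play, while the outer loop is an ordinary meta-learner, so removing the human from task design is what makes the procedure "unsupervised" meta-learning.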

May 11, 2020


In the latest in this series of posts, researchers from the EU-funded COMPRISE project write about privacy issues associated with voice assistants. They propose possible ways to maintain the privacy of users whilst ensuring that manufacturers can still access the quality usage data vital for improving the functionality of their products.
