
articles

by   -   December 3, 2020
Protein PCMT1 PDB, by Emw CC BY-SA 3.0, via Wikimedia Commons.

The grand challenge of protein folding hit the news this week when it was announced that the latest version of DeepMind’s AlphaFold system had predicted protein structures with very high accuracy in CASP’s 2020 experiment.

Proteins are large, complex molecules, and the shape of a particular protein is closely linked to the function it performs. The ability to accurately predict protein structures would enable scientists to gain a greater understanding of how they work and what they do.

by   -   December 1, 2020

AIhub arXiv roundup

What’s hot on arXiv? Here are the most tweeted papers that were uploaded onto arXiv during November 2020.

Results are powered by Arxiv Sanity Preserver.

by   -   November 30, 2020

A 2D cost surface.

By Aldo Pacchiano, Jack Parker-Holder, Luke Metz, and Jakob Foerster

Goodhart’s Law is an adage which states the following:

“When a measure becomes a target, it ceases to be a good measure.”

This is particularly pertinent in machine learning, where many of our greatest achievements come from optimizing a target in the form of a loss function. The most prominent way to do so is stochastic gradient descent (SGD), which applies a simple rule: follow the gradient.
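As a concrete illustration of that rule, here is a minimal sketch of mini-batch SGD on a least-squares objective; the loss, learning rate and batch size are hypothetical choices for illustration only, not taken from the post.

import numpy as np

# Hypothetical objective: mean squared error of a linear model, L(w) = ||Xw - y||^2 / n.
def loss_and_grad(w, X, y):
    residual = X @ w - y
    loss = np.mean(residual ** 2)
    grad = 2 * X.T @ residual / len(y)
    return loss, grad

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(5)
lr = 0.1  # assumed learning rate
for step in range(200):
    idx = rng.choice(len(y), size=16, replace=False)  # a random mini-batch makes the gradient "stochastic"
    _, g = loss_and_grad(w, X[idx], y[idx])
    w -= lr * g  # the simple rule: follow the (negative) gradient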

by   -   November 20, 2020

By Marvin Zhang

Imagine that you are building the next-generation machine learning model for handwriting transcription. Based on previous iterations of your product, you have identified a key challenge for this rollout: after deployment, new end users often have different and unseen handwriting styles, leading to distribution shift. One solution to this challenge is to learn an adaptive model that can specialize and adjust to each user’s handwriting style over time. This solution seems promising, but it must be balanced against concerns about ease of use: requiring users to provide feedback to the model may be cumbersome and hinder adoption. Is it possible instead to learn a model that can adapt to new users without labels?
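One common way to adapt without labels (used here purely as an illustration, not necessarily the method the post describes) is to fine-tune the deployed model at test time by minimizing its prediction entropy on the new user's unlabeled inputs. A minimal PyTorch-style sketch, assuming model is any classifier and unlabeled_x is a batch of that user's handwriting:

import torch
import torch.nn.functional as F

def adapt_without_labels(model, unlabeled_x, steps=10, lr=1e-3):
    # Illustrative test-time adaptation: minimize prediction entropy on
    # unlabeled inputs from a new user. A generic heuristic, not the
    # specific method described in the post.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(unlabeled_x)
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
    return model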

by   -   November 12, 2020

In recent years, graphs and their associated spectral decompositions have emerged as a unified representation for image analysis and processing. This area of research falls broadly under Graph Signal Processing (GSP), an emerging field that has produced algorithms across a range of topics (including neural networks, in the form of Graph Convolutional Networks). In this post, we will focus on the problem of representing images using graphs.
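To make the image-as-graph idea concrete, here is a small sketch (my own illustration, not code from the post) that builds the 4-connected grid graph of an h x w image and computes its combinatorial Laplacian L = D - A, whose eigenvectors play the role of a graph Fourier basis in GSP:

import numpy as np

def grid_laplacian(h, w):
    # Combinatorial Laplacian L = D - A of a 4-connected h x w pixel grid,
    # the graph typically used to represent an image.
    n = h * w
    A = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            u = i * w + j
            if j + 1 < w:            # right neighbour
                A[u, u + 1] = A[u + 1, u] = 1
            if i + 1 < h:            # bottom neighbour
                A[u, u + w] = A[u + w, u] = 1
    D = np.diag(A.sum(axis=1))
    return D - A

L = grid_laplacian(4, 4)
# Eigenvectors of L form a "graph Fourier basis" for signals (pixel values)
# defined on the image grid; this is the spectral decomposition used in GSP.
eigenvalues, eigenvectors = np.linalg.eigh(L)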

by   -   November 9, 2020
Figure 1: An encoder-decoder generative model of translation pairs, which helps to circumvent the limitation discussed before. There is a global distribution $\mathcal{D}$ over the representation space $\mathcal{Z}$, from which sentences of language $L_i$ are generated via decoder $D_i$. Similarly, sentences can also be encoded via $E_i$ into $\mathcal{Z}$.

By Han Zhao and Andrej Risteski

Despite recent improvements in neural machine translation (NMT), training a large NMT model with hundreds of millions of parameters usually requires a collection of parallel corpora at a large scale, on the order of millions or even billions of aligned sentences for supervised training (Arivazhagan et al.). While it might be possible to automatically crawl the web to collect parallel sentences for high-resource language pairs, such as German-English and French-English, it is often infeasible or expensive to manually translate large numbers of sentences for low-resource language pairs, such as Nepali-English and Sinhala-English. To this end, the goal of so-called multilingual universal machine translation, also known as universal machine translation (UMT), is to learn to translate between any pair of languages using a single system, given pairs of translated documents for only some of these languages.
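The encoder-decoder view in Figure 1 can be sketched schematically as follows: each language i gets an encoder E_i into a shared representation space Z and a decoder D_i out of it, so translating from language i to j amounts to composing D_j with E_i. The module choices below (an embedding average and a linear decoder) are placeholders for illustration, not the architecture from the post.

import torch
import torch.nn as nn

class UniversalTranslator(nn.Module):
    # Schematic of the structure in Figure 1: every language has an encoder
    # into a shared representation space Z and a decoder out of it.
    # Translation from src to tgt is D_tgt(E_src(sentence)).
    def __init__(self, languages, vocab_sizes, dim=512):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {lang: nn.Embedding(vocab_sizes[lang], dim) for lang in languages})
        self.decoders = nn.ModuleDict(
            {lang: nn.Linear(dim, vocab_sizes[lang]) for lang in languages})

    def translate(self, tokens, src, tgt):
        z = self.encoders[src](tokens).mean(dim=0)   # E_src: sentence -> Z (crude pooling)
        return self.decoders[tgt](z)                 # D_tgt: Z -> target-language logits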

by   -   November 5, 2020

BAIR blog

By Ben Eysenbach, Aviral Kumar and Abhishek Gupta

The two most common perspectives on reinforcement learning (RL) are optimization and dynamic programming. Methods that compute the gradients of the non-differentiable expected reward objective, such as the REINFORCE trick, are commonly grouped into the optimization perspective, whereas methods that employ TD-learning or Q-learning are dynamic programming methods. While these methods have shown considerable success in recent years, they are still quite challenging to apply to new problems. In contrast, deep supervised learning has been extremely successful, and we may hence ask: can we use supervised learning to perform RL?
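For reference, the REINFORCE trick mentioned above estimates the gradient of the expected return as E[R ∇ log π(a|s)]; here is a minimal sketch of the corresponding surrogate loss for one episode (illustrative only, not code from the post):

import torch

def reinforce_loss(log_probs, rewards):
    # log_probs: tensor of log pi(a_t | s_t) for the actions taken in the episode.
    # rewards:   tensor of per-step rewards.
    # Minimizing this surrogate follows the policy-gradient estimate
    # E[ R * grad log pi(a|s) ] of the (non-differentiable) expected return.
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), dim=0), [0])  # returns-to-go
    return -(log_probs * returns.detach()).sum()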

by   -   November 2, 2020

AIhub arXiv roundup

What’s hot on arXiv? Here are the most tweeted papers that were uploaded onto arXiv during October 2020.

Results are powered by Arxiv Sanity Preserver.

by   -   October 28, 2020

piano

For those interested in music and AI, a session on “Human collaboration with an AI musician” at the AI for Good global summit proved to be a real treat. The session included a performance by two musicians situated on opposite sides of the globe, who improvised alongside the third member of the group: an AI “musician”.

by and   -   October 27, 2020


By Oleh Rybkin, Danijar Hafner and Deepak Pathak

To operate successfully in unstructured open-world environments, autonomous intelligent agents need to solve many different tasks and learn new tasks quickly. Reinforcement learning has enabled artificial agents to solve complex tasks both in simulation and in the real world. However, it requires collecting large amounts of experience in the environment, and the agent learns only that particular task, much like a student memorizing a lecture without understanding it. Self-supervised reinforcement learning has emerged as an alternative, in which the agent follows only an intrinsic objective that is independent of any individual task, analogously to unsupervised representation learning. After experimenting with the environment without supervision, the agent builds an understanding of the environment, which enables it to adapt to specific downstream tasks more efficiently.
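One common form of such an intrinsic objective (shown here only as an illustration, not necessarily the objective used in the post) rewards the agent for visiting transitions that its learned world model predicts poorly, so that maximizing the reward drives task-agnostic exploration. The sketch assumes world_model is any module mapping a (state, action) pair to a predicted next state:

import torch
import torch.nn.functional as F

def curiosity_reward(world_model, state, action, next_state):
    # Illustrative intrinsic reward: the prediction error of a learned
    # dynamics model. High error marks unfamiliar transitions, so an agent
    # maximizing this reward explores without any task-specific supervision.
    with torch.no_grad():
        predicted_next = world_model(state, action)  # assumed signature
    return F.mse_loss(predicted_next, next_state, reduction="none").mean(dim=-1)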
