AIhub.org


Deep attentive variational inference

The expressivity of current deep probabilistic models can be improved by selectively prioritizing statistical dependencies between latent variables that are potentially distant from each other.
24 June 2022

Rethinking human-in-the-loop for artificial augmented intelligence

How do we build and evaluate an AI system for real-world applications?
17 June 2022

Bootstrapped meta-learning – an interview with Sebastian Flennerhag

ICLR 2022 award winner tells us how he and his co-authors approached the meta-learning problem.
07 June 2022

Designing societally beneficial reinforcement learning systems

Studying the risks associated with using reinforcement learning for real-world applications.
31 May 2022

An experimental design perspective on model-based reinforcement learning

We propose a simple algorithm that is able to solve a wide variety of control tasks.
19 May 2022

Should I use offline RL or imitation learning?

In this blog post, we aim to understand if, when, and why offline RL is a better approach for tackling a variety of sequential decision-making problems.
17 May 2022


Offline RL made easier: no TD learning, advantage reweighting, or transformers

We try to identify the essential elements of offline RL via supervised learning.
03 May 2022

Unsupervised skill discovery with contrastive intrinsic control

Unsupervised reinforcement learning (RL), where RL agents pre-train with self-supervised rewards, is an emerging paradigm for developing RL agents that are capable of generalization.
01 April 2022

Assessing generalization of SGD via disagreement

We demonstrate that a simple procedure can accurately estimate the generalization error with only unlabeled data.
21 March 2022

imodels: leveraging the unreasonable effectiveness of rules

imodels provides a simple unified interface and implementation for many state-of-the-art interpretable modeling techniques, particularly rule-based methods.
14 March 2022

Why spectral normalization stabilizes GANs: analysis and improvements

We investigate the training stability of generative adversarial networks (GANs).
07 March 2022

The unsupervised reinforcement learning benchmark

We consider the unsupervised RL problem: how do we learn useful behaviors without supervision and then adapt them to solve downstream tasks quickly?
14 February 2022

Improving RL with lookahead: learning off-policy with online planning

We suggest using a policy that looks ahead using a learned model to find the best action sequence.
11 February 2022

Sequence modeling solutions for reinforcement learning problems

We tackle large-scale reinforcement learning problems with the toolbox of sequence modeling.
03 February 2022

An energy-based perspective on learning observation models

We propose a conceptually novel approach to mapping sensor readings into states.
21 January 2022

Which mutual information representation learning objectives are sufficient for control?

How can we best design representation learning objectives?
10 January 2022

Understanding user interfaces with screen parsing

We introduce the problem of screen parsing, which we use to predict structured user interface models from visual information.
03 January 2022

Bridge data: boosting generalization of robotic skills with cross-domain datasets

With our proposed dataset and multi-task, multi-domain learning approach, we have shown one potential avenue for making diverse datasets reusable in robotics.
30 December 2021

Why generalization in RL is difficult: epistemic POMDPs and implicit partial observability

In this blog post, we will aim to explain why generalization in RL is fundamentally hard, even in theory.
21 December 2021

Designs from data: offline black-box optimization via conservative training

In this post, we discuss offline model-based optimization and some recent advances in this area.
10 December 2021

A first-principles theory of neural network generalization

Find out more about research trying to shed light on the workings of deep neural networks.
22 November 2021

Compression, transduction, and creation: a unified framework for evaluating natural language generation

Our framework classifies language generation tasks into compression, transduction, and creation.
18 November 2021

Making RL tractable by learning more informative reward functions: example-based control, meta-learning, and normalized maximum likelihood

This article presents MURAL, a method for learning uncertainty-aware rewards for RL.
15 November 2021

PICO: Pragmatic compression for human-in-the-loop decision-making

In this post, we outline a pragmatic compression algorithm called PICO.
05 November 2021

Unsolved ML safety problems

We provide a new roadmap for ML Safety and aim to refine the technical problems that the field needs to address.
26 October 2021

Distilling neural networks into wavelet models using interpretations

We propose a method which distills information from a trained DNN into a wavelet transform.
18 October 2021

What can I do here? Learning new skills by imagining visual affordances

Can we apply a strategy analogous to a child's learning in a robotic learning system?
30 September 2021

The limitations of limited context for constituency parsing

We consider the representational power of two important frameworks for constituency parsing.
24 September 2021

Strategic instrumental variable regression: recovering causal relationships from strategic responses

How can we infer causal relationships in data while employing a decision-making model?
08 September 2021

Universal weakly supervised segmentation by pixel-to-segment contrastive learning

Can a machine learn from a few labeled pixels to predict every pixel in a new image?
17 August 2021

©2021 - Association for the Understanding of Artificial Intelligence