AIhub.org



BAIR blog


website   |   @berkeley_ai

The Berkeley Artificial Intelligence Research (BAIR) Lab brings together UC Berkeley researchers across the areas of computer vision, machine learning, natural language processing, planning, and robotics. BAIR includes over two dozen faculty and more than a hundred graduate students pursuing research on fundamental advances in these areas as well as cross-cutting themes including multi-modal deep learning, human-compatible AI, and connecting AI with other scientific disciplines and the humanities. The BAIR Blog provides an accessible, general-audience medium for BAIR researchers to communicate research findings, perspectives on the field, and various updates. Posts are written by students, post-docs, and faculty in BAIR, and are intended to provide relevant and timely discussion of research findings and results, to both experts and the general audience.




recent posts:


›   Decentralized reinforcement learning: global decision-making via local economic transactions


›   D4RL: building better benchmarks for offline reinforcement learning


›   Open compound domain adaptation


›   OmniTact: a multi-directional high-resolution touch sensor


›   The ingredients of real world robotic reinforcement learning


›   Making decision trees accurate again: explaining what explainable AI did not


›   Robots learning to move like animals


›   Physically realistic attacks on deep reinforcement learning


›   Unsupervised meta-learning: learning to learn without supervision


›   Does on-policy data collection fix errors in off-policy reinforcement learning?


›   BADGR: the Berkeley autonomous driving ground robot


›   Speeding up transformer training and inference by increasing model size


›   Large-scale training at BAIR with Ray Tune


›   Emergent behavior by minimizing chaos


›   What is my data worth?


›   Learning to imitate human demonstrations via CycleGAN


›   Model-based reinforcement learning: theory and practice


›   Data-driven deep reinforcement learning


›   RoboNet: a dataset for large-scale multi-robot learning


›   Look then listen: pre-learning environment representations for data-efficient neural instruction following


›   Deep dynamics models for dexterous manipulation


›   Evaluating and testing unintended memorization in neural networks


›   1000x faster data augmentation


›   Model-based reinforcement learning from pixels with structured latent variable models


›   Robots that learn to adapt


›   Manipulation by feel





©2025.05 - Association for the Understanding of Artificial Intelligence