AIhub.org



BAIR blog


website | @berkeley_ai

The Berkeley Artificial Intelligence Research (BAIR) Lab brings together UC Berkeley researchers across the areas of computer vision, machine learning, natural language processing, planning, and robotics. BAIR includes over two dozen faculty and more than a hundred graduate students pursuing fundamental advances in these areas as well as cross-cutting themes including multi-modal deep learning, human-compatible AI, and connecting AI with other scientific disciplines and the humanities. The BAIR Blog provides an accessible, general-audience medium for BAIR researchers to communicate research findings, perspectives on the field, and other updates. Posts are written by students, postdocs, and faculty in BAIR and are intended to provide relevant and timely discussion of research findings and results to both experts and a general audience.




recent posts:


›   The shift from models to compound AI systems


›   Ghostbuster: detecting text ghostwritten by large language models


›   Asymmetric certified robustness via feature-convex neural networks


›   Goal representations for instruction following


›   Training diffusion models with reinforcement learning


›   On the stepwise nature of self-supervised learning


›   Generating 3D molecular conformers via equivariant coarse-graining and aggregated attention


›   GPT-4 + Stable-Diffusion = ?: Enhancing prompt understanding of text-to-image diffusion models with large language models


›   Koala: A dialogue model for academic research


›   Fully autonomous real-world reinforcement learning with applications to mobile manipulation


›   Keeping learning-based control safe by regulating distributional shift


›   Reverse engineering the NTK: towards first-principles architecture design


›   Why do policy gradient methods work so well in cooperative MARL? Evidence from policy representation


›   FIGS: Attaining XGBoost-level performance with the interpretability and speed of CART


›   The Berkeley Crossword Solver


›   Rethinking human-in-the-loop for artificial augmented intelligence


›   Designing societally beneficial reinforcement learning systems


›   Should I use offline RL or imitation learning?


›   Offline RL made easier: no TD learning, advantage reweighting, or transformers


›   Unsupervised skill discovery with contrastive intrinsic control


›   imodels: leveraging the unreasonable effectiveness of rules


›   The unsupervised reinforcement learning benchmark


›   Sequence modeling solutions for reinforcement learning problems


›   Which mutual information representation learning objectives are sufficient for control?


›   Bridge data: boosting generalization of robotic skills with cross-domain datasets


›   Why generalization in RL is difficult: epistemic POMDPs and implicit partial observability


›   Designs from data: offline black-box optimization via conservative training


›   A first-principles theory of neural network generalization


›   Making RL tractable by learning more informative reward functions: example-based control, meta-learning, and normalized maximum likelihood


›   PICO: Pragmatic compression for human-in-the-loop decision-making














©2024 - Association for the Understanding of Artificial Intelligence

