AIhub.org



ML@CMU


website   |   @mldcmu

The Carnegie Mellon University Machine Learning (CMU ML) Blog provides an accessible, general-audience medium for CMU researchers to communicate research findings, perspectives on the field of machine learning, and various updates to both experts and the wider public. Posts are written by students, postdocs, and faculty at CMU, cover the variety of machine learning topics studied there, and appear approximately bi-weekly.




recent posts:


›   Compression, transduction, and creation: a unified framework for evaluating natural language generation


›   The limitations of limited context for constituency parsing


›   Strategic instrumental variable regression: recovering causal relationships from strategic responses


›   A unifying, game-theoretic framework for imitation learning


›   Document grounded generation


›   See, Hear, Explore: curiosity via audio-visual association


›   A learning theoretic perspective on local explainability


›   Counterfactual predictions under runtime confounding


›   Tilted empirical risk minimization


›   An inferential perspective on federated learning


›   Representational aspects of depth and conditioning in normalizing flows


›   Experiments with the ICML 2020 peer-review process


›   On learning language-invariant representations for universal machine translation


›   Plan2Explore: active model-building for self-supervised visual reinforcement learning


›   Generalizing randomized smoothing for pointwise-certified defenses to data poisoning attacks


›   Topics in data analysis


›   High-frequency component helps explain the generalization of convolutional neural networks


›   Maintaining the illusion of reality: transfer in RL by keeping agents in the DARC


›   In defense of weight-sharing for neural architecture search: an optimization perspective


›   Learning to explore using active neural SLAM


›   Differentiable reasoning over text


›   Unsupervised meta-learning: learning to learn without supervision


›   Learning DAGs with continuous optimization


›   Are sixteen heads really better than one?














©2024 - Association for the Understanding of Artificial Intelligence