AIhub.org



ML@CMU


website   |   @mldcmu

The Carnegie Mellon University Machine Learning (CMU ML) Blog provides an accessible medium for CMU researchers to communicate research findings, perspectives on the field of machine learning, and other updates to both experts and a general audience. Posts are written by students, postdocs, and faculty at CMU, cover a variety of machine learning topics studied at the university, and appear approximately bi-weekly.




recent posts:


›   Recurrent model-free RL can be a strong baseline for many POMDPs


›   Galaxies on graph neural networks


›   auton-survival: An open-source package for regression, counterfactual estimation, evaluation and phenotyping censored time-to-event data


›   Does AutoML work for diverse tasks?


›   Deep attentive variational inference


›   An experimental design perspective on model-based reinforcement learning


›   Assessing generalization of SGD via disagreement


›   Why spectral normalization stabilizes GANs: analysis and improvements


›   Improving RL with lookahead: learning off-policy with online planning


›   An energy-based perspective on learning observation models


›   Understanding user interfaces with screen parsing


›   Compression, transduction, and creation: a unified framework for evaluating natural language generation


›   The limitations of limited context for constituency parsing


›   Strategic instrumental variable regression: recovering causal relationships from strategic responses


›   A unifying, game-theoretic framework for imitation learning


›   Document grounded generation


›   See, Hear, Explore: curiosity via audio-visual association


›   A learning theoretic perspective on local explainability


›   Counterfactual predictions under runtime confounding


›   Tilted empirical risk minimization


›   An inferential perspective on federated learning


›   Representational aspects of depth and conditioning in normalizing flows


›   Experiments with the ICML 2020 peer-review process


›   On learning language-invariant representations for universal machine translation


›   Plan2Explore: active model-building for self-supervised visual reinforcement learning


›   Generalizing randomized smoothing for pointwise-certified defenses to data poisoning attacks


›   Topics in data analysis


›   High-frequency component helps explain the generalization of convolutional neural networks


›   Maintaining the illusion of reality: transfer in RL by keeping agents in the DARC


›   In defense of weight-sharing for neural architecture search: an optimization perspective









©2021 - Association for the Understanding of Artificial Intelligence
