AIhub.org



ML@CMU


website   |   @mldcmu

The Carnegie Mellon University Machine Learning (CMU ML) Blog provides an accessible medium for CMU researchers to communicate research findings and perspectives on the field of machine learning to both experts and a general audience. Posts are written by students, postdocs, and faculty at CMU, cover a variety of machine learning topics studied there, and appear approximately bi-weekly.




recent posts:


›   On noisy evaluation in federated hyperparameter tuning


›   Test-time adaptation with slot-centric models


›   Navigating to objects in the real world


›   On privacy and personalization in federated learning: a retrospective on the US/UK PETs challenge


›   TIDEE: An embodied agent that tidies up novel rooms using commonsense priors


›   Are model explanations useful in practice? Rethinking how to support human-ML interactions


›   RLPrompt: Optimizing discrete text prompts with reinforcement learning


›   Bottom-up top-down detection transformers for open vocabulary object detection


›   Causal confounds in sequential decision making


›   Tackling diverse tasks with neural architecture search


›   Tracking any pixel in a video


›   Recurrent model-free RL can be a strong baseline for many POMDPs


›   Galaxies on graph neural networks


›   auton-survival: An open-source package for regression, counterfactual estimation, evaluation and phenotyping censored time-to-event data


›   Does AutoML work for diverse tasks?


›   Deep attentive variational inference


›   An experimental design perspective on model-based reinforcement learning


›   Assessing generalization of SGD via disagreement


›   Why spectral normalization stabilizes GANs: analysis and improvements


›   Improving RL with lookahead: learning off-policy with online planning


›   An energy-based perspective on learning observation models


›   Understanding user interfaces with screen parsing


›   Compression, transduction, and creation: a unified framework for evaluating natural language generation


›   The limitations of limited context for constituency parsing


›   Strategic instrumental variable regression: recovering causal relationships from strategic responses


›   A unifying, game-theoretic framework for imitation learning


›   Document grounded generation


›   See, Hear, Explore: curiosity via audio-visual association


›   A learning theoretic perspective on local explainability


›   Counterfactual predictions under runtime confounding





next page →



AIhub is supported by:






©2024 - Association for the Understanding of Artificial Intelligence

