AIhub.org



ML@CMU


website   |   @mldcmu

The Carnegie Mellon University Machine Learning (CMU ML) Blog provides an accessible medium for CMU researchers to communicate research findings, perspectives on the field of machine learning, and other updates to both experts and a general audience. Posts are written by students, postdocs, and faculty at CMU, cover a variety of machine learning topics studied at CMU, and appear approximately bi-weekly.




recent posts:


›   Identification of hazardous areas for priority landmine clearance: AI for humanitarian mine action


›   VQAScore: Evaluating and improving vision-language generative models


›   No free lunch in LLM watermarking: Trade-offs in watermarking design choices


›   Rethinking LLM memorization


›   Causal inference under incentives: an annotated reading list


›   CMU-MATH team’s innovative approach secures 2nd place at the AIMO prize


›   How to regularize your regression


›   Beyond the mud: Datasets, benchmarks, and methods for computer vision in off-road racing


›   On noisy evaluation in federated hyperparameter tuning


›   Test-time adaptation with slot-centric models


›   Navigating to objects in the real world


›   On privacy and personalization in federated learning: a retrospective on the US/UK PETs challenge


›   TIDEE: An embodied agent that tidies up novel rooms using commonsense priors


›   Are model explanations useful in practice? Rethinking how to support human-ML interactions


›   RLPrompt: Optimizing discrete text prompts with reinforcement learning


›   Bottom-up top-down detection transformers for open vocabulary object detection


›   Causal confounds in sequential decision making


›   Tackling diverse tasks with neural architecture search


›   Tracking any pixel in a video


›   Recurrent model-free RL can be a strong baseline for many POMDPs


›   Galaxies on graph neural networks


›   auton-survival: An open-source package for regression, counterfactual estimation, evaluation and phenotyping censored time-to-event data


›   Does AutoML work for diverse tasks?


›   Deep attentive variational inference


›   An experimental design perspective on model-based reinforcement learning


›   Assessing generalization of SGD via disagreement


›   Why spectral normalization stabilizes GANs: analysis and improvements


›   Improving RL with lookahead: learning off-policy with online planning


›   An energy-based perspective on learning observation models


›   Understanding user interfaces with screen parsing




