In ML classification tasks, achieving high accuracy is only part of the goal; it's equally important for models to express how confident they are in their predictions.
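To make "expressing confidence" concrete, here is a minimal sketch of one way to score it: take the model's softmax probability on its predicted class as its stated confidence, and measure expected calibration error (ECE), i.e. how well those confidences match observed accuracy. The random logits and labels are stand-ins, and the metric and binning scheme are a standard illustrative choice, not something taken from the post.

```python
# Sketch: measuring whether a classifier's confidence is trustworthy,
# via expected calibration error (ECE). Logits/labels are stand-ins.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(logits, labels, n_bins=10):
    probs = softmax(logits)
    confidences = probs.max(axis=1)        # model's stated confidence
    predictions = probs.argmax(axis=1)     # predicted class
    accuracies = (predictions == labels).astype(float)

    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # |avg accuracy - avg confidence| in bin, weighted by bin mass
            gap = abs(accuracies[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 5))
labels = rng.integers(0, 5, size=1000)
print(f"ECE: {expected_calibration_error(logits, labels):.3f}")
```

A well-calibrated model drives this gap toward zero: among predictions made with, say, 80% confidence, about 80% should be correct.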
This story is a collaboration between three institutes working at the intersection of cell research, cancer research and care, and artificial intelligence.
Our approach offers a simple and practical perspective on what memorization can mean, providing a useful tool for the functional and legal analysis of LLMs.
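To give one concrete sense of what memorization "can mean": a common operationalization is verbatim extractability, i.e. whether greedy decoding from a training-data prefix reproduces the original continuation. The sketch below, using Hugging Face transformers with gpt2 as an illustrative model, implements that check; it is one possible definition, not necessarily the one this approach adopts.

```python
# Sketch: extractability as an operational test of memorization.
# Does greedy decoding from a prefix reproduce the suffix verbatim?
from transformers import AutoModelForCausalLM, AutoTokenizer

def is_extractable(model, tokenizer, prefix, suffix):
    inputs = tokenizer(prefix, return_tensors="pt")
    suffix_ids = tokenizer(suffix, add_special_tokens=False).input_ids
    out = model.generate(
        **inputs,
        max_new_tokens=len(suffix_ids),
        do_sample=False,                    # greedy decoding
        pad_token_id=tokenizer.eos_token_id,
    )
    generated = out[0, inputs.input_ids.shape[1]:].tolist()
    return generated == suffix_ids          # exact verbatim match

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(is_extractable(model, tokenizer,
                     "The quick brown fox", " jumps over the lazy dog"))
```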
We propose the asymmetric certified robustness problem, which requires certified robustness for only one class and reflects real-world adversarial scenarios.
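As a rough illustration of the asymmetric setting, the sketch below certifies only the sensitive class (say, class 1 = "malicious") and returns no guarantee when the prediction is the benign class. Randomized smoothing is used purely as a familiar certification mechanism; the classifier `f`, the noise level, and the smoothing approach itself are illustrative assumptions, not the construction proposed in the work.

```python
# Sketch: one-sided certification. We certify an L2 radius only for
# the sensitive class (class 1); class 0 predictions get no certificate.
import numpy as np
from scipy.stats import norm

def certify_sensitive(f, x, sigma=0.25, n=1000, seed=0):
    """Certified L2 radius for class 1, or None if not certified."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n, *x.shape))
    votes = np.array([f(x + eps) for eps in noise])
    p1 = (votes == 1).mean()               # empirical mass on class 1
    p1 = min(p1, 1 - 1 / n)                # crude cap in lieu of a
                                           # proper confidence bound
    if p1 <= 0.5:
        return None                        # asymmetry: class 0 uncertified
    # binary-case smoothing radius: sigma * Phi^-1(p1)
    return sigma * norm.ppf(p1)

# toy linear classifier: class 1 iff w . x > 0
w = np.array([1.0, -2.0, 0.5])
f = lambda x: int(w @ x > 0)
print(certify_sensitive(f, np.array([1.0, -1.0, 0.5])))
```

The asymmetry lives in the early return: an adversary who wants a malicious input classified as benign must overcome the certified radius, while no guarantee is spent on the benign class.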
How can we combine the ease of specifying tasks in natural language with the performance gains of goal-conditioned learning?