We propose the asymmetric certified robustness problem, in which certified robustness is required for only one class, reflecting real-world adversarial scenarios.
How can we reconcile the ease of specifying tasks through natural-language-based approaches with the performance improvements of goal-conditioned learning?