TLDR: Current SOTA methods for scene understanding, though impressive, often fail to decompose out-of-distribution scenes. In our ICML paper, Slot-TTA (http://slot-tta.github.io), we find that optimizing a reconstruction loss on each test sample improves scene decomposition accuracy.
Problem Statement: In machine learning, we often assume that the train and test splits are IID samples from the same distribution. However, this rarely holds in practice: in the real world, distribution shifts happen all the time!
For example, on the left we visualize images from the ImageNet chair category, and on the right images from the ObjectNet chair category. As you can see, real-world distribution shifts take many forms: camera pose changes, occlusions, and changes in scene configuration.
So what is the issue? In machine learning, we typically assume a fixed train and test split. In the real world, however, no such universal split exists; the test distribution keeps shifting away from the training distribution.
Instead of freezing our models at test time, as we conventionally do, we should continuously adapt them to these distribution shifts.
Given these issues, there has been a lot of work in this domain, referred to as test-time adaptation. Test-time adaptation can be broadly classified into supervised test-time adaptation, where the model is given access to a few labeled examples, and unsupervised domain adaptation, where no labels are available. In this work, we focus on unsupervised adaptation, as it is the more general setting.
Within unsupervised domain adaptation, there are various settings, such as batch, online, or single-example test-time adaptation. In this work, we focus on the single-example setting, in which the model adapts to each example in the test set independently. This is far more general than the batch or online settings, which assume access to many unlabeled examples at once.
What is the prominent approach in this setting?
Sun et al. proposed to encode the input data X into a shared encoding z, which is then passed to both a supervised task decoder and a self-supervised decoder. The whole model is trained jointly using supervised and self-supervised losses. This joint training couples the self-supervised and supervised tasks, and that coupling enables test-time adaptation using the self-supervised loss alone. Approaches vary in the type of self-supervised loss used: TTT uses a rotation prediction loss, MT3 uses an instance prediction loss, and TTT-MAE uses a masked autoencoding loss.
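To make this recipe concrete, here is a minimal PyTorch-style sketch of such a Y-shaped model: a shared encoder feeding a supervised head and a self-supervised head (here, 4-way rotation prediction, in the spirit of TTT). The module names, sizes, and loss weighting are illustrative assumptions, not the actual TTT code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of a Y-shaped joint-training model: a shared encoder feeds a
# supervised task head and a self-supervised head. All names and sizes
# here are hypothetical.
class JointModel(nn.Module):
    def __init__(self, feat_dim=256, num_classes=10, num_rotations=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.task_head = nn.Linear(feat_dim, num_classes)   # supervised branch
        self.ssl_head = nn.Linear(feat_dim, num_rotations)  # self-supervised branch

    def forward(self, x):
        z = self.encoder(x)  # shared encoding z couples the two tasks
        return self.task_head(z), self.ssl_head(z)

def joint_loss(model, x, y, x_rot, rot_label):
    task_logits, _ = model(x)
    _, ssl_logits = model(x_rot)
    # Training optimizes both terms; at test time only the self-supervised
    # term is available, and minimizing it updates the shared encoder.
    return F.cross_entropy(task_logits, y) + F.cross_entropy(ssl_logits, rot_label)
```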
However, all of these approaches focus only on the task of image classification. In our work, we find that joint training with such losses alone is insufficient for scene understanding tasks; architectural biases also matter for adaptation. Specifically, we show that slot-centric biases, which strongly couple scene decomposition with the reconstruction loss, are a perfect fit.
Slot-centric generative models attempt to segment scenes into object entities in a completely unsupervised manner by optimizing a reconstruction objective [1,2,3]. Because this objective shares the end goal of scene decomposition, these models are a natural candidate architecture for TTA.
These methods differ in detail but share the notion of incorporating a fixed set of entities, also known as slots or object files. Each slot extracts information about a single entity during encoding and is “synthesized” back to the input domain during decoding.
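For intuition, below is a toy sketch of a Slot Attention-style bottleneck (after Locatello et al., 2020). It is deliberately simplified (e.g., the residual MLP is omitted) and is not the exact module used in Slot-TTA.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy Slot Attention bottleneck: a fixed set of slots iteratively competes
# to explain the input features; each slot ends up describing one entity.
class SlotAttention(nn.Module):
    def __init__(self, num_slots=7, dim=64, iters=3):
        super().__init__()
        self.num_slots, self.iters, self.scale = num_slots, iters, dim ** -0.5
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_logsigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q, self.to_k, self.to_v = (nn.Linear(dim, dim) for _ in range(3))
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in, self.norm_slots = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, inputs):  # inputs: (B, N, dim) flattened image features
        B, N, D = inputs.shape
        inputs = self.norm_in(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)
        # Slots are sampled from a learned Gaussian, then refined iteratively.
        slots = self.slots_mu + self.slots_logsigma.exp() * torch.randn(
            B, self.num_slots, D, device=inputs.device)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # Softmax over slots makes slots compete for each input feature.
            attn = F.softmax(torch.einsum('bnd,bkd->bnk', k, q) * self.scale, dim=-1)
            attn = attn / (attn.sum(dim=1, keepdim=True) + 1e-8)  # weighted mean
            updates = torch.einsum('bnk,bnd->bkd', attn, v)
            slots = self.gru(updates.reshape(-1, D),
                             slots.reshape(-1, D)).reshape(B, self.num_slots, D)
        return slots  # each slot is later decoded back to the input domain
```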
In light of the above, we propose Test-Time Adaptation with Slot-Centric models (Slot-TTA), a semi-supervised model equipped with a slot-centric bottleneck that jointly segments and reconstructs scenes.
At training time, Slot-TTA is trained in a supervised manner to jointly segment and reconstruct 2D (multi-view or single-view) RGB images or 3D point clouds. At test time, the model adapts to a single test sample by updating its network parameters solely through gradient descent on the reconstruction objective, as shown in the figure above.
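A minimal sketch of this per-sample adaptation loop might look as follows. The forward signature (returning a reconstruction and a segmentation), the step count, and the learning rate are illustrative assumptions, not the paper's exact settings.

```python
import copy
import torch
import torch.nn.functional as F

# Single-example test-time adaptation sketch: adapt a fresh copy of the
# trained model to one unlabeled test sample via its reconstruction loss.
def adapt_and_segment(trained_model, test_sample, steps=100, lr=1e-4):
    model = copy.deepcopy(trained_model)  # each sample starts from the
    model.train()                         # same trained weights
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon, _ = model(test_sample)          # hypothetical interface
        loss = F.mse_loss(recon, test_sample)  # no labels needed
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        _, seg = model(test_sample)  # segmentation after adaptation
    return seg
```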
Slot-TTA builds on slot-centric models by incorporating segmentation supervision during the training phase. Until now, slot-centric models have been neither designed nor used with test-time adaptation (TTA) in mind.
In particular, Engelcke et al. (2020) showed that TTA via reconstruction in slot-centric models fails due to a reconstruction-segmentation trade-off: as the entity bottleneck loosens, reconstruction improves, but segmentation deteriorates. We show that segmentation supervision helps mitigate this trade-off and allows these models to scale to scenes with complicated textures, and that TTA in semi-supervised slot-centric models significantly improves scene decomposition.
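Schematically, the training objective adds a segmentation term on top of reconstruction, while test time keeps only the reconstruction term. The loss forms and weighting below are illustrative assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

# Illustrative semi-supervised objective for a slot-centric model.
def training_loss(recon, target, seg_logits, gt_instance_ids, seg_weight=1.0):
    recon_loss = F.mse_loss(recon, target)                   # train AND test time
    seg_loss = F.cross_entropy(seg_logits, gt_instance_ids)  # train time only
    # The supervised term anchors slots to objects, so that minimizing
    # reconstruction alone at test time also improves segmentation.
    return recon_loss + seg_weight * seg_loss
```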
Our contributions are as follows:
(i) We present an algorithm that significantly improves scene decomposition accuracy for out-of-distribution examples by performing test-time adaptation on each example in the test set independently.
(ii) We showcase the effectiveness of SSL-based TTA approaches for scene decomposition, while previous self-supervised test-time adaptation methods have primarily demonstrated results in classification tasks.
(iii) We introduce semi-supervised learning for slot-centric generative models, and show that it enables these methods to continue learning at test time. In contrast, previous works on slot-centric generative models have neither been trained with supervision nor used for test-time adaptation.
(iv) Lastly, we devise numerous baselines and ablations, and evaluate them across multiple benchmarks and distribution shifts to offer valuable insights into test-time adaptation and object-centric learning.
Results: We test Slot-TTA on the scene understanding tasks of novel view rendering and scene segmentation, across input modalities of multi-view posed images, single-view images, and 3D point clouds, using the PartNet, MultiShapeNet-Hard, and CLEVR datasets.
We compare Slot-TTA's segmentation performance against state-of-the-art supervised feedforward RGB image and 3D point cloud segmentors (Mask2Former and Mask3D), state-of-the-art novel view rendering methods (Semantic-NeRF) that adapt per scene through RGB and segmentation rendering, and state-of-the-art test-time adaptation methods such as MT3.
We show that Slot-TTA outperforms SOTA feedforward segmentors on out-of-distribution scenes, dramatically outperforms alternative TTA methods and alternative semi-supervised scene decomposition methods, and better exploits multi-view information for improving segmentation than Semantic-NeRF-based multi-view fusion.
Below we show our multi-view RGB results on the MultiShapeNet dataset from Kubric.
We consider various distribution shifts throughout our paper; for the results below, we use the following one.
We use a train-test split from Multi-ShapeNet-Easy to Multi-ShapeNet-Hard, in which there is no overlap in object instances or in the number of objects per scene between the training and test sets. Specifically, scenes with 5-7 object instances are in the training set, and scenes with 16-30 objects are in the test set.
We consider the following baselines:
(i) Mask2Former (Cheng et al., 2021), a state-of-the-art 2D image segmentor that extends detection transformers (Carion et al., 2020) to the task of image segmentation using multi-scale segmentation decoders with masked attention.
(ii) Mask2Former-BYOL, which combines the segmentation model of Cheng et al. (2021) with test-time adaptation using the BYOL self-supervised loss of MT3 (Bartler et al., 2022).
(iii) Mask2Former-Recon, which combines the segmentation model of Cheng et al. (2021) with an RGB rendering module and an image reconstruction objective for test-time adaptation.
(iv) Semantic-NeRF (Zhi et al., 2021), a NeRF model that adds a segmentation rendering head to the multi-view RGB rendering head of traditional NeRFs. It is fit per scene using all nine available posed RGB images and the corresponding Mask2Former segmentation maps as input.
(v) Slot-TTA-w/o supervision, a variant of our model that does not use any segmentation supervision; rather, it is trained only for cross-view image synthesis, similar to OSRT (Sajjadi et al., 2022a).
Our conclusions are as follows:
(i) Slot-TTA with TTA outperforms Mask2Former on out-of-distribution scenes and performs comparably within the training distribution.
(ii) Mask2Former-BYOL does not improve over Mask2Former, which suggests that adding self-supervised losses of SOTA image classification TTA methods (Bartler et al., 2022) to scene segmentation methods does not help.
(iii) Slot-TTA-w/o supervision (a model identical to Sajjadi et al. (2022a)) greatly underperforms the supervised segmentor Mask2Former. This means that unsupervised slot-centric models are still far from matching their supervised counterparts.
(iv) Slot-TTA-w/o supervision does not improve during test-time adaptation. This suggests segmentation supervision at training time is essential for effective TTA.
(v) Semantic-NeRF, which fuses segmentation masks across views in a geometrically consistent manner, outperforms the single-view segmentation performance of Mask2Former by 3%.
(vi) Slot-TTA, which adapts the segmentor's model parameters at test time, greatly outperforms Semantic-NeRF on OOD scenes.
(vii) Mask2Former-Recon performs worse with TTA, which suggests that the decoder’s design is very important for aligning the reconstruction and segmentation tasks.
For point clouds, we train the model on certain PartNet categories and test it on a disjoint set of categories. For quantitative comparisons with the baselines, please refer to our paper. As can be seen in the figure below, Slot-TTA's point cloud segmentation improves after optimizing the point cloud reconstruction loss.
For 2D RGB images, we train the model with segmentation supervision on the CLEVR dataset and test it on CLEVR-Tex. For quantitative comparisons with the baselines, please refer to our paper. As can be seen in the figure below, Slot-TTA's RGB segmentation improves after optimizing the RGB reconstruction loss.
Finally, we find that Slot-TTA doesn’t just improve the segmentation performance on out-of-distribution scenes, but also improves the performance on other downstream tasks such as novel view synthesis!
Conclusion: We presented Slot-TTA, a novel semi-supervised scene decomposition model equipped with a slot-centric image or point cloud rendering component for test-time adaptation. We showed that Slot-TTA greatly improves instance segmentation on out-of-distribution scenes by using test-time adaptation on reconstruction or novel view synthesis objectives. We compared against numerous baselines, ranging from state-of-the-art feedforward segmentors, to NeRF-based TTA for multi-view semantic fusion, to state-of-the-art TTA methods, to unsupervised and weakly supervised 2D and 3D generative models. Slot-TTA compares favorably against all of them for scene decomposition of OOD scenes, while remaining competitive in-distribution.
Paper Authors: Mihir Prabhudesai, Anirudh Goyal, Sujoy Paul, Sjoerd van Steenkiste, Mehdi S. M. Sajjadi, Gaurav Aggarwal, Thomas Kipf, Deepak Pathak, Katerina Fragkiadaki.
Code: <https://github.com/mihirp1998/Slot-TTA>
Webpage: <https://slot-tta.github.io/>
Paper: <https://arxiv.org/abs/2203.11194>
This article was initially published on the ML@CMU blog and appears here with the authors’ permission.