Exploring the Benefits of Vision Foundation Models for Unsupervised Domain Adaptation (CVPR 2024 Second Workshop on Foundation Models)

Authors: Bruno B. Englert, Fabrizio J. Piva, Tommie Kerssies, Daan de Geus, Gijs Dubbelman
Affiliation: Eindhoven University of Technology
Publication: CVPR 2024 Workshop Proceedings for the Second Workshop on Foundation Models
Paper: arXiv
Code: https://github.com/tue-mps/vfm-uda

🔔 News:

Abstract

Achieving robust generalization across diverse data domains remains a significant challenge in computer vision. This challenge is important in safety-critical applications, where deep-neural-network-based systems must perform reliably under various environmental conditions not seen during training. Our study investigates whether the generalization capabilities of Vision Foundation Models (VFMs) and Unsupervised Domain Adaptation (UDA) methods for the semantic segmentation task are complementary. Results show that combining VFMs with UDA has two main benefits: (a) it allows for better UDA performance while maintaining the out-of-distribution performance of VFMs, and (b) it makes certain time-consuming UDA components redundant, thus enabling significant inference speedups. Specifically, with equivalent model sizes, the resulting VFM-UDA method achieves an 8.4x speed increase over the prior non-VFM state of the art, while also improving performance by +1.2 mIoU in the UDA setting and by +6.1 mIoU in terms of out-of-distribution generalization. Moreover, when we use a VFM with 3.6x more parameters, the VFM-UDA approach maintains a 3.3x speed up, while improving the UDA performance by +3.1 mIoU and the out-of-distribution performance by +10.3 mIoU. These results underscore the significant benefits of combining VFMs with UDA, setting new standards and baselines for Unsupervised Domain Adaptation in semantic segmentation.

Installation

  1. Create a Weights & Biases (W&B) account (used for logging training runs; see the note after these setup steps).

  2. Environment setup.

    conda create -n fuda python=3.10 && conda activate fuda
  3. Install required packages.

    pip install -r requirements.txt
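
If the training code logs metrics to W&B through the standard wandb client (an assumption based on step 1, not something this README states explicitly), you will also need to authenticate once on the machine you train on:

wandb login

This stores your W&B API key locally, so later runs can upload logs without prompting.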

Data preparation

All the zipped data should be placed under one directory. No unzipping is required.
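
For illustration only, assuming the standard GTA-to-Cityscapes UDA setup with WildDash2 used for out-of-distribution evaluation (the exact archive names depend on where you download each dataset; the GTA5 and WildDash2 entries below are placeholders), the data directory could look like:

/data
├── leftImg8bit_trainvaltest.zip        (Cityscapes images)
├── gtFine_trainvaltest.zip             (Cityscapes labels)
├── <gta5 image and label archives>.zip
└── <wilddash2 archive>.zip

The training and evaluation commands below then receive this directory via --root /data.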

Usage

Training

We recommend training on 4 GPUs with a batch size of 2 per GPU. On an A100, training a ViT-L takes around 20 hours.

python main.py fit -c uda_vit_vanilla.yaml --root /data --trainer.devices "[0,1,2,3]"

(replace /data with the folder where you stored the datasets)

Note: there are small variations in performance between training runs due to the stochasticity of the training process, particularly for UDA techniques, so results may differ slightly depending on the random seed.
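
To make runs more directly comparable, and assuming the training script exposes the standard Lightning CLI options (suggested by the fit/validate sub-commands, but an assumption nonetheless), the global seed can be fixed explicitly, for example:

python main.py fit -c uda_vit_vanilla.yaml --root /data --trainer.devices "[0,1,2,3]" --seed_everything 42

This only reduces run-to-run variation from random initialization and data ordering; some nondeterminism from GPU kernels may remain.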

Evaluating

To evaluate a pre-trained VFM-UDA model, run:

python3 main.py validate -c uda_vit_vanilla.yaml --root /data --trainer.devices "[0]" --model.network.ckpt_path "/path/to/checkpoint.ckpt"

or load a checkpoint directly from a Hugging Face URL:

python3 main.py validate -c uda_vit_vanilla.yaml --root /data --trainer.devices "[0]" --model.network.ckpt_path "https://huggingface.co/tue-mps/vfmuda_base_gta2city/resolve/main/vfmuda_base_gta2city_771miou_step40000.ckpt"

(replace /data with the folder where you stored the datasets)
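
If you prefer not to stream the checkpoint from Hugging Face at validation time, you can download it once and pass the local path to --model.network.ckpt_path instead, for example:

wget https://huggingface.co/tue-mps/vfmuda_base_gta2city/resolve/main/vfmuda_base_gta2city_771miou_step40000.ckpt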

Model Zoo

| Method  | Backbone | Pre-training | Cityscapes (mIoU) | WildDash2 (mIoU) | Model |
|---------|----------|--------------|-------------------|------------------|-------|
| VFM-UDA | ViT-B    | DINOv2       | 77.1              | 60.8             | model |
| VFM-UDA | ViT-L    | DINOv2       | 78.4              | 64.7             | model |

Note: these models are re-trained, so the results differ slightly from those reported in the paper.

Citation

@inproceedings{englert2024exploring,
  author={{Englert, Brunó B.} and {Piva, Fabrizio J.} and {Kerssies, Tommie} and {de Geus, Daan} and {Dubbelman, Gijs}},
  title={Exploring the Benefits of Vision Foundation Models for Unsupervised Domain Adaptation},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2024},
}

Acknowledgement

We use some code from:
