Deeper, Broader and Artier Domain Generalization




[motivation figure]


Abstract

The problem of domain generalization is to learn from multiple training domains, and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example, recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on alleviating dataset bias, where both problems of domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them.
In this paper, we make two main contributions: Firstly, we build upon the favorable domain shift-robust properties of deep learning methods, and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant, and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and our dataset provides a more significant DG challenge to drive future research.
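As a rough illustration of the idea of a low-rank parameterized model (this is a simplified sketch, not the paper's exact formulation): weight factors can be shared across training domains, with a small low-rank domain-specific part, so that a domain-agnostic component is available for an unseen domain. All shapes, the rank, and the shared/specific split below are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# D training domains, each with a linear layer built from a shared
# (domain-agnostic) component plus a low-rank domain-specific component.
D, d_in, d_out, rank = 3, 8, 4, 2

W_shared = rng.standard_normal((d_in, d_out))  # domain-agnostic component
U = rng.standard_normal((D, d_in, rank))       # domain-specific factor
V = rng.standard_normal((D, rank, d_out))      # domain-specific factor

def domain_weight(i):
    """Weight for training domain i: shared part + rank-`rank` specific part."""
    return W_shared + U[i] @ V[i]

x = rng.standard_normal((5, d_in))                  # a batch of 5 inputs
outputs = [x @ domain_weight(i) for i in range(D)]  # per-domain predictions

# For an unseen domain, only the shared component is available.
y_unseen = x @ W_shared
print(y_unseen.shape)  # (5, 4)
```

Training all factors end-to-end encourages `W_shared` to capture what is common across domains, which is what gets deployed on the held-out domain.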

Download

You can download the paper, code (coming soon), dataset (updated!) and pretrained models (updated! The domain name in each model file indicates the held-out domain). The pretrained models come from the experiments for our ICCV paper. The uploaded models perform slightly better than the numbers reported in the paper, reflecting post-ICCV results.


Paper


Code (coming soon)


Models


Dataset


Within-domain training loss (PT-AlexNet)

Batch size = 64, learning rate = 5e-4. [training loss curve]
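A hypothetical sketch of this training setup (batch size 64, learning rate 5e-4, plain SGD); a linear softmax classifier on synthetic features stands in for the pretrained AlexNet used in the actual experiments, and all data shapes here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters from the caption above; everything else is synthetic.
batch_size, lr, n_classes, n_feat = 64, 5e-4, 7, 32
X = rng.standard_normal((640, n_feat))   # synthetic "features"
y = X[:, :n_classes].argmax(axis=1)      # learnable synthetic labels
W = np.zeros((n_feat, n_classes))        # stand-in classifier weights

def batch_loss_and_grad(Xb, yb, W):
    """Softmax cross-entropy loss and its gradient w.r.t. W for one mini-batch."""
    logits = Xb @ W
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(yb)), yb]).mean()
    p[np.arange(len(yb)), yb] -= 1.0
    return loss, Xb.T @ p / len(yb)

first = last = None
for epoch in range(20):
    for i in range(0, len(X), batch_size):
        loss, grad = batch_loss_and_grad(X[i:i + batch_size], y[i:i + batch_size], W)
        W -= lr * grad                   # SGD step with lr = 5e-4
        if first is None:
            first = loss
        last = loss
print(last < first)                      # loss goes down over training
```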

BibTeX

If you find our dataset or pretrained models useful for your research, please cite our paper. Thanks!
@inproceedings{Li2017dg,
  title={Deeper, Broader and Artier Domain Generalization},
  author={Li, Da and Yang, Yongxin and Song, Yi-Zhe and Hospedales, Timothy},
  booktitle={International Conference on Computer Vision},
  year={2017}
}