Transfer Learning and Domain Adaptation Reading List

Note

This page is currently under construction 🚧. To stay up to date with the latest version, consider bookmarking the original source [1].

This page contains useful references to current transfer learning algorithms, and is mainly taken from Arthur Pesah’s reading list available on GitHub [1]. For a more extensive review, also refer to [2].

[1] Arthur Pesah and collaborators. Awesome Transfer Learning. https://github.com/artix41/awesome-transfer-learning
[2] Arthur Pesah. A Little Review of Domain Adaptation in 2017. https://artix41.github.io/static/domain-adaptation-in-2017/

Awesome Transfer Learning

A list of awesome papers and cool resources on transfer learning, domain adaptation and domain-to-domain translation in general! As you will notice, this list is currently mostly focused on domain adaptation (DA) and domain-to-domain translation, but don’t hesitate to suggest resources in other subfields of transfer learning. I accept pull requests.

Papers

Papers are ordered by theme and, inside each theme, by publication date (submission date for arXiv papers). If the network or algorithm is given a name in a paper, it is written in bold before the paper’s title.

Unsupervised Domain Adaptation

Transfer between a source and a target domain. In unsupervised domain adaptation, only the source domain is labelled; the target domain has no labels.
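To make the setting concrete, here is a minimal sketch of a DANN-style adversarial training step (one instance of the adversarial methods listed below), assuming PyTorch; the layer sizes, the 28×28 grayscale inputs and the `lambd` weight are illustrative placeholders rather than the setup of any specific paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, reverses (and scales) the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Placeholder networks: a shared feature extractor, a label classifier trained on
# source labels only, and a domain classifier trained through the reversal layer.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
label_classifier = nn.Linear(256, 10)
domain_classifier = nn.Linear(256, 2)

params = (list(feature_extractor.parameters())
          + list(label_classifier.parameters())
          + list(domain_classifier.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
cls_loss = nn.CrossEntropyLoss()

def train_step(x_src, y_src, x_tgt, lambd=0.1):
    """One update: classification loss on labelled source data plus a
    domain-confusion loss on both domains (target labels are never used)."""
    feats_src = feature_extractor(x_src)
    feats_tgt = feature_extractor(x_tgt)

    # Supervised loss, source domain only.
    loss_label = cls_loss(label_classifier(feats_src), y_src)

    # Domain loss: 0 = source, 1 = target; the gradient reversal makes the
    # feature extractor maximise this loss, i.e. align the two domains.
    feats = grad_reverse(torch.cat([feats_src, feats_tgt]), lambd)
    domain_y = torch.cat([torch.zeros(len(x_src), dtype=torch.long),
                          torch.ones(len(x_tgt), dtype=torch.long)])
    loss_domain = cls_loss(domain_classifier(feats), domain_y)

    loss = loss_label + loss_domain
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The gradient reversal trick lets a single backward pass train the domain classifier to separate the domains while pushing the feature extractor to confuse them.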

Adversarial methods

Temporal models (videos)

Self-Ensembling methods

Semi-supervised Domain Adaptation

All the source points are labelled, but only a few target points are.

Few-shot Supervised Domain Adaptation

Only a few target examples are available, but they are labelled.

Datasets

Image-to-image

  • MNIST vs MNIST-M vs SVHN vs Synth vs USPS: digit images (a loading sketch for one of these pairs is given after this list)
  • GTSRB vs Syn Signs: traffic sign recognition datasets, transfer between real and synthetic signs.
  • NYU Depth Dataset V2: labeled paired images taken with two different cameras (normal and depth)
  • CelebA: faces of celebrities, offering the possibility to perform gender or hair color translation for instance
  • Office-Caltech dataset: images of office objects from 10 common categories shared by the Office-31 and Caltech-256 datasets. There are in total four domains: Amazon, Webcam, DSLR and Caltech.
  • Cityscapes dataset: street scene photos (source) and their annotated version (target)
  • UnityEyes vs MPIIGaze: simulated vs real gaze images (eyes)
  • CycleGAN datasets: horse2zebra, apple2orange, cezanne2photo, monet2photo, ukiyoe2photo, vangogh2photo, summer2winter
  • pix2pix dataset: edges2handbags, edges2shoes, facade, maps
  • RaFD: facial images with 8 different emotions (anger, disgust, fear, happiness, sadness, surprise, contempt, and neutral). You can transfer a face from one emotion to another.
  • VisDA 2017 classification dataset: 12 categories of object images in 2 domains: 3D-models and real images.
  • Office-Home dataset: images of objects in 4 domains: art, clipart, product and real-world.
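As an example, the MNIST → USPS digit pair mentioned above can be set up as a labelled source / unlabelled target split roughly as follows. This is only a sketch assuming torchvision is installed; the 28×28 resize, data path and batch size are arbitrary choices.

```python
import torch
from torchvision import datasets, transforms

# Common preprocessing so that both domains share the same input shape.
transform = transforms.Compose([
    transforms.Resize((28, 28)),
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),
])

# Source domain: labelled MNIST training images.
source = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
# Target domain: USPS images, whose labels are ignored during unsupervised adaptation.
target = datasets.USPS(root="./data", train=True, download=True, transform=transform)

source_loader = torch.utils.data.DataLoader(source, batch_size=64, shuffle=True)
target_loader = torch.utils.data.DataLoader(target, batch_size=64, shuffle=True)

# A typical unsupervised DA loop consumes labelled source batches and
# unlabelled target batches in parallel.
for (x_src, y_src), (x_tgt, _) in zip(source_loader, target_loader):
    pass  # feed (x_src, y_src) and x_tgt to the adaptation method of choice
```

The same pattern applies to the other image-to-image pairs, with domain-specific preprocessing.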

Text-to-text

Results

The results are indicated as the prediction accuracy (in %) in the target domain after adapting the source to the target; a question mark means the corresponding transfer was not reported. For the moment, the numbers only correspond to the results given in the original papers, so the methodology may vary from paper to paper and they must be taken with a grain of salt.
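Concretely, this metric is just classification accuracy measured on held-out, labelled target data after adaptation; below is a minimal evaluation sketch, assuming a PyTorch model and a DataLoader over the target test set (both placeholders).

```python
import torch

@torch.no_grad()
def target_accuracy(model, target_test_loader, device="cpu"):
    """Prediction accuracy (in %) on the labelled target-domain test set.
    Target labels are used here only for evaluation, never for training."""
    model.eval()
    correct, total = 0, 0
    for x, y in target_test_loader:
        x, y = x.to(device), y.to(device)
        preds = model(x).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```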

Digits transfer (unsupervised)

Source→Target     MNIST→MNIST-M   Synth→SVHN   MNIST→SVHN   SVHN→MNIST   MNIST→USPS   USPS→MNIST
SA                56.90           86.44        ?            59.32        ?            ?
DANN              76.66           91.09        ?            73.85        ?            ?
CoGAN             ?               ?            ?            ?            91.2         89.1
DRCN              ?               ?            40.05        81.97        91.80        73.67
DSN               83.2            91.2         ?            82.7         ?            ?
DTN               ?               ?            90.66        79.72        ?            ?
PixelDA           98.2            ?            ?            ?            95.9         ?
ADDA              ?               ?            ?            76.0         89.4         90.1
UNIT              ?               ?            ?            90.53        95.97        93.58
GenToAdapt        ?               ?            ?            92.4         95.3         90.8
SBADA-GAN         99.4            ?            61.1         76.1         97.6         95.0
DAassoc           89.47           91.86        ?            97.60        ?            ?
CyCADA            ?               ?            ?            90.4         95.6         96.5
I2I               ?               ?            ?            92.1         95.1         92.2
DIRT-T            98.7            ?            76.5         99.4         ?            ?
DeepJDOT          92.4            ?            ?            96.7         95.7         96.4

Libraries

No good library for the moment (as far as I know). If you’re interested in a project to create a generic transfer learning/domain adaptation library, please let me know.