Informative Feature Disentanglement for Unsupervised Domain Adaptation
Unsupervised Domain Adaptation (UDA) aims to learn a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain with a related but different distribution. The strategy of aligning the two domains in a latent feature space via metric discrepancy or adversarial learning has achieved considerable progress. However, these existing approaches mainly focus on adapting the entire image representation and overlook a bottleneck: forcing the adaptation of uninformative, domain-specific variations undermines the effectiveness of the learned features. To address this problem, we propose a novel component called Informative Feature Disentanglement (IFD), which can be equipped with either an adversarial network or a metric discrepancy model. The resulting network architectures, named IFDAN and IFDMN, enable informative feature refinement before adaptation. The proposed IFD disentangles informative features from the uninformative domain-specific variations, which are modeled by a Variational Autoencoder (VAE) with lateral connections from the encoder to the decoder. We apply the IFD cooperatively, conducting supervised disentanglement on the source domain and unsupervised disentanglement on the target domain. In this way, informative features are separated from domain-specific details before adaptation. Extensive experimental results on three gold-standard domain adaptation datasets, i.e., Office-31, Office-Home, and VisDA-C, demonstrate the effectiveness of the proposed IFDAN and IFDMN models for UDA.
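The core architectural idea, a VAE whose decoder also receives lateral (skip) connections from the encoder, can be sketched in a minimal NumPy forward pass. This is an illustrative assumption of the structure described above, not the authors' exact model: all layer sizes, weight names, and the single lateral connection are hypothetical, and real implementations would use a deep-learning framework with learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class LateralVAE:
    """Sketch of a VAE whose decoder receives a lateral (skip) connection
    from the encoder, so domain-specific detail can flow around the latent
    bottleneck. Sizes and names are illustrative assumptions only."""

    def __init__(self, d_in=32, d_hid=16, d_lat=8):
        self.We1 = rng.normal(0, 0.1, (d_in, d_hid))    # encoder layer
        self.W_mu = rng.normal(0, 0.1, (d_hid, d_lat))  # latent mean head
        self.W_lv = rng.normal(0, 0.1, (d_hid, d_lat))  # latent log-variance head
        self.Wd1 = rng.normal(0, 0.1, (d_lat, d_hid))   # decoder layer
        self.Wlat = rng.normal(0, 0.1, (d_hid, d_hid))  # lateral connection
        self.Wd2 = rng.normal(0, 0.1, (d_hid, d_in))    # decoder output layer

    def forward(self, x):
        h = relu(x @ self.We1)                   # encoder activation
        mu, logvar = h @ self.W_mu, h @ self.W_lv
        eps = rng.normal(size=mu.shape)
        z = mu + np.exp(0.5 * logvar) * eps      # reparameterization trick
        # Lateral connection: the decoder sees both the latent code z and
        # the encoder activation h; h can carry domain-specific detail
        # while z is pushed toward the informative content.
        d = relu(z @ self.Wd1 + h @ self.Wlat)
        x_hat = d @ self.Wd2
        # Standard VAE objective: reconstruction + KL divergence to N(0, I)
        rec = np.mean((x_hat - x) ** 2)
        kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))
        return x_hat, z, rec + kl

vae = LateralVAE()
x = rng.normal(size=(4, 32))                     # a toy batch of 4 samples
x_hat, z, loss = vae.forward(x)
print(x_hat.shape, z.shape)                      # (4, 32) (4, 8)
```

Because the lateral path lets reconstruction detail bypass the bottleneck, the latent code is free to retain only the informative content, which is the intuition behind disentangling it from domain-specific variations before adaptation.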