Improving Open Set Domain Adaptation Using Image-to-Image Translation and Instance-Weighted Adversarial Learning
Abstract
We propose to address the open set domain adaptation problem by aligning images in both the pixel space and the feature space. Our approach, called Open Set Translation and Adaptation Network (OSTAN), consists of two main components: translation and adaptation. The translation component is a cycle-consistent generative adversarial network that translates any source image to the “style” of the target domain, eliminating domain discrepancy in the pixel space. The adaptation component is an instance-weighted adversarial network that projects both (labeled) translated source images and (unlabeled) target images into a domain-invariant feature space and learns a prior probability for each target image. The learned probability is applied as a weight to the unknown classifier to facilitate the identification of the unknown class. The proposed OSTAN model significantly outperforms state-of-the-art open set domain adaptation methods on multiple public datasets. Our experiments also demonstrate that both the image-to-image translation and the instance-weighting framework improve the decision boundaries for both known and unknown classes.
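To illustrate the instance-weighting idea, the sketch below shows one plausible way a learned per-image prior could re-weight an unknown-class score at decision time. This is a minimal, hypothetical example (the function names, the (K+1)-way logit layout, and the exact weighting scheme are assumptions, not the paper's specification): a prior probability w that a target image belongs to a known class scales down the unknown-class probability, while (1 - w) would have the opposite effect.

```python
import numpy as np

def softmax(z):
    """Standard numerically-stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def weighted_prediction(logits, w):
    """logits: scores for K known classes plus one unknown class (last entry).
    w: prior probability in [0, 1] that the image belongs to a known class,
    e.g. produced by a domain discriminator. A low w boosts the unknown
    class; a high w suppresses it. (Illustrative scheme, not the paper's.)"""
    p = softmax(logits)
    p[-1] *= (1.0 - w)   # down-weight "unknown" for likely-known images
    p[:-1] *= w          # down-weight known classes for likely-unknown images
    return p / p.sum()   # renormalize to a valid distribution

logits = np.array([2.0, 0.5, 1.8])  # 2 known classes + unknown (last)
print(np.argmax(weighted_prediction(logits, w=0.9)))  # -> 0 (a known class)
print(np.argmax(weighted_prediction(logits, w=0.1)))  # -> 2 (unknown)
```

With the raw softmax alone, the known class 0 and the unknown class have similar probabilities here; the learned prior w tips the decision one way or the other, which is the intuition behind weighting the unknown classifier.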