Improving Open Set Domain Adaptation Using Image-to-Image Translation and Instance-Weighted Adversarial Learning
Abstract: Background: Domain adaptation aims to alleviate the scarcity of labeled training data in image recognition tasks: a model is learned from a dataset with explicit category labels (the source domain) and is expected to recognize well on unlabeled data of a different style but with related categories (the target domain). Most domain adaptation methods assume that the source and target domains share the same set of categories, a setting referred to as "closed set". However, since the target labels are unknown, the target domain is very likely to contain categories that never appear in the source domain. A new setting that better matches real-world applications, "open set" domain adaptation, has therefore received wide attention.
Objective: This paper addresses open set recognition in domain adaptation: the trained model should recognize target samples from known classes while labeling samples from unknown classes as "unknown". Such a model is more robust in practice and better meets the needs of real-world applications.
Method: The proposed method consists of two modules: translation and adaptation. The translation module is a cycle-consistent generative adversarial network that transfers the style of source images toward the target domain, eliminating the source-target discrepancy in the pixel space. The adaptation module is an instance-weighted adversarial network that projects the translated (labeled) source images and the unlabeled target images into a domain-invariant feature space, learns for each target image the probability that it belongs to an unknown class, and uses these probabilities as weights to help the unknown-class classifier identify unknown samples.
Results: On multiple open set domain adaptation benchmarks (Digits, Office, VISDA), the proposed method achieves the best results among compared methods on all metrics (known-class, unknown-class, and overall recognition accuracy). Ablation studies show that both the translation and the adaptation modules clearly improve open set domain adaptation; in particular, the instance-weighting operation effectively improves recognition of the unknown class.
Conclusion: The experiments show that image-to-image translation not only effectively improves known-class recognition in domain adaptation but also helps unknown-class recognition in the open set setting to a large extent. By simultaneously reducing the source-target discrepancy in both the pixel space and the feature space, and by weighting every target image, the model learns more accurate boundaries between known and unknown classes. In future work we will explore more image-to-image translation methods tailored to open set domain adaptation.

Abstract: We propose to address the open set domain adaptation problem by aligning images at both the pixel space and the feature space. Our approach, called Open Set Translation and Adaptation Network (OSTAN), consists of two main components: translation and adaptation. The translation is a cycle-consistent generative adversarial network, which translates any source image to the "style" of a target domain to eliminate domain discrepancy in the pixel space. The adaptation is an instance-weighted adversarial network, which projects both (labeled) translated source images and (unlabeled) target images into a domain-invariant feature space to learn a prior probability for each target image. The learned probability is applied as a weight to the unknown classifier to facilitate the identification of the unknown class. The proposed OSTAN model significantly outperforms the state-of-the-art open set domain adaptation methods on multiple public datasets. Our experiments also demonstrate that both the image-to-image translation and the instance-weighting framework can further improve the decision boundaries for both known and unknown classes.
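To make the adaptation module described above more concrete, the following is a minimal, self-contained PyTorch-style sketch of one way the instance-weighted adversarial learning could be wired up. It is an illustration under stated assumptions, not the paper's exact OSTAN formulation: the names FeatureExtractor, Classifier, DomainDiscriminator, and adaptation_losses are hypothetical, and the weighting rule w = 1 - D(f_t) (treating target features the discriminator scores as far from the source as more likely unknown) is an assumed instantiation of the "learned prior probability" mentioned in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureExtractor(nn.Module):
    """Placeholder backbone: flattens an image and maps it to a feature vector."""
    def __init__(self, in_dim=3 * 32 * 32, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)


class Classifier(nn.Module):
    """K known classes plus one extra logit (index K) for the 'unknown' class."""
    def __init__(self, feat_dim=256, num_known=10):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_known + 1)

    def forward(self, f):
        return self.fc(f)


class DomainDiscriminator(nn.Module):
    """Estimates the probability that a feature comes from the (translated) source domain."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, f):
        return torch.sigmoid(self.net(f))


def adaptation_losses(extractor, classifier, discriminator,
                      x_src_translated, y_src, x_tgt, num_known=10):
    """Losses for one step of the (assumed) instance-weighted adversarial adaptation.

    x_src_translated: labeled source images already translated to the target style.
    x_tgt:            unlabeled target images.
    """
    f_s = extractor(x_src_translated)
    f_t = extractor(x_tgt)

    # Supervised classification loss on translated source images (known classes only).
    cls_loss = F.cross_entropy(classifier(f_s), y_src)

    # Adversarial domain loss: the discriminator separates source-like from target features.
    d_loss = (F.binary_cross_entropy(discriminator(f_s), torch.ones(f_s.size(0), 1)) +
              F.binary_cross_entropy(discriminator(f_t), torch.zeros(f_t.size(0), 1)))

    # Assumed weighting rule: target features the discriminator scores as far from the
    # source are treated as more likely to belong to an unknown class.
    with torch.no_grad():
        w_unknown = 1.0 - discriminator(f_t).squeeze(1)  # one weight in [0, 1] per target image

    # Weighted unknown-class loss: high-weight target samples are pushed toward the unknown logit.
    unknown_label = torch.full((f_t.size(0),), num_known, dtype=torch.long)
    unk_loss = (w_unknown * F.cross_entropy(classifier(f_t), unknown_label, reduction="none")).mean()

    return cls_loss, d_loss, unk_loss


if __name__ == "__main__":
    extractor, classifier, discriminator = FeatureExtractor(), Classifier(), DomainDiscriminator()
    x_s, y_s = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    x_t = torch.randn(8, 3, 32, 32)
    print(adaptation_losses(extractor, classifier, discriminator, x_s, y_s, x_t))
```

In a full training loop, the source images would first be translated by the cycle-consistent GAN, and the updates would then alternate in the usual adversarial fashion: minimizing cls_loss and unk_loss with respect to the feature extractor and classifier, while training the domain discriminator on d_loss against the extractor.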