NPC: Negative Prototypical Contrasting for Label Disambiguation of Partial Label Learning
Abstract
Partial label learning (PLL) deals with label ambiguity, where each training instance is annotated with a set of candidate labels among which only one is the ground-truth label. Recent advances have shown that PLL can be improved by coherently combining label disambiguation with representation learning, achieving state-of-the-art performance. However, most existing deep PLL methods over-emphasize pulling together positive samples induced by potentially inaccurate pseudo-labels and fail to balance intra-class compactness against inter-class separability, leading to a sub-optimal representation space. In this paper, we address this issue by exploiting the pure negative supervision that can be extracted, noise-free, from the non-candidate label set. Methodologically, we propose a novel framework, Negative Prototypical Contrasting (NPC), whose optimization objective contrasts each instance's candidate prototypes against its negative prototypes, aiming at a sufficiently distinguishable representation space. Based on the learned representations, label disambiguation is performed in a moving-average manner. Theoretically, we show that the NPC objective is equivalent to a constrained maximum-likelihood optimization, and we justify the moving-average update from a stochastic expectation-maximization perspective. Empirically, extensive experiments demonstrate that NPC achieves state-of-the-art classification performance on various datasets and is even competitive with fully supervised counterparts.
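
To make the two components above concrete, the following is a minimal sketch in a PyTorch setting. The function names (npc_loss, update_pseudo_labels), the temperature tau, and the moving-average coefficient lam are illustrative assumptions, not the paper's exact formulation; the sketch only captures the stated idea of contrasting candidate prototypes against negative prototypes and updating pseudo-labels with a moving average restricted to the candidate set.

    import torch
    import torch.nn.functional as F

    def npc_loss(z, prototypes, cand_mask, tau=0.1):
        """Contrast candidate prototypes against negative prototypes.

        z:          (B, D) L2-normalized instance embeddings.
        prototypes: (C, D) L2-normalized class prototypes.
        cand_mask:  (B, C) binary mask, 1 for candidate labels, 0 otherwise.
        tau:        temperature (hypothetical value, not given in the abstract).
        """
        logits = z @ prototypes.t() / tau                 # (B, C) similarities
        log_all = torch.logsumexp(logits, dim=1)          # over all prototypes
        # Negative prototypes (non-candidate labels) are excluded from the
        # positive term but remain in the denominator above.
        cand_logits = logits.masked_fill(cand_mask == 0, float('-inf'))
        log_cand = torch.logsumexp(cand_logits, dim=1)    # over candidate prototypes
        return (log_all - log_cand).mean()                # -log(candidate mass)

    def update_pseudo_labels(pseudo, z, prototypes, cand_mask, lam=0.9, tau=0.1):
        """Moving-average label disambiguation restricted to the candidate set."""
        with torch.no_grad():
            logits = z @ prototypes.t() / tau
            probs = F.softmax(logits, dim=1) * cand_mask            # zero out negatives
            probs = probs / probs.sum(dim=1, keepdim=True)          # renormalize
            return lam * pseudo + (1 - lam) * probs

Treating the candidate set as a pooled positive (the logsumexp over candidate prototypes) against all prototypes in the denominator yields exactly the candidate-versus-negative contrast the abstract describes, while the convex combination with coefficient lam gives the moving-average disambiguation.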