Wen-Gang Zhou, Hou-Qiang Li, Yijuan Lu, Qi Tian. Encoding Spatial Context for Large-Scale Partial-Duplicate Web Image Retrieval[J]. Journal of Computer Science and Technology, 2014, 29(5): 837-848. DOI: 10.1007/s11390-014-1472-3

Encoding Spatial Context for Large-Scale Partial-Duplicate Web Image Retrieval

Many recent state-of-the-art image retrieval approaches are based on the Bag-of-Visual-Words model and represent an image as a set of visual words by quantizing local SIFT (scale-invariant feature transform) features. Feature quantization reduces the discriminative power of local features and unavoidably causes many false local matches between images, which degrades retrieval accuracy. To filter out these false matches, geometric context among visual words has been widely explored for verifying geometric consistency. However, existing global and local geometric verification methods are either computationally expensive or achieve only limited accuracy. To address this issue, this paper focuses on partial-duplicate Web image retrieval and proposes a scheme that encodes the spatial context for visual matching verification. An efficient affine-enhancement scheme is further proposed to refine the verification results. Experiments on partial-duplicate Web image search over a database of one million images demonstrate the effectiveness and efficiency of the proposed approach, and evaluation on a 10-million-image database further shows its scalability.
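To make the idea of spatial-context verification concrete, the sketch below checks whether matched features in two images agree in their pairwise relative positions and iteratively discards the most inconsistent match. This is a minimal illustration in the spirit of spatial-coding verification, not the algorithm proposed in the paper; the function names `spatial_maps` and `verify_matches`, the drop-the-worst heuristic, and the zero-violation threshold are all illustrative assumptions.

```python
import numpy as np

def spatial_maps(pts):
    """Binary relative-position maps over an (N, 2) array of keypoints:
    X[i, j] = 1 iff point i lies left of point j,
    Y[i, j] = 1 iff point i lies above point j."""
    x, y = pts[:, 0], pts[:, 1]
    X = (x[:, None] < x[None, :]).astype(int)
    Y = (y[:, None] < y[None, :]).astype(int)
    return X, Y

def verify_matches(query_pts, db_pts, max_violations=0):
    """Iteratively drop the match whose relative-position encoding
    disagrees most with its counterpart in the other image, until the
    two spatial maps are consistent. Returns a boolean mask marking
    the matches kept as geometrically consistent."""
    keep = np.ones(len(query_pts), dtype=bool)
    while keep.sum() > 1:
        qX, qY = spatial_maps(query_pts[keep])
        dX, dY = spatial_maps(db_pts[keep])
        # Per-match count of pairwise relative-position violations.
        viol = np.abs(qX - dX).sum(axis=1) + np.abs(qY - dY).sum(axis=1)
        worst = int(viol.argmax())
        if viol[worst] <= max_violations:
            break
        keep[np.flatnonzero(keep)[worst]] = False
    return keep

if __name__ == "__main__":
    q = np.array([[10, 10], [50, 20], [30, 60], [80, 80]], dtype=float)
    d = q + 5.0        # true matches: a pure translation
    d[3] = [5, 5]      # one false match that breaks the relative order
    print(verify_matches(q, d))  # -> [ True  True  True False]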