
Video Captioning Based on Large-Scale Image Datasets

Captioning Videos Using Large-Scale Image Corpus

  • Abstract: With the rapid growth of Web video data, automatic video captioning is becoming increasingly important. Video captioning briefly describes the content of a video; compared with other video processing tasks, it carries richer semantics and is closer to human cognition. It not only helps users quickly understand and locate the videos they need, but also assists in managing video information. However, the scarcity of captioned video datasets severely limits the development of video captioning. This paper therefore proposes a method that applies image captioning corpora to video captioning, addressing two main problems: 1) organically integrating image captioning corpora into video captioning; 2) executing efficiently on massive data. To achieve these goals, we improve the retrieval model used in image captioning and apply it to video captioning, and we adopt hashing techniques to manage the data and speed up queries over large-scale collections. Experiments verify the effectiveness of the method with various hashing algorithms. Compared with the traditional retrieval model, it requires only 1/256 of the memory and 1/64 of the time, and can therefore scale to larger data.

     

    Abstract: Video captioning is the task of assigning complex high-level semantic descriptions (e.g., sentences or paragraphs) to video data. Different from previous video analysis techniques such as video annotation, video event detection and action recognition, video captioning is much closer to human cognition, with a smaller semantic gap. However, the scarcity of captioned video data severely limits the development of video captioning. In this paper, we propose a novel video captioning approach that describes videos by leveraging a freely-available image corpus with abundant literal knowledge. There are two key aspects of our approach: 1) an effective integration strategy bridging videos and images, and 2) high efficiency in handling ever-increasing training data. To achieve these goals, we adopt sophisticated visual hashing techniques to efficiently index and search large-scale images for relevant captions, which extends readily to evolving data and the corresponding semantics. Extensive experimental results on various real-world visual datasets show the effectiveness of our approach with different hashing techniques, e.g., LSH (locality-sensitive hashing), PCA-ITQ (principal component analysis iterative quantization) and supervised discrete hashing, as compared with the state-of-the-art methods. It is worth noting that the empirical computational cost of our approach is much lower than that of an existing method, i.e., it takes 1/256 of the memory requirement and 1/64 of the time cost of the method of Devlin et al.
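    The retrieval-based idea summarized above (hash a large captioned image corpus into compact binary codes, then transfer captions from the hashed neighbors of a video's frames) can be illustrated with a minimal sketch. The sketch below assumes pre-extracted CNN frame/image features and uses random-hyperplane LSH, one of the hashing schemes named in the abstract; the class name LSHCaptionIndex, the n_bits parameter, and the simple caption pooling are illustrative choices, not the paper's actual implementation.

```python
# Minimal sketch: LSH-based indexing of a captioned image corpus and
# caption transfer for video frames. Assumes features are already extracted.
import numpy as np

class LSHCaptionIndex:
    def __init__(self, dim, n_bits=256, seed=0):
        rng = np.random.default_rng(seed)
        # Random hyperplanes define the binary hash (cosine-similarity LSH).
        self.planes = rng.standard_normal((dim, n_bits))
        self.codes = None       # packed binary codes of indexed images
        self.captions = []      # caption associated with each indexed image

    def _encode(self, feats):
        # Sign of the projection onto each hyperplane -> one bit per plane.
        bits = (feats @ self.planes) > 0
        return np.packbits(bits, axis=1)  # pack bits into bytes for compact storage

    def index(self, image_feats, image_captions):
        self.codes = self._encode(image_feats)
        self.captions = list(image_captions)

    def query(self, frame_feats, k=5):
        # Hash the query frames, then rank indexed images by Hamming distance.
        q = self._encode(frame_feats)
        # XOR the packed codes and count differing bits.
        dists = np.unpackbits(q[:, None, :] ^ self.codes[None, :, :], axis=2).sum(axis=2)
        nearest = np.argsort(dists, axis=1)[:, :k]
        # Pool the captions of the nearest images across all frames; a real
        # system would re-rank the pool (e.g., by consensus) to pick one sentence.
        return [self.captions[j] for row in nearest for j in row]
```

    Packing 256-bit codes gives 32 bytes per indexed image, which is why Hamming-distance search over binary codes is far cheaper in memory and time than exhaustive comparison of raw floating-point features; PCA-ITQ or supervised discrete hashing would simply replace the random-hyperplane encoder in this sketch.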

     

