Journal of Computer Science and Technology, 2017, Vol. 32, Issue 3: 520-535    DOI: 10.1007/s11390-017-1741-z
Special Section of CVM 2017
Prior-Free Dependent Motion Segmentation Using Helmholtz-Hodge Decomposition Based Object-Motion Oriented Map
Cui-Cui Zhang1, Zhi-Lei Liu2,*, Member, CCF
1. School of Marine Science and Technology, Tianjin University, Tianjin 300072, China;
2. Tianjin Key Laboratory of Cognitive Computing and Application, School of Computer Science and Technology, Tianjin University, Tianjin 300072, China

Abstract: Motion segmentation in moving-camera videos is a very challenging task because of the motion dependence between the camera and the moving objects. Camera motion compensation is recognized as an effective approach, but existing work depends on prior knowledge of the camera motion and the scene structure for model selection, which is not always available in practice. Moreover, the image-plane motion suffers from depth variations, which leads to depth-dependent motion segmentation in 3D scenes. To solve these two problems, this paper develops a prior-free dependent motion segmentation algorithm by introducing a modified Helmholtz-Hodge decomposition (HHD) based object-motion oriented map (OOM). By decomposing the image motion (optical flow) into a curl-free and a divergence-free component, all kinds of camera-induced image motion can be represented by these two components in an invariant way, so no prior knowledge of the camera motion type is required. With the help of OOM, HHD identifies the camera-induced image motion as a single segment irrespective of depth variations. To extract object motions from the decomposed flow field, we deploy a novel spatio-temporally constrained quadtree labeling. Extensive experimental results on benchmark datasets demonstrate that our method improves on the state-of-the-art methods by 10%-20%, even on challenging scenes with complex backgrounds.
Keywords: prior-free dependent motion segmentation; Helmholtz-Hodge decomposition (HHD); object-motion oriented map (OOM); quadtree labeling
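As background for the decomposition described in the abstract, the standard Helmholtz-Hodge decomposition of a 2D flow field u writes u = ∇φ + ∇⊥ψ + h, where ∇φ is the curl-free (gradient) part, ∇⊥ψ = (-∂ψ/∂y, ∂ψ/∂x) is the divergence-free (rotational) part, and h is a harmonic remainder. The sketch below is a minimal, generic FFT-based projection under a periodic-boundary assumption; the function name helmholtz_hodge_fft, the flow arrays u and v, and the compute_optical_flow helper in the usage comment are illustrative assumptions only and do not reproduce the paper's modified HHD or the object-motion oriented map built on it.

import numpy as np

def helmholtz_hodge_fft(u, v):
    # Split a dense 2D flow field (u, v) into a curl-free and a divergence-free
    # component via Fourier-domain projection. Generic sketch only (assumes
    # periodic image boundaries); not the paper's modified HHD.
    h, w = u.shape
    ky = np.fft.fftfreq(h).reshape(-1, 1)   # frequencies along image rows (y)
    kx = np.fft.fftfreq(w).reshape(1, -1)   # frequencies along image columns (x)
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                          # avoid division by zero at the DC term

    U, V = np.fft.fft2(u), np.fft.fft2(v)
    # Project the flow onto the gradient subspace: (k . F / |k|^2) k is curl-free.
    scale = (U * kx + V * ky) / k2
    U_cf, V_cf = scale * kx, scale * ky
    U_cf[0, 0] = V_cf[0, 0] = 0.0           # leave the mean flow in the remainder

    u_cf = np.real(np.fft.ifft2(U_cf))      # curl-free (gradient) component
    v_cf = np.real(np.fft.ifft2(V_cf))
    u_df, v_df = u - u_cf, v - v_cf         # remainder: divergence-free (+ mean) component
    return (u_cf, v_cf), (u_df, v_df)

# Usage (compute_optical_flow is a hypothetical dense flow estimator):
# flow = compute_optical_flow(frame_t, frame_t1)   # dense flow, shape (H, W, 2)
# (u_cf, v_cf), (u_df, v_df) = helmholtz_hodge_fft(flow[..., 0], flow[..., 1])

In the paper's pipeline, the decomposed components drive the object-motion oriented map, and object regions are then extracted with the spatio-temporally constrained quadtree labeling.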
Received: 2017-01-19
Funding:

This work is supported by the National Natural Science Foundation of China under Grant No. 61503277.

Corresponding author: Zhi-Lei Liu     Email: zhileiliu@tju.edu.cn
About the author: Cui-Cui Zhang received her Ph.D. degree in computer science from Kyoto University, Kyoto, in 2015. She is currently an assistant professor in the School of Marine Science and Technology, Tianjin University, Tianjin. Her research interests are computer graphics and visualization.
Cite this article:
Cui-Cui Zhang, Zhi-Lei Liu. Prior-Free Dependent Motion Segmentation Using Helmholtz-Hodge Decomposition Based Object-Motion Oriented Map[J]. Journal of Computer Science and Technology, 2017, 32(3): 520-535.
Link to this article:
http://jcst.ict.ac.cn:8080/jcst/CN/10.1007/s11390-017-1741-z