Journal of Computer Science and Technology, 2018, Vol. 33, Issue 3: 487-501    DOI: 10.1007/s11390-018-1833-4
Special Section of CVM 2018
Multi-exposure Motion Estimation based on Deep Convolutional Networks
Zhi-Feng Xie1,2, Yu-Chen Guo1, Shu-Han Zhang1, Wen-Jun Zhang1, Li-Zhuang Ma2,3, Member, CCF
1 Department of Film and Television Engineering, Shanghai University, Shanghai 200072, China;
2 Shanghai Engineering Research Center of Motion Picture Special Effects, Shanghai 200072, China;
3 Department of Software Science and Technology, East China Normal University, Shanghai 200062, China

Abstract    In motion estimation, illumination change is a persistent obstacle that often causes severe performance degradation in optical flow computation. The essential reason is that most estimation methods fail to formalize a unified definition, in either the color or the gradient domain, that covers diverse environmental changes. In this paper, we propose a new solution based on deep convolutional networks to address this key issue. Our idea is to train deep convolutional networks to represent the complex motion features under illumination change and to predict the final optical flow fields. To this end, we construct a training dataset of multi-exposure image pairs by performing a series of non-linear adjustments on the traditional optical flow datasets. Our end-to-end network model consists of three main components: a low-level feature network, a fusion feature network, and a motion estimation network. The first two components form the contracting part of our model, which extracts and represents the multi-exposure motion features; the third component is the expanding part, which learns and predicts the high-quality optical flow. Compared with many state-of-the-art methods, our motion estimation based on deep convolutional networks can eliminate the obstacle of illumination change and yields optical flow results with competitive accuracy and time efficiency. Moreover, the good performance of our model is also demonstrated in several multi-exposure video applications, such as HDR (High Dynamic Range) composition and flicker removal.
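As a rough illustration of the dataset construction described in the abstract, the sketch below re-exposes one frame of an existing optical flow image pair with a random gain and gamma curve. The specific non-linear adjustments, parameter ranges, and function names used here (adjust_exposure, make_multi_exposure_pair) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np


def adjust_exposure(img, gain=1.5, gamma=0.8):
    """Apply a simple non-linear exposure change to an image with values in [0, 1].

    The gain and gamma curve are illustrative choices; the paper's exact family
    of non-linear adjustments is not specified in the abstract.
    """
    out = np.clip(img * gain, 0.0, 1.0)  # linear exposure gain
    return out ** gamma                  # non-linear tone (gamma) curve


def make_multi_exposure_pair(frame1, frame2, rng=None):
    """Turn a normally exposed frame pair from an optical flow dataset
    into a multi-exposure pair by re-exposing the second frame only.
    """
    rng = np.random.default_rng() if rng is None else rng
    gain = rng.uniform(0.5, 2.0)   # under- to over-exposure
    gamma = rng.uniform(0.6, 1.6)  # non-linear response curve
    return frame1, adjust_exposure(frame2, gain, gamma)


if __name__ == "__main__":
    # Synthetic stand-ins for two consecutive frames (H x W x 3, values in [0, 1]).
    f1 = np.random.rand(64, 64, 3)
    f2 = np.random.rand(64, 64, 3)
    a, b = make_multi_exposure_pair(f1, f2)
    print(a.shape, b.shape, float(b.min()), float(b.max()))
```

Because such an adjustment is purely photometric and moves no pixels, the original ground-truth flow of the pair remains valid for the synthesized multi-exposure pair.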
Keywords    motion estimation; optical flow; CNN; multi-exposure
Received: 2017-12-27
Fund: This work was supported by the National Natural Science Foundation of China under Grant Nos. 61303093, 61472245, and 61402278, the Innovation Program of the Science and Technology Commission of Shanghai Municipality of China under Grant No. 16511101300, and the Gaofeng Film Discipline Grant of Shanghai Municipal Education Commission of China.

About author: Zhi-Feng Xie received his Ph.D. degree in computer application technology from Shanghai Jiao Tong University, Shanghai, in 2013. He was a research assistant at the Department of Computer Science, City University of Hong Kong, Hong Kong, in 2011. He is now an assistant professor with Shanghai University, Shanghai. His research interests include image/video editing, computer graphics, and digital media technology.
Cite this article:   
Zhi-Feng Xie, Yu-Chen Guo, Shu-Han Zhang, Wen-Jun Zhang, Li-Zhuang Ma. Multi-exposure Motion Estimation based on Deep Convolutional Networks[J]. Journal of Computer Science and Technology, 2018, 33(3): 487-501.
URL:  
http://jcst.ict.ac.cn:8080/jcst/EN/10.1007/s11390-018-1833-4