Journal of Computer Science and Technology

   

Motion-inspired Real-time Garment Synthesis with Temporal-consistency

Yu-Kun Wei1(魏育坤), Member, CCF, Min Shi1,*(石敏), Member, CCF, Wen-Ke Feng1(冯文科), Member, CCF, Deng-Ming Zhu2(朱登明), Member, CCF, Tian-Lu Mao2(毛天露), Member, CCF   

  1 School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China
    2 Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
  • Published: 2022-09-07
  • Contact: Min Shi, E-mail: shi_min@ncepu.edu.cn
  • About the author: Min Shi is an associate professor in the School of Control and Computer Engineering, North China Electric Power University, Beijing. She received her Ph.D. degree in computer science and technology from the Chinese Academy of Sciences, Beijing, in 2013. Her research interests include cloth simulation, computer vision, and virtual reality.

Synthesizing garment dynamics from body motion is a vital technique in computer graphics. Physics-based simulation relies on an accurate model of cloth kinetics, which is time-consuming, hard to implement, and complex to control. Existing data-driven approaches either lack temporal consistency or fail to handle garments whose topology differs from that of the body. In this paper, we present a motion-inspired real-time garment synthesis workflow that enables high-level control of garment shape. Given a sequence of body motions, our workflow generates the corresponding garment dynamics with both spatial and temporal coherence. To that end, we develop a Transformer-based garment synthesis network that learns the mapping from body motion to garment dynamics. Frame-level attention is employed to capture the dependency between garment and body motion. Moreover, a post-processing procedure performs penetration removal and auto-texturing, yielding textured clothing animation that is collision-free and temporally consistent. We evaluate the proposed workflow quantitatively and qualitatively from different aspects. Extensive experiments demonstrate that our network delivers clothing dynamics that retain the wrinkles of the physics-based simulation it is trained on, while running 1000 times faster. Moreover, our workflow achieves superior synthesis performance compared with alternative approaches. To stimulate further research in this direction, our code will be made publicly available.
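The abstract describes a Transformer-based network with frame-level attention that maps a body motion sequence to garment dynamics. The following is only a minimal sketch of such an architecture under assumed inputs and sizes; the class name MotionToGarmentNet, the pose dimension, the vertex count, and all hyperparameters are illustrative and are not taken from the paper.

```python
# Minimal sketch of a Transformer-based motion-to-garment regressor (PyTorch).
# All names and sizes (MotionToGarmentNet, pose_dim, num_verts, d_model, ...)
# are illustrative assumptions, not the authors' released implementation.
import torch
import torch.nn as nn

class MotionToGarmentNet(nn.Module):
    def __init__(self, pose_dim=72, num_verts=8000, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)                  # per-frame motion embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)    # frame-level self-attention
        self.decode = nn.Linear(d_model, num_verts * 3)            # per-frame garment vertex offsets

    def forward(self, poses):                    # poses: (batch, frames, pose_dim)
        h = self.encoder(self.embed(poses))      # each frame attends to its temporal context
        offsets = self.decode(h)                 # (batch, frames, num_verts * 3)
        return offsets.view(poses.shape[0], poses.shape[1], -1, 3)

# Example: garment vertex offsets for a 30-frame motion clip.
net = MotionToGarmentNet()
garment_offsets = net(torch.randn(1, 30, 72))    # shape (1, 30, 8000, 3)
```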


Chinese Abstract

1. Background
With the rapid development of computer graphics, garment animation plays an increasingly important role in film, animation, and related fields. Its core problem is how to generate a continuous and stable sequence of garment deformations from a given sequence of body motions. An automated garment animation technique not only reduces labor and time costs, but also provides rich visual effects, stimulates creative inspiration, and assists artists in animation production.
2. Objective
Garment animation is mainly produced either by physics-based simulation or by example-based, data-driven methods. Physics-based simulation is computationally expensive, requires complex simulation parameters, and its results are hard to control, so artists have to tune them iteratively, which is inefficient. Existing data-driven methods struggle to guarantee temporal consistency of the generated garment deformations and are usually limited to simple garments whose topology matches that of the body. This paper therefore aims to build an efficient and stable data-driven garment animation simulator.
3. Method
We propose a Transformer-based, temporally consistent garment animation method. It learns the mapping between body motion sequences and garment deformation sequences, and introduces a frame-level attention mechanism to model the dependency between the current animation frame and its temporal context. Finally, a post-processing step performs penetration correction and texture mapping (see the first sketch after this summary), producing garment deformation sequences that are temporally consistent with the body motion.
4. Results
Quantitative and qualitative analysis of the experimental results shows that, on the one hand, our data-driven garment animation simulator is 1000 times faster than a physics engine (ARCSim) and generates temporally consistent garment deformations simply by changing the high-level body motion representation, without iteratively tuning simulation parameters; on the other hand, compared with existing data-driven methods, our method clearly improves prediction performance: the RMSE and Hausdorff errors of the generated garment vertices are reduced by 28%-58% and 10%-40%, respectively, and the STED error of the generated garment sequences is reduced by 12%-21% (see the second sketch after this summary).
5. Conclusion
We propose a data-driven garment animation method that generates temporally consistent garment deformations from body motion parameters alone. It is 1000 times faster than physics-based simulation (ARCSim), and compared with existing example-based, data-driven methods it performs better in prediction quality and temporal consistency. In the future, we will further investigate the generative capability and integrability of the model.
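Item 3 above (and the English abstract) mentions a post-processing step for penetration removal. A common way to implement such a pass, shown below purely as an assumption about how it could work rather than as the paper's actual procedure, is to detect garment vertices that fall inside the body mesh and push them back out along the nearest body-surface normal with a small margin.

```python
# Hypothetical penetration-removal pass: push garment vertices detected inside
# the body back along the nearest body-surface normal with a small margin eps.
# This mirrors common practice and is not necessarily the paper's exact step.
import numpy as np
from scipy.spatial import cKDTree

def remove_penetrations(garment_verts, body_verts, body_normals, eps=2e-3):
    """garment_verts: (Ng, 3); body_verts, body_normals: (Nb, 3), normals unit-length."""
    tree = cKDTree(body_verts)
    _, idx = tree.query(garment_verts)                  # nearest body vertex per garment vertex
    offset = garment_verts - body_verts[idx]
    signed = np.einsum('ij,ij->i', offset, body_normals[idx])   # < 0 roughly means inside the body
    inside = signed < eps
    fixed = garment_verts.copy()
    fixed[inside] += (eps - signed[inside])[:, None] * body_normals[idx[inside]]
    return fixed                                        # penetrating vertices moved just outside
```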
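Item 4 reports per-vertex RMSE and Hausdorff errors. The snippet below shows how these two standard per-frame metrics can be computed for a predicted garment against the ground truth; the STED metric is omitted, and the paper's exact evaluation protocol (e.g., how errors are averaged over frames and sequences) may differ.

```python
# Per-frame vertex RMSE and symmetric Hausdorff distance between a predicted
# garment and its ground truth; the paper's exact averaging protocol may differ.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def vertex_rmse(pred, gt):
    """Root-mean-square error over corresponding vertices, both of shape (N, 3)."""
    return np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1)))

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance between the two vertex sets."""
    return max(directed_hausdorff(pred, gt)[0], directed_hausdorff(gt, pred)[0])
```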

Key words: clothing animation; computer graphics; Transformer; temporal consistency
