Motion-Inspired Real-Time Garment Synthesis with Temporal Consistency
Abstract:
Background: With the rapid development of computer graphics, garment animation plays an increasingly important role in film, animation, and related fields. Its core problem is how to generate a continuous, stable sequence of garment deformations from a given sequence of body motions. An automated garment animation technique not only reduces labor and time costs, but also provides rich visual effects, inspires creativity, and assists artists in animation production.
Objective: Garment animation is produced mainly by physics-based simulation or by example-based, data-driven methods. Physics-based simulation is computationally expensive, requires complex simulation parameters, and produces results that are hard to control, forcing artists to tune parameters iteratively, which is inefficient. Existing data-driven methods, in turn, struggle to guarantee temporal continuity of the generated deformations and are usually limited to simple garments whose topology matches the body. We therefore aim to build an efficient, stable, data-driven garment animation simulator.
Method: We propose a transformer-based temporal garment animation method. It learns the mapping from body motion sequences to garment deformation sequences and introduces a frame-level attention mechanism to model the dependency between the current animation frame and its context. A post-processing step then performs penetration removal and texture mapping, producing garment deformation sequences that are temporally consistent with the body motion.
Results: Quantitative and qualitative analysis shows that, on the one hand, our data-driven garment animation simulator is 1,000 times faster than a physics engine (ARCSim) and generates temporally consistent garment deformations simply by changing the high-level body motion representation, without iterative tuning of physical simulation parameters; on the other hand, compared with existing data-driven methods, our method clearly improves prediction performance: the per-vertex RMSE and Hausdorff errors of the generated garment meshes are reduced by 28%–58% and 10%–40%, respectively, and the STED error of the generated garment sequences is reduced by 12%–21%.
Conclusion: We present a data-driven garment animation method that generates temporally consistent garment deformations from body motion parameters alone. It is 1,000 times faster than physics-based simulation (ARCSim) and outperforms existing example-based, data-driven methods in prediction accuracy and temporal consistency. In future work, we will further investigate the generative capability and integrability of the model.

Abstract: Synthesizing garment dynamics according to body motions is a vital technique in computer graphics. Physics-based simulation depends on an accurate model of cloth dynamics, which is time-consuming, hard to implement, and difficult to control. Existing data-driven approaches either lack temporal consistency or fail to handle garments whose topology differs from the body's. In this paper, we present a motion-inspired real-time garment synthesis workflow that enables high-level control of garment shape. Given a sequence of body motions, our workflow generates the corresponding garment dynamics with both spatial and temporal coherence. To that end, we develop a transformer-based garment synthesis network to learn the mapping from body motions to garment dynamics. Frame-level attention is employed to capture the dependency between garment dynamics and body motions. Moreover, a post-processing procedure performs penetration removal and auto-texturing, yielding textured clothing animation that is collision-free and temporally consistent. We evaluated the proposed workflow quantitatively and qualitatively from multiple aspects. Extensive experiments demonstrate that our network delivers clothing dynamics that retain the wrinkles of physics-based simulation while running 1,000 times faster. In addition, our workflow achieves superior synthesis performance compared with alternative approaches. To stimulate further research in this direction, our code will be made publicly available.
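
The method described above centers on a transformer that reads a window of body-motion frames and regresses per-frame garment deformations through frame-level attention. The following is a minimal sketch of that idea in PyTorch; it is not the authors' implementation, and the module name MotionToGarment, the pose dimensionality, and the garment vertex count are illustrative assumptions (positional encoding and the penetration-removal and texturing post-process are omitted).

# Minimal sketch, assuming PyTorch; names and sizes are hypothetical, not the paper's API.
import torch
import torch.nn as nn

class MotionToGarment(nn.Module):
    # Frame-level attention over a body-motion sequence, regressing garment vertex offsets.
    def __init__(self, pose_dim=75, num_verts=4000, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)                # per-frame motion embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, num_verts * 3)            # per-frame vertex displacements

    def forward(self, motion):
        # motion: (batch, frames, pose_dim); positional encoding omitted for brevity
        h = self.encoder(self.embed(motion))                     # attention across frames
        offsets = self.head(h)                                   # (batch, frames, num_verts * 3)
        return offsets.view(motion.shape[0], motion.shape[1], -1, 3)

model = MotionToGarment()
poses = torch.randn(1, 30, 75)     # 30 frames of body motion parameters
print(model(poses).shape)          # torch.Size([1, 30, 4000, 3])

In such a setup, the predicted offsets would be added to a template garment mesh, and the resulting frames would then pass through penetration removal against the posed body and texture mapping, as the workflow's post-processing stage describes.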