Motion Style Transfer For 3D Character Animation | Game Futurology #26
This is an episode of the video series "Game Futurology" covering the paper "Unpaired Motion Style Transfer from Video to Animation" by Kfir Aberman, Yijia Weng, Dani Lischinski, Daniel Cohen-Or, and Baoquan Chen.

PDF: https://deepmotionediting.github.io/papers/Motion_Style_Transfer-camera-ready.pdf
Authors' Video: https://www.youtube.com/watch?v=m04zuBSdGrc

Game Futurology is a video series consisting of short 2-3 minute overviews of research papers in the field of AI and game development. The series aims to ponder what future games might look like based on the latest academic research in the field today. Subscribe for more weekly videos!

Abstract: Transferring the motion style from one animation clip to another, while preserving the motion content of the latter, has been a long-standing problem in character animation. Most existing data-driven approaches are supervised and rely on paired data, where motions with the same content are performed in different styles. In addition, these approaches are limited to transfer of styles that were seen during training. In this paper, we present a novel data-driven framework for motion style transfer, which learns from an unpaired collection of motions with style labels, and enables transferring motion styles not observed during training. Furthermore, our framework is able to extract motion styles directly from videos, bypassing 3D reconstruction, and apply them to the 3D input motion. Our style transfer network encodes motions into two latent codes, for content and for style, each of which plays a different role in the decoding (synthesis) process. While the content code is decoded into the output motion by several temporal convolutional layers, the style code modifies deep features via temporally invariant adaptive instance normalization (AdaIN). Moreover, while the content code is encoded from 3D joint rotations, we learn a common embedding for style from either 3D or 2D joint positions, enabling style extraction from videos. Our results are comparable to the state-of-the-art, despite not requiring paired training data, and outperform other methods when transferring previously unseen styles. To our knowledge, we are the first to demonstrate style transfer directly from videos to 3D animations - an ability which enables one to extend the set of style examples far beyond motions captured by MoCap systems.

Music Credits: https://www.fesliyanstudios.com/

• YouTube - https://www.youtube.com/c/DeepGamingA...
• Twitter - https://twitter.com/deepgamingai
• Medium - https://medium.com/@chintan.t93
• GitHub - https://github.com/ChintanTrivedi
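The decoding scheme the abstract describes - temporal convolutions decode a content code into motion, while a style code modulates deep features through temporally invariant AdaIN - can be sketched in a few lines. Below is a minimal PyTorch-style sketch of that idea only; the module names, channel sizes, and layer counts are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch of AdaIN-based motion style transfer decoding, assuming
# hypothetical dimensions: a content code from temporal convolutions over
# joint rotations, and a time-pooled style code.
import torch
import torch.nn as nn

def adain(content_feat, style_params):
    # content_feat: (batch, channels, time); style_params: (batch, 2*channels)
    # Normalize each channel over time, then rescale/shift with style-derived
    # statistics. Because gamma/beta are constant over time, the modulation
    # is temporally invariant.
    mean = content_feat.mean(dim=2, keepdim=True)
    std = content_feat.std(dim=2, keepdim=True) + 1e-5
    gamma, beta = style_params.chunk(2, dim=1)
    return (content_feat - mean) / std * gamma.unsqueeze(2) + beta.unsqueeze(2)

class StyleTransferDecoder(nn.Module):
    def __init__(self, content_dim=144, style_dim=64, hidden=128, out_dim=144):
        super().__init__()
        # Temporal (1D) convolutions decode the content code into motion.
        self.conv1 = nn.Conv1d(content_dim, hidden, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(hidden, out_dim, kernel_size=3, padding=1)
        # An MLP maps the style code to per-channel AdaIN gains and biases.
        self.to_adain = nn.Linear(style_dim, 2 * hidden)

    def forward(self, content_code, style_code):
        # content_code: (batch, content_dim, time), from encoded joint rotations
        # style_code:   (batch, style_dim), pooled over time so it carries no
        #               temporal structure of its own
        h = torch.relu(self.conv1(content_code))
        h = adain(h, self.to_adain(style_code))  # style modulates deep features
        return self.conv2(h)

# Usage: combine one clip's content with another clip's style.
decoder = StyleTransferDecoder()
content = torch.randn(1, 144, 60)  # hypothetical 60-frame content encoding
style = torch.randn(1, 64)         # hypothetical style embedding (from 3D or 2D)
motion = decoder(content, style)   # (1, 144, 60) stylized motion features

Because the style embedding in the paper is learned from either 3D or 2D joint positions, a style vector extracted from an ordinary video can be fed into such a decoder in place of one extracted from MoCap data.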

This video was published on 2020-08-07 19:11:49 GMT by @DeepGamingAI on YouTube. DeepGamingAI has 5.4K subscribers on YouTube, 71 videos, and 106K total views. This video received 25 likes, which is lower than the channel's average, and 1 comment, also lower than the channel's average; the channel averages 2K views per video, and this video's views were below that average as well. Hashtags used frequently in this post: #26 #DeepLearning #GameDesign #GenerativeModeling #Animation #GAN.