
MotionFix: Text-Driven 3D
Human Motion Editing

SIGGRAPH Asia 2024 

Nikos Athanasiou1,  Alpár Cseke1,4,  Markos Diomataris1,3,  Michael J. Black1,  Gül Varol2

1Max Planck Institute for Intelligent Systems, Germany, 2LIGM, École des Ponts, Univ Gustave Eiffel, CNRS, France, 3ETH Zürich, Switzerland, 4Meshcapade, Germany 

arXiv   🎥 Video  HuggingFace Demo 
🔍 Explore MotionFix  💻 Code & 📀 Data 📧 Contact 

Video



What is MotionFix?

The MotionFix dataset is the first benchmark for text-driven 3D human motion editing.
It contains triplets of a source motion, a target motion, and an edit text describing the desired modification.
The dataset supports both training and evaluation of models for text-based motion editing.

Example triplet (motion pair 002277_0_120-009407_0_120), edit text: "Do it in a smaller circle"
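To make the triplet structure concrete, here is a minimal sketch of how one MotionFix record could be represented in code. The class and field names are illustrative assumptions, not the released data format:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical representation of a MotionFix triplet (field names are
# illustrative; the released dataset may use a different layout).
@dataclass
class MotionEditTriplet:
    source: np.ndarray   # (n_frames, n_feats) source motion features
    target: np.ndarray   # (n_frames, n_feats) edited target motion
    edit_text: str       # natural-language description of the edit

# Example mirroring the triplet shown above (feature dimension is a guess).
example = MotionEditTriplet(
    source=np.zeros((120, 6)),
    target=np.zeros((120, 6)),
    edit_text="Do it in a smaller circle",
)
```

A dataset of such triplets supports supervised training: the model sees the source and the edit text, and is fit to produce the target.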


Visit our Data Exploration Webpage!


What is TMED?

TMED is a conditional diffusion model trained on MotionFix to perform motion editing. It conditions on both the source motion and the edit text to generate the edited motion, using a Transformer encoder as the denoiser.
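The conditioning interface of such a model can be sketched as a standard diffusion sampling loop whose denoiser receives the source motion and text embedding alongside the noisy sample. This is a hypothetical illustration, not the authors' implementation: the denoiser below is a stub standing in for the Transformer encoder, and all shapes and schedule values are assumptions:

```python
import numpy as np

def stub_denoiser(x_t, t, source_motion, text_emb):
    """Stand-in for the Transformer-encoder denoiser.

    A real model would feed [noisy target, source motion, text embedding,
    timestep] as tokens to a Transformer encoder and predict the noise;
    here we return zeros of the right shape so the loop runs.
    """
    return np.zeros_like(x_t)

def edit_motion(source_motion, text_emb, n_frames=60, n_feats=6,
                steps=50, seed=0):
    """Generate an edited motion by iterative DDPM-style denoising."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)   # assumed noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal((n_frames, n_feats))  # start from pure noise
    for t in reversed(range(steps)):
        eps = stub_denoiser(x, t, source_motion, text_emb)
        # Standard DDPM posterior mean, then add noise except at t = 0.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x
```

The key point is the signature of the denoiser: both conditioning signals are present at every denoising step, so the generated motion can stay close to the source while realizing the textual edit.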

Editing Different Motions

Acknowledgements


We are grateful to Tsvetelina Alexiadis for valuable guidance and management of both the studies and the data collection process.
We are also grateful to Arina Kuznetcova, Asuka Bertler, Claudia Gallatz, Suraj Bhor, and Tithi Rakshit for data annotation, and to Taylor McConnell and Tomasz Niewiadomski for data annotation and help with the design of the annotation interface.

Their help was invaluable in collecting the data and completing the project.

The authors would like to thank Benjamin Pellkofer for IT support and the development of the data exploration webpage. His help was essential for making this dataset easily accessible to users.


We also thank Lea Müller for initial discussions about diffusion models and Mathis Petrovich for discussions about human motion representations and diffusion details. The first author is also thankful to Peter Kulits for his support and seatmating. The first author would also like to thank the members of Imagine Lab for hosting him and providing a welcoming research environment.

Finally, we thank Yuliang Xiu for proofreading.

BibTeX

@inproceedings{athanasiou2024motionfix,
  title = {{MotionFix}: Text-Driven 3D Human Motion Editing},
  author = {Athanasiou, Nikos and Cseke, Alp{\'a}r and Diomataris, Markos and Black, Michael J. and Varol, G{\"u}l},
  booktitle = {SIGGRAPH Asia 2024 Conference Papers},
  year = {2024}
}
© 2024 Max-Planck-Gesellschaft - Imprint - Privacy Policy - License