TalkinNeRF: Animatable Neural Fields for
Full-Body Talking Humans


Aggelina Chatziagapi1
Bindita Chaudhuri3
Amit Kumar2
Rakesh Ranjan2
Dimitris Samaras1
Nikolaos Sarafianos2

1Stony Brook University
2Meta Reality Labs
3Flawless AI

ECCV Workshops 2024




Given monocular videos, TalkinNeRF learns a unified NeRF-based network that represents the holistic 4D human motion, including body pose, hand articulation, and facial expressions. It synthesizes high-quality animations of full-body talking humans.



Abstract

We introduce a novel framework that learns a dynamic neural radiance field (NeRF) for full-body talking humans from monocular videos. Prior work represents only the body pose or the face. However, humans communicate with their full body, combining body pose, hand gestures, and facial expressions. In this work, we propose TalkinNeRF, a unified NeRF-based network that represents the holistic 4D human motion. Given a monocular video of a subject, we learn corresponding modules for the body, face, and hands, which are combined to generate the final result. To capture complex finger articulation, we learn an additional deformation field for the hands. Our multi-identity representation enables simultaneous training on multiple subjects, as well as robust animation under completely unseen poses. It can also generalize to novel identities, given only a short video as input. We demonstrate state-of-the-art performance for animating full-body talking humans, with fine-grained hand articulation and facial expressions.



Method



Overview of TalkinNeRF. Given a monocular video of a subject, we learn a unified NeRF-based network that represents their holistic 4D motion. Corresponding modules for the body, face, and hands are combined to synthesize the final full-body talking human. By learning an identity code per video, our method can be trained on multiple identities simultaneously.
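To make the module composition concrete, below is a minimal NumPy sketch of one plausible way such per-part NeRF outputs could be combined along a camera ray. The paper does not specify its exact combination rule here, so the `combine_modules` strategy (keeping, at each 3D sample, the part with the highest predicted density) and all function names are illustrative assumptions; the `render_ray` step is standard NeRF alpha compositing.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    # Standard NeRF volume rendering: alpha-composite samples along a ray.
    # densities: (N,), colors: (N, 3), deltas: (N,) inter-sample distances.
    alphas = 1.0 - np.exp(-densities * deltas)                       # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)                   # final RGB, (3,)

def combine_modules(outputs):
    # Hypothetical combination rule (assumption, not the paper's exact scheme):
    # at each sample point, keep the module (body, face, or hands) that
    # predicts the highest density, so each region of space is governed
    # by the part that occupies it.
    dens = np.stack([d for d, _ in outputs])    # (num_modules, N)
    cols = np.stack([c for _, c in outputs])    # (num_modules, N, 3)
    idx = dens.argmax(axis=0)                   # winning module per sample, (N,)
    n = np.arange(dens.shape[1])
    return dens[idx, n], cols[idx, n]

# Toy usage: two samples on a ray, body occupies the first, face the second.
body = (np.array([5.0, 0.0]), np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]))
face = (np.array([0.0, 5.0]), np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]))
dens, cols = combine_modules([body, face])
rgb = render_ray(dens, cols, deltas=np.array([1.0, 1.0]))
```

In practice each module would be a neural field queried at posed 3D points (with the hands additionally warped by their deformation field), but the compositing step above captures the idea of fusing per-part predictions into one renderable volume.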



Demo




BibTeX

If you find our work useful, please consider citing our paper:
@inproceedings{chatziagapi2024talkinnerf,
  title={TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans},
  author={Aggelina Chatziagapi and Bindita Chaudhuri and Amit Kumar and Rakesh Ranjan and Dimitris Samaras and Nikolaos Sarafianos},
  booktitle={ECCV Workshops},
  year={2024}
}