Our proposed MIGS (Multi-Identity Gaussian Splatting) learns a single neural representation for multiple identities based on tensor decomposition. It enables robust animation of human avatars under novel poses that lie outside the training distribution.
We introduce MIGS (Multi-Identity Gaussian Splatting), a novel method that learns a single neural representation for multiple identities, using only monocular videos. Recent 3D Gaussian Splatting (3DGS) approaches for human avatars require per-identity optimization. However, learning a multi-identity representation presents advantages in robustly animating humans under arbitrary poses. We propose to construct a high-order tensor that combines all the learnable 3DGS parameters for all the training identities. By assuming a low-rank structure and factorizing the tensor, we model the complex rigid and non-rigid deformations of multiple subjects in a unified network, significantly reducing the total number of parameters. Our proposed approach leverages information from all the training identities, enabling robust animation under challenging unseen poses, outperforming existing approaches. We also demonstrate how it can be extended to learn unseen identities.
Overview of MIGS. Given monocular videos of multiple identities, we learn a unified 3DGS representation for human avatars based on CP tensor decomposition. We construct a tensor W of all the 3DGS parameters of all our training identities. We assume a low-rank structure and decompose it into U1, U2, U3. By leveraging information from the diverse deformations of different subjects, MIGS enables robust animation under novel challenging poses.
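The low-rank CP factorization described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the tensor shapes, the rank R, and the parameter count per Gaussian are assumed values chosen for the example, and in practice the factor matrices U1, U2, U3 would be learned by optimization rather than sampled randomly.

```python
import numpy as np

# Illustrative sketch of CP decomposition over 3DGS parameters.
# All dimensions below are assumptions for demonstration, not the
# paper's actual configuration.
num_identities = 4     # training identities (mode 1)
num_gaussians = 1000   # Gaussians per avatar (mode 2)
num_params = 59        # learnable parameters per Gaussian (mode 3)
R = 32                 # assumed CP rank (low-rank structure)

rng = np.random.default_rng(0)
# One factor matrix per tensor mode; learnable in a real system.
U1 = rng.standard_normal((num_identities, R))  # identity mode
U2 = rng.standard_normal((num_gaussians, R))   # Gaussian mode
U3 = rng.standard_normal((num_params, R))      # parameter mode

# CP reconstruction of the full parameter tensor:
# W[i, n, p] = sum_r U1[i, r] * U2[n, r] * U3[p, r]
W = np.einsum('ir,nr,pr->inp', U1, U2, U3)

# Each identity's 3DGS parameters are a slice of the shared tensor.
params_identity_0 = W[0]  # shape: (num_gaussians, num_params)

# The factorized form stores far fewer parameters than the full tensor.
factorized_size = U1.size + U2.size + U3.size
full_size = W.size
```

Because only the three factor matrices are stored, the parameter count grows as R·(I + N + P) instead of I·N·P, which is where the reduction in total parameters comes from.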
If you find our work useful, please consider citing our paper: