AvatarCLIP: Zero-Shot Text-Driven Generation and
Animation of 3D Avatars

  • 1S-Lab, Nanyang Technological University
  • 2SenseTime Research
  • 3Shanghai AI Laboratory
  • *equal contribution
  • ✉corresponding author
SIGGRAPH 2022 (Journal Track)
TL;DR: AvatarCLIP generates and animates 3D avatars given natural-language descriptions of body shapes, appearances and motions.
Abstract
3D avatar creation plays a crucial role in the digital age. However, the production process is prohibitively time-consuming and labor-intensive. To democratize this technology to a larger audience, we propose AvatarCLIP, a zero-shot text-driven framework for 3D avatar generation and animation. Unlike professional software that requires expert knowledge, AvatarCLIP empowers non-expert users to customize a 3D avatar with the desired shape and texture, and to drive the avatar with described motions, using natural language alone.
Our key insight is to take advantage of the powerful vision-language model CLIP for supervising neural human generation, in terms of 3D geometry, texture and animation. Specifically, driven by natural language descriptions, we initialize 3D human geometry generation with a shape VAE network. Based on the generated 3D human shapes, a volume rendering model is utilized to further facilitate geometry sculpting and texture generation. Moreover, by leveraging the priors learned in the motion VAE, a CLIP-guided reference-based motion synthesis method is proposed for the animation of the generated 3D avatar. Extensive qualitative and quantitative experiments validate the effectiveness and generalizability of AvatarCLIP on a wide range of avatars. Remarkably, AvatarCLIP can generate unseen 3D avatars with novel animations, achieving superior zero-shot capability.
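To make the CLIP-supervision idea concrete, below is a minimal PyTorch sketch of the core optimization loop, assuming the openai/CLIP package: encode the text prompt once, then repeatedly "render" the avatar and minimize the cosine distance between CLIP's image and text embeddings. The learnable image tensor is a toy stand-in for the differentiable volume renderer, and the prompt is illustrative; this is a conceptual sketch, not the official implementation.

# Minimal sketch of CLIP-guided optimization, assuming the official
# openai/CLIP package (pip install git+https://github.com/openai/CLIP.git).
# The learnable image below is a toy stand-in for a differentiable
# renderer of the implicit avatar; prompt and hyperparameters are illustrative.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
for p in model.parameters():      # freeze CLIP; only the "avatar" is optimized
    p.requires_grad_(False)

# Encode the target description once; it stays fixed during optimization.
tokens = clip.tokenize(["a 3D rendering of a warrior"]).to(device)
with torch.no_grad():
    text_feat = model.encode_text(tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# CLIP's input normalization constants.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

# Toy differentiable "render": a learnable 224x224 RGB image.
render = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([render], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    img_feat = model.encode_image((render.clamp(0, 1) - mean) / std)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = 1.0 - (img_feat * text_feat).sum()   # cosine distance to the prompt
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(f"step {step}: CLIP loss = {loss.item():.4f}")

In the full pipeline, gradients flow through the volume renderer into the implicit geometry and texture networks rather than into a raw image, and renders from multiple randomly sampled camera views are supervised to avoid degenerate, view-inconsistent solutions.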
Avatar Gallery

Hover over a video to inspect the geometry. Click 'Show More' to see more results, or 'Load Model' to view the 3D model.

Download links to the models (GLB format) are provided in the popup window after clicking 'Load Model'.

The FBX models are available for download as a single zip file.
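For programmatic inspection of a downloaded model, here is a small Python sketch using trimesh (an assumed third-party library; any glTF 2.0 loader works equally well). The file name 'avatar.glb' is a placeholder for one of the gallery downloads.

# Inspect a downloaded GLB avatar with trimesh (pip install trimesh).
# "avatar.glb" is a placeholder for any model downloaded from the gallery.
import trimesh

scene = trimesh.load("avatar.glb")            # GLB files load as a Scene
mesh = trimesh.util.concatenate(
    list(scene.geometry.values())             # merge all parts into one mesh
)
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
mesh.show()                                   # interactive viewer (needs pyglet)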

(a) Celebrity
Abraham Lincoln
Barack Obama
Donald Trump
Hillary Clinton
Joe Biden
Geoffrey Hinton
Yoshua Bengio
Yann LeCun
Andrew Ng
Bill Gates
Elvis Presley
Freddie Mercury
Drake
Kanye West
John Lennon
Ellen DeGeneres
Karl Lagerfeld
Simon Cowell
Cristiano Ronaldo
Lionel Messi
Steve Jobs
Pope
Leonardo DiCaprio
Tom Cruise
Keanu Reeves
(b) Fictional Character
Ant-Man
Captain America
Thor
Loki
Doctor Strange
Superman
Deadpool
Batman
Nick Fury
Captain Marvel
Luke Skywalker
Jedi
Harry Potter
Sheldon Cooper
Forrest Gump
the Godfather
Sherlock Holmes
Flynn Rider
Elsa
Gintoki
James Bond
John Wick
Godzilla
Alien
Zombie
(c) General Description
Witch
Wizard
Robot
Warrior
Ancient Prince of India
Teenager
Senior Citizen
Gardener
Construction Manager
Casino Dealer
Rock Star
Soldier
Firefighter
Pilot
Astronaut
Scientist
Researcher
Professor
Software Developer
Lawyer
Accountant
Police Officer
Graduate Student
High School Teacher
Doctor
Motion Generation Results
Arguing
Brushing Teeth
Kicking Soccer
Raising Both Arms
Interpolation
Baseline
Ours
Supplementary Video
Bibtex
@article{hong2022avatarclip,
    title={AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars},
    author={Hong, Fangzhou and Zhang, Mingyuan and Pan, Liang and Cai, Zhongang and Yang, Lei and Liu, Ziwei},
    journal={ACM Transactions on Graphics (TOG)},
    volume={41},
    number={4},
    pages={1--19},
    year={2022},
    publisher={ACM New York, NY, USA}
}
            
Related Works

There are lots of wonderful works that might interest you.

+ EVA3D is the first high-quality unconditional 3D human generative model that only requires 2D image collections for training.

+ MotionDiffuse is the first diffusion-model-based text-driven motion generation framework with probabilistic mapping, realistic synthesis and multi-level manipulation ability.

+ StyleGAN-Human scales up high-quality 2D human dataset and achieves impressive 2D human generation results.

+ Text2Human proposes a text-driven controllable human image generation framework.

There are lots of wonderful works that inspired our work or appeared around the same time as ours.

+ Dream Fields enables zero-shot text-driven general 3D object generation using CLIP and NeRF.

+ Text2Mesh proposes to edit a template mesh by predicting offsets and colors per vertex using CLIP and differentiable rendering.

+ CLIP-NeRF can manipulate 3D objects represented by NeRF with natural language or exemplar images by leveraging CLIP.

+ Text to Mesh facilitates zero-shot text-driven general mesh generation by deforming a sphere mesh under CLIP guidance.

Acknowledgement

This study is supported by NTU NAP, MOE AcRF Tier 2 (T2EP20221-0033), and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).

We referred to the project page of Nerfies when creating this project page.