Codec Avatars: Immersive Telepresence with Lifelike Avatars | Meta

What are Codec Avatars?

Codec Avatars allow people in different places to interact using eye contact, subtle shifts in expression, posture and gesture. This fully embodied interaction enables natural and expressive remote communication.

Why open source?

The Codec Avatars lab at Meta Reality Labs Research has been building the future of connection with lifelike avatars since 2015, and has shared many of its results and methods with the research community.

Through this site, Meta Reality Labs Research provides the research community with datasets and baseline reference implementations for Codec Avatars, supporting the advancement of metric telepresence research. Using the code and models we share, researchers are empowered to investigate open challenges in metric telepresence including:

  • Generalization of universal priors to new identities
  • Online encoder adaptation
  • Improving quality for clothing and hair

Get started with Codec Avatars

Ava-256 dataset

First dataset for end-to-end telepresence.

Ava-256 dataset overview | GitHub

Goliath-4 dataset

First complete captures of full bodies, hands and faces.

Goliath-4 dataset overview | GitHub

Other OSS publications and releases

Multiface

High-quality recordings of the faces of 13 people

CT2Hair

High-fidelity 3D hair modeling using computed tomography

InterHand2.6M

Dataset and baseline for 3D interacting hand pose estimation

Re:Interhand

Dataset of relightable 3D interacting hands

Eyeful

High-quality indoor scenes for neural reconstruction

Sounding Bodies

Modeling 3D spatial sound of humans using body pose and audio

PatternedClothing

4 subjects wearing patterned clothes for high-quality registration
