---
date: 2023-10-02T04:14:54-08:00
draft: false
params:
  author: Nikolaos Zioulis
title: ActorsNeRF
categories: ["papers"]
tags: ["nerf", "smpl", "generalized", "monocular", "iccv23"]
layout: simple
menu: #
robots: all
# sharingLinks: #
weight: 10
# showPagination: true
# showHero: true
# layoutBackgroundBlur: true
# heroStyle: thumbAndBackground
description: "ActorsNeRF: Animatable Few-shot Human Rendering with Generalizable NeRFs"
summary: TODO
keywords: #
type: '2023' # we use year as a type to list papers in the list view
series: ["Papers Published @ 2023"]
series_order: 15
---

## `ActorsNeRF`: Animatable Few-shot Human Rendering with Generalizable NeRFs

> Jiteng Mu, Shen Sang, Nuno Vasconcelos, Xiaolong Wang
{{< keywordList >}}
{{< keyword icon="tag" >}} NeRF {{< /keyword >}}
{{< keyword icon="tag" >}} SMPL {{< /keyword >}}
{{< keyword icon="tag" >}} Generalized {{< /keyword >}}
{{< keyword icon="tag" >}} Monocular {{< /keyword >}}
{{< keyword icon="email" >}} *ICCV* 2023 {{< /keyword >}}
{{< /keywordList >}}

{{< github repo="JitengMu/ActorsNeRF" >}}

### Abstract
{{< lead >}}
While NeRF-based human representations have shown impressive novel view synthesis results, most methods still rely on a large number of images / views for training. In this work, we propose a novel animatable NeRF called ActorsNeRF. It is first pre-trained on diverse human subjects, and then adapted with few-shot monocular video frames for a new actor with unseen poses. Building on previous generalizable NeRFs with parameter sharing using a ConvNet encoder, ActorsNeRF further adopts two human priors to capture the large human appearance, shape, and pose variations. Specifically, in the encoded feature space, we first align different human subjects in a category-level canonical space, and then align the same human from different frames in an instance-level canonical space for rendering. We quantitatively and qualitatively demonstrate that ActorsNeRF significantly outperforms the existing state-of-the-art on few-shot generalization to new people and poses on multiple datasets.
{{< /lead >}}

{{< button href="https://openaccess.thecvf.com/content/ICCV2023/papers/Mu_ActorsNeRF_Animatable_Few-shot_Human_Rendering_with_Generalizable_NeRFs_ICCV_2023_paper.pdf" target="_blank" >}}
Paper
{{< /button >}}

### Approach

{{< figure
src="overview.png"
alt="ActorsNeRF overview"
caption="`ActorsNeRF` overview."
>}}
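
To make the two-level alignment concrete, here is a minimal PyTorch sketch (an illustration, not the authors' code; the tensor shapes, module names, and residual-offset formulation are assumptions): an observation-space sample is first warped into an instance-level canonical space via inverse linear blend skinning with SMPL blend weights, and then mapped into a category-level canonical space shared across subjects by a learned per-subject offset.

```python
# Hypothetical sketch of ActorsNeRF-style two-level canonical alignment.
# Not the released implementation; shapes and the offset MLP are assumptions.
import torch
import torch.nn as nn


def inverse_lbs(x_obs, bone_transforms, blend_weights):
    """Warp observation-space points into the instance-level canonical (rest) pose.

    x_obs: (N, 3) sampled points in observation space.
    bone_transforms: (J, 4, 4) posed-to-rest transforms per SMPL joint.
    blend_weights: (N, J) skinning weights assigned to each point.
    """
    x_h = torch.cat([x_obs, torch.ones_like(x_obs[:, :1])], dim=-1)  # (N, 4) homogeneous
    per_bone = torch.einsum("jab,nb->nja", bone_transforms, x_h)     # (N, J, 4)
    x_inst = (blend_weights.unsqueeze(-1) * per_bone).sum(dim=1)     # (N, 4)
    return x_inst[:, :3]


class CategoryAlignment(nn.Module):
    """Maps instance-canonical points into a category-level canonical space
    shared across subjects, conditioned on a per-subject feature."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + feat_dim, 128), nn.ReLU(), nn.Linear(128, 3))

    def forward(self, x_inst, subject_feat):
        offset = self.mlp(torch.cat([x_inst, subject_feat], dim=-1))
        return x_inst + offset
```

The NeRF MLP would then be queried with the category-canonical coordinate together with the pixel-aligned ConvNet features, which is what lets a pre-trained model adapt to a new actor from only a few monocular frames.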

### Results

#### Data
{{<badge label="test" message="ZJU_MOCAP" color="yellowgreen" logo="github" link="https://github.com/zju3dv/neuralbody/blob/master/INSTALL.md#zju-mocap-dataset" target="_blank">}}
{{<badge label="test" message="AIST++" color="navy" logo="github" link="https://google.github.io/aistplusplus_dataset/factsfigures.html" target="_blank">}}

#### Comparisons
{{<badge label="body--NeRF" message="NeuralBody" color="coral" logo="github" link="https://github.com/zju3dv/neuralbody" target="_blank">}}
{{<badge label="body--NeRF" message="HumanNeRF" color="blue" logo="github" link="chungyiweng/HumanNeRF" target="_blank">}}

---
date: 2023-08-01T04:14:54-08:00
draft: false
params:
  author: Nikolaos Zioulis
title: AvatarReX
categories: ["papers"]
tags: ["nerf", "smpl", "deformation", "siggraph23"]
layout: simple
menu: #
robots: all
# sharingLinks: #
weight: 10
showHero: true
description: "AvatarReX: Real-time Expressive Full-body Avatars"
summary: TODO
keywords: #
type: '2023' # we use year as a type to list papers in the list view
series: ["Papers Published @ 2023"]
series_order: 19
---

## `AvatarReX`: Real-time Expressive Full-body Avatars

> Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu
{{< keywordList >}}
{{< keyword icon="tag" >}} NeRF {{< /keyword >}}
{{< keyword icon="tag" >}} SMPL {{< /keyword >}}
{{< keyword icon="tag" >}} Deformation {{< /keyword >}}
{{< keyword icon="email" >}} *SIGGRAPH* 2023 {{< /keyword >}}
{{< /keywordList >}}

### Abstract
{{< lead >}}
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data. The learnt avatar not only provides expressive control of the body, hands and the face together, but also supports real-time animation and rendering. To this end, we propose a compositional avatar representation, where the body, hands and the face are separately modeled in a way that the structural prior from parametric mesh templates is properly utilized without compromising representation flexibility. Furthermore, we disentangle the geometry and appearance for each part. With these technical designs, we propose a dedicated deferred rendering pipeline, which can be executed at a real-time framerate to synthesize high-quality free-view images. The disentanglement of geometry and appearance also allows us to design a two-pass training strategy that combines volume rendering and surface rendering for network training. In this way, patch-level supervision can be applied to force the network to learn sharp appearance details on the basis of geometry estimation. Overall, our method enables automatic construction of expressive full-body avatars with real-time rendering capability, and can generate photo-realistic images with dynamic details for novel body motions and facial expressions.
{{< /lead >}}

{{< button href="https://dl.acm.org/doi/pdf/10.1145/3592101" target="_blank" >}}
Paper
{{< /button >}}

### Approach

{{< figure
src="overview.jpg"
alt="AvatarReX overview"
caption="`AvatarReX` overview."
>}}
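
As a rough illustration of the compositional representation (a sketch under assumptions, not the authors' pipeline; the part routing, module names, and field sizes are made up for brevity), the body, hands, and face can be modeled as separate neural fields that are queried per sample and merged before volume rendering:

```python
# Hypothetical sketch of a compositional body/hands/face avatar.
# Part assignment (e.g. from the nearest SMPL-X vertex) is assumed to be given.
import torch
import torch.nn as nn

PARTS = ["body", "hands", "face"]


class PartField(nn.Module):
    """Tiny density + color field standing in for one part-specific network."""

    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, x):                                  # x: (N, 3)
        out = self.mlp(x)
        return out[:, :1], torch.sigmoid(out[:, 1:])       # density, rgb


class CompositionalAvatar(nn.Module):
    def __init__(self):
        super().__init__()
        self.fields = nn.ModuleDict({p: PartField() for p in PARTS})

    def forward(self, x, part_labels):
        """x: (N, 3) samples; part_labels: (N,) ints indexing PARTS."""
        density = torch.zeros(x.shape[0], 1, device=x.device)
        rgb = torch.zeros(x.shape[0], 3, device=x.device)
        for i, name in enumerate(PARTS):
            mask = part_labels == i
            if mask.any():
                d, c = self.fields[name](x[mask])
                density[mask], rgb[mask] = d, c
        return density, rgb
```

Splitting the parts this way is what lets each sub-network lean on its own parametric template prior, while the geometry/appearance disentanglement described in the abstract is what enables the deferred, real-time rendering pass.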

### Results

#### Data
{{<badge label="test" message="DeepCap" color="cyan" logo="link" link="https://gvv-assets.mpi-inf.mpg.de/" target="_blank">}}

#### Comparisons
{{<badge label="body--NeRF" message="NeuralBody" color="coral" logo="github" link="https://github.com/zju3dv/neuralbody" target="_blank">}}
{{<badge label="body--NeRF" message="AnimatableNeRF" color="cyan" logo="github" link="https://github.com/zju3dv/animatable_nerf" target="_blank">}}
{{<badge label="body--NeRF" message="HumanNeRF" color="blue" logo="github" link="chungyiweng/HumanNeRF" target="_blank">}}
{{<badge label="body--NeRF" message="ARAH" color="magenta" logo="github" link="https://github.com/taconite/arah-release" target="_blank">}}
{{<badge label="test" message="NeuralActor" color="brightgreen" logo="link" link="https://gvv-assets.mpi-inf.mpg.de/" target="_blank">}}

#### Performance
{{<badge label="train" message="3d" color="informational" logo="link" >}}
{{<badge label="train" message="RTX3090" color="informational" logo="link" >}}
{{<badge label="render" message="40ms" color="informational" logo="link" >}}
{{<badge label="render" message="1024_x_1024" color="informational" logo="link" >}}
{{<badge label="render" message="RTX3090" color="informational" logo="link" >}}

---
date: 2023-06-20T04:14:54-08:00
draft: false
params:
  author: Nikolaos Zioulis
title: CAT-NeRF
categories: ["papers"]
tags: ["nerf", "smpl", "cvpr23"]
layout: simple
menu: #
robots: all
# sharingLinks: #
weight: 10
showHero: true
description: "CAT-NeRF: Constancy-Aware Tx2Former for Dynamic Body Modeling"
summary: TODO
keywords: #
type: '2023' # we use year as a type to list papers in the list view
series: ["Papers Published @ 2023"]
series_order: 11
---

## `CAT-NeRF`: Constancy-Aware Tx2Former for Dynamic Body Modeling

> Haidong Zhu, Zhaoheng Zheng, Wanrong Zheng, Ram Nevatia
{{< keywordList >}}
{{< keyword icon="tag" >}} NeRF {{< /keyword >}}
{{< keyword icon="tag" >}} SMPL {{< /keyword >}}
{{< keyword icon="email" >}} *CVPR* 2023 {{< /keyword >}}
{{< /keywordList >}}

### Abstract
{{< lead >}}
This paper addresses the problem of human rendering in the video with temporal appearance constancy. Reconstructing dynamic body shapes with volumetric neural rendering methods, such as NeRF, requires finding the correspondence of the points in the canonical and observation space, which demands understanding human body shape and motion. Some methods use rigid transformation, such as SE(3), which cannot precisely model each frame’s unique motion and muscle movements. Others generate the transformation for each frame with a trainable network, such as neural blend weight field or translation vector field, which does not consider the appearance constancy of general body shape. In this paper, we propose CAT-NeRF for self-awareness of appearance constancy with Tx2Former, a novel way to combine two Transformer layers, to separate appearance constancy and uniqueness. Appearance constancy models the general shape across the video, and uniqueness models the unique patterns for each frame. We further introduce a novel Covariance Loss to limit the correlation between each pair of appearance uniquenesses to ensure the frame-unique pattern is maximally captured in appearance uniqueness. We assess our method on H36M and ZJU-MoCap and show state-of-the-art performance.
{{< /lead >}}

{{< button href="https://openaccess.thecvf.com/content/CVPR2023W/DynaVis/papers/Zhu_CAT-NeRF_Constancy-Aware_Tx2Former_for_Dynamic_Body_Modeling_CVPRW_2023_paper.pdf" target="_blank" >}}
Paper
{{< /button >}}

### Approach

{{< figure
src="overview.png"
alt="CAT-NeRF overview"
caption="`CAT-NeRF` overview."
>}}
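
One plausible reading of the Covariance Loss (a sketch of the idea, not the authors' exact formulation; the normalization and pairing are assumptions) is to penalize the covariance between every pair of per-frame appearance-uniqueness codes, so that each frame's code is forced to carry its own pattern rather than information shared across frames:

```python
# Hypothetical sketch of a covariance penalty between per-frame uniqueness codes.
import torch


def covariance_loss(uniqueness: torch.Tensor) -> torch.Tensor:
    """uniqueness: (T, D) tensor, one appearance-uniqueness code per frame (T > 1)."""
    u = uniqueness - uniqueness.mean(dim=1, keepdim=True)    # center each code
    cov = (u @ u.T) / uniqueness.shape[1]                     # (T, T) pairwise covariances
    off_diag = cov - torch.diag(torch.diag(cov))              # drop the per-frame variances
    t = uniqueness.shape[0]
    return off_diag.pow(2).sum() / (t * (t - 1))              # mean squared off-diagonal term
```

Such a term would be optimized alongside the rendering losses, pushing the shared, constant appearance into the constancy branch of Tx2Former while keeping the per-frame codes decorrelated.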

### Results

#### Data
{{<badge label="test" message="ZJU_MOCAP" color="yellowgreen" logo="github" link="https://github.com/zju3dv/neuralbody/blob/master/INSTALL.md#zju-mocap-dataset" target="_blank">}}
{{<badge label="test" message="Human3.6M" color="critical" logo="link" link="http://vision.imar.ro/human3.6m/description.php" target="_blank">}}

#### Comparisons
{{<badge label="body--NeRF" message="NeuralBody" color="coral" logo="github" link="https://github.com/zju3dv/neuralbody" target="_blank">}}
{{<badge label="body--NeRF" message="HumanNeRF" color="blue" logo="github" link="chungyiweng/HumanNeRF" target="_blank">}}
{{<badge label="body--NeRF" message="AnimatableNeRF" color="cyan" logo="github" link="https://github.com/zju3dv/animatable_nerf" target="_blank">}}

#### Performance
{{<badge label="train" message="20--30h" color="informational" logo="link" >}}
{{<badge label="train" message="A100" color="informational" logo="link" >}}