3D reconstruction from an RGB video is a difficult task because RGB frames alone carry limited information. Many approaches rely on 3D mesh templates to constrain the reconstruction (e.g. SMPL-X [2] for humans, SMAL [3] for animals). However, such template-based approaches cannot handle novel object categories. In this post, I introduce LASR [1], a template-free method that reconstructs a 3D mesh from a single RGB video.
Shape reconstruction from a single RGB video
Figure 1. A simplified view of LASR. Blue cells are parameters that we optimize for the input video; the red cell indicates the loss; grey cells are values and operators that are not optimized in the process.
From Figure 1, we can see there are three main inputs:
- An RGB video as a sequence of frames $\{I_t\}$, where $I_t$ denotes the $t$-th frame.
- Object silhouettes $\{S_t\}$, one per frame.
- Optical flows $\{u_t\}$ between consecutive frames of the input video.
and the outputs we want are:
- $\bar{S}$: the rest shape of the object.
- $\{D_t\}$: the time-varying transformations.
- $K$: the camera intrinsics.
From those outputs, to compute the reconstruction loss, we use a differentiable renderer [4] to render them into image space. The intermediate outputs for the loss computation are:
- Rendered color images $\{\tilde{I}_t\}$.
- Rendered silhouettes $\{\tilde{S}_t\}$.
- Rendered flows $\{\tilde{u}_t\}$.
Then we only need to minimize the difference between the rendered intermediate outputs and the inputs to reconstruct the object of interest.
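To make this analysis-by-synthesis loop concrete, here is a minimal PyTorch sketch. The `render` and `loss_fn` callables are placeholders of my own: `render` stands in for a differentiable renderer such as SoftRas [4], and their signatures are assumptions, not LASR's actual API.

```python
import torch

def optimize(frames, sils, flows, params, render, loss_fn, steps=1000):
    """Fit the blue-cell parameters by comparing rendered and observed frames."""
    opt = torch.optim.Adam(params, lr=1e-4)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for t in range(len(frames)):
            img_r, sil_r, flow_r = render(t)  # differentiable render of frame t
            loss = loss + loss_fn((img_r, sil_r, flow_r),
                                  (frames[t], sils[t], flows[t]))
        loss.backward()
        opt.step()
```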
Modeling
Figure 2. The detailed version of LASR. Blue cells are parameters that we optimize for the input video; the red cell indicates the loss; grey cells are values and operators that are not optimized in the process.
Rest shape: the object shape $\bar{S}$ with $N$ vertices $\bar{V} \in \mathbb{R}^{3 \times N}$ and a fixed topology (faces) $F$.
Linear-blend skinning: To map a vertex from its rest position to a new position in each frame, we use linear-blend skinning: each vertex is transformed by a weighted linear combination of the bone transformations in the object coordinate frame, and the result is then transformed into the camera coordinate frame:

$$V_i^t = G_0^t \Big( \sum_{b=1}^{B} W_{i,b}\, G_b^t \Big) \bar{V}_i,$$

with skinning weight matrix $W \in \mathbb{R}^{N \times B}$, where $G_0^t$ is the root transformation, $G_b^t = (R_b^t \mid T_b^t)$ is the transformation matrix for the $t$-th frame and $b$-th bone (or joint), $R$ denotes rotation parameters, $T$ denotes translation parameters, $i$ is the vertex index, and $b$ is the bone index.
To transform a vertex $\bar{V}_i$, we blend the bone transformations with its skinning weights into a single matrix, move the vertex with it, and then apply the root transformation. The cost therefore grows with the number of bones and vertices.
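As a concrete illustration, here is a minimal sketch of this skinning step, assuming homogeneous 4×4 transforms; the paper's implementation differs in details such as batching over frames.

```python
import torch

def linear_blend_skinning(V_rest, W, G_bones, G_root):
    """Articulate rest vertices, then move them into the camera frame.

    V_rest:  (N, 3)    rest-shape vertices
    W:       (N, B)    skinning weights (each row sums to 1)
    G_bones: (B, 4, 4) per-bone rigid transforms for this frame
    G_root:  (4, 4)    root (object-to-camera) transform for this frame
    """
    V_h = torch.cat([V_rest, torch.ones_like(V_rest[:, :1])], dim=1)  # (N, 4)
    G_blend = torch.einsum('nb,bij->nij', W, G_bones)  # blended transform per vertex
    V_obj = torch.einsum('nij,nj->ni', G_blend, V_h)   # articulate in object frame
    V_cam = V_obj @ G_root.T                           # apply the root transform
    return V_cam[:, :3]
```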
Skinning weights. The skinning weights are defined as a mixture of Gaussians with $B$ components. The probability of assigning vertex $i$ to bone $b$ is

$$W_{i,b} = \frac{1}{Z_i} \exp\!\big( -(\bar{V}_i - C_b)^\top Q_b\, (\bar{V}_i - C_b) \big),$$

where $C_b$ is the center of the $b$-th Gaussian, $Q_b$ is the corresponding precision matrix that determines the orientation and radius of the Gaussian, and $Z_i$ is a normalization factor so that the weights of each vertex sum to one. In the implementation, the centers are initialized using K-means before starting the optimization.
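A small sketch of these weights in PyTorch, under the assumption that the precision matrices are given as full 3×3 matrices:

```python
import torch

def skinning_weights(V_rest, centers, precisions):
    """Mixture-of-Gaussians skinning weights.

    V_rest:     (N, 3)    rest vertices
    centers:    (B, 3)    Gaussian (bone) centers C_b
    precisions: (B, 3, 3) precision matrices Q_b
    Returns W:  (N, B), each row a distribution over bones.
    """
    diff = V_rest[:, None, :] - centers[None, :, :]            # (N, B, 3)
    # Mahalanobis form (V_i - C_b)^T Q_b (V_i - C_b)
    mahal = torch.einsum('nbi,bij,nbj->nb', diff, precisions, diff)
    return torch.softmax(-mahal, dim=1)                        # normalizes over b
```

The centers could be initialized with, e.g., `sklearn.cluster.KMeans` on the rest vertices, mirroring the K-means initialization mentioned above.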
Finally, we apply a perspective projection before rasterization, where the principal point $(p_x, p_y)$ is shared across all frames and the focal length is learnable.
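For reference, a pinhole projection with a shared principal point and a learnable focal length can be sketched as:

```python
import torch

def project(V_cam, focal, principal):
    """Pinhole projection of camera-space points (z assumed positive).

    V_cam:     (N, 3) camera-space vertices
    focal:     learnable scalar focal length
    principal: (2,) principal point (p_x, p_y), shared across frames
    """
    xy = focal * V_cam[:, :2] / V_cam[:, 2:3]
    return xy + principal                          # (N, 2) image coordinates
```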
In the implementation, instead of optimizing explicit time-varying parameters directly, LASR predicts them from the input frame with a ResNet-18, as shown in Figure 2:
$(f_t, G_0^t, G_1^t, \dots, G_B^t) = \phi(I_t)$, with 1 parameter for the focal length $f_t$ (for the intrinsics $K$), 4 parameters for each bone rotation (a quaternion), and 3 for each translation.
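A hypothetical sketch of such a predictor, using torchvision's ResNet-18 with a linear head sized to $1 + (B+1) \cdot 7$ outputs; LASR's actual network differs in its details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class CameraBonePredictor(nn.Module):
    """Predict focal length plus root/bone transforms from one frame (a sketch)."""
    def __init__(self, num_bones):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features        # 512 for ResNet-18
        backbone.fc = nn.Identity()               # keep the pooled feature
        self.backbone = backbone
        # 1 focal + (num_bones + 1) transforms, each 4 (quaternion) + 3 (translation)
        self.head = nn.Linear(feat_dim, 1 + (num_bones + 1) * 7)

    def forward(self, image):                     # image: (1, 3, H, W)
        out = self.head(self.backbone(image))
        focal = out[:, 0]
        g = out[:, 1:].reshape(-1, 7)             # one row per transform
        quats = F.normalize(g[:, :4], dim=1)      # unit quaternions for rotations
        trans = g[:, 4:]
        return focal, quats, trans
```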
Note that to compute the flow for rendering, we predict the next position from the previous timestep. For example, to compute the forward flow, we take surface positions in frame $t$, compute their locations in frame $t+1$, and use the difference as the flow.
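Given camera-space vertices at two consecutive frames, the per-vertex forward flow is just the difference of their projections (reusing the `project` sketch above); the flow image is then rasterized from frame $t$'s geometry.

```python
def forward_flow(V_cam_t, V_cam_t1, focal, principal):
    """Per-vertex forward flow: projected motion from frame t to t+1."""
    return project(V_cam_t1, focal, principal) - project(V_cam_t, focal, principal)
```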
Loss
Reconstruction losses. Given a pair of rendered outputs and ground truth, the reconstruction loss is

$$\mathcal{L}_{\text{recon}} = \sum_t \beta_1 \|\tilde{S}_t - S_t\|_2^2 + \beta_2 \|\tilde{u}_t - u_t\|_2^2 + \beta_3\, d_{\text{percept}}(\tilde{I}_t, I_t),$$

where $\beta_1, \beta_2, \beta_3$ are empirically chosen weights and $d_{\text{percept}}$ is the perceptual distance measured by an AlexNet pretrained on ImageNet.
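A sketch of this loss in PyTorch; the weights in `betas` are placeholders, and `pdist` could be, e.g., an LPIPS distance with an AlexNet backbone (`lpips.LPIPS(net='alex')`), which matches the perceptual term in spirit.

```python
import torch

def reconstruction_loss(rendered, observed, pdist, betas=(1.0, 0.5, 0.1)):
    """Silhouette + flow + perceptual terms; the weights here are placeholders."""
    (img_r, sil_r, flow_r), (img_t, sil_t, flow_t) = rendered, observed
    l_sil = (sil_r - sil_t).pow(2).mean()
    l_flow = (flow_r - flow_t).pow(2).mean()
    l_tex = pdist(img_r, img_t).mean()     # perceptual distance on color images
    b1, b2, b3 = betas
    return b1 * l_sil + b2 * l_flow + b3 * l_tex
```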
Shape and motion regularization: To encourage a smooth surface, LASR uses Laplacian smoothing

$$\mathcal{L}_{\text{smooth}} = \sum_{i=1}^{N} \Big\| \bar{V}_i - \frac{1}{|N_i|} \sum_{j \in N_i} \bar{V}_j \Big\|^2,$$

where $N_i$ is the set of vertex indices adjacent to vertex $i$.
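A direct (unvectorized) sketch of this smoothness term:

```python
import torch

def laplacian_smoothness(V, neighbors):
    """Distance of each vertex from the centroid of its mesh neighbors.

    V:         (N, 3) vertex positions
    neighbors: list of N LongTensors of adjacent vertex indices
    """
    loss = V.new_zeros(())
    for i, nbr in enumerate(neighbors):
        loss = loss + (V[i] - V[nbr].mean(dim=0)).pow(2).sum()
    return loss / len(neighbors)
```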
Motion regularization consists of an as-rigid-as-possible (ARAP) term and a least-deformation term. The least-deformation term encourages the deformation from the rest shape to be as small as possible, while the ARAP term encourages the deformation between consecutive frames to be small, so the motion looks natural; both are sketched below.
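Below is one common formulation of the two terms; the paper's exact definitions may differ (ARAP is often written over edge lengths, as here, which is an assumption on my part).

```python
import torch

def least_deformation(V_t, V_rest):
    """Penalize deviation of frame-t vertices from the rest shape."""
    return (V_t - V_rest).pow(2).sum(dim=1).mean()

def arap(V_t, V_t1, edges):
    """Edge lengths should stay (near) constant between consecutive frames.

    edges: (E, 2) LongTensor of mesh edge index pairs
    """
    len_t = (V_t[edges[:, 0]] - V_t[edges[:, 1]]).norm(dim=1)
    len_t1 = (V_t1[edges[:, 0]] - V_t1[edges[:, 1]]).norm(dim=1)
    return (len_t - len_t1).pow(2).mean()
```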
Conclusion
Figure 3. The reconstructed output using a camel video as input.
In this post, I described LASR, a method that combines a differentiable renderer with linear-blend skinning to achieve strong reconstruction results from RGB information alone, as shown in Figure 3. While the results are impressive, an obvious drawback is handling occlusion. There is therefore plenty of room for improvement, for example by using information from multiple viewpoints.
References
[1] Yang, Gengshan, et al. "LASR: Learning Articulated Shape Reconstruction from a Monocular Video." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[2] Pavlakos, Georgios, et al. "Expressive Body Capture: 3D Hands, Face, and Body from a Single Image." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[3] Zuffi, Silvia, et al. "3D Menagerie: Modeling the 3D Shape and Pose of Animals." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
[4] Liu, Shichen, et al. "Soft Rasterizer: A Differentiable Renderer for Image-Based 3D Reasoning." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.