3D reconstruction

A Review on Point-NeRF

Prim Wong
3 min read · Jul 10, 2022
Point-NeRF uses neural 3D points to efficiently represent and render a continuous radiance volume.

Point-NeRF is a state-of-the-art 3D reconstruction method, published at CVPR 2022, that uses neural 3D points to efficiently represent and render a continuous radiance volume. It is optimized per scene and reaches reconstruction quality that surpasses NeRF within tens of minutes.

Scene Representations Techniques

There are many scene representation techniques (a short sketch of each follows the list):
1. Volumes
2. Point clouds
3. Meshes
4. Depth maps
5. Implicit functions
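
To make the distinctions concrete, here is a rough Python sketch of how each of these representations might be stored in code. This is my own illustration, not taken from the Point-NeRF paper; all class and field names are hypothetical.

```python
import numpy as np

# Hypothetical minimal containers for the five scene representations above.

class VolumeGrid:
    """Dense voxel grid: a density value stored at every cell."""
    def __init__(self, resolution=128):
        self.density = np.zeros((resolution,) * 3, dtype=np.float32)

class PointCloud:
    """Unordered set of 3D points, optionally with per-point features."""
    def __init__(self, positions, features=None):
        self.positions = positions   # (N, 3)
        self.features = features     # (N, C) or None

class Mesh:
    """Explicit surface: vertices plus triangle indices."""
    def __init__(self, vertices, faces):
        self.vertices = vertices     # (V, 3)
        self.faces = faces           # (F, 3) integer indices

class DepthMap:
    """Per-pixel depth observed from a single viewpoint."""
    def __init__(self, depth, intrinsics):
        self.depth = depth           # (H, W)
        self.intrinsics = intrinsics # 3x3 camera matrix

class ImplicitFunction:
    """Scene as a learned function f(x, y, z) -> occupancy/SDF/radiance,
    e.g. a small MLP, as in NeRF-style methods."""
    def __init__(self, network):
        self.network = network
```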

Point-NeRF: Point-based Neural Radiance Fields

Point-NeRF is a fast reconstruction and rendering model. Volumetric neural rendering methods like NeRF generate high-quality view synthesis results, but they are optimized per scene, which leads to prohibitive reconstruction times. Point-NeRF addresses this by combining 3D point clouds with associated neural features to model a radiance field.

The representation and rendering are entirely in 3D, since the point cloud approximates the scene geometry. This makes Point-NeRF natural and efficient at adapting to scene surfaces while leaving out empty scene space.
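
As a rough illustration of this idea, the sketch below stores a "neural point cloud" whose points carry positions, feature vectors, and confidences, and gathers the neighbors around a shading location so that locations with no nearby points can be skipped. The class name, search radius, and neighbor count are my own placeholder choices, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch of a neural point cloud: each point stores a position,
# a learned feature vector, and a confidence value. Names and defaults here
# are hypothetical; the real Point-NeRF code differs in detail.

class NeuralPointCloud:
    def __init__(self, positions, features, confidence):
        self.positions = positions    # (N, 3) point locations
        self.features = features      # (N, C) per-point neural features
        self.confidence = confidence  # (N,)  how likely each point lies on a surface

    def query(self, x, radius=0.05, k=8):
        """Return up to k neighboring points within `radius` of a shading
        location x. If nothing is nearby, the location is treated as empty
        space and can be skipped during rendering."""
        d = np.linalg.norm(self.positions - x, axis=1)
        idx = np.where(d < radius)[0]
        idx = idx[np.argsort(d[idx])][:k]
        return idx, d[idx]
```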

Point-NeRF Representation

Efficient Reconstruction and Rendering Techniques for the Point-NeRF Representation

  1. Volume rendering and radiance fields (a minimal compositing sketch follows this list)
  2. Point-based radiance field
  3. Per-point processing
  4. View-dependent radiance regression
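
Item 1 above is the classic NeRF-style volume rendering step: densities and colors sampled along a camera ray are alpha-composited into a single pixel color. The sketch below is a minimal version of that compositing for one ray, with illustrative shapes only and no claim to match Point-NeRF's exact code.

```python
import numpy as np

# Minimal NeRF-style volume rendering along one ray: per-sample colors are
# weighted by opacity and accumulated transmittance, then summed.

def composite_ray(densities, colors, deltas):
    """densities: (S,)   volume density sigma at each sample
    colors:    (S, 3) RGB radiance at each sample
    deltas:    (S,)   distance between consecutive samples
    returns:   (3,)   rendered pixel color"""
    alphas = 1.0 - np.exp(-densities * deltas)                        # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]    # transmittance up to each sample
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)
```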

Point-NeRF Reconstruction

Point-NeRF leverages a deep neural network to generate the initial point-based field. This initial field is then optimized per scene with point growing and pruning techniques, leading to the final high-quality radiance field reconstruction.
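
To make the growing and pruning step more tangible, here is a rough sketch of what such operations could look like on the neural point cloud from the earlier snippet. The thresholds and the way candidate points are selected are placeholders rather than the paper's actual criteria.

```python
import numpy as np

# Rough sketch of per-scene point pruning and growing. Threshold values and
# the candidate-sampling strategy are illustrative only.

def prune_points(cloud, min_confidence=0.1):
    """Return a mask keeping points whose learned confidence suggests they
    lie near a surface rather than in empty space."""
    return cloud.confidence > min_confidence

def grow_points(cloud, candidate_positions, candidate_opacity,
                min_opacity=0.7, max_dist=0.05):
    """Add new points at high-opacity candidate locations (e.g. ray samples)
    that are far from existing points, filling holes in the initial cloud."""
    new_points = []
    for pos, opa in zip(candidate_positions, candidate_opacity):
        nearest = np.linalg.norm(cloud.positions - pos, axis=1).min()
        if opa > min_opacity and nearest > max_dist:
            new_points.append(pos)
    return np.array(new_points)
```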

References

Xu, Q., Xu, Z., Philip, J., Bi, S., Shu, Z., Sunkavalli, K., and Neumann, U. "Point-NeRF: Point-based Neural Radiance Fields." CVPR 2022.
