Taichi NGP Renderer


Update 2022-10-27: Support for all platforms: Windows and Linux (CUDA, Vulkan) and macOS (Vulkan)
Update 2022-10-23: Support for depth of field (DoF)

This is an Instant-NGP renderer implemented using Taichi, written entirely in Python. No CUDA! This repository implements only the rendering part of NGP, but it is simpler and contains far less code than the original (Instant-NGP and tiny-cuda-nn).


Installation

Clone this repository and install the required packages:

git clone https://github.com/Linyou/taichi-ngp-renderer.git
python -m pip install -r requirement.txt

Description

This repository implements only the forward (rendering) part of Instant-NGP, which includes:

  • Ray intersection with the bounding box: ray_intersect() (a minimal sketch follows this list)
  • Ray marching strategy: raymarching_test_kernel()
  • Spherical harmonics encoding for ray directions: dir_encode()
  • Hash table encoding for 3D coordinates: hash_encode()
  • Fully fused MLP using shared memory: sigma_layer(), rgb_layer()
  • Volume rendering: composite_test()
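
To make the first step concrete, here is a minimal sketch, in plain Taichi, of the slab test that a ray/box intersection such as ray_intersect() performs. The field names, the ray count, and the scalar box bounds are illustrative assumptions, not the repository's actual interface:

import taichi as ti

ti.init(arch=ti.gpu)

# Illustrative fields; the real renderer lays out its ray data differently.
N_RAYS = 1024
rays_o = ti.Vector.field(3, dtype=ti.f32, shape=N_RAYS)  # ray origins
rays_d = ti.Vector.field(3, dtype=ti.f32, shape=N_RAYS)  # normalized ray directions
hit_t = ti.Vector.field(2, dtype=ti.f32, shape=N_RAYS)   # (entry, exit) distances, -1 on a miss

@ti.kernel
def ray_intersect_sketch(box_min: ti.f32, box_max: ti.f32):
    for i in range(N_RAYS):
        # Slab test against the three pairs of axis-aligned planes.
        inv_d = 1.0 / rays_d[i]
        t0 = (box_min - rays_o[i]) * inv_d
        t1 = (box_max - rays_o[i]) * inv_d
        t_enter = ti.min(t0, t1).max()  # latest entry over x, y, z
        t_exit = ti.max(t0, t1).min()   # earliest exit over x, y, z
        if t_enter < t_exit and t_exit > 0.0:
            hit_t[i] = ti.Vector([ti.max(t_enter, 0.0), t_exit])
        else:
            hit_t[i] = ti.Vector([-1.0, -1.0])  # the ray misses the bounding box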

However, there are some differences compared to the original:

Missing function
  • Taichi currently lacks a frexp() function, so I have to use a hard-coded scale of 0.5. I will update the code once Taichi supports it.
Fully Fused MLP
  • Instead of having a single kernel like tiny-cuda-nn, this repo uses two separate kernels, sigma_layer() and rgb_layer(), because the shared memory size that Taichi currently allows is 48 KB, as issue #6385 points out; this may be improved in the future.
  • tiny-cuda-nn uses Tensor Cores for float16 multiplication, which is not a feature accessible from Taichi, so I directly convert all the data to ti.float16 to speed up the computation (a minimal sketch follows this list).
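
Here is a minimal sketch of that half-precision idea: weights and activations are stored as ti.f16 so each kernel moves and multiplies less data. The layer sizes and names are illustrative, and the real sigma_layer()/rgb_layer() also tile weights into shared memory, which this sketch omits:

import numpy as np
import taichi as ti

ti.init(arch=ti.gpu)

# Illustrative sizes; the real network dimensions differ.
N_PTS, IN_DIM, OUT_DIM = 4096, 32, 16
feat_in = ti.field(dtype=ti.f16, shape=(N_PTS, IN_DIM))    # encoded inputs, stored in half precision
weights = ti.field(dtype=ti.f16, shape=(IN_DIM, OUT_DIM))  # one layer's weights, cast to half once
feat_out = ti.field(dtype=ti.f16, shape=(N_PTS, OUT_DIM))

def load_layer_weights(w_fp32: np.ndarray):
    # Cast trained float32 weights down to float16 a single time, at load.
    weights.from_numpy(w_fp32.astype(np.float16))

@ti.kernel
def half_precision_layer():
    # Every (point, output neuron) pair is independent, so Taichi parallelizes this loop.
    for p, j in ti.ndrange(N_PTS, OUT_DIM):
        acc = ti.cast(0.0, ti.f16)
        for k in range(IN_DIM):
            acc += feat_in[p, k] * weights[k, j]
        feat_out[p, j] = ti.max(acc, ti.cast(0.0, ti.f16))  # ReLU, still in f16

Casting once at load time keeps the kernels entirely in f16, which roughly halves the memory traffic per layer.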

GUI

This code supports a real-time rendering GUI with less than 1 GB of VRAM. Here is the functionality the GUI offers (a minimal window-loop sketch follows the list):

  • Camera:
    • keyboard and mouse control
    • DoF
  • Rendering:
    • different resolutions
    • the number of samples per ray
    • transparency threshold (stops ray marching early)
    • show depth
  • Export:
    • Snapshot
    • Video recording (requires ffmpeg)
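
The sketch below shows the kind of ti.ui (GGUI) loop behind such a viewer: poll the keyboard, re-render into an image field, and display it. The render() kernel and the camera variables are placeholders, not the repository's actual renderer:

import taichi as ti

ti.init(arch=ti.gpu)

W, H = 800, 800
img = ti.Vector.field(3, dtype=ti.f32, shape=(W, H))  # RGB frame buffer shown by the canvas

@ti.kernel
def render(cam_x: ti.f32, cam_z: ti.f32):
    # Placeholder "renderer": the real code ray-marches the NGP here.
    for i, j in img:
        img[i, j] = ti.Vector([i / W, j / H, ti.abs(ti.sin(cam_x + cam_z))])

window = ti.ui.Window("taichi-ngp (sketch)", (W, H))
canvas = window.get_canvas()
cam_x, cam_z = 0.0, 0.0

while window.running:
    # Simple WASD-style camera translation; the actual GUI also handles the mouse and sliders.
    if window.is_pressed('a'):
        cam_x -= 0.01
    if window.is_pressed('d'):
        cam_x += 0.01
    if window.is_pressed('w'):
        cam_z += 0.01
    if window.is_pressed('s'):
        cam_z -= 0.01
    render(cam_x, cam_z)
    canvas.set_image(img)
    window.show()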

The GUI runs at up to 66 fps on a 3090 GPU at 800 $\times$ 800 resolution (default pose).

Run python taichi_ngp.py --gui --scene lego to start the GUI. This repository provides eight pre-trained NeRF synthetic scenes: Lego, Ship, Mic, Materials, Hotdog, Ficus, Drums, and Chair.



Running python taichi_ngp.py --gui --scene <name> will automatically download the pre-trained model <name> to the ./npy_file folder. Please check the command-line arguments in taichi_ngp.py for more options.

Custom scene

You can train a new scene with ngp_pl and save the PyTorch model weights as NumPy arrays using np.save(). After that, use the --model_path argument to point to the model file (a minimal sketch follows).
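
A minimal sketch of that export step, assuming an ngp_pl checkpoint. The checkpoint path, the state-dict key layout, and the output filename are assumptions, and the loader in taichi_ngp.py expects specific parameter names, so check that file before exporting:

import numpy as np
import torch

# Path and key layout are assumptions; adjust them to your own ngp_pl checkpoint.
ckpt = torch.load('ckpts/lego/last.ckpt', map_location='cpu')
state = ckpt.get('state_dict', ckpt)

# Store every parameter as a float32 NumPy array keyed by its name.
params = {k: v.detach().cpu().numpy().astype(np.float32) for k, v in state.items()}
np.save('npy_file/custom_scene.npy', params)

# Then: python taichi_ngp.py --gui --model_path npy_file/custom_scene.npy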

Acknowledgments

Many thanks to the incredible projects open-sourced to the community, including:

Todo

  • Support Vulkan backend
  • Refactor to separate modules
  • Support real scenes

...
