This library abstracts the device API and lenticular rendering, so the user can write a quilt renderer to show a hologram on the device. A basic RGB-D to quilt renderer is also implemented.
python3 -m pip install -e .
Inputs are cv2 images (NumPy arrays in OpenCV's BGR channel order).
To render an image or an already projected lenticular frame:
hologram_rendering.render_image(image)
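A minimal usage sketch, assuming the package is importable as hologram_rendering (adapt the import to however it is actually exposed) and that photo.png is a placeholder path:

```python
# Sketch: show a flat 2D image (or an already projected lenticular frame).
import cv2
import hologram_rendering  # assumed import name

image = cv2.imread("photo.png")         # cv2 image: NumPy array, BGR order
hologram_rendering.render_image(image)
```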
To render a quilt (the tile dimensions should match your specific device):
hologram_rendering.render_quilt(quilt)
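A hedged sketch of assembling a quilt from per-view cv2 images: make_quilt is a hypothetical helper, the 8x6 grid of 48 views is only an example layout, and the view ordering (view 0 at the bottom left, filling left to right, then bottom to top) is the common Looking Glass convention but should be checked against your device:

```python
# Sketch: tile per-view cv2 images into one quilt and show it.
# The 8x6 layout and the view ordering are assumptions; match your device.
import numpy as np
import hologram_rendering  # assumed import name

def make_quilt(views, cols=8, rows=6):
    """Tile equally sized views into a quilt: view 0 at the bottom left,
    filling left to right, then bottom to top."""
    h, w = views[0].shape[:2]
    quilt = np.zeros((rows * h, cols * w, 3), dtype=views[0].dtype)
    for i, view in enumerate(views):
        col = i % cols
        row = rows - 1 - (i // cols)          # bottom row first
        quilt[row * h:(row + 1) * h, col * w:(col + 1) * w] = view
    return quilt

quilt = make_quilt(my_views)                  # my_views: list of 48 cv2 images
hologram_rendering.render_quilt(quilt)
```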
Displacement-map implementation:
- offset_scale: 0 to 1
- rot_max_rad: maximum rotation, in radians

hologram_rendering.render_rgb_depth(rgb, depth, offset_scale, rot_max_rad)
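A minimal usage sketch for the call above; the file names, the grayscale depth format, and the parameter values are placeholders, and hologram_rendering is again assumed to be the import name:

```python
# Sketch: feed an RGB frame and its depth map to the displacement-map
# renderer. File names, the grayscale depth assumption, and the parameter
# values are placeholders.
import math
import cv2
import hologram_rendering  # assumed import name

rgb = cv2.imread("frame.png")
depth = cv2.imread("frame_depth.png", cv2.IMREAD_GRAYSCALE)  # assumed format

# offset_scale = 0.5, rot_max_rad = about 0.17 rad (10 degrees)
hologram_rendering.render_rgb_depth(rgb, depth, 0.5, math.radians(10))
```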
- Displacement map + infill (a minimal sketch follows this list):
  - Track optical-flow history for infill
  - Infill with Adobe's PatchMatch
  - Owl3D uses an unknown deep-learning temporal method
  - Goal: temporally stable, real-time video inpainting
  - https://huggingface.co/stabilityai/stable-diffusion-2-inpainting
- Deep learning, end to end:
  - comma.ai has one, a small-offset simulator for dashcam video
  - https://github.com/HypoX64/Deep3D
- Vertex grid textured with a heightmap:
  - Similar to jbienz's refract
  - Stretches the pixels
- Displacement map:
  - Doesn't try to fill missing data; texture samples with an offset
- Displacement map with custom sampling:
  - Camera reprojection using only fragment-shader depth-buffer analysis
  - Fill missing data with texture-sampling tricks
- Displacement map or vertex grid with multiple layers:
  - Split into layers by depth
  - Similar to the RGB-D mode in Looking Glass Studio
  - Similar to Facebook 3D photos
  - Facebook uses a CNN to hallucinate missing data
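Below is a minimal NumPy/OpenCV sketch of the displacement map + infill idea: pixels are shifted horizontally in proportion to depth to fake a new viewpoint, and the disocclusion holes are filled with OpenCV's Telea inpainting as a simple stand-in for PatchMatch or a learned inpainter. The function name, the linear disparity model, and the depth convention (1 = near) are illustrative, not how render_rgb_depth works internally.

```python
# Sketch of "displacement map + infill": shift pixels horizontally in
# proportion to depth to fake a new viewpoint, then inpaint the holes that
# open up behind foreground objects. cv2.inpaint (Telea) is used as a
# simple stand-in for PatchMatch or a learned inpainter.
import cv2
import numpy as np

def displace_and_infill(rgb, depth, offset_px):
    """rgb: HxWx3 uint8, depth: HxW float in [0, 1] with 1 = near,
    offset_px: signed horizontal shift of the virtual camera in pixels."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Horizontal displacement proportional to depth (nearer pixels move more).
    new_x = np.clip(xs + (offset_px * depth).astype(np.int32), 0, w - 1)

    out = np.zeros_like(rgb)
    written = np.zeros((h, w), dtype=bool)

    # Scatter pixels far-to-near so nearer pixels win at occlusions.
    order = np.argsort(depth.ravel())                 # ascending: far first
    ty, tx = ys.ravel()[order], new_x.ravel()[order]
    sy, sx = ys.ravel()[order], xs.ravel()[order]
    out[ty, tx] = rgb[sy, sx]
    written[ty, tx] = True

    # Fill the disocclusion holes that opened up behind foreground objects.
    holes = (~written).astype(np.uint8) * 255
    return cv2.inpaint(out, holes, 3, cv2.INPAINT_TELEA)
```

Running this once per quilt view with a different offset_px, plus the optical-flow history mentioned above, would be a starting point for a temporally stable video version.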
Render only the left- and rightmost cameras, and interpolate the rest with camera reprojection (or go from SBS to hologram).
Maybe render all quilt tiles in a single pass with mesh shaders.
An IRL facecam with object tracking that tracks the viewer's eyes, so only the needed views have to be rendered.
Useful for heavy ray tracing or similarly expensive rendering.
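A rough sketch of the facecam idea, using OpenCV's bundled Haar-cascade face detector as a crude stand-in for real eye tracking; the view count, the mirroring, and the linear mapping from face position to view index are all assumptions:

```python
# Rough sketch: detect the viewer's face with OpenCV's bundled Haar cascade
# (a crude stand-in for real eye tracking) and map its horizontal position
# to a quilt view index, so only views near it need to be rendered.
import cv2

TOTAL_VIEWS = 48   # example quilt view count; match your device

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def viewer_view_index(cam_frame, total_views=TOTAL_VIEWS):
    """Return the quilt view index roughly facing the viewer, or None."""
    gray = cv2.cvtColor(cam_frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face_center = (x + w / 2) / cam_frame.shape[1]     # 0 = left, 1 = right
    # Mirrored webcam assumption: viewer moving right maps to lower view index.
    return int(round((1.0 - face_center) * (total_views - 1)))
```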
LICENSE: CC0