<!DOCTYPE html>
<html lang="en">
<head>
<title>Immersive Light Field Video with a Layered Mesh Representation</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width,minimum-scale=1.0">
<meta name="description" content="Immersive Light Field Video with a Layered Mesh Representation is a SIGGRAPH 2020 Technical Paper.">
<meta name="keywords" content="deep view, light field video, augmented perception, google, siggraph">
<meta name="author" content="Augmented Perception, Google">
<meta name="robots" content="all">
<link href="https://fonts.googleapis.com/css2?family=Lato:wght@300;400;700;900&family=Open+Sans:wght@300;400;600;700;800&family=Roboto:wght@400;500;700;900&display=swap" rel="stylesheet">
<link rel="stylesheet" href="assets/css/style.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-152598381-5"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-152598381-5');
</script>
</head>
<body>
<canvas id="global-canvas"></canvas>
<div class="container">
<div class="header">
<div class="title">
<h1>Immersive Light Field Video <br/> with a Layered Mesh Representation</h1>
<div class="title-line"><hr></div>
</div>
<div class="subheading">
<h2>SIGGRAPH 2020 Technical Paper</h2>
<p class="subheading-link">
<a href="https://storage.googleapis.com/immersive-lf-video-siggraph2020/ImmersiveLightFieldVideoWithALayeredMeshRepresentation.pdf">Download PDF</a>
</p>
</div>
<div class="authors">
<h4>
<span class="no-wrap">Michael Broxton*</span>,
<span class="no-wrap">John Flynn*</span>,
<span class="no-wrap">Ryan Overbeck*</span>,
<span class="no-wrap">Daniel Erickson*</span>,
<span class="no-wrap">Peter Hedman</span>,
<span class="no-wrap">Matthew DuVall</span>,
<span class="no-wrap">Jason Dourgarian</span>,
<span class="no-wrap">Jay Busch</span>,
<span class="no-wrap">Matt Whalen</span>,
<span class="no-wrap">Paul Debevec</span>
</h4>
<p class="equal-contribution pt-5 center-text">* - Denotes equal contribution.</p>
</div>
<div class="organization">
<h3>Google LLC</h3>
</div>
</div>
</div>
<div class="content">
<div class="section" style="padding: 0 0 10px 0;">
<video controls poster="assets/img/video_poster.jpg" style="padding-bottom: 20px; width: 100%">
<source src="https://storage.googleapis.com/immersive-lf-video-siggraph2020/DV_SIGG20_2020061701.mp4" type="video/mp4">
Your browser does not support HTML5 video.
</video>
</div>
<div class="paper-thumbnails">
<div class="page">
<a href="https://storage.googleapis.com/immersive-lf-video-siggraph2020/ImmersiveLightFieldVideoWithALayeredMeshRepresentation.pdf"><img src="assets/img/DeepViewVideo_Submit1.jpg" alt="Download .pdf"></a>
</div>
<div class="page">
<a href="https://storage.googleapis.com/immersive-lf-video-siggraph2020/ImmersiveLightFieldVideoWithALayeredMeshRepresentation.pdf"><img src="assets/img/DeepViewVideo_Submit2.jpg" alt="Download .pdf"></a>
</div>
<div class="page">
<a href="https://storage.googleapis.com/immersive-lf-video-siggraph2020/ImmersiveLightFieldVideoWithALayeredMeshRepresentation.pdf"><img src="assets/img/DeepViewVideo_Submit3.jpg" alt="Download .pdf"></a>
</div>
<div class="page">
<a href="https://storage.googleapis.com/immersive-lf-video-siggraph2020/ImmersiveLightFieldVideoWithALayeredMeshRepresentation.pdf"><img src="assets/img/DeepViewVideo_Submit4.jpg" alt="Download .pdf"></a>
</div>
<p class="pt-15">
Click to <a href="https://storage.googleapis.com/immersive-lf-video-siggraph2020/ImmersiveLightFieldVideoWithALayeredMeshRepresentation.pdf" class="content-link">download</a> a PDF of the paper.
</p>
</div>
<div class="section">
<h5>Abstract</h5>
<p><span class="dropcap">W</span>e present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We record immersive light fields using a custom array of 46 time-synchronized cameras distributed on the surface of a hemispherical, 92cm diameter dome. From this data we produce 6DOF volumetric videos with a wide 80-cm viewing baseline, 10 pixels per degree angular resolution, and a wide field of view (>220 degrees), at 30fps video frame rates. Even though the cameras are placed 18cm apart on average, our system can reconstruct objects as close as 20cm to the camera rig. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells which are better suited for representing panoramic light field content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final, compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser.</p>
</div>
</div>
<div class="hero-viewer">
<div class="hero-container" style="display: flex;">
<div id="scene-hero" class="scene-lf"></div>
</div>
<div class="content">
<p class="pt-15 italic center-text">
A single frame from a video light field showing a geometrically complex workshop scene with reflections and sparks.<br />
Scroll down to the example scenes at the bottom of this page to view the full video light field in your browser.
</p>
</div>
</div>
<div class="content">
<div class="section">
<h5>VR Headset Demo</h5>
<p>Our VR headset demo is currently available for PC-based VR headsets (e.g. Oculus Rift, Oculus Rift S, Oculus Quest using Oculus Link, HTC Vive, or Valve Index).</p>
<p>
Click to <a href="https://storage.googleapis.com/immersive-lf-video-siggraph2020/DeepViewSiggraphViewer_08_24.zip" class="content-link">download</a> the VR demo.
</p>
<p>To run, extract the .zip file and execute the contained <em>DeepViewVideo.exe</em>.</p>
<p>
Be sure to extract to a directory whose path contains no spaces. We have received reports that spaces in the directory path cause the demo to render only a black screen.
</p>
<h5>Web Examples</h5>
<p>Below are several light field videos that can be explored interactively in your web browser. Click on a scene's thumbnail to view the light field in motion. Additionally, the links below each thumbnail allow you to explore the intermediate scene representations used in our light field video processing pipeline. These include:
</p>
<p><b>MSI</b>: a Multi-Sphere Image.<br />
<b>LM</b>: a Layered Mesh with individual layer textures.<br />
<b>LM-TA</b>: a Layered Mesh with a texture atlas.</p>
<p>Please see the paper manuscript for more information about these representations. <br />Note: to reduce download times, the examples below are at a lower resolution than the main results shown in our paper. Please see our video for full-resolution results.</p>
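<!--
  Editor's note: a hedged sketch of how one frame of the LM-TA (layered mesh
  with texture atlas) representation could be organized in JavaScript. The
  field names below are illustrative assumptions, not the actual on-disk
  format; see the paper for the real details.

  const layeredMeshFrame = {
    // One packed RGBA (+ depth) video texture shared by all layers.
    atlas: null, // e.g. an HTMLVideoElement used as a WebGL texture source
    // A small, fixed number of layers, drawn back-to-front with alpha blending.
    layers: [
      {
        vertices: new Float32Array(0), // layer mesh geometry (x, y, z, ...)
        uvs: new Float32Array(0),      // texture coordinates into the atlas
        indices: new Uint16Array(0)    // triangle indices
      }
    ]
  };
-->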
<p>All examples are available in the following resolutions:</p>
<p>
<b>Low-Res</b>: a web- and mobile-friendly resolution.<br />
<b>High-Res</b>: better for workstations and laptops with beefy GPUs and a high-bandwidth internet connection.
</p>
<p>
For each scene we have also made the raw video data and camera models from our 46-camera rig available to download. <a href="https://github.com/augmentedperception/deepview_video_dataset">Click here to learn more</a>. If you want to learn how to write your own JavaScript viewer for our layered mesh format, take a look at this <a href="simple_viewer">Simple Viewer</a>. Using the Chrome web browser, open the developer console, where you can browse and live-edit the JavaScript code.
</p>
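<!--
  Editor's note: for readers following the Simple Viewer link above, this is
  a minimal, hypothetical skeleton of a render loop for layered-mesh content.
  drawLayer() and the scene/atlas details are assumptions; the linked Simple
  Viewer contains the real, working code.

  const gl = document.querySelector('canvas').getContext('webgl');
  const video = document.createElement('video');
  video.src = 'texture_atlas.mp4'; // hypothetical atlas video for one scene
  video.muted = true;
  video.loop = true;

  function renderLoop() {
    // Draw the layers back-to-front with standard alpha blending,
    // matching the "over" operator sketched near the abstract.
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
    // for (const layer of scene.layers) drawLayer(gl, layer, video);
    requestAnimationFrame(renderLoop);
  }
  video.play().then(() => requestAnimationFrame(renderLoop));
-->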
<p class="half-gap">We presented a new higher-resolution 24-camera hemispherical light field camera rig called <span class="italic"><b>Brutus</b></span> at the <a href="https://visual.ee.ucla.edu/ccd2021.htm">CVPR 2021 Workshop on Computational Cameras and Displays</a>; the <a href="https://storage.googleapis.com/immersive-lf-video-siggraph2020/CVPR_CCD_2021_Brutus_Abstract.pdf">two-page abstract</a> is here:</p>
<p>Jay Busch, Peter Hedman, Matthew DuVall, Matt Whalen, Michael Broxton, John Flynn, Ryan Overbeck, Daniel Erickson and Paul Debevec. <a href="https://storage.googleapis.com/immersive-lf-video-siggraph2020/CVPR_CCD_2021_Brutus_Abstract.pdf"><span class="italic">Brutus: A Mid-Range Multi-Camera Array for Immersive Light Field Video Capture</span></a>. CVPR Workshop on Computational Cameras and Displays, June 2021.</p>
<!-- This gets filled with DeepView examples in the js -->
<div class="deepview-container" id="deepview-container"></div>
</div>
<div class="section">
<h5 style="text-transform: none;">BibTeX</h5>
<div class="bibtex-text">
@article{broxton2020immersive,<br />
title = {Immersive Light Field Video with a Layered Mesh Representation},<br />
author = {Michael Broxton and John Flynn and Ryan Overbeck and Daniel Erickson and Peter Hedman and Matthew DuVall and Jason Dourgarian and Jay Busch and Matt Whalen and Paul Debevec},<br />
journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},<br />
publisher = {ACM},<br />
volume = {39},<br />
number = {4},<br />
pages = {86:1--86:15},<br />
year = {2020}<br />
}
</div>
</div>
</div>
<div class="footer">
<div class="footer-decoration">
<div class="footer-decoration-col" style="background-color: #aecbfa;"></div>
<div class="footer-decoration-col" style="background-color: #f6aea9;"></div>
<div class="footer-decoration-col" style="background-color: #fde293;"></div>
<div class="footer-decoration-col" style="background-color: #a8dab5;"></div>
</div>
<div class="footer-content">
<div class="footer-links-row">
<div class="footer-links-left">
<h6>Related Work</h6>
<ul class="footer-links">
<li><a href="https://augmentedperception.github.io/deepview">DeepView: View Synthesis with Learned Gradient Descent</a></li>
<li><a href="https://augmentedperception.github.io/lowcost-panoramic-LFV">A Low Cost Multi-Camera Array for Panoramic Light Field Video Capture</a></li>
<li><a href="https://augmentedperception.github.io/welcome-to-lightfields">A System for Acquiring, Processing, and Rendering Panoramic Light Field Stills for Virtual Reality</a></li>
</ul>
</div>
<div class="footer-links-right">
<h6>Additional Links</h6>
<ul class="footer-links">
<li><a href="https://s2020.siggraph.org/conference/program-events/technical-papers">SIGGRAPH 2020 Technical Papers</a></li>
<li><a href="https://dl.acm.org">ACM Digital Library</a></li>
</ul>
</div>
</div>
</div>
</div>
</div>
<script src="assets/js/main.js"></script>
</body>
</html>