Movie Editing and Cognitive Event Segmentation in Virtual Reality Video
June 13, 2018 · Declared Dead · ACM Transactions on Graphics
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Ana Serrano, Vincent Sitzmann, Jaime Ruiz-Borau, Gordon Wetzstein, Diego Gutierrez, Belen Masia
arXiv ID
1806.04924
Category
cs.GR: Graphics
Citations
108
Venue
ACM Transactions on Graphics
Last Checked
1 month ago
Abstract
Traditional cinematography has relied for over a century on a well-established set of editing rules, called continuity editing, to create a sense of situational continuity. Despite massive changes in visual content across cuts, viewers in general experience no trouble perceiving the discontinuous flow of information as a coherent set of events. However, Virtual Reality (VR) movies are intrinsically different from traditional movies in that the viewer controls the camera orientation at all times. As a consequence, common editing techniques that rely on camera orientations, zooms, etc., cannot be used. In this paper we investigate key relevant questions to understand how well traditional movie editing carries over to VR. To do so, we rely on recent cognition studies and the event segmentation theory, which states that our brains segment continuous actions into a series of discrete, meaningful events. We first replicate one of these studies to assess whether the predictions of such theory can be applied to VR. We next gather gaze data from viewers watching VR videos containing different edits with varying parameters, and provide the first systematic analysis of viewers' behavior and the perception of continuity in VR. From this analysis we make a series of relevant findings; for instance, our data suggests that predictions from the cognitive event segmentation theory are useful guides for VR editing; that different types of edits are equally well understood in terms of continuity; and that spatial misalignments between regions of interest at the edit boundaries favor a more exploratory behavior even after viewers have fixated on a new region of interest. In addition, we propose a number of metrics to describe viewers' attentional behavior in VR. We believe the insights derived from our work can be useful as guidelines for VR content creation.
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
👻 Similar Papers
In the same crypt → Graphics
Everybody Dance Now · 👻 Ghosted
Deep Bilateral Learning for Real-Time Image Enhancement · 👻 Ghosted
Animating Human Athletics · 👻 Ghosted
BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration · 👻 Ghosted
Shape Transformation Using Variational Implicit Functions · 👻 Ghosted
Died the same way → 👻 Ghosted
Language Models are Few-Shot Learners · 👻 Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library · 👻 Ghosted
XGBoost: A Scalable Tree Boosting System · 👻 Ghosted