6DoF and depth maps - why are they important?
Our team at SLRLabs has been focusing heavily on depth maps as a way of improving the VR experience. But what are “depth maps”? In today’s blog, we’ll explain exactly what they are, and why our work in this field will help virtual reality take a massive leap forward.
A “depth map” stores the distance from the camera to the scene at every pixel of an image, telling you how near or far each object in VR is. This allows 6DoF (6 degrees of freedom) to be employed, which gives the user the freedom to move within their virtual environment: you can look left and right, up and down, and even move forwards and backwards. This technology could also be employed to bring 3D scanned scenes to life. You can read more about this on our forum post.
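To make this concrete, here is a minimal Python sketch (with made-up values, not our production code) showing that a depth map is just a per-pixel array of distances, and how a pixel plus its depth can be lifted into a 3D point using a simple pinhole camera model - which is exactly what makes 6DoF rendering possible:

```python
import numpy as np

# A depth map is just a 2D array: one distance value (here in meters)
# per pixel of the corresponding video frame.
depth_map = np.array([
    [2.0, 2.0, 5.0],
    [1.5, 1.8, 5.0],
    [1.2, 1.5, 4.8],
])  # a hypothetical 3x3 example; real maps match the frame resolution

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with its depth into camera coordinates,
    using a pinhole model with focal lengths (fx, fy) and principal
    point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example: lift the center pixel using made-up camera intrinsics.
point = pixel_to_3d(1, 1, depth_map[1, 1], fx=1.0, fy=1.0, cx=1.0, cy=1.0)
print(point)  # [0.  0.  1.8] -- the object sits 1.8 m straight ahead
```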
It is a vital way of making VR come to life and seem fully immersive. But as well as enabling a fully immersive experience, depth maps are also useful for correcting distracting visual artifacts.
You’ll have noticed that some VR videos have artifacts which make the viewing experience uncomfortable. Most of these are caused by the mismatch between the fixed positions of the stereoscopic camera lenses and the actual positions of your eyes. Here are some of the problems you may have experienced and how depth maps can help:
Close-up scenes
During some close-up scenes, you may have to close one eye to see the action. This happens because the subject is close to the camera: your eyes converge (cross) to keep a near object at the center of each eye's field of view, rather than looking straight ahead. Stereoscopic VR cameras have a large field of view but no moving parts, so the captured image remains static. This is illustrated in the diagram below.
As a result, a person instinctively crosses their eyes when something appears close up, but the image from the static cameras remains unchanged. The brain cannot fuse the two views, so people often feel eye strain and tend to close one eye. Panoramic stereo cameras usually have a larger field of view than the eye, so it is possible to fix this issue by rotating the projected image at runtime. The rotation rate should correspond to eye position, which can either be taken from an eye-tracking device (most headsets do not have that ability right now) or calculated from the depth at the central area of sight and the interpupillary distance (IPD) using simple geometry, as in the sketch below.
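The geometry really is simple. Below is a minimal Python sketch (our illustration for this post, not the actual player code) that computes how far each eye's projected image should rotate inward for content at a given depth, assuming a typical IPD of 63 mm:

```python
import math

def convergence_angle(depth_m, ipd_m=0.063):
    """Angle (radians) each eye rotates inward to fixate a point at
    depth_m straight ahead. It comes from the right triangle formed by
    half the IPD (opposite side) and the depth (adjacent side)."""
    return math.atan((ipd_m / 2.0) / depth_m)

# An object 30 cm away needs ~6 degrees of inward rotation per eye;
# one at 3 m needs only ~0.6 degrees, which is barely noticeable.
for d in (0.3, 1.0, 3.0):
    print(f"{d} m -> {math.degrees(convergence_angle(d)):.2f} deg per eye")
```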
Head tilt issues
When you tilt your head, one eye ends up lower than the other, leading to a confusing image. In the real world, visible objects would shift up or down by an amount that depends on their distance, while the video from the camera remains static.
The solution to this head tilt issue is to move the images up and down at the correct rate, dependent on depth, as sketched below.
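As a rough sketch of that geometry (again our illustration, not the actual implementation): when the head rolls by an angle θ, each eye is displaced vertically by roughly (IPD/2)·sin θ, so content at depth d needs a vertical angular shift of atan((IPD/2)·sin θ / d):

```python
import math

def vertical_shift_angle(roll_rad, depth_m, ipd_m=0.063):
    """Angular vertical shift (radians) to apply to each eye's image
    when the head is rolled by roll_rad, for content at depth_m.
    Each eye moves up/down by ~(IPD/2)*sin(roll), so near objects
    need a much larger correction than far ones."""
    eye_offset = (ipd_m / 2.0) * math.sin(roll_rad)
    return math.atan(eye_offset / depth_m)

# With a 20-degree head tilt, a face 0.5 m away needs a noticeable
# correction (~1.2 degrees), while a wall 5 m away barely moves.
roll = math.radians(20)
for d in (0.5, 5.0):
    print(f"{d} m -> {math.degrees(vertical_shift_angle(roll, d)):.3f} deg")
```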
Inability to look to the sides
This is one of the biggest reported issues. If you rotate your head, your eye positions in the virtual world will differ significantly from the positions of the camera lenses. When you look at far objects the distortions are negligible, but close objects become difficult to see.
Each of these corrections requires information about the distance to the objects, so the depth maps should either be precomputed and streamed along with the video, or calculated on the fly while the video plays. We can then correct each frame depending on the depth information and head position, as the sketch below illustrates.
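Putting the pieces together, a player applying these corrections might run per-frame logic along the following lines. This is a hypothetical, self-contained sketch (the helpers repeat the formulas from the blocks above so it runs on its own); the real player applies the resulting angles to each eye's projected panorama rather than just printing them:

```python
import math
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw: float    # radians
    pitch: float  # radians
    roll: float   # radians

# The two depth-driven corrections from the sketches above.
def convergence_angle(depth_m, ipd_m=0.063):
    return math.atan((ipd_m / 2.0) / depth_m)

def vertical_shift_angle(roll_rad, depth_m, ipd_m=0.063):
    return math.atan((ipd_m / 2.0) * math.sin(roll_rad) / depth_m)

def correct_eye(depth_at_center, head_pose, ipd_m=0.063):
    """Combine both corrections for one eye.
    depth_at_center would be sampled from the depth map at the center
    of this eye's view; a real player would use the returned angles to
    rotate and shift that eye's projected image before display."""
    yaw_fix = convergence_angle(depth_at_center, ipd_m)
    vertical_fix = vertical_shift_angle(head_pose.roll, depth_at_center, ipd_m)
    return yaw_fix, vertical_fix

# Hypothetical frame: the viewer tilts their head 15 degrees while
# looking at something 0.6 m away.
pose = HeadPose(yaw=0.0, pitch=0.0, roll=math.radians(15))
yaw_fix, vertical_fix = correct_eye(depth_at_center=0.6, head_pose=pose)
print(f"rotate each eye inward by {math.degrees(yaw_fix):.2f} deg, "
      f"shift vertically by {math.degrees(vertical_fix):.2f} deg")
```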
Want to know more?
Here you can try our demo player with the above corrections applied, allowing you to adjust the correction rate. Please note that it needs to be installed via SideQuest and only works on the Quest 2. Here are three videos to try it out with:
We’d love to know what you think about these depth maps and any other ideas you may have. Join the discussion on our forum and let us know your thoughts.