Interaxial separation

Here is a stereo 360° panorama of a Sydney Harbour view.


Here the separation between the lenses of my twin DSLR rig is 30 cm. This large value exaggerates the depth visible in the scene, so even quite distant areas show some depth. If there were areas close to the camera, those areas would in fact show too much apparent depth, so for such a large camera interaxial to work the camera needs to be up high and well away from any close poles, walls and the like.
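One way to see why a 30 cm interaxial brings out depth in distant areas is to look at the angular parallax the baseline produces at various distances. The sketch below is illustrative only (it assumes a simple small-angle geometry, not anything specific to this rig): note how near objects get enormous parallax while very distant ones converge towards zero, which is the "too much depth up close" problem described above.

```python
import math

# Illustrative sketch: angular parallax (in arc-minutes) that a stereo
# baseline subtends when viewed from an object at a given distance.
# Assumes simple triangulation geometry; not specific to the rig above.

def parallax_arcmin(interaxial_m: float, distance_m: float) -> float:
    """Angle subtended by the lens baseline as seen from the object."""
    return math.degrees(math.atan2(interaxial_m, distance_m)) * 60

INTERAXIAL = 0.30  # the 30 cm twin-DSLR separation described in the text

for d in (2, 10, 30, 200, 1000):
    print(f"{d:>5} m: {parallax_arcmin(INTERAXIAL, d):7.2f} arcmin")
```

The values at 200 m and 1000 m are small but still non-zero, which is why distant areas retain some depth with this baseline, while the value at 2 m is huge, which is why nothing can be allowed near the camera.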

The screen window is set at about 30 m, so there are some window violations in the foreground grassy area beneath the camera. These are low-contrast areas, though, and depth discrimination overall is usually better if there is not too much disparity in the more important parts of the scene. It is the range of visible depth differences, rather than the total depth, that creates the strongest 3D impression. Consider the furthest features in the scene, for example the Harbour Bridge: the anaglyph fringes there are wide, but they are the same width all along the bridge, so no actual depth structure is seen in the bridge itself.

With a high, isolated viewpoint, though, you risk losing some feeling of 3D immersion in the scene, even with a large interaxial separation. Compare the “immersivity” of this panorama with the depth in this close-up view of some zombies (taken with a four-camera miniature rig) where the interaxial between adjacent pairs of cameras is only about 3.5 cm.

Some technical details of the shooting and stitching process for the panorama: Canon 5D MkIIs with 10.5mm Nikkors, using the sRAW1 setting. 40 frames per camera in 16 seconds on a rotating turntable, i.e. one exposure per camera every 0.4 seconds (using an intervalometer and a spliced cable release). This, by the way, is the fastest rate at which these cameras can shoot Raw files in long continuous sequences.
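The shooting schedule above is easy to sanity-check. A quick sketch (assuming the 40 frames are spaced evenly over one full 360° rotation, which the text implies but does not state outright):

```python
# Back-of-envelope check of the shooting schedule described above.
# Assumption: 40 evenly spaced frames over a full 360-degree rotation
# lasting 16 seconds.

FRAMES = 40
ROTATION_S = 16.0

interval_s = ROTATION_S / FRAMES   # time between exposures per camera
step_deg = 360.0 / FRAMES          # yaw increment between adjacent frames

print(f"{interval_s} s between exposures, {step_deg} degrees per step")
```

That works out to one exposure every 0.4 seconds, with adjacent frames 9° apart, giving plenty of overlap for stitching with a 10.5mm fisheye.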

Here the cameras are symmetrically arranged on the camera rotator. To calibrate the primary camera (the one whose yaw, pitch and roll values I will use for the other camera) I used the masking feature in PTGui to force the program to find common points between frames only in very distant parts of the scene. This meant masking all the foreground areas, the trees and the closer building. Very often you wouldn't be able to do this and still have common areas for point finding between frames, but here, with the distant views and the high viewpoint, it was possible.