Stereo imaging



In stereo imaging the range of an object is generally determined by the techniques called stereo triangulation and stereo disparity. There are also other techniques, such as depth from focus (DFF) and depth from defocus (DFD) [1,2,3]. DFF and DFD are normally considered a separate class, distinguished from triangulation techniques such as depth from stereo, vergence, or motion [4]. Another technique is single-lens, single-image passive ranging [5], but the most popular techniques remain stereo triangulation and stereo disparity. The next section describes the stereo imaging techniques.


Stereo is a well-known technique for obtaining depth information from digital images. Many research areas make use of stereo vision, e.g., mobile robots, photogrammetry, and stereo microscopy. Dhond and Aggarwal presented a review of stereo vision techniques [6]. Stereo is also used for ranging in aircraft navigation [7]. There are two formulations, with two different techniques, for ranging and acquiring depth information.


One way of determining range is triangulation. This requires two or more sensors, preferably far apart, since the accuracy of the system improves with the separation distance. The two cameras, placed at C1 and C2, together with a target point T form a triangle (Fig. 1), where d is the distance between the cameras.
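The triangle geometry above can be sketched numerically. The function below is a minimal illustration, not the essay's formulation: it assumes each camera reports the bearing angle from its own line of sight (perpendicular to the baseline) toward the target, with both rays converging; the function name and angle convention are hypothetical.

```python
import math

def triangulation_range(d, theta1, theta2):
    """Perpendicular range from the baseline to target T.

    d:        separation between the two cameras (the baseline)
    theta1/2: bearing angles (radians) from each camera's line of
              sight toward the target, measured inward (assumption)

    For converging rays the lateral offsets of the two rays must
    close the baseline: z*tan(theta1) + z*tan(theta2) = d.
    """
    return d / (math.tan(theta1) + math.tan(theta2))

# Hypothetical example: 1 m baseline, both bearings 45 degrees,
# so the target sits 0.5 m in front of the baseline midpoint.
r = triangulation_range(1.0, math.radians(45), math.radians(45))
print(round(r, 3))  # 0.5
```

Note how the computed range degrades as d shrinks: for a fixed angular measurement error, a shorter baseline produces a larger range error, which is why wide separation is preferred.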


The field-of-view (FOV) is the range of angles over which incident radiation can be collected by the detector. The FOV may be decomposed into its horizontal and vertical components, labeled HFOV and VFOV respectively. In both cases the FOV is determined by a combination of the focal length of the lens, f, and the size of the field stop, DFS. The field stop is a device that blocks rays beyond its dimensions from reaching the detecting element(s). The detecting elements are located at the focal plane, which is usually not the same location as the focal point. The location of the focal plane determines at what range objects will be brought into focus. The field stop is located just before the focal plane; if there is no physical stop, then the boundaries of the detecting elements determine the field stop dimensions.


For small angles (less than about 20°), which is generally true for the IFOV since its field stop is only one pixel in size, the inverse tangent can be accurately approximated by tan⁻¹(x) ≈ x (radians), in which case:

IFOV ≈ ps/f (if ps/f << 1) (4) [2]

IFOV = 2 tan⁻¹(ps/2f) (5)

FOV = 2 tan⁻¹(DFS/2f) (6)

where ps is the pixel size.
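Equations (4)-(6) are easy to check numerically. The sketch below evaluates the exact and small-angle forms for a hypothetical sensor (the 10 µm pixel and 50 mm focal length are illustrative values, not from the essay):

```python
import math

def fov(stop_size, f):
    """Full field of view in radians: FOV = 2*atan(D_FS / 2f), Eq. (6)."""
    return 2 * math.atan(stop_size / (2 * f))

def ifov(pixel_size, f):
    """Instantaneous FOV of a single pixel: Eq. (5) with D_FS = p_s."""
    return 2 * math.atan(pixel_size / (2 * f))

def ifov_small_angle(pixel_size, f):
    """Small-angle approximation IFOV ~= p_s / f, Eq. (4)."""
    return pixel_size / f

# Hypothetical sensor: 10 um pixels behind a 50 mm lens.
f = 50e-3   # focal length in metres
p = 10e-6   # pixel pitch in metres
print(ifov(p, f), ifov_small_angle(p, f))  # agree to many decimal places
```

For a single pixel the angle is so small that (4) and (5) agree to well below a microradian, which is why the approximation is stated as "generally true for IFOV"; the full FOV, by contrast, can be tens of degrees, where the exact form (6) must be used.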


Another way of determining range is by using stereo disparity, as shown in Fig. 4. Consider a setup using two cameras in stereo:

  • The two cameras have parallel optical axes and are separated by a distance d.
  • The line connecting the camera lens centers is called the baseline.
  • The baseline is perpendicular to the lines of sight of the cameras (the optical axes).
  • The x axis of the three-dimensional world coordinate system is parallel to the baseline [8].

Let the origin O of this system (Fig. 4) be midway between the lens centers. Consider a point (x, y, z), in three-dimensional world coordinates, on an object. Let this point have image coordinates (x'l, y'l) and (x'r, y'r) in the left and right image planes of the respective cameras. Let f be the focal length of both cameras, the perpendicular distance between the lens center and the image plane. Then, by similar triangles:

x'l = f (x + d/2)/z and x'r = f (x - d/2)/z

so the disparity is x'l - x'r = f d/z, and the range follows as z = f d/(x'l - x'r).
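The similar-triangles relation for this setup reduces to z = f·d / (x'l − x'r). A minimal sketch, assuming f, d, and the image coordinates are all expressed in the same units (the function name and the millimetre example values are hypothetical):

```python
def depth_from_disparity(f, d, xl, xr):
    """Range z = f*d / (x'l - x'r) for the parallel-axis stereo setup.

    f:  focal length of both cameras
    d:  baseline separation between the lens centers
    xl: image x-coordinate of the point in the left image plane
    xr: image x-coordinate of the same point in the right image plane
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    return f * d / disparity

# Hypothetical numbers: f = 8 mm, baseline = 120 mm, disparity = 0.5 mm
print(depth_from_disparity(8.0, 120.0, 0.25, -0.25))  # 1920.0 (mm)
```

The relation makes the trade-off explicit: disparity falls off as 1/z, so distant objects produce tiny disparities and range accuracy degrades with distance, while a longer baseline d recovers accuracy, mirroring the separation argument made for triangulation.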


The formulations of stereo triangulation and stereo disparity are different. Stereo triangulation applies at only one point (i.e., where the edges of the two fields of view coincide), whereas stereo disparity works at the center point between the cameras as well as at other points. This study establishes that when an object is at the center point, where stereo triangulation applies, the two techniques give the same range.


  1. International Journal of Computer Vision 39(2), pp. 141-162 (2000).
  2. Electro-optical Imaging Systems, Introduction to Naval Weapons Engineering.
  3. Depth from Defocus vs. Stereo: How Different Really Are They? Proc. International Conference on Pattern Recognition, pp. 1784-1786 (1998).
  4. Eric Krotkov, Knud Henriksen, and Ralf Kories, Stereo Ranging with Verging Cameras, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 12, December 1990.
  5. Imaging System Laboratory, University of Colorado-Boulder.
  6. Dhond U.R. and Aggarwal J.K., "Structure from Stereo," IEEE Transactions on Systems, Man, and Cybernetics, 19:1489-1510, 1989.
  7. Parshall E.R. and Mooney J.M., "Infrared Stereo Imaging for 3-D Tracking of Point Targets," Rome Laboratory RL/ERO, Hanscom AFB, MA 01731.
  8. Introduction to Stereo Imaging Theory.