Dynamic focusing has not received as much attention as the other two indirect 3-D vision methods, but it represents an important alternative, particularly in terms of price and speed of operation. The technique relies upon correlating the signatures from two linear photodiode arrays to obtain range information (Stauffer and Wilwerding, 1984). It is therefore inherently less flexible than structured lighting or stereo vision (it relies on conveyor-belt movement to provide the third axis), but it is eminently suitable for object dimensioning, edge detection and tracking.
Dynamic Focusing Vision Sensors
Vision sensors based on dynamic focusing make use of the automatic focusing technology first developed in the mid-1970s for the 35 mm SLR and video camera markets. They sense the relative position of the plane of focus by analysing the image phase shift which occurs when a picture is not in focus.
The principle of operation is as follows: when an image is in focus, all the light arriving at a single point on the image plane comes from a corresponding single point in the scene. That is to say, all the light collected by the lens from this single point in the scene is focused on the corresponding single point (and this one only) on the image plane. The whole lens, viewed from a point in the image, must therefore appear to be of one uniform colour and brightness, like the corresponding point in the scene. It follows that, if each point in the image were divided into two halves, both halves would have the same colour and brightness values (i.e. the same 'signature'). When the scene image is not in focus, the two halves of an image point no longer have the same colour and brightness values (i.e. a different 'signature'), and the degree of difference provides a measure of how far out of focus the scene image is.
This is the mechanism upon which dynamic focusing vision sensors, such as the Honeywell HDS-23, are based. The front-end optical transducer, as shown in Figure 1(a), is a single row of twenty-three light cells. These are similar to the cells in a solid-state camera, but with two major differences: each cell has its own miniature lens, about the diameter of a human hair, and is made up of two halves, a right half and a left half. Each of the cells in the HDS-23 sensor can therefore look at the surface of the lens from a single point on the image plane and determine whether the light brightness from the right half of the lens is the same as that from the left half. Local intelligence within the sensor then computes a 'signature' of the scene on the image plane, as seen by the twenty-three left and right light cells, and determines whether these two signatures are the same (image in focus) or not (image not in focus).
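The focus test described above can be sketched in a few lines of code. This is a minimal illustration, not the HDS-23 firmware: the cell count, brightness values and tolerance are all hypothetical, and each signature is simply modelled as a list of per-cell brightness readings.

```python
def in_focus(left_signature, right_signature, tolerance=1e-3):
    """Return True when the left-half and right-half signatures
    agree cell by cell, i.e. the image is judged to be in focus."""
    return all(abs(left - right) <= tolerance
               for left, right in zip(left_signature, right_signature))

# Hypothetical brightness readings for a row of five cells.
left = [0.2, 0.2, 0.9, 0.9, 0.2]

print(in_focus(left, list(left)))                  # identical signatures: True
print(in_focus(left, [0.2, 0.9, 0.9, 0.2, 0.2]))  # shifted signature: False
```

The tolerance parameter stands in for the sensor's noise floor: real photodiode readings never match exactly, so "same signature" must mean "equal to within measurement noise".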
It turns out, as shown in Figure 2, that for a 2-D scene, for instance a black square on a white background, the out-of-focus signatures are actually the same but shifted in phase. By processing the direction and magnitude of this signature phase shift, the sensor can determine the magnitude and direction of the image-plane displacement from the focus condition and hence, as shown in Figure 3, the object distance (Iversen, 1983).
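The phase-shift measurement can be illustrated by a brute-force cross-correlation between the two signatures. This is a hedged sketch of the idea rather than the sensor's actual internal processing: the signature values, the search window and the error metric are all assumptions made for the example.

```python
def best_shift(left, right, max_shift=5):
    """Return the integer cell shift of `right` that best matches `left`,
    found by minimising the mean squared difference over the overlap.
    The sign of the shift gives the direction of defocus; its magnitude
    gives how far the image plane is from the focus condition."""
    best, best_err = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + shift])
                 for i in range(len(left))
                 if 0 <= i + shift < len(right)]
        err = sum((l - r) ** 2 for l, r in pairs) / len(pairs)
        if err < best_err:
            best, best_err = shift, err
    return best

# A dark bar on a light background, seen by a row of ten cells,
# and the same signature displaced by two cells.
sig = [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]
shifted = sig[2:] + [0, 0]

print(best_shift(sig, shifted))  # prints -2: two cells, in the negative direction
```

In the real sensor this shift would then be mapped, via a calibration such as that of Figure 3, to an object distance.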
Honeywell-Visitronics has recently released commercially a range of robotic sensors based on the dynamic focusing technique (Honeywell Visitronics, 1984).