Static Range Finders
Static LED arrays offer two main advantages over scanning laser range finders: they have no moving parts and they employ cheaper light sources. They do, however, suffer from lower optical launch power and, potentially, lower x-y resolution, which tends to limit their use to low-cost, short-range applications. This makes them eminently suitable for eye-in-hand robot vision applications, particularly in multisensor robot systems, where they can fill the natural gap between the visual sensing of an overhead camera and the pressure sensing of a tactile sensor.
The principle of operation of a static LED array rangefinder is essentially the same as that of the scanning laser device: each LED output is focused on to the target object and the reflected light is processed to provide the range data. The scan, however, is achieved electronically by multiplexing the drive to the LED matrix so that only one LED is ‘on’ at any point in time, thereby avoiding the need for a potentially troublesome mechanical scan. Two main techniques have been developed to achieve this goal: an x-y LED array sensor and a circular LED array sensor.
A 2-D LED array sensor has been developed based on the phase measurement principle. As shown in Figure 1 this sensor has the light-emitting diodes arranged in a 2-D array with the same x-y resolution as the intended object range image.
The principle of operation, as illustrated in Figures 2 and 3, is as follows: a single LED is modulated at an appropriate frequency and focused on to the target object. The reflected/scattered optical flux Φs is coupled to the secondary detectors, whose output signal Vs is compared with the reference signal Vr provided by the primary detectors, which receive light only (and directly) from the LED array. The phase difference between these two signals provides a measure of the object distance, as shown by eqns (1) and (2):
where ωM is the LED modulating frequency and c the speed of light. The procedure is repeated for all LEDs in the array and a 3-D scan of the target object is thus achieved.
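Although eqns (1) and (2) are not reproduced here, the standard phase-shift relation for a round-trip path of 2d gives d = cΔφ/(2ωM). A minimal sketch of that computation (the modulation frequency and phase value below are illustrative, not from the text):

```python
import math

C = 2.998e8  # speed of light, m/s

def range_from_phase(phase_shift_rad, f_mod_hz):
    """Distance from the measured phase difference between the
    reference signal Vr and the returned signal Vs.

    The light travels to the object and back (a path of 2*d), so
    phase_shift = omega_M * 2*d / c  =>  d = c * phase_shift / (2 * omega_M).
    """
    omega_m = 2 * math.pi * f_mod_hz  # angular modulation frequency
    return C * phase_shift_rad / (2 * omega_m)

# Illustrative figures: 10 MHz modulation, 30 degrees of measured phase shift
d = range_from_phase(math.radians(30.0), 10e6)  # roughly 1.25 m
```

Note that the unambiguous range of such a sensor is limited to c/(2·f_mod), since the phase measurement wraps every 2π.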
Local intelligence provides a degree of 3-D image pre-processing, such as ‘closest object point’ and ‘hold-site location’, which reduces both the communication with, and the processing load on, the main robot computer, a desirable goal in most applications such as obstacle avoidance and pick-and-place operations.
The features of such a sensor are low cost, high speed and medium x-y resolution; the resolution, moreover, increases as the sensor approaches the target object, making it ideal for eye-in-hand operation and for integration within a robot Multisensory Feedback System. Embryonic examples of such systems (such as one based on an overhead camera, a 3-D eye-in-hand vision sensor and a tactile sensor) have been shown to provide a considerable increase in the flexibility of robot operation and are the subject of wide-ranging research interest.
The sensor front-end, as shown in Figure 4, consists of a circular array of LEDs, an objective lens and a dedicated optical transducer acting as a light spot position detector.
The principle of operation is as follows: the LEDs are aligned so that all the light beams cross the optical axis at the same point, forming an optical cone whose tip is placed at approximately the centre of operation of the 3-D vision sensor. The objective lens and the optical transducer form a camera arrangement, so that when an object intercepts a light beam the resultant light spot is imaged on to the optical transducer, a planar PIN photodiode with homogeneous resistive sheets on both sides (the Hamamatsu S1300) which, as shown in Figure 5, can measure the photocurrents produced by the light spot on an x-y plane.
These values are then used by the local intelligence to evaluate the light spot position according to eqns (3) and (4), where r is the resistance of the homogeneous resistive sheet, L is the length of the transducer and I is the electrode current:
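Eqns (3) and (4) are not reproduced here; for a lateral-effect position-sensing photodiode of this kind, however, the spot coordinate along each axis is commonly recovered from the two opposing electrode photocurrents. A minimal sketch of that standard relation (the current and length values below are hypothetical):

```python
def spot_position(i1, i2, length):
    """Light-spot coordinate along one axis of a lateral-effect PSD.

    i1, i2 : photocurrents at the two opposite electrodes (A)
    length : active length L of the transducer along this axis (m)

    The homogeneous resistive sheet divides the photocurrent in
    proportion to the spot's distance from each electrode, so the
    normalised current difference gives the position relative to
    the centre of the device.
    """
    return (length / 2.0) * (i2 - i1) / (i1 + i2)

# Hypothetical electrode currents on a 13 mm device:
# the spot sits roughly 1 mm from the centre
x = spot_position(4.6e-6, 6.2e-6, 13e-3)
```

The same calculation, applied to the electrode pair on the opposite face of the photodiode, yields the y coordinate.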
Knowledge of the light spot x-y position, the camera optics and the trajectory of the light beam allows triangulation calculations to be performed, in accordance with eqn (5), to find the object point coordinates in 3-D. From the measurement of multiple close object points, the surface orientation can be calculated.
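Eqn (5) is likewise not shown; as an illustrative sketch of the triangulation step, one can intersect the known LED beam line with the camera ray through the imaged spot. The pinhole-camera geometry and all numerical values here are assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def triangulate(beam_origin, beam_dir, spot_xy, focal_length):
    """Closest point between the known LED beam and the camera ray
    through the imaged light spot (pinhole model: lens at the origin,
    optical axis along +z, image plane at z = focal_length).
    """
    p1 = np.asarray(beam_origin, float)
    d1 = np.asarray(beam_dir, float)
    d1 = d1 / np.linalg.norm(d1)
    # Camera ray from the lens centre through the spot on the image plane
    d2 = np.array([spot_xy[0], spot_xy[1], focal_length])
    d2 = d2 / np.linalg.norm(d2)
    p2 = np.zeros(3)
    # Mutually closest points on the two lines (standard two-line solve)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Illustrative case: an LED 5 cm off-axis fires a beam that the object
# intercepts 10 cm in front of the lens; the spot images 1 mm off-centre
# with a 10 mm focal length.  Recovers the point at about (0.01, 0, 0.10) m.
p = triangulate([0.05, 0.0, 0.0], [-0.04, 0.0, 0.1], (0.001, 0.0), 0.01)
```

Repeating this for several neighbouring object points yields a small patch of 3-D coordinates, from which the surface normal (and hence orientation) can be fitted.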
The features of this sensor are the simplicity of its operating principle, its speed (it can generate up to 1000 range measurements per second) and its precision (0.07 mm for distance and 1.5° for surface orientation). Its drawbacks, compared with the 2-D LED array technique, are its larger size (Figure 6 shows the typical size for a six-LED sensor) and its smaller range of operation (4-5 cm, compared with a typical 10-80 cm), both thought to be caused indirectly by the inherent construction requirement that the LEDs be angled with respect to the transducer optical axis.
Development of this sensor is still proceeding, with Prof. Kanade and his team currently working on a prototype that adds LED modulation (to allow operation under ambient lighting conditions), fibre-optic cables and laser diodes (to increase the incident optical power) and double optical cones (to eliminate the singularity of ‘plane fitting’).
Another example is an optical range sensor capable of providing a 2-D or 3-D image, depending on the light source configuration used. This sensor is also based on measuring the phase difference between the launched and the returned optical signal, and therefore rests on the same theory. The main difference lies in the use of laser diodes in preference to LEDs, a choice that affords a higher modulation frequency and therefore a potentially higher range measurement resolution. Unfortunately, the higher unit cost, larger parametric variations and more complex drive circuit requirements of laser diodes limit their use to individual light sources (in the same fashion as scanning laser rangefinders) or to 1-D arrays, which then require knowledge of the object movement, as in the case of conveyor belt use, to produce a 3-D image.
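The link between modulation frequency and range resolution follows directly from the phase relation d = cΔφ/(2ωM): for a fixed phase-measurement resolution, the smallest resolvable distance step scales inversely with ωM. A small illustrative calculation (the 0.1° phase resolution and both frequencies are assumed figures, not from the text):

```python
import math

C = 2.998e8  # speed of light, m/s

def range_resolution(phase_res_deg, f_mod_hz):
    """Smallest resolvable distance step for a given phase-measurement
    resolution, from d = c * phase / (2 * omega_M)."""
    omega_m = 2 * math.pi * f_mod_hz
    return C * math.radians(phase_res_deg) / (2 * omega_m)

# Same 0.1 degree phase resolution at two modulation frequencies:
led = range_resolution(0.1, 10e6)     # ~10 MHz LED modulation: ~4.2 mm steps
laser = range_resolution(0.1, 100e6)  # ~100 MHz laser diode: ten times finer
```

The tenfold increase in modulation frequency buys a tenfold improvement in distance resolution for the same phase-measuring electronics, which is precisely the attraction of the laser diode despite its higher cost.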
One instance of a research device successfully developed, marketed and now commercially available is the single-point optical rangefinder based on triangulation. This device was first proposed in the late seventies and subsequently developed by several researchers to yield the present-day product. The principle of operation is illustrated in Figure 7.
These, however, are single-point devices; that is, they are capable of providing the distance measurement to only one point on the object surface and, because of their size, are unsuitable for mounting as an x-y array, which would be necessary in order to provide a 3-D image. This sensor geometry does, however, allow high-precision single-point range measurements (for instance, Tanwar quoted a precision of 0.01 micrometres for polished surfaces), albeit within a small range of operation (±75 micrometres for the Tanwar sensor).