Structured Lighting For 3D Sensors

Monday, November 13th, 2017 - Light, Photovoltaic, Transducer/Sensor


Structured lighting research has also received wide support both in Europe and in the USA, particularly in the areas of inspection and quality control. The method is based on the idea of encoding geometric information in the illumination to help extract the required geometric information about the object from its 2-D image. This is achieved by projecting a suitable light pattern (using a high-power light projector or a laser) onto the target object, observing the deformations that the object's shape produces in the pattern with a suitable 2-D optical transducer such as a camera, and applying triangulation calculations to obtain a depth map.


Figure 1. Structured lighting generalized diagram (light stripe pattern shown)

The light pattern can be a series of geometrically arranged dots, parallel lines or more simply a sheet of light, depending on the shape of the object and the application. Figure 1 shows how a plane of light is projected on to the target object (in this case an optically opaque cube) and produces the 2-D image shown in Figure 2. This image, often referred to as the ‘raw data’, is then processed to extract the 3-D information about the object.
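A common first step in processing the raw data is to locate the stripe in each image row, which reduces the 2-D image to a one-dimensional stripe profile. The sketch below (a hypothetical NumPy example, not from the article; the function name and synthetic image are assumptions) uses the simplest detector: the brightest pixel per row.

```python
import numpy as np

def extract_stripe(image):
    """For each image row, return the column index of the brightest
    pixel -- a simple peak detector for a single light stripe."""
    return np.argmax(image, axis=1)

# Synthetic 5x7 "raw data": one bright pixel per row, with the stripe
# shifting sideways as the surface depth changes.
img = np.zeros((5, 7))
stripe_cols = [2, 2, 3, 4, 4]       # true stripe position per row
for r, c in enumerate(stripe_cols):
    img[r, c] = 255.0               # illuminated pixel

print(extract_stripe(img).tolist())  # -> [2, 2, 3, 4, 4]
```

In practice sub-pixel methods (e.g. an intensity-weighted centroid across each row) are preferred over a plain argmax, but the principle is the same.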

Only the part of the object that is illuminated is sensed by the camera, so in the case of a single plane of light (as shown in Figures 1, 2 and 3(a)) the image is restricted to an essentially one-dimensional entity, thereby simplifying the pixel correspondence problem.


Figure 2. Camera output (raw data)

The light plane itself has a known position, and every point in the object image must also lie on the light plane in 3-D space. To find exactly where such a point lies (i.e. how far it is from the projector and the camera), note that the light plane and the camera's line of sight intersect in exactly one point (provided the camera's focal point does not lie in the light plane). Thus, by computing the intersection of the line of sight with the light plane, we can calculate the 3-D position of any object point illuminated by the stripe.
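The intersection described above can be sketched as follows (a hypothetical NumPy example; the function name, plane parameters and camera geometry are assumptions for illustration). The camera ray through a pixel, written as origin + t·direction, is intersected with the known light plane n·x = d to recover the 3-D object point.

```python
import numpy as np

def triangulate(ray_origin, ray_dir, plane_n, plane_d):
    """Intersect the camera's line of sight (origin + t * dir) with
    the light plane n . x = d; return the 3-D point on the object."""
    denom = plane_n @ ray_dir
    if abs(denom) < 1e-9:            # ray parallel to the light plane
        raise ValueError("line of sight does not intersect the plane")
    t = (plane_d - plane_n @ ray_origin) / denom
    return ray_origin + t * ray_dir

# Light plane x = 2 (normal along x-axis), camera at the origin with a
# line of sight through pixel direction (1, 0, 1).
p = triangulate(np.zeros(3), np.array([1.0, 0.0, 1.0]),
                np.array([1.0, 0.0, 0.0]), 2.0)
print(p.tolist())                    # -> [2.0, 0.0, 2.0]
```

The parallel-ray check corresponds to the degenerate case mentioned above, where the line of sight lies in (or parallel to) the light plane and no unique intersection exists.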


Figure 3. Different types of structured light patterns

It is important to note that only points which can be 'seen' by both the light source and the camera at once can be computed in 3-D. Since the triangulation calculations require a non-zero baseline, the camera cannot be too close to the light source, and so concavities in the scene are potentially difficult to measure: the camera and the light source may not both be able to see into them at the same time. Another potential problem is created by object surfaces that are parallel to the light plane, since they will have only a relatively small number of lines projected onto them.

These and similar problems can be alleviated by using different light patterns, as shown in Figure 3(a), (b) and (d). However, the image processing is easier when the light pattern is a single plane of light (Popplestone et al., 1975; Agin, 1972; Sugihara, 1977) than when it is a series of geometrically arranged dots or stripes, owing to the higher visual data content of the latter two techniques.

Robot vision based on structured lighting has recently produced some industrial applications worthy of note both in the USA and the UK (Nelson, 1984; Meta Machines, 1985; Edling, 1986).

I hope this information about "Structured Lighting For 3D Sensors" is easy to understand.