3-D Sensors

Monday, November 13th, 2017 - Light, Transducer/Sensor


Indirect methods of obtaining depth maps, based largely on triangulation techniques, have made the largest contribution in this area. This is thought to be due in part to the existing optoelectronics technology (camera tubes, photodiode arrays, etc.) which, being inherently 2-D devices, require triangulation techniques if they are to be integrated within a 3-D vision system, and in part to the analogy with the human vision system, which is also based on a triangulation technique (Marr and Poggio, 1976, 1977).

Stereo Vision

Stereo vision, in particular, has received considerable attention. The disparity technique is based on the correlation between images of the same object taken by two different cameras under the same lighting conditions (Marr and Poggio, 1976), while the photometric technique is based on the correlation between images taken by the same camera under two different lighting conditions (Ikeuchi and Horn, 1979).


Figure 1. Stereo imaging diagram (after Nevatia, courtesy of Prentice-Hall, 1982)

Stereo vision sensors, like their 2-D counterparts, are also based on optical array transducers, both vacuum and solid-state (such as camera tubes, CCD, DRAM and photodiode arrays). Their main function is to provide views of the object in question from multiple positions (usually two). Figure 1 shows a diagrammatic view of how these two images are obtained.

To draw the imaging lines in Figure 1 (which, for simplicity’s sake, were limited to two per image) one must consider that each point of the object’s image corresponds to one point on the object surface (assuming properly focused optics). This means that this object point must lie along the line joining the image point and the focal point of the imaging device lens, its distance along the line being unknown. If the object is now viewed from a different angle and the same point is visible in both views, then it must lie at the intersection of the lines determined from the two separate views; its position (i.e. the distance from the imaging devices) can then be calculated by triangulation.
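To make this idea concrete, the following is a minimal sketch (not taken from the original text) of how the object point could be recovered as the near-intersection of the two viewing rays. The camera positions, ray directions and NumPy-based formulation are illustrative assumptions, not a prescribed implementation.

```python
# Triangulation sketch: each image point defines a ray from its camera's
# focal point, and the object point is estimated as the point closest to
# both rays (their near-intersection).
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Return the midpoint of the shortest segment joining ray 1 (origin c1,
    direction d1) and ray 2 (origin c2, direction d2)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    p1 = c1 + t1 * d1          # closest point on ray 1
    p2 = c2 + t2 * d2          # closest point on ray 2
    return (p1 + p2) / 2       # estimated 3-D object point

# Illustrative numbers: two cameras 100 mm apart observing the same point.
cam1, cam2 = np.array([0.0, 0, 0]), np.array([100.0, 0, 0])
ray1 = np.array([0.0, 0, 1])                # point seen straight ahead of camera 1
ray2 = np.array([-100.0, 0, 500])           # same point as seen from camera 2
print(triangulate(cam1, ray1, cam2, ray2))  # -> approximately [0, 0, 500]
```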

For the computer to carry out this calculation automatically, however, it needs to process the two images in three main steps:

  1. Determine the point pairs in the two images, that is, determine which point in the right image corresponds to which point in the left image. This is the hardest, and therefore computationally the most expensive, part of stereo vision. It may, in fact, be very difficult to identify the same features in both images. The image of a small area on the object surface may differ between the two images because of differences in perspective and in apparent surface reflectivity arising from the two viewing angles. Moreover, some of the points in one image may not be visible in the other.
  2. Use the corresponding point pair in the left and right images to compute the disparity measurement (i.e. the difference between the x-y position of the point in the left image and the x-y position of the corresponding point in the right image).
  3. Determine the distance of the object point from the imaging devices by triangulation. (This operation requires data on the relative positions and orientations of the stereo imaging device(s) which produced the left and right images; a small worked sketch of this step follows the list.)
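As a hedged illustration of steps 2 and 3 for the simplest possible geometry (two identical cameras with parallel optical axes separated by a known baseline), the disparity of a matched point pair converts directly to depth. The focal length, baseline and disparity values below are invented for the example.

```python
# For rectified, parallel cameras the standard triangulation result is
#   depth = focal_length * baseline / disparity.
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Depth (mm) of an object point from its measured disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_mm / disparity_px

# Example: 8 px of disparity with a 600 px focal length and a 60 mm baseline.
print(depth_from_disparity(8, 600, 60))   # -> 4500.0 mm, i.e. 4.5 m
```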

The essence of stereo vision, therefore, is step 1, namely the solution to the disparity problem. To help gauge the difficulty attached to such a step one needs to note that all disparity measurements computed using local similarities (or features) may be ambiguous if two or more regions of the image have similar properties.

Consider, for example, the left and right images shown in Figure 2, each consisting of three dark squares as marked. Each square in one image is similar to any of the three in the other. If we now match L1 with R1, L2 with R2 and L3 with R3, the three squares will be computed to be at the same height above the background, as indicated by the filled squares.


Figure 2. Stereo ambiguity problem (after Nevatia, courtesy of Prentice-Hall, 1982)

If L1 were instead matched with R2, L2 with R3 and L3 with R1, then the computed heights would be those shown by the empty triangles. Another possible interpretation is shown by the unfilled circles, giving an indication of how critical the correspondence problem can become in the absence of any known and unique object features.
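The ambiguity can be reproduced numerically. The following sketch (an illustration, not part of the original text) scores candidate matches along one image row with a simple sum-of-squared-differences measure; because the row contains three identical "squares", three positions score equally well and local similarity alone cannot decide which is correct.

```python
# Demonstration of stereo matching ambiguity with a repeating pattern.
import numpy as np

left  = np.array([0, 9, 0, 0, 9, 0, 0, 9, 0, 0, 0])   # one image row; 9 = dark square
right = np.array([0, 0, 9, 0, 0, 9, 0, 0, 9, 0, 0])   # same scene shifted by one pixel

patch = left[3:6]                                      # feature around the middle square
scores = [np.sum((right[i:i + 3] - patch) ** 2)        # sum of squared differences
          for i in range(len(right) - 2)]
print(scores)                                          # several candidate positions score 0
print([i for i, s in enumerate(scores) if s == 0])     # -> [1, 4, 7]: the match is ambiguous
```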

In spite of these difficulties and the relatively high expected price tag, robot stereo vision remains a desirable goal. Stereo vision has the highest inherent 3-D image resolution (limited only by the type of camera and its optics) and flexibility (for instance, it is the only method that can provide colour images relatively easily), and as such it comes closest to the aforementioned definition of a general-purpose, flexible vision sensor. Achieving it, however, requires large investments and long project lead times.

The USA financial investment in stereo vision research, for instance, has already been considerable (approx. $3,000,000 to date), but the results so far have been limited mostly to laboratory prototypes. The reasons are thought to be many and varied, ranging from the aforementioned difficulty of solving the disparity problem in a sufficiently short time to the sheer complexity of a system that is essentially trying to emulate a major function of the human brain. Recently, however, there have been reports of successful industrial applications, such as the positioning of car bodies, using stereo vision (Rooks, 1986).

There are three main methods of using 2-D vision sensors to obtain the multiple views required for stereo vision:

  1. Disparity method 1. Use of two stationary imaging devices.

    Figure 3. Stereo vision, disparity 1 method

    This could be defined as the ‘classical’ stereo vision method because of its analogy to the human vision system. As shown in Figure 3, it consists of an illumination system and two stationary cameras which provide the required two 2-D images. This method is inherently more expensive than the other two because it uses two cameras, but it requires no mechanical movement and, therefore, compared with method 2 it is faster and allows more accurate measurement of the camera positions needed for the disparity calculations.

  2. Disparity method 2. Use of one imaging device moved to different known positions.

    Figure 4. Stereo vision, disparity 2 method

    This is essentially a lower-cost variation of method 1 since, as shown in Figure 4, it differs only in its use of a single camera which, to provide images from a different angle, is mechanically moved to a different known position.

  3. Photometric method. Use of one stationary imaging device under different lighting conditions.

    Figure 5. Stereo vision, photometric method

    This method relies on maintaining the camera in the same position, thereby avoiding the pixel correspondence problem, and obtaining multiple images by changing the illumination conditions. Processing of these images can uniquely determine the orientation of the object's surfaces, thus enabling its 3-D mapping (Woodham, 1978); a small numeric sketch of this idea follows below.
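To give a flavour of the photometric method, the sketch below follows the usual Lambertian formulation associated with Woodham's technique: with images of the same pixel under three known light directions, the surface normal (and albedo) can be recovered by solving a small linear system. The light directions, albedo and normal used here are illustrative assumptions, not values from the text.

```python
# Photometric stereo sketch: recover a surface normal from three intensities
# measured under three known light directions (Lambertian model assumed).
import numpy as np

L = np.array([[0.0, 0.0, 1.0],     # light 1: from directly above
              [0.7, 0.0, 0.7],     # light 2: from the right
              [0.0, 0.7, 0.7]])    # light 3: from the front

# Simulated intensities of one pixel under the three lights
# (Lambertian model: I = albedo * dot(light_direction, surface_normal)).
true_n, albedo = np.array([0.0, 0.0, 1.0]), 0.8
I = albedo * L @ true_n

g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = albedo * normal
albedo_est = np.linalg.norm(g)
normal_est = g / albedo_est
print(normal_est, albedo_est)               # -> [0. 0. 1.] and 0.8
```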

I hope this information about “3-D Sensors” is easy to understand.