Robot Vision Sensors
The last few years have seen the increasing use of robots in the industrial world and the emergence of robotics as a subject area with its own identity. The development of robot technology is commonly described in terms of three major conceptual stages, which in turn serve to classify robots as belonging to the first, second or third generation (Pera, 1981).
First generation robots: These are robots without any external (i.e. exteroceptive) sensors or transducers. They therefore have neither the means (i.e. the sensors) nor, usually, the computing power to interact with their environment. These robots control the end-effector by calculating its location from the data supplied by the internal (i.e. proprioceptive) position transducers present within each robot joint.
At present the majority of the commercially used robots belong to this category.
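The open-loop position calculation described above amounts to forward kinematics: the end-effector location is computed purely from joint-transducer readings, with no external sensing. A minimal sketch for a hypothetical two-link planar arm (the function and parameter names are illustrative, not from the source):

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Compute the (x, y) end-effector position of a planar arm
    from its joint angles alone, as a first-generation robot does,
    using only internal (proprioceptive) joint-transducer data."""
    x = y = 0.0
    total_angle = 0.0
    for theta, length in zip(joint_angles, link_lengths):
        # Each joint angle accumulates relative to the previous link
        total_angle += theta
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y

# Example: a two-link arm, first joint at +90 degrees, second at -90 degrees
position = forward_kinematics([math.pi / 2, -math.pi / 2], [1.0, 1.0])
```

Because no exteroceptive feedback enters this calculation, any error between the commanded and actual joint angles, or any disturbance in the workspace, goes undetected.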
Second generation robots: These robots have some exteroceptive sensors and a limited amount of computer power which allows them to process the environmental data and respond to it in a limited way. At present only a small percentage of the existing industrial robots can be classified under this category and their development is still largely at an experimental stage.
Third generation robots: Robots with sensors and extensive computing power which allows them to interact fully with the environment, that is: to make decisions, plan and execute the tasks that circumstances require. These robots are not yet in existence.
Second and third generation robots therefore require sensors that can provide the necessary environmental feedback and help to increase the robots’ accuracy and/or their flexibility. The development of these sensors can follow two main paths:
- a long term strategy aimed at the development of a general-purpose, flexible sensor
- a shorter term strategy aimed at providing specific, albeit inflexible, solutions to current industrial automation problems.
The former option is more suitable for the development of the future Flexible Manufacturing Systems but does require a larger investment, not only in terms of capital but also, quite significantly, in terms of manpower and planning. This type of research therefore tends to be limited to academic institutions and large corporate companies.
The latter option, by contrast, reflects the needs of western industries to be both competitive and efficient, achievements which are measured on relatively short time scales.
It comes as no surprise, therefore, to find that in the field of robotics research the majority of US and, to a lesser extent, European research institutions are following a plan of ‘task driven’ research based on a close collaboration between the academic institutions and individual ‘pools’ of industrial sponsors. This arrangement, first pioneered by Rosen et al. at the Stanford Research Institute in the early 70s, is believed to be one of the underlying causes of the present US success in this field.
The choice of which type of sensor is incorporated in the robot control structure depends, of course, on the application. It is generally accepted, however, that vision is the most powerful and yet flexible type of environmental feedback available, which has led to considerable research and development in this field. Indeed Robot Vision and Sensory Control is now an established conference topic in its own right. Table 1 shows the main areas of interest in the field of robotic vision.
Vision Sensor Generalities
Vision sensors are so called because they possess, in their make-up and functioning, a certain analogy with the human eye and vision. The analogy is somewhat easier to see in the case of vacuum and solid-state cameras, because they also possess the ‘equivalent’ of the human retina in the form of a photosensitive array. In the case of some active 3-D vision sensors, such as scanning laser range finders, the data is acquired by mechanical scanning using a single optical transducer.
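For such a scanning range finder, each measurement combines one range reading with the scanner's current pan and tilt angles to yield a 3-D point. A minimal sketch of this geometry, assuming a pulsed time-of-flight device (the function names and the spherical-coordinate convention are illustrative assumptions, not from the source):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def range_from_time_of_flight(t_round_trip):
    """Range in metres from a pulsed laser's round-trip time in seconds.
    The factor of 2 accounts for the out-and-back path."""
    return C * t_round_trip / 2.0

def scan_point(r, pan, tilt):
    """Convert one range reading plus the scanner's pan/tilt angles
    (radians) into a Cartesian (x, y, z) point: a standard
    spherical-to-Cartesian conversion."""
    x = r * math.cos(tilt) * math.cos(pan)
    y = r * math.cos(tilt) * math.sin(pan)
    z = r * math.sin(tilt)
    return x, y, z

# Example: a pulse returning after 20 ns corresponds to a range of about 3 m,
# measured along the scanner's current line of sight.
r = range_from_time_of_flight(20e-9)
point = scan_point(r, pan=0.1, tilt=0.05)
```

Sweeping the pan and tilt angles over the scene and repeating this computation for each reading is what builds up the full 3-D image from a single optical transducer.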
In spite of their differences, however, all vision sensors can be broken down into the same constituents. Figure 1 shows a typical block diagram of a vision sensor.