Development of Intelligent Sensor System
What is Automation?
Automation, in general, can be explained as the use of computers or microcontrollers to control industrial machinery and processes, thereby fully replacing human operators. Automation is a step beyond mechanization: in mechanization, human operators are provided with machinery to assist their operations, whereas automation replaces the human operators entirely with computers.
The advantages of automation are:
- Increased productivity and higher production rates.
- Better product quality and efficient use of resources.
- Greater control and consistency of products.
- Improved safety and reduced factory lead times.
Home automation is the field specializing in the general and specific automation requirements of homes and apartments, for the better safety, security and comfort of their residents. It is also called domotics. Home automation can be as simple as controlling a few lights in the house or as complicated as monitoring and recording the activities of each resident. Automation requirements vary from person to person: some may be interested in home security, while others are more concerned with comfort. Basically, home automation is anything that gives automatic control of things in your house.
Some of the commonly used features in home automation are:
- Control of lighting.
- Climate control of rooms.
- Security and surveillance systems.
- Control of home entertainment systems.
- House plant watering system.
- Overhead tank water level controllers.
Complex large-scale systems consist of a large number of interconnected components. Mastering the dynamic behaviour of such systems calls for distributed control architectures, which can be achieved by implementing control and estimation algorithms in several controllers. Some algorithms manipulate only local variables (those available in the local interface), but in most cases an algorithm implemented in a given computing device will use variables available in that device's local interface as well as variables input to the control system via remote interfaces. This raises the need for communication networks, whose architecture and complexity depend on the amount of data to be exchanged and on the associated time constraints. Associating computing (and communication) devices with sensing or actuating functions has given rise to intelligent sensors. These sensors have enjoyed huge success over the past ten years, especially with the development of neural networks, fuzzy logic, and soft computing algorithms.
The modern definition of a smart or intelligent sensor can now be formulated as: ‘A smart sensor is an electronic device, including a sensing element, interfacing and signal processing, that has several intelligence functions such as self-testing, self-identification, self-validation or self-adaptation'. The keyword in this definition is ‘intelligence'. Self-adaptation is a relatively new function of smart sensors and sensor systems. Self-adapting smart sensors and systems are based on so-called adaptive algorithms and are directly connected with precision measurement of the frequency-time parameters of electrical signals.
The later chapters will give an elaborate view of why we should use intelligent sensors, and of intelligent sensor structure, characteristics and network standards.
2.1 Conventional Sensors
Before saying more about intelligent sensors, we first need to examine regular sensors in order to build a solid foundation on which to develop our understanding of intelligent sensors. Most conventional sensors have shortcomings, both technical and economic. For a sensor to work effectively, it must be calibrated. That is, its output must be made to match some predetermined standard so that its reported values correctly reflect the parameter being measured. In the case of a bulb thermometer, the graduations next to the mercury column must be positioned so that they accurately correspond to the level of mercury for a given temperature. If the sensor is not calibrated, the information that it reports won't be accurate, which can be a big problem for the systems that use the reported information.
The second concern one has when dealing with sensors is that their properties usually change over time, a phenomenon known as drift. For instance, suppose we are measuring a DC current in a particular part of a circuit by monitoring the voltage across a resistor in that circuit. In this case, the sensor is the resistor, and the physical property that we are measuring is the voltage across it. As the resistor ages, its chemical properties will change, thus altering its resistance. As with the issue of calibration, some situations require much stricter drift tolerances than others; the point is that sensor properties will change with time unless we compensate for the drift in some fashion, and these changes are usually undesirable.
The third problem is that not only do sensors themselves change with time, but so, too, does the environment in which they operate. An excellent example of that would be the electronic ignition for an internal combustion engine. Immediately after a tune-up, all the belts are tight, the spark plugs are new, the fuel injectors are clean, and the air filter is pristine. From that moment on, things go downhill; the belts loosen, deposits build up on the spark plugs and fuel injectors, and the air filter becomes clogged with ever-increasing amounts of dirt and dust. Unless the electronic ignition can measure how things are changing and make adjustments, the settings and timing sequence that it uses to fire the spark plugs will become progressively mismatched for the engine conditions, resulting in poorer performance and reduced fuel efficiency. The ability to compensate for often extreme changes in the operating environment makes a huge difference in a sensor's value to a particular application.
Yet a fourth problem is that most sensors require some sort of specialized hardware called signal-conditioning circuitry in order to be of use in monitoring or control applications. The signal-conditioning circuitry is what transforms the physical sensor property that we're monitoring (often an analog electrical voltage that varies in some systematic way with the parameter being measured) into a measurement that can be used by the rest of the system. Depending upon the application, the signal conditioning may be as simple as a basic amplifier that boosts the sensor signal to a usable level or it may entail complex circuitry that cleans up the sensor signal and compensates for environmental conditions, too. Frequently, the conditioning circuitry itself has to be tuned for the specific sensor being used, and for analog signals that often means physically adjusting a potentiometer or other such trimming device. In addition, the configuration of the signal-conditioning circuitry tends to be unique to both the specific type of sensor and to the application itself, which means that different types of sensors or different applications frequently need customized circuitry.
Finally, standard sensors usually need to be physically close to the control and monitoring systems that receive their measurements. In general, the farther a sensor is from the system using its measurements, the less useful the measurements are. This is due primarily to the fact that sensor signals that are run long distances are susceptible to electronic noise, thus degrading the quality of the readings at the receiving end. In many cases, sensors are connected to the monitoring and control systems using specialized (and expensive) cabling; the longer this cabling is, the more costly the installation, which is never popular with end users. A related problem is that sharing sensor outputs among multiple systems becomes very difficult, particularly if those systems are physically separated. This inability to share outputs may not seem important, but it severely limits the ability to scale systems to large installations, resulting in much higher costs to install and support multiple redundant sensors.
What we really need to do is to develop some technique by which we can solve or at least greatly alleviate these problems of calibration, drift, and signal conditioning.
2.2 Making Sensors Intelligent
Control systems are becoming increasingly complicated and generate increasingly complex control information. Control must nevertheless be exercised, even under such circumstances. Even considering just the detection of abnormal conditions or the problem of giving a suitable warning, devices are required that can substitute for or assist human sensation by detecting and recognizing multi-dimensional information and by converting non-visual information into visual form. In systems possessing a high degree of functionality, efficiency must be maximized by dividing the information processing function into central processing and processing dispersed to local sites. With increased progress in automation, it has become widely recognized that the bottleneck in such systems lies with the sensors.
Such demands are difficult to deal with by simply improving the sensor devices themselves. Structural reinforcement, such as using arrays of sensors or combinations of different types of sensors, and reinforcement on the data-processing side by a signal processing unit such as a computer, are indispensable. In particular, the data processing and sensing aspects of the various stages involved in multi-dimensional measurement, image construction, characteristic extraction and pattern recognition, which were conventionally performed exclusively by human beings, have been tremendously enhanced by advances in microelectronics. As a result, in many cases sensor systems have been implemented that substitute for some or all of the intellectual actions of human beings, i.e. intelligent sensor systems.
Sensors which are made intelligent in this way are called ‘intelligent sensors' or ‘smart sensors'. According to Breckenridge and Husson, the smart sensor itself has a data processing function and automatic calibration/automatic compensation function, in which the sensor itself detects and eliminates abnormal values or exceptional values. It incorporates an algorithm, which is capable of being altered, and has a certain degree of memory function. Further desirable characteristics are that the sensor is coupled to other sensors, adapts to changes in environmental conditions, and has a discriminant function.
Scientific measuring instruments employed for observation and measurement of the physical world are indispensable extensions of our senses and perceptions in the scientific examination of nature. In recognizing nature, we mobilize all the resources of information obtained from the five senses of sight, hearing, touch, taste and smell, and combine these sensory data in such a way as to avoid contradiction. Thus more reliable, higher-order data is obtained by combining data of different types. That is, there is a data processing mechanism that combines and processes a number of sensory data. The concept of combining sensors to implement such a data processing mechanism is called ‘sensor fusion'.
2.2.1 Digitizing the Sensor Signal
The discipline of digital signal processing or DSP, in which signals are manipulated mathematically rather than with electronic circuitry, is well established and widely practiced. Standard transformations, such as filtering to remove unwanted noise or frequency mappings to identify particular signal components, are easily handled using DSP. Furthermore, using DSP principles we can perform operations that would be impossible using even the most advanced electronic circuitry.
For that very reason, today's designers also include a stage in the signal-conditioning circuitry in which the analog electrical signal is converted into a digitized numeric value. This step, called analog-to-digital conversion, A/D conversion, or ADC, is vitally important, because as soon as we can transform the sensor signal into a numeric value, we can manipulate it using software running on a microprocessor. Analog-to-digital converters, or ADCs as they're referred to, are usually single-chip semiconductor devices that can be made to be highly accurate and highly stable under varying environmental conditions. The required signal-conditioning circuitry can often be significantly reduced, since much of the environmental compensation circuitry can be made a part of the ADC and filtering can be performed in software.
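As a hedged illustration of filtering in software after digitization (this sketch is not from the original text, and the raw ADC codes and window size are assumed values):

```python
# Hypothetical sketch: smoothing digitized ADC output codes in software,
# replacing analog filter hardware. Values are invented for illustration.

def moving_average(samples, window=4):
    """Simple FIR low-pass filter applied to a list of ADC codes."""
    if window < 1 or window > len(samples):
        raise ValueError("window must be between 1 and len(samples)")
    out = []
    for i in range(len(samples) - window + 1):
        out.append(sum(samples[i:i + window]) / window)
    return out

# Raw 10-bit ADC codes with a simulated noise spike at index 3.
raw = [512, 515, 509, 700, 511, 514, 508, 512]
print(moving_average(raw))
```

Once the signal exists as numbers, far more elaborate transformations (frequency mappings, adaptive filters) follow the same pattern: pure arithmetic on the sample stream, with no extra circuitry.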
2.2.2 Adding Intelligence
Once the sensor signal has been digitized, there are two primary options in how we handle those numeric values and the algorithms that manipulate them. We can either choose to implement custom digital hardware that essentially “hard-wires” our processing algorithm, or we can use a microprocessor to provide the necessary computational power. In general, custom hardware can run faster than microprocessor-driven systems, but usually at the price of increased production costs and limited flexibility. Microprocessors, while not necessarily as fast as a custom hardware solution, offer the great advantage of design flexibility and tend to be lower-priced since they can be applied to a variety of situations rather than a single application.
Once we have on-board intelligence, we're able to solve several of the problems that we noted earlier. Calibration can be automated, component drift can be virtually eliminated through the use of purely mathematical processing algorithms, and we can compensate for environmental changes by monitoring conditions on a periodic basis and making the appropriate adjustments automatically. Adding a brain makes the designer's life much easier.
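Automated calibration can be sketched minimally as a two-point linear mapping; the raw ADC codes and reference temperatures below are assumed values, not from any particular device:

```python
# Hypothetical two-point calibration sketch. The raw codes (102, 922) and
# the 0/100 degree Celsius references are invented for illustration.

def make_calibrator(raw_lo, raw_hi, ref_lo, ref_hi):
    """Return a function mapping raw readings to calibrated units."""
    slope = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    return lambda raw: ref_lo + slope * (raw - raw_lo)

to_celsius = make_calibrator(raw_lo=102, raw_hi=922, ref_lo=0.0, ref_hi=100.0)
print(to_celsius(512))  # midpoint code maps to 50.0
```

In practice an intelligent sensor would re-run this fit against built-in references on a schedule, which is also how slow component drift can be compensated without touching the hardware.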
2.2.3 Communication Interface
The sharing of measurements with other components within the system or with other systems adds to the value of these measurements. To do this, we need to equip our intelligent sensor with a standardized means to communicate its information to other elements. By using standardized methods of communication, we ensure that the sensor's information can be shared as broadly, as easily, and as reliably as possible, thus maximizing the usefulness of the sensor and the information it produces.
Thus these three elements are considered mandatory for an intelligent sensor:
- A sensing element that measures one or more physical parameters (essentially the traditional sensor we've been discussing),
- A computational element that analyzes the measurements made by the sensing element, and
- A communication interface to the outside world that allows the device to exchange information with other components in a larger system.
It's the last two elements that really distinguish intelligent sensors from their more common standard sensor relatives because they provide the abilities to turn data directly into information, to use that information locally, and to communicate it to other elements in the system.
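The three mandatory elements above can be sketched as a minimal Python class; the class, its method names, and the stand-in functions are illustrative assumptions, not an API from the text:

```python
# Minimal sketch of the three mandatory elements of an intelligent sensor.
# All names are hypothetical; the stand-in lambdas simulate real hardware.

class IntelligentSensor:
    def __init__(self, sense_fn, process_fn, transmit_fn):
        self.sense = sense_fn        # sensing element
        self.process = process_fn    # computational element
        self.transmit = transmit_fn  # communication interface

    def cycle(self):
        raw = self.sense()           # measure a physical parameter
        info = self.process(raw)     # turn data into information locally
        self.transmit(info)          # share it with the larger system
        return info

sensor = IntelligentSensor(
    sense_fn=lambda: 21.7,
    process_fn=lambda r: round(r),
    transmit_fn=lambda m: print("sending:", m),
)
sensor.cycle()
```

The structure makes the distinction concrete: the sensing element alone is a conventional sensor; the processing and transmission steps are what make it intelligent.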
2.3 Types of Intelligent Sensors
Intelligent sensors are chosen depending on the object, the application, the required precision, the environment of use, cost, etc. In such cases consideration must be given to what constitutes an appropriate evaluation standard. This question involves a multi-dimensional criterion and is usually very difficult. The evaluation standard directly reflects the sense of value applied in the design and manufacture of the target system. It must therefore be firmly settled at the system design stage.
In sensor selection, the first matter to be considered is determination of the subject of measurement. The second matter to be decided on is the required precision and dynamic range. The third is ease of use, cost, delivery time etc., and ease of maintenance in actual use and compatibility with other sensors in the system. The type of sensor should be matched to such requirements at the design stage. Sensors are usually classified by the subject of measurement and the principle of sensing action.
2.3.1 Classification Based on Type of Input
Here, the sensor is classified in accordance with the physical phenomenon to be detected and the subject of measurement. Examples include voltage, current, displacement and pressure. A list of sensed items and their categories is given in the following table.
- Flow rate, pressure, force, tension
- Distortion, direction, proximity
- Light (infra-red, visible light or radiation)
- Current, voltage, frequency, phase, vibration, magnetism
- Quantity of energy or heat: temperature, humidity, dew point
- Analytic sensors: gas, odour, concentration, pH, ions
- Sensory quantities or biological quantities: touch, vision, smell

Table 2.3.1: Sensed items classified in accordance with the subject of measurement.
2.3.2 Classification Based on Type of Output
In an intelligent sensor, it is often necessary to process in an integrated manner the information from several sensors, or from a single sensor over a given time range. A computer of an appropriate level is employed for such purposes in practically all cases. When coupling to the computer in constructing an intelligent sensor system, a method with a large degree of freedom is therefore appropriate. It is also necessary to pay careful attention to the type of physical quantity carrying the output information from the sensor, and to the information description format of this physical or dynamic quantity; for the description format, an analog, digital or encoded method, etc., might be used.
Although any physical quantity could be used as the output signal, electrical quantities such as voltage are more convenient for data input to a computer. The format of the output signal can be analog or digital. For convenience in data input to the computer, it is preferable if the output signal of the sensor itself is in the form of a digital electrical signal. Otherwise, a suitable means of signal conversion must be provided to input the data from the sensor to the computer.
2.3.3 Classification Based on Accuracy
When a sensor system is constructed, the accuracy of the sensors employed is a critical factor. Usually sensor accuracy is expressed as the minimum detectable quantity. This is determined by the sensitivity of the sensor and the internally generated noise of the sensor itself. Higher sensitivity and lower internal noise level imply greater accuracy.
Generally, for commercially available sensors, the cost is determined by the accuracy required. If no commercial sensor can be found with the necessary accuracy, a custom product must be used, which will increase costs. For ordinary applications an accuracy of about 0.1% is sufficient, and such sensors can easily be selected from commercially available models. Dynamic range (full-scale deflection/minimum detectable quantity) has practically the same meaning as accuracy, and is expressed in decibel units. For example, a dynamic range of 60 dB indicates that the full-scale deflection is 10³ times the minimum detectable quantity; that is, a dynamic range of 60 dB is equivalent to 0.1% accuracy.
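The decibel relationship quoted above (60 dB corresponding to a 10³ ratio and hence 0.1% accuracy) follows the 20·log₁₀ amplitude convention, which a short sketch can verify; the example ratio is an assumed value:

```python
import math

# Dynamic range in decibels, using the 20*log10 amplitude convention:
# a 1000:1 ratio of full-scale deflection to minimum detectable quantity
# corresponds to 60 dB, i.e. 0.1 % accuracy.

def dynamic_range_db(full_scale, min_detectable):
    return 20 * math.log10(full_scale / min_detectable)

print(dynamic_range_db(1000.0, 1.0))  # 60 dB for a 1000:1 ratio
```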
In conventional sensors, linearity of output was regarded as quite important. However, in intelligent sensor technology the final stage is normally data processing by computer, so output linearity is not a particular problem. Any sensor providing a reproducible relationship of input and output signal can be used in an intelligent sensor system.
3.1 Sensor selection
The function of a sensor is to receive some action from a single phenomenon of the subject of measurement and to convert it to another physical phenomenon that can be more easily handled. The phenomenon constituting the subject of measurement is called the input signal, and the phenomenon after conversion is called the output signal. The ratio of the output signal to the input signal is called the transmittance or gain. Since the first function of a sensor is to convert changes in the subject of measurement into a physical phenomenon that can be more easily handled, i.e. its function consists in primary conversion, its conversion efficiency, or the degree of difficulty in delivering the output signal to the transducer constituting the next stage, is of secondary importance.
The first point to which attention must be paid in sensor selection is to preserve as far as possible the information of the input signal. This is equivalent to preventing lowering of the signal-to-noise ratio (SNR). For example, if the SNR of the input signal is 60 dB, a sensor of dynamic range less than 60 dB should not be used. In order to detect changes in the quantity being measured as faithfully as possible, a sensor is required to have the following properties.
- Non-interference. This means that its output should not be changed by factors other than changes in the subject of measurement. Conversion satisfying this condition is called direct measurement. Conversion wherein the measurement quantity is found by calculation from output signals determined under the influence of several input signals is called indirect measurement.
- High sensitivity. The amount of change of the output signal that is produced by a change of unit amount of the input quantity being measured, i.e. the gain, should be as large as possible.
- Small measurement pressure. This means that the sensor should not disturb the physical conditions of the subject of measurement. From this point of view, modulation conversion offers more freedom than direct-acting conversion.
- High speed. The sensor should have sufficiently high speed of reaction to track the maximum anticipated rate of variation of the measured quantity.
- Low noise. The noise generated by the sensor itself should be as little as possible.
- Robustness. The output signal must be at least more robust than the quantity being measured, and be easier to handle. ‘Robustness' means resistance to environmental changes and/or noise. In general, phenomena of large energy are more resistant to external disturbances such as noise than phenomena of smaller energy; they are easier to handle, and so have better robustness.
If a sensor can be obtained that satisfies all these conditions, there is no problem. However, in practice, one can scarcely expect to obtain a sensor satisfying all these conditions. In such cases, it is necessary to combine the sensor with a suitable compensation mechanism, or to compensate the transducer of the secondary converter.
Progress in IC manufacturing technology has made it possible to integrate various sensor functions. With the progressive shift from mainframes to minicomputers and hence to microcomputers, control systems have changed from centralized processing systems to distributed processing systems. Sensor technology has also benefited from such progress in IC manufacturing technology, with the result that systems whereby information from several sensors is combined and processed have changed from centralized systems to dispersed systems. Specifically, attempts are being made to use silicon-integrated sensors in a role combining primary data processing and input in systems that measure and process two-dimensional information such as picture information. This is a natural application of silicon precision working technology and digital circuit technology, which have been greatly advanced by introduction of VLSI manufacturing technology. Three-dimensional integrated circuits for recognizing letter patterns and odour sensors, etc., are examples of this. Such sensor systems can be called perfectly intelligent sensors in that they themselves have a certain data processing capability. It is characteristic of such sensors to combine several sensor inputs and to include a microprocessor that performs data processing. Their output signal is not a simple conversion of the input signal, but rather an abstract quantity obtained by some reorganization and combination of input signals from several sensors.
This type of signal conversion is now often performed by a distributed processing mechanism, in which microprocessors are used to carry out the data processing that was previously performed by a centralized computer system having a large number of interfaces to individual sensors. However, the miniaturization obtained by application of integrated circuit techniques brings about an increase in the flexibility of coupling between elements. This has a substantial effect. Sensors of this type constitute a new technology that is at present being researched and developed. Although further progress can be expected, the overall picture cannot be predicted at the present time. Technically, practically free combinations of sensors can be implemented with the object of so-called indirect measurement, in which the signals from several individual sensors that were conventionally present are collected and used as the basis for a new output signal. In many aspects, new ideas are required concerning determination of the object of measurement, i.e. which measured quantities are to be selected, determination of the individual functions to achieve this, and the construction of the framework to organize these as a system.
3.2 Structure of an Intelligent Sensor
The rapidity of development in microelectronics has had a profound effect on the whole of instrumentation science, and it has blurred some of the conceptual boundaries which once seemed so firm. In the present context the boundary between sensors and instruments is particularly uncertain. Processes which were once confined to a large electronic instrument are now available within the housing of a compact sensor, and it is some of these processes which we discuss later in this chapter. An instrument in our context is a system which is designed primarily to act as a free standing device for performing a particular set of measurements; the provision of communications facilities is of secondary importance. A sensor is a system which is designed primarily to serve a host system and without its communication channel it cannot serve its purpose. Nevertheless, the structures and processes used within either device, be they hardware or software, are similar.
The range of disciplines which are brought together in intelligent sensor system design is considerable, and the designer of such systems has to become something of a polymath. This was one of the problems in the early days of computer-aided measurement, and there was some resistance from the backwoodsmen who practiced the 'art' of measurement.
3.2.1 Elements of Intelligent Sensors
The intelligent sensor is an example of a system, and in it we can identify a number of sub-systems whose functions are clearly distinguished from each other. The principal sub-systems within an intelligent sensor are:
- A primary sensing element
- Excitation Control
- Amplification (Possibly variable gain)
- Analogue filtering
- Data conversion
- Digital Information Processing
- Digital Communication Processing
The figure illustrates the way in which these sub-systems relate to each other. Some of the realizations of intelligent sensors, particularly the earlier ones, may incorporate only some of these elements.
The primary sensing element has an obvious fundamental importance. It is more than simply the familiar traditional sensor incorporated into a more up-to-date system. Not only are new materials and mechanisms becoming available for exploitation, but some of those that have been long known yet discarded because of various difficulties of behaviour may now be reconsidered in the light of the presence of intelligence to cope with these difficulties.
Excitation control can take a variety of forms depending on the circumstances. Some sensors, such as the thermocouple, convert energy directly from one form to another without the need for additional excitation. Others may require fairly elaborate forms of supply. It may be alternating or pulsed for subsequent coherent or phase-sensitive detection. In some circumstances it may be necessary to provide extremely stable supplies to the sensing element, while in others it may be necessary for those supplies to form part of a control loop to maintain the operating condition of the element at some desired optimum. While this aspect may not be thought fundamental to intelligent sensors, there is a largely unexplored range of possibilities for combining it with digital processing to produce novel instrumentation techniques.
Amplification of the electrical output of the primary sensing element is almost invariably a requirement. This can pose design problems where high gain is needed. Noise is a particular hazard, and a circumstance unique to the intelligent form of sensor is the presence of digital buses carrying signals with sharp transitions. For this reason circuit layout is a particularly important part of the design process.
Analogue filtering is required at minimum to obviate aliasing effects in the conversion stage, but it is also attractive where digital filtering would take up too much of the real-time processing power available.
Data conversion is the stage of transition between the continuous real world and the discrete internal world of the digital processor. It is important to bear in mind that the process of analogue-to-digital conversion is a non-linear one and represents a potentially gross distortion of the incoming information. The intelligent sensor designer must always remember that this corruption is present, and in certain circumstances it can assume dominating importance. Such circumstances include cases where the conversion process is part of a control loop or where some sort of auto-ranging, overt or covert, is built into the operational program.
Compensation is an inevitable part of the intelligent sensor. The operating point of a sensor may change for various reasons, one of which is temperature. An intelligent sensor must therefore have a built-in compensation setup to bring the operating point back to its standard set stage.
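A hedged illustration of such temperature compensation, assuming a simple linear drift model (the drift coefficient and reference temperature below are invented for this sketch):

```python
# Hypothetical linear temperature-compensation sketch. The coefficient and
# reference temperature are assumed values, not from any characterized device.

REF_TEMP_C = 25.0       # temperature at which the sensor was characterized
DRIFT_PER_DEG = 0.002   # assumed fractional output drift per degree Celsius

def compensate(reading, ambient_temp_c):
    """Pull the operating point back toward its reference-temperature value."""
    return reading / (1.0 + DRIFT_PER_DEG * (ambient_temp_c - REF_TEMP_C))

# A reading inflated by 10 degrees of warming is pulled back to nominal.
print(compensate(1.02, 35.0))  # approximately 1.0
```

Real compensation curves are rarely this linear; the point is only that, with a temperature monitoring line and a model, the correction becomes pure arithmetic.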
Information processing is, of course, unique to the intelligent form of sensor. There is some overlap between compensation and information processing, but there are also significant areas of independence.
An important aspect is the condensation of information, which is necessary to preserve the two most precious resources of the industrial measurement system: the information bus and the central processor. A prime example of data condensation occurs in the Doppler velocimeter, in which a substantial quantity of information is reduced to a single number representing the velocity. Sensor compensation will in general require the processing of incoming information and in some circumstances will represent the major processing task. The intelligent sensor can, to some degree, be responsible for checking the integrity of its information; whether, for example, the range and behaviour of the incoming variables are physically reasonable.
It can, however, destroy information or introduce false information. This must be regarded as a major hazard in intelligent sensor design, as it is so easy to insert a process realized intuitively in software which may not be fully understood.
A final, but extremely important, element is communications processing. It is so important that it requires a processor of its own, though this may be realized as part of the main processor chip. The natural form of communication for the intelligent sensor processor is the multi-drop bus, which can produce enormous cost savings over the traditional star topology network. A most important attribute of the intelligent sensor concept is addressability, which is of course essential to the multi-drop principle and a powerful aid to the logical organization of sensor system operation, but it does introduce limitations. Addressability implies some form of polling of the devices, and though this may be prioritized in various ways, it does imply a constraint on the response time of the system to changes at any particular sensor site. A major contribution of intelligence is the integrity of communication. The transmission process can be protected by various forms of redundant coding, of which parity checking is the simplest example. In crucial applications information can be double-checked by means of a high-level handshake dialogue, in which the central processor asks for the information and then returns it to the sensor for confirmation. This deals with almost every possible fault except where the sensing element, though behaving apparently reasonably, is wrong. In such a case the only cure is the triplication of sensor elements, or in the extreme the triplication of intelligent sensors.
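The parity-checking idea mentioned above can be sketched as follows; the frame layout (one parity bit per data byte) is a simplifying assumption for illustration:

```python
# Sketch of even-parity protection for a transmitted sensor byte.
# Real buses add framing, addressing and stronger codes; this shows
# only the simplest redundant-coding idea named in the text.

def add_parity(byte):
    """Return (byte, parity_bit) giving the frame even overall parity."""
    parity = bin(byte).count("1") % 2
    return byte, parity

def check_parity(byte, parity):
    """True if the received frame still has even overall parity."""
    return (bin(byte).count("1") + parity) % 2 == 0

frame = add_parity(0b1011001)
print(check_parity(*frame))                     # frame arrives intact
print(check_parity(frame[0] ^ 0b1, frame[1]))   # single flipped bit is caught
```

Parity catches any single-bit error but not double-bit errors, which is why crucial applications layer the read-back handshake described above on top of it.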
3.2.2 Hardware structures
Hardware structures for intelligent sensors can reveal great variety (Brignell 1984). Obviously such structures are greatly affected by the enabling technologies employed. There is generally a need to mix technologies: an instrumentation amplifier, for example, poses different problems from a microprocessor, and to try to realize them in the same technology requires special and demanding circuit design techniques.
In the minimal hardware structure of an intelligent sensor, the basic and most essential element is digital processing. It also contains input amplification and a data conversion unit. The basic structure is shown in figure 3.2.2A. Note that a minimum requirement is a monitoring line, at least for temperature, which is an ever-present cross-sensitivity.
Although digital systems are becoming more and more the norm there remains a requirement for analogue output. The intelligent sensor with analogue output effectively replaces the dumb sensor while avoiding its imperfections. It is, however, useful to discuss the implementation of analogue output, as it provides a platform for a discussion of a number of important intelligent sensor concepts.
The obvious way to implement analogue output is to provide a DAC, so that the output is available with a precision defined by that of the converter. There are, however, important variations which have some potential for improved performance. Figure 3.2.2B shows one of these. Instead of calculating the required output the digital processor calculates the difference between the actual amplified sensor signal and the ideal output (after corrections for non-linearity, drift etc.). This difference is output as a correction, which is added to the original signal in a summing amplifier. By judicious selection of the weighting applied by the summing resistors, this smaller correction signal can be made to span the whole of the DAC output range. As a consequence, the effective precision of the total output can be made much greater than the inherent precision of the DAC.
To take a simple example, imagine that the maximum anticipated error is 5% of full scale. A DAC of only eight bits could be used to span this 5%, and the effective precision at the summed output is one part in 2^8/0.05 = 5120, or better than 12 bits. Of course, as it stands this is not a satisfactory solution, since the errors in the summing resistors have to be taken into account, which is a useful point at which to emphasize the power of providing for an auto-calibration cycle. If we expand the system by providing a multi-way analogue switch at the input, which is under control of the processor, any errors associated with that switch will be common to all inputs. The extended system is illustrated in figure 3.2.2C. This simple addition permits a variety of different calibration strategies. First, by switching in a reference voltage and then ground, the span in terms of voltage can be accurately assessed. Then, by switching in the output voltage a sweep of the whole range can be made to check for any sources of non-linearity, such as missing codes in the two data conversion stages. For temperature calibration the sensor is taken through its working range of temperatures. It is not necessary for the temperature to be known in any particular external units, but it is necessary for thermal equilibrium to be reached at a sufficient number of calibration points. Auto-calibration can be carried out on power-up, or it can be interleaved with carrying out the required functions of the sensor, provided the particular measurement strategy permits this.
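The effective-precision arithmetic can be checked with a small helper. This is illustrative only; the function name and the rounding are our own.

```c
/* Effective number of output levels when an 8-bit correction DAC
   spans only a fraction of full scale: the DAC's 256 levels are
   spread over 5% of the range, so the summed output resolves full
   scale into 256/0.05 = 5120 levels, i.e. better than 12 bits
   (4096 levels) even though the DAC itself has only 8.
   The + 0.5 rounds to the nearest integer to avoid floating-point
   truncation surprises. */
long effective_levels(long dac_levels, double error_fraction) {
    return (long)(dac_levels / error_fraction + 0.5);
}
```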
It will be seen that the resulting sensor system is independent of errors produced by analogue component tolerances, and this exemplifies the intelligent sensor approach to accuracy. In the production stage there will generally also be a calibration cycle in which the target physical variable is swept through its range, and in certain applications it is possible to expose the sensor to calibrated signals in between operations.
One of the most important characteristics of intelligent sensors is the provision of a self-check cycle. It was a major adverse criticism of the original concept that the extra complications would reduce reliability, and without self-check this complaint would be valid. In a system such as that shown in figure 3.2.2C, the input signal switches allow a variety of test inputs to be applied. It should be noted that for the sake of clarity this figure has been somewhat simplified. It would, for example, be necessary to provide an attenuator so that the signals do not overload the amplifier. Also, without further switching elements, we have created a complete feedback loop. This has two implications: the effect of the DAC voltage is diluted by an amount determined by the amplifier gain, and there are stability considerations.
A complete self-check cycle may be implemented as follows. First, the input is switched to ground in order to check for any input offset drift. Small values of drift can be stored and applied as a digital correction, but large values of offset indicate a pathological condition which should be signaled via the communications processor as soon as the sensor is polled.
Second, a standard input derived from an internal reference voltage source is applied. This allows the gain to be checked and, if applied over a period, tests for intermittency.
Third, the input is switched to receive the DAC output via an attenuator, and a linear ramp is generated digitally. This simple test achieves a number of objectives. If the ramp is reproduced faithfully the linearity of the analogue components, the DAC and the ADC are all confirmed. Particular non-linearities that will be screened are missing bits in the DAC and ADC.
We should add here a remark on how the test procedure can be extended to differential amplifiers, as these are very common in intelligent sensors, because of the attractions of bridge configurations. An extra switch is required for the extra input, and the routine is as follows:
- Apply zero volts to both inputs
- Apply zero volts to one input and a reference voltage to the other
- Reverse these connections
- Apply reference voltage to both
These four stages allow for testing of the gain and the common mode rejection ratio. Ramp tests may be added if desired.
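Under the usual amplifier model Vout = Ad(V+ − V−) + Acm(V+ + V−)/2 + Voff, the four measurements above are sufficient to solve for the offset, the differential gain and the common-mode gain. The arithmetic can be sketched in C; the names and the voltage-domain model are illustrative assumptions, and a real sensor would work with ADC codes rather than volts.

```c
/* Recover offset, differential gain (a_diff) and common-mode gain
   (a_cm) from the four self-check measurements. v00, v10, v01, v11
   are the amplifier outputs for the input pairs (0,0), (Vref,0),
   (0,Vref) and (Vref,Vref). The common-mode terms cancel in the
   a_diff expression, and the differential terms cancel in a_cm.
   CMRR, if required, is a_diff / a_cm. */
typedef struct { double offset, a_diff, a_cm; } amp_params;

amp_params amp_self_check(double v00, double v10, double v01,
                          double v11, double vref) {
    amp_params p;
    p.offset = v00;                          /* both inputs grounded */
    p.a_diff = (v10 - v01) / (2.0 * vref);
    p.a_cm   = (v11 - v00) / vref;
    return p;
}
```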
The completion of these tests ensures that all the electronic sub-systems are functioning correctly. The next stage is more difficult and very context dependent, and this is the testing of the primary sensor element. In some cases it is possible to arrange for known physical signals to be applied to the sensor, in which case a complete and proper calibration cycle can be carried out, but in the general case we have to assume that such a procedure would produce an unwarranted interference with the operation of the target system, and we have to make do with less satisfactory information.
Sometimes it is feasible to apply rather sophisticated methods. For example, it may be possible to apply a disturbing stimulus to the target system in the form of a pseudo-random sequence of a magnitude below the threshold that would interfere with operation. The response of the system and the sensor could then be recovered by a process of cross-correlation.
Even without such elaborations it is possible to obtain some information to indicate whether the sensor is behaving correctly by, in effect, asking certain questions:
Is the output a reasonable value? That is to say, is it in range? Is it consistent with the prevailing conditions and plant history?
Is the rate of change of output reasonable? For example, a temperature sensor embedded in a thermal mass will have a constrained rate of response, and any more rapid changes would indicate some form of intermittency.
Is the output actually changing? In an active plant one would expect small changes to be occurring continuously. If they are not it is at least worth flagging a query to central control.
Is the output consistent with that of adjacent sensors? This question could be posed centrally, but it is also possible for the intelligent sensor to pick up the responses of its neighbours directly off the bus, thereby carrying out one of our prime requirements to relieve central control of unnecessary calculation.
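The first three of these questions can be reduced to a small screening routine run on each new sample. The following sketch is illustrative only: all thresholds, type names and the stuck-output policy are assumptions, and a real sensor would tune them to the plant.

```c
/* Plausibility screen for one new sample: a range check, a
   rate-of-change check against a maximum credible step per sample,
   and a "stuck output" check counting identical consecutive
   readings in an active plant. */
typedef enum { READING_OK, OUT_OF_RANGE, TOO_FAST, SUSPECT_STUCK } verdict;

typedef struct {
    double min, max;        /* physically reasonable range           */
    double max_step;        /* largest credible change per sample    */
    int    stuck_limit;     /* identical samples before flagging     */
    double last;            /* previous accepted sample              */
    int    stuck_count;
} plausibility;

verdict check_sample(plausibility *s, double x) {
    if (x < s->min || x > s->max)
        return OUT_OF_RANGE;
    double step = x - s->last;
    if (step < 0) step = -step;
    if (step > s->max_step) { s->last = x; return TOO_FAST; }
    s->stuck_count = (step == 0.0) ? s->stuck_count + 1 : 0;
    s->last = x;
    if (s->stuck_count >= s->stuck_limit)
        return SUSPECT_STUCK;
    return READING_OK;
}
```

The fourth question, consistency with neighbouring sensors, needs only the same kind of range test applied to the difference between a reading and the values picked up off the bus.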
There is, however, no escape from the fact that this part of the operation is extremely sensitive to context, and must be tackled case by case. Of course to obtain complete reliability one must resort to duplication or triplication. This option can be applied to the whole intelligent sensor, but as the electronic sub-systems are fully self-testing, a much cheaper option is to duplicate or triplicate the primary sensors, using signal switches to cycle between them. In the latter case the program can include a voting procedure, so that two correct primary sensors can outvote a discrepant one. It is important, however, to ensure that the pathological condition is signaled to control, because a second primary sensor failure would be fatal.
Often primary sensor faults are simple in nature and therefore easily detected, such as going open- or short-circuit, but there are many other faults that are slower to appear and more difficult to deal with. Examples are the accumulation of various forms of detritus, oxidation, fatigue and migration of materials. In such cases there is a stage at which it is not clear whether there is a fault or not, so it is important to establish within the high level communications protocol a means of signaling a possible incipient fault to prompt a human inspection before more damaging conditions are established.
Clearly this is an area where it is very difficult to generalize, but the above account establishes some of the principles that can be applied.
General purpose structures
In the early days of intelligent instrumentation it was particularly convenient to have available a system that was totally configurable by software and it still has its advantages, particularly in the development phase. The main disadvantages are, first, that in designing such a system one is obliged to make decisions (e.g. the speed-precision trade-off) that will not always be appropriate and, second, in any application much of the system will be redundant. The latter point would not be important if a number of systems were manufactured, as the economies of scale would cancel out the waste.
Figure 3.2.2D shows diagrammatically a system which was initiated as a project in the early 1980s by one of the authors (the so-called Janus project). It was realized as a circuit board and achieved a relatively brief existence as a commercial device. It is, however, very useful in the present context, as it illustrates many of the principles that had been developed up to that point and are major planks in the whole philosophy presented in this book.
The central component of this system is a digital controller of a number of analogue multiplexers. This controller is connected to the bus of the microprocessor and is mapped as a block of its memory, so that the system is reconfigured by writing control words to the block.
One set of analogue multiplexers provides signal selection to the differential input amplifier. This provides for sensor compensation by the sensor-within-a-sensor method or by the sensor array. There is also provision for connection to an internally generated reference voltage and ground for self-check purposes. Another multiplexer controls a resistor network to provide gain selection in the input amplifier. A 12-bit ADC gives data input into the microprocessor.
There is also provision for voltage output from an 8-bit DAC, which may be utilized as an external voltage for such applications as sensor excitation or as an internal offset to the input amplifier. Note the importance of offset provision before data conversion (Brignell 1986).
An essential adjunct to such a system is a software system which simplifies the interface problems for the user, and allows all the system settings to be made by means of simple high level commands. A board such as that shown in figure 3.2.2D can be very useful in intelligent systems development. Also present, but not shown, was a serial bus communications controller which allowed many such boards to be addressed on a single bus. Associated board level components were an intelligent bus controller which resided in a PC and an intelligent bus repeater, which allowed a network to be extended to kilometers in length.
3.3.3 Software Structures
The art of good programming is a substantial discipline in its own right, and it would not be appropriate to delve into it in a short text such as this. It is, however, worthwhile to examine one or two structures that are peculiarly appropriate to intelligent sensor systems. In passing we might also make a remark about the choice of programming language. There are pressures in intelligent sensor design to work at the lowest level of programming, machine code, as this gives the highest speed and the most compact code. Where speed is not a special consideration, however, there are sound reasons for opting for a more portable language such as 'C'. The main argument in favour of this option is that it reduces the tendency to 're-invent the wheel', since procedures can be programmed once and for all. It also reduces the necessity to learn a variety of low level languages and gives a common format which is universally understood.
Look up tables
One of the more powerful concepts that entered at the dawn of computing was the processing of arrays. The Look Up Table (LUT) is a simple example of array processing that is of enormous significance in intelligent sensors. The basic idea is that one or more input variables are used as pointers to values stored in an array, which are then used for further processing. The first and most prominent use of LUTs was in linearization. Before the emergence of digital electronics non-linearity was a problem so overwhelming that it was almost universally avoided. The LUT changed all that, though the problem is now so reduced in importance that it is easy to forget that there are still pathological cases that are immune to correction.
Another important application for our purposes is in the switching of sets of coefficients. If you require a number of digital filters, or cascaded sections of digital filters, it is not necessary to duplicate the code that implements a filter. It is merely necessary to change the base address of a pointer so that a new set of filter coefficients can be picked up from a different LUT, and use the same code with a different set of coefficients.
LUTs may also be of two or more dimensions. A very important application of multi-dimensional LUTs is in correction for cross-sensitivity. In this case one pointer will be derived from the uncorrected input variable, while the others will be derived from the interfering variables. By far the most important case is the two dimensional case in which the second variable is temperature, which is of universal concern as a cross sensitivity.
Assume that the input variable is derived from an ADC of M bits precision and that the LUT has 2^N entries. Then the top N bits are masked off by a logical AND operation with the mask (2^N - 1)·2^(M-N). The masked value is shifted down M - N places (i.e. multiplied by 2^(N-M)). This is now the incremental address that can be added to the base address (the location of the lowest entry in the LUT) to point to the desired value.
To illustrate the point with numerical values let us assume we have a table of size 32 (N=5), and the input ADC is of 8 bits precision (M=8). The input variable is masked off with the mask 11111000 and shifted right three places to give a five bit incremental address, which, when added to the base address, points to the required value.
Evidently we have thrown away three (M-N) bits of information. What we do next is a classical example of the trade-off between speed, precision and storage. We can ignore the loss and go for maximum speed, we can make M=N and go for maximum precision at the expense of storage or we can use the bottom M-N bits to provide a linear interpolation between the selected entry and the next one up, and sacrifice some speed to gain precision. Indeed we can use higher degree interpolation formulae to gain precision and conserve storage at the expense of speed. The choice made depends on the demands of the particular application.
For reference, in this case the linear interpolation formula reduces to v = T[k] + r(T[k+1] - T[k])/2^(M-N), where T[k] is the entry selected by the incremental address and r is the value of the discarded bottom M - N bits.
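A minimal C sketch of the whole look-up, assuming M = 8 and N = 5 as in the numerical example, with linear interpolation on the discarded bottom bits. All names are ours, and note the one extra table entry needed so that interpolation works at the top of the range.

```c
#include <stdint.h>

/* 8-bit input, 32-entry LUT (M = 8, N = 5): the top N bits index the
   table, the bottom M - N bits drive a linear interpolation between
   adjacent entries. The table contents are whatever the calibration
   cycle loaded; 33 entries are held (2^N + 1) so that entry idx + 1
   exists even for the topmost input codes. */
int16_t lut_lookup(const int16_t table[33], uint8_t x) {
    uint8_t idx  = (uint8_t)((x & 0xF8) >> 3);  /* mask 11111000, shift 3 */
    uint8_t frac = (uint8_t)(x & 0x07);         /* discarded bottom bits  */
    int16_t lo = table[idx], hi = table[idx + 1];
    return (int16_t)(lo + ((hi - lo) * frac) / 8);
}
```

Dropping the two interpolation lines and returning table[idx] directly gives the maximum-speed variant of the trade-off discussed above.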
Figure 3.3.3B shows how a two dimensional LUT is arranged in the store and how it is thought of conceptually. If the two variables, say x and y, are masked off to M bits of precision, the new incremental address is formed from 2^M·x + y, so that the layout of the area of storage containing the LUT is as shown on the left-hand side of the figure. It is helpful, however, to conceive of the arrangement as a two dimensional one, as illustrated on the right of the figure.
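The two dimensional addressing can be sketched as follows; M = 4 (a 16 × 16 table) is chosen purely for illustration, as is the suggestion that x is the uncorrected reading and y the temperature code.

```c
#include <stdint.h>

#define M2D 4                     /* bits of precision per axis */
#define DIM (1 << M2D)            /* 16 entries per axis        */

/* Two-dimensional LUT: both pointers are masked to M bits and the
   incremental address is formed as 2^M * x + y, so rows of the
   conceptual table lie contiguously in the linear store. */
int16_t lut2d(const int16_t table[DIM * DIM], uint8_t x, uint8_t y) {
    uint8_t xm = x & (DIM - 1);   /* mask to M bits */
    uint8_t ym = y & (DIM - 1);
    return table[(xm << M2D) + ym];
}
```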
There is a variety of ways in which the entries can be loaded into the LUT. They might be derived from a model, from a common calibration curve for the family of sensors or (most preferably) by means of an individual calibration cycle.
Another software structure important in intelligent instrumentation is the cyclic buffer. It can be implemented in a way similar to the LUT, in that it is based on masked pointers. These are incremented every time there is a read or write operation, and because the bottom n bits are masked they return to zero every time they reach 2^n. In this way, although the buffer is a linear array it behaves as though it were a circle, and the pointers behave like the hands of a clock (figure 3.3.3C).
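A minimal masked-pointer implementation might look like this; n = 4 (a 16-sample buffer) and all names are illustrative.

```c
#include <stdint.h>

#define CB_BITS 4
#define CB_SIZE (1 << CB_BITS)    /* 2^n entries            */
#define CB_MASK (CB_SIZE - 1)     /* bottom n bits          */

/* The read and write indices are free-running counters masked to
   n bits on every access, so the linear array wraps like a clock
   face; their difference gives the number of unread samples. */
typedef struct {
    int16_t  data[CB_SIZE];
    unsigned wr, rd;
} cyclic_buf;

void    cb_write(cyclic_buf *b, int16_t v) { b->data[b->wr++ & CB_MASK] = v; }
int16_t cb_read(cyclic_buf *b)             { return b->data[b->rd++ & CB_MASK]; }
unsigned cb_count(const cyclic_buf *b)     { return b->wr - b->rd; }
```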
Cyclic buffers can have a number of useful applications. They are invaluable in linking two processes that are unsynchronized, such as a constant sampling rate and the random availability of communications access on a bus. We must always remember, however, that our simple law of information flow always applies, and the input and output demands of the buffer must align on average or information will inevitably be destroyed.
Another application of cyclic buffers is in the realization of digital filters and other processes requiring delay (e.g. real time correlation). Here the read pointers are linked rigidly to the write pointer to achieve fixed delays.
A third useful application area is in what might be called the software transient recorder. The transient recorder is a device that behaves like an oscilloscope except that it enables pre-trigger information to be reproduced. The way it operates is that a signal is sampled continually, with each new datum erasing the one that came 2^n samples before. When a certain trigger condition occurs (e.g. the signal reaches a given level) the sampling process is stopped after, say, k further samples have been obtained. There are then k post-trigger data and 2^n - k pre-trigger data. These can be displayed continually by reading them repeatedly to a screen, or transferred to a linear array, care being taken that the first datum is at the beginning of the array. The software transient recorder is invaluable in such applications as impulse or step testing, where it removes any need to synchronize with the stimulating signal.
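The core mechanism can be sketched as follows. The trigger logic and the post-trigger count are omitted for brevity; the unload routine shows how the wrapped samples are restored to time order. Names and the n = 4 buffer size are ours.

```c
#include <stdint.h>

#define TR_BITS 4
#define TR_SIZE (1 << TR_BITS)    /* 2^n samples retained */
#define TR_MASK (TR_SIZE - 1)

typedef struct {
    int16_t  buf[TR_SIZE];
    unsigned wr;                  /* total samples written */
} recorder;

/* Each new datum erases the one that came 2^n samples before. */
void tr_sample(recorder *r, int16_t v) { r->buf[r->wr++ & TR_MASK] = v; }

/* Copy the last 2^n samples into `out`, oldest first: the masked
   write index points at the oldest surviving sample, so walking
   forward from it restores time order. Valid once at least 2^n
   samples have been taken. */
void tr_unload(const recorder *r, int16_t out[TR_SIZE]) {
    for (unsigned i = 0; i < TR_SIZE; i++)
        out[i] = r->buf[(r->wr + i) & TR_MASK];
}
```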
Signal processing structures
When the numbers being processed are a time series, as is usually the case with sampled data from a sensor, then a powerful set of processes becomes available through the application of advances in linear algebra.
There are two main classes of signal process as far as we are concerned. These we will call block processes and stream processes. In block processing a finite number of samples is acquired, and the whole block of samples is processed once acquisition is complete. This is not normally a real time operation, though there are variations which make it effectively so. In stream processing the samples are acquired continuously and operated on as soon as they arrive. This is normally a real time process and the number of samples is effectively unbounded.
Block processing may be exemplified by the general linear algebraic relationship in which a new vector of variables y_j is obtained from the original set x_i by multiplication by a rectangular array of coefficients a_ji, i.e. y_j = Σ_i a_ji x_i.
Stream processing, by contrast, may be exemplified by the difference equation y_n = Σ_i a_i x_(n-i) + Σ_j b_j y_(n-j), in which past outputs as well as past inputs contribute to each new output. This is, in our terms, the recursive digital filter, which again, via the z-transform, gives us a powerful tool in manipulating signals. The power of the idea of recursion, in which a process can operate on its own outputs as well as the inputs, can hardly be overstated. We saw a simple example in the case of the running mean smoothing filter, and the compaction obtained there is typical.
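A recursive running mean illustrates the compaction well: each new sample is added to a running sum and the sample leaving the window is subtracted, so the cost per output is one add, one subtract and a shift, independent of window length. The sketch below (an 8-sample window; structure and names are illustrative) combines this with the masked-pointer idea.

```c
#include <stdint.h>

#define RM_BITS 3
#define RM_SIZE (1 << RM_BITS)    /* window of 2^n samples */
#define RM_MASK (RM_SIZE - 1)

typedef struct {
    int32_t  window[RM_SIZE];     /* cyclic buffer of recent samples */
    int32_t  sum;                 /* running sum over the window     */
    unsigned idx;
} running_mean;

int32_t rm_update(running_mean *f, int32_t x) {
    unsigned i = f->idx++ & RM_MASK;
    f->sum += x - f->window[i];   /* recursion: new sample in, old out */
    f->window[i] = x;
    return f->sum >> RM_BITS;     /* divide by window length (2^n)     */
}
```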
Indirect software structures
Most of our numerical methods are direct, or closed, structures, but it is easy to forget that the power of the digital processor enables us to use indirect, or open, methods. Many scientists and engineers seem to have a distaste for 'guesswork', because of their formal training in analytical methods, but often such methods yield significant solutions where none would otherwise be available.
The most familiar of such methods are the root finding iterations, such as Newton's, but there is a highly developed set of tools which are in the nature of optimization. In order to facilitate an optimization technique there are three prior requirements:
- A set of adjustable parameters or coefficients which fully define a process.
- A measure of 'goodness'.
- A means of making an educated guess at a better set of parameters.
Given these three factors we can use a method of trial and error to arrive at an optimum solution. Normally we also need a criterion for stopping the process, either sufficient accuracy or a limiting number of iterations, but in the sort of on-line process we find in intelligent systems it is possible to keep the process going on indefinitely, so that changes in the outside world can be tracked, and an operating point held at optimum.
The two basic classes of optimization process are gradient methods and gradient-free methods. However, because the process of taking a gradient enhances high frequency noise, we normally prefer the latter class in transducer applications.
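A gradient-free trial-and-error loop of this kind might be sketched as follows. The coordinate-search strategy, the step-halving rule and the quadratic example cost are all our own illustrative choices, not a prescription; in an on-line sensor the goodness measure would be a live error signal and the loop could run indefinitely.

```c
typedef double (*goodness_fn)(const double *p, int n);

/* Gradient-free optimizer: perturb each parameter by +/- step, keep
   any change that improves the goodness measure (lower cost = better),
   and halve the step when no perturbation helps - the "educated
   guess" refined. Stops on step size (tol) or iteration count;
   returns the number of iterations used. */
int optimize(double *p, int n, goodness_fn cost, double step,
             double tol, int max_iter) {
    int it;
    for (it = 0; it < max_iter && step > tol; it++) {
        int improved = 0;
        for (int i = 0; i < n; i++) {
            double best = cost(p, n), old = p[i];
            p[i] = old + step;
            if (cost(p, n) < best) { improved = 1; continue; }
            p[i] = old - step;
            if (cost(p, n) < best) { improved = 1; continue; }
            p[i] = old;           /* neither direction helped */
        }
        if (!improved) step *= 0.5;
    }
    return it;
}

/* Hypothetical goodness measure for demonstration: squared distance
   to a known optimum at (3, -1). */
double example_cost(const double *p, int n) {
    (void)n;
    return (p[0] - 3.0) * (p[0] - 3.0) + (p[1] + 1.0) * (p[1] + 1.0);
}
```

Because only cost values, never gradients, are compared, noise on the goodness measure is not differentiated and so not amplified, which is the attraction noted above for transducer applications.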
The idea of a software shell is now a familiar one, as the term is used with common computer operating systems. It is a very important concept with the sort of systems we are discussing here. The user of intelligent sensor systems is normally an instrumentation engineer. It is therefore absolutely vital for the design of the intelligent sensor system to include the design of a software shell. This is not only important in protecting the user from the intricacies of internal operation, but like the shell of an egg it also protects the contents from being damaged by external events. Figure 3.3.3D shows diagrammatically the software arrangements developed alongside the generalized hardware unit described in 3.2.2.
4.1 Home Automation
As the world becomes more technologically advanced, new technology reaches deeper into our personal lives, even at home. Home automation is becoming more popular around the world and is turning into common practice. Home automation works by using technology to control the things in the house automatically, performing the jobs that we would normally do manually.
Home automation (Domotics) is the field specializing in the general and specific automation requirements of homes and apartments for the better safety, security and comfort of their residents. Domotics is being implemented in more and more homes in order to maintain the safety, independence and comfort of their occupants. These Smart Homes allow the general user to monitor the entire house through a single unit or through a network. For disabled people, intelligent homes offer an opportunity for independence, which will help them gain confidence and determination. Smart Homes can provide both the elderly and the disabled with many different types of emergency assistance systems, automated timers, fall prevention, security features and other alerts. Smart home systems also enable family members to monitor their loved ones from anywhere through the internet.
In more advanced automation systems, rooms are installed with sensors that can detect the presence of human beings and even identify the person inside the room. The control centre can then set the desired lighting and temperature depending on the person, the time of day and other factors. Automation tasks may also include setting the air conditioning to an energy saving mode when the house is unoccupied, and restoring the desired temperature at a preset time or when a person is present in the room. More complicated automation systems are capable of maintaining an inventory of products, recording their usage through Radio Frequency ID (RFID) tags, preparing a shopping list or even automatically ordering replacements.
Another practical implementation of home automation is to alert the occupants of the house when the sensors detect fire or smoke by blinking all the lights of the house.
As we age, we face declining abilities from age-related diseases and the aging process itself. Our home can assist or hinder our ability to complete self-care and household activities. In this chapter we focus on the very latest technology, as well as future technology, that can and will assist us in living independently at home. There is growing interest in the concept of “smart houses”. What is it that a smart house actually does, or offers to its residents? The following list summarizes smart house functions, organizing them into “levels” based on complexity and on how long the function has been available; in some cases they are not yet available in product form.
Level 1: Offers Basic Communications
- Offers interactive voice and text communication (phone and email)
- Provides link to World Wide Web
- Offers TV (full range of stations) & radio (AM and FM)
Level 2: Responds to Simple Control Commands from Within or Outside the Home
- Unlock / lock door
- Check for doors / windows open or unlocked
- Turn on lights
- Check for land-mail
- Get help (in the case of a fall or other problem)
Level 3: Automates Household Functions
- Air temperature, humidity
- Lights on/off at predetermined times
- House made secure at certain times
- Music, TV on/off at certain times
Level 4: Tracks Location in the Home, Tracks Behaviors, and Tracks Health Indicators
- Determines activity patterns
- Determines sleep patterns
- Determines health status (vital signs, blood glucose, weight)
Level 5: Analyzes Data, Makes Decisions, Takes Actions
A. Issues alerts
A1. To resident
- Mail has been delivered
- Person (name given, or stranger stated) at door announced
- Water leaks
- Stove has been left on
- Door is unlocked
A2. To distant care provider
Altered, problematic activity or sleep pattern, or health problem
A3. To formal service provider
B. Provides Reports
To resident, care provider, and formal service provider on status
C. Makes changes in automated functions based on learned preferences
Adjusts lights, temperature, music to resident's use patterns; can be overridden easily
Level 6: Provides Information, Reminders, and Prompts for Basic Daily Tasks
- Notification mail has arrived, someone is at the door, stove was left on
- Medication reminder
- Hydration reminder
- Meal reminder
- Task prompting
- Washing, grooming
- Meal preparation
- Social contacts (e.g., “call. . . .”)
- Household cleaning
Level 7: Answer questions
- “Have I . . . (brushed my teeth, taken my medications, put all the ingredients in the dish I am preparing)?”
- Orientation—time, day, month, year, season
- What is happening today?
- General information (any question that could be searched on Google)
Level 8: Make household arrangements
- Schedule maintenance and repair visits
- Order medications
- Prepare grocery lists and send to grocery for delivery
- Prepare meals
- Handle house cleaning
Smart House Level I: Offers Basic Communications
Level I technology is necessary for a smart house, but simply having level I technology does not alone make it a smart house. At Level I, technology relates to communications, providing residents with the means to communicate with and receive communications from others beyond the home. Telephones represent one example of communications technology, one that has been available for over 100 years, providing voice communication with others who have the same technology. Today, especially with mobile phones, they exist almost everywhere. The telephone is especially important for older persons with disabilities. Almost one-third live alone, and for those with limited mobility, the telephone provides opportunities for socialization. In a study of older adults living in rural areas, frequent loneliness was found to be associated with frequent use of the telephone. The telephone also provides a mechanism for calls for help. Many people use the telephone for shopping, banking, and arranging other personal services.
Internet access is an important communication technology, essential if a home is to be considered “smart.”