The probability of a disease




3.1 Overview and future prospects

All CAD systems follow the same general procedure: locating a lesion and then analyzing the probability of disease. The main technologies involved in CAD systems are (Doi, 2005):

  • Image processing for the detection and extraction of abnormalities;
  • Quantitation of image features for candidate abnormalities;
  • Data processing for the classification of image features as normal or abnormal (or benign or malignant);
  • Quantitative evaluation and retrieval of images similar to those of unknown lesions;
  • Observer performance studies using ROC analysis.

Computer-aided diagnosis (CAD) has lately become part of routine clinical work in several medical divisions. Radiology in particular witnessed great development during the past century: X-ray computed tomography (CT) and magnetic resonance imaging (MRI) have grown enormously thanks to advances in image detector systems and computer technology. Various types of CAD systems have been developed for the detection and/or classification of different lesions in medical images (Doi, 2005); the human organs analyzed with these CAD systems include the brain, breast, chest, liver, colon, kidney, and the vascular and skeletal systems.

Nevertheless, each of these CAD systems is limited to a specific disease of a single organ. The coming generation of CAD systems will not be single-purpose but comprehensive: Kobatake (2007) developed a CAD system that targets multiple organs and multiple diseases.

This chapter highlights key related research, algorithms, and techniques relevant to this work, including the feature extraction, classification, and matching techniques most commonly applied in current CAAD and image understanding systems. Finally, it provides a critical comparison of CAAD systems using the algorithms explained in this literature.

3.2 Computer Aided Diagnosis

In the 1960s and 1970s, the Department of Medical Computer Sciences of the University of Vienna Medical School at the Vienna General Hospital envisioned the development of a computer-assisted diagnostic system that did not use stochastic methods. The intention was a system not based on statistical assumptions such as normal distribution, mutual independence of symptoms, or constant probabilities of symptoms across populations and observation times. Such a system needs no information about the frequency or absence of certain symptoms among the sick or the healthy; rare complaints are therefore considered as well as frequent diseases (Adlassnig and Grabner, p. 141).

To systematize and formalize medical knowledge and to store it in a suitable form, Georg Grabner (professor of gastroenterology and hepatology and both head of the University Department of Medical Computer Sciences and, at the same time, head of the University Clinic of Gastroenterology and Hepatology) and the IBM information scientist W. Spindelberger started to use a computer for medical diagnosis in the late 1960s. Intensive collaboration between physicians, mathematicians, and engineers followed, producing a first computer-assisted diagnostic system based on two-valued logic in 1968 (Spindelberger and Grabner, 1968). One year later, Gangl, Grabner, and Bauer published their first experiences with this system in the differential diagnosis of hepatic diseases (Gangl et al., 1969).

In 1976, the second generation of the system was developed on the basis of three-valued logic: in addition to being considered "present" or "absent", symptoms and possible diagnoses could also be marked "not examined". This system became known as CADIAG-I (Computer-Assisted DIAGnosis, version I). The computer-assisted diagnostic (CADIAG) projects are long-term efforts aimed at building consultation systems able to assist extensively in the differential diagnostic, and eventually the therapeutic, process in internal medicine. CADIAG-II, a consultation system formally based on fuzzy set theory and fuzzy logic, was developed and practically tested in 1979/80 (Adlassnig, 1980).

In the early 1980s, a group in the radiology department at the University of Chicago was the first to bring computer-assisted diagnostics into the clinical field. They started academic research and development of various CAD schemes, which was followed by further research and a rich literature on digital images in radiologic diagnosis (Doi, 2007, p. 198; Doi, K. & Huang, H.K., p. 195). Some of these CAAD systems used vascular imaging (Fujita et al., 1987; Hoffmann et al., 1986); others (Giger et al., 1987; Giger et al., 1988) used chest radiographs for the detection of lung nodules.

Chan et al. (1987) investigated the use of an automated, computerized tool for the detection of clustered microcalcifications in digital mammograms. Many research papers on computer-aided diagnosis (CAD) were published and presented at the Radiological Society of North America (RSNA) meetings from 2000 to 2005.

These papers concern three human organs: breast, chest, and colon. The following table lists them.

According to Doi (2007), with regard to the effect of CAD on the detection rate of breast cancer, it is important to note that detection rates of breast cancer increase with the use of CAD, as shown in Table 2.

3.3 CAAD systems for Skin lesions and burns

Various CAAD systems have been developed for skin abnormalities. Between 1987 and 2007 there was notable research on digital image analysis systems for the diagnosis of benign and malignant skin tumors (Blum et al., 2008). This research focused on skin lesions and used different statistical methods, such as logistic regression (Menzies et al., 1997, p. 1064), neural networks (Bauer et al., 2000, p. 345), and linear classification with receiver operating characteristic (ROC) analysis (Burroni et al., 2004, p. 1881). In 2005, a research group published related work on a CAD system for the diagnosis of melanoma (Barzegari et al., 2005).

Another type of CAAD system targets wounds and skin burn classification. There has been relatively little CAAD research on skin burns, mainly because of the difficulty of translating human perception into data the computer can understand: how can we make the computer mimic our perception? Likewise, there is comparatively little research on color image processing of wounds (Berriss & Sangwine, 1997). Nevertheless, a number of published studies on wounds can be found; for example, Herbin et al. (1990) analyzed RGB color images digitized from Kodachrome color slides of wounds.

Another study, by Hansen et al. (1997), demonstrated noninvasive wound evaluation using a computerized color imaging system. Returning to CAAD for skin burns, DeCristofano et al. (1992) can be considered one of the first research groups to automate the assessment of skin burn injuries, proposing an advanced thermal-response data acquisition system built for collecting and analyzing the extent of burns. This was followed by a study of burn injury classification using a neural-network-enhanced spectrometer system (Zhao and Lu, 1995). More recently, a new CAD tool for burn diagnosis has been developed that segments the burn from healthy skin and then classifies it into three categories: superficial dermal, deep dermal, and full thickness (Serrano et al., 2003).

3.4 Digital Image Processing

It is important first to explain the difference between digital image processing and digital image analysis. Image processing can be thought of as a transformation that takes an image to an image, i.e., starting from an image, a modified image is obtained (Russo and Ramponi, 1994; Russo and Ramponi, 1995). Digital image analysis, on the other hand, is a transformation of an image into something other than an image: it produces information representing a description or a decision. The purposes of digital image processing are to:

  1. Improve the appearance of an image to a human observer.
  2. Extract from an image quantitative information that is not readily apparent to the eye.
  3. Calibrate an image in photometric or geometric terms.

Image processing is an art as well as a science. It is a multidisciplinary field that combines computer technology, photography, optics, mathematics, and electronics. This dissertation proposes the use of segmentation as an effective way to achieve a variety of low-level image processing tasks, one of which is classification.

3.5 The association between Digital image processing and medicine

Image processing has become increasingly important in areas such as medicine, geography, and industry. It was once a highly specialized task requiring a deep understanding of image processing algorithms; image processing expert systems were therefore developed to assist those without sufficient expertise in obtaining a required image from a given one using a package of image processing algorithms. Image processing is based on two main categories of manipulation of two-dimensional data arrays. The first category includes the restoration of one or more objects by compensating for noise, motion, shading, geometric distortions, and other sources of degradation associated with the image acquisition system. The second category involves the enhancement of information and data reduction to emphasize, measure, or visualize important features of the image. In recent years, the field of medical imaging has required the role of image processing to expand from the analysis of individual two-dimensional images to the extraction of information from three-dimensional images.

The major difficulty in interpreting cross-sectional gray-scale images is that anatomic structures look very different from their three-dimensional appearance. This divergence requires the physician to perform a significant mental translation of the data, a task that requires highly specialized training. Although radiologists undergo such training, the visual interpretation of the data sets becomes observer dependent, and others may have more difficulty in visualizing the data. In view of the relatively large size of a typical three-dimensional data set (e.g., 80 × 256 × 256) and the fact that a single imaging examination may include the acquisition of several such data sets, the radiologist can work more efficiently if the information from many slices is concentrated into one rendering.

3.6 Expert Systems

Expert systems are a branch of a general class of computer applications known as artificial intelligence; these systems apply human knowledge to solve problems that would normally require human intelligence. Expert systems (ES) represent expert knowledge as data or rules within the computer, which can be called upon when needed to solve problems. Knowledge-based systems gather small fragments of human know-how into a knowledge base, which is used to reason through a problem using whatever knowledge is appropriate. A different problem within the domain of the knowledge base can be solved by the same program without reprogramming. The ability of these systems to explain their reasoning through back-traces and to handle levels of confidence and uncertainty is a feature that conventional programming does not provide.

ESs provide powerful and flexible means for solving a variety of problems that often cannot be handled by more traditional and standard methods, so their use is spreading to many areas of social and technological life. Their application categories include rule-based systems, knowledge-based systems, neural networks, fuzzy ESs, object-oriented methodology, case-based reasoning (CBR), system architecture development, intelligent agent (IA) systems, modeling, ontology, and database methodology, together with their applications across different research and problem domains.

3.6.1 Expert systems in image processing

One of the computer applications in medicine is image-generating and image-processing medicine: systems for knowledge-based navigation and monitoring of diagnostic and surgical procedures, including routines to avoid undesired events or anatomical regions; differential diagnostic support displayed during image interpretation; and clinical patient data offered with preceding knowledge-based filtering to assist the image-diagnosing physician in his or her decision.

In recent years, there has been an increasing amount of literature on image processing expert systems. In 1988, Toshikazu Tanaka and Naomichi Sueda proposed a new knowledge acquisition facility for an expert system called EXPLAIN, which assists the non-expert in using a package of image processing algorithms to obtain a required image from a given one (Sueda and Hoshi, 1986; Mikame et al., 1985).

In another major study, Serge G. Manukov, George M. Papoudurakis, and George G. Gogichaishvili (2001) proposed an expert system containing knowledge obtained from experienced dentists together with information from the relevant literature and tutorials [26]. Through such an advising system, the stomatologist receives expert assistance in any difficult diagnostic situation [27].

3.6.2 Expert Systems in Medicine

Medical expert and knowledge-based systems are designed to give expert-level, problem-specific advice in the areas of medical data understanding, patient monitoring, disease diagnosis, treatment selection, prognosis, and patient management. They provide expert knowledge by applying it to patient data, emulating and aiding the decision-making behavior of medical and administrative personnel.

The history of computerized medical diagnosis is a history of intensive collaboration between physicians on the one hand and mathematicians, electrical engineers, and computer scientists on the other. In the late 1950s, Ledley and Lusted published Reasoning Foundations of Medical Diagnosis [1], while Lipkin and Hardy [2] and Ledley [3] wrote on methods for the use of card and needle systems for the storage and classification of medical data and for systematic medical decision-making. In the 1960s and 1970s, different approaches to computerized diagnosis arose using Bayes' rule [4, 5], factor analysis [6], and decision analysis [3]. Artificial intelligence approaches also came into use, e.g., DIALOG (Diagnostic Logic) [7] and PIP (Present Illness Program) [8], programs that simulated the physician's reasoning in information gathering and diagnosis using databases in the form of networks of symptoms and diagnoses.

Back in 1968, a computer-assisted diagnostic system for differential diagnosis in hepatology and rheumatology, based on symbolic logic and heuristic hypothesis generation, was developed and successfully tested [22, 23].

Four experimental systems are generally regarded as having started the research field of artificial intelligence in medicine [6]. These were MYCIN, a program to advise physicians on antimicrobial selection for patients with bacteremia or meningitis [8, 9]; the Present Illness Program (PIP), a system that gathered data and generated hypotheses about disease processes in patients with renal disease [10]; INTERNIST-1, a large system to assist with diagnosing complex problems in general internal medicine [1]; and CASNET, an ophthalmology advisor designed to assess disease states and to recommend management for patients with glaucoma [1, 2]. All four drew on AI techniques, encoding large amounts of specialized medical knowledge acquired from the clinical literature and from expert collaborators. None used classical statistical techniques, nor did they base their advice on interpretations of accumulated experience in patient data banks.

On the other hand, each was influenced by earlier AI work on general problem-solving techniques, and two of the systems (PIP and INTERNIST-1) explicitly modeled hypothetico-deductive behavior [13, 14]: the familiar process by which physicians rapidly formulate tentative hypotheses after obtaining the first few pieces of information about a patient and then let those hypotheses (typically a differential diagnosis) guide further data collection and problem solving.

By the late 1970s such systems had become known as "knowledge-based systems" or "expert systems," terms that continue in common use. Thus, the term "expert system" originally implied a computer-based consultation system using AI techniques to simulate the decision-making behavior of an expert in a specialized, knowledge-intensive field. An expert system has been developed for the analysis of umbilical cord acid-base data [15], encapsulating the knowledge of leading obstetricians, neonatologists and physiologists gained over years of acid-base interpretation.

The expert system combines knowledge of the errors likely to occur in acid-base measurement, physiological knowledge of plausible results and statistical knowledge of a large database of results. It automatically checks for errors in input parameters, identifies the vessel origin (artery or vein) of the results and provides an interpretation in an objective, consistent and intelligent manner.

The expert system was developed in three main incremental stages. Initially, a crisp expert system was developed incorporating conventional forward-chaining logic [20, 21].

The achievement of this work can be summarized as the successful modeling of the clinical expert knowledge necessary for the assessment of umbilical acid-base information. The need for basic data validation of acid-base parameters prior to interpretation has been largely accepted clinically.

In reviewing this new field in 1984, Clancey and Shortliffe [24] provided the following definition:

'Medical artificial intelligence is primarily concerned with the construction of AI programs that perform diagnosis and make therapy recommendations. Unlike medical applications based on other programming methods, such as purely statistical and probabilistic methods, medical AI programs are based on symbolic models of disease entities and their relationship to patient factors and clinical manifestations.'

3.7 Skin Burn Diagnosis and Classification

This section provides an overview of the most important techniques for determining burn depth.

Although numerous methods have been proposed during the past two decades for determining burn depth, it remains a challenging task. This section starts with the standard technique for determining burn depth, which is clinical observation of the wound. The difference between a deep dermal burn that will heal only after many weeks and a full thickness burn that will not heal at all may be only a matter of a few tenths of a millimeter. Further, a burn is a dynamic process for the first few days, and a burn that appears shallow on day 1 may appear deep by day 3. Other techniques take advantage of (1) the ability to detect dead cells or denatured collagen (biopsy, ultrasound, vital dyes) [30]; (2) changes in blood flow (fluorescence, laser Doppler, thermography); (3) the color of the wound (light reflectance); and (4) physical changes, such as edema (nuclear magnetic resonance imaging).

Several studies investigating skin burns have been carried out. In 1977, Anselmo and Zawacki [31] developed a method based on ratioing the magnitudes of visible and infrared radiation from several spectral ranges. The results were promising, but the analysis time was too slow for clinical decision making. A method called the Burn Depth Indicator (BDI) was developed by Heimbach et al. for burn depth estimation [32-33]. It is similar to that of Anselmo and Zawacki, relating burn depth to the ratios of red/green, red/infrared, and green/infrared light reflected from the burn wound, but is much faster.
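BDI-style ratio features of the kind just described can be sketched as follows. This is only an illustration: the reflectance values are invented, the array names are ours, and the BDI's actual acquisition and calibration pipeline is not reproduced here.

```python
import numpy as np

# Hypothetical per-pixel reflectance values for a small wound region,
# one array per spectral band (values are made up for illustration).
red = np.array([[0.60, 0.55], [0.58, 0.62]])
green = np.array([[0.30, 0.28], [0.31, 0.29]])
infrared = np.array([[0.70, 0.72], [0.69, 0.71]])

eps = 1e-9  # guard against division by zero

# The three band ratios related to burn depth in the BDI approach
red_green = red / (green + eps)
red_ir = red / (infrared + eps)
green_ir = green / (infrared + eps)

# A simple per-region summary: the mean of each ratio image
features = [float(r.mean()) for r in (red_green, red_ir, green_ir)]
```

A classifier would then map such ratio features to a burn-depth estimate; the mapping itself is what the cited studies calibrated clinically.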

An indocyanine green fluorescence based technique to determine viable tissue circulation at different levels in skin and burn tissue was used in the 1990s, but this method did not gain widespread clinical acceptance [34]. A more recent study used a laser Doppler flowmeter with a multichannel probe to measure burn wound perfusion as a tool for predicting wound outcome [35]. Cole, Jones, and Shakespeare employed thermographic images to assess hand burns [36]. In another major study, Barsley, West, and Fair used ultraviolet imaging of wounds on skin [37]. The limitations of these earlier techniques lie not only in diagnostic accuracy but also in their high economic cost.

3.8 Image Segmentation

Image segmentation is not governed by any unified theory, and its techniques are essentially ad hoc. They differ mainly in how they give prominence to the desired properties of an ideal segmenter and in how they balance and trade off one desired property against another.

Image segmentation plays an important role in medical image processing. The goal of segmentation is to extract one or several regions of interest in an image. Depending on the context, a region of interest can be characterized by a variety of attributes, such as gray level, contrast, texture, shape, and size; the selection of good features is the key to successful segmentation. There is a variety of techniques for segmentation, ranging from simple ones such as thresholding to more complex strategies including region growing, edge detection, morphological methods, and artificial neural networks. Image segmentation can be considered a clustering process in which the pixels are assigned to specific regions based on their gray-level values and spatial connectivity.

If possible, a good segmenter should produce regions that are homogeneous and uniform, without many small holes. Further, the boundaries of each segment should be spatially accurate yet smooth, not ragged. Finally, neighboring regions should have considerably dissimilar values for the features on which region uniformity is based. There are two kinds of segmentation:

  • Complete segmentation: produces a set of disjoint regions corresponding uniquely with objects in the input image.
  • Partial segmentation: the segmented regions do not correspond directly with image objects.

The image is segmented into distinct regions that are homogeneous with respect to an identifiable property such as brightness, color, reflectivity, or texture. In more complex situations, a set of possibly overlapping homogeneous regions may arise; further processing of the partially segmented image is then required, and the final segmentation may be determined with the assistance of higher-level information.

Image segmentation involves three principal concepts: detection of discontinuities (e.g., edge-based methods), thresholding (e.g., based on pixel intensities), and region processing (e.g., grouping similar pixels).
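As a minimal illustration of the thresholding concept mentioned above, the following sketch segments a bright region from a dark background by pixel intensity alone; the toy image and threshold value are invented for illustration.

```python
import numpy as np

# Toy 8-bit grayscale image: a bright 3x3 square (the "region of
# interest") on a dark background. Values are illustrative only.
img = np.zeros((6, 6), dtype=np.uint8)
img[2:5, 2:5] = 200

threshold = 128            # global intensity threshold
mask = img > threshold     # True where a pixel belongs to the region

n_object_pixels = int(mask.sum())
```

Real images need a data-driven threshold (e.g. chosen from the intensity histogram), but the segmentation step itself is exactly this comparison.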

Methodologies pertaining to segmentation can be classified into three main groups according to the dominant features they employ:

  1. Global knowledge about an image or its components, usually represented by a histogram of image features.
  2. Edge-based segmentations.
  3. Region-based segmentations.

It is important to mention that:

  • There is no universally applicable segmentation technique that will work for all images.
  • No segmentation technique is perfect.

3.9 Features and Feature extraction

For object representation we have two options: numerical features (either continuously valued, such as weight and length, or discretely valued, such as number of fins); or categorical attributes (color, skin texture, etc.) (Bezdek et al., 1999).

Several research projects have demonstrated the significance of color as a segmentation feature in color computer vision. Segmentation is one of the first steps of low-level image processing, during which the input image is divided into distinguishable units or areas based upon a collection of properties. In [Holla, 1982] a model of the human visual system was used as a preprocessing tool for scene analysis; color vision was modeled by obtaining a pair of opponent colors (red-green and yellow-blue) as a two-dimensional feature. Luminance and chrominance were distinguished from each other: luminance proved to be a better detector of small details, while chrominance, whose importance is emphasized, performed better in rendering coarser structures and areas.

In [Mustafa, 1996] spectral information was used together with curvature to achieve color-based three-dimensional object identification. Surface signatures were obtained by photometric stereo to describe the input images. The signatures were normalized histogram distributions that are invariant to change in pose, partial occlusion or shadowing effects.

In the head-tracking application of [Birchfield, 1998], color was used because of its invariance to geometric change. The intensity gradient around the head's perimeter and the color histogram of the head's interior were combined; their (nearly) orthogonal nature allowed one to complement the other in case of failure.

An image retrieval system, FOCUS, demonstrated improved performance by analyzing color histograms [Das, 1997]. The peaks of the histograms provided the color content of the image, which was matched with the query object; the spatial relationship between the examined color regions was then analyzed. This study placed heavy emphasis on the hue feature. Color information was also successfully utilized in the image filtering application of [Tomasi, 1998], where a bilateral (combined range and domain) filter smoothed the input image while preserving the perceptually visible edges.
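A histogram-peak matching step of the kind FOCUS performs on hue can be sketched as follows. The hue samples, bin count, and helper function below are hypothetical illustrations of the idea, not the system's actual implementation.

```python
import numpy as np

# Hypothetical hue samples (0-179 range) drawn from two images
image_hues = np.array([10, 12, 11, 90, 91, 10, 12, 89, 90, 10])
query_hues = np.array([11, 10, 12, 10, 11])

def dominant_hue_bins(hues, n_bins=18, top=2):
    """Indices of the most populated hue bins (the histogram peaks)."""
    hist, _ = np.histogram(hues, bins=n_bins, range=(0, 180))
    return set(np.argsort(hist)[-top:])

# Two images "match" on color content if their dominant hue bins overlap
match = bool(dominant_hue_bins(image_hues) & dominant_hue_bins(query_hues))
```

FOCUS then goes further and checks the spatial relationships between matched color regions; this sketch covers only the histogram-peak comparison.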

Although these research projects applied color only as a property of secondary or tertiary significance, color has not yet been widely examined as a sole descriptor of surfaces and places. The question of why such an important object feature is "neglected" in some research applications was intriguing and provided one of the main motivations for the experiments of the presented research.

3.10 Edge-Based Segmentation

Edge-based segmentation schemes take local information into account, but do so relative to the contents of the image rather than an arbitrary grid. Each of the methods in this category involves finding the edges in an image and then using that information to separate the regions. In the edge detection technique, local discontinuities are detected first and then connected to form complete boundaries.

Edge detection is usually done with local linear gradient operators such as the Prewitt (Prewitt, 1965), Sobel (Sobel, 1970), and Laplacian (Gonzalez and Woods, 1992) filters. These operators work well for images with sharp edges and low amounts of noise; for noisy, busy images they may produce false and missing edges. The detected boundaries may not necessarily form a set of closed, connected curves, so some edge linking may be required (Canny, 1986).

Applying an edge detector to an image may yield a set of connected curves that indicate object boundaries. It may also substantially reduce the volume of data to be processed, filtering out information that may be regarded as irrelevant while preserving the vital structural properties of the image (Ziou and Tabbone, 1998).
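As an illustration of gradient-operator edge detection with the Sobel masks mentioned above, the following sketch detects a vertical step edge in a toy image; the minimal convolution routine is our own and is written out only to keep the example self-contained.

```python
import numpy as np

def convolve2d(img, kernel):
    """Minimal 'valid' 2-D correlation: slide the mask over the image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel masks for horizontal and vertical intensity change
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
sobel_y = sobel_x.T

# Toy image: a vertical step edge between columns 1 and 2
img = np.array([[0, 0, 10, 10]] * 4, dtype=float)

gx = convolve2d(img, sobel_x)          # responds to the vertical edge
gy = convolve2d(img, sobel_y)          # zero: no horizontal edge present
magnitude = np.sqrt(gx**2 + gy**2)
```

Thresholding `magnitude` would then give candidate edge pixels, which the edge-linking step connects into boundaries.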

3.11 Edge: what is it?

Edge detectors are a collection of very important local image pre-processing methods used to locate sharp changes in the intensity function. Edges are pixels where the brightness function changes abruptly; calculus describes changes of continuous functions using derivatives (Milan et al., 1998).

  • An image function depends on two variables, the co-ordinates in the image plane, so operators describing edges are expressed using partial derivatives.
  • A change of the image function can be described by a gradient that points in the direction of the largest growth of the image function.
  • An edge is a (local) property attached to an individual pixel and is calculated from the image function in a neighborhood of that pixel.
  • It is a vector variable with two components: the magnitude of the gradient, and a direction that is rotated with respect to the gradient direction by -90°.
  • The gradient direction gives the direction of maximal growth of the function, e.g., from black to white.
  • This is illustrated below; closed contour lines are lines of the same brightness, and the orientation 0° points East.
  • Edges are often used in image analysis for finding region boundaries.
  • A boundary lies at the pixels where the image function varies, and the boundary and its parts (edges) are perpendicular to the direction of the gradient.
  • The following figure shows several typical standard edge profiles.

Roof edges are typical for objects corresponding to thin lines in the image.
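The gradient magnitude and direction described above, and the -90° rotation that yields the edge direction, can be sketched numerically. The ramp image below is an invented example: it brightens toward the east, so its gradient points east (0°) everywhere.

```python
import numpy as np

# Toy image function: a linear ramp brightening toward the east
img = np.array([[0., 1., 2., 3.]] * 4)

# numpy returns derivatives in (row, column) order
gy, gx = np.gradient(img)

magnitude = np.hypot(gx, gy)                 # gradient magnitude
direction = np.degrees(np.arctan2(gy, gx))   # 0 degrees points east

# Edge direction is the gradient direction rotated by -90 degrees
edge_direction = direction - 90.0
```

For this ramp the gradient magnitude is 1 everywhere, the gradient direction is 0° (east, the direction of maximal growth), and the edge direction is -90°, i.e. the edge runs perpendicular to the gradient, as the list above states.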

  • Edge detectors are usually tuned for some type of edge profile.
  • Sometimes we are interested only in the magnitude of the change, without regard to its orientation.
  • In that case a linear differential operator called the Laplacian may be used.
  • The Laplacian has the same properties in all directions and is therefore invariant to rotation in the image.
The Laplace operator, ∇²f = ∂²f/∂x² + ∂²f/∂y², is widely used for approximating the second derivative; it yields a magnitude only, carrying no direction information.
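The Laplacian can be approximated discretely by a 3×3 mask; the five-point stencil below is the standard approximation, and the ramp image is an invented example chosen because a linear ramp has zero second derivative everywhere, so the operator should respond with zero.

```python
import numpy as np

# 5-point discrete Laplacian mask: approximates d2f/dx2 + d2f/dy2
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def apply_mask(img, kernel):
    """Apply a 3x3 mask at every interior pixel ('valid' correlation)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

# On a linear ramp the second derivative, hence the Laplacian, is zero
ramp = np.arange(25, dtype=float).reshape(5, 5)
response = apply_mask(ramp, laplacian)
```

On a step edge the same mask produces a positive and a negative response on the two sides of the edge, which is what zero-crossing edge detectors exploit.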

3.12 Edge detection techniques

Although there are quite a number of methods for edge detection, the majority of them can be grouped into two sets: search-based and zero-crossing based (Ziou and Tabbone, 1998). Search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local edge orientation, usually the gradient direction. Zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image, usually the zero crossings of the Laplacian or of a non-linear differential expression, as shown in Figure 2.3:

As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied. The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.
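The smoothing-before-differentiation pipeline described above can be sketched in one dimension; the signal, noise level, and kernel truncation radius below are illustrative choices of ours.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Sampled 1-D Gaussian, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(signal, sigma=1.0, radius=3):
    """Gaussian smoothing of a 1-D signal (same-length output)."""
    return np.convolve(signal, gaussian_kernel1d(sigma, radius), mode="same")

# Noisy step edge: smoothing suppresses the noise before differentiation
rng = np.random.default_rng(0)
step = np.concatenate([np.zeros(20), np.ones(20)])
noisy = step + 0.2 * rng.standard_normal(40)

smoothed = smooth(noisy, sigma=1.5)

# The gradient (first difference) of the smoothed signal peaks at the edge
gradient = np.diff(smoothed)
edge_location = int(np.argmax(np.abs(gradient)))
```

Without the smoothing stage, the noise would create many spurious gradient maxima competing with the true edge, which is exactly why Gaussian pre-smoothing is almost always applied.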

3.13 Scale-Space Theory

Scale-space theory is a framework for multi-scale signal representation developed by the computer vision, image processing, and signal processing communities, with complementary motivations from physics and biological vision. It is a formal theory for handling image structures at different scales by representing an image as a one-parameter family of smoothed images, the scale-space representation, parameterized by the size of the smoothing kernel used for suppressing fine-scale structures. The parameter t in this family is referred to as the scale parameter, with the interpretation that image structures of spatial size smaller than about √t have largely been smoothed away in the scale-space level at scale t.
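A one-dimensional scale-space family of the kind just described can be sketched as follows; the test signal and the set of scales are invented for illustration. Fine structure (the fast oscillation) is progressively smoothed away as t grows, while the coarse trend survives.

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Sampled 1-D Gaussian, normalized to sum to 1."""
    radius = max(1, int(np.ceil(3 * sigma)))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def scale_space(signal, scales):
    """One-parameter family of smoothed copies, indexed by scale t = sigma**2."""
    return {t: np.convolve(signal, gaussian_kernel1d(np.sqrt(t)), mode="same")
            for t in scales}

# Coarse trend plus a fine oscillation riding on top of it
x = np.linspace(0, 2 * np.pi, 100)
signal = np.sin(x) + 0.3 * np.sin(20 * x)

family = scale_space(signal, scales=[1.0, 4.0, 16.0])
```

Comparing the members of the family shows the defining behavior: the larger t is, the less variation remains, because structures smaller than about √t have been suppressed.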

2.9 Scale-Space-edge Detector

The primary goal of this image edge detector is to delineate paths that correspond to the physical boundaries and other features of the image's subject. This detector implements an edge definition developed by Lindeberg (1998) that provides "automatic scale selection":

  1. The gradient magnitude is a local maximum (in the direction of the gradient).
  2. A normalized measure of the strength of the edge response (for example, the gradient weighted by the scale) is a local maximum over scales.

The first condition is a well-established technique (Canny, 1986). Accounting for edge information over a range of scales (multi-scale edge detection) can be approached in two ways: the appropriate scale(s) at each point can be evaluated, or edge information over a range of scales can be combined. The above definition takes the first approach, where "appropriate scale(s)" means the scale(s) at which maximal information about the image is present. In this sense it is an adaptive filtering technique: the edge operator is chosen based on the local image structure.
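A minimal sketch of the second condition, selecting per-pixel the scale that maximizes a scale-normalized edge response √t·|∇L|; the function name, the √t weighting, and the Sobel-based gradient estimate are illustrative assumptions rather than Lindeberg's exact formulation:

```python
import numpy as np
from scipy import ndimage

def best_edge_scale(image, scales):
    """Per pixel, pick the scale t maximizing the scale-normalized
    gradient magnitude sqrt(t) * |grad L(.; t)|."""
    image = image.astype(float)
    responses = []
    for t in scales:
        smoothed = ndimage.gaussian_filter(image, np.sqrt(t))
        gx = ndimage.sobel(smoothed, axis=1)
        gy = ndimage.sobel(smoothed, axis=0)
        responses.append(np.sqrt(t) * np.hypot(gx, gy))
    stack = np.stack(responses)        # shape: (n_scales, H, W)
    idx = np.argmax(stack, axis=0)     # winning scale index per pixel
    return np.asarray(scales)[idx], stack.max(axis=0)

# A sharp vertical step edge between columns 15 and 16: the combined
# response is strongest along that boundary.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
scale_map, strength = best_edge_scale(img, scales=[1.0, 4.0, 16.0])
```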

The ss-edges implementation iteratively climbs the scale-space gradient until an edge point is located, then iteratively steps perpendicular to the gradient along the segment, repeating the gradient climb, to fully extract an ordered edge segment. The advantages of the ss-edge detector, compared with a global search method, are its economical use of computational resources, its flexibility in choosing the search space, and its flexibility in specifying the details of its edge finding. Fig. 2.4 shows the edge detection result for the "Third Degree burn" image.
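The gradient-climbing step of such a tracker can be sketched as a greedy hill-climb on an edge-strength map; the function name `climb_to_edge`, the 8-neighbour rule, and the synthetic strength map are assumptions for illustration, not the ss-edges implementation itself:

```python
import numpy as np

def climb_to_edge(strength, start, max_steps=100):
    """Greedy hill-climb on an edge-strength map: from `start`, move to
    the strongest 8-neighbour until no neighbour is strictly stronger."""
    y, x = start
    h, w = strength.shape
    for _ in range(max_steps):
        best = (y, x)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and strength[ny, nx] > strength[best]:
                    best = (ny, nx)
        if best == (y, x):
            return (y, x)   # local maximum: an edge point
        y, x = best
    return (y, x)

# A smooth ridge of edge strength peaking along column 10: starting off
# the ridge, the climb terminates on it.
cols = np.arange(20)
strength = np.tile(np.exp(-((cols - 10) ** 2) / 20.0), (20, 1))
edge_pt = climb_to_edge(strength, start=(5, 3))
```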
