Face detection has attracted many researchers because it has a wide range of applications. Given an image, face detection involves localizing all faces, if any, in that image. It is the first step in any automated system that addresses problems such as face recognition, face tracking, and facial expression recognition. In the past decade, security systems were based on information about a user's identity such as fingerprints and voice; today many also rely on facial feature recognition. These features are extracted from images using face recognition techniques so that the computers of the security system can react accordingly.

Human face detection is currently an active research area in the computer vision community. Face localization and detection is often the first step in applications such as video surveillance, human-computer interfaces, face recognition, and image database management. Locating and tracking human faces is a prerequisite for face recognition and facial expression analysis, although those tasks often assume that a normalized face image is available. To locate a human face, the system captures an image using a camera and a frame grabber, searches the image for important features, and then uses these features to determine the location of the face.

Face detection techniques fall into three categories: knowledge-based, image-based, and feature-based. The knowledge-based approach uses rules about human facial features to detect faces, for example that a face has two eyes symmetric to each other, a nose, and a mouth, with characteristic relative distances between them. After candidate features are detected, a verification step reduces false detections. This approach works well for frontal images; its difficulties are translating human knowledge into explicit rules and detecting faces in different poses.
In the image-based approach, a predefined standard face pattern is matched against segments of the image to determine whether they contain faces. Training algorithms are used to classify regions into face and non-face classes. Because image-based techniques rely on multi-resolution window scanning, they achieve high detection rates but are slower than feature-based techniques.

The feature-based approach relies on extracting facial features that are not affected by variations in lighting, pose, and other factors, and its methods are classified according to the features they extract. Feature-based techniques derive and analyze features, such as skin color, face shape, or facial features like the eyes and nose, to gain the required knowledge about faces. They are preferred for real-time systems, where the multi-resolution window scanning used by image-based methods is not applicable.

Human skin color is an effective feature for detecting faces. Although different people have different skin colors, several studies have shown that the main difference lies in intensity rather than chrominance. Color is a useful cue for extracting skin regions, though it is available only in color images. Using skin color as a feature for tracking a face has several advantages: color processing is much faster than processing other facial features, and under certain lighting conditions color is orientation invariant, which makes motion estimation easier because only a translation model is needed. Color is not a physical phenomenon; it is a perceptual phenomenon, related to the spectral characteristics of electromagnetic radiation in the visible wavelengths striking the retina.
Tracking human faces using color as a feature also has several problems: the color of a face captured by a camera is influenced by many factors (ambient light, object movement, and so on), different cameras produce significantly different color values even for the same person under the same lighting conditions, and skin color differs from person to person. These problems must be solved before color can be used as a feature for face tracking. On the other hand, the color cue is robust to changes in orientation and scale and can tolerate occlusion well. Its main disadvantage is sensitivity to changes in illumination color and, especially in the case of RGB, to illumination intensity. One way to increase tolerance to intensity changes is to transform the RGB image into a color space whose intensity and chromaticity are separate and use only the chromaticity part for detection.

Eyes are the most important features of the human face, and robust extraction of eye states has many applications; for example, eye states provide important information for facial expression recognition and for human-computer interface systems, so an automatic and robust technique for extracting them from input images is very important. In this paper, eyes are detected using the Hough transform. The circular Hough transform is applied to detect circular objects in the image: since the eyeball and pupil are circular in shape, the eyes can be detected with the circular Hough transform.
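The idea of separating intensity from chromaticity can be sketched with normalized rg chromaticity, one common choice of such a color space. This is a minimal illustration, not the method used in this paper; the function name and sample pixel values are invented for the example.

```python
# Sketch: separating chromaticity from intensity with normalized rg.
# Dividing each channel by the total intensity removes brightness,
# leaving only the chromaticity of the pixel.

def rgb_to_normalized_rg(r, g, b):
    """Map an RGB pixel to (r, g) chromaticity; intensity is divided out."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0  # a black pixel carries no chromaticity
    return r / s, g / s

# The same surface under half the illumination keeps its chromaticity:
bright = rgb_to_normalized_rg(200, 120, 80)
dim = rgb_to_normalized_rg(100, 60, 40)
print(bright, dim)  # both are (0.5, 0.3)
```

Because the two pixels differ only in intensity, their normalized coordinates coincide, which is exactly the tolerance to illumination intensity described above.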
MODEL FOR SKIN COLOR DETECTION
Face detection system
The outline of the proposed approach is shown in Figure 1. In the following sections each step is discussed in detail.
Figure 1: Face detection system
Detecting skin color in color images is a popular and useful technique for face detection. The common approach uses skin color as a feature, and estimating which areas have skin color is often the first vital step of such a strategy; skin color classification has therefore become an important task. It classifies the pixels of an input image into two groups, skin and non-skin pixels. An important requirement for automatic skin detection systems is a trade-off between correct classification rate and response time.

Skin color is one of the most important features of the human face, and feature-based face detection techniques may use skin color information to detect faces in color images with complex backgrounds. The skin detector decides whether certain regions in a color image represent human skin. It must define decision rules that discriminate between skin and non-skin pixels, and to build these rules a model of human skin must be constructed. Several skin color modeling methods have been introduced. The choice of the color space used to model skin color is very important: it is well known that different people have different skin color appearance, but these differences lie mostly in the color intensity rather than in the color itself. That is why many skin detection methods drop the luminance component of the color space. Dropping the luminance component achieves two important goals: first, the model becomes independent of the differences in skin appearance that arise from race or from the lighting of the image; second, the dimensionality of the color space is reduced, which simplifies the calculations. Many color spaces have been used in early skin detection work, such as RGB, normalized RGB, YCbCr, and HSI.
Although RGB is one of the most widely used color spaces for processing and storing digital images, it is not widely used in skin detection algorithms because its chrominance and luminance components are mixed. Normalized RGB and YCbCr are often used in face detection methods.
Face Detection using YCbCr Color Space
The YCbCr color space was defined in response to the increasing demand for digital algorithms for handling video information and has since become a widely used model in digital video. It belongs to the family of television transmission color spaces, which also includes YUV and YIQ. YCbCr is a digital color system, while YUV and YIQ are analog spaces for the PAL and NTSC systems, respectively. These color spaces separate RGB (red-green-blue) into luminance and chrominance information and are useful in compression applications, although the specification of colors in them is somewhat unintuitive. In the YCbCr color space, the luminance information is contained in the Y component and the chrominance information in the Cb and Cr components. Many research studies have found that the chrominance components of skin-tone color are independent of the luminance component. Hence, in this implementation, the Cb and Cr components are used to model the distribution of skin colors. The RGB components are converted to YCbCr using the following formula.
Y = 0.299R + 0.587G + 0.114B
Cb = -0.169R - 0.332G + 0.500B (1)
Cr = 0.500R - 0.419G - 0.081B
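Equation (1) can be sketched directly as a per-pixel conversion. This is a minimal illustration using exactly the coefficients above; the function name is invented for the example.

```python
# Sketch of Equation (1): RGB to YCbCr conversion, using the
# coefficients given in the text.

def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luminance
    cb = -0.169 * r - 0.332 * g + 0.500 * b  # blue-difference chrominance
    cr = 0.500 * r - 0.419 * g - 0.081 * b   # red-difference chrominance
    return y, cb, cr

# A neutral gray pixel has full luminance but (almost) zero chrominance:
y, cb, cr = rgb_to_ycbcr(128, 128, 128)
```

For a neutral gray, Y equals the gray level while Cb and Cr are near zero, which is why the chrominance plane alone can describe skin tone independently of brightness.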
In this system, the author builds a skin color model based on explicitly defined skin regions in CbCr color space. CbCr was chosen in this work for several reasons: it contains no luminance information, which yields a more general skin color model; it has only two components, which helps speed up the calculations; and the transformation from RGB to CbCr requires only simple, fast calculation. The main reason for using explicitly defined skin regions in building the skin detector is the speed with which they detect skin regions. To build the model, the author collected samples of human skin from different races and computed the Cb and Cr values of every pixel in the samples. Once the CbCr skin color model is built, it can be used for skin detection. The first step is pixel-based skin detection: the skin detector tests every pixel of the input image and computes its Cb and Cr values. If the pixel's Cb and Cr values satisfy the following condition, the pixel is considered a skin pixel. The ranges of values for skin color are,
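The pixel-based test can be sketched as a simple range check in the CbCr plane. Note that the numeric thresholds below are an assumption taken from ranges commonly cited in the skin detection literature; the exact ranges used by the author are not reproduced in the text, so these stand in only for illustration.

```python
# Pixel-based skin test in CbCr space. The threshold ranges are an
# ASSUMPTION (commonly cited values, with a +128 offset so Cb and Cr
# fall in 0..255), NOT the ranges used by the author of this paper.

CB_MIN, CB_MAX = 77, 127    # assumed Cb range for skin
CR_MIN, CR_MAX = 133, 173   # assumed Cr range for skin

def is_skin(r, g, b):
    """Classify a single RGB pixel as skin or non-skin."""
    cb = -0.169 * r - 0.332 * g + 0.500 * b + 128
    cr = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return CB_MIN <= cb <= CB_MAX and CR_MIN <= cr <= CR_MAX

print(is_skin(224, 172, 138))  # a skin-like pixel: True
print(is_skin(0, 0, 255))      # pure blue: False
```

Because the test ignores Y entirely, the same pixel classifies the same way under brighter or darker illumination, which is the point of dropping the luminance component.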
The following are some results of skin detection.
a) Input Images
b) Skin color detection using the YCbCr model
Pixels belonging to skin regions exhibit similar Cb and Cr values. Furthermore, it has been shown that a skin color model based on the Cb and Cr values can provide good coverage of different human races. Let the thresholds be chosen as [Cr1, Cr2] and [Cb1, Cb2]; a pixel is classified as skin tone if its [Cr, Cb] values fall within these thresholds. The skin color distribution gives the face portion of the color image. This algorithm also has the constraint that the face should be the only skin region in the image.
The skin color region is extracted effectively because Cb and Cr have a distinct range of values for skin, so the accuracy of this algorithm is quite good. The face is then extracted from the image and from the skin-detected image by first extracting facial features and then drawing a bounding box around the face region with the help of those features.

After the skin region is obtained, facial features, namely the eyes, are extracted. The image obtained after applying the skin color statistics is binarized: it is transformed to a gray-scale image and then to a binary image by applying a suitable threshold. This eliminates the hue and saturation values and considers only the luminance part, which is then thresholded because the features needed for face extraction are darker than the background colors. After thresholding, morphological opening and closing operations are performed to remove noise: opening (erosion followed by dilation) removes noise, and closing (dilation followed by erosion) removes holes. Since the eyes lie in the upper half of the face, the lower part of the face is removed to reduce the search area. The following are the results of removing the lower part of the face.
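The noise-removal step above can be sketched with a pure-Python 3x3 morphological opening on a small binary mask. A real system would use an image processing library; the structuring element size and the toy mask here are illustrative choices.

```python
# Sketch of morphological opening (erosion followed by dilation) with a
# 3x3 structuring element, on a binary mask stored as lists of 0/1.

def erode(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # a pixel survives erosion only if its whole 3x3 window is set
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # a pixel is set after dilation if any neighbour is set
            out[y][x] = int(any(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def opening(mask):
    """Erosion then dilation: removes specks smaller than the element."""
    return dilate(erode(mask))

# A 3x3 solid block survives opening; an isolated noise pixel does not.
mask = [[0] * 7 for _ in range(7)]
for yy in range(2, 5):
    for xx in range(2, 5):
        mask[yy][xx] = 1
mask[5][1] = 1  # isolated noise pixel
opened = opening(mask)
```

Closing is the same two operations in the opposite order (dilation then erosion), which fills small holes instead of removing small specks.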
Detection of upper half of face
Eye Detection by using Circular Hough transform
Eyes are the most important features of the human face, and robust extraction of eye states has many applications; for example, eye states provide important information for facial expression recognition and for human-computer interface systems. When a person laughs, the eyes are nearly closed; when surprised, the eyes open wide. Recently, much research has been actively conducted on building high-performance human-computer interface systems. The eye states can be derived from eye features such as the inner corner of the eye, the outer corner of the eye, the iris, the eyelid, and the eye position.

Objects of interest may have shapes other than lines: parabolas, circles, ellipses, or arbitrary shapes. The Hough transform is a general technique for identifying the locations and orientations of certain types of features in a digital image and for isolating features of a particular shape within an image. Because it requires that the desired features be specified in some parametric form, the classical Hough transform is most commonly used for the detection of regular curves such as lines, circles, and ellipses. The Hough transform detects object edges directly using global image features and can link points to form closed edges in the image field; if the shape of an object's edges is known, edges can be detected and points linked together easily with the Hough transform. The main advantage of the Hough transform is that it tolerates gaps in feature boundary descriptions and is relatively unaffected by image noise. The basic Hough transform detects straight lines; its disadvantage is the large amount of computation required, and as the image size increases, the quantity of data becomes large and processing slows. In this paper the circular Hough transform is used.
Hence, in order to find circles in the image, the circular Hough transform is used. It has a three-dimensional parameter space (xo, yo, r), where xo and yo are the center coordinates of the circle and r is its radius, as in the following equation:

(x - xo)^2 + (y - yo)^2 = r^2 (2)
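The voting scheme over the (xo, yo, r) parameter space can be sketched as follows. This is a minimal accumulator-based illustration on synthetic edge points, not the paper's implementation; the angular step, image size, and radius range are illustrative choices.

```python
# Sketch of the circular Hough transform: every edge point votes for all
# centres (xo, yo) at distance r from it, for each candidate radius r.
# The (xo, yo, r) cell with the most votes is the detected circle.

import math

def circular_hough(edge_points, width, height, r_min, r_max):
    """Return the (xo, yo, r) accumulator cell with the most votes."""
    acc = {}
    for (x, y) in edge_points:
        for r in range(r_min, r_max + 1):
            # centres compatible with this point lie on a circle of radius r
            for t in range(0, 360, 5):
                xo = int(round(x - r * math.cos(math.radians(t))))
                yo = int(round(y - r * math.sin(math.radians(t))))
                if 0 <= xo < width and 0 <= yo < height:
                    key = (xo, yo, r)
                    acc[key] = acc.get(key, 0) + 1
    return max(acc, key=acc.get)

# Synthetic edge map: points on a circle of radius 10 centred at (20, 20).
pts = [(20 + round(10 * math.cos(math.radians(a))),
        20 + round(10 * math.sin(math.radians(a))))
       for a in range(0, 360, 10)]
print(circular_hough(pts, 40, 40, 8, 12))  # expected near (20, 20, 10)
```

The three nested loops make the cost of the full 3D search explicit, which is the computational burden discussed above; practical systems restrict the radius range (here 8 to 12 pixels) to keep the accumulator small.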
In this paper, face detection is performed in the YCbCr color space: skin regions are extracted using a certain set of rules, and facial features such as the eyes are then extracted by the proposed facial feature extraction algorithm, which finally gives the detected face. The method is efficient at classifying the skin color region and the face region. Eye regions are extracted using the circular Hough transform; some time is required because the Hough transform with two parameters is used to obtain the radius in the first frame of an image sequence. In future work, more information about the eyes will be used to obtain better results.