Adaptive Bilateral Filter For Sharpness Enhancement Computer Science Essay



Moving object detection is a current research hotspot that is widely used in fields such as computer vision and video processing. This study used the image processing toolkit of the computer language Matlab to perform moving object detection on video images. First, video pre-processing steps such as frame separation, binarization, gray-level enhancement and filtering were carried out. Then moving objects were detected and extracted from the images using a frame-difference-based dynamic background refreshing algorithm. Finally, the desired video was synthesized by drawing bounding boxes around the moving objects. The results show that implementing the moving object detection algorithm in Matlab is effective.



The term digital image processing refers to the processing of a two-dimensional picture by a digital computer. In a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real or complex numbers represented by a finite number of bits. An image given in the form of a transparency, slide, photograph or X-ray is first digitized and stored as a matrix of binary digits in computer memory. The digitized image can then be processed and/or displayed on a high-resolution television monitor. For display, the image is stored in a rapid-access buffer memory that refreshes the monitor at a rate of 25 frames per second to produce a visually continuous display.


A typical digital image processing system is shown in Fig 1.1.


The main blocks of such a system are the image processor, the digital computer, mass storage, a hard copy device and the operator console.

Fig 1.1 Block Diagram of a Typical Image Processing System


A digitizer converts an image into a numerical representation suitable for input into a digital computer. Some common digitizers are


Flying spot scanner

Image dissector

Vidicon camera

Photosensitive solid-state arrays.


An image processor performs the functions of image acquisition, storage, preprocessing, segmentation, representation, recognition and interpretation, and finally displays or records the resulting image. The following block diagram gives the fundamental sequence involved in an image processing system.

The sequence runs from the problem domain through image acquisition, representation and description, to recognition and interpretation.

Fig 1.2 Block Diagram of Fundamental Sequence involved in an image Processing system


As detailed in the diagram, the first step in the process is image acquisition by an imaging sensor in conjunction with a digitizer to digitize the image. The next step is preprocessing, where the image is improved before being fed as input to the other processes. Preprocessing typically deals with enhancement, noise removal, isolating regions, etc. Segmentation partitions an image into its constituent parts or objects. The output of segmentation is usually raw pixel data, which consists of either the boundary of a region or the pixels in the region themselves. Representation is the process of transforming the raw pixel data into a form useful for subsequent processing by the computer. Description deals with extracting features that are basic in differentiating one class of objects from another. Recognition assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects. The knowledge about a problem domain is incorporated into the knowledge base, which guides the operation of each processing module and also controls the interaction between the modules. Not all modules need necessarily be present for a specific function; the composition of the image processing system depends on its application. The frame rate of the image processor is normally around 25 frames per second.


Mathematical processing of the digitized image, such as convolution, averaging, addition and subtraction, is done by the computer.


The secondary storage devices normally used are floppy disks, CD ROMs etc.


The hard copy device is used to produce a permanent copy of the image and for the storage of the software involved.


The operator console consists of equipment and arrangements for verification of intermediate results and for alterations in the software as and when required. The operator is also capable of checking for any resulting errors and entering requisite data.




Digital image processing refers to processing of the image in digital form. Modern cameras may capture the image in digital form directly, but images generally originate in optical form. They are captured by video cameras and digitized; the digitization process includes sampling and quantization. These images are then processed by at least one of the five fundamental processes, though not necessarily all of them.


This section gives various image processing techniques.

Image Enhancement

Image Restoration


Image Analysis

Image Compression

Image Synthesis

Fig 2.2.1 Image processing Techniques


Image enhancement operations improve the qualities of an image, such as improving its contrast and brightness characteristics, reducing its noise content, or sharpening its details. Enhancement only makes the same information more understandable; it does not add any information to the image.


Image restoration, like enhancement, improves the qualities of an image, but its operations are based on known or measured degradations of the original image. Image restoration is used to restore images with problems such as geometric distortion, improper focus, repetitive noise, and camera motion, and to correct images for known degradations.


Image analysis operations produce numerical or graphical information based on characteristics of the original image. They break an image into objects and then classify them, relying on the image statistics. Common operations are extraction and description of scene and image features, automated measurements, and object classification. Image analysis is mainly used in machine vision applications.


Image compression and decompression reduce the data content necessary to describe the image. Most images contain a lot of redundant information; compression removes these redundancies. Because the size is reduced, the image can be stored or transported efficiently, and the compressed image is decompressed when displayed. Lossless compression preserves the exact data of the original image, whereas lossy compression does not exactly represent the original image but provides excellent compression ratios.


Image synthesis operations create images from other images or non-image data. Image synthesis operations generally create images that are either physically impossible or impractical to acquire.


Digital image processing has a broad spectrum of applications, such as remote sensing via satellites and other spacecraft, image transmission and storage for business applications, medical processing, radar, sonar and acoustic image processing, robotics and automated inspection of industrial parts.


In medical applications, one is concerned with processing of chest X-rays, cineangiograms, projection images of transaxial tomography and other medical images that occur in radiology, nuclear magnetic resonance (NMR) and ultrasonic scanning. These images may be used for patient screening and monitoring or for detection of tumours or other diseases in patients.


Images acquired by satellites are useful in tracking of earth resources; geographical mapping; prediction of agricultural crops, urban growth and weather; flood and fire control; and many other environmental applications. Space image applications include recognition and analysis of objects contained in images obtained from deep space-probe missions.


Image transmission and storage applications occur in broadcast television, teleconferencing, transmission of facsimile images for office automation, communication over computer networks, closed-circuit television based security monitoring systems and military communications.


Radar and sonar images are used for detection and recognition of various types of targets or in guidance and manoeuvring of aircraft or missile systems.


It is used in scanning and transmission for converting paper documents to a digital image form, compressing the image, and storing it on magnetic tape. It is also used in document reading for automatically detecting and recognizing printed characters.


It is used in reconnaissance photo-interpretation for automatic interpretation of earth satellite imagery to look for sensitive targets or military threats and target acquisition and guidance for recognizing and tracking targets in real-time smart-bomb and missile-guidance systems.




As a video consists of frame sequences with a certain temporal continuity, moving object detection in video is conducted by extracting frame sequences from the video at a definite cycle. Moving object detection therefore has something in common with object detection in still images; the difference is that moving object detection relies more on the motion characteristics of objects, i.e. their continuity in time. The method frequently used in moving object detection is video sequence analysis. Two or more frames acquired at different times contain information about the relative motion between an imaging system and a scene. This information takes the form of gray and color variation between frames, or variation in the location and properties of marks such as dots, line segments and areas. Information about motion can therefore be obtained through analysis and processing of images acquired at different times. Video sequence analysis methods can be classified into three types: the optical flow method [5,6], the background difference method [7,8] and the adjacent frame difference method [9,10].

The optical flow method reflects the image variation caused by motion in a definite time interval. The motion field of the images is estimated so that similar motion vectors can be incorporated into a moving object. Solving transcendental equations is required; the calculation is complex, extremely sensitive to noise and computationally heavy, and the real-time performance and practicability are poor, so this method is difficult to use in real-time video processing.

The background difference method detects the motion area using the difference between the current image and a background image. An image is divided into foreground and background.
The background is modeled, and the current frame is compared with the background model pixel by pixel. Pixels that agree with the background model are labeled as background, while the others are labeled as foreground. Background subtraction is a common method in moving object detection and is used mostly in situations with a relatively still background. The method has low complexity; however, the acquired background images become sensitive over time to scene changes caused by illumination and external conditions. Many false points can emerge, which affects object detection, so a mechanism for refreshing the background reference frames must be added in uncontrolled environments. Moreover, the method does not cope well with camera motion or with large variations in background gray level.

In the adjacent frame difference method, moving objects are extracted from the differences among two or three continuous frames. This method is the simplest and most direct, and quickly detects the changing parts of the video. In fact, it only detects objects in relative motion. Since the time interval between two images is quite short, illumination changes have little influence on the difference images, so the detection is effective and stable. The frame difference method adapts well to environments with intensive fluctuation and easily detects the pixels that change distinctly when the target moves; however, it is inadequate for points whose pixels change insignificantly. Accordingly, the method is largely used in situations with a comparatively simple background and little environmental interference.
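The adjacent frame difference idea described above can be sketched compactly. The following Python/NumPy fragment is a minimal illustration, not the essay's Matlab implementation; the threshold value is a hypothetical choice.

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=30):
    """Mark pixels whose absolute gray-level change between two
    consecutive frames exceeds a threshold (hypothetical value)."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return (diff > threshold).astype(np.uint8)

# Tiny synthetic example: a bright "object" moves one pixel to the right.
prev_frame = np.zeros((5, 5), dtype=np.uint8)
curr_frame = np.zeros((5, 5), dtype=np.uint8)
prev_frame[2, 1] = 200
curr_frame[2, 2] = 200
mask = frame_difference_mask(prev_frame, curr_frame)
# Both the vacated pixel and the newly occupied pixel register as motion.
```

Note how the mask flags both the position the object left and the one it entered, which is why the method only sees objects in relative motion.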




The computer language Matlab is an engineering and scientific calculation-oriented interactive computing environment introduced by the MathWorks company, which has been widely used in various fields including biomedical engineering, image information processing, signal analysis, time series analysis, cybernetics and system theory. As a programming language and visualization tool, Matlab is characterized by easy operation, simple grammar, abundant functions, a friendly interface and strong openness. Consisting of a main kit and toolkits with various functions, and based on matrix operation, Matlab integrates computation, visualization and programming into a simple and easy interactive work environment. It supports engineering calculation, algorithm research, modelling and simulation, prototype development, data analysis, scientific and engineering drawing, application program design and the design of graphical user interfaces. The techniques and methods frequently used in image processing can be simplified with the functions provided by the image processing toolkit of Matlab, which saves plenty of time and energy for image processing staff and, consequently, improves the efficiency of image processing. The whole flow of the algorithm comprises the following steps.

The first step is video pre-processing. Matlab is used to read out video images stored in the computer or transferred from a video camera before the pre-processing of frame separation is carried out. Matlab provides special functions for capturing image data from video; this can be achieved directly with the library function aviread. Residual frames are obtained through the adjacent-frame difference method. The image information of each frame is stored as a three-dimensional matrix whose first two dimensions, M and N, represent the two-dimensional size of the video frame; three M*N planes store the pixel gray values of the red, green and blue components of the image separately. A new three-dimensional matrix is obtained by subtracting the matrices of each two adjacent images directly; it represents the residual image information between adjacent frames. The color residual frame is then transformed into a grayscale image by taking a weighted average of the gray values of the red, green and blue components. Tests showed that the choice of weights has little effect on the result of the algorithm, so the arithmetic mean was adopted directly. After the transformation, the residual image takes the form of a two-dimensional M*N matrix, in which each element represents the equivalent gray value of the corresponding pixel in the residual image.
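The pre-processing chain above (adjacent-frame subtraction followed by an arithmetic-mean grayscale conversion) can be sketched as follows. This is a Python/NumPy illustration of the described steps, not the essay's Matlab code; the frame sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical adjacent RGB frames (M = 4, N = 6, three color planes).
frame_a = rng.integers(0, 256, size=(4, 6, 3)).astype(np.int32)
frame_b = rng.integers(0, 256, size=(4, 6, 3)).astype(np.int32)

# Residual image: direct subtraction of the two adjacent frames.
residual = np.abs(frame_b - frame_a)

# Grayscale conversion by arithmetic mean of the R, G, B components,
# the weighting the essay reports as adequate for this algorithm.
gray = residual.mean(axis=2)
```

The result is the two-dimensional M*N matrix of equivalent gray values that the detection step consumes.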

The residual image is used for moving object detection and extraction in the original image. As the core of the algorithm, the subprogram proceeds as follows. Beginning from the first element on the upper left of the residual matrix, a 5*5 window is formed around each element as its center, and a new matrix is obtained by taking the average over each window as an element. After threshold decision, a simplified matrix is obtained. In the simplified matrix, the rows are scanned to find the first and last rows containing nonzero elements, which serve as the upper and lower bounds of the object along the ordinate. In the same way, the left and right bounds are obtained, and the central coordinate is computed from the four bounds. Finally, according to the dimension conversion relationship between the simplified matrix and the original matrix, the four bounds and the central coordinate of the moving object in the original matrix are obtained.
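A compact sketch of this detection step, again in Python/NumPy rather than the essay's Matlab; the window size follows the text, while the threshold and test data are illustrative.

```python
import numpy as np

def moving_object_bounds(residual, win=5, threshold=20):
    """Average the residual over a win-by-win window around each pixel,
    binarize by a threshold, and read off the bounding box and centre
    of the surviving region (threshold is a hypothetical value)."""
    m, n = residual.shape
    pad = win // 2
    padded = np.pad(residual.astype(np.float64), pad, mode='edge')
    means = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            means[i, j] = padded[i:i + win, j:j + win].mean()
    mask = means > threshold
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    top, bottom = rows.min(), rows.max()
    left, right = cols.min(), cols.max()
    centre = ((top + bottom) / 2, (left + right) / 2)
    return top, bottom, left, right, centre

residual = np.zeros((20, 20))
residual[8:12, 5:9] = 100        # synthetic moving-object residual
result = moving_object_bounds(residual)
```

The window averaging suppresses isolated noisy pixels, so the surviving bounds track the coherent moving region rather than single outliers.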


videoinput - Create video input object

obj = videoinput(adaptorname)

obj = videoinput(adaptorname,deviceID)


obj = videoinput(adaptorname) constructs the video input object obj. A video input object represents the connection between MATLAB and a particular image acquisition device. adaptorname is a text string that specifies the name of the adaptor used to communicate with the device. Use the imaqhwinfo function to determine the adaptors available on your system.

obj = videoinput(adaptorname,deviceID) constructs a video input object obj, where deviceID is a numeric scalar value that identifies a particular device available through the specified adaptor, adaptorname. Use the imaqhwinfo(adaptorname) syntax to determine the devices available through the specified adaptor. If deviceID is not specified, the first available device ID is used. As a convenience, a device's name can be used in place of the deviceID. If multiple devices have the same name, the first available device is used.

getsnapshot - Immediately return single image frame

frame = getsnapshot(obj)


frame = getsnapshot(obj) immediately returns one single image frame, frame, from the video input object obj. The frame of data returned is independent of the video input object FramesPerTrigger property and has no effect on the value of the FramesAvailable or FramesAcquired property.

The object obj must be a 1-by-1 video input object.

rgb2gray - Convert RGB image or colormap to grayscale


I = rgb2gray(RGB)

newmap = rgb2gray(map)


I = rgb2gray(RGB) converts the truecolor image RGB to the grayscale intensity image I. rgb2gray converts RGB images to grayscale by eliminating the hue and saturation information while retaining the luminance.

newmap = rgb2gray(map) returns a grayscale colormap equivalent to map.

Note   A grayscale image is also called a gray-scale, gray scale, or gray-level image.

Class Support

If the input is an RGB image, it can be of class uint8, uint16, single, or double. The output image I is of the same class as the input image. If the input is a colormap, the input and output colormaps are both of class double.


Convert an RGB image to a grayscale image.

I = imread('board.tif');

J = rgb2gray(I);

figure, imshow(I), figure, imshow(J);

Convert the colormap to a grayscale colormap.

[X,map] = imread('trees.tif');

gmap = rgb2gray(map);

figure, imshow(X,map), figure, imshow(X,gmap);


rgb2gray converts RGB values to grayscale values by forming a weighted sum of the R, G, and B components:

0.2989 * R + 0.5870 * G + 0.1140 * B

Note that these are the same weights used by the rgb2ntsc function to compute the Y component.
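As a quick check of the quoted weights (plain Python, independent of MATLAB itself):

```python
# Luminance formula quoted above; the weights sum to 0.9999, so equal
# R, G and B map to (almost) the same gray value.
def luminance(r, g, b):
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

gray_white = luminance(255, 255, 255)   # just below full white
gray_green = luminance(0, 255, 0)       # green dominates the weighting
```

Because the green weight is largest, a pure green pixel maps to a brighter gray than pure red or pure blue, which matches how the human eye weights luminance.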

im2bw - Convert image to binary image, based on threshold


BW = im2bw(I, level)

BW = im2bw(X, map, level)

BW = im2bw(RGB, level)


BW = im2bw(I, level) converts the grayscale image I to a binary image. The output image BW replaces all pixels in the input image with luminance greater than level with the value 1 (white) and replaces all other pixels with the value 0 (black). Specify level in the range [0,1]. This range is relative to the signal levels possible for the image's class. Therefore, a level value of 0.5 is midway between black and white, regardless of class. To compute the level argument, you can use the function graythresh. If you do not specify level, im2bw uses the value 0.5.

BW = im2bw(X, map, level) converts the indexed image X with colormap map to a binary image.

BW = im2bw(RGB, level) converts the truecolor image RGB to a binary image.

If the input image is not a grayscale image, im2bw converts the input image to grayscale, and then converts this grayscale image to binary by thresholding.
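The thresholding rule can be mimicked in a few lines of Python/NumPy for the uint8 case. This is a sketch of the documented behavior, not im2bw itself.

```python
import numpy as np

def im2bw_like(gray_uint8, level=0.5):
    """Pixels with luminance strictly greater than level (scaled to the
    uint8 range, i.e. level*255) become 1; all others become 0."""
    return (gray_uint8 > level * 255).astype(np.uint8)

# With the default level of 0.5 the cut falls at 127.5 for uint8 data.
bw = im2bw_like(np.array([0, 100, 127, 128, 200, 255], dtype=np.uint8))
```

Scaling level by the class maximum is what makes 0.5 mean "midway between black and white" regardless of the input class.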

Class Support

The input image can be of class uint8, uint16, single, int16, or double, and must be nonsparse. The output image BW is of class logical. I and X must be 2-D. RGB images are M-by-N-by-3.


load trees

BW = im2bw(X,map,0.4);

imshow(X,map), figure, imshow(BW)

strel - Create morphological structuring element (STREL)


SE = strel(shape, parameters)

SE = strel('arbitrary', NHOOD)

SE = strel('arbitrary', NHOOD, HEIGHT)

SE = strel('ball', R, H, N)

SE = strel('diamond', R)

SE = strel('disk', R, N)

SE = strel('line', LEN, DEG)

SE = strel('octagon', R)

SE = strel('pair', OFFSET)

SE = strel('periodicline', P, V)

SE = strel('rectangle', MN)

SE = strel('square', W)


SE = strel(shape, parameters) creates a structuring element, SE, of the type specified by shape. Depending on shape, strel can take additional parameters. The flat shapes are 'arbitrary', 'diamond', 'disk', 'line', 'octagon', 'pair', 'periodicline', 'rectangle' and 'square'; the nonflat shapes are 'arbitrary' (with a HEIGHT argument) and 'ball'. See the syntax descriptions that follow for details about creating each type of structuring element.
SE = strel('arbitrary', NHOOD) creates a flat structuring element where NHOOD specifies the neighborhood. NHOOD is a matrix containing 1's and 0's; the location of the 1's defines the neighborhood for the morphological operation. The center (or origin) of NHOOD is its center element, given by floor((size(NHOOD)+1)/2). You can omit the 'arbitrary' string and just use strel(NHOOD).

SE = strel('arbitrary', NHOOD, HEIGHT) creates a nonflat structuring element, where NHOOD specifies the neighborhood. HEIGHT is a matrix the same size as NHOOD containing the height values associated with each nonzero element of NHOOD. The HEIGHT matrix must be real and finite valued. You can omit the 'arbitrary' string and just use strel(NHOOD,HEIGHT).

SE = strel('ball', R, H, N) creates a nonflat, ball-shaped structuring element (actually an ellipsoid) whose radius in the X-Y plane is R and whose height is H. Note that R must be a nonnegative integer, H must be a real scalar, and N must be an even nonnegative integer. When N is greater than 0, the ball-shaped structuring element is approximated by a sequence of N nonflat, line-shaped structuring elements. When N equals 0, no approximation is used, and the structuring element members consist of all pixels whose centers are no greater than R away from the origin. The corresponding height values are determined from the formula of the ellipsoid specified by R and H. If N is not specified, the default value is 8.

Note   Morphological operations run much faster when the structuring element uses approximations (N > 0) than when it does not (N = 0).

SE = strel('diamond', R) creates a flat, diamond-shaped structuring element, where R specifies the distance from the structuring element origin to the points of the diamond. R must be a nonnegative integer scalar.

SE = strel('disk', R, N) creates a flat, disk-shaped structuring element, where R specifies the radius. R must be a nonnegative integer. N must be 0, 4, 6, or 8. When N is greater than 0, the disk-shaped structuring element is approximated by a sequence of N periodic-line structuring elements. When N equals 0, no approximation is used, and the structuring element members consist of all pixels whose centers are no greater than R away from the origin. If N is not specified, the default value is 4.

Note   Morphological operations run much faster when the structuring element uses approximations (N > 0) than when it does not (N = 0). However, structuring elements that do not use approximations (N = 0) are not suitable for computing granulometries. Sometimes it is necessary for strel to use two extra line structuring elements in the approximation, in which case the number of decomposed structuring elements used is N + 2.

SE = strel('line', LEN, DEG) creates a flat linear structuring element that is symmetric with respect to the neighborhood center. DEG specifies the angle (in degrees) of the line as measured in a counterclockwise direction from the horizontal axis. LEN is approximately the distance between the centers of the structuring element members at opposite ends of the line.

SE = strel('octagon', R) creates a flat, octagonal structuring element, where R specifies the distance from the structuring element origin to the sides of the octagon, as measured along the horizontal and vertical axes. R must be a nonnegative multiple of 3.

SE = strel('pair', OFFSET) creates a flat structuring element containing two members. One member is located at the origin. The second member's location is specified by the vector OFFSET. OFFSET must be a two-element vector of integers.

SE = strel('periodicline', P, V) creates a flat structuring element containing 2*P+1 members. V is a two-element vector containing integer-valued row and column offsets. One structuring element member is located at the origin. The other members are located at 1*V, -1*V, 2*V, -2*V, ..., P*V, -P*V.
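The membership rule for 'periodicline' is easy to state in code; the following Python sketch just enumerates the documented offsets for illustrative values of P and V.

```python
def periodic_line_offsets(p, v):
    """Offsets of the 2*p+1 members of a 'periodicline' structuring
    element: k*v for k = -p, ..., -1, 0, 1, ..., p."""
    return [(k * v[0], k * v[1]) for k in range(-p, p + 1)]

offsets = periodic_line_offsets(2, (1, 2))   # hypothetical P = 2, V = [1 2]
```

One member always sits at the origin (k = 0), and the rest are placed symmetrically along the direction of V.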

SE = strel('rectangle', MN) creates a flat, rectangle-shaped structuring element, where MN specifies the size. MN must be a two-element vector of nonnegative integers. The first element of MN is the number of rows in the structuring element neighborhood; the second element is the number of columns.

SE = strel('square', W) creates a square structuring element whose width is W pixels. W must be a nonnegative integer scalar. 


For all shapes except 'arbitrary', structuring elements are constructed using a family of techniques known collectively as structuring element decomposition. The principle is that dilation by some large structuring elements can be computed faster by dilation with a sequence of smaller structuring elements. For example, dilation by an 11-by-11 square structuring element can be accomplished by dilating first with a 1-by-11 structuring element and then with an 11-by-1 structuring element. This results in a theoretical performance improvement of a factor of 5.5, although in practice the actual performance improvement is somewhat less. Structuring element decompositions used for the 'disk' and 'ball' shapes are approximations; all other decompositions are exact.
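The decomposition claim for the 11-by-11 square can be verified with a naive dilation written in Python/NumPy (a brute-force sketch, nothing like the toolbox internals):

```python
import numpy as np

def dilate(img, se_rows, se_cols):
    """Flat dilation by an se_rows-by-se_cols rectangle: each output
    pixel is the max of the input over that window (zero-padded)."""
    m, n = img.shape
    pr, pc = se_rows // 2, se_cols // 2
    padded = np.pad(img, ((pr, pr), (pc, pc)))
    out = np.zeros_like(img)
    for i in range(m):
        for j in range(n):
            out[i, j] = padded[i:i + se_rows, j:j + se_cols].max()
    return out

img = np.zeros((15, 15), dtype=np.uint8)
img[7, 7] = 1
direct = dilate(img, 11, 11)                    # one 11-by-11 dilation
decomposed = dilate(dilate(img, 1, 11), 11, 1)  # 1-by-11 then 11-by-1
same = np.array_equal(direct, decomposed)
```

Dilating with the row element and then the column element touches 11 + 11 = 22 neighborhood positions instead of 121, which is where the quoted theoretical factor of 5.5 comes from.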


This table lists the methods supported by the STREL object.

getheight - Get height of structuring element

getneighbors - Get structuring element neighbor locations and heights

getnhood - Get structuring element neighborhood

getsequence - Extract sequence of decomposed structuring elements

isflat - Return true for flat structuring element

reflect - Reflect structuring element

translate - Translate structuring element

se1 = strel('square',11) % 11-by-11 square

se2 = strel('line',10,45) % length 10, angle 45 degrees

se3 = strel('disk',15) % disk, radius 15

se4 = strel('ball',15,5) % ball, radius 15, height 5


The method used to decompose diamond-shaped structuring elements is known as "logarithmic decomposition" [1].

The method used to decompose disk structuring elements is based on the technique called "radial decomposition using periodic lines" [2], [3]. For details, see the MakeDiskStrel subfunction in toolbox/images/images/@strel/strel.m.

The method used to decompose ball structuring elements is the technique called "radial decomposition of spheres" [2].

imerode - Erode image


IM2 = imerode(IM,SE)

IM2 = imerode(IM,NHOOD)

IM2 = imerode(...,PACKOPT,M)

IM2 = imerode(...,SHAPE)


IM2 = imerode(IM,SE) erodes the grayscale, binary, or packed binary image IM, returning the eroded image IM2. The argument SE is a structuring element object or array of structuring element objects returned by the strel function.

If IM is logical and the structuring element is flat, imerode performs binary erosion; otherwise it performs grayscale erosion. If SE is an array of structuring element objects, imerode performs multiple erosions of the input image, using each structuring element in SE in succession.

IM2 = imerode(IM,NHOOD) erodes the image IM, where NHOOD is an array of 0's and 1's that specifies the structuring element neighborhood. This is equivalent to the syntax imerode(IM,strel(NHOOD)). The imerode function determines the center element of the neighborhood by floor((size(NHOOD)+1)/2).

IM2 = imerode(...,PACKOPT,M) specifies whether IM is a packed binary image and, if it is, provides the row dimension M of the original unpacked image. PACKOPT can have either of the following values; the default is 'notpacked'.

'ispacked' - IM is treated as a packed binary image as produced by bwpack. IM must be a 2-D uint32 array and SE must be a flat 2-D structuring element.

'notpacked' - IM is treated as a normal array.

If PACKOPT is 'ispacked', you must specify a value for M.

IM2 = imerode(...,SHAPE) specifies the size of the output image. SHAPE can have either of the following values; the default is 'same'.

'same' - Make the output image the same size as the input image. If the value of PACKOPT is 'ispacked', SHAPE must be 'same'.

'full' - Compute the full erosion.


The binary erosion of A by B, denoted A ⊖ B, is defined as the set operation A ⊖ B = {z | (B)z ⊆ A}. In other words, it is the set of pixel locations z where the structuring element translated to location z overlaps only with foreground pixels in A.

In the general form of gray-scale erosion, the structuring element has a height. The gray-scale erosion of A(x, y) by B(x, y) is defined as:

(A ⊖ B)(x, y) = min {A(x + x′, y + y′) − B(x′, y′) | (x′, y′) ∊ DB},

where DB is the domain of the structuring element B and A(x, y) is assumed to be +∞ outside the domain of the image. To create a structuring element with nonzero height values, use the syntax strel(nhood,height), where height gives the height values and nhood corresponds to the structuring element domain, DB.

Most commonly, gray-scale erosion is performed with a flat structuring element (B(x, y) = 0). Gray-scale erosion using such a structuring element is equivalent to a local-minimum operator:

(A ⊖ B)(x, y) = min {A(x + x′, y + y′) | (x′, y′) ∊ DB}.

All of the strel syntaxes except for strel(nhood,height), strel('arbitrary',nhood,height), and strel('ball', ...) produce flat structuring elements.
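The local-minimum form of flat erosion translates directly into code. The following Python/NumPy sketch pads with +inf to mimic the convention that A is +∞ outside its domain; the window size and test image are illustrative.

```python
import numpy as np

def erode_flat(img, se_rows, se_cols):
    """Grayscale erosion by a flat rectangular structuring element:
    each output pixel is the local minimum over the window, with the
    image treated as +inf outside its domain."""
    m, n = img.shape
    pr, pc = se_rows // 2, se_cols // 2
    padded = np.pad(img.astype(np.float64), ((pr, pr), (pc, pc)),
                    constant_values=np.inf)
    out = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            out[i, j] = padded[i:i + se_rows, j:j + se_cols].min()
    return out

img = np.full((5, 5), 9.0)
img[2, 2] = 1.0                   # a single dark pixel
eroded = erode_flat(img, 3, 3)    # the dark value spreads to its 3x3 hood
```

Padding with +inf rather than zero matters: a zero pad would wrongly darken the border, whereas +inf leaves border minima determined by the image alone.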

For more information on binary erosion, see [1].

Class Support

IM can be numeric or logical and it can be of any dimension. If IM is logical and the structuring element is flat, the output image is logical; otherwise the output image has the same class as the input. If the input is packed binary, then the output is also packed binary.


Erode a binary image with a disk structuring element.

originalBW = imread('circles.png');

se = strel('disk',11);

erodedBW = imerode(originalBW,se);

imshow(originalBW), figure, imshow(erodedBW)

Erode a grayscale image with a rolling ball.

I = imread('cameraman.tif');

se = strel('ball',5,5);

I2 = imerode(I,se);

imshow(I), title('Original')

figure, imshow(I2), title('Eroded')

Algorithm Notes

imerode automatically takes advantage of the decomposition of a structuring element object (if a decomposition exists). Also, when performing binary erosion with a structuring element object that has a decomposition, imerode automatically uses binary image packing to speed up the erosion.

Erosion using bit packing is described in [3].


Enhanced Data rates for GSM Evolution (EDGE) (also known as Enhanced GPRS (EGPRS), IMT Single Carrier (IMT-SC), or Enhanced Data rates for Global Evolution) is a digital mobile phone technology that allows improved data transmission rates as a backward-compatible extension of GSM. EDGE is considered a pre-3G radio technology and is part of ITU's 3G definition.[1] EDGE was deployed on GSM networks beginning in 2003 - initially by Cingular (now AT&T) in the United States.[2]

EDGE is standardized by 3GPP as part of the GSM family.

Through the introduction of sophisticated methods of coding and transmitting data, EDGE delivers higher bit-rates per radio channel, resulting in a threefold increase in capacity and performance compared with an ordinary GSM/GPRS connection.

EDGE can be used for any packet switched application, such as an Internet connection.

Evolved EDGE continues in Release 7 of the 3GPP standard, providing reduced latency and more than doubled performance, e.g. to complement High-Speed Packet Access (HSPA). Peak bit rates of up to 1 Mbit/s and typical bit rates of 400 kbit/s can be expected.


EDGE/EGPRS is implemented as a bolt-on enhancement for 2.5G GSM/GPRS networks, making it easier for existing GSM carriers to upgrade to it. EDGE is a superset of GPRS and can function on any network with GPRS deployed on it, provided the carrier implements the necessary upgrade.

EDGE requires no hardware or software changes to be made in GSM core networks. EDGE-compatible transceiver units must be installed and the base station subsystem needs to be upgraded to support EDGE. If the operator already has this in place, which is often the case today, the network can be upgraded to EDGE by activating an optional software feature. Today EDGE is supported by all major chip vendors for both GSM and WCDMA/HSPA.

Transmission techniques

In addition to Gaussian minimum-shift keying (GMSK), EDGE uses higher-order 8 phase-shift keying (8-PSK) for the upper five of its nine modulation and coding schemes. EDGE produces a 3-bit word for every change in carrier phase, which effectively triples the gross data rate offered by GSM. EDGE, like GPRS, uses a rate adaptation algorithm that adapts the modulation and coding scheme (MCS) according to the quality of the radio channel, and thus the bit rate and robustness of data transmission. It also introduces a technology not found in GPRS, Incremental Redundancy, which, instead of retransmitting disturbed packets, sends additional redundancy information to be combined in the receiver. This increases the probability of correct decoding.
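The "triples the gross data rate" claim follows directly from the constellation sizes. A minimal arithmetic sketch (the variable names are illustrative):

```python
import math

# GMSK is a binary modulation: 1 bit per symbol.
bits_gmsk = 1

# 8-PSK has 8 constellation points, so each symbol carries log2(8) bits.
bits_8psk = int(math.log2(8))  # = 3

# At the same GSM symbol rate, the gross rate scales with bits per symbol.
print(bits_8psk / bits_gmsk)   # 3.0 -> gross data rate triples
```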

EDGE can carry a bandwidth up to 236.8 kbit/s (with end-to-end latency of less than 150 ms) for 4 timeslots (the theoretical maximum is 473.6 kbit/s for 8 timeslots) in packet mode. This means it can handle four times as much traffic as standard GPRS. EDGE meets the International Telecommunication Union's requirement for a 3G network and has been accepted by the ITU as part of the IMT-2000 family of 3G standards. It also enhances the circuit data mode called HSCSD, increasing the data rate of this service. EDGE is part of ITU's 3G definition and is considered a 3G radio technology.[1]
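The 236.8 and 473.6 kbit/s figures quoted above are simply multiples of the per-timeslot rate of the highest coding scheme, MCS-9 (59.2 kbit/s per timeslot). A quick check:

```python
# Peak EDGE rates as multiples of the MCS-9 per-timeslot rate.
MCS9_PER_SLOT_KBITS = 59.2  # highest EDGE coding scheme, kbit/s per timeslot

print(MCS9_PER_SLOT_KBITS * 4)  # 236.8 kbit/s with 4 timeslots
print(MCS9_PER_SLOT_KBITS * 8)  # 473.6 kbit/s theoretical maximum, all 8 timeslots
```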

EDGE modulation and coding scheme (MCS)

EDGE is four times as efficient as GPRS. GPRS uses four coding schemes (CS-1 to 4) while EDGE uses nine Modulation and Coding Schemes (MCS-1 to 9).

Coding and modulation scheme (MCS)    Modulation    Bit rate (kbit/s per timeslot)

MCS-1    GMSK      8.8

MCS-2    GMSK     11.2

MCS-3    GMSK     14.8

MCS-4    GMSK     17.6

MCS-5    8-PSK    22.4

MCS-6    8-PSK    29.6

MCS-7    8-PSK    44.8

MCS-8    8-PSK    54.4

MCS-9    8-PSK    59.2
Evolved EDGE

Evolved EDGE improves on EDGE in several ways. Latency is reduced by halving the Transmission Time Interval (from 20 ms to 10 ms). Peak bit rates are increased to up to 1 Mbit/s, and latency is reduced to as low as 80 ms, using dual carriers, a higher symbol rate, higher-order modulation (32QAM and 16QAM instead of 8-PSK), and turbo codes for improved error correction. Finally, signal quality is improved using dual antennas, raising average bit rates and spectral efficiency. Evolved EDGE can be introduced gradually as software upgrades, taking advantage of the installed base. With Evolved EDGE, end users will be able to experience mobile Internet connections comparable to a 500 kbit/s ADSL service. [3]

imadjust - Adjust image intensity values or colormap


J = imadjust(I)

J = imadjust(I,[low_in; high_in],[low_out; high_out])

J = imadjust(I,[low_in; high_in],[low_out; high_out],gamma)

newmap = imadjust(map,[low_in; high_in],[low_out; high_out],gamma)

RGB2 = imadjust(RGB1,...)


J = imadjust(I) maps the intensity values in grayscale image I to new values in J such that 1% of data is saturated at low and high intensities of I. This increases the contrast of the output image J. This syntax is equivalent to imadjust(I,stretchlim(I)).

J = imadjust(I,[low_in; high_in],[low_out; high_out]) maps the values in I to new values in J such that values between low_in and high_in map to values between low_out and high_out. Values for low_in, high_in, low_out, and high_out must be between 0 and 1. Values below low_in and above high_in are clipped; that is, values below low_in map to low_out, and those above high_in map to high_out. You can use an empty matrix ([]) for [low_in high_in] or for [low_out high_out] to specify the default of [0 1].

J = imadjust(I,[low_in; high_in],[low_out; high_out],gamma) maps the values in I to new values in J, where gamma specifies the shape of the curve describing the relationship between the values in I and J. If gamma is less than 1, the mapping is weighted toward higher (brighter) output values. If gamma is greater than 1, the mapping is weighted toward lower (darker) output values. If you omit the argument, gamma defaults to 1 (linear mapping).
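The mapping described above can be written in a few lines. The NumPy sketch below assumes a floating-point image scaled to [0, 1] and mirrors only the core clip-normalize-gamma-rescale behavior; the real imadjust handles more input classes and argument defaults.

```python
import numpy as np

def imadjust(I, low_in=0.0, high_in=1.0, low_out=0.0, high_out=1.0, gamma=1.0):
    """Sketch of imadjust's intensity mapping for a float image in [0, 1]."""
    J = np.clip(I, low_in, high_in)           # values outside [low_in, high_in] are clipped
    J = (J - low_in) / (high_in - low_in)     # normalize to [0, 1]
    return low_out + (high_out - low_out) * J ** gamma  # gamma curve, rescale to output range

I = np.array([0.0, 0.3, 0.5, 0.7, 1.0])
print(imadjust(I, 0.3, 0.7, 0.0, 1.0))       # linear contrast stretch of [0.3, 0.7]
print(imadjust(I, gamma=0.5))                # gamma < 1 brightens midtones
```

Note that choosing high_out < low_out makes the rescaling factor negative, which reverses the output image, matching the photographic-negative behavior mentioned below.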

newmap = imadjust(map,[low_in; high_in],[low_out; high_out],gamma) transforms the colormap associated with an indexed image. If low_in, high_in, low_out, high_out, and gamma are scalars, then the same mapping applies to red, green, and blue components. Unique mappings for each color component are possible when:

low_in and high_in are both 1-by-3 vectors,

low_out and high_out are both 1-by-3 vectors, or

gamma is a 1-by-3 vector.

The rescaled colormap newmap is the same size as map.

RGB2 = imadjust(RGB1,...) performs the adjustment on each image plane (red, green, and blue) of the RGB image RGB1. As with the colormap adjustment, you can apply unique mappings to each plane.

Note   If high_out < low_out, the output image is reversed, as in a photographic negative.

Class Support

For syntax variations that include an input image (rather than a colormap), the input image can be of class uint8, uint16, int16, single, or double. The output image has the same class as the input image. For syntax variations that include a colormap, the input and output colormaps are of class double.


Adjust a low-contrast grayscale image.

I = imread('pout.tif');

J = imadjust(I);

imshow(I), figure, imshow(J)

Adjust the grayscale image, specifying the contrast limits.

K = imadjust(I,[0.3 0.7],[]);

figure, imshow(K)

Adjust an RGB image.

RGB1 = imread('football.jpg');

RGB2 = imadjust(RGB1,[.2 .3 0; .6 .7 1],[]);

imshow(RGB1), figure, imshow(RGB2)