# Active Research in Computer Vision


Moving object detection is one of the active research areas in computer vision. It is widely used in surveillance, guidance of autonomous vehicles, video compression, tracking of moving objects, automatic target recognition and so on [1]. The aim of moving object detection is to separate the moving objects from the background. According to the motion of the camera, detection methods can be divided into two types: moving object detection in static scenes and in dynamic scenes. Detection in static scenes is relatively simple; it is a mature technology that has been applied successfully in various systems. Detection in dynamic scenes, however, still has many unsolved key problems, which become harder when the background is complicated. The methods for dynamic scenes are therefore a hot spot of current research in computer vision.

At present there are two dominant approaches to moving object detection: the optical flow method and the global motion compensation method. The optical flow method [2] has high computational complexity and poor noise robustness, and is practical only with special hardware, so the global motion compensation method [3] has been widely used in the field. Its main idea is to estimate the global motion parameters of the camera between frames through image matching, and then compensate for the camera motion; in this way, detection in dynamic scenes is transformed into detection in static scenes. The difficulty lies in estimating the global motion parameters robustly, which involves feature point extraction and matching, removal of invalid matching points, and solving for the optimal global motion parameters. In this paper, we adopt the latter method.

Several popular feature detectors, including the Harris corner [4], SIFT [5-6] and SURF [7], have been widely used for global motion compensation. The Harris corner detector is not scale invariant and is sensitive to changes in gray level and illumination. The SIFT (Scale-Invariant Feature Transform) algorithm is widely used in many applications because its descriptor is relatively invariant to changes in orientation, scale, illumination and contrast. However, SIFT cannot satisfy real-time requirements because of its large amount of calculation and high time complexity. The SURF (Speeded Up Robust Features) algorithm, an accelerated version of SIFT, offers a substantial speed-up. To achieve the real-time requirement, we choose the SURF algorithm, and to further improve speed and precision we make some improvements on top of it.

The proposed algorithm proceeds as follows. First, extract feature points and match them between adjacent frames using the improved SURF algorithm and matching method. Then choose the affine transformation model to describe the global motion, use RANSAC [8] to remove the invalid matching points, and use the least square method [9] to obtain the optimal global motion parameters (the affine transformation matrix). Finally, compensate the previous frame with these parameters and obtain the objects from the difference between the current frame and the compensated frame. After morphological image processing, we obtain the accurate moving objects. The overall process of the proposed algorithm is summarized in Fig. 1.

Fig.1. Flowchart of the proposed algorithm.

## 2 Improved SURF algorithm and matching method

### 2.1 SURF algorithm

This section reviews the original SURF algorithm, proposed by Bay, Tuytelaars and Van Gool in 2006 [7]. The algorithm is similar to SIFT but faster in calculation speed, as it relies on integral images to reduce the computation time; its detector is also called the "Fast-Hessian" detector.

The SURF detector is based on the determinant of the Hessian matrix, which can be computed efficiently from the integral image. Given a point $\mathbf{x}=(x,y)$ in an image $I$, the Hessian matrix $H(\mathbf{x},\sigma)$ at scale $\sigma$ is defined as formula (1):

$$H(\mathbf{x},\sigma)=\begin{bmatrix}L_{xx}(\mathbf{x},\sigma)&L_{xy}(\mathbf{x},\sigma)\\L_{xy}(\mathbf{x},\sigma)&L_{yy}(\mathbf{x},\sigma)\end{bmatrix}\qquad(1)$$

Here $L_{xx}(\mathbf{x},\sigma)$ refers to the convolution of the second-order Gaussian derivative $\frac{\partial^2}{\partial x^2}g(\sigma)$ with the image $I$ at point $\mathbf{x}$, and similarly for $L_{xy}(\mathbf{x},\sigma)$ and $L_{yy}(\mathbf{x},\sigma)$. As Gaussian filters are non-ideal in any case, and given Lowe's success with LoG approximations, Bay et al. push the approximation even further with box filters. These approximate the second-order Gaussian derivatives and can be evaluated very fast using integral images, independently of filter size. Formula (2) gives an accurate approximation of the Hessian determinant using the box-filter responses:

$$\det(H_{\mathrm{approx}})=D_{xx}D_{yy}-(0.9D_{xy})^2\qquad(2)$$

Here $D_{xx}$ refers to the convolution of the corresponding box filter with the image at point $\mathbf{x}$, and similarly for $D_{xy}$ and $D_{yy}$.
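The two ingredients behind "Fast-Hessian" can be sketched in a few lines of NumPy: an integral image makes any box-filter sum a four-lookup operation, and the determinant approximation of formula (2) is a single expression. This is an illustrative sketch, not the paper's code; the 0.9 weight is the relative weight used by Bay et al. to balance the box-filter approximation.

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[:y, :x]; zero-padded so box_sum needs no bounds checks.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    # Sum of img[y0:y1, x0:x1] from four integral-image lookups,
    # independent of the box size.
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def det_hessian_approx(dxx, dyy, dxy):
    # Formula (2): det(H_approx) = Dxx*Dyy - (0.9*Dxy)^2.
    return dxx * dyy - (0.9 * dxy) ** 2
```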

Scale spaces are usually implemented as image pyramids. In SURF the pyramid is constructed by increasing the size of the box filters rather than iteratively reducing the image size. The output of the 9×9 filter is considered the initial scale layer, referred to as scale σ = 1.2. The following layers are obtained by filtering the image with gradually bigger masks, e.g. 9×9, 15×15, 21×21, 27×27, etc. For a box filter of size N×N, the corresponding scale is σ = 1.2 × N/9. To localize interest points in the image and over scales, non-maximum suppression in a 3×3×3 neighborhood is applied: each pixel in the scale space is compared to its 26 neighbors, comprised of the 8 points in the native scale and the 9 in each of the scales above and below. The maxima of the determinant of the Hessian matrix are then interpolated in scale and image space.

Rotation invariance is achieved by detecting the dominant orientation of each feature point using Haar wavelet responses in the x and y directions within a circular neighborhood of radius 6s around the feature point, where s is the scale at which the feature point was detected. The size of the Haar filter kernel is scaled to 4s×4s. The responses are weighted with a Gaussian centered at the feature point; the Gaussian depends on the scale of the point and is chosen to have standard deviation 2.5s. The dominant orientation is estimated by calculating the sum of the horizontal and vertical Haar wavelet responses within a sliding orientation window covering an angle of π/3. The two summed responses constitute a vector, and the longest vector lends its orientation to the feature point.

To extract the descriptor, the first step is to construct a square window of size 20s, oriented along the dominant orientation. The window is then divided into 4×4 regular sub-regions. For each sub-region, Haar wavelet responses of size 2s are computed at 5×5 regularly spaced sample points. Let Σdx denote the sum of the responses in the horizontal direction and Σdy the sum in the vertical direction; Σ|dx| and Σ|dy| denote the sums of the absolute values of the responses in the horizontal and vertical directions respectively. Each sub-region thus contributes a four-dimensional vector (Σdx, Σdy, Σ|dx|, Σ|dy|) describing its underlying intensity structure. With 4×4 sub-regions, each feature point has a 64-dimensional descriptor vector. Finally, the descriptor is turned into a unit vector to achieve invariance to contrast.
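The descriptor assembly can be sketched as follows. This is an illustrative sketch that assumes the oriented, Gaussian-weighted Haar responses have already been sampled into two 20×20 arrays `dx` and `dy` (one response per sample point); computing those responses is omitted.

```python
import numpy as np

def surf_descriptor(dx, dy):
    # dx, dy: 20x20 Haar wavelet responses on the oriented sample grid.
    desc = []
    for i in range(4):                       # 4x4 sub-regions ...
        for j in range(4):                   # ... of 5x5 samples each
            sx = dx[5*i:5*i+5, 5*j:5*j+5]
            sy = dy[5*i:5*i+5, 5*j:5*j+5]
            # Four-dimensional vector per sub-region:
            # (sum dx, sum dy, sum |dx|, sum |dy|)
            desc += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    desc = np.asarray(desc)                  # 16 sub-regions x 4 = 64 dims
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc       # unit length: contrast invariance
```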

### 2.2 Improved SURF algorithm

(1) Limit the number of detected feature points

The SURF algorithm focuses on detection quality without considering the number or position of the feature points. However, detecting many more feature points in a frame not only increases the time for computing the feature descriptors, but also increases the matching time and the complexity of computing the optimal global motion parameters. The affine transformation matrix needs only three pairs of matching points to determine the image geometric transform, so removing some matching points does not affect the final result and improves the efficiency of the whole algorithm.

When SURF detects feature points, it applies non-maximum suppression in a 3×3×3 neighborhood: each pixel in the scale space is compared to its 26 neighbors, comprised of the 8 points in the native scale and the 9 in each of the scales above and below. But when the image frame is complex, this can yield a large number of feature points, which increases the computation in the subsequent processing. Therefore, when detecting feature points, we apply the non-maximum suppression in a wider 7×7×3 neighborhood: the determinant at the center of a 7×7 region is compared to its 146 neighbors, comprised of the 48 points in the native scale and the 49 in each of the scales above and below. In this neighborhood we detect an appropriate number of feature points with stronger robustness, and the efficiency of the whole algorithm improves.
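The change amounts to widening the spatial radius of the suppression window. A minimal sketch, assuming the Hessian-determinant responses are already stacked into a `(scales, H, W)` array: `radius=1` gives the original 3×3×3 (26-neighbor) test, `radius=3` the wider 7×7×3 (146-neighbor) test.

```python
import numpy as np

def nms_scale_space(dets, radius):
    # dets: (num_scales, H, W) array of Hessian-determinant responses.
    # Returns (scale, y, x) triples that are strict local maxima over a
    # (2*radius+1) x (2*radius+1) spatial window across 3 adjacent scales.
    s, h, w = dets.shape
    maxima = []
    for k in range(1, s - 1):                       # interior scales only
        for y in range(radius, h - radius):
            for x in range(radius, w - radius):
                v = dets[k, y, x]
                patch = dets[k-1:k+2,
                             y-radius:y+radius+1,
                             x-radius:x+radius+1]
                # Strict maximum: v equals the patch max and occurs once.
                if v > 0 and v >= patch.max() and (patch == v).sum() == 1:
                    maxima.append((k, y, x))
    return maxima
```

With the wider window, a weaker response close to a stronger one is suppressed, which is exactly how the improved detector thins out clustered feature points.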

(2) A fast method for calculating the feature point's dominant orientation

In SURF, the dominant orientation of a feature point is computed with a sliding window covering 60° that shifts around the circular region, summing the horizontal and vertical Haar wavelet responses inside it; the two sums constitute a vector, and the longest vector lends its orientation to the feature point. The shifting step of the sliding window is 5°. As the window shifts, many overlapping regions are generated, so the sums of responses are calculated repeatedly, which reduces the algorithm's efficiency. For example, the regions 0–60° and 5–65° overlap on 5–60°, whose responses are summed twice, making the process more complex.

We adopt a fast method for calculating the feature point's dominant orientation to increase the efficiency of the algorithm [10]. The procedure is as follows:

(1) Calculate the sums of the horizontal and vertical Haar wavelet responses at each whole degree (0–360°), and store them in $r_x(i)$ and $r_y(i)$.

(2) Calculate the integrals of $r_x$ and $r_y$, defined as $R_x$ and $R_y$:

$$R_x(i)=\sum_{j=0}^{i}r_x(j)\qquad(3)$$

The calculation of $R_y(i)$ is similar to $R_x(i)$.

(3) Calculate the sum of Haar wavelet responses in the 60° sector window ending at any angle $i$:

$$m_x(i)=R_x(i)-R_x(i-60)\qquad(4)$$

The calculation of $m_y(i)$ is similar to $m_x(i)$. The local orientation vector is then computed as formula (5):

$$v(i)=\left(m_x(i),\,m_y(i)\right),\qquad\lVert v(i)\rVert=\sqrt{m_x(i)^2+m_y(i)^2}\qquad(5)$$

At the end, we choose the longest local orientation vector over all windows as the dominant orientation of the feature point.

With this method, the repeated calculations are eliminated and the shifting step of the sliding window is reduced from 5° to 1°. Compared with the original algorithm, the improved algorithm decreases the complexity and increases the accuracy.
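The prefix-sum trick above can be sketched as follows: the per-degree response sums are integrated once, so the sum over any 60° window costs two lookups instead of a fresh 60-term sum. This is an illustrative sketch; duplicating the 360-entry arrays is one simple way (an assumption, not necessarily the authors' choice) to handle windows that wrap past 360°.

```python
import numpy as np

def dominant_orientation(rx, ry, window=60):
    # rx, ry: length-360 arrays, rx[i]/ry[i] = summed Haar responses at degree i.
    # Duplicate the circle so wrapping windows need no special casing, then
    # take cumulative sums with a leading zero (formula (3)).
    Rx = np.concatenate(([0.0], np.cumsum(np.tile(rx, 2))))
    Ry = np.concatenate(([0.0], np.cumsum(np.tile(ry, 2))))
    best = (0.0, 0.0)
    for i in range(360):                     # 1-degree steps over all windows
        mx = Rx[i + window] - Rx[i]          # formula (4): two lookups
        my = Ry[i + window] - Ry[i]
        if mx * mx + my * my > best[0] ** 2 + best[1] ** 2:
            best = (mx, my)                  # longest local vector, formula (5)
    # Orientation of the longest local orientation vector, in degrees.
    return np.degrees(np.arctan2(best[1], best[0])) % 360.0
```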

### 2.3 Improved feature point matching method

Matching two feature points is done by comparing their descriptors. For searching corresponding points, the global search method and the KD-Tree algorithm are widely used at present. The global search method is easy to implement, but it computes the distances between all points in the two point sets, so it has a large amount of computation and detects many invalid matching points. The KD-Tree algorithm takes advantage of the data structure of the feature points: by constructing a KD-Tree, it computes distances for only a part of the points in the two sets. Though the KD-Tree reduces the computational complexity and improves the accuracy, it costs additional time to build, and studies show that when the number of feature points is small, the KD-Tree brings no obvious speed-up.

Hence, we propose an improved matching method based on the global search method. When searching for the corresponding point in the adjacent frame, we search within a certain region around the feature point of the current frame instead of over the entire image; the size of this region is determined by the speed of the background. This reduces the number of distance computations between candidate matching points, decreases the number of invalid matching points, and reduces the complexity of the subsequent steps; in a word, it improves speed and accuracy at the same time. Measuring the similarity between candidate points takes two steps. First, an initial match is made using the sign of the Laplacian (i.e., the trace of the Hessian matrix, available at no extra cost from detection): only when two points have the same sign of the Laplacian do we perform the subsequent similarity measure; otherwise we judge them unmatched. This minimal information allows faster matching and gives a slight increase in performance. Second, we calculate the Euclidean distance between the two feature points' 64-dimensional descriptors, and a matching pair is accepted if the nearest neighbor (closest Euclidean distance in descriptor space) is closer than 0.65 times the distance to the second nearest neighbor. Here 0.65 is an adjustable threshold: the smaller it is, the fewer matching pairs we get. The process of feature point matching is summarized in Fig. 2.
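The three gates (spatial window, Laplacian sign, ratio test) can be sketched as below. This is an illustrative sketch: the `(x, y, laplacian_sign, descriptor)` tuple layout and the function name are assumptions for the example, not the paper's data structures.

```python
import numpy as np

def match_features(kps_a, kps_b, window=60, ratio=0.65):
    # kps_a, kps_b: lists of (x, y, laplacian_sign, 1-D descriptor) tuples.
    half = window / 2.0
    matches = []
    for ia, (xa, ya, sa, da) in enumerate(kps_a):
        best, second, best_ib = np.inf, np.inf, -1
        for ib, (xb, yb, sb, db) in enumerate(kps_b):
            # Spatial gate: candidate must lie in the window around (xa, ya).
            if abs(xb - xa) > half or abs(yb - ya) > half:
                continue
            # Laplacian-sign gate: bright and dark blobs never match.
            if sa != sb:
                continue
            d = np.linalg.norm(da - db)      # Euclidean descriptor distance
            if d < best:
                best, second, best_ib = d, best, ib
            elif d < second:
                second = d
        # Ratio test: nearest neighbor must beat the second nearest clearly.
        if best_ib >= 0 and best < ratio * second:
            matches.append((ia, best_ib))
    return matches
```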

Fig.2. Flowchart of feature points matching.

## 3 Global motion compensation and object detection

In this section, the first step is to compensate the global motion of the camera by using the matching points; this step converts detection in dynamic scenes into detection in static scenes.

We choose the affine transformation model to describe the global motion. The affine transformation, with six parameters, can represent translation, rotation, scaling and stretching. In two-dimensional space, the affine transformation can be expressed as formula (6):

$$\begin{bmatrix}x'\\y'\end{bmatrix}=\begin{bmatrix}a_1&a_2\\a_3&a_4\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}+\begin{bmatrix}t_x\\t_y\end{bmatrix}\qquad(6)$$

Here $(x,y)$ refers to a feature point in the previous frame and $(x',y')$ to the corresponding point in the current frame; $(t_x,t_y)$ represents the translation, and the matrix of $a_1,\dots,a_4$ represents rotation, scaling, stretching and so on.

The matching pairs obtained in Section 2 inevitably contain invalid matches. We adopt the RANSAC algorithm to remove them and obtain the best set of interior points, then apply the least square method to this set to calculate the optimal global motion parameters (the affine transformation matrix). Next we compensate the previous frame using these parameters. After this step, the backgrounds of the previous frame and the current frame are aligned, and detection in the dynamic scene has been transformed into detection in a static scene, so we apply frame differencing between the current frame and the affine-transformed frame to detect the moving objects. Finally, the binary image is processed with morphological operations to remove small holes and residual noise points and to smooth the objects' contours. At this point, we obtain the accurate moving objects.
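The least-squares step can be sketched directly in NumPy: given the inlier matches that survive RANSAC, the six parameters of formula (6) follow from stacking x' = a1·x + a2·y + tx and y' = a3·x + a4·y + ty into one overdetermined linear system. This is an illustrative sketch (function name and layout are assumptions), shown without the preceding RANSAC loop.

```python
import numpy as np

def fit_affine_lsq(src, dst):
    # src, dst: (N, 2) arrays of matched inlier points, N >= 3.
    # Unknowns p = [a1, a2, a3, a4, tx, ty]; rows alternate x'/y' equations.
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = src[:, 1]; A[0::2, 4] = 1.0
    A[1::2, 2] = src[:, 0]; A[1::2, 3] = src[:, 1]; A[1::2, 5] = 1.0
    b[0::2] = dst[:, 0]
    b[1::2] = dst[:, 1]
    p, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution
    a1, a2, a3, a4, tx, ty = p
    # 2x3 warp matrix, directly usable to compensate the previous frame.
    return np.array([[a1, a2, tx], [a3, a4, ty]])
```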

## 4 Experimental results and analysis

To make our experimental results more convincing, we ran all simulation experiments in the following environment. Hardware: Intel(R) Core(TM) i5 M520 CPU @ 2.40 GHz, 4 GB RAM, NVIDIA NVS 3100M. Software development tools: Microsoft VS 2008 and OpenCV 2.3. The video frames in this paper are 720×480 at a frame rate of 25 fps.

### 4.1 The results of improved SURF

In this paper we improve SURF in two ways. One is to limit the number of detected feature points by changing the range of the non-maximum suppression; the results are shown in Fig.3 and Table1. The other is to adopt a fast method for calculating the feature point's dominant orientation; the results are shown in Fig.4 and Table2.

Fig.3 Results of detected feature points: (a) 3×3×3 neighborhood; (b) 5×5×3 neighborhood; (c) 7×7×3 neighborhood.

Table1. Results of limiting the number of feature points

| Range of non-maximum suppression | Number of feature points | Time/ms |
| --- | --- | --- |
| 3×3×3 | 551 | 315 |
| 5×5×3 | 387 | 209 |
| 7×7×3 | 234 | 137 |

Fig.3(a) shows the detection result with non-maximum suppression in the 3×3×3 neighborhood, i.e. the original SURF algorithm; Fig.3(b) and Fig.3(c) show the results in the 5×5×3 and 7×7×3 neighborhoods respectively. As shown in Fig.3, most of the detected feature points lie on the background, which favours modeling the background. Furthermore, the improved SURF effectively limits the number of feature points, and the points are distributed evenly over the background with strong robustness. Table1 gives the corresponding numbers: Fig.3(a) detects 551 feature points in 315 ms, Fig.3(b) detects 387 in 209 ms, and Fig.3(c) detects 234 in 137 ms. Comparing these data, the improved SURF effectively limits the number of feature points and decreases the detection time. However, too few feature points would hurt the accuracy of object detection, so we choose the 7×7×3 neighborhood to obtain about 200 feature points.

Fig.4 Results of the fast method for dominant orientation: (a) before improving; (b) after improving.

Table2. Results of the fast method for dominant orientation

| | Time/ms | Matching pairs |
| --- | --- | --- |
| SURF | 137 | 177 |
| Improved SURF | 121 | 144 |

The improved SURF eliminates the repeated calculations, and the shifting step of the sliding window is reduced from 5° to 1°; compared with the original algorithm, it decreases the complexity and increases the accuracy. As shown in Table2, the fast method for calculating the feature points' dominant orientation saves 16 ms of detection time, and it removes a portion of invalid matching pairs, which drop from 177 to 144. Fig.4 contrasts feature point matching with the original SURF and the improved SURF: comparing Fig.4(a) and Fig.4(b), it is obvious that Fig.4(b) removes some invalid matching pairs, which demonstrates the efficiency of the improved algorithm.

### 4.2 The results of the improved matching method

Fig.5 Results of the improved matching method: (a) before improving; (b) after improving.

Table3. Results of the improved matching method

| | Global search method | Improved matching method |
| --- | --- | --- |
| Matching pairs | 156 | 160 |
| Matching time/ms | 32 | 16 |
| Best set of interior points by RANSAC | 51 | 130 |

In the feature point matching step, we improve the global search method by limiting the search scope, checking the sign of the Laplacian, and comparing the nearest neighbor with the second nearest neighbor. In this paper, we search for a feature point's corresponding point in the adjacent frame within a 60×60 square centered on the feature point. Table3 shows that the improved matching method greatly reduces the matching time, from 32 ms to 16 ms, while the two methods detect nearly the same number of matching pairs. However, analyzing the best set of interior points obtained by RANSAC, only 51 matching pairs remain in the best set with the original global search method, meaning RANSAC removed 105 invalid pairs; the matching step had produced too many invalid pairs, which hurts the accuracy of the global motion model. With the improved matching method, 130 matching pairs remain in the best set, and only 30 invalid pairs are removed by RANSAC. Clearly, the improved matching method effectively reduces the number of invalid matching pairs. Furthermore, with more matching pairs in the best set of interior points, the global motion model established by the least square method is more precise and the moving object detection is more accurate. As shown in Fig.5, for the same frame, the original matching method fails to detect the moving object, while the improved matching method detects it successfully.

### 4.3 The results of global motion compensation and object detection

We apply the proposed method based on improved SURF to detect the moving object in dynamic scenes. Fig.6 shows the results for the fourth, sixth and eighth frames: Fig.6(a) shows the original frames, Fig.6(b) shows the results of global motion compensation applied to the previous frame, and Fig.6(c) shows the detected object. Comparing (a) and (b), the backgrounds of the current frame and the compensated previous frame are aligned, which indicates that the global motion compensation of the previous frame works well and realizes the transformation from dynamic scenes to static scenes. The experimental results show that the proposed method is able to complete moving object detection in dynamic scenes.

Fig.6 Results of global motion compensation and object detection: (a) original frames; (b) compensated previous frames; (c) detected objects.

Table4. Time cost of the different algorithms

| Algorithm | | Frame 4 | Frame 6 | Frame 8 |
| --- | --- | --- | --- | --- |
| SIFT | Feature points | 308 | 276 | 323 |
| | Time/ms | 703 | 621 | 723 |
| SURF | Feature points | 551 | 513 | 578 |
| | Time/ms | 367 | 331 | 382 |
| Improved SURF | Feature points | 234 | 212 | 245 |
| | Time/ms | 152 | 144 | 155 |

Table4 shows the processing time of the methods based on SIFT, SURF and improved SURF. The SIFT-based method costs an average of about 700 ms per video frame and the SURF-based method about 350 ms, while the proposed method based on improved SURF completes the object detection in an average of only about 150 ms.

These results indicate that the proposed moving object detection method based on improved SURF not only has high accuracy and robustness, but also a clear advantage in speed.

## 5 Conclusion

To meet the requirements of moving object detection in dynamic scenes in real time, we proposed an effective method based on an improved SURF algorithm. First, feature points are extracted by the improved SURF algorithm and matched by the improved matching method based on the global search method. We improved SURF in two ways: one is to limit the number of detected feature points by changing the range of the non-maximum suppression, and the other is to adopt a fast method for calculating the feature point's dominant orientation. Then the optimal global motion parameters (the affine transformation matrix) are calculated using RANSAC and the least square method. Finally, the previous frame is compensated with these parameters, and the object is obtained by the frame difference method; after morphological image processing, we obtain the accurate moving object. The experimental results show that the proposed method can successfully detect moving objects in dynamic scenes; it not only has high accuracy and robustness, but also a clear speed advantage over the methods based on SIFT and SURF.