An active worm is a malicious software program that propagates itself over the Internet to infect other computers. Worm propagation is based on exploiting vulnerabilities of computers on the Internet. Many real-world worms have caused notable damage on the Internet, including the "Code-Red" worm in 2001, the "Slammer" worm in 2003, and the "Witty" and "Sasser" worms in 2004. Many active worms are used to infect a large number of computers and recruit them as bots or zombies, which are networked together to form botnets. These botnets can be used to:
Launch massive Distributed Denial-of-Service (DDoS) attacks that disrupt Internet utilities,
Access confidential information that can be misused through large-scale traffic sniffing, key logging, identity theft, etc.,
Destroy data that has high monetary value, and
Distribute large-scale unsolicited advertisement emails (spam) or software (malware).
There is evidence that infected computers are being rented out as "botnets," creating an entire black-market industry for renting, trading, and managing "owned" computers and thereby providing economic incentives for attackers. Researchers have also shown the possibility of "superbotnets," networks of independent botnets that can be coordinated for attacks of unprecedented scale. For an adversary, superbotnets would also be extremely versatile and resistant to countermeasures.
Due to the substantial damage caused by worms in the past years, there have been significant efforts to develop detection and defense mechanisms against worms. A network-based worm detection system plays a major role by monitoring, collecting, and analyzing the scan traffic (messages sent to identify vulnerable computers) generated during worm attacks. In such a system, detection is commonly based on the self-propagating behavior of worms, which can be described as follows: after a worm-infected computer identifies and infects a vulnerable computer on the Internet, the newly infected computer will automatically and continuously scan several IP addresses to identify and infect other vulnerable computers. As such, numerous existing detection schemes are based on the tacit assumption that each worm-infected computer keeps scanning the Internet and propagates itself at the highest possible speed. Furthermore, it has been shown that the worm scan traffic volume and the number of worm-infected computers exhibit exponentially increasing patterns. Nevertheless, attackers are crafting attack strategies intended to defeat existing worm detection systems. In particular, "stealth" is one attack strategy used by a recently discovered active worm called the "Atak" worm; the "self-stopping" worm similarly circumvents detection by hibernating (i.e., stopping propagation) for a predetermined period.
A worm might also use evasive scan and traffic morphing techniques to avoid detection. Such a worm attempts to remain hidden by sleeping (suspending scans) when it suspects it is under detection. Worms that adopt such smart attack strategies can exhibit overall scan traffic patterns different from those of traditional worms. Since existing worm detection schemes cannot detect such scan traffic patterns, it is very important to understand these smart worms and develop new countermeasures to defend against them. In this paper, we conduct a systematic study on a new class of smart worms, denoted the Camouflaging Worm (C-Worm for short). The C-Worm has a self-propagating behavior similar to traditional worms, i.e., it intends to rapidly infect as many vulnerable computers as possible. However, the C-Worm is quite different from traditional worms in that it camouflages any noticeable trends in the number of infected computers over time. The camouflage is achieved by manipulating the scan traffic volume of worm-infected computers. Such manipulation of the scan traffic volume prevents the exhibition of any exponentially increasing trends, or even the crossing of thresholds, that are tracked by existing detection schemes. We note that the propagation-controlling nature of the C-Worm (and of similar smart worms, such as "Atak") causes a slowdown in propagation speed. However, by carefully controlling its scan rate, the C-Worm can: 1) still achieve its ultimate goal of infecting as many computers as possible before being detected, and 2) position itself to launch subsequent attacks.
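To make the contrast concrete, the following sketch (our own illustration, not code from any real worm; all parameters, growth rates, and the control law are invented for exposition) simulates the overall scan traffic volume of a traditional worm against a C-Worm-style source that holds the total volume near a fixed level with only a slow oscillation:

```python
import numpy as np

def traditional_scan_volume(steps, beta=0.0005, n_total=100000, i0=10, rate=100):
    """Scan volume of a classic random-scan worm: infections grow
    epidemically, so the overall scan traffic grows with the infected
    population until the vulnerable population saturates."""
    infected = float(i0)
    volume = []
    for _ in range(steps):
        # simple logistic (epidemic) growth of the infected population
        infected = min(infected + beta * infected * (n_total - infected), n_total)
        volume.append(infected * rate)  # every infected host scans at full rate
    return np.array(volume)

def cworm_scan_volume(steps, beta=0.0005, n_total=100000, i0=10,
                      target=50000, period=60):
    """C-Worm-style camouflage: infected hosts jointly regulate their
    per-host scan rate so the OVERALL volume oscillates gently around a
    target level instead of showing an exponentially increasing trend."""
    infected = float(i0)
    volume = []
    for t in range(steps):
        # the worm still propagates, just more slowly (reduced scan rate)
        infected = min(infected + 0.2 * beta * infected * (n_total - infected),
                       n_total)
        # total volume is held near `target` with a slow periodic wobble
        volume.append(target * (1.0 + 0.1 * np.sin(2 * np.pi * t / period)))
    return np.array(volume)

trad = traditional_scan_volume(200)
cworm = cworm_scan_volume(200)
```

The traditional trace rises by orders of magnitude before saturating, while the camouflaged trace stays bounded and trend-free in the time domain, which is exactly what defeats trend- and threshold-based detectors.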
We comprehensively analyze the propagation model of the C-Worm and its corresponding scan traffic in both the time and frequency domains. We observe that although the C-Worm scan traffic shows no noticeable trends in the time domain, it demonstrates a distinct pattern in the frequency domain. Specifically, there is an obvious concentration within a narrow range of frequencies. This concentration is inevitable, since the C-Worm adapts to the dynamics of the Internet in a recurring manner in order to manipulate and control its overall scan traffic volume. These recurring manipulations involve a steady increase followed by a decrease in the scan traffic volume, such that the changes do not manifest as any trends in the time domain and the scan traffic volume does not cross thresholds that could reveal the C-Worm propagation. Based on this observation, we adopt frequency-domain analysis techniques and develop a detection scheme against wide spreading of the C-Worm. In particular, we develop a novel spectrum-based detection scheme that uses the Power Spectral Density (PSD) distribution of the scan traffic volume in the frequency domain, and its corresponding Spectral Flatness Measure (SFM), to distinguish C-Worm traffic from non-worm traffic (background traffic). Our frequency-domain analysis studies use real-world Internet traffic traces (the Shield logs data set) provided by the SANS Internet Storm Center (ISC). Our results reveal that non-worm traffic (e.g., port-scan traffic for ports 80, 135, and 8080) has relatively large SFM values for its PSD distribution, whereas the C-Worm traffic shows a comparatively small SFM value for its PSD distribution.
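As an illustration of the detection idea, the sketch below (our own minimal implementation; the periodogram-style PSD estimate and the two synthetic traces are assumptions for exposition, not the paper's exact procedure or data) computes a PSD and its Spectral Flatness Measure for noise-like background traffic and for a C-Worm-like trace whose power concentrates at one low frequency:

```python
import numpy as np

def psd(x):
    """Periodogram-style PSD estimate of a zero-mean series:
    squared magnitude of the DFT, normalized by the series length."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return spec[1:]  # drop the DC bin

def spectral_flatness(psd_vals, eps=1e-12):
    """Spectral Flatness Measure: geometric mean over arithmetic mean of
    the PSD. Near 1 for flat (noise-like) spectra, near 0 when power is
    concentrated in a narrow band of frequencies."""
    p = np.asarray(psd_vals) + eps
    geo = np.exp(np.mean(np.log(p)))
    return geo / np.mean(p)

rng = np.random.default_rng(0)
t = np.arange(4096)

# background (non-worm) traffic: noise-like, flat spectrum -> SFM near 1
background = rng.normal(size=t.size)

# C-Worm-like traffic: power concentrated at one low frequency -> small SFM
cworm_like = np.sin(2 * np.pi * t / 256) + 0.1 * rng.normal(size=t.size)

sfm_bg = spectral_flatness(psd(background))
sfm_cw = spectral_flatness(psd(cworm_like))
```

The SFM of the noise-like trace comes out much closer to 1 than that of the narrowband trace, which is the separation the detection scheme thresholds on.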
Furthermore, we demonstrate the effectiveness of our spectrum-based detection scheme in comparison with existing worm detection schemes. We define several new metrics: the Maximal Infection Ratio (MIR) quantifies the infection damage caused by a worm before being detected, and the other metrics are Detection Time (DT) and Detection Rate (DR). Our evaluation data clearly demonstrate that our spectrum-based detection scheme achieves much better detection performance against C-Worm propagation compared with existing detection schemes. Our evaluation also shows that our spectrum-based detection scheme is general enough to be used for effective detection of traditional worms as well.
1.1 WHY POWER SPECTRAL DENSITY
In statistical signal processing, statistics, and physics, the spectrum of a time-series or signal is a positive real function of a frequency variable associated with a stationary stochastic process, or a deterministic function of time, which has dimensions of power per hertz (Hz), or energy per hertz. Intuitively, the spectrum decomposes the content of a stochastic process into different frequencies present in that process, and helps identify periodicities. More specific terms which are used are the power spectrum, spectral density, power spectral density, or energy spectral density.
Explanation: In physics, the signal is usually a wave, such as an electromagnetic wave, random vibration, or an acoustic wave. The spectral density of the wave, when multiplied by an appropriate factor, will give the power carried by the wave, per unit frequency, known as the power spectral density (PSD) of the signal. Power spectral density is commonly expressed in watts per hertz (W/Hz).
For voltage signals, it is customary to use units of V² Hz⁻¹ for the PSD and V² s Hz⁻¹ for the ESD (energy spectral density). Often it is convenient to work with an amplitude spectral density (ASD), which is the square root of the PSD; the ASD of a voltage signal has units of V/√Hz. For random vibration analysis, units of g² Hz⁻¹ are sometimes used for the PSD of acceleration, where g denotes the g-force.
Although it is not necessary to assign physical dimensions to the signal or its argument, in the following discussion the terms used will assume that the signal varies in time.
Preliminary conventions on notation for time series: The phrase "time series" has been defined as "... a collection of observations made sequentially in time." But it is also used to refer to a stochastic process that serves as the underlying theoretical model for the process that generated the data (and thus includes consideration of all the other possible sequences of data that might have been observed, but were not). Furthermore, time can be either continuous or discrete. There are, therefore, four different but closely related definitions and formulas for the power spectrum of a time series.
If $X_n$ (discrete time) or $X_t$ (continuous time) is a stochastic process, we will refer to a possible time series of data coming from it as a sample, path, or signal of the stochastic process. To avoid confusion, we will reserve the word "process" for a stochastic process, and use "signal" or "sample" to refer to a time series of data.
For any random variable $X$, the standard notations of angle brackets $\langle X \rangle$ or $\mathrm{E}[X]$ will be used for the ensemble average, also known as the statistical expectation, and $\mathrm{Var}(X)$ for the theoretical variance.
Suppose $x_n$, from $n = 0$ to $N - 1$, is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive):
$$x_n = \sum_k \left[ a_k \cos(2\pi \nu_k n) + b_k \sin(2\pi \nu_k n) \right].$$
The variance of $x_n$ is, for a zero-mean function as above, given by
$$\frac{1}{N} \sum_{n=0}^{N-1} x_n^2.$$
If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).
Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as $N \to \infty$. If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data:
$$\lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} x_n^2.$$
Again, for simplicity, we will pass to continuous time and assume that the signal extends infinitely in time in both directions. Then these two formulas become
$$x(t) = \sum_k \left[ a_k \cos(2\pi \nu_k t) + b_k \sin(2\pi \nu_k t) \right]$$
and
$$\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)^2 \, dt.$$
But the root mean square of either $\cos$ or $\sin$ is $1/\sqrt{2}$, so the variance of $a_k \cos(2\pi \nu_k t)$ is $a_k^2/2$ and that of $b_k \sin(2\pi \nu_k t)$ is $b_k^2/2$. Hence, the power of $x(t)$ that comes from the component with frequency $\nu_k$ is $(a_k^2 + b_k^2)/2$. All these contributions add up to the power of $x(t)$.
Then the power as a function of frequency is $P(\nu_k) = (a_k^2 + b_k^2)/2$, and its statistical cumulative distribution function $S(\nu)$ will be
$$S(\nu) = \sum_{k \,:\, \nu_k < \nu} P(\nu_k).$$
$S$ is a step function, monotonically non-decreasing. Its jumps occur at the frequencies of the periodic components of $x$, and the value of each jump is the power or variance of that component.
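The per-component power contributions above can be checked numerically. The short script below (our sketch; the amplitudes and frequencies are arbitrary choices) builds a zero-mean series from two periodic components and verifies that its average power equals the sum of the $(a_k^2 + b_k^2)/2$ contributions:

```python
import numpy as np

N = 100000
n = np.arange(N)
# two periodic components with known amplitudes and frequencies
a1, nu1 = 3.0, 0.01   # cosine component: amplitude 3 at frequency 0.01
b2, nu2 = 4.0, 0.07   # sine component:   amplitude 4 at frequency 0.07
x = a1 * np.cos(2 * np.pi * nu1 * n) + b2 * np.sin(2 * np.pi * nu2 * n)

# zero-mean series, so the variance is just the average power
variance = np.mean(x ** 2)

# each component contributes (a_k^2 + b_k^2) / 2, here 9/2 + 16/2 = 12.5
expected = a1 ** 2 / 2 + b2 ** 2 / 2
```

Both frequencies fit a whole number of cycles into the $N$ samples, so the measured variance matches the predicted 12.5 to floating-point accuracy.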
The variance is the covariance of the data with itself. If we now consider the same data but with a lag of $\tau$, we can take the covariance of $x(t)$ with $x(t + \tau)$, and define this to be the autocorrelation function $R$ of the signal (or data) $x$:
$$R(\tau) = \langle x(t) \, x(t + \tau) \rangle.$$
When it exists, it is an even function of $\tau$. If the average power is bounded, then $R$ exists everywhere, is finite, and is bounded by $R(0)$, which is the power or variance of the data.
It is elementary to show that $R$ can be decomposed into periodic components with the same periods as $x$:
$$R(\tau) = \sum_k \frac{a_k^2 + b_k^2}{2} \cos(2\pi \nu_k \tau).$$
This is in fact the spectral decomposition of $R$ over the different frequencies, and is related to the distribution of the power of $x$ over the frequencies: the amplitude of a frequency component of $R$ is its contribution to the power of the signal.
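This decomposition is easy to verify for a single sinusoid. The sketch below (our illustration; amplitude, frequency, and lags are arbitrary) estimates $R(\tau)$ by a time average and compares it to the predicted $(a^2/2)\cos(2\pi\nu\tau)$:

```python
import numpy as np

N = 200000
n = np.arange(N)
a, nu = 2.0, 0.05
x = a * np.cos(2 * np.pi * nu * n)

def autocorr(x, lag):
    """Sample autocorrelation R(lag) = <x[n] x[n+lag]> via a time average."""
    if lag == 0:
        return np.mean(x * x)
    return np.mean(x[:-lag] * x[lag:])

# theory: R(tau) = (a^2 / 2) * cos(2*pi*nu*tau), i.e. same period as x
lags = [0, 3, 7, 12]
measured = [autocorr(x, tau) for tau in lags]
theory = [(a ** 2 / 2) * np.cos(2 * np.pi * nu * tau) for tau in lags]
```

The zero-lag value comes out to $a^2/2 = 2$, the power of the signal, and the other lags trace the cosine of the same frequency as $x$, bounded by $R(0)$ as stated above.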
1.2 ENERGY SPECTRAL DENSITY
Energy spectral density describes how the energy of a signal or a time series is distributed with frequency. Here, the term "energy" is used in the generalized sense of signal processing; that is, the energy of a signal $x(t)$ is
$$E = \int_{-\infty}^{\infty} |x(t)|^2 \, dt.$$
The energy spectral density is most suitable for transients (that is, pulse-like signals) having a finite total energy. In this case, Parseval's theorem gives us an alternative expression for the energy of the signal in terms of its Fourier transform $\hat{x}(\omega) = \int_{-\infty}^{\infty} x(t) e^{-i\omega t} \, dt$:
$$\int_{-\infty}^{\infty} |x(t)|^2 \, dt = \int_{-\infty}^{\infty} |\hat{x}(\omega)|^2 \, \frac{d\omega}{2\pi}.$$
Here $\omega = 2\pi f$ is the angular frequency. Since the integral on the right-hand side is the energy of the signal, the integrand $|\hat{x}(\omega)|^2$ can be interpreted as a density function describing the energy per unit frequency contained in the signal at frequency $\omega$. In light of this, the energy spectral density of a signal $x(t)$ is defined as
$$S_{xx}(\omega) = |\hat{x}(\omega)|^2.$$
As a physical example of how one might measure the energy spectral density of a signal, suppose $V(t)$ represents the potential (in volts) of an electrical pulse propagating along a transmission line of impedance $Z$, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time $t$ is equal to $V(t)^2 / Z$, so the total energy is found by integrating $V(t)^2 / Z$ with respect to time over the duration of the pulse. To find the value of the energy spectral density at frequency $f$, one could insert between the transmission line and the resistor a band-pass filter which passes only a narrow range of frequencies ($\Delta f$, say) near the frequency of interest, and then measure the total energy $E(f)$ dissipated across the resistor. The value of the energy spectral density at $f$ is then estimated to be $E(f)/\Delta f$. In this example, since the power $V(t)^2/Z$ has units of V² Ω⁻¹, the energy $E(f)$ has units of V² s Ω⁻¹ = J, and hence the estimate $E(f)/\Delta f$ of the energy spectral density has units of J Hz⁻¹, as required. In many situations, it is common to forgo the step of dividing by $Z$, so that the energy spectral density instead has units of V² s Hz⁻¹.
This definition generalizes in a straightforward manner to a discrete signal with an infinite number of values $x_n$, such as a signal sampled at discrete times $x_n = x(n \, \Delta t)$:
$$S_{xx}(\omega) = (\Delta t)^2 \left| \sum_{n=-\infty}^{\infty} x_n e^{-i\omega n \Delta t} \right|^2,$$
where the sum is the discrete-time Fourier transform of $x_n$. The sampling interval $\Delta t$ is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit $\Delta t \to 0$; however, in the mathematical sciences, the interval is often set to 1.
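The discrete form can be verified directly with an FFT. In the sketch below (the sampling interval is an arbitrary choice), the DFT scaled by $\Delta t$ stands in for the continuous Fourier transform, and the signal's energy computed in the time and frequency domains agrees:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01                       # sampling interval (arbitrary, in seconds)
x = rng.normal(size=1024)       # a finite-energy sampled "pulse"

# energy in the time domain: sum |x_n|^2 * dt
energy_time = np.sum(np.abs(x) ** 2) * dt

# DFT scaled by dt approximates the continuous Fourier transform;
# summing |X|^2 over bins of width df gives the energy in frequency
X = np.fft.fft(x) * dt
df = 1.0 / (len(x) * dt)        # frequency-bin width
energy_freq = np.sum(np.abs(X) ** 2) * df
```

The two totals coincide (discrete Parseval), confirming that the $\Delta t$ factors keep the physical units consistent between the two domains.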
1.3 POWER SPECTRAL DENSITY
The above definition of energy spectral density is most suitable for transients, i.e., pulse-like signals, for which the Fourier transforms of the signals exist. For continued signals that describe, for example, stationary physical processes, it makes more sense to define a power spectral density (PSD), which describes how the power of a signal or time series is distributed over the different frequencies, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, can be defined as the squared value of the signal. The total power $P$ of a signal $x(t)$ is the following time average:
$$P = \lim_{T \to \infty} \frac{1}{T} \int_0^T x(t)^2 \, dt.$$
The power of a signal may be finite even if the energy is infinite. For example, a 10-volt power supply connected to a 1 kΩ resistor delivers (10 V)² / (1 kΩ) = 0.1 W of power at any given time; however, if the supply is allowed to operate for an infinite amount of time, it will deliver an infinite amount of energy (0.1 J each second for an infinite number of seconds).
In analyzing the frequency content of the signal $x(t)$, one might like to compute the ordinary Fourier transform $\hat{x}(\omega)$; however, for many signals of interest this Fourier transform does not exist. Because of this, it is advantageous to work with a truncated Fourier transform $\hat{x}_T(\omega)$, where the signal is integrated only over a finite interval $[0, T]$:
$$\hat{x}_T(\omega) = \frac{1}{\sqrt{T}} \int_0^T x(t) e^{-i\omega t} \, dt.$$
Then the power spectral density can be defined as
$$S_{xx}(\omega) = \lim_{T \to \infty} \mathrm{E}\left[ |\hat{x}_T(\omega)|^2 \right].$$
Here $\mathrm{E}$ denotes the expected value; explicitly, we have
$$\mathrm{E}\left[ |\hat{x}_T(\omega)|^2 \right] = \mathrm{E}\left[ \frac{1}{T} \int_0^T x^*(t) e^{i\omega t} \, dt \int_0^T x(t') e^{-i\omega t'} \, dt' \right].$$
Using such formal reasoning, one may already guess that for a stationary random process, the power spectral density $S_{xx}(\omega)$ and the autocorrelation function $R(\tau)$ of the signal should be a Fourier transform pair. Provided that $R(\tau)$ is absolutely integrable, which is not always true, then
$$S_{xx}(\omega) = \int_{-\infty}^{\infty} R(\tau) e^{-i\omega\tau} \, d\tau.$$
A deep theorem worked out by Norbert Wiener and Aleksandr Khinchin (the Wiener-Khinchin theorem) makes sense of this formula for any wide-sense stationary process under weaker hypotheses: $R(\tau)$ does not need to be absolutely integrable, it only needs to exist. But the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving distributions (in the sense of Laurent Schwartz, not in the sense of a statistical cumulative distribution function) instead of functions. If $R(\tau)$ is continuous, Bochner's theorem can be used to prove that its Fourier transform exists as a positive measure, whose distribution function is $F$ (but not necessarily as a function and not necessarily possessing a probability density).
Many authors use this equality to actually define the power spectral density.
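The circular, sampled version of this Fourier-transform pairing can be checked numerically in a few lines: for a finite sampled signal, the DFT of the circular sample autocorrelation reproduces the periodogram exactly. A minimal sketch (our illustration; the signal is arbitrary white noise):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=1024)
N = len(x)

# periodogram (an empirical PSD): squared DFT magnitude over N
X = np.fft.fft(x)
periodogram = np.abs(X) ** 2 / N

# circular sample autocorrelation, computed directly in the time domain:
# R[k] = (1/N) * sum_n x[n] * x[(n+k) mod N]
R = np.array([np.mean(x * np.roll(x, -k)) for k in range(N)])

# Wiener-Khinchin (circular form): the DFT of R recovers the periodogram
S_from_R = np.fft.fft(R).real
```

Because $R$ here is circularly even, its DFT is real, and it matches the periodogram bin for bin up to floating-point error.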
The power of the signal in a given frequency band $[F_1, F_2]$ can be calculated by integrating over positive and negative frequencies,
$$P_{[F_1, F_2]} = \int_{F_1}^{F_2} S_{xx}(\omega) \, \frac{d\omega}{2\pi} + \int_{-F_2}^{-F_1} S_{xx}(\omega) \, \frac{d\omega}{2\pi},$$
where $F(\omega)$ is the integrated spectrum whose derivative is $S_{xx}(\omega)$.
More generally, similar techniques may be used to estimate a time-varying spectral density.
The definition of the power spectral density generalizes in a straightforward manner to a finite time-series $x_n$ with $1 \le n \le N$, such as a signal sampled at discrete times $x_n = x(n \, \Delta t)$ for a total measurement period $T = N \, \Delta t$:
$$S_{xx}(\omega) = \frac{(\Delta t)^2}{T} \left| \sum_{n=1}^{N} x_n e^{-i\omega n \Delta t} \right|^2.$$
In a real-world application, one would typically average this single-measurement PSD over several repetitions of the measurement to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram. One can prove that this periodogram converges to the true PSD as the averaging time interval $T$ goes to infinity (Brown & Hwang).
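The variance-reduction effect of averaging can be seen in a few lines. The sketch below (segment length and repetition count are arbitrary choices) compares a single periodogram of unit-variance white noise with the average over 200 repeated measurements; the averaged estimate hugs the flat true PSD far more tightly:

```python
import numpy as np

def periodogram(x):
    """Single-measurement PSD estimate: |DFT|^2 / N."""
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

rng = np.random.default_rng(2)
seg_len, n_segs = 256, 200
# 200 repeated "measurements" of the same white-noise process
segments = rng.normal(size=(n_segs, seg_len))

single = periodogram(segments[0])
averaged = np.mean([periodogram(s) for s in segments], axis=0)

# the true PSD of unit-variance white noise is flat (level 1 in these
# units); averaging shrinks the estimator's scatter around that line
var_single = float(np.var(single[1:-1]))
var_avg = float(np.var(averaged[1:-1]))
```

The single periodogram's bins scatter with variance of order 1, while the averaged estimate's scatter drops roughly by the number of repetitions, which is why practical PSD estimation always averages.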
If two signals both possess power spectral densities, then a cross-spectral density can be calculated by using their cross-correlation function.
Some properties of the PSD include:
The spectrum of a real-valued process is an even function of frequency: $S_{xx}(-\omega) = S_{xx}(\omega)$.
If the process is continuous and purely indeterministic, the autocovariance function can be reconstructed by using the inverse Fourier transform:
$$R(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{xx}(\omega) e^{i\omega\tau} \, d\omega.$$
It describes the distribution of the variance over frequency. In particular,
$$\mathrm{Var}(X) = R(0) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{xx}(\omega) \, d\omega.$$
It is a linear function of the autocovariance function, in the sense that if $R$ is decomposed into two functions $R(\tau) = \alpha_1 R_1(\tau) + \alpha_2 R_2(\tau)$, then
$$S_{xx}(\omega) = \alpha_1 S_1(\omega) + \alpha_2 S_2(\omega).$$
The integrated spectrum or power spectral distribution $F(\omega)$ is defined as
$$F(\omega) = \int_{-\infty}^{\omega} S_{xx}(\omega') \, \frac{d\omega'}{2\pi}.$$
1.4 CROSS-SPECTRAL DENSITY
Given two signals $x(t)$ and $y(t)$, each of which possesses a power spectral density $S_{xx}(\omega)$ and $S_{yy}(\omega)$, it is possible to define a cross-spectral density (CSD) given by
$$S_{xy}(\omega) = \int_{-\infty}^{\infty} R_{xy}(\tau) e^{-i\omega\tau} \, d\tau,$$
where $R_{xy}(\tau) = \langle x(t) \, y(t + \tau) \rangle$ is the cross-correlation of $x$ and $y$. The cross-spectral density (or "cross power spectrum") is thus the Fourier transform of the cross-correlation function.
By an extension of the Wiener-Khinchin theorem, the Fourier transform of the cross-spectral density is the cross-covariance function. In light of this, the PSD is seen to be a special case of the CSD for $x(t) = y(t)$.
For discrete signals $x_n$ and $y_n$, the relationship between the cross-spectral density and the cross-covariance is
$$S_{xy}(\omega) = \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} R_{xy}(\tau_n) e^{-i\omega\tau_n} \, \Delta\tau.$$
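A simple numerical illustration (our sketch; here the CSD is estimated directly from the DFTs rather than by transforming a cross-correlation, and the signals are invented) shows the CSD picking out a frequency component shared by two noisy signals, and reducing to the real-valued PSD when the two signals coincide:

```python
import numpy as np

def csd(x, y):
    """Cross-spectral density estimate from the DFTs: S_xy = conj(X) * Y / N.
    For x == y this reduces to the ordinary (real, nonnegative) periodogram."""
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    return np.conj(X) * Y / len(x)

rng = np.random.default_rng(3)
t = np.arange(2048)
common = np.sin(2 * np.pi * t / 64)            # shared component -> DFT bin 32
x = common + 0.5 * rng.normal(size=t.size)     # two independently noisy
y = common + 0.5 * rng.normal(size=t.size)     # observations of it

S_xy = csd(x, y)
S_xx = csd(x, x)

# the CSD magnitude peaks at the bin of the component common to both signals
peak_bin = int(np.argmax(np.abs(S_xy)))
```

The independent noise decorrelates across the two signals, so only the shared sinusoid survives strongly in $|S_{xy}|$, while $S_{xx}$ has an identically zero imaginary part, as a PSD must.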
1.5 SPECTRAL DENSITY ESTIMATION
The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram. The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy method can also be used.
The spectral density of $x(t)$ and the autocorrelation of $x(t)$ form a Fourier transform pair (for the PSD versus the ESD, different definitions of the autocorrelation function are used).
One of the results of Fourier analysis is Parseval's theorem, which states that the area under the energy spectral density curve is equal to the area under the square of the magnitude of the signal, i.e., the total energy:
$$\int_{-\infty}^{\infty} |x(t)|^2 \, dt = \int_{-\infty}^{\infty} |\hat{x}(f)|^2 \, df.$$
The above theorem holds true in the discrete case as well. A similar result holds for power: the area under the power spectral density curve is equal to the total signal power, which is $R(0)$, the autocorrelation function at zero lag. This is also (up to a constant which depends on the normalization factors chosen in the definitions employed) the variance of the data comprising the signal.
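This power-domain identity is easy to confirm for a sampled signal: the summed PSD bins equal the zero-lag autocorrelation, i.e., the variance. A minimal check (our sketch; normalization chosen so the bins sum directly to the power):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=4096)
x -= x.mean()                        # zero-mean signal, so power = variance

# power per DFT bin, normalized so the bins sum to the mean-square value
psd_vals = np.abs(np.fft.fft(x)) ** 2 / len(x) ** 2

total_power_freq = float(psd_vals.sum())   # "area under the PSD curve"
r0 = float(np.mean(x * x))                 # autocorrelation at zero lag
```

The two quantities agree to floating-point accuracy, illustrating the power form of Parseval's theorem for discrete data.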
Most "frequency" graphs really display only the spectral density. Sometimes the complete frequency spectrum is graphed in two parts: "amplitude" versus frequency (which is the spectral density) and "phase" versus frequency (which contains the rest of the information from the frequency spectrum). The original signal cannot be recovered from the spectral density part alone; the "temporal information" is lost.
The spectral centroid of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts.
The spectral edge frequency of a signal is an extension of the previous concept to any proportion instead of two equal parts.
Spectral density is a function of frequency, not a function of time. However, the spectral density of small "windows" of a longer signal may be calculated, and plotted versus time associated with the window. Such a graph is called a spectrogram. This is the basis of a number of spectral analysis techniques such as the short-time Fourier transform and wavelets.
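A spectrogram in this sense can be built directly from windowed periodograms. The sketch below (our illustration; the window and hop sizes are arbitrary choices) computes one for a signal whose dominant frequency changes halfway through, and the dominant DFT bin moves accordingly from one window to the next:

```python
import numpy as np

def spectrogram(x, win=256, hop=128):
    """Spectral density of short windows of a longer signal, laid out
    versus time: each column is the periodogram of one Hann-windowed slice."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.stack([np.abs(np.fft.rfft(f)) ** 2 / win for f in frames], axis=1)

# a signal whose dominant frequency quadruples halfway through
t = np.arange(4096)
x = np.concatenate([np.sin(2 * np.pi * t / 64),   # period 64 -> bin 4 of 256
                    np.sin(2 * np.pi * t / 16)])  # period 16 -> bin 16 of 256

S = spectrogram(x)
early_peak = int(np.argmax(S[:, 0]))    # dominant bin in an early window
late_peak = int(np.argmax(S[:, -1]))    # dominant bin in a late window
```

Each column is a short-time PSD, so plotting the columns against window start time gives exactly the spectrogram described above, with the frequency jump visible as the peak moving from bin 4 to bin 16.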
In radiometry and colorimetry (or color science more generally), the spectral power distribution (SPD) of a light source is a measure of the power carried by each frequency or "color" in a light source. The light spectrum is usually measured at points (often 31) along the visible spectrum, in wavelength space instead of frequency space, which makes it not strictly a spectral density. Some spectrophotometers can measure increments as fine as one to two nanometers. Values are used to calculate other specifications and then plotted to demonstrate the spectral attributes of the source. This can be a helpful tool in analyzing the color characteristics of a particular source.
Electrical engineering: The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote-sensing technology. Much effort has been expended and millions of dollars spent on developing and producing electronic instruments called "spectrum analyzers" for aiding electrical engineers and technicians in observing and measuring the power spectra of signals. The cost of a spectrum analyzer varies depending on its frequency range, its bandwidth, and its accuracy. The higher the frequency range (S-band, C-band, X-band, Ku-band, K-band, Ka-band, etc.), the more difficult the components are to make, assemble, and test, and the more expensive the spectrum analyzer is. Also, the wider the bandwidth that a spectrum analyzer possesses, the more costly it is, and the capability for more accurate measurements increases costs as well.
The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density. These devices work in low frequencies and with small bandwidths.
A literature survey is an important step in the software development process. Before developing the tool, it is necessary to determine factors such as time, economy, and company strength. Once these are satisfied, the next step is to determine which operating system and language should be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books, or from websites. Before building the system, the above considerations are taken into account in developing the proposed system.
What is a computer virus: A virus is a computer program that, either with your help or by attaching itself to some other program, is able to move from one computer to another. These programs are typically malicious rather than beneficial, and even when they carry no payload they consume system resources. Several classes of code fall under the category "virus". Not all of them are strictly viruses in technical terms; some of them are worms and Trojan horses.
What is a computer worm: Worms are self-replicating programs that do not infect other programs as viruses do; instead, they create copies of themselves, which in turn create copies again, thus hogging memory resources and clogging the network. Worms are usually seen on networks and multiprocessing operating systems.
2.1 ACTIVE WORMS
Active worms are similar to biological viruses in terms of their infectious and self-propagating nature. They identify vulnerable computers, infect them, and the worm-infected computers then propagate the infection to other vulnerable computers. In order to understand worm behavior, we first need to model it. With this understanding, effective detection and defense schemes can be developed to mitigate the impact of worms. For this reason, tremendous research effort has focused on this area.
Worms use various scan mechanisms to propagate themselves effectively. The basic form can be classified as having a Pure Random Scan (PRS) nature: a worm-infected computer continuously scans a set of randomly selected IP addresses to find new vulnerable computers. Other worms propagate themselves more effectively than PRS worms using various methods, e.g., network port scanning, email, file sharing, peer-to-peer (P2P) networks, and instant messaging (IM). Worms can also use different scan strategies during different stages of propagation. To increase efficiency, they may use a local network or a hit list of vulnerable computers identified in an earlier stage to infect previously identified targets. They may also use DNS, network topology, and routing information to identify active computers instead of randomly scanning IP addresses.
Researchers have also studied divide-and-conquer scanning, in which worms partition the IP address space to speed up propagation. Other work has examined potential "flash" worms that can spread far more quickly and stealthily than traditional random-scan worms, the difficulty of containing worms that use precomputed topology information and deployment schedules, and worm propagation through sensor networks. The C-Worm studied here instead avoids detection by camouflaging its scan traffic from the defense system. Closely related, but orthogonal to our work, are polymorphic worms, which are able to change their binary representation as part of their propagation. This can be achieved with self-encryption mechanisms or semantics-preserving code manipulation techniques. The C-Worm also shares some similarity with stealthy port-scan attacks, which attempt to discover which services are available on a target system while avoiding detection; this is accomplished by decreasing the port-scan rate, disguising the origin of the attackers, and so on. Due to its self-propagating nature, however, the C-Worm must use more sophisticated mechanisms to manipulate its scan traffic volume over time in order to avoid detection.
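The PRS behavior described above can be sketched with a toy simulation (all parameters, e.g., a 2^16 address space and 2,000 vulnerable hosts, are invented for illustration): every infected host probes random addresses each time step, producing the familiar S-shaped growth of the infected population:

```python
import random

def simulate_prs(address_space=2 ** 16, vulnerable=2000, scan_rate=20,
                 steps=50, seed=0):
    """Discrete-time simulation of a Pure Random Scan worm: every infected
    host probes `scan_rate` uniformly random addresses per step; probing a
    not-yet-infected vulnerable address infects it."""
    rng = random.Random(seed)
    vuln = set(rng.sample(range(address_space), vulnerable))
    infected = {next(iter(vuln))}          # one initially infected host
    history = []
    for _ in range(steps):
        for _host in list(infected):       # snapshot: new victims scan next step
            for _probe in range(scan_rate):
                target = rng.randrange(address_space)
                if target in vuln:
                    infected.add(target)   # idempotent if already infected
        history.append(len(infected))
    return history

history = simulate_prs()
```

The infected count grows slowly at first, then roughly exponentially, then saturates as few vulnerable hosts remain, which is the exponential-trend signature that traditional detection schemes rely on and that the C-Worm deliberately suppresses.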
2.2 WORM DETECTION
Worm detection has been intensively studied in the past and can be broadly classified into two categories: "host-based" detection and "network-based" detection. Host-based detection systems detect worms by monitoring, collecting, and analyzing worm behaviors on end hosts. Since worms are malicious programs that execute on these machines, analyzing the behavior of worm executables plays an important role in host-based detection systems; many detection schemes fall into this category. In contrast, network-based detection systems detect worms primarily by monitoring, collecting, and analyzing the scan traffic (messages sent to identify vulnerable computers) generated by worm attacks; many detection schemes fall into this category as well. Ideally, security vulnerabilities should be prevented from arising in the first place, a problem that must be addressed by the programming language community. However, while vulnerabilities exist and threaten widespread damage, it is also important to focus on network-based detection, as this paper does, to detect widespread propagation of the C-Worm.
In order to rapidly and accurately detect Internet-wide, large-scale propagation of active worms, it is imperative to monitor and analyze traffic at multiple locations on the Internet to detect suspicious traffic generated by worms. The widely adopted worm detection framework consists of multiple distributed monitors and a worm detection center that controls them. This framework is similar to those of other existing worm detection systems, such as the Cyber Center for Disease Control, the Internet Motion Sensor, the SANS ISC (Internet Storm Center), Internet sinks, and network telescopes. The monitors are distributed across the Internet and can be deployed at end hosts, routers, firewalls, etc. Each monitor passively records irregular port-scan traffic, such as connection attempts to a range of invalid IP addresses (IP addresses not in use) and to restricted service ports. Periodically, the monitors send their traffic logs to the detection center. The detection center analyzes the traffic logs and determines whether there is suspicious scan traffic toward blocked ports or invalid IP addresses. Network-based detection schemes usually analyze the collected scan traffic data by applying certain decision rules to detect worm propagation. For example, Venkataraman et al., Wu et al., and Zhou et al. proposed schemes to examine statistics of scan traffic volume; Lakhina et al. presented a detection scheme based on studying patterns of dramatic increase in scan traffic; and other schemes study other features of scan traffic, such as the distribution of destination addresses. Other work studies worms that attempt to adopt new traffic patterns in order to avoid detection.
In addition to the schemes described above, which are based on global scan traffic monitoring and the detection of abnormal traffic behavior, there are other worm detection and defense schemes, such as sequential hypothesis testing for detecting worm-infected computers and payload-based worm signature detection. In addition, Cai et al. presented both theoretical modeling and experimental results on a collaborative worm-signature generation system that employs distributed fingerprint filtering and aggregation over multiple edge networks. Dantu et al. provided a state-space feedback control model that detects and controls the spread of viruses or worms by measuring the rate at which an infected computer makes new connections. Despite the different approaches mentioned above, we believe that detecting widespread abnormal scan traffic behavior is still a useful weapon against worms, and that in practice a multifaceted defense has advantages.
2.3 PAPERS REFERRED:
Code-Red: a case study on the spread and victims of an Internet worm: On July 19, 2001, more than 359,000 computers connected to the Internet were infected with the Code-Red (CRv2) worm in less than 14 hours. The estimated cost of this epidemic, including subsequent strains of Code-Red, is in excess of $2.6 billion. Despite the global damage caused by this attack, there have been few serious attempts to characterize the spread of the worm, partly due to the challenge of collecting global information about worms. Using a technique that enables global detection of worm spread, the paper collected and analyzed data over a period of 45 days beginning July 2, 2001, to determine the characteristics of the spread of Code-Red across the Internet. The paper describes the methodology used to trace the spread of Code-Red and then presents the results of the trace analyses. It first details the spread of the Code-Red and CodeRedII worms in terms of infection and deactivation rates; even without being optimized for spread of infection, Code-Red infection rates peaked at more than 2,000 hosts per minute. It then examines the properties of the infected host population, including geographic location, weekly and diurnal time effects, top-level domains, and Internet service providers. The analysis demonstrates that the worm was an international event, shows time-of-day effects, and finds that, although most attention focused on large corporations, the Code-Red worm preyed primarily on home users and small businesses. The paper qualifies the effects of DHCP on measurements of infected hosts and determines that IP addresses are not an accurate measure of the spread of a worm on timescales longer than 24 hours. Finally, the Code-Red experience demonstrates that wide-ranging Internet vulnerabilities can be exploited quickly and dramatically, and that additional mitigation techniques are needed to protect Internet hosts from worms.
Inside the Slammer Worm: The Slammer worm spread so quickly that human response was ineffective. Although it carried a benign payload when it struck in January 2003, its disruptive capacity was surprising. Why was it so effective, and what new challenges does this new breed of worm pose?
An Effective Architecture and Algorithm for Detecting Worms with Various Scan Techniques: Since the days of the Morris worm, the spread of Internet malware has been an imminent danger. Worms use various scanning methods to spread rapidly, and a worm that carefully selects scan targets can cause more damage and evade detection better than a randomly scanning worm. This paper analyzes various scan techniques, proposes a generic worm monitoring architecture for detecting malicious activities, and proposes and evaluates a detection algorithm using real-world traces and simulations. The results show that the proposed solution can detect worm activity when only 4% of the vulnerable machines are infected, which gives grounds for optimism in the future battle against worm attacks.
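One simple principle behind such monitoring systems is that worm scan traffic compounds over time while benign background traffic merely fluctuates. The sketch below is a generic trend-based detector illustrating that principle; it is not the cited paper's algorithm, and the window size and growth threshold are hypothetical.

```python
# Hedged sketch: raise an alarm when monitored scan counts grow by more than
# `growth_threshold` for `window` consecutive samples.  This illustrates the
# exponential-growth detection principle only; parameters are hypothetical.
def detect_exponential_growth(scan_counts, window=5, growth_threshold=1.2):
    """Return the index at which sustained exponential growth is first
    confirmed, or None if the series never exhibits it."""
    consecutive = 0
    for i in range(1, len(scan_counts)):
        prev, cur = scan_counts[i - 1], scan_counts[i]
        if prev > 0 and cur / prev >= growth_threshold:
            consecutive += 1
            if consecutive >= window:
                return i
        else:
            consecutive = 0           # fluctuation breaks the streak
    return None

# benign background traffic fluctuates; worm traffic compounds
background = [100 + (i % 3) for i in range(20)]
worm = [int(10 * 1.5 ** i) for i in range(12)]
assert detect_exponential_growth(background) is None
assert detect_exponential_growth(worm) == 5
```

Real detectors must also cope with diurnal cycles and bursty-but-benign traffic, which is why schemes such as sequential hypothesis testing are used in practice.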
Modeling and Simulation Study of the Propagation and Defense of Internet E-mail Worms: As many people rely on email for daily business and personal communication, Internet email worms constitute one of the major security threats to our society. Unlike scanning worms such as Code Red or Slammer, email worms spread over a logical network defined by email-address relationships, which makes traditional epidemic models invalid for modeling their propagation. Moreover, topological epidemic models greatly overestimate epidemic spreading speed in topological networks because of their implicit assumption of homogeneous mixing. For these reasons, the paper studies email worm propagation through simulation. It presents an email worm simulation model that accounts for the behavior of email users, including email-checking times and the probability of opening an email attachment. Observations of email lists on the Internet suggest that the network of email addresses has a heavy-tailed node-degree distribution, which the paper models as a power-law network. To study the effect of topology, the paper compares email worm propagation on the power-law topology with two other topologies: a small-world topology and a random-graph topology. The impact of the power-law topology on email worm spreading is mixed: email worms spread more quickly on a power-law topology than on a small-world or random-graph topology, but an immunization defense is most effective on the power-law topology.
Email Worm Modeling and Defense: Email worms remain one of the main Internet security problems. This work presents an email worm model that accounts for the behavior of email users, including the times at which they check email and the probability that they open email attachments. Email worms spread over a logical network defined by email-address relationships, which plays an important role in determining the dynamics of email worm propagation. The paper observes that the email network exhibits a heavy-tailed node-degree distribution. It compares email worm propagation on three topologies, power-law, small-world, and random-graph, and then studies how topology affects an immunization defense against email worms. The impact of the power-law topology on email worm spreading is mixed: email worms spread faster on a power-law topology than on a small-world or random-graph topology, but the immunization defense is more effective on the power-law topology than on the other two.
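The topology comparison described in the two papers above can be sketched with a small simulation. The following is an illustrative sketch only, not the papers' model: it builds a preferential-attachment graph (which yields a power-law-like degree distribution) and runs a simple probabilistic infection process over it, where the infection probability stands in for a user opening an attachment. All parameters are hypothetical.

```python
import random

def preferential_attachment(n, m, rng):
    """Barabasi-Albert-style graph: each new node links to m existing nodes
    chosen with probability proportional to their degree (via a repeated-
    endpoints list).  Produces a power-law-like degree distribution."""
    adj = {i: set() for i in range(n)}
    repeated = list(range(m))             # seed nodes
    for v in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        for u in chosen:
            adj[v].add(u)
            adj[u].add(v)
            repeated.extend([u, v])       # degree-proportional weighting
    return adj

def spread(adj, p_open, rng, start=0, ticks=30):
    """Simple epidemic over a graph: each tick, every infected node infects
    each susceptible neighbor with probability p_open (the stand-in for the
    user opening a worm attachment)."""
    infected = {start}
    history = [1]
    for _ in range(ticks):
        new = set()
        for v in infected:
            for u in adj[v]:
                if u not in infected and rng.random() < p_open:
                    new.add(u)
        infected |= new
        history.append(len(infected))
    return history

g = preferential_attachment(500, 2, random.Random(42))
curve = spread(g, 0.3, random.Random(7))
```

Running the same `spread` over a random graph with the same average degree lets one compare spreading speed across topologies, which is the experiment the papers describe; high-degree hubs in the power-law graph accelerate the early spread and are also the most valuable nodes to immunize.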
Peer-to-Peer System-based Active Worm Attacks: Modeling and Analysis: Recently reported active worm incidents show that an active worm can spread automatically and flood the Internet in a very short period of time. Given the recent surge of peer-to-peer (P2P) systems with large numbers of users, P2P systems can become a potential vehicle for active worms to achieve rapid propagation on the Internet. This paper addresses the effects of P2P systems on active worm propagation. In particular, it 1) defines a P2P-system-based active worm attack model and studies two attack strategies (an offline strategy and an online strategy) under that model, and 2) develops an analytical approach for analyzing active worm propagation under the defined attack model and conducts a comprehensive study of how P2P system parameters, such as system size, topology degree, and the structured or unstructured nature of the system, affect active worm propagation. Based on numerical results, the paper observes that a P2P-based attack can significantly worsen attack effects (i.e., improve attack performance) and that the speed of worm propagation is highly sensitive to P2P system parameters. The authors believe this work can provide important guidelines for the design and control of P2P systems and for defense against active worms.
It stands for MATrix LABoratory
It is developed by The MathWorks, Inc. (http://www.mathworks.com)
It is an interactive, integrated environment
for numerical computations
for symbolic computations (via Maple)
for scientific visualizations
It is a high-level programming language
Programs run in interpreted, as opposed to compiled, mode
Characteristics of MATLAB:
Programming language based (principally) on matrices.
Slow (compared with Fortran or C) because it is an interpreted, i.e. not pre-compiled, language. Avoid for loops; instead use the vector form (see the section on vector techniques below) whenever possible.
Automatic memory management, i.e., you don't have to declare arrays in advance.
Intuitive, easy to use.
Compact (array handling is Fortran 90-like).
Shorter program development time than traditional programming languages such as Fortran and C.
Can be converted into C code via the MATLAB compiler for better efficiency.
Many application-specific toolboxes available.
Coupled with Maple for symbolic computations.
On shared-memory parallel computers such as the SGI Origin2000, certain operations are processed in parallel automatically when the computation load warrants it.
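The advice above to avoid for loops is the key performance habit in any interpreted array language. In MATLAB the contrast is between `s = 0; for i = 1:n, s = s + x(i); end` and the vectorized `s = sum(x)`. Since this document contains no MATLAB code, the sketch below demonstrates the same principle in Python, another interpreted language: an explicit interpreted loop versus a single bulk operation whose inner loop runs in compiled library code.

```python
import timeit

# Illustration of the vectorization advice: in an interpreted language, each
# loop iteration pays interpreter overhead, while a bulk operation executes
# its inner loop in compiled code.  The data size is arbitrary.
data = list(range(100_000))

def loop_sum(xs):
    total = 0
    for x in xs:          # each iteration is interpreted individually
        total += x
    return total

def vector_sum(xs):
    return sum(xs)        # one call; the loop runs in compiled code

assert loop_sum(data) == vector_sum(data)
t_loop = timeit.timeit(lambda: loop_sum(data), number=20)
t_vec = timeit.timeit(lambda: vector_sum(data), number=20)
# the bulk operation is typically several times faster than the explicit loop
```

The same reasoning explains why MATLAB programs built from whole-array operations approach compiled-language speed, while element-by-element loops do not.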