Aim: The distributed denial of service (DDoS) attack is a continuous, critical threat to the Internet. Moving up from the lower layers, new application-layer DDoS attacks that use legitimate HTTP requests to overwhelm victim resources are harder to detect. The problem is more serious when such attacks mimic, or occur during, the flash crowd event of a popular Website. Focusing on the detection of these new DDoS attacks, a scheme based on document popularity is introduced. An Access Matrix is defined to capture the spatial-temporal patterns of a normal flash crowd. Principal component analysis (PCA) and independent component analysis (ICA) are applied to abstract the multidimensional Access Matrix. A novel anomaly detector based on the hidden semi-Markov model (HsMM) is proposed to describe the dynamics of the Access Matrix and to detect the attacks. The entropy of document popularity fitted to the model is used to detect potential application-layer DDoS attacks. Numerical results based on real Web traffic data are presented to demonstrate the effectiveness of the proposed method.
Many studies have noticed this type of attack and have proposed different schemes (e.g., network measurement or anomaly detection) to protect networks and equipment from bandwidth attacks; as a result, it is no longer as easy as in the past for attackers to launch DDoS attacks at the network layer.
We compared the performance of the proposed scheme with a moving-average approach to anomaly detection. Different algorithms have been proposed to achieve this objective. This paper applies FastICA, which is widely used for its good performance and fast convergence during parameter estimation.
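For reference, the moving-average baseline used for comparison can be sketched as follows (a minimal illustration only; the window size and the threshold factor `k` are assumed values, not taken from the paper):

```python
from collections import deque

def moving_average_detector(series, window=5, k=3.0):
    """Flag points that deviate from the trailing moving average.

    A point is flagged anomalous when it differs from the trailing
    mean by more than k times the trailing standard deviation.
    """
    buf = deque(maxlen=window)
    flags = []
    for x in series:
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            std = var ** 0.5
            flags.append(abs(x - mean) > k * std + 1e-9)
        else:
            flags.append(False)  # not enough history yet
        buf.append(x)
    return flags
```

A steady request rate followed by a sudden spike would be flagged only at the spike, which is why such a baseline struggles with attacks hidden inside a legitimate flash crowd.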
Creating defenses against such attacks requires monitoring dynamic network activity in order to obtain timely and significant information. While most current efforts focus on detecting Net-DDoS attacks under stable background traffic, this paper proposes a detection architecture that monitors Web traffic in order to reveal dynamic shifts in normal burst traffic, which may signal the onset of App-DDoS attacks during a flash crowd event.
1) Multidimensional Data Processing
2) Hidden Semi-Markov Model
3) Self-Adaptive Scheme
1) Multidimensional Data Processing:
The basic goal of PCA is to reduce the dimension of the data. Indeed, it can be proven that the representation given by PCA is an optimal linear dimension-reduction technique in the mean-square sense. Such a reduction in dimension has important effects: the computational overhead of the subsequent processing stages is reduced, and the noise that is not contained in the first components is removed. The main reasons for using PCA in this paper are:
1) the principal components are representative of the high-dimensional data of the problem without sacrificing valuable information;
2) it does not require any special distributional assumption, unlike many statistical methods that assume a normal distribution or resort to the central limit theorem.
2) Hidden Semi-Markov Model:
The hidden semi-Markov model (HsMM) can describe most practical stochastic signals, including non-stationary and non-Markovian ones. It has been widely applied in many areas, such as mobility tracking in wireless networks, activity recognition in smart environments, and inference for structured video sequences.
3) Self-Adaptive Scheme:
Based on our experiments, we found that normal users' access behavior and the Website structure exhibit hours-long stability, regardless of whether flash crowd events occur during the period; i.e., the popularity of documents is mainly affected by the daily life of the users or by information updates of the Web pages.
The scheme is divided into three phases: data preparation, training, and monitoring. The main purpose of data preparation is to compute the Access Matrix (AM) from the logs of the Web server. The training phase includes the three parts given here.
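The data-preparation step can be illustrated with a minimal sketch (the `(timestamp, document)` log format and the window length are assumptions for illustration; real Web server logs would need fuller parsing):

```python
def access_matrix(log_entries, window=60):
    """Build an Access Matrix from (timestamp, document) request pairs.

    Rows correspond to consecutive time windows, columns to documents;
    each cell counts the requests for that document in that window,
    i.e. the document's popularity over time.
    """
    docs = sorted({doc for _, doc in log_entries})
    col = {d: i for i, d in enumerate(docs)}
    t0 = min(t for t, _ in log_entries)
    n_rows = (max(t for t, _ in log_entries) - t0) // window + 1
    matrix = [[0] * len(docs) for _ in range(n_rows)]
    for t, doc in log_entries:
        matrix[(t - t0) // window][col[doc]] += 1
    return docs, matrix
```

Each row of the resulting matrix is one observation vector that the PCA/ICA pipeline below would consume.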
The steps are as follows.
a) Compute the average matrix and difference matrix, respectively.
b) Compute the eigenvectors and eigenvalues of the covariance matrix.
c) Sort the eigenvalues and select the first few eigenvectors; the number retained is specified in this paper.
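Steps a)-c) can be sketched as follows for the leading component (an illustrative stdlib-only implementation using power iteration, not the paper's exact code):

```python
def pca_top_component(data, iters=200):
    """Sketch of steps a)-c): centre the data, form the covariance
    matrix, and extract the leading eigenpair by power iteration."""
    n, d = len(data), len(data[0])
    # a) average vector and difference (mean-centred) matrix
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    diff = [[row[j] - mean[j] for j in range(d)] for row in data]
    # b) covariance matrix of the centred data
    cov = [[sum(diff[i][a] * diff[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    # c) leading eigenvector via power iteration
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(v[a] * sum(cov[a][b] * v[b] for b in range(d))
                 for a in range(d))
    return eigval, v
```

Sorting all eigenpairs and keeping the first few (step c) is the same idea applied repeatedly with deflation; a library routine would normally be used in practice.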
The steps are as follows.
a) Use the outputs of the PCA module (i.e., the uncorrelated principal-component dataset) to estimate the unmixing matrix with the ICA algorithm.
b) Transform the reduced dataset into independent signals.
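A one-unit FastICA update with the tanh nonlinearity can be sketched as follows (an illustration only; it assumes the two input signals are already centred and whitened, as they would be after the PCA stage):

```python
import math, random

def fastica_one_unit(x1, x2, iters=100):
    """One-unit FastICA fixed-point iteration (tanh nonlinearity).

    Returns a unit weight vector w such that w[0]*x1 + w[1]*x2
    estimates one independent component of the whitened inputs.
    """
    n = len(x1)
    w = [1.0, 0.5]
    for _ in range(iters):
        # fixed-point update: w <- E[x g(w.x)] - E[g'(w.x)] w
        wx = [w[0] * a + w[1] * b for a, b in zip(x1, x2)]
        g = [math.tanh(u) for u in wx]
        gp = [1.0 - t * t for t in g]  # derivative of tanh
        mgp = sum(gp) / n
        w = [sum(a * t for a, t in zip(x1, g)) / n - mgp * w[0],
             sum(b * t for b, t in zip(x2, g)) / n - mgp * w[1]]
        norm = math.hypot(w[0], w[1])
        w = [w[0] / norm, w[1] / norm]
    return w
```

Estimating the full unmixing matrix repeats this update for each component with an orthogonalization step between units; production code would use a library implementation.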
a) Use the outputs of ICA module as the model training data set to estimate the parameters of HsMM.
b) Compute the entropy of the training data set and the threshold.
The monitoring phase includes the following steps:
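As a simplified stand-in for the HsMM-based detector (a full HsMM is beyond a short sketch), the entropy of the document-popularity distribution in each window can be computed and compared against a threshold learned from the training data:

```python
import math

def popularity_entropy(counts):
    """Shannon entropy (bits) of a document-popularity distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def detect(window_counts, mean_h, std_h, k=3.0):
    """Flag a window whose popularity entropy deviates from the
    training mean by more than k standard deviations (k is an
    assumed value for illustration)."""
    h = popularity_entropy(window_counts)
    return abs(h - mean_h) > k * std_h
```

Intuitively, an App-DDoS attack that hammers a few documents collapses the entropy of the popularity distribution, while a normal flash crowd preserves its shape.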
Hardware Requirements Specification:
Processor : Intel Pentium family
Processor Speed : 250 MHz to 667 MHz
RAM : 128 MB to 512 MB
Hard Disk : 4 GB or higher
Keyboard : Standard 104-key enhanced keyboard
Software Requirements Specification:
Operating System : Windows XP
Technology : Java
Front End : AWT/Swing
Tools Used : MyEclipse
Design:
This document plays a vital role in the software development life cycle (SDLC), as it describes the complete requirements of the system. It is meant for use by the developers and will be the basis during the testing phase. Any changes made to the requirements in the future will have to go through a formal change-approval process.
The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, "A Spiral Model of Software Development and Enhancement." This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters.
The steps for Spiral Model can be generalized as follows:
The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external and internal users and other aspects of the existing system.
A preliminary design is created for the new system.
A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
A second prototype is evolved by a fourfold procedure:
Evaluating the first prototype in terms of its strengths, weaknesses, and risks.
Defining the requirements of the second prototype.
Planning and designing the second prototype.
Constructing and testing the second prototype.
At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculations, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
The following diagram shows how the spiral model works:
Fig 1.0-Spiral Model
N-tier applications can easily implement the concepts of distributed application design and architecture. N-tier applications provide specific advantages that are vital to the business continuity of the enterprise. Typical features of a real-life n-tier application may include the following:
Availability and Scalability
An n-tier application helps us distribute the overall functionality into various tiers or layers:
Presentation Layer
Business Rules Layer
Data Access Layer
Each layer can be developed independently of the others, provided that it adheres to the standards and communicates with the other layers as per the specifications.
Fig 1.1-N-Tier Architecture
The Unified Modeling Language (UML) allows the software engineer to express an analysis model using a modeling notation that is governed by a set of syntactic, semantic, and pragmatic rules.
A UML system is represented using five different views that describe the system from distinctly different perspectives. Each view is defined by a set of diagrams, as follows.
User Model View
This view represents the system from the user's perspective.
The analysis representation describes a usage scenario from the end user's perspective.
Structural Model View
In this model, the data and functionality are viewed from inside the system.
This view models the static structures.
Behavioral Model View
It represents the dynamic (behavioral) aspects of the system, depicting the interactions and collaborations between the various structural elements described in the user model and structural model views.
Implementation Model View
In this view, the structural and behavioral parts of the system are represented as they are to be built.
Environmental Model View
In this view, the structural and behavioral aspects of the environment in which the system is to be implemented are represented.
Java technology is both a programming language and a platform.
The Java Programming Language
The Java programming language is a high-level language that can be characterized by all of the following buzzwords: simple, object-oriented, distributed, multithreaded, dynamic, architecture-neutral, portable, high-performance, robust, and secure.
With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. First, the compiler translates a program into an intermediate language called Java byte codes, the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter then parses and runs each Java byte code instruction on the computer.
You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Java byte codes help make "write once, run anywhere" possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM.
The Java platform has two components:
The Java Virtual Machine (Java VM)
The Java Application Programming Interface (Java API)
You've already been introduced to the Java VM. It's the base for the Java platform and is ported onto various hardware-based platforms.
Finally, we decided to proceed with the implementation using Java networking, and for dynamically updating the cache table we chose an MS Access database.
Java has two components: a programming language and a platform.
Java is also unusual in that each Java program is both compiled and interpreted. Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.
Every Java interpreter, whether it's a Java development tool or a Web browser that can run Java applets, is an implementation of the Java VM. The Java VM can also be implemented in hardware.
Testing is the process of detecting errors. It plays a very critical role in quality assurance and in ensuring the reliability of software. The results of testing are also used later on, during maintenance.
Levels of Testing
In order to uncover the errors present in different phases, we have the concept of levels of testing. The basic levels of testing are Unit Testing, Integration Testing, System Testing, and Acceptance Testing.
Unit Testing
Unit testing focuses verification effort on the smallest unit of software.
Unit Testing in this project: In this project, each service can be thought of as a module. There are many modules, such as Login, New Registration, Change Password, Post Question, and Modify Answer. Each module was checked during development and on completion so that it works without any error. The inputs are validated when accepted from the user.
White Box Testing
White box testing mainly focuses on the internal workings of the product. Here one part is taken at a time and tested thoroughly at the statement level to find the maximum possible errors. Loops are also exercised so that each part is tested within a range; that is, the part is executed at its boundary values and within its bounds for the purpose of testing.
White Box Testing in this Project : I tested step wise every piece of code, taking care that every statement in the code is executed at least once. I have generated a list of test cases, sample data, which is used to check all possible combinations of execution paths through the code at every module level.
Black Box Testing
Here the module is treated as a black box that takes some input and generates output. Outputs for a given set of input combinations are forwarded to other modules.
Black Box Testing in this Project: I tested each and every module by considering each module as a unit. I have prepared some set of input combinations and checked the outputs for those inputs. Also I tested whether the communication between one module to other module is performing well or not.
After unit testing, we have to perform integration testing. The goal here is to see whether the modules can be integrated properly. The input to this level is the set of unit-tested modules.
Integration testing is classified into two types:
Top-Down Integration Testing.
Bottom-Up Integration Testing.
Integration Testing in this project: In this project, integrating all the modules forms the main system; that is, I used Bottom-Up Integration Testing. When integrating the modules, I checked whether the integration affects the working of any of the services by giving different combinations of inputs with which the services ran perfectly before integration.
System testing is an important phase, without which the system cannot be released to the end users. It is aimed at ensuring that all processes behave accurately according to the specification.
System Testing in this project: Here entire 'system' has been tested against requirements of project and it is checked whether all requirements of project have been satisfied or not.
Alpha Testing: This refers to the system testing that is carried out by the test team within the organization.
Beta Testing: This refers to the system testing that is performed by a select group of friendly customers.
Acceptance Testing in this project: In this project, I collected some data belonging to the University and tested whether the project works correctly on it.