Internet services and applications have become an inextricable part of daily life, enabling communication and the management of personal information from anywhere. To accommodate this increase in application and data complexity, web services have moved to a multi-tiered design wherein the web server runs the application front-end logic and data is outsourced to a database or file server. In this paper, we present DoubleGuard, an intrusion detection system (IDS) that models the network behavior of user sessions across both the front-end web server and the back-end database. By monitoring both web requests and the subsequent database queries, we are able to ferret out attacks that an independent IDS would not be able to identify. Furthermore, we quantify the limitations of any multitier IDS in terms of training sessions and functionality coverage. We implemented DoubleGuard using an Apache web server with MySQL and lightweight virtualization. We then collected and processed real-world traffic over a 15-day period of system deployment in both dynamic and static web applications. Finally, using DoubleGuard, we were able to expose a wide range of attacks with 100% accuracy while maintaining 0% false positives for static web services and 0.6% false positives for dynamic web services.
An application is computer software designed to help the user perform specific tasks. Applications can be classified as standalone applications and web-based applications.
A standalone application is an application that runs locally and is accessed by a single user on a personal computer.
A web application is an application that is accessed by users over a network such as the Internet or an intranet.
A web application framework (WAF) is a software framework that is designed to support the development of dynamic websites, web applications and web services. The framework aims to alleviate the overhead associated with common activities performed in Web development.
One of the basic frameworks is MVC (Model-View-Controller), which works in a unidirectional, triangular fashion. The MVC architecture can be described as follows.
Model-view-controller (MVC) is a software architecture that separates the representation of information from the user's interaction with it. The model consists of application data and business rules; the controller mediates input, converting it to commands for the model or view; and a view can be any output representation of data, such as a chart or a diagram. The disadvantages of the MVC architecture are:
It increases the complexity of solutions.
It increases the amount of user-interface code, which makes debugging more complex.
Frequent changes to the model cause frequent updates to the views, which is a burden for the programmer.
MVC applications can be hard to deploy.
MVC is restricted to making changes to data only on a local file system.
Parallel development requires multiple programmers.
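The separation described above can be sketched in a few lines. This is a minimal, illustrative example only; the class and method names are hypothetical and do not come from any particular framework.

```python
# Minimal MVC sketch: the model holds data and business rules, the view
# renders the model's data, and the controller mediates user input,
# converting it into commands for the model. All names are hypothetical.

class Model:
    """Holds application data and business rules."""
    def __init__(self):
        self.items = []

    def add_item(self, name):
        if not name:                       # business rule: reject empty names
            raise ValueError("name required")
        self.items.append(name)

class View:
    """Any output representation of the model's data."""
    def render(self, model):
        return "Cart: " + ", ".join(model.items)

class Controller:
    """Mediates input, converting it to commands for the model or view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle(self, command, arg=None):
        if command == "add":
            self.model.add_item(arg)
        return self.view.render(self.model)

controller = Controller(Model(), View())
print(controller.handle("add", "book"))    # Cart: book
```

Note how the model never touches presentation, and the view never mutates data; the controller is the only component that sees both, which is exactly the coupling that the disadvantages above (view updates on every model change, extra UI code) stem from.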
Multi-tier architecture (often referred to as n-tier architecture) is a client-server architecture in which presentation, application processing, and data management functions are logically separated. It works in a bidirectional, linear fashion. The most widespread use of multi-tier architecture is the three-tier architecture.
Three-tier architecture has the following three tiers:
Presentation tier:
This is the topmost level of the application. The presentation tier displays information related to services such as browsing merchandise, purchasing, and shopping-cart contents. It communicates with the other tiers by outputting results to the browser/client tier and to all other tiers in the network.
Application tier (business logic, logic tier, data access tier, or middle tier):
The logic tier is pulled out of the presentation tier and, as its own layer, controls an application's functionality by performing detailed processing.
Data tier:
This tier consists of database servers, where information is stored and retrieved. This tier keeps data neutral and independent of application servers and business logic. Giving data its own tier also improves scalability and performance.
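The three tiers above can be sketched as separate layers that only talk to their neighbors. This is a toy illustration with hypothetical names: the presentation tier calls the logic tier, which alone talks to the data tier, and the presentation tier never touches the data store directly.

```python
# Three-tier sketch (hypothetical classes): presentation -> logic -> data.

class DataTier:
    """Data tier: stores and retrieves records, neutral of business logic."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def put(self, key, value):
        self._store[key] = value

class LogicTier:
    """Application tier: detailed processing, pulled out of presentation."""
    def __init__(self, data):
        self.data = data

    def purchase(self, user, item):
        cart = self.data.get(user) or []
        cart.append(item)
        self.data.put(user, cart)
        return cart

class PresentationTier:
    """Presentation tier: formats results for the browser/client."""
    def __init__(self, logic):
        self.logic = logic

    def show_cart(self, user, item):
        cart = self.logic.purchase(user, item)
        return f"{user}'s cart: {cart}"

app = PresentationTier(LogicTier(DataTier()))
print(app.show_cart("alice", "shoes"))     # alice's cart: ['shoes']
```

Because each tier depends only on the one below it, the data tier can be scaled or replaced (for example, swapping the in-memory dictionary for a database server) without touching the presentation code.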
An intruder is one who interferes without permission. Intruders are of three types:
Masquerader: an outsider who penetrates the system's access controls to exploit a legitimate user's account.
Misfeasor: a legitimate user who misuses his or her privileges.
Clandestine user: one who seizes supervisory control of the system to evade auditing and access controls.
In computer security, a vulnerability is a weakness that allows an attacker to reduce a system's information assurance. Web-based applications have become a popular means of exposing functionality to large numbers of users by leveraging the services provided by web servers and databases. Vulnerable web-based applications are often exposed to the entire Internet, creating easily exploitable entry points for the compromise of entire networks.
Ideally, the security of web-based applications should be addressed by means of careful design and thorough security testing. Common web application vulnerabilities include:
Remote code execution
Format string vulnerabilities
Cross Site Scripting (XSS)
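The cross-site scripting (XSS) item above can be made concrete with a short sketch. The page template here is hypothetical; the point is that echoing user input into a page unescaped lets an attacker inject script, while escaping the output (here with Python's standard `html.escape`) renders it as inert text.

```python
import html

# XSS sketch: raw interpolation of user input is vulnerable; escaping
# converts markup characters to HTML entities so the browser shows text.

def render_unsafe(comment):
    return "<p>" + comment + "</p>"               # vulnerable: raw input

def render_safe(comment):
    return "<p>" + html.escape(comment) + "</p>"  # entities, not markup

payload = "<script>alert('xss')</script>"
print(render_unsafe(payload))   # script tag survives and would execute
print(render_safe(payload))     # &lt;script&gt;... displayed, not executed
```

The same discipline (never trusting user input to cross a tier unsanitized) applies to the SQL side of a multi-tier application, which is the class of misuse DoubleGuard is designed to observe.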
An intrusion detection system (IDS) is a device or software application that monitors network or system activities for malicious activity or policy violations and produces reports to a management station. A network IDS can be classified into two types: anomaly detection and misuse detection. Anomaly detection first requires the IDS to define and characterize the correct and acceptable static form and dynamic behavior of the system, which can then be used to detect abnormal changes or anomalous behaviors. Behavior models are built by performing a statistical analysis on historical data or by using rule-based approaches to specify behavior patterns.
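The train-then-detect structure of anomaly detection can be sketched minimally. Real IDSes build far richer statistical or rule-based models; this toy version only characterizes acceptable behavior as the set of patterns seen in (assumed-clean) historical traffic and flags anything outside it.

```python
# Minimal anomaly-detection sketch: build a behavior model from historical
# traffic, then flag events that fall outside the model. Event strings
# are hypothetical placeholders for real traffic features.

def train(history):
    """Characterize acceptable behavior as the set of observed patterns."""
    return set(history)

def is_anomalous(model, event):
    return event not in model

model = train(["GET /index", "GET /login", "POST /login"])
print(is_anomalous(model, "GET /index"))          # False: seen in training
print(is_anomalous(model, "GET /../etc/passwd"))  # True: never observed
```

This also illustrates the core limitation quantified later: the model is only as complete as its training sessions, so legitimate behavior that was never observed during training shows up as a false positive.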
G. Vigna, F. Valeur, D. Balzarotti, W. K. Robertson, C. Kruegel, and E. Kirda proposed a model that composes both a web IDS and a database IDS to achieve more accurate detection; it also uses a reverse HTTP proxy to maintain a reduced level of service in the presence of false positives. However, attacks that use normal traffic cannot be detected by either the web IDS or the database IDS. Some previous approaches have detected intrusions or vulnerabilities by statically analyzing the source code or executables; others dynamically track the information flow to understand taint propagation and detect intrusions.
A classic three-tier model is shown in the figure.
In a typical three-tier web server architecture, the web server receives HTTP requests from user clients and then issues SQL queries to the database server to retrieve and update data. These SQL queries are causally dependent on the web requests hitting the web server. At the database side, however, we are unable to tell which transaction corresponds to which client request: the communication between the web server and the database server is not separated, so we can hardly understand the relationships among them.
The SQL queries are mixed, and all come from the same web server. It is impossible for the database server to determine which SQL queries are the results of which web requests, much less to find out the relationship between them. Even if we knew the application logic of the web server and were to build a correct model, it would be impossible to use such a model to detect attacks within huge amounts of concurrent real traffic unless we had a mechanism to identify the pairs of HTTP requests and the SQL queries causally generated by them.
Lightweight process containers, referred to as "containers," are used as ephemeral, disposable servers for client sessions; they make it possible to understand the relationship between the web server and the database server and to separate the communication between them. It is possible to initialize thousands of containers on a single physical machine, and these virtualized containers can be discarded, reverted, or quickly reinitialized to serve new sessions. A single physical web server runs many containers, each one an exact copy of the original web server. Our approach dynamically generates new containers and recycles used ones, so a single physical server can run continuously and serve all web requests. From a logical perspective, however, each session is assigned to a dedicated web server and isolated from other sessions.
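The container-per-session lifecycle described above can be sketched as a small pool manager. The API here is hypothetical (it stands in for real container tooling such as OpenVZ or LXC): each new client session gets a dedicated container, and recycled containers can be handed out again to serve new sessions.

```python
import itertools

# Sketch of container-per-session assignment (hypothetical API): each
# session gets a dedicated, disposable container, so its web requests and
# the DB queries they trigger stay isolated from every other session.

class ContainerPool:
    def __init__(self):
        self._ids = itertools.count(1)
        self.active = {}           # session id -> container id

    def assign(self, session_id):
        """Spawn (or look up) the dedicated container for this session."""
        if session_id not in self.active:
            self.active[session_id] = next(self._ids)
        return self.active[session_id]

    def recycle(self, session_id):
        """Discard the session's container so it can be reinitialized."""
        self.active.pop(session_id, None)

pool = ContainerPool()
c1 = pool.assign("client-1")
c2 = pool.assign("client-2")
print(c1 != c2)            # True: sessions never share a container
pool.recycle("client-1")   # container discarded when the session ends
```

Because the container id is known for every request a session sends, every SQL query leaving that container can be attributed to that session, which is precisely the causal mapping that a shared web server destroys.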
The figure below depicts how communications are categorized as sessions and how database transactions can be related to a corresponding session.
According to the figure, Client 2 can compromise only VE 2, and the corresponding database transaction set T2 will be the only affected section of data within the database.
This container-based, session-separated web server architecture not only enhances security but also provides us with isolated information flows, separated per container session. It allows us to identify the mapping between web server requests and the subsequent database queries, and to utilize such a mapping model to detect abnormal behaviors at a session/client level.
Sensors are placed at both servers. At the web server, our sensors are deployed on the host system and cannot be attacked directly, since only the virtualized containers are exposed to attackers. Nor can our sensors be attacked at the database server, as we assume that the attacker cannot completely take control of the database server. In short, we assume that our sensors cannot be attacked and always capture correct traffic information at both ends.
After the mapping model is constructed, it can be used to detect abnormal behaviors. Both the web requests and the database queries within each session should be in accordance with the model. If any request or query within a session violates the normality model, the session is treated as a possible attack.
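A minimal sketch of this check, under assumed data structures (not DoubleGuard's exact ones): the model maps each web request type to the set of SQL query types it is allowed to trigger, and a session is flagged if any of its queries falls outside the mapping for its request.

```python
# Hypothetical mapping model: web request pattern -> allowed SQL patterns.
MAPPING_MODEL = {
    "GET /product": {"SELECT product"},
    "POST /login":  {"SELECT user", "UPDATE last_login"},
}

def session_is_attack(session):
    """session: list of (web_request, [sql_queries]) pairs."""
    for request, queries in session:
        allowed = MAPPING_MODEL.get(request)
        if allowed is None:
            return True                    # request never seen in training
        if any(q not in allowed for q in queries):
            return True                    # query this request cannot cause
    return False

normal = [("POST /login", ["SELECT user", "UPDATE last_login"])]
attack = [("GET /product", ["SELECT product", "DROP TABLE users"])]
print(session_is_attack(normal))  # False
print(session_is_attack(attack))  # True
```

Note that the check is per session: the same `DROP TABLE` query mixed into a shared, unseparated query stream could not be attributed to any particular request, which is why the container isolation is a precondition for this detection.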
Privilege Escalation Attack: an attacker logs in as a normal user but issues requests that trigger database queries reserved for privileged (e.g., administrator) users; the triggered queries do not match the mapping for the attacker's web requests.
Hijack Future Session Attack: an attacker who has taken over the web server hijacks subsequent legitimate sessions, for example by dropping user requests or sending spoofed replies, so that the database queries the model expects for those requests never appear.
Direct DB attack: an attacker bypasses the web server entirely and submits queries to the database; such queries have no matching web request within any session.
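The three attack headings above correspond to different kinds of mapping-model violations. The following is a hedged, simplified sketch (labels and structures are illustrative, not DoubleGuard's implementation) of how a per-session violation could be attributed to one of them:

```python
# Relating model violations to the attack types above (simplified sketch).

def classify_violation(web_requests, db_queries, model):
    """model: web request -> set of SQL queries it may trigger."""
    if db_queries and not web_requests:
        return "Direct DB attack"            # queries bypassing the web server
    for r in web_requests:
        allowed = model.get(r, set())
        if allowed and not db_queries:
            return "Hijack Future Session Attack"  # request silently dropped
        for q in db_queries:
            if q not in allowed:
                # e.g. a normal-user request triggering admin-only queries
                return "Privilege Escalation Attack"
    return "normal"

model = {"GET /home": {"SELECT news"}}
print(classify_violation([], ["DROP TABLE users"], model))         # Direct DB attack
print(classify_violation(["GET /home"], [], model))                # Hijack Future Session Attack
print(classify_violation(["GET /home"], ["SELECT admin"], model))  # Privilege Escalation Attack
print(classify_violation(["GET /home"], ["SELECT news"], model))   # normal
```

In each case the attack is visible only because web requests and database queries are examined together; an independent web IDS or database IDS sees one half of the evidence and each half, taken alone, can look like normal traffic.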
An intrusion detection system was built on models of normal behavior for multi-tiered web applications, covering both front-end web (HTTP) requests and back-end database (SQL) queries. Unlike previous approaches that correlated or summarized alerts generated by independent IDSes, DoubleGuard forms a container-based IDS with multiple input streams to produce alerts. Such correlation of different data streams provides a better characterization of the system for anomaly detection, because the intrusion sensor has a more precise normality model that detects a wider range of threats. This is achieved by isolating the information flow of each web server session with lightweight virtualization.
H/W System Configuration:-
Processor - Pentium -III
Speed - 1.1 GHz
RAM - 256 MB(min)
Hard Disk - 4 GB
Key Board - Standard Windows Keyboard
Mouse - Two or Three Button Mouse
Monitor - SVGA
S/W System Configuration:-
Operating System : Windows 95/98/2000/XP
Application Server : Tomcat 5.0/6.x
Front End : HTML, Java, JSP, AJAX
Server-side Script : Java Server Pages
Database Connectivity : MySQL