Hospital Management is a web-based application that manages activities related to doctors and patients. It is built on a distributed architecture: web services receive requests from the web application, process them, and send responses back. The web services perform database operations such as inserting, deleting, and updating information about patients, doctors, and so on. This kind of distributed architecture is called Service Oriented Architecture (SOA). The application contains a login form, patient registration, and doctor registration. It allows patients to edit their information, such as name, contact number, address, and the disease they are suffering from.

The concept of hospital management is very broad. The scope of this system covers several modules: login, patient information, doctor information, billing, registration, and administration. The login module includes operations related to login, forgotten passwords, password changes, and sending confirmations or alerts. The patient information module holds details about the patient, such as treatment history, the doctors involved in the treatment, and the medicines they prescribed. The billing module records fees and the mode of payment the patient used. The registration module allows users to register their profiles. The administration module supports operations such as creating new users, changing passwords, and loading doctor information for the first time. Hospital Management uses SQL Server 2005 as the backend. The database is maintained on a remote server and holds all the information related to the hospital.


Before SOA, distributed applications were built with DCOM or with object request brokers (ORBs) based on the CORBA specification. DCOM stands for Distributed Component Object Model. It is an extension of COM (Component Object Model) and was released in 1996. DCOM works primarily on Microsoft Windows, and through its use of COM it can also work with Java applets and ActiveX components.

Service Oriented Architecture is, in essence, a collection of services. These services are deployed on different servers at different locations and communicate with each other to perform the required operations. The communication can be as simple as passing data.

Service Provider: The provider creates the service using any technology, such as .NET or Java, and publishes its information so that the outside world can access it. The provider decides which services to publish (one service can expose multiple operations), how to price the services or whether to offer them free of charge, and the category of the services. The most common broker service is UDDI (Universal Description, Discovery and Integration), which provides a way to publish and discover information about services.

Service Requester: The requester discovers services using UDDI or any other service broker. If a service provides the required operations, the requester contracts with the service provider, then binds the service to the application and executes it to get the required information.
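As a sketch of the provider side, a minimal ASMX web service in C# might look like the following. The service name, namespace URI, and method are hypothetical illustrations, not part of the actual project:

```csharp
// PatientService.asmx.cs -- hypothetical service name and namespace.
using System.Web.Services;

[WebService(Namespace = "http://example.org/hospital/")]
public class PatientService : WebService
{
    // One published operation; a real service would query SQL Server here.
    [WebMethod(Description = "Returns a patient's display name")]
    public string GetPatientName(int patientId)
    {
        return "Patient #" + patientId;
    }
}
```

A requester would typically generate a proxy class for this service (for example with wsdl.exe) and call `GetPatientName` as an ordinary method.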

The principles used for development, maintenance and usage of SOA are

  1. Reuse, composability, granularity, and interoperability.
  2. Identifying the services and categorizing them.
  3. Monitoring and tracking.

The specific architectural principles of SOA design are

  1. Service loose coupling
  2. Service encapsulation
  3. Service contract
  4. Service abstraction
  5. Service reusability
  6. Service discoverability



The aim is the development of a computerized hospital management system that provides flexible, accurate, and secure access to data, resulting in a highly useful end product for the users as well as for the management.


  • To develop a system that maintains sophisticated hospital management details, bringing out the flexibility and ease with which users can operate it.
  • To track and improve the internal performance of the organization, thereby allowing flexible and secure transactions to take place.


In the existing system the data required for hospital management is maintained in paper records. These have to be updated according to the requirements of the customer, and it takes time to search for the required data. All the details regarding the hospital and its timings are hard to maintain.

The workload is heavy, so the system needs a large staff to meet the requirements. Because it is manual, there is a real chance of failure: a simple fault can cause inconvenience and serious loss. These faults make the system less effective, and its performance is very slow. Hence, there should be a system that overcomes these defects and provides users with more facilities.


In the proposed system everything is computerized. The system provides all the details regarding the hospital, and users can search for the required data easily and quickly. Only a small number of people are required to handle the system.

Patients need not wait a long time to have their requirements fulfilled. The chance of failure is greatly reduced, which improves the performance and efficiency of the system. Although this system is very beneficial, a failure in the server or the computer can still lead to a major loss of data.


About the Hospital:

In 1997, a team of medical professionals set up the first hospital, signaling the dawn of a new era in medical care. At the heart of this movement was a burning desire to practice medicine with compassion, concern, and care, with a single-minded objective: the recovery of the patient. Today, with multi-specialty hospitals across the state and a reputation for humanitarian and selfless service of the highest order, the hospital enjoys an enormous amount of goodwill. A million smiles bear testimony to that.

At the hospital, we operate on a physician-driven model. This means that all the main constituents of the CARE movement (the promoters, administrators, and service providers) are physicians. At the centre of the CARE model is the patient, and the overriding motive of all of CARE's activities is to provide quality medical care at an affordable cost. Technology, training, and teamwork form the very core of the CARE model. We emphasize comprehensive and continuous education and training of every individual involved in patient care. Every effort will be taken to ensure that our growth is decided by the patient's needs, and not by our corporate requirements.

Our hospital believes at:

  • "A patient is the most important person in our hospital.
  • He is not an interruption to our work; he is the purpose of it.
  • He is not an outsider in our hospital. He is a part of it.
  • We are not doing him a favour by serving him.
  • He is doing us a favour by giving us an opportunity to do so."


The purpose of a computerized hospital is to provide effective facilities to people who are suffering from health problems. The advantages are:

  • Less cost
  • No mediators
  • Excellent services

The main goal of this hospital management system is patient satisfaction. The system provides effective facilities to people from any place in the world.


Software Requirement Specifications:

Operating System : Windows 2000 Professional
Database Server : SQL Server 2005
Programming Language : C#

Hardware Requirement Specifications:

Application Server Configuration:

Computer Processor : Pentium IV
Clock Speed : 700 MHz
Hard Disk : 40 GB
RAM : 256/512 MB
Modem : 56 Kbps

Database Server Configuration:

Computer Processor : Pentium IV
Clock Speed : 700 MHz
Hard Disk : 40 GB
RAM : 256/512 MB


Existing System:

In the current system the required data is maintained in paper records, which have to be updated according to the users' requirements. It takes time to search for the required data, and all the details regarding the hospital and its patients are hard to maintain. The workload is heavy, so the system needs a large staff to meet the requirements. Because the system is manual, there is a real chance of failure: a single fault can cause inconvenience and serious loss. These faults make the system less efficient, and its performance is very slow. Hence, there should be a system that overcomes these defects and provides users with more facilities.

In the current system, a user suffering from pain or illness has no easy way to find out how to manage the condition, and delays can make it worse. To find out whether treatment is available, patients have to travel to the hospital. Government hospitals often do not give patients the facilities they expect from the doctors, while private hospitals charge higher fees, delay treatment, and require many formalities that waste a great deal of the patient's time.

Proposed System:

In the proposed system everything is computerized. The system provides all the details regarding the hospital, doctors, patients, bed numbers, fares, and so on. Users can search for the required data easily and quickly, and only a small staff is required to handle the system.

Patients need not wait a long time to have their requirements fulfilled. The chance of failure is greatly reduced, which improves the performance and the efficiency of the system.

Although this system is very beneficial, a minor failure in the server or the computer can lead to a major loss of data.


The preliminary investigation found that the computerized hospital management system is feasible. This includes the following aspects.

Technical Feasibility:

Technical feasibility asks whether the project can be implemented with existing technology. The computerized hospital management system is technically feasible.

Economical Feasibility:

Economic feasibility means the cost of undertaking the project should be less than that of the existing system. The hospital management system is economically feasible because it reduces the expenses of the manual system.


.NET Framework

The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet. The .NET Framework is designed to fulfill the following objectives:

  • To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
  • To provide a code-execution environment that minimizes software deployment and versioning conflicts.
  • To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.
  • To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.
  • To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.
  • To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.


The Common Language Runtime and the .NET Framework Class Library: - The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable Web Forms applications and XML Web services, both of which are discussed later in this topic.

Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and secure isolated file storage.

The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system. The illustration also shows how managed code operates within a larger architecture.


.NET Architecture:

Features of the Common Language Runtime:

The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime.

With regards to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application.

The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature-rich.

The runtime also enforces code robustness by implementing a strict type- and code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety.

In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references.

The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications.

While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs.

The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality-of-reference to further increase performance.

Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting.

.NET Framework Class Library:

The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework.

For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your collection classes will blend seamlessly with the classes in the .NET Framework.

As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes types that support a variety of specialized development scenarios.



ADO.NET provides consistent access to data sources such as Microsoft SQL Server, as well as data sources exposed via OLE DB and XML. Data-sharing consumer applications can use ADO.NET to connect to these data sources and retrieve, manipulate, and update data.

ADO.NET cleanly factors data access from data manipulation into discrete components that can be used separately or in tandem. ADO.NET includes .NET data providers for connecting to a database, executing commands, and retrieving results. Those results are either processed directly or placed in an ADO.NET DataSet object so that they can be exposed to the user in an ad hoc manner, combined with data from multiple sources, or remoted between tiers. The ADO.NET DataSet object can also be used independently of a .NET data provider to manage data local to the application or sourced from XML.

The ADO.NET classes are found in System.Data.dll, and are integrated with the XML classes found in System.Xml.dll. When compiling code that uses the System.Data namespace, reference both System.Data.dll and System.Xml.dll.

ADO.NET provides functionality to developers writing managed code similar to the functionality provided to native COM developers by ADO.

The most important change from classic ADO is that ADO.NET doesn't rely on OLE DB providers and uses .NET managed providers instead. A .NET provider works as a bridge between your application and the data source. ADO.NET and .NET managed data providers don't use COM at all, so a .NET application can access data without incurring any performance penalty deriving from the switch between managed and unmanaged code.

The most important difference between ADO and ADO.NET is that dynamic and keyset server-side cursors are no longer supported. ADO.NET supports only forward-only, read-only result sets and disconnected result sets.

.NET Data Providers:

.NET data providers play the same role that OLE DB providers play under ADO: they enable your application to read and write data stored in a data source. Microsoft currently supplies five ADO.NET providers:

OLE DB .NET Data Provider:

This provider lets you access a data source for which an OLE DB provider exists, although at the expense of a switch from managed to unmanaged code and the performance degradation that ensues.

SQL Server .NET Data Provider:

This provider has been specifically written to access SQL Server version 7.0 or later, using Tabular Data Stream (TDS) as the communication medium. TDS is SQL Server's native protocol, so you can expect this provider to give you better performance than the OLE DB .NET Data Provider. Additionally, the SQL Server .NET Data Provider exposes SQL Server-specific features, such as named transactions and support for the FOR XML clause in SELECT queries.
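As a hedged illustration of one of these SQL Server-specific features, the sketch below runs a SELECT with the FOR XML clause through the SQL Server provider's ExecuteXmlReader method. The connection string and the Patients table are placeholders, not the project's actual schema:

```csharp
using System;
using System.Data.SqlClient;
using System.Xml;

class ForXmlDemo
{
    static void Main()
    {
        // Placeholder connection string; requires a reachable SQL Server.
        string connStr = "Data Source=localhost;Initial Catalog=Hospital;Integrated Security=True";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // FOR XML AUTO makes SQL Server return the rows as XML elements;
            // ExecuteXmlReader exists only on the SQL Server provider's command.
            SqlCommand cmd = new SqlCommand(
                "SELECT PatientId, Name FROM Patients FOR XML AUTO", conn);
            using (XmlReader reader = cmd.ExecuteXmlReader())
            {
                while (!reader.EOF)
                {
                    if (reader.NodeType == XmlNodeType.Element)
                        Console.WriteLine(reader.ReadOuterXml()); // one element per row
                    else
                        reader.Read();
                }
            }
        }
    }
}
```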

ODBC .NET Data Provider:

This provider works as a bridge toward an ODBC source, so in theory you can use it to access any source for which an ODBC driver exists. As of this writing, this provider officially supports only the Access, SQL Server, and Oracle ODBC drivers, so there's no clear advantage in using it instead of the OLE DB .NET Data Provider. The convenience of this provider will be more evident when more ODBC drivers are added to the list of those officially supported.

.NET Data Provider for Oracle:

This provider can access an Oracle data source version 8.1.7 or later. It automatically uses connection pooling to increase performance when possible, and supports most of the features of the Microsoft OLE DB Provider for Oracle, even though the two access techniques differ in a few details; for example, the .NET Data Provider for Oracle doesn't support the TABLE data type or ODBC escape sequences.

SQLXML Library:

This DLL, which you can download from the Microsoft Web site, includes a few managed types that let you query and update a Microsoft SQL Server 2000 data source over HTTP. It supports XML templates, XPath queries, and can expose stored procedures and XML templates as Web services. The ODBC and Oracle providers are included in .NET Framework 1.1 but were missing in the first version of the .NET Framework. If you work with .NET Framework 1.0, you can download these providers from the Microsoft Web site. The downloadable versions of these providers differ from the versions that come with .NET Framework 1.1, mainly in the namespaces they use: Microsoft.Data.Odbc and Microsoft.Data.Oracle instead of System.Data.Odbc and System.Data.Oracle.

ADO.NET Object Model:

It's time to have a closer look at the individual objects that make up the ADO.NET architecture illustrated in Figure 21-1. You'll see that the objects are divided into two groups: those included in the .NET data provider (Connection, Command, DataReader, and DataAdapter) and those that belong to the ADO.NET disconnected architecture. (In practice, the second group includes only the DataSet and its secondary objects.)

The Connection object has the same function it has under ADO: establishing a connection to the data source. Like its ADO counterpart, it has the ConnectionString property, the Open and Close methods, and the ability to begin a transaction using the BeginTransaction method. The ADO Execute method isn't supported, so the ADO.NET Connection object lacks the ability to send a command to the database.

The Command object lets you query the database, send a command to it, or invoke one of its stored procedures. You perform these actions with one of the object's ExecuteXxx methods. More specifically, you use the ExecuteNonQuery method to send an action query to the database (for example, an INSERT or DELETE SQL statement), the ExecuteReader method to perform a SELECT query that returns a result set, or the ExecuteScalar method to perform a SELECT query that returns a single value. Other properties let you set the command timeout and prepare the parameters for a call to a stored procedure. You must manually associate a Command object with a Connection object that has previously been connected to the data source.

The DataReader object is returned by the ExecuteReader method of the Command object and represents a forward-only, read-only result set. A new row of results becomes available each time you invoke the DataReader's Read method, after which you can query each individual field using the GetValue method or one of the strongly typed GetXxx methods, such as GetString or GetFloat. Remember that you can't update the database by means of a DataReader object.
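A minimal connected-mode sketch of the Command and DataReader objects described above, assuming a hypothetical Patients table and a local SQL Server instance (the connection string and column names are placeholders):

```csharp
using System;
using System.Data.SqlClient;

class ConnectedDemo
{
    static void Main()
    {
        // Placeholder connection string and table; adjust for a real database.
        string connStr = "Data Source=localhost;Initial Catalog=Hospital;Integrated Security=True";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // ExecuteScalar: a SELECT that returns a single value.
            SqlCommand countCmd = new SqlCommand("SELECT COUNT(*) FROM Patients", conn);
            int total = (int)countCmd.ExecuteScalar();
            Console.WriteLine("Patients: " + total);

            // ExecuteReader: a forward-only, read-only result set.
            SqlCommand selectCmd = new SqlCommand(
                "SELECT PatientId, Name FROM Patients", conn);
            using (SqlDataReader reader = selectCmd.ExecuteReader())
            {
                // Read returns true while a new row is available.
                while (reader.Read())
                    Console.WriteLine("{0}: {1}",
                        reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
}
```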

The DataSet object is the main object in the ADO.NET disconnected architecture. It works as a sort of small relational database that resides on the client and is completely unrelated to any specific database. It consists of a collection of DataTable objects, with each DataTable holding a distinct result set (typically the result of a query against a different database table). A DataTable contains a collection of DataRow objects, each one holding data coming from a different row in the result. A DataSet also contains a collection of DataRelation objects, in which each item corresponds to a relationship between different DataTable objects, much like the relationships between the tables of a relational database. These relations let your code navigate among tables in the same DataSet using a simple and effective syntax.
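The disconnected objects can be exercised entirely in memory. The sketch below, with made-up Doctors and Patients tables, builds a DataSet, links the tables with a DataRelation, and navigates from a parent row to its children:

```csharp
using System;
using System.Data;

class DataSetDemo
{
    // Builds a small in-memory DataSet with two related tables.
    public static DataSet BuildHospitalDataSet()
    {
        DataSet ds = new DataSet("Hospital");

        DataTable doctors = ds.Tables.Add("Doctors");
        doctors.Columns.Add("DoctorId", typeof(int));
        doctors.Columns.Add("Name", typeof(string));

        DataTable patients = ds.Tables.Add("Patients");
        patients.Columns.Add("PatientId", typeof(int));
        patients.Columns.Add("Name", typeof(string));
        patients.Columns.Add("DoctorId", typeof(int));

        // A DataRelation plays the role of a foreign key between the tables.
        ds.Relations.Add("DoctorPatients",
            doctors.Columns["DoctorId"], patients.Columns["DoctorId"]);

        doctors.Rows.Add(1, "Dr. Rao");
        patients.Rows.Add(100, "Kumar", 1);
        return ds;
    }

    static void Main()
    {
        DataSet ds = BuildHospitalDataSet();

        // Navigate from the doctor row to its related patient rows.
        foreach (DataRow p in ds.Tables["Doctors"].Rows[0].GetChildRows("DoctorPatients"))
            Console.WriteLine(p["Name"]);
    }
}
```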

The DataAdapter object works as a bridge between the Connection object and the DataSet object. Its Fill method moves data from the database to the client-side DataSet, whereas its Update method moves data in the opposite direction and updates the database with the rows that your application has added, modified, or deleted in the DataSet.
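A sketch of the Fill/Update round trip, again assuming a hypothetical Patients table. The SqlCommandBuilder used here is one common way to derive the update commands from the SELECT; it is an illustrative choice, not necessarily how the project does it:

```csharp
using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static void Main()
    {
        // Placeholder connection string and table.
        string connStr = "Data Source=localhost;Initial Catalog=Hospital;Integrated Security=True";
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT PatientId, Name FROM Patients", connStr);

        // Derives the INSERT/UPDATE/DELETE commands from the SELECT above.
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

        DataSet ds = new DataSet();
        adapter.Fill(ds, "Patients");   // opens and closes the connection itself

        // Edit offline, then push the changes back to the database.
        ds.Tables["Patients"].Rows[0]["Name"] = "Updated Name";
        adapter.Update(ds, "Patients");
    }
}
```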

Connection Object:

Whether you work in connected or disconnected mode, the first action you need to perform when working with a data source is to open a connection to it. In ADO.NET terms, this means you create a Connection object that connects to the specific database. The Connection object is similar to the ADO object of the same name, so you'll feel immediately at ease with the new ADO.NET object if you have any experience with ADO programming. Setting the ConnectionString property: the key property of the Connection class is ConnectionString, a string that defines the type of database you're connecting to, its location, and other semicolon-delimited attributes. When you work with the OleDbConnection object, the connection string matches the one you would use with the ADO Connection object. Such a string typically contains the following information:

  • The Provider attribute, which specifies the name of the underlying OLE DB provider used to connect to the data. The only values that Microsoft guarantees as valid are SQLOLEDB (the OLE DB provider for Microsoft SQL Server), Microsoft.Jet.OLEDB.4.0 (the OLE DB provider for Microsoft Access), and MSDAORA (the OLE DB provider for Oracle).
  • The Data Source attribute, which specifies where the database is. It can be the path to an Access database or the name of the machine on which the SQL Server or Oracle database is located.
  • The User ID and Password attributes, which specify the user name and the password of a valid account for the database.
  • The Initial Catalog attribute, which specifies the name of the database when you're connecting to a SQL Server or an Oracle data source. Once you've set the ConnectionString property correctly, you can open the connection by invoking the Open method.
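The attributes above and the Open call might look like the following sketch; the server name, catalog, and credentials are placeholders:

```csharp
using System.Data.OleDb;
using System.Data.SqlClient;

class ConnectionDemo
{
    static void Main()
    {
        // OLE DB connection to SQL Server using the attributes described
        // in the text (server, catalog, and credentials are placeholders).
        OleDbConnection oleConn = new OleDbConnection(
            "Provider=SQLOLEDB;Data Source=DBSERVER;" +
            "Initial Catalog=Hospital;User ID=sa;Password=secret");
        oleConn.Open();
        oleConn.Close();

        // The equivalent with the SQL Server .NET Data Provider;
        // no Provider attribute is needed.
        SqlConnection sqlConn = new SqlConnection(
            "Data Source=DBSERVER;Initial Catalog=Hospital;" +
            "User ID=sa;Password=secret");
        sqlConn.Open();
        sqlConn.Close();
    }
}
```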

ADO.NET in Disconnected Model:

In the preceding chapter, you saw how to work with ADO.NET in connected mode, processing data coming from an active connection and sending SQL commands over it. ADO.NET in connected mode behaves much like classic ADO, even though the names of the properties and methods involved (and their syntax) are often different. You'll see how ADO.NET differs from its predecessor when you start working in disconnected mode. ADO 2.x permits you to work in disconnected mode using client-side static recordsets opened in optimistic batch update mode. This was one of the great new features of ADO and proved to be a winner in client/server applications of any size. As a matter of fact, working in disconnected mode is the most scalable technique you can adopt, because it consumes resources on the client (instead of on the server) and, above all, it doesn't enforce any locks on database tables (except for the short-lived locks created during the update operation).

The following Imports statements are used at the file or project level:

    Imports System.Data
    Imports System.Data.Common
    Imports System.Data.OleDb
    Imports System.Data.SqlClient
    Imports System.Data.Odbc
    Imports System.IO
    Imports System.Text.RegularExpressions

The DataSet Object: Because ADO.NET (and .NET in general) is all about scalability and performance, disconnected mode is the preferred way to code client/server applications. Instead of a simple disconnected recordset, ADO.NET gives you the DataSet object, which is much like a small relational database held in memory on the client. As such, it lets you create multiple tables, fill them with data coming from different sources, enforce relationships between pairs of tables, and more.

DataSet:

The DataSet object is central to supporting disconnected, distributed data scenarios with ADO.NET. The DataSet is a memory-resident representation of data that provides a consistent relational programming model regardless of the data source. It can be used with multiple and differing data sources, used with XML data, or used to manage data local to the application. The DataSet represents a complete set of data including related tables, constraints, and relationships among the tables.

The DataAdapter object works as a connector between the DataSet and the actual data source. It is in charge of filling one or more DataTable objects with data taken from the database so that the application can then close the connection and work in a completely disconnected mode. After the end user has performed all his or her editing chores, the application can reopen the connection and reuse the same DataAdapter object to send the changes to the database. Admittedly, the disconnected nature of the DataSet complicates matters for developers, but it greatly improves versatility. You can now fill a DataTable with data taken from any data source (whether it's SQL Server, a text file, or a mainframe) and process it with the same routines, regardless of where the data comes from. The decoupled architecture based on the DataSet and the DataAdapter makes it possible to read data from one source and send updates to another when necessary. You have a lot more freedom when working with ADO.NET, but also many more responsibilities.


ASP.NET is a web application framework developed and marketed by Microsoft that allows programmers to build dynamic web sites, web applications, and web services.

ASP.NET, the next version of ASP, is a programming framework used to create enterprise-class Web Applications. It was first released in January 2002 with version 1.0 of the .NET Framework as a successor to Microsoft's Active Server Pages (ASP) technology.


Since 1995, Microsoft has been constantly working to shift its focus from Windows-based platforms to the Internet. As a result, Microsoft introduced ASP (Active Server Pages) in November 1996. ASP offered the efficiency of ISAPI applications along with a new level of simplicity that made it easy to understand and use. However, ASP script was interpreted and consisted of unstructured code, which made it difficult to debug and maintain. Because the web brings together many different technologies, software integration for web development was complicated and required an understanding of many of them. Also, as applications grew bigger and more complex, the number of lines of source code in ASP applications increased dramatically and became hard to maintain. Therefore, an architecture was needed that would allow web applications to be developed in a structured and consistent way.


ASP.NET is a technology that comes with the .NET Framework and provides a set of specifications for creating dynamic web-based applications.

ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any .NET-supporting language. It is hosted by the IIS (Internet Information Services) web server, which hands ASP.NET requests to the runtime so that page code executes on the server.

ASP.NET supports website-level configuration settings in the form of the web.config file, which makes maintenance easier. Microsoft has been integrating its other web-based technologies, such as AJAX and Silverlight, into ASP.NET.

The extension for an ASP.NET page is .aspx. A website can contain a mixture of ASP and ASP.NET pages, since the web server supports side-by-side execution. The default .NET language for a page is VB.NET.

Types of Controls in ASP.Net:

  • Html server controls
  • Web server Controls

Designing techniques for a web page in ASP.NET:

  1. In-page technique: when both the design part and the logic part are placed in a single .aspx file, it is called the in-page technique.
  2. Code-behind technique: when the design part is placed in the .aspx file and the logic part is placed in a separate .vb/.cs file, it is called the code-behind technique.

Here, the web designer's duty is the design part and the developer's duty is the logic part.

Page Life Cycle Events:

  1. Page_Init: fired when the page is initialized. The difference between Page_Init and Page_Load is that the controls are guaranteed to be fully loaded in Page_Load; the controls are accessible in the Page_Init event, but the view state is not yet loaded, so controls will have their default values rather than any values set during the postback.
  2. Page_Load: fired when the page is loaded.
  3. Control event: fired if a control (such as a button) triggered the page to be reloaded.
  4. Page_Unload: fired when the page is unloaded from memory.
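The ordering of these events can be illustrated with a small simulation. This is a language-neutral Python sketch; the Page class and dispatch loop are toy stand-ins invented for illustration, not the real System.Web.UI.Page:

```python
# Simulates the order in which ASP.NET raises page life-cycle events.
fired = []

class Page:
    def page_init(self):
        fired.append("Page_Init")      # controls exist, view state not loaded

    def page_load(self):
        fired.append("Page_Load")      # controls fully loaded, view state applied

    def button_click(self):
        fired.append("Control Event")  # raised only on a postback

    def page_unload(self):
        fired.append("Page_Unload")    # page removed from memory

    def process_request(self, is_postback):
        self.page_init()
        self.page_load()
        if is_postback:
            self.button_click()        # control events fire after Page_Load
        self.page_unload()

Page().process_request(is_postback=True)
print(" -> ".join(fired))
# Page_Init -> Page_Load -> Control Event -> Page_Unload
```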

Features of ASP.NET:

  • Better performance
  • Because ASPX pages are compiler-based, a web application will perform faster than ASP pages, which are interpreter-based.

  • Caching
  • Caching is the process of maintaining the result or output of a web page temporarily for some period of time. ASP supports only client-side caching, whereas ASP.NET supports both client-side and server-side caching.
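The time-bound output caching described above can be sketched as follows. This is a minimal Python illustration of the idea only; the cache duration, URL, and page-rendering function are invented, and the real ASP.NET OutputCache mechanism works differently:

```python
import time

# A toy output cache: keep a rendered page for `duration` seconds and
# only re-render after the entry expires.
_cache = {}          # url -> (rendered_html, expiry_timestamp)
render_count = 0

def render_page(url):
    global render_count
    render_count += 1                       # counts real renders only
    return f"<html>content of {url}</html>"

def get_page(url, duration=60):
    html, expires = _cache.get(url, (None, 0.0))
    if time.time() < expires:
        return html                         # served from the cache
    html = render_page(url)
    _cache[url] = (html, time.time() + duration)
    return html

first = get_page("/patients.aspx")
second = get_page("/patients.aspx")         # within 60 s: cache hit
print(render_count)                         # 1 -> the page was rendered once
```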

  • More powerful data access
  • ASP.NET supports ADO and ADO.NET as its database connectivity models, which can be implemented using powerful object-oriented languages such as VB.NET and C#, so database access from ASPX pages is very powerful.

  • Better session management
  • Session management in ASP.NET can be maintained using a database, and cookieless sessions are also supported. ASP.NET also supports enabling and disabling session information within a web application.

State management in ASP.NET can be implemented in various ways:

  1. View State [Hidden field]
  2. Page Submission
  3. Cookies
  4. Session
  5. Query String
  6. Application
  7. Cache
  • Simplified form validations
  • ASP.NET provides validation controls with which many client-side validations can be performed without writing any code.

  • Security in ASP.NET
  • ASP.NET provides various authentication methods to achieve security.

    They are:

    • Forms Authentication
    • Windows Authentication
    • Passport Authentication
    • Custom Authentication

  • Cookies in ASP.NET:
  • A cookie is used to maintain server-side information on the client system; in other words, a cookie is a small amount of memory used by the web server on the client system.

    Usage: the main purpose of cookies is to store personal information about the client, such as a username, password, number of visits or session ID.

    They are of 2 types:

    1. Persistent cookies (permanent cookies): when the cookie is stored in hard disk memory, it is called a persistent cookie.
    2. Non-persistent cookies (temporary cookies): when the cookie is stored within the process memory of the browser, it is called a temporary cookie.
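The distinction can be illustrated with Python's standard http.cookies module (a sketch; the cookie names and values are invented). A persistent cookie carries an expiry attribute telling the browser to write it to disk, while a non-persistent cookie omits it and dies with the browser process:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()

# Non-persistent (temporary): no expiry, kept only in browser memory.
cookie["session_id"] = "abc123"

# Persistent (permanent): Max-Age tells the browser to store it on disk.
cookie["username"] = "jsmith"
cookie["username"]["max-age"] = 7 * 24 * 3600   # keep for one week

headers = cookie.output()   # the Set-Cookie headers the server would send
print(headers)
```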

  • Tracing:
  • It is used to trace the flow of the application.

    It is of 2 types:

    • Application-level tracing: if this is used, the trace details or information are provided for all the web forms present in the web application.
    • Page-level tracing: if this is used, the trace details are set only for the specific web form.

    Note: if both application-level and page-level tracing are set, preference is given to page-level tracing.



SOA (service-oriented architecture) is an application architecture in which all functions, or services, are defined using a description language and have invokable interfaces that are called to perform business processes. Each interaction is independent of each and every other interaction and of the interconnect protocols of the communicating devices (i.e., the infrastructure components that determine the communication system do not affect the interfaces). Because interfaces are platform-independent, a client from any device, using any operating system, in any language, can use the service.

Though built on similar principles, SOA is not the same as Web services, which indicates a collection of technologies, such as SOAP and XML. SOA is more than a set of technologies and runs independent of any specific technologies.


Early programmers realized that writing software was becoming more and more complex. They needed a better way to reuse some of the code that they were rewriting. When researchers took notice of this, they introduced the concept of modular design. With modular design principles, programmers could write subroutines and functions and reuse their code. This was great for a while. Later, developers started to see that they were cutting and pasting their modules into other applications and that this started to create a maintenance nightmare; when a bug was discovered in a function somewhere, they had to track down all of the applications that used the function and modify the code to reflect the fix. After the fix, the deployment nightmare began. Developers didn't like that; they needed a higher level of abstraction.

Researchers proposed classes and object-oriented software to solve this, and many more, problems. Again, as software complexity grew, developers started to see that developing and maintaining software was complex and they wanted a way to reuse and maintain functionality, not just code. Researchers offered yet another abstraction layer to handle this complexity -- component-based software. Component-based software is/was a good solution for reuse and maintenance, but it doesn't address all of the complexities developers are faced with today. Today, we face complex issues like distributed software, application integration, varying platforms, varying protocols, various devices, the Internet, etc.

Today's software has to be equipped to answer the call for all of the above. In short, SOA (along with web services) provides a solution to all of the above. By adopting a SOA, you eliminate the headaches of protocol and platforms and your applications integrate seamlessly.

Key Components of SOA

The first step in learning something new is to understand its vocabulary. In the context of SOA, we have the terms service, message, dynamic discovery, and web services. Each of these plays an essential role in SOA.


Services

A service in SOA is an exposed piece of functionality with three properties:

  1. The interface contract to the service is platform-independent.
  2. The service can be dynamically located and invoked.
  3. The service is self-contained. That is, the service maintains its own state.

A platform-independent interface contract implies that a client from anywhere, on any OS, and in any language, can consume the service. Dynamic discovery hints that a discovery service (e.g., a directory service) is available. The directory service enables a look-up mechanism where consumers can go to find a service based on some criteria. For example, if I was looking for a credit-card authorization service, I might query the directory service to find a list of service providers that could authorize a credit card for a fee. Based on the fee, I would select a service (see Figure 1.1). The last property of a service is that the service be self-contained.


Messages

Service providers and consumers communicate via messages. Services expose an interface contract. This contract defines the behaviour of the service and the messages it accepts and returns. Because the interface contract is platform- and language-independent, the technology used to define messages must also be agnostic to any specific platform or language. Therefore, messages are typically constructed using XML documents that conform to an XML schema. XML provides all of the functionality, granularity, and scalability required by messages. That is, for consumers and providers to communicate effectively, they need a non-restrictive type system that clearly defines messages; XML provides this. Because consumers and providers communicate via messages, the structure and design of messages should not be taken lightly. Messages need to be implemented using a technology that supports the scalability requirements of services. Having to redesign messages will break the interface to providers, which can prove to be costly.

Dynamic Discovery

Dynamic discovery is an important piece of SOA. At a high level, SOA is composed of three core pieces: service providers, service consumers, and the directory service. The roles of providers and consumers are apparent, but the role of the directory service needs some explanation. The directory service is an intermediary between providers and consumers. Providers register with the directory service, and consumers query the directory service to find service providers. Most directory services typically organize services based on criteria and categorize them. Consumers can then use the directory service's search capabilities to find providers. Embedding a directory service within SOA accomplishes the following:

  1. Scalability of services; you can add services incrementally.
  2. Decouples consumers from providers.
  3. Allows for hot updates of services.
  4. Provides a look-up service for consumers.
  5. Allows consumers to choose between providers at runtime rather than hard-coding a single provider.
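The register/query/select flow above can be sketched with a minimal in-memory stand-in for a directory service. The provider names and fees below are invented for the credit-card-authorization example in the text:

```python
# A toy directory service: providers register, consumers query by category
# and pick a provider at runtime (here, the cheapest one).
directory = []

def register(name, category, fee):
    directory.append({"name": name, "category": category, "fee": fee})

def find(category):
    return [p for p in directory if p["category"] == category]

# Providers register with the directory service.
register("AuthCorp",  "credit-card-authorization", fee=0.30)
register("PayGate",   "credit-card-authorization", fee=0.25)
register("MailRelay", "email-delivery",            fee=0.01)

# A consumer looks up providers and chooses one at runtime,
# rather than hard-coding a single provider.
candidates = find("credit-card-authorization")
chosen = min(candidates, key=lambda p: p["fee"])
print(chosen["name"])   # PayGate
```

Because the consumer only depends on the category it queried, providers can be added or swapped without touching consumer code.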

Service-oriented architecture -- not just Web services:

The advent of Web services has produced a fundamental change, because the success of many Web services projects has shown that the technology does in fact exist, whereby you can implement a true service-oriented architecture. It lets you take another step back and not just examine your application architecture, but the basic business problems you are trying to solve. From a business perspective, it's no longer a technology problem, it is a matter of developing an application architecture and framework within which business problems can be defined, and solutions can be implemented in a coherent, repeatable way.

First, though, it must be understood that Web services does not equal service-oriented architecture. Web services is a collection of technologies, including XML, SOAP, WSDL, and UDDI, which let you build programming solutions for specific messaging and application integration problems.


Web Services

A Web service is application logic accessible to different programs, based on open industry-standard protocols, in a platform-independent way.


Web services are small units of code built to handle a limited task. Web services are components on a Web server that a client application can call by making HTTP requests across the Web. ASP.NET enables you to create custom Web services or to use built-in application services, and to call these services from any client application.

Many people and companies have debated the exact definition of Web services. At a minimum, however, a Web service is any piece of software that makes itself available over the Internet and uses a standardized XML messaging system.

XML is used to encode all communications to a Web service. For example, a client invokes a Web service by sending an XML message and then waits for a corresponding XML response. Because all communication is in XML, Web services are not tied to any one operating system or programming language.

Beyond this basic definition, a Web service may also have two additional properties:

First, a Web service can have a public interface, defined in a common XML grammar. The interface describes all the methods available to clients and specifies the signature for each method. Currently, interface definition is accomplished through the Web Services Description Language (WSDL).

Second, if you create a Web service, there should be some relatively simple mechanism for you to publish this fact. Likewise, there should be some simple mechanism for interested parties to locate the service and locate its public interface. The most prominent directory of Web services is currently available via UDDI, or Universal Description, Discovery, and Integration.

Independent of Operating Systems

Since web services use XML based protocols to communicate with other systems, web services are independent of both operating systems and programming languages.

An application calling a web service will always send its requests using XML, and get its answer returned as XML. The calling application will never be concerned about the operating system or the programming language running on the other computer.

Web services make it easier to communicate between different applications. They also make it possible for developers to reuse existing web services instead of writing new ones.

Web services can create new possibilities for many businesses because they provide an easy way to distribute information to a large number of consumers. One example could be flight schedules and ticket reservation systems.

Features of Web services:

  • Web services are small units of code
  • Web services are designed to handle a limited set of tasks
  • Web services use XML based communicating protocols
  • Web services are independent of operating systems
  • Web services are independent of programming languages
  • Web services connect people, systems and devices

Web services use the standard web protocols HTTP, XML, SOAP, WSDL, and UDDI.


HTTP (Hypertext Transfer Protocol) is the World Wide Web standard for communication over the Internet. HTTP is standardized by the World Wide Web Consortium (W3C).


XML (Extensible Markup Language) is a well-known standard for storing, carrying, and exchanging data. XML is standardized by the W3C.


SOAP (Simple Object Access Protocol) is a lightweight platform and language neutral communication protocol that allows programs to communicate via standard Internet HTTP. SOAP is standardized by the W3C.


WSDL (Web Services Description Language) is an XML-based language used to define web services and to describe how to access them. WSDL is a suggestion by Ariba, IBM and Microsoft for describing services for the W3C XML Activity on XML Protocols.


UDDI (Universal Description, Discovery and Integration) is a directory service where businesses can register and search for web services.

UDDI is a public registry, where one can publish and inquire about web services.


The term Web services describes a standardized way of integrating Web-based applications using the XML, SOAP, WSDL and UDDI open standards over an Internet protocol backbone. XML is used to tag the data, SOAP is used to transfer the data, WSDL is used for describing the services available and UDDI is used for listing what services are available. Used primarily as a means for businesses to communicate with each other and with clients, Web services allow organizations to communicate data without intimate knowledge of each other's IT systems behind the firewall.

Unlike traditional client/server models, such as a Web server/Web page system, Web services do not provide the user with a GUI. Web services instead share business logic, data and processes through a programmatic interface across a network. The applications interface, not the users.

Developers can then add the Web service to a GUI (such as a Web page or an executable program) to offer specific functionality to users.



XML is a markup language for documents containing structured information. XML (Extensible Markup Language) is a set of rules for encoding documents electronically. XML's design goals emphasize simplicity, generality, and usability over the Internet. It is a textual data format, with strong support via Unicode for the languages of the world. Although XML's design focuses on documents, it is widely used for the representation of arbitrary data structures, for example in web services.

There are a variety of programming interfaces which software developers may use to access XML data, and several schema systems designed to aid in the definition of XML-based languages.

As of now, hundreds of XML-based languages have been developed, including RSS, Atom, SOAP, and XHTML. XML has become the default file format for most office-productivity tools, including Microsoft Office, OpenOffice.org, and Apple's iWork.

Features of XML:

  • XML stands for Extensible Markup Language
  • XML is a markup language much like HTML
  • XML was designed to carry data, not to display data
  • XML tags are not predefined. You must define your own tags
  • XML is designed to be self-descriptive
  • XML is a W3C Recommendation

The Difference between XML and HTML:

XML is not a replacement for HTML. HTML is about displaying information, while XML is about carrying information.

XML and HTML were designed with different goals:

  • XML was designed to transport and store data, with focus on what data is.
  • HTML was designed to display data, with focus on how data looks.

XML documents are composed of markup and content. There are six kinds of markup that can occur in an XML document: elements, entity references, comments, processing instructions, marked sections, and document type declarations.

Important Points:

XML is Just Plain Text: XML is nothing special. It is just plain text. Software that can handle plain text can also handle XML. However, XML-aware applications can handle the XML tags specially. The functional meaning of the tags depends on the nature of the application.

With XML You Invent Your Own Tags:
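A minimal example of such a document, the classic "note" message with author-invented tags, can be read with any XML library; here it is parsed with Python's standard xml.etree module:

```python
import xml.etree.ElementTree as ET

# The tags below (<note>, <to>, <from>, <heading>, <body>) are not defined
# in any XML standard; they are invented by the author of the document.
doc = """<note>
  <to>Tove</to>
  <from>Jani</from>
  <heading>Reminder</heading>
  <body>Don't forget me this weekend!</body>
</note>"""

note = ET.fromstring(doc)
print(note.find("to").text)      # Tove
print(note.find("from").text)    # Jani
```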

The tags in the example above (like <to> and <from>) are not defined in any XML standard. These tags are "invented" by the author of the XML document.

That is because the XML language has no predefined tags.

The tags used in HTML (and the structure of HTML) are predefined. HTML documents can only use tags defined in the HTML standard (like <p>, <h1>, etc.).

XML allows the author to define his own tags and his own document structure.

XML is Not a Replacement for HTML

XML is a complement to HTML. It is important to understand that XML is not a replacement for HTML. In most web applications, XML is used to transport data, while HTML is used to format and display the data.


SOAP is a simple XML-based protocol that lets applications exchange information over HTTP. SOAP, originally defined as Simple Object Access Protocol, is a protocol specification for exchanging structured information in the implementation of Web services in computer networks. It relies on Extensible Markup Language (XML) as its message format, and usually relies on other application-layer protocols (most notably Remote Procedure Call (RPC) and HTTP) for message negotiation and transmission. SOAP can form the foundation layer of a web services protocol stack, providing a basic messaging framework upon which web services can be built.

The SOAP architecture consists of several layers of specifications for message format, message exchange patterns (MEP), underlying transport protocol bindings, message processing models, and protocol extensibility. SOAP is the successor of XML-RPC, though it borrows its transport and interaction neutrality and the envelope/header/body from elsewhere (probably from WDDX).

Features of SOAP:

  • SOAP stands for Simple Object Access Protocol
  • SOAP is a communication protocol
  • SOAP is for communication between applications
  • SOAP is a format for sending messages
  • SOAP communicates via Internet
  • SOAP is platform independent
  • SOAP is language independent
  • SOAP is based on XML
  • SOAP is simple and extensible
  • SOAP allows you to get around firewalls
  • SOAP is a W3C recommendation


It is important for application development to allow Internet communication between programs.

Today's applications communicate using Remote Procedure Calls (RPC) between objects like DCOM and CORBA, but HTTP was not designed for this. RPC represents a compatibility and security problem; firewalls and proxy servers will normally block this kind of traffic.

A better way to communicate between applications is over HTTP, because HTTP is supported by all Internet browsers and servers. SOAP was created to accomplish this.

SOAP provides a way to communicate between applications running on different operating systems, with different technologies and programming languages.

SOAP Building Blocks:

A SOAP message is an ordinary XML document containing the following elements:

  • An Envelope element that identifies the XML document as a SOAP message
  • A Header element that contains header information
  • A Body element that contains call and response information
  • A Fault element containing errors and status information

All the elements above are declared in the default namespace for the SOAP envelope: http://www.w3.org/2003/05/soap-envelope.

Syntax Rules:

Here are some important syntax rules:

  • A SOAP message MUST be encoded using XML
  • A SOAP message MUST use the SOAP Envelope namespace
  • A SOAP message MUST use the SOAP Encoding namespace
  • A SOAP message must NOT contain a DTD reference
  • A SOAP message must NOT contain XML Processing Instructions

The SOAP Envelope Element: The required SOAP Envelope element is the root element of a SOAP message. This element defines the XML document as a SOAP message.

The encodingStyle Attribute:

The encodingStyle attribute is used to define the data types used in the document. This attribute may appear on any SOAP element, and applies to the element's contents and all child elements.

A SOAP message has no default encoding.

The SOAP Body Element:

The required SOAP Body element contains the actual SOAP message intended for the ultimate endpoint of the message. Immediate child elements of the SOAP Body element may be namespace-qualified.

The SOAP Fault Element:

The SOAP Fault element holds errors and status information for a SOAP message. The optional Fault element is used to indicate error messages. If a Fault element is present, it must appear as a child element of the Body element. A Fault element can only appear once in a SOAP message.
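A minimal SOAP message with the elements described above can be assembled with Python's standard xml.etree module. In this sketch, only the SOAP 1.2 envelope namespace is real; the GetPatient payload element is invented for illustration:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"  # SOAP 1.2 envelope namespace
ET.register_namespace("soap", SOAP_NS)

# Envelope: root element that identifies the document as a SOAP message.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")

# Header: optional element for header information.
ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")

# Body: required element carrying the actual call information.
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
call = ET.SubElement(body, "GetPatient")   # application payload (invented)
call.set("id", "42")

message = ET.tostring(envelope, encoding="unicode")
print(message)
```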


HTTP communicates over TCP/IP. An HTTP client connects to an HTTP server using TCP. After establishing a connection, the client can send an HTTP request message to the server. A SOAP method is an HTTP request/response that complies with the SOAP encoding rules.


A SOAP request could be an HTTP POST or an HTTP GET request.

The HTTP POST request specifies at least two HTTP headers: Content-Type and Content-Length.
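A SOAP request carried over HTTP POST can be sketched as plain text. The host name and endpoint path below are invented; the two required headers from the text are shown:

```python
# Compose the raw text of a SOAP-over-HTTP POST request. The endpoint
# (www.example.org/PatientService) is invented for illustration.
soap_body = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
    '<soap:Body><GetPatient id="42"/></soap:Body>'
    "</soap:Envelope>"
)

headers = [
    "POST /PatientService HTTP/1.1",
    "Host: www.example.org",
    "Content-Type: application/soap+xml; charset=utf-8",   # required header
    f"Content-Length: {len(soap_body.encode('utf-8'))}",   # required header
]

request = "\r\n".join(headers) + "\r\n\r\n" + soap_body
print(request)
```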


SQL Server 2005 is Microsoft's relational database management system (RDBMS). It builds on a legacy of accomplishments spanning more than a decade of SQL Server development and critical success, from SQL Server 6.0, 6.5, 7.0, and 2000. But it is more than that. It is the most widely used and most scalable data management system in the world, currently deployed in hundreds of thousands of companies where it is in service day in and day out, storing the records of the digital universe that now supports our very existence.

There are important distinctions you need to make between SQL Server and a product like Access or FoxPro, and these revolve around the following three concepts, seen or felt mainly from the data consumer's perspective:

  • Concurrent access to data
  • Integrity of data
  • Availability of data

SQL Server 2005 is not a database application development environment in the sense that Microsoft Access is. It is a vast collection of components and products that holistically combine, as a client/server system, to meet the data storage, retrieval, and analysis requirements of any entity or organization.

The release of SQL Server 2000 also revealed a component model that allows SQL Server to scale down and compete in the small-systems and desktop markets. Today, with SQL Server 2005, no one is really surprised by its powerhouse of features and functionality, because business demands everything.

SQL Server 2005 thus:

  1. Is fully Internet-enabled
  2. Provides the fastest time-to-market
  3. Is the most highly scalable
  4. Is the most portable

So we can see that no matter what the needs, SQL Server 2005 can meet them.

Microsoft SQL Server is a full-featured relational database management system (RDBMS) that offers a variety of administrative tools to ease the burdens of database development, maintenance and administration. The following are the most frequently used tools: Enterprise Manager, Query Analyzer, SQL Profiler, Service Manager, Data Transformation Services and Books Online.

Enterprise Manager is the main administrative console for SQL Server installations. It provides users with a graphical bird's-eye view of all of the SQL Server installations on the network. One can perform high-level administrative functions that affect one or more servers, schedule common maintenance tasks, or create and modify the structure of individual databases.

Query Analyzer offers a quick and dirty method for performing queries against any SQL Server database. It's a great way to quickly pull information out of a database in response to a user request, test queries before implementing them in other applications, create or modify stored procedures, and execute administrative tasks.

SQL Profiler provides a window into the inner workings of the database. One can monitor many different event types and observe database performance in real time. SQL Profiler allows users to capture and replay system traces that log various activities. It's a great tool for optimizing databases with performance issues or troubleshooting particular problems.

Service Manager is used to control the MSSQLServer (the main SQL Server process), MSDTC (Microsoft Distributed Transaction Coordinator) and SQLServerAgent processes. An icon for this service normally resides in the system tray of machines running SQL Server. Users can use Service Manager to start, stop or pause any one of these services.

Data Transformation Services (DTS) provides an extremely flexible method for importing and exporting data between a Microsoft SQL Server installation and a large variety of other formats. The most commonly used DTS application is the Import and Export Data wizard found in the SQL Server program group.

Books Online is an often-overlooked resource provided with SQL Server that contains answers to a variety of administrative, development and installation issues. It's a great resource to consult before turning to the Internet or technical support.


A database management system, or DBMS, gives us access to data and helps to transform the data into information. Such database management systems include dBase, Paradox, IMS and SQL Server. These systems allow us to create, update and extract information from their databases.

A database is a structured collection of data. Data refers to the characteristics of people, things and events. SQL Server stores each data item in its own field. In SQL Server, the fields relating to a particular person, thing or event are bundled together to form a single complete unit of data, called a record (it can also be referred to as a row or an occurrence). Each record is made up of a number of fields. No two fields in a record can have the same field name.

During a SQL Server database design project, the analysis of our business needs identifies all the fields or attributes of interest. If our business needs change over time, we can define additional fields or change the definition of existing fields.

SQL Server Tables

SQL Server stores records relating to each other in a table. Different tables are created for the various groups of information. Related tables are grouped together to form a database.

Primary Key

Every table in SQL Server has a field or a combination of fields that uniquely identifies each record in the table. The unique identifier is called the primary key, or simply the key. The primary key provides the means to distinguish one record from all others in a table. It allows the user and the database system to identify, locate and refer to one particular record in the database.

Relational Database

Sometimes all the information of interest to a business operation can be stored in one table, but SQL Server also makes it very easy to link the data in multiple tables. Matching an employee to the department in which they work is one example. This is what makes SQL Server a relational database management system, or RDBMS: it stores data in two or more tables and enables us to define relationships between the tables (e.g., a login table and a bill-pay table).

Foreign Key

When a field in one table matches the primary key of another table, that field is referred to as a foreign key. A foreign key is a field or a group of fields in one table whose values match those of the primary key of another table (e.g., accno, which is the primary key in the login table, becomes a foreign key in the bill-pay table).
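The accno relationship described above can be sketched with SQLite via Python's standard sqlite3 module. The table layouts are simplified stand-ins for the project's login and bill-pay tables, not the actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when asked

# accno is the primary key of login and a foreign key in billpay.
conn.execute("CREATE TABLE login (accno INTEGER PRIMARY KEY, username TEXT)")
conn.execute("""CREATE TABLE billpay (
                  billid INTEGER PRIMARY KEY,
                  accno  INTEGER REFERENCES login(accno),
                  amount REAL)""")

conn.execute("INSERT INTO login VALUES (101, 'jsmith')")
conn.execute("INSERT INTO billpay VALUES (1, 101, 250.0)")  # valid: 101 exists

# The foreign key rejects a bill for an account that does not exist.
try:
    conn.execute("INSERT INTO billpay VALUES (2, 999, 50.0)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)   # True

# Joining on the key links a payment back to its user.
row = conn.execute("""SELECT l.username, b.amount
                      FROM billpay b JOIN login l ON l.accno = b.accno""").fetchone()
print(row)        # ('jsmith', 250.0)
```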



The system design is a solution, a "how to" approach to the creation of a new system. It is composed of a series of steps and provides an understanding of the procedural details necessary for implementing the system recommended in the feasibility study. Design goes through logical and physical stages of development. Logical design reviews the present physical system; prepares input and output specifications; makes edit, security and control specifications; details the implementation plan; and prepares a logical design walkthrough.

The database tables are designed by analyzing the various functions involved in the system, and the format of the fields is also designed. The fields in the database tables should reflect their role in the system. Unnecessary fields should be avoided because they affect the storage requirements of the system. Then, in the input and output screen design, the screens should be made user-friendly, and the menus should be precise and compact.


Today, the database is recognized as a standard component of MIS and is available for virtually every size of computer. In a database environment, common data are available to and used by several users. Instead of each program managing its own data, software manages the data as an entity. The main objectives of database design are:

  • Controlled redundancy.
  • Ease of learning and use.
  • Data independence.
  • Accuracy and integrity.
  • Recovery from failure.
  • Privacy and security.
  • Performance.

The efficiency of an application developed using VB depends mainly upon the database tables: the fields in each table, and the way the tables are joined using the fields contained in them to retrieve the necessary information. Hence a careful selection of the tables and their fields is imperative.


A data flow diagram is used to describe and analyze the movement of data through a system, including the storage of data in the system. Data flow diagrams are the central tool on the basis of which components are developed.

The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; such diagrams are called logical data flow diagrams. In contrast, physical data flow diagrams show the actual implementation and movement of data between people, departments and workstations.

The data flow diagram shows the functional decomposition of the system. The first, conceptual level (level 0) is the context diagram, which is followed by a description of the inputs and outputs for each of the entities. The next level of the DFD is level 1, which shows the main functions in the system, followed by a description of those functions. The main functions are further broken down into functions and sub-functions.

Testing and Analysis

Testing is an investigation done to provide consumers with information about the quality and standard of the product or service. Software testing is the process of validating and verifying that a program or software component meets the technical and business requirements, behaves as expected, and delivers the specified functionality. The testing process starts once the requirements have been defined and the coding has been done. Testing cannot always identify all the defects within the software or product; it meets its objective by observing whether the developed product behaves as expected. Every product has target users: for example, software developed for the health care industry is completely different from software developed for the banking sector.

Scope of testing: A primary goal of testing is to detect software bugs so that they can be discovered and corrected. The testing process cannot guarantee that a product executes properly in all scenarios, but it can establish that the product does not work properly under some scenarios. Its scope includes examining the code as well as running the code in different environments; the testing organization or team may be separate from the development team.

Defects and Failures: Not all defects can be categorized as coding errors; some defects arise during design or requirements analysis. These types of errors are very expensive because they force a return to the first step of the software development cycle. Such requirement gaps are often caused by non-functional requirements such as scalability, testability and usability.

Compatibility: A common cause of software failure is incompatibility with other software and hardware. A web application developed in .NET may work as expected, without errors, in browsers such as Internet Explorer and Mozilla Firefox, but may not be compatible with Safari running on UNIX/Linux operating systems. These kinds of issues are not anticipated in the analysis phase of software development; once the product is distributed to the stakeholders and used by end users, the issues arise in production. Sometimes an application that works in older versions does not work as expected in newer versions. The testing team is responsible for finding bugs related to the code, the functionality, the business flow, the operating system, the end users, and compatibility with older versions.

Testing artefacts:

Several artefacts are produced in the process of software testing, such as the test plan, traceability matrix, test case, test script, test suite, test data and test harness.

Test plan: A specification developed for testing is known as a test plan. These specifications are developed by the leads so that developers can write code with the testing scenarios in mind, which makes the code more reliable. Some companies develop their own testing strategies based on their requirements.

Traceability matrix: A table that correlates the design documents or requirements with the testing documents; it checks the functionality of the product against the behaviour expected by the client.

Test case: Test cases are scenarios with chosen input values used to test the product. A test case is a step-by-step procedure for checking a piece of functionality; in essence it consists of input values, actual output values, and expected values or behaviour.
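A test case reduced to its essentials, as described above, is just input values paired with expected output. The following sketch illustrates this; the `divide` function is a stand-in for illustration, not part of the Hospital Management code base:

```python
# A hypothetical unit under test.
def divide(a, b):
    return a / b

# Each test case records the input values and the expected output.
test_cases = [
    {"input": (10, 2), "expected": 5.0},
    {"input": (9, 3), "expected": 3.0},
]

# Run every case and record whether the actual output matched.
results = []
for case in test_cases:
    actual = divide(*case["input"])
    results.append(actual == case["expected"])

print(results)  # [True, True]
```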

Test script: Combining the test case, the test procedure and the test data gives a test script. Test scripts can be developed manually, by an automated process, or by both.

Test suite: A collection of test cases is considered a test suite. It contains detailed instructions for the test cases.

Test data: A collection of input values used to test a particular piece of code or functionality. These input values can be negative, positive, or both.

Testing Methods

Unit Testing

A unit is the smallest part of the software that can be tested. Unit testing is normally performed by the developers during the development stage. In procedural programming a unit is a function or procedure, while in object-oriented programming a unit is a class. Unit testing is very important for finding syntax errors, compiler warnings and runtime errors, and should be done before a developer commits code to source control.
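A developer-level unit test of the kind described above might look like the following sketch. The `total_fee` helper is a hypothetical function from a billing module, assumed here for illustration, and Python's standard `unittest` framework stands in for whatever harness the project actually uses:

```python
import unittest

# Hypothetical billing-module helper; not from the actual code base.
def total_fee(consultation, medicines):
    if consultation < 0 or medicines < 0:
        raise ValueError("fees cannot be negative")
    return consultation + medicines

class TotalFeeTest(unittest.TestCase):
    def test_adds_fees(self):
        # Normal case: the fee components are summed.
        self.assertEqual(total_fee(200, 150), 350)

    def test_rejects_negative(self):
        # Error case: negative inputs must be rejected.
        with self.assertRaises(ValueError):
            total_fee(-1, 50)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TotalFeeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Tests like these run quickly in isolation, which is why they fit into the developer's edit-compile-commit loop.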

Integration Testing

Integration testing is an extension of unit testing in which two or more units that have already been tested are combined and tested together. Developers combine a few units to form a component and a few components to form a module; this process continues until the whole product has been tested. Integration testing is used to identify the problems that occur when units are combined and exposes incompatibilities between objects.

There are three common strategies for integration testing:

  • Top Down Approach
  • Bottom Up Approach
  • Umbrella Approach

White Box Testing

White box testing is done by a tester after all the units have been combined. It includes analyzing data flow, control flow, information flow, etc. White box testing is done to check whether the code follows the intended design and to validate its functionality.

White box testing requires access to the source code; it can be performed at any time in the life cycle after the code has been developed.

To implement white box testing, the tester needs to know a range of testing tools and techniques.
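The control-flow analysis mentioned above can be illustrated with a minimal branch-coverage sketch. The `discount` function below is a stand-in, not code from the project; the point is that white box testing exercises every branch the source code exposes:

```python
# Hypothetical billing rule with two branches.
def discount(bill, is_insured):
    if is_insured:
        return bill * 0.8  # insured patients pay 80% of the bill
    return bill            # uninsured patients pay in full

# White box view: one test per branch gives full branch coverage
# of this function, which black box testing cannot guarantee.
assert discount(100, True) == 80.0
assert discount(100, False) == 100
print("both branches covered")
```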

Sandbox Testing

Sandbox testing is testing done by the tester offline: a sandbox is essentially the ability to submit and process transactions, data, etc. within an application that is not in a live state. These transactions are not online or real, so they have no external effect. It is usually used during a development cycle.

Functional Testing

In functional testing we test the functionality of the system without regard to its implementation. We provide input and check whether the system gives the required output or not; the tester does not concentrate on the implementation of the program. Generally, in functional testing the tester runs the test cases and checks the results. Functional testing also helps in creating test suites by providing criteria for selecting test cases.



A software development methodology is a framework in software engineering used to plan, structure and control the process of developing an information system; it is also known as a system development methodology. Many varieties of frameworks have evolved over the past few years, each with its own recognized strengths and weaknesses. No single software development methodology is suitable for all projects.

Each of the available methodologies is best suited to specific kinds of projects, based on various technical, project, organizational and team considerations.

A software development methodology consists of:

  1. A software development philosophy, which shows how to approach the software development process.
  2. Models, methods and tools which assist the software development process.


In the earliest days, software development was done using flow charts. Software development methodologies emerged in the early 1960s; the systems development life cycle was the oldest methodology for building information systems.

The development life cycle originated in the 1960s for developing large-scale functional business systems.

Software Development Methodologies:

Structured programming was introduced in 1969. From 1980 onwards the Structured Systems Analysis and Design Methodology (SSADM) was used. In the 1990s there was a huge change in methodologies: object-oriented programming had been developing since the 1960s, but it became the most widely used methodology by the 1990s.

Rapid Application Development was introduced in 1991.

Scrum and the Team Software Process were developed in the late 1990s.

Extreme Programming was introduced in 1999, the Rational Unified Process in 1998, and the Agile Unified Process by Scott Ambler in 2005.

Types of Software Development or Approach:

  1. Waterfall: linear framework.
  2. Prototyping: iterative framework.
  3. Incremental: a combination of the linear and iterative frameworks.
  4. Spiral: a combination of the linear and iterative frameworks.
  5. Rapid Application Development (RAD): iterative framework.

Waterfall Model:

This model is a sequential development process in which development flows steadily downwards, like a waterfall, through the phases of requirements analysis, design, implementation, testing, integration and maintenance. It was published by Winston W. Royce. The waterfall model works as follows:

  1. The project is divided into phases; some overlap and splash-back between phases is acceptable.
  2. Time schedules, target dates, budget and implementation of the whole system are planned at one time.
  3. The life of the project is controlled through extensive written documentation, and through formal reviews and approvals by the user and information technology management at the end of every phase.


Prototyping:

Prototyping is the framework of activities, carried out during software development, of creating prototypes.

The principles involved are:

  1. It is not a standalone, complete development methodology, but an approach to handling selected portions of a larger, more traditional development methodology.
  2. It reduces the inherent project risk by breaking the project into smaller segments, which allows easier change during the development process.
  3. The user is involved throughout the process, which increases the likelihood of user acceptance of the final implementation.
  4. Small-scale mock-ups of the system are developed through an iterative modification process until the prototype evolves to meet the users' requirements.
  5. Most prototypes are developed with the expectation that they will be discarded.
  6. The fundamentals of the underlying business problem must be understood to avoid solving the wrong problem.


Incremental:

There are many methods for combining linear and iterative system development techniques, all with the intention of reducing project risk by breaking the project into smaller segments that allow easier change during the development process.

The basic principles involved are:

  1. A series of mini-waterfalls is performed, in which all the phases of the waterfall model are completed for a small part of the system.
  2. All the requirements are defined before proceeding to evolutionary mini-waterfall development of the individual increments.
  3. The initial stages of the methodology follow the waterfall approach, followed by iterative prototyping, culminating in the installation of the final prototype.


Spiral:

The spiral model is a software development process which combines design and prototyping-in-stages, in an effort to combine the advantages of the top-down and bottom-up concepts.

The basic principles involved are:

  1. Focus on risk assessment: minimize project risk by breaking the project into smaller parts, which allows easier change during the development process.
  2. Provide the opportunity to evaluate risks, and to weigh consideration of continuing the project, throughout the life cycle.
  3. Each cycle involves progress through the same sequence of steps for each portion of the product.
  4. Every trip around the spiral passes through four basic quadrants:
    1. Determining the objectives, alternatives and constraints of the iteration.
    2. Evaluating the alternatives and identifying and resolving risks.
    3. Developing and verifying the deliverables of the iteration.
    4. Planning the next iteration.
  5. Begin each cycle by identifying the stakeholders and their win conditions, and end each cycle with review and commitment.

Agile Methodology:

Agile methodology is a very different kind of model used in the development of software products. It is based on iterative development, in which requirements and solutions evolve through collaboration between self-organizing teams.

The term was coined in 2001. Agile methods divide the work into small increments with minimal planning; this type of development does not involve long-term planning. Iterations are short time frames of around three to four weeks, and each iteration involves the full software development cycle, including planning, analysis, design, coding and testing.

Screenshots


In today's IT world, service-oriented architectures are rapidly being accepted as a sound, modularized approach for developing and deploying services across the enterprise. Implementing such an SOA requires careful planning. SOA is highly distributed and independent of language and technology: services developed in Java can be consumed from .NET, and vice versa. Applications built on a service-oriented architecture need a good logging mechanism to troubleshoot the issues raised in production. Each company will encounter its own set of challenges in the long run and must solve them proactively. SOA is a highly scalable way to manage complexity in the IT industry. The benefits of the SOA approach are:

  1. Platform independence: SOA is not dependent on any platform.
  2. Simplified integration of components: components involved in the SOA architecture are easily integrated.
  3. Low risk: risk is low in SOA when it is implemented properly.
  4. Easy decoupling: service providers and service consumers can be easily decoupled.
  5. Lower maintenance costs and simplified application maintenance.
  6. Transparency in the location of services.
  7. Easy development, deployment and maintenance.
  8. Easy troubleshooting.
  9. Scalability: applications can be easily extended and new features added in the future.
  10. Reusability: components developed in one product can be easily modified and used in other applications.
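The request/response decoupling listed above can be sketched in miniature. The service name and message fields below are assumptions made for illustration; a real SOA service would sit behind SOAP or REST plumbing and perform the database operations, both of which are omitted here:

```python
import json

def patient_registration_service(request_json):
    """Receives a JSON request, processes it, returns a JSON response."""
    request = json.loads(request_json)
    if request.get("action") != "register_patient":
        return json.dumps({"status": "error", "message": "unknown action"})
    # ... the database insert would happen here ...
    return json.dumps({"status": "ok", "patient": request["name"]})

# The consumer depends only on the message format, not on the service's
# implementation language -- this is the decoupling that SOA provides.
response = json.loads(patient_registration_service(
    json.dumps({"action": "register_patient", "name": "John Smith"})
))
print(response["status"])  # ok
```

Because the contract is just the message format, the same consumer code would work whether the service behind it were written in Java or .NET.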

