Enterprise Collaboration Tool is rethinking how technology can help companies manage their work and customer relationships. ECT delivers a feature-rich set of business processes that enhance marketing effectiveness, drive sales performance, improve customer satisfaction and provide executive insight into business performance. Supported by deep collaboration and administration capabilities that adapt to how your company operates, ECT suits organizations of all sizes across a broad range of industries.

Enterprise Collaboration Tool is a powerful modular Internet/Intranet application framework. It features a scheduler, meetings, messaging, an address book, file upload and download, and feedback. Everything is designed for online collaboration.


The tool supports effective planning and scheduling of time-bound work, monitoring of the work done by employees, and effective use of an online system to communicate and collaborate with the members of other centers of a decentralized organization.


The system is adaptable to any organization, allowing work to be assigned and the work done by people to be monitored with respect to time.


§ Captures daily work done by employees.

§ Provides a communication module.

§ Summary reports on work done by employees.

§ Attendance report based on the entered work details.

§ Online work log entry and work assignment.

§ Address book to store contact numbers and personal information.

§ Scheduler to keep notes marked against a particular day.

§ Feedback, which enables management to look up comments from employees.

§ Uploading and downloading facility.

§ Information about all the existing branches of the organization.

§ Newsletters regarding the day-to-day affairs of the organization.

The proposed Enterprise Collaboration Tool consists of a login screen from which the users and the administrator can log on to the system. In this system, the users are given different roles, and each role is associated with some services. The various roles are administrator, employee, programmer, etc. The admin is the head of all branches and is responsible for user management and branch administration.

The Enterprise Collaboration Tool process starts with the creation of users and the mapping of their roles from the administrator login. The system maintains the daily check-in and check-out times and allows users to enter their worklog daily, with a provision to enter the previous day's worklog details as well (an exemption is given for entering the previous day's backlog).
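
As a rough illustration of how the role-based login described above might be enforced in the JSP/Servlet front end, the sketch below checks a role value stored in the HTTP session before serving an administrator-only page. The attribute name "role", the value "ADMIN", and the page names are assumptions made for this example, not details taken from the design.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Hypothetical sketch: restrict an admin-only page based on a role kept in the session.
public class AdminPageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        HttpSession session = request.getSession(false);
        // "role" and "ADMIN" are illustrative names, not values defined by this document.
        String role = (session == null) ? null : (String) session.getAttribute("role");
        if (!"ADMIN".equals(role)) {
            response.sendRedirect("login.jsp"); // unauthorized users go back to the login screen
            return;
        }
        response.setContentType("text/html");
        response.getWriter().println("<html><body>Administrator dashboard</body></html>");
    }
}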

Company Profile:

Verza Soft is quickly emerging as an innovative and essential business software applications development resource. Since 1985, Verza Soft has provided custom design, implementation and support solutions to a variety of information management industries and business environments. Experienced and intuitive, the professionals at Verza Soft understand how fast technologies change, and we remain committed to solving the unique information management application challenges of today's business world while developing and evolving strategies for tomorrow's.

With over twenty combined years of professional software development experience, the principals at Verza Soft are business applications authorities. As specialists, we've learned to anticipate individual client application needs and design software suites to complement virtually every hardware technology. We offer 24-hour customer service and employ a qualified team of trainers, technicians and creative designers who assist in developing the comprehensive, user-friendly software programs that distinguish Verza Soft as the perfect answer to the often puzzling questions inherent in contemporary information management technologies.

We've been working on an offshore development model from day one and have perfected the process of onsite-offshore interaction over the past seven years. Our services are highly cost-effective, enabling our clients to get the best value for their money.

Working primarily within the Sun family of products to combine expert use of hardware technology and state-of-the-art software, Verza Soft is "ware" it's at! From software programming and applications development, including custom Internet integration, to system architecture and technical design, through continuous support solutions, Verza Soft pieces together today's information management puzzle to create optimal, fully integrated, interactive packages that best meet global business demands.

Working one-on-one, we can develop innovative applications that not only satisfy your specific business requirements, but also complement your company's investment in essential information management technologies. We're dedicated to making you look good: custom design "ware" by Verza Soft is the solution that fits!

Verza Soft understands the significance of a good quality assurance (QA) process for creating world-class products. With hundreds of person-years of experience in the field of testing, it has expertise in the development and execution of tests for applications in the client/server, Internet/web and mobile spaces, using both automated and manual methodologies. Functional testing verifies whether a product/application performs as per specifications.

System Analysis:

Existing System

The existing system is a manual one. The administrator's task at the main branch becomes complex: preparing schedules for all the employees working in different branches, sending the schedules manually, and tracking their status. Generating different reports by correlating different pieces of information is a tedious process.

In this system each sub-center manually maintains its own records of employees' daily work logs, scheduled work, progress of work, attendance, leaves, payments, etc., and sends reports to the main center through email attachments or by post.

Unless and until the main-center manager receives reports from the sub-centers, he is unable to take decisions regarding employees and their salaries, promotions, meetings, daily schedules, scheduled projects, customer details, etc.


§ Doesn't provide effective co-ordination between different branches.

§ Doesn't provide role-based security for the data.

§ Generating different kinds of reports becomes more complex.

§ Doesn't provide effective communication for employees within the company.

§ Doesn't allow the administrator to monitor the overall activities of the company.

These drawbacks of the existing system lead to the present web-based application called Enterprise Collaboration Tool, whereby the management is relieved of the difficulties it faced previously.

Proposed System

The proposed system is a software solution for the existing system. It is a powerful modular Internet/Intranet application framework which provides good co-ordination between branches and allows the administrator to effectively track the activities of the company. It features a scheduler, worklog, meetings, messaging, an address book, file upload and download, and feedback. Everything is designed for online collaboration.


§ Provides effective co-ordination between different branches regarding work schedules through scheduler and worklog facilities.

§ Improves the quality of planning and managing work

§ Generating different reports will be very easy

§ Provides a facility for the administrator to track overall activities of the company

§ Provides a good communication channel for the employees to interact within the company

§ Provides upload and download facilities to share the documents

§ Provides a facility to collect the feedback from the employees

§ Provides a facility for the employees to maintain the contacts in their address book.

Feasibility Study

A feasibility study is an important phase in the software development process. It enables the developer to make an assessment of the product being developed. It refers to studying the feasibility of the product in terms of its outcomes, operational use, and the technical support required for implementing it.

Feasibility study should be performed on the basis of various criteria and parameters. The various feasibility studies are:

§ Economic Feasibility

§ Operational Feasibility

§ Technical Feasibility

Economic Feasibility:

It refers to the benefits or outcomes we derive from the product as compared to the total cost we spend on developing it. If the benefits are more or less the same as with the older system, then it is not feasible to develop the product. With this application, the amount of time spent in preparing schedules, sending them to different branches, and monitoring the work will be reduced, which indirectly increases productivity for the company.

Operational Feasibility:

It refers to the feasibility of the product being operational. Some products may work very well at design and implementation but may fail in the real-time environment. It includes the study of the additional human resources required and their technical expertise. This application will work in any environment without problems since the project is implemented in the Java language.

Technical Feasibility:

It refers to whether the software available in the market fully supports the present application. It studies the pros and cons of using particular software for the development and its feasibility. It also studies the additional training needed for people to make the application work. For this project we need not recruit any additional staff to make use of the application; one hour of training is enough for the staff to work with it. Since this application uses software already in use by the company, the company need not purchase new software to run this project.

Software and Hardware Requirements:


Hardware:

1. Pentium IV processor architecture.

2. 512 MB RAM.

3. 160 GB hard disk space.

4. Ethernet card.

Software:

Database : Oracle 10g XE

Web Server : Apache Tomcat 5.0

Front end : JSP / Servlets, J2SDK 1.5, HTML, JavaScript
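
As a minimal sketch of how the JSP/Servlet front end could obtain a connection to the Oracle 10g XE database listed above, the utility below uses the standard Oracle thin JDBC driver. The host, port, SID, user name and password are placeholders assumed for illustration; they are not specified in this document.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical connection utility for the environment listed above.
public class DbUtil {

    // URL, user and password below are placeholder values, not taken from the source.
    private static final String URL = "jdbc:oracle:thin:@localhost:1521:XE";
    private static final String USER = "ect_user";
    private static final String PASSWORD = "ect_password";

    public static Connection getConnection() throws SQLException {
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver"); // load the Oracle thin driver
        } catch (ClassNotFoundException e) {
            throw new SQLException("Oracle JDBC driver not found on the classpath");
        }
        return DriverManager.getConnection(URL, USER, PASSWORD);
    }
}

In a production deployment, a container-managed connection pool (for example a Tomcat JNDI DataSource) would normally be preferred over opening connections directly.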

Functional Requirements & Non Functional Requirements:

Functional Requirements

The main purpose of functional requirements within the requirement specification document is to define all the activities or operations that take place in the system. These are derived through interactions with the users of the system. Since the Requirements Specification is a comprehensive document & contains a lot of data, it has been broken down into different Chapters in this report.

But the general Functional Requirements arrived at the end of the interaction with the Users are listed below. A more detailed discussion is presented in the Chapters, which talk about the Analysis & Design of the system.

1. The system holds the details of the employees and their branches.

2. It holds the schedules of different employees of the company.

3. It holds the details of all works done by the employees.

4. The system allows the administrator to manage different users.

5. It also allows the administrator to prepare schedules and assign them to different employees.

6. It allows the administrator to post the meeting details which will be displayed for all the employees.

7. It allows the employees to store the customer contacts in their address book.

8. It allows the administrator and employees to share the documents using upload and download facilities.

9. It allows the employees to post their feedback.

10. It allows the administrator to view the feedback posted by employees.

11. It allows the administrator to broadcast news information.

12. It allows the employees to send a message to other employees or to a group of employees at once.

13. It allows the administrator to view the pending-work report.


The non-functional requirements consist of

1. Constraints.

2. Guidelines.


These are the requirements that are not directly related to the functionality of the system. These should be considered as mandatory when the system is developed. The following Constraints were arrived at for the system:

1. The system should be available over the intranet so that users such as the administrator and employees can use the system from their respective locations within the company.

2. To gain entry into the system, employees should be registered by the administrator and should use a login and password to gain access to the system.

3. The users should be able to change their passwords for increased security.

4. The system should be easy to understand and organized in a structured way. The users should also receive appropriate messages about any errors that occur.

5. There should be no limitation about the hardware platform that is to be used to run the system.

6. Data integrity should be maintained if an error occurs or the whole system comes down.


We have discussed mandatory requirements in the previous section. The requirements in this section should be taken as suggestions & they should be thought of as recommendations to further enhance the usability of the system.

1. The system should display a menu for users to choose from.

2. The system should respond to users' requests in a reasonable time.

3. Services of the system should be available 24 hours a day.

4. The system should be designed in such a way that it is easy to enhance it with more functionality. It should be scalable & easily maintainable.

Execution Methodology


The implementation of the project is divided into the following phases:

Phase 1 - Business Process & Requirements analysis

Phase 2 - System Requirements Specifications

Phase 3 - Design and Development

Phase 4 - Testing & Debugging

Phase 5 - Implementation

Phase 1 - Business Process & Requirements analysis

Business Process & Requirements Analysis is the phase in which the relevant business area is studied in detail. This process brings out the gaps in the existing systems and identifies the areas where business operations should be modified, keeping in view the way the work needs to be carried out to overcome the problems. During this phase the required documents are prepared, defining the existing and required setup for the project.

Phase 2 - System Requirements Specifications

The information about the requirements is collected which contains the information about the current user system and the proposed system as seen from the user perspective. At the end of this phase a detailed requirement specification document is prepared and approved.

Phase 3 - Design and Development

In this phase the framework for the design of the proposed product is drawn up to meet the documented requirements specifications.

The product is developed as per the framework to meet the objectives of the system requirement specifications approved.

Phase 4 - Testing & Debugging

This phase covers the preparation of test cases and the standards of testing. The end users carry out the testing using dummy user IDs.

Phase 5 - Implementation

The project enters the implementation phase when the product is ready to be implemented or piloted in the production environment; after thorough training of all the end users, the product is implemented.

Project Estimates

The estimated timelines for completing the implementation of the application, and the approach, are enumerated below.


Duration in weeks

1 - Business Process & Requirements Analysis

2 - System Requirements Specifications

3 - Design and Development

4 - Testing & Debugging

5 - Implementation

Dubious use of the System

A user may deliberately misuse the system, for example by entering an incorrect time-in/time-out or by submitting a wrong worklog or feedback. The system is to be further developed to counter-check an employee's entries with those of the local authorities.

System Design:

Logical Design

Design for WebApps encompasses technical and non-technical activities. The look and feel of content is developed as part of graphic design; the aesthetic layout of the user interface is created as part of interface design; and the technical structure of the WebApp is modeled as part of architectural and navigational design.

Dix argues that a Web engineer must design an interface so that it answers three primary questions for the end-user;

1. Where am I? - The interface should (1) provide an indication of the WebApp that has been accessed and (2) inform the user of her location in the content.

2. What can I do now? - The interface should always help the user understand his current options- what functions are available, what links are live, what content is relevant.

3. Where have I been; where am I going? - The interface must facilitate navigation. Hence it must provide a “map” of where the user has been and what paths may be taken to move elsewhere in the WebApp.

Design goals- the following are the design goals that are applicable to virtually every WebApp regardless of application domain, size, or complexity.

1. Simplicity

2. Consistency

3. Identity

4. Visual appeal

5. Compatibility.

Design leads to a model that contains the appropriate mix of aesthetics, content, and technology. The mix will vary depending upon the nature of the WebApp, and as a consequence the design activities that are emphasized will also vary.

The activities of the Design process;

1. Interface design-describes the structure and organization of the user interface. Includes a representation of screen layout, a definition of the modes of interaction, and a description of navigation mechanisms.

Interface Control mechanisms- to implement navigation options, the designer selects from one of a number of interaction mechanisms:

a. Navigation menus

b. Graphic icons

c. Graphic images

Interface Design work flow- the work flow begins with the identification of user, task, and environmental requirements. Once user tasks have been identified, user scenarios are created and analyzed to define a set of interface objects and actions.

2. Aesthetic design-also called graphic design, describes the “look and feel” of the WebApp. It includes color schemes, geometric layout, text size, font and placement, the use of graphics, and related aesthetic decisions.

3. Content design-defines the layout, structure, and outline for all content that is presented as part of the WebApp. Establishes the relationships between content objects.

4. Navigation design-represents the navigational flow between content objects and for all WebApp functions.

5. Architecture design-identifies the overall hypermedia structure for the WebApp. Architecture design is tied to the goals established for a WebApp, the content to be presented, the users who will visit, and the navigation philosophy that has been established.

a. Content architecture focuses on the manner in which content objects are structured for presentation and navigation.

b. WebApp architecture addresses the manner in which the application is structured to manage user interaction, handle internal processing tasks, effect navigation, and present content. WebApp architecture is defined within the context of the development environment in which the application is to be implemented.

J2EE uses the MVC architecture; a minimal sketch of this arrangement follows the list of design activities below.

6. Component design-develops the detailed processing logic required to implement functional components.
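
As a minimal sketch of the Model 2 (MVC) arrangement referred to above: a controller servlet receives the request, prepares the model data, stores it in request scope, and forwards to a JSP view. The servlet name, attribute name and JSP file name are illustrative assumptions only.

import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical controller in a Model 2 (MVC) arrangement.
public class WorklogController extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Model step (placeholder): fetch the data to display, e.g. from a DAO.
        request.setAttribute("today", new java.util.Date());

        // View step: forward to a JSP that renders the result ("worklog.jsp" is illustrative).
        RequestDispatcher view = request.getRequestDispatcher("/worklog.jsp");
        view.forward(request, response);
    }
}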


Scheduler & Work Log Module:

User Management & Branch Management Module:

Communication Module (Messages, Meetings, Notices Module & News):

Address Book & Feedback Module:


Scheduler & Work Log Module:

This module helps in preparing the work schedules and monitoring work simply by sitting at the main branch. It provides user-friendly screens which include calendars to select the date. Once the administrator has added work to the schedule, it will be displayed to all the employees in all the branches. It will be easy for an employee to know his work schedule, complete it, and intimate it to the administrator by entering the work details in the work log, so that the administrator can monitor it easily.
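
As an illustration only, a worklog entry could be saved with a prepared statement along the lines of the sketch below. The column names are assumptions; the actual structure of the WORKLOG_TWM table is not reproduced in this document.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical data-access sketch; column names are assumed, not taken from the data dictionary.
public class WorklogDao {
    public void saveWorklog(Connection con, String empId, java.sql.Date workDate, String details)
            throws SQLException {
        String sql = "INSERT INTO WORKLOG_TWM (EMP_ID, WORK_DATE, WORK_DETAILS) VALUES (?, ?, ?)";
        PreparedStatement ps = con.prepareStatement(sql);
        try {
            ps.setString(1, empId);
            ps.setDate(2, workDate);
            ps.setString(3, details);
            ps.executeUpdate();
        } finally {
            ps.close();
        }
    }
}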

User Management & Branch Management Module:

This module helps the administrator to add new branch details to the database, edit the existing branch details, and delete a branch. It also provides a facility for the administrator to add employee details, create logins for the required employees, edit user details, and delete user information from the database.

Communication Module (Messages, Meetings, Notices Module & News):

This module provides a facility for the employees to communicate with each other easily by sending messages within the application. The messages section provides options to send a message to another employee, view the message list, open a message, delete a message, and send a message to all the employees in a group at a time.

This module provides a facility for the employees to know the details of the meetings that are going to be conducted, just by clicking the view-meeting-details option in the meetings link. The users can also post meeting details at any point of time.

This module provides a facility for sending notices prepared by one branch to another branch: the employees of one branch upload the notice document, and the employees of the other branch download it. It is just like sharing documents across the branches.

This module helps the administrator to post news details, which will be displayed to all the users whenever they log in.

Address Book & Feedback Module:

This module provides a facility to the employees to store their individual contact details in the address book. It allows us to add, edit and delete the contact details in the address book.

This module helps the users to post their feedback online about a policy implemented by the company, and allows the administrator to view the feedback posted by all the employees.


This module allows the administrator to view different kinds of reports according to his requirements. It generates reports on employees, employee work, and groups.

Physical Design

UML Diagrams

Data Dictionary
Data Modeling Overview

A data model is a conceptual representation of the data structures that are required by a database. The data structures include the data objects, the associations between data objects, and the rules which govern operations on the objects. As the name implies, the data model focuses on what data is required and how it should be organized rather than what operations will be performed on the data. To use a common analogy, the data model is equivalent to an architect's building plans.

A data model is independent of hardware or software constraints. Rather than try to represent the data, as a database would see it, the data model focuses on representing the data as the user sees it in the "real world". It serves as a bridge between the concepts that make up real-world events and processes and the physical representation of those concepts in a database.


There are two major methodologies used to create a data model: the Entity-Relationship (ER) approach and the Object Model.

Data Modeling In the Context of Database Design

Database design is defined as: "design the logical and physical structure of one or more databases to accommodate the information needs of the users in an organization for a defined set of applications". The design process roughly follows five steps:

* planning and analysis
* conceptual design
* Logical design
* Physical design
* implementation

The data model is one part of the conceptual design process. The other, typically, is the functional model. The data model focuses on what data should be stored in the database, while the functional model deals with how the data is processed. To put this in the context of the relational database, the data model is used to design the relational tables. The functional model is used to design the queries which will access and perform operations on those tables.

Components of a Data Model

The data model gets its inputs from the planning and analysis stage. Here the modeler, along with analysts, collects information about the requirements of the database by reviewing existing documentation and interviewing end-users. The data model has two outputs. The first is an entity-relationship diagram, which represents the data structures in a pictorial form. Because the diagram is easily learned, it is a valuable tool to communicate the model to the end-user. The second component is a data document. This is a document that describes in detail the data objects, relationships, and rules required by the database. This document provides the detail required by the database developer to construct the physical database.

Why is Data Modeling Important?

Data modeling is probably the most labor intensive and time consuming part of the development process. Why bother especially if you are pressed for time? A common response by practitioners who write on the subject is that you should no more build a database without a model than you should build a house without blueprints.

The goal of the data model is to make sure that all data objects required by the database are completely and accurately represented. Because the data model uses easily understood notations and natural language, it can be reviewed and verified as correct by the end-users.

The data model is also detailed enough to be used by the database developers as a "blueprint" for building the physical database. The information contained in the data model will be used to define the relational tables, primary and foreign keys, stored procedures, and triggers. A poorly designed database will require more time in the long term. Without careful planning you may create a database that omits data required to create critical reports, produces results that are incorrect or inconsistent, and is unable to accommodate changes in the user's requirements.


A data model is a plan for building a database. To be effective, it must be simple enough to communicate to the end user the data structure required by the database yet detailed enough for the database design to use to create the physical structure.

1) Table Name: LOGIN_TWM

2) Table Name: LOGINOUT_TWM

3) Table Name: MESSAGES_TWM

4) Table Name: GROUPS_TWM

6) Table Name: MEETINGS_TWM

7) Table Name: SCHEDULAR_TWM

9) Table Name: WORKLOG_TWM

10) Table Name: FEEDBACK_TWM

11) Table Name: NEWS_TWM

13) Table Name: DOWNLOAD_TWM

14) Table Name: BRANCH_TWM

E-R Diagram

Technological Requirements:


HTML, an initialism of Hypertext Markup Language, is the predominant markup language for web pages. It provides a means to describe the structure of text-based information in a document — by denoting certain text as headings, paragraphs, lists, and so on — and to supplement that text with interactive forms, embedded images, and other objects. HTML is written in the form of labels (known as tags), surrounded by angle brackets. HTML can also describe, to some degree, the appearance and semantics of a document, and can include embedded scripting language code which can affect the behavior of web browsers and other HTML processors.

HTML is also often used to refer to content of the MIME type text/html or, even more broadly, as a generic term for HTML whether in its XML-descended form (such as XHTML 1.0 and later) or its form descended directly from SGML.

Hypertext Markup Language (HTML), the language of the World Wide Web (WWW), allows users to produce Web pages that include text, graphics and pointers to other Web pages (hyperlinks).

HTML is not a programming language; it is an application of ISO Standard 8879, SGML (Standard Generalized Markup Language), specialized to hypertext and adapted to the Web. The idea behind hypertext is that instead of reading text in a rigid linear structure, we can easily jump from one point to another. We can navigate through the information based on our interest and preference. A markup language is simply a series of elements, each delimited with special characters, that define how text or other items enclosed within the elements should be displayed. Hyperlinks are underlined or emphasized words that link to other documents or to portions of the same document.

HTML can be used to display any type of document on the host computer, which can be geographically at a different location. It is a versatile language and can be used on any platform or desktop.

HTML provides tags (special codes) to make the document look attractive. HTML tags are not case-sensitive. Using graphics, fonts, different sizes, color, etc., can enhance the presentation of the document. Anything that is not a tag is part of the document itself.

Basic HTML Tags:

<! -- --> specifies comments

<A>……….</A> Creates hypertext links

<B>……….</B> Formats text as bold

<BIG>……….</BIG> Formats text in large font.

<BODY>…</BODY> Contains all tags and text in the HTML document

<CENTER>...</CENTER> Centers text

<DD>…</DD> Definition of a term

<DL>...</DL> Creates definition list

<FONT>…</FONT> Formats text with a particular font

<FORM>...</FORM> Encloses a fill-out form

<FRAME>...</FRAME> Defines a particular frame in a set of frames

<H#>…</H#> Creates headings of different levels ( 1 - 6 )

<HEAD>...</HEAD> Contains tags that specify information about a document

<HR> Creates a horizontal rule

<HTML>…</HTML> Contains all other HTML tags

<META> Provides meta-information about a document

<SCRIPT>…</SCRIPT> Contains client-side or server-side script

<TABLE>…</TABLE> Creates a table

<TD>…</TD> Indicates table data in a table

<TR>…</TR> Designates a table row

<TH>…</TH> Creates a heading in a table


The attributes of an element are name-value pairs, separated by "=", and written within the start tag of an element, after the element's name. The value should be enclosed in single or double quotes, although values consisting of certain characters can be left unquoted in HTML (but not XHTML). Leaving attribute values unquoted is considered unsafe.

Most elements take any of several common attributes: id, class, style and title. Most also take language-related attributes: lang and dir.

The id attribute provides a document-wide unique identifier for an element. This can be used by stylesheets to provide presentational properties, by browsers to focus attention on the specific element or by scripts to alter the contents or presentation of an element. The class attribute provides a way of classifying similar elements for presentation purposes. For example, an HTML document (or a set of documents) may use the designation class="notation" to indicate that all elements with this class value are all subordinate to the main text of the document (or documents). Such notation classes of elements might be gathered together and presented as footnotes on a page, rather than appearing in the place where they appear in the source HTML.

An author may use the style attribute to assign presentational properties to a particular element. It is considered better practice to use an element's id or class attribute to select the element with a style sheet, although sometimes this can be too cumbersome for a simple, ad hoc application of styled properties. The title attribute is used to attach a subtextual explanation to an element. In most browsers this title attribute is displayed as what is often referred to as a tooltip. The generic inline span element can be used to demonstrate these various attributes.

The preceding displays as HTML (pointing the cursor at the abbreviation should display the title text in most browsers).


* An HTML document is small and hence easy to send over the net.

* It is small because it does not include formatted information.

* HTML is platform independent.

* HTML tags are not case-sensitive.


JavaScript is a script-based programming language that was developed by Netscape Communications Corporation. JavaScript was originally called LiveScript and was renamed JavaScript to indicate its relationship with Java. JavaScript supports the development of both client and server components of Web-based applications. On the client side, it can be used to write programs that are executed by a Web browser within the context of a Web page. On the server side, it can be used to write Web server programs that can process information submitted by a Web browser and then update the browser's display accordingly.

Even though JavaScript supports both client and server Web programming, we prefer JavaScript for client-side programming since most browsers support it. JavaScript is almost as easy to learn as HTML, and JavaScript statements can be included in HTML documents by enclosing the statements between a pair of scripting tags:


<SCRIPT LANGUAGE = “JavaScript”>

JavaScript statements

</SCRIPT>


Here are a few things we can do with JavaScript:

* Validate the contents of a form and make calculations.

* Add scrolling or changing messages to the Browser's status line.

* Animate images or rotate images that change when we move the mouse over them.

* Detect the browser in use and display different content for different browsers.

* Detect installed plug-ins and notify the user if a plug-in is required.

* We can do much more with JavaScript, including creating entire applications.

JavaScript and Java are entirely different languages. A few of the most glaring differences are:

* Java applets are generally displayed in a box within the web document; JavaScript can affect any part of the Web document itself.

* While JavaScript is best suited to simple applications and adding interactive features to Web pages; Java can be used for incredibly complex applications.

There are many other differences but the important thing to remember is that JavaScript and Java are separate languages. They are both useful for different things; in fact they can be used together to combine their advantages.

Ø JavaScript can be used for Server-side and Client-side scripting.

Ø It is more flexible than VBScript.

Ø JavaScript is the default scripting language at the Client side since all browsers support it.

Java Technology

Initially the language was called “Oak”, but it was renamed “Java” in 1995. The primary motivation for this language was the need for a platform-independent (i.e., architecture-neutral) language that could be used to create software to be embedded in various consumer electronic devices.

* Java is a programmer's language.

* Java is cohesive and consistent.

* Except for those constraints imposed by the Internet environment, Java gives the programmer, full control.

* Finally, Java is to Internet programming what C was to systems programming.

Importance of Java to the Internet

Java has had a profound effect on the Internet. This is because Java expands the universe of objects that can move about freely in cyberspace. In a network, two categories of objects are transmitted between the server and the personal computer: passive information and dynamic, active programs. Dynamic, self-executing programs cause serious problems in the areas of security and portability. But Java addresses those concerns and, by doing so, has opened the door to an exciting new form of program called the applet.
Java can be used to create two types of programs

Applications and Applets: An application is a program that runs on our computer under the operating system of that computer. It is more or less like one created using C or C++. Java's ability to create applets makes it important. An applet is an application designed to be transmitted over the Internet and executed by a Java-compatible web browser. An applet is actually a tiny Java program, dynamically downloaded across the network, just like an image. But the difference is that it is an intelligent program, not just a media file. It can react to user input and change dynamically.

Features of Java Security

Every time you download a “normal” program you are risking a viral infection. Prior to Java, most users did not download executable programs frequently, and those who did scanned them for viruses prior to execution. Most users still worried about the possibility of infecting their systems with a virus. In addition, another type of malicious program exists that must be guarded against. This type of program can gather private information, such as credit card numbers, bank account balances, and passwords. Java answers both of these concerns by providing a “firewall” between a network application and your computer.

When you use a Java-compatible Web browser, you can safely download Java applets without fear of virus infection or malicious intent.


For programs to be dynamically downloaded to all the various types of platforms connected to the Internet, some means of generating portable executable code is needed. As you will see, the same mechanism that helps ensure security also helps create portability. Indeed, Java's solution to these two problems is both elegant and efficient.

The Byte code

The key that allows the Java to solve the security and portability problems is that the output of Java compiler is Byte code. Byte code is a highly optimized set of instructions designed to be executed by the Java run-time system, which is called the Java Virtual Machine (JVM). That is, in its standard form, the JVM is an interpreter for byte code.
Translating a Java program into byte code helps make it much easier to run a program in a wide variety of environments. The reason is that once the run-time package exists for a given system, any Java program can run on it.
Although Java was designed for interpretation, there is technically nothing about Java that prevents on-the-fly compilation of byte code into native code. Sun has just completed its Just In Time (JIT) compiler for byte code. When the JIT compiler is a part of JVM, it compiles byte code into executable code in real time, on a piece-by-piece, demand basis. It is not possible to compile an entire Java program into executable code all at once, because Java performs various run-time checks that can be done only at run time. The JIT compiles code, as it is needed, during execution.

Java Virtual Machine (JVM)

Beyond the language, there is the Java Virtual Machine. The Java Virtual Machine is an important element of the Java technology. The virtual machine can be embedded within a web browser or an operating system. Once a piece of Java code is loaded onto a machine, it is verified. As part of the loading process, a class loader is invoked and performs byte code verification, which makes sure that the code generated by the compiler will not corrupt the machine that it is loaded on. Byte code verification takes place at the end of the compilation process to make sure that everything is accurate and correct. So byte code verification is integral to the compiling and executing of Java code.

Overall Description

Picture showing the development process of JAVA Program
Java programming produces byte codes and executes them. The first box indicates that the Java source code is located in a .java file that is processed with a Java compiler called javac. The Java compiler produces a file called a .class file, which contains the byte code. The .class file is then loaded across the network or locally on your machine into the execution environment, the Java Virtual Machine, which interprets and executes the byte code.

Java Architecture

Java architecture provides a portable, robust, high performing environment for development. Java provides portability by compiling the byte codes for the Java Virtual Machine, which is then interpreted on each platform by the run-time environment. Java is a dynamic system, able to load code when needed from a machine in the same room or across the planet.

Compilation of code

When you compile the code, the Java compiler creates machine code (called byte code) for a hypothetical machine called Java Virtual Machine (JVM). The JVM is supposed to execute the byte code. The JVM is created for overcoming the issue of portability. The code is written and compiled for one machine and interpreted on all machines. This machine is called Java Virtual Machine.

Introduction to Servlets

Servlets provide a Java(TM)-based solution used to address the problems currently associated with doing server-side programming, including inextensible scripting solutions, platform-specific APIs, and incomplete interfaces.

Servlets are objects that conform to a specific interface that can be plugged into a Java-based server. Servlets are to the server side what applets are to the client side -- object byte codes that can be dynamically loaded off the net. They differ from applets in that they are faceless objects (without graphics or a GUI component). They serve as platform-independent, dynamically loadable, pluggable helper byte code objects on the server side that can be used to dynamically extend server-side functionality.

What is a Servlet?

Servlets are modules that extend request/response-oriented servers, such as Java-enabled web servers. For example, a servlet might be responsible for taking data in an HTML order-entry form and applying the business logic used to update a company's order database.

Servlets are to servers what applets are to browsers. Unlike applets, however, Servlets have no graphical user interface. Servlets can be embedded in many different servers because the servlet API, which you use to write Servlets, assumes nothing about the server's environment or protocol. Servlets have become most widely used within HTTP servers; many web servers support the Servlet API.

* Use Servlets instead of CGI Scripts:

* Servlets are an effective replacement for CGI scripts. They provide a way to generate dynamic documents that is both easier to write and faster to run. Servlets also address the problem of doing server-side programming with platform-specific APIs: they are developed with the Java Servlet API, a standard Java extension.

* So use Servlets to handle HTTP client requests. For example, have Servlets process data posted over HTTPS using an HTML form, including purchase order or credit card data. A servlet like this could be part of an order-entry and processing system, working with product and inventory databases, and perhaps an on-line payment system.

* Other Uses for Servlets

* Here are a few more of the many applications for Servlets:

* Allowing collaboration between people. A servlet can handle multiple requests concurrently, and can synchronize requests. This allows Servlets to support systems such as on-line conferencing.

* Forwarding requests. Servlets can forward requests to other servers and Servlets. Thus Servlets can be used to balance load among several servers that mirror the same content, and to partition a single logical service over several servers, according to task type or organizational boundaries.

* Architecture of the Servlet Package

* The javax.servlet package provides interfaces and classes for writing Servlets. The architecture of the package is described below.

* The Servlet Interface:

* The central abstraction in the Servlet API is the Servlet interface. All Servlets implement this interface, either directly or, more commonly, by extending a class that implements it such as HttpServlet.

* The Servlet interface declares, but does not implement, methods that manage the servlet and its communications with clients. Servlet writers provide some or all of these methods when developing a servlet.

* Client Interaction:

* When a servlet accepts a call from a client, it receives two objects:

* A ServletRequest, which encapsulates the communication from the client to the server.

* A ServletResponse, which encapsulates the communication from the servlet back to the client.

* ServletRequest and ServletResponse are interfaces defined by the javax.servlet package.
The ServletRequest Interface:

* The ServletRequest interface allows the servlet access to information such as the names of the parameters passed in by the client, the protocol (scheme) being used by the client, and the names of the remote host that made the request and of the server that received it. It also gives the servlet access to the input stream, ServletInputStream. Servlets use the input stream to get data from clients that use application protocols such as the HTTP POST and PUT methods.

Interfaces that extend ServletRequest interface allow the servlet to retrieve more protocol-specific data. For example, the HttpServletRequest interface contains methods for accessing HTTP-specific header information.

The ServletResponse Interface:

The ServletResponse interface gives the servlet methods for replying to the client. It:

* Allows the servlet to set the content length and MIME type of the reply.

* Provides an output stream, ServletOutputStream, and a Writer through which the servlet can send the reply data.

Interfaces that extend the ServletResponse interface give the servlet more protocol-specific capabilities. For example, the HttpServletResponse interface contains methods that allow the servlet to manipulate HTTP-specific header information.

Additional Capabilities of HTTP Servlets

The classes and interfaces described above make up a basic Servlet. HTTP Servlets have some additional objects that provide session-tracking capabilities. The servlet writer can use these APIs to maintain state between the servlet and the client that persists across multiple connections during some time period. HTTP Servlets also have objects that provide cookies. The servlet writer uses the cookie API to save data with the client and to retrieve this data.
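
A brief sketch of the session-tracking and cookie APIs mentioned above; the attribute name, cookie name and expiry period are illustrative assumptions.

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Hypothetical helper showing HttpSession and Cookie usage inside a servlet.
public class SessionExamples {
    void rememberUser(HttpServletRequest request, HttpServletResponse response, String userId) {
        // Session tracking: state kept on the server, keyed by a container-managed session id.
        HttpSession session = request.getSession(true);
        session.setAttribute("userId", userId);

        // Cookies: small pieces of data saved with the client and sent back on later requests.
        Cookie lastUser = new Cookie("lastUser", userId);
        lastUser.setMaxAge(7 * 24 * 60 * 60); // keep for one week
        response.addCookie(lastUser);
    }
}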

The classes mentioned in the Architecture of the Servlet Package section are used in the example below:

* SimpleServlet extends the HttpServlet class, which implements the Servlet interface.

* SimpleServlet overrides the doGet method in the HttpServlet class. The doGet method is called when a client makes a GET request (the default HTTP request method) and results in the simple HTML page being returned to the client.

* Within the doGet method,

o An HttpServletRequest object represents the user's request.

o An HttpServletResponse object represents the response to the user.

o Because text data is returned to the client, the reply is sent using the Writer object obtained from the HttpServletResponse object.
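
The example itself is not reproduced in the source; a minimal version consistent with the description above, in which SimpleServlet overrides doGet and returns a small HTML page through the response's Writer, might look like this:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SimpleServlet extends HttpServlet {
    // Handles the HTTP GET method; the request object carries the client's data,
    // and the response object is used to send the reply.
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter(); // text data is returned, so a Writer is used
        out.println("<html><head><title>Simple Servlet</title></head>");
        out.println("<body><p>Hello from SimpleServlet.</p></body></html>");
        out.close();
    }
}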

Servlet Lifecycle

Each servlet has the same life cycle:

* A server loads and initializes the servlet

* The servlet handles zero or more client requests

* The server removes the servlet

Initializing a Servlet

When a server loads a servlet, the server runs the servlet's init method. Initialization completes before client requests are handled and before the servlet is destroyed.

Even though most Servlets are run in multi-threaded servers, Servlets have no concurrency issues during servlet initialization. The server calls the init method once, when the server loads the servlet, and will not call the init method again unless the server is reloading the servlet. The server cannot reload a servlet until after the server has destroyed the servlet by calling the destroy method.

The init Method:

The init method provided by the HttpServlet class initializes the servlet and logs the initialization. To do initialization specific to your servlet, override the init() method following these rules:

If an initialization error occurs that renders the servlet incapable of handling client requests, throw an UnavailableException.

Initialization Parameters:

The second version of the init method calls the getInitParameter method. This method takes the parameter name as an argument and returns a String representation of the parameter's value.

The specification of initialization parameters is server-specific. In the Java Web Server, the parameters are specified when a servlet is added (and configured) in the Administration Tool. For an explanation of the Administration screen where this setup is performed, see the Administration Tool: Adding Servlets online help document.

If, for some reason, you need to get the parameter names, use the getInitParameterNames method.
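
A hedged sketch of an init override along the lines described above; the parameter name "jdbcURL" is an assumption for illustration only.

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.UnavailableException;
import javax.servlet.http.HttpServlet;

// Hypothetical initialization example; the init parameter name is illustrative.
public class ConfiguredServlet extends HttpServlet {
    private String jdbcUrl;

    public void init(ServletConfig config) throws ServletException {
        super.init(config); // let HttpServlet perform its own initialization first
        jdbcUrl = getInitParameter("jdbcURL");
        if (jdbcUrl == null) {
            // The servlet cannot handle client requests without this value.
            throw new UnavailableException("Missing required init parameter: jdbcURL");
        }
    }
}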

Destroying a Servlet:

Servlets run until the server destroys them, for example at the request of a system administrator. When a server destroys a servlet, the server runs the servlet's destroy method. The method is run once; the server will not run that servlet again until after the server reloads and reinitializes the servlet.

When the destroy method runs, another thread might be running a service request. The Handling Service Threads at Servlet Termination section shows you how to provide a clean shutdown when there could be long-running threads still running service requests.

Using the Destroy Method:

The destroy method provided by the HttpServlet class destroys the servlet and logs the destruction. To destroy any resources specific to your servlet, override the destroy method. The destroy method should undo any initialization work and synchronize persistent state with the current in-memory state.

A server calls the destroy method after all service calls have been completed, or a server-specific number of seconds have passed, whichever comes first. If your servlet handles any long-running operations, service methods might still be running when the server calls the destroy method. You are responsible for making sure those threads complete. The next section shows you how.

The destroy method shown above expects all client interactions to be completed when the destroy method is called, because the servlet has no long-running operations.
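
A small sketch of the destroy pattern described above; the connection field being released is illustrative only.

import javax.servlet.http.HttpServlet;

// Hypothetical cleanup in destroy(); the connection field is for illustration only.
public class CleanupServlet extends HttpServlet {
    private java.sql.Connection connection; // assumed to be opened in init(), not shown here

    public void destroy() {
        try {
            if (connection != null) {
                connection.close(); // release resources acquired during initialization
            }
        } catch (java.sql.SQLException e) {
            log("Error closing connection on destroy", e);
        }
    }
}
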
Java Server Pages

Java Server Pages technology lets you put snippets of servlet code directly into a text-based document. A JSP page is a text-based document that contains two types of text: static template data, which can be expressed in any text-based format such as HTML, WML, and XML, and JSP elements, which determine how the page constructs dynamic content.

Java Server Page™ (JSP): An extensible Web technology that uses template data, custom elements, scripting languages, and server-side Java objects to return dynamic content to a client. Typically the template data is HTML or XML elements, and in many cases the client is a Web browser.

According to JSP model1 we can develop the application as,

According to the above model, the presentation logic has to be implemented in the JSP page and the business logic has to be implemented as part of a JavaBean. This model helps us in separating the presentation and business logic. For large-scale projects, instead of using Model 1 it is better to use Model 2 (MVC). The Struts framework is based on Model 2.
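
For instance, the business data used by a Model 1 page could be carried by a simple JavaBean such as the sketch below; the class and property names are illustrative assumptions.

import java.io.Serializable;

// Hypothetical JavaBean used by a JSP page; property names are illustrative.
public class EmployeeBean implements Serializable {
    private String name;
    private String branch;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getBranch() { return branch; }
    public void setBranch(String branch) { this.branch = branch; }
}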

Java Server Pages (JSP) lets you separate the dynamic part of your pages from the static HTML. You simply write the regular HTML in the normal manner, using whatever Web-page-building tools you normally use. You then enclose the code for the dynamic parts in special tags, most of which start with "<%" and end with "%>". For example, here is a section of a JSP page that results in something like "Thanks for ordering Core Web Programming"

for the URL http://host/OrderConfirmation.jsp?title=Core+Web+Programming:

Thanks for ordering

<I><%= request.getParameter("title") %></I>

You normally give your file a .jsp extension, and typically install it in any place you could place a normal Web page. Although what you write often looks more like a regular HTML file than a servlet, behind the scenes the JSP page just gets converted to a normal servlet, with the static HTML simply being printed to the output stream associated with the servlet's service method. This is normally done the first time the page is requested, and developers can simply request the page themselves when first installing it if they want to be sure that the first real user doesn't get a momentary delay while the JSP page is translated to a servlet and the servlet is compiled and loaded. Note also that many Web servers let you define aliases so that a URL that appears to reference an HTML file really points to a servlet or JSP page.

Aside from the regular HTML, there are three main types of JSP constructs that you embed in a page: scripting elements, directives, and actions. Scripting elements let you specify Java code that will become part of the resultant servlet, directives let you control the overall structure of the servlet, and actions let you specify existing components that should be used, and otherwise control the behavior of the JSP engine. To simplify the scripting elements, you have access to a number of predefined variables such as request in the snippet above.
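
A short illustrative JSP fragment showing the three kinds of constructs together with the predefined request variable; the file name and parameter name are assumptions for this example.

<%-- hypothetical greeting.jsp --%>
<%@ page contentType="text/html" %> <%-- directive: affects the overall servlet structure --%>
<html>
<body>
<% String user = request.getParameter("user"); %> <%-- scripting element (scriptlet) --%>
<p>Hello, <%= (user == null) ? "guest" : user %>.</p> <%-- scripting element (expression) --%>
</body>
</html>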

J2EE Platform Overview

The J2EE platform is designed to provide server-side and client-side support for developing distributed, multi-tier applications. Such applications are typically configured as a client tier to provide the user interface, one or more middle-tier modules that provide client services and business logic for an application, and back-end enterprise information systems providing data management.

Multitier Model

The J2EE platform provides a multi-tier distributed application model. This means that the various parts of an application can run on different devices. The J2EE architecture defines a client tier, a middle tier (consisting of one or more sub-tier), and a back-end tier. The client tier supports a variety of client types, both outside and inside of corporate firewalls. The middle tier supports client services through Web containers in the Web tier and supports business logic component services through JavaBeans TM. On the back end, the enterprise information systems in the tier are accessible by way of standard APIs.

Container-Based Component Management

Central to the J2EE component-based development model is the notion of containers. Containers are standardized runtime environments that provide specific services to components. Components can expect these services to be available on any J2EE platform from any vendor. For example, all J2EE Web containers provide runtime support for responding to client requests, performing request-time processing (such as invoking JSP pages or servlet behavior), and returning results to the client. In addition, they provide APIs to support user session management. All EJB containers provide automated support for transaction and life cycle management of EJB components, as well as bean lookup and other services. Containers also provide standardized access to enterprise information systems; for example, providing access to relational data through the JDBC API. In addition, containers provide a mechanism for selecting application behaviors at assembly or deployment time. Through the use of deployment descriptors (XML files that specify component and container behavior), components can be configured for a specific container's environment when deployed, rather than in component code. Features that can be configured at deployment time include security checks, transaction control, and other management responsibilities.

While the J2EE specification defines the component containers that a platform implementation must support, it doesn't specify or restrict the containers' configurations. Thus, both container types can run on a single platform, Web containers can live on one platform and EJB containers on another, or a J2EE platform can be made up of multiple containers on multiple platforms.

Support for Client Components

The J2EE client tier provides support for a variety of client types, both within the enterprise firewall and outside. Clients can be supported through Web browsers by using plain HTML pages or HTML generated dynamically by Java Server PagesTM.

Support for Business Logic Components

While simple J2EE applications may be built largely in the client tier, business logic is often implemented on the J2EE platform in the middle tier as Enterprise JavaBeans components (also known as enterprise beans). Enterprise beans allow the component or application developer to concentrate on the business logic while the complexities of delivering a reliable, scalable service are handled by the EJB container.

In many ways, the J2EE platform and Java Beans architecture have complementary goals. The Java Beans component model is the backbone of industrial-strength application architectures in the J2EE programming model. The J2EE platform complements the specification by:

o Fully specifying the APIs that an enterprise bean developer can use to implement enterprise beans

o Defining the larger, distributed programming environment in which enterprise beans are used as business logic components

J2EE Platform Benefits

With features designed to expedite the process of developing distributed applications, the J2EE platform offers several benefits:

o Simplified architecture and development

o Freedom of choice in servers, tools, and components

o Integration with existing information systems

o Scalability to meet demand variations

o Flexible security model

Simplified Architecture and Development

The J2EE platform supports a simplified, component-based development model. Because it is based on the Java programming language and the Java 2 Platform, Standard Edition (J2SE™ platform), this model offers “Write-Once-Run-Anywhere” portability, supported by any server product that conforms to the J2EE standard.

The component-based J2EE development model can enhance application development productivity in a number of ways:

* Maps easily to application functionality—Component-based application models map easily and flexibly to the functionality desired from an application. As the examples presented throughout this document illustrate, the J2EE platform provides a variety of ways to configure the architecture of an application, depending on such things as the client types required, the level of access required to data sources, and other considerations. Component-based design also simplifies application maintenance, since components can be updated and replaced independently—new functionality can be shimmed into existing applications simply by updating selected components.

* Enables assembly- and deploy-time behaviors—Because of the high level of service standardization, much of the code of a J2EE application can be generated automatically by tools, with minimal developer intervention. In addition, components can expect standard services to be available in the runtime environment and can dynamically connect to other components by means of consistent interfaces. As a result, many application behaviors can be configured at application assembly or deployment time, without recoding. Component developers can communicate requirements to application deployers through specific deployment descriptors and settings. Tools can automate this process to further expedite development.

* Supports division of labor—Components help divide the labor of application development among specific skill sets, enabling each member of a development team to focus on his or her area of expertise. Web page authors can create JSP templates, Java programming language coders can implement application behavior, domain experts can develop business logic, and application developers and integrators can assemble and deploy applications. This division of labor also expedites application maintenance. For example, the user interface is the most dynamic part of many applications, particularly on the Web. With the J2EE platform, Web page authors can tweak the look and feel of JSP pages without programmer intervention. The J2EE specifications define a number of roles, including application component provider, application assembler, and application deployer. On some development teams, one or two people may perform all these roles, while on others the roles may be divided among several specialists.

Integrating Existing Enterprise Information Systems

The J2EE platform, together with the J2SE platform, includes a number of industry-standard APIs for accessing existing enterprise information systems. Basic access to these systems is provided by the following APIs:

The J2EE Connector architecture is the infrastructure for interacting with a variety of Enterprise Information System types, including ERP, CRM, and other legacy systems.

The JDBC™ API is used for accessing relational data from the Java programming language (a minimal usage sketch appears after this list of APIs).

The Java Transaction API (JTA) is the API for managing and coordinating transactions across heterogeneous enterprise information systems.

The Java Naming and Directory Interface™ (JNDI) is the API for accessing information in enterprise name and directory services.

The Java Message Service (JMS) is the API for sending and receiving messages via enterprise messaging systems such as IBM MQ Series and TIBCO Rendezvous. In the J2EE platform version 1.3, message-driven beans provide a component-based approach to encapsulating messaging functionality.

Java APIs for XML provide support for integration with legacy systems and applications, and for implementing Web services in the J2EE platform. In addition, specialized access to enterprise resource planning and mainframe systems such as IBM's CICS and IMS is provided through the J2EE Connector architecture. Since each of these systems is highly complex and specialized, they require unique tools and support to ensure utmost simplicity to application developers.
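As a minimal sketch of relational data access through the JDBC API mentioned above, a worklog query might look like the following; the driver class, connection URL, credentials, table and column names are placeholders and are not the actual ECT schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class WorklogQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder driver class and connection details.
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:ECT", "scott", "tiger");
        // Placeholder table and columns: a worklog lookup for one employee.
        PreparedStatement ps = con.prepareStatement(
                "SELECT work_date, description FROM worklog WHERE emp_id = ?");
        ps.setInt(1, 101);                      // hypothetical employee id
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getDate(1) + " : " + rs.getString(2));
        }
        rs.close();
        ps.close();
        con.close();
    }
}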

Choice of Servers, Tools, and Components

The J2EE standard and J2EE brand have created a huge marketplace for servers, tools, and components. The J2EE brand on a server product ensures the consistent level of service that is fundamental to the goals of the J2EE platform. At the same time, J2EE standards ensure a lively marketplace for tools and components. Based on past experience and industry momentum, all leading enterprise software vendors are expected to provide products for the J2EE 1.3 marketplace. The standardization and branding of the J2EE platform provides many benefits, including:

o A range of server choices—Application development organizations can expect J2EE branded platforms from a variety of vendors, providing a range of choices in hardware platforms, operating systems, and server configurations. This ensures that businesses get a choice of servers appropriate to their needs.

o Designed for tool support—Both enterprise beans and JSP page components are designed to be manipulated by graphical development tools and to allow automating many of the application development tasks traditionally requiring the ability to write and debug code. Both J2EE server providers and third-party tool developers have developed tools that conform to J2EE standards and support various application development tasks and styles. Application developers have a choice of tools to manipulate and assemble components, and individual team members may choose tools that best suit their specific requirements.

o A marketplace for components—Component-based design ensures that many types of behavior can be standardized, packaged, and reused by any J2EE application. Component vendors will provide a variety of off-the-shelf component solutions, including accounting beans, user interface templates, and even vertical market functionality of interest in specific industries. Application architects get a choice of standardized components to handle common or specialized tasks. The J2EE standard and associated branding programs ensure that solutions are compatible. By setting the stage for freedom of choice, the J2EE platform makes it possible to develop with confidence that the value of your investment will be protected.

Scales Easily

J2EE containers provide a mechanism that supports simplified scaling of distributed applications, with no application development effort. Because J2EE containers provide components with transaction support, database connections, life cycle management, and other features that influence performance, they can be designed to provide scalability in these areas. For example, containers may pool database connections, providing clients with quick, efficient access to data. Because containers may run on multiple systems, Web containers can automatically balance load in response to fluctuating demand.

Simplified, Unified Security Model

The J2EE security model is designed to support single sign-on access to application services. Component developers can specify the security requirements of a component at the method level to ensure that only users with appropriate permissions can access specific data operations. While both Enterprise JavaBeans technology and the Java Servlet API provide programmatic security control, the basic role-based security mechanism (where groups of users share specific permissions) is specified entirely at application deployment time. This provides both greater flexibility and better security control.
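As a small illustration of the programmatic side of this model, a servlet can consult the deployment-time role mapping through the standard Servlet API; the role name "admin" and the class name below are assumptions, not taken from the ECT sources.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AdminReportServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // "admin" is a hypothetical role; the users belonging to it are mapped
        // at deployment time, not in this code.
        if (!request.isUserInRole("admin")) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        response.getWriter().println("Summary report for administrators");
    }
}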

J2EE Application Scenarios

The J2EE specifications encourage architectural diversity. The J2EE specifications and technologies make few assumptions about the details of API implementations. The application-level decisions and choices are ultimately a trade-off between functional richness and complexity. The J2EE programming model is flexible enough for applications that support a variety of client types, with both the Web container and the EJB container as optional.

The following enterprise requirements heavily influenced the choices made in developing the sample application:

o The need to make rapid and frequent changes to the “look and feel” of the application

o The need to partition the application along the lines of presentation and business logic so as to increase modularity

o The need to simplify the process of assigning suitably trained human resources to accomplish the development task such that work can proceed along relatively independent but cooperating tracks

o The need to have developers familiar with back-office applications unburdened from GUI and graphic design work, for which they may not be ideally qualified

o The need to have the necessary vocabulary to communicate the business logic to teams concerned with human factors and the aesthetics of the application

o The ability to assemble back-office applications using components from a variety of sources, including off-the-shelf business logic components

o The ability to deploy transactional components across multiple hardware and software platforms independently of the underlying database technology

o The ability to externalize internal data without having to make many assumptions about the consumer of the data, and to accomplish this in a loosely coupled manner

Clearly, relaxing any or all of these requirements would influence some of the application-level decisions and choices that a designer would make. Although it is reasonable to speak of “throw-away” presentation logic (that is, applications with a look and feel that ages rapidly), there is still significant inertia associated with business logic. This is even truer in the case of database schemas and data in general. It is fair to say that as one moves further away from EIS resources, the volatility of the application code increases dramatically; that is, the code's “shelf life” drops significantly.

Multitier Application Scenario

JSP pages, supported by Servlets, generate dynamic Web content for delivery to the client. The Web container hosts application components that use EIS resources to service requests from Web-tier components. This architecture decouples data access from the application's user interface. The architecture is also implicitly scalable. Application back-office functionality is relatively isolated from the end-user look and feel.

It is worth noting that XML plays an integral role in this scenario. The ability to both produce and consume XML data messages in the Web container is an extremely flexible way to embrace a diverse set of client types. These platforms range from general purpose XML-enabled browsers to specialized XML rendering engines targeting vertical solutions. XML data messages typically use HTTP as their transport protocol. Java and XML are complementary technologies: The Java language offers portable code, XML provides portable data. In the Web tier, the question of whether to use JSP pages or Servlets comes up repeatedly. JSP technology is intended for application user interface components, while Java Servlets are preferred for request processing and application control logic. Servlets and JSP pages work together to provide dynamic content from the Web tier.
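A minimal sketch of this division of labor, assuming a servlet acting as the controller and a JSP page as the view; the attribute name, value and JSP path below are placeholders.

import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class WorklogController extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Request-processing and control logic lives in the servlet.
        request.setAttribute("employeeName", "placeholder name");
        // Presentation is delegated to a JSP page (path is a placeholder).
        RequestDispatcher view = request.getRequestDispatcher("/worklog.jsp");
        view.forward(request, response);
    }
}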

Stand-Alone Client Scenario

The stand-alone client may be one of three types:

o EJB clients interacting directly with enterprise beans hosted in an EJB container within an EJB server. This scenario uses RMI-IIOP, and the EJB server accesses EIS resources using JDBC and the J2EE Connector architecture.

o Stand-alone clients, implemented in the Java language or another programming language, consuming dynamic Web content (usually XML data messages). In this scenario, the Web container essentially handles XML transformations and provides Web connectivity to clients. Presentation logic occurs in the client tier. The Web tier handles business logic and may directly access EIS resources. Ideally, business logic is implemented as enterprise beans to take advantage of the rich enterprise beans component model.

o Stand-alone Java application clients accessing enterprise information system resources directly using JDBC or Connectors. In this scenario, presentation and business logic are co-located on the client platform and may in fact be tightly integrated into a single application. This scenario is classic two-tier client-server architecture, with its associated distribution, maintenance, and scalability issues.

Web-Centric Application Scenario

There are a number of scenarios in which the use of enterprise beans in an application would be considered overkill: sort of like using a sledgehammer to crack a nut. The J2EE specification doesn't mandate a specific application configuration, nor could it realistically do so. The J2EE platform is flexible enough to support the application configuration most appropriate to a specific application design requirement.

The Web container hosts both presentation and business logic, and it is assumed that JDBC and the J2EE Connector architecture are used to access EIS resources.

In many cases, J2EE platform providers may co-locate their Web and EJB containers, running them within the same Java Virtual Machine (JVM). J2EE applications deployed on such an implementation are still considered multitier applications, because of the division of responsibilities that the separate technologies imply.

Business-to-Business Scenario

This scenario focuses on peer-level interactions between both Web and EJB containers. The J2EE programming model promotes the use of XML data messaging over HTTP as the primary means of establishing loosely coupled communications between Web containers. This is a natural fit for the development and deployment of Web-based commerce solutions.

The peer-level communication between EJB containers is currently a more tightly coupled solution most suitable for intranet environments. With support for JMS and message-driven beans, the J2EE 1.3 platform makes developing loosely coupled intranet solutions increasingly practical.

Future releases of the J2EE platform will provide additional functionality in the form of Java APIs for XML, which enable more complete support for loosely coupled applications through XML-based Web services.

Communication Technologies

Communication technologies provide mechanisms for communication between clients and servers and between collaborating objects hosted by different servers. The J2EE specification requires support for the following types of communication technologies:

o Internet protocols

o Remote method invocation protocols

o Object Management Group protocols

o Messaging technologies

o Data formats

Internet Protocols

Internet protocols define the standards by which the different pieces of the J2EE platform communicate with each other and with remote entities. The J2EE platform supports the following Internet protocols:

o TCP/IP—Transmission Control Protocol over Internet Protocol. These two protocols provide for the reliable delivery of streams of data from one host to another. Internet Protocol (IP), the basic protocol of the Internet, enables the unreliable delivery of individual packets from one host to another. IP makes no guarantees as to whether the packet will be delivered, how long it will take, or whether multiple packets will arrive in the order they were sent. The Transmission Control Protocol (TCP) adds the notions of connection and reliability.

o HTTP 1.0—Hypertext Transfer Protocol. The Internet protocol used to fetch hypertext objects from remote hosts. HTTP messages consist of requests from client to server and responses from server to client.

o SSL 3.0—Secure Socket Layer. A security protocol that provides privacy over the Internet. The protocol allows client-server applications to communicate in a way that cannot be eavesdropped or tampered with. Servers are always authenticated and clients are optionally authenticated.

Remote Method Invocation Protocols

Remote Method Invocation (RMI) is a set of APIs that allow developers to build distributed applications in the Java programming language. RMI uses Java language interfaces to define remote objects and a combination of Java serialization technology and the Java Remote Method Protocol (JRMP) to turn local method invocations into remote method invocations. The J2EE platform supports the JRMP protocol, the transport mechanism for communication between objects in the Java language in different address spaces.
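A minimal sketch of an RMI remote interface, which is the starting point for this kind of distributed call; the interface and method names below are illustrative only.

import java.rmi.Remote;
import java.rmi.RemoteException;

// Illustrative remote interface: each remotely callable method must declare
// RemoteException. Arguments and results travel by Java serialization over
// JRMP (or over IIOP when RMI-IIOP is used).
public interface WorklogService extends Remote {
    String getTodaysWorklog(int employeeId) throws RemoteException;
}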

Object Management Group Protocols

Object Management Group (OMG) protocols allow objects hosted by the J2EE platform to access remote objects developed using the OMG's Common Object Request Broker Architecture (CORBA) technologies and vice versa. CORBA objects are defined using the Interface Definition Language (IDL). An application component provider defines the interface of a remote object in IDL and then uses an IDL compiler to generate client and server stubs that connect object implementations to an Object Request Broker (ORB), a library that enables CORBA objects to locate and communicate with one another. ORBs communicate with each other using the Internet Inter-ORB Protocol (IIOP). The OMG technologies required by the J2EE platform are Java IDL and RMI-IIOP.

Java IDL

Java IDL allows Java clients to invoke operations on CORBA objects that have been defined using IDL and implemented in any language with a CORBA mapping. Java IDL is part of the J2SE platform. It consists of a CORBA API and ORB. An application component provider uses the idlj IDL compiler to generate a Java client stub for a CORBA object defined in IDL. The Java client is linked with the stub and uses the CORBA API to access the CORBA object.


RMI-IIOP

RMI-IIOP is an implementation of the RMI API over IIOP. RMI-IIOP allows application component providers to write remote interfaces in the Java programming language. The remote interface can be converted to IDL and implemented in any other language that is supported by an OMG mapping and an ORB for that language. Clients and servers can be written in any language using IDL derived from the RMI interfaces. When remote interfaces are defined as Java RMI interfaces, RMI over IIOP provides interoperability with CORBA objects implemented in any language.

RMI-IIOP contains:

o The rmic compiler, which generates:

- Client and server stubs that work with any ORB.

- An IDL file compatible with the RMI interface. To create a C++ server object, an application component provider would use an IDL compiler to produce the server stub and skeleton for the server object.

o A CORBA API and ORB. Application clients must use RMI-IIOP to communicate with enterprise beans.



Oracle is a relational database management system, which organizes data in the form of tables. Oracle is one of many database servers based on the RDBMS model, which manages a store of data with respect to three specific things: data structures, data integrity and data manipulation. With Oracle cooperative server technology we can realize the benefits of open, relational systems for all applications. Oracle makes efficient use of all system resources, on all hardware architectures, to deliver unmatched performance, price-performance and scalability. Any DBMS to be called an RDBMS has to satisfy Dr. E. F. Codd's rules.



The Oracle RDBMS is available on a wide range of platforms, from PCs to supercomputers, and as a multi-user loadable module for Novell NetWare. If you develop an application on one system, you can run the same application on other systems without any modifications.


Oracle commands can be used for communicating with the IBM DB2 mainframe RDBMS, which is different from Oracle; that is, Oracle is compatible with DB2. The Oracle RDBMS is a high-performance, fault-tolerant DBMS, specially designed for online transaction processing and for handling large database applications.


Oracle's adaptable multithreaded server architecture delivers scalable high performance for very large numbers of users on all hardware architectures, including symmetric multiprocessors (SMPs) and loosely coupled multiprocessors. Performance is achieved by eliminating CPU, I/O, memory and operating system bottlenecks and by optimizing the Oracle DBMS server code to eliminate all internal bottlenecks.


Oracle is among the most popular RDBMS products in the market because of its ease of use and features such as:

* Client/server architecture.

* Data independence.

* Ensuring data integrity and data security.

* Managing data concurrency.

* Parallel processing support to speed up data entry and online transaction processing in applications.

* DB procedures, functions and packages.


These rules are used for evaluating whether a product can be called a relational database management system. Out of the 12 rules, an RDBMS product should satisfy at least 8, in addition to Rule 0, which must always be satisfied.


Rule 0: For any system that is advertised as, or claimed to be, a relational DBMS, that system must be able to manage the database entirely within itself, without using an external language.


Rule 1 (Information Rule): All information in a relational database is represented at the logical level in only one way: as values in tables.


Rule 2 (Guaranteed Access): Each and every datum in a relational database is guaranteed to be logically accessible by a combination of table name, primary key value and column name.


Rule 3 (Systematic Treatment of Null Values): Null values are supported for representing missing information and inapplicable information. They must be handled in a systematic way, independent of data type.


Rule 4 (Dynamic On-line Catalog): The database description is represented at the logical level in the same way as ordinary data, so that authorized users can apply the same relational language to its interrogation as they do to the regular data.


Rule 5 (Comprehensive Data Sub-language): A relational system may support several languages and various modes of terminal use. However, there must be one language whose statements can express all of the following: data definitions, view definitions, data manipulation, integrity constraints, authorization and transaction boundaries.


Rule 6 (View Updating): Any view that is theoretically updatable is updatable by the system; that is, changes made through the view effect the desired changes in the underlying tables.


Rule 7 (High-level Insert, Update and Delete): The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data but also to its insertion, updating and deletion.


Rule 8 (Physical Data Independence): Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representation or access methods.


Rule 9 (Logical Data Independence): Application programs and terminal activities remain logically unimpaired when information-preserving changes are made to the base tables.


Rule 10 (Integrity Independence): Integrity constraints specific to a particular database must be definable in the relational data sub-language and storable in the catalog, not in the application programs.


Rule 11 (Distribution Independence): Whether or not a system supports database distribution, it must have a data sub-language that can support distributed databases without changes to the application programs.


Rule 12 (Non-subversion): If a relational system has a low-level language, that low-level language cannot be used to subvert or bypass the integrity rules and constraints expressed in the higher-level relational language.


Oracle's support for these rules is summarized below:

Ø Rule 1: Information Rule (Representation of information)-YES.

Ø Rule 2: Guaranteed Access-YES.

Ø Rule 3: Systematic treatment of Null values-YES.

Ø Rule 4: Dynamic on-line catalog-based Relational Model-YES.

Ø Rule 5: Comprehensive data sub language-YES.

Ø Rule 6: View Updating-PARTIAL.

Ø Rule 7: High-level Update, Insert and Delete-YES.

Ø Rule 8: Physical data Independence-PARTIAL.

Ø Rule 9: Logical data Independence-PARTIAL.

Ø Rule 10: Integrity Independence-PARTIAL.

Ø Rule 11: Distributed Independence-YES.

Ø Rule 12: Non-subversion-YES.


Testing & Debugging

Quality is incorporated into a Web application as a consequence of good design. It is evaluated by applying a series of technical reviews that assess various elements of the design model and by applying a testing process.

Testing process-an overview:

The test process for Web engineering begins with tests that exercise content and interface functionality that is immediately visible to end-users. As testing proceeds, aspects of the design architecture and navigation are exercised. The user may or may not be cognizant of these WebApp elements. Finally, the focus shifts to tests that exercise technological capabilities that are not always apparent to end-users: WebApp infrastructure and installation/implementation issues.

Content Testing attempts to uncover errors in content. This testing activity is similar in many respects to copy-editing for a written document. In fact, a large web site might enlist the services of a professional copy editor to uncover typographical errors, grammatical mistakes, errors in content consistency, errors in graphical representations, and cross referencing errors. In addition to examining static content for errors, this testing step also considers dynamic content derived from data maintained as part of a database system that has been integrated with the WebApp.

Interface testing exercises interaction mechanisms and validates aesthetic aspects of the user interface. The intent is to uncover errors that result from poorly implemented interaction mechanisms, or from omissions, inconsistencies or ambiguities that have been introduced into the interface inadvertently.

Compatibility testing defines a set of “commonly encountered” client-side computing configurations and their variants. In essence, a tree structure is created, identifying each computing platform, typical display devices, the operating systems supported on the platform, the browsers available, likely Internet connection speeds, and similar information. The intent of these tests is to uncover errors or execution problems that can be traced to configuration differences.

When a user interacts with a WebApp, the interaction occurs through one or more interface mechanisms, described below:

1. Links - Each navigation link is tested to ensure that the proper content object or function is reached (via the <a href="..."> tag, in which the target content or action is specified).

2. Forms- At a microscopic level, tests are performed to ensure that

a. Labels correctly identify fields within the form and that mandatory fields are identified visually for the user;

b. The server receives all information contained within the form and that no data are lost in the transmission between the client and server;

c. Appropriate defaults are used when the user does not select from a pull-down menu or set of buttons;

d. Browser functions (“back” arrow) do not corrupt data entered in a form; and

e. Scripts that perform error checking on data entered work properly and provide meaningful error messages.

3. Forms- At a more targeted level, tests should ensure that

a. Form fields have proper width and data types;

b. The form establishes appropriate safeguards that preclude the user from entering text strings longer than some predefined maximum;

c. All appropriate options for pull-down menus are specified and ordered in a way that is meaningful to the end-user;

d. Browser “auto-fill ” features do not lead to data input errors; and

e. The tab key initiates proper movement between form fields.

f. For example, when an application uploads a file, the form tag must have its enctype attribute set to the RFC 1867 standard “multipart/form-data”; otherwise only the file path is sent (<form action="to where" enctype="multipart/form-data" method="post">).

4. Client-side scripting - Black-box tests are conducted to uncover any errors in processing as the script (e.g., JavaScript) is executed. These tests are coupled with the forms testing, because script input is often derived from data provided as part of forms processing.

Navigation testing applies use-cases, derived as part of the analysis activity, in the design of test cases that exercise each usage scenario against the navigation design. Navigation mechanisms implemented within the interface layout are tested against use-cases and NSUs (Navigation Semantic Units) to ensure that any errors that impede completion of a use-case are identified and corrected.

The job of the navigation test is:

1. To ensure that the mechanisms that allow the WebApp user to travel through the WebApp are all functional and

2. To validate that each navigation semantic unit (NSU) can be achieved by the appropriate user category.

Component testing exercises content and functional units within the WebApp. When WebApps are considered, the concept of the unit changes. The “unit” of choice within the content architecture is the Web page. Each Web page encapsulates content, navigation links, and processing elements (forms, scripts, applets). A “unit” within the WebApp architecture may be a defined functional component that enables the WebApp to provide service directly to an end-user, or an infrastructure component; either is tested in much the same way as an individual module in conventional software. In most cases, tests are black-box oriented. However, if processing is complex, white-box tests may also be used. In addition to functional testing, database capabilities are also exercised.
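As a small illustration of such a black-box component test, the sketch below uses JUnit 3 conventions (assuming JUnit on the classpath) to exercise a from/to date consistency check like the one described in the test-case list later in this section; the helper method and the dd-MM-yyyy date format are assumptions.

import java.text.SimpleDateFormat;
import java.util.Date;
import junit.framework.TestCase;

public class DateRangeTest extends TestCase {
    // Helper mirroring the "to date must not be earlier than from date" check.
    private boolean isConsistent(String from, String to) throws Exception {
        SimpleDateFormat fmt = new SimpleDateFormat("dd-MM-yyyy");
        Date f = fmt.parse(from);
        Date t = fmt.parse(to);
        return !t.before(f);
    }

    public void testToDateNotBeforeFromDate() throws Exception {
        assertTrue(isConsistent("01-05-2006", "10-05-2006"));
        assertFalse(isConsistent("10-05-2006", "01-05-2006"));
    }
}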

Integration testing - As the WebApp architecture is constructed, navigation and component testing are used as integration tests. The strategy for integration testing depends on the content and WebApp architecture that has been chosen. If the content architecture has been designed with a linear, grid, or simple hierarchical structure, it is possible to integrate Web pages in much the same way as we integrate modules for conventional software. However, if a mixed hierarchy or network architecture is used, integration is similar to the approach used for object-oriented systems. Thread-based testing can be used to integrate the set of Web pages required to respond to a user event. Each thread is integrated and tested individually. Regression testing is applied to ensure that no side effects occur.

Each element of the WebApp architecture is unit tested to the extent possible. For example, in MVC architecture the model, view, and controller components are each tested individually. Upon integration, the flow of control and data across each of these elements is assessed in detail.

Configuration testing attempts to uncover errors that are specific to a particular client or server environment. A cross reference matrix that defines all probable operating systems, browsers, hardware platforms, and communications protocols is created. Tests are then conducted to uncover errors associated with each possible configuration.

Security testing incorporates a series of tests designed to exploit vulnerabilities in the WebApp and its environment. The intent is to demonstrate that a security breach is possible.

Performance testing encompasses a series of tests that are designed to assess

1. How WebApp response time and reliability are affected by increased user traffic,

2. Which WebApp components are responsible for performance degradation and what usage characteristics cause degradation to occur, and

3. How performance degradation impacts overall WebApp objectives and requirements.

Application is tested in the following aspects

* Dummy users are created and tested

* The workflow defined in all the forms was tested, verifying that the action buttons route the document to the next level.

* Dummy messages sent to the remote users.

* All user interface options are verified by checking at each level.

* The admin functionalities have also been tested by using a dummy user database.

* Validations: JavaScript is used for validating the data at the client side, which saves the server from extra processing (a server-side counterpart is sketched after this list).

* Add User/Update User - If the admin tries to enter a user id that already exists, a message is shown that the user already exists. While adding or updating, if the group the user belongs to or the branch where he has to work is entered incorrectly, a corresponding alert is given.

* Delete User - A confirmation message is displayed asking whether or not to delete the selected user.

* Monthly Work report/Apply leaves.

* If the from and to dates are not consistent, that is, if the selected to date is earlier than the from date, a message is displayed regarding the inconsistency of the dates.

* Change Password

* If the change password and the confirm password values are not the same, the user is notified.

* Text Information - Most text boxes are checked so that name fields (as in the address book, user profile, etc.) accept only characters, and fields that should contain only numbers, such as zip code and phone/cell numbers, are validated at the client side itself.

* Implementation

* Employee Master is configured with actual users

* Users are trained to use the application and to navigate all the features.

* Help pages are provided to the users

* All the test case criteria points taken up are tested in the live scenario and found to comply with the requirements

* Database Access Control List is defined

* Database is ported to the live server and it is used effectively
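The sketch below is the server-side counterpart to the client-side JavaScript validation mentioned in the list above; the parameter names and rules are assumptions chosen only to mirror the checks described there, since requests that bypass the browser scripts should be re-validated on the server.

import javax.servlet.http.HttpServletRequest;

public class InputValidator {
    // Parameter names ("employeeName", "zipCode") are placeholders.
    public static boolean isValid(HttpServletRequest request) {
        String name = request.getParameter("employeeName");
        String zip  = request.getParameter("zipCode");
        // Name fields: letters and spaces only.
        if (name == null || !name.matches("[A-Za-z ]+")) {
            return false;
        }
        // Numeric fields such as zip code: digits only.
        return zip != null && zip.matches("\\d+");
    }
}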


The efficiency of any system designed to suit an organization depends on the cooperation received during the implementation stage and also on the flexibility of the system to adapt itself to the organization. The Enterprise Collaboration Tool has been developed to overcome the problems with existing communication and collaboration between users at the various branches of a decentralized organization.

As evidence of the success of this system, all the data regarding employees at different locations is stored on the server side, through which the admin can take decisions in time, and tasks are scheduled in a well-refined manner. The Enterprise Collaboration Tool is best suited to any organization that has sub-centers at different geographical locations.

Data integrity is maintained by well-defined security in the database, so the reports coming out are live and accurate. In total, productivity, quality, safety and user satisfaction are greatly improved in the organization.

