Improving Active Packet Loss Measurement Computer Science Essay

Measurement and estimation of packet loss characteristics are challenging due to the relatively rare occurrence and typically short duration of packet loss episodes. While active probe tools are commonly used to measure packet loss on end-to-end paths, there has been little analysis of the accuracy of these tools. The objective of our study is to understand how to measure packet loss episodes accurately with end-to-end probes. Studies show that the accuracy of standard Poisson-modulated end-to-end packet loss measurement has to be improved. Thus, we introduce a new algorithm for packet loss measurement that is designed to overcome the deficiencies in standard Poisson-based tools. Specifically, our method entails probe experiments that follow a geometric distribution, which enables more accurate measurements than standard Poisson probing and other traditional packet loss measurement tools. We also measure the transfer rate. We evaluate the capabilities of our methodology experimentally by developing and implementing a prototype tool, called BADABING. BADABING reports loss characteristics far more accurately than traditional loss measurement tools.


Measuring and analyzing network traffic dynamics between end hosts has provided the foundation for the development of many different network protocols and systems. Of particular importance is understanding packet loss behavior, since loss can have a significant impact on the performance of both TCP- and UDP-based applications. Despite the efforts of network engineers and operators to limit loss, it will probably never be eliminated, due to the intrinsic dynamics and scaling properties of traffic in packet-switched networks. Network operators have the ability to passively monitor nodes within their network for packet loss on routers using SNMP. End-to-end active measurements using probes provide an equally valuable perspective, since they indicate the conditions that application traffic is experiencing on those paths.

Our study involves the empirical evaluation of our new loss measurement methodology. To this end, we developed a one-way active measurement tool called BADABING. BADABING sends fixed-size probes at specified intervals from one measurement host to a collaborating target host. The target system collects the probe packets and reports the loss characteristics after a specified period of time. We also compare BADABING with a standard tool for loss measurement that emits probe packets at Poisson intervals. The results show that our tool reports loss episode estimates much more accurately for the same number of probes. We also show that BADABING estimates converge to the underlying loss episode frequency and duration characteristics.

The most commonly used tools for probing end-to-end paths to measure packet loss resemble the ubiquitous PING utility. PING-like tools send probe packets (e.g., ICMP echo packets) to a target host at fixed intervals. Loss is inferred by the sender if the response packets expected from the target host are not received within a specified time period. Generally speaking, this active measurement approach is problematic because of the discrete sampling nature of the probe process. Thus, the accuracy of the resulting measurements depends both on the characteristics and interpretation of the sampling process and on the characteristics of the underlying loss process.
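To make the inference step concrete, the sketch below shows a minimal fixed-interval prober in the style of PING. It is our own illustration, not part of any of the tools discussed: it sends UDP probes (plain Java cannot send ICMP directly) and assumes a cooperating UDP echo responder on the target (e.g., the echo service on port 7); a probe is counted as lost when no reply arrives within the timeout.

import java.net.*;

// Minimal PING-like prober (illustrative sketch): send a UDP probe at
// fixed intervals and infer loss whenever no echo reply arrives in time.
public class FixedIntervalProber {
    public static void main(String[] args) throws Exception {
        String targetHost = args.length > 0 ? args[0] : "localhost";
        int probes = 100, intervalMs = 100, timeoutMs = 500;
        int lost = 0;
        DatagramSocket socket = new DatagramSocket();
        socket.setSoTimeout(timeoutMs);
        InetAddress target = InetAddress.getByName(targetHost);
        for (int seq = 0; seq < probes; seq++) {
            byte[] payload = Integer.toString(seq).getBytes();
            socket.send(new DatagramPacket(payload, payload.length, target, 7));
            try {
                DatagramPacket reply = new DatagramPacket(new byte[64], 64);
                socket.receive(reply);            // blocks until reply or timeout
            } catch (SocketTimeoutException ex) {
                lost++;                           // no reply in time: infer loss
            }
            Thread.sleep(intervalMs);             // fixed inter-probe gap
        }
        socket.close();
        System.out.println("Estimated loss rate: " + (double) lost / probes);
    }
}

Note that a round-trip prober like this conflates forward-path and reverse-path loss, which is one of the interpretation problems mentioned above.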

The goal of our study is to understand how to accurately measure loss characteristics on end-to-end paths with probes. We are interested in two specific characteristics of packet loss: loss episode frequency, and loss episode duration. Our study consists of three parts: (i) empirical evaluation of the currently prevailing approach, (ii) development of estimation techniques that are based on novel experimental design, novel probing techniques, and simple validation tests, and (iii) empirical evaluation of this new methodology.

The most important implication of these results is that there is now a methodology and tool available for wide-area studies of packet loss characteristics that enables researchers to understand and specify the trade-offs between accuracy and impact. Furthermore, the tool is self-calibrating in the sense that it can report when estimates are poor. Practical applications could include its use for path selection in peer-to-peer overlay networks and as a tool for network operators to monitor specific segments of their infrastructures.

Existing System:

In the existing traditional packet loss measurement tools, the accuracy of the packet loss measurement has to be improved.

Several studies estimate packet loss using active loss measurements, such as Poisson-modulated tools, which can be quite inaccurate.

Proposed System:


The goal of our study is to understand how to measure end-to-end packet loss characteristics accurately with probes.

Specifically, our method entails probe experiments that follow a geometric distribution to improve the accuracy of the packet loss measurement.
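The following sketch makes the geometric probe process concrete. It is our own illustration under assumed parameters, not the BADABING source: time is divided into discrete slots, and a probe is initiated in each slot with probability p, so the gaps between probes are geometrically distributed (standard Poisson probing would instead draw exponentially distributed gaps in continuous time).

import java.util.Random;

// Geometric probe scheduling sketch: in each discrete time slot a probe
// fires with probability p, so inter-probe gaps follow a geometric
// distribution. The slot count and p are illustrative parameters.
public class GeometricProbeSchedule {
    public static void main(String[] args) {
        double p = 0.3;     // per-slot probe probability (assumed value)
        int slots = 20;     // number of discrete time slots to simulate
        Random rng = new Random();
        for (int slot = 0; slot < slots; slot++) {
            if (rng.nextDouble() < p) {
                System.out.println("slot " + slot + ": send probe");
            }
        }
    }
}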

Modules of the Project:

Packet Separation

Designing the Queue

Packet Receiver

User Interface Design

Packet Loss Calculation

Module Description:

Packet Separation:

In this module, we separate the input data into packets. These packets are then sent to the queue.

Designing the Queue:

The queue is designed to simulate packet loss: it receives the packets from the sender, drops some of them to create the loss, and then sends the remaining packets to the receiver.
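A minimal sketch of such a lossy queue is shown below. It is illustrative only; the drop probability and the list-based interface are our assumptions, not the project's actual socket-based code.

import java.util.*;

// Illustrative lossy queue: each packet is dropped with a fixed
// probability, and the survivors are forwarded to the receiver.
public class LossyQueue {
    private final double dropProbability;
    private final Random rng = new Random();

    public LossyQueue(double dropProbability) {
        this.dropProbability = dropProbability;
    }

    // Returns the packets that survive the queue; the rest are "lost".
    public List<String> forward(List<String> packets) {
        List<String> delivered = new ArrayList<String>();
        for (String packet : packets) {
            if (rng.nextDouble() >= dropProbability) {
                delivered.add(packet);
            }
        }
        return delivered;
    }
}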

Packet Receiver:

The packet receiver receives the packets from the queue after the packet loss has been introduced, and then displays the packets it received.

User Interface Design:

In this module, we design the user interfaces for the sender, the queue, the receiver, and the window that displays the results. These windows are designed to display all the processes in this project.

Packet Loss Calculation:

The calculations to find the packet loss are done in this module. Thus, we develop the tool to find the packet loss.
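At its core the calculation compares how many packets were sent with how many arrived. A minimal sketch (our own illustration, with made-up example counts):

// Illustrative loss-rate calculation: compare sent and received counts.
public class LossCalculator {
    public static double lossRate(int sent, int received) {
        if (sent == 0) return 0.0;
        return (double) (sent - received) / sent;
    }

    public static void main(String[] args) {
        int sent = 1000, received = 958;   // example counts (assumed)
        System.out.printf("Lost %d of %d packets (loss rate %.1f%%)%n",
                sent - received, sent, 100 * lossRate(sent, received));
    }
}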

System Requirements:

Hardware:

Processor : Pentium IV 2.6 GHz

RAM : 512 MB

Monitor : 15"

Hard Disk : 20 GB

CD-Drive : 52X

Key Board : Standard 102 Keys

Software:

Front End : Java, Swings

Tools Used : JFrameBuilder

Operating System : Windows XP

System Architecture:

[Architecture diagram: the Sender feeds packets into the Queue, which forwards them to the Receiver.]

Data Flow Diagram:

[Data flow diagram: the Sender sends packets to the Queue, which introduces the packet loss; the Queue delivers the packets remaining after loss to the Receiver.]

Use Case Diagram:

A use-case model can be instrumental in project development, planning, and documentation of system requirements. A use case is an interaction between users and a system; it captures the goal of the users and the responsibility of the system to its users. A use-case model describes the uses of the system and shows the sequences of events that can be performed.

A use case is shown as an ellipse containing the name of the use case; the name can be placed below or inside the ellipse.

An actor is shown as a class rectangle with the label <<actor>>, or as the label and a stick figure, or just as a stick figure with the name of the actor.


Sequence Diagram:

The sequence diagram is an interaction diagram that emphasizes the time ordering of messages for modeling a real-time system. Graphically, a sequence diagram is a table that shows objects arranged along the X axis and messages, ordered in increasing time, along the Y axis. A sequence diagram consists of objects, links, lifelines, focus of control, and messages.

Object: Objects are typically named or anonymous instances of a class, but may also represent instances of other things such as components, collaborations, and nodes.

Link: A link is a semantic connection among objects; i.e., an instance of an association is called a link.

Lifeline: A lifeline is a vertical dashed line that represents the lifetime of an object.

Focus of Control: A focus of control is a tall, thin rectangle that shows the period of time during which an object is performing an action.

Messages: A message is a specification of a communication between objects that conveys information, with the expectation that activity will ensue.

Class Diagram:

A class diagram is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes.

Class: Class is a description of a set of objects that share the same attributes, operations, relationships and semantics. A class implements one or more interfaces.

Lady using a tablet
Lady using a tablet

This Essay is

a Student's Work

Lady Using Tablet

This essay has been submitted by a student. This is not an example of the work written by our professional essay writers.

Examples of our work

Interface: Interface is a collection of operations that specify a service of a class or component. An interface describes the externally visible behavior of that element. An interface might represent the complete behavior of a class or component.

Collaboration: A collaboration defines an interaction and is a society of roles and other elements that work together to provide some cooperative behavior. Collaborations therefore have structural as well as behavioral dimensions. These collaborations represent the implementation of patterns that make up a system.

Relationships include the following:

Dependency: Dependency is a semantic relationship between two things in which a change to one thing may affect the semantics of the other thing.

Generalization: A generalization is a specialization / generalization relationship in which objects of the specialized element (child) are substitutable for objects of the generalized element (parent).

Association: An association is a structural relationship that describes a set of links, a link being a connection among objects. Aggregation is a special kind of association, representing a structural relationship between a whole and its parts.

[Class notation: a rectangle with three compartments for the class name, its attributes, and its operations.]

Activity Diagram:

An activity diagram shows the flow from activity to activity. The activity diagram emphasizes the dynamic view of a system. It consists of activity states, action states, transitions, and objects.

Activity State: An activity state is a kind of state in an activity diagram; it shows an ongoing non-atomic execution within a state machine. An activity state can be further decomposed.

Action State: Action states are states of the system, each representing the execution of an action. An action state cannot be further decomposed.

Transition: A transition specifies the path from one action or activity state to the next action or activity state. A transition is rendered as a simple directed line.

Object: An object is a concrete manifestation of an abstraction; an entity with a well-defined boundary and identity that encapsulates state and behavior; an instance of a class. Objects may be involved in the flow of control associated with an activity diagram.

Java:

Java is an object-oriented, multithreaded programming language. It is designed to be small, simple, and portable across different platforms and operating systems.

Features of Java:

Platform Independence:

The Write-Once-Run-Anywhere ideal has not been fully achieved (tuning for different platforms is usually required), but Java comes closer to it than other languages.

Object Oriented

Object oriented throughout - no coding outside of class definitions, including main().

An extensive class library available in the core language packages.

Compiler/Interpreter Combo

Code is compiled to byte codes that are interpreted by a Java virtual machine (JVM).

This provides portability to any machine for which a virtual machine has been written.

The two steps of compilation and interpretation allow for extensive code checking and improved security.

Robust

Exception handling built-in, strong type checking (that is, all data must be declared an explicit type), local variables must be initialized.

Several features of C & C++ eliminated:

No memory pointers 

No preprocessor

Array index limit checking

Automatic Memory Management:

Automatic garbage collection - memory management handled by JVM.

Security:

No memory pointers

Programs run inside the virtual machine sandbox.

Array index limit checking

Code pathologies reduced by

Byte code verifier - checks classes after loading

Class loader - confines objects to unique namespaces. Prevents loading a hacked "java.lang.SecurityManager" class, for example.

Security manager - determines what resources a class can access such as reading and writing to the local disk.

Dynamic Binding:

The linking of data and methods to where they are located is done at run-time.

New classes can be loaded while a program is running. Linking is done on the fly.

Even if libraries are recompiled, there is no need to recompile code that uses classes in those libraries.

This differs from C++, which uses static binding. This can result in fragile classes for cases where linked code is changed and memory pointers then point to the wrong addresses.

Good Performance

Interpretation of byte codes slowed performance in early versions, but advanced virtual machines with adaptive and just-in-time compilation and other techniques now typically provide performance at 50% to 100% of the speed of comparable C++ programs.

Threading

Lightweight processes, called threads, can easily be spun off to perform multiprocessing.

Can take advantage of multiprocessors where available

Great for multimedia displays.
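As a small illustration of how easily a thread is spun off (a generic Java example, not project code):

// Minimal threading example: spin off a background worker thread.
public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 3; i++) {
                    System.out.println("worker step " + i);
                }
            }
        });
        worker.start();   // runs concurrently with the main thread
        worker.join();    // wait for the worker to finish
        System.out.println("main done");
    }
}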

Net Beans:

NetBeans is a Java-based development environment (IDE) and platform originally developed by Sun. It includes user interface functions, a source code editor, a GUI editor, and version control, as well as support for distributed applications (CORBA, RMI, etc.) and Web applications (JSPs, servlets, etc.).

In 1999, Sun acquired NetBeans Developer from NetBeans and rebranded it as Forte for Java Community Edition (Sun acquired Forte in 1999). In 2000, Sun made the NetBeans IDE open source.

1. GUI: A major requirement of today's developers is to provide a good user interface for their users. They can provide whatever functionality they need, but it is the GUI that makes the user aware that a particular piece of functionality exists, and it is easier to click and select than to type something on a black, boring screen. Thus, today's developers need IDEs such as NetBeans that produce ready-made window forms with all the required buttons, labels, text boxes, and the like, which can be tailored for the program in question.

2. Database Integration: Developers of database-backed programs know how hard it is to interface a back-end database to a front-end program. This is where NetBeans packs a punch by providing a CRUD (Create, Read, Update, Delete) application shell.

J2EE (Java 2 Enterprise Edition):

Today, more and more developers want to write distributed transactional applications for the enterprise and leverage the speed, security, and reliability of server-side technology. If you are already working in this area, you know that in today's fast-moving and demanding world of e-commerce and information technology, enterprise applications have to be designed, built, and produced for less money, with greater speed, and with fewer resources than ever before.

To reduce costs and fast-track enterprise application design and development, the Java 2 Platform, Enterprise Edition (J2EE) technology provides a component-based approach to the design, development, assembly, and deployment of enterprise applications. The J2EE platform offers a multi tiered distributed application model, the ability to reuse components, integrated Extensible Markup Language (XML)-based data interchange, a unified security model, and flexible transaction control. Not only can you deliver innovative customer solutions to market faster than ever, but your platform-independent J2EE component-based solutions are not tied to the products and application programming interfaces (APIs) of any one vendor. Vendors and customers enjoy the freedom to choose the products and components that best meet their business and technological requirements.

Distributed Multi tiered Applications:

The J2EE platform uses a multi-tiered distributed application model. Application logic is divided into components according to function, and the various application components that make up a J2EE application are installed on different machines depending on the tier in the multi-tiered J2EE environment to which the application component belongs.

J2EE Components:

Client-tier components run on the client machine.

Web-tier components run on the J2EE server.

Business-tier components run on the J2EE server.

Enterprise information system (EIS)-tier software runs on the EIS server.

J2EE multi tiered applications are generally considered to be three-tiered applications because they are distributed over three different locations: client machines, the J2EE server machine, and the database or legacy machines at the back end. Three-tiered applications that run in this way extend the standard two-tiered client and server model by placing a multithreaded application server between the client application and back-end storage.

J2EE applications are made up of components. A J2EE component is a self-contained functional software unit that is assembled into a J2EE application with its related classes and files and that communicates with other components. The J2EE specification defines the following J2EE components:

Application clients and applets are components that run on the Client.

Java Servlet and JavaServer Pages (JSP) technology components are Web components that run on the server.

Enterprise JavaBeans (EJB) components (enterprise beans) are business components that run on the server.

J2EE components are written in the Java programming language and are compiled in the same way as any program in the language. The difference between J2EE components and "standard" Java classes is that J2EE components are assembled into a J2EE application, verified to be well formed and in compliance with the J2EE specification, and deployed to production, where they are run and managed by the J2EE server.

Web Clients

A Web client consists of two parts: dynamic Web pages containing various types of markup language (HTML, XML, and so on), which are generated by Web components running in the Web tier, and a Web browser, which renders the pages received from the server.

A Web client is sometimes called a thin client. Thin clients usually do not do things like query databases, execute complex business rules, or connect to legacy applications. When you use a thin client, heavyweight operations like these are off-loaded to enterprise beans executing on the J2EE server where they can leverage the security, speed, services, and reliability of J2EE server-side technologies.

Applets

Web components are the preferred API for creating a Web client program because no plug-ins or security policy files are needed on the client systems. Also, Web components enable cleaner and more modular application design because they provide a way to separate applications programming from Web page design. Personnel involved in Web page design thus do not need to understand Java programming language syntax to do their jobs.

Application Clients:

A J2EE application client runs on a client machine and provides a way for users to handle tasks that require a richer user interface than can be provided by a markup language. It typically has a graphical user interface (GUI) created from Swing or Abstract Window Toolkit (AWT) APIs, but a command-line interface is certainly possible.

Application clients directly access enterprise beans running in the business tier. However, if application requirements warrant it, a J2EE application client can open an HTTP connection to establish communication with a servlet running in the Web tier.

JavaBeans Component Architecture

The server and client tiers might also include components based on the JavaBeans component architecture (JavaBeans component) to manage the data flow between an application client or applet and components running on the J2EE server or between server components and a database. JavaBeans components are not considered J2EE components by the J2EE specification.

JavaBeans components have instance variables and get and set methods for accessing the data in the instance variables. JavaBeans components used in this way are typically simple in design and implementation, but should conform to the naming and design conventions outlined in the JavaBeans component architecture.
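For instance, a JavaBeans component following these conventions might look like this (a generic illustration; the class and property names are made up):

import java.io.Serializable;

// A simple JavaBean: private instance variables exposed through
// get/set methods, per the JavaBeans naming conventions.
public class ProbeResult implements Serializable {
    private int probesSent;
    private int probesLost;

    public ProbeResult() { }   // beans require a no-argument constructor

    public int getProbesSent() { return probesSent; }
    public void setProbesSent(int probesSent) { this.probesSent = probesSent; }

    public int getProbesLost() { return probesLost; }
    public void setProbesLost(int probesLost) { this.probesLost = probesLost; }
}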

J2EE Server Communications

The client communicates with the business tier running on the J2EE server either directly or, as in the case of a client running in a browser, by going through JSP pages or servlets running in the Web tier.

A J2EE application can use a thin browser-based client or a thick application client. In deciding which one to use, you should be aware of the trade-offs between keeping functionality on the client and close to the user (thick client) and off-loading as much functionality as possible to the server (thin client). The more functionality you off-load to the server, the easier it is to distribute, deploy, and manage the application; however, keeping more functionality on the client can make for a better perceived user experience.

ODBC:

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program, and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn't change whether it talks to Oracle or SQL Server. We only mention these two as an example. There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn't as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.

JDBC:

In an effort to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of "plug-in" database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC's framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90 day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.

The remainder of this section will cover enough information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC. That would fill an entire book.

JDBC Goals

Few software packages are designed without goals in mind. JDBC is one that, because of its many goals, drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The eight design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to "generate" JDBC code and to hide many of JDBC's complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must "sit" on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java's acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows for more error checking to be done at compile time; consequently, fewer errors appear at run time.

Keep the common cases simple

Because, more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs, and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
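A minimal sketch of the common case (the data source name, credentials, table, and column names here are placeholders, not part of this project):

import java.sql.*;

// Minimal JDBC usage: connect, run a simple SELECT, read the results.
public class JdbcDemo {
    public static void main(String[] args) throws Exception {
        // Load the JDBC-ODBC bridge driver (era-appropriate example).
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        String url = "jdbc:odbc:SampleDataSource"; // placeholder data source
        Connection con = DriverManager.getConnection(url, "user", "password");
        try {
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT name, value FROM settings");
            while (rs.next()) {
                System.out.println(rs.getString("name") + " = "
                        + rs.getString("value"));
            }
            rs.close();
            stmt.close();
        } finally {
            con.close();
        }
    }
}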

Activity Diagram

[Activity diagram: the user selects a file; the file undergoes packet separation; the packets pass to the queue for loss; the packets with loss are then used for packet loss estimation.]

Use Case Diagram

[Use case diagram: the user's use cases are choosing a text file, packet separation, the queue, receiving packets with loss, and packet loss calculation.]

Sequence Diagram:

[Sequence diagram: the user selects a file and gives it to packet separation; packet separation gives the separated packets to the queue; the queue sends the packets with loss to the packet receiver, which calculates the packet loss.]

Class Diagram

[Class diagram: User (Select File(), Send File()); Packet Separation (Separation()); Queue (Queue()); Receiver (Receive Packets(), Loss calculation()).]

Sample Code:

/****************************************************************/
/*                         PacketSender                         */
/****************************************************************/

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.net.*;
import java.io.*;

/**
 * Sender window: loads a text file, splits its contents into
 * single-character packets, and streams them to the queue over a socket.
 */
public class PacketSender extends JFrame
{
    // Variables declaration
    private JLabel jLabel1;
    private JLabel jLabel2;
    private JLabel jLabel3;
    private JTextField jTextField1;
    private JTextArea jTextArea1;
    private JScrollPane jScrollPane1;
    private JButton jButton1;
    private JButton jButton2;
    private JButton jButton3;
    private JPanel contentPane;
    public float filelength;
    public byte filebyte[];       // sized to the file once it is loaded
    public String filstr[];
    public int filint[];
    public char filchar[];
    public int i;
    Socket st;
    // End of variables declaration

    public PacketSender()
    {
        super();
        initializeComponent();
        this.setVisible(true);
    }

    /**
     * This method is called from within the constructor to initialize the
     * form. WARNING: Do NOT modify this code. The content of this method is
     * always regenerated by the form designer (JFrameBuilder); back up this
     * GUI file before revising it, so your design can be retrieved later.
     */
    private void initializeComponent()
    {
        jLabel1 = new JLabel();
        jLabel1.setFont(new Font("Arial", Font.BOLD, 14));
        jLabel2 = new JLabel();
        jLabel2.setFont(new Font("Arial", Font.BOLD, 12));
        jLabel3 = new JLabel();
        jLabel3.setFont(new Font("Arial", Font.BOLD, 12));
        jTextField1 = new JTextField();
        jTextField1.setFont(new Font("Arial", Font.BOLD, 12));
        jTextArea1 = new JTextArea();
        jTextArea1.setFont(new Font("Arial", Font.BOLD, 12));
        jScrollPane1 = new JScrollPane();
        jButton1 = new JButton();
        jButton2 = new JButton();
        jButton3 = new JButton();
        contentPane = (JPanel) this.getContentPane();

        jLabel1.setText("SENDER");
        jLabel2.setText("Open the File");
        jLabel3.setText("Status Information");

        jTextField1.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e)
            {
                jTextField1_actionPerformed(e);
            }
        });

        jScrollPane1.setViewportView(jTextArea1);

        jButton1.setText("Browse");
        jButton1.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e)
            {
                jButton1_actionPerformed(e);
            }
        });

        jButton2.setText("Send");
        jButton2.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e)
            {
                jButton2_actionPerformed(e);
            }
        });

        jButton3.setText("Exit");
        jButton3.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e)
            {
                jButton3_actionPerformed(e);
            }
        });

        contentPane.setLayout(null);
        contentPane.setBackground(new Color(119, 119, 119));
        addComponent(contentPane, jLabel1, 161, 4, 132, 30);
        addComponent(contentPane, jLabel2, 54, 46, 192, 24);
        addComponent(contentPane, jLabel3, 119, 156, 187, 24);
        addComponent(contentPane, jTextField1, 40, 70, 270, 30);
        addComponent(contentPane, jScrollPane1, 55, 184, 321, 147);
        addComponent(contentPane, jButton1, 310, 70, 80, 30);
        addComponent(contentPane, jButton2, 80, 110, 90, 30);
        addComponent(contentPane, jButton3, 190, 110, 90, 30);

        this.setTitle("PacketSender");
        this.setLocation(new Point(135, 133));
        this.setSize(new Dimension(437, 400));
        this.setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
    }

    /** Add Component Without a Layout Manager (Absolute Positioning) */
    private void addComponent(Container container, Component c, int x, int y, int width, int height)
    {
        c.setBounds(x, y, width, height);
        container.add(c);
    }

    private void jTextField1_actionPerformed(ActionEvent e)
    {
        System.out.println("\njTextField1_actionPerformed(ActionEvent e) called.");
    }

    // Browse: load the selected file and split it into one-character packets.
    private void jButton1_actionPerformed(ActionEvent e)
    {
        System.out.println("\nLoading File");
        try
        {
            FileDialog fd = new FileDialog(this, "Open", FileDialog.LOAD);
            fd.setVisible(true); // fd.show() is deprecated
            File f = new File(fd.getDirectory() + fd.getFile());
            jTextField1.setText(f.getPath());
            filelength = f.length();
            filebyte = new byte[(int) filelength]; // was a fixed 10000-byte buffer
            FileInputStream fin = new FileInputStream(f);
            fin.read(filebyte);
            fin.close();
            jTextArea1.setText("\n File Loaded");
            jTextArea1.append("\n File Length = " + filelength);
            jTextArea1.append("\n File Size = " + filelength / 1024 + " KB\n ");
            filint = new int[(int) filelength];
            filchar = new char[(int) filelength];
            filstr = new String[(int) filelength];
            jTextArea1.append("\n File Content : \n");
            for (i = 0; i < filelength; i++)
            {
                filint[i] = (int) filebyte[i];
                filchar[i] = (char) filint[i];
                filstr[i] = "" + filchar[i];
                jTextArea1.append(filstr[i]);
            }
        }
        catch (Exception er)
        {
            System.out.println(er);
        }
    }

    // Send: stream the packets to the queue listening on port 4500.
    private void jButton2_actionPerformed(ActionEvent e)
    {
        System.out.println("Sending Packets");
        try
        {
            st = new Socket("localhost", 4500);
            DataOutputStream dos = new DataOutputStream(st.getOutputStream());
            dos.writeFloat(filelength); // tell the queue how many packets follow
            for (i = 0; i < filelength; i++)
            {
                dos.writeUTF(filstr[i]);
            }
            st.close();
        }
        catch (Exception ty)
        {
            System.out.println(ty); // was an empty catch block; report the error
        }
    }

    private void jButton3_actionPerformed(ActionEvent e)
    {
        System.out.println("\nExit");
        System.exit(0);
    }

    public static void main(String[] args) // missing in the original listing
    {
        new PacketSender();
    }
}

Screen Shots:

Testing:

Introduction:

After the development of any computer-based system is finished, the next complicated and time-consuming process is system testing. Only during testing can the development company determine how far the user requirements have been met.

The following are some of the testing methods applied to this project:

Source Code Testing:

This examines the logic of the system. If we get the output that is required by the user, then we can say that the logic is correct.

Specification Testing:

Here we establish what the program should do and how it should perform under various conditions. This testing is a comparative study of system performance against the system requirements.

Module Level Testing:

Here errors are found in each individual module. This encourages the programmer to find and rectify errors without affecting the other modules.

Unit Testing:

Unit testing focuses verification effort on the smallest unit of software: the module. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in the algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.

Integration Testing:

Data can be lost across an interface, and one module can have an inadvertent, adverse effect on another. Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing.

Validation Testing:

It begins after integration testing is successfully completed. Validation succeeds when the software functions in a manner that can be reasonably accepted by the client. Here, the majority of the validation is done during the data-entry operation, where there is the greatest possibility of entering wrong data. Other validations are performed in all processes where correct details and data must be entered to get the required results.

Recovery Testing:

Recovery testing forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, re-initialization and data recovery are each evaluated for correctness.

Security Testing:

Security testing attempts to verify that the protection mechanisms built into the system will in fact protect it from improper penetration. The tester may attempt to acquire passwords through external clerical means, may attack the system with custom software designed to break down any defenses, and may purposely cause errors.

Performance Testing:

Performance testing is used to test the runtime performance of software within the context of an integrated system. Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation.

Black box Testing:

Black-box testing focuses on the functional requirements of software. It enables the tester to derive sets of input conditions that will fully exercise all functional requirements for a program. Black-box testing attempts to find errors in the following categories:

Incorrect or missing function

Interface errors

Errors in data structures or external database access

Performance errors

Output Testing:

After performing validation testing, the next step is output testing of the proposed system, since no system can be termed useful until it produces the required output in the specified format. The output format is considered in two ways: the screen format and the printer format.

User Acceptance Testing:

User acceptance testing is a key factor for the success of any system. The system under consideration was tested for user acceptance by constantly keeping in touch with prospective system users during development and making changes whenever required.

Implementation:

Maintenance:

The term "software maintenance" is used to describe the software engineering activities that occur following delivery of a software product to the customer. The maintenance phase of the software life cycle is the time period in which a software product performs useful work. Maintenance activities involve making enhancement to software products, adapting products to new environments and correcting problems. Software product enhancement may involve providing new functional capabilities, improving user display and modes of interaction, and upgrading external documents. Adaptation of software to a new environment may involve moving the software to a different machine. Problem correction involves modification and revalidation of software to correct errors. The enhancement of this project can be accomplished easily. That is, any new functional capabilities can be added to the project by simply including the new module in the homepage and giving a hyperlink to that module. Adaptation of this project to a new environment is also performed easily.

Corrective Maintenance:

Even with the best quality assurance activities, it is likely that the customer will uncover defects in the software. Corrective maintenance changes the software to correct such defects.

Adaptive Maintenance:

Adaptive maintenance modifies the software to properly interface with a changing environment. In the earlier system, behavior was fixed, and including any new change was a difficult task; the system has now been modified so that the user can define various changes, and it is designed to accommodate new changes in the future.

Enhancement Maintenance:

As software is used, the customer/user will recognize additional functions that would provide benefit. Perfective maintenance extends the software beyond its original functional requirements.

Applications:

Simple techniques that allow users to validate the measurement outputs are introduced. We implemented this method in a new tool, BADABING, which we tested in our laboratory. Our tests demonstrate that BADABING, in most cases, accurately estimates loss frequencies and durations over a range of cross-traffic conditions.

Future Enhancements:

We are also considering alternative, parametric methods for inferring loss characteristics from our probe process. Another task is to estimate the variability of the estimates of congestion frequency and duration themselves directly from the measured data, under a minimal set of statistical assumptions on the congestion process.

Conclusion:

The purpose of our study was to understand how to measure end-to-end packet loss characteristics accurately with probes, and in a way that enables us to specify the impact on the bottleneck queue. We began by evaluating the capabilities of simple Poisson-modulated probing in a controlled laboratory environment consisting of commodity end hosts and IP routers. We consider this testbed ideal for evaluating loss measurement tools, since it enables repeatability, establishment of ground truth, and a range of traffic conditions under which to test the tool. Our initial tests indicate that simple Poisson probing is relatively ineffective at measuring loss episode frequency or loss episode duration, especially when subjected to TCP (reactive) cross traffic.