The Architecture of E-Business

Today's world is an Internet world, and demand for the Internet grows every second. The number of Internet users increases each day, and scalability is a major factor in serving them efficiently. Demand for e-business applications, and therefore for net-centric systems, is growing as well. To satisfy customers, Internet systems must scale to serve a growing number of users, connections, and increasingly complex business processes. In the past, scalability was achieved through various forms of parallel computing; massively parallel processors and symmetric multiprocessing were among the techniques used to stretch the scalability of systems. As technology evolved, the cost of hardware, parallel computers, and software fell sharply. Taking advantage of this lower cost, a new approach achieved scalability through clustering and gained popularity: in a cluster, several networked computers share the workload.

The architecture of e-business can be divided into two parts:

Front End

Back End

The front end is the part where data is presented to the client; the back end is where data is stored and where the business logic, however complicated, is held and executed. In a Windows NT environment, the number of simultaneous users is scaled at the front end, while the complexity of the business logic is handled at the back end by partitioning services and data across several servers [24]. Both the front end and the back end achieve scalability through computing power, fault tolerance, and load balancing [32].
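As a hedged illustration of the load-balancing idea above (not code from the paper), the following sketch shows a front end dispatching client requests across a set of back-end servers in round-robin fashion; the class name, server addresses, and port numbers are assumptions made purely for illustration.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical round-robin dispatcher: a front end spreading client
// requests over several back-end servers for load balancing.
public class RoundRobinDispatcher {
    private final List<String> backEnds;                 // addresses of back-end servers
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobinDispatcher(List<String> backEnds) {
        this.backEnds = backEnds;
    }

    // Pick the next back end; the counter wraps around the server list.
    public String chooseBackEnd() {
        return backEnds.get(Math.floorMod(next.getAndIncrement(), backEnds.size()));
    }

    public static void main(String[] args) {
        RoundRobinDispatcher d = new RoundRobinDispatcher(
                Arrays.asList("backend1:8080", "backend2:8080", "backend3:8080"));
        for (int i = 0; i < 6; i++) {
            System.out.println("request " + i + " -> " + d.chooseBackEnd());
        }
    }
}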

A successful computation is achieved by exchanging a large number of messages. Because so many messages are exchanged, communication latencies are high and often become a bottleneck in servers such as request-dispatching HTTP servers, load-balancing brokers, JNDI servers, and traditional servers for parallel computing such as Linda-based servers [11]. Although networks of workstations (NOWs) and clustering bring the price/performance ratio down, the heavy message traffic between workstations raises network latencies, which degrades the performance of such parallel computers.

Wherever there is a shared memory area there is the possibility of concurrent access, and that is the main issue this paper deals with. Here the shared memory is a shared buffer designed in Java running on a high-performance network. Shared memory always brings a synchronization problem. This paper introduces a new concurrent data structure, the Parallel Hash Table (PHT), which addresses the well-known producer-consumer problem and solves its synchronization issue in a way that makes Java networking more efficient. Access to the shared buffer is granted so that a single producer and multiple consumers can operate concurrently. Insert, delete, and search instructions are presented to the parallel dictionary in batches, such that each batch contains only one kind of instruction. Using the PHT concept, the paper proposes a new server written in Java and tests it on a network of 35 workstations running Windows NT. It shows that better performance is obtained by raising the number of worker threads into the range of 50-70, compared with a pool of around 15 [33].
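The paper does not reproduce the PHT code itself, so the following is only a minimal sketch, under assumed names, of the access pattern described above: a single producer applies batches that each contain one kind of mutating instruction, while multiple consumer threads run searches concurrently. The class name BatchedHashTable and the use of a read-write lock are illustrative choices, not the paper's actual design.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative stand-in for the Parallel Hash Table idea: one producer
// applies batches of inserts or deletes, while many consumer threads
// run searches concurrently under a shared read lock.
public class BatchedHashTable<K, V> {
    private final Map<K, V> table = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    // Producer side: a batch containing only insert instructions.
    public void insertBatch(Map<K, V> batch) {
        lock.writeLock().lock();
        try {
            table.putAll(batch);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Producer side: a batch containing only delete instructions.
    public void deleteBatch(List<K> keys) {
        lock.writeLock().lock();
        try {
            keys.forEach(table::remove);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Consumer side: many consumers may hold the read lock at once,
    // so search batches proceed concurrently.
    public V search(K key) {
        lock.readLock().lock();
        try {
            return table.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }
}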

There are many algorithms that provide concurrent access to data; according to the literature [19], they can be divided into three major categories:

Locking algorithms [1,12,29]

Non-blocking algorithms [13,23,31]

Lock-free algorithms

All traditional algorithms are locking algorithms: a process must acquire a lock before it enters the critical section, which prevents all remaining processes from entering that section. Because there is only one lock per critical section and it can be held by only one process at a time, every process that wants to enter the critical section is deferred for an unbounded amount of time while the lock holder is executing; this is why such algorithms are called blocking.

A non-blocking algorithm, by contrast, ensures that a process completes the critical section in a finite time; in other words, a non-blocking algorithm does not force mutual exclusion. This reduces thread synchronization and the need to share a data buffer. A good example of the non-blocking approach is the Leader/Follower pattern. In the Leader/Follower pattern one thread is assigned the role of leader, whose job is to wait for a client request, while the other threads are followers that queue up and wait for their turn to become leader. When the leader detects a new request, it promotes the follower at the head of the queue to become the new leader, processes the client request itself, and then turns back into a follower, enqueuing itself to become leader again. When an external thread pool is used, implementing a non-blocking algorithm becomes very hard because there is no explicit queue, which degrades scalability [26]. The third classification is lock-free algorithms; these may still exhibit blocking behaviour even though they do not use locks.
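To make the distinction concrete, here is a small, hedged Java comparison (not taken from the paper) of a locking counter, where threads block while one holds the monitor, and a lock-free counter built on a compare-and-set loop, where some thread always makes progress; both class names are illustrative.

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative comparison of the locking and lock-free categories above.
public class Counters {

    // Locking (blocking): only one thread at a time may enter the
    // critical section; all others wait on the monitor.
    static class LockingCounter {
        private int value = 0;
        public synchronized void increment() {
            value++;
        }
        public synchronized int get() {
            return value;
        }
    }

    // Lock-free: no lock is taken; each thread retries a compare-and-set
    // until its update wins, so some thread always completes.
    static class LockFreeCounter {
        private final AtomicInteger value = new AtomicInteger(0);
        public void increment() {
            int current;
            do {
                current = value.get();
            } while (!value.compareAndSet(current, current + 1));
        }
        public int get() {
            return value.get();
        }
    }
}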

2. Concurrent Programming and Java Networking

The main reason behind the popularity of Java is its platform independence. Java is a "write once, run anywhere" technology: you can write code on one operating system and run it on any other platform, and what makes this possible is its bytecode. When Java compiles source code it translates it into bytecode, a portable intermediate representation that the Java interpreter uses to run the application; because the bytecode is platform independent, so is Java. Another major reason for Java's popularity is its object orientation. These qualities have made Java one of the most popular languages for high-performance cluster computing and NOWs [10]. Java also offers many rich APIs that simplify network programming, including the TCP/IP socket API, object serialization, servlets, JSP, and JNDI. These APIs simplify network programming to a large extent.

A few aspects make distributed computing successful: a programming language must deliver high performance not only in computation but also in networking and concurrent access. Many technologies, such as multiprocessors and the Java virtual machine (JVM), have been improved around acquiring the lock of an object; these advances improve performance in a vital area, and all of them are built on the multithreading concept. In a multithreaded system, assigning a job to a thread takes time, and if a resource is shared a thread has to wait for its turn. Creating a child thread and acquiring the lock of an object each take very little time individually, but when many threads are waiting for access this overhead becomes a major factor and cannot be ignored. As new versions of Java were released, scalability remained a main concern: to achieve I/O scalability, JDK 1.4 introduced new I/O abstractions that greatly improve I/O operations and address Java's traditional I/O limitations.
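As a hedged sketch of the JDK 1.4 I/O facilities mentioned above (not code from the paper), the following shows a single thread multiplexing many client connections with a java.nio Selector instead of dedicating one blocking thread per connection; the class name and port number are arbitrary choices for illustration.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread services many connections using the JDK 1.4 NIO selector.
public class NioEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(9090));   // arbitrary example port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                               // block until some channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {                    // new client connection
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {               // client sent data: echo it back
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    int read = client.read(buffer);
                    if (read == -1) {
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer);
                    }
                }
            }
        }
    }
}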

There are many network protocols, but this paper chooses a connection-oriented one, TCP/IP. Java provides a rich collection of libraries and classes for network programming. TCP/IP is a full-duplex protocol, meaning data can be sent and received at the same time, and it is fully reliable. Java supports a multithreaded approach with shared buffers and has built-in monitors and semaphore-like primitives. The java.lang.Thread class lets Java spawn, monitor, and control threads, and Java lets the user synchronize access among threads through several mechanisms. The problem with Java, however, is that each monitor has only a single implicit condition on which threads wait. There can be cases where many threads wait in the same monitor on different logical conditions; if the user wants to wake a single specific waiting thread, there is no way to do so other than notifyAll, which is expensive compared with notify. So writing a multithreaded application is always delicate. Furthermore, two aspects of Java remain platform dependent: newly spawned threads and AWT. Taking AWT as an example, its graphics can differ between operating systems; code that draws a circle on Windows might produce an ellipse or some distorted shape elsewhere. And when many threads are waiting for their turn, notification happens in a haphazard order.
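The single-condition limitation described above can be illustrated with a small, hedged example (not from the paper): producers and consumers share one monitor, and because both kinds of waiters sit on the same implicit condition, the code has to use notifyAll rather than notify to remain safe. The class name and capacity are assumptions for illustration.

import java.util.ArrayDeque;
import java.util.Deque;

// One monitor, one implicit condition: "buffer not full" and
// "buffer not empty" waiters share it, so notifyAll is required.
public class SingleConditionBuffer {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity = 10;

    public synchronized void put(int item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();                  // producer waits on the same condition as consumers
        }
        items.addLast(item);
        notifyAll();                 // notify() might wake a waiter of the wrong kind
    }

    public synchronized int take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();                  // consumer waits on the same condition as producers
        }
        int item = items.removeFirst();
        notifyAll();
        return item;
    }
}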

3. Shared Buffer Server Design

In the shared buffer server design, this paper specifies the IP address and the number of child threads that can be created for a client. The design is quite similar to the servers used in HTTP servers and J2EE clusters: both designs are subclassed from the same class and implement the same abstract method, named handleRequest, which handles client requests. While designing a TCP server, a problem is often encountered that is known as the "thundering herd" problem. The TCP server uses a pool of pre-spawned threads, and work is assigned to a worker thread whenever a client request arrives, instead of a thread being created per request. With a pre-spawned pool, however, all worker threads are awakened even though only one of them can acquire the object's lock; activating the remaining threads for nothing decreases performance. Whenever there is a locking scheme in which only one thread can hold the lock at a time, in other words a single point of entry, the performance of the whole system goes down. Figure 1 illustrates the pre-spawned pool server.

Figure 1. TCP server with a pool of pre-spawned threads; all threads are activated.
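The following is a minimal, hedged sketch of the pre-spawned pool design discussed above, not the paper's actual server: a fixed number of worker threads are created up front, and each loops on accept(), handling one client request at a time. The class name, port, pool size, and echo-style handleRequest body are assumptions made for illustration.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Pre-spawned worker pool: threads are created before any client connects,
// and each worker blocks in accept() waiting for the next request.
public class PreSpawnedPoolServer {
    private static final int POOL_SIZE = 15;   // illustrative pool size
    private static final int PORT = 8080;      // illustrative port

    public static void main(String[] args) throws IOException {
        ServerSocket listener = new ServerSocket(PORT);
        for (int i = 0; i < POOL_SIZE; i++) {
            new Thread(() -> serve(listener), "worker-" + i).start();
        }
    }

    private static void serve(ServerSocket listener) {
        while (true) {
            try (Socket client = listener.accept()) {   // all workers contend here
                handleRequest(client);
            } catch (IOException e) {
                return;                                  // socket closed; stop this worker
            }
        }
    }

    // Stand-in for the abstract handleRequest method mentioned in the text.
    private static void handleRequest(Socket client) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
        PrintWriter out = new PrintWriter(client.getOutputStream(), true);
        out.println("echo: " + in.readLine());
    }
}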