Issues And Challenges Of Scheduling Computer Science Essay

Abstract- In recent years, one of the Cloud services, Infrastructure-as-a-Service (IaaS), has provided compute resources on demand for applications such as parallel data processing. Major cloud computing companies have started to integrate frameworks for parallel data processing into their product portfolios, making it easy for customers to access these services and to deploy their programs. However, the processing frameworks currently in use were designed for static, homogeneous cluster setups and disregard the particular nature of a cloud. Consequently, the allocated compute resources may be inadequate for big parts of the submitted job and unnecessarily increase processing time and cost. Moreover, current algorithms do not consider scheduling and security during job execution. In this paper we focus on the issues and challenges of scheduling and protection algorithms for proficient parallel data processing in real-time cloud computing services. Our algorithm contains all the concrete information required to schedule and execute a received job on the cloud, and each execution is by default assigned to its own Execution Instance. The resulting code is lightweight and portable, provides a random number generator suitable for both encryption and decryption, and is designed to run data processing across a large number of jobs with high efficacy in real-time cloud services.

Keywords- Cloud Computing, Resource Allocation, Scheduling Strategy, Security Algorithms.

Introduction

Cloud computing is not an innovation per se, but a means of constructing IT services that use advanced computational power and improved storage capabilities. The main focus of cloud computing, from the provider's view, is that extraneous hardware is connected to support downtime on any device in the network without a change in the users' perspective [1]. Also, the users' software image should be easily transferable from one cloud to another. Balding proposes that a layering mechanism should exist between the front-end software, middleware networking, and back-end servers and storage, so that each part can be designed, implemented, tested and run independently of subsequent layers. This paper introduces the current state of cloud computing, its development challenges, and academic and industry research efforts. Further, it describes cloud computing security problems and benefits and showcases a model of secure architecture for cloud computing implementation.

As more and more data is generated at a faster-than-ever rate, processing large volumes of data is becoming a challenge for data analysis software. Addressing performance issues, Cloud Computing: Data-Intensive Computing and Scheduling explores the evolution of classical techniques and describes completely new methods and innovative algorithms. This paper delineates many concepts, methods, and algorithms used in cloud computing.

After a general introduction to the field, the text covers resource management, including scheduling algorithms for real-time tasks and practical algorithms for user bidding and auctioneer pricing. It next explains approaches to data analytical query processing, including pre-computing, data indexing, and data partitioning. Applications of MapReduce, a new parallel programming model, are then presented.

RELATED WORK

This section deals with the study of the system and of its software background.

Cloud Computing Overview:

1. "A large-scale distributed computing paradigm that is driven by economies of scale, in which a pool of abstracted, virtualized, dynamically-scalable, managed computing power, storage, platforms, and services are delivered on demand to external customers over the Internet."

2. "A style of computing where scalable and elastic IT capabilities are provided as a service to multiple external customers using Internet technologies."

3. "Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

Resource Sharing

This allows different customers to share the same resource at the same time. There are three models you can use:

Sharing disabled (used in most cases) -- only one customer can reserve a given resource at a time.

Allowed -- it is up to the customer to decide whether other customers can share the resource with them or whether they wish to use the resource exclusively. An example is a shuttle bus rental which normally also takes other customers, but sometimes a customer may wish to use the entire shuttle van exclusively. In the pricing manager you can specify a rule which updates the price in such a case.

Always possible -- this forces all customers to share the resource with others. This mode should be used whenever you sell tickets, seats, or any other kind of resource used on a per-person basis (including bikes, personal gear, dormitory beds, etc.). If you select this option, two things happen: Planyo adds a new reservation form item, "Number of persons", where the customer can set the number of tickets/seats they wish to reserve; in addition, selecting this mode lowers the monthly price in the case of Planyo PRO (for shared resources Planyo counts the number of resources as half of the total seats/tickets available).

Real-time scheduling for cloud computing

There are emerging classes of applications that can benefit from stronger timing guarantees of cloud services. These mission-critical applications typically have deadline requirements, and any delay is considered a failure for the whole deployment. For instance, traffic control centers periodically collect the state of roads through sensor devices, and the database is updated with recent information before the next data reports are submitted. If anyone consults the control center about traffic problems, a real-time decision should be returned to help operators choose appropriate control actions. Besides, current service level agreements cannot give cloud users real-time control over the timing behavior of their applications, so a more flexible, transparent and trustworthy service agreement between cloud providers and users is needed in the future.

Given the above analysis, the ability to satisfy the timing constraints of such real-time applications plays a significant role in a cloud environment. However, the existing cloud schedulers are not well suited for real-time tasks, because they lack support for the strict requirements of hard deadlines. A real-time scheduler must ensure that processes meet their deadlines, regardless of system load or makespan.

Priority is applied to the scheduling of these periodic tasks with deadlines. Every task in priority scheduling is given a priority through some policy, so that the scheduler assigns tasks to resources according to their priorities. Based on the policy for assigning priority, real-time scheduling is classified into two types: the fixed priority strategy and the dynamic priority strategy.
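As an illustrative sketch (the task data and function names below are ours, not from any cited system), the two strategies differ only in which key the scheduler compares when picking the next ready task:

```c
#include <stddef.h>

/* A periodic task carrying both a static priority and a dynamic
 * absolute deadline, so either strategy can be demonstrated. */
struct task {
    const char *name;
    int fixed_priority;   /* smaller value = higher priority */
    long abs_deadline;    /* absolute deadline, for EDF */
};

/* Fixed priority strategy: priorities never change at run time
 * (as in rate-monotonic scheduling). */
size_t pick_fixed(const struct task *t, size_t n) {
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (t[i].fixed_priority < t[best].fixed_priority)
            best = i;
    return best;
}

/* Dynamic priority strategy (earliest-deadline-first): the ready
 * task with the earliest absolute deadline runs next. */
size_t pick_edf(const struct task *t, size_t n) {
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (t[i].abs_deadline < t[best].abs_deadline)
            best = i;
    return best;
}
```

Under the fixed strategy the comparison key is assigned once and never recomputed; under the dynamic strategy it changes as deadlines approach, so the same task set can yield a different dispatch order.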

SCHEDULING AND SECURITY ALGORITHMS

Scheduling Algorithms:

Job Scheduling and Execution

After receiving a valid Job Graph from the user, Nephele's Job Manager transforms it into an Execution Graph. An Execution Graph is Nephele's primary data structure for scheduling and monitoring the execution of a Nephele job. Unlike the abstract Job Graph, the Execution Graph contains all the concrete information required to schedule and execute the received job on the cloud.

Parallelization and Scheduling Strategies

Constructing an Execution Graph from a user's submitted Job Graph leaves different degrees of freedom to Nephele. Unless the user provides job annotations containing more specific instructions, we currently pursue a simple default strategy [2]: each vertex of the Job Graph is transformed into one Execution Vertex. The default channel types are network channels. Each Execution Vertex is by default assigned to its own Execution Instance unless the user's annotations or other scheduling restrictions (e.g. the usage of in-memory channels) prohibit it.

Security Algorithms:

Twofish Algorithm

Twofish is a block cipher by Counterpane Labs. It was one of the five Advanced Encryption Standard (AES) finalists [3]. Twofish is unpatented, and the source code is uncopyrighted and license-free; it is free for all uses.

General Description

Twofish is a 128-bit block cipher that accepts a variable-length key up to 256 bits. The cipher is a 16-round Feistel network with a bijective F function made up of four key-dependent 8-by-8-bit S-boxes, a fixed 4-by-4 maximum distance separable matrix over GF(2^8), a pseudo-Hadamard transform, bitwise rotations, and a carefully designed key schedule. A fully optimized implementation of Twofish encrypts on a Pentium Pro at 17.8 clock cycles per byte, and an 8-bit smart card implementation encrypts at 1660 clock cycles per byte [4]. Twofish can be implemented in hardware in 14000 gates. The design of both the round function and the key schedule permits a wide variety of tradeoffs between speed, software size, key setup time, gate count, and memory. We have extensively cryptanalyzed Twofish; our best attack breaks 5 rounds with 2^22.5 chosen plaintexts and 2^51 effort.

128-bit block

128-, 192-, or 256-bit key

16 rounds and works in all standard modes [3].

Encrypts data in:

18 clocks/byte on a Pentium

16.1 clocks/byte on a Pentium Pro

Figure 1: Blowfish Algorithm Architecture

Blowfish Algorithm

The data transformation process for Pocket Brief uses the Blowfish Algorithm for Encryption and Decryption, respectively. The details and working of the algorithm are given below.

Blowfish is a symmetric block cipher that can be effectively used for encryption and safeguarding of data. It takes a variable-length key, from 32 bits to 448 bits, making it ideal for securing data, as shown in Fig. 1. Blowfish was designed in 1993 by Bruce Schneier as a fast, free alternative to existing encryption algorithms. Blowfish is unpatented and license-free, and is available free for all uses.

The Blowfish algorithm is a Feistel network, iterating a simple encryption function 16 times. The block size is 64 bits, and the key can be any length up to 448 bits. Although there is a complex initialization phase required before any encryption can take place, the actual encryption of data is very efficient on large microprocessors. Blowfish is a variable-length key block cipher. It is suitable for applications where the key does not change often, like a communications link or an automatic file encryptor. It is significantly faster than most encryption algorithms when implemented on 32-bit microprocessors with large data caches.

Feistel Networks

A Feistel network is a general method of transforming any function (usually called an F function) into a permutation. It was invented by Horst Feistel and has been used in many block cipher designs. The working of a Feistel network is given below:

Split each block into halves

Right half becomes new left half

The new right half is the result of XOR'ing the old left half with the result of applying f to the right half and the key.

Note that each round can be undone even if the function f is not invertible.
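The steps above can be sketched in C. The round function F below is an arbitrary placeholder, not the F function of any particular cipher; the point is that undoing a round needs only F itself, never its inverse:

```c
#include <stdint.h>

/* Arbitrary placeholder round function: any function of the right
 * half and the round key works in a Feistel network. */
static uint32_t F(uint32_t half, uint32_t key) {
    return (half * 0x9e3779b9u) ^ key;
}

/* One forward round: the right half becomes the new left half; the
 * new right half is left XOR F(right, key). */
void feistel_round(uint32_t *left, uint32_t *right, uint32_t key) {
    uint32_t new_left  = *right;
    uint32_t new_right = *left ^ F(*right, key);
    *left  = new_left;
    *right = new_right;
}

/* Undoing a round recomputes F on the same input and XORs it away,
 * so F is never inverted. */
void feistel_unround(uint32_t *left, uint32_t *right, uint32_t key) {
    uint32_t old_right = *left;
    uint32_t old_left  = *right ^ F(old_right, key);
    *left  = old_left;
    *right = old_right;
}
```

Applying feistel_round and then feistel_unround with the same key returns the original halves, which is exactly why Feistel designs such as Blowfish, Twofish and TEA decrypt by running the rounds backwards.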

The Blowfish Algorithm

Manipulates data in large blocks

Has a 64-bit block size.

Has a scalable key, from 32 bits to at least 256 bits.

Uses simple operations that are efficient on microprocessors, e.g. exclusive-or, addition, table lookup, modular multiplication. It does not use variable-length shifts, bit-wise permutations, or conditional jumps [8].

Employs precomputable subkeys.

On large-memory systems, these subkeys can be precomputed for faster operation. Not precomputing the subkeys results in slower operation, but it should still be possible to encrypt data without any precomputation.

Consists of a variable number of iterations.

For applications with a small key size, the trade-off between the complexity of a brute-force attack and a differential attack makes a large number of iterations superfluous. Hence, it should be possible to reduce the number of iterations with no loss of security (beyond that of the reduced key size).

Uses subkeys that are a one-way hash of the key.

This allows the use of long passphrases for the key without compromising security.

Has no linear structures that reduce the complexity of exhaustive search.

Uses a design that is simple to understand. This facilitates analysis and increases confidence in the algorithm. In practice, this means that the algorithm will be a Feistel iterated block cipher.

Encryption Algorithm:

Blowfish has 16 rounds.

The input is a 64-bit data element, x.

Divide x into two 32-bit halves: xL, xR.

Then, for i = 1 to 16:

xL = xL XOR Pi

xR = F(xL) XOR xR

Swap xL and xR

After the sixteenth round, swap xL and xR again to undo the last swap.

Then, xR = xR XOR P17 and xL = xL XOR P18.

Finally, recombine xL and xR to get the ciphertext.

Decryption is exactly the same as encryption, except that P1, P2,..., P18 are used in the reverse order [8].
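The round structure just described can be sketched in C. This is not a full Blowfish implementation: the F function below is a placeholder for the real S-box-based one, and the subkeys are arbitrary stand-ins rather than the key- and pi-derived P-array. Only the Feistel skeleton and the reversed-subkey decryption are shown:

```c
#include <stdint.h>

#define ROUNDS 16

static uint32_t P[ROUNDS + 2];

/* Real Blowfish derives P[0..17] from the key and the hexadecimal
 * digits of pi; these arbitrary values only illustrate the structure. */
static void init_subkeys(void) {
    for (int i = 0; i < ROUNDS + 2; i++)
        P[i] = 0x9e3779b9u * (uint32_t)(i + 1);
}

/* Placeholder for Blowfish's S-box-based F function. */
static uint32_t F(uint32_t x) {
    return ((x >> 16) + x) ^ (x << 7);
}

void blowfish_encrypt_sketch(uint32_t *xL, uint32_t *xR) {
    for (int i = 0; i < ROUNDS; i++) {
        *xL ^= P[i];
        *xR = F(*xL) ^ *xR;
        uint32_t t = *xL; *xL = *xR; *xR = t;   /* swap xL and xR */
    }
    uint32_t t = *xL; *xL = *xR; *xR = t;       /* undo the last swap */
    *xR ^= P[16];
    *xL ^= P[17];
}

/* Decryption is the same algorithm with the subkeys in reverse order. */
void blowfish_decrypt_sketch(uint32_t *xL, uint32_t *xR) {
    for (int i = ROUNDS + 1; i > 1; i--) {
        *xL ^= P[i];
        *xR = F(*xL) ^ *xR;
        uint32_t t = *xL; *xL = *xR; *xR = t;
    }
    uint32_t t = *xL; *xL = *xR; *xR = t;
    *xR ^= P[1];
    *xL ^= P[0];
}
```

Because the structure is a Feistel network, decryption inverts encryption for any choice of F and P values; the security, of course, depends on the real F and the real key schedule.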

Implementations of Blowfish that require the fastest speeds should unroll the loop and ensure that all subkeys are stored in cache.

TEA Algorithm

The Tiny Encryption Algorithm is one of the fastest and most efficient cryptographic algorithms in existence. It was developed by David Wheeler and Roger Needham at the Computer Laboratory of Cambridge University. It is a Feistel cipher which uses operations from mixed (orthogonal) algebraic groups -- XOR, ADD and SHIFT in this case. This is a very clever way of providing Shannon's twin properties of diffusion and confusion, which are necessary for a secure block cipher, without the explicit need for P-boxes and S-boxes respectively. It encrypts 64 data bits at a time using a 128-bit key. It seems highly resistant to differential cryptanalysis, and achieves complete diffusion (where a one-bit difference in the plaintext causes approximately 32 bit differences in the ciphertext) after only six rounds. Performance on a modern desktop computer or workstation is very impressive. A copy of Roger Needham and David Wheeler's original paper describing TEA is available from the Security Group FTP site at the Cambridge Computer Laboratory. There is also a paper on extended variants of TEA which addresses a couple of minor weaknesses (irrelevant in almost all real-world applications) and introduces a block variant of the algorithm which can be even faster in some circumstances.

How secure is TEA?

There have been no known successful cryptanalyses of TEA. It is believed to be as secure as the IDEA algorithm, designed by Massey and Xuejia Lai. It uses the same mixed algebraic groups technique as IDEA, but it is much simpler and hence faster. Also, it is in the public domain, whereas IDEA is patented by Ascom-Tech AG in Switzerland. IBM's Don Coppersmith and Massey independently showed that mixing operations from orthogonal algebraic groups performs the diffusion and confusion functions that a traditional block cipher would implement with P- and S-boxes. As a simple plug-in encryption routine, it is excellent. The code is lightweight and portable enough to be used just about anywhere. It even makes a good random number generator for Monte Carlo simulations. The minor weaknesses identified by David Wagner at Berkeley are unlikely to have any impact in the real world, and you can always implement the new variant of TEA which addresses them. If you want a low-overhead end-to-end cipher (for real-time data, for example), then TEA fits the bill.

Encode Routine:

The routine below, written in the C language, encodes with key k[0]-k[3]. Data in v[0] and v[1].

void code(long* v, long* k) {
    unsigned long y = v[0], z = v[1], sum = 0,   /* set up */
                  delta = 0x9e3779b9,            /* a key schedule constant */
                  n = 32;
    while (n-- > 0) {                            /* basic cycle start */
        sum += delta;
        y += ((z << 4) + k[0]) ^ (z + sum) ^ ((z >> 5) + k[1]);
        z += ((y << 4) + k[2]) ^ (y + sum) ^ ((y >> 5) + k[3]);
    }                                            /* end cycle */
    v[0] = y;
    v[1] = z;
}

Decode Routine:

void decode(long* v, long* k) {
    unsigned long n = 32, sum, y = v[0], z = v[1],
                  delta = 0x9e3779b9;
    sum = delta << 5;
    while (n-- > 0) {                            /* cycle */
        z -= ((y << 4) + k[2]) ^ (y + sum) ^ ((y >> 5) + k[3]);
        y -= ((z << 4) + k[0]) ^ (z + sum) ^ ((z >> 5) + k[1]);
        sum -= delta;
    }                                            /* end cycle */
    v[0] = y;
    v[1] = z;
}
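For a quick round-trip check, the two routines above can be restated with C99 fixed-width types. Note that `long` is 64 bits wide on most modern platforms, so `uint32_t` is needed to reproduce the cipher's 32-bit arithmetic exactly; the logic is otherwise unchanged:

```c
#include <stdint.h>

#define TEA_DELTA 0x9e3779b9u   /* the key schedule constant */

/* TEA encode, as above but with portable 32-bit arithmetic. */
void tea_encode(uint32_t v[2], const uint32_t k[4]) {
    uint32_t y = v[0], z = v[1], sum = 0;
    for (int n = 0; n < 32; n++) {
        sum += TEA_DELTA;
        y += ((z << 4) + k[0]) ^ (z + sum) ^ ((z >> 5) + k[1]);
        z += ((y << 4) + k[2]) ^ (y + sum) ^ ((y >> 5) + k[3]);
    }
    v[0] = y;
    v[1] = z;
}

/* TEA decode: the same operations subtracted in reverse order,
 * starting from sum = delta * 32. */
void tea_decode(uint32_t v[2], const uint32_t k[4]) {
    uint32_t y = v[0], z = v[1], sum = TEA_DELTA << 5;
    for (int n = 0; n < 32; n++) {
        z -= ((y << 4) + k[2]) ^ (y + sum) ^ ((y >> 5) + k[3]);
        y -= ((z << 4) + k[0]) ^ (z + sum) ^ ((z >> 5) + k[1]);
        sum -= TEA_DELTA;
    }
    v[0] = y;
    v[1] = z;
}
```

Encoding a block and then decoding it with the same 128-bit key returns the original v[0] and v[1].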

SYSTEM ANALYSIS & DESIGN

System analysis refers to the process of examining a solution with the intention of improving it through better procedures and methods. Here the system is analyzed by studying the existing system, and the need for the proposed system is determined.

System analysis is a detailed study of the various operations performed by a system and their relationships within and outside the system: an examination of a business activity with a view to identifying problem areas and recommending alternative solutions.

Existing System:

A growing number of companies have to process huge amounts of data in a cost-efficient manner. Classic representatives for these companies are operators of Internet search engines. The vast amount of data they have to deal with every day has made traditional database solutions prohibitively expensive. Instead these companies have popularized an architectural paradigm based on a large number of commodity servers [6]. Problems like processing crawled documents or regenerating a web index are split into several independent subtasks, distributed among the available nodes, and computed in parallel.

Major cloud computing companies must access and share their data in an efficient manner.

These companies' resources, such as storage, accept parallel access for sharing or storing data.

There is no scheduling for accessing the resources of a homogeneous system, and no intermediate program to manage resource sharing in clouds.

Time is wasted while sharing requests wait to be served.

Big jobs are blocked entirely.

Proposed System:

In recent years a variety of systems to facilitate MTC (many-task computing) have been developed. Although these systems typically share common goals (e.g. to hide issues of parallelism or fault tolerance), they aim at different fields of application [2]. Encryption is designed to run data processing on a large number of jobs, which are expected to run across a large set of share-nothing commodity servers. Once a user has fit his program into the required process pattern, the execution framework takes care of splitting the job into subtasks, distributing and executing them [7]. A single encryption job always consists of a distinct encryption program.

The challenges of the existing system are solved here using a job manager and virtual machines.

The job manager schedules jobs on a priority basis.

The virtual machine arranges the resources in an efficient manner.

Big jobs are also scheduled for sharing in a timely manner.

No time is wasted on sharing and scheduling; resources are accessed in parallel.

System Design:

The system design involves system flow diagram, output design, input design, modular design and data flow diagram of the proposed system.

System design is a solution, a "how to" approach to the creation of a new system. It provides the understanding and procedural details necessary for implementing the system recommended in the feasibility study.

A design goes through logical and physical stages of development. Design is a creative process that involves working with an unknown new system, rather than analyzing an existing one. Thus, only in analysis is it possible to produce a correct model of the existing system.

SYSTEM IMPLEMENTATION

Network Module:

Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Often clients and servers operate over a computer network on separate hardware. A server machine is a high-performance host that runs one or more server programs which share its resources with clients. A client does not share any of its resources; clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.

Allocate Task:

In this module the service asks for the data to be processed by the encryption process. The client user has to supply the related data for task scheduling. The given data can be seen in the text area, and the order of the encryption models for processing must also be selected. The data is then passed on to the task scheduling process.

Scheduled Task:

Here the tasks sent by multiple clients are scheduled in the processing area. This module arranges each task in FIFO manner and shows each task in the list area. After a task is scheduled, its data is passed on to the virtual machine process.
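The FIFO arrangement described above amounts to a simple queue. The sketch below (the names and bounds are ours, not from the implemented system) dispatches client tasks strictly in arrival order:

```c
#include <stddef.h>

#define MAX_TASKS 64

/* A bounded FIFO queue of task identifiers: clients enqueue at the
 * tail, the dispatcher dequeues from the head. */
struct fifo_queue {
    int tasks[MAX_TASKS];
    size_t head, tail;   /* monotonically increasing counters */
};

/* Enqueue a task; returns -1 if the queue is full. */
int fifo_submit(struct fifo_queue *q, int task_id) {
    if (q->tail - q->head == MAX_TASKS)
        return -1;
    q->tasks[q->tail++ % MAX_TASKS] = task_id;
    return 0;
}

/* Dequeue the oldest task; returns -1 if nothing is scheduled. */
int fifo_next(struct fifo_queue *q, int *task_id) {
    if (q->head == q->tail)
        return -1;
    *task_id = q->tasks[q->head++ % MAX_TASKS];
    return 0;
}
```

Whatever order clients submit in is exactly the order in which the virtual machine process receives the tasks, which is the defining property of FIFO scheduling.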

Processing in Virtual Machine:

The virtual machine receives each process and assigns it to the processing servers. It searches for a free server to process the job. If every server is busy, it makes the remaining tasks wait. It only assigns jobs to servers and relays the output back to the client.

Processing Task:

This is the server area for processing the encryption of each request given by a client. Several servers are available for a large number of processing tasks. Each server performs the proper encryption process. After processing a job, the server forwards the result to the virtual machine, as shown in Fig. 2.

System Architecture:

Figure 2: System Architecture

Data Flow Diagram:

Figure 3: Data Flow Diagram

Database Design:

A database design is a must for any application developed, especially for data store projects. Since the chatting method involves storing messages in a table and producing them for the sender and receiver, proper handling of the table is a must. In this project, the admin table is designed so that the username is unique and the lengths of the username and password are greater than zero.

CONCLUSION

We have discussed the challenges and opportunities for efficient parallel data processing in cloud environments and presented Nephele, the first data processing framework to exploit the dynamic resource provisioning offered by today's IaaS clouds [7]. We have described Nephele's basic architecture and presented a performance comparison to the well-established data processing framework Hadoop [5]. The performance evaluation gives a first impression on how the ability to assign specific virtual machine types to specific tasks of a processing job, as well as the possibility to automatically allocate/deallocate virtual machines in the course of a job execution, can help to improve the overall resource utilization and, consequently, reduce the processing cost. With a framework like Nephele at hand, there are a variety of open research issues, which we plan to address for future work. In particular, we are interested in improving Nephele's ability to adapt to resource overload or underutilization during the job execution automatically. Our current profiling approach builds a valuable basis for this; however, at the moment the system still requires a reasonable amount of user annotations. In general, we think our work represents an important contribution to the growing field of Cloud computing services and points out exciting new opportunities in the field of parallel data processing.

FUTURE ENHANCEMENTS

We discussed resource sharing in the cloud and accessing resources in parallel. Here we use the encryption and decryption process as the resource and the input job as the user-given data; the input job is a Word document. In the future the input can be any type of document, such as Word, TXT, JPEG, MPEG and so on. Also, we currently use three encryption processes, the TEA, Twofish and Blowfish algorithms; in the future many more algorithms can be used for the sharing process. Compression and decompression could also be performed together with the encryption process.
