Cloud Based Simulations and Design Problems: CloudSim Exploration


Cloud Based Simulations and Design Problems

Coursework Type: Portfolio

Contents

Task 1:  Cloud Computing and Distributed Technologies: CloudSim Exploration

1.1 Description on CloudSim product

1.1.1 Description on Cloudsim

1.1.2 Cloudsim Architecture

1.1.3 Capabilities of Cloudsim

1.1.4 Differences between the Cloudsim Packages

1.2 Simulation

1.2.1 Simulation (a)

1.2.1a- Time-shared scheduling policy for VMs and Cloudlets.

1.2.1b- Time-shared scheduling policy for VMs, space-shared provisioning for cloudlets.

1.2.1c- Space-shared scheduling policy for Cloudlets and VMs.

1.2.1d- Space-shared scheduling policy for VMs and time-shared scheduling policy for cloudlets.

1.2.2  Simulation (b)

1.2.2a- Time-shared scheduling policy for VMs and Cloudlets.

1.2.2b- Time-shared scheduling policy for VMs, space-shared provisioning for cloudlets.

1.2.2c- Space-shared scheduling policy for Cloudlets and VMs.

1.2.2d- Space-shared scheduling policy for VMs and time-shared scheduling policy for cloudlets.

1.2.3  Simulation (c)

1.2.3a- Time-shared scheduling policy for VMs and Cloudlets.

1.2.3b- Time-shared scheduling policy for VMs, space-shared provisioning for cloudlets.

1.2.3c- Space-shared scheduling policy for Cloudlets and VMs.

1.2.3d- Space-shared scheduling policy for VMs and time-shared scheduling policy for cloudlets.

1.3 Result Presentation and Discussion

1.3.1 Simulation (a) report.

1.3.2 Simulation (b) report.

1.3.3 Simulation (c) report.

Task 2: Challenges/Latest research directions in distributed technologies and Cloud Computing

2.1 – Scalable storage issues and its design

2.2 – Business Continuity and Service Availability

 

TASK 1 - Provide an overview of the capabilities of CloudSim, describing the basic CloudSim model.

1.1.1 Description on CloudSim.

CloudSim is a generalized, extensible framework for simulating cloud infrastructure and the applications that run on it. Users of CloudSim can create and test the performance of their own cloud applications in a controlled, repeatable environment.

Some of the unique features of CloudSim are:

a)      A virtualization engine that supports the creation and management of virtualized services on a datacenter node.

b)      The ability to switch between time-shared and space-shared processing of the declared services.

With this basic understanding of CloudSim and its features, we can move on to the CloudSim model and its architecture.

1.1.2 Cloudsim Architecture

Figure 1 shows the basic CloudSim architecture.

Figure 1 Basic Cloudsim Architecture (Calheiros, Buyya et al., n.d)

The topmost layer of the CloudSim stack is the User Code layer. It specifies the host-related components (such as the architecture of the machines involved and the number of such machines), the services the user declares for those hosts, and the configuration of those applications. This layer also indicates the types of applications, the broker scheduling policies, and the user characteristics (such as the number of users and the type of service each user requests).

The next layer is the CloudSim layer, which is responsible for modelling and simulating the cloud-based applications specified by the user. It contains the interfaces for cloudlets and virtual machines; these interfaces in turn invoke the services that execute the cloudlets and manage the virtual machines that were created. Managing the virtual machines involves handling resources such as memory, storage, bandwidth and CPU cycles. This layer is also responsible for VM provisioning, which covers tasks such as granting disk access, allowing a VM image to be read, and cloning virtual machines. Below it sits the layer encompassing all the cloud resources used for event handling, datacenter characteristics, broker requests and so on. Finally, the lowest layer of the CloudSim stack is the network layer, which is responsible for error handling and message passing, throughput and delay calculation, and implementing the appropriate network topology (star, mesh, hub, etc.) for the application. The network layer receives messages from the sender, internally calculates the network delay, and then transmits the message to the CloudSim simulation engine (Calheiros, Buyya et al., n.d).
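The delay calculation described above can be illustrated with a small model. This is a hedged sketch, not CloudSim's actual implementation: the formula (propagation latency plus serialisation time) and the example link figures are assumptions made purely for illustration.

```python
# Toy model of a network layer computing the delay attached to a message
# before it is passed to the simulation engine. The formula and the example
# link parameters are illustrative assumptions, not CloudSim code.

def transfer_delay(message_bytes, bandwidth_bps, link_latency_s):
    """Propagation latency plus the time to serialise the message bits."""
    return link_latency_s + (message_bytes * 8) / bandwidth_bps

# A 1 MB message over a 100 Mbit/s link with 2 ms one-way latency:
delay = transfer_delay(1_000_000, 100_000_000, 0.002)  # about 0.082 s
```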

1.1.3 Capabilities of Cloudsim 

CloudSim is capable of creating and managing multiple executing services in a datacenter environment. It can also produce an abstraction of real-world entities and scenarios in a cloud architecture, so developers and cloud users can test their models, and any changes they intend to make to their cloud infrastructure, in a safe environment and observe the effects. CloudSim can also model large-scale infrastructure, with details such as networks with specified bandwidth, large datacenters, message-passing systems, hypervisors, etc. (Mitesh Soni, 2014).

1.1.4 Differences Between the Cloudsim Packages.

The examples present in the given directories were successfully executed, and the major differences between the three package directories are as follows:

1)      Package org.cloudbus.cloudsim.examples – This package contains programs demonstrating the creation of simulation entities such as datacenters, hosts and virtual machines, and the running of cloudlets on them. It includes programs that demonstrate the effect of changing specific parameters, such as creating multiple datacenters with a fixed number of hosts and running a varying number of cloudlets on them. The datacenter is a resource class whose hosts can be created and inspected; operating system, cost, architecture and memory are also part of this resource class. The scheduling policy for both cloudlets and virtual machines can be set to either time-shared or space-shared.

2)      Package org.cloudbus.cloudsim.examples.network – This package covers the creation of cloud simulation programs with a network topology. The standard (built-in) topology has 5 nodes and 8 edges and uses the RTWaxman model, a generation model that interconnects the nodes of a topology using a probabilistic selection criterion (Cloudbus, n.d).

3)      Package org.cloudbus.cloudsim.examples.network.datacentre – This package contains a template for modelling a network datacenter. It also contains an edge switch that simulates a real-life edge switch.

1.2) Simulations

1.2.1) This simulation creates 10 cloudlets and 2 VMs and assigns 5 cloudlets to each VM, using 1 host with 1 processor. In each run we vary the scheduling policies for the VMs and the cloudlets.
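The expected behaviour of the two cloudlet-scheduling policies on this setup can be sketched with a small model before looking at the outputs. This is not CloudSim itself; it assumes each cloudlet is 1000 MI long and each VM effectively runs at 1 MIPS, so a lone cloudlet takes 1000 time units (figures chosen to match the results reported in section 1.3.1).

```python
# Illustrative model (not CloudSim) of when the 5 cloudlets on one VM finish,
# given a single processing core. Assumed: cloudlet length 1000 MI, 1 MIPS.

def finish_times(n_cloudlets, length_mi, mips, policy):
    """Completion time of each cloudlet on one single-core VM."""
    base = length_mi / mips  # runtime of one cloudlet running alone
    if policy == "time-shared":
        # All cloudlets share the core equally, so every one finishes together.
        return [n_cloudlets * base] * n_cloudlets
    if policy == "space-shared":
        # Cloudlets run one after another; the k-th finishes at k * base.
        return [(k + 1) * base for k in range(n_cloudlets)]
    raise ValueError(f"unknown policy: {policy}")

print(finish_times(5, 1000, 1, "time-shared"))   # every cloudlet finishes at 5000
print(finish_times(5, 1000, 1, "space-shared"))  # first finishes at 1000
```

Under time-sharing every cloudlet reports the long 5000-unit completion time, while under space-sharing the first cloudlet already completes at 1000, matching the pattern discussed in the result tables below.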

1.2.1a) Time-shared VMScheduler Policy and Time-shared CloudletScheduler Policy


1.2.1b) Space Shared CloudletScheduler Policy and Time Shared VMScheduler Policy


1.2.1c) Space Shared CloudletScheduling Policy and VMScheduling Policy


1.2.1d) Space Shared VMScheduler Policy and Time Shared CloudletScheduler Policy


1.2.2) This simulation again creates 10 cloudlets and 2 VMs and assigns 5 cloudlets to each VM, but uses 1 host with 2 processors. As before, we vary the scheduling policies for the cloudlets and VMs.

1.2.2a) Time-shared VMScheduler Policy and Time-shared CloudletScheduler Policy


1.2.2b) Space Shared CloudletScheduler Policy and Time Shared VMScheduler Policy


1.2.2c) Space Shared CloudletScheduling Policy and VMScheduling Policy


1.2.2d) Space Shared VMScheduler Policy and Time Shared CloudletScheduler Policy


1.2.3) This simulation creates 10 cloudlets and 2 VMs, assigns 5 cloudlets to each VM, and uses 2 hosts, each with an identical PE. As before, we vary the scheduling policies for the cloudlets and VMs.

1.2.3a) Time-shared VMScheduler Policy and Time-shared CloudletScheduler Policy


1.2.3b) Space Shared CloudletScheduler Policy and Time Shared VMScheduler Policy


1.2.3c) Space Shared CloudletScheduling Policy and VMScheduling Policy


1.2.3d) Space Shared VMScheduler Policy and Time Shared CloudletScheduler Policy


1.3) RESULT PRESENTATION AND DISCUSSION

1.3.1) Simulation (a) report

Scheduling policy (Cloudlet-VM)   Time-Time   Space-Time   Space-Space   Time-Space
Total processing time             5000        1000         1000          5000
Total number of cloudlets         10          10           5             10
No. of virtual machines           2           2            2             2
No. of data centers               1           1            1             1
No. of hosts                      1           1            1             1

Remarks:
- Time-Time: all 10 cloudlets were assigned to their VMs and executed successfully, but execution time is long because of the limited number of processing cores.
- Space-Time: all 10 cloudlets were assigned to their VMs and executed successfully, and execution time is much shorter because queued tasks are held in memory rather than waiting for the previous task to complete.
- Space-Space: only 5 cloudlets executed successfully; the other 5 did not run because there was not enough memory and processing capacity for all 10.
- Time-Space: all 10 cloudlets were assigned to their VMs and executed successfully, but execution time is long because the VM must allocate capacity each time a slot becomes free.


1.3.1a)  Time-shared VMScheduler Policy and Time-shared CloudletScheduler Policy


1.3.1b) Space Shared CloudletScheduler Policy and Time Shared VMScheduler Policy


1.3.1c) Space Shared CloudletScheduling Policy and VMScheduling Policy


1.3.1d) Space Shared VMScheduler Policy and Time Shared CloudletScheduler Policy


1.3.2) Simulation (b) report

Scheduling policy (Cloudlet-VM)   Time-Time   Space-Time   Space-Space   Time-Space
Total processing time             5000        1000         1000          2500
Total number of cloudlets         10          10           10            10
No. of virtual machines           2           2            2             2
No. of processing cores           2           2            2             2
No. of data centers               1           1            1             1
No. of hosts                      1           1            1             1

Remarks:
- Time-Time: all 10 cloudlets were assigned to their VMs and executed successfully; even with the additional cores, the execution time remains the same.
- Space-Time: all 10 cloudlets were assigned to their VMs and executed successfully; this combination gives the best execution time.
- Space-Space: all 10 cloudlets executed successfully here, because each processor handles the additional tasks using its memory.
- Time-Space: execution time is intermediate, because the VMs allocate capacity to tasks as they arrive; even with 2 PEs, allocation must still happen each time.

1.3.2a) Time-shared VMScheduler Policy and Time-shared CloudletScheduler Policy

1.3.2b) Space Shared CloudletScheduler Policy and Time Shared VMScheduler Policy


1.3.2c) Space Shared CloudletScheduling Policy and VMScheduling Policy


1.3.2d) Space Shared VMScheduler Policy and Time Shared CloudletScheduler Policy


1.3.3) Simulation (c) report

Scheduling policy (Cloudlet-VM)   Time-Time   Space-Time   Space-Space   Time-Space
Total processing time             400-800     80-800       80-800        80-800
Total number of cloudlets         10          10           10            10
No. of virtual machines           2           2            2             2
No. of processing cores           1           1            1             1
No. of data centers               1           1            1             1
No. of hosts                      2           2            2             2

Remarks:
- Time-Time: total execution time varies from 400 to 800 because the two hosts run at different times and take different amounts of time per task; one host runs all of its tasks before the next host starts, since each host has only one processor.
- Space-Time: the tasks show a range of completion times because the first host executes each task as soon as space for it becomes available; once it completes all its tasks, the next host is scheduled.
- Space-Space: execution time again varies across the range because the hosts are allocated space as it becomes available; once one host completes its tasks, the next is scheduled.
- Time-Space: there is no improvement in the execution-time sequence; the same pattern seen with the other combinations is repeated.

1.3.3a) Time-shared VMScheduler Policy and Time-shared CloudletScheduler Policy


1.3.3b) Space Shared CloudletScheduler Policy and Time Shared VMScheduler Policy


1.3.3c)  Space Shared CloudletScheduling Policy and VMScheduling Policy


1.3.3d) Space Shared VMScheduler Policy and Time Shared CloudletScheduler Policy


Task 2:  Challenges/latest research directions in distributed technologies and Cloud Computing

 PRIVATE CLOUD INFRASTRUCTURE FOR THE NATIONAL HEALTH SERVICE (NHS)

The NHS is the single largest publicly funded healthcare organization in the United Kingdom. It is responsible not only for the treatment of patients but also for maintaining records for all residents of the UK, foreign or domestic. These electronic records number in the millions, and all of them must be maintained and managed by the NHS, which raises challenges around staffing, finance, security and more. In 2017, the WannaCry ransomware attack held the NHS to ransom, affecting vast amounts of data, forcing appointments to be cancelled and ailing patients to be turned away, and demanding large sums of money for the safe release of the records (Information Age, 2017). It is therefore of paramount importance that security be one of the vital factors of our solution. A paradigm shift must be introduced so that the handling and operation of these records becomes seamless, secure and instant; this involves modernizing the legacy systems the NHS still runs on (TrustMarque, February 2018). To this end we propose a hybrid, service-oriented cloud architecture involving a mix of public and private cloud infrastructure, a change from the purely public cloud architecture the NHS currently uses (Information Age, June 2018). The diagram below illustrates the hybrid cloud model infrastructure.

Figure 2 NHS ARCHITECTURE DESIGN MODEL

To begin with, we must understand the potential users of the system. The NHS connects not only with patients but also with hospitals, practices and others, so the choice of a hybrid model is beneficial. The clients represent all the entities that use the cloud, such as doctors, NHS employees and hospitals. The public cloud component was introduced to provide flexibility; it serves users who do not add much to the central database of NHS records but only need basic functions such as viewing records or tracking patient appointments. The public cloud is provided by a third-party cloud provider, who supplies all the hardware, software and infrastructure and is also in charge of updates, instant service provision and hosting. The provider's software handles tasks such as appointment and patient management, concurrency control and order entry. The infrastructure handles job control, security, the creation and operation of virtual machines and multiple operating systems, network topology, message passing, storage, delay computation, etc.

When we transition to the private cloud, the same infrastructure is used with all the same functionality. In the private cloud, the infrastructure is hosted on premises at the NHS and is managed and maintained solely by the NHS (Oh, Cha, Ji, Kang et al., 2015). Since the NHS has the responsibility to safely operate, alter and extract the records, it is better for it to maintain them in-house rather than allocate them to a third-party vendor. Doing so guards against loss or breach of data and against hardware or software failures, and, as the major factor, against malicious attack over the public internet. The private cloud is also responsible for deploying whatever applications the clients may require, and it has functions to monitor and track the usage and expenditure of resources, preventing any unnecessary consumption.

2.1 Scalable Storage Issues and its design:

The NHS's current cloud storage solution involves just layers of networked storage with their associated backups. A further scalable-storage issue the NHS faces is that, owing to a rapidly growing population and to economic and governmental factors, the volume of stored information is growing exponentially. The current storage solution will not hold as the data grows daily, leading to problems such as crashes, loss of data, erroneous entries, susceptibility to hacking, long wait times and low performance. Moreover, the NHS currently runs only on a public cloud with a shared database; being open to everyone, it allows any client to cause the failure of the system, because it cannot handle several hundred simultaneous users while maintaining a schema for each tenant. Scaling up this storage would not be easy: all the information currently held would first have to be consolidated, and reprogramming would then be needed to add the new volume and to create replicas of it for use as a hot standby if and when it goes down. The newly devised solution for the private cloud instead uses the same storage structure as Amazon's EC2 architecture, Elastic Block Store (EBS) (Jeff Barr, 2008). EBS offers block-storage volumes that are continuously replicated when required, and it delivers the consistent, low-latency performance needed for any type of workload. When this storage is combined with the network module (present outside the infrastructure), it can provide features such as multi-tenancy, low latency, low message-drop rates and concurrency.
The benefit of this model is that it incorporates the current public cloud architecture, but the infrastructure allocates only limited storage, such as a cache and a database, to clients who access it from outside the NHS, such as hospitals and practice doctors. Because security is one of the primary concerns, sensitive information placed in the public cloud, where multiple people have access, must be handled with care. The data placed in the public cloud is therefore for temporary use and serves functions such as viewing and appending records, booking and checking appointments, health-insurance checks, patient history and access to applications. Where hospitals need to access applications such as X-ray analysis, sonogram evaluation or blood tests, the flexibility of our model allows third-party vendors to provide access to these applications at low cost, store the data in the storage they provide for the public cloud and, if necessary, offload that data onto the NHS's private servers and databases seamlessly and effortlessly.
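The EBS-style replication described above can be sketched as a toy model. This is an illustration of the idea only, not the real Amazon EBS API: every block written to a primary volume is mirrored synchronously to a replica, and the replica can be promoted if the primary volume is lost.

```python
# Toy sketch of replicated block storage (not the real Amazon EBS API):
# writes go to a primary volume and are mirrored to a replica, so the
# replica can serve as a hot standby if the primary volume fails.

class ReplicatedVolume:
    def __init__(self):
        self.primary = {}   # block_id -> data on the primary volume
        self.replica = {}   # mirrored copy kept for recovery

    def write(self, block_id, data):
        # Synchronous replication: both copies are updated before returning.
        self.primary[block_id] = data
        self.replica[block_id] = data

    def read(self, block_id):
        return self.primary[block_id]

    def promote_replica(self):
        # On primary failure, the mirrored copy becomes the new primary.
        self.primary = self.replica

vol = ReplicatedVolume()
vol.write("blk-001", b"patient record")
vol.promote_replica()
print(vol.read("blk-001"))  # the data survives the failover
```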

2.2 Business Continuity and Service Availability: Issues and Design 

Business continuity and service availability involve applying resilience, recovery and contingency strategies to overcome and reduce the impact of incidents and disasters that may affect, or are affecting, a company or business (Margaret Rouse, n.d). When considering a cloud solution, a business examines these strategies and possible ways to apply them to its own scenario. Business continuity is about keeping critical business functions continuously running; service availability is about keeping the services, and the key supporting infrastructure, available to clients and customers. Common triggers for adopting such measures include natural and man-made disasters, server crashes and accidents caused by human error (Ready, n.d). The current NHS cloud system provides redundancy by keeping multiple backups of the data in the public cloud, which is not only time-consuming but also a waste of valuable resources, and the business remains exposed to events such as fires and unexpected crashes. Their system also includes risk management that identifies potential threats so they can be mitigated when they occur, together with a semi-structured, rather crude, way of identifying potential threats and disruptions to services. In our proposed solution we adopt business impact analysis, quality management (MasterControl, n.d) and risk management as our policies for business management and service continuity. These policies involve identifying the key business functions concerned with the NHS IT platform and its operation, then assigning each function a priority level so that it can be monitored continuously or at regular intervals. They can also be used to calculate the effect of particular problems on the operational, strategic and functional tasks of the business.
Even these policies cannot account for human error in operating the cloud infrastructure, so we propose extra vigilance during operation of the cloud platform, along with surveillance by certified and qualified personnel. As a further safeguard we recommend keeping offsite equipment as a hot standby, holding a replicated copy of the data, in case of any problems at the primary site. Since we employ a hybrid cloud model, planning, recovery and mitigation are needed for both the public and the private cloud. For the public cloud, because we use third-party vendors, we can inherit their existing business-continuity policies; for service continuity, which is of paramount importance, we can either switch cloud providers and migrate our applications and services on the fly, or obtain Service Level Agreements (SLAs) from the current provider guaranteeing availability without any disruption of service or applications. The harder challenge is choosing policies for the private cloud, where we need continuous access, up-to-date information, high availability, security and the infrastructure for the data, services and applications. A viable and attractive option is the hot standby coupled with the offsite backup and maintenance mentioned above (anon., August 2013). The benefit of this approach is the high, ready availability of the applications and data when required. It works as follows: when a disaster or problem affects the main site, the infrastructure immediately switches to the standby infrastructure available at the secondary site (Reuters, 2013). The backup is continuously updated with the data and services from the primary; the backup then becomes the primary until the previous primary returns to service, at which point the current primary becomes the secondary again.
The services resume normally, and clients can access them regardless of their location. During a disaster the workload of the NHS rises sharply, and if the primary site is damaged the secondary site may face a huge workload due to the high traffic; in this situation the infrastructure uses its load balancers to distribute the workload across multiple sites and avoid failure.
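The hot-standby arrangement described above can be sketched in code. This is an illustrative model with invented site names, not an NHS system: every update is mirrored to the standby site, and requests are routed to it automatically when the primary site fails.

```python
# Illustrative sketch of the hot-standby scheme: the standby site mirrors
# every update made at the primary, and traffic fails over to it when the
# primary becomes unhealthy. Site names are invented for the example.

class Site:
    def __init__(self, name):
        self.name = name
        self.records = {}   # replicated data held at this site
        self.healthy = True

class HotStandby:
    def __init__(self, primary, standby):
        self.primary = primary
        self.standby = standby

    def write(self, key, value):
        # Continuous replication: apply every update at both sites.
        self.primary.records[key] = value
        self.standby.records[key] = value

    def active_site(self):
        # Route to the standby as soon as the primary is down.
        return self.primary if self.primary.healthy else self.standby

primary, standby = Site("primary-dc"), Site("standby-dc")
cluster = HotStandby(primary, standby)
cluster.write("nhs-0001", "record")
primary.healthy = False               # simulate a disaster at the main site
print(cluster.active_site().name)     # requests now go to standby-dc
```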

References:

  • Calheiros, Ranjan, Beloglazov, Rose and Buyya “Cloudsim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms”. doi: 10.1002/spe.995.
  • Oh S, Cha J, Ji M, Kang H, Kim S, Heo E, Han JS, Kang H, Chae H, Hwang H, Yoo S.  Architecture Design of Healthcare Software-as-a-Service Platform for Cloud-Based Clinical Decision Support Service.   Healthcare Inform Res. 2015 Apr;21(2):102-110.   https://doi.org/10.4258/hir.2015.21.2.102.
  • Ready (n.d) “Business Continuity Plan” [online] available from <https://www.ready.gov/business/implementation/continuity> (November 21, 2018).
  • Information Age (18 June 2018) “How cloud technology is transforming the healthcare industry” [online] available from <https://www.information-age.com/cloud-technology-transforming-healthcare-industry-123472352/> (November 16, 2018).
  • Xinlei Wang and Yubo Tan “Application of cloud computing in the Health Information System” [online] International Conference on Computer Application and System Modeling (ICCASM 2010) doi:10.1109/iccasm.2010.5619051.
  • Reuters (August 21, 2013) “U.S. regulators urge firms to improve business continuity and disaster recovery plans” [online] available from <http://blogs.reuters.com/financial-regulatory-forum/2013/08/21/u-s-regulators-urge-firms-to-improve-business-continuity-and-disaster-recovery-plans/> (November, 2018).
  • Jeff Barr (August 20, 2008) “Amazon EBS (Elastic Block Storage)” [online] available from <https://aws.amazon.com/blogs/aws/amazon-elastic/>.
  • Trustmarque (February 27, 2018) “Barriers to the cloud removed by NHS” [online] available from <https://www.trustmarque.com/news/barriers-cloud-removed-nhs/>.
  • Mitesh Soni (March 3, 2014) “The CloudSim Framework: Modelling and Simulating the Cloud Environment” [online] available from <https://opensourceforu.com/2014/03/cloudsim-framework-modelling-simulating-cloud-environment/>.
  • Cloudbus (n.d) "Utilization Model for Network Simulation in Cloudsim" [online] available from <http://www.cloudbus.org/cloudsim/doc/api/org/cloudbus/cloudsim/class-use/UtilizationModel.html#org.cloudbus.cloudsim.network.datacenter>.
  • MasterControl (n.d) “Cloud Based Quality Management Control” [online] available from <https://www.mastercontrol.com/uk/quality-management-software/cloud_qms.html>.
  • Margaret Rouse (n.d) “Business Continuity” [online] available from <https://searchdisasterrecovery.techtarget.com/definition/business-continuity>.
  • Information Age (20 March, 2017) “Can the cloud save the NHS from a data breach epidemic?” [online] available from <https://www.information-age.com/can-cloud-save-nhs-data-breach-epidemic-123465121/>.
