Simulation of Fog Computing for Internet of Things (IoT) Networking
Paper Type: Free Essay | Subject: Computer Science | Wordcount: 5747 words | Published: 23rd September 2019
ABSTRACT:
Despite the expanding use of cloud computing, several problems remain unsolved because of inherent limitations of the cloud, such as unreliable latency, lack of mobility support, and lack of location awareness. Fog computing can address these issues by providing elastic resources and services to end users at the edge of the network, whereas cloud computing is largely about providing resources deployed in the core network. This project demonstrates the concept and simulation of Fog computing using Cisco Packet Tracer (a networking perspective) and Amazon AWS (a cloud platform). Cisco Packet Tracer is a network simulation tool, and Amazon AWS is a cloud computing platform; together they can simulate Internet of Things (IoT) nodes connected to a core network through a fog layer. The size and computing speed of the edge network can then be optimized.
TABLE OF CONTENTS
Abstract
Acknowledgement
1. Introduction
2. Architecture & Implementation
2.1. Cisco Packet Tracer
2.2. Amazon Web Service
2.3. Simulation in AWS Platform
2.4. Fault Tolerance Environment
3. Results
4. Conclusion
5. Future Scope
6. References
Fig 1: Fog Computing Environment
Typically, a Fog computing environment is composed of conventional networking components such as routers, switches, set-top boxes, proxy servers, and base stations (BS), placed in close proximity to IoT devices and sensors. These components are equipped with varying computing, storage, and networking capabilities and can support the execution of service applications. Consequently, the networking components enable Fog computing to create large geographical distributions of cloud-based services. In addition, Fog computing facilitates location awareness, mobility support, real-time interaction, scalability, and interoperability. Fog computing can therefore perform efficiently in terms of service latency, power consumption, network traffic, capital and operational expenses, and content distribution. In this sense, Fog computing meets the requirements of IoT applications better than an exclusive reliance on cloud computing.
Fig 2: Working of Fog
Fog computing can also empower edge computation. However, beyond the edge network, Fog computing can be extended into the core network as well: both edge and core networking components can be utilized as computational infrastructure in Fog computing. Consequently, multi-level application deployment and mitigation of the service demands of huge numbers of IoT devices and sensors can readily be achieved through Fog computing. In addition, Fog computing components at the edge network can be placed closer to IoT devices and sensors than cloudlets and cellular edge servers. As IoT devices and sensors are densely dispersed and require real-time responses to service requests, this approach enables IoT data to be stored and processed within the vicinity of the devices themselves. Fog computing can also extend cloud-based service models such as IaaS and PaaS. Owing to these features, Fog computing is considered better suited and better structured for IoT than other related computing paradigms.
2. Architecture & Implementation:
2.1. Cisco Packet Tracer:
Cisco Packet Tracer is a network simulation tool designed by Cisco Systems that allows users to create network topologies and study different network behaviors. Packet Tracer lets users simulate the configuration of Cisco routers and switches through a simulated command-line interface.
Fig: 3 Fog Computing Architecture in Cisco Packet Tracer
In Cisco Packet Tracer we created a network topology with the cloud server as the topmost layer and the fog server in the middle; the fog server is connected to the end devices through switches and routers. We used generic switches and routers in the topology. An IP address is assigned to each router, end device, and server, and static routes are configured on Router 0 and Router 1. A ping from the host PC (end device) with IP address 192.168.1.1 to the cloud server at 192.168.3.3 takes 9 ms on average, while a ping from the same host to the fog server at 192.168.2.1 takes 5 ms on average. From this comparison we can conclude that the fog server exhibits lower latency than the cloud server.
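The comparison above can be reproduced offline with a short sketch. The RTT sample lists below are illustrative stand-ins chosen to average to the 9 ms and 5 ms figures measured in the simulation; substitute the values from your own ping captures.

```python
# Average the round-trip times reported by ping and compare fog vs. cloud.

def average_rtt(samples_ms):
    """Mean round-trip time of a list of ping samples, in milliseconds."""
    return sum(samples_ms) / len(samples_ms)

cloud_rtts = [10, 9, 8, 9]   # pings to 192.168.3.3 (cloud server)
fog_rtts = [5, 6, 4, 5]      # pings to 192.168.2.1 (fog server)

cloud_avg = average_rtt(cloud_rtts)   # 9.0 ms
fog_avg = average_rtt(fog_rtts)       # 5.0 ms
reduction = (cloud_avg - fog_avg) / cloud_avg * 100
print(f"fog reduces latency by {reduction:.1f}%")   # 44.4%
```

With the measured averages, moving the server into the fog layer cuts round-trip latency by roughly 44%.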
2.2. Amazon Web Service:
2.3. Simulation in Amazon Web Service Platform:
- Testing without Fog Node:
1) We set up an EC2 instance in AWS that acts as a web server. Figure 4 shows the deployed EC2 instance.
Fig:4 Deployed EC-2 instance acting as a web server
2) Figure 5 shows the Linux web server page.
Fig:5 Amazon Linux AMI page from EC2 instance serving as a web server
- Testing with Fog Nodes:
CloudFront is an AWS service whose role is to distribute static and dynamic web content to customers. CloudFront serves the data from a global network of data centers: when a customer requests content through CloudFront, the request is routed to the edge location with the lowest latency, so that the data is delivered with the best possible performance.
- When the requested data is already at the edge location with the lowest latency, CloudFront delivers it immediately.
- When the data is not at that edge location, CloudFront retrieves it from an HTTP server or any other endpoint that has been defined as the origin for the data.
Fig:6 Content Delivery Network (CDN) Architecture
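The hit/miss behavior described above can be sketched as a tiny model of an edge cache that falls back to an origin server. The class and variable names here are hypothetical illustrations, not CloudFront's API.

```python
# Minimal model of a CDN edge location: serve cached objects immediately
# (a hit), otherwise fetch from the origin server and cache the result
# (a miss), so later requests for the same object are served at the edge.

class EdgeCache:
    def __init__(self, origin):
        self.origin = origin      # maps path -> content at the source
        self.cache = {}           # objects already held at this edge
        self.hits = 0
        self.misses = 0

    def get(self, path):
        if path in self.cache:    # hit: deliver from the edge, low latency
            self.hits += 1
            return self.cache[path]
        self.misses += 1          # miss: retrieve from origin, then cache
        content = self.origin[path]
        self.cache[path] = content
        return content

edge = EdgeCache(origin={"/index.html": "<html>hello</html>"})
edge.get("/index.html")           # first request: miss, fetched from origin
edge.get("/index.html")           # second request: hit, served from edge
print(edge.hits, edge.misses)     # 1 1
```

This is why the first visitor from a region pays the origin round trip while subsequent visitors see edge-cache latency.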
Figure 7 shows the CDN distribution created for this environment.
Fig:7 Creation of CDN
Fig:8 Webpage accessed via CloudFront
2.4. Fault Tolerance Environment:
To add fault tolerance and high-performance capability to our environment, we used the Auto Scaling and Elastic Load Balancing services in AWS. The architecture is explained below.
Figure 9: Architecture in AWS for fault tolerance in the AWS environment
Elastic Load Balancing (ELB) distributes incoming HTTP traffic across multiple targets, such as EC2 instances. ELB can handle traffic within a single Availability Zone or across several Availability Zones. ELB offers three types of load balancers, all providing the high availability, automatic scaling, and robust security necessary to make applications fault tolerant.
ELB provides load balancing across multiple domains. The Classic Load Balancer is used for applications built on the EC2-Classic network.
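The traffic distribution a load balancer performs can be illustrated with a round-robin sketch. This is a deliberate simplification: a real ELB also health-checks its targets and supports other routing algorithms, and the target names below are made up.

```python
from itertools import cycle

# Round-robin dispatch of incoming requests across EC2 targets, the
# simplest strategy a load balancer can apply: each new request goes
# to the next instance in the rotation.
targets = ["ec2-a", "ec2-b", "ec2-c"]
rotation = cycle(targets)

def route(request):
    """Assign a request to the next target in the rotation."""
    return next(rotation)

assigned = [route(f"req-{i}") for i in range(6)]
print(assigned)   # ['ec2-a', 'ec2-b', 'ec2-c', 'ec2-a', 'ec2-b', 'ec2-c']
```

Even this naive scheme keeps any single instance from absorbing all the traffic, which is the property the fault-tolerant architecture relies on.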
Amazon's Auto Scaling service helps ensure that the right number of EC2 instances is available to handle the traffic for the user's application. Collections of EC2 instances are called Auto Scaling groups. A minimum number of instances can be set for each Auto Scaling group, and Auto Scaling ensures the group never shrinks below this limit. Similarly, the user can set a maximum number of instances for each group, and Auto Scaling never lets the group grow above it. Instances can thus be created, deleted, and scaled up or down on user demand.
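The min/max clamping and replacement behavior of an Auto Scaling group can be sketched in a few lines. This models the logic only; the class and instance names are hypothetical, not AWS identifiers.

```python
# Model of an Auto Scaling group: desired capacity is always clamped
# between the configured minimum and maximum, and terminated instances
# are replaced to keep the group at its desired size.

class AutoScalingGroup:
    def __init__(self, minimum, maximum, desired):
        self.minimum, self.maximum = minimum, maximum
        self.desired = self._clamp(desired)
        self.instances = [f"i-{n}" for n in range(self.desired)]

    def _clamp(self, n):
        return max(self.minimum, min(self.maximum, n))

    def scale_to(self, n):
        """Request a new capacity; the group never leaves [min, max]."""
        self.desired = self._clamp(n)
        self._reconcile()

    def terminate(self, instance):
        """Simulate an instance failure; the group spawns a replacement."""
        self.instances.remove(instance)
        self._reconcile()

    def _reconcile(self):
        while len(self.instances) < self.desired:
            self.instances.append(f"i-new{len(self.instances)}")
        while len(self.instances) > self.desired:
            self.instances.pop()

group = AutoScalingGroup(minimum=2, maximum=4, desired=2)
group.terminate(group.instances[0])   # one instance fails...
print(len(group.instances))           # ...and is replaced: 2
group.scale_to(10)                    # request above maximum
print(len(group.instances))          # clamped to the maximum: 4
```

The same clamp-and-reconcile loop explains the behavior observed in the fault-tolerance experiment below, where a terminated instance is replaced within minutes.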
2.4.1. Implementation of the fault tolerant architecture
[1] A Classic Load Balancer is created in AWS, allowing HTTP traffic to flow through the network.
Fig: 10 Load balancer Creation
[2] An Auto Scaling group with two instances is created using the AWS service, and the remaining configuration of the environment is completed.
Fig: 11 Autoscaling group is created
[3] The two running instances spawned by the Auto Scaling group.
Fig: 12 Two running instances
[4] The Apache web server running on both instances.
Fig:13 Apache Web Server
[5] The instance with IP address 107.23.84.16 is terminated to simulate an environment in which a fault occurs in our web server.
Fig:14 Terminating one of the EC2 Instance
[6] A new EC2 instance is created by the Auto Scaling group we configured earlier. Thus, if any server goes down for any reason, the Auto Scaling group helps ensure that the right number of EC2 instances is available to handle the load in the environment.
Fig: 15 Two EC2 Instances are running
3. Results:
3.1. Cisco Packet Tracer:
Figures 16-19 show the ping and traceroute outputs from the topology described in Section 2.1. Pinging from the host PC (192.168.1.1) to the cloud server (192.168.3.3) takes 9 ms on average, while pinging the fog server (192.168.2.1) from the same host takes 5 ms on average, confirming that the fog server offers lower latency than the cloud server.
Fig: 16 Pinging from Host to Cloud Server
Fig:17 Pinging from Host to Fog Server
Fig: 18 Traceroute to Cloud Server (192.168.3.2)
Fig:19 Traceroute to Fog Server (192.168.2.1)
3.2. Amazon Web Service Platform:
In AWS we created a static website and first accessed it from India (via VPN), without fog nodes, to observe the latency. India showed the highest latency of the locations tested, with an average of 571 ms. When AWS CloudFront is configured for the same environment and the same website is accessed from India again, the difference between the two latencies is clear: the average drops to 88 ms. Using a CDN, we can therefore conclude that websites can be accessed with low latency from any location in the world.
Fig:20 Ping from India before CDN deployment
Fig:21 Ping from India after CDN Deployment
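The improvement reported above is easy to quantify from the two measured averages (571 ms direct, 88 ms via the CDN):

```python
# Percentage latency reduction after deploying the CloudFront CDN,
# using the average round-trip times measured from India.
before_ms, after_ms = 571, 88
reduction = (before_ms - after_ms) / before_ms * 100
speedup = before_ms / after_ms
print(f"{reduction:.1f}% lower latency, {speedup:.1f}x faster")
# 84.6% lower latency, 6.5x faster
```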
In the fault-tolerant environment, the figures show that as soon as one of the instances exceeds the configured CPU threshold or terminates, a new instance automatically spawns within a couple of minutes. The Auto Scaling service thus monitors the environment to keep the system running at the desired performance level: when there is a spike in network traffic, Auto Scaling automatically adds instances so that the load on the system is reduced.
Fig: 22 Network In before instance fails
Fig: 23 Network Out before instance fails
Fig: 24 CPU utilization before instance fails
Fig: 25 Network packets in before instance fails
Fig: 26 Network packets out before instance fails
Fig: 27 Network in after 1st instance fails (blue line indicates new instance)
Fig: 28 Network out after 1st instance fails (blue line indicates new instance)
Fig: 29 Network packets in after 1st instance fails (blue line indicates new instance)
Fig: 30 Network packets out after 1st instance fails (blue line indicates new instance)
Fig: 31 CPU utilization after 1st instance fails (blue line indicates new instance)
The following charts show information about the devices from which CloudFront received requests for the selected distribution. The Devices charts are available only for web distributions that had activity during the specified period and that have not been deleted.
This chart shows the percentage of requests that CloudFront received from the most popular types of device. Valid values include:
- Desktop
- Mobile
- Unknown
Fig:32 Types of Devices
The chart shows each outcome as a percentage of all viewer requests for the created CloudFront distribution:
- Hits: viewer requests for which the object is served from a CloudFront edge cache.
- Misses: viewer requests for which the object is not currently in a cache, so CloudFront must fetch it from the origin.
- Errors: viewer requests that resulted in an error, so CloudFront did not serve the object.
Fig:33 Cache Results
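The percentages in the Cache Results chart are simply each category's share of total viewer requests. A quick sketch (the request counts below are made-up inputs, not values from this distribution):

```python
# Express CloudFront cache results (hits, misses, errors) as a share of
# all viewer requests, as the Cache Results chart does.
requests = {"hits": 850, "misses": 130, "errors": 20}
total = sum(requests.values())
shares = {k: v / total * 100 for k, v in requests.items()}
print(shares)   # {'hits': 85.0, 'misses': 13.0, 'errors': 2.0}
```

A high hit share indicates that most content is being served from the edge rather than the origin, which is exactly the latency benefit measured in Section 3.2.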
4. Conclusion
Fog computing is emerging as an attractive solution to the problem of data processing in the Internet of Things. Rather than outsourcing all operations to the cloud, fog architectures also utilize devices on the edge of the network that have more processing power than the end devices and sit closer to the sensors, thus reducing latency and network congestion. Fog computing takes advantage of both edge and cloud computing: it benefits from edge devices' proximity to the endpoints while also leveraging the on-demand scalability of cloud resources.
5. Future Scope
5.1. Security Aspects
5.1.1. Authentication
5.1.2. Privacy
6. References
[1] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog Computing and Its Role in the Internet of Things," Proc. First Workshop on Mobile Cloud Computing (MCC), ACM, 2012.
[2] S. Yi, Z. Hao, Z. Qin, and Q. Li, "Fog Computing: Platform and Applications," 2015 Third IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb), Washington, DC, USA, 2015, pp. 73–78. doi:10.1109/HotWeb.2015.22
[3] https://www.rtinsights.com/what-is-fog-computing-open-consortium/
[4] http://linkites.com/fog-computing-a-new-approach-of-internet-of-things/
[5] https://www.thbs.com/downloads/Cloud-Computing-Overview.pdf
[6] F. Bonomi et al., "Fog Computing: A Platform for Internet of Things and Analytics," Big Data and Internet of Things: A Roadmap for Smart Environments, N. Bessis and C. Dobre, eds., Springer, 2014, pp. 169–186.
[7] Y. Cao et al., "FAST: A Fog Computing Assisted Distributed Analytics System to Monitor Fall for Stroke Mitigation," Proc. 10th IEEE Int'l Conf. Networking, Architecture and Storage (NAS 15), 2015, pp. 2–11.
[8] V. Stantchev et al., "Smart Items, Fog and Cloud Computing as Enablers of Servitization in Healthcare," J. Sensors & Transducers, vol. 185, no. 2, 2015, pp. 121–128.
[9] H. Gupta et al., "iFogSim: A Toolkit for Modeling and Simulation of Resource Management Techniques in Internet of Things, Edge and Fog Computing Environments," tech. report CLOUDS-TR-2016-2, Cloud Computing and Distributed Systems Laboratory, Univ. of Melbourne, 2016; http://cloudbus.org/tech_reports.html
[10] I. Stojmenovic and S. Wen, "The Fog Computing Paradigm: Scenarios and Security Issues," Proc. 2014 Federated Conf. Computer Science and Information Systems (FedCSIS 14), 2014, pp. 1–8.
[11] D. Balfanz, D.K. Smetters, P. Stewart, and H.C. Wong, "Talking to Strangers: Authentication in Ad-Hoc Wireless Networks," NDSS, 2002.
[12] S. Bouzefrane, A.F.B. Mostefa, F. Houacine, and H. Cagnon, "Cloudlets Authentication in NFC-Based Mobile Computing," MobileCloud, IEEE, 2014.
[13] S. Shin and G. Gu, "CloudWatcher: Network Security Monitoring Using OpenFlow in Dynamic Cloud Networks," ICNP, IEEE, 2012.
[14] N. McKeown et al., "OpenFlow: Enabling Innovation in Campus Networks," ACM SIGCOMM CCR, vol. 38, 2008.
[15] F. Klaedtke, G.O. Karame, R. Bifulco, and H. Cui, "Access Control for SDN Controllers," HotSDN, vol. 14, 2014.
[16] K.K. Yap et al., "Separating Authentication, Access and Accounting: A Case Study with OpenWiFi," Open Networking Foundation, Tech. Rep., 2011.
[17] R. Lu et al., "EPPA: An Efficient and Privacy-Preserving Aggregation Scheme for Secure Smart Grid Communications," IEEE TPDS, vol. 23, 2012.
[18] C. Dwork, "Differential Privacy," Encyclopedia of Cryptography and Security, Springer, 2011.