Encryption In Cloud Storage And Demonstration Computer Science Essay


Cloud computing is a novel technology, and it comes with a great deal of promise as well as many difficulties. The rapid development of the technology and growing user awareness challenge developers to build a trustworthy cloud that is as safe as a computer at home. Many challenges have to be faced as the technology advances.

Scope and Objective:

During this work, the main objective is to study the cloud storage problem and understand the working principle of the OpenStack cloud. OpenStack comprises many cloud computing projects, but we focus mostly on its storage system, OpenStack Swift.

The scope of this report is limited to identifying some fundamental technical security issues and demonstrating the working of OpenStack Swift.

Methodology:

Study and understand selected papers on cloud storage issues.

Break down and summarize the problem.

Install OpenStack Swift.

Analyze the working of OpenStack Swift.

Look for bugs in OpenStack Swift.

Outline:

The report has three parts:

Chapter 1 is dedicated to the cloud storage system and its security issues.

Chapter 2 is dedicated to OpenStack Swift and its working principle.

Chapter 3 is dedicated to the code and some results.

Chapter 1

Cloud Storage

Cloud computing is a hot topic in the IT industry. It is an Internet-based model of computing in which users can buy computing resources according to their needs. If computation and storage services can be acquired as easily as electricity and water, it will be a revolution in the IT industry. In a cloud computing environment, data is stored with a public storage provider, so data security is the most important aspect of cloud storage. With cloud technology people can store and retrieve data on an anywhere basis, that is, they can access and store data from anywhere in the world. The main constraint for such computing is only Internet access and its speed.

To refine the definition of cloud computing, the following essential characteristics are noted:

On-demand service: a customer may alter (increase or decrease) the amount of computing resources automatically, with or without human interaction with the provider.

Network access: the computing resources should be available over the network for the agreed time and accessible through any standard mechanism.

Resource pooling: in cloud computing the resources are shared among customers and are dynamically assigned and reassigned according to consumer demand. The resources may be physical or virtual, or both, and should be assigned dynamically.

Rapid elasticity: the service provider should be able to rapidly scale provisioning out or in. Ideally, from the consumer's point of view the resources appear infinite and can be purchased on demand.

Measured service: the services used by the consumer should follow a "pay as you go" model, like our electricity and water supply models.

1.1 Cloud deployment models:

Based on who may access the cloud infrastructure, the models are distinguished into four types:

Private cloud: the infrastructure is used by a single organization and the resources are allocated only to that organization or to strictly related companies. It can be managed by the organization itself or by a third party.

Community cloud: this type of infrastructure is used by a group of organizations that share a common goal.

Public cloud: this type of infrastructure is used by the general public and can be managed by one or more organizations.

Hybrid cloud: this type of cloud is more complex and may be a combination of private, public and community clouds. It may be managed by a single organization or by a group of organizations.

Depending upon the cloud deployment model used, an organization has a different level of control over it. For example, an organization has greater control over a private cloud than over a public cloud.

Basically, cloud computing offers three types of services:

Software as a Service (SaaS) deals with software and applications, Platform as a Service (PaaS) provides the infrastructure for application development, and Infrastructure as a Service (IaaS) delivers the backbone services needed to run the computing.

Fig. cloud taxonomy (12)

IaaS is the main backbone for storing data, i.e. storage. This is called cloud storage and is defined as follows:

Cloud storage is "an online storage service over the Internet." [DF3]

"Cloud storage is a model of networked online storage where data is stored in virtualized pools of storage which are generally hosted by third parties. Hosting companies operate large data centers, and people who require their data to be hosted buy or lease storage capacity from them. The data center operators, in the background, virtualizes the resources according to the requirements of the customer and expose them as storage pools, which the customers can themselves use to store files or data objects. Physically, the resource may span across multiple servers. [DF2].

"A cloud storage system can be considered to be a network of distributed data centers which typically uses cloud computing technologies like virtualization, and others some kind of interface for storing data. To increase the availability of the data, it may be redundantly stored at different locations. In general, all of this is not visible to the user.â€- [National Institute of Science and Technology, USA].

Therefore, cloud storage consists of at least some storage, and the resources should be virtualized and accessible over the Internet.

According to the Cloud Security Alliance, cloud storage (IaaS) offers the following options for storing data.

1.2 Types of storage

Suppose we have data such as backup files, pictures, PDFs and databases; they do not all fit the same kind of storage. For example, documents do not work efficiently in a database. Different types of data have different access patterns, so it is better to store them in different types of storage system. Data storage systems are therefore broadly categorized into three categories:

1.) Block storage

2.) File storage

3.) Object storage

Block storage:

A block is a sequence of bytes or bits of a given length, usually called the block size. In this system, raw volumes of storage are created, as found on hard drives. It is universally deployed on hard drives, floppy disks and optical devices. Blocks can be accessed directly through I/O interfaces such as SCSI or Fibre Channel, and the volumes become available when the operating system boots. For example, databases are created on block storage and work efficiently for retrieving relational information.

File storage:

File storage takes formatted hard drives and exposes a file system on top of them. It is an abstraction for storing, retrieving and updating a set of files. The operating system needs the appropriate client software installed to be able to access the remote file system. An example is Network Attached Storage (NAS).

Object storage:

With object storage, files are exposed through an HTTP interface, typically with a REST API. All client data access is done at the user level: the operating system is unaware of the presence of the remote storage system. In OpenStack, the Object Storage service provides this type of functionality. Users access and modify files by making HTTP requests. Because the data access interface provided by an object storage system is at a low level of abstraction, people often build file-based applications with a higher level of abstraction on top of object storage. For example, the OpenStack Image service can be configured to use the Object Storage service as a backend. Another use for object storage solutions is as a content delivery network (CDN) for hosting static web content (e.g., images and media files), since object storage already provides an HTTP interface. Object storage is now becoming more popular because it can store huge amounts of data reliably and at low cost. Swift is an object storage system, and this is one of the reasons it has gained popularity in a short period of time.
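To make the HTTP/REST interface concrete, here is a minimal sketch in Python using the requests library; the storage URL and token are placeholder values, not ones produced by the setup in this report.

import requests

storage_url = "http://127.0.0.1:8080/v1/AUTH_test"   # account storage URL (placeholder)
headers = {"X-Auth-Token": "AUTH_tk_placeholder"}     # token obtained from the auth service (placeholder)

requests.put(storage_url + "/photos", headers=headers)   # create a container
with open("cat.jpg", "rb") as f:                          # assumes a local file to upload
    requests.put(storage_url + "/photos/cat.jpg", headers=headers, data=f)   # upload an object
resp = requests.get(storage_url + "/photos/cat.jpg", headers=headers)        # download it again
print(resp.status_code, len(resp.content))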

1.3 Problems in cloud storage:

The data is not stored on the user's own computer, so there is always a fear of losing it. The data is important and should not get mixed up with other people's data. Assurance about the data is another issue in cloud storage security: "Data is safe while it is on my pen drive, but how can I be assured that data in the cloud is not accessed by others, not tampered with, and available whenever I want it?"

The biggest concerns about cloud storage are reliability, confidentiality and integrity. Clients are not likely to entrust their data to another company without a guarantee that they will be able to access their information whenever they want and that no one else will be able to get at it. They also need assurance about data integrity, protection against data theft, and authorization. Therefore, a number of security measures are applied from the very first step. Integrity, confidentiality and availability are at the heart of cloud computing. [6] Several techniques are implemented to achieve these requirements of a cloud service.

Encryption: this is the method of encoding information in the cloud, where data security is a major concern. Different encryption techniques are implemented by different cloud storage providers.

Data also has to travel over the network, so there are two states of data: data at rest in cloud storage and data in motion (in the network). The encryption process may be applied to either, both or neither of them, depending on the service level agreement between provider and subscriber. The Cloud Security Alliance (hereafter CSA) gives the following guidance for encryption, regardless of the technique used by the provider or subscriber:

The encryption scheme must provide protection against snapshot cloning/exposure.

It must protect volumes from exposure through lost or stolen drives (physical hazards).

Three options may be used (a small client-side sketch follows this list):

File/folder encryption: use standard file/folder encryption tools to encrypt the data before placing it in storage.

Client/application encryption: when the object store is used as a back end for an application, encrypt the data using an encryption engine embedded in the application or client.

Proxy encryption: data passes through an encryption proxy before being stored on the storage server.
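As an illustration of the client/application encryption option above, the following sketch encrypts a file before it would be placed in storage. It assumes the third-party Python cryptography package (Fernet), which is not part of Swift, and the file names are only examples.

from cryptography.fernet import Fernet

key = Fernet.generate_key()            # the client keeps this key; the provider never sees it
cipher = Fernet(key)

with open("report.pdf", "rb") as f:
    ciphertext = cipher.encrypt(f.read())
with open("report.pdf.enc", "wb") as f:
    f.write(ciphertext)                # this encrypted file is what gets uploaded to the cloud

# After downloading the object again, the client decrypts it locally:
with open("report.pdf.enc", "rb") as f:
    assert cipher.decrypt(f.read()) == open("report.pdf", "rb").read()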

Regardless of the type of storage, different encryption methods have been proposed. Sonami et al. proposed using the RSA encryption algorithm to secure data in the cloud [1]. Craig Gentry proposed a fully homomorphic encryption (FHE) scheme in his PhD thesis in 2009 [2]. However, some research shows that encryption alone is not enough to secure data in the cloud: Van Dijk et al. [3 in pdf] argued the impossibility of using cryptography alone to secure data in the cloud. Therefore, privacy and integrity mechanisms are also needed for the data. Wang et al. [10 in pdf] proposed a privacy-preserving auditing system that involves a third party responsible for preserving privacy in the cloud.

Fig. Architecture of cloud data storage service

This system is built from four algorithms (KeyGen, SigGen, GenProof, VerifyProof). KeyGen generates the keys, and SigGen generates the metadata used to verify signatures. GenProof is run by the server to produce a proof of the integrity (correctness) of the data it stores. VerifyProof is run by the third party to check that proof. The scheme has two variants, a MAC-based solution and an HLA-based solution. In the MAC-based solution the user uploads the data together with a MAC (message authentication code) and sends the corresponding secret key to the third party, which can then check correctness by retrieving the data and its MAC and recomputing the code.

Drawback: it is applicable only to static data.
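The MAC-based idea can be sketched as follows. This is only an illustration of integrity checking with a message authentication code using Python's standard hmac module, not the actual construction from the cited paper.

import hmac, hashlib, os

def keygen():
    return os.urandom(32)                       # secret key shared with the auditing third party

def siggen(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()    # verification metadata

def verifyproof(key, data_from_server, tag):
    # the auditor recomputes the MAC over the data the server returns
    expected = hmac.new(key, data_from_server, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = keygen()
data = b"some static file content"
tag = siggen(key, data)
print(verifyproof(key, data, tag))                # True while the stored data is unchanged
print(verifyproof(key, data + b"tampered", tag))  # False if it was modified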

Another proposed solution is to change the framework used to secure the data, sharing the responsibilities between provider and consumer. Xiao Zhang et al. [13 in pdf] proposed such a framework: the risk of data loss, theft and crashes is borne by the consumer, while the service provider is responsible for data transfer, authorization and storage of the data.

Fig. Framework to ensure data security[13 in pdf]

This framework addresses secure storage, transfer and authority. Data can be made safer by replicating it in different zones to protect it from natural disasters such as fire or storms, so the framework also covers replication of data. The data may be replicated on a single-server platform or a multi-server platform. An encryption system is proposed to give more security to the data [6 in pdf]. For the transfer of data, two solutions are proposed: one is to reduce calculation and scheduling time and design a new programming model that offers these services; the other is to push data closer to the user, which is essentially the concept of a content delivery network (CDN).

The paper also proposes black-box operation of the data, so that even the system administrator is not authorized to do anything to it. Each user has authority only over the data belonging to him. Yan et al. proposed federated identity management in the cloud, which means each user and server has a unique identity, allocated hierarchically by the system [19 in pdf].

Authentication: this is used to verify that a person is legitimately allowed to work on the data. Validating a user is usually done in two phases: the first phase establishes the claimed identity, and the second validates the credentials already issued to the user. In the common case the authentication method is a username and password. Depending on the service level agreement between provider and subscriber, it may be stricter, such as hardware-based authentication, a VPN connection, or anything else acceptable to both parties.

Authorization: the client may be a single user or an organization. An individual employee in an organization has a specific job and should have access rights to data according to that role. The separation of roles is defined per enterprise or per group of users, and it can be pre-defined by the enterprise or set according to the SLA.

Integrity: this covers how the data is kept consistent and whether I can always get a fresh copy. Data is vulnerable to human error, so there should also be assurance of backups with a fresh copy even if the data is removed or deleted accidentally.

Reliability: this assures us that we will get back the data we uploaded, even in the presence of network failures.

The CSA, ENISA and NIST have already proposed general frameworks for cloud computing, each from its own point of view. Below is a comparative view of the security issues they raise.

+ denotes that the issue is discussed in the document and - means it is omitted.

Table: comparison of issues raised by CSA [A], ENISA [B] and NIST [C].

Chapter 2

OpenStack Swift

OpenStack Swift (hereafter simply Swift) is a highly available, distributed, eventually consistent object/blob store. The OpenStack Object Store project, known as Swift, offers cloud storage software so that you can store and retrieve lots of data in virtual containers. It is based on the Cloud Files offering from Rackspace. [definition by OpenStack]

2.1 General architecture of Swift

The following Swift characteristics are important for understanding Swift storage:

All objects stored in Swift have a URL

All objects stored are replicated 3x in as-unique-as-possible zones, which can be defined as a group of drives, a node, a rack etc.

All objects have their own metadata

Developers interact with the object storage system through a RESTful HTTP API

Object data can be located anywhere in the cluster

(Via http://swiftstack.com/openstack-swift/architecture/)

Fig. Accessing data (GET/PUT/DELETE) in the Swift storage system.

Swift contains different components, which are described below.

Proxy server:

This is the main interface for communicating with clients; it is the first component every client request must go through. For each request, it looks up the location of the account, container or object in the ring and routes the request accordingly. The public API is also exposed through the proxy server. Hardware failures are also handled by the proxy server: when a failure occurs, it asks the ring for a handoff node and routes the request there instead.

The Ring:

A ring represents a mapping between the names of entities stored on disk and their physical locations. There are separate rings for accounts, containers and objects. Whenever a component wants to interact with an object, account or container, it must first consult the appropriate ring to learn its location. The ring is also responsible for determining which devices are used as handoffs in failure scenarios.
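The mapping idea can be sketched conceptually as follows. This is not Swift's real ring code, only an illustration (with made-up device names) of how a hash of an entity's path is turned into a partition number, which is then looked up in a table of devices.

import hashlib

PART_POWER = 18                         # 2**18 partitions (matches "create 18 3 1" in Chapter 3)
DEVICES = ["d1", "d2", "d3", "d4"]      # illustrative device names

def get_partition(account, container=None, obj=None):
    path = "/" + "/".join(p for p in (account, container, obj) if p)
    digest = hashlib.md5(path.encode()).digest()
    # use the top PART_POWER bits of the hash as the partition number
    return int.from_bytes(digest[:4], "big") >> (32 - PART_POWER)

def get_devices(partition, replicas=3):
    # toy placement: the real ring also balances by zone and device weight
    return [DEVICES[(partition + i) % len(DEVICES)] for i in range(replicas)]

part = get_partition("AUTH_test", "photos", "cat.jpg")
print(part, get_devices(part))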

Fig. The ring concept (partitions mapped onto storage nodes).

Object server:

Object servers are very simple blob storage servers that store, retrieve, update and delete the objects kept on local devices. Objects are stored as binary files on the filesystem, with metadata stored in the file's extended attributes (xattrs). Each object is stored under a path derived from the hash of the object name (which also incorporates the container and account) and the operation's timestamp. The last write always wins, ensuring that the latest object version is served.

Container server:

The container server's primary job is to handle listings of objects. It does not know where those objects are, only which objects are in a specific container.

Account server:

The account server is very similar to the container server, except that it is responsible for listings of containers rather than objects.

Fig. Relation among account, container and object (the account database lists containers; each container database lists objects).

Replication:

Replication is used to keep the system consistent through temporary failures such as network outages or drive failures. The procedure is simple: the data on the object server that was written last is compared with the latest remote copy to ensure that fresh data is available. During replication, hash lists are used to quickly compare subsections of each partition, while container and account replication use a combination of hashes and shared high-water marks. By default a replication pass runs every 30 seconds, and this can be configured to suit the user's needs.
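The hash-comparison idea can be illustrated with the simplified sketch below. The real replicator works on per-partition suffix directories and pushes differences with rsync, so this is only a rough analogue, and the paths are placeholders.

import hashlib, os

def dir_hash(path):
    # summarize a directory by hashing the names and contents of the files under it
    md5 = hashlib.md5()
    for root, _, files in sorted(os.walk(path)):
        for name in sorted(files):
            md5.update(name.encode())
            with open(os.path.join(root, name), "rb") as f:
                md5.update(f.read())
    return md5.hexdigest()

local, remote = "/srv/1/node/d1", "/srv/2/node/d2"    # placeholder partition paths
if dir_hash(local) != dir_hash(remote):
    print("copies differ - push the newer data")
else:
    print("copies already in sync - nothing to do")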

Updaters:

If at any time an object, container or account update could not be applied immediately, the updater takes care of applying it later.

Auditors:

Auditors are responsible for the integrity of objects, containers and accounts. If corruption is found, the file is quarantined and can be replaced by a fresh copy taken from one of its replicas.

Partitions:

Account databases, container databases and objects are collectively stored in partitions, which are the backbone of the replication system; every operation such as replication and upload/download is done per partition. The implementation of a partition is conceptually simple: a partition is just a directory sitting on a disk, with a corresponding hash table of what it contains.

A partition is assigned to three disks in different zones.

Fig. Relation among zone, partition and container database.

Zone: in Swift, zones are specific locations defined by the user. A zone may be a single computer, multiple computers, or even a different geographical area. Zones are created to ensure data availability: if one zone fails for any reason, the data can be brought up from another zone. Zones therefore play an important role in cases of hardware failure or accidentally deleted objects.

Fig. Data replication across zones in case of zone failure.

2.2 How it works:

Swift stores all data as objects. Whenever a user wants to upload data to, or read data from, his OpenStack account, he has to authenticate himself as a Swift user. To do so he needs some authentication parameters, such as a username and password or an authentication token. Whenever a user tries to access an object, the proxy server checks the authenticity of that user, either with the authentication token or via another key manager. In any case, the proxy is the only door for communicating with objects.

So, in the first phase, the user has to authenticate with the server.

Fig. Authentication of a user (the client sends the account, username and password; the authentication server returns an authentication token and the URL of the storage account).

Fig. Uploading data in OpenStack Swift (client, load balancer, proxy node, storage nodes).
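The same two-phase flow (authenticate first, then talk to the proxy) can also be driven from Python with the python-swiftclient library installed in Chapter 3; the credentials below assume the tempauth test user defined later in proxy-server.conf.

from swiftclient.client import Connection

conn = Connection(authurl="http://127.0.0.1:8080/auth/v1.0",   # phase 1: authentication endpoint
                  user="test:tester", key="testing")

conn.put_container("demo")                                      # phase 2: talk to the proxy
conn.put_object("demo", "hello.txt", contents=b"hello swift")   # upload an object
headers, body = conn.get_object("demo", "hello.txt")            # read it back
print(headers.get("etag"), body)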

2.3 Key management:

OpenStack offers different authentication middleware. By default it is tempauth, but keystone or swauth authentication can be used instead. Swauth is an external project for authentication, and keystone is newer and relatively less mature than swauth. Tempauth is generally used for testing purposes.

Normally an authentication token is valid for 24 hours, but this can be managed; after it expires the user needs to obtain another token. Keys can be placed in a MySQL database using a key manager (a separate project).

Whenever a database is used to store the keys, the authentication middleware asks the database for them.

2.4 Encryption

Swift does not provide any encryption by default; if we want to encrypt our data, we must provide an external module. Data in Swift is placed under a path derived from a hash of its account, container, object name and a salt, so the path gives nothing away to someone trying to locate data illegally. The salt is a fixed string that never changes (the swift_hash_path_suffix) and is also used to coordinate between the different zones. So when we store data in our account, the data is written to disk under this hash. The hash is created using MD5 and stored along with the data. Whenever we change the data, the server checks the hash and stores the newest version.

So, in Swift:

MD5(account + container + object + salt) -> path to the file on the storage node.
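A minimal sketch of this path hashing is shown below; the suffix matches the swift_hash_path_suffix set in Chapter 3, but the directory layout is simplified compared with what Swift actually writes.

import hashlib

SUFFIX = "I will never change"        # swift_hash_path_suffix from /etc/swift/swift.conf
PART_POWER = 18

def hash_path(account, container, obj):
    name = "/" + account + "/" + container + "/" + obj
    return hashlib.md5((name + SUFFIX).encode()).hexdigest()

h = hash_path("AUTH_test", "photos", "cat.jpg")
partition = int(h[:8], 16) >> (32 - PART_POWER)
# simplified on-disk location: <device>/objects/<partition>/<suffix dir>/<hash>/<timestamp>.data
print(partition, h)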

So perhaps the only attack that is clearly visible here is a collision attack: an attacker cannot simply recreate a given hash, but could try to find a different input m' with H(m') = H(m).

Some research groups are trying to produce an open source encryption module for OpenStack Swift, but so far there has been no great achievement. Mirantis [web] claims to have made an on-disk encryption module that works with a key manager and keystone and uses M2Crypto as its cryptographic module.

Chapter 3

Installation of OpenStack Swift

To install OpenStack Swift we need the following software. I did the installation on an Ubuntu 10.04 desktop platform.

Software needed: Python 2.6, rsync 3.0

For demonstration purposes we can either build one node that simulates four storage nodes, or four separate nodes with one storage device each. I have chosen a single node simulating four storage nodes, using OpenStack Swift 1.7.7.

Before installing OpenStack Swift you need to add an extra hard disk to your VMware or VirtualBox virtual machine (it will appear as /dev/sdb below).

First of all we need to install some software

$sudo su - # this takes you to a root shell

$add-apt-repository ppa:swift-core/release # optional, adds the Swift PPA to your sources list

$apt-get update # refresh the package sources

Now add some software:

$apt-get install curl gcc git-core memcached python-coverage python-dev python-nose python-setuptools python-simplejson python-xattr sqlite3 xfsprogs python-eventlet python-greenlet python-pastedeploy python-netifaces python-pip

# important: these packages are required to install and operate Swift.

Now install any other software you need, such as openssh and vim (optional):

$apt-get install openssh-server

$apt-get install openssl # it may already be installed on Ubuntu

$apt-get install m2crypto # module for encryption of objects (optional, only if you want encryption)

Now add a user for OpenStack Swift:

$adduser swift # it will ask for a password; set one

$adduser swift swift # the second argument is the group

Partition the disk using:

$fdisk /dev/sdb

then press n, p, 1, Enter, Enter and finally w to create a single primary partition and write the partition table (m shows the help).

Now make the file system. We choose the XFS file system because it supports the extended attributes (xattrs) that Swift uses for metadata.

$mkfs.xfs -f -i size=1024 /dev/sdb1

Now test with "blkid"

$blkid

You should now see the new XFS file system listed along with the previous ones.


#To mount the disk into the system we have to write in /etc/fstab

#Use you favourite text editor to open /etc/fstab

$gedit /etc/fstab #and add the following lines

/dev/sdb1 /mnt/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0

Now create the directories for mounting the disk:

$mkdir /mnt/sdb1

$ mount /mnt/sdb1

$mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4

Change the ownership of the node directories:

$chown -R swift:swift /mnt/sdb1/* #previously created user

$mkdir /srv

Create the directory /etc/swift, where all the necessary configuration files will be kept:

$mkdir /etc/swift

$mkdir -p /etc/swift/object-server

$mkdir -p /etc/swift/container-server

$mkdir -p /etc/swift/account-server

# link each simulated node back to the XFS disk and create the device directories
# (named d1-d4 to match the ring devices added later)

$ln -s /mnt/sdb1/1 /srv/1; ln -s /mnt/sdb1/2 /srv/2; ln -s /mnt/sdb1/3 /srv/3; ln -s /mnt/sdb1/4 /srv/4

$mkdir -p /srv/1/node/d1 /srv/2/node/d2 /srv/3/node/d3 /srv/4/node/d4 /var/run/swift

# the -p flag creates all the necessary parent directories

#Now change the permission

$chown -R swift:swift /etc/swift /srv/[1-4]/ /var/run/swift


Now create the rsync configuration file and paste the following:

$gedit /etc/rsyncd.conf

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 127.0.0.1

[account6012]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/account6012.lock

[account6022]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/account6022.lock

[account6032]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/account6032.lock

[account6042]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/account6042.lock

[container6011]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/container6011.lock

[container6021]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/container6021.lock

[container6031]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/container6031.lock

[container6041]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/container6041.lock

[object6010]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/object6010.lock

[object6020]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/object6020.lock

[object6030]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/object6030.lock

[object6040]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/object6040.lock

Go to /etc/default/rsync and set RSYNC_ENABLE=true:

$gedit /etc/default/rsync

After this the synchronization setup is almost finished; just restart the service:

$service rsync restart

Now leave the root shell and continue as a normal user:

$exit

If you want to run all the services using scripts, you can create a bin folder (optional):

$mkdir ~/bin

Download the Swift source from a hosting location such as GitHub or Launchpad:

$ git clone https://github.com/openstack/swift.git

$cd swift; sudo python setup.py develop

Also download the Python Swift client, because we are going to access the store through the CLI:

$git clone https://github.com/openstack/python-swiftclient.git

$cd python-swiftclient; sudo python setup.py develop

Now edit the ~/.bashrc and put

export SWIFT_TEST_CONFIG_FILE=/etc/swift/test.conf

export PATH=${PATH}:~/bin

Now we are going to create /etc/swift/proxy-server.conf. It is very important because every request goes through the proxy. We will write the minimum configuration that works for our demo.

$ gedit /etc/swift/proxy-server.conf # and write the following code

[DEFAULT]

bind_port = 8080

user = swift #the username that we created before

log_facility = LOG_LOCAL1

eventlet_debug = true

[pipeline:main]

pipeline = healthcheck cache tempauth proxy-logging proxy-server

[app:proxy-server]

use = egg:swift#proxy

allow_account_management = true

account_autocreate = true

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:cache]

use = egg:swift#memcache

[filter:tempauth]

use = egg:swift#tempauth

user_admin_admin = admin .admin .reseller_admin

user_test_tester = testing .admin

user_test2_tester2 = testing2 .admin

user_test_tester3 = testing3

[filter:proxy-logging]

use = egg:swift#proxy_logging

Next we set the Swift hash path suffix; it must never change, and it is used when computing object hashes.

$ sudo gedit /etc/swift/swift.conf #write the following

[swift-hash]

# random unique string that can never change (DO NOT LOSE)

swift_hash_path_suffix = I will never change

Now we need to write configuration files for the account, container and object servers. As said earlier there are four account servers, four container servers and four object servers, so we write one file for each.

$cd /etc/swift

$sudo gedit account-server/1.conf

[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_port = 6012
user = swift
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true

[pipeline:main]
pipeline = recon account-server

[app:account-server]
use = egg:swift#account

[filter:recon]
use = egg:swift#recon

[account-replicator]
vm_test_mode = yes

[account-auditor]

[account-reaper]

$sudo gedit account-server/4.conf

[DEFAULT]

devices = /srv/4/node

mount_check = false

disable_fallocate = true

bind_port = 6042

user = swift

log_facility = LOG_LOCAL5

recon_cache_path = /var/cache/swift4

eventlet_debug = true

[pipeline:main]

pipeline = recon account-server

[app:account-server]

use = egg:swift#account

[filter:recon]

use = egg:swift#recon

[account-replicator]

vm_test_mode = yes

[account-auditor]

[account-reaper]

$sudo gedit account-server/3.conf

[DEFAULT]

devices = /srv/3/node

mount_check = false

disable_fallocate = true

bind_port = 6032

user = swift

log_facility = LOG_LOCAL4

recon_cache_path = /var/cache/swift3

eventlet_debug = true

[pipeline:main]

pipeline = recon account-server

[app:account-server]

use = egg:swift#account

[filter:recon]

use = egg:swift#recon

[account-replicator]

vm_test_mode = yes

[account-auditor]

[account-reaper]

$sudo gedit account-server/2.conf

[DEFAULT]

devices = /srv/2/node

mount_check = false

disable_fallocate = true

bind_port = 6022

user = swift

log_facility = LOG_LOCAL3

recon_cache_path = /var/cache/swift2

eventlet_debug = true

[pipeline:main]

pipeline = recon account-server

[app:account-server]

use = egg:swift#account

[filter:recon]

use = egg:swift#recon

[account-replicator]

vm_test_mode = yes

[account-auditor]

[account-reaper]

$sudo gedit container-server/1.conf

[DEFAULT]

devices = /srv/1/node

mount_check = false

disable_fallocate = true

bind_port = 6011

user = swift

log_facility = LOG_LOCAL2

recon_cache_path = /var/cache/swift

eventlet_debug = true

[pipeline:main]

pipeline = recon container-server

[app:container-server]

use = egg:swift#container

[filter:recon]

use = egg:swift#recon

[container-replicator]

vm_test_mode = yes

[container-updater]

[container-auditor]

[container-sync]

$sudo gedit container-server/2.conf

[DEFAULT]

devices = /srv/2/node

mount_check = false

disable_fallocate = true

bind_port = 6021

user = swift

log_facility = LOG_LOCAL3

recon_cache_path= /var/cache/swift2

eventlet_debug = true

[pipeline:main]

pipeline = recon container-server

[app:container-server]

use = egg:swift#container

[filter:recon]

use = egg:swift#recon

[container-replicator]

vm_test_mode = yes

[container-updater]

[container-auditor]

[container-sync]

$sudo gedit container-server/3.conf

[DEFAULT]

devices = /srv/3/node

mount_check = false

disable_fallocate = true

bind_port = 6031

user = swift

log_facility = LOG_LOCAL4

recon_cache_path = /var/cache/swift3

eventlet_debug = true

[pipeline:main]

pipeline = recon container-server

[app:container-server]

use = egg:swift#container

[filter:recon]

use = egg:swift#recon

[container-replicator]

vm_test_mode = yes

[container-updater]

[container-auditor]

[container-sync]

$sudo gedit container-server/4.conf

[DEFAULT]

devices = /srv/4/node

mount_check = false

disable_fallocate = true

bind_port = 6041

user = swift

log_facility = LOG_LOCAL5

recon_cache_path = /var/cache/swift4

eventlet_debug = true

[pipeline:main]

pipeline = recon container-server

[app:container-server]

use = egg:swift#container

[filter:recon]

use = egg:swift#recon

[container-replicator]

vm_test_mode = yes

[container-updater]

[container-auditor]

[container-sync]

#don't forget to write this.

$sudo gedit object-server/3.conf

[DEFAULT]

devices = /srv/3/node

mount_check = false

disable_fallocate = true

bind_port = 6030

user = swift

log_facility = LOG_LOCAL4

recon_cache_path= /var/cache/swift3

eventlet_debug = true

[pipeline:main]

pipeline = recon object-server

[app:object-server]

use = egg:swift#object

[filter:recon]

use = egg:swift#recon

[object-replicator]

vm_test_mode = yes

[object-updater]

[object-auditor]

$sudo gedit object-server/1.conf

[DEFAULT]

devices = /srv/1/node

mount_check = false

disable_fallocate = true

bind_port = 6010

user = swift

log_facility = LOG_LOCAL2

recon_cache_path = /var/cache/swift

eventlet_debug = true

[pipeline:main]

pipeline = recon object-server

[app:object-server]

use = egg:swift#object

[filter:recon]

use = egg:swift#recon

[object-replicator]

vm_test_mode = yes

[object-updater]

[object-auditor]

$sudo gedit object-server/2.conf

[DEFAULT]

devices = /srv/2/node

mount_check = false

disable_fallocate = true

bind_port = 6020

user = swift

log_facility = LOG_LOCAL3

recon_cache_path = /var/cache/swift2

eventlet_debug = true

[pipeline:main]

pipeline = recon object-server

[app:object-server]

use = egg:swift#object

[filter:recon]

use = egg:swift#recon

[object-replicator]

vm_test_mode = yes

[object-updater]

[object-auditor]

$sudo gedit object-server/4.conf

[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_port = 6040
user = swift
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true

[pipeline:main]
pipeline = recon object-server

[app:object-server]
use = egg:swift#object

[filter:recon]
use = egg:swift#recon

[object-replicator]
vm_test_mode = yes

[object-updater]

[object-auditor]

Let's create the rings, where we define how many partitions and replicas we need (the default replica count is 3). In "create 18 3 1", 18 is the partition power (2^18 partitions), 3 is the number of replicas, and 1 is the minimum number of hours between moves of a partition.

$swift-ring-builder object.builder create 18 3 1

$swift-ring-builder object.builder add z1-127.0.0.1:6010/d1 1

$swift-ring-builder object.builder add z2-127.0.0.1:6020/d2 1

$swift-ring-builder object.builder add z3-127.0.0.1:6030/d3 1

$swift-ring-builder object.builder add z4-127.0.0.1:6040/d4 1

$swift-ring-builder object.builder rebalance # rebalancing can take some time, so wait a bit

$swift-ring-builder container.builder create 18 3 1

$swift-ring-builder container.builder add z1-127.0.0.1:6011/d1 1

$swift-ring-builder container.builder add z2-127.0.0.1:6021/d2 1

$swift-ring-builder container.builder add z3-127.0.0.1:6031/d3 1

$swift-ring-builder container.builder add z4-127.0.0.1:6041/d4 1

$swift-ring-builder container.builder rebalance

$swift-ring-builder account.builder create 18 3 1

$swift-ring-builder account.builder add z1-127.0.0.1:6012/d1 1

$swift-ring-builder account.builder add z2-127.0.0.1:6022/d2 1

$swift-ring-builder account.builder add z3-127.0.0.1:6032/d3 1

$swift-ring-builder account.builder add z4-127.0.0.1:6042/d4 1

$swift-ring-builder account.builder rebalance

Now let's start everything:

$swift-init all start # if this runs without errors your configuration is OK; an info message that the object expirer config could not be found can be ignored

Now test with curl:

$curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0

If it is OK, the response will contain an X-Auth-Token header and an X-Storage-Url header.

Now try to connect to the storage using this token and URL:

$curl -v -H 'X-Auth-Token: <token-from-x-auth-token-above>' <url-from-x-storage-url-above>

The status of the storage can now be checked with:

$swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat


We can also use external middleware for authentication. Swauth is old but still a reliable authentication middleware. To use swauth, we first have to download it:

$git clone https://github.com/gholt/swauth

Below are two proxy-server configurations for swauth: first with an SSL certificate, then without SSL.

[DEFAULT]

cert_file = /etc/swift/cert.crt

key_file = /etc/swift/cert.key

bind_port = 443

user = $SWIFT_USER

log_facility = LOG_LOCAL1

[pipeline:main]

pipeline = healthcheck cache swauth proxy-server

[app:proxy-server]

use = egg:swift#proxy

allow_account_management = true

account_autocreate = true

[filter:swauth]

use = egg:swauth#swauth

set log_name = swauth

super_admin_key = swauthkey

default_swift_cluster = local#https://127.0.0.1/v1

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:cache]

use = egg:swift#memcache

[DEFAULT]

bind_port = 8080

user = swift

log_facility = LOG_LOCAL1

[pipeline:main]

pipeline = healthcheck cache swauth proxy-server

[app:proxy-server]

use = egg:swift#proxy

allow_account_management = true

account_autocreate = true

[filter:swauth]

use = egg:swauth#swauth

set log_name = swauth

super_admin_key = swauthkey

default_swift_cluster = local#http://127.0.0.1:8080/v1

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:cache]

use = egg:swift#memcache

The SSL certificate should be saved in /etc/swift; a self-signed certificate can be created using the command:

$openssl req -new -x509 -nodes -out cert.crt -keyout cert.key


Here are some commands for using swauth with and without SSL.

$swauth-prep -K swauthkey

$swauth-add-user -A http://127.0.0.1:8080/auth/ -K swauthkey -a testaccount testuser testpassword

$curl -v -H 'X-Storage-User:testaccount:testuser' -H 'X-Storage-Pass: testpassword'

http://127.0.0.1:8080/auth/v1.0

$swift -A http://127.0.0.1:8080/auth/v1.0 -U testaccount:testuser -K testpassword stat

$curl -v -H 'X-Auth-Token: <Authentication token>' <storage url>

$swift -A http://127.0.0.1:8080/auth/v1.0 -U testaccount:testuser -K testpassword upload mydata data

$swift -A http://127.0.0.1:8080/auth/v1.0 -U testaccount:testuser -K testpassword list

Using SSL we can also add users, but we must authenticate ourselves; the server certificate check can be skipped with -k. So:

$swauth-prep -K swauthkey -A https://127.0.0.1:443/auth/

$swauth-add-user -A https://127.0.0.1:443/auth/ -K swauthkey -a ssl1account ssl1user ssl1password

$curl -v -H 'X-Storage-User: ssl1account:ssl1user' -H 'X-Storage-Pass: ssl1password' -k https://127.0.0.1:443/auth/v1.0

$curl -k -v -H 'X-Auth-Token: <authentication token>' <storage url>

Conclusion

In this project we learned some interesting facts about cloud storage systems, taking both a practical and a theoretical approach. As we saw in the literature, implementation is not easy, and building an encrypted system involves many difficulties. During the project we learned about the structure of cloud storage, how it handles data, the possible attacks on data in the cloud, and the best current solutions for securing that data. We were also able to see the security awareness of the different cloud computing organizations.

We also learned the basics of the open source OpenStack Swift. We studied the Swift storage system and its working principle, and we successfully installed Swift 1.7.7. We found the working of Swift very interesting because it automatically replicates data across different zones; when one zone fails, the data can be fetched from another zone, so the system is effectively built for hardware failure. We tested this by deleting some data in one partition, and it was automatically replicated back into place. When analyzing the system, we found that it may be vulnerable to a collision attack. We tried to use the Mirantis open source implementation of on-disk encryption in the project, but we did not succeed. We were, however, able to analyze how Swift works with different authentication middleware.
