
Data Security in Cloud Computing

The term cloud computing has been circulating in technology circles in recent years; it is now seen as the next big thing for IT enterprises. Cloud computing provides on-demand network access to a shared pool of trusted computing resources, with software applications and data stored in large data centers. With this technology, users are freed from storing and maintaining data locally. This paper gives a general overview of cloud computing and of its security issues in the area of data storage. Six research papers on data storage security have been analyzed, and the pros and cons of each are discussed.

To understand data storage security in cloud computing, we must first understand cloud computing itself. The Internet is commonly represented in network diagrams as a cloud; technically, cloud computing can be defined as on-demand network access to a shared pool of computing resources, including servers, networks, and applications, provisioned with minimal management effort on the consumer's part. Cloud computing has five main characteristics: i) on-demand service, ii) high elasticity, iii) easy network access, iv) access from any location, and v) pooling of resources [1].

The concept of cloud computing is rooted in utility computing, which emerged in the late 1960s [2]. Utility computing is a metered service, analogous to those of a traditional public utility company supplying electricity, gas, or water, and many people believe cloud computing is simply another variation of it. Because every analyst defines cloud computing differently, and because only a few vendors currently offer such services, there is considerable confusion. But cloud computing differs from utility computing: it uses distributed platforms instead of a centralized computing resource, and it does not necessarily charge per use as utility computing does. Cloud computing is also sometimes mistaken for the client-server model, but a cloud is not a specific machine at a specific location; it can be based on computers anywhere and split among many computers. Nor is cloud computing simply software as a service or virtualization; it is not directly synonymous with these terms, though, depending on the implementation, they can be constituents of a cloud.

In the cloud, consumers can use a service even with no knowledge or expertise of how the underlying technology infrastructure works. The customer makes no upfront investment in hardware or software licensing, which makes cloud computing look all the more advantageous [4]. However, because data storage and computing resources can be acquired instantly from anywhere, a major security concern arises: sensitive data and important information are stored on remote machines that are not managed or controlled by the customer, so the security of the stored data is of the highest priority.

Data storage security problems are much harder to diagnose because the correctness of the stored data has to be verified without any knowledge of the original data, since the user has no direct control over it. Long-term assurance of data safety is also a major concern, because new customers are added very frequently. Stored data is modified by adding new data or deleting existing data, which adds to the complexity of the security problem: storage correctness has to be checked for dynamic data [3]. A further concern is that user data can be stored in many physical locations, with data centers running in a distributed manner, leaving the data vulnerable; achieving a secure cloud storage system is therefore all the more important.

In this paper we analyze different security protocols for ensuring customer data security in cloud computing architectures. Many researchers have proposed ideas for integrity checking of data in the cloud; we discuss the pros and cons of the different protocols and solutions they present. Ensuring remote data security and integrity is a top priority in cloud computing. Since users have no direct contact with their data, traditional cryptographic approaches cannot be applied directly, which makes the problem even more complex: verifying the correctness of the data without holding the original data is a great challenge for researchers. New solutions are proposed every year; we discuss a few that are efficient for data storage security.

The rest of the paper is organized as follows: Section II describes the security problem. The different solutions and their pros and cons are discussed in Sections III through VIII. We discuss the economic feasibility of the solutions and conclude in Section IX.

II. SECURITY PROBLEM

The cloud computing system has two main players: 1) the cloud service provider and 2) the consumer or customer.

The cloud service provider manages the distributed cloud storage service, operates the cloud infrastructure, and provides cloud computing service to clients.

The consumer uses the cloud computing service to remotely store and process data, relying on the cloud for data security and integrity. The Internet is the main communication channel between the consumer and the cloud service provider.

Since cloud service providers are separate entities with no direct contact with users, outsourcing data to them gives the user no control over that data. As a result, the correctness and integrity of data in the cloud is at risk [5]. The infrastructure and architecture of the cloud are very powerful compared to the personal computers used by consumers, yet even these powerful systems face problems with data integrity and security. In this storage model the consumer stores data through a cloud service provider, which is not a single server but many servers running in a cooperative, distributed fashion. After storing data through the provider, consumers can access or retrieve it whenever they choose, and can perform operations such as inserting new data, deleting existing data, or modifying data.

Since the user no longer holds the data locally, any use of the data requires access via the cloud. This raises various concerns among users about how their data is maintained and stored, and users must be given guarantees about its security. Another major concern is that users must be sure they are accessing only their own data, so they need tools that let them check their data in the cloud without holding the original data locally. [2] The IDC IT cloud services survey shows the same view of the cloud and its challenges this year as before: security, availability, and performance remain the number one concerns in cloud computing. The recent IDC survey results ranking security challenges [6] are shown in Fig. 1.

Fig. 1. Results of the IDC survey ranking cloud security challenges

Providers of SaaS (software as a service) and PaaS (platform as a service) claim that cloud security is more reliable than that of any other service, but the fact is that even the most secure systems have been breached at least once. To cite an example, Google's Gmail service collapsed in Europe in February 2009, and Google had to apologize for the outage. Major security threats can also be caused by the cloud service provider itself: the provider might not be liable, and due to storage constraints it might move or delete data that is rarely used. The provider can delete or change the original data even without the consumer's knowledge. These possibilities pose severe security threats, and various security measures have been devised to avoid such issues.

III. FIRST SOLUTION

Consider a cloud data storage service with three players. 1) The cloud user uses the cloud computing service to remotely store and process data, relying on the cloud for data security and integrity [5]; the Internet is the main communication channel between the consumer and the cloud service provider. 2) The cloud service provider manages the cloud server, which provides storage space for users' original data. 3) The third party auditor is the new entity introduced here; it has more capabilities than the consumers and is given the task of checking cloud storage security on the consumer's behalf. Even in this approach the user can interact with the cloud server and access data directly, but the consumer can use the third party service to check the integrity and security of the original data. The third party auditor is independent and reliable, depending on neither the consumer nor the cloud service provider during the process; it should be an entity the user trusts.

Fig. 2. Cloud data storage using the third party auditor

DESIGN AND BASIC SCHEME

The architecture shows that the user can access data directly from the cloud server over the Internet [5]. The user contacts the third party auditor with a security message flow; the auditor then sends a message to the cloud server and checks the integrity of the data, finally confirming to the user that the data is secure so the user can access it safely, as shown in Fig. 2. Certain design issues have to be considered in an architecture that uses third party auditing. The auditor should be able to verify the integrity of the original data without obtaining any information about it: the design must ensure the auditor [14] cannot retrieve any of the original data during the process, while still being able to check its correctness. Most importantly, the auditor should be able to handle many users simultaneously, with minimal communication with the cloud service provider for efficiency. The auditing system in this solution has two phases, Setup and Audit, and uses four main algorithms: KeyGen, SigGen, GenProof, and VerifyProof [15].

i) Setup: The user runs KeyGen to generate the secret parameters of the system and SigGen to create verification metadata for the file. The user stores the data in the cloud, sends the metadata created with SigGen to the third party auditor, and may later append data to the file stored on the server.

ii) Audit: The third party auditor uses the verification metadata to check whether the file is stored correctly. The cloud server produces a response using GenProof, and the third party auditor checks it using VerifyProof, as sketched below.
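To make this two-phase flow concrete, the following is a minimal Python sketch of how the four algorithms could compose. It is illustrative only: the names mirror the paper's algorithms, but the toy tags here use HMAC rather than the scheme's homomorphic authenticators, and unlike the real protocol this toy returns the sampled blocks to the auditor.

    import hmac, hashlib, os

    def key_gen():
        # Setup: the user generates a secret key (toy stand-in for KeyGen)
        return os.urandom(32)

    def sig_gen(key, blocks):
        # Setup: per-block verification tags (the real scheme uses
        # homomorphic authenticators instead of plain HMACs)
        return [hmac.new(key, bytes([i]) + b, hashlib.sha256).digest()
                for i, b in enumerate(blocks)]

    def gen_proof(blocks, tags, indices):
        # Audit: the cloud server answers a challenge on sampled block indices
        return [(i, blocks[i], tags[i]) for i in indices]

    def verify_proof(key, proof):
        # Audit: the auditor recomputes and compares each tag
        return all(hmac.compare_digest(
                       hmac.new(key, bytes([i]) + b, hashlib.sha256).digest(), t)
                   for i, b, t in proof)

    blocks = [b"block-0", b"block-1", b"block-2"]
    k = key_gen()
    tags = sig_gen(k, blocks)
    assert verify_proof(k, gen_proof(blocks, tags, [0, 2]))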

The solution given by this scheme overcomes most of the basic problems, such as the third party auditor's potential demand to retrieve user data and the complexity of the communication. The method proposes a homomorphic authenticator combined with a random masking technique: the linear combination of sampled blocks returned by the server is masked with a random value generated by a pseudo-random function. This is effective because the third party auditor can in no way reconstruct the user's original data, even given all linear combinations of the file blocks. Because a public-key homomorphic authenticator is used, the block-authenticator pairs are unaffected by the masking. Three theorems are proposed for this technique. The first concerns the storage correctness guarantee: if the cloud server passes the third party audit, it must actually hold the original data. The second concerns the privacy-preserving guarantee: the third party auditor must not be able to learn the original data during the audit. The third gives the security guarantee for batch auditing: storage correctness and security must be assured in multi-user settings. We now discuss some of the pros and cons of this approach [5].
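A hedged sketch of the random masking idea, assuming an HMAC-based PRF and integer-encoded blocks (both our own choices): the server's answer is a linear combination of the sampled blocks blinded by a PRF output, so the auditor never sees the blocks themselves.

    import hmac, hashlib

    def prf(key: bytes, challenge: bytes) -> int:
        # pseudo-random masking value r = PRF_key(challenge)
        return int.from_bytes(hmac.new(key, challenge, hashlib.sha256).digest(), "big")

    def gen_masked_proof(blocks, coeffs, key, challenge):
        # server side: mu = r + sum(nu_i * m_i); the random mask r hides
        # the linear combination of file blocks from the auditor
        linear = sum(c * m for c, m in zip(coeffs, blocks))
        return prf(key, challenge) + linear

    # The auditor checks the masked value against the aggregated homomorphic
    # authenticators without ever learning the blocks m_i (check omitted here).
    print(gen_masked_proof([11, 22, 33], [3, 1, 4], b"k" * 32, b"epoch-1"))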

Pros: i) The technique exposes the problems faced by data storage security under the basic scheme, explains its drawbacks, and builds on them to produce theorems that overcome those drawbacks.

ii) It uses the third party auditor to check the integrity of the user's data without retrieving the original data.

iii) It supports batch auditing, where the third party can audit many users at the same time with greater efficiency.

iv) It justifies the theorems with concrete experiments and reports the experimental results.

Cons: i) The communication and verifier storage complexity could be reduced further.

ii) The user has to depend completely on the third party auditor, so the auditor must be tamper-proof and unforgeable; the importance of tamper-proof security deserves deeper treatment.

iii) To retrieve data that is damaged or lost, the user has to download the data vectors from the server; since the method is probabilistic, it might not succeed every time.

IV. SECOND SOLUTION

This paper describes Privacy as a Service, a set of protocols for ensuring data storage security in the cloud. The method uses the capabilities of cryptographic coprocessors to secure the storage and processing of the consumer's original data; tamper-proof cryptographic coprocessors protect the data both physically and logically [3]. Unlike many other techniques, this scheme is designed so that the user retains control in managing the privacy of the original data. This is achieved with software tools that help users protect their data privacy and that provide feedback on the operations or changes applied to the original data, which is one way to increase users' trust in the cloud service. The system model has two main players: i) the cloud service provider, who manages the distributed cloud storage service, operates the cloud infrastructure, and provides cloud computing service to clients; and ii) the consumer, who uses the service to remotely store and process data and relies on the cloud for data security and integrity, with the Internet as the main communication channel between the two. The model classifies data by sensitivity into three trust levels. The first, full trust, covers insensitive data, which can be stored without any encryption. The second, compliance-based trust, covers important data, which is encrypted, with the user trusting the provider to store it in encrypted form. The third, no trust, covers highly sensitive data: the customer encrypts it with cryptographic keys so that even the cloud server cannot decode the original data, and the encrypted data is kept in isolated cryptographic containers in the cloud that the user trusts.
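As a small illustration of the three trust levels, here is a minimal sketch, with all names our own, of how a client-side tool might map data sensitivity to a handling policy:

    from enum import Enum

    class TrustLevel(Enum):
        FULL_TRUST = "store as plaintext"              # insensitive data
        COMPLIANCE_BASED = "provider-held encryption"  # provider stores ciphertext
        NO_TRUST = "client keys + crypto coprocessor"  # provider never sees keys

    def handling_policy(sensitivity: int) -> TrustLevel:
        # toy mapping: 0 = insensitive, 1 = important, 2 = highly sensitive
        levels = [TrustLevel.FULL_TRUST,
                  TrustLevel.COMPLIANCE_BASED,
                  TrustLevel.NO_TRUST]
        return levels[min(sensitivity, 2)]

    assert handling_policy(2) is TrustLevel.NO_TRUST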

The main component of the design is the cryptographic coprocessor, which provides secure, isolated containers in the cloud; it is simply a small hardware unit that interfaces with the main computer. It is a normal computing system with RAM, ROM, battery backup, a network card, and so on, though for economic reasons it may have less memory and processing power than the main server. The crypto coprocessor is tamper-proof, resisting both physical and logical attacks. The coprocessors are supplied by a third party that has a mutual understanding with both the cloud provider and the user. They run as virtual machines installed on servers registered in the service, and the scheme is designed so that a crypto coprocessor can be shared among many cloud customers. The software tools given to customers are divided into secure and insecure parts: the secure part runs in the address space of the crypto coprocessor, while the insecure part hosts the application code that runs in the address space of the main server. The RP daemon is the key entity in this scheme; it checks and ensures that the two parts of the application run in separate domains, and it isolates the application components running on the main server from those on the crypto coprocessor.

Privacy specification means that, before the user uploads the original data to the cloud, certain measures are taken according to the data's sensitivity. There are three levels: i) no privacy, ii) privacy with a trusted provider, and iii) privacy with a non-trusted provider. For each level the storage provider allocates the required logical space, termed a storage pool. The privacy feedback protocol is another very important component of the scheme; its main responsibility is to inform the user of any unusual activity on the original data, or of any data leaks. The RP daemon [16] also supports the feedback process and, as mentioned before, can handle different entities at the same time [3].

Fig. 3. Confidentiality and integrity protection of the privacy log

The RP daemon builds a hash chain over the encrypted privacy records [18], as shown in Fig. 3: each record is linked into the chain, and a MAC field is then added to the privacy record. This process increases the demand for secure crypto coprocessors in cloud computing, and the emerging need for this kind of security should improve their cost/functionality ratio.
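A minimal sketch of such a hash-chained, MAC-protected log (the field layout, key handling, and function names are our assumptions): each record's hash covers the previous hash, so tampering with any record breaks every later link, and the MAC binds the chain to a key held inside the coprocessor.

    import hmac, hashlib

    def append_record(log, record: bytes, mac_key: bytes):
        # chain: h_i = H(h_{i-1} || record_i)
        prev = log[-1][1] if log else b"\x00" * 32
        h = hashlib.sha256(prev + record).digest()
        mac = hmac.new(mac_key, h, hashlib.sha256).digest()  # MAC over chain head
        log.append((record, h, mac))

    def verify_log(log, mac_key: bytes) -> bool:
        prev = b"\x00" * 32
        for record, h, mac in log:
            if hashlib.sha256(prev + record).digest() != h:
                return False  # a record or the chain order was altered
            if not hmac.compare_digest(
                    hmac.new(mac_key, h, hashlib.sha256).digest(), mac):
                return False  # the MAC does not match the chained hash
            prev = h
        return True

    log, key = [], b"coprocessor-key".ljust(32, b"\x00")
    append_record(log, b"read file A", key)
    append_record(log, b"update file B", key)
    assert verify_log(log, key)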

Pros: i) The method uses the tamper-proof capabilities of crypto coprocessors to secure the storage and processing of users' data, and it gives explicit weight to data sensitivity, so the user can be confident that highly sensitive data is truly secure.

ii) The technique relies on the privacy feedback process: if there is any fault or unusual activity, such as a data leak or a change to existing data, the user is notified through the privacy feedback protocol.

iii) The RP daemon is the key entity in the scheme; it prevents attacks between servers and isolates the different applications running on the main server and the crypto coprocessor.

iv) The technique places strong emphasis on the cost and feasibility of the whole architecture. Technological advances can produce cost-effective crypto coprocessors, and the mechanism for sharing coprocessors between servers plays a major role in reducing cost.

Cons: The scheme prioritizes data by sensitivity and allocates privacy accordingly. This means that data deemed less important may not have its security guaranteed once stored in the cloud, which reduces the overall efficiency and effectiveness of the scheme. Moreover, the software tools given to users for checking the integrity of stored data shift the burden onto the user; leaving data security and integrity up to the user is not an effective solution for cloud computing.

V. THIRD SOLUTION

This scheme lists the various problems in ensuring the security and integrity of data stored in the cloud [7]. It relies on the RSA security assumption, which spares the client from keeping the original data locally: data and its integrity are verified using both identity-based cryptography and RSA digital signatures. The general assumption is that the RSA problem is hard to solve when the modulus N is very large and randomly generated: given an RSA public key (N, e), where N is the product of two large primes and 2 < e < N is coprime to φ(N), and a value C chosen randomly from that range, compute the eth root of C modulo N. The corresponding private key makes this easy, but for large key sizes (for example, 1024 bits) no efficient method is known for finding the eth root of an arbitrary number given only the public key. The first proof model is the random oracle, which responds to every query with a truly random response drawn from the output domain, answering identical queries identically; it is simply a mathematical function mapping each query to a random output. A scheme proven secure under this mathematical model [13] is said to be secure in the random oracle model. The second is the standard model, where the adversary is limited only by the time and computational power available. The third tool is the pseudo-random function, a very important building block for cryptographic primitives and secure encryption schemes; its main advantage is that no efficient algorithm can distinguish it from a function chosen at random, because its outputs appear completely random.
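In symbols, our restatement of this standard RSA assumption:

\[
\text{Given } (N, e, C), \quad N = pq, \quad \gcd(e, \varphi(N)) = 1, \quad C \in_R \mathbb{Z}_N^*,
\]
\[
\text{find } M \in \mathbb{Z}_N^* \text{ such that } M^e \equiv C \pmod{N}.
\]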

Eqn.1.

This technique depends mainly on Eqn. 1: to verify that it was indeed the client who issued the file to the database, this equation must be proved. Working through the equation, the final form becomes

Eqn.2.

With Eqn. 2 established, the data integrity check is correct and data security can be ensured. The scheme uses the RSA technique for data integrity; the integrity check can be performed by the user, or it can be publicly verified and delegated to any third party. So even if the client cannot verify the data integrity due to time constraints, a trusted third party can verify on the client's behalf. For security, however, the third party must not be able to retrieve the original data from the cloud. The scheme therefore designs the formula so that only the client holds the secret key, and thus only the user can view or change the existing data in the cloud server.
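To illustrate the flavor of such RSA-based public verification, here is a toy Python example with insecure, tiny parameters (all names are our own; a real deployment would use a modulus of at least 1024 bits and the scheme's actual tag construction):

    import hashlib

    # toy RSA parameters -- far too small for real use
    p, q = 61, 53
    N = p * q                      # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                         # public exponent, coprime to phi(N)
    d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

    def make_tag(data: bytes) -> int:
        # owner signs a hash of the block: sigma = H(m)^d mod N
        h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
        return pow(h, d, N)

    def publicly_verify(data: bytes, sigma: int) -> bool:
        # anyone holding only (N, e) checks sigma^e == H(m) mod N;
        # no secret key is needed for verification
        h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
        return pow(sigma, e, N) == h

    block = b"cloud data block"
    assert publicly_verify(block, make_tag(block))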

Pros: i) This technique is more efficient than many other schemes because both the owner and a third party auditor can check integrity and security.

ii) The RSA-based technique combines digital signatures with identity-based cryptography, which improves efficiency.

iii) The scheme helps relieve the storage burden on the user.

Cons: i) This RSA-based assumption does not support dynamic data, which is the main drawback of the scheme.

ii) The integrity check and security are verified only in the random oracle model.

iii) The technique leaves considerable room for improvement; several loopholes must be fixed before it becomes an efficient data security and integrity check.

VI. FOURTH SOLUTION

This technique builds a data security model for cloud computing on the HDFS architecture, using new technologies such as Hadoop and HBase to enhance the security and performance of the system. In a cloud architecture the data is dynamic and users create many virtual dynamic organizations; since many organizations coexist at this level, there must be mutual trust and understanding between them [8]. But with different organizations it is often difficult to follow a common proof strategy, and the environment is dynamic and unpredictable. To ensure data security in the cloud, the widely used Hadoop Distributed File System (HDFS) is adopted. HDFS is a large-scale, open-source distributed file system designed to run on commodity hardware and used in cloud facilities; it is modeled on the existing Google File System, with which it shares its main characteristics, and it forms the core of the Apache Hadoop project. The structure has two main components, the NameNode and the DataNode: the NameNode manages the file system namespace, and both the NameNode and the DataNodes control client access.

Fig. 4. HDFS architecture

The HDFS architecture is shown in Fig. 4. The security features are divided into two parts. First, the client has to authenticate itself to log in, normally through a client browser window. The NameNode plays a crucial role in data security: if the NameNode fails, the whole system is at high risk, so securing the NameNode is the key to the success of this scheme [8]. The second part is the rapid recovery of data blocks. The DataNode is a storage node; when it is compromised, it may lose all existing data and cannot guarantee data availability. To avoid this problem, HDFS has a backup strategy in which the original data is kept in three replicas. This method does not, however, provide full control over recovery of read and write data. File encryption and access control also have to be taken into account. The security of the system rests on the three basic principles: confidentiality, integrity, and availability.
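The three-replica backup strategy corresponds to HDFS's standard replication factor, which, as one concrete and well-known example, is set in hdfs-site.xml (3 is the Hadoop default):

    <!-- hdfs-site.xml: each block is stored on three DataNodes -->
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>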

Fig. 5. Data security model

The data security model is shown in Fig. 5. The scheme uses a three-layer defense mechanism, and each layer has its own duties to perform. The first layer manages user permissions, authenticates users, and checks digital certificates. The second layer is responsible for encrypting the user's data. The third layer performs system recovery and acts as a backup. Authentication protects the user's data from tampering [8]; if user authentication fails, malicious users may enter and undermine the security of the system. To guard against this, the user's data is always kept encrypted, so that even if a file or key is illegally accessed, an intruder cannot read the data in any form. The restoration and backup system lets the user recover the original data even if a file is damaged.

Pros: i) The Hadoop Distributed File System is open source; the main advantages are that it is easy to obtain and that it runs on commodity hardware.

ii) The technique uses various new technologies, such as Hadoop and HBase, to enhance the performance and security of the cloud. iii) The system has a three-level security structure in which each layer has its own tasks to perform; this layered division improves efficiency.

iv) The technique has a highly efficient recovery and backup scheme, so the original data can be recovered even if it is damaged or lost.

Cons: i) The main drawback is the single point of failure: when the NameNode fails or is attacked, the whole system is at risk.

ii) Similarly, if a DataNode is compromised, the user's data stored in the system may be damaged or lost.

iii) If user authentication fails, malicious users may enter the system and damage or modify the existing data.

iv) The technique depends heavily on the recovery system; relying on a single function reduces the overall efficiency of the system.

VII. FIFTH SOLUTION

This scheme combines several techniques, namely attribute-based encryption, proxy re-encryption, and lazy re-encryption, to achieve data storage security in the cloud [9], concentrating on user access confidentiality and accountability. Data storage security is a concern beyond integrity and confidentiality: in health care applications, for instance, a patient's health information must meet insurance standards, and when such information is not secure the problem becomes a legal issue. Data security in the cloud has therefore gained importance. Security for data access control has been evolving for the past thirty years, but the most effective solution is still under debate. The user has various rights in storing and accessing information in the cloud, because the user and the cloud server reside in different domains. The most common approach is to encrypt the data with a key so that decryption is possible only with the user's key, but this burdens the user with key management and the decryption process, whereas the main goal of cloud computing is to reduce user complexity and allow easy access to the data at any time. This can be addressed with a file-group scheme [12] that arranges files for fine-grained access control, which this solution achieves for cloud computing. Logical expressions are evaluated to grant access to the data file a user desires. The data is encrypted under public-key attributes, so verifying the integrity of the data does not require the private key, which improves efficiency. The cost of encryption depends on the contents of the data file, not on the number of users, and adding a new user affects only the relevant file rather than requiring an overall update. The technique lets the user delegate control to the cloud server without disclosing any contents of the data file. The scheme has the usual entities: the cloud service provider, who manages the distributed storage service, operates the cloud infrastructure, and provides cloud computing service to clients; and the user, who remotely stores and processes data and relies on the cloud for its security and integrity. In this scheme the owner is additionally assumed to be able to run his own code on the server to verify or manage the contents of the stored files. A third party auditor is also considered, but only when necessary.

The security model assumes that the cloud servers are curious and try to learn the users' information stored in the cloud, while the communication channel between the user and the cloud server is considered secure as users access their data. The design aims at fine-grained access control over the data stored in the cloud: only the data owners should be able to access their stored files, and the cloud server should learn nothing from the users' data or about their access structures. The design should also let users perform their tasks without much complexity, and it should be efficient and scalable. Key-policy attribute-based encryption (KP-ABE) [11] is a public-key cryptosystem in which any party can verify data intended for a particular user, and proxy re-encryption allows data encrypted by one user to be re-encrypted without any knowledge of the underlying data.

To achieve secure, scalable, fine-grained access control on outsourced data in the cloud, the scheme uniquely combines three advanced techniques: i) key-policy attribute-based encryption, ii) proxy re-encryption, and iii) lazy re-encryption. Each user has different access rights to the data stored in the cloud; KP-ABE enables this fine-grained, per-file access. Making the data owner responsible for all data management would be a burden, since the owner would have to stay online to distribute keys to other users; combining the techniques avoids these problems. With the combination, users can access their content with minimal effort, and the cloud servers cannot read the original data. The server-side cost is proportional to the size of the user access structure and independent of the number of users. The owner assigns every stored file a set of attributes required for access, and different files can share attributes in a subset. Hybrid encryption is used for data security: the data is encrypted with a symmetric data encryption key (DEK), and the DEK is in turn encrypted with KP-ABE, giving fine-grained access to the data; a sketch of this hybrid layer follows. The cloud servers are given proxy re-encryption keys, which let them update user secret key components without disclosing any of the original data, so the data owner need not stay online for this process. For multiple key updates, lazy re-encryption saves computation overhead. The performance and complexity of the system are determined by operations such as system setup, new user grant, user revocation, file creation, file deletion, and file access. By delegating these responsibilities to the server, confidentiality and accountability of the user secret keys can still be achieved.
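A hedged sketch of the hybrid encryption layer only, assuming the `cryptography` package for the symmetric part; the KP-ABE wrapping of the DEK is shown as a clearly marked placeholder, since real KP-ABE requires a pairing-based library (for example, charm-crypto):

    from cryptography.fernet import Fernet  # pip install cryptography

    def kp_abe_encrypt(dek: bytes, attributes) -> bytes:
        # placeholder only, NOT secure: a real system would encrypt the DEK
        # under KP-ABE so that only users whose key policy satisfies the
        # file's attributes can recover it
        return dek

    def encrypt_file(plaintext: bytes, attributes):
        dek = Fernet.generate_key()                  # fresh symmetric DEK
        ciphertext = Fernet(dek).encrypt(plaintext)  # bulk data under the DEK
        wrapped_dek = kp_abe_encrypt(dek, attributes)
        return ciphertext, wrapped_dek

    ct, wrapped = encrypt_file(b"patient record", {"dept": "cardiology"})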

Pros: i) The scheme achieves the desired security goals: user access confidentiality, fine-grained access control, data security, and secret key accountability.

ii) Complexity is reduced because it does not depend on the number of users in the system; the scheme overcomes the major drawbacks of existing access control schemes.

iii) Encryption is hybrid: the data is first encrypted with a symmetric DEK, and the DEK is then encrypted with KP-ABE, which improves efficiency.

iv) The user need not carry the burdensome job of managing the encryption process; these duties can be delegated to the more powerful cloud server.

Cons: i) System backup and data retrievability are not given much attention, even though retrievability is important because data can be lost or damaged at any time.

ii) The cloud server manages the data and also checks its integrity without seeing the original data; this gives the cloud server considerable control, and the service provider could deceive the user.

iii) The technique does not achieve data confidentiality, scalability, and fine-grainedness simultaneously, which is considered a drawback.

VIII. SIXTH SOLUTION

This technique concentrates mainly on the third party auditor, because software and data are stored in a cloud whose service provider cannot be fully trusted. Introducing a third party auditor helps the user by removing the need to keep track of the security and integrity of the stored data [10]. The scheme also handles dynamic data, since data stored in the cloud is edited or deleted, which improves the efficiency of the security measures. The system model has three entities: i) the user, who stores data in the cloud and relies on the cloud server to maintain it; ii) the cloud service provider, who manages the service and takes care of the user's data; and iii) the third party auditor, who has more capabilities than the client and checks the integrity and security of the stored data on the client's behalf.

The checking and security scheme works when no polynomial-time algorithm exists that can cheat the verifier, while a polynomial-time extractor exists that can recover the original data from multiple challenge-response interactions. The third party auditor's duty is to send challenge messages to the cloud server to check the integrity of the original data. Security has to be argued with the algorithm interacting with a valid server, because the adversary has fuller access to the data stored in the cloud than the user does. This security model has an edge over existing models such as PDP and PoR, which do not consider dynamic data: insertions and modifications cannot be handled in those models, their verification and data-updating procedures have many drawbacks, and an adversary can easily manipulate the data. The newly proposed scheme overcomes these drawbacks by authenticating during every protocol execution. The technique has three important design features: public auditability, so that not only the client but any user can verify the correctness of the stored data; support for dynamic data operations; and blockless verification, so that no file block is seen by the third party auditor or verifier. The security protocol deals mainly with integrity assurance of the stored data, and then with public auditability and data dynamics.

The homomorphic authenticator technique is used to verify the data without retrieving any of its contents. Authenticators are generated for individual blocks of data and cannot be tampered with or forged; verification works only when the correct combination of blocks is computed against the authenticators. Public-key homomorphic authenticators, such as RSA-based ones, are used to achieve public auditability. The technique uses a Merkle hash tree [17], which makes it possible to detect whether the data has been damaged or modified: the block hashes form the leaves of a binary tree. Fig. 6 shows an example of a Merkle hash tree.
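A minimal sketch of Merkle hash tree verification (our own simplified construction, assuming a power-of-two number of blocks): the verifier keeps only the root and checks any single block using a logarithmic-size authentication path of sibling hashes.

    import hashlib

    def H(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def build_tree(blocks):
        # leaves are block hashes; each parent is H(left || right)
        levels = [[H(b) for b in blocks]]
        while len(levels[-1]) > 1:
            lvl = levels[-1]
            levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
        return levels  # levels[-1][0] is the root

    def auth_path(levels, index):
        # sibling hash at every level, plus whether our node is the right child
        path = []
        for lvl in levels[:-1]:
            path.append((lvl[index ^ 1], index % 2 == 1))
            index //= 2
        return path

    def verify_block(block, path, root):
        h = H(block)
        for sibling, node_is_right in path:
            h = H(sibling + h) if node_is_right else H(h + sibling)
        return h == root

    blocks = [b"b0", b"b1", b"b2", b"b3"]
    levels = build_tree(blocks)
    assert verify_block(b"b2", auth_path(levels, 2), levels[-1][0])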

Fig. 6. The sequence of access to the set of leaves of a Merkle hash tree

Integrity verification is carried out by the third party auditor challenging the server. Dynamic data integrity assurance shows that the scheme can efficiently handle dynamic data, including data modification, insertion, and deletion. Batch auditing with multi-client data shows that the server can handle multiple verification sessions at once. To further protect the user's stored data, it can be kept in multiple physical locations, which reduces the impact of faults; this data redundancy ensures the data can still be retrieved and used.

Pros: i) The features of the proposed scheme compare favorably with existing secure storage techniques, with several advances in security over previous work.

ii) Dynamic data operations at the block level are supported; since the user may have to modify, insert, or delete the original data, this dynamic property strengthens storage security.

iii) The server and verifier complexity is much lower than in existing techniques.

iv) Usually, to ensure data security, the client must be given tools to check the integrity of the stored data; the introduction of the third party auditor saves the client's time and resources. The client no longer has to keep track of the stored data: the third party can be fully trusted, and it performs the verification.

This paper has not been fully edited and may change before publication, so it would be unfair to point out the demerits of the technique here. The main reason for discussing the scheme is to show the latest security techniques proposed for data storage security in the cloud.

IX. CONCLUSION

In this paper, we have discussed different solutions for data storage security in the cloud, along with their pros and cons; the paper surveys the techniques that have been proposed for data storage security. Some techniques propose schemes in which the stored data is secure, but cost and feasibility issues hinder their progress. The main aim of cloud computing is to reduce cost, so the techniques have to be simple and efficient. Another important factor is user-friendliness: schemes should not be overly complicated. All the techniques discussed here have some cons; a more efficient way forward may be the hybrid cloud. No proper definition of the hybrid cloud has yet emerged, but it can be described as combining two or more virtualized cloud servers to improve their functions (joining two clouds may instead be called a combined cloud). To improve efficiency, we can take the best techniques and combine them as a hybrid, because one technique may not share the demerits of another. To ensure the integrity and correctness of user data while keeping the user out of the verification process, using a third party auditor is the more efficient approach.

REFERENCES

[1] R. L. Krutz and R. D. Vines, Cloud Security: A Comprehensive Guide to Secure Cloud Computing.

[2] J. W. Rittinghouse and J. F. Ransome, Cloud Computing: Implementation, Management, and Security.

[3] W. Itani, A. Kayssi, and A. Chehab, "Privacy as a Service: Privacy-Aware Data Storage and Processing in Cloud Computing Architectures," in Proc. of the 2009 Eighth IEEE International Conference on Dependable, Autonomic and Secure Computing.

[4] C. Wang, Q. Wang, K. Ren, and W. Lou, "Ensuring Data Storage Security in Cloud Computing," IEEE, 2009.

[5] C. Wang, Q. Wang, K. Ren, and W. Lou, "Privacy-Preserving Public Auditing for Data Storage," in Proc. of IEEE INFOCOM 2010.

[6] Computer Weekly, "Top five cloud computing security issues," online at http://www.computerweekly.com/Articles/2010/01/12/235782/Top-five-cloud-computing-security-issues.htm

[7] Zhang Lianhong and Chen Hua, "Security Storage in the Cloud Computing: A RSA-based Assumption Data Integrity Check without Original Data," in Proc. of the 2010 International Conference on Educational and Information Technology (ICEIT 2010).

[8] Dai Yuefa, Wu Bo, Gu Yaqiang, Zhang Quan, and Tang Chaojing, "Data Security Model for Cloud Computing," © 2009 ACADEMY PUBLISHER, AP-PROC-CS-09CN004.

[9] S. Yu, C. Wang, K. Ren, and W. Lou, "Achieving Secure, Scalable, and Fine-grained Data Access Control in Cloud Computing," in Proc. of IEEE INFOCOM 2010.

[10] Q. Wang, C. Wang, K. Ren, W. Lou, and J. Li, "Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing," IEEE, 2010.

[11] V. Goyal, O. Pandey, A. Sahai, and B. Waters, "Attribute-based encryption for fine-grained access control of encrypted data," in Proc. of CCS'06, 2006.

[12] S. D. C. di Vimercati, S. Foresti, S. Jajodia, S. Paraboschi, and P. Samarati, "Over-encryption: Management of access control evolution on outsourced data," in Proc. of VLDB'07, 2007.

[13] T. Schwarz and E. L. Miller, "Store, forget, and check: Using algebraic signatures to check remotely administered storage," in Proc. of ICDCS'06, 2006.

[14] M. A. Shah, R. Swaminathan, and M. Baker, "Privacy-preserving audit and extraction of digital contents," Cryptology ePrint Archive, Report 2008/186, 2008, http://eprint.iacr.org/

[15] H. Shacham and B. Waters, "Compact proofs of retrievability," in Proc. of Asiacrypt 2008, vol. 5350, Dec. 2008, pp. 90-107.

[16] S. R. White and L. Comerford, "ABYSS: An Architecture for Software Protection," IEEE Transactions on Software Engineering, vol. 16, no. 6, June 1990, pp. 619-629.

[17] R. C. Merkle, "Protocols for public key cryptosystems," in Proc. of the IEEE Symposium on Security and Privacy, 1980, pp. 122-133.


[18] G. Goubau and F. Schwering, "On the guided propagation of electromagnetic wave beams," IRE Trans. Antennas Propagat., vol. AP-9, pp. 248-256, May 1961.

