Database Migration and Architecture: Bee Colony Optimization


27th Mar 2018 | Computer Science


Abstract: Two servers must be compatible before data can be imported from or exported to either of them. Every server communicates through its own protocol service, so one server cannot directly transmit data to, or receive data from, another. A familiar example is code developed on different platforms such as Java and Visual Studio. The task becomes more demanding when the data must be communicated together with its architecture. This paper focuses on migrating data from one server to another using the XAML protocol, with three servers involved: the source server from which the data is migrated, the middle server where the data is fetched for migration, and the target server to which the data is migrated. The entire work was implemented using the development tool Visual Studio 2010 with database connectivity to SQL Server 2005. We propose a technique for migrating the platform architecture along with its data, with high accuracy, to another cloud platform using the simple Bee Colony Optimization (BCO) concept; this takes considerable effort owing to the sophisticated architecture of a system protocol. It may open a new era in cloud computing.

Keywords: BCO, Data Migration, XAML, SQL Server 2005.

  1. INTRODUCTION:

Cloud computing is an Internet-based computing technology, where the word ‘cloud’ means the Internet and ‘computing’ refers to services that can be accessed directly over the Internet. The cloud provider maintains the ‘cloud’: a data server or cluster, i.e. a collection of computers that provides computing services on a large scale. This scale can be used to provide both software services and management services. Any personal device that can connect to the Internet, such as a PC, tablet, or smartphone, can access cloud computing services, because the technology infrastructure of cloud computing is not located on the consumer's premises. Cloud computing comes in various forms, shapes, and sizes, as there is a variety of cloud formations [1].

Cloud computing can also be described as a type of application and a platform. The platform supplies the servers or machines, which can be virtual or physical, and which can be configured and reconfigured. The type of application depends on the demands of its users; various resources are available over the Internet through cloud computing. Resources come in two forms, hardware and software, and can be used in a scalable and flexible manner, which also reduces costs.

There are mainly three aspects of cloud computing:

  • IaaS (Infrastructure as a Service) – number crunching, data storage, and management services (computer servers).
  • SaaS (Software as a Service) – ‘web-based’ applications (like Gmail).
  • PaaS (Platform as a Service) – essentially an operating system in the cloud, like Google AppEngine [2].

In data migration, the term ‘migration’ refers to the process of moving from one location to another. During data migration, data is transferred between computer systems, storage types, or formats. To achieve an automated migration, data migration is usually performed programmatically.

For an efficient data migration method, data is mapped from the previous old system to the new system, with a design based on data extraction and data loading. Programmatic data migration consists of many steps, but it mostly involves data extraction, in which data from the old system is written to the current system [3].

During migration, manual and automated data cleaning is usually performed to improve the quality of the data and to eliminate redundant or invalid information. Before deploying to the new system, the migration steps of designing, extraction, cleaning, loading, and verification are usually repeated for many applications, whether of high or moderate complexity.
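As a rough illustration of these steps, the extraction, cleaning, loading, and verification stages can be sketched as follows; the function names, record shapes, and duplicate-detection rule are illustrative assumptions, not details from the paper:

```python
def extract(old_system):
    """Read all records from the old system."""
    return list(old_system)

def clean(records):
    """Drop invalid (None) entries and remove exact duplicates."""
    seen, cleaned = set(), []
    for rec in records:
        if rec is None:
            continue  # invalid record, eliminated during cleaning
        key = tuple(sorted(rec.items()))  # canonical form for duplicate check
        if key not in seen:
            seen.add(key)
            cleaned.append(rec)
    return cleaned

def load(new_system, records):
    """Write the cleaned records into the new system."""
    new_system.extend(records)

def verify(old_records, new_system):
    """Sanity check: the new system holds no more records than the old one."""
    return len(new_system) <= len(old_records)

# Example run with a duplicate and an invalid record:
old = [{"id": 1, "name": "a"}, {"id": 1, "name": "a"}, None, {"id": 2, "name": "b"}]
new = []
records = clean(extract(old))
load(new, records)
assert verify(old, new)
```

After this run, the new system holds only the two distinct valid records.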

The four major types of data migration are storage migration, database migration, application migration, and business process migration.

  2. BEE COLONY OPTIMIZATION (BCO):

Bee colony optimization (BCO) has recently been introduced as a new approach in the field of swarm intelligence. A colony of honey bees can extend itself over long distances, spreading in multiple directions at the same time in order to exploit a large number of food sources. The artificial bees represent agents that collectively solve complex problems.

The BCO algorithm is inspired by the natural behavior of bees. By creating a colony of artificial bees, BCO can be successfully used to solve complex problems. The behavior of the artificial bees is partly similar to, and partly different from, the behavior of bees in nature.

BCO is fundamentally a population-based algorithm: the population of artificial bees searches for valid solutions. An artificial bee is described as an agent that solves complex problems, and the artificial bees generate one solution for every problem [4].

Bee colony optimization consists of two phases:

A) Forward pass: In the forward pass, every artificial bee explores the search space, obtains a new solution, improves it, and then returns to the nest.

B) Backward pass: After the bees return to the nest, they share information about their solutions.
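The two phases above can be sketched as a minimal minimization loop. This is a rough sketch, not the paper's implementation: the population size, perturbation widths, and recruitment probability are illustrative assumptions.

```python
import random

def bco_minimize(f, lo, hi, n_bees=10, n_iters=50, seed=0):
    """Minimize f over [lo, hi] with a simple forward/backward-pass BCO loop."""
    rng = random.Random(seed)
    bees = [rng.uniform(lo, hi) for _ in range(n_bees)]
    best = min(bees, key=f)
    for _ in range(n_iters):
        # Forward pass: each bee explores near its current solution and
        # keeps the new solution only if it is an improvement.
        for i, b in enumerate(bees):
            cand = max(lo, min(hi, b + rng.uniform(-0.5, 0.5)))
            if f(cand) < f(b):
                bees[i] = cand
        # Backward pass: back at the nest, solutions are compared; bees with
        # worse solutions may abandon them and follow the best bee (recruitment).
        best = min(bees + [best], key=f)
        for i, b in enumerate(bees):
            if f(b) > f(best) and rng.random() < 0.3:
                bees[i] = best + rng.uniform(-0.1, 0.1)
    return best

# Example: minimize (x - 2)^2; the returned solution should be near x = 2.
x = bco_minimize(lambda x: (x - 2.0) ** 2, -10, 10)
```

The forward pass here is a local random search, and the backward pass implements recruitment by probabilistically moving poorer bees toward the best-known solution.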

  3. RELATED WORK

Consiglio Nazionale delle Ricerche et al. (2012) describe work on cloud platforms over the last few decades. According to them, the general migration issue arises when data is not secure on one platform; the question then becomes whether the data can be transferred, together with its architecture, from one end to another. They propose that using TCP/IP to identify the server to which the data will be migrated, and configuring it against the server from which the data is migrated, could make a difference to the migration, but they do not discuss how an existing architecture allows the second server to be configured into itself [5].

Diva Agawam discusses server compatibility. With the popularization of embedded-system technology and the Internet, basic networks are no longer limited to PC equipment: traditional Ethernet fields are being infiltrated by embedded equipment, and besides PCs, several embedded devices are present as nodes. A user can easily consult the relevant information if he has web server access permission, and the administrator can easily manage and validate the equipment, but accessing it over IP remains a great challenge [6].

R. Suchitra notes that in a cloud environment, server consolidation of virtual machines is necessary for cost cutting and energy conservation, and it can be achieved through live migration. For server consolidation, a modified bin packing algorithm is proposed that reduces the instantiation of new servers and avoids unnecessary migrations. The algorithm is simulated over multiple test cases in Java. For live migration of virtual machines, ideas are taken from the First Fit Decreasing strategy [7].
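The bin packing idea referenced here can be illustrated with a plain First Fit Decreasing sketch; the server capacity and VM load figures below are made-up examples, not values from the cited work:

```python
def first_fit_decreasing(vm_loads, server_capacity):
    """Pack VM loads onto as few servers as possible using the FFD heuristic.

    Returns the number of servers instantiated."""
    servers = []  # remaining capacity of each instantiated server
    for load in sorted(vm_loads, reverse=True):  # largest VMs first
        for i, free in enumerate(servers):
            if load <= free:          # first server that still fits this VM
                servers[i] -= load
                break
        else:
            # No existing server fits: instantiate a new one.
            servers.append(server_capacity - load)
    return len(servers)

# Example: five VM loads (as fractions of one server's capacity).
n_servers = first_fit_decreasing([0.5, 0.7, 0.3, 0.2, 0.4], server_capacity=1.0)
```

Sorting in decreasing order before packing is what distinguishes FFD from plain First Fit and tends to reduce the number of servers needed.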

Jayson Tom Hilter describes the SOAP protocol in his own words. SOAP is an XML-based messaging framework designed specifically for exchanging formatted data over the Internet, for example sending complete documents using request and reply messages. It is unaffected by differences in operating system, programming language, or distributed computing platform. A more efficient way was needed to describe the messages and how they are communicated: WSDL (Web Services Description Language) is a specific form of XML Schema, implemented by Microsoft and IBM, for defining an XML message, its operations, and the protocol mapping of a web service used with SOAP or another XML protocol [8].

Qura-Tul-Ain Khan and Said Nasser state that cloud computing is a computing platform residing in large data centers. Delivering cloud computing resources raises various problems, such as privacy, security, access, regulations, reliability, and electricity. In every field, cloud computing is able to address servers so as to fulfill a wide range of needs [9].

  4. RESULTS

The proposed architecture migration system has been implemented in Visual Studio 2010, and the performance of the database and architecture migration system is analyzed and discussed. At least two servers are involved in the data migration. The architecture is migrated using the XAML language pattern, avoiding the time delay of data migration and ensuring security analysis of the data being migrated. The purpose of this work is fulfilled when the data is migrated to another platform along with its architecture. To attain this goal, a mid-level XAML architecture is drawn that is compatible with both servers. In the process, the middle server first analyzes the architecture of the source server from which the data is to be migrated and generates XAML for it. Because XAML is one of the most lightweight languages and is supported by all other platforms, it is easy for the second server to adopt. The middle server then amends the local XAML according to the architecture that has to be migrated to the next server. Once the second XAML is generated, the TCP/IP protocol service, together with the SQL query injector, transfers the XAML from one end to the other and migrates the architecture completely.
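A minimal sketch of the middle server's role can be given in Python, using standard XML as a stand-in for the paper's XAML; the table name, column definitions, element names, and the DDL form are all hypothetical illustrations, not the paper's actual format:

```python
import xml.etree.ElementTree as ET

def schema_to_xml(table, columns):
    """Describe a source table's architecture as an XML document
    (standing in for the XAML the middle server would generate)."""
    root = ET.Element("Table", Name=table)
    for name, ctype in columns:
        ET.SubElement(root, "Column", Name=name, Type=ctype)
    return ET.tostring(root, encoding="unicode")

def xml_to_ddl(xml_text):
    """On the target server, rebuild the architecture from the XML
    description as a CREATE TABLE statement."""
    root = ET.fromstring(xml_text)
    cols = ", ".join(f'{c.get("Name")} {c.get("Type")}'
                     for c in root.findall("Column"))
    return f'CREATE TABLE {root.get("Name")} ({cols})'

# Example: the middle server describes a hypothetical Users table,
# and the target server reconstructs it.
xml_text = schema_to_xml("Users", [("Id", "INT"), ("Name", "VARCHAR(50)")])
ddl = xml_to_ddl(xml_text)
```

The design point is the same as in the paper's pipeline: a lightweight, platform-neutral intermediate description is generated from the source architecture and then translated into whatever the target server understands.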

The successful migration of the architecture is examined using three parameters:

  1. Accuracy
  2. Reliability
  3. Error rate

Accuracy: Accuracy is the closeness of measurement results to the true value. Here accuracy is expressed as a percentage, ranging from 0 to 100; attaining the highest accuracy means the data migrated successfully [10].

Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100 (1)

where,

TN is the number of true negative cases

FP is the number of false positive cases

FN is the number of false negative cases

TP is the number of true positive cases
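With these definitions, Eq. (1) can be computed directly; the confusion-matrix counts below are illustrative, not the paper's measured values:

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy as a percentage, per Eq. (1):
    (TP + TN) / (TP + TN + FP + FN) * 100."""
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

# Example with made-up counts out of 100 migrated items:
acc = accuracy(tp=90, tn=5, fp=3, fn=2)  # 95.0
```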

Fig.(a) Accuracy graph

As shown in the graph above, a maximum accuracy of 95% or more is attained. The proposed migration model achieves the best results for accuracy.

Reliability: Reliability is the ability of a component or system to perform its tasks successfully for a given time under the provided conditions. It is the consistency and validity of test results, determined through statistical methods after repeated trials, without degradation or failure [11].

R(t) = e^(−λt), with λ = 1/m (2)

where,

R(t) = reliability at time t

e = Euler's number (≈ 2.718)

λ = failure rate

m = MTBF (mean time between failures)

t = time
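Under this exponential model, Eq. (2) can be evaluated directly; the time and MTBF values in the example are illustrative, not measurements from the paper:

```python
import math

def reliability(t, mtbf):
    """R(t) = e^(-lambda*t) with failure rate lambda = 1/mtbf, per Eq. (2)."""
    return math.exp(-t / mtbf)

# Example: probability of surviving 10 hours with an MTBF of 100 hours.
r = reliability(t=10, mtbf=100)  # e^(-0.1), roughly 0.905
```

Note that R(0) = 1 (certain survival at time zero) and R(t) decays toward 0 as t grows, as the exponential form requires.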

Fig. (b) Reliability graph

As shown in the graph above, a maximum reliability of 93% or more is attained. The proposed migration model achieves the best results for reliability.

Error Rate: An error rate is a deviation from accuracy or correctness. A ‘mistake’ is an error caused by a fault, the fault being misjudgment, carelessness, or forgetfulness [12].

(3)

where Eb/N0 is the energy per bit to noise power spectral density ratio, or Es/N0 the energy per modulation symbol to noise power spectral density ratio.
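Eq. (3) expresses the error rate in channel terms. As a simpler illustrative stand-in for migrated records, the error rate can also be computed as the misclassification rate, i.e. the complement of the accuracy in Eq. (1); the counts below are made up:

```python
def error_rate(tp, tn, fp, fn):
    """Misclassification rate as a percentage: the share of cases that
    were migrated incorrectly (FP + FN) out of all cases."""
    return 100.0 * (fp + fn) / (tp + tn + fp + fn)

# Example with made-up counts out of 100 migrated items:
err = error_rate(tp=90, tn=5, fp=3, fn=2)  # 5.0
```

By construction, accuracy and this error rate sum to 100% for the same counts.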

Fig.(c) Error Rate graph

As shown in the graph above, a minimum error rate of 5% is attained. In the proposed migration model, the error rate is very low.

As mentioned above, three parameters are evaluated for the proposed work: accuracy, reliability, and error rate. All three parameters achieve good results.

Accuracy (%)   Reliability (%)   Error rate (%)
95             92                8
94             93                6
93             91                7
92             90                5
95             93                6

Table I: Accuracy, reliability and error rate values (in %) calculated from the different data schemas that were migrated.

Fig. (d) Graph represents above table values per number of time execution

The figure above has two axes: the x-axis represents the number of times the execution takes place, and the y-axis represents the percentage for all three parameters.

  5. CONCLUSION

This research has great scope for reducing the load on the server to provide optimized results. The work done so far successfully migrates the generated architecture and its data to another server. A new approach is proposed based on the Bee Colony Optimization (BCO) technique and a GoDaddy server. The transfer accuracy is almost 90-95 percent. XAML is used for successful migration: because XAML is one of the most lightweight languages and is supported by all other platforms, it is easy for the second server to adopt. The error rate is very low, so the proposed approach works well for migration.

In future, this approach can be applied to systems with more than two servers involved in the migration. The current system does not measure the computation time elapsed in the transfer, so in future the time taken to transfer the data should be taken into consideration. Also, the transfer of data is limited: in the generation of the architecture system, no more than a fixed number of columns can be generated.
