Commercial Network Storage Platforms Computer Science Essay


This paper provides an overview of key technologies that have evolved around data storage and storage networking, focusing on the system architectures of the different building blocks of storage networks. In recent years, enterprise data storage has seen explosive growth in demand, driven by increasingly sophisticated applications that generate richer and more numerous content data, and by an ever larger number of users and consumers of that content. The rapid advancement of networking technology in both the LAN and the WAN has enabled new applications that place large demands on data storage. This rapid growth of information content is fueled by a profound change in the underlying infrastructure of computer networks, which facilitates the acquisition, processing, exchange and delivery of information among processing units and users.

These new applications drive the data storage demand in the following areas:

  • Capacity - the amount of required data storage space keeps increasing. Historical data shows that the growth of enterprise data storage has surpassed the exponential growth rate projected by Moore's law.
  • Performance - the bandwidth for delivering storage content is growing to match the increased speed of computer processing, the speed of data communication networks, and the requirements of emerging applications such as multimedia.
  • Availability - as people and enterprises become more and more reliant on stored content, the reliability and availability of data storage systems and networks must be dramatically increased to prevent the severe consequences that may result from loss of data and loss of access to data. Mission-critical storage networks are required to achieve "five nines" (99.999%) availability (a short downtime calculation follows below) and the capability to recover from catastrophic disasters via mirroring and backup techniques that protect the content through geographic diversity.
  • Scalability - the data storage solution must not only satisfy current storage requirements, but also grow easily to address the increased demand of future applications.
  • Cost - the cost of ownership needs to be reduced. That includes not only the cost of the hardware, but also the cost of maintaining and managing the data storage.

Driven by the above requirements, various storage-networking technologies have undergone rapid adoption to become mainstream enterprise solutions. The following sections provide a brief introduction to the various storage models and technologies, along with an overview of the functional entities involved in building storage networks and the reference hardware architectures.
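To make the availability figures above concrete, the small calculation below (a sketch using plain arithmetic, not any vendor's tooling) converts an availability percentage into the downtime it permits per year:

```python
# Allowed annual downtime at a given availability level ("N nines").
MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525,960 minutes

def downtime_minutes_per_year(availability_percent: float) -> float:
    """Minutes of downtime per year permitted at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for label, pct in [("three nines", 99.9), ("four nines", 99.99), ("five nines", 99.999)]:
    print(f"{label} ({pct}%): {downtime_minutes_per_year(pct):8.2f} minutes/year")
# "Five nines" permits only about 5.3 minutes of downtime per year.
```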

Storage Models

Here we discuss three categories of data storage technologies:

  • Direct Attached Storage
  • Network Attached Storage
  • Storage Area Networks.

We compare the benefits of sharing storage resources over the network and the different schemes used to accomplish that sharing.

1. Direct Attached Storage

DAS is the simplest and most commonly used storage model, found in most standalone PCs, servers and workstations. A typical configuration consists of a computer that is directly connected to one or more HDDs. Standard buses are used between the HDDs and the computer, such as ATA, Serial ATA (SATA), SCSI or Fibre Channel (FC). Most of the bus cabling definitions allow multiple HDDs to be daisy-chained together on each HBA (host bus adapter), HCA (host channel adapter) or integrated interface controller on the host computer.

Direct Attached Storage is a widespread technology in enterprise networks. It is easy to understand, acquire and install, and is relatively cheap. It is well suited to attaching data storage resources to a server or a computer when administration, backup, capacity, high performance and high availability are not important requirements. For small enterprise networks and home PCs, DAS is still the dominant choice, as the low-end requirements for growth in capacity, reliability and performance can be easily addressed by advancements in hard disk and bus technologies.

The past few years have seen HDD capacity roughly double each year, while maintaining the low cost point of HDDs targeting the personal computer market. The advent of the Ultra-ATA, SATA, SATA-2, Serial Attached SCSI (SAS) and FC bus standards alleviates the performance bottleneck on the bus interface. The quality of HDDs has also much improved over the years. These technology advancements have helped DAS systems address the requirements of low-end data storage users.

DAS Block Diagram

DAS Architecture

The software layers of a Direct Attached Storage system are illustrated below. The DAS disk system is managed by the client OS. Software applications access data via file I/O system calls into the OS. The file I/O system calls are handled by the file system, which manages the directory data structure and the mapping from files to disk blocks in a logical disk space.

The volume manager manages the block resources located on one or more physical disks in the disk system and maps accesses to the logical disk space onto physical volume addresses. The disk system device driver (DSDD) ties the OS to the disk controller or HBA (Host Bus Adapter) hardware that is responsible for the transfer of commands and data between the client computer and the disk system. The file-level I/O initiated by the client application is thus mapped into block-level I/O transfers that occur over the interface between the client computer and the disk system.
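This layered mapping from file I/O to block I/O can be illustrated with a short sketch. The code below is a toy model, not OS code; the block size, the file extent map and the two-disk striping scheme are all assumptions chosen for the example:

```python
BLOCK_SIZE = 4096  # bytes per block (assumed)

class FileSystem:
    """Maps (file, offset) to logical block numbers: directory + extent map."""
    def __init__(self):
        self.extents = {"report.doc": range(100, 108)}  # file -> logical blocks

    def file_to_blocks(self, name, offset, length):
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        return list(self.extents[name])[first:last + 1]

class VolumeManager:
    """Maps logical block addresses to (physical disk, physical block)."""
    def __init__(self, num_disks=2):
        self.num_disks = num_disks

    def logical_to_physical(self, lba):
        return lba % self.num_disks, lba // self.num_disks  # simple striping

fs, vm = FileSystem(), VolumeManager()
for lba in fs.file_to_blocks("report.doc", offset=0, length=10000):
    disk, pba = vm.logical_to_physical(lba)
    print(f"logical block {lba} -> disk {disk}, physical block {pba}")
# The device driver would turn each (disk, pba) pair into a SCSI/ATA command.
```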

One of the key characteristics of DAS is the binding of storage resources to individual computers; the shortcomings of this binding become apparent when applications place higher demands on the data storage. DAS suffers from the following severe limitations. The storage capacity is limited by the number of HDDs supported by the bus: adding or removing a disk drive may disrupt access to all the disks on the SCSI chain, making the storage resource unavailable for the duration of the maintenance period, and the maximum capacity of a DAS system tops out when the SCSI bus is loaded with the maximum number of HDDs it supports. The efficiency of the storage resource is low, as the capacity is bound to a given computer. The distributed nature of the storage not only means more content replication, but also means that free resources on one computer cannot be used by another computer or server whose disk space is running low.
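A back-of-the-envelope calculation shows how quickly this capacity ceiling is reached. The figures are assumptions: a wide SCSI bus offers 16 IDs, one of which is taken by the HBA, and the 300 GB drive size is picked arbitrarily:

```python
# Illustrative ceiling for a single wide SCSI bus: 16 IDs, one used by the
# HBA, leaving 15 for drives. The 300 GB drive size is an arbitrary assumption.
MAX_DRIVES = 15
DRIVE_CAPACITY_GB = 300
print(f"Max DAS capacity on one bus: {MAX_DRIVES * DRIVE_CAPACITY_GB / 1000:.1f} TB")
# -> 4.5 TB, no matter how much space sits idle on neighbouring servers.
```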

The IT department of an enterprise has to constantly monitor the disk space usage of each and every computer, adding disks to individual computers/servers or moving data around manually to ensure that requests for disk space are satisfied. This quickly becomes an administrative nightmare as the number of systems in an enterprise grows. The availability of DAS storage content is also limited: any server failure results in the content on the attached storage resources becoming inaccessible. If the storage resource were decoupled from the server, a backup server could take control of the storage and provide access to the data.

The performance of Direct Attached Storage applications is limited by the processing speed of the individual server. As the content is only accessible through the attached server, parallel processing to share the workload among multiple servers is not possible. Maintenance of a large computer network consisting of DAS systems is tedious: to protect the data on direct attached storage systems, backup and recovery must be performed on each computer separately.

This is not only a time-consuming process that affects the performance of the computers, but it also requires a great deal of human intervention. Repairing failures on individual computers requires even more manual work. All these factors increase the total cost of ownership of DAS systems.

2. Network Attached Storage

After looking at the consequences of binding storage to individual computers in the DAS model, the benefits of sharing storage resources over the network become obvious. SAN and NAS are the two ways of sharing storage over the network. Network Attached Storage generally refers to storage that is directly attached to a computer network (LAN) through network file system protocols such as CIFS and NFS. The difference between the two is that network attached storage performs file-level I/O while a storage area network performs block-level I/O over the network.

At first glance, the distinction between block-level access and file-level access may appear to be a mere implementation detail. A file system ultimately resides on disk blocks: a file access command refers to a file name or file handle, which is translated into a sequence of block access commands on the physical disks. The essential difference between SAN and NAS is whether data is transferred across the network to the recipient directly as blocks, or as a file data stream that has been assembled from the data blocks.
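The contrast can be sketched by the shape of the requests themselves. The dictionaries below are simplified stand-ins for real NFS and SCSI messages, with invented field values:

```python
# File-level request (NAS): names a file and a byte range; the server's file
# system resolves the range to disk blocks. Field values are invented.
nas_read = {"op": "READ", "file_handle": "0xA11CE", "offset": 8192, "count": 4096}

# Block-level request (SAN): names a logical unit and a block range directly;
# the host's file system has already done the file-to-block translation.
san_read = {"op": "READ(10)", "lun": 0, "lba": 2048, "blocks": 8}

# Same data, different division of labour: with NAS the file-to-block mapping
# happens inside the storage device; with SAN it happens in the host.
for request in (nas_read, san_read):
    print(request)
```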

NAS Block Diagram

As the file-access model is built on a higher abstraction layer, it requires an extra layer of processing both in the host computer (the file system redirector) and in the network attached storage system, which must translate between file accesses and block accesses. The extra processing may slow processing speed and may add data transfer overhead across the network; both effects can be overcome as technology advances with Moore's law. The extra processing latency, however, directly impacts I/O throughput in many applications. Because block-level access does not require the extra layer of processing in the OS, it can achieve higher performance.

Network Storage Architecture


The main benefit of NAS, with its higher-layer abstraction, is ease of use. Many operating systems such as Linux and UNIX include support for NAS protocols such as NFS, and recent versions of Windows have introduced support for the CIFS protocol. Setting up a NAS system involves connecting the NAS device to the enterprise LAN and configuring the operating system on the workstations/servers to access the network attached storage filer.

Many benefits of shared storage can then be realized in a familiar LAN environment without introducing a new network infrastructure or new switching devices. File-oriented access also makes it easy to implement file sharing across multiple computer OS platforms.

NAS Architecture

An example of NAS is shown below. In this example, a number of computers and servers run a mixture of Windows and UNIX operating systems, and the NAS device attaches directly to the LAN and provides shared storage resources.


In the client system, application file I/O access requests are handled by the client OS in the form of system calls, identical to the system calls that would be generated in a direct attached storage system. The difference is in how the system calls are processed by the operating system. The system calls are intercepted by an I/O redirector layer that determines whether the accessed data is part of the remote file system or the locally attached file system. If the data is part of the DAS system, the system calls are handled by the local file system.

If the data is part of the remote file system, the redirector passes the commands to the NFS protocol stack, which maps the file access system calls into command messages for accessing the remote file servers, in the form of NFS or CIFS messages. These remote file access messages are then passed to the TCP/IP protocol stack, which ensures reliable transport of the messages across the network. The NIC driver ties the TCP/IP stack to the Network Interface Card, and the NIC provides the physical interface and media access control function for the LAN. In the NAS device, the Network Interface Card receives the Ethernet frames carrying the remote file access commands.

The NIC driver presents the datagrams to the TCP/IP stack, which recovers the original NFS or CIFS messages sent by the client system. The NFS file access handler processes the remote file commands from the NFS/CIFS messages and maps them into file access system calls to the file system of the NAS device. The NAS file system, volume manager and disk system device driver operate much as in a DAS system, translating the file I/O commands into block I/O transfers between the disk controller/HBA and the disk system that is either part of the NAS device or attached to it externally.
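The round trip just described can be condensed into a toy sketch. The message format below is invented for illustration and is far simpler than real NFS or CIFS; a socketpair stands in for the LAN between the client and the NAS device:

```python
import json
import socket

def redirector(path: str, local_mounts=("/home", "/var")) -> str:
    """Client-side I/O redirector: decide local file system vs. remote NAS."""
    return "local" if path.startswith(local_mounts) else "remote"

# A socketpair stands in for the LAN between the client and the NAS device.
client, server = socket.socketpair()

# Client side: the path is not locally mounted, so the redirector sends a
# file-level command across the network instead of issuing block I/O.
path = "/nas/projects/plan.txt"
assert redirector(path) == "remote"
client.sendall(json.dumps(
    {"op": "READ", "path": path, "offset": 0, "count": 64}).encode())

# NAS side: the file access handler parses the message; a real filer would
# now translate it into block I/O against its disks.
request = json.loads(server.recv(4096))
server.sendall(json.dumps(
    {"status": "OK", "data": f"<contents of {request['path']}>"}).encode())

print(json.loads(client.recv(4096)))
client.close(); server.close()
```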

It is important to note that the disk system can be a single disk drive, a number of disk drives clustered together in a daisy chain or a loop, an external storage rack, or even the storage resources presented by a SAN connected to the HBA of the NAS device. In all cases, the storage resources attached to the NAS device are accessed via the HBA or disk controller with block-level I/O.

3. Storage Area Network

SAN provides block-oriented I/O between the computer systems and the target disk systems. A SAN may use Fibre Channel or Ethernet (iSCSI) to provide connectivity between hosts and storage. In either case, the storage is physically decoupled from the hosts; the storage devices and the hosts become peers attached to a common SAN fabric that provides high bandwidth, longer reach, the ability to share resources, enhanced availability, and the other benefits of consolidated storage.

A SAN is often built on a dedicated network fabric, separate from the LAN, to ensure that the latency-sensitive block I/O SAN traffic does not interfere with traffic on the LAN. The example below shows a dedicated SAN connecting multiple application servers, database servers and NAS filers on one side, and a number of disk systems and a tape drive system on the other. The servers and the storage devices are connected together by the SAN as peers, and the SAN fabric ensures highly reliable, low-latency delivery of traffic among them.

SAN Block Diagram

Although it is possible to share the network infrastructure between LAN and SAN in an iSCSI environment, there are several reasons for maintaining the separation. First of all, the LAN and the SAN often exist in physically different parts of the enterprise network. The SAN is often restricted to connecting the servers and the storage devices, which are typically located close to each other in a centralized environment.

The LAN, in contrast, covers the connectivity between the servers and the desktop workstations or PCs, which spans a much wider area of the enterprise. Secondly, the traffic loads on the LAN and the SAN, and their Quality of Service requirements, are different. SAN traffic typically demands higher dedicated bandwidth and higher availability with lower latency, which is difficult to ensure in a LAN.

Additionally, the SAN may generate high bandwidth demand for applications such as backup and mirroring for sustained periods of time, which can easily disrupt the performance of LAN traffic when they share common network resources. Lastly, the SAN is often built on a different network protocol, such as Fibre Channel, rather than the prevalent LAN protocol of Ethernet.

Even when an iSCSI SAN runs over Ethernet technology, the SAN may still be separated from the LAN either physically or logically via VLANs to ensure the security and QoS of the SAN traffic. The SAN software architecture required on the computer systems (servers) is essentially the same as the software architecture of a DAS system.

The key difference is that the disk controller driver is replaced by either the Fibre Channel protocol stack or the iSCSI/TCP/IP stack, which provides the transport function for block I/O commands to the remote disk system across the SAN. Using Fibre Channel as an example, the block I/O SCSI commands are mapped into Fibre Channel frames at the FC-4 layer (FCP), while the FC-2 and FC-1 layers provide the signaling and physical transport of the frames via the HBA driver and the HBA hardware.
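The encapsulation can be sketched as follows. The READ(10) CDB layout is the standard SCSI one, but the FCP payload and FC-2 frame formats below are heavily simplified placeholders rather than the real wire formats:

```python
import struct
import zlib

def scsi_read10(lba: int, blocks: int) -> bytes:
    """SCSI READ(10) CDB: opcode 0x28, flags, 4-byte LBA, group, 2-byte length, control."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def fcp_cmnd(lun: int, cdb: bytes) -> bytes:
    """Simplified FCP command payload: LUN + flags + padded CDB (real FCP_CMND differs)."""
    return struct.pack(">Q4s16s", lun, b"\x00\x00\x00\x01", cdb.ljust(16, b"\x00"))

def fc2_frame(src_id: int, dst_id: int, payload: bytes) -> bytes:
    """Toy FC-2 framing: destination/source port IDs, payload, CRC-32 trailer."""
    header = struct.pack(">II", dst_id, src_id)  # real FC headers carry far more
    body = header + payload
    return body + struct.pack(">I", zlib.crc32(body))

# A READ of 8 blocks starting at LBA 2048, addressed to LUN 0 on the target.
frame = fc2_frame(0x010200, 0x010300, fcp_cmnd(lun=0, cdb=scsi_read10(2048, 8)))
print(f"{len(frame)} bytes on the wire:", frame.hex())
```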

As the abstraction of storage resources is provided at the block level, applications that access data at the block level can work in a SAN environment just as they would in a DAS environment. This property is a key benefit of the SAN model over NAS, as some high-performance applications, such as database management systems, are designed to access data at the block level to improve their performance.

Such applications have no difficulty migrating to a SAN model, where their proprietary file systems can live on top of the block-level I/O supported by the SAN. In the SAN storage model, the operating system views storage resources as SCSI devices; therefore, the SAN infrastructure can directly replace direct attached storage without significant change to the operating system.

SAN Architecture

Fibre Channel is the first network architecture to enable block-level storage networking applications. The Fibre Channel standards are developed in the T11 committee of the National Committee for Information Technology Standards (NCITS, now INCITS). The standards define a layered architecture for the transport of block-level storage data over a network infrastructure.

The protocol layers are numbered FC-0 to FC-4, corresponding to the first four layers of the OSI network model: physical (FC-0), data link (FC-1, FC-2), network (FC-3), and transport (FC-4). The FC-0 layer defines the specifications for media types, distances, and electrical and optical signal characteristics. The FC-1 layer defines the mechanism for encoding/decoding data for transmission over the intended media and the command structure for accessing the media.


The FC-2 layer defines how data blocks are segmented into frames, how frames are handled according to their class of service, and the mechanisms for flow control and frame data integrity. The FC-3 layer defines facilities for data encryption and compression, and the FC-4 layer is responsible for mapping the SCSI-3 protocol (FCP) and other higher-layer protocols/services onto Fibre Channel.
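FC-2 segmentation is easy to picture in code. The sketch below assumes the 2112-byte maximum data field of a Fibre Channel frame and reduces the sequence bookkeeping to a few fields:

```python
MAX_PAYLOAD = 2112  # maximum data field of a Fibre Channel frame, in bytes

def segment(data: bytes, seq_id: int):
    """Split one block-I/O transfer (an FC 'sequence') into FC-2 frames."""
    frames = []
    for seq_cnt, off in enumerate(range(0, len(data), MAX_PAYLOAD)):
        chunk = data[off:off + MAX_PAYLOAD]
        frames.append({"seq_id": seq_id, "seq_cnt": seq_cnt,
                       "payload_len": len(chunk),
                       "end_of_sequence": off + MAX_PAYLOAD >= len(data)})
    return frames

for frame in segment(bytes(8192), seq_id=7):  # one 8 KB write
    print(frame)
# 8192 bytes -> three full 2112-byte frames plus one 1856-byte final frame.
```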

The FC protocol thus provides a purpose-built mechanism for transporting block-level storage data across a network efficiently at gigabit rates. Because the SAN model can replace direct attached storage without changes to the OS, the emergence of Fibre Channel enabled the rapid and efficient adoption of SAN systems. As a new network technology, however, Fibre Channel's deployment faces the challenge of requiring a new, dedicated networking infrastructure to be built for the storage application. Like any new networking technology, Fibre Channel products take significant time and effort to reach full interoperability, and until then early adopters struggle with interoperability difficulties.

The Fibre Channel protocol also introduces a new set of concepts, terminology and management issues that network administrators and users have to learn. Collectively, these factors have formed barriers to mainstream adoption of the technology. Fibre Channel usage has been limited mostly to large corporations that have a pressing need for the higher performance it offers and can afford the price of adopting it early.

As the technology and products gradually reach higher maturity, affordability and availability, adoption of Fibre Channel will expand toward more mainstream applications. Meanwhile, IP storage technologies such as iSCSI and FCIP have emerged to take advantage of the ubiquity of IP and Ethernet network infrastructures in LAN, MAN and WAN environments.

Ethernet dominates the enterprise network as the lowest-cost, most widely deployed, and best understood technology in the world. IP has become the dominant protocol in the wide area data network, providing connectivity from anywhere to anywhere globally, and the TCP/IP protocol stack is the de facto foundation that most application software is built on. It is only natural, then, to combine these technologies to transport block-level storage I/O over the existing network infrastructure.

The benefits of using these common technologies are manifold. First, these technologies are very mature: the R&D investment and years of effort that have been put into them are unsurpassed, and the result is TCP/IP/Ethernet products that are mature, interoperable, and well supported in every operating system. Second, the scale of deployment has helped lower the cost of TCP/IP/Ethernet networking devices, and riding the low cost curve of mass-market products helps reduce the cost of SAN infrastructure.

Not only is it easier to put the SAN together, but it is also cheaper to manage a network infrastructure based on mainstream technologies. FCIP provides a means of encapsulating Fibre Channel frames within TCP/IP for linking Fibre Channel SAN islands over a wide area IP network. Each Fibre Channel SAN island is connected to the IP backbone via an FCIP device that is identified by an IP address. The FCIP devices establish TCP/IP connections among each other, and the FCIP tunnels run over these TCP connections.
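Conceptually, the tunnel reduces to prepending an encapsulation header to each Fibre Channel frame and writing it onto an established TCP connection. The header in this sketch is a placeholder, not the real FCIP encapsulation defined in RFC 3821:

```python
import socket
import struct

def fcip_encapsulate(fc_frame: bytes) -> bytes:
    """Prepend a toy tunnel header (protocol id, frame length) to a raw FC frame."""
    return struct.pack(">HH", 1, len(fc_frame)) + fc_frame

# Two gateways connected back-to-back stand in for the TCP tunnel over the WAN.
gw_a, gw_b = socket.socketpair()
gw_a.sendall(fcip_encapsulate(b"<raw Fibre Channel frame bytes>"))

proto_id, length = struct.unpack(">HH", gw_b.recv(4))
print(f"gateway B: tunneled FC frame, {length} bytes follow")
print("payload:", gw_b.recv(length))
gw_a.close(); gw_b.close()
```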

iSCSI uses TCP/IP to provide reliable transport of SCSI commands directly over an IP/Ethernet network between SCSI initiators and SCSI targets. Because each host and storage device supports the Ethernet interface and the iSCSI stack, these devices can plug directly into an Ethernet or IP network infrastructure. From the network's perspective, iSCSI devices are simply regarded as normal IP or Ethernet nodes.

The network infrastructure need not be different from a normal enterprise LAN or IP network. Significant cost savings can be achieved by constructing an iSCSI SAN from mass-market enterprise Ethernet switching devices. Additionally, the SAN infrastructure can be seamlessly integrated with the enterprise LAN to further reduce the cost of building and managing the network. In the iSCSI protocol layers, the iSCSI layer maps SCSI commands directly into TCP packets.

As in FCIP, TCP ensures reliable transport of packets from source to destination, and iSCSI also specifies the IPsec protocol for data security. At the data link and physical layers, Ethernet or any other protocol that can carry IP traffic may be used on the physical media.
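A sketch of the iSCSI idea: wrap a SCSI CDB in a 48-byte header and hand it to TCP. The opcode value matches the real iSCSI SCSI Command PDU, but the remaining field offsets below are simplified assumptions:

```python
import struct

def iscsi_command_pdu(lun: int, cdb: bytes, task_tag: int) -> bytes:
    """48-byte header in the spirit of an iSCSI SCSI Command PDU (layout simplified)."""
    assert len(cdb) <= 16
    return struct.pack(">B3xI8sI12x16s",
                       0x01,                      # opcode: SCSI Command
                       0,                         # data segment length (none here)
                       struct.pack(">Q", lun),    # logical unit number
                       task_tag,                  # initiator task tag
                       cdb.ljust(16, b"\x00"))    # the SCSI CDB itself

# A READ(10)-shaped CDB, ready to be handed to the TCP/IP stack for transport.
pdu = iscsi_command_pdu(lun=0, cdb=b"\x28" + bytes(9), task_tag=0x1234)
print(len(pdu), "header bytes, carried as ordinary TCP payload")
```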

iFCP is a gateway-to-gateway protocol for providing Fibre Channel fabric services to Fibre Channel end devices over a TCP/IP network; an iFCP network emulates the services provided by a Fibre Channel switch fabric over TCP/IP. Fibre Channel end nodes, including hosts and storage devices, can be directly connected to an iFCP gateway via Fibre Channel links and operate as if they were connected by a virtual Fibre Channel fabric. A normal Ethernet/IP network connects the iFCP gateways together to present the abstraction of a virtual Fibre Channel fabric. These design considerations establish iFCP as a migration path from Fibre Channel SANs to IP SANs.

Overview of Network Storage Architectures

Scalability, Advantages and Disadvantages of Storage Models

Direct Attached Storage

Scalability

DAS scales from low-end PC applications up to high-end, high-performance mainframe applications and certain I/O-intensive, high-performance OLTP database applications.

Advantages

Independent of any network devices.

Disadvantages

Because it is independent of the network, its contents cannot be shared with or sent to any other computer on the network.

Network Attached Storage

Scalability

Data sharing and consolidated file serving applications across Windows and UNIX operating systems, scientific and technical applications, intranet and internet applications, e-commerce and similar applications.

Advantages

File sharing is easy, and data availability increases when the device provides built-in RAID and clustering capabilities.

Disadvantages

NAS fails to deliver when loaded with too many users, too many I/O operations, or CPU-intensive workloads. Certain NAS devices also fail to expose well-known services that are typical of a file server.

Storage Area Network

Scalability

SAN is used to provide transactional access to data that requires high-speed, block-level access to the hard drives, such as databases, email servers and heavily used file servers.

Advantages

It simplifies storage administration and adds flexibility, including the ability for servers to boot from the SAN itself. It offers a more effective disaster recovery process and can span distant locations containing a secondary storage array. It enables storage replication implemented by disk array controllers, by server software, or by specialized SAN devices.

Disadvantages

It is not compatible with a lot of applications, and it brings higher initial cost, manageability challenges, security concerns and contention for resources.

Shift from DAS to SAN and NAS

As DAS vendors were consumed by the never-ending task of supporting every version of UNIX and NT for their storage products, both Dataquest and IDC recently began projecting explosive growth for SAN and NAS products as a percentage of the total storage market. This projection rests on four factors:

· True data sharing between heterogeneous clients is possible with NAS but not with DAS.

· Trends to re-centralize storage to reduce management costs.

· As network speeds increase, the performance gap that exists between NAS and DAS for many applications narrows.

· Strong NAS standards result in simpler installation and lower management cost.

Confusion over NAS and SAN

Server vendors have implemented a variety of specialized hardware and software schemes to encourage the sale of storage alongside their processors, and general purpose direct attached storage vendors have followed the same strategy. Because the clear NFS/CIFS standards make it easy for competitors to make inroads, many of these vendors do not support NAS and have instead developed their own proprietary visions of network storage, alternatively called a Storage Network or a Storage Area Network.

Vendors have developed these proprietary visions to bring the benefits of NAS to their users without losing control of the storage and networking sale. The SAN initiative is a loose confederation of vendors attempting to move beyond the weak standards of the past in bringing the benefits of networking to storage architecture. The anticipated benefits are as follows:

· Data Sharing.

· Storage resource sharing/pooling.

· LAN and server-free backup.

· Interoperability of heterogeneous servers and storage.

· Easy storage resource management.

EMC has announced its proprietary Enterprise Storage Network (ESN) and Compaq has announced its Enterprise Network Storage Architecture (ENSA). As with UNIX and SCSI, SAN is likely to splinter into a variety of similar architectures that lack strong standards. This could create a major obstacle to successful integration and data sharing between heterogeneous platforms.

NAS and SAN are both valid technologies and serve important roles with different objectives. However, because of the complexity arising from the many varieties of SCSI, UNIX and proprietary SANs, only a small percentage of storage is actually connected to SANs. In a recent survey of UNIX and NT sites with over 5,000 employees by ITcentrix Inc., only 7 percent of enterprises had actually implemented SAN in production, compared to about 48% that had implemented NAS.

File System:

Current Technologies

· File system protocols (NFS, CIFS, GFS, etc.)

· Database systems (Oracle, MS SQL, Sybase, etc.)

· Metadata controllers, directors

Upcoming Technologies

· Object-oriented storage

· Integrated Storage and information management tools

· Storage virtualisation

Future Network Storage Users

Wide-Area High Performance users

· Disaster Recovery, Caching, Mirroring, Global CRM

SME/SMI users

· SMEs can scale to support their larger enterprise customers

Mobile/Wireless users

· M-commerce, WAP, GPRS, PDA

Consumer/SOHO users

· Home Storage networks, remote workers

The Status of NAS and SAN Integration

Today's Fibre Channel SANs have no clear way to implement high-performance, scalable Type III UNIX and Windows file sharing without agreement on a universal file system such as the one already implemented by NAS vendors.

FC-SANs may be eclipsed by SAN architectures based on the IP networking transport protocol. Most likely this transport will be Ethernet (TCP/IP), although we view this issue with great uncertainty. We call this potential architecture Ethernet SAN, or E-SAN, to differentiate it from SANs based on Fibre Channel (FC-SAN).

Only hybrid variations of NAS and SAN integration can be deployed today

In spite of vendor hype from EMC, MTI and others claiming to offer integrated NAS and SAN architectures, it is not yet possible to deploy an optimally integrated NAS and SAN architecture. Most analysts predict that NAS and SAN will eventually become integrated.

Only NAS provides non-IT department business benefits

In light of the demonstrated business advantages of many applications for NAS-based UNIX and Windows Type III data sharing, an enterprise can safely deploy a NAS architecture with optimal Type III data sharing characteristics today, rather than waiting to realize these considerable business benefits.

Commercial Network Storage Platforms

Manufacturer   Product Family      Start Capacity (TB)   Max Capacity (TB)   Block Protocols   File Protocols
Adaptec        Snap Server         0.2                   44                  FC, iSCSI         AFP, CIFS, FTP, HTTP, NFS
Dell/EMC       AX                  0.7                   45                  iSCSI             -
EMC            CLARiiON            0.3                   353                 FC, iSCSI         -
HP             XP                  2.7                   851                 FC, iSCSI         -
IBM            System Storage DS   3.6                   512                 FC                -
EqualLogic     PS                  3.5                   84                  iSCSI             -
NetApp         FAS                 4                     176                 FC, iSCSI         CIFS, NFS, FTP, WebDAV
Pillar         Axiom               6.5                   832                 FC, iSCSI         CIFS, NFS, FTP
Sun            Sun Fire X4500      12                    24                  iSCSI             CIFS, NFS, FTP, WebDAV

Network Appliance

F700: single processor, single system bus

The Network Appliance F700 was designed for low cost, low-end, non-mission-critical applications, and it is an appropriate choice for many users. It is a single-processor, single-bus design with a single parity disk, using RAID 4. Network Appliance may announce an Intel-based SMP design to allow greater performance and scalability than its current product.

EMC Celerra: clustered network file server

EMC's Symmetrix was originally designed for mainframe data, but here the Celerra provides a clustered network file server front end to the Symmetrix storage subsystem. Symmetrix and Celerra are appropriate for open-systems data, co-located mainframe data, and users requiring remote mirroring for disaster tolerance.

EMC's Celerra hybrid design has certain restrictions compared to NetApp or Auspex when it comes to CIFS and NFS data sharing, and should be viewed more as partitioned storage than as an optimized file sharing and data sharing product.

Conclusion

Storage Systems are becoming the dominant investment in corporate data centers and a crucial asset in e-commerce, making the rate of growth of storage a strategic business problem and a major business opportunity for storage vendors. In order to satisfy user needs, storage systems should consolidate resources, deploy quickly, be centrally managed, be highly available, and allow data sharing.

It should also be possible to distribute them over global distances, make them secure against external and internal abuse, and scale their performance with capacity. Putting storage in specialized systems and accessing it from clients across a network provides significant advantages for users.

Moreover, the most apparent difference between the NAS and SAN versions of network storage, the use of Ethernet in NAS and Fibre Channel in SAN, is not a core difference and may soon not even be a recognizable difference. Instead, we may have NAS servers that look like disks, disks that connect to and operate on Ethernet, arrays of disk bricks that, as far as the user is concerned, function as one big disk, and arrays of smart disks that verify every command against the rights of individual users.

