SCSI Disks Are Manufactured Computer Science Essay


SCSI disks are manufactured to much stricter tolerances than ATA disks. The duty cycle for ATA drives is specified much lower than that of SCSI disks, which means the bearings and actuator-arm mechanics used in the higher-priced drives are rated for a longer useful life. SCSI disks are built to be hammered by read/write requests all day long, every day, for years; IDE drives are built to sit in a PC or laptop and be spun down when you go home.

Parallel SCSI uses a more complex bus than SATA, usually resulting in higher manufacturing costs. SCSI buses also allow several drives to be connected on one shared channel, whereas SATA allows one drive per channel unless a port multiplier is used. Serial Attached SCSI (SAS) uses the same physical interconnects as SATA, and most SAS host bus adapters (HBAs) also support SATA devices.

SATA 3 Gbit/s theoretically offers a maximum bandwidth of 300 MB/s per device, only slightly lower than the 320 MB/s rated for SCSI Ultra 320, but the Ultra 320 figure is the total for all devices on the bus. SCSI drives provide greater sustained throughput than multiple SATA drives connected via a simple (i.e. command-based) port multiplier, because of disconnect/reconnect and command aggregation. In general, SATA devices connect compatibly to SAS enclosures and adapters, whereas SCSI devices cannot be directly connected to a SATA bus.
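
As a quick sanity check on the 300 MB/s figure, here is a back-of-the-envelope calculation in Python; the only assumption is the 8b/10b line coding used by SATA, in which 10 transmitted bits carry 8 data bits.

    # Back-of-the-envelope check of the 300 MB/s figure for SATA 3 Gbit/s.
    # SATA uses 8b/10b line coding, so 10 transmitted bits carry 8 data bits.
    line_rate_bps = 3.0e9                     # 3 Gbit/s signalling rate
    data_rate_bps = line_rate_bps * 8 / 10    # strip the 8b/10b coding overhead
    data_rate_mbs = data_rate_bps / 8 / 1e6   # bits -> bytes -> MB/s
    print(f"{data_rate_mbs:.0f} MB/s")        # prints: 300 MB/s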

SCSI, SAS, and Fibre Channel (FC) drives are more expensive than SATA drives, so they are used in servers and disk arrays where the better performance justifies the additional cost. Note that, in general, the failure rate of a disk drive is related to the quality of its heads, platters and supporting manufacturing processes, not to its interface.

References:

1) http://searchstorage.techtarget.com

2) www.wikipedia.org

Part ii)

IBM 305 RAMAC (Random Access Method of Accounting and Control)

The 305 was a flexible, electronic, general purpose data processing machine that enabled businesses to record transactions as they occurred and concurrently reflect each entry in affected accounts. It maintained records on a real-time basis, provided random access to any record, eliminated peak loads, and could simultaneously produce output by either print or punched cards.

The 305 system consisted of:

- the IBM 305 Processing Unit (containing the magnetic process drum, magnetic core register, and electronic logical and arithmetic circuits);
- the IBM 370 Printer (an 80-position serial-output printer with a tape-controlled carriage);
- the IBM 323 Card Punch (similar to the IBM 523 Gang Summary Punch, providing 80 columns of output punching);
- the IBM 380 Console (containing the card feed, typewriter, keyboard, indicator lights and control keys);
- the IBM 340 Power Supply (supplying power for all components except the motors in the 350 Disk Storage Unit);
- a utility table adjacent to the console; and
- the IBM 350 Disk Storage Unit.

The 350 Disk Storage Unit consisted of the magnetic disk memory unit with its access mechanism, the electronic and pneumatic controls for the access mechanism, and a small air compressor. Assembled with covers, the 350 was 60 inches long, 68 inches high and 29 inches deep. It was configured with 50 magnetic disks containing 50,000 sectors, each of which held 100 alphanumeric characters, for a capacity of 5 million characters.


Disks rotated at 1,200 rpm, tracks (20 to the inch) were recorded at up to 100 bits per inch, and typical head-to-disk spacing was 800 microinches. The execution of a "seek" instruction positioned a read-write head to the track that contained the desired sector and selected the sector for a later read or write operation. Seek time averaged about 600 milliseconds.

In 1958, the 305 system was enhanced to permit an optional second 350 Disk Storage Unit, doubling storage capacity, and an optional additional access arm for each 350. With capacities of 5 million and 10 million characters, and the ability to be installed singly or in pairs, the 350 gave the 305 system storage capacities of 5, 10, 15 or 20 million characters.

More than 1,000 305 systems were built before production ended in 1961. The 305 RAMAC was one of the last vacuum tube systems designed by IBM.

IBM RAMAC in contrast with SATA:

The IBM 350 RAMAC hard drive from 1956 used fifty 24-inch platters to hold a whopping 3.75 MB of storage (5 million 6-bit characters). That is roughly the size of an average 128 kbit/s MP3 file, stored in a physical space that could hold two commercial refrigerators. The IBM 350 was used only by government and industrial customers, and it was obsolete by 1969.

Serial ATA (SATA) is a computer bus interface for connecting host bus adapters to mass storage devices such as hard disk drives and optical drives. Serial ATA offered several advantages over the older parallel ATA interface: reduced cable size and cost (seven conductors instead of 40), native hot swapping, faster data transfer through higher signalling rates, and more efficient transfer through an (optional) I/O queuing protocol.

SATA host adapters and devices communicate via a high-speed serial cable over two pairs of conductors. In contrast, parallel ATA (the redesignation for the legacy ATA specifications) used a 16-bit wide data bus with many additional support and control signals, all operating at much lower frequency. To ensure backward compatibility with legacy ATA software and applications, SATA uses the same basic ATA and ATAPI command-set as legacy ATA devices.

References:

1) http://www-03.ibm.com/ibm/history/exhibits/storage/storage_350.html

2) www.wikipedia.org

Part iii)

Comparison of hard disk drives (HDD) and solid state drives (SSD):

Speed
HDD: higher latency, longer read/write times, and fewer IOPS (input/output operations per second) than an SSD.
SSD: lower latency, shorter read/write times, and more IOPS than an HDD.

Heat, electricity, noise
HDD: uses more electricity to rotate the platters, generating heat and noise.
SSD: has no rotating parts, so it uses less power, generates far less heat, and makes no noise.

Defragmentation
HDD: performance worsens as files become fragmented, so the drive needs to be defragmented periodically.
SSD: performance is not affected by fragmentation, so defragmentation is unnecessary.

Mechanical nature
HDD: uses magnetism to store data on a rotating platter; a read/write head floats above the spinning platter reading and writing data, so the faster the platter spins, the faster the drive can perform.
SSD: has no mechanical arm; it relies on an embedded processor (the "controller") to handle reading and writing. The controller is a key factor in an SSD's speed: its decisions about how to store, retrieve, cache and clean up data determine the overall speed of the drive.

Failure rate
HDD: mean time between failures (MTBF) of 1.5 million hours.
SSD: mean time between failures (MTBF) of 2.0 million hours.

Boot-up time for Windows 7
HDD: around 40 seconds on average.
SSD: around 22 seconds on average.

References:

1) http://www.storagereview.com/ssd_vs_hdd

2) Texas Instruments http://www.ti.com/solution/solid_state_drive_internal_external

Part iv)

Disk Array cache:

With advanced read-ahead and write-back caching capabilities, disk array cache modules significantly improve I/O performance.

Read-ahead caching:

An adaptive read-ahead algorithm anticipates data needs and reduces wait time. It detects sequential read activity on single or multiple I/O threads and predicts when sequential read requests will follow. The algorithm then reads ahead from the disk drives, so that when the read request arrives, the controller retrieves the data from high-speed cache memory in microseconds rather than from the disk drive in milliseconds. This adaptive read-ahead scheme provides excellent performance for sequential small-block read requests.
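
A minimal sketch of the idea in Python follows; the disk object and its read_block(lba) method are assumptions made for illustration, not part of any real controller's firmware.

    # Hypothetical sketch of adaptive read-ahead: when recent requests are
    # sequential, prefetch the next few blocks into cache so that the
    # following reads are served from fast cache memory instead of the disk.
    class ReadAheadCache:
        def __init__(self, disk, prefetch_blocks=8):
            self.disk = disk                    # assumed to expose read_block(lba)
            self.cache = {}                     # lba -> data
            self.last_lba = None
            self.prefetch_blocks = prefetch_blocks

        def read(self, lba):
            sequential = self.last_lba is not None and lba == self.last_lba + 1
            self.last_lba = lba
            if lba in self.cache:               # cache hit: microseconds
                return self.cache.pop(lba)
            data = self.disk.read_block(lba)    # cache miss: milliseconds
            if sequential:                      # sequential stream detected
                for nxt in range(lba + 1, lba + 1 + self.prefetch_blocks):
                    self.cache[nxt] = self.disk.read_block(nxt)
            return data

A real controller adapts the prefetch depth to the observed access pattern; the sketch uses a fixed depth only to keep the idea visible.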

Write-back caching:

A controller with write-back caching can "post" write data to high-speed cache memory and immediately return completion status to the operating system, so the write operation completes in microseconds rather than milliseconds. While the write data is in the cache, subsequent reads of the same disk location are served from the cache (a "read cache hit"), and subsequent writes to the same location simply replace the data held in cache. This improves bandwidth and latency for applications that frequently write and read the same area of the disk.

In high-workload environments the write cache will typically fill up and remain full most of the time. The controller uses this opportunity to analyse the pending write commands and improve their efficiency. It can perform write coalescing, which combines small writes to adjacent logical blocks into a single larger write for quicker execution, and command reordering, which rearranges the execution order of the cached writes to reduce overall disk latency. With a larger amount of write cache memory, the Smart Array controller can store and analyse more pending write commands, increasing the opportunities for write coalescing and command reordering and delivering better overall performance.
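
The write-coalescing step lends itself to a short, hypothetical sketch (plain Python, not controller firmware): pending cached writes are sorted by logical block address and runs of adjacent blocks are merged into single larger writes.

    # Hypothetical sketch of write coalescing: pending cached writes to
    # adjacent logical blocks are merged into one larger write before the
    # cache is flushed to disk.
    def coalesce_writes(pending):
        """pending: list of (lba, data_block) tuples held in the write cache."""
        merged = []
        for lba, data in sorted(pending):           # reorder the commands by LBA
            if merged and lba == merged[-1][0] + len(merged[-1][1]):
                merged[-1] = (merged[-1][0], merged[-1][1] + [data])  # extend run
            else:
                merged.append((lba, [data]))        # start a new run
        return merged                               # each entry is one larger write

    # Example: five small writes become two disk operations.
    writes = [(107, "e"), (100, "a"), (101, "b"), (102, "c"), (106, "d")]
    print(coalesce_writes(writes))
    # [(100, ['a', 'b', 'c']), (106, ['d', 'e'])]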

Level 1 cache, often called primary cache, is a static memory integrated with the processor core that is used to store information recently accessed by the processor. Level 1 cache is often abbreviated as L1 cache. Its purpose is to improve data access speed when the CPU accesses the same data multiple times; the access time of Level 1 cache is therefore always faster than the access time of system memory.
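
A toy model can show why this matters; the access times and the eviction policy below are illustrative assumptions, not real hardware parameters.

    # Toy model of L1 caching: repeated accesses to the same addresses hit the
    # small fast cache instead of slower main memory.
    L1_TIME_NS, MEM_TIME_NS = 1, 100    # made-up access times for illustration
    CACHE_LINES = 4
    cache = {}                          # address -> value (tiny toy cache)
    total_ns = 0

    def load(address, memory):
        global total_ns
        if address in cache:
            total_ns += L1_TIME_NS          # hit: served from L1
            return cache[address]
        total_ns += MEM_TIME_NS             # miss: go to main memory
        if len(cache) >= CACHE_LINES:
            cache.pop(next(iter(cache)))    # evict the oldest line
        cache[address] = memory[address]
        return cache[address]

    memory = {a: a * 2 for a in range(16)}
    for _ in range(10):                     # the CPU reuses the same 3 addresses
        for a in (0, 1, 2):
            load(a, memory)
    print(f"total access time: {total_ns} ns")   # 3 misses + 27 hits = 327 ns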

References:

1) HP Smart Array controller technology

Solution 3:

Part i)

Snoopy bus cache coherence protocol:

Snoopy protocols apply only to small-scale bus-based multiprocessors, as they require a broadcast medium: each cache "snoops" on the bus and watches for transactions that affect it. Any time a cache sees a write on the bus for a line it holds, it invalidates that line in its own cache. Any time a cache sees a read request on the bus, it checks whether it has the most recent copy of the data and, if so, responds to the bus request.
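
A minimal sketch of this snooping behaviour, assuming a simple invalidate-on-write policy, is given below in Python. It is purely illustrative; real controllers implement snooping in hardware, usually with MSI/MESI-style line states.

    # Hypothetical sketch of snooping caches on a shared bus: on a remote
    # write a cache invalidates its own copy of the line; on a remote read it
    # supplies the data if it holds the most recent (dirty) copy.
    class SnoopyCache:
        def __init__(self, name):
            self.name = name
            self.lines = {}                     # address -> (data, dirty_flag)

        def snoop(self, op, address, requester):
            if requester is self or address not in self.lines:
                return None
            if op == "write":                   # another cache is writing:
                del self.lines[address]         # invalidate our copy
                return None
            data, dirty = self.lines[address]   # op == "read"
            return data if dirty else None      # supply the most recent copy

    class Bus:
        def __init__(self, caches):
            self.caches = caches                # every transaction is broadcast

        def broadcast(self, op, address, requester):
            replies = [c.snoop(op, address, requester) for c in self.caches]
            return next((r for r in replies if r is not None), None)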

Tradeoff of snoopy bus:

These snoopy bus-based systems are easy to build, but unfortunately as the number of processors on the bus increases, the single shared bus becomes a bandwidth bottleneck and the snoopy protocol's reliance on a broadcast mechanism becomes a severe scalability limitation. To address these problems, architects have adopted the distributed shared memory (DSM) architecture.

Distributed Shared Memory cache coherency protocol:

In a DSM multiprocessor each node contains the processor and its caches, a portion of the machine's physically distributed main memory, and a node controller which manages communication within and between nodes. Rather than being connected by a single shared bus, the nodes are connected by a scalable interconnection network. The DSM architecture allows multiprocessors to scale to thousands of nodes, but the lack of a broadcast medium creates a problem for the cache coherence protocol.

There are two major components to every directory-based cache coherence protocol:

- the directory organization

- the set of message types and message actions
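
To make these two components concrete, here is a hypothetical sketch of a bit-vector-style directory entry and the home node's handling of a read request. The state names and message types are assumptions for illustration, not those of any particular machine.

    # Hypothetical sketch of a directory-based protocol's two components:
    # (1) directory organization: one entry per memory line recording its state
    #     and the set of nodes (conceptually a bit vector) holding a copy;
    # (2) message types and actions: the home node answers requests with
    #     point-to-point messages instead of a bus broadcast.
    from dataclasses import dataclass, field

    @dataclass
    class DirectoryEntry:
        state: str = "UNCACHED"                    # UNCACHED, SHARED or EXCLUSIVE
        sharers: set = field(default_factory=set)  # node ids with a cached copy

    class HomeNode:
        def __init__(self, num_lines):
            self.directory = [DirectoryEntry() for _ in range(num_lines)]

        def handle_read(self, line, requester):
            """Return the messages the home node sends for a read request."""
            entry = self.directory[line]
            if entry.state == "EXCLUSIVE":
                owner = next(iter(entry.sharers))
                # 3-hop case: requester -> home, home forwards to the owner,
                # and the owner responds to the requester with the dirty data.
                messages = [("FORWARD_READ", owner, requester)]
            else:
                messages = [("DATA_REPLY", requester)]
            entry.sharers.add(requester)
            entry.state = "SHARED"
            return messages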

Tradeoffs in DSM:

The directory memory overhead is the ratio of the directory memory to the total amount of memory. Keeping this overhead low, and ensuring that it scales slowly with machine size, is a key concern for the designer. Some directory data structures may require more hardware to implement than others, have more state bits to check, or require traversal of linked lists rather than more static data structures.

Directories tend to have longer latencies (with a three-hop request/forward/respond sequence) but use much less bandwidth, since messages are point-to-point rather than broadcast. For this reason, many of the larger systems (more than 64 processors) use this type of cache coherence.
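
As a rough illustration of how the directory memory overhead grows with machine size, assume a full bit-vector directory with one presence bit per node for every 64-byte memory line (the parameters are assumptions chosen for illustration, not figures from any particular machine).

    # Hypothetical directory overhead for a full bit-vector directory:
    # one presence bit per node for every 64-byte memory line.
    line_size_bits = 64 * 8                     # data bits per memory line
    for nodes in (16, 64, 256, 1024):
        overhead = nodes / line_size_bits       # directory bits / data bits
        print(f"{nodes:5d} nodes: directory overhead = {overhead:.1%}")
    # 16 nodes: 3.1%,  64 nodes: 12.5%,  256 nodes: 50.0%,  1024 nodes: 200.0%

This linear growth is exactly why keeping the overhead low as the machine scales is a concern, and why alternatives such as the linked-list structures mentioned above exist.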

Solution 5:

Part i)

The Floyd-Warshall Algorithm is an efficient algorithm to find all-pairs shortest paths on a graph. That is, it is guaranteed to find the shortest path between every pair of vertices in a graph. The graph may have negative weight edges, but no negative weight cycles (for then the shortest path is undefined).

This algorithm can also be used to detect the presence of negative cycles: the graph contains one if, at the end of the algorithm, the distance from a vertex v to itself is negative.

Algorithm:

The Floyd-Warshall algorithm compares all possible paths through the graph between each pair of vertices. It is able to do this with only Θ(|V|³) comparisons. This is remarkable considering that there may be up to |V|² edges in the graph, and every combination of edges is tested. It does so by incrementally improving an estimate of the shortest path between two vertices, until the estimate is optimal.

Consider a graph G with vertices V, each numbered 1 through N. Further consider a function shortestPath(i, j, k) that returns the shortest possible path from i to j using vertices only from the set {1,2,...,k} as intermediate points along the way. Now, given this function, our goal is to find the shortest path from each i to each j using only vertices 1 to k + 1.

For each of these pairs of vertices, the true shortest path could be either (1) a path that only uses vertices in the set {1, ..., k} or (2) a path that goes from i to k + 1 and then from k + 1 to j. We know that the best path from i to j that only uses vertices 1 through k is defined by shortestPath(i, j, k), and it is clear that if there were a better path from i to k + 1 to j, then the length of this path would be the concatenation of the shortest path from i to k + 1 (using vertices in {1, ..., k}) and the shortest path from k + 1 to j (also using vertices in {1, ..., k}).

If w(i, j) is the weight of the edge between vertices i and j, we can define shortestPath(i, j, k) in terms of the following recursive formula. The base case is

    shortestPath(i, j, 0) = w(i, j)

and the recursive case is

    shortestPath(i, j, k) = min(shortestPath(i, j, k-1), shortestPath(i, k, k-1) + shortestPath(k, j, k-1)).

This formula is the heart of the Floyd-Warshall algorithm. The algorithm works by first computing shortestPath(i, j, k) for all (i, j) pairs for k = 1, then k = 2, and so on. This process continues until k = N, at which point we have found the shortest path between all (i, j) pairs using any intermediate vertices.

The algorithm is also powerful in that one can inspect the diagonal of the resulting distance matrix: the presence of a negative number there indicates that the graph contains at least one negative cycle.
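
A compact sketch of the algorithm in plain Python follows; the graph is assumed to be given as an adjacency matrix with float('inf') where no edge exists and 0 on the diagonal.

    # Floyd-Warshall: all-pairs shortest paths in Θ(|V|^3) time.
    def floyd_warshall(graph):
        n = len(graph)
        dist = [row[:] for row in graph]     # base case: shortestPath(i, j, 0)
        for k in range(n):                   # allow vertex k as an intermediate
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist

    def has_negative_cycle(dist):
        # A negative entry on the diagonal means some vertex can reach itself
        # with negative total weight, i.e. the graph has a negative cycle.
        return any(dist[v][v] < 0 for v in range(len(dist)))

    INF = float('inf')
    g = [[0,   3, INF,   7],
         [8,   0,   2, INF],
         [5, INF,   0,   1],
         [2, INF, INF,   0]]
    d = floyd_warshall(g)
    print(d[1][3])                 # 3: the shortest path 1 -> 2 -> 3
    print(has_negative_cycle(d))   # False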
