Dynamic random-access memory (DRAM) has been the only high-density RAM to maintain its popularity over the last 30 years, despite repeated attempts to replace it with other technologies. Still contributing to advances in IT, memory capacity has increased exceptionally over the years, from the 1-Kbit level in 1970 to the 1-8-Gbit level today, as shown in Figure 1.
Dynamic RAM stores the state of each binary digit as a charge on a capacitor, with a transistor acting as the access switch. This is the most popular method because of its ease of manufacture and cost effectiveness. Still, DRAM is more complex to operate than static RAM, because the stored charge easily leaks away and must therefore be topped up at frequent intervals, which currently means every few milliseconds.
This means that extra electronics must be introduced into the system to carry out these refresh operations, but this is of hardly any consequence to the users of such systems. What is of consequence is that, because of this topping-up cycle, dynamic RAM is slower to access than static RAM; typical DRAM access times are currently about 50-60 ns. Although the DRAM cell stores a single bit, i.e. 0 or 1, it is actually an analogue device: the capacitor in the cell can store any charge within a range, and a threshold value determines whether that charge is interpreted as a 1 or a 0.
The structure of a DRAM helps to explain some of the features that distinguish DRAMs from other devices. A typical DRAM is made up of an array of memory cells with an equal number of rows and columns, each cell holding one bit. A bit is addressed by using half of the address bits (the most significant half) to select a row and the other half to select a column.
Figure 2: Dynamic RAM cell
Each DRAM memory cell is very simple: it consists of a capacitor and a MOSFET switch. A DRAM cell is therefore much smaller than an SRAM cell, which needs a flip-flop (two cross-coupled gates, typically about six transistors). When a bit in the memory array is selected, all of the capacitors in the selected row are connected to a line shared by all the memory cells in a column. The following diagrams show this (Figure 3 uses switches in place of transistors).
INTERFACE TO PC BUS
In a DRAM, the cell transistors act as switches: they close when voltage is applied on an address line, allowing current to flow, and open when no voltage is applied. The address line, also known as the word line, signals the transistor to open or close.
During a write operation, a voltage signal representing the bit is applied to the bit line. A signal is then applied to the address line, closing the transistor; the voltage on the bit line is transferred to the capacitor and stored there.
For the read operation, the address line is selected first, the transistor turns on, and the charge stored on the capacitor is fed out through the bit line to a sense amplifier. The charge stored in each cell capacitor is very small, so each column line is connected to a "sense amplifier" that amplifies the voltage present on the line. The sense amplifier also compares the capacitor's voltage to a reference value to determine whether the cell contains a logic 1 or a logic 0. Reading the cell discharges the capacitor, so to restore the charge the write operation must take place again.
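The destructive-read-then-restore behaviour described above can be sketched in a few lines of code. This is a purely illustrative model, not any real device's behaviour: the class name, the leak rate, and the threshold value are all made up, with the capacitor's charge modelled as a number between 0 and 1.

```python
# Hypothetical model of one DRAM cell: charge leaks over time, a read
# compares the remaining charge against a reference and then restores it.
V_REF = 0.5  # sense-amplifier reference: above -> logic 1, below -> logic 0

class DramCell:
    def __init__(self):
        self.charge = 0.0          # capacitor charge (normalised 0..1)

    def write(self, bit):
        # word line asserted: the transistor closes and the bit line
        # drives the capacitor to full or zero charge
        self.charge = 1.0 if bit else 0.0

    def leak(self, amount=0.2):
        # charge gradually leaks away between refreshes
        self.charge = max(0.0, self.charge - amount)

    def read(self):
        # reading dumps the charge onto the bit line (destructive read)...
        bit = 1 if self.charge > V_REF else 0
        # ...so the sensed value is immediately written back (restore)
        self.write(bit)
        return bit

cell = DramCell()
cell.write(1)
cell.leak()                  # some charge lost, still above the threshold
assert cell.read() == 1
assert cell.charge == 1.0    # the read restored full charge
```

Note how `read` ends by calling `write` again: this mirrors the restore step the text describes, and it is also why a plain read doubles as a refresh of the cells in the selected row.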
Like other integrated-circuit products, semiconductor memory comes in packaged chips, each containing an array of memory cells. One of the main design issues for such memory is the number of bits of data that can be read or written at a time. Note also the way in which the physical arrangement of cells in an array maps onto what the processor sees as the logical arrangement of words in memory: an array is made up of a set of words, each having a specific number of bits.
Taking as an example a typical organization of a 16-Mbit DRAM, the memory array is logically organized as 4 square arrays of 2048 by 2048 elements. Row and column lines connect the elements of the array: row lines connect to the select terminal of each cell in their row, while column lines connect to the Data-In/Sense terminal of each cell in their column.
Address lines supply the address of the word to be selected. In this example, 11 address lines are needed to select one of 2048 rows. These 11 lines feed the row decoder, which has 11 input lines and 2048 output lines; the decoder logic activates exactly one of the 2048 outputs depending on the bit pattern on the 11 input lines.
An additional 11 address lines select one of 2048 columns of 4 bits per column. Four data lines are used for the input and output of 4 bits to and from a data buffer. On input (write), the bit driver of each bit line is activated for a 1 or 0 according to the value of the corresponding data line. On output (read), the value of each bit line is passed through a sense amplifier and presented to the data lines. The row line selects which row of cells will be used for reading or writing. Since only 4 bits are read or written at a time, multiple such DRAMs must be connected to the memory controller to transfer a full word of data to the bus.
Something worth noticing is that there are only 11 address lines (A0-A10), half the number you would expect for a 2048 x 2048 array. This saves on the number of pins. The 22 required address bits are passed through select logic external to the chip and multiplexed onto the 11 address lines: first, 11 address signals are presented to the chip to define the row address of the array, and then the other 11 address signals are presented for the column address. These signals are accompanied by row address select and column address select signals to provide timing to the chip. The write enable and output enable pins determine whether a write or a read operation is performed. Two other pins, not shown in the figure, are ground (Vss) and a voltage source (Vcc).
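The multiplexing of 22 address bits onto 11 pins can be sketched as a simple bit-split. The pin and array sizes come from the 2048 x 2048 example above; the function name is made up for illustration.

```python
# Splitting a 22-bit cell address into the row half (latched first, with
# the row-select signal) and the column half (latched second, with the
# column-select signal), sharing the same 11 physical pins.
ADDR_BITS = 11                   # shared address pins A0-A10
ROWS = COLS = 1 << ADDR_BITS     # 2048 rows, 2048 columns

def split_address(addr):
    """Return the (row, column) halves of a 22-bit cell address."""
    row = (addr >> ADDR_BITS) & (ROWS - 1)   # upper 11 bits
    col = addr & (COLS - 1)                  # lower 11 bits
    return row, col

row, col = split_address((1025 << ADDR_BITS) | 2)
assert (row, col) == (1025, 2)
```

The design trade-off is the one the text names: halving the pin count at the cost of needing two address-transfer phases (and the external select logic) per access.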
As can be seen from the previous figure, a DRAM also requires a refresh operation. As explained previously, a DRAM uses capacitors as its storage cells, and given a capacitor's properties, the charge stored in each cell leaks away over time. It is therefore necessary to refresh the value in each cell periodically. In typical DRAMs, each row must be refreshed every 16, 32, 64, or 128 ms.
Refreshing a row is similar to reading it, except that the data does not emerge from the columns. Note that in the read operation explained in the previous section, the data in each storage cell is refreshed as a side effect of reading it. In a refresh operation, each bit in the selected row is passed to its respective sense amplifier; each sense amplifier then amplifies the value on its bit lines and drives the refreshed value back into the storage cell.
The simplest technique to provide DRAM refresh is to include a device (such as a DMA controller or video display circuit) that accesses the RAM in such a way that all of the rows are accessed at least once during the minimum refresh time (typically every few tens of milliseconds). This is called RAS*-only refresh because it's not necessary to assert CAS* in order for the refresh operation to take place. Another technique is to add a circuit that periodically forces a cycle in which CAS* is asserted before RAS*. This is called CAS* before RAS* refresh.
Modern DRAMs have an internal refresh counter that cycles through the possible row values. On these DRAMs, the CAS* before RAS* operation triggers an internal row-refresh operation. The advantage of this type of refresh is that the refresh controller need only control RAS* and CAS*.
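The internal refresh counter can be modelled in a few lines. This is an illustrative sketch, not a real controller: the class and method names are invented, and the row count is taken from the 2048-row example used earlier.

```python
# Hypothetical model of a DRAM's internal refresh counter: each
# CAS-before-RAS cycle refreshes one row and advances the counter, so the
# external controller never has to supply row addresses itself.
NUM_ROWS = 2048

class RefreshCounter:
    def __init__(self):
        self.row = 0

    def cas_before_ras(self):
        refreshed = self.row                    # row refreshed this cycle
        self.row = (self.row + 1) % NUM_ROWS    # wrap around to row 0
        return refreshed

ctr = RefreshCounter()
rows = [ctr.cas_before_ras() for _ in range(NUM_ROWS + 1)]
assert rows[:3] == [0, 1, 2]
assert rows[NUM_ROWS] == 0   # counter wraps after covering every row
```

Issuing `NUM_ROWS` such cycles within the maximum refresh interval is exactly the controller's only remaining obligation under this scheme.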
Memory systems built from semiconductors are subject to errors, which can be either hard failures or soft errors. Hard failures are permanent physical defects: the affected memory cells cannot store data reliably, but become stuck at 0 or 1 or switch randomly between the two. Soft errors, by contrast, are random, non-destructive events that alter the contents of one or more cells without damaging the memory itself.
The majority of errors in DRAMs occur as a result of power-supply problems or background radiation, mainly alpha particles. Such particles are quite common because radioactive nuclei occur in small quantities in virtually all materials. Both types of error are obviously undesirable, which is why most modern memory systems include logic to detect and correct them.
Such errors can be tackled using redundant memory bits, implemented within DRAM modules, and memory controllers that exploit those bits. The extra bits are used to record parity and to enable corrupted data to be reconstructed by an error-correcting code (ECC). Parity alone allows the detection of all single-bit errors. The most common error-correcting code is the Hamming code, which allows single-bit error correction and, with one extra parity bit added, double-bit error detection.
The figure above uses Venn diagrams to illustrate the use of the Hamming code on 4-bit words. With three intersecting circles there are seven compartments. The 4 data bits are assigned to the inner compartments, and the remaining compartments are filled with parity bits. Each parity bit is chosen so that the total number of 1s in its circle is even; thus, because circle A includes three 1s, the parity bit in that circle is set to 1. Now, if an error changes one of the data bits, it is easily found: checking the parity bits reveals discrepancies in circles A and C but not in circle B, and only one of the seven compartments lies in A and C but not in B. The error can therefore be corrected by flipping that bit.
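The three-circle scheme just described is the Hamming(7,4) code, and it is short enough to sketch in full. The bit layout and function names below are one conventional illustrative choice, not the only one; each parity bit plays the role of one circle.

```python
# Minimal Hamming(7,4) encoder/decoder: three parity bits, each chosen so
# that its "circle" of bits has even parity. Codeword bit order is
# [p1, p2, d1, p3, d2, d3, d4] (positions 1..7).
def encode(d):
    """d: list of 4 data bits -> 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # circle covering positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4      # circle covering positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4      # circle covering positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct up to one flipped bit, return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # re-check first circle
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # re-check second circle
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # re-check third circle
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = encode(word)
code[4] ^= 1                 # corrupt one bit "in storage"
assert decode(code) == word  # the single-bit error is corrected
```

The syndrome plays the role of the circle check in the figure: the pattern of failed parity checks uniquely identifies which of the seven compartments holds the flipped bit.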
As used in many modern PCs, an ECC-capable memory controller can detect and correct single-bit errors in each 64-bit "word", and can detect (but not correct) two-bit errors per 64-bit word. Some systems also scrub the errors, writing the corrected version back to memory. The BIOS in some computers, and operating systems such as Linux, can count detected and corrected memory errors; this allows failing memory modules to be identified and replaced.
Over the years, DRAM technology has developed to the point where various newer memory technologies are available, all of which still build on the original DRAM architecture. Several of these technologies, as found on the market, are examined in this section.
TYPES OF MEMORY CARDS
FPM DRAM, which stands for fast-page-mode dynamic RAM, is the original form of DRAM that was widely used. Before newer DRAM technologies appeared, it was the most common type found in PCs. FPM DRAM works by accessing a row of RAM without repeatedly specifying that row: a row address strobe (RAS) signal is kept active while the column address strobe (CAS) signal changes to read a sequence of adjacent memory cells. This reduces both access time and power requirements.
Extended data-out (EDO) DRAM, also referred to as hyper-page-mode DRAM, represents a minimal design change to the output buffer compared with the standard FPM DRAM described above: the old data is latched at the output while the new data is being addressed. EDO DRAM shortens the effective page-mode cycle time because the valid data-output time is extended.
As already noted, there are various types of DRAM, but one of the most widely used is SDRAM, that is, synchronous DRAM. Unlike traditional asynchronous DRAM, SDRAM exchanges data with the processor synchronized to an external clock signal, running at the full speed of the processor/memory bus without imposing wait states. Classic DRAM has an asynchronous interface, meaning it responds as quickly as possible to changes in its control inputs; SDRAM has a synchronous interface, meaning it waits for a clock signal before responding to control inputs and is therefore synchronized with the computer's system bus.
In a typical DRAM, the processor presents addresses and control levels to the memory, indicating that a set of data at a particular location in memory should be either read from or written into the DRAM. After a delay, the access time, the DRAM either writes or reads the data. During the access-time delay, the DRAM performs various internal functions, such as activating the high capacitance of the row and column lines, sensing the data, and routing the data out through the output buffers. The processor must simply wait through this delay, slowing system performance.
With synchronous access, the DRAM moves data in and out under control of the system clock. In addition, the processor or other master issues the instruction and address information, which is latched by the DRAM. Then the DRAM responds after a set number of clock cycles. Meanwhile, the master can safely do other tasks while the SDRAM is processing the request.
In addition, the SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access. In burst mode, a series of data bits can be clocked out rapidly after the first bit has been accessed. This mode is useful when all the bits to be accessed are in sequence and in the same row of the array as the initial access. In addition, the SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism.
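The benefit of burst mode can be put in rough numbers. The cycle counts below are hypothetical, chosen only to illustrate the shape of the saving: the first access pays the full address-setup and precharge cost, and each subsequent sequential word in the same row costs just one clock.

```python
# Illustrative cost model for an SDRAM burst read. The latency figures are
# made-up placeholders, not taken from any datasheet.
FIRST_ACCESS_CYCLES = 5   # hypothetical setup + latency for the first word
BURST_CYCLES = 1          # each further word in the same-row burst

def burst_read_cycles(burst_length):
    """Total clock cycles to read `burst_length` sequential words."""
    return FIRST_ACCESS_CYCLES + BURST_CYCLES * (burst_length - 1)

assert burst_read_cycles(1) == 5
assert burst_read_cycles(8) == 12   # 5 + 7 single-cycle transfers
```

Under this toy model, an 8-word burst averages 1.5 cycles per word against 5 cycles for isolated accesses, which is why burst mode suits sequential workloads like the ones mentioned below.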
SDRAM performs best when transferring large blocks of data serially, as in applications such as word processing, spreadsheets, and multimedia. There is now an enhanced version of SDRAM, known as double-data-rate SDRAM (DDR SDRAM), that overcomes the once-per-cycle limitation: DDR SDRAM can send data to the processor twice per clock cycle. DDR2 and DDR3 have likewise entered the market, with DDR4 currently being designed and anticipated to be available as early as 2013.
Rambus DRAM (RDRAM) is a type of synchronous dynamic RAM. It was developed by Rambus Inc. in the mid-1990s as a replacement for the then-prevalent DIMM SDRAM memory architecture.
RDRAM was initially expected to become the standard in PC memory after Intel agreed to license the Rambus technology for use with its future chipsets. Furthermore, RDRAM was expected to become a standard for VRAM. However, RDRAM became involved in a standards war with an alternative technology, DDR SDRAM, and quickly lost out on grounds of price and, later on, performance. By the early 2000s, RDRAM was no longer supported by any mainstream computing architecture.
PCMCIA Memory Cards
This type of memory is widely found in notebook computers. It is similar to DRAM, with extra circuitry that allows it to be plugged into a computer's PCMCIA slot to add RAM to the system.
With built-in memory-refresh circuitry, this type of RAM is mostly used in embedded systems. It keeps its contents alive by continuously applying a small voltage to the RAM chips, typically from a backup battery, even when the main power is off. Typical uses include keeping the time on a DVD player, storing your favourite stations in a car radio, and holding the BIOS settings of a computer. If power is ever completely lost, including the backup supply, so are the stored contents. This is why DVD players tend to lose the time after a power cut, and why a computer loses its hard-drive and other BIOS settings should the backup battery on the motherboard ever fail.
DRAM POWER CONSUMPTION, SIZE AND SPEED
DRAM energy consumption contributes significantly to the total power usage of computing systems, as can be seen in the pie chart in the figure above. Both static and dynamic random-access memories are volatile, so power must be applied continuously to preserve the stored bits. A dynamic RAM cell is much smaller and simpler than its SRAM counterpart.
Being smaller makes DRAM much denser, since smaller cells mean more cells per unit area. DRAMs are slow partly because of the addressing scheme: in every type of DRAM, the external controlling entity (CPU, memory controller IC, etc.) must supply the row and column addresses separately, which takes two clock transitions. Another drawback, which also adds to DRAM's access time, is the continuous refreshing it needs.
An example of a typical DRAM module found on the market:
Nowadays, with technology continuously evolving, one can find various types of memory system on the market. The example described here is the 1GB Corsair Value Select DDR PC400 memory module.
The "1GB" indicates that the module provides 1 GB of addressable memory.
DDR stands for double-data-rate synchronous dynamic random-access memory, which evolved from traditional SDRAM as explained previously. A number after "DDR" indicates the generation of that technology: both DDR2 and DDR3 are also on the market, while the module here is first-generation DDR.
PC400 is another way of writing DDR-400 or PC3200. Note that in the PCxxxx designation for DDR SDRAM, xxxx is the memory bandwidth in MB/s, i.e. PC3200 means 3200 MB/s. This corresponds to a 200 MHz clock (a 400 MHz "data rate"). Since DDR SDRAM uses a 64-bit-wide (8-byte) bus, 8 bytes * 400 MHz = 3200 MB/s. Note that these are not binary megabytes; they use 1 MB = 1000 * 1000 B, not 1024 * 1024 B.
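The arithmetic behind the PC3200 name can be reproduced directly. The figures all come from the text above; the variable names are just for illustration.

```python
# Reproducing the PC3200 bandwidth calculation: a 64-bit (8-byte) bus
# transferring data twice per 200 MHz clock cycle, in decimal megabytes.
BUS_WIDTH_BYTES = 8          # 64-bit DDR SDRAM bus
CLOCK_MHZ = 200              # base clock
TRANSFERS_PER_CYCLE = 2      # "double data rate"

data_rate = CLOCK_MHZ * TRANSFERS_PER_CYCLE    # 400 MT/s -> "DDR-400"
bandwidth_mb_s = data_rate * BUS_WIDTH_BYTES   # decimal MB/s

assert data_rate == 400
assert bandwidth_mb_s == 3200                  # hence the name PC3200
```

The same formula recovers the other common ratings, e.g. a 266 MT/s data rate gives 2128 MB/s, marketed as PC2100.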