The cache is a small amount of high-speed memory, usually with a memory cycle time comparable to the time required by the CPU to fetch one instruction. The cache is usually filled from main memory when instructions or data are fetched into the CPU. Often the main memory supplies a wider data word to the cache than the CPU requires, filling the cache more rapidly. The amount of information replaced at one time in the cache is called the line size for the cache; this is normally the width of the data bus between the cache memory and the main memory. A wide line size means that several instruction or data words are loaded into the cache at one time, providing a kind of prefetching for instructions or data. Since the cache is small, its effectiveness relies on the following properties of most programs:
Spatial locality -- most programs are highly sequential; the next instruction usually comes from the next memory location.
Data is usually structured, and the data in these structures is normally stored in contiguous memory locations.
Temporal locality -- short loops are a common program structure, especially in the innermost levels of nested loops. This means that the same small set of instructions is used over and over.
Generally, several operations are performed on the same data values, or variables.
A direct-mapped cache configuration
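The address decomposition behind a direct-mapped configuration can be sketched as follows. The geometry here (256 lines of 32 bytes each) is an assumption chosen for illustration, not a figure from the text:

```python
# Sketch: address decomposition for a hypothetical direct-mapped cache.
# Assumed geometry: 256 lines of 32 bytes each (8 KiB total).
LINE_SIZE = 32      # bytes per line -> 5 offset bits
NUM_LINES = 256     # lines in the cache -> 8 index bits

def split_address(addr):
    """Split a byte address into (tag, index, offset)."""
    offset = addr % LINE_SIZE
    index = (addr // LINE_SIZE) % NUM_LINES
    tag = addr // (LINE_SIZE * NUM_LINES)
    return tag, index, offset

# Two addresses exactly one cache-size apart share the same index but
# have different tags, so they conflict for the same line.
print(split_address(0x1234))
print(split_address(0x1234 + LINE_SIZE * NUM_LINES))
```

Because each address maps to exactly one line, a direct-mapped cache needs only a single tag comparison per access, at the cost of such conflict misses.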
A cache coherence protocol manages the caches of a multiprocessor system so that no data is lost or overwritten before the data is transferred from a cache to the target memory. When two or more processors work together on a single program, known as multiprocessing, each processor may have its own memory cache that is separate from the larger RAM that the individual processors access. A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.
When multiple processors with separate caches share a common memory, it is necessary to keep the caches in a state of coherence by ensuring that any shared operand changed in one cache is changed throughout the entire system. This is done in one of two ways: with a directory-based system or a snooping system. In a directory-based system, the data being shared is tracked in a common directory that maintains the coherence between caches. In a snooping system, all caches monitor the bus to determine whether they hold a copy of the block of data that is requested on the bus.
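The snooping approach can be illustrated with a toy write-invalidate model. The class names and the write-through simplification below are assumptions for the sketch, not part of any real protocol implementation:

```python
# Sketch: write-invalidate snooping on a shared bus (toy model).
# Each cache holds {address: value}; a write broadcasts an invalidate
# on the bus, and every other cache that snoops a copy drops it.

class SnoopingCache:
    def __init__(self, bus):
        self.data = {}
        self.bus = bus
        bus.append(self)

    def read(self, addr, memory):
        if addr not in self.data:          # miss: fill from memory
            self.data[addr] = memory[addr]
        return self.data[addr]

    def write(self, addr, value, memory):
        for cache in self.bus:             # broadcast invalidate
            if cache is not self:
                cache.data.pop(addr, None)
        self.data[addr] = value
        memory[addr] = value               # write-through, for simplicity

memory = {0x10: 5}
bus = []
c0, c1 = SnoopingCache(bus), SnoopingCache(bus)
c0.read(0x10, memory)       # both caches load the block
c1.read(0x10, memory)
c0.write(0x10, 7, memory)   # c1's stale copy is invalidated
print(c1.read(0x10, memory))  # -> 7, refetched after invalidation
```

Real protocols such as MSI or MESI add per-line states rather than simply dropping the line, but the invalidate-on-write idea is the same.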
Coherence problems arise even in uniprocessors when I/O operations occur. With Direct Memory Access (DMA) between an I/O device and memory, the DMA device can read a stale value from memory when the processor has updated only its cache, and the processor can read a stale value from its cache when the DMA device has updated memory. In a multiprocessor, the processors can likewise come to see different values for the same shared variable (in the classic example, two processors see different values for a variable u after a short sequence of reads and writes). This is unacceptable to programs, and it happens frequently.

There are two broad approaches. The first is to organize the memory hierarchy so that the problem cannot arise: remove the private caches and use a single shared cache, or mark segments of memory as uncacheable. The second is to detect the problem and take action to eliminate it, which is what cache coherence protocols do.
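The DMA staleness problem, and the "mark memory uncacheable" fix, can be sketched in a few lines. The addresses and the single-entry model below are illustrative assumptions:

```python
# Toy model of the DMA staleness problem and the "uncacheable" fix.
# Assumed: one processor cache in front of memory; a DMA device
# writes memory directly, bypassing the cache.

memory = {0x40: 1}
cache = {}
UNCACHEABLE = set()

def cpu_read(addr):
    if addr in UNCACHEABLE:
        return memory[addr]            # always go to memory
    if addr not in cache:
        cache[addr] = memory[addr]     # fill on miss
    return cache[addr]

def dma_write(addr, value):
    memory[addr] = value               # DMA bypasses the cache

cpu_read(0x40)          # caches the value 1
dma_write(0x40, 2)
stale = cpu_read(0x40)  # -> 1: the CPU sees a stale cached value

UNCACHEABLE.add(0x40)
fresh = cpu_read(0x40)  # -> 2: marking the region uncacheable fixes it
print(stale, fresh)
```

Marking I/O buffer regions uncacheable trades away caching performance for correctness, which is why detection-based coherence schemes are preferred when hardware support is available.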
Write-Back Cache
In addition to caching reads from memory, the system can cache writes to memory. The handling of the address bits, cache lines, and so on is similar to how it is done for reads. Write-back cache is also called "copy back" cache; this policy provides full write caching of system memory. When a write is made to system memory at a location that is currently cached, the new data is written only to the cache, not to system memory. Later, if another memory address needs to use the cache line where this data is stored, the data is saved ("written back") to system memory, and then the line can be used by the new address. Write-back caching yields somewhat better performance than write-through caching because it reduces the number of write operations to main memory. With this performance improvement comes a slight risk that data may be lost if the system crashes before a modified line is written back.
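The write-back policy described above can be sketched with a single cache line and a dirty bit. The class name and one-line-per-set simplification are assumptions for illustration:

```python
# Sketch: a one-line write-back cache with a dirty bit.
# A write updates only the cache and sets the dirty bit; the data
# reaches memory only when the line is evicted ("written back").

class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory
        self.line = None     # (addr, value, dirty) or None

    def write(self, addr, value):
        self._ensure(addr)
        self.line = (addr, value, True)   # dirty: memory not updated yet

    def read(self, addr):
        self._ensure(addr)
        return self.line[1]

    def _ensure(self, addr):
        if self.line is not None and self.line[0] != addr:
            old_addr, old_val, dirty = self.line
            if dirty:                     # write back before reuse
                self.memory[old_addr] = old_val
            self.line = None
        if self.line is None:             # fill the line from memory
            self.line = (addr, self.memory[addr], False)

memory = {0x0: 10, 0x80: 20}
cache = WriteBackCache(memory)
cache.write(0x0, 99)
print(memory[0x0])    # still 10: the write went only to the cache
cache.read(0x80)      # a conflicting address forces a write-back
print(memory[0x0])    # now 99
```

The gap between the two prints is exactly the crash-risk window the text mentions: a crash before the eviction would lose the value 99.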
Virtual memory is an imaginary memory area supported by some operating systems (for example, Windows but not DOS) in conjunction with the hardware. You can think of virtual memory as an alternate set of memory addresses. Programs use these virtual addresses rather than real addresses to store instructions and data. When the program is actually executed, the virtual addresses are converted into real memory addresses.
The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize. For example, virtual memory might contain twice as many addresses as main memory. A program using all of virtual memory, therefore, would not be able to fit in main memory all at once. Nevertheless, the computer could execute such a program by copying into main memory those portions of the program needed at any given point during execution.
To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses. Each page is stored on a disk until it is needed. When the page is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses.
The process of translating virtual addresses into real addresses is called mapping. The copying of virtual pages from disk to main memory is known as paging or swapping.
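The mapping step can be sketched as a page-table lookup. The 4 KiB page size and the particular page-to-frame mapping below are assumptions chosen for illustration:

```python
# Sketch: virtual-to-physical address translation with a page table.
# Assumed: 4 KiB pages; the table maps virtual page numbers to
# physical frame numbers, and a missing entry models a page fault.

PAGE_SIZE = 4096

page_table = {0: 5, 1: 2}   # hypothetical mapping: vpn -> frame

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise KeyError("page fault: page %d not resident" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x0123)))   # page 0 -> frame 5
print(hex(translate(0x1123)))   # page 1 -> frame 2
```

Only the page number is translated; the offset within the page passes through unchanged, which is what lets pages be placed in any free frame.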
Implementing virtual memory also helps with fragmentation, since fixed-size pages can be placed in any free page frame of physical memory.