Comparing Memory Management Between Windows And Linux Computer Science Essay



The memory management system is an important part of every operating system. Its main function is to manage the random access memory and the other storage devices available on the system. The main tasks of memory management are the allocation and deallocation of memory to the various processes running on the system. The memory management system should be well optimized, because its operation affects the speed and performance of the whole operating system.

Memory management system provides the following features:

Large Address Space:

The memory management system presents the user with a memory space far larger than the memory actually installed. Virtual memory can be many times greater than the physical memory of the system.


Protection

Every process in the system is provided with its own virtual address space. These address spaces are completely separate from each other, so a process running one application program cannot affect another. The hardware can also mark regions of virtual memory as read-only. Thus code and data are protected from being overwritten by rogue applications.

Memory Mapping

Memory mapping is used to map image and data files into a process's address space. With memory mapping, the contents of a file are linked directly into the virtual address space of the process.
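A minimal sketch of memory mapping using Python's standard `mmap` module: a scratch file is mapped into the process's address space, after which reads and writes are ordinary indexing operations rather than `read()`/`write()` calls. The file name is a temporary one created just for the demonstration.

```python
import mmap
import os
import tempfile

# Create a small scratch file to map (demo only).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, memory mapping")
os.close(fd)

with open(path, "r+b") as f:
    # Map the whole file into this process's address space.
    with mmap.mmap(f.fileno(), 0) as mm:
        first = bytes(mm[:5])    # reads are plain indexing, no read() call
        mm[0:5] = b"HELLO"       # writes through the mapping hit the file

with open(path, "rb") as f:
    contents = f.read()

os.remove(path)
print(first, contents)
```

Because the mapping is backed by the file, the write through `mm` is visible when the file is reopened and read normally.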

Fair Physical Memory Allocation

The memory management subsystem allows each running process in the system a fair share of the physical memory of the system.

Shared Virtual Memory

Virtual memory is the main and most important part of the memory management system. It was introduced because memory requirements kept growing: programs were given the illusion that the system has a large memory available to allocate to processes. The kernel provides the facility to use secondary storage devices to satisfy memory requirements beyond the physical RAM.

Mapping functions are required for virtual memory to work. These functions are responsible for address translation, converting virtual addresses into physical addresses. A virtual address is the reference address of a memory location, while the physical address is the actual address of that location in memory.

Virtual memory generally uses paging or segmentation, according to the needs of the operating system.


In paging, both address spaces, virtual and real, are divided into fixed-size pages. The pages are manipulated independently and can be placed at different locations on the hard disk and in physical memory.

The memory management unit performs the address translation by using the page table.

Fig 1: page table

The page table specifies the mapping between virtual memory pages and physical pages. The memory management unit translates a virtual memory address into a physical memory address consisting of a page frame number and an offset within that page. It also provides the ability to apply protection to each page. Virtual memory offers a large address space in comparison to physical memory, so with each virtual page a bit is kept in the page table indicating whether the page is present in physical memory. If the page is not present, a page fault exception is generated. The fault is handled by software, which brings the required page back into physical memory from the hard disk; if the address is invalid, an error is raised instead.
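The translation step described above can be sketched as a toy model: the page table is a plain dictionary from virtual page numbers to physical frame numbers, the page size is assumed to be 4 KB (as on x86), and a missing entry plays the role of a cleared present bit.

```python
PAGE_SIZE = 4096  # assume 4 KB pages, as on x86

# Toy page table: virtual page number -> physical page frame number.
page_table = {0: 5, 1: 9, 2: 3}

def translate(vaddr):
    """Split a virtual address into page number and offset, then
    replace the page number with the frame number from the table."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:              # "present" bit clear
        raise LookupError(f"page fault on virtual page {vpn}")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # virtual page 1 -> frame 9, same offset
```

The offset passes through the translation unchanged; only the page number part of the address is rewritten, which is exactly why fixed-size pages make the hardware's job simple.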


Similarities

Both are modern operating systems and have many features in common. Some of the similarities between their memory management systems are:

Hardware Abstraction Layer:

Both operating systems have a hardware abstraction layer that hides hardware differences, enabling the rest of the kernel to be written independently of the underlying hardware.

Copy on Write:

If pages are shared, the same copy of the page is used by all the processes. But if an individual process makes a change to a page, a private copy is made for that process alone, which increases efficiency.
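A minimal sketch of the copy-on-write idea: a forked "process" shares its parent's page objects until the moment it writes, at which point only the written page is duplicated. The `Process`/`CowPage` classes are hypothetical stand-ins for kernel structures, not any real API.

```python
class CowPage:
    """A page whose contents may be shared between several processes."""
    def __init__(self, data):
        self.data = bytearray(data)

class Process:
    def __init__(self, pages):
        self.pages = pages

    def fork(self):
        # Forking shares every page instead of copying it.
        return Process(list(self.pages))

    def write(self, i, data):
        # On a write, replace the shared page with a private copy first.
        self.pages[i] = CowPage(self.pages[i].data)
        self.pages[i].data[: len(data)] = data

parent = Process([CowPage(b"aaaa"), CowPage(b"bbbb")])
child = parent.fork()
child.write(0, b"zz")          # only the child's copy of page 0 changes
print(bytes(parent.pages[0].data), bytes(child.pages[0].data))
```

A real kernel tracks a reference count and copies only when the page is actually shared; this sketch copies unconditionally to keep the idea visible.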

Shadow paging:

A shadow object is created for the original object. The shadow object holds private copies of the pages that have been modified, but shares the rest of the pages with the original object. Shadow objects are formed as a result of copy-on-write actions.

Memory-Mapped Files:

A file can be mapped into memory, which can then be accessed with simple memory read/write instructions.

Inter-Process Communication: Memory-mapped files can be shared between processes, which makes them a means of inter-process communication.
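To keep the sketch portable, the example below models two processes with two independent mappings of the same file inside one program; because both mappings are shared mappings of the same file, a write through one is immediately visible through the other, which is the mechanism the IPC use relies on.

```python
import mmap
import os
import tempfile

# A scratch file stands in for a named shared-memory object.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * mmap.PAGESIZE)
os.close(fd)

# Two independent mappings of the same file model two processes.
with open(path, "r+b") as fa, open(path, "r+b") as fb:
    with mmap.mmap(fa.fileno(), 0) as writer, \
         mmap.mmap(fb.fileno(), 0) as reader:
        writer[0:5] = b"PING!"        # "process A" writes...
        message = bytes(reader[0:5])  # ...and "process B" sees it at once

os.remove(path)
print(message)
```

In a real setting the two mappings would live in different processes, created from the same file or shared-memory name.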

Memory Management in Windows


Windows on 32-bit x86 systems can access up to 4 GB of physical memory. This is because the processor's address bus, which is 32 bits wide, can only address the range from 0x00000000 to 0xFFFFFFFF, which is 4 GB. Windows provides a 4 GB logical address space for each process. The lower 2 GB is for the user-mode process and the upper 2 GB is reserved for the kernel-mode code of Windows. Windows uses paging for memory management.

Paging allows software to use logical memory addresses rather than physical memory addresses. The paging unit of the processor translates each logical address into a physical address. This allows every process in the system to have its own 4 GB logical address space.

Windows provides an independent 2 GB user address space for each application (process) in the system. Only 2 GB of memory appears to be available to the application, regardless of the total memory installed. When an application requests more memory than is available, Windows NT satisfies the request by paging noncritical pages of memory, from this and/or other processes, to a pagefile and freeing those physical pages. Thus a global heap no longer exists in Windows NT. Every process gets a private 32-bit address space from which all of its memory is allocated, including code, resources, data, DLLs (dynamic-link libraries), and dynamic memory. The system is still limited by whatever hardware resources are available, but the management of available resources is performed independently of the applications in the system.


Memory Management in Linux

Linux implements its virtual memory data structures much like UNIX. It maintains virtual memory area structures in linked lists. These data structures represent contiguous memory areas that share the same protection parameters. The list is searched whenever the area containing a particular location must be found. Each structure records the address range it maps, the protection mode, whether it is pinned in memory (not pageable), and the direction (up/down) in which it will grow. It also records whether the area is public or private. If the number of entries grows beyond a particular number, usually 32, the linked list is converted into a tree. This is a good approach, using the best structure for each situation.
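The lookup that happens on every fault can be sketched as follows: each area is a `(start, end, protection)` triple, and a binary search over the sorted start addresses stands in for the tree the kernel switches to when the list grows long. The address ranges below are illustrative values, not taken from a real process.

```python
from bisect import bisect_right

# Each virtual memory area: (start, end, protection flags).
vmas = [
    (0x08048000, 0x08050000, "r-x"),  # code
    (0x08050000, 0x08060000, "rw-"),  # data
    (0xBF000000, 0xC0000000, "rw-"),  # stack
]
starts = [v[0] for v in vmas]

def find_vma(addr):
    """Locate the area containing addr, as the kernel must on a fault."""
    i = bisect_right(starts, addr) - 1
    if i >= 0 and vmas[i][0] <= addr < vmas[i][1]:
        return vmas[i]
    return None  # no mapping here: a real kernel would signal the process

print(find_vma(0x08049123))  # falls inside the code area
```

Because the areas are non-overlapping and sorted, the candidate area is always the one with the greatest start address not exceeding the faulting address, which is exactly what the binary search finds.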

Distribution of Process Address Space

Both operating systems distribute the process virtual address space in the same way. The higher part is used by the kernel and the lower part by the user process. The kernel part of every process points to the same kernel code, so to switch processes we only need to switch the page table entries of the lower part, while the upper part can remain the same. In Linux, 3 GB is kept for the process and 1 GB is given to the kernel, while in Windows, 2 GB is kept for each.


The paging system used by Windows is very complicated. Windows uses clustered demand paging for fetching pages, and the clock algorithm for page replacement.

In clustered demand paging, pages are brought into memory only when they are required. Also, instead of bringing in one page at a time, Windows often brings in a cluster of 1-8 pages, depending on the current state of the system.
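A small sketch of the clustering decision, under the assumption (made here for illustration) that the cluster is centred on the faulting page and that pages already resident are skipped:

```python
def cluster_to_fetch(fault_page, cluster_size, resident):
    """Pages to read in around a fault: the faulting page plus its
    neighbours, skipping pages that are already resident."""
    half = cluster_size // 2
    first = max(fault_page - half, 0)
    return [p for p in range(first, first + cluster_size)
            if p not in resident]

# Fault on page 10 with a cluster of 4; page 9 is already in memory.
print(cluster_to_fetch(10, 4, resident={9}))
```

The payoff of clustering is that one disk operation services several likely-future faults, trading a little extra memory for far fewer I/O round trips.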

The kernel receives five kinds of page faults:

The page referenced is not committed.

A protection violation has occurred.

A shared page has been written.

The stack needs to grow.

The page referenced is committed but not currently mapped in.
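A toy dispatcher for these fault kinds, with the handlers passed in as callables; the string tags and handler names are hypothetical, chosen only to make the control flow explicit.

```python
def handle_fault(kind, cow_copy, grow_stack, fetch_page):
    """Dispatch on the page-fault kinds listed above (toy model)."""
    if kind in ("not_committed", "protection_violation"):
        raise MemoryError(kind)        # irrecoverable: the process is killed
    if kind == "shared_page_written":
        return cow_copy()              # copy-on-write makes a private copy
    if kind == "stack_growth":
        return grow_stack()            # commit a fresh page for the stack
    if kind == "not_mapped":
        return fetch_page()            # committed but paged out: read it in
    raise ValueError(f"unknown fault kind: {kind}")

print(handle_fault("stack_growth",
                   cow_copy=lambda: "copied",
                   grow_stack=lambda: "grew",
                   fetch_page=lambda: "fetched"))
```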

A reference to an uncommitted page or a protection violation is irrecoverable. A write to a shared page is handled by copying the page somewhere else and making the new copy read/write; this is how copy-on-write works. Stack growth is answered by finding an extra page for the stack, and a committed but unmapped page is brought back into memory. The most important feature of the Windows paging system is its heavy use of the working set concept. The working set is defined as the amount of main memory currently assigned to a process, so the working set consists of the process's pages that are present in main memory. The size of the working set is not constant, however, so the disadvantages that usually come with working sets are heavily reduced.

When a page fault occurs and the faulting process's working set is below a minimum threshold, the page is simply added to the working set. On the other hand, if the working set is above another threshold, the size of the working set is reduced. The system also performs some global optimizations: for example, it increases the working set of processes that are causing a large number of page faults, and decreases the working set of those that do not need as much memory. Instead of acting only when there is a page fault, as Unix does, Windows also has a daemon thread, here called the Balance Set Manager. It is invoked every second and checks whether there is enough free memory. If there is not, it invokes the working set manager, which tries to keep free memory above a threshold. The working set manager examines the working sets of processes, from old and big down to young and small, and increases or decreases them depending on how many page faults they have generated. During a scan, if a page's reference bit is clear, the counter associated with the page is incremented; if the reference bit is set, the counter is reset to zero. After the scan, the pages with the highest counters are removed from the working set. Thus, the global aspect of this otherwise local clock algorithm comes from the working set manager.
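One scan of the counter scheme just described can be sketched directly: referenced pages have their counter reset and their bit cleared, unreferenced pages age by one, and the pages with the highest counters are trimmed from the working set. The dictionary layout is an illustrative stand-in for the kernel's per-page state.

```python
def trim_working_set(pages, evict_count):
    """One scan by the working set manager: referenced pages have their
    counter reset, unreferenced pages age, and the oldest are evicted."""
    for page in pages:
        if page["referenced"]:
            page["counter"] = 0
            page["referenced"] = False   # clear the bit for the next scan
        else:
            page["counter"] += 1
    pages.sort(key=lambda p: p["counter"], reverse=True)
    evicted = pages[:evict_count]
    del pages[:evict_count]
    return [p["id"] for p in evicted]

working_set = [
    {"id": "a", "referenced": True,  "counter": 3},
    {"id": "b", "referenced": False, "counter": 2},
    {"id": "c", "referenced": False, "counter": 0},
]
evicted = trim_working_set(working_set, 1)
print(evicted)   # "b" has gone longest without being referenced
```

The counter is thus a crude "scans since last reference" clock: a single touch rescues a page completely, which is what makes the scheme an approximation of LRU rather than exact LRU.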

Windows divides the list of pages into four lists:

1. Modified Page List

2. Standby Page List

3. Free Page List

4. Zeroed Page List



The Linux virtual memory system originally focused on simplicity and low overhead. Hence it was primitive and had many problems, especially under heavy load. Linux uses a demand-paged system with no prepaging. Until kernel version 2.2, Linux used the NRU algorithm for page replacement, but due to the algorithm's various shortcomings, an approximate Least Recently Used (LRU) algorithm was implemented in version 2.4. The aging used to approximate LRU works by increasing the age of a page by a constant when the page is found to be referenced during a scan, and decreasing it exponentially (dividing by 2) when it is found not to have been referenced. This approximates LRU fairly well.
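The aging rule can be sketched in a few lines; the increment constant here is a hypothetical value, not the one the kernel actually used.

```python
AGE_INCREMENT = 3  # hypothetical constant; the kernel picked its own value

def age_pages(ages, referenced):
    """One aging pass: referenced pages gain age, idle pages lose half.
    Pages whose age reaches zero become candidates for eviction."""
    for page in ages:
        if page in referenced:
            ages[page] += AGE_INCREMENT
        else:
            ages[page] //= 2           # exponential decay approximates LRU
    return sorted(p for p, age in ages.items() if age == 0)

ages = {"a": 4, "b": 1, "c": 0}
candidates = age_pages(ages, referenced={"a"})
print(candidates)   # b and c have decayed to zero
```

The halving means a page's history fades geometrically: a burst of references long ago counts for little against recent inactivity, which is the behaviour LRU is after.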

Linux 2.4 divides the virtual pages into four lists:

1. Active list

2. Inactive-dirty list

3. Inactive-clean list

4. Free list

In Linux 2.4, the inactive list size was made dynamic: the system itself decides how many inactive pages it should keep in memory in a given situation.

The unification of the buffer cache and the page cache was completed in 2.4. Another optimization in the Linux kernel is that it now recognizes sequential I/O: it decreases the priority of the page "behind" the current one, so that page becomes a candidate for eviction sooner. The page daemon in Linux is kswapd, which awakens once a second and frees memory if not enough is available.


The two systems originated in different backgrounds: Windows in a commercial setting, and Linux in a hacker setting. Both are modern, rest on sound theoretical concepts, and are suitable for production environments. Technically speaking, they have many common features and few differences. Windows was developed with strong monetary motivation, and more effort has gone into its design and development.

In the case of Linux, design decisions often favored simplicity over performance.

As a result, Windows has grown into sophisticated, complex code, whereas Linux is simple and elegant but still modern. The consequence is that Windows has more features but is harder for developers to maintain and improve, while Linux has fewer features but is easier to maintain and develop. Windows is likely to give better performance while occasionally crashing.