The purpose of this paper is to investigate the Operating Systems and Computer Systems Architecture modules of the Hardware Software Systems and Networks course. The first phase of the paper examines memory management in operating systems: we identify the types of mechanisms employed and the strategies used to manage memory. The problems faced when using these techniques are then presented, along with their solutions. The final phase of the paper covers the types of registers and their usage in modern computers, including the reasons registers are needed, their sizes, and the ways they are organized.
Chapter 1: Overview of Memory Management
According to whatis.techtarget, memory management is the act of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize overall system performance. It can reside in hardware, in the operating system (kernel), and in software applications.
Memory management in hardware involves the components that store data physically, such as RAM (random-access memory), memory caches, HDDs (hard disk drives) and SSDs (solid-state drives). In the OS, it involves the allocation and reallocation of memory blocks to programs as user demands change. In software applications, memory management ensures the availability of adequate memory for the objects and data structures of each running program. It combines two related tasks: allocation and recycling.
Chapter 2: Memory management in Linux
Two components are involved in memory management under Linux. The first deals with allocating and freeing physical memory, and the second handles virtual memory. In this paper, the mechanisms used by Linux will be examined and presented.
Chapter 2.1: Management of Physical Memory
According to dinobook, the primary physical-memory manager in Linux is the page allocator. It is responsible for allocating and freeing all physical pages, and it is capable of allocating ranges of physically contiguous pages on request. The allocator uses two mechanisms, as follows:
Chapter 2.1.1: Swapping
Swapping moves a process out to a backing store when memory runs low, then brings the process back into memory for execution once enough memory is available.
In a round-robin CPU-scheduling environment such as Linux, the memory manager swaps a process out of memory when its quantum expires, then swaps another process into the memory space that has been freed, as shown in Figure 2.1. Meanwhile, the CPU scheduler allocates a time slice to some other process already in memory. Each process is swapped with another when it finishes its quantum. Ideally, the memory manager can swap processes fast enough that some process is always in memory, ready to execute, whenever the CPU is rescheduled. The quantum also needs to be sufficiently large that a reasonable amount of work can be done between swaps.
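The interaction between the round-robin scheduler and the swapper can be illustrated with a toy simulation. This is a simplified sketch, not Linux's actual implementation: it assumes a fixed number of memory slots, instantaneous swaps, and a hypothetical workload given as remaining CPU-time requirements per process.

```python
from collections import deque

def round_robin_with_swapping(bursts, memory_slots, quantum):
    # bursts: dict of process name -> remaining CPU time needed (hypothetical workload).
    on_disk = deque(bursts)        # process names currently swapped out
    in_memory = deque()            # process names resident in memory
    timeline = []                  # (start time, process) execution log
    t = 0
    while on_disk or in_memory:
        # Swap processes in while free memory slots remain.
        while on_disk and len(in_memory) < memory_slots:
            in_memory.append(on_disk.popleft())
        p = in_memory.popleft()    # scheduler picks the next resident process
        run = min(quantum, bursts[p])
        timeline.append((t, p))
        t += run
        bursts[p] -= run
        if bursts[p] > 0:
            on_disk.append(p)      # quantum expired: swap the process back out
    return timeline
```

With only one memory slot, each quantum expiry forces a swap, so two processes strictly alternate, one quantum at a time.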
Chapter 2.1.2: Paging
Paging allows the physical address space of a process to be noncontiguous. It breaks physical memory into fixed-sized blocks called frames, while logical memory is broken into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store, which is also divided into fixed-sized blocks of the same size as the memory frames.
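The address translation implied above can be sketched in a few lines. This is a minimal model, assuming 4 KB pages (a common default) and a page table represented as a plain dictionary; real hardware does this lookup in the MMU.

```python
PAGE_SIZE = 4096  # assumed 4 KB pages

def split_address(logical_address):
    # A logical address divides into a page number (the index used to look up
    # a frame in the page table) and an offset within that page.
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    return page_number, offset

def translate(logical_address, page_table):
    # page_table maps page number -> frame number; frames need not be
    # contiguous, which is exactly what lets the process's physical
    # address space be noncontiguous.
    page, offset = split_address(logical_address)
    return page_table[page] * PAGE_SIZE + offset
```

For example, with pages 0 and 1 mapped to frames 5 and 2, logical address 4100 falls in page 1 at offset 4, so it translates to physical address 2 × 4096 + 4 = 8196.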
Linux uses a buddy-heap algorithm to keep track of available physical pages. This algorithm allocates only certain sizes of blocks and maintains many free lists, one for each permitted size. The permitted sizes are usually either powers of two or a Fibonacci sequence, such that any block can be divided into two smaller blocks of permitted sizes, except for the smallest. When the allocator receives a request for memory, it rounds the requested size up to a permitted size and returns the first block from that size's free list. If the free list is empty, the allocator splits a block of a larger size and returns one of the pieces for the request, adding the others to the appropriate free lists.
In Figure 2.2, an allocator starts off with a single block of 64 KB. When an application requests a block of 8 KB, the allocator checks its 8 KB free list for an available block. Finding none, it splits the 64 KB block into two 32 KB blocks, splits one of those into two 16 KB blocks, and splits one of those into two 8 KB blocks. One of the 8 KB blocks is then returned to the application, and the remaining 8 KB, 16 KB and 32 KB blocks are added to the appropriate free lists. Had the application requested a block of 10 KB instead, the allocator would round the request up to 16 KB, returning a 16 KB block and wasting 6 KB in the process.
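The splitting walk described for Figure 2.2 can be sketched as a minimal buddy allocator. This is an illustrative model, not the kernel's code: it assumes power-of-two sizes only, sizes in KB, and blocks identified by their offset within a single 64 KB region.

```python
MIN_BLOCK = 8    # smallest permitted block, in KB
MAX_BLOCK = 64   # total managed region, in KB

def make_free_lists():
    # One free list per permitted (power-of-two) size; initially the
    # whole region is one 64 KB block at offset 0.
    free = {size: [] for size in (8, 16, 32, 64)}
    free[MAX_BLOCK].append(0)
    return free

def allocate(free, request_kb):
    # Round the request up to the next permitted size.
    size = MIN_BLOCK
    while size < request_kb:
        size *= 2
    # Find the smallest free block that can satisfy the request.
    candidate = size
    while candidate <= MAX_BLOCK and not free[candidate]:
        candidate *= 2
    if candidate > MAX_BLOCK:
        return None                  # no block large enough is free
    offset = free[candidate].pop(0)
    # Split repeatedly, adding the unused "buddy" half to its free list.
    while candidate > size:
        candidate //= 2
        free[candidate].append(offset + candidate)
    return offset, size
```

Running the Figure 2.2 scenario, an 8 KB request splits 64 → 32 → 16 → 8, leaving one block on each of the 8, 16 and 32 KB free lists; a subsequent 10 KB request is rounded up and served from the 16 KB list.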
Chapter 2.2: Virtual Memory
Virtual memory is the separation of user logical memory from physical memory, which allows an extremely large virtual address space to be provided even though only a smaller physical memory is actually available. The Linux virtual memory system maintains the address space visible to each process by creating pages of virtual memory on demand and managing the loading of those pages from disk and their swapping back out. The mechanism Linux uses is as follows:
Chapter 2.2.1: Demand Paging
In Linux, demand paging is used to load pages of memory into physical memory only when they are needed, relocating pages out to disk when the memory they occupy is required elsewhere. It is similar to a paging system combined with swapping. If the processor is unable to find the virtual page frame number in the process's page table, there are two possible reasons:
1. The process has tried to access an invalid memory address.
2. The physical page corresponding to the virtual address was not loaded into physical memory.
In the first case, the kernel generates a page fault and terminates the process. In the second case, a page fault is also generated, but the kernel tries to swap the required memory page into physical memory from the hard disk; meanwhile, another process is brought into execution. The page table is then updated, and the faulting process resumes execution from the same instruction that caused the 'page fault'. In this way, pages are placed in physical memory via the page-fault mechanism.
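The two page-fault cases above can be sketched as a toy model. All names here are illustrative, not kernel interfaces: "disk" is a dictionary standing in for the backing store, and terminating the process is modelled by raising an exception.

```python
class SegfaultError(Exception):
    """Case 1: access to an invalid virtual address; the kernel
    would generate a page fault and terminate the process."""

def access(page, page_table, valid_pages, disk, stats):
    if page not in valid_pages:
        raise SegfaultError(f"invalid page {page}")
    if page not in page_table:
        # Case 2: valid page, but not resident -> page fault; the kernel
        # swaps the page in from disk and updates the page table, and the
        # process retries the same access.
        stats['faults'] += 1
        page_table[page] = disk[page]
    return page_table[page]
```

The first access to a valid page faults and loads it from disk; repeated accesses find the page resident and fault no further, while an invalid address terminates the process immediately.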