Memory Management In Operating Systems

Operating Systems Part 8


Memory Management

  • Memory management covers the methods of managing primary memory so that it is utilized efficiently.

  • Anything the CPU works on has to be brought into RAM (primary memory) first.

  • The degree of multiprogramming is the number of processes kept in RAM at a time; bringing in as many processes as possible keeps the CPU busy and improves the performance of the system.

  • The larger the number of processes in RAM, the better the CPU utilization.

  • The responsibility of bringing processes from secondary storage into primary memory, preferably as many as possible, lies with the operating system.

Memory Management Techniques

  1. Contiguous - Fixed Partition (Static), Variable Partition (Dynamic)

  2. Non-Contiguous - Paging, Multilevel Paging, Inverted Paging, Segmentation, Segmented Paging

Contiguous Memory Management

Fixed Size Partitioning | Internal Fragmentation

  • The number of partitions is fixed.

  • The size of each partition may or may not be the same.

  • Contiguous allocation, so spanning is not allowed.

  • Limit on process size: a process can be allocated to any partition whose size is greater than or equal to the process size.

  • Internal fragmentation: once a process is stored in a partition, the leftover space in that partition cannot be used by any other process and is wasted (see the sketch after this list).

  • Limited degree of multiprogramming: the number of processes cannot exceed the number of partitions. If there are 4 partitions, only 4 processes can be stored, and each process size is limited by the maximum size allowed by its partition.

  • External fragmentation: even when enough total free space is available, a process cannot be stored because the space is scattered across multiple slots.
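A small, hypothetical sketch in Python (partition and process sizes are made up for illustration) showing how internal fragmentation adds up with fixed partitions:

```python
# Hypothetical fixed partitions and incoming processes (sizes in KB).
partitions = [100, 200, 300, 400]   # assumed partition sizes
processes  = [90, 150, 250, 380]    # assumed process sizes

internal_fragmentation = 0
for part, proc in zip(partitions, processes):
    if proc <= part:                             # one process per partition
        internal_fragmentation += part - proc    # leftover space inside the partition

print("Total internal fragmentation:", internal_fragmentation, "KB")
# -> 10 + 50 + 50 + 20 = 130 KB wasted inside allocated partitions
```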

Variable Size Partitioning | Dynamic Partitioning

Partitions are not pre-made; memory is allocated to processes as and when they arrive, creating variable-sized partitions at runtime.

Advantages:

  1. No internal fragmentation.

  2. No limitation on degree of multiprogramming or number of processes allowed to be stored.

  3. No limitation on process size.

Disadvantages:

  1. External fragmentation due to contiguous allocation.

  2. External fragmentation can be solved by 'compaction', where processes are shifted to merge the empty spaces and make room for new processes.

  3. Compaction is undesirable, however, because a process first has to be stopped from execution before it can be shifted.

  4. Allocation and deallocation are complex because the number of partitions and processes is not fixed.

Allocation Methods In Contiguous Memory Management

First Fit
Allocate the first hole that is big enough.

Advantage:
Simple and Fast.

Disadvantage:
Can create big holes leading to fragmentation problems.

Next Fit
Same as first fit but start search always from the last allocated hole.

Advantage:
Search starts from where the previous search left off, so it is usually faster than First Fit.

Disadvantage:
Can create big holes leading to fragmentation problems.

Best Fit
Allocate the smallest hole that is big enough.

Advantage:
Least internal fragmentation. Searches the entire list of holes and then decides where to store the process.

Disadvantage:
Can create tiny holes that are too small to store any process. Very slow as well, since every hole must be examined.

Worst Fit
Allocate the largest hole.

Advantage:
Leaves the maximum leftover space in the hole, so other processes can still be stored there.

Disadvantage:
Slow.
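The four strategies differ only in which hole they pick. A minimal Python sketch (the hole sizes and the requested size are assumed values) comparing First, Best, and Worst Fit; Next Fit behaves like First Fit but resumes scanning from where the previous search stopped:

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole that still fits, or None."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole, if it fits, or None."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]   # assumed free-hole sizes in KB
print(first_fit(holes, 212))        # -> 1 (the 500 KB hole, first that fits)
print(best_fit(holes, 212))         # -> 3 (the 300 KB hole, least leftover)
print(worst_fit(holes, 212))        # -> 4 (the 600 KB hole, most leftover)
```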

Non-Contiguous Memory Management

Paging

  • A process can be divided and its parts stored in multiple, non-adjacent memory locations (spanning is allowed).

  • Solves the problem of external fragmentation.

  • Dividing a process to fit arbitrary holes would be costly, because holes in memory are created dynamically and their sizes keep changing.

  • Every time a process has to be stored in RAM, all available holes would have to be scanned, which makes it a time-consuming process. Paging avoids this by using fixed-size divisions.

  • Each division of a process is called a page. This paging is done when the process is in the secondary memory.

  • Each slot or partition of RAM is called a frame.

  • Page size is always equal to the Frame size.

  • Number of pages = Process size/Page size

  • The Memory Management Unit (MMU) maps processes between the CPU and RAM by converting the logical addresses generated by the CPU into absolute (physical) addresses in RAM.

  • The MMU uses a page table for this purpose. The page table stores, for each page, the frame number where that page resides in main memory.

  • Every process has its own page table.

  • The number of entries in the page table of a process will be equal to the number of pages a process has been divided into.

  • The CPU works with logical addresses. A logical address consists of a page number and a page offset; the offset is the position within the page, and its range is determined by the page size. The MMU converts the logical address to a physical address and performs the mapping (see the sketch below).
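A minimal sketch of the page-number/offset split and the frame lookup the MMU performs (the page size, page table contents, and logical address are all assumed values):

```python
PAGE_SIZE = 1024                       # assumed page size in bytes

# Assumed page table for one process: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(logical_address):
    page_number  = logical_address // PAGE_SIZE   # which page the address falls in
    offset       = logical_address %  PAGE_SIZE   # position inside that page
    frame_number = page_table[page_number]        # page-table lookup done by the MMU
    return frame_number * PAGE_SIZE + offset      # physical address

print(translate(2100))   # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220
```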

Page Table Entries

The fields included in a page table entry are:

  1. Frame Number

  2. Valid (1) / Invalid (0) - Whether the page is present or absent in the specified frame. Used to detect page faults.

  3. Protection (RWX) - Used by the OS for read, write, and execute permissions.

  4. Reference (0/1) - Used by page-replacement algorithms such as Least Recently Used (LRU) while swapping pages in and out. The bit is set to 1 when the page has been referenced (accessed), else 0.

  5. Caching - Enable/Disable. Frequently used data is cached for better CPU access times.

  6. Dirty/Modify - If the page has been modified in RAM but the change is yet to be written back to secondary memory, it is marked 1, else 0.

The Frame Number is a mandatory field; the rest are optional.
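One way to picture a page table entry is as a small record carrying these flags. A hypothetical sketch (the field names are illustrative, not a real OS structure):

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    frame_number: int             # mandatory: where the page lives in RAM
    valid: bool = False           # present/absent bit; False triggers a page fault
    protection: str = "r--"       # read/write/execute permissions
    referenced: bool = False      # set when the page is accessed; used by LRU-style replacement
    cache_disabled: bool = False  # enable/disable caching for this page
    dirty: bool = False           # modified in RAM but not yet written back to disk

entry = PageTableEntry(frame_number=7, valid=True, protection="rw-")
print(entry)
```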

2-Level/Multilevel Paging

Sometimes the page table itself is too large to fit in a single frame and has to be divided (paged) further. This is where multilevel paging helps.
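For example, with an assumed 32-bit logical address, 4 KB pages, and 4-byte page table entries, a single-level table would need 2^20 entries (about 4 MB), far larger than one 4 KB frame, so the table is paged again. A minimal sketch of the resulting two-level address split (the bit widths are assumed):

```python
# Assumed: 32-bit logical address, 4 KB pages -> 12 offset bits,
# remaining 20 bits split into two 10-bit indices (outer and inner page tables).
def split_two_level(logical_address):
    offset = logical_address & 0xFFF          # low 12 bits: position inside the page
    inner  = (logical_address >> 12) & 0x3FF  # next 10 bits: index into an inner page table
    outer  = (logical_address >> 22) & 0x3FF  # top 10 bits: index into the outer page table
    return outer, inner, offset

print(split_two_level(0x12345678))  # -> (72, 837, 1656)
```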

Inverted Paging

Instead of keeping a separate page table for every process, a single global page table is created for all processes. The global page table contains the frame number, page number, and process ID.

The inverted page table has high search times, which is why it did not gain much popularity: it saves memory but compromises on time. Memory keeps getting cheaper, while time is limited, so saving time is generally preferred over saving memory.
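A minimal sketch of why the inverted table trades memory for time: there is one table indexed by frame number, and it must be searched by (process ID, page number). All values here are assumed; real implementations typically hash on (process ID, page number) to speed up this lookup.

```python
# Assumed inverted page table: index = frame number, entry = (process_id, page_number)
inverted_table = [(1, 0), (2, 3), (1, 2), (3, 1)]

def lookup(pid, page_number):
    """Linear search over all frames -- this is the slow part."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page_number):
            return frame
    return None   # not resident: page fault

print(lookup(1, 2))   # -> frame 2
print(lookup(2, 7))   # -> None (page fault)
```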

Thrashing

When the degree of multiprogramming in RAM is too high, the number of page faults rises. Each fault requires page-fault service, which fetches the page from secondary memory and is very time-consuming, so the performance of the system degrades. This scenario, where increasing the degree of multiprogramming worsens CPU utilization instead of improving it because of page faults, is called thrashing. One solution is to increase main memory, which may not be very viable; another is to use a Long Term Scheduler (LTS) to limit the degree of multiprogramming.

Segmentation vs Paging

Paging divides a process without caring about what is written in it, so half of a set of instructions can end up in one page and the rest in another. This leads to incompleteness in the instructions held by a single page.

Segmentation works from the user's point of view. It does not divide a program blindly like paging does; it creates segments of the program, which can be logical units such as functions. Segments can be of different sizes.

The segment table contains the segment number, base address, and size (limit) of each segment.
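A minimal sketch of the segment-table lookup (segment numbers, base addresses, and limits are assumed values): the offset is checked against the segment's size before the base address is added.

```python
# Assumed segment table: segment number -> (base address, limit/size)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment_number, offset):
    base, limit = segment_table[segment_number]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset outside the segment")
    return base + offset   # physical address

print(translate(2, 53))    # -> 4353 (base 4300 + offset 53)
try:
    translate(0, 1222)     # offset beyond segment 0's limit of 1000
except MemoryError as err:
    print(err)
```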

Overlay

  • A process that is larger than main memory itself can be accommodated in main memory using overlays.

  • Operating systems do not provide any driver for overlays.

  • The process is partitioned by the user; each partition is brought into main memory, executed by the CPU, and stored back in secondary memory, after which the next partition is brought into RAM, and this continues until all partitions are done.

  • The order in which partitions are executed becomes very important.

  • Overlays are used in embedded systems where the functionality is always fixed.

  • The partitions created have to be independent; one part of a piece of code should not sit in one partition while the remaining part sits in another.

Virtual Memory

In virtual memory, instead of storing all the pages of every process in main memory, only some pages of many processes are stored, and pages are constantly swapped in and out using page replacement algorithms as and when the CPU needs them. Virtual memory is commonly used in modern-day computers.

If a page of a process is not found in the page table, a trap is generated and a context switch happens from the user process to the OS. The operating system performs authentication for security purposes; if authentication passes, the OS fetches the required page from secondary storage and puts it into an empty frame in main memory. Control then goes back from the OS to the user process.

Let's say 'p' is the probability that a page fault occurs. Then,
Effective Memory Access Time (EMAT) = p × (page fault service time) + (1 − p) × (main memory access time)
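A small worked example with assumed numbers (a 200 ns memory access, an 8 ms page-fault service time, and a fault rate of 0.001):

```python
p            = 0.001          # assumed page-fault probability
service_time = 8_000_000      # assumed page-fault service time in ns (8 ms)
memory_time  = 200            # assumed main-memory access time in ns

emat = p * service_time + (1 - p) * memory_time
print(emat, "ns")   # -> 8000.0 + 199.8 = 8199.8 ns
```

Even this tiny fault rate pushes the effective access time from 200 ns to roughly 8.2 µs, which is why keeping page faults rare matters so much.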

Page Replacement Algorithms

  1. FIFO - First In First Out: replace the page that was brought into memory earliest (a simulation sketch follows this list).

  2. Optimal Page Replacement - Replace the page whose next use is farthest away, i.e. the page that will not be needed for the longest duration of time in the future.

  3. Least Recently Used (LRU) - Replace the page that was least recently used in the past.

  4. Most Recently Used (MRU) - Replace the page that was most recently used in the past.
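A minimal sketch simulating FIFO and LRU over an assumed reference string with 3 frames, counting page faults:

```python
def fifo_faults(reference_string, num_frames):
    frames, faults = [], 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)          # evict the page loaded earliest
            frames.append(page)
    return faults

def lru_faults(reference_string, num_frames):
    frames, faults = [], 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)        # refresh recency: move to the back
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)          # evict the least recently used page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]   # assumed reference string
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # -> 10 9
```

On this particular string, LRU produces one fewer fault than FIFO; the gap widens when the reference pattern has strong locality.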

Conclusion

You can read other articles written by me through these links.

Operating System Series
1. Introduction & Types of OS
2. Process States & Lifecycle
3. System Calls
4. User Mode vs Kernel Mode
5. CPU Process Scheduling
6. Process Synchronization
7. Deadlocks
8. Memory Management
9. Disk Management & Scheduling
10. File System in OS
11. Protection & Security

System Design Series
Introduction To Parallel Computing
Deep Dive Into Virtualization
Insights Into Distributed Computing

Cloud Computing Series
1. Cloud Service Models
2. Cloud Deployment Models
3. Cloud Security
4. Cloud Architecture
5. Cloud Storage
6. Networking In The Cloud
7. Cloud Cost Management
8. DevOps In Cloud & CI/CD
9. Serverless Computing
10. Container Orchestration
11. Cloud Migration
12. Cloud Monitoring & Management
13. Edge Computing In Cloud
14. Machine Learning In Cloud

Computer Networking Series
1. Computer Networking Fundamentals
2. OSI Model
3. TCP/IP Model : Application Layer
4. TCP/IP Model : Transport Layer
5. TCP/IP Model : Network Layer
6. TCP/IP Model : Data Link Layer

Version Control Series
1. Complete Guide to Git Commands
2. Create & Merge Pull Requests
3. Making Open Source Contributions

Linux
Complete Guide to Linux Commands

Thanks For Reading! 💙
Garvit Singh
