Unit - 5
Memory Management
Computer memory can be defined as a collection of data represented in binary format. On the basis of its various functions, memory can be classified into various categories.
A computer device that is capable of storing information or data temporarily or permanently is called a storage device.
How is data stored in a computer system?
In order to understand memory management, we first have to understand how data is stored in a computer system.
A machine understands only binary language, that is, 0s and 1s. The computer converts every piece of data into binary first and then stores it in memory.
That means if we have a program line written as int a = 10, then the computer converts it into binary and then stores it in memory blocks.
The representation of int a = 10 is shown below.
The binary representation of 10 is 1010. Here, we are considering a compiler where the size of int is 2 bytes, i.e. 16 bits. One memory block stores 1 bit. If we are using a signed integer, then the most significant bit in the memory array is always the sign bit.
A sign bit value of 0 represents a positive integer, while 1 represents a negative integer. Here, the range of values that can be stored using this memory array is -32,768 to +32,767.
We can enlarge this range by using unsigned int. In that case, the bit that was storing the sign also stores a value bit, and therefore the range becomes 0 to 65,535.
Need for Multiprogramming
The CPU can directly access the main memory, registers, and cache of the system. A program always executes in main memory. The size of main memory affects the degree of multiprogramming to a large extent. If the size of main memory is larger, then the CPU can load more processes into main memory at the same time, which increases the degree of multiprogramming as well as CPU utilization.
Let's consider,
Process Size = 4 MB
Main memory size = 4 MB
Only one process can reside in the main memory at any time.
If the fraction of time for which the process does I/O is P,
Then,
CPU utilization = (1-P)
Let's say,
P = 70%
CPU utilization = 30 %
Now, increase the memory size, Let's say it is 8 MB.
Process Size = 4 MB
Two processes can reside in the main memory at the same time.
Let's say the fraction of time for which one process does its I/O is P,
Then
CPU utilization = (1-P^2)
Let's say P = 70 %
CPU utilization = (1-0.49) =0.51 = 51 %
Therefore, we can state that the CPU utilization will be increased if the memory size gets increased.
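The arithmetic above generalizes: with n processes resident, each spending a fraction p of its time on I/O, CPU utilization is roughly 1 - p^n. A minimal sketch of this toy model (the 70 % I/O figure is just the example's assumption, not a measured value):

```python
def cpu_utilization(p, n):
    """Approximate CPU utilization with n resident processes,
    each doing I/O for a fraction p of its time: 1 - p**n."""
    return 1 - p ** n

# 4 MB of memory holds one 4 MB process (n = 1), p = 0.7:
print(round(cpu_utilization(0.7, 1), 2))  # 0.3  -> 30 %
# 8 MB of memory holds two such processes (n = 2):
print(round(cpu_utilization(0.7, 2), 2))  # 0.51 -> 51 %
```

Each extra resident process multiplies the chance that all processes are blocked on I/O at once by p, so utilization rises as memory (and hence the degree of multiprogramming) grows.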
Key takeaway
Computer memory can be defined as a collection of data represented in binary format. On the basis of its various functions, memory can be classified into various categories. We will discuss each one of them later in detail.
A computer device that is capable of storing information or data temporarily or permanently is called a storage device.
Memory Management Techniques
Here, are some most crucial memory management techniques:
Single Contiguous Allocation
It is the easiest memory management technique. In this method, all of the computer's memory, except a small portion reserved for the OS, is available to one application. For example, the MS-DOS operating system allocates memory in this way. An embedded system also often runs a single application.
Partitioned Allocation
It divides primary memory into several memory partitions, mostly contiguous areas of memory. Each partition stores all the information for a specific task or job. This method consists of allotting a partition to a job when it starts and deallocating it when the job ends.
Paged Memory Management
This method divides the computer's main memory into fixed-size units known as page frames. The hardware memory management unit maps pages into frames; memory is allocated on a per-page basis.
Segmented Memory Management
Segmented memory is the only memory management method that does not provide the user's program with a linear and contiguous address space.
Segments need hardware support in the form of a segment table. It contains the physical address of the segment in memory, its size, and other data such as access protection bits and status.
What is Swapping?
Swapping is a method in which a process is swapped temporarily from main memory to a backing store. It is later brought back into memory to continue execution.
The backing store is a hard disk or some other secondary storage device that should be big enough to accommodate copies of all memory images for all users. It must also be capable of offering direct access to these memory images.
Benefits of Swapping
Here, are major benefits/pros of swapping:
● It offers a higher degree of multiprogramming.
● Allows dynamic relocation. For example, if address binding at execution time is used, then processes can be swapped in to different locations. With compile-time or load-time binding, processes must be moved back to the same location.
● It helps to get better utilization of memory.
● It minimizes wasted CPU time, so it can easily be applied to a priority-based scheduling method to improve its performance.
What is Memory allocation?
Memory allocation is a process by which computer programs are assigned memory or space.
Here, main memory is divided into two types of partitions
Low Memory - Operating system resides in this type of memory.
High Memory- User processes are held in high memory.
Partition Allocation
Memory is divided into different blocks or partitions. Each process is allocated memory according to its requirement. Partition allocation with variable-size partitions avoids internal fragmentation.
Below are the various partition allocation schemes:
● First Fit: the allocated partition is the first sufficiently large free block found from the beginning of main memory.
● Best Fit: it allocates the process to the smallest free partition that is still large enough.
● Worst Fit: it allocates the process to the largest sufficiently large free partition in main memory.
● Next Fit: it is similar to First Fit, but the search for the first sufficient partition starts from the last allocation point.
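The four fit strategies can be sketched as functions that pick a free partition for a request. The block sizes and the 212 KB request below are made-up illustration values:

```python
def first_fit(blocks, size):
    """Index of the first free block that can hold `size`, else None."""
    for i, b in enumerate(blocks):
        if b >= size:
            return i
    return None

def best_fit(blocks, size):
    """Index of the smallest free block that can still hold `size`."""
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(fits)[1] if fits else None

def worst_fit(blocks, size):
    """Index of the largest free block that can hold `size`."""
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return max(fits)[1] if fits else None

def next_fit(blocks, size, start):
    """Like first fit, but the scan begins at the last allocation point."""
    n = len(blocks)
    for k in range(n):
        i = (start + k) % n
        if blocks[i] >= size:
            return i
    return None

free = [100, 500, 200, 300, 600]      # free partition sizes in KB
print(first_fit(free, 212))           # 1 (the 500 KB block)
print(best_fit(free, 212))            # 3 (the 300 KB block)
print(worst_fit(free, 212))           # 4 (the 600 KB block)
print(next_fit(free, 212, start=2))   # 3 (scan resumes at index 2)
```

In a real allocator the chosen block would then be split, with the leftover returned to the free list; that bookkeeping is omitted here.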
What is Paging?
Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages. In the paging method, main memory is divided into small fixed-size blocks of physical memory called frames. The size of a frame is kept the same as that of a page to get maximum utilization of main memory and to avoid external fragmentation. Paging gives faster access to data, and it is a logical concept.
What is Fragmentation?
As processes are loaded into and removed from memory, the free memory space is broken into pieces that are too small to be used by other processes.
After some time, processes cannot be allocated to these memory blocks because of their small size, and the blocks remain unused; this is called fragmentation. This problem happens in a dynamic memory allocation system when the free blocks are so small that they cannot satisfy any request.
Types
Two types of Fragmentation methods are:
● External fragmentation
● Internal fragmentation
External fragmentation can be reduced by rearranging memory contents to place all free memory together in a single block.
The internal fragmentation can be reduced by assigning the smallest partition, which is still good enough to carry the entire process.
What is Segmentation?
Segmentation method works almost similarly to paging. The only difference between the two is that segments are of variable-length, whereas, in the paging method, pages are always of fixed size.
A program segment includes the program's main function, data structures, utility functions, etc. The OS maintains a segment map table for all the processes. It also includes a list of free memory blocks along with its size, segment numbers, and its memory locations in the main memory or virtual memory.
What is Dynamic Loading?
Dynamic loading is a technique in which a routine of a program is not loaded until the program calls it. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed; other routines are loaded on demand. Dynamic loading provides better memory space utilization.
What is Dynamic Linking?
Linking is the method by which the OS collects and merges various modules of code and data into a single executable file. The file can then be loaded into memory and executed. The OS can link system-level libraries into a program at load time. In the dynamic linking method, libraries are linked at execution time, so the program's code size can remain small.
Difference Between Static and Dynamic Loading
Static Loading | Dynamic Loading |
Static loading is used when you want to load your program statically: at compile time, the entire program is linked and compiled without the need for any external module or program dependency. | In a dynamically loaded program, references are provided, and the loading is done at execution time. |
At load time, the entire program is loaded into memory and starts its execution. | Library routines are loaded into memory only when they are required by the program. |
Difference Between Static and Dynamic Linking
Here is main difference between Static vs. Dynamic Linking:
Static Linking | Dynamic Linking |
Static linking is used to combine all the modules required by a program into a single executable. This helps the OS avoid any runtime dependency. | With dynamic linking, the actual module or library is not linked into the program. Instead, a reference to the dynamic module is recorded at compile and link time and resolved at execution time. |
Summary:
Memory management is the process of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize the overall performance of the system.
It keeps track of how much memory is allocated to processes and decides which process should get memory at what time.
In single contiguous allocation, all of the computer's memory, except a small portion reserved for the OS, is available to one application.
The partitioned allocation method divides primary memory into several memory partitions, mostly contiguous areas of memory.
Paged Memory Management method divides the computer's main memory into fixed-size units known as page frames
Segmented memory is the only memory management method that does not provide the user's program with a linear and contiguous address space.
Swapping is a method in which a process is swapped temporarily from main memory to a backing store. It is later brought back into memory to continue execution.
Memory allocation is a process by which computer programs are assigned memory or space.
Paging is a storage mechanism that allows OS to retrieve processes from the secondary storage into the main memory in the form of pages.
Fragmentation refers to the condition of memory in which free space is broken into small pieces that are too small to be used effectively.
Segmentation method works almost similarly to paging. The only difference between the two is that segments are of variable-length, whereas, in the paging method, pages are always of fixed size.
Dynamic loading is a technique in which a routine of a program is not loaded until the program calls it.
Linking is a method that helps OS to collect and merge various modules of code and data into a single executable file.
Key takeaway
It is the easiest memory management technique. In this method, all of the computer's memory, except a small portion reserved for the OS, is available to one application. For example, the MS-DOS operating system allocates memory in this way. An embedded system also often runs a single application.
● In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into main memory as pages.
● The primary idea behind paging is to divide each process into pages. Main memory is likewise divided into frames.
● One page of the process fits into one frame of memory. The pages can be stored at various locations in memory, so there is no need to find contiguous frames or holes.
● In this design, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is a significant part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.
● Support for paging is handled by hardware. However, recent designs implement paging by closely integrating the hardware and the operating system, especially on 64-bit microprocessors.
Basic Method
● The fundamental technique for implementing paging involves breaking physical memory into fixed-size blocks called frames and breaking logical memory into blocks of the same size known as pages.
● When a process is to be executed, its pages are loaded into any available memory frames from their source (a file system or the backing store).
● The backing store is divided into fixed-size blocks that are the same size as the memory frames.
● The hardware support for paging is represented in the figure below. Each address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).
Fig 1 - Paging Hardware
● The page number is used as an index into a page table. The page table contains the base address of each page in physical memory.
● This base address is combined with the page offset to define the physical memory address that is sent to the memory unit. The paging model of memory is shown in the figure below.
Fig 2 - Paging model of logical and physical memory
● The page size (like the frame size) is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page, depending on the computer architecture.
● The choice of a power of 2 as a page size makes the translation of a logical address into a page number and a page offset particularly simple.
● If the size of the logical address space is 2^m, and the page size is 2^n addressing units (bytes or words), then the high-order m - n bits of a logical address designate the page number, and the n low-order bits designate the page offset.
● Thus, the logical address is as follows:
Page number | Page offset |
p | d |
m - n | n |
Where p is an index into the page table and d is the displacement inside the page.
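The split can be shown numerically. Assuming a page size of 1,024 bytes (n = 10) and a made-up page table mapping pages to frames:

```python
PAGE_SIZE = 1024                       # 2**n with n = 10

def split_address(addr):
    """Split a logical address into (page number p, offset d)."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page -> frame (hypothetical)

p, d = split_address(3000)
print(p, d)                            # 2 952: page 2, offset 952
print(page_table[p] * PAGE_SIZE + d)   # physical address 1976
```

Because the page size is a power of 2, the division and remainder amount to a bit shift and a bit mask, which is why the hardware can do this split cheaply.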
● Since the operating system manages physical memory, it must be aware of the allocation details of physical memory: which frames are allocated, which frames are available, how many total frames there are, and so on.
● This information is generally kept in a data structure called a frame table. The frame table has one entry for each physical page frame, indicating whether the frame is free or allocated and, if it is allocated, to which page of which process or processes.
Hardware Support
● The page table can be implemented in hardware using dedicated registers. However, using registers for the page table is feasible only if the page table is small.
● If the page table contains a large number of entries, we can use a TLB (translation look-aside buffer), a special, small, fast-lookup hardware cache.
● The TLB is built from fast associative memory.
● Each entry in the TLB consists of two parts: a tag (key) and a value.
● When the associative memory is searched, an item is compared with all tags simultaneously. If the item is found, the corresponding value is returned.
Fig 3: Example
Main memory access time = m
If the page table is kept in main memory, every data access needs two memory accesses, so:
Effective access time = m (for the page table entry) + m (for the word itself) = 2m
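With a TLB, the usual textbook formula weights the hit and miss paths by the hit ratio h: EAT = h(c + m) + (1 - h)(c + 2m), where c is the TLB lookup time. A sketch with made-up timings:

```python
def effective_access_time(m, c, h):
    """Effective memory access time with a TLB.
    m: main-memory access time, c: TLB lookup time, h: TLB hit ratio.
    A hit costs c + m; a miss costs c + 2m (page table + data)."""
    return h * (c + m) + (1 - h) * (c + 2 * m)

# 100 ns memory, 20 ns TLB, 80 % hit ratio (illustrative figures):
print(effective_access_time(100, 20, 0.8))  # 140.0 ns
```

As the hit ratio approaches 1, the effective access time approaches c + m, which is why even a small TLB recovers most of the cost of keeping the page table in memory.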
Key takeaway
● In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into main memory as pages.
● The primary idea behind paging is to divide each process into pages. Main memory is likewise divided into frames.
● One page of the process fits into one frame of memory. The pages can be stored at various locations in memory, so there is no need to find contiguous frames or holes.
● Memory segmentation is a (primary) memory management technique that divides a computer's primary memory into segments or sections.
● In a computer system using segmentation, a reference to a memory location includes a value that identifies a segment and an offset (memory location) within that segment.
● Segments or sections are also used in object files of compiled programs when they are linked together into a program image and when the image is loaded into memory.
● A process is divided into segments. The chunks that a program is divided into, which are not necessarily all the same size, are called segments.
● Segmentation gives the user's view of the process, which paging does not.
● Here the user's view is mapped onto physical memory.
● There are two kinds of segmentation:
● Virtual memory segmentation – each process is divided into a number of segments, not all of which are resident at any one point in time.
● Simple segmentation – each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.
● There is no simple relationship between logical addresses and physical addresses in segmentation. A table is maintained to store the data about all such segments; it is called the segment table.
● Segment table – it maps a two-dimensional logical address into a one-dimensional physical address. Each table entry has:
- Base address: it contains the starting physical address where the segment resides in memory.
- Limit: it specifies the length of the segment.
● Segments usually correspond to natural divisions of a program, such as individual routines or data tables, so segmentation is generally more visible to the programmer than paging alone.
● Different segments may be created for different program modules, or for different classes of memory usage, such as code and data segments. Certain segments may be shared between programs.
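The base/limit lookup can be sketched as a small translation function; the segment table values below are illustrative, not from any real system:

```python
# segment number -> (base, limit); illustrative values
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """Map a (segment, offset) logical address to a physical address,
    trapping when the offset exceeds the segment's limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("addressing error: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```

The limit check is what gives segmentation its protection property: an out-of-range offset traps to the OS instead of silently touching another segment's memory.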
Key takeaway
Segments or sections are also used in object files of compiled programs when they are linked together into a program image and when the image is loaded into memory.
Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.
In Segmented Paging, the main memory is divided into variable size segments which are further divided into fixed-size pages.
- Pages are smaller than segments.
- Each Segment has a page table which means every program has multiple page tables.
- The logical address is represented as Segment Number (base address), Page number, and page offset.
Segment Number → It identifies the segment being referenced.
Page Number → It Points to the exact page within the segment
Page Offset → Used as an offset within the page frame
Each Page table contains various information about every page of the segment. The Segment Table contains information about every segment. Each segment table entry points to a page table entry and every page table entry is mapped to one of the pages within a segment.
Fig 4 – Logical address
Translation of logical address to a physical address
The CPU generates a logical address which is divided into two parts: segment number and segment offset. The segment offset must be less than the segment limit. The offset is further divided into a page number and a page offset. To find the exact entry, the page number is used as an index into that segment's page table, whose base address comes from the segment table entry.
The actual frame number with the page offset is mapped to the main memory to get the desired word on the page of a certain segment of the process.
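The two-step translation can be sketched under stated assumptions: a 256-byte page size and made-up per-segment page tables and limits.

```python
PAGE_SIZE = 256   # illustrative

# per-segment page tables (page -> frame) and limits; hypothetical values
segment_tables = {0: {0: 3, 1: 7}, 1: {0: 1, 1: 4, 2: 9}}
segment_limits = {0: 2 * PAGE_SIZE, 1: 3 * PAGE_SIZE}

def translate(segment, offset):
    """Segmented paging: check the segment limit, split the offset into
    (page number, page offset), then look up the frame in that
    segment's page table."""
    if offset >= segment_limits[segment]:
        raise MemoryError("offset beyond segment limit")
    page, d = divmod(offset, PAGE_SIZE)
    frame = segment_tables[segment][page]
    return frame * PAGE_SIZE + d

# Offset 300 in segment 1 is page 1, offset 44 -> frame 4:
print(translate(1, 300))  # 4 * 256 + 44 = 1068
```

Note how each page table covers only its own segment, which is why page table size is bounded by segment size rather than by the whole address space.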
Advantages of Segmented Paging
- It reduces memory usage.
- Page table size is limited by the segment size.
- The segment table has only one entry corresponding to one actual segment.
- External Fragmentation is not there.
- It simplifies memory allocation.
Disadvantages of Segmented Paging
- Internal Fragmentation will be there.
- The complexity level is much higher compared to paging.
- Page tables need to be contiguously stored in memory.
Key takeaway
Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.
In Segmented Paging, the main memory is divided into variable size segments which are further divided into fixed-size pages.
Virtual memory is a memory management capability of an operating system (OS) that uses hardware and software to allow a computer to compensate for physical memory shortages by temporarily moving data from random access memory (RAM) to disk storage.
● Virtual address space is extended by combining active memory in RAM and inactive memory on hard disk drives (HDDs) to form contiguous addresses that hold both the application and its data.
● Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory.
● The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically into the corresponding machine addresses.
● The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary memory available, not by the actual number of main storage locations.
● Virtual memory was developed at a time when physical memory (the installed RAM) was expensive.
● Computers have a limited amount of RAM, so memory can run out, especially when many programs run simultaneously.
● A system using virtual memory uses a portion of the hard drive to emulate RAM. With virtual memory, a system can load larger programs, or several programs at once, allowing each to operate as if it had ample memory, without buying more RAM.
● While copying virtual memory into physical memory, the OS divides memory into page files or swap files with a fixed number of addresses. Each page is stored on disk, and when the page is needed, the OS copies it from disk to main memory and translates the virtual addresses into real addresses.
● In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware.
● The MMU's main responsibility is to translate virtual addresses into physical addresses. A basic model is given below.
● Virtual memory is a technique implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
- All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This means a process can be swapped in and out of main memory such that it occupies different places in main memory at different times during the course of execution.
- A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
● If these characteristics are present, it is not necessary that all the pages or segments be present in main memory during execution; the required pages need only be loaded into memory whenever required.
● Virtual memory is implemented using demand paging or demand segmentation.
Key takeaway
Virtual memory is a memory management capability of an operating system (OS) that uses hardware and software to allow a computer to compensate for physical memory shortages by temporarily moving data from random access memory (RAM) to disk storage.
● As per the idea of virtual memory, to execute a process, only part of the process needs to be present in main memory, which means that only a few pages reside in main memory at any time.
● The difficult task is deciding which pages should be kept in main memory and which in secondary memory, because we cannot say ahead of time that a process will require a particular page at a particular time.
● Hence, to overcome this problem, a concept called demand paging is introduced. It proposes keeping all pages in secondary memory until they are required; in other words, do not load any page into main memory until it is required.
● Demand paging says that pages should only be brought into memory if the executing process demands them. This is often known as lazy evaluation, as only those pages demanded by the process are swapped from secondary storage to main memory.
Fig 5 – Swapping memory
● By contrast, in pure swapping, all memory for a process is swapped from secondary storage to main memory when the process starts executing.
● To implement this, a page table is used. The page table maps logical memory to physical memory.
● The page table uses a bit to mark whether a page is valid or invalid. A valid page is one that is present in main memory. An invalid page is one that is currently present only in secondary memory.
● When a process tries to access a page, the following steps are followed, as shown in the figure below:
- If the CPU tries to access a page that is not currently in main memory, it generates an interrupt indicating a memory access fault (a page fault).
- The OS places the interrupted process in a blocked state. For execution to continue, the OS must bring the required page into memory.
- The OS locates the required page in secondary storage.
- The required page is brought from secondary storage into physical memory. Page replacement algorithms are used to decide which page is to be replaced by the new page when no free frame is available.
- The page table is updated accordingly.
- A signal is sent to the CPU to continue program execution, and the process is put back into the ready state.
● Hence, whenever a page fault occurs, these steps are performed by the operating system, and the required page is brought into memory.
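The valid/invalid bit and the fault path can be sketched as a tiny model. The frame numbering and the implicit "disk" here are stand-ins; real fault handling involves hardware traps and I/O:

```python
class DemandPager:
    """Minimal demand-paging sketch: pages stay on 'disk' until first
    use; a page fault loads them and sets the valid bit."""
    def __init__(self, num_pages):
        self.valid = [False] * num_pages  # valid/invalid bit per page
        self.frames = {}                  # page -> frame mapping
        self.next_frame = 0
        self.faults = 0

    def access(self, page):
        if not self.valid[page]:          # page fault: trap to the OS
            self.faults += 1
            self.frames[page] = self.next_frame  # bring page in
            self.next_frame += 1
            self.valid[page] = True       # update the page table
        return self.frames[page]

pager = DemandPager(8)
for page in [0, 1, 0, 3, 1]:
    pager.access(page)
print(pager.faults)  # 3 (pages 0, 1 and 3 each fault once)
```

First access to each page faults; subsequent accesses hit the page table directly, which is the lazy-evaluation behavior described above.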
Advantages
● Demand paging, as opposed to loading all pages immediately:
● Only loads pages that are demanded by the executing process.
● As there is more free space in main memory, more processes can be loaded, reducing the context-switching overhead, which otherwise uses up a lot of resources.
● There is less loading latency at program startup, as less data is fetched from secondary storage and less data is brought into main memory.
● As main memory is costly compared to secondary memory, this technique helps significantly reduce the bill of materials (BOM) cost, in smartphones for example. Symbian OS had this feature.
Disadvantage
● Individual programs face extra latency when they access a page for the first time.
● Low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.
● Memory management with page replacement algorithms becomes somewhat more complex.
● There are possible security risks, including vulnerability to timing attacks.
● Thrashing may occur due to repeated page faults.
Key takeaway
As per the idea of virtual memory, to execute a process, only part of the process needs to be present in main memory, which means that only a few pages reside in main memory at any time.
The difficult task is deciding which pages should be kept in main memory and which in secondary memory, because we cannot say ahead of time that a process will require a particular page at a particular time.
● Page replacement algorithms are the techniques by which an operating system chooses which memory pages to swap out (write to disk) when a page of memory needs to be allocated.
● Page replacement happens whenever a page fault occurs and a free frame cannot be used for the allocation, either because no frame is available or because the number of free frames is lower than required.
● When the page that was chosen for replacement and paged out is referenced again, it has to be read in from disk, and this requires waiting for I/O completion.
● This determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
● A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to choose which pages should be replaced so as to minimize the total number of page misses, while balancing this against the costs of primary storage and the processor time of the algorithm itself.
● There is a wide range of page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.
● There are two main parts of virtual memory: frame allocation and page replacement. It is important to have an optimal frame allocation and page replacement algorithm. Frame allocation is about how many frames are to be allocated to a process, while page replacement is about determining which page should be replaced to make space for the requested page.
What if the algorithm is not optimal?
1. If the number of frames allocated to a process is not sufficient or accurate, there can be a problem of thrashing: due to the lack of frames, most of the pages cannot reside in main memory, and consequently more page faults will occur. However, if the OS allots more frames to a process than it needs, memory is wasted.
2. If the page replacement algorithm is not optimal, there will also be thrashing: if pages that have just been replaced are referenced again in the near future, there will be many more swap-ins and swap-outs, and the OS has to perform more replacements than usual, causing poor performance.
3. Thus, the task of an optimal page replacement algorithm is to choose the pages that limit thrashing.
Types of Page Replacement Algorithm
There are various page replacement algorithms. Each algorithm has a different strategy by which pages are chosen for replacement.
First in First out (FIFO) –
● This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
● Example: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.
● Initially, all slots are empty, so when 1, 3, 0 arrive they are allocated to the empty frames -> 3 page faults.
● When 3 comes, it is already in memory, so -> 0 page faults.
● Then 5 comes; it is not present in memory, so it replaces the oldest page, 1 -> 1 page fault.
● 6 comes; it is also not present in memory, so it replaces the oldest page, 3 -> 1 page fault.
● Finally, when 3 comes it is not present, so it replaces 0 -> 1 page fault. Total: 6 page faults.
● Belady's anomaly: Belady's anomaly shows that it is possible to get more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4, with 3 frames we get 9 total page faults, but if we increase the frames to 4, we get 10 page faults.
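The FIFO policy and Belady's anomaly described above can be sketched in Python. This is a minimal simulation, and the function name `fifo_faults` is illustrative, not from any standard library:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Simulate FIFO page replacement; return the number of page faults."""
    frames = deque()                  # oldest resident page sits at the left end
    faults = 0
    for page in reference_string:
        if page not in frames:        # page fault: page not resident
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the oldest page
            frames.append(page)
    return faults

# Worked example from the text: 1, 3, 0, 3, 5, 6, 3 with 3 frames
print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))   # 6

# Belady's anomaly: under FIFO, more frames can mean MORE faults
s = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(s, 3), fifo_faults(s, 4))    # 9 10
```

Note that the queue records arrival order, not access order: a hit does not move a page, which is exactly why FIFO can evict a heavily used page and exhibit the anomaly.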
Optimal Page replacement
● This algorithm replaces the page that will not be referenced for the longest time in the future. Although it cannot be implemented in practice, it can be used as a benchmark: other algorithms are compared against it for optimality.
● Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
● Initially all frames are empty, so when 7, 0, 1, 2 arrive they are allocated to the empty frames — > 4 page faults.
● 0 is already there — > 0 page faults.
● When 3 comes, it replaces 7, since 7 is not used for the longest duration in the future — > 1 page fault.
● 0 is already there — > 0 page faults.
● 4 replaces 1 — > 1 page fault.
● For the rest of the reference string — > 0 page faults, since those pages are already in memory.
● Optimal page replacement is perfect, but not practical, because the operating system cannot know future requests. Its use is to establish a benchmark against which other replacement algorithms can be evaluated.
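A simulator can apply the Optimal (MIN) policy because, unlike a real OS, it sees the entire reference string in advance. The following is a sketch under that assumption; `optimal_faults` and `next_use` are illustrative names:

```python
def optimal_faults(reference_string, num_frames):
    """Simulate Optimal (Belady's MIN) replacement; return the fault count."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:            # hit: nothing to do
            continue
        faults += 1
        if len(frames) < num_frames:  # free frame still available
            frames.append(page)
            continue

        def next_use(p):
            """Index of p's next reference after position i (inf if none)."""
            try:
                return reference_string.index(p, i + 1)
            except ValueError:
                return float("inf")   # never referenced again: ideal victim

        # Evict the resident page whose next use is farthest in the future.
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(optimal_faults(ref, 4))   # 6, matching the worked example above
```

Running it on the text's example reproduces the 6 faults traced in the bullets.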
Least Recently Used
● This algorithm replaces the page that has not been referenced for the longest time. It is the exact opposite of the optimal page replacement algorithm: it looks at the past instead of trying to predict the future.
● Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
● Initially all frames are empty, so when 7, 0, 1, 2 arrive they are allocated to the empty frames — > 4 page faults.
● 0 is already there — > 0 page faults.
● When 3 comes, it replaces 7, since 7 is the least recently used page — > 1 page fault.
● 0 is already in memory — > 0 page faults.
● 4 takes the place of 1 — > 1 page fault.
● For the rest of the reference string — > 0 page faults, since those pages are already in memory.
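The LRU trace above can be simulated by keeping resident pages ordered by recency of use. This is a minimal sketch (real implementations use hardware counters or approximation bits); `lru_faults` is an illustrative name:

```python
def lru_faults(reference_string, num_frames):
    """Simulate Least Recently Used replacement; return the fault count."""
    frames = []                       # most recently used page kept at the end
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)       # refresh recency on a hit
            frames.append(page)
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.pop(0)             # least recently used page is at the front
        frames.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(ref, 4))   # 6, the same count as Optimal on this string
```

On this particular string LRU happens to match Optimal (6 faults); in general LRU only approximates it, since past behavior is an imperfect predictor of future references.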
Key takeaway
● Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated.
● Page replacement happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no pages are available or because the number of free pages is lower than required.
● When a page that was selected for replacement and paged out is referenced again, it has to be read back in from disk, which requires waiting for I/O completion.
Thrashing
● Thrashing is computer activity that makes little or no progress, because memory or other resources have become exhausted or too limited to perform the required operations.
● At this point, a pattern typically develops: a process or program makes a request to the operating system, the operating system tries to find resources by taking them from some other process, and that process in turn makes new demands that cannot be satisfied.
● In a virtual storage system (an operating system that manages its local storage or memory in units called pages), thrashing is a condition in which excessive paging operations are taking place.
● A system that is thrashing appears either extremely slow or completely stalled.
● Thrashing is a situation in which the system spends a major portion of its time servicing page faults, while the actual useful processing done is negligible.
● The basic idea is that if a process is allocated too few frames, there will be too many and too frequent page faults.
● As a result, no useful work gets done by the CPU, and CPU utilization falls drastically.
● The long-term scheduler then tries to improve CPU utilization by loading more processes into memory, thereby increasing the degree of multiprogramming.
● This results in a further decline in CPU utilization, triggering a chain reaction of more page faults followed by further increases in the degree of multiprogramming. This is called thrashing.
Reasons for Thrashing
Thrashing causes severe performance problems.
- If CPU utilization is too low, the degree of multiprogramming is increased by introducing another process into the system, and a global page replacement algorithm is used. The CPU scheduler sees the falling CPU utilization and increases the degree of multiprogramming further.
- CPU utilization is plotted against the degree of multiprogramming.
- As the degree of multiprogramming increases, CPU utilization also increases.
- If the degree of multiprogramming is increased beyond a point, thrashing sets in, and CPU utilization drops sharply.
- So, at this point, to raise CPU utilization and stop thrashing, we must decrease the degree of multiprogramming.
Methods to Handle Thrashing
We have already seen that local replacement is better than global replacement for avoiding thrashing. However, it also has drawbacks and is not generally recommended on its own. Some further strategies are:
Working Set Model
● This model is based on locality. Locality says that a page used recently is likely to be used again, and the pages adjacent to it are also likely to be used.
● The working set is defined as the set of pages referenced in the most recent D units of time (the working-set window).
● A page that has not been referenced for D units of time is automatically dropped from the working set.
● So the accuracy of the working set depends on the D we have chosen.
● The working set model avoids thrashing while keeping the degree of multiprogramming as high as possible.
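The working set W(t, D), i.e. the pages referenced in the most recent D references up to time t, can be sketched directly. Here the reference string, the function name `working_set`, and the parameter values are all illustrative; a real OS approximates this with reference bits and timer interrupts rather than a full reference log:

```python
def working_set(reference_string, t, window):
    """Pages referenced in the last `window` references ending at index t,
    i.e. the working set W(t, D) with D = window."""
    start = max(0, t - window + 1)
    return set(reference_string[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 2]
# With D = 4, at time t = 7 only references 4..7 (pages 4, 4, 4, 2) count,
# so pages 1 and 3 have dropped out of the working set.
print(working_set(refs, 7, 4))   # {2, 4}
```

A process would be allocated at least as many frames as its current working-set size; if the sum of all working sets exceeds physical memory, some process is suspended rather than letting every process thrash.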
Page Fault Frequency
● It is a more direct approach than the working set model.
● When a process is thrashing, we know it has too few frames; conversely, if it is hardly faulting, it may have too many frames.
● Based on this property, we establish an upper and a lower bound for the desired page fault rate.
● According to the measured page fault rate, we allocate or remove frames.
● If the page fault rate falls below the lower limit, frames can be taken away from the process.
● If the page fault rate rises above the upper limit, more frames can be allocated to the process.
Furthermore, if no frames are available despite a high page fault rate, we simply suspend the process and restart it later, when frames become available.
Key takeaway
● Thrashing is computer activity that makes little or no progress, because memory or other resources have become exhausted or too limited to perform the required operations.
● At this point, a pattern typically develops: a process or program makes a request to the operating system, the operating system tries to find resources by taking them from some other process, and that process in turn makes new demands that cannot be satisfied.
● In a virtual storage system (an operating system that manages its local storage or memory in units called pages), thrashing is a condition in which excessive paging operations are taking place.
References:
- Silberschatz, Galvin, Gagne, "Operating System Concepts", John Wiley and Sons, 10th edition, 2018
- Stallings, "Operating Systems: Internals and Design Principles", 9th edition, Pearson, 2018
- Andrew S. Tanenbaum, "Modern Operating Systems", 4th edition, Pearson, 2015