Unit - 5
Memory Management
Q1) What is the memory allocation technique?
A1) Here are some of the most important memory management techniques:
Single Contiguous Allocation
It is the simplest memory management technique. In this method, all of the computer's memory, except for a small portion reserved for the OS, is available to one application. For example, the MS-DOS operating system allocates memory in this way. An embedded system also typically runs a single application.
Partitioned Allocation
It divides primary memory into various memory partitions, which are mostly contiguous areas of memory. Every partition stores all the information for a specific task or job. This method consists of allotting a partition to a job when it starts and deallocating it when the job ends.
Paged Memory Management
This method divides the computer's main memory into fixed-size units known as page frames. The hardware memory management unit maps pages into frames; memory is allocated on a page-by-page basis.
Segmented Memory Management
Segmented memory is the only memory management method that does not provide the user's program with a linear and contiguous address space.
Segments need hardware support in the form of a segment table. It contains the physical address of the segment in memory, its size, and other data such as access protection bits and status.
Q2) Write about swapping?
A2) Swapping is a method in which a process is swapped temporarily from main memory to a backing store. It is later brought back into memory for continued execution.
The backing store is a hard disk or some other secondary storage device that should be big enough to accommodate copies of all memory images for all users. It must also be capable of offering direct access to these memory images.
Benefits of Swapping
Here, are major benefits/pros of swapping:
● It offers a higher degree of multiprogramming.
● Allows dynamic relocation. For example, if address binding at execution time is being used, then processes can be swapped into different locations. Otherwise, in the case of compile-time or load-time binding, processes must be moved back to the same location.
● It helps to get better utilization of memory.
● It wastes little CPU time, and it can easily be applied to a priority-based scheduling method to improve its performance.
Q3) Write short notes on memory allocation?
A3) Memory allocation is a process by which computer programs are assigned memory or space.
Here, main memory is divided into two types of partitions:
Low Memory – The operating system resides in this type of memory.
High Memory – User processes are held in high memory.
Partition Allocation
Memory is divided into different blocks or partitions. Each process is allocated memory according to its requirements. Partition allocation is an ideal method to avoid internal fragmentation.
Below are the various partition allocation schemes:
● First Fit: The process is allocated to the first partition, searched from the beginning of main memory, that is large enough.
● Best Fit: It allocates the process to the smallest free partition that is still large enough.
● Worst Fit: It allocates the process to the largest sufficient free partition available in main memory.
● Next Fit: It is mostly similar to First Fit, but it searches for the first sufficient partition starting from the point of the last allocation rather than from the beginning (see the sketch below).
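As an illustration of these strategies, here is a minimal C sketch that simulates First Fit and Best Fit over a hypothetical list of free block sizes; the block sizes, the request size, and the function names are invented for this example and are not part of any real OS interface.

#include <stdio.h>

#define NBLOCKS 5

/* First Fit: first free block from the beginning that is large enough. */
int first_fit(const int blocks[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (blocks[i] >= request)
            return i;
    return -1;                      /* no block is large enough */
}

/* Best Fit: smallest free block that is still large enough. */
int best_fit(const int blocks[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (blocks[i] >= request && (best == -1 || blocks[i] < blocks[best]))
            best = i;
    return best;
}

int main(void) {
    int blocks[NBLOCKS] = {100, 500, 200, 300, 600};   /* free partition sizes (KB) */
    int request = 212;                                 /* hypothetical process size  */

    printf("First Fit -> block %d\n", first_fit(blocks, NBLOCKS, request));  /* 1 */
    printf("Best Fit  -> block %d\n", best_fit(blocks, NBLOCKS, request));   /* 3 */
    return 0;
}

Worst Fit would pick block 4 (600 KB), and Next Fit would behave like First Fit except that the search resumes from wherever the previous allocation stopped.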
Q4) What is fragmentation?
A4) Fragmentation
As processes are loaded into and removed from memory, free memory space is broken into pieces that are too small to be used by other processes.
After some time, processes cannot be allocated memory blocks because the blocks are too small, and this memory always remains unused; this is called fragmentation. This type of problem happens in a dynamic memory allocation system when the free blocks are quite small, so they are not able to fulfill any request.
Types
The two types of fragmentation are:
● External fragmentation
● Internal fragmentation
External fragmentation can be reduced by rearranging memory contents to place all free memory together in a single block.
Internal fragmentation can be reduced by assigning the smallest partition that is still large enough to hold the entire process.
Q5) Write the difference between static and dynamic loading?
A5) Difference Between Static and Dynamic Loading
Static Loading | Dynamic Loading |
Static loading is used when you want to load your program statically. At compile time, the entire program is linked and compiled without any external module or program dependency. | In a dynamically loaded program, only references are provided at compile time, and the actual loading is done at the time of execution. |
At load time, the entire program is loaded into memory and starts its execution. | Routines of the library are loaded into memory only when they are required by the program. |
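On POSIX systems, dynamic loading can be demonstrated with the dlopen/dlsym interface. The following is a minimal sketch (the library name "libm.so.6" is Linux-specific, and error handling is kept to a bare minimum):

#include <stdio.h>
#include <dlfcn.h>                 /* dlopen, dlsym, dlclose; link with -ldl on Linux */

int main(void) {
    /* Load the math library only at run time, when it is actually needed. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Look up the symbol "cos" at run time. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}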
Q6) Write the difference Between Static and Dynamic Linking?
A6) Here is the main difference between Static vs. Dynamic Linking:
Static Linking | Dynamic Linking |
Static linking is used to combine all the modules required by a program into a single executable, which helps the OS avoid any runtime dependency. | When dynamic linking is used, the actual module or library is not linked with the program. Instead, a reference to the dynamic module is provided at compile and link time, and the actual linking happens at run time. |
Q7) Describe paging?
A7) Paging
● In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages.
● The primary idea behind paging is to divide each process into pages. Main memory is likewise divided into frames.
● One page of the process fits into one frame of memory. The pages can be stored at different locations of memory, but the priority is always to find contiguous frames or holes.
● In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementation in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.
● Support for paging has traditionally been handled by hardware. However, recent designs have implemented paging by closely integrating the hardware and the operating system, especially on 64-bit microprocessors.
Basic Method
● The basic technique for implementing paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size known as pages.
● When a process is to be executed, its pages are loaded into any available memory frames from their source (a file system or the backing store).
● The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.
● The hardware support for paging is shown in the figure below. Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).
Fig 1 - Paging Hardware
● The page number is used as an index into a page table. The page table contains the base address of each page in physical memory.
● This base address is combined with the page offset to define the physical memory address that is sent to the memory unit. The paging model of memory is shown in the figure below.
Fig 2 - Paging model of logical and physical memory
● The page size (like the frame size) is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page, depending on the computer architecture.
● The choice of a power of 2 as the page size makes the translation of a logical address into a page number and a page offset particularly simple.
● If the size of the logical address space is 2^m, and the page size is 2^n addressing units (bytes or words), then the high-order m - n bits of a logical address designate the page number and the n low-order bits designate the page offset.
● Thus, the logical address is as follows:
Page number | Page offset |
p | d |
m - n bits | n bits |
Where p is an index into the page table and d is the displacement inside the page.
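Because the page size is a power of 2, this split is just a shift and a mask. The following C sketch assumes a hypothetical 32-bit logical address and 4 KB pages (n = 12); the address value is made up for illustration.

#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS 12                        /* n: 4 KB pages = 2^12 bytes */
#define OFFSET_MASK ((1u << PAGE_BITS) - 1)

int main(void) {
    uint32_t logical = 0x0001A2B4;          /* hypothetical logical address */

    uint32_t p = logical >> PAGE_BITS;      /* high-order m - n bits: page number */
    uint32_t d = logical & OFFSET_MASK;     /* low-order n bits: page offset */

    printf("page number p = %u, offset d = %u\n", p, d);   /* p = 26, d = 692 */
    return 0;
}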
● Since the operating system manages physical memory, it must be aware of the allocation details of physical memory: which frames are allocated, which frames are available, how many total frames there are, and so on.
● This information is generally kept in a data structure called a frame table. The frame table has one entry for each physical page frame, indicating whether the frame is free or allocated and, if it is allocated, to which page of which process or processes.
Hardware Support
● The page table can be implemented in hardware using dedicated registers. However, using registers for the page table is practical only if the page table is small.
● If the page table contains a large number of entries, we can use a TLB (translation look-aside buffer), a special, small, fast-lookup hardware cache.
● The TLB is associative, high-speed memory.
● Each entry in the TLB consists of two parts: a tag (key) and a value.
● When the associative memory is searched, an item is compared with all tags at the same time. If the item is found, the corresponding value is returned.
Fig 3: Example
Main memory access time = m
If the page table is kept in main memory,
Successful access time = m (to access the page table) + m (to access the required page) = 2m
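With a TLB in front of the page table, the effective access time drops on a hit. A minimal sketch of the usual calculation, assuming a hypothetical TLB lookup time and hit ratio (the numbers below are illustrative only):

#include <stdio.h>

int main(void) {
    double m   = 100.0;   /* main memory access time in ns (assumed) */
    double tlb = 10.0;    /* TLB lookup time in ns (assumed)         */
    double hit = 0.90;    /* TLB hit ratio (assumed)                 */

    /* Hit:  TLB lookup + one memory access.
       Miss: TLB lookup + page-table access + memory access. */
    double eat = hit * (tlb + m) + (1.0 - hit) * (tlb + 2.0 * m);

    printf("Effective access time = %.1f ns\n", eat);   /* 120.0 ns here */
    return 0;
}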
Q8) Explain segmentation?
A8) Segmentation
● Memory segmentation is a (primary) memory management method that divides a computer's primary memory into segments or sections.
● In a computer system using segmentation, a reference to a memory location includes a value that identifies a segment and an offset (memory location) within that segment.
● Segments or sections are also used in object files of compiled programs when they are linked together into a program image and when the image is loaded into memory.
● A process is divided into segments. The chunks that a program is partitioned into, which are not necessarily all of the same size, are called segments.
● Segmentation gives the user's view of the process, which paging does not give.
● Here the user's view is mapped onto physical memory.
● There are two kinds of segmentation:
● Virtual memory segmentation – Each process is divided into a number of segments, not all of which are resident at any one point in time.
● Simple segmentation – Each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.
● There is no simple relationship between logical addresses and physical addresses in segmentation. A table is maintained to store the data about all such segments; it is called the segment table.
● Segment Table – It maps the two-dimensional logical address into the one-dimensional physical address. Each of its table entries has:
- Base Address: It contains the starting physical address where the segment resides in memory.
- Limit: It specifies the length of the segment.
● Segments usually correspond to natural divisions of a program, such as individual routines or data tables, so segmentation is generally more visible to the programmer than paging alone.
● Different segments may be created for different program modules, or for different classes of memory usage, such as code and data segments. Certain segments may be shared between programs (see the sketch below).
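The base/limit translation can be sketched in C as follows. The segment table contents here are hypothetical values chosen only for illustration; a real MMU would also check protection bits.

#include <stdio.h>
#include <stdlib.h>

/* One segment-table entry: base address and limit (segment length). */
struct segment { unsigned base, limit; };

/* Hypothetical segment table for one process. */
struct segment seg_table[] = {
    {1400, 1000},    /* segment 0 */
    {6300,  400},    /* segment 1 */
    {4300, 1100},    /* segment 2 */
};

/* Translate (segment number s, offset d) into a physical address. */
unsigned translate(unsigned s, unsigned d) {
    if (d >= seg_table[s].limit) {           /* offset beyond the segment length */
        fprintf(stderr, "addressing error: trap to the OS\n");
        exit(1);
    }
    return seg_table[s].base + d;            /* base + displacement */
}

int main(void) {
    printf("segment 2, offset 53 -> %u\n", translate(2, 53));    /* 4353 */
    printf("segment 1, offset 500 -> ");
    translate(1, 500);                        /* limit is 400, so this traps */
    return 0;
}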
Q9) Describe segmentation with paging?
A9) Pure segmentation is not very popular and not being used in many of the operating systems. However, Segmentation can be combined with Paging to get the best features out of both the techniques.
In Segmented Paging, the main memory is divided into variable size segments which are further divided into fixed-size pages.
- Pages are smaller than segments.
- Each Segment has a page table which means every program has multiple page tables.
- The logical address is represented as Segment Number (base address), Page number, and page offset.
Segment Number → It points to the appropriate entry in the segment table.
Page Number → It Points to the exact page within the segment
Page Offset → Used as an offset within the page frame
Each Page table contains various information about every page of the segment. The Segment Table contains information about every segment. Each segment table entry points to a page table entry and every page table entry is mapped to one of the pages within a segment.
Fig 4 – Logical address
Translation of logical address to a physical address
The CPU generates a logical address, which is divided into two parts: segment number and segment offset. The segment offset must be less than the segment limit. The offset is further divided into a page number and a page offset. To find the corresponding entry in the page table, the page number is added to the page table base address taken from the segment table entry.
The actual frame number with the page offset is mapped to the main memory to get the desired word on the page of a certain segment of the process.
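A simplified sketch of this two-level translation is shown below, assuming a hypothetical 1 KB page size and made-up table contents; the structure and function names are illustrative only.

#include <stdio.h>

#define PAGE_BITS 10                        /* assumed 1 KB pages */
#define PAGE_SIZE (1u << PAGE_BITS)

/* Each segment has its own (tiny, hypothetical) page table: page -> frame. */
unsigned seg0_pages[] = {5, 9, 2};
unsigned seg1_pages[] = {7, 4};

struct seg_entry { unsigned *page_table; unsigned limit; };
struct seg_entry seg_table[] = {
    { seg0_pages, 3 * PAGE_SIZE },
    { seg1_pages, 2 * PAGE_SIZE },
};

unsigned translate(unsigned s, unsigned offset) {
    if (offset >= seg_table[s].limit)        /* segment offset must be < limit */
        return (unsigned)-1;                 /* would trap to the OS */

    unsigned page  = offset >> PAGE_BITS;             /* page number within the segment */
    unsigned d     = offset & (PAGE_SIZE - 1);         /* offset within the page */
    unsigned frame = seg_table[s].page_table[page];    /* page-table lookup */

    return frame * PAGE_SIZE + d;                      /* frame number + page offset */
}

int main(void) {
    /* segment 0, offset 1030 -> page 1, offset 6 -> frame 9 -> 9*1024 + 6 = 9222 */
    printf("physical address = %u\n", translate(0, 1030));
    return 0;
}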
Advantages of Segmented Paging
- It reduces memory usage.
- Page table size is limited by the segment size.
- The segment table has only one entry corresponding to one actual segment.
- External Fragmentation is not there.
- It simplifies memory allocation.
Disadvantages of Segmented Paging
- Internal fragmentation will still be there.
- The complexity level is much higher as compared to plain paging.
- Page tables need to be stored contiguously in memory.
Q10) Define virtual memory?
A10) Virtual memory is a memory management capability of an operating system (OS) that uses hardware and software to allow a computer to compensate for physical memory shortages by temporarily moving data from random access memory (RAM) to disk storage.
● Virtual address space is extended using active memory in RAM and inactive memory on hard disk drives (HDDs) to form contiguous addresses that hold both the application and its data.
● Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory.
● The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically into the corresponding machine addresses.
● The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary memory available, not by the actual number of main storage locations.
● Virtual memory was developed at a time when physical memory - installed RAM - was expensive.
● Computers have a finite amount of RAM, so memory can run out, especially when multiple programs run at the same time.
● A system using virtual memory uses a section of the hard drive to emulate RAM. With virtual memory, a system can load larger programs, or multiple programs running at the same time, allowing each one to operate as though it has unlimited memory and without purchasing more RAM.
● While copying virtual memory into physical memory, the OS divides memory into page files or swap files with a fixed number of addresses. Each page is stored on disk, and when the page is needed, the OS copies it from disk to main memory and translates the virtual addresses into real addresses.
● In current microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware.
● The MMU's main responsibility is to translate virtual addresses into physical addresses. A basic model is given below.
● Virtual memory is a technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
- All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This means a process can be swapped in and out of main memory such that it occupies different places in main memory at different times over the course of its execution.
- A process may be broken into a number of pieces, and these pieces need not be continuously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table makes this possible.
● If these characteristics are present, then it is not necessary that all of the pages or segments be present in main memory during execution. This means the required pages only need to be loaded into memory whenever they are required.
● Virtual memory is commonly implemented using demand paging or demand segmentation.
Q11) Explain demand paging?
A11) According to the idea of virtual memory, to execute a process only a part of the process needs to be present in main memory, which means that only a few of its pages may be present in main memory at any time.
● The difficult task is to decide which pages should be kept in main memory and which should be kept in secondary memory, because we cannot say ahead of time which page a process will require at a particular time.
● Consequently, to overcome this problem, a concept called demand paging is introduced. It proposes keeping all pages of a process in secondary memory until they are required. In other words, it says: do not load any page into main memory until it is required.
● Demand paging says that pages should only be brought into memory if the executing process demands them. This is often known as lazy evaluation, as only those pages demanded by the process are swapped from secondary storage to main memory.
Fig 5 – Swapping memory
● In contrast, in pure swapping, all memory for a process is swapped from secondary storage to main memory when the process starts executing.
● To implement this, a page table is used. The page table maps logical memory to physical memory.
● The page table uses a valid/invalid bit to mark whether a page is valid or invalid. A valid page is one that is present in main memory. An invalid page is one that is currently present only in secondary memory.
● When a process attempts to access a page, the following steps are followed, as shown in the figure below (a code sketch of this flow follows the list):
- If the CPU attempts to access a page that is not currently available in main memory, it generates an interrupt indicating a memory access fault (a page fault).
- The OS places the interrupted process in a blocked state. For execution to continue, the OS must bring the required page into memory.
- The OS will search for the required page in the logical address space.
- The required page will be brought from logical address space into physical address space. Page replacement algorithms are used to decide which page is to be replaced by the new page in physical address space.
- The page table is updated accordingly.
- A signal is sent to the CPU to continue program execution, and the process is put back into the ready state.
● Hence, whenever a page fault occurs, these steps are performed by the operating system and the required page is brought into memory.
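The sketch below summarizes the valid/invalid check and the page-fault path in C. Everything here (the structure fields, the stub functions choose_victim_frame and read_page_from_disk, and the table contents) is hypothetical and stands in for real kernel mechanisms.

#include <stdbool.h>
#include <stdio.h>

struct pte {             /* one page-table entry */
    bool     valid;      /* true -> page is present in main memory */
    unsigned frame;      /* frame number, meaningful only if valid */
};

/* Hypothetical stand-ins for real OS/disk operations. */
static unsigned choose_victim_frame(void) { return 3; }   /* e.g. chosen by FIFO/LRU */
static void read_page_from_disk(unsigned page, unsigned frame) {
    printf("reading page %u from the backing store into frame %u\n", page, frame);
}

static unsigned access_page(struct pte table[], unsigned page) {
    if (table[page].valid)                     /* hit: page already in memory */
        return table[page].frame;

    /* Page fault: trap to the OS, block the process, fetch the page. */
    printf("page fault on page %u\n", page);
    unsigned frame = choose_victim_frame();    /* page replacement picks a frame */
    read_page_from_disk(page, frame);          /* I/O from the backing store */

    table[page].valid = true;                  /* update the page table */
    table[page].frame = frame;
    return frame;                              /* process becomes ready again */
}

int main(void) {
    struct pte table[4] = {{true, 0}, {false, 0}, {true, 5}, {false, 0}};
    access_page(table, 2);                     /* hit        */
    access_page(table, 1);                     /* page fault */
    return 0;
}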
Q12) Write the advantages and disadvantages of demand paging?
A12) Advantages
● Demand paging, instead of loading all pages immediately, loads only those pages that are demanded by the executing process.
● As there is more space in main memory, more processes can be loaded, reducing the context switching time, which otherwise consumes a lot of resources.
● Less loading latency occurs at program startup, as less data is fetched from secondary storage and less data is brought into main memory.
● As main memory is costly compared to secondary memory, this technique helps significantly reduce the bill of materials (BOM) cost, in smartphones for example. Symbian OS had this feature.
Disadvantage
● Individual programs face additional latency when they access a page for the first time.
● Low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.
● Memory management with page replacement algorithms becomes somewhat more complex.
● Possible security risks, including vulnerability to timing attacks.
● Thrashing may occur due to repeated page faults.
Q13) What is a page replacement algorithm?
A13) Page replacement algorithm
● Page replacement algorithms are the techniques by which an operating system chooses which memory pages to swap out (write to disk) when a page of memory needs to be allocated.
● Paging happens whenever a page fault occurs and a free frame cannot be used for the allocation, either because no pages are available or because the number of free pages is lower than the number of required pages.
● When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires I/O completion.
● This process determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
● A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to choose which pages should be replaced so as to minimize the total number of page misses, while balancing this against the cost (in primary storage and processor time) of the algorithm itself.
● There is a wide range of page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.
● There are two main parts of virtual memory: frame allocation and page replacement. It is important to have optimal frame allocation and page replacement algorithms. Frame allocation is about how many frames are to be allocated to a process, while page replacement is about deciding which page should be replaced to make space for the requested page.
Q14) What are the different types of page replacement algorithm?
A14) There are different page replacement algorithms. Every algorithm has a different strategy by which pages are replaced.
First in First out (FIFO) –
● This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
● Example: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.
● Initially, all slots are empty, so when 1, 3, 0 arrive they are allocated to the empty slots — > 3 page faults.
● When 3 comes, it is already in memory, so — > 0 page faults.
● Then 5 comes; it is not present in memory, so it replaces the oldest page, i.e. 1 — > 1 page fault.
● 6 comes; it is also not present in memory, so it replaces the oldest page, i.e. 3 — > 1 page fault.
● Finally, when 3 comes it is not present, so it replaces 0 — > 1 page fault.
● Belady's anomaly – Belady's anomaly demonstrates that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For instance, if we consider the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total page faults, but if we increase the number of slots to 4, we get 10 page faults. A short simulation of FIFO follows.
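The following is a minimal C sketch that counts FIFO page faults for the reference string above with 3 frames; the circular index simply marks the oldest resident page (the array sizes are arbitrary for the example).

#include <stdio.h>

int fifo_faults(const int ref[], int n, int frames) {
    int mem[16], used = 0, oldest = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == ref[i]) { hit = 1; break; }
        if (hit) continue;                    /* page already resident */

        if (used < frames) {
            mem[used++] = ref[i];             /* fill an empty frame */
        } else {
            mem[oldest] = ref[i];             /* replace the oldest page */
            oldest = (oldest + 1) % frames;
        }
        faults++;
    }
    return faults;
}

int main(void) {
    int ref[] = {1, 3, 0, 3, 5, 6, 3};
    printf("FIFO page faults = %d\n",
           fifo_faults(ref, (int)(sizeof ref / sizeof ref[0]), 3));   /* prints 6 */
    return 0;
}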
Optimal Page replacement
● This algorithm replaces the page that will not be referred to for the longest time in the future. Although it cannot be implemented in practice, it can be used as a benchmark. Other algorithms are compared with it in terms of optimality.
● Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
● Initially, all slots are empty, so when 7, 0, 1, 2 arrive they are assigned to the empty slots — > 4 page faults.
● 0 is already there, so — > 0 page fault.
● When 3 comes, it replaces 7 because 7 is not used for the longest duration in the future — > 1 page fault.
● 0 is already there, so — > 0 page fault.
● 4 replaces 1 — > 1 page fault.
● Now, for the rest of the page reference string — > 0 page faults, since the pages are already available in memory.
● Optimal page replacement is perfect but unrealistic in practice, as the operating system cannot know future requests. The use of optimal page replacement is to set up a benchmark against which other replacement algorithms can be analyzed (a short simulation follows).
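A minimal C sketch of the optimal (Belady) algorithm for the same reference string; on a fault it evicts the resident page whose next use is farthest in the future (or that is never used again). The helper name and array sizes are chosen only for this example.

#include <stdio.h>

int optimal_faults(const int ref[], int n, int frames) {
    int mem[16], used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == ref[i]) { hit = 1; break; }
        if (hit) continue;                        /* page already resident */

        if (used < frames) {
            mem[used++] = ref[i];                 /* fill an empty frame */
        } else {
            /* Evict the page whose next use is farthest away (or never). */
            int victim = 0, farthest = -1;
            for (int j = 0; j < frames; j++) {
                int next = n;                     /* n means "never used again" */
                for (int k = i + 1; k < n; k++)
                    if (ref[k] == mem[j]) { next = k; break; }
                if (next > farthest) { farthest = next; victim = j; }
            }
            mem[victim] = ref[i];
        }
        faults++;
    }
    return faults;
}

int main(void) {
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    printf("Optimal page faults = %d\n",
           optimal_faults(ref, (int)(sizeof ref / sizeof ref[0]), 4));   /* prints 6 */
    return 0;
}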
Least Recently Used
● This algorithm replaces the page that has not been referred to for the longest time. This algorithm is just the opposite of the optimal page replacement algorithm: here we look at the past instead of the future.
● Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
● Initially, all slots are empty, so when 7, 0, 1, 2 arrive they are allocated to the empty slots — > 4 page faults.
● 0 is already there, so — > 0 page fault.
● When 3 comes, it replaces 7 because 7 is the least recently used — > 1 page fault.
● 0 is already in memory, so — > 0 page fault.
● 4 takes the place of 1 — > 1 page fault.
● Now, for the rest of the page reference string — > 0 page faults, since the pages are already available in memory (a short simulation follows).
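A minimal C sketch of LRU for the same reference string; a "last used" timestamp per resident page is enough for a small simulation (real hardware uses approximations such as reference bits instead).

#include <stdio.h>

int lru_faults(const int ref[], int n, int frames) {
    int mem[16], last_used[16], used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (mem[j] == ref[i]) { hit = j; break; }

        if (hit >= 0) {                       /* page already resident */
            last_used[hit] = i;               /* refresh its recency */
            continue;
        }

        if (used < frames) {                  /* fill an empty frame */
            mem[used] = ref[i];
            last_used[used] = i;
            used++;
        } else {                              /* evict the least recently used page */
            int victim = 0;
            for (int j = 1; j < frames; j++)
                if (last_used[j] < last_used[victim]) victim = j;
            mem[victim] = ref[i];
            last_used[victim] = i;
        }
        faults++;
    }
    return faults;
}

int main(void) {
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    printf("LRU page faults = %d\n",
           lru_faults(ref, (int)(sizeof ref / sizeof ref[0]), 4));   /* prints 6 */
    return 0;
}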
Q15) Define thrashing?
A15) Thrashing
● Thrashing is computer activity that makes little or no progress because memory or other resources have become exhausted or too limited to perform the required operations.
● At this point, a pattern typically develops in which a request is made to the operating system by a process or program, and the operating system tries to find resources by taking them from some other process, which in turn makes new requests that cannot be satisfied.
● In a virtual storage system (an operating system that manages its local storage or memory in units called pages), thrashing is a condition in which excessive paging operations are taking place.
● A system that is thrashing can be perceived either as a very slow system or as one that has come to a halt.
● Thrashing is a condition or situation in which the system spends a major portion of its time servicing page faults, while the actual processing done is negligible.
● The basic idea is that if a process is allocated too few frames, there will be too many and too frequent page faults.
● As a result, no useful work gets done by the CPU, and CPU utilization falls drastically.
● The long-term scheduler would then attempt to improve CPU utilization by loading some more processes into memory, thereby increasing the degree of multiprogramming.
● This results in a further decline in CPU utilization, triggering a chain reaction of higher page faults followed by an increase in the degree of multiprogramming; this is called thrashing.
Q16) What is the reason behind the thrashing?
A16) Reasons for Thrashing
Thrashing results in severe performance problems.
- If CPU utilization is too low, the level of multiprogramming is increased by introducing a new process to the system, and a global page replacement algorithm is used. The CPU scheduler sees the decreasing CPU utilization and increases the level of multiprogramming further.
- CPU utilization is plotted against the level of multiprogramming.
- As the level of multiprogramming increases, CPU utilization also increases.
- If the level of multiprogramming is increased further, thrashing sets in, and CPU utilization drops sharply.
- So, at this point, to increase CPU utilization and to stop thrashing, we must decrease the level of multiprogramming.