
Saturday, February 15, 2020

Demand Paging

The basic idea behind demand paging is that when a process is swapped in, its pages are not swapped in all at once. Rather, they are swapped in only when the process needs them (on demand). The program that does this is termed a lazy swapper, although a pager is the more accurate term, since it deals with individual pages rather than entire processes.

When a process is to be swapped in, the pager only loads into memory those pages that it expects the process to need. Pages that are not loaded into memory are marked as invalid in the page table, using the valid-invalid bit. If the bit is set to valid, the associated page is both legal and in memory. If the bit is set to invalid, the page is either not valid (not part of the process's logical address space) or is valid but currently on disk.
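As a rough illustration only (the field names and the 4 KB page size are assumptions for this sketch, not taken from any particular system), a page-table entry can be pictured as a frame number plus a valid-invalid bit that is checked on every reference:

    /* Hypothetical page-table entry with a valid-invalid bit.               */
    /* Real entries also carry protection, dirty, and reference bits.        */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t frame;   /* physical frame number, meaningful only when valid  */
        bool     valid;   /* true:  page is legal and resident in memory        */
                          /* false: page is illegal, or legal but still on disk */
    } pte_t;

    /* The hardware consults the bit on every reference; an invalid entry traps. */
    bool translate(const pte_t *pte, uint32_t offset, uint32_t *phys_addr)
    {
        if (!pte->valid)
            return false;                        /* would raise a page-fault trap */
        *phys_addr = pte->frame * 4096 + offset; /* assuming 4 KB pages           */
        return true;
    }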


On the other hand, if a page is needed that was not originally loaded, then a page-fault trap is generated, which must be handled in a series of steps (a toy sketch in code follows the list):


    1. The memory address requested is first checked, to make sure it was a valid memory request.
    2. If the reference was invalid, the process is terminated. Otherwise, the page must be paged in.
    3. A free frame is located, possibly from a free-frame list.
    4. A disk operation is scheduled to bring in the necessary page from disk. (This will usually block the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
    5. When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to indicate that this is now a valid page reference.
    6. The instruction that caused the page fault must now be restarted from the beginning (as soon as this process gets another turn on the CPU).
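The following is a minimal user-space simulation of these steps, under assumed toy sizes (8 pages, 4 frames, 16-byte pages). It is only a sketch: a real handler runs in the kernel and must also perform page replacement when no frame is free.

    /* Toy demand-paging simulation: a reference to an invalid page triggers   */
    /* handle_page_fault(), which finds a free frame, copies the page in from  */
    /* the simulated backing store, and marks the page-table entry valid.      */
    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>

    #define NUM_PAGES  8
    #define NUM_FRAMES 4
    #define PAGE_SIZE  16

    static struct { bool valid; int frame; } page_table[NUM_PAGES];
    static char backing_store[NUM_PAGES][PAGE_SIZE];   /* the "disk" copy      */
    static char memory[NUM_FRAMES][PAGE_SIZE];         /* physical frames      */
    static bool frame_free[NUM_FRAMES] = { true, true, true, true };

    static bool handle_page_fault(int page)
    {
        if (page < 0 || page >= NUM_PAGES)             /* steps 1-2: bad reference   */
            return false;                              /* caller terminates process  */
        for (int f = 0; f < NUM_FRAMES; f++) {         /* step 3: find a free frame  */
            if (frame_free[f]) {
                memcpy(memory[f], backing_store[page], PAGE_SIZE); /* step 4: disk read */
                page_table[page].frame = f;            /* step 5: update page table  */
                page_table[page].valid = true;
                frame_free[f] = false;
                return true;
            }
        }
        return false;   /* no free frame: a real system would run page replacement */
    }

    static char read_byte(int page, int offset)
    {
        if (!page_table[page].valid)                   /* invalid bit set: trap      */
            handle_page_fault(page);
        return memory[page_table[page].frame][offset]; /* step 6: redo the access    */
    }

    int main(void)
    {
        strcpy(backing_store[3], "hello");
        printf("%c\n", read_byte(3, 0));    /* faults, pages in page 3, prints 'h'   */
        return 0;
    }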

In the extreme case, no pages are swapped in for a process until they are requested by page faults. This is known as pure demand paging. The hardware necessary to support demand paging is the same as for paging and swapping: a page table and secondary memory (swap space).

A crucial part of the demand paging process is that the instruction must be restarted from scratch once the desired page has been made available in memory. For most simple instructions this is not a major difficulty. However, some architectures allow a single instruction to modify a fairly large block of data, and if some of the data has already been modified when the page fault occurs, simply restarting the instruction could cause problems. One solution is to access both ends of the block before executing the instruction, guaranteeing that the necessary pages are paged in before any data is modified, as sketched below.
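One way to picture the "access both ends" idea is the sketch below. It mimics in C what the hardware would do before a short block-move instruction; the function name and the probe-in-software approach are purely illustrative.

    #include <stddef.h>
    #include <string.h>

    /* Touch the first and last byte of source and destination before copying, */
    /* so that any page faults happen before the move modifies anything.       */
    /* Probing the ends guarantees residency of the end pages only, which is   */
    /* sufficient when each operand spans at most two pages, as with short     */
    /* block-move instructions.                                                */
    void block_move(char *dst, const char *src, size_t len)
    {
        if (len == 0)
            return;
        volatile char sink;
        sink = src[0];                  /* page in first source page           */
        sink = src[len - 1];            /* page in last source page            */
        dst[0] = dst[0];                /* page in first destination page      */
        dst[len - 1] = dst[len - 1];    /* page in last destination page       */
        (void)sink;
        memmove(dst, src, len);         /* the copy itself can now proceed     */
    }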

Performance of Demand Paging

Demand paging can significantly affect the performance of a computer system. Let p be the probability of a page fault (0 ≤ p ≤ 1). The effective access time is:

effective access time = (1 - p) x ma + p x page fault time

With an average page-fault service time of 8 milliseconds and a memory-access time (ma) of 200 nanoseconds,

effective access time = (1 - p) x 200 + p x 8 milliseconds
                      = (1 - p) x 200 + p x 8,000,000 nanoseconds
                      = 200 + 7,999,800 x p

which clearly depends heavily on p. Even if only one access in 1,000 causes a page fault, the effective access time rises from 200 nanoseconds to 8.2 microseconds, a slowdown by a factor of 40. To keep the slowdown below 10 percent, the page-fault rate must be less than 0.0000025, that is, fewer than one fault in 399,990 memory accesses.
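The arithmetic above can be checked with a short program; the numbers are the ones used in the example (200 ns memory access, 8 ms fault service time):

    /* Effective access time for demand paging: eat = (1 - p) * ma + p * fault. */
    #include <stdio.h>

    int main(void)
    {
        const double ma    = 200.0;        /* memory access time, ns            */
        const double fault = 8000000.0;    /* page-fault service time, ns       */
        const double probs[] = { 0.0, 1.0 / 1000.0, 0.0000025 };

        for (int i = 0; i < 3; i++) {
            double p   = probs[i];
            double eat = (1.0 - p) * ma + p * fault;   /* = 200 + 7,999,800 p   */
            printf("p = %.7f  ->  effective access time = %.1f ns\n", p, eat);
        }
        return 0;
    }

For p = 1/1000 this prints roughly 8,199.8 ns (about 8.2 microseconds), and for p = 0.0000025 it prints about 220 ns, which is the 10 percent slowdown bound mentioned above.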

An additional aspect of demand paging is the handling and overall use of swap space. I/O to swap space is faster than I/O through the regular file system, because swap space is allocated in larger blocks and does not have to go through the directory structure and file-lookup overhead. For this reason, some systems transfer an entire process image from the file system to swap space before starting the process, so that all future paging occurs from the faster swap space.




