Operating Systems Unit 3 Review

3.3 Virtual memory and paging

Written by the Fiveable Content Team • Last updated September 2025

Virtual memory is a game-changer in memory management. It creates the illusion of a vast address space for processes, allowing programs to use more memory than physically available. This clever trick uses secondary storage as an extension of main memory.

Paging is the secret sauce behind virtual memory's magic. It divides memory into fixed-size blocks called pages, enabling non-contiguous allocation and efficient memory sharing between processes. This system also supports fine-grained memory protection and clever techniques like copy-on-write.

Virtual memory and memory management

Abstraction and Illusion of Large Address Space

  • Virtual memory creates an abstraction of physical memory, giving each process the illusion of a large, contiguous address space
  • Programs can use more memory than is physically available by treating secondary storage (hard drives, SSDs) as an extension of main memory
  • Enables efficient memory allocation and deallocation, leading to better memory utilization and process isolation
  • Manages the mapping between virtual addresses (used by processes) and physical addresses (in main memory)

Memory Protection and Demand Paging

  • Prevents processes from accessing memory outside their allocated space, enhancing system security
  • Supports demand paging, where only the required portions of a program are loaded into main memory
    • Example: Large application loads only currently used modules (text editor loads spelling check only when needed)
    • Example: Operating system loads device drivers on-demand rather than at boot time
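
To see how demand paging plays out, here's a toy user-space simulation in C. It's a sketch, not real kernel code: the page and frame counts, the disk array, and the access_page helper are all invented for illustration, and there's no page replacement, so pages are simply copied into free frames the first time they're touched.

```c
#include <stdio.h>
#include <string.h>

/* Toy model of demand paging: 8 virtual pages, 4 physical frames, and a
   "disk" array standing in for the backing store.  A page is copied into
   a frame only the first time it is touched. */
#define NPAGES  8
#define NFRAMES 4
#define PAGE_SZ 16

static char disk[NPAGES][PAGE_SZ];      /* backing store                     */
static char frames[NFRAMES][PAGE_SZ];   /* "physical memory"                 */
static int  page_to_frame[NPAGES];      /* -1 means the page is not resident */
static int  next_free = 0;              /* next unused frame (no eviction)   */

static char *access_page(int vpn) {
    if (page_to_frame[vpn] < 0) {                 /* page fault              */
        int f = next_free++;                      /* grab a free frame       */
        memcpy(frames[f], disk[vpn], PAGE_SZ);    /* load the page from disk */
        page_to_frame[vpn] = f;
        printf("page fault: loaded page %d into frame %d\n", vpn, f);
    }
    return frames[page_to_frame[vpn]];
}

int main(void) {
    for (int p = 0; p < NPAGES; p++) {
        snprintf(disk[p], PAGE_SZ, "page %d data", p);
        page_to_frame[p] = -1;
    }
    printf("%s\n", access_page(2));   /* fault: loads page 2        */
    printf("%s\n", access_page(5));   /* fault: loads page 5        */
    printf("%s\n", access_page(2));   /* already resident, no fault */
    return 0;
}
```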

Paging and its benefits

Paging Mechanism and Memory Management

  • Divides both physical and virtual memory into fixed-size blocks called pages
  • The virtual address space is split into virtual pages, while physical memory is divided into page frames of the same size
  • Eliminates external fragmentation by allowing non-contiguous allocation of physical memory
  • Supports efficient memory allocation and deallocation by managing only whole pages
  • Facilitates memory sharing between processes by allowing multiple virtual pages to map to the same physical page frame
    • Example: Shared libraries (libc) mapped to same physical pages for multiple processes
    • Example: Copy-on-write for efficient process forking (child process shares parent's pages until modification)
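
The sharing point can be demonstrated with standard POSIX calls. In this minimal sketch (assuming a Unix-like system with mmap and fork), a parent and child map the same anonymous page with MAP_SHARED, so their separate virtual pages end up backed by the same physical frame and a write by one process is visible to the other.

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    long psz = sysconf(_SC_PAGESIZE);

    /* One anonymous page mapped MAP_SHARED: after fork, the parent's and
       child's virtual pages both refer to the same physical frame. */
    char *page = mmap(NULL, (size_t)psz, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(page, "written by parent");

    pid_t pid = fork();
    if (pid == 0) {                          /* child */
        strcpy(page, "written by child");
        _exit(0);
    }
    waitpid(pid, NULL, 0);

    /* Because the frame is shared, the parent sees the child's write. */
    printf("parent sees: \"%s\"\n", page);
    munmap(page, (size_t)psz);
    return 0;
}
```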

Memory Protection and Efficiency

  • Enables fine-grained memory protection by setting access rights at the page level
    • Example: Read-only pages for code segments, read-write for data segments
    • Example: No-execute (NX) bit to prevent code execution from data pages
  • Supports copy-on-write techniques, improving memory efficiency for process forking
    • Initially shares all pages between parent and child processes
    • Creates separate copy of a page only when one process attempts to modify it
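
Copy-on-write happens inside the kernel and can't be observed directly from user code, but its visible effect shows up after fork. In this minimal sketch (assuming a POSIX system), the child's write to ordinary private memory leaves the parent's copy untouched, because the kernel duplicates the shared page only at the moment of the write.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    static char buf[64] = "original";    /* ordinary private memory */

    pid_t pid = fork();                  /* child starts out sharing the
                                            parent's page frames */
    if (pid == 0) {
        /* The child's first write triggers copy-on-write in the kernel:
           the shared page is duplicated and only the child's copy changes. */
        strcpy(buf, "modified by child");
        printf("child  sees: %s\n", buf);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent sees: %s\n", buf);    /* still "original" */
    return 0;
}
```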

Address translation with page tables

Page Table Structure and Address Translation Process

  • Page tables store mapping between virtual page numbers (VPNs) and physical page frame numbers (PFNs)
  • A virtual address is typically divided into a virtual page number (VPN) and a page offset
  • The VPN is used as an index into the page table to retrieve the corresponding physical page frame number (PFN)
  • The page offset is combined with the PFN to form the complete physical address
    • Example: 32-bit virtual address with 4KB pages
      • Upper 20 bits form VPN, lower 12 bits form offset
      • Page table entry contains 20-bit PFN
      • Final physical address combines 20-bit PFN with 12-bit offset
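
That 32-bit example maps directly onto a few bit operations. Here's a minimal sketch with a toy single-level page table; the table size and the sample VPN-to-PFN mapping are made up for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* 32-bit virtual addresses with 4 KB pages: 20-bit VPN, 12-bit offset. */
#define PAGE_SHIFT  12
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)     /* 0xFFF */

/* Toy single-level page table covering only the first 16 virtual pages. */
static uint32_t page_table[16];

static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* upper 20 bits           */
    uint32_t offset = vaddr & OFFSET_MASK;       /* lower 12 bits           */
    uint32_t pfn    = page_table[vpn];           /* page table lookup       */
    return (pfn << PAGE_SHIFT) | offset;         /* PFN + offset = physical */
}

int main(void) {
    page_table[3] = 0x00ABC;                     /* VPN 3 -> PFN 0xABC      */
    uint32_t va = (3u << PAGE_SHIFT) | 0x123;    /* offset 0x123 in page 3  */
    printf("virtual 0x%08X -> physical 0x%08X\n", va, translate(va));
    return 0;
}
```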

Translation Lookaside Buffers and Page Table Entries

  • Translation Lookaside Buffers (TLBs) cache recent address translations improving performance
    • Hardware cache storing recently used page table entries
    • Reduces number of memory accesses required for address translation
  • Page table entries often include additional metadata such as valid bits, dirty bits, and access rights
    • Valid bit indicates if page is currently in memory
    • Dirty bit shows if the page has been modified since it was loaded
    • Access rights specify read, write, execute permissions for the page
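
A page table entry and a tiny TLB might look something like the sketch below. The bit-field widths and the round-robin replacement policy are illustrative choices, not a description of any particular CPU.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One page table entry carrying the metadata bits described above. */
typedef struct {
    uint32_t pfn   : 20;   /* physical frame number             */
    uint32_t valid : 1;    /* page is resident in memory        */
    uint32_t dirty : 1;    /* page modified since it was loaded */
    uint32_t read  : 1;
    uint32_t write : 1;
    uint32_t exec  : 1;
} pte_t;

/* Tiny fully associative TLB caching recent VPN -> PTE translations. */
#define TLB_SIZE 4
typedef struct { uint32_t vpn; pte_t pte; bool used; } tlb_entry_t;
static tlb_entry_t tlb[TLB_SIZE];
static int next_slot;                 /* trivial round-robin replacement */

static bool tlb_lookup(uint32_t vpn, pte_t *out) {
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].used && tlb[i].vpn == vpn) { *out = tlb[i].pte; return true; }
    return false;                     /* TLB miss: walk the page table */
}

static void tlb_insert(uint32_t vpn, pte_t pte) {
    tlb[next_slot] = (tlb_entry_t){ .vpn = vpn, .pte = pte, .used = true };
    next_slot = (next_slot + 1) % TLB_SIZE;
}

int main(void) {
    pte_t pte = { .pfn = 0xABC, .valid = 1, .read = 1, .write = 1 };
    pte_t hit;
    printf("lookup VPN 3: %s\n", tlb_lookup(3, &hit) ? "hit" : "miss");
    tlb_insert(3, pte);
    printf("lookup VPN 3: %s\n", tlb_lookup(3, &hit) ? "hit" : "miss");
    return 0;
}
```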

Performance of page size vs page table structure

Page Size Considerations

  • Larger page sizes reduce the number of page table entries, decreasing memory overhead and TLB misses (see the numbers after this list)
    • Example: 4KB pages vs 2MB pages in x86-64 systems
    • Fewer TLB entries needed to cover same amount of memory
  • Smaller page sizes provide finer-grained memory allocation and reduce internal fragmentation
    • Example: 4KB pages waste less space for small allocations compared to 2MB pages
  • Page size affects granularity of data transfer between main memory and secondary storage during paging operations
    • Larger pages may lead to unnecessary data transfer
    • Smaller pages increase number of I/O operations
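
Some quick back-of-the-envelope numbers make the trade-off concrete; the 1 GiB region size here is an arbitrary choice for illustration.

```c
#include <stdio.h>

int main(void) {
    long long region = 1LL << 30;             /* map a 1 GiB region */
    long long small  = 4LL * 1024;            /* 4 KB pages         */
    long long large  = 2LL * 1024 * 1024;     /* 2 MB pages         */

    /* Page table entries needed (also how far each TLB entry "reaches"). */
    printf("4 KB pages: %lld entries\n", region / small);   /* 262144 */
    printf("2 MB pages: %lld entries\n", region / large);   /* 512    */

    /* Internal fragmentation averages about half a page per allocation. */
    printf("avg waste per allocation: %lld B vs %lld B\n",
           small / 2, large / 2);
    return 0;
}
```

Covering 1 GiB takes 262,144 entries with 4 KB pages but only 512 with 2 MB pages, while the expected waste per allocation grows from about 2 KB to about 1 MB.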

Page Table Structures and Their Impact

  • Multi-level page tables reduce memory consumption for sparse address spaces but increase the number of memory accesses per translation (see the sketch after this list)
    • Example: x86-64 uses 4-level page tables
    • Allows efficient representation of large, sparsely used address spaces
  • Inverted page tables save memory in systems with large virtual address spaces but may increase lookup time
    • Hash table-like structure indexed by physical frame number
    • Requires search operation to find matching virtual address
  • Choice of page table structure impacts time and space complexity of address translation operations
    • Tree-based structures (multi-level) vs hash-based structures (inverted)
  • Hardware support such as dedicated MMU circuits significantly improves address translation performance
    • Example: TLB implemented in hardware for fast translation
    • Example: Page walk accelerators in modern CPUs
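
Here's a minimal sketch of a two-level page table walk. The 10/10/12 address split mirrors the classic 32-bit layout, but the lazily allocated tables and helper functions are a toy model, not an OS implementation; allocating second-level tables only for regions that are actually mapped is exactly why multi-level tables stay small for sparse address spaces.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy two-level page table for 32-bit addresses with 4 KB pages:
   10-bit top-level index, 10-bit second-level index, 12-bit offset. */
#define L1_ENTRIES 1024
#define L2_ENTRIES 1024
#define PAGE_SHIFT 12

static uint32_t *l1_table[L1_ENTRIES];   /* NULL = no second-level table */

static void map_page(uint32_t vaddr, uint32_t pfn) {
    uint32_t i1 = vaddr >> 22;                    /* top 10 bits     */
    uint32_t i2 = (vaddr >> PAGE_SHIFT) & 0x3FF;  /* next 10 bits    */
    if (!l1_table[i1])                            /* allocate lazily */
        l1_table[i1] = calloc(L2_ENTRIES, sizeof(uint32_t));
    l1_table[i1][i2] = pfn;
}

static int translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t i1 = vaddr >> 22;
    uint32_t i2 = (vaddr >> PAGE_SHIFT) & 0x3FF;
    if (!l1_table[i1]) return -1;                 /* unmapped region;
                                                     a real PTE would also
                                                     carry a valid bit */
    uint32_t pfn = l1_table[i1][i2];
    *paddr = (pfn << PAGE_SHIFT) | (vaddr & 0xFFF);
    return 0;
}

int main(void) {
    map_page(0x40001000, 0x00ABC);                /* one page mapped */
    uint32_t pa;
    if (translate(0x40001123, &pa) == 0)
        printf("0x40001123 -> 0x%08X\n", pa);     /* 0x00ABC123      */
    return 0;
}
```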