Simply Explained

Virtual Memory

  • A process is a running instance of a program. Each process gets its own view of memory, sometimes called virtual memory; the set of addresses it can use is known as the process address space.
  • Since each process has an isolated, distinct address space, a crash of one process will not bring down other processes.
  • When virtual memory gives the processes more space than the available physical memory (RAM), the operating system uses the hard disk for the extra space. If a process rarely accesses the data that the operating system has moved to disk, everything is fine. If it accesses that data frequently, the constant swapping slows all processes and programs down, which is known as thrashing.
  • The operating system has a data structure that keeps track of which virtual memory areas are in use.
  • Before using a part of its address space, a process must inform the operating system by calling the function mmap (which maps a VM area); see the mmap/munmap sketch after this list.
  • The process may also inform the operating system that it no longer needs a certain portion of the address space by calling munmap.
  • When a process is created, certain areas are mapped automatically, e.g. the call stack, heap, and code. These areas can grow or shrink.
  • The data structure that keeps track of the virtual memory areas in use is essentially a list of intervals. An address is in use if it lies inside one of these intervals (see the interval sketch after this list).
  • Virtual memory and physical memory are divided into chunks of equal size. Each chunk of physical memory is called a frame. Each chunk of virtual memory is called a page.
  • Pages and frames have the same size, usually 4 KB or 8 KB.
  • For each page, the operating system records which frame (if any) currently holds it. The resulting data structure, which maps pages to frames, is called a page table (see the translation sketch after this list).
  • Translation from a virtual address into a physical address needs to be done every time a process accesses memory. The translation is therefore performance critical, which is why most systems have hardware support for it, called the MMU (Memory Management Unit).
  • The MMU usually contains a piece of hardware that caches page table entries and is called a TLB (Translation Lookaside Buffer).
  • The TLB is a content-addressable memory: it tracks a set of mappings and can look them up quickly, yet it does not use an array; the hardware is designed in a completely different way (see the TLB sketch after this list).
  • When the straightforward translation fails, the processor passes control to a page fault handler, which is the part of the operating system that should fix the problem.
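
To make the mmap/munmap bullets above concrete, here is a minimal sketch in C, assuming a Linux-like system where MAP_ANONYMOUS is available. It asks the operating system to map one anonymous 4 KB area, uses it, and then unmaps it.

```c
/* Minimal sketch: map one anonymous page, use it, unmap it.
 * Assumes a Linux-like system where MAP_ANONYMOUS is available. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void) {
    size_t length = 4096;  /* one typical 4 KB page */

    /* Ask the OS to map a new, private, zero-filled area into our
     * address space (not backed by any file). */
    void *area = mmap(NULL, length, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (area == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    /* The area is now part of our address space and safe to use. */
    ((char *)area)[0] = 'x';
    printf("mapped one page at %p\n", area);

    /* Tell the OS we no longer need this part of the address space. */
    if (munmap(area, length) != 0) {
        perror("munmap");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```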
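The "list of intervals" idea can also be sketched in a few lines of C. The areas and addresses below are made up for illustration; a real kernel uses a more elaborate structure (for example a balanced tree), but the question it answers is the same: does this address fall inside a mapped area?

```c
/* Toy sketch of the "list of intervals" bookkeeping: each mapped VM area
 * is a [start, end) interval, and an address is in use if it falls
 * inside one of them. Areas and addresses are made up for illustration. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uintptr_t start;  /* first address of the area        */
    uintptr_t end;    /* one past the last address in it  */
} vm_area;

static bool address_in_use(const vm_area *areas, size_t n, uintptr_t addr) {
    for (size_t i = 0; i < n; i++)
        if (addr >= areas[i].start && addr < areas[i].end)
            return true;
    return false;
}

int main(void) {
    /* Two made-up areas, say "code" and "stack". */
    vm_area areas[] = {
        { 0x00400000, 0x00401000 },
        { 0x7ffff000, 0x80000000 },
    };
    printf("%d\n", address_in_use(areas, 2, 0x00400abc));  /* 1: inside "code" */
    printf("%d\n", address_in_use(areas, 2, 0x12345678));  /* 0: unmapped      */
    return 0;
}
```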
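The page/frame split and the page table can be illustrated with a toy flat table in C. The 4 KB page size matches the bullet above, but the table size and the page-to-frame mappings are made up; real page tables are multi-level and maintained by the kernel.

```c
/* Toy flat page table illustrating the page-number/offset split.
 * Page size follows the text (4 KB); the mappings are made up. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  4096u            /* 4 KB pages and frames            */
#define PAGE_SHIFT 12               /* log2(PAGE_SIZE)                  */
#define NUM_PAGES  16               /* tiny address space: 16 pages     */
#define NO_FRAME   UINT32_MAX       /* marker for "page not mapped"     */

static uint32_t page_table[NUM_PAGES];   /* page_table[page] = frame    */

/* Translate a virtual address to a physical one; returns 0 on success. */
static int translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t page   = vaddr >> PAGE_SHIFT;       /* which page?         */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* where inside it?    */

    if (page >= NUM_PAGES || page_table[page] == NO_FRAME)
        return -1;                               /* would cause a page fault */

    *paddr = (page_table[page] << PAGE_SHIFT) | offset;
    return 0;
}

int main(void) {
    /* Mark every page unmapped, then map a few pages to made-up frames. */
    for (uint32_t i = 0; i < NUM_PAGES; i++)
        page_table[i] = NO_FRAME;
    page_table[0] = 5;
    page_table[1] = 2;
    page_table[3] = 7;

    uint32_t paddr;
    if (translate(0x102A, &paddr) == 0)          /* page 1, offset 0x2A */
        printf("virtual 0x102A -> physical 0x%X\n", paddr);   /* 0x202A */
    return 0;
}
```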
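Finally, the TLB's role as a cache of page table entries can be sketched in software, with the caveat that a real TLB is content-addressable hardware that checks all of its entries in parallel rather than looping over them. The sketch below only shows the caching idea: check a few recently used page-to-frame mappings first and fall back to the full page table on a miss. All names and numbers are made up for illustration.

```c
/* Toy illustration of a TLB as a small cache of page->frame entries.
 * A real TLB is content-addressable hardware that compares all of its
 * entries in parallel; the loop below only mimics the caching behaviour. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define NUM_PAGES  16
#define TLB_SIZE   4
#define NO_FRAME   UINT32_MAX

typedef struct { uint32_t page, frame; int valid; } tlb_entry;

static tlb_entry tlb[TLB_SIZE];         /* the tiny "TLB" cache          */
static uint32_t  page_table[NUM_PAGES]; /* toy flat page table           */
static unsigned  next_slot;             /* round-robin replacement index */

static uint32_t lookup_frame(uint32_t page) {
    /* 1. TLB hit: translate without touching the page table at all. */
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;

    /* 2. TLB miss: consult the (toy) page table, then cache the result. */
    if (page >= NUM_PAGES || page_table[page] == NO_FRAME)
        return NO_FRAME;                /* unmapped: would cause a page fault */
    tlb[next_slot] = (tlb_entry){ page, page_table[page], 1 };
    next_slot = (next_slot + 1) % TLB_SIZE;
    return page_table[page];
}

int main(void) {
    for (int i = 0; i < NUM_PAGES; i++)
        page_table[i] = NO_FRAME;
    page_table[1] = 2;                  /* a single made-up mapping      */

    uint32_t vaddr = 0x102A;            /* page 1, offset 0x2A           */
    uint32_t frame = lookup_frame(vaddr >> PAGE_SHIFT);
    if (frame != NO_FRAME)              /* a second call would be a TLB hit */
        printf("physical address: 0x%X\n",
               (frame << PAGE_SHIFT) | (vaddr & 0xFFF));
    return 0;
}
```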
