git: vkernel: Restore MAP_VPAGETABLE support with COW/VPTE fix (2)

Matthew Dillon dillon at crater.dragonflybsd.org
Fri Dec 19 19:35:04 PST 2025


commit 8ec33efba7563efab10f0d90b5684480d94daf0a
Author: Matthew Dillon <dillon at apollo.backplane.com>
Date:   Fri Dec 19 19:26:10 2025 -0800

    vkernel: Restore MAP_VPAGETABLE support with COW/VPTE fix (2)
    
    * Retain the wiring of vkernel-related pages but remove the
      PG_VPTMAPPED flag and replace its functionality.  The main
      problem with PG_VPTMAPPED is that it could not distinguish
      between mappings that had to be wired and cached mappings
      that might still be present but are not wired.  The pages
      would remain wired until the underlying VM object was
      completely destroyed, which is undesirable.
    
    * Instead, we actually bump vm_page->wire_count.  This allows the
      related pages to become pageable again after the vkernel has exited
      (after all pmaps that wire them are gone), which means we aren't as
      dependent on the file fd/unlink sequence.
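The reference-counting idea can be illustrated with a minimal sketch (hypothetical names; the real structures live in sys/vm/vm_page.h and carry far more state):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for vm_page: only the wiring-related field. */
struct vm_page {
    int wire_count;     /* number of independent wirings holding the page */
};

/* Wire the page on behalf of one pmap; the page becomes unpageable. */
static void page_wire(struct vm_page *m) {
    ++m->wire_count;
}

/* Drop one wiring.  Once the count reaches zero the page is pageable
 * again -- no need to wait for the backing VM object to be destroyed. */
static void page_unwire(struct vm_page *m) {
    assert(m->wire_count > 0);
    --m->wire_count;
}

static bool page_is_pageable(const struct vm_page *m) {
    return m->wire_count == 0;
}
```

With a count instead of a flag, each pmap that wires the page contributes one reference, and the page returns to the pageable pool as soon as the last pmap (e.g. the exiting vkernel's) drops its wiring.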
    
    * Recode vm_page->wire_count and VM page wiring in general.  This
      impacts the whole kernel, not just vkernel support.  The rework
      adds the following features:
    
      (a) Map entries now support multiple wiring originations.
    
      (b) Page wiring is now tracked via the "W" bit in the PTE together
          with the pmap and related map_entry, and no longer requires
          pages to be pre-faulted.
    
      (c) We do not have to pre-fault the related pages (necessary for vkernel
          operation since much of the map will not be pre-faulted).  This may
          also be useful if/when we decide to change how mlock*() works in
          the future, because pre-faulting in a 64-bit address space can
          really be misused.
    
      (d) Removed vestiges of the page protection layering that the original
          BSD wiring code had imposed, which did not allow wired pages to be
          replaced in a memory map.  Now wired pages can be replaced in
          an mmap, which plays better with mlock*() functionality since
          we want programs to behave normally in the face of mlock*()
          use.

Summary of changes:
 sys/kern/kern_slaballoc.c              |   1 -
 sys/platform/pc64/x86_64/pmap.c        | 162 ++++++++++++++++++++++++---------
 sys/platform/vkernel64/platform/init.c |   6 +-
 sys/platform/vkernel64/platform/pmap.c |  84 +++++++++++++----
 sys/vm/pmap.h                          |   2 +-
 sys/vm/vm_contig.c                     |   5 +
 sys/vm/vm_fault.c                      |  56 ++++++++++--
 sys/vm/vm_map.c                        |  94 +++++++++++--------
 sys/vm/vm_map.h                        |   2 +
 sys/vm/vm_mmap.c                       |  13 +--
 sys/vm/vm_object.c                     |  12 ++-
 sys/vm/vm_page.c                       |   7 +-
 sys/vm/vm_page.h                       |   6 +-
 13 files changed, 316 insertions(+), 134 deletions(-)

http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/8ec33efba7563efab10f0d90b5684480d94daf0a


-- 
DragonFly BSD source repository


More information about the Commits mailing list