cvs commit: src/sys/sys src/sys/vm
dillon at crater.dragonflybsd.org
Mon Oct 25 21:33:52 PDT 2004
dillon 2004/10/25 21:33:11 PDT
DragonFly src repository
sys/vm vm_map.c vm_zone.c vm_zone.h
Fix bugs in the vm_map_entry reservation and zalloc code. This code is a bit
sticky because zalloc must be able to call kmem_alloc*() in order to extend
mapentzone to allocate a new chunk of vm_map_entry structures, and
kmem_alloc*() *needs* two vm_map_entry structures in order to map the new
data block into the kernel. To avoid a chicken-and-egg recursion there must
already be some vm_map_entry structures available for kmem_alloc*() to use.
To ensure that structures are available the vm_map_entry cache maintains
a 'reserve'. This cache is initially populated from the vm_map_entry structures
allocated via zbootinit() in vm_map.c. However, since this is a per-cpu
cache there are situations where the vm_map subsystem will be used on other
cpus before the cache can be populated on those cpus, but after the static
zbootinit structures have all been used up. To fix this we statically
allocate two vm_map_entry structures for each cpu which is sufficient for
zalloc to call kmem_alloc*() to allocate the remainder of the reserve.
Having a lot of preloaded modules seems to be able to trigger the bug.
Also get rid of gd_vme_kdeficit, which was a confusing way to track
kernel reservations. Now we just have gd_vme_avail; a negative count
indicates a deficit (the reserve is being dug into).
From-panic-reported-by: Adam K Kirchhoff <adamk at xxxxxxxxxxxx>
Revision Changes Path
1.33 +1 -1 src/sys/sys/globaldata.h
1.35 +40 -26 src/sys/vm/vm_map.c
1.17 +36 -9 src/sys/vm/vm_zone.c
1.6 +1 -0 src/sys/vm/vm_zone.h