cvs commit: src/sys/conf files options src/sys/i386/i386 mp_machdep.c src/sys/i386/include types.h src/sys/kern kern_slaballoc.c imgact_aout.c imgact_elf.c kern_malloc.c sys_process.c vfs_bio.c src/sys/sys slaballoc.h globaldata.h ...

Matthew Dillon dillon at
Tue Aug 26 18:43:36 PDT 2003

dillon      2003/08/26 18:43:08 PDT

  Modified files:
    sys/conf             files options 
    sys/i386/i386        mp_machdep.c 
    sys/i386/include     types.h 
    sys/kern             imgact_aout.c imgact_elf.c kern_malloc.c 
                         sys_process.c vfs_bio.c 
    sys/sys              globaldata.h malloc.h 
    sys/vfs/procfs       procfs_mem.c 
    sys/vm               vm_extern.h vm_fault.c vm_init.c 
                         vm_kern.c vm_map.c vm_map.h vm_page.c 
                         vm_zone.c vm_zone.h 
  Added files:
    sys/kern             kern_slaballoc.c 
    sys/sys              slaballoc.h 
  SLAB ALLOCATOR Stage 1.  This brings in a slab allocator written from scratch
  by yours truly.  A detailed explanation of the allocator is included but
  first, other changes:
  * Instead of having vm_map_entry_insert*() and friends allocate the
    vm_map_entry structures a new mechanism has been emplaced where by
    the vm_map_entry structures are reserved at a higher level, then
    expected to exist in the free pool in deep vm_map code.  This preliminary
    implementation may eventually turn into something more sophisticated that
    includes things like pmap entries and so forth.  The idea is to convert
    what should be low level routines (VM object and map manipulation)
    back into low level routines.
  * vm_map_entry structures are now cached per-cpu, which is integrated into
    the reservation model above.
  * The zalloc 'kmapentzone' has been removed.  We now only have 'mapentzone'.
  * There were race conditions between vm_map_findspace() and actually
    entering the map_entry with vm_map_insert().  These have been closed
    through the vm_map_entry reservation model described above.
  * Two new kernel config options now work.  NO_KMEM_MAP has been fleshed out
    a bit more, and a number of deadlocks related to having only the kernel_map
    have now been fixed.  The USE_SLAB_ALLOCATOR option will cause the kernel
    to compile-in the slab allocator instead of the original malloc allocator.
    If you specify USE_SLAB_ALLOCATOR you must also specify NO_KMEM_MAP.
  * vm_poff_t and vm_paddr_t integer types have been added.  These are meant
    to represent physical addresses and offsets (physical memory might be
    larger than virtual memory, for example Intel PAE).  They are not heavily
    used yet but the intention is to separate physical representation from
    virtual representation.
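  In a kernel configuration file the two new options would be enabled
  together, since USE_SLAB_ALLOCATOR requires NO_KMEM_MAP:

```
options NO_KMEM_MAP
options USE_SLAB_ALLOCATOR      # requires NO_KMEM_MAP
```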
  The slab allocator breaks allocations up into approximately 80 zones based
  on their size.  Each zone has a chunk size (alignment).  For example, all
  allocations in the 1-8 byte range will allocate in chunks of 8 bytes.  Each
  size zone is backed by one or more blocks of memory.  The size of these
  blocks is fixed at ZoneSize, which is calculated at boot time to be between
  32K and 128K.  The use of a fixed block size allows us to locate the zone
  header given a memory pointer with a simple masking operation.
  The slab allocator operates on a per-cpu basis.  The cpu that allocates a
  zone block owns it.  free() checks the cpu that owns the zone holding the
  memory pointer being freed and forwards the request to the appropriate cpu
  through an asynchronous IPI.  This request is not currently optimized but it
  can theoretically be heavily optimized ('queued') to the point where the
  overhead becomes inconsequential.  As of this commit the malloc_type
  information is not MP safe, but the core slab allocation and deallocation
  algorithms, excluding the allocation of the backing block itself, *ARE*
  MP safe.  The core code requires no mutexes or locks, only a critical
  section.
  Each zone contains N allocations of a fixed chunk size.  For example, a
  128K zone can hold approximately 16000 8-byte allocations.  The zone
  is initially zero'd and new allocations are simply allocated linearly out
  of the zone.  When a chunk is freed it is entered into a linked list and
  the next allocation request will reuse it.  The slab allocator heavily
  optimizes M_ZERO operations at both the page level and the chunk level.
  The slab allocator maintains various undocumented malloc quirks such as
  ensuring that small power-of-2 allocations are aligned to their size,
  and malloc(0) requests are also allowed and return a non-NULL result.
  kern_tty.c depends heavily on the power-of-2 alignment feature and ahc
  depends on the malloc(0) feature.  Eventually we may remove the malloc(0)
  feature.
  NOTE!  This commit may destabilize the kernel a bit.  There are issues
  with the ISA DMA area ('bounce' buffer allocation) due to the large backing
  block size used by the slab allocator and there are probably some deadlock
  issues due to the removal of kmem_map that have not yet been resolved.
  Revision  Changes    Path
  1.11      +1 -0      src/sys/conf/files
  1.5       +1 -0      src/sys/conf/options
  1.16      +12 -0     src/sys/i386/i386/mp_machdep.c
  1.5       +7 -5      src/sys/i386/include/types.h
  1.6       +10 -5     src/sys/kern/imgact_aout.c
  1.8       +8 -2      src/sys/kern/imgact_elf.c
  1.12      +6 -2      src/sys/kern/kern_malloc.c
  1.12      +2 -2      src/sys/kern/sys_process.c
  1.14      +12 -2     src/sys/kern/vfs_bio.c
  1.16      +17 -0     src/sys/sys/globaldata.h
  1.7       +16 -1     src/sys/sys/malloc.h
  1.6       +2 -2      src/sys/vfs/procfs/procfs_mem.c
  1.5       +1 -0      src/sys/vm/vm_extern.h
  1.7       +1 -1      src/sys/vm/vm_fault.c
  1.4       +1 -0      src/sys/vm/vm_init.c
  1.8       +54 -23    src/sys/vm/vm_kern.c
  1.11      +333 -163  src/sys/vm/vm_map.c
  1.7       +21 -7     src/sys/vm/vm_map.h
  1.9       +6 -1      src/sys/vm/vm_page.c
  1.9       +10 -1     src/sys/vm/vm_zone.c
  1.5       +1 -0      src/sys/vm/vm_zone.h
