bus_dmamem_alloc confusion

Chuck Tuffli chuck_tuffli at agilent.com
Thu Oct 21 20:45:36 PDT 2004

On Sun, Oct 17, 2004 at 12:10:56PM -0700, Matthew Dillon wrote:
> :This is using the normal kernel slab allocator. The only thing specially
> :handled in bus_dmamem_alloc is that it avoids page crossings.
> :
> :> 
> :> and for an 8096 byte allocation, the addresses come back
> :> 
> :> Mem:  k=0xcc6b4000 b=0x14000 a=0x2000
> :
> :This is using contigmalloc, like all allocations over page size do.
> :
> :How do you try to mmap the page?
> :
> :Joerg
>     Well, presumably via /dev/mem when given the physical address. [...]


	fdM = open("/dev/mem", O_RDWR);
	virt = mmap(0, mem.length, PROT_READ | PROT_WRITE, MAP_SHARED,
	    fdM, mem.baddr);

where mem.baddr is the physical address passed back by the kernel.

[...]
>     But 0x14000 as an address is completely wrong.  If you want to put
>     the code up somewhere for me to look at I'll be happy to take a look.
>     contigmalloc() should work fine so my guess is that the issue is in how
>     you are converting the pages from virtual to physical addresses.
> 					-Matt

Thanks! I put a bzip2 tar at http://www.tuffli.net/pmem_prob.tbz that
includes the driver (pdrv.[ch]) and its Makefile as well as a user
space program that exercises it (ptest.c). None of the code relies on
any particular hardware except for some unclaimed PCI device.

I traced through some of the vm code and see where the phys_avail[]
regions get converted into the vm_page_array (vm_page_startup()) which
is what contigmalloc ultimately uses. Nothing looked obviously weird,
but then again, I'm not totally sure what I'm looking for. One thing I
didn't understand was the two regions (at least on i386) described by
phys_avail[] (0x1000-0x9efff for the first and 0x4ca000-0x7eb7fff for
the second). The system this is on has 64MB of memory (0x9f000?), but
I don't understand what the second region might represent.

vm_contig_pg_alloc() ends up making a match (in this case) to
phys_addr = 0x16000 with the vm_page_t looking like

(gdb) p i
$53 = 21
(gdb) p/x pga[i]
$54 = {pageq = {tqe_next = 0xc085b540, tqe_prev = 0xc085c540},
  hnext = 0x0, listq = {tqe_next = 0x0, tqe_prev = 0x0}, object = 0x0,
  pindex = 0x0, phys_addr = 0x16000, md = {pv_list_count = 0x0,
  pv_list = {tqh_first = 0x0, tqh_last = 0xc085a568}}, queue = 0x17,
  flags = 0x0, pc = 0x16, wire_count = 0x0, hold_count = 0x0,
  act_count = 0x0, busy = 0x0, valid = 0x0, dirty = 0x0}
(gdb) where
#0  vm_contig_pg_alloc (size=8192, low=0, high=4294967295,
    alignment=8192, boundary=0) at
#1  0xc027e3e1 in contigmalloc_map (size=8192, type=0xc0372b20,
    flags=4609, low=0, high=4294967295, alignment=8192, boundary=0,
    map=0xc03b40b4) at
#2  0xc027e3b9 in contigmalloc (size=8192, type=0xc0372b20,
    flags=4609, low=0, high=4294967295, alignment=8192, boundary=0) at
#3  0xc02d6855 in bus_dmamem_alloc (dmat=0xcc597cc0, vaddr=0xc0ca4260, 
    flags=1, mapp=0xc0ca426c) at
[...]

Let me know if there is anything else I can dump that would be useful.

Chuck Tuffli
Agilent Technologies
