Blogbench RAID benchmarks

Matthew Dillon dillon at apollo.backplane.com
Mon Jul 18 17:43:50 PDT 2011


:I've done a new set of tests on FreeBSD 8.2 and DragonFly 2.10.
:kern.physmem was set to 4GB and I used 150 iterations just to be sure.
:
:Someone sent me instructions on how to enable the AHCI driver on FreeBSD, so
:we will get a more meaningful comparison for the single SATA drive case.
:
:This time, DragonFly wasn't CPU bound during the tests, but the results
:haven't changed much.
:
:FreeBSD's write performance scales with the disk subsystem, while its read
:performance goes in the opposite direction.
:
:DragonFly doesn't scale; there's a bit of variation for reads but the write
:curve is perfectly flat.
:
:
:On DragonFly, most of the blogbench threads were in 'hmrrcm' or 'vnode'
:states.
:I also got a few kernel messages like these:

    This is looking more real.  DragonFly is clearly prioritizing reads over
    writes, which prevents writes from flushing quickly, while FreeBSD is
    prioritizing writes over reads, which prevents reads from getting
    through.

    In the post-cache-blowout portion of the test (past blog ~600 or so),
    FreeBSD's read activity drops into the ~800 range while DragonFly's
    read activity stabilizes in the ~25000 range.  At the same time FreeBSD
    maintains a high write rate in the ~5000 range while DragonFly's write
    rate drops to extremely low levels (the ~30 range).  However, the data
    set sizes will wind up very different because of DragonFly's low write
    numbers, so even the read numbers can't be fairly compared.
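    To put a rough number on that disparity (the per-iteration rates are
    eyeballed from the graphs, not exact blogbench output), a quick
    back-of-the-envelope sketch:

        # Approximate post-blowout write rates read off the graphs.
        fbsd_write_rate = 5000.0    # FreeBSD, arbitrary blogbench units
        dfly_write_rate = 30.0      # DragonFly

        ratio = fbsd_write_rate / dfly_write_rate
        print("FreeBSD's data set grows roughly %.0fx faster" % ratio)
        # -> FreeBSD's data set grows roughly 167x faster

    By the end of the run, then, FreeBSD is reading back a vastly larger
    file set, which is why the raw read numbers can't be compared
    directly.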

    What we are seeing here is a case where both operating systems are
    producing very unbalanced results.  The 'final score' for blogbench
    is extremely misleading.  Neither result is desirable.  In FreeBSD's
    case the read performance drops way too low, and in DragonFly's case
    the write performance drops way too low.
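    As a toy illustration of why a single combined number misleads here
    (this is not blogbench's actual scoring formula, just the approximate
    post-blowout rates from above run through two common ways of
    averaging):

        import math

        # (read rate, write rate), approximate post-blowout values
        systems = {
            "FreeBSD":   (800.0, 5000.0),
            "DragonFly": (25000.0, 30.0),
        }
        for name, (r, w) in systems.items():
            arith = (r + w) / 2.0
            geom = math.sqrt(r * w)
            print("%-10s arith=%8.0f geom=%8.0f" % (name, arith, geom))
        # FreeBSD    arith=    2900 geom=    2000
        # DragonFly  arith=   12515 geom=     866

    A hypothetical balanced system doing 5000/5000 would score 5000 on
    both means; the geometric mean correctly ranks it ahead of both
    lopsided results, while the arithmetic mean would rank DragonFly's
    25000/30 split highest.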

    How is the RAID volume formatted?  DragonFly should be showing improved
    read performance going to a 4-disk RAID.  The stripe size needs to be
    fairly large, ~1MB or so.
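    A minimal sketch of why the stripe size matters here (generic RAID0
    striping arithmetic; real implementations differ in the details):

        # Which disks a read touches in a simple 4-disk stripe.
        def disks_touched(offset, length, stripe_size, ndisks=4):
            first = offset // stripe_size
            last = (offset + length - 1) // stripe_size
            return sorted(set(s % ndisks for s in range(first, last + 1)))

        # A 64KB read with a 1MB stripe stays on one disk, so four
        # concurrent readers can keep four spindles independently busy:
        print(disks_touched(0, 65536, 1 << 20))   # [0]

        # The same read with a 4KB stripe drags all four disks into
        # every operation, and the spindles seek in lockstep:
        print(disks_touched(0, 65536, 4096))      # [0, 1, 2, 3]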

    I will experiment a bit with one- and two-disk stripes to see if I can
    figure out what is going on with the write activity.  I've never seen
    writes that low in prior tests (though the reads look reasonable to me).

						-Matt
