HAMMER Update 16-June-2008 - HEADS UP: MEDIA CHANGED AGAIN (2 of 4)
Matthew Dillon
dillon at apollo.backplane.com
Tue Jun 17 10:06:15 PDT 2008
:On Tue, Jun 17, 2008 at 01:53:11AM -0700, Matthew Dillon wrote:
:> I believe that going to the larger blocksize will significantly improve
:> performance, as 64K and 128K single-record writes will cut the B-Tree
:> overhead down by 200-800% versus the 16K single-record writes HAMMER
:> does now. Coupled with some work on the low-level allocator, this should
:> also greatly improve random read performance in the blogbench test.
:
: Hello Matthew,
:
: If there is any additional feature you would like to have in Blogbench
:to stress other areas of HAMMER or to simulate different real-life cases,
:just let me know.
:
: I'm planning to make a new release in a few days with minor changes,
:so it would be a good opportunity to add new knobs and features that
:might be useful for filesystem development.
:
: Best regards,
:
:--
:Frank Denis - j [at] pureftpd.org
There are several features I would definitely like to see in blogbench.
First, I would like a way to limit the amount of reading or writing
going on... a knob to 'cap' the I/O rate for individual columns in
order to test the performance of the other columns.
Here's an example. I run blogbench on UFS and I run it on HAMMER, and
I want to compare the read performance of the two. The problem is
that the write rate on UFS is much lower than the write rate on
HAMMER. The higher write rate on HAMMER has a huge impact on the
read performance going on at the same time, so I am unable to compare
read performance directly. I'd like to cap the write rate on the
HAMMER test to match UFS, so I can then compare the read performance
between the two.
Likewise I would like to put a cap on read performance in order to be
able to compare relative write performance.
I found myself trying to reduce the number of writer threads to get
HAMMER's write rate down closer to what UFS could do, in order to
compare the read performance. That doesn't work very well, though.
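To make the cap request concrete, here is a rough sketch of what it
might look like inside a benchmark worker loop.  Everything in it is
hypothetical... the option names and the rate_gate() helper are mine,
nothing like them exists in blogbench today.  Each worker thread keeps
its own gate and calls rate_gate() before every operation:

    /*
     * Minimal sketch of a per-thread I/O rate cap, assuming
     * hypothetical --read-cap / --write-cap options given in
     * operations per second.
     */
    #include <time.h>

    struct rate_gate {
        double          ops_per_sec;    /* 0 means uncapped */
        struct timespec next;           /* earliest start of next op */
    };

    static void
    rate_gate(struct rate_gate *rg)
    {
        struct timespec now, delta;
        double period;

        if (rg->ops_per_sec <= 0.0)
            return;
        clock_gettime(CLOCK_MONOTONIC, &now);

        delta.tv_sec = rg->next.tv_sec - now.tv_sec;
        delta.tv_nsec = rg->next.tv_nsec - now.tv_nsec;
        if (delta.tv_nsec < 0) {
            delta.tv_nsec += 1000000000L;
            delta.tv_sec--;
        }
        if (delta.tv_sec >= 0)
            nanosleep(&delta, NULL);    /* ahead of schedule: wait */
        else
            rg->next = now;             /* behind schedule: no sleep */

        /* Schedule the next operation one period later. */
        period = 1.0 / rg->ops_per_sec;
        rg->next.tv_sec += (time_t)period;
        rg->next.tv_nsec += (long)((period - (time_t)period) * 1e9);
        if (rg->next.tv_nsec >= 1000000000L) {
            rg->next.tv_nsec -= 1000000000L;
            rg->next.tv_sec++;
        }
    }

With a per-thread gate the aggregate cap scales with the number of
threads; to cap a whole column regardless of the thread count, the
threads would have to share one gate behind a mutex.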
--
A second feature that would be cool would be a way to specify the
maximum number of blogs back that the random reader/re-writer threads
access. For example, if you said --maxhistory=250 then blogbench
would still build an ever-increasing data set, but once it got past
blog 250 it would not access more than 250 blogs back as the test
continues and the data set grows. So if it were on blog 600 it would
not go any further back than blog 350.
The idea here is that on a real blog server people do not access
older blogs at the same rate that they access more recent blogs, but
at the same time the 'database' is ever-growing... nothing is thrown
away.
For my purposes I would be able to use it to 'settle' blogbench at a
particular cache footprint size in order to test edge cases where
blogbench is just starting to blow out the system caches. It would
still create an ever-increasing data set; it just wouldn't access the
whole thing as the test progresses.
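The selection logic I have in mind is something like the sketch below.
The --maxhistory option is the one proposed above, but the pick_blog()
helper and its calling convention are hypothetical, purely to
illustrate the windowing:

    /*
     * Pick a blog to read/re-write, staying within the most recent
     * 'maxhistory' blogs.  With maxhistory=250 and 600 blogs created
     * this returns only blogs in the 350-599 range (0-indexed).
     * Assumes at least one blog has been created.
     */
    #include <stdlib.h>

    static int
    pick_blog(int blogs_created, int maxhistory)
    {
        int window = blogs_created;
        int base = 0;

        if (maxhistory > 0 && blogs_created > maxhistory) {
            window = maxhistory;
            base = blogs_created - maxhistory;
        }
        return base + random() % window;  /* uniform over the window */
    }

A uniform distribution over the window is the simplest thing that
matches the intent; the hard cutoff is what lets the cache footprint
settle even though the data set keeps growing.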
-Matt
Matthew Dillon
<dillon at backplane.com>