Blogbench RAID benchmarks
Freddie Cash
fjwcash at gmail.com
Thu Jul 21 15:57:45 PDT 2011
On Thu, Jul 21, 2011 at 3:35 PM, Freddie Cash <fjwcash at gmail.com> wrote:
On Mon, Jul 18, 2011 at 7:06 PM, Matthew Dillon <dillon at apollo.backplane.com> wrote:
  Ok, well this is interesting.  Basically it comes down to whether we
  want to starve read operations or whether we want to starve write
  operations.
  The FreeBSD results starve read operations, while the DragonFly results
  starve write operations.  That's the entirety of the difference between
  the two tests.
Would using the disk schedulers in FreeBSD/DragonFly help with this at all? FreeBSD includes a geom_sched class for enabling pluggable disk schedulers (currently only a round-robin algorithm is implemented). http://info.iet.unipi.it/~luigi/geom_sched/
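For reference, here's a rough sketch of how one would enable it, going by the README on that page (the module and provider names are taken from the example there; I haven't re-tested this myself):

  # load the framework plus the round-robin algorithm module
  kldload geom_sched gsched_rr
  # transparently insert the scheduler into the disk's geom chain
  geom sched insert ad0

The insert verb slips the scheduler underneath the existing consumers, so no filesystem reconfiguration should be needed.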
Page 39 of the GEOM_SCHED presentation shows the following, indicating that it should make a big difference in the blogbench results (note the second result, with one greedy reader and one greedy writer):
Some preliminary results on the scheduler's performance in some easy cases (the focus here is on the framework). Measurement is using multiple dd instances on a filesystem, all speeds in MiB/s.

  two greedy readers, throughput improvement:
    NORMAL: 6.8 + 6.8; GSCHED RR: 27.0 + 27.0
  one greedy reader, one greedy writer, capture effect:
    NORMAL: R: 0.234, W: 72.3; GSCHED RR: R: 12.0, W: 40.0
  multiple greedy writers, only small loss of throughput:
    NORMAL: 16 + 16; RR: 15.5 + 15.5
  one sequential reader, one random reader (fio):
    NORMAL: Seq: 4.2, Rand: 4.2; RR: Seq: 30, Rand: 4.4
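(The slides don't give the exact invocation, but the greedy-reader workload is presumably just concurrent sequential reads of large files, something like:

  dd if=/path/to/bigfile1 of=/dev/null bs=1m &
  dd if=/path/to/bigfile2 of=/dev/null bs=1m &

with dd writing from /dev/zero for the greedy-writer cases.)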
And I believe DFly has dsched?
  This is all with swapcache turned off.  The only way to test in a
  fair manner with swapcache turned on (with a SSD) is if the FreeBSD
  test used a similar setup w/ZFS.
ZFS includes its own I/O scheduler, so geom_sched wouldn't help in that case. It would be interesting to see a comparison of HAMMER+swapcache and ZFS+L2ARC, though.
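For anyone who wants to set that comparison up, the cache-device halves would look roughly like this (pool and device names here are made up; the swapcache sysctl is the one described in DragonFly's swapcache(8)):

  # FreeBSD: add the SSD as an L2ARC cache vdev on an existing pool
  zpool add tank cache ada2

  # DragonFly: put swap on the SSD, then enable swapcache reads
  swapon /dev/ad1s1b
  sysctl vm.swapcache.read_enable=1

Same blogbench parameters on both, obviously, or the numbers aren't comparable.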
--
Freddie Cash
fjwcash at gmail.com