git: hammer2 - multi-thread read-ahead XOPs
Matthew Dillon
dillon at crater.dragonflybsd.org
Wed Jun 8 23:06:02 PDT 2016
commit 640406cb75258a91ef56a5c1c1deaa0f2edb8b7a
Author: Matthew Dillon <dillon at apollo.backplane.com>
Date: Wed Jun 8 22:24:51 2016 -0700
hammer2 - multi-thread read-ahead XOPs
* Distribute asynchronous logical buffer read-ahead XOPs across multiple
worker threads. XOPs related to a particular inode are usually sent
to just one worker thread to reduce collision retries (see the sketch
after this item).
This works around the high messaging overhead(s) associated with the
current XOP architecture by spreading the pain around. And even though
the default check code is now xxhash, distributing the checks also helps
a great deal. The H2 chain topology actually parallelizes quite well for
read operations.
Streaming reads through the filesystem now run at over 1 GByte/sec (they
capped out at ~340 MByte/sec before). The effect on things like 'tar' is
not quite as pronounced, but small-file scan/read performance typically
improves a bit as well.
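A minimal sketch of the dispatch idea above, assuming the worker index is
derived from the inode number. The function and parameter names here are
illustrative, not the actual hammer2 code; the point is only that one
inode's XOPs stay on one queue while independent inodes fan out:

    #include <stdint.h>

    /*
     * Illustrative sketch only, not the actual hammer2 code: pick a
     * worker-thread index for a read-ahead XOP by hashing the inode
     * number.  XOPs for one inode then land on the same worker, which
     * keeps collision retries low, while independent inodes spread
     * across all workers.
     */
    static int
    xop_pick_worker(uint64_t inum, int nworkers)
    {
            /* any stable function of the inode number will do */
            return (int)((inum ^ (inum >> 16)) % (uint64_t)nworkers);
    }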
* This change is probably more SSD-friendly than HDD-friendly for streaming
reads due to out-of-order queueing of the I/O requests. I ran a quick
read test on a WD Black and it appeared to perform acceptably, so for
now I'm going to run with it. For now, the read-ahead scale can be
adjusted via the vfs.hammer2.cluster_enable sysctl to find a good value.
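Tuning it amounts to trying a few settings and measuring streaming-read
throughput, for example (the value 4 below is purely illustrative and not
a recommendation from this commit):

    sysctl vfs.hammer2.cluster_enable=4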
* Remove the 'get race' kprintfs. This case now occurs very often due
to distributed read-aheads.
* chain->flags must be modified with atomic ops; fix some cases I muffed
up in recent commits (see the sketch below).
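A kernel-side sketch of the atomic-flag pattern, assuming the DragonFly
atomic_set_int()/atomic_clear_int() primitives. The struct and flag name
are made up for illustration; only chain->flags itself comes from the
commit:

    #include <sys/types.h>
    #include <machine/atomic.h>

    /*
     * Sketch only: flag updates on a shared chain must use atomic ops
     * because several XOP worker threads can touch the same chain
     * concurrently.  CHAIN_EXAMPLE_FLAG and struct chain_sketch are
     * illustrative names, not real hammer2 identifiers.
     */
    #define CHAIN_EXAMPLE_FLAG      0x00000001

    struct chain_sketch {
            u_int   flags;          /* shared, updated by several threads */
    };

    static void
    chain_set_example_flag(struct chain_sketch *chain)
    {
            /* racy under concurrency: chain->flags |= CHAIN_EXAMPLE_FLAG; */
            atomic_set_int(&chain->flags, CHAIN_EXAMPLE_FLAG);
    }

    static void
    chain_clear_example_flag(struct chain_sketch *chain)
    {
            atomic_clear_int(&chain->flags, CHAIN_EXAMPLE_FLAG);
    }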
Summary of changes:
sys/vfs/hammer2/hammer2.h | 2 ++
sys/vfs/hammer2/hammer2_chain.c | 7 ++++++-
sys/vfs/hammer2/hammer2_strategy.c | 6 +++++-
sys/vfs/hammer2/hammer2_thread.c | 11 ++++++++---
4 files changed, 21 insertions(+), 5 deletions(-)
http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/640406cb75258a91ef56a5c1c1deaa0f2edb8b7a
--
DragonFly BSD source repository