HAMMER update 10-Feb-2008
Matthew Dillon
dillon at apollo.backplane.com
Sun Feb 10 19:30:10 PST 2008
:Matthew Dillon wrote:
:> Here's what is left:
:> * Structural locking. The B-Tree is fine-grained locked but the
:> locks for the blockmap are just a hack (one big lock).
:
:Have you decided how to implement multi-master replication yet?
:
:--
:Jason Smethers
My current plan is to use a quorum algorithm similar to the one I wrote
for the backplane database years ago. But there are really two major
(and very complex) pieces to the puzzle. Not only do we need a
quorum algorithm, but we need a distributed cache coherency algorithm
as well. With those two pieces, individual machines will be able
to proactively cache filesystem data and guarantee transactional
consistency across the cluster.
The quorum algorithm is fairly straightforward; nearly all of the
complexity is in dealing with broken connections, missing hosts,
and things of that ilk.
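To make the idea concrete, here is a minimal sketch (hypothetical code,
not anything in HAMMER) of the majority test at the heart of a quorum
commit: a write counts as durable once a strict majority of the
configured masters have acknowledged it, so a cluster of N masters can
tolerate the loss of up to (N-1)/2 of them.

/*
 * Hypothetical quorum check -- illustration only, not HAMMER code.
 * A transaction is committed once a strict majority of the
 * configured masters acknowledge it; anything less and the
 * coordinator must retry or abort.
 */
#include <stddef.h>

struct master {
	int	online;		/* connection to this master is up */
	int	acked;		/* master acknowledged this transaction */
};

static int
quorum_reached(const struct master *masters, size_t nmasters)
{
	size_t acks = 0;

	for (size_t i = 0; i < nmasters; ++i) {
		if (masters[i].online && masters[i].acked)
			++acks;
	}
	return (acks > nmasters / 2);	/* strict majority */
}

With five masters, for example, writes keep committing as long as any
three of them are reachable and in agreement.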
The caching algorithm is going to be a lot more complex. On top of
everything else it has to do, it will need timeouts with quorum-based
watchdogs for refresh in order to deal with hosts that drop out of
the cluster. It will also have to be range-based at multiple levels
in order to limit the memory footprint and work with super-large
filesystems.
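As a rough illustration of what a range-based lease could look like
(again purely hypothetical, not a committed design): each cached range
carries an expiry, a node may serve data out of its local cache only
while the lease is live, and the quorum-backed watchdog is what extends
the expiry for nodes still in good standing.

/*
 * Hypothetical range-based cache lease -- illustration only.  A node
 * may satisfy reads in [beg, end) for obj_id from its local cache
 * only while 'expires' lies in the future; the quorum-backed
 * watchdog extends the lease, and a host that drops out of the
 * cluster simply stops getting extensions and times out.
 */
#include <stdint.h>
#include <time.h>

struct cache_lease {
	uint64_t	obj_id;		/* object (e.g. inode) covered */
	uint64_t	beg;		/* start offset of cached range */
	uint64_t	end;		/* end offset (exclusive) */
	time_t		expires;	/* lease expiry */
};

static int
lease_covers(const struct cache_lease *l, uint64_t off, time_t now)
{
	return (off >= l->beg && off < l->end && now < l->expires);
}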
I don't even want to start thinking about it yet. What HAMMER will
provide to the system as a whole is the real-time mirroring aspects,
inode numbers and transaction ids which are forever unique (allowing
data to be passively cached indefinitely), historical access for
as-of transactions which will allow parallel transactions to occur
and detect collisions at commit time rather than during the transaction,
and a bunch of other features.
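That commit-time collision detection can be sketched as plain optimistic
concurrency (hypothetical code, not HAMMER's actual API): a transaction
runs against an as-of snapshot, remembers the transaction id it saw for
each object it read, and at commit time any object whose id has since
moved forward indicates a conflicting commit.

/*
 * Hypothetical sketch of commit-time collision detection built on
 * forever-unique transaction ids -- not HAMMER's actual API.  A
 * parallel transaction records the tid visible in its as-of view for
 * every object it touches; at commit, a newer tid on any of those
 * objects means another transaction committed first and this one
 * must retry.
 */
#include <stddef.h>
#include <stdint.h>

typedef uint64_t hammer_tid_t;		/* monotonically increasing tid */

struct read_entry {
	uint64_t	obj_id;		/* object read by the transaction */
	hammer_tid_t	tid_seen;	/* tid visible in the as-of view */
};

/* Returns 1 if the read set is still current, 0 on a collision. */
static int
commit_check(const struct read_entry *rset, size_t n,
	     hammer_tid_t (*current_tid)(uint64_t obj_id))
{
	for (size_t i = 0; i < n; ++i) {
		if (current_tid(rset[i].obj_id) != rset[i].tid_seen)
			return (0);	/* lost the race; retry */
	}
	return (1);
}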
-Matt
Matthew Dillon
<dillon at backplane.com>