From: Matthew Dillon <dillon at apollo.backplane.com>
Subject: Re: OT - was Hammer or ZFS based backup, encryption
Date: Wed, 25 Feb 2009 09:10:56 -0800 (PST)

    Generally speaking, the idea with HAMMER's snapshotting and mirroring
    is that everything is based on transaction-ids stored in the B-Tree.

    The mirroring functionality does not require snapshotting per se,
    because EVERY sync HAMMER does to the media (including the automatic
    filesystem syncs done by the kernel every 30-60 seconds) is effectively
    a snapshot.
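
    To make the transaction-id idea concrete, here is roughly what the
    as-of visibility test looks like.  This is a simplified sketch, not
    HAMMER's actual on-disk layout; the type and field names here are
    illustrative only:

        #include <stdint.h>

        typedef uint64_t hammer_tid_t;

        /*
         * Simplified B-Tree leaf record: every record is stamped with
         * the transaction id that created it and, once superseded,
         * the transaction id that deleted it.
         */
        struct btree_record {
                hammer_tid_t create_tid;    /* tid that wrote it */
                hammer_tid_t delete_tid;    /* 0 while still live */
        };

        /*
         * A record is visible "as of" tid if it was created at or
         * before tid and not yet deleted as of tid.  A snapshot is
         * just an as-of lookup with a frozen tid, which is why every
         * sync is effectively a snapshot.
         */
        static int
        record_visible(const struct btree_record *rec, hammer_tid_t tid)
        {
                return (rec->create_tid <= tid &&
                    (rec->delete_tid == 0 || rec->delete_tid > tid));
        }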

    There is a downside to the way HAMMER manages its historical data
    store, and it is unclear how much of a burden this will wind up
    being without some specific tests.  The downside is that the
    historical information is stored in the HAMMER B-Tree side-by-side
    with current information.

    If you make 50,000 modifications to the same offset within a file,
    for example, with an fsync() in between each one, and assuming you
    don't prune the filesystem, then you will have 50,000 records for
    that HAMMER data block in the B-Tree.  This can be optimized...
    HAMMER doesn't have to scan 50,000 B-Tree elements; it can seek to
    the last (most current) one when it traverses the tree.  I may not
    be doing that yet, but there is no data structure limitation that
    would prevent it.  Even with the optimization there will certainly
    be some overhead.
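
    To sketch the optimization (again purely illustrative, building on
    the record layout above): the historical records for a key are
    ordered by ascending create_tid, so the lookup can binary-search
    for the newest qualifying element instead of walking all of them:

        /*
         * Records for one key sit side-by-side in the leaf, ordered
         * by ascending create_tid.  Rather than scanning all n
         * historical elements, binary-search for the greatest
         * create_tid <= tid: O(log n) instead of O(n).
         * (hammer_tid_t and struct btree_record as sketched above.)
         */
        static const struct btree_record *
        find_as_of(const struct btree_record *recs, int n,
            hammer_tid_t tid)
        {
                int lo = 0, hi = n;     /* answer lives in [lo, hi) */

                while (lo < hi) {
                        int mid = lo + (hi - lo) / 2;

                        if (recs[mid].create_tid <= tid)
                                lo = mid + 1;
                        else
                                hi = mid;
                }
                return (lo > 0 ? &recs[lo - 1] : NULL);
        }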

    The mitigating factor is, of course, that the HAMMER B-Tree is pruned
    every night to match the requested snapshot policy.
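
    Conceptually the pruner's test is simple: a deleted record that was
    both created and deleted between the same two retained snapshots is
    visible to no snapshot and to no current lookup, so it can go away.
    Another illustrative sketch, not the real pruning code:

        /*
         * snaps[] holds the retained snapshot tids in ascending
         * order.  interval_of() says which gap between retained
         * snapshots a tid falls into.
         */
        static int
        interval_of(hammer_tid_t tid, const hammer_tid_t *snaps,
            int nsnaps)
        {
                int i;

                for (i = 0; i < nsnaps; ++i) {
                        if (tid <= snaps[i])
                                return (i);
                }
                return (nsnaps);
        }

        /*
         * A record created and deleted within the same gap is seen by
         * no retained snapshot, so the pruner may discard it.
         */
        static int
        record_prunable(const struct btree_record *rec,
            const hammer_tid_t *snaps, int nsnaps)
        {
                if (rec->delete_tid == 0)       /* still live, keep */
                        return (0);
                return (interval_of(rec->create_tid, snaps, nsnaps) ==
                    interval_of(rec->delete_tid, snaps, nsnaps));
        }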

    --

    It would be cool if someone familiar with both ZFS's mirroring and
    HAMMER's mirroring could test the feature and performance set.  What
    I like most about HAMMER's mirroring is that the mirroring target can
    have a different history retention policy than the master.

    HAMMER's current mirror streaming feature is also pretty cool if I
    do say so myself.  Since incremental mirroring is so fast, the
    hammer utility can poll for changes every few seconds, and since
    the stream isn't queued it can be killed and restarted at any time.
    Network outages don't really affect it.
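
    The reason it restarts so cleanly is that the stream itself carries
    no state; the slave just remembers the highest transaction id it
    has fully synced, and each pass re-covers everything after that.
    Roughly, using the same illustrative record layout as above:

        /*
         * One incremental pass: emit every record touched in the
         * window (last_tid, cur_tid].  The slave persists last_tid
         * only after the pass completes, so killing the stream at
         * any point just means the next pass re-covers the same
         * window -- nothing is lost.
         */
        static hammer_tid_t
        mirror_pass(const struct btree_record *recs, int n,
            hammer_tid_t last_tid, hammer_tid_t cur_tid,
            void (*emit)(const struct btree_record *))
        {
                int i;

                for (i = 0; i < n; ++i) {
                        if ((recs[i].create_tid > last_tid &&
                             recs[i].create_tid <= cur_tid) ||
                            (recs[i].delete_tid > last_tid &&
                             recs[i].delete_tid <= cur_tid))
                                emit(&recs[i]);
                }
                return (cur_tid);       /* new last_tid on success */
        }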

    I also added a very cool feature to the hammer mirror-stream directive
    which allows you to limit the bandwidth, preventing the mirroring
    operation from interfering with production performance.
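
    Conceptually the limiter is nothing more than a pacing loop on the
    sending side.  This is a sketch of the concept only, not the code
    in the hammer utility; assume bytes_per_sec is nonzero:

        #include <time.h>
        #include <unistd.h>

        /*
         * Crude pacing: after sending 'total' bytes since 'start',
         * sleep until wall time catches up with total / bytes_per_sec.
         * Good enough to keep a mirroring stream from monopolizing
         * the disks or the wire.
         */
        static void
        pace(unsigned long long total, unsigned long long bytes_per_sec,
            time_t start)
        {
                time_t target = start + (time_t)(total / bytes_per_sec);
                time_t now = time(NULL);

                while (now < target) {
                        sleep((unsigned int)(target - now));
                        now = time(NULL);
                }
        }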

						-Matt
