OT - was Hammer or ZFS based backup, encryption

Jeremy Chadwick jdc at parodius.com
Sun Feb 22 11:51:57 PST 2009


On Sun, Feb 22, 2009 at 08:12:36PM +0100, Ulrich Spörlein wrote:
> On Sun, 22.02.2009 at 06:33:44 -0800, Jeremy Chadwick wrote:
> > On Sun, Feb 22, 2009 at 01:36:28PM +0100, Michael Neumann wrote:
> > > Okay "zpool remove" doesn't seem to work as expected, but it should
> > > work well at least for RAID-1 (which probably no one uses for large
> > > storage systems ;-). Maybe "zfs replace" works, if you replace an old
> > > disk, with a larger disk, and split it into two partitions, the one
> > > equally sized to the old, and the other containing the remainder of the
> > > space. Then do:
> > > 
> > >   zfs replace tank old_device new_device_equally_sized
> > >   zfs add tank new_device_remainder
> > > 
> > > But you probably know more about ZFS than me ;-)
> > 
> > In this case, yes (that I know more about ZFS than you :-) ).  What
> > you're trying to do there won't work.
> > 
> > The "zfs" command manages filesystems (e.g. pieces under a zpool).  You
> > cannot do anything with devices (disks) with "zfs".  I think you mean
> > "zpool", especially since the only "replace" command is "zpool replace".
> > 
> > What you're trying to describe won't work, for the same reason I
> > described above (with your "zpool add tank ad8s1" command).  You can
> > split the disk into two pieces if you want, but it's not going to
> > change the fact that you cannot *grow* a zpool.  You literally have to
> > destroy it and recreate it for the pool to increase in size.
> > 
> > I've been through this procedure twice in the past year, as I replaced
> > 250GB disks with 500GB, and then 500GB disks with 750GB.  It's a *huge*
> > pain, and I cannot imagine anyone in an enterprise environment using ZFS
> > to emulate a filer -- it simply won't work.  For individual servers
> > (where disks are going to remain the same size unless the box is
> > formatted, etc.), oh yes, ZFS is absolutely fantastic.
> 
> This is nonsense, of course. Here's proof (running on FreeBSD 7.1)
>
> {snip}

You're correct -- my statement is incorrect/inaccurate.

The problem I was attempting to describe: if the members of a pool are
not all the same size, every member is treated as if it were only as
large as the smallest one.  In plain English: you cannot
"mix-and-match" different-sized disks.
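
To put rough numbers on it (back-of-the-envelope, using the disk sizes
from the example below; a real pool loses a bit more to labels and
metadata):

  raidz1 of 64GB + 64GB + 256GB
    -> every member is treated as a 64GB member
    -> raw size  ~ 3 x 64GB = 192GB
    -> usable    ~ 2 x 64GB = 128GB  (one member's worth goes to parity)
    -> 256GB - 64GB = 192GB of the large disk is never touched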

Example (with real disks):

da1: 65536MB (134217728 512 byte sectors: 255H 63S/T 8354C)
da2: 65536MB (134217728 512 byte sectors: 255H 63S/T 8354C)
da3: 65536MB (134217728 512 byte sectors: 255H 63S/T 8354C)
da4: 262144MB (536870912 512 byte sectors: 255H 63S/T 33418C)

testbox# zpool create tank raidz1 da1 da2 da3
testbox# df -k /tank
Filesystem  1024-blocks   Used     Avail Capacity  Mounted on
tank          131303936      0 131303936     0%    /tank
testbox# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                    191G    192K    191G     0%  ONLINE     -
testbox# zpool offline tank da2
Bringing device da2 offline
testbox# zpool replace tank da2 da4

Wait a few moments for the resilvering to take place...

testbox# zpool status | grep scrub
 scrub: resilver completed with 0 errors on Sun Feb 22 11:32:15 2009
testbox# df -k /tank
Filesystem 1024-blocks Used     Avail Capacity  Mounted on
tank         131303936    0 131303936     0%    /tank

If da1 and da3 were also replaced with 256GB disks, the pool should grow.
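
A minimal sketch of what that would look like -- da5 and da6 here are
hypothetical 256GB replacement disks, not devices on the box above --
and depending on the ZFS version an export/import of the pool may be
needed before the extra space shows up:

  zpool replace tank da1 da5
  zpool replace tank da3 da6
    (wait for each resilver to finish; watch "zpool status")
  zpool export tank
  zpool import tank
  zpool list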

In this example, essentially 192GB of space on da4 is "wasted" (unused
and unavailable) due to what I've described.

zpool destroy/create *will not* fix this situation either, so I was also
wrong in that regard.  Case in point:

testbox# zpool destroy tank
testbox# zpool create tank raidz1 da1 da3 da4
testbox# df -k /tank
Filesystem 1024-blocks Used     Avail Capacity  Mounted on
tank         131303936    0 131303936     0%    /tank
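
For completeness, the split-the-big-disk idea from the quoted mail would
look roughly like this (using zpool rather than zfs), assuming da4 had
already been partitioned -- with whatever tool you prefer -- into a 64GB
slice da4s1 and a remainder slice da4s2:

  zpool create tank raidz1 da1 da3 da4s1
  zpool add -f tank da4s2

zpool warns about the mismatched replication level (hence the -f), and
the added slice has no redundancy, so a failure of da4 takes the whole
pool with it.  That buys back the otherwise-wasted space, but at a cost.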

-- 
| Jeremy Chadwick                                jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
