hammer_alloc_data panic
Matthew Dillon
dillon at apollo.backplane.com
Tue Jul 15 17:36:28 PDT 2008
:> space to do the remainder with no limitations:
:>
:> hammer reblock /home 25
:> hammer reblock /home 50
:> hammer reblock /home 75
:> hammer reblock /home 90
:> hammer reblock /home
:> whew...
:
:What penalty would there be (time taken, perhaps?) to have this be the
:default iterative behavior when reblocking?
Basically just time and disk bandwidth, since it would have to scan
the B-Tree multiple times. It's a good idea. To really make it work
well, the filesystem needs to be able to report its level of
fragmentation back to the hammer utility.
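For anyone who wants that iterative behavior today, it can be scripted
trivially. A minimal sh sketch, assuming /home is the HAMMER mount and
using arbitrary example fill levels:

    #!/bin/sh
    # Reblock in passes at increasing fill levels, so the early,
    # cheap passes free up space for the later, more aggressive ones.
    for level in 25 50 75 90 100; do
        hammer reblock /home $level || exit 1
    done

Each pass rescans the B-Tree, which is where the extra time and disk
bandwidth go.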
Ultimately some form of automation will make this a non-issue. My
preference is a thread initiated by the filesystem itself which
does a slow iteration over the B-Tree. At the very least it could
measure the fragmentation and provide the feedback that the hammer
utility needs. Some careful thought is needed to come up with the
proper solution.
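As an interim form of automation, the existing hammer(8) knobs can
already approximate a slow background iteration. A sketch of a nightly
cron entry (the cycle-file path and time budget are made-up example
values):

    # Resume where the last pass left off (-c cyclefile) and stop
    # after 300 seconds (-t) so the scan stays slow and unobtrusive.
    0 3 * * * hammer -c /var/run/reblock.cycle -t 300 reblock /home

This doesn't measure fragmentation, it just bounds the work done per
pass, but it keeps reblocking from ever becoming a big-bang operation.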
There's just no time to think it through before the release, so we will
have to do it afterwards. Early adopters are going to be well aware of
the issues, so there's no rush. I think our best bet for the release
is simply to document the issue as fully as possible.
--
HAMMER creates somewhat of a new paradigm, requiring people to think
about storage a bit differently than they normally would. It fits
very well with the massive amounts of storage now becoming
available. What do you do with a terabyte drive that, short of piracy,
would take years to fill up? With HAMMER the whole point is to run
it on large storage media and in full historical mode (which is why
that is the default), so you get free snapshots and fine-grained backups.
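For example, that fine-grained access works through snapshot softlinks
that resolve to transaction IDs; a quick sketch with made-up paths:

    # Create a snapshot softlink; it points to /home@@<transaction-id>.
    hammer snapshot /home /home/snaps/2008-07-15
    # Browse the filesystem exactly as it was at that moment.
    ls /home/snaps/2008-07-15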
-Matt
Matthew Dillon
<dillon at backplane.com>