hammer snapshot blocks on high file system activity
Michael Neumann
mneumann at ntecs.de
Wed Jul 9 01:53:34 PDT 2008
Matthew Dillon wrote:
> :Hi,
> :
> :I'm doing a "cpdup /usr/src /hammer/src" and a "hammer snapshot
> :/hammer/snap-%Y" at the same time. "hammer snapshot" takes as long as
> :"cpdup". Indeed, it does not return until the "cpdup" is completed.
> :
> :I think it's a problem in "hammer sync". But taking a snapshot is
> :useless if one has to wait until no more file activity happens (which is
> :rarely the case on a server).
> :
> :Regards,
> :
> :    Michael
> It isn't waiting for the activity to stop, it is trying to sync out
> the caches. It should be able to do that in parallel with running
> activity but /usr/src is only ~450MB or so and it is likely that
> a large chunk of the target copy will have been cached, depending
> on how much memory you have.
Okay, I'm now trying a "cpdup /usr /hammer/usr2" and a "cp 10GB-file
/hammer" at the same time. But I'm having trouble deleting the
/hammer/usr2 directory structure afterwards:
# rm -rf /hammer/usr2
/hammer/usr2/src/usr.bin/paste/paste.1 rename-after-copy failed: No such file or directory
/hammer/usr2/src/usr.bin/paste/paste.c create (uid 0, euid 0) failed: No such file or directory
rm: usr2/src/usr.bin: Directory not empty
rm: usr2/src: Directory not empty
rm: usr2/: Directory not empty
When I ran the "cp 10GB-file" paired with the cpdup, "hammer snapshot"
completed after 7 seconds. But without the "cp" it takes much longer
(I eventually aborted it).
> It is possible to query the last synchronized transaction id
> and generate a softlink based on that, without doing a new sync.
> This would give you a snapshot as-of 0-60 seconds ago versus 'now'.
> That is not usually what the user desires, though.
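For reference, a rough sketch of that idea from the shell, assuming the
hammer(8) "synctid" command and the "@@0x<tid>" snapshot path syntax
(the exact output format of synctid and the "quick" keyword are
assumptions here, worth checking against the man page):

  # soft-sync and grab a recent transaction id without a full flush
  tid=$(hammer synctid /hammer quick)
  # point a softlink at the corresponding as-of view of the filesystem
  ln -s "/hammer/@@${tid}" /hammer/snap-quick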
> You'd need to copy more data, on the order of a few gigs, to measure
> how long it takes snapshot to stage out the caches. There could be
> a bug there but I'm pretty sure it is coded properly.
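Something like this would put a number on it (the 10GB-file name is just
the placeholder used earlier in this thread):

  # start a large linear write, give the caches time to fill,
  # then time how long the snapshot takes to flush them out
  cp 10GB-file /hammer/bigcopy &
  sleep 30
  time hammer snapshot /hammer/snap-%Y
  wait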
> Remember that hard drives cannot do 70MBytes/sec worth of random I/O,
> they can only do that rate when reading or writing large linear swaths.
I've seen a 180 MB/sec read rate on a 1 GB file when reading it twice
(the first read "only" reaches 100 MB/sec). Does that mean a large part
of the file is cached by HAMMER (I have only 1 GB of main memory)? If
that is the case, reading a big file would evict the cached working set,
which might include a lot of small, frequently used files.
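A quick way to check where the second read comes from would be something
like the following (the file name is a placeholder for whatever 1 GB
file was used; watch "iostat 1" in a second terminal while it runs):

  # if iostat shows almost no physical disk traffic during the second
  # dd pass, the data is being served from the buffer cache
  dd if=/hammer/testfile-1g of=/dev/null bs=1m
  dd if=/hammer/testfile-1g of=/dev/null bs=1m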
> When you get into more random I/O the data rate will drop to
> 5-15 MBytes/sec.
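(As a rough back-of-the-envelope check, with assumed numbers: a 7200 RPM
disk manages on the order of 100-150 random seeks per second, so at
~120 seeks/sec with 64 KiB transferred per seek you get about
120 x 64 KiB = 7.5 MBytes/sec, right in that 5-15 MBytes/sec range.)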
Good to know! I ran "bonnie++", but with a wrong setting, so it
generated 1 million files in one directory, which slowed down the whole
benchmark a bit ;-). I don't want to try the same on UFS (I did
something similar a few months ago and it showed that UFS can't create
that many files in one directory). Is there a practical limit on the
number of files per directory in HAMMER?
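For next time, spreading the small-file phase over several directories
should avoid that; a sketch using bonnie++'s -n option as I understand
it from its man page (option format worth double-checking):

  # 16*1024 small files of up to 4 KiB, spread over 64 subdirectories
  # instead of all in a single directory
  bonnie++ -d /hammer/bench -s 2048 -n 16:4096:0:64 -u root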
Regards,
Michael