is hammer for us
Matthew Dillon
dillon at apollo.backplane.com
Tue Aug 11 20:46:34 PDT 2009
:I am a student doing fluid dynamics research. We generate a lot of
:data (close to 2TB a day). We are having scalability problems with
:NFS. We have 2 Linux servers with 64GB of RAM, and they are serving
:the files.
:
:We are constantly running into I/O bottleneck problems. Would hammer
:fix the scalability problems?
:
:TIA
If you are hitting an I/O bottleneck you need to determine where the
bottleneck is. Is it in the actual accesses to the disk subsystem?
Are the disks seeking randomly or accessing data linearly? Is the
transfer rate acceptable? Is it the network? Is it the NFS
implementation? Is it the underlying filesystem on the server? Are
there parallelism issues?
You need to find the answer to those questions before you can determine
a solution.
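A quick way to answer the disk-side questions is to compare sequential
versus random read throughput directly on the server. The following is
a minimal Python sketch, not anything from the original mail; the file
path is a placeholder, and the test file should be much larger than RAM
so the page cache doesn't skew the numbers:

    import os, random, time

    PATH = "/data/testfile"   # hypothetical path to a large test file
    BLOCK = 1 << 20           # 1 MiB per read
    SAMPLES = 256

    def throughput(sequential):
        size = os.path.getsize(PATH)
        fd = os.open(PATH, os.O_RDONLY)
        try:
            start = time.time()
            total = 0
            for i in range(SAMPLES):
                if sequential:
                    off = (i * BLOCK) % max(1, size - BLOCK)
                else:
                    off = random.randrange(0, max(1, size - BLOCK))
                os.lseek(fd, off, os.SEEK_SET)
                total += len(os.read(fd, BLOCK))
            return total / (time.time() - start) / (1 << 20)  # MiB/s
        finally:
            os.close(fd)

    print("sequential: %6.1f MiB/s" % throughput(True))
    print("random:     %6.1f MiB/s" % throughput(False))

If the sequential number is near the platter rate but the random number
collapses, the drives are seek-bound and the problem is the access
pattern, not the filesystem.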
Serving large files typically does not create a filesystem bottleneck.
i.e. any filesystem, even something like ZFS, should still be able
to serve large linear files at the platter rate. Having a lot of RAM
only helps if there is some locality of reference in the data set.
i.e. if the data set is much larger than available memory but there
is no locality of reference and the disk drives are hitting their seek
limits, no amount of RAM will solve the problem.
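To put a number on that: with uniform random access and no locality,
the expected page-cache hit rate is roughly RAM size divided by
working-set size. A back-of-the-envelope sketch using the figures
from the original post (64GB of RAM, ~2TB of data a day), purely for
illustration:

    ram_gb = 64.0            # server RAM, from the post
    working_set_gb = 2048.0  # ~2 TB of data generated per day
    hit_rate = ram_gb / working_set_gb
    print("expected cache hit rate: %.1f%%" % (hit_rate * 100))  # ~3.1%

So roughly 97% of reads would still go to the platters, which is why
the seek limit dominates regardless of how much RAM is installed.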
(DragonFly's 64-bit support isn't reliable yet, so DragonFly can't
access that amount of RAM right now anyhow).
-Matt
Matthew Dillon
<dillon at backplane.com>