Initial filesystem design synopsis.

Markus Hitter mah at jump-ing.de
Thu Feb 22 11:15:39 PST 2007


I'm no expert either, but I think I can answer a few bits.

On 22.02.2007 at 17:44, Jose timofonic wrote:

> I read about tons of distributed file systems: 9P,
> Andrew File System, NFS, Transarc AFS and OpenAFS,
> Ceph, Coda, GoogleFS, Haiku's NetFS...
Typical networked file systems use a client-server approach: one
machine serves the files, and the same or other machines read and
write them. The storage is always centralised, typically on the
server only. The Andrew File System would be an exception to the
latter.
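
To illustrate (host and path names made up): an NFS client simply
mounts the server's export, and from then on all file I/O travels
over the network to that single machine:

    mount -t nfs fileserver:/export/data /mnt/data

Every client gets the same view, but the bytes live only on the server.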

The goal of a distributed file system would be to spread the storage
over multiple machines. One machine writes a file, and all machines
see the file locally on disk. Sort of a mix of NFS and BitTorrent.
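
To make that concrete, here is a purely hypothetical sketch (not
Matt's design; the peer addresses and wire format are made up): the
write path applies each change to the local disk first, then pushes
the same change to every known peer, so each machine ends up with its
own on-disk copy:

    import base64, json, socket

    PEERS = [("10.0.0.2", 9000), ("10.0.0.3", 9000)]  # made-up peers

    def replicated_write(path, data):
        # Apply the write to the local disk first ...
        with open(path, "wb") as f:
            f.write(data)
        # ... then send the same change to every peer, which would
        # apply it to its own local copy on receipt.
        msg = json.dumps({"path": path,
                          "data": base64.b64encode(data).decode("ascii")})
        for host, port in PEERS:
            with socket.create_connection((host, port)) as conn:
                conn.sendall(msg.encode("ascii"))

A real design would also need conflict resolution, partial replication
and failure handling; this only shows the "one machine writes, all
machines see it locally" idea.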


> I just heard NFS runs quite badly over
> wide-area internet connections (WAN) [...]
NFS is great in local networks, but over the internet, security
becomes a headache.
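
A common workaround (hostnames and ports made up, and assuming NFSv4,
which needs only one TCP port) is to tunnel the NFS traffic through
SSH rather than exposing it to the internet directly:

    # forward a local port to the server's NFS port (2049)
    ssh -fN -L 3049:localhost:2049 user@fileserver

    # mount through the tunnel
    mount -t nfs4 -o port=3049 localhost:/export /mnt/data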


> Will it be part of "DillonFS" or another layer running
> on top of it (I read something about implementing
> SYMLINK)? How will the distributed behaviour work?
This is the topic of the current discussion: how to get the files
distributed, and whether to enhance ZFS or write the whole thing from
scratch.

> What about interoperability with other operating
> systems?
So far nobody has asked for interoperability. You'd get data in and
out by resharing the FS via NFS, FTP, WebDAV, etc.
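
For instance (Linux-style /etc/exports syntax; path and network are
made up), resharing the mounted FS read-only over NFS would be an
ordinary export entry, activated with "exportfs -ra":

    # /etc/exports -- re-export the FS, read-only, to the local network
    /mnt/dfs    192.168.1.0/24(ro,sync)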


> It would be nice if Matt could explain a bit about his
> plans on this topic, if he wants to.
Well, he currently does.

Markus

- - - - - - - - - - - - - - - - - - -
Dipl. Ing. Markus Hitter
http://www.jump-ing.de/