UFS filesystem size limit

Wiger van Houten Wiger at hccnet.nl
Sun Sep 4 03:02:23 PDT 2005

I would like to add an option for encrypting the filesystem to that list,  
that would be REALLY welcome too ;).

On Sat, 03 Sep 2005 23:54:49 +0200, Andreas Hauser <andy at xxxxxxxxxxxxxxx> wrote:

dillon wrote @ Sat, 3 Sep 2005 09:49:03 -0700 (PDT):

    A friend of mine swears by linux, but curses just about every
    filesystem he tries (and curses UFS as well).  Linux FS's have a lot
    of hype, but the only thing that they really have going for them is
    the 'instant reboot' feature... if you trust them enough, that is.
    Reiser is unbelievably sensitive to disk errors, to the point where
    you can lose the whole filesystem when something unexpected happens.
    JFS has poor performance.  etc etc.  Linux filesystems are not
    poster children.  Frankly, anyone who feels a need to put a million
    files into a single directory gets what they deserve.
As for trust, ext2, ext3 and UFS all play in the same league for me,
judging by the problems I have had with each of them.
Well, JFS uses less CPU than XFS, which will make it faster on a busy
system.  But let's not forget that XFS and JFS were not native but were
imported, and somehow degraded a bit in the move.

Some aggregation would certainly be nice though (fewer but better
filesystems, and with better licenses for sharing with us :)

If the hardware dies on you, you cannot trust the FS anymore anyway.
Restoring from backups is the much more secure and often faster way.
Hiding disk problems does not really help when, with today's disks, you
can be pretty sure they will eventually die.

    There are two things I want for UFS:  (1) Nearly instant reboots
    (without having to depend on softupdates), and (2) an ability to grow
    or shrink the filesystem.  Both are quite achievable goals.

The most interesting points in the filesystem area for me are:

* Snapshots
  This finally makes backups atomic
  -> no more "file has changed while backing up".
  Second, it makes it easy to make the backups available to the users [1],
  like "/var/backup/${YEAR}/${MONTH}/${DAY}" or, as some have it,
  "./.snapshot".
  -> no more "I just deleted this file. Can you restore it?"
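
The dated-directory layout above can be sketched as a small script.  This
is a hypothetical sketch only: mksnap_ffs(8) and mdconfig(8) are real
FreeBSD tools, but the paths, names and exact invocations here are my
own assumptions, not a tested procedure.

```shell
#!/bin/sh
# Sketch of the "/var/backup/${YEAR}/${MONTH}/${DAY}" layout from the post.
# The post suggests /var/backup as the root; /tmp is used as the default
# here only so the sketch can be tried without privileges.
ROOT="${1:-/tmp/backup-demo}"
DEST="$ROOT/$(date +%Y)/$(date +%m)/$(date +%d)"
mkdir -p "$DEST"
echo "$DEST"
# On FreeBSD with /var on UFS2, the atomic part would be roughly
# (hedged, untested invocation; snapshot name is made up):
#   mksnap_ffs /var /var/.snap/daily                      # freeze an image
#   md=$(mdconfig -a -t vnode -o readonly -f /var/.snap/daily)
#   mount -o ro "/dev/$md" /mnt
#   tar -C /mnt -cf "$DEST/var.tar" .                     # consistent backup
#   umount /mnt; mdconfig -d -u "$md"; rm /var/.snap/daily
```

Because the snapshot is taken atomically, everything under the mounted
image is from one instant, which is what kills the "file has changed
while backing up" warnings.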

* Journaling
  If this works out how I hope, it will be easy to mirror everything to
  a failover server.  No more expensive rsync(1)s or complex mail setups
  to get the mail to the backup mirror.  Fast recovery after a crash is
  nice too, but then it shouldn't crash in the first place ;) In high
  availability scenarios you probably prefer the failover server anyway,
  especially since you want to debug the crash.  So 5 minutes for fsck
  or log replay are OK for me; it need not be instant.

* Volume management
  Growing and shrinking are not so interesting, but hot-plugging in a
  disk and extending a live FS, that would be great; it need not be as
  complex as vinum though.  I always liked how that works on Tru64 (or
  whatever its nom du jour is).  Like "mkdir volume1 && cd volume1 &&
  ln -s /dev/disk1 . && ln -s /dev/disk2 .", there you go.

* Networked FS
  Cluster FSes like GFS or Lustre are interesting, and not only for
  computing clusters; e.g. we have 4-8 NFS servers serving one shared
  GFS here for the homedirs.  I am not sure whether journaling can
  enable such things, but you said something in that direction.
  NFS4[2] is something I badly want, and I am sure a year down the road
  this will be a hard requirement at many places.  I always imagined I
  could just export my /usr/ports and most problems with ports would go
  away.
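
The /usr/ports idea might look like this in a FreeBSD /etc/exports,
following exports(5).  This is a hypothetical fragment: the network and
security flavor are made up, and the "V4:" line (which designates the
NFSv4 root) assumes an NFSv4-capable server.

```
# /etc/exports -- hypothetical NFSv4 export of the ports tree
V4: / -sec=sys
/usr/ports -ro -network 192.168.0.0 -mask 255.255.255.0
```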

* Excessive usage
  Zettabyte filesystems with a trillion files per dir would be nice :-)
  In the cluster area such things are needed.  Look at the problems
  caused by the amount of data coming out of the LHC[3].


[1] This is really well done in plan9.
[2] There are patches for nearly all BSD systems out there, though Jeff
    thinks the OpenSolaris implementation is the better way to go.


Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
