No space left on device

Predrag Punosevac punosevac72 at gmail.com
Sun Jul 24 10:01:40 PDT 2022


Predrag Punosevac <punosevac72 at gmail.com> wrote:

> Hi DragonFly users,
> 
> My DragonFly file server, which has worked like a Swiss watch for a number
> of years,
> 
> dfly# uname -a
> DragonFly dfly.int.bagdala2.net 6.2-RELEASE DragonFly
> v6.2.2.3.gca806c-RELEASE #33: Sun Jul 10 23:48:15 EDT 2022
> root at dfly.int.bagdala2.net:/usr/obj/usr/src/sys/X86_64_GENERIC  x86_64
> 
> appears to be broken. The machine was recently upgraded to DragonFly
> 6.2.2 per
> 
> https://marc.info/?l=dragonfly-users&m=165490764112773&w=2
> 
> This morning I tried to upgrade packages (for no particular reason) when I
> got this message:
> 
> dfly# pkg upgrade
> pkg: Loading of plugin 'provides' failed: Shared object "libpcre.so.1"
> not found, required by "provides.so"
> pkg: Plugins cannot be loaded
> 
> I could swear that I checked for package updates after upgrading the OS,
> but I thought that pkg might be temporarily broken, as happened once last
> year due to some Lua libraries. It appears that is not the case, so I
> started suspecting that I didn't run make install-all or make upgrade, due
> to my advanced age :-)
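> 
> For the record, a quick way to tell whether the library itself had gone
> missing, rather than pkg being confused, would be something along these
> lines (the /usr/local/lib path and the pcre package name are assumptions
> on my part):
> 
> dfly# pkg which /usr/local/lib/libpcre.so.1
> dfly# pkg check -s pcre
> 
> pkg which reports which package owns the file, and pkg check -s verifies
> the checksums of the installed package, so a missing shared object should
> show up in either one.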
> 
> Anyhow, this gave me the clue:
> 
> dfly# git pull
> remote: Enumerating objects: 80, done.
> remote: Counting objects: 100% (64/64), done.
> remote: Compressing objects: 100% (30/30), done.
> remote: Total 30 (delta 26), reused 0 (delta 0), pack-reused 0
> error: unable to create temporary file: No space left on device
> fatal: failed to write object
> fatal: unpack-objects failed
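> 
> With the root volume completely full, I suppose the quickest way to claw
> back a little working space without touching HAMMER itself would have
> been clearing the package cache, something like (just a thought, not
> something I actually ran at this point):
> 
> dfly# pkg clean -ay
> 
> which removes the cached package files under /var/cache/pkg; /var is a
> PFS on the same HAMMER volume as the root, so that space comes back to
> the volume once the history is pruned.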
> 
> 
> dfly# df -h
> Filesystem                  Size   Used  Avail Capacity  Mounted on
> ROOT                       21.7G  21.7G     0B   100%    /
> devfs                      1024B  1024B     0B   100%    /dev
> /dev/serno/B620550018.s1a  1008M   734M   194M    79%    /boot
> /pfs/@@-1:00001            21.7G  21.7G     0B   100%    /var
> /pfs/@@-1:00002            21.7G  21.7G     0B   100%    /tmp
> /pfs/@@-1:00003            21.7G  21.7G     0B   100%    /home
> /pfs/@@-1:00004            21.7G  21.7G     0B   100%    /usr/obj
> /pfs/@@-1:00005            21.7G  21.7G     0B   100%    /var/crash
> /pfs/@@-1:00006            21.7G  21.7G     0B   100%    /var/tmp
> procfs                     4096B  4096B     0B   100%    /proc
> BACKUP                     1862G   419G  1443G    23%    /data
> MIRROR                      465G   163G   303G    35%    /mirror
> /data/pfs/@@-1:00001       1862G   419G  1443G    23%    /data/backups
> /data/pfs/@@-1:00002       1862G   419G  1443G    23%    /data/nfs
> tmpfs                      1944M     0B  1944M     0%    /var/run/shm
> 
> 
> That should not happen, as the cron job should take care of the system
> snapshots. I am using HAMMER1 on this machine, as it was originally
> provisioned long before HAMMER2 was production-ready. I did have a power
> outage last night, which is why I started tinkering with this to begin
> with. After manually running
> 
> hammer cleanup
> 
> the system looks better
> 
> dfly# df -h
> Filesystem                  Size   Used  Avail Capacity  Mounted on
> ROOT                       21.7G  12.0G  9997M    55%    /
> devfs                      1024B  1024B     0B   100%    /dev
> /dev/serno/B620550018.s1a  1008M   734M   194M    79%    /boot
> /pfs/@@-1:00001            21.7G  12.0G  9997M    55%    /var
> /pfs/@@-1:00002            21.7G  12.0G  9997M    55%    /tmp
> /pfs/@@-1:00003            21.7G  12.0G  9997M    55%    /home
> /pfs/@@-1:00004            21.7G  12.0G  9997M    55%    /usr/obj
> /pfs/@@-1:00005            21.7G  12.0G  9997M    55%    /var/crash
> /pfs/@@-1:00006            21.7G  12.0G  9997M    55%    /var/tmp
> procfs                     4096B  4096B     0B   100%    /proc
> BACKUP                     1862G   419G  1443G    23%    /data
> MIRROR                      465G   162G   303G    35%    /mirror
> /data/pfs/@@-1:00001       1862G   419G  1443G    23%    /data/backups
> /data/pfs/@@-1:00002       1862G   419G  1443G    23%    /data/nfs
> tmpfs                      1944M     0B  1944M     0%    /var/run/shm
> 
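> If the nightly cleanup really did not run, I assume the place to look is
> the periodic knob and the per-PFS HAMMER configuration, roughly (the knob
> name is from memory):
> 
> dfly# grep -i hammer /etc/defaults/periodic.conf
> dfly# hammer viewconfig /
> dfly# hammer snapls /
> 
> viewconfig shows the snapshot/prune/reblock schedule that hammer cleanup
> follows for a PFS, and snapls lists the snapshots it is currently keeping
> on the root.
> 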

Hi Matt,

Please check this out:

dfly# df -h
Filesystem                  Size   Used  Avail Capacity  Mounted on
ROOT                       21.7G  14.1G  7837M    65%    /
devfs                      1024B  1024B     0B   100%    /dev
/dev/serno/B620550018.s1a  1008M   734M   194M    79%    /boot
/pfs/@@-1:00001            21.7G  14.1G  7837M    65%    /var
/pfs/@@-1:00002            21.7G  14.1G  7837M    65%    /tmp
/pfs/@@-1:00003            21.7G  14.1G  7837M    65%    /home
/pfs/@@-1:00004            21.7G  14.1G  7837M    65%    /usr/obj
/pfs/@@-1:00005            21.7G  14.1G  7837M    65%    /var/crash
/pfs/@@-1:00006            21.7G  14.1G  7837M    65%    /var/tmp
procfs                     4096B  4096B     0B   100%    /proc
BACKUP                     1862G   419G  1443G    23%    /data
MIRROR                      465G   162G   303G    35%    /mirror
/data/pfs/@@-1:00001       1862G   419G  1443G    23%    /data/backups
/data/pfs/@@-1:00002       1862G   419G  1443G    23%    /data/nfs
tmpfs                      1944M     0B  1944M     0%    /var/run/shm


and compare it to the one above. The root usage grew by 2 GB, but everything
else is the same, and that is even after running hammer cleanup. Now check
this out:

dfly# du -h -s *
6.0K    COPYRIGHT
1.6M    bin
734M    boot
  0B    build
155G    data
  0B    dev
1.7M    etc
 10K    home
8.5M    lib
576K    libexec
  0B    mirror
  0B    mnt
  0B    pfs
 72K    proc
 13M    rescue
8.5K    root
 12M    sbin
  0B    sys
  0B    test-hammer2
  0B    tmp
5.5G    usr
844M    var


The same as yesterday. I checked the log files and I don't see anything in
there; I was looking for messages from some failed process that could
balloon the HAMMER history. Any ideas? I was about to say that I am almost
90% sure there is a regression introduced with the last release, but
something is not logical: BACKUP and MIRROR are separate HDDs and file
systems, also on HAMMER1, and they are stable. Only the root is growing. I
am doing swapcache on a separate drive. I will shut up until I have time to
go back and perhaps bisect commits. I was a moron to update the rescue
system. Any hints on how to go backward?
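
In the meantime, my plan is to look at where HAMMER itself thinks the space
is going, along these lines (just a sketch, the exact invocations are from
memory):

dfly# hammer info
dfly# hammer snapls /
dfly# hammer prune-everything /
dfly# hammer reblock /

hammer info should show the big-block usage of the volume, snapls lists the
snapshots pinning history on the root PFS, and prune-everything followed by
reblock is the blunt instrument that drops all fine-grained history not held
by a snapshot and then compacts the freed space.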


Predrag

> 
> pkg upgrade works as expected
> 
> dfly# pkg upgrade
> Updating Avalon repository catalogue...
> Fetching meta.conf: 100%    163 B   0.2kB/s    00:01    
> Fetching packagesite.pkg: 100%    6 MiB   1.1MB/s    00:06    
> Processing entries: 100%
> Fetching provides database: 100%   12 MiB   2.6MB/s    00:05    
> Extracting database....success
> Avalon repository update completed. 30176 packages processed.
> All repositories are up to date.
> Checking for upgrades (1 candidates): 100%
> Processing candidates (1 candidates): 100%
> The following 1 package(s) will be affected (of 0 checked):
> 
> Installed packages to be UPGRADED:
>         curl: 7.83.1 -> 7.84.0 [Avalon]
> 
> Number of packages to be upgraded: 1
> 
> 1 MiB to be downloaded.
> 
> Proceed with this action? [y/N]: y
> [1/1] Fetching curl-7.84.0.pkg: 100%    1 MiB 371.4kB/s    00:04    
> Checking integrity... done (0 conflicting)
> [1/1] Upgrading curl from 7.83.1 to 7.84.0...
> [1/1] Extracting curl-7.84.0: 100%
> 
> 
> I can upgrade the system:
> 
> dfly# git pull
> remote: Enumerating objects: 80, done.
> remote: Counting objects: 100% (64/64), done.
> remote: Compressing objects: 100% (30/30), done.
> remote: Total 30 (delta 26), reused 0 (delta 0), pack-reused 0
> Unpacking objects: 100% (30/30), 7.70 KiB | 63.00 KiB/s, done.
> From git://git.dragonflybsd.org/dragonfly
>    353c2689d5..c0211a14ce  master     -> origin/master
> Already up to date.
> 
> 
> 
> Where do I go from here? Any hints? Did anybody notice any regressions
> after the last upgrade? The file system on my machine still seems a bit
> too large at 12 GB, but that is due to the existence of /usr/src as well
> as the /usr/obj/usr directory used for building the system. The latter
> directory is 3.2 GB.
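> 
> If I need that space back, I assume I can simply measure and clear the
> build artifacts, roughly like this (paths as on this box):
> 
> dfly# du -h -s /usr/src /usr/obj
> dfly# rm -rf /usr/obj/*
> 
> /usr/obj is its own PFS here, so whatever gets freed there should show up
> on the shared HAMMER volume once the nightly cleanup prunes the history.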
> 
> Best,
> Predrag

