cvs commit: src/sys/vfs/hammer hammer.h hammer_flusher.c hammer_inode.c hammer_io.c hammer_object.c hammer_prune.c hammer_reblock.c hammer_recover.c hammer_transaction.c hammer_vfsops.c hammer_vnops.c
dillon at crater.dragonflybsd.org
Sun Jun 8 21:20:55 PDT 2008
dillon 2008/06/08 21:19:10 PDT
DragonFly src repository
sys/vfs/hammer hammer.h hammer_flusher.c hammer_inode.c
HAMMER 53C/Many: Stabilization
* HAMMER queues dirty inodes reclaimed by the kernel to the backend for
their final sync. Programs like blogbench can overload the backend and
generate more new inodes than the backend can dispose of, running
M_HAMMER out of memory.
Add code to stall on vop_open() when this condition is detected to
give the backend a chance to catch up (see NOTE 1 below).
* HAMMER could build up too many meta-data buffers and cause the system
to deadlock in newbuf. Recode the flusher to allow a block of UNDOs,
the volume header, and all related meta-data buffers to be flushed
piecemeal, and then continue the flush loop without closing out the
transaction. If a crash occurs the recovery code will undo the partial
flush.
* Fix an issue located by FSX under load. The in-memory/on-disk record
merging code was not dealing with in-memory data records properly.
The key field for data records is (base_offset + data_len), not just
(base_offset), so a 'match' between an in-memory data record and an on-disk
data record requires a special-case test. This is the case where the
in-memory record is intended to overwrite the on-disk record, so the
in-memory record must be chosen and the on-disk record discarded for
the purposes of read().
* Fix a bug in hammer_io.c related to the handling of B_LOCKED buffers
that resulted in an assertion at umount time. Buffer cache buffers
were not being properly disassociated from their hammer_buffer counterparts
in the direct-write case.
* The frontend's direct-write capability for truncated buffers (such as
used with small files) was causing an assertion to occur on the backend.
Add an interlock on the related hammer_buffer to prevent the frontend
from attempting to modify the buffer while the backend is trying to
write it to the media.
* Dynamically size the dirty buffer limit. This still needs some work.
(NOTE 1): On read/write performance issues. Currently HAMMER's frontend
VOPs are massively disassociated from modifying B-Tree updates. Even though
a direct-write capability now exists, it applies only to bulk data writes
to disk and NOT to B-Tree updates. Each direct write creates a record
which must be queued to the backend to do the B-Tree update on the
media. The flusher is currently single-threaded and when HAMMER gets
too far behind doing these updates the current safeties will cause
performance to degrade drastically. This is a known issue that
will be addressed.
Revision Changes Path
1.76 +15 -2 src/sys/vfs/hammer/hammer.h
1.20 +104 -107 src/sys/vfs/hammer/hammer_flusher.c
1.67 +65 -7 src/sys/vfs/hammer/hammer_inode.c
1.36 +62 -11 src/sys/vfs/hammer/hammer_io.c
1.63 +19 -9 src/sys/vfs/hammer/hammer_object.c
1.5 +12 -15 src/sys/vfs/hammer/hammer_prune.c
1.17 +14 -13 src/sys/vfs/hammer/hammer_reblock.c
1.22 +43 -24 src/sys/vfs/hammer/hammer_recover.c
1.16 +2 -0 src/sys/vfs/hammer/hammer_transaction.c
1.41 +17 -6 src/sys/vfs/hammer/hammer_vfsops.c
1.61 +18 -1 src/sys/vfs/hammer/hammer_vnops.c