[DragonFlyBSD - Bug #2803] HAMMER: Warning: UNDO area too small!

bugtracker-admin at leaf.dragonflybsd.org
Wed Mar 11 15:42:45 PDT 2015


Issue #2803 has been updated by tkusumi.

Category set to VFS subsystem

I don't know the details of the situation, but maybe either of the following should work?
(A) is the straightforward way, but (B) could be effective in certain cases.

(A) Specify a larger on-disk UNDO FIFO size with -u on newfs_hammer:
     -u undosize
             Specify the size of the fixed UNDO/REDO FIFO.  The undosize is
             specified in bytes.  By default 0.1% of the root volume's size is
             used, with a reasonable minimum and a reasonable cap.  The
             UNDO/REDO FIFO is used to sequence meta-data out to the media for
             instant crash recovery.
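
For example, something like this when creating the filesystem (the label and
device path are just placeholders; I believe -u also accepts k/m/g suffixes
in addition to plain bytes):

     newfs_hammer -L DATA -u 1g /dev/serno/<serial>.s1d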

(B) Change the size of the in-memory LRU list that keeps the fs from consuming on-disk UNDO FIFO space when the same (offset, size) pair has recently been added to it. The size of the LRU is currently 1024 (HAMMER_MAX_UNDOS, not tunable), and this number seems to be some sort of heuristic.


I don't know if (B) is really that effective, but when the fs somehow happens to generate undos for the same (offset, size) pairs, having too small an LRU ends up generating duplicated on-disk undos, which I think is unnecessary and should be avoided. (By the way, it actually uses an RB-tree to look up the cache instead of walking the LRU list, so having a larger LRU size isn't a big problem in terms of performance.)
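
To illustrate the idea (this is just a standalone sketch, not the HAMMER code;
the real hammer_enter_undo_history() uses an RB-tree for the lookup and a
TAILQ for LRU recycling, while this uses a plain array and round-robin
replacement to stay short):

#include <stdio.h>
#include <stdint.h>

#define MAX_UNDOS       4       /* stands in for hammer_max_undos */

struct undo_hist {
        uint64_t offset;
        int      bytes;
        int      used;
};

static struct undo_hist cache[MAX_UNDOS];
static int next_victim;         /* round-robin recycling index */

/*
 * Return 1 if (offset, bytes) was generated recently, in which case
 * no new on-disk undo needs to be laid down for it.
 */
static int
enter_undo_history(uint64_t offset, int bytes)
{
        int i;

        for (i = 0; i < MAX_UNDOS; ++i) {
                if (cache[i].used &&
                    cache[i].offset == offset && cache[i].bytes == bytes)
                        return (1);     /* duplicate, skip on-disk undo */
        }
        /* Not cached: remember it, recycling the oldest slot when full. */
        cache[next_victim].offset = offset;
        cache[next_victim].bytes = bytes;
        cache[next_victim].used = 1;
        next_victim = (next_victim + 1) % MAX_UNDOS;
        return (0);
}

int
main(void)
{
        /*
         * Revisit 5 distinct (offset, size) pairs with only 4 cache slots:
         * every lookup misses, so every write generates a fresh undo even
         * though the same pairs keep coming back.
         */
        uint64_t off;
        int pass, hits = 0;

        for (pass = 0; pass < 2; ++pass)
                for (off = 0; off < 5; ++off)
                        hits += enter_undo_history(off * 512, 512);
        printf("duplicates detected: %d\n", hits);      /* prints 0 */
        return (0);
}

With a cache of 5 or more entries the second pass would detect all 5 duplicates; that is the effect a larger (tunable) LRU is meant to have.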

I actually made a patch for (B) about a month ago that makes the LRU size tunable via sysctl, but I never really tested it.


---
commit 50e302b13040146bc9756ce2277c5e68c399d452
Author: Tomohiro Kusumi <xxx>
Date:   Mon Feb 16 00:14:24 2015 +0900

    .

diff --git a/sys/vfs/hammer/hammer.h b/sys/vfs/hammer/hammer.h
index 0921035..be118ce 100644
--- a/sys/vfs/hammer/hammer.h
+++ b/sys/vfs/hammer/hammer.h
@@ -826,10 +826,7 @@ typedef struct hammer_reserve *hammer_reserve_t;
 /*
  * The undo structure tracks recent undos to avoid laying down duplicate
  * undos within a flush group, saving us a significant amount of overhead.
- *
- * This is strictly a heuristic.
  */
-#define HAMMER_MAX_UNDOS               1024
 #define HAMMER_MAX_FLUSHERS            4

 struct hammer_undo {
@@ -942,7 +939,7 @@ struct hammer_mount {
        struct hammer_lock snapshot_lock;
        struct hammer_lock volume_lock;
        struct hammer_blockmap  blockmap[HAMMER_MAX_ZONES];
-       struct hammer_undo      undos[HAMMER_MAX_UNDOS];
+       struct hammer_undo      *undos;
        int                     undo_alloc;
        TAILQ_HEAD(, hammer_undo)  undo_lru_list;
        TAILQ_HEAD(, hammer_reserve) delay_list;
@@ -1065,6 +1062,7 @@ extern int hammer_double_buffer;
 extern int hammer_btree_full_undo;
 extern int hammer_yield_check;
 extern int hammer_fsync_mode;
+extern int hammer_max_undos;
 extern int hammer_autoflush;
 extern int64_t hammer_contention_count;

diff --git a/sys/vfs/hammer/hammer_undo.c b/sys/vfs/hammer/hammer_undo.c
index 9e644a2..c48ab3f 100644
--- a/sys/vfs/hammer/hammer_undo.c
+++ b/sys/vfs/hammer/hammer_undo.c
@@ -443,7 +443,7 @@ hammer_enter_undo_history(hammer_mount_t hmp, hammer_off_t offset, int bytes)
                node->bytes = bytes;
                return(0);
        }
-       if (hmp->undo_alloc != HAMMER_MAX_UNDOS) {
+       if (hmp->undo_alloc < hammer_max_undos) {
                node = &hmp->undos[hmp->undo_alloc++];
        } else {
                node = TAILQ_FIRST(&hmp->undo_lru_list);
diff --git a/sys/vfs/hammer/hammer_vfsops.c b/sys/vfs/hammer/hammer_vfsops.c
index 4e23479..d503f6c 100644
--- a/sys/vfs/hammer/hammer_vfsops.c
+++ b/sys/vfs/hammer/hammer_vfsops.c
@@ -115,6 +115,7 @@ int hammer_double_buffer;
 int hammer_btree_full_undo = 1;
 int hammer_yield_check = 16;
 int hammer_fsync_mode = 3;
+int hammer_max_undos = 1024;
 int64_t hammer_contention_count;
 int64_t hammer_zone_limit;

@@ -287,6 +288,8 @@ SYSCTL_INT(_vfs_hammer, OID_AUTO, yield_check, CTLFLAG_RW,
           &hammer_yield_check, 0, "");
 SYSCTL_INT(_vfs_hammer, OID_AUTO, fsync_mode, CTLFLAG_RW,
           &hammer_fsync_mode, 0, "");
+SYSCTL_INT(_vfs_hammer, OID_AUTO, max_undos, CTLFLAG_RW,
+          &hammer_max_undos, 0, "");

 /* KTR_INFO_MASTER(hammer); */

@@ -496,6 +499,11 @@ hammer_vfs_mount(struct mount *mp, char *mntpt, caddr_t data,
                hmp->snapshot_lock.refs = 1;
                hmp->volume_lock.refs = 1;

+               hmp->undos = kmalloc(sizeof(*hmp->undos) * hammer_max_undos,
+                               M_HAMMER, M_WAITOK | M_ZERO);
+
                TAILQ_INIT(&hmp->delay_list);
                TAILQ_INIT(&hmp->flush_group_list);
                TAILQ_INIT(&hmp->objid_cache_list);
@@ -955,6 +963,7 @@ hammer_free_hmp(struct mount *mp)
        kmalloc_destroy(&hmp->m_misc);
        kmalloc_destroy(&hmp->m_inodes);
        lwkt_reltoken(&hmp->fs_token);
+       kfree(hmp->undos, M_HAMMER);
        kfree(hmp, M_HAMMER);
 }
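
If anyone wants to try it: the patch only sizes the undos array from
hammer_max_undos in hammer_vfs_mount(), so the sysctl has to be set before
the filesystem is mounted, e.g. (the paths are just placeholders):

     sysctl vfs.hammer.max_undos=4096
     mount_hammer /dev/serno/<serial>.s1d /data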

----------------------------------------
Bug #2803: HAMMER: Warning: UNDO area too small!
http://bugs.dragonflybsd.org/issues/2803#change-12623

* Author: ftigeot
* Status: New
* Priority: Normal
* Assignee: 
* Category: VFS subsystem
* Target version: 
----------------------------------------
The kernel prints this message when mounting a ~20 TB HAMMER filesystem.

No special undo size was used with newfs_hammer, and it didn't complain when it created the filesystem.

The system is running DragonFly 4.1 (current master).


