From shamaz.mazum at gmail.com  Sun Mar 15 10:57:54 2015
From: shamaz.mazum at gmail.com (Vasily Postnicov)
Date: Sun, 15 Mar 2015 20:57:54 +0300
Subject: Some questions on HAMMER internals
Message-ID:

Hello.

I have read that the data for any file in a HAMMER filesystem is stored in
8-megabyte blocks (I believe they are called big-blocks), and that all
inodes, directory entries, etc. are indexed in a global per-filesystem
B+tree. When a file is changed, a new element is created in the tree with a
new "create transaction id", and the old element's "delete transaction id"
is updated; this is how history is maintained.

Suppose, then, that I read from a file at /hammer_mountpoint/a, or from an
older version of the same file at /hammer_mountpoint/@@0x/a. How is the
corresponding data in the big-blocks found? If a search in the B-tree is
performed, what key is used?

Also, each element in the tree has a 4-byte "localization" field. The first
two bytes are a PFS id; what are the last two? What are the "rt", "ot",
"key" and "dataof" fields shown by the "hammer show" command?

Is it correct that PFSes have their own obj and inode space, so that if I
mirror one PFS to another, the B-tree will contain elements with the same
obj fields but different localization?

Can anybody explain again what a "fake transaction id" means? I have read
man 1 undo, but still cannot get it.

And the last question (it is the reason I am trying to understand HAMMER a
bit more): I cannot access a file in a snapshot generated by the "hammer
snapq" command. "undo -ai" shows many fake transaction ids, and the kernel
prints a message:

HAMMER: WARNING: Missing inode for dirent "midori"
        obj_id = 0000000272ed2679, asof=0000000280c49ec0, lo=00030000

This can happen for an arbitrary tid, but how can it happen for a snapshot
tid (in my case, 0x0000000280c49ec0)? The current versions of all files
seem to be OK. Should I send a bug report?

With regards,
Vasily.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From freeflyer2 at online.de  Tue Mar 31 06:10:57 2015
From: freeflyer2 at online.de (freeflyer)
Date: Tue, 31 Mar 2015 15:10:57 +0200
Subject: NFS exporting multiple snapshot directories
Message-ID: <20150331131057.GA34720@iota.hb>

Hi,

I have a couple of snapshot directories under /, let's say:

/pfs1
/pfs2
/pfs1.snap
/pfs2.snap

I want to export them via NFS:

/pfs1 -maproot=nobody
/pfs2 -maproot=nobody
/pfs1.snap -maproot=nobody,ro
/pfs2.snap -maproot=nobody,ro

Running 'showmount -e ' shows only the first snapshot directory (plus the
pfs directories). Exchanging the order has the same effect: only the first
snapshot listed in /etc/exports is exported. Any idea how to fix that?

Best,
FF
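The history mechanism Vasily describes above (each change creates a new B-tree element with a fresh "create transaction id" and stamps the old element's "delete transaction id") can be sketched as follows. This is an illustrative model only, not HAMMER's actual on-disk format or code; the record fields and the `lookup_asof` function are assumptions made for the sake of the example:

```python
# Illustrative model of HAMMER-style history lookup (not actual HAMMER code).
# Each record carries a create_tid/delete_tid pair; an as-of read at
# transaction id `asof` selects whichever record was live at that point.

from dataclasses import dataclass

@dataclass
class Record:
    obj_id: int       # per-PFS object (inode) number
    create_tid: int   # transaction id that created this record
    delete_tid: int   # 0 means "still live"
    data: bytes

def lookup_asof(records, obj_id, asof):
    """Return the record for obj_id that was visible at transaction id asof."""
    for rec in records:
        if rec.obj_id != obj_id:
            continue
        deleted = rec.delete_tid != 0 and rec.delete_tid <= asof
        if rec.create_tid <= asof and not deleted:
            return rec
    return None

# A file modified at tid 20: the old record is "deleted" at 20,
# and a new record is created at 20.
history = [
    Record(0x100, create_tid=10, delete_tid=20, data=b"old contents"),
    Record(0x100, create_tid=20, delete_tid=0,  data=b"new contents"),
]

print(lookup_asof(history, 0x100, 15).data)  # b'old contents'
print(lookup_asof(history, 0x100, 25).data)  # b'new contents'
```

Under this model, reading /hammer_mountpoint/@@0x<tid>/a corresponds to a lookup with `asof` set to that tid, while an ordinary read uses the newest live record.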