<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">2015-03-16 2:57 GMT+09:00 Vasily Postnicov <span dir="ltr"><<a href="mailto:shamaz.mazum@gmail.com" target="_blank">shamaz.mazum@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div><div>Hello.<br><br></div>I have read that data corresponding to any file in HAMMER fs is stored in 8-megabyte blocks (I believe, they are called big-blocks). Also, all inodes, directory entries, etc. are indexed in a global per-FS B+tree. If one, for example, changes a file, a new element in the tree is created, starting with new "create transaction id" and an old element's delete transaction id is updated, so, in this way, the history is maintained.<br><br></div>Suppose then, that I read from a file with path /hammer_mountpoint/a or from older version of the same file /hammer_mountpoint/@@0x<tid>/a. So how corresponding data in big blocks can be found? If a search in the B-tree is performed, then what key is used?<br><br></div></div></div></div></div></div></div></div></blockquote><div><br></div><div><div>how: you search the per-fs (not per-pfs) btree using whatever keys in question, including your tid.</div><div>what: see hammer_btree_cmp() in sys/vfs/hammer/hammer_btree.c</div><div><br></div><div>whatever you do to the fs (e.g. read(2), write(2), etc) will eventually make a query to the ondisk btree (or inmemory rbtree if possible), and whatever stored ondisk is storage space obtained from big block allocator.</div></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div></div>Also, each element in the tree has a 4-byte "localization" field. The first two bytes is a PFS id. What are the last two? What are "rt", "ot", "key" and "dataof" fields shown by "hammer show" command? Is that correct, that PFSes have their own obj and inode space, so if I mirror one PFS to another, B-tree will have elements with the same obj fields, but with different localization?<br></div></div></div></div></div></div></div></blockquote><div><br></div><div><br></div><div><div>the upper 16 bits (localization >> 16) is a pfs id whether 'localization' is the one from ondisk node, or the one from inmemory inode member.</div><div>the lower 16 bits of the localization is a type field, either inode or not inode.<br></div><div><br></div><div>the upper 16 bits of the localization is made on pfs initialization, so if two pfs (e.g. master and slave) have different pfs id, then they do have different localization value whether you mirror-copy it or do something else.<br></div></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><br></div>Can anybody again explain what "fake transaction id" means? I read man 1 undo, but still cannot get it.<br><br></div>And the last question (it's the reason why I try to understand HAMMER a bit more): I cannot access a file in a snapshot generated by "hammer snapq" command. 
"undo -ai" shows many fake transaction ids and kernel prints a message:<br><br></div>HAMMER: WARNING: Missing inode for dirent "midori"<br></div> obj_id = 0000000272ed2679, asof =0000000280c49ec0, lo=00030000<br><br></div>It can happen for an<span> arbitrary tid, but how can it be for a snapshot tid (in my case, 0x</span><span>0000000280c49ec0)? Current versions of all files seems to be OK. Should I send a bug report?<br><br></span></div><span> With regards, Vasily.<br></span></div>
> Can anybody explain once more what a "fake transaction id" means? I read
> man 1 undo, but still cannot get it.
>
> And the last question (it is the reason why I am trying to understand
> HAMMER a bit more): I cannot access a file in a snapshot generated by the
> "hammer snapq" command. "undo -ai" shows many fake transaction ids and the
> kernel prints a message:
>
> HAMMER: WARNING: Missing inode for dirent "midori"
>     obj_id = 0000000272ed2679, asof = 0000000280c49ec0, lo = 00030000
>
> This can happen for an arbitrary tid, but how can it happen for a snapshot
> tid (in my case, 0x0000000280c49ec0)? The current versions of all files
> seem to be OK. Should I send a bug report?
>
> With regards, Vasily.