Porting tmpfs
Nikita Glukhov
a63fvb48 at gmail.com
Sat Mar 28 12:33:32 PDT 2009
> I think for now just use the buffer cache and get it working, even
> though that means syncing data between the buffer cache and the backing
> uobj.
Yesterday I got it working using the buffer cache. vop_read() and vop_write()
have been stolen from HAMMER. I have also implemented tmpfs_strategy(), simply by
moving tmpfs_mappedread() and tmpfs_mappedwrite() there with some changes.
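For reference, the read side now looks roughly like the sketch below. It is a
simplified version of what I took out of HAMMER, so treat it as a sketch only:
TMPFS_BLKSIZE is just a placeholder for the real block size, and
VP_TO_TMPFS_NODE()/tn_size are the usual tmpfs names.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/buf.h>
#include <sys/uio.h>
#include <sys/vnode.h>

#include "tmpfs.h"			/* struct tmpfs_node, VP_TO_TMPFS_NODE() */

#define TMPFS_BLKSIZE	PAGE_SIZE	/* placeholder block size */
#define TMPFS_BLKMASK	(TMPFS_BLKSIZE - 1)

static int
tmpfs_read(struct vop_read_args *ap)
{
	struct vnode *vp = ap->a_vp;
	struct uio *uio = ap->a_uio;
	struct tmpfs_node *node = VP_TO_TMPFS_NODE(vp);
	struct buf *bp;
	off_t base_offset;
	size_t offset, len;
	int error = 0;

	while (uio->uio_resid > 0 && uio->uio_offset < node->tn_size) {
		/*
		 * Split the request into block-sized pieces, the same way
		 * HAMMER does, and pull each block through the buffer cache.
		 */
		offset = (size_t)uio->uio_offset & TMPFS_BLKMASK;
		base_offset = uio->uio_offset - offset;

		error = bread(vp, base_offset, TMPFS_BLKSIZE, &bp);
		if (error) {
			brelse(bp);
			break;
		}

		/* Do not copy past EOF or past the caller's request. */
		len = TMPFS_BLKSIZE - offset;
		if (len > uio->uio_resid)
			len = uio->uio_resid;
		if (len > node->tn_size - uio->uio_offset)
			len = (size_t)(node->tn_size - uio->uio_offset);

		error = uiomove((char *)bp->b_data + offset, len, uio);
		bqrelse(bp);
		if (error)
			break;
	}
	return (error);
}

The write path would be the mirror image of this, with the extra step of
extending tn_size when writing past EOF.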
Now it is possible to execute files from tmpfs, and it unmounts without the
deadlocks it had earlier. It survives fsstress, but still has problems with
fsx: it reads bad data after truncating a file up. When mapped writes are
used, one error sometimes occurs in vnode_pager_generic_getpages() after
VOP_READ(): "page failed but no I/O error". I became familiar with that error
while trying to implement vop_getpages().
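Since mapped I/O now goes through the buffer cache and ends up in the strategy
routine, here is roughly its shape (same headers as the sketch above).
tmpfs_uobj_read()/tmpfs_uobj_write() and tn_aobj are only stand-in names for
the code that was moved out of tmpfs_mappedread()/tmpfs_mappedwrite() and for
whatever field the node keeps its swap-backed object in.

static int
tmpfs_strategy(struct vop_strategy_args *ap)
{
	struct bio *bio = ap->a_bio;
	struct buf *bp = bio->bio_buf;
	struct tmpfs_node *node = VP_TO_TMPFS_NODE(ap->a_vp);
	vm_object_t uobj = node->tn_aobj;	/* stand-in for the real field */
	int error;

	switch (bp->b_cmd) {
	case BUF_CMD_READ:
		/* Fill bp->b_data from the pages of the backing uobj. */
		error = tmpfs_uobj_read(uobj, bio->bio_offset,
					bp->b_data, bp->b_bcount);
		break;
	case BUF_CMD_WRITE:
		/* Push bp->b_data out to the pages of the backing uobj. */
		error = tmpfs_uobj_write(uobj, bio->bio_offset,
					 bp->b_data, bp->b_bcount);
		break;
	default:
		error = EINVAL;
		break;
	}

	bp->b_error = error;
	if (error) {
		bp->b_flags |= B_ERROR;
		bp->b_resid = bp->b_bcount;
	} else {
		bp->b_resid = 0;
	}
	biodone(bio);
	return (0);
}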
> I think the way to fix this is to implement a feature in the real
> kernel that tells it that the VM object backing the vnode should
> never be cleaned (so clean pages in the object are never destroyed),
> and then instead of destroying the VM object when the vnode is
> reclaimed we simply remove the vnode association and keep the VM
> object as the backing uobj.
Currently the uobj is allocated by swap_pager. Is it possible to use the
swap_pager object as the vnode's VM object to get swapping working, or would
that interfere with the buffer cache?
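To be concrete, what I mean is roughly this (just a sketch of the idea, not
tested; tmpfs_attach_uobj() is a made-up name):

/*
 * Sketch only: when instantiating the vnode for a regular file, reuse
 * the node's swap-backed object (uobj) as the vnode's VM object instead
 * of creating a separate OBJT_VNODE object.  Whether the vnode pager
 * and the buffer cache tolerate this is exactly the question.
 */
static void
tmpfs_attach_uobj(struct vnode *vp, vm_object_t uobj)
{
	vm_object_reference(uobj);	/* extra ref for the vnode's use */
	vp->v_object = uobj;
}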
Attachment: tmpfs.patch