[PATCH] tmpfs work update 013010 (was tmpfs initial work)

Naoya Sugioka naoya.sugioka at gmail.com
Sun Jan 31 01:14:25 PST 2010


Hi Matt and others,

Here comes the 3rd iteration.

1) Reimplemented tmpfs_read() and tmpfs_write() following Matt's
suggestion, referring to the HAMMER and userfs implementations. The
old vm_page_grab() logic is now gone.
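
Roughly, the read side now looks like this (a simplified sketch rather
than the literal patch; BSIZE/BMASK stand in for the fixed buffer size,
VP_TO_TMPFS_NODE for the node accessor, and error handling is trimmed):

    static int
    tmpfs_read(struct vop_read_args *ap)
    {
            struct vnode *vp = ap->a_vp;
            struct uio *uio = ap->a_uio;
            struct tmpfs_node *node = VP_TO_TMPFS_NODE(vp);
            struct buf *bp;
            off_t base_offset;
            size_t offset, len;
            int error = 0;

            while (uio->uio_resid > 0 && uio->uio_offset < node->tn_size) {
                    offset = uio->uio_offset & BMASK;        /* offset within buffer */
                    base_offset = uio->uio_offset - offset;  /* buffer-aligned offset */

                    /* map the backing VM pages through the buffer cache */
                    error = bread(vp, base_offset, BSIZE, &bp);
                    if (error) {
                            brelse(bp);
                            break;
                    }
                    len = BSIZE - offset;
                    if (len > uio->uio_resid)
                            len = uio->uio_resid;
                    if ((off_t)len > node->tn_size - uio->uio_offset)
                            len = (size_t)(node->tn_size - uio->uio_offset);

                    /* copy straight out of the mapped pages */
                    error = uiomove((char *)bp->b_data + offset, len, uio);
                    bqrelse(bp);
                    if (error)
                            break;
            }
            return (error);
    }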

2) tmpfs_strategy() is a temporary hack: it just calls
vm_pager_strategy(). Since tmpfs's anonymous VM object is of the swap
object type, the request will reach the swap backing store. As Matt
suggested, I'll come back here later to implement more efficient logic.
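
In code the hack is just this (a sketch; vm_pager_strategy() taking the
object plus a struct bio is from memory, so double-check the signature):

    static int
    tmpfs_strategy(struct vop_strategy_args *ap)
    {
            /*
             * Temporary hack: hand the BIO straight to the pager.  The
             * anonymous object is a swap object, so the request ends up
             * in the swap pager's backing store.
             */
            vm_pager_strategy(ap->a_vp->v_object, ap->a_bio);
            return (0);
    }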

3) Implemented tmpfs_advlock() using the lockf structure.
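
This is basically the same one-liner the other filesystems use (a
sketch; tn_advlock is my name for the lockf head hung off the tmpfs
node):

    static int
    tmpfs_advlock(struct vop_advlock_args *ap)
    {
            struct tmpfs_node *node = VP_TO_TMPFS_NODE(ap->a_vp);

            /* delegate POSIX byte-range locks to the generic lockf code */
            return (lf_advlock(ap, &node->tn_advlock, node->tn_size));
    }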

The change makes tmpfs more stable ("make nativekernel" works on tmpfs
:) though some issues remain: 1) an umount will hit an assertion;
2) there are still cases where non-resident pages are observed from
tmpfs; 3) the userland command (mount_tmpfs) has seen no progress and
still lacks features.

Beyond this change I'll start looking at the new truncation/extension
API, then dive into the issues and items above.

Thank you; any comments are always welcome.
-Naoya

On Wed, Jan 20, 2010 at 1:50 AM, Matthew Dillon
<dillon at apollo.backplane.com> wrote:
>
> :Hi Matt,
> :
> :Thank you for the precise response. It is the same strategy as the
> :previous porter's. I thought it was a way to remove a dirty hack
> :(the vm object and the anonymous object sharing rb_memq).
> :I'll play around with implementing a buffer cache (or maybe a page
> :cache)...it is the most interesting part of this porting.
> :
> :Regarding the old APIs, I'll try to migrate the code to follow your
> :vm changes first.
> :
> :thank you again,
> :-Naoya
>
>    Cool.  Using the buffer cache is actually pretty easy to do.
>    You are already using vop_stdgetpages() and vop_stdputpages()
>    so you don't have to worry about those functions.  In fact,
>    those functions require a working buffer cache anyway so when
>    you implement the buffer cache mmap() will magically start
>    working properly.
>
>    If you use a fixed buffer size (say 16K) then you can use the
>    new API to control truncations and extensions.  Basically
>    nvtruncbuf() and nvextendbuf().  NFS uses the new API now too
>    so you have a working example you can use.
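
For my own notes, here is the shape of the new API as I read it from
the nfs code -- a sketch, and the argument lists may not be exact:

    /* in tmpfs_setattr() when the size changes; BSIZE is the fixed
     * 16K buffer size Matt mentions, and the boff arguments are
     * offsets within a buffer */
    if (newsize < oldsize) {
            error = nvtruncbuf(vp, newsize, BSIZE,
                               (int)(newsize & (BSIZE - 1)));
    } else {
            error = nvextendbuf(vp, oldsize, newsize, BSIZE, BSIZE,
                                (int)(oldsize & (BSIZE - 1)),
                                (int)(newsize & (BSIZE - 1)), 0);
    }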
>
>    Using the buffer cache is pretty easy.  Essentially
>    you are implementing a buffering layer in vop_read() and
>    vop_write() which directly maps the VM pages in the backing
>    object into kernel memory via the buffer cache, allowing
>    you to use uiomove() to copy data from user memory into
>    the VM pages and vice versa.  For reference material HAMMER
>    has the easiest-to-follow code for vop_read/vop_write.  NFS
>    is fairly easy to follow too.
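
My loops ended up following exactly that shape -- here is the write
side as a sketch (same BSIZE/BMASK placeholders, error handling
trimmed, and the file-extension path omitted):

    while (uio->uio_resid > 0) {
            offset = uio->uio_offset & BMASK;
            base_offset = uio->uio_offset - offset;
            len = BSIZE - offset;
            if (len > uio->uio_resid)
                    len = uio->uio_resid;

            /*
             * A full-buffer overwrite can use getblk(); a partial one
             * has to bread() so the untouched bytes survive.
             */
            if (offset == 0 && len == BSIZE) {
                    bp = getblk(vp, base_offset, BSIZE, 0, 0);
                    error = 0;
            } else {
                    error = bread(vp, base_offset, BSIZE, &bp);
            }
            if (error) {
                    brelse(bp);
                    break;
            }
            error = uiomove((char *)bp->b_data + offset, len, uio);
            bdwrite(bp);    /* delayed write keeps the data in the cache */
            if (error)
                    break;
    }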
>
>    Apart from vop_read, vop_write, and dealing with ftruncate()
>    (file extensions and truncations via vop_setattr), plus the
>    implied file extension which occurs when a write() appends or
>    extends a file via vop_write, you also need to deal with
>    vop_strategy.
>
>    vop_strategy is the function which in a normal filesystem is
>    used to read data from the media into the buffer cache or write
>    data from the buffer cache to the media.  The READ operation is
>    going to be a NOP.  The WRITE operation will have to be coded to
>    deal with redirtying the underlying VM pages.
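
That redirtying part is what I still owe -- my understanding of this
intermediate version, as a sketch (the b_cmd/b_xio field names are
from memory):

    static int
    tmpfs_strategy(struct vop_strategy_args *ap)
    {
            struct bio *bio = ap->a_bio;
            struct buf *bp = bio->bio_buf;
            int i;

            if (bp->b_cmd == BUF_CMD_READ) {
                    /* the VM pages are the backing store: nothing to read */
                    bp->b_resid = 0;
            } else {
                    /* don't write anywhere -- mark the underlying pages
                     * dirty again so the data stays resident */
                    for (i = 0; i < bp->b_xio.xio_npages; ++i)
                            vm_page_dirty(bp->b_xio.xio_pages[i]);
                    bp->b_resid = 0;
            }
            biodone(bio);
            return (0);
    }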
>
>    --
>
>    In terms of associating swap space with the VM object, you don't
>    have to worry about it until you get everything else working.
>    Once you do if you want to take a stab at it what you would do
>    is implement the reading from swap and writing to swap in
>    vop_strategy().  READ would no longer be a NOP, and WRITE would
>    write the data to swap space instead of redirtying the VM pages.
>
>                                                -Matt
>
>
Attachment:
0003-tmpfs-013010-update-work.patch
URL: <http://lists.dragonflybsd.org/pipermail/users/attachments/20100131/20b3489b/attachment-0016.bin>

