Ridiculous idea: Cache as ramdisk?
Matthew Dillon
dillon at apollo.backplane.com
Tue Sep 23 09:19:53 PDT 2003
:> applications for memory, it will push writes out to swap. So it only uses
:> as much memory/swap space as what you put in swap.
:
:as far as i understood the original poster, he wants two things:
:
:a) he (and i too) wants a huge buffer cache per filesystem, one that
:meets the criteria kip macy illustrates for tmpfs.
:
:i guess it is important for this scheme to have one buffer cache per
:filesystem, because you may want to treat /usr differently than
:/my_big_fat_static_data_partition, but i am not sure whether overhead
:becomes an issue with more than one cache.
:
:b) he wants a way to retain a specific file in the cache.
:this can be generalized to the idea that each file has its own
:buffer-cache priority.
:
:this surely would be nice, but i fear overhead comes into play
:again.
:
:apart from this, i like this approach very much.
:
:~ibotty
Well, the existing buffer and path caches are not really well suited to
'locking' certain elements into ram.  They really want a 'backing store'
to play with.
But, that said, the system will basically attempt to cache as much as it
can in memory.  The two primary limitations that tend to cause cache
flushes are kern.maxvnodes (in order to flush a vnode, the underlying
data cached with that vnode must also be flushed, even if memory is
available to hold it) and limits within the buffer cache itself.
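As a point of reference, the vnode limit is just a sysctl.  Here is a
minimal userland sketch (plain sysctl(3) usage, nothing specific to the
proposal below) that reads kern.maxvnodes; raising it lets more vnodes,
and the data cached under them, stay resident:

    /* Read kern.maxvnodes via sysctl(3). */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int maxvnodes;
            size_t len = sizeof(maxvnodes);

            if (sysctlbyname("kern.maxvnodes", &maxvnodes, &len,
                NULL, 0) < 0) {
                    perror("sysctlbyname");
                    return (1);
            }
            printf("kern.maxvnodes = %d\n", maxvnodes);

            /*
             * To raise the limit (root only), pass a new value in
             * the last two arguments instead:
             *
             *     int newmax = maxvnodes * 2;
             *     sysctlbyname("kern.maxvnodes", NULL, NULL,
             *         &newmax, sizeof(newmax));
             */
            return (0);
    }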
I believe we could implement a general mechanism to turn off
delayed-writes, write-behind, and related machinery on a per-filesystem
basis, and basically revert the associated data from dirty buffer cache
entries to dirty VM pages which the VM paging system can then page out
on demand.  This removes the buffer cache limitations and leaves us
only with the vnode limitations.
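Purely as a sketch of that idea (MNT_CACHEONLY and the hook below are
hypothetical, they do not exist in the tree), the per-filesystem switch
might sit in the buffer write path roughly like this:

    #include <sys/param.h>
    #include <sys/buf.h>
    #include <sys/mount.h>
    #include <sys/vnode.h>

    /*
     * Hypothetical policy hook: if the mount is flagged cache-only,
     * do not push the buffer to its backing store.  Re-dirty and
     * release it, leaving dirty VM pages for the pageout daemon to
     * deal with on demand; otherwise take the normal write path.
     */
    static int
    bwrite_policy(struct buf *bp)
    {
            struct mount *mp = bp->b_vp->v_mount;

            if (mp != NULL && (mp->mnt_flag & MNT_CACHEONLY)) {
                    bdirty(bp);     /* keep the pages marked dirty */
                    brelse(bp);     /* release without issuing I/O */
                    return (0);
            }
            return (bwrite(bp));    /* normal write to backing store */
    }

A real implementation would also need the buffer daemon and the syncer
to honor the flag, otherwise the re-dirtied buffers would simply be
flushed a little later anyway.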
The MFS filesystem comes fairly close to doing this already, since
its backing store is simply the memory associated with a process, but
it has the disadvantage of storing data twice (once in the VM / buffer
cache, once in the MFS process's memory space).  Despite the
duplication of data, the fact that MFS filesystems do not automatically
flush data to a physical storage device has proven to be a huge
performance advantage over the years.
So, in short, we already have the pieces; we just have to get them to
fit together in a reasonable way.
-Matt
Matthew Dillon
<dillon at xxxxxxxxxxxxx>