[GSOC] System V IPC in userspace week3 report

Samuel J. Greear sjg at evilcode.net
Sun Jul 7 16:33:41 PDT 2013


Performance is only one goal of this project, and system call overhead is
only a single aspect of performance. There are other objectives here: to be
able to use sysv services inside of a jail w/o exposing the global kernel
sysv resources to that jail, to compartmentalize sysv resource usage, and
to make it easier to ration out / enforce a quota on sysv ipc resources for
individual consumers.

Implementing semaphores and message queues on top of sysv shm would
entirely defeat large parts of the purpose of this project.

Look harder at mmap(2).

Sam


On Sun, Jul 7, 2013 at 5:08 PM, grigore larisa <larisagrigore at gmail.com> wrote:

> Hello,
>
> This week I have done the following things:
> - extended my tests to support multiple clients
> - implemented a hashtable to easily find a client by its pid. It
> can be used to verify whether a client is already connected.
> - resolved some bugs related to locking and polling
> - investigated the impact of implementing shared memory in userland
>
> These days I've studied how shm is implemented in the kernel and how I
> could move it to userland. Moving it to userland means moving only the
> data associated with each segment and the permission checks. The operation
> of allocating or mapping a segment must still be done in the kernel.
>
> As far as I understand, the purpose of the project is to implement in
> userland those parts of the sysv ipc resources that help big ipc consumers
> get better performance. I think this can be a good idea for semaphores and
> message queues, where some syscalls can be avoided. In the shared memory
> case, I don't see how that is possible.
> For a shmget call, two messages must be sent (to the daemon and back to
> the client), plus a syscall made by the daemon when it must allocate a
> segment (when a client needs a shared memory resource that doesn't exist,
> such a segment is allocated). For shmctl, one or two messages must be
> sent, depending on the command. Only in shmat and shmdt can I perhaps
> avoid sending messages to the daemon, if some data is kept by the driver
> (the number of clients using the resource, for example), but the client
> will still do a syscall to map/detach the shared memory object (as in the
> current implementation).
>
> In the semaphores and message queues case, even if obtaining/controlling
> the object (*get, *ctl) is more expensive than in the kernel
> implementation because of the communication with the daemon,
> semop()/msgsnd()/msgrcv() (which are used more frequently) are less
> expensive in userland because they do their operations on shared memory.
>
> I think it is a better idea to implement userland semaphores and queues
> on top of the already existing sysv shm. What do you think about this?
>
> Larisa
>

