<div dir="ltr">Performance is only one goal of this project and system call overhead is only a single aspect of performance. There are other objectives here, to be able to use sysv services inside of a jail w/o exposing the global kernel sysv resources to that jail. To be able to compartmentalize sysv resource usage. To make it easier to ration out / enforce a quota on sysv ipc resources for individual consumers.<div>
Implementing semaphores and message queues on top of SysV shm would entirely defeat large parts of the purpose of this project.

Look harder at mmap(2).
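As a rough illustration of what I mean (the file path and size below are only placeholders), two cooperating processes can share a region simply by mapping the same file with MAP_SHARED, with no SysV machinery involved:

    /* Minimal sketch: any process that maps SHARED_PATH with MAP_SHARED
     * sees the same bytes.  Path and size are placeholders. */
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    #define SHARED_PATH "/tmp/ipc-demo"
    #define SHARED_SIZE 4096

    int
    main(void)
    {
        int fd = open(SHARED_PATH, O_RDWR | O_CREAT, 0600);
        if (fd < 0 || ftruncate(fd, SHARED_SIZE) < 0)
            return 1;

        char *p = mmap(NULL, SHARED_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        /* Another process mapping the same file sees this write. */
        strcpy(p, "hello from the mmap side");

        munmap(p, SHARED_SIZE);
        close(fd);
        return 0;
    }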
Sam

On Sun, Jul 7, 2013 at 5:08 PM, grigore larisa <larisagrigore@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hello,<div><br></div><div>This week I have done the following things:</div><div>- extended my tests in order to support multiple clients</div>
- implemented a hashtable to look up a client quickly by its pid; it can also be used to check whether a client is already connected (a rough sketch follows this list)
- resolved some bugs related to locking and polling
- investigated the impact of implementing shared memory in userland
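For reference, a minimal sketch of the pid-keyed lookup table (chained buckets; the bucket count and struct layout here are only placeholders, not the actual daemon code):

    #include <sys/types.h>
    #include <stdlib.h>

    #define CLIENT_BUCKETS 64

    struct client {
        pid_t pid;
        struct client *next;    /* chain within one bucket */
        /* per-client connection state would go here */
    };

    static struct client *client_table[CLIENT_BUCKETS];

    static unsigned
    client_hash(pid_t pid)
    {
        return (unsigned)pid % CLIENT_BUCKETS;
    }

    /* Returns the client with the given pid, or NULL if it is not
     * connected yet. */
    static struct client *
    client_lookup(pid_t pid)
    {
        struct client *c = client_table[client_hash(pid)];

        while (c != NULL && c->pid != pid)
            c = c->next;
        return c;
    }

    static struct client *
    client_insert(pid_t pid)
    {
        unsigned h = client_hash(pid);
        struct client *c = calloc(1, sizeof(*c));

        if (c != NULL) {
            c->pid = pid;
            c->next = client_table[h];
            client_table[h] = c;
        }
        return c;
    }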
These days I've studied how shm is implemented in the kernel and how I could move it to userland. Moving it to userland means moving only the data associated with each segment and the permission checks; the operation of allocating or mapping a segment must still be done in the kernel.
<div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px">As far as I understand, the project purpose is to implement in userland those parts of sysv ipc resources that help big ipc consumers to have better performance. I think this can be a good idea for semaphores and message queues, where some syscalls can be avoided. In shared memory case, I don't see that possible.</div>
<div style="font-family:arial,sans-serif;font-size:13px">For shmget call, two messages must be sent (to the daemon and back to the client) plus a syscall made by daemon when it must allocate a segment (when some client need a shared memory resource that doesn't exist, such a segment is allocated). For shmclt, one or two messages must be sent, depending one the command. Maybe, only in shmat and shmdt I can avoid to send messages to the daemon if some data are kept by the driver (number of clients that use the resource for example) but the client will still do a syscall to map/detach the shared memory object (as in the current implementation).</div>
<div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px">In semaphores and message queues case, even if obtaining/controlling (*get, *ctl) the object is more expensive than the kernel implementation because of the communication with the daemon, semop()/msgget()/msgrcv() (that are more frequently used) in userland are less expensive because they do operations on shared memory.</div>
<div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px">I think is a better idea to implement userland semaphores and queues on top of sysv shm already existing. What do you think about this?</div>
<span class="HOEnZb"><font color="#888888">
<div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px">Larisa</div></font></span></div></div>