System V IPC in userspace week 13 report

Samuel J. Greear sjg at evilcode.net
Mon Sep 16 07:26:24 PDT 2013


Larisa,

Excellent work!

I know you had written some test code to exercise the SysV APIs you
implemented. How hard would it be to extend that test code into a
micro-benchmark for each API that performs an operation n times and at
the end prints statistics on average/max call times? (gettimeofday(2)
overhead would be as much as the calls themselves, so you would
probably have to either call the op 100+ times per timing sample or
use the TSC.) If this is fairly easy to do, then even better would be
a concurrent version where you can specify a number of threads to
contend against a single resource. Such tests shouldn't be concerned
with the performance of, e.g., semaphore creation, only modifying
operations -- if that makes things easier.
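
Something like the following is what I have in mind -- an untested
sketch, x86-only since it reads the TSC directly, with semop(2) on a
private semaphore standing in for whichever operation you want to
measure:

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define ITERS   100000                  /* arbitrary iteration count */

int
main(void)
{
        int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
        struct sembuf up = { 0, 1, 0 };
        struct sembuf down = { 0, -1, 0 };
        uint64_t t0, dt, total = 0, max = 0;
        int i;

        for (i = 0; i < ITERS; i++) {
                t0 = __rdtsc();
                semop(semid, &up, 1);   /* the ops under test */
                semop(semid, &down, 1);
                dt = __rdtsc() - t0;
                total += dt;
                if (dt > max)
                        max = dt;
        }
        printf("avg %ju max %ju cycles per up/down pair\n",
            (uintmax_t)(total / ITERS), (uintmax_t)max);
        semctl(semid, 0, IPC_RMID);
        return (0);
}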

I wouldn't consider doing this a high priority; getting your code ready
for commit is probably the most important thing right now. It would be
interesting to see the numbers, though, and the resulting data may well
indicate where performance improvements are most needed.

Best,
Sam


On Mon, Sep 16, 2013 at 8:12 AM, Grigore Larisa <larisagrigore at gmail.com> wrote:

> Hi all,
>
> This week I have integrated my implementation with PostgreSQL and I
> have been doing performance testing with pgbench. The results are
> comparable with the kernel implementation. I had a bug in my mutex
> implementation because I thought lwp_gettid() would return an id
> unique across the system, when it is only unique within a process.
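>
> For example, a system-wide unique owner id can be built by combining
> the pid with the per-process tid. A minimal sketch (sysvipc_owner_id
> is just an illustrative name, not the actual code):
>
> #include <sys/types.h>
> #include <unistd.h>
> #include <stdint.h>
>
> /*
>  * lwp_gettid(2) only returns an id unique within the calling
>  * process, so pair it with getpid(2) for a system-wide unique id.
>  */
> static uint64_t
> sysvipc_owner_id(void)
> {
>         return ((uint64_t)getpid() << 32) | (uint32_t)lwp_gettid();
> }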
>
> Now I am trying to find ways to gain more performance. In my
> implementation I was using only mutexes, and in the semaphore case
> only one mutex per group. I've implemented rwlocks and I want to see
> what the performance is with them and with a mutex per semaphore.
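>
> A sketch of the two layouts I am comparing (the pthread types stand
> in for the shared-memory locks I actually use, and the names are
> illustrative):
>
> #include <pthread.h>
>
> #define SEMMSL 60               /* illustrative per-group limit */
>
> struct usem { int value; };
>
> /* current: one mutex serializes the whole semaphore group */
> struct sem_group_coarse {
>         pthread_mutex_t lock;
>         struct usem     sems[SEMMSL];
> };
>
> /* candidate: a group rwlock for whole-group operations plus one
>  * mutex per semaphore, so single-semaphore ops contend less */
> struct sem_group_fine {
>         pthread_rwlock_t grouplock;
>         struct {
>                 pthread_mutex_t lock;
>                 struct usem     s;
>         } sems[SEMMSL];
> };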
>
> This week I will gather all results and do a final refactoring.
>
> Thanks,
> Larisa
>