[GSOC] Userland System V Shared Memory / Semaphore / Message Queue implementation

grigore larisa larisagrigore at gmail.com
Tue Apr 23 10:02:30 PDT 2013


Hello,

My name is Larisa Grigore and I am a second-year student in the "Internet
Engineering" master's program at the Polytechnics University of Bucharest,
Romania, Computer Science and Engineering Department.

Currently I am working on a research project for the master's program. The
project is about "High Availability". We try to keep a system that runs on
a microkernel from the L4 family alive no matter what happens to its
components. The detected faults are segmentation faults, deadlocks and
killed threads. For recovery, two options are provided: restarting an
address space or restoring it from a checkpoint. The user can request
classic and incremental checkpoints and combine both types with the fork
mechanism. To build the HA module, memory management and fork with
copy-on-write were implemented.
The High Availability features, originally developed for native
applications running on top of the microkernel, were later extended to the
Linux operating system. A kernel module was implemented to support fault
detection (deadlocks, segmentation faults and killed processes) and
recovery; a process can be restarted from scratch or from a specific
checkpoint.
The project also aims to cover cases where Android fails to restore an
application properly. After studying frequent Android application failures,
we concluded that High Availability support can be useful for some classes
of applications there too.

Starting July 2013, I will be an Associate Teaching Assistant at the
Polytechnics University of Bucharest, teaching labs and developing lab
material for the Operating Systems classes.

I am interested in the "Userland System V Shared Memory / Semaphore /
Message Queue implementation" project on the GSOC page. Here are a few
ideas after some research in the System V area:
- daemon
  - it will manage the System V resources; all operations, such as
creation and destruction, will be implemented through messages sent to it
- communication with the daemon
  - first step (registration)
    - the process uses a well-known named pipe to tell the daemon that it
wants to open a communication channel
  - second step (communication)
    - both the client process and the daemon open another named pipe, named
after the client pid, and use it to talk to each other

- shared memory
  - the daemon will create files to be mapped into the processes' address
spaces and will keep the related bookkeeping information

- semaphores
  - an implementation similar to POSIX unnamed semaphores (memory-based
semaphores) [1]
  - a client will ask for a semaphore and the daemon will return a file and
an offset for the semaphore inside it
  - acquiring and releasing a semaphore will work like the POSIX sem_wait
and sem_post implementations; atomic operations will be used to test the
semaphore value, and umtx_sleep(2)/umtx_wakeup(2) will be used in case a
process must block
  - the daemon is responsible for telling the clients where to find a
semaphore
  - there are two approaches:
    1. each semaphore gets its own file of PAGE_SIZE that a process must
map; the problem with this approach is the large amount of memory used
when there are many semaphores
    2. several semaphores share one file; here some security issues appear
because an application may gain access to semaphores it did not open

- message queues
  - a client will send a message to the daemon asking for a message queue;
the daemon will respond with a file for the client to map into its address
space
  - the queue size will depend on the file size
  - besides the messages written by processes, the file will contain some
control information (number of messages, offset of the first message, etc.)

Any feedback on this is welcome.

[1] http://linux.die.net/man/7/sem_overview