Bruce R. Montague
brucem at mail.cruzio.com
Thu Jun 2 11:48:16 PDT 2005
Hi, Joerg... re:
On Thu, 2 Jun 2005 19:28:08 +0200, Joerg Sonnenberger wrote:
> [...] But not
> serving interrupts for a long time for _any_ device *does* hurt.
A hallmark of the fork-routine technique when used
with interrupt handlers is that interrupts are usually
enabled while the fork routines run; that is, direct
"first-level" interrupt handlers can still run.
Typically the particular device which interrupted
and registered a fork routine is not re-enabled to
interrupt until near the end of the fork routine
that acts as the "second level" handler. Sometimes
multiple fork queues are used, one per priority
level, so priority is respected...
A first-level interrupt handler in this scheme
primarily registers information in some data structure
and queues a fork routine to process the data
(data/registers that must be unloaded immediately
are read and stored in the data struct by the 1st
level handler). A problem with the interleaved fork-
routine approach is that stacks cannot be used to
record information across executions of fork routines
(for instance, to process multiple I/O completions);
such info must be stored in explicit data structures.
With fork routines, most I/O completion driver code
ends up being written as explicit "hard-real-time"
routines. Historically programmers have found this
more difficult than using stack-based approaches
where they can, to some extent, ignore time.
This issue reflects the two major approaches to
concurrent programming: dispatching bounded-length
routines (MS windows apps, X11 apps, etc.) and
threads. Both have their drawbacks (threads carry
more context, may require more stack space,
context-switch more slowly, and are not in an
implicit critical section when they run...)