just curious

Peter da Silva peter at taronga.com
Thu Jul 17 20:17:41 PDT 2003


>    Oh, scary thought... you could 'append' new messages to the linked list
>    WHILE the system is running the chain.

I think it would be a bad idea to allow applications to modify messages in
flight. At the very least that should be undefined *and* it should be an error
whenever it can be cheaply detected: you don't want to repeat the mistake of
the Amiga, where programs taking advantage of things like this prevented
better implementations later on. What happens if the API changes so messages
are copied... or *not* copied... or if it differs from implementation to
implementation?
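
The cheap detection could be as simple as an ownership flag in the message
header that the send path checks. A rough sketch in C, with every name here
invented for illustration:

	#include <errno.h>

	#define MSG_INFLIGHT 1          /* invented: "the kernel owns it" */

	struct msgport;                 /* opaque here */
	struct msg {
		struct msg *next;
		int         state;      /* 0 = ours, MSG_INFLIGHT = queued */
		/* ... payload ... */
	};

	int
	msg_send(struct msgport *port, struct msg *m)
	{
		if (m->state == MSG_INFLIGHT)   /* cheap to detect... */
			return (EBUSY);         /* ...so make it an error */
		m->state = MSG_INFLIGHT;
		/* ... append m to port's queue here ... */
		return (0);
	}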

>    This would provide an incredible economy of scale to certain
>    specialized applications like web servers by allowing a pipeline of 
>    system call requests to be maintained and thus remove ALL unnecessary
>    system call overhead during times of heavy load.

Not all of it... you still have unavoidable dependencies:

	fd = open(...)
	status = read(fd, ...)

You can't issue the second call until the first completes: the read needs the
fd that the open hands back.
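
Written out against a hypothetical asynchronous interface it looks like this,
where sendsys(), waitmsg(), the sysmsg struct, and the init helpers are all
invented names, just to show the ordering constraint:

	struct sysmsg om, rm;           /* invented message type */
	char buf[4096];

	init_open_msg(&om, "/some/file");   /* invented helper */
	sendsys(&om);           /* the open can be fired off at once */
	waitmsg(&om);           /* but we're forced to block here... */

	init_read_msg(&rm, om.result_fd, buf, sizeof(buf));
	sendsys(&rm);           /* ...because the read needs the fd */
	waitmsg(&rm);
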
But you can buffer calls up. The Amiga++ model I was thinking about had this
kind of arrangement:

With no protection boundary:

	sender --> message --> recipient
	wait  <-- response <--

With a protection boundary:

	sender --> message --> proxy --> syscall --> proxy --> recipient
	                                             wait  <-- recipient
				     <-- syscall <--
	wait  <-- response <-- proxy

With multiple messages, that would be 2 boundary crossings per call. Or say
the messages all go through in one syscall when the user waits:

	sender --> message --> proxy
	sender --> message --> proxy
	sender --> message --> proxy
	sender --> message --> proxy --> syscall --> proxy --> recipient
	                                             proxy --> recipient
	                                             proxy --> recipient
	                                             proxy --> recipient
						     wait
							   <-- recipient
							   <-- recipient
	wait
				     <-- syscall <--
              <-- response <-- proxy
              <-- response <-- proxy			   <-- recipient
	wait					     wait  <-- recipient
				     <-- syscall <--
              <-- response <-- proxy
              <-- response <-- proxy

Three boundary crossings instead of eight.

Yeah, you can buffer things up a bit, but you want a fairly short timeout to
minimize latency, and you have to cross the boundary when you do a wait
anyway, so you might as well flush the queue then.

To get the best performance in this situation you make ALL your Send() calls
return EASYNC if the previous call was less than epsilon ticks ago, and keep
returning EASYNC until you Wait() or until delta ticks pass since the last
call... then pump them all through at once.
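
A rough sketch of that policy in C; Send(), EASYNC, getticks(),
submit_batch(), and the tick constants are all invented names for
illustration, not a real API:

	typedef unsigned long tick_t;
	#define EPSILON  2      /* invented: calls this close batch up */
	#define DELTA   10      /* invented: flush even with no Wait() */
	#define EASYNC  99      /* invented: "queued, reply comes later" */

	struct msg { struct msg *next; /* ... payload ... */ };
	extern tick_t getticks(void);               /* invented */
	extern void   submit_batch(struct msg *);   /* one boundary crossing */

	static struct msg *pending;     /* buffered, not yet submitted */
	static tick_t      last_send;

	int
	Send(struct msg *m)
	{
		tick_t now = getticks();

		if (pending == NULL && now - last_send >= EPSILON) {
			m->next = NULL;
			submit_batch(m);        /* quiet: send right away */
		} else {
			m->next = pending;      /* rapid-fire: just buffer */
			pending = m;
		}
		last_send = now;
		return (EASYNC);
	}

	/*
	 * Flush from a timer DELTA ticks after the last Send(), and from
	 * Wait(), which has to cross the boundary anyway.
	 */
	void
	flush_pending(void)
	{
		if (pending != NULL) {
			submit_batch(pending);  /* one syscall for the batch */
			pending = NULL;
		}
	}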

So you'll get no calls for a while, then one going through right away, then
nothing for either delta or N*epsilon ticks, whichever comes first (where N
is the number of system calls you make in a row without a Wait()), then a
bunch of messages in one call... and then nothing for delta more ticks or
until the right response is ready...

>     I've thought about that a bit more, and I think the time we break the
>     API is when we have the emulation layer ready to go (and thus we wind
>     up not actually breaking the API :-)).  

That sounds good. That means:

>     Adding the syscall messaging interface itself does NOT need to break the
>     API.

It just adds another syscall? (read: no, even better). Nice.

>     Since it costs us nothing to retain the original API until the emulation
>     layer is ready, we might as well retain it.  People coming in from 4.x
>     will thank us for it because their machines will still be able to boot
>     into the new kernel :-).

That's me!
