World's greatest kernel

Matthew Dillon dillon at apollo.backplane.com
Tue Oct 7 20:13:55 PDT 2003


:Could this be taken a step farther?  If you're going to set the core
:OS up such that it can maintain separate OS execution environments,
:and add in virtualizing library storage (i.e. each OS flavor has its
:own /usr/libs, or even each app has its own /usr/libs view) could
:you not also sling in CPU virtualization as well?  Fire up a
:dragonfly binary compiled for x86 on a PPC box, and FX!32-style
:emulation runs the binary, translating lib calls to the PPC native
:ones where possible, x86 ones exposed by the virtualized lib view as
:needed.  You could also define a 'FAT' binary format that contains
:the executables for all available dragonfly-supported cpus.  Heck,
:you could have a new platform's emulator up and running and
:generating 'native' bytecode before you get your hands on the
:platform, just depending initially on virtualized / translated lib
:calls.
 
    We have been discussing variant symlinks and VFS 'environments',
    which together let you construct an environment that looks like
    whatever you want it to look like.
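
    For illustration only, the core idea behind a variant symlink is that
    the symlink target contains variables which get expanded per-process
    when the path is resolved.  A minimal userland sketch of just that
    expansion step (the ${VAR} syntax, variable names, and paths are all
    made up for the example):

    #include <stdio.h>
    #include <string.h>

    /*
     * Hypothetical per-process variable table; a real implementation
     * would consult per-process / per-jail / system scopes in the
     * kernel at namei time.
     */
    static const char *
    lookup_var(const char *name)
    {
        if (strcmp(name, "ARCH") == 0)
            return ("i386");
        if (strcmp(name, "OSREL") == 0)
            return ("1.0");
        return (NULL);
    }

    /*
     * Expand ${VAR} sequences in a symlink target such as
     * "/usr/lib/${ARCH}-${OSREL}/libc.so" into the caller's buffer.
     */
    static void
    expand_target(const char *tgt, char *out, size_t outlen)
    {
        size_t o = 0;

        while (*tgt && o + 1 < outlen) {
            if (tgt[0] == '$' && tgt[1] == '{') {
                const char *end = strchr(tgt + 2, '}');
                size_t nlen = end ? (size_t)(end - (tgt + 2)) : 0;
                char name[64];

                if (end && nlen < sizeof(name)) {
                    memcpy(name, tgt + 2, nlen);
                    name[nlen] = '\0';
                    const char *val = lookup_var(name);
                    if (val) {
                        while (*val && o + 1 < outlen)
                            out[o++] = *val++;
                        tgt = end + 1;
                        continue;
                    }
                }
            }
            out[o++] = *tgt++;
        }
        out[o] = '\0';
    }

    int
    main(void)
    {
        char buf[256];

        expand_target("/usr/lib/${ARCH}-${OSREL}/libc.so", buf, sizeof(buf));
        printf("%s\n", buf);        /* -> /usr/lib/i386-1.0/libc.so */
        return (0);
    }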

    Packing multiple CPU targets into one binary has been done (NeXT did
    it), but the result is usually too bloated to be all that useful.
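
    To make the bloat concrete: a 'FAT' container is basically a small
    header followed by one complete image per CPU, so a three-target
    build carries roughly three complete copies of text and data.  A
    sketch of such a layout (field names invented, loosely patterned on
    the old NeXT fat header):

    #include <stdint.h>

    struct fat_hdr {
        uint32_t    magic;      /* identifies the container format */
        uint32_t    narch;      /* number of embedded images */
    };

    /* One descriptor per CPU, each pointing at a full executable. */
    struct fat_arch_desc {
        uint32_t    cputype;    /* target CPU (x86, PPC, ...) */
        uint32_t    offset;     /* file offset of that CPU's image */
        uint32_t    size;       /* image size in bytes */
        uint32_t    align;      /* required image alignment */
    };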

    CPU virtualization through the use of an intermediate binary format
    (like a byte code) is more desirable, but also rather more difficult.
    The biggest problems are: (1) 32/64 bit address spaces and
    (2) byte ordering.  To be efficient, structures, procedure arguments,
    and return values must still be laid out statically in the code (not
    dynamically), and the compiler either needs to be able to hardwire
    offsets (which will be different depending on the word size and byte
    ordering of the target machine), or it needs to be able to generate
    relocation records to deal with those offsets at load time.
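
    A concrete illustration of problem (1), purely as an aside: the same
    structure laid out on a 32 bit target versus a 64 bit target shifts
    every field that follows a pointer, so any offset the compiler
    hardwires into the byte code is only valid for one of the two:

    #include <stddef.h>
    #include <stdio.h>

    struct example {
        char    tag;
        void    *link;      /* 4 bytes on a 32 bit cpu, 8 on a 64 bit cpu */
        int     value;
    };

    int
    main(void)
    {
        /*
         * Typically prints 4 / 8 / 12 on a 32 bit target and
         * 8 / 16 / 24 on a 64 bit target, which is exactly why the
         * byte code needs relocation records (or per-target layout
         * tables) instead of hardwired offsets.
         */
        printf("link   at %zu\n", offsetof(struct example, link));
        printf("value  at %zu\n", offsetof(struct example, value));
        printf("sizeof    %zu\n", sizeof(struct example));
        return (0);
    }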

    One also has to decide whether the byte code is to use a register
    abstraction or a stack abstraction.  It is possible to optimize both
    to a particular target cpu, but both abstractions have their benefits
    and drawbacks.  Keep in mind that most byte code abstractions in use
    today are targeted at a particular language.  We would need one that
    is language-agnostic and could serve as a backend for GCC.
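
    As a rough illustration of the two abstractions, here is the same
    statement, a = b + c, in a stack-style byte code with a trivial
    dispatch loop; a register-style byte code would instead encode it as
    a single three-address op such as ADD a,b,c, which maps more directly
    onto real registers but makes the instructions wider.  The opcode
    names and encodings are invented for the example:

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_STORE, OP_HALT };

    int
    main(void)
    {
        int vars[3] = { 0, 7, 5 };          /* a, b, c */
        int code[] = {
            OP_PUSH, 1,     /* push b            */
            OP_PUSH, 2,     /* push c            */
            OP_ADD,         /* pop two, push sum */
            OP_STORE, 0,    /* pop into a        */
            OP_HALT
        };
        int stack[16];
        int sp = 0;
        int pc = 0;

        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:
                stack[sp++] = vars[code[pc++]];
                break;
            case OP_ADD:
                --sp;
                stack[sp - 1] += stack[sp];
                break;
            case OP_STORE:
                vars[code[pc++]] = stack[--sp];
                break;
            case OP_HALT:
                printf("a = %d\n", vars[0]);    /* -> a = 12 */
                return (0);
            }
        }
    }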

    As a goal I far prefer CPU virtualization (byte-code-like) because the
    operating system can generate a run-time binary image and cache it in
    memory or swap.  I have played the CPU virtualization game before,
    using my DICE compiler as a base.  It is definitely a tough nut to
    crack.
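
    A sketch of what the exec path might do with such a cache, with all
    of the names and types made up for the example; the point is only
    that a translated image is keyed by (byte code hash, target cpu) and
    generated once:

    #include <stdint.h>
    #include <stdlib.h>

    struct native_image {
        uint64_t             hash;      /* hash of the byte code image */
        int                  cputype;   /* cpu it was translated for */
        void                *text;      /* translated machine code */
        size_t               len;
        struct native_image *next;
    };

    static struct native_image *cache_head;

    /* Stand-in for the real byte-code -> native translator. */
    static struct native_image *
    translate_image(uint64_t hash, int cputype)
    {
        struct native_image *ni = calloc(1, sizeof(*ni));

        if (ni != NULL) {
            ni->hash = hash;
            ni->cputype = cputype;
        }
        return (ni);
    }

    /*
     * Called at exec time: reuse a cached translation if one exists
     * for this image and cpu, otherwise translate and cache it.  The
     * cached objects could be paged out to swap like anything else.
     */
    struct native_image *
    exec_lookup(uint64_t hash, int cputype)
    {
        struct native_image *ni;

        for (ni = cache_head; ni != NULL; ni = ni->next) {
            if (ni->hash == hash && ni->cputype == cputype)
                return (ni);
        }
        ni = translate_image(hash, cputype);
        if (ni != NULL) {
            ni->next = cache_head;
            cache_head = ni;
        }
        return (ni);
    }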

							-Matt





