DragonflyBSD GEOM? (Re: Is it time to dump disklabel and use GPT instead?)
ahornung at gmail.com
Wed Jul 28 09:44:16 PDT 2010
On 28 July 2010 17:14, Adam Vande More <amvandemore at gmail.com> wrote:
> Well, it's 3 main threads actually (up, down, and event, basically), and
> depending on the module you can spawn other kernel threads to handle
> asynchronous work and pass it off when it's ready. The design has awfully nice
> performance characteristics: since there is a centralized place for bio handling,
> optimization would seem to be much easier to implement. You can see
> evidence of this when using something like gjournal. On a gjournaled device,
> single-threaded I/O takes a big hit, putting it in the Linux realm of
> performance; however, as the threads grow, so does the ability to optimize
> requests being sent to the physical devices. That's why gjournal devices scale
> so much better as concurrency grows. I have not observed any performance
> bottlenecks on my GEOM SSD devices, and I have no idea what a theoretical or
> real-world maximum would be on the GEOM design.
I absolutely don't see how forcing the I/O from N different threads
onto 2 (the event thread effectively carries no I/O) is better than having each I/O
maintain (mostly) its own context. Your particular case may not
suffer from any performance impact, but I was mostly talking about a
> Are you talking about the g_linux_lvm module? I would not consider that huge
> at ~1200 lines. That's tiny compared to what you had to go through to import it.
> Now let's see you write an I/O scheduler for the new devices. See my point?
I mean every module that needs metadata. Off the top of my head I
could mention the LVM parser and geli, but I'd suspect most others,
too. So GEOM has an LVM parser that is utterly incomplete and obviously
offers no management whatsoever. How is that superior to having most
of the LVM functionality in userland, easy to keep up to date and
offering the same tools as on Linux? In any case this is not about LVM
vs GEOM, since LVM is only one consumer of the device mapper.
Now let's see... you write an I/O scheduler on DragonFly... you simply
use the dsched framework, which fits nicely on top of the disk
subsystem. As a matter of fact, I could even change dm slightly to use
the disk subsystem, too, and hence allow I/O schedulers, MBR, GPT and
disklabels on top of dm devices, but I don't think there's much point
to it at this time.
> Perhaps they would, hard to say though. If someone is able to learn LVM/DM
> stuff, though, they certainly would be able to learn GEOM. LVM does have a few
> advantages over GEOM, like being able to resize on the fly, but that
> functionality does not work here.
My point is that there's no need to learn anything new. Also, this is
completely subjective. Maybe you prefer GEOM; I'd argue some of the
Linux counterparts are far more intuitive.
LVM is only one consumer of the device mapper, as I said before, so there's
really no point in making this comparison. LVM was imported strictly
for compatibility with Linux. cryptsetup is another
consumer of the device mapper, and it offers a different interface. My
point here is that it's extremely simple to write userland tools to
fit anyone's needs. I'm currently working on a mirror target for the
device mapper; it'll also have its own userland tool and will not
depend on LVM, which you seem to find cumbersome.
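As a rough illustration of "different consumers, same kernel dm", the following usage fragment shows cryptsetup and the LVM tools each driving device mapper through their own interface. Device and volume-group names here (`/dev/da1s1`, `/dev/da2s1`, `vg0`) are placeholders, not taken from the thread.

```shell
# Illustrative only: two independent userland consumers of the same
# in-kernel device mapper.  Device names are placeholders.

# cryptsetup asks dm for a crypt mapping:
cryptsetup luksFormat /dev/da1s1
cryptsetup luksOpen /dev/da1s1 secure0   # appears as /dev/mapper/secure0

# The LVM tools ask the same dm for linear mappings, via their own interface:
pvcreate /dev/da2s1
vgcreate vg0 /dev/da2s1
lvcreate -n home -L 10g vg0              # appears as /dev/mapper/vg0-home
```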
> I know of the differences; I guess I didn't relay my whole thought here.
> gpart was brought up because it's a GEOM class, and it does have more
> functionality than the existing D_BSD support (the man page notes there is
> much to be done). My point is that loader support would also be easier to
> import if it had the same target, the geom_part_gpt class.
I still don't get your point. GPT support in the loader is not
assisted in any way by geom or any other similar mess.