UFS in CF for /boot and Hammer for the rest and failover+load balancing
justin at shiningsilence.com
Mon Oct 3 18:55:00 PDT 2011
The vkernels could work as complete substitutes for jails, I think.
(It depends on just how you are going to use them.)
You can't pool Hammer drives now - you just establish master->slave
relationships. Multi-master is not possible in this version of
Hammer. The hammer(5) and hammer(8) man pages have a lot of detail:
You can use the installer to install to disk, and use cpdup to copy
/boot over to a CF card. Consult the swapcache man page for how to
set that up, and I think you will be set. The normal caveat applies
that I'm saying "Of course this would work" without actually trying
any of it.
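Roughly, those steps might look like the following; the device names
(da8s1a for the CF card, da1s1b for the SSD swap partition) are
placeholder assumptions for this example:

```shell
# Copy the installed /boot onto the mounted CF card:
mount /dev/da8s1a /mnt
cpdup /boot /mnt
umount /mnt

# swapcache: put swap on the SSD in /etc/fstab, e.g.
#   /dev/da1s1b  none  swap  sw  0  0
# then turn on caching (persist these in /etc/sysctl.conf;
# see swapcache(8) for the data_enable knob and sizing advice):
sysctl vm.swapcache.read_enable=1
sysctl vm.swapcache.meta_enable=1
```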
On Mon, Oct 3, 2011 at 3:15 AM, Zenny <garbytrash at gmail.com> wrote:
> Thank you Justin for a comprehensive reply. Appreciate it!
> I shall check out the vkernel stuff to limit resources for jails
> (seems like a steep learning curve ;-) )
> Since you stated that 4 drives are overkill, does Hammer allow
> creating a pool, like in ZFS, of two master drives and two slave drives
> with remote machine which works exactly as a failover + load
> balancing (as in the case of DRBD in Linux or HAST in the coming
> Where exactly can I find the detailed documentation on scripting
> the streaming of HAMMER data to remote machines?
> As I stated earlier, I want:
> /boot in CF or SanDisk
> / in HDD and other data
> swapcache in SSD or in HAMMER /
> in order to separate data from the operating system. But I could
> not find documents for manual installation mode to meet my
> requirements. Let me know if there are any. Thanks!
> On Sun, Oct 2, 2011 at 11:50 PM, Justin Sherrill <justin at shiningsilence.com> wrote:
>> I'm not sure about the jails. I think they work the same on
>> DragonFly, though the resource limits aren't there. You could
>> potentially use virtual kernels to get a similar effect. See the
>> vkernel man page for that.
>> You should be able to set up the root and other volumes normally. 4
>> hard drives may be overkill - you can stream from master to slave
>> volumes in Hammer, for which 2 drives will work. If you want more
>> duplication, hardware RAID may be a good idea; people have been trying
>> out Areca cards with success recently.
>> AES256 is supported, or at least I see the tcplay(8) man page has an
>> example using it. I haven't used disk encryption enough to know it well.
>> You can use Hammer to stream data to other machines, and then in the
>> event of something going wrong, promote the slave drive in the
>> surviving unit to master. This would require some scripting or manual
>> intervention; this isn't covered with an automatic mechanism.
>> On Sun, Oct 2, 2011 at 5:50 AM, Zenny <garbytrash at gmail.com> wrote:
>> > Hi:
>> > I am pretty new to the DragonFly or BSD world. HammerFS seems to be very
>> > innovative. Thanks to Matt and team for their hard work.
>> > I would like to do something with Hammer+UFS like the following,
>> > inspired by Paul's work
>> > (http://www.psconsult.nl/talks/NLLGG-BSDdag-Servers/), but could not
>> > figure out exactly:
>> > 1) Creation of a server with a jail with minimal downtime, as offered
>> > by the nanobsd scripts in FreeBSD. Two failover kernels. Are there such
>> > scripts for DragonFly BSD?
>> > 2) I want to have the minimal boot (ro UFS) and configurations like
>> > that of the nanobsd image on a compact flash while the entire root and
>> > data in an array of HDDs (at least 4) with of course an SSD for
>> > swapcache. The latter could be Hammer to avoid softraid.
>> > 3) All HDDs should be encrypted with AES256 (I could not find whether
>> > DragonflyBSD supports that), and accessible either in the /boot of CF
>> > or somewhere else (could be ssh tunneled from another network).
>> > 4) I could not figure out which jail features are available in
>> > DragonFly BSD. FreeBSD-9-CURRENT has resource containers
>> > (http://wiki.freebsd.org/Hierarchical_Resource_Limits). Are they
>> > applicable in DragonFly BSD's case?
>> > 5) Is there any way that the two similar servers in two different
>> > locations can securely mirror for failover as well as load-balancing?
>> > Appreciate your thoughtful inputs! Apologies in advance if my post above
>> > appears to be pretty naive. Thanks in advance to the entire DF
>> > community and developers!
>> > zenny
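For what it's worth, the tcplay(8) example mentioned above boils down
to something like the following untested sketch; the device, the
mapping name, and the mount point are placeholders:

```shell
# Create a TrueCrypt-compatible volume with AES-256-XTS:
tcplay -c -d /dev/da2s1 -b AES-256-XTS

# Map it (shows up under /dev/mapper), then put HAMMER on it:
tcplay -m secvol -d /dev/da2s1
newfs_hammer -L SECDATA /dev/mapper/secvol
mount_hammer /dev/mapper/secvol /data
```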