HPC (Was Re: Thoughts on llvm/clang in base)

peeter (must) karu.pruun at gmail.com
Fri Oct 19 04:45:13 PDT 2012

On Wed, Oct 17, 2012 at 1:49 PM, Chris Turner
<c.turner at 199technologies.com> wrote:
> On 10/15/12 09:17, peeter (must) wrote:
>> Just to chime in with this, we're using OpenMPI on DF and it'd be
>> very, very bad news for us if support for OpenMPI would be
>> discontinued for the mentioned reason. We'd need to change the
>> platform then. . .
> Interesting - would love to hear more about this setup if you can
> share (how it's used/configured/hardware/application etc)!
> I have not heard many stories of HPC/parallel compute on DF
> so it would be interesting to hear some news -
> Also - last I tried - on pkgsrc, openmpi failed to build
> however mpich2 did build - so, any patches to fix that build
> would probably be welcome - if you do have, submit to
> netbsd's bugtracker & let us know here.
> Cheers,
> - Chris


Sorry for the delay---I'm afraid our setup is quite modest at the
moment! We are a couple of PhD students who use openmpi on DFly for
numerical computations related to our projects in physics. Briefly, we
liked DFly and were already running it on our server. We then needed
more computational power and saw an article in BSD Magazine (3/2012)
on how to build Beowulf clusters with DragonFly BSD. At this point we
have a single quad-core i7 machine running our code written with
openmpi, and we plan to add more cores in a few months' time.

We didn't use pkgsrc; we just downloaded the openmpi tarball
directly---I think it was openmpi-1.7a1r26605.tar.gz at the time (I
suppose more recent versions are out now)---and it compiles fine on
DFly. Some of us use Java (openjdk7), so we added the Java bindings as
well:

# ./configure --prefix=/usr/local --enable-mpi-java
# make all install

For compiling Java code you then use the mpijavac wrapper.
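For the curious, here is a minimal sketch of the kind of program mpijavac compiles, assuming the Open MPI Java bindings built with --enable-mpi-java (the class name Hello is just for illustration):

```java
// Minimal MPI "hello world" using the Open MPI Java bindings.
// Each process (rank) reports its id and the total process count.
import mpi.MPI;
import mpi.MPIException;

public class Hello {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);                       // start the MPI runtime
        int rank = MPI.COMM_WORLD.getRank();  // this process's id
        int size = MPI.COMM_WORLD.getSize();  // total number of processes
        System.out.println("Hello from rank " + rank + " of " + size);
        MPI.Finalize();                       // shut the runtime down cleanly
    }
}
```

You would compile and launch it with something like `mpijavac Hello.java` followed by `mpirun -np 4 java Hello`.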

All in all, our computing times have decreased by an order of
magnitude, from 1-5 days (on a single core) down to half a day, so
we're pretty happy. We plan to add more cores soon, so hopefully I can
give more info as our setup becomes fancier.

Cheers, Peeter

