Phoronix benchmarks DF 3.2 vs Ubuntu - question

Tomas Bodzar tomas.bodzar at gmail.com
Wed Nov 28 11:36:14 PST 2012


On Wed, Nov 28, 2012 at 7:05 PM, Matthew Dillon
<dillon at apollo.backplane.com> wrote:
>     In the past we've found that many of these so-called benchmarks
>     are so poorly written that they don't actually test what they say they
>     are testing.

This is an excellent write-up about such things:
http://dtrace.org/blogs/brendan/2012/10/23/active-benchmarking/

>  For example, quite a few of them wind up doing malloc()
>     calls in critical loops, or gettimeofday(), or other unnecessary
>     system calls, and stupid things like that.  And as Alex said,
>     a large chunk of any cpu benchmark that isn't written directly in
>     assembly is going to test the compiler more than it will test the
>     operating system.
>
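To illustrate the malloc()/gettimeofday() point with a made-up sketch (the
names and iteration counts below are mine, not taken from any real benchmark
suite): the first loop pays a pair of timing syscalls plus an allocation per
iteration and mostly measures that overhead, while the second allocates up
front and times the whole batch once.

/* Hypothetical sketch of the pitfall described above: timing calls and
 * allocation inside the measured loop dominate the measurement. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

#define ITERATIONS 1000000      /* arbitrary */

static double
elapsed_us(struct timeval a, struct timeval b)
{
        return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_usec - a.tv_usec);
}

int
main(void)
{
        struct timeval start, stop;
        volatile long sum = 0;
        double bad = 0.0;

        /* Bad: a gettimeofday() pair and a malloc()/free() per iteration. */
        for (long i = 0; i < ITERATIONS; i++) {
                gettimeofday(&start, NULL);
                long *p = malloc(sizeof(*p));
                *p = i;
                sum += *p;
                free(p);
                gettimeofday(&stop, NULL);
                bad += elapsed_us(start, stop);
        }

        /* Better: allocate up front, time the whole batch once. */
        long *buf = malloc(sizeof(*buf));
        gettimeofday(&start, NULL);
        for (long i = 0; i < ITERATIONS; i++) {
                *buf = i;
                sum += *buf;
        }
        gettimeofday(&stop, NULL);
        free(buf);

        printf("per-iteration timing: %.0f us\n", bad);
        printf("batched timing:       %.0f us (sum=%ld)\n",
            elapsed_us(start, stop), sum);
        return 0;
}
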
>     Similarly, file I/O benchmarks often focus on only reading or only
>     writing and don't reflect the reality of mixed loads that most real-world
>     systems have to contend with.
>
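The same goes for the mixed-load point. A rough sketch of what interleaving
reads and writes on one file might look like is below (the block size, file
size and 70/30 ratio are arbitrary choices of mine), as opposed to timing a
pure read pass or a pure write pass in isolation.

/* Rough sketch of a mixed-load I/O test: random reads and writes are
 * interleaved on the same file instead of being measured separately. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK   4096
#define NBLOCKS 4096            /* 16 MB working set, arbitrary */
#define OPS     100000

int
main(void)
{
        char buf[BLOCK];
        int fd = open("testfile", O_RDWR | O_CREAT, 0644);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Pre-populate the file so reads have something to hit. */
        memset(buf, 'x', sizeof(buf));
        for (int i = 0; i < NBLOCKS; i++)
                pwrite(fd, buf, BLOCK, (off_t)i * BLOCK);

        srandom(1);
        for (int i = 0; i < OPS; i++) {
                off_t off = (off_t)(random() % NBLOCKS) * BLOCK;
                if (random() % 10 < 7)          /* ~70% reads */
                        pread(fd, buf, BLOCK, off);
                else                            /* ~30% writes */
                        pwrite(fd, buf, BLOCK, off);
        }

        close(fd);
        return 0;
}
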
>     Network benchmarks test single-threaded or single-stream
>     performance, which is pretty much worthless, far more often than
>     they test the concurrent stream performance and fairness that
>     servers are more likely to have to deal with.
>
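And for the network point, the multi-stream idea in its simplest form: fork
a handful of clients that all push data at the same sink concurrently,
rather than measuring one stream end to end. The host, port and stream
count are placeholders of mine, the listener (an iperf/netserver-style
sink) is assumed to already be running, and a real test would also record
per-stream throughput to judge fairness.

/* Minimal sketch: NSTREAMS concurrent TCP senders against one sink. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NSTREAMS 8              /* arbitrary */
#define NWRITES  10000

static void
one_stream(const char *host, int port)
{
        char buf[8192];
        struct sockaddr_in sin;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        inet_pton(AF_INET, host, &sin.sin_addr);

        if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
                perror("connect");
                exit(1);
        }
        memset(buf, 0, sizeof(buf));
        for (int i = 0; i < NWRITES; i++)
                write(s, buf, sizeof(buf));
        close(s);
        exit(0);
}

int
main(void)
{
        /* Placeholder address/port; point this at whatever sink you run. */
        for (int i = 0; i < NSTREAMS; i++)
                if (fork() == 0)
                        one_stream("127.0.0.1", 5001);
        for (int i = 0; i < NSTREAMS; i++)
                wait(NULL);
        return 0;
}
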
>     Benchmarks sometimes can identify bottlenecks and other issues that are
>     worthy of action.  The recent postgres/pgbench tests identified some
>     significant issues that we were able to address in the release, for
>     example.
>
>     I glanced at that posting a week or three ago and generally speaking
>     the more recent DragonFly did do marginally better, probably due to
>     the positive effects the scheduler changes have on the cpu caches.
>
>
>                                         -Matt
>                                         Matthew Dillon
>                                         <dillon at backplane.com>


