Phoronix benchmarks DF 3.2 vs Ubuntu - question

Matthew Dillon dillon at apollo.backplane.com
Wed Nov 28 10:05:23 PST 2012


    In the past we've found that many of these so-called benchmarks
    are so poorly written that they don't actually test what they say they
    are testing.  For example, quite a few of them wind up making malloc()
    calls, gettimeofday() calls, or other unnecessary library and system
    calls inside their critical loops, and stupid things like that.  And
    as Alex said,
    a large chunk of any cpu benchmark that isn't written directly in
    assembly is going to test the compiler more than it will test the
    operating system.

    Similarly, file I/O benchmarks often focus on only reading or only
    writing and don't reflect the reality of mixed loads that most real-world
    systems have to contend with.
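
    A minimal sketch, again hypothetical, of what a mixed load might look
    like: alternate pread() and pwrite() at random offsets within the same
    file instead of measuring only one direction.  The file name, block
    size, and operation count are arbitrary placeholders.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK   4096
    #define BLOCKS  1024
    #define OPS     100000

    int
    main(void)
    {
        char buf[BLOCK];
        off_t off;
        int fd, i;

        memset(buf, 0, sizeof(buf));
        fd = open("testfile", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, (off_t)BLOCK * BLOCKS) < 0)
            return (1);

        for (i = 0; i < OPS; ++i) {
            off = (off_t)(rand() % BLOCKS) * BLOCK;
            if (i & 1)
                pwrite(fd, buf, BLOCK, off);    /* half the operations write */
            else
                pread(fd, buf, BLOCK, off);     /* the other half read */
        }
        close(fd);
        return (0);
    }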

    Network benchmarks test single-threaded or single-stream performance,
    which is pretty much worthless, far more often than they test
    concurrent-stream performance and fairness, which is what servers are
    more likely to have to deal with.
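
    For illustration only, a hedged sketch of the concurrent-stream case:
    fork a number of senders, each pushing data over its own TCP
    connection, so that aggregate throughput and per-stream fairness can
    be observed rather than a single stream's peak rate.  The address,
    port, and stream count are placeholders and assume a listener (e.g.
    an iperf-style server) is already running.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NSTREAMS        16
    #define CHUNK           65536
    #define CHUNKS          1024

    static void
    run_stream(struct sockaddr_in *sin)
    {
        char buf[CHUNK];
        int s, i;

        memset(buf, 'x', sizeof(buf));
        s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0 || connect(s, (struct sockaddr *)sin, sizeof(*sin)) < 0)
            _exit(1);
        for (i = 0; i < CHUNKS; ++i)
            write(s, buf, sizeof(buf));         /* one of NSTREAMS senders */
        close(s);
        _exit(0);
    }

    int
    main(void)
    {
        struct sockaddr_in sin;
        int i;

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(5001);                     /* placeholder port */
        inet_pton(AF_INET, "127.0.0.1", &sin.sin_addr); /* placeholder host */

        for (i = 0; i < NSTREAMS; ++i)
            if (fork() == 0)                    /* each child is one stream */
                run_stream(&sin);
        while (wait(NULL) > 0)                  /* parent reaps all senders */
            ;
        return (0);
    }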

    Benchmarks can sometimes identify bottlenecks and other issues that are
    worthy of action.  The recent postgres/pgbench tests identified some
    significant issues that we were able to address in the release, for
    example.

    I glanced at that posting a week or three ago and generally speaking
    the more recent DragonFly did do marginally better, probably due to
    the positive effects the scheduler changes have on the cpu caches.


					-Matt
					Matthew Dillon 
					<dillon at backplane.com>


