GCC-4 compiler bug

Joerg Sonnenberger joerg at britannica.bec.de
Thu Nov 29 01:54:05 PST 2007

On Thu, Nov 29, 2007 at 08:47:48AM +0900, Thomas Zander wrote:
> On 29/11/2007, Simon 'corecode' Schubert <corecode at fs.ei.tum.de> wrote:
> > Yes, I think it is stupid.  But I don't think that any newer GCC version fixes this.  We'll probably have to change GCC not to include this optimization.  Mind you that this happens only for signed overflows.  Unsigned overflow should still work as we expect.
> Maybe it would be helpful to know if this is actually a bug in the
> gcc-4.1.x branch or if it was on purpose. What about Mezz's posting?
> If gcc-4.2 on FreeBSD behaves differently, what happened? Did they
> hack gcc in their base system or has the stock gcc-4.2 reverted to the
> old (and desired) behaviour?

He compiled the code with -O0. That is the relevant difference.

GCC is correct according to the standard, and the standard has a good
reason for this. The situation is similar to the one you face with
floating point -- depending on the hardware, you can't efficiently handle
signed overflow without truncation or checks all over the place. With
the i387 the only way to force double precision is to explicitly store
and reload the register. Note that a variable assignment is supposed to
have the same meaning as an explicit cast, and both *should* force
truncation to the destination width. Beyond that, the standard
effectively lets the abstract machine do signed arithmetic on the
integer ring Z, i.e. without wraparound -- which makes no difference
for unsigned operations (those are defined to wrap modulo 2^N), but
does for signed ones. The trivial example is that
	int i;
	for (i = 1; i != 0; ++i)
is an infinite loop.

