Frama-C news and ideas


Making oughts from ises

A previous post discussed the nature of uninitialized and indeterminate memory throughout the C standards. The argument was “avoid using uninitialized data, even if you think you know what you are doing; you may be right, but regardless your compiler might think it knows what you are doing and be wrong about it”. I tried to strengthen this argument by claiming that writing the example for that post was how I found my first compiler bug “by hand”.

An example involving unsequenced addition and sequenced comma operators

That bit about a compiler bug being my first, I now fear, was an exaggeration. I had found and forgotten another arguable GCC bug in 2012 while investigating sequence points. First, consider the program below.

#include <stdio.h>

int i, j;

int f(void)
{
  i++;
  return j;
}

int g(void)
{
  j++;
  return i;
}

int main()
{
  printf("%d\n", f() + g());
}

Although functions f() and g() may be called in an unspecified order, calling a function and returning from it are sequence points. The program therefore avoids the sin of undefined behavior caused by C99's clause 6.5:2.

6.5:2 Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression. Furthermore, the prior value shall be read only to determine the value to be stored.

The program can only print an unspecified choice of either f() + g() when calling f() first or f() + g() when calling g() first, the two of which happen to result in the same value 1.

The same reasoning that makes the C program above always print 1 should make the program below always print 1 too:

$ cat t2.c
#include <stdio.h>

int i, j;

int main()
{
  printf("%d\n", (0,i++,j) + (0,j++,i));
}

But unfortunately:

$ gcc t2.c
$ ./a.out 
2

This program should print the same result as the previous program (the one with f() and g()) because the modifications of j are sequenced by “0,” on the left and “,i” on the right, and similarly for i. In C99 at least, the compiler must choose between adding an incremented i to the initial value of j, or the other way round. This example should print 1, in short, for the same reason that the first example in this post could be expected to print 1, or that an example that prints cos(x) * cos(x) + sin(x) * sin(x) with MT-unsafe functions cos() and sin() can still be expected to print a floating-point value close to 1.0.

Pet peeve: guessing at what the standard ought to say from what compilers do

This anecdote slowly but surely brings us to the annoyance I intended to denounce all along, which is when programmers infer what the standard ought to be saying from what compilers do. Any discussion of the kind of corner cases I described recently invites arguments about how compilers want to optimize the generated code as if uninitialized data could not be used, or as if the two operands of + were evaluated in parallel.

This is backwards. Yes, compilers want to optimize. But compilers are supposed to follow the standard, not the other way round. And section 6 of the C99 standard does not once use the word “parallel”. It says that the evaluation order of sub-expressions is unspecified (6.5:3), and that unsequenced side-effects cause undefined behavior (6.5:2), but nothing about parallel evaluation.

Never mind the example

Actually, C99's 6.5:3 clause really says “[…] the order of evaluation of subexpressions and the order in which side effects take place are both unspecified”. I am unsure about the implications of this thing about the order of side-effects. I might even, and the long-time reader of this blog will savor this uncharacteristic admission, be wrong: perhaps a compiler is allowed to generate code that prints 2 for t2.c after all.

Nevertheless, this does not detract from the point I intended to make, which is that it is bad engineering practice to take the current behavior of popular compilers as language definition. If the C99 language allows (0,i++,j) + (0,j++,i) to be “miscompiled”, so be it. But having read the relevant parts of the standard, in addition to the previously mentioned clauses, it seems to me that if the standard allows the above to be miscompiled, it also allows f(1) + f(2) to be miscompiled for any function f() with global side-effects.

Global side-effects include temporarily modifying the FPU rounding mode, using a statically allocated scratch buffer, initializing a table of constants at first call, calling malloc() and free(), or even random(). All these examples are intended to be invisible to the caller, and lots of library functions you routinely use may be doing them.

So in this case my argument remains very much the same: that compilers are not implementing aggressive optimizations (yet) in presence of f(1) + f(2) should be no excuse for not clarifying whether the standard allows them.


1. On Saturday, April 13 2013, 17:53 by John Regehr

One aspect of this problem is that the entire history of C is characterized by attempts to standardize the union (intersection?) of all existing compilers' behavior. I've heard people who justify this as the only practical option: the degree of investment in compilers and code bases will lead people to ignore standards they don't like. The slow and non-adoption of C99 seems to indicate that this is true, even if we don't like it.

2. On Sunday, April 14 2013, 11:41 by Nhat Minh Lê

While I agree with the general idea, I'd like to ask about this specific ambiguity: you seem to base your observation on the C99 standard; what do you think of the C11 wording? From my understanding of the standard, it seems that your comma example is undefined behavior, so GCC is allowed to do as it pleases.

Here's how I read it: C11 introduces (along with its new memory model) the notion of sequenced-before as a partial relation. Sequence points still exist in places where the standard hasn't been updated, but they are now made a subcase of sequenced-before: "The presence of a sequence point between the evaluation of expressions A and B implies that every value computation and side effect associated with A is sequenced before every value computation and side effect associated with B."

6.5 now states "The value computations of the operands of an operator are sequenced before the value computation of the result of the operator." and "If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined." The definition of the comma operator is unchanged, as is the definition of the plus operator.

So, going back to `(0,i++,j) + (0,j++,i)', the sequenced-before edges go like this:

0 -> i++ -> j -> result of , -> result of +
0 -> j++ -> i -> result of , -> result of +

`i' is a value computation of the variable `i' and `i++' is a side effect affecting the same variable. They are unsequenced relative to each other, so, according to 6.5, the behavior is undefined.

The function example is apparently allowed by an explicit clause, as follows. "Every evaluation in the calling function (including other function calls) that is not otherwise specifically sequenced before or after the execution of the body of the called function is indeterminately sequenced with respect to the execution of the called function." I find the wording awkward but footnote 94 is clear on the intent of the standard, even though it is not normative: "In other words, function executions do not 'interleave' with each other."

3. On Sunday, April 14 2013, 15:07 by pascal

I agree that C11 makes the status of my example clear, and the committee seems to have thought things through. I am convinced; I will start reading the 2011 iteration of the standard.

I was going to arrive at my own conclusion, perhaps in a last blog post if I found enough material, that it is fair game for a static analyzer to reject the example where uninitialized memory is used in a computation and the example (0,i++,j)+(0,j++,i), on the grounds that they are at least unspecified. Choosing to reject these as “at least unspecified” means never having to worry whether they are really undefined and in which version of the C standard.

This is a real choice: the C interpreter KCC, for instance, does not make it. As far as I know, KCC tries to distinguish between unspecified and undefined behavior, and provides a “model-checking” mode in which all alternatives are explored for unspecified behavior.

Also, this choice has real consequences, because the “blocking semantics” for errors means that the rest of the code may not be explored. This is particularly annoying when the target code for some reason contains an implementation of a memmove-like function. Frama-C's value analysis can stop the analysis at that implementation of memmove(), whether it is written the wrong way (e.g. if (src < dst), invoking undefined behavior when src and dst do not point inside the same object) or carefully (for instance if ((uintptr_t)src < (uintptr_t)dst), a quite portable test). There is, however, an option to circumvent this particular difficulty by continuing the analysis with {0, 1} as the result of (src < dst).