Frama-C news and ideas


Thursday, April 4 2013

Google forking WebKit

Blink as seen from the inside

As you have undoubtedly heard if you follow at all this sort of thing, as of April 3, Google is forking WebKit. Its Chrome browser will henceforth rely on its own variation of the popular rendering engine, Blink. This is big news. If I were Google, I would have pulled a Gmail and made this serious announcement two days earlier. Google security engineer Justin Schuh gives the inside scoop.

Something particularly strikes me in Justin's post. This post reacts to that aspect of Justin's, and to that aspect only.

As of 2013, Cybersecurity is an unsolved problem

Let us be honest: cybersecurity, in general, in 2013, is a bit of an unsolved problem. Despite this state of affairs, or because of it, it can be difficult to get some of the companies concerned to recognize that there exists a problem at all.

Fortunately, some are more open than others regarding security. Google is more open than most, and when an exceptional event, such as the forking of WebKit, justifies it, someone there may remind us of all the Chrome team does for security in a convenient list:

[…] the Chrome security team has taken a very active role in WebKit security over the last several years, and really led the pack in making WebKit more robust against exploits. We’ve fuzzed at previously unheard of scale, paid out hundreds of thousands of dollars in bug bounties, performed extensive code auditing, fixed many hundreds of security bugs, and introduced a slew of hardening measures. […]

My point is only tangentially relevant to web rendering engines, so I am taking the above list out of context, and adding my own punchline: despite all these efforts, measured in millions of dollars, it is likely that there is still one exploitable security flaw in Chrome, and it is possible that, as you read this, someone somewhere is exploiting this flaw.

To be very clear, I am not blaming Google here. Google allocates the millions of dollars that cybersecurity warrants. It is open about how it uses this money; Google uses all the techniques that have proved their worthiness: random testing, bug bounties, code reviews, defensive programming. I am only saying that cybersecurity is difficult, since even doing all this does not guarantee absolute confidence. If you were to embed Chromium on some kind of connected device, even after doing all the above, you had better leave room in the design for regular software updates.

Wednesday, March 13 2013

Reading indeterminate contents might as well be undefined

Warning: on a punctiliousness scale ranging from zero to ten, this post is a good nine-and-a-half. There was no tag for that, so I tagged it both “C99” and “C11”. The faithful reader will know what to expect. There is a bit of C90, too.

To summarize, it may appear that according to the letter of modern C standards, it is only dangerous to use uninitialized variables, instead of very dangerous. Nevertheless, this post shows that it does not matter what the standards say: compilers will bite you even when you are arguably right.

Some context in the form of a link

In 2012, Xi Wang wrote a nice blog post showing it is not a good idea to use an uninitialized variable as a source of additional entropy when trying to create a random seed.

“Xoring an uninitialized variable with whatever other source of entropy you already have cannot hurt”, the conventional thinking goes. Conventional thinking is wrong. Your typical modern compiler deletes the code that gathers the original entropy, since it is only going to be xored with an uninitialized variable. Hence the title of Xi Wang's blog post, More Randomness or Less.
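The doomed pattern can be sketched as follows (a hedged reconstruction, not Xi Wang's actual code; make_seed is a name invented for illustration):

```c
#include <time.h>

/* Hypothetical sketch of the anti-pattern: xoring an uninitialized
   variable into a seed "cannot hurt" the entropy already gathered.
   Because reading junk is undefined behavior (or at best yields an
   unspecified value), a modern optimizer is free to delete the
   whole computation that feeds into the xor. */
unsigned int make_seed(void)
{
    unsigned int junk;                         /* never initialized */
    unsigned int seed = (unsigned int)time(0); /* real entropy source */
    return seed ^ junk; /* may compile to "return whatever is in a register" */
}
```

Compiled with optimizations, the call to time() can disappear entirely, which is exactly the behavior Xi Wang observed.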

In C90, “indeterminate” was simple

In the nineties, real men were real men, C standards were short, and reading indeterminate contents (such as uninitialized variables) was listed in the very definition of “undefined behavior”:


Unspecified behavior — behavior, for a correct program construct and correct data, for which the Standard imposes no requirements.

Undefined behavior — behavior, upon use of a nonportable or erroneous program construct, of erroneous data, or of indeterminately-valued objects, for which the Standard imposes no requirements.

“Undefined behavior” means the compiler can do what it wants, so the behavior noticed by Xi Wang can in no way be held against a C90 compiler.

In 1999, C standards became more complicated

The C99 standard does not directly list “reading indeterminate contents” as undefined behavior. Instead, it defines indeterminate contents as “either an unspecified value or a trap representation”. Reading a trap representation causes undefined behavior. The nuance here is that the type unsigned char is guaranteed not to have any trap representations (and thus can always be used to read indeterminate contents).
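To illustrate the unsigned char exception, consider this sketch (dump_bytes is a hypothetical helper, not from any program discussed here):

```c
#include <stdio.h>
#include <stddef.h>

/* Render the bytes of any object as hexadecimal, reading them
   through an unsigned char lvalue. C99 guarantees that unsigned
   char has no trap representations, so this is legal even when the
   object's contents are indeterminate: each byte then reads as
   some unspecified value. Returns the number of characters written. */
size_t dump_bytes(const void *obj, size_t n, char *out)
{
    const unsigned char *p = obj;
    for (size_t i = 0; i < n; i++)
        sprintf(out + 2 * i, "%02x", p[i]);
    return 2 * n;
}
```

Calling dump_bytes(&u, sizeof u, buf) on an uninitialized int u produces some hexadecimal digits, but the reading itself involves no undefined behavior.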

Less randomness: the simplified version

“But my compilation platform does not have trap representations for type int, either, therefore I can use an uninitialized int variable and expect an unspecified value (a much better prospect than undefined behavior)”, one may think. This line of reasoning is attractive. It could even explain the behavior shown in Xi Wang's blog post and reproduced in simplified form below:

$ cat i.c
int f(int x)
{
  int u;
  return u ^ x;
}
$ gcc -O2 -std=c99 -S -fomit-frame-pointer i.c
$ cat i.s
	…
f:
	ret
	…

On this 64-bit platform, the argument x passed to f() is in register %edi, and the result of f() is expected in register %eax. Thus, by executing instruction ret directly, function f() is not even giving us back the entropy we provided it. It is instead giving us the current contents of %eax, which may not be random at all.

(Giving us back the entropy we passed to it would have been mov %edi, %eax followed by ret, a longer sequence.)

One may argue that the compiler has only opportunistically chosen the most convenient value for variable u, that is, x xored with the current contents of %eax, so that u xored with x is just the current contents of register %eax. This fits the interpretation of “unspecified value” for C99's definition of “indeterminate contents”. It is a good argument, but just wait until you have seen the next example.

The next example

#include <stdio.h>

int main(int c, char **v)
{
  unsigned int j;
  if (c==4)
    j = 1;
  else
    j *= 2;
  printf("j:%u c:%d\n", j, c);
}
If fewer than three command-line arguments are passed to the program, it should display an unspecified even number for j, right?

$ gcc -v
Using built-in specs.
Target: x86_64-linux-gnu
gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1) 
$ gcc -O2 t.c
$ ./a.out 
j:1 c:1

GCC version 4.4.3 has decided that since the “else” branch was reading from an uninitialized variable j, only the “then” branch was worth compiling. This is acceptable if reading uninitialized variable j is undefined behavior, but not if it merely produces an unspecified value. Let us insist:

$ gcc -Wall -O2 -std=c99 t.c
$ ./a.out 
j:1 c:1

Although we are requesting the C99 standard to be followed by GCC, the program is not printing for variable j the even unspecified value we are entitled to.

(In passing, a proper static analyzer would know that if it is going to show variable j as containing 1, it might as well show c as containing 4. Also, a proper static analyzer would remind you that your program must, in essence, only be used with three command-line arguments. The reason compilers do not do this is covered elsewhere.)

Between 1999 and 2011, C standards did not get shorter

In 2007, Rich Peterson, working at HP, was disappointed to find that the “Not a Thing” (NaT) value that registers can have on the Itanium architecture could not be used to implement an uninitialized unsigned char.

One thing led to another, and the C11 standard was amended with the phrase “If the lvalue designates an object of automatic storage duration that could have been declared with register storage class (never had its address taken), and that object is uninitialized (not declared with an initializer, and no assignment to it has been performed prior to the use), the behavior is undefined.”

That would have been my reaction too, if I were paid by the word. Anyway, this additional sentence re-introduces undefined behavior where there was none in C99.

In the example above, the address of j was never taken, so maybe that's GCC's excuse. Let us check:

#include <stdio.h>

int main(int c, char **v)
{
  unsigned int j;
  unsigned int *p = &j;
  if (c==4)
    j = 1;
  else
    j *= 2;
  printf("j:%u c:%d\n", j, c);
}

$ gcc -O2 t.c
$ ./a.out 
j:1 c:1

No, GCC is still acting as if j *= 2; was undefined.


I am not saying that this is not a bug in GCC. Perhaps it was fixed in later versions (in fact, that version does not accept -std=c11, so it must be rather old). My thesis is that you might as well avoid reading from uninitialized variables as if it was undefined behavior, because otherwise compilers will bite you. This statement holds even if what we have witnessed here is a bug in GCC version 4.4.3.

Also, if this is a bug in GCC 4.4.3, this is the first time I have identified a bug in GCC without the assistance of a random program generator such as Csmith. In other words, compiler bugs are rare, but they become surprisingly common if you stick to a strict interpretation of a necessarily ambiguous standard. And if there is indeed a GCC bug here, said bug cannot be detected with Csmith, which does not generate programs like mine.

Sunday, March 3 2013

Correct rounding or mathematically-correct rounding?

CRlibm is a high-quality library of floating-point elementary functions. We used it as reference a long time ago in this blog while looking at lesser elementary function implementations and the even lesser properties we could verify about them.

A bold choice

The CRlibm documentation contains this snippet:

[…] it may happen that the requirement of correct rounding conflicts with a basic mathematical property of the function, such as its domain and range. A typical example is the arctangent of a very large number which, rounded up, will be a number larger than π/2 (fortunately, ◦(π/2) < π/2). The policy that will be implemented in crlibm will be

• to give priority to the mathematical property in round to nearest mode (so as not to hurt the innocent user who may expect such a property to be respected), and

• to give priority to correct rounding in the directed rounding modes, in order to provide trustful bounds to interval arithmetic.

The choice for directed rounding modes is obviously right. I am concerned about the choice made for round-to-nearest. The documentation states the dilemma very well. One can imagine slightly out of range values causing out-of-bound indexes during table look-ups and worse things.

I seldom reason about floating-point programs. I work on static analysis and am only concerned about floating-point inasmuch as it is a requirement for writing a static analyzer correct for programs that include floating-point computations.

However, when I do reason about floating-point programs, I am more often compounding approximations, starting from the base assumption that a correctly rounded function returns a result within 1/2ulp of the mathematical result, than I am assuming that atan(x) ≤ π/2. The choice the CRlibm implementors made means that suddenly, the reasoning I often make is wrong. The value of atan(x) in the program may not be within 1/2ulp of the real arctangent of the same x. It can be further away when x is very large and mathematical correctness overrode correct rounding.

Truck drivers fall asleep at the wheel when they face long, dull stretches of straight empty roads. Similarly, it is good to have another special case to consider when reasoning about floating-point computations. With only infinites and denormals to worry about, it can get, you know, a bit too easy.

Oh well, it's only π/2

In this section I rhetorically assume that it is only π/2 for which there is a problem. The CRlibm documentation reminds us that in the case of double precision, we were lucky. Or perhaps it isn't luck, and the IEEE 754 committee took the desirableness of the property (double)π/2 < π/2 into account when it chose the number of bits in the significand of the double-precision format.

How lucky (or careful) have we been, exactly? Let us test it with the program below — assuming my compilation platform works as intended.

#include <stdio.h>

#define PI(S) 3.1415926535897932384626433832795028841971693993751##S

float f = PI(f);
double d = PI();
long double ld = PI(L);

int main(){
  printf("   3.14159265358979323846264338327950288419716939937510\n");
  printf("f  %.50f\n", f);
  printf("d  %.50f\n", d);
  printf("ld %.50Lf\n",ld);
}

The result of compiling and executing the program is, for me:

f  3.14159274101257324218750000000000000000000000000000
d  3.14159265358979311599796346854418516159057617187500
ld 3.14159265358979323851280895940618620443274267017841

As you can see, the nearest single-precision float to π is above π, as is the nearest 80-bit long double. The same goes for π/2 because the floating-point representations for π and π/2 only differ in the exponent. Consequently, the issue raised by the CRlibm implementors will come up for both functions atanf() and atanl(), when it is time to get them done. We were not very lucky after all (or careful when defining the IEEE 754 standard).

A subjective notion

But what exactly is the informal “mathematical correctness” notion that this post is predicated upon? Yes, the “innocent user” may expect mathematical properties to be respected as much as possible, but there are plenty of mathematical properties! Let us enumerate some more:

If x ≤ 1 in a program, then exp(x) should always be lower than the mathematical constant e.

So far so good. The above is a good rule for an exponential implementation to respect. We are making progress.

Here is another property:

If x ≥ 1 in a program, then exp(x) should always be greater than the mathematical constant e.

We are decidedly unlucky today, because at most one of these is going to be true of any floating-point function exp(). The programmatic value exp(1) must be either above or below the mathematical constant e (it is never equal to it because the mathematical constant e does not have a finite representation in binary).

Why does it matter anyway?

Let us revisit the argument:

to give priority to the mathematical property in round to nearest mode (so as not to hurt the innocent user who may expect such a property to be respected)

I alluded to a possible problem with a programmer computing an array index from atanf(x) under the assumption that it is always lower than π/2. But how exactly would an innocent user even notice that atanf(1e30) is not lower than π/2? The value π/2 cannot exist in eir program any more than e. The user might innocently write an assertion like:

assert(atanf(x) <= 3.1415926535897932f / 2.0f);
This assertion will never trigger! The function atanf() will indeed return at most the single-precision float 3.1415926535897932f/2.0f. It does not matter that this number is actually slightly larger than π/2. For all intents and purposes, in the twisted world of single-precision floating-point, this number is π/2.


There are other scenarios in which the innocent user might genuinely have an unpleasant surprise. The result of a computation may be converted to decimal for humans to read and the user may be surprised to see a value outside the range ey expected. But this user would have the wrong expectations, just as if ey expected 10.0 * atan(x) to always be less than 5π. Plenty of these users and developers can be found. But my opinion, for what it is worth, is that by making special cases you are not helping these users, only feeding their delusions.

The correct way to set expectations regarding the results of a floating-point program is numerical analysis. Numerical analysis is hard. Special cases such as the one the CRlibm authors threaten to implement only seem to make it harder.

Tuesday, February 26 2013

Portable modulo signed arithmetic operations

In a recent post, John Regehr reports on undefined behaviors in Coq! In Coq! Coq! Exclamation point!

Coq is a proof assistant. It looks over your shoulder and tells you what remains to be done while you are elaborating a mathematical proof. Coq has been used to check that the 4-color theorem really holds. It would be supremely ironic if an undefined behavior in a bit of C code in the OCaml implementation used to compile Coq caused it to accept as valid some incorrect proofs, for instance in the formally verified (in Coq) compiler CompCert.

Your mission, should you decide to accept it

Where are these undefined behaviors? They are in several places, but some of them are in the implementation of the bytecode arithmetic operations. In OCaml, basic operations such as addition and multiplication are specified as returning the mathematical result modulo 2^31, 2^32, 2^63 or 2^64, depending on the compilation platform and the integer type.

The problem here is not limited to OCaml. The Java Virtual Machine too specifies that arithmetic operations on its 32-bit integer type return results modulo 2^32. Someone wishing to implement a JVM in portable C would face the same issues.
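As a sketch of the general recipe (not code from the OCaml runtime or any JVM; add_wrap32 is a name invented here), a portable modulo-2^32 signed addition detours through unsigned arithmetic:

```c
#include <stdint.h>

/* Portable modulo-2^32 signed addition. Unsigned overflow is
   defined to wrap around, so the sum is computed in uint32_t; the
   conversion of an out-of-range value back to int32_t is
   implementation-defined, and we only target implementations that
   define it the 2's-complement way. */
int32_t add_wrap32(int32_t x, int32_t y)
{
    return (int32_t)((uint32_t)x + (uint32_t)y);
}
```

On 2's-complement hardware, a decent compiler turns this into the same single addition instruction as the naive signed version, without the undefined behavior.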

In fact, this is a big enough issue that we should collectively try to work out a solution now.

A safer signed addition instruction

Let us assume a 32-bit platform. In this case, values of the unboxed OCaml type int range between -2^30 and 2^30-1 (31 bits of information). They are stored at run-time in the 31 most significant bits of a machine word, with the least significant bit set to one to distinguish it from a pointer. This representation is taken into account when applying arithmetic operations to machine words representing unboxed integers.

Here is an example, in file interp.c in the OCaml implementation. We are in the middle of the large switch that decodes bytecode instructions:

      accu = (value)((intnat) accu + (intnat) *sp++ - 1); Next;

The - 1 in this expression compensates for the fact that the least significant bit is set in both operands, and we want it to be set in the result (and no carry to be propagated). But as written, there is an undefined overflow in the C implementation of the bytecode interpreter when the unboxed integers in accu and *sp total more than 2^30-1. In order to avoid a signed overflow here, the instruction ADDINT could be written:

      accu = (value)((uintnat) accu + (uintnat) *sp++ - 1U); Next;

This solution takes advantage of the fact that unlike signed addition, the behavior of unsigned addition when the result overflows the destination type is defined by the C standard. The solution above does make assumptions about the conversion from a signed to an unsigned type and vice versa, but these are implementation-defined (and we are only targeting implementations that define these conversions as 2's-complement).

No overhead is involved in this approach: a half-decent C compiler should generate the same instruction sequence for the new version as it already generates for the old version.
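To check the arithmetic behind the - 1, write the tagged representation of an OCaml integer n as 2n+1. The sketch below models the encoding with hypothetical helpers (tag, untag and addint are names invented here, not identifiers from the OCaml runtime):

```c
#include <stdint.h>

typedef uintptr_t value;  /* a machine word, as in the OCaml runtime */

/* Integer n is represented as 2n+1: the low bit is set to
   distinguish it from a pointer. untag assumes an arithmetic right
   shift on signed types, as the OCaml runtime itself does. */
static value tag(intptr_t n)   { return ((uintptr_t)n << 1) | 1; }
static intptr_t untag(value v) { return (intptr_t)v >> 1; }

/* ADDINT on tagged words: (2a+1) + (2b+1) - 1 = 2(a+b) + 1, and no
   carry leaks into the tag bit. Computing in unsigned arithmetic
   makes any overflow wrap around, matching OCaml's modulo
   semantics. */
static value addint(value a, value b) { return a + b - 1U; }
```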


You may be vaguely worried about multiplication. For instance, you may remember that while, in a typical instruction set, there is only one addition instruction that works for both signed and unsigned operands, there are dedicated signed multiplication and unsigned multiplication instructions. These are IMUL and MUL in the x86 family of processors.

Fear not! There are two multiplications only because, on x86 and many other processors, 32-bit multiplication returns a 64-bit result. If you want all 64 bits of the result to be what you expect, you need to choose the instruction that interprets the operands' highest bit correctly. But if, on the other hand, you only care for the lowest 32 bits of the result (that is, you only want a result modulo 2^32), then it does not matter which multiplication you use. Think of it as (x + k1 * 2^32) * (y + k2 * 2^32) for unknown k1 and k2.
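A quick sanity check of that argument (a demonstration, not a proof; mul_low32 is a name invented here):

```c
#include <stdint.h>

/* The low 32 bits of a product do not depend on whether the
   operands are read as signed or unsigned: the two readings of the
   same bit pattern differ by a multiple of 2^32, which vanishes
   modulo 2^32. Computing in uint32_t keeps the overflow defined. */
uint32_t mul_low32(int32_t x, int32_t y)
{
    return (uint32_t)x * (uint32_t)y;
}
```

For instance, (-1) * (-1) and 0xFFFFFFFF * 0xFFFFFFFF agree on their low 32 bits: both are 1 modulo 2^32.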

And this is our case. Thus, in interp.c, the implementation for bytecode instruction MULINT could be changed from:

  accu = Val_long(Long_val(accu) * Long_val(*sp++)); Next;

to:

  accu = Val_long((intnat)((uintnat)Long_val(accu) * (uintnat)Long_val(*sp++))); Next;


As pointed out by John Regehr in the comments, the two fixes above are appropriate because the OCaml language definition specifies wrap-around behavior for its (signed) arithmetic operations. One should not assume that all the warnings emitted by IOC have a similar cause. Some may be deeper bugs requiring the logic to be fixed, instead of conversions to unsigned types that only ensure wrap-around behavior.

Issues of the nature discussed here should be in the bytecode version of the OCaml compiler only. By contrast, the native compiler should translate OCaml-level multiplication directly to assembly multiplication, bypassing the portable^Wwoefully underspecified assembly that is the C language.

Even programs generated by the native compiler interface with bits of OCaml runtime written in C, but this runtime is small.

So actually, there is not much cause for worry. If you are using a C compiler or a static analysis platform that involves OCaml code on a modern computer, it is probably already compiled with the native OCaml compiler.

Saturday, February 16 2013

From Facebook to Silent Circle, through the “metrics” Frama-C plug-in

From Facebook to Silent Circle

Some computers at Facebook were recently compromised because of a zero-day in Java. Nothing unexpected. Last December, instead of writing a full blog post, I lazily linked to Robert Graham predicting this sort of thing for the year 2013.

Speaking of Facebook, do you know who cares for your privacy? The good people at Silent Circle. They provide end-to-end encryption to use on your mobile phone, so that the only people to hear your discussions with your loved ones or business partners are you and them, and perhaps passengers in the same compartment as you, if you are this sort of sociopath.

Speaking of Robert Graham, he has written another excellent post, on the dangers of only focusing on the obviously important, the hard-to-get-right but known-to-be-hard-to-get-right cryptographic stuff, and neglecting the mundane, such as parsing. He takes the example of source code recently released by Silent Circle, finding traces of amateurism in the implementation of SDP. This is not where the hard cryptographic stuff takes place. But because this is a C++ implementation, any exploitable flaw at this level could have deep security implications. A buffer overflow could result in execution of code chosen by the attacker, and the attacker-chosen code could disable the cryptography entirely, or worse.

An attack on the protocol implementation is the equivalent of a $5 wrench in a well-known cryptography-related XKCD comic that probably does not require linking to at this point.

A mysterious commit message

[Metrics] Fix most complicated plugin ever, metrics (globals counted twice if declared and defined)

The above is the message attached by a facetious colleague to commit 21435 during the week that just elapsed. After inquiry, the message is at least one-third sarcastic. Still, no-one ever said that the metrics plug-in should be simpler or easier to get right on the first try than any of the other Frama-C plug-ins. And in fact, this was neither the first nor the last minor issue fixed in that plug-in last week.

Frama-C's metrics plug-in gathers various measurements of a syntactic nature. The “metrics” plug-in can help check how much of an unfamiliar codebase has actually been analyzed with a plug-in of a more semantic flavor, such as the value analysis. It is used, in context, in the next section of this post.

One reason “metrics” is as hard to get right as any other Frama-C plug-in is that it deals with syntax. Semantic plug-ins such as the value analysis only need to manipulate abstract objects such as functions and variables. In the metrics plug-in, details matter, such as the nuance between the declaration (in a .h header file) and the definition (in a .c file) of the same function (that was the bug being fixed in commit 21435), or the subtle difference between the source location of a variable and the source location of its initializer (that was another bug fixed this week).

Metaphorically, the metrics plug-in is to the Frama-C platform what the SDP parser is to an end-to-end encryption solution. It is not sexy, but it is just as hard to get right and as essential as the rest.

Using the metrics plug-in

Life being what it is, we sometimes find ourselves analyzing very unfamiliar code. I am talking code without documentation here, and often without either the hardware or a minimal description of the hardware that would be necessary to understand what behaviors can be exercised in the source code. This is one case when the metrics plug-in helps either anticipate how much work will be necessary, or how much has already been done.

Inspired by Robert Graham's post, I converted the Silent Circle SDP parser from C++ to C so as to see whether I could not find a bug in it using Frama-C's value analysis. At some point I had this modified parseSDP.c file and this modified sdp.h file. The originals can be seen here for parseSDP.c and sdp.h.

$ frama-c -cpp-command "gcc -C -E -I. -I `frama-c -print-share-path`/libc" parseSDP.c -metrics
[kernel] preprocessing with "gcc -C -E -I. -I /usr/local/share/frama-c/libc  parseSDP.c"
parseSDP.c:120:[kernel] warning: Calling undeclared function ipstr2long. Old style K&R code?
parseSDP.c:121:[kernel] warning: Calling undeclared function reverseIP. Old style K&R code?
parseSDP.c:328:[kernel] warning: Calling undeclared function u. Old style K&R code?
[metrics] Defined functions (6)
           findMediaId (0 call); hasAttrib (0 call); findCrLf (4 calls);
           findip (3 calls); getMediaParams (1 call); parseSDP (0 call); 
          Undefined functions (10)
           printf (1 call); strlen (1 call); strncmp (6 calls); atoi (2 calls);
           isdigit (4 calls); islower (1 call); isxdigit (1 call); ipstr2long (1 call);
           reverseIP (1 call); u (4 calls); 
          Potential entry points (3)
           findMediaId; hasAttrib; parseSDP; 

This snapshot arrives at a good time to use the metrics plug-in, as above. The plug-in says that good entry points to start the analysis from are findMediaId(), hasAttrib() and parseSDP(). The heuristic here is that these functions are provided and aren't called, so they are likely to be part of the API.

The metrics plug-in also says that the only missing functions are those from printf() to isxdigit() (these are part of the standard library), u() (I introduced this function myself when converting from C++ to C), and ipstr2long() and reverseIP(), that are the only two that require digging up from elsewhere in the codebase. At that point I decided that the functions probably weren't important and that I could let Frama-C infer some sort of behavior for them based on their respective types.

The $5-wrench kind of flaw in the verification process would be to fail to notice that an important function is missing. Then the value analysis might report few or no issues, not because the code is safe, but because the analysis failed to visit most of it. In another scenario, the value analysis might fail to visit a large part of the code because there is an error early on that makes it look like the rest is unreachable. The “metrics” plug-in is the best safeguard we have against this kind of mistake, which is easy to make when verifying unfamiliar code.

A quick look at function parseSDP()

I finally used the command line:

$ frama-c -cpp-command "gcc -C -E -I. -I `frama-c -print-share-path`/libc" parseSDP.c \
      -lib-entry -main parseSDP -context-valid-pointers -context-width 20 -val

Here are a few alarms found this way, cherry-picked as interesting because they are caused directly by the contents of the buffer pointed to by buf, which we assume to be under an attacker's control for this exercise.

Some alarms seem by contrast to be caused by the contents of *psdp. These are not as interesting to investigate, because that structure probably has some invariants it is supposed to satisfy, so the alarms we get there are only caused by the lack of context.

From the way it is handled inside parseSDP(), it seems that buf is only assumed to point to a zero-terminated string. It is a good way to pass a Sunday afternoon to see whether these alarms correspond to a real danger of buffer overflow for some input.

parseSDP.c:84:[kernel] warning: out of bounds read. assert \valid_read(buf+1);
parseSDP.c:82:[kernel] warning: out of bounds read. assert \valid_read(buf);
parseSDP.c:162:[kernel] warning: out of bounds read. assert \valid_read(buf);
parseSDP.c:165:[kernel] warning: out of bounds read. assert \valid_read(buf-2);
parseSDP.c:167:[kernel] warning: out of bounds read. assert \valid_read(buf+1);
parseSDP.c:262:[kernel] warning: out of bounds read. assert \valid_read(buf);
parseSDP.c:291:[kernel] warning: out of bounds read. assert \valid_read(buf);
parseSDP.c:294:[kernel] warning: out of bounds read. assert \valid_read(buf);
parseSDP.c:300:[kernel] warning: out of bounds read. assert \valid_read(buf);
parseSDP.c:303:[kernel] warning: out of bounds read. assert \valid_read(buf);
parseSDP.c:308:[kernel] warning: out of bounds read. assert \valid_read(buf);
parseSDP.c:308:[kernel] warning: out of bounds read. assert \valid_read(buf);
parseSDP.c:309:[kernel] warning: out of bounds read. assert \valid_read(buf);

Thursday, January 24 2013

Customers, customers, customers

The recent posts on extremely minor undefined behaviors in zlib neatly tie in with a discussion on John Regehr's blog about the future-proofitude of C and C++.

Another insightful post in this regard is this one by Derek Jones. Derek claims that the situation is different for proprietary compilers with paying customers. The argument rings true to me. The only proprietary compiler I know that competes with GCC or Clang in terms of aggressive optimization (at the cost of breakage of existing code) is the Intel C++ compiler. But that is not a counter-example: I did not pay for icc, nor do I know anyone who pays for it in order to use it in production.

Sunday, December 30 2012

December in Security

Robert Graham, of the blog Errata Security, predicts that “vulnerabilities in Acrobat Reader, Adobe Flash, and Java today […] will be announced and patched in 2013”.

As fate would have it, he could safely have included Internet Explorer 8 in his list of software products used by millions to process untrusted inputs.

Monday, November 26 2012


A remark I have not heard about the last two posts is the following.

Pascal, how can you, in your last post, claim that formal verification would have found the bug in your decimal-to-floating-point function? This is the kind of outrageous claim you rant against in your penultimate post! Formal verification did not find the bug. A code review did.

My answer is twofold. First, subscribers get the blog they deserve. If you cannot rail and mock such obvious contradictions, what is to prevent me from putting them in? Also, I was not advertising a particular OCaml formal verification tool. I do not know any that accepts the full language. It was fully confident in the absence of any such tool that I claimed that if it existed, and was usable and automatic enough to have actually been used, it could have found the bug described in the last post, if provided with an appropriate specification. Which, in the case at hand, could only be a decimal-to-floating-point implementation in itself.

The truth is that Frama-C development involves a lot of old-fashioned testing and debugging. One factor is that available OCaml programmer-assistance tools do not always scale to projects the size of Frama-C. Another is that Frama-C uses some hard-to-get-right, hard-to-diagnose techniques (for instance, mixing data unmarshaling and hash-consing).

Speaking of which, I think my Frama-C debugging colleagues will agree with me that this introduction to debugging is a nice read.

Sunday, November 18 2012

About the rounding error in these Patriot missiles

An old rant: misusing Ariane 5 in motivation slides

I was lucky to be an intern and then a PhD student at INRIA, while it was still called “INRIA” (it is now called “Inria”). This was about when researchers at INRIA and elsewhere were tasked with understanding the unfortunate software failure of the Ariane 5 maiden flight. So I heard the story from people I respect and who knew at least a little bit about the subject.

Ever since, it has been irking me when this example is taken for the purpose of motivating some formal technique or other. Unless you attend this sort of CS conference, you might not believe the non-solutions that get pitched as bringing us all closer to a world in which Ariane 5 rockets do not explode.

What irks me is this: even if the technique being motivated were comparably expensive to traditional testing, requiring comparable or a little less time for comparable or a little more confidence, that would not guarantee it would have been used to validate the component. The problem with Ariane 5 was not a failure of traditional tests. The problem was that, because of constraints of time and money, traditional tests were not applied to the component that eventually failed.

If your method is not cheaper and faster than not doing the tests, do not imply that it would have saved Ariane 5. It might have been omitted too.

The report is online. Key quote: “no test was performed to verify that the SRI would behave correctly when being subjected to the count-down and flight time sequence and the trajectory of Ariane 5.”

A new rant: the Patriot missile rounding error

If you attend that sort of conference, you have also heard about this other spectacular software failure, the Patriot missile rounding error. This bug too is used to justify new techniques.

This one did not irk me until today. I did not happen to be in the right place to get the inside scoop by osmosis. Or at the wrong time.

I vaguely knew that it had something to do with a clock inside the Patriot missile measuring tenths of seconds and the constant 0.1 not being representable in binary. When researchers tell the story on a motivation slide, it sounds like a stupid mistake.

There is nothing reprehensible in calculating with constants that are not represented finitely in binary. Other computations inside the missile surely involved the number π, which is not finitely representable in binary either. The designer of the system simply must understand what ey is doing. It can get a little tricky, especially when the software is evolved. Let me tell you about the single-precision cosf() function from the Glibc library in another rant.
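For reference, the back-of-the-envelope arithmetic behind the motivation-slide version of the story fits in a few lines. The figures below are the widely circulated ones (a 0.1 s clock tick truncated to 23 fractional bits in a 24-bit register, 100 hours of continuous uptime), and the sketch reproduces only the simplistic telling, not the subtler reality:

```python
from math import floor

# 0.1 truncated to 23 fractional bits, as in the usual account of the
# Patriot's 24-bit register (0.1 has an infinite binary expansion).
TENTH = floor(0.1 * 2**23) / 2**23

error_per_tick = 0.1 - TENTH    # about 9.5e-8 s lost at every tick
ticks = 100 * 3600 * 10         # 100 hours of 0.1 s ticks
drift = error_per_tick * ticks  # about 0.34 s of accumulated drift

print(f"clock drift after 100 hours: {drift:.4f} s")
```

A third of a second is plenty at the speed of an incoming missile, which is why the simplistic telling sounds so damning; the summary discussed below suggests the actual mechanism was less about the truncation itself than about inconsistent code paths.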

As in the Ariane 5 case, a rather good-looking summary of the issue is available. Assuming that this summary is correct, and it certainly looks more plausible than the rash explanations you get at motivation-slide time, the next time I hear a researcher use the Patriot missile example to motivate eir talk, I will ask the questions that follow.

  1. When are you adapting your system to ad-hoc, 24-bit fixed-point computations (not whether it is theoretically possible but when you are doing it)?
  2. When are you adapting your system to ad-hoc, non-IEEE 754, 48-bit floating-point computations?
  3. Will your system then detect the drift between the results of the same computation having gone down one path and the other?

If your system is only practical in an alternate universe in which the Patriot missile software is cleanly implemented with good old IEEE 754 double-precision values: sorry, but in that universe, the Patriot missile does not exhibit the problem you are claiming to solve.

Thanks to Martin Jambon for proof-reading this post.

Monday, November 12 2012

November in Security

Bruce Schneier is, among other things, the author of the blog Schneier on Security. He is also one of the co-authors of the Skein cryptographic hash function, the SHA-3 contestant whose implementation is verified with Frama-C's value analysis, first in the tutorial in the manual and then on this blog. I feel silly introducing him, considering he is rather on the “well-respected” side of the security researcher spectrum. But Bruce did provide this month's cybersecurity link, this essay by Gary McGraw.

Key quote: “The kind of defense I advocate (called "passive defense" or "protection" above) involves security engineering -- building security in as we create our systems, knowing full well that they will be attacked in the future. One of the problems to overcome is that exploits are sexy and engineering is, well, not so sexy.”

Bruce, in his typical fisheye approach to security, also recently provided this other link about security in fairy wren nests. I thought this made a lot of sense, but then again, Hongseok Yang lent me a copy of Dawkins' The Selfish Gene when we were both post-doc students at KAIST, so your mileage may vary.

Since I came back to France, I bought my own copy of The Selfish Gene to lend others. If you are around and are curious, just ask.
