Frama-C news and ideas


Tag - floating-point


Friday, May 3 2013

Rounding float to nearest integer, part 2

The previous post showed how to round a positive float to the nearest integer, represented as a float, by converting back and forth to a 32-bit unsigned int. There was also the promise of at least one other method. Thanks to reader feedback, there will be two. What was intended to be the second post in the series is hereby relegated to the position of third post.

Rounding through bit-twiddling

Several readers seemed disappointed that the implementation proposed in the last post was not accessing the bits of float f directly. This is possible, of course:

  assert (sizeof(unsigned int) == sizeof(float));
  unsigned int u;
  memcpy(&u, &f, sizeof(float));

In the previous post I forgot to say that we were assuming 32-bit unsigned ints. From now on we are in addition assuming that floats and unsigned ints have the same endianness, so that it is convenient to work on the bit representation of one by using the other.

Let us special-case the inputs that can be mapped to zero or one immediately. We are going to need it. We could do the comparisons to 0.5 and 1.5 on u, because positive floats increase with their unsigned integer representation, but there is no reason to: it is more readable to work on f:

  if (f <= 0.5) return 0.;
  if (f <= 1.5) return 1.;

Now, to business. The actual exponent of f is:

  int exp = ((u>>23) & 255) - 127;

The explicit bits of f's significand are u & 0x7fffff, but there is no need to take them out: we will manipulate them directly inside u. Actually, at one point we will cheat and manipulate a bit of the exponent at the same time, but it will all be for the best.

A hypothetical significand for the number 1, aligned with the existing significand for f, would be 1U << (23 - exp). But this is hypothetical, because 23 - exp can be negative. If this happens, it means that f is in a range where all floating-point numbers are integers.

  if (23 - exp < 0) return f;
  unsigned int one = 1U << (23 - exp);

You may have noticed that since we special-cased the inputs below 1.5, variable one may be as large as 1 << 23, which almost, but not quite, aligns with the explicit bits of f's significand. Let us make a note of this for later. For now, we are interested in the bits that represent the fractional part of f, and these are always:

  unsigned int mask = one - 1;
  unsigned int frac = u & mask;

If these bits represent one half or less, the function must round down (remember that the specification lets us round either way when the fractional part is exactly one half). If this is the case, we can zero all the bits that represent the fractional part of f to obtain the integer immediately below f.

  if (frac <= one / 2)
  {
    u &= ~mask;
    float r;
    memcpy(&r, &u, sizeof(float));
    return r;
  }

And we are left with the difficult exercise of finding the integer immediately above f. If this computation stays in the same binade, this means finding the multiple of one immediately above u.

“binade” is not a word, according to my dictionary. It should be one. It designates a range of floating-point numbers such as [0.25 … 0.5) or [0.5 … 1.0). I needed it in the last post, but I made do without it. I shouldn't have. Having words to designate things is the most important wossname towards clear thinking.

And if the computation does not stay in the same binade, such as 3.75 rounding up to 4.0? Well, in this case it seems we again only need to find the multiple of one immediately above u, which is in this case the power of two immediately above f, and more to the point, the number the function must return.

  u = (u + mask) & ~mask;
  float r;
  memcpy(&r, &u, sizeof(float));
  return r;
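
To make the carry into the exponent concrete, here is a worked trace of the round-up path for the input 3.75f (a sketch that is not part of the original post; it assumes a strict IEEE 754 float and the same 32-bit, same-endianness hypotheses as above):

#include <stdio.h>
#include <string.h>

int main(void)
{
  float f = 3.75f;                        /* 1.111b * 2^1 */
  unsigned int u;
  memcpy(&u, &f, sizeof u);               /* u == 0x40700000 */
  int exp = ((u>>23) & 255) - 127;        /* 1 */
  unsigned int one = 1U << (23 - exp);    /* 0x400000 */
  unsigned int mask = one - 1;            /* 0x3fffff */
  unsigned int frac = u & mask;           /* 0x300000, more than one / 2 */
  u = (u + mask) & ~mask;                 /* 0x40800000: the carry ripples out
                                             of the significand and bumps the
                                             exponent field from 128 to 129 */
  float r;
  memcpy(&r, &u, sizeof r);
  printf("frac=0x%x  result=%.2f\n", frac, r);   /* frac=0x300000  result=4.00 */
  return 0;
}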


To summarize, a function for rounding a float to the nearest integer by bit-twiddling is as follows. I am not sure what is so interesting about that. I like the function in the previous post or the function in the next post better.

float myround(float f)
{
  assert (sizeof(unsigned int) == sizeof(float));
  unsigned int u;
  memcpy(&u, &f, sizeof(float));
  if (f <= 0.5) return 0.;
  if (f <= 1.5) return 1.;
  int exp = ((u>>23) & 255) - 127;
  if (23 - exp < 0) return f;
  unsigned int one = 1U << (23 - exp);
  unsigned int mask = one - 1;
  unsigned int frac = u & mask;
  if (frac <= one / 2)
    u &= ~mask;
  else
    u = (u + mask) & ~mask;
  float r;
  memcpy(&r, &u, sizeof(float));
  return r;
}
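
A small test driver (not part of the original post; it assumes strict IEEE 754 single-precision arithmetic) can be compiled together with the myround() definition above, which itself needs <assert.h> and <string.h>:

#include <stdio.h>

float myround(float f);   /* the bit-twiddling function above */

int main(void)
{
  float inputs[] = { 0.4f, 1.6f, 2.5f, 3.75f, 8388609.0f };
  for (int i = 0; i < 5; i++)
    printf("%.9g -> %.9g\n", inputs[i], myround(inputs[i]));
  /* Expected output (halfway cases such as 2.5 may round either way
     according to the specification; this implementation rounds them down):
     0.400000006 -> 0
     1.60000002 -> 2
     2.5 -> 2
     3.75 -> 4
     8388609 -> 8388609 */
  return 0;
}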

To be continued again

The only salient point in the method in this post is how we pretend not to notice when significand arithmetic overflows over the exponent, for inputs between 1.5 and 2.0, 3.5 and 4.0, and so on. The method in the next post will be so much more fun than this.

Thursday, May 2 2013

Harder than it looks: rounding float to nearest integer, part 1

This post is the first in a series on the difficult task of rounding a floating-point number to an integer. Laugh not! The easiest-looking questions can hide unforeseen difficulties, and the most widely accepted solutions can be wrong.

Problem

Consider the task of rounding a float to the nearest integer. The answer is expected as a float, same as the input. In other words, we are looking at the work done by standard C99 function nearbyintf() when the rounding mode is the default round-to-nearest.

For the sake of simplicity, in this series of posts, we assume that the argument is positive and we allow the function to round any which way if the float argument is exactly in-between two integers. In other words, we are looking at the ACSL specification below.

/*@ requires 0 ≤ f ≤ FLT_MAX ;
  ensures -0.5 ≤ \result - f ≤ 0.5 ;
  ensures \exists integer n; \result == n;
*/
float myround(float f);

In the second ensures clause, integer is an ACSL type (think of it as a super-long long long). The formula \exists integer n; \result == n simply means that \result, the float returned by function myround(), is a mathematical integer.

Via truncation

A first idea is to convert the argument f to unsigned int, and then again to float, since that's the expected type for the result:

float myround(float f)
{
  return (float) (unsigned int) f;
}

Obvious overflow issue

One does not need Frama-C's value analysis to spot the very first issue, an overflow for large float arguments, but it's there, so we might as well use it:

$ frama-c -pp-annot  -val r.c -lib-entry -main myround
...
r.c:9:[kernel] warning: overflow in conversion of f ([-0. .. 3.40282346639e+38])
   from floating-point to integer. assert -1 < f < 4294967296;

This overflow can be fixed by testing for large arguments. Large floats are all integers, so the function can simply return f in this case.

float myround(float f)
{
  if (f >= UINT_MAX) return f;
  return (float) (unsigned int) f;
}

Obvious rounding issue

The cast from float to unsigned int does not round to the nearest integer. It “truncates”, that is, it rounds towards zero. And if you already know this, you probably know too the universally used trick to obtain the nearest integer instead of the immediately smaller one, adding 0.5:

float myround(float f)
{
  if (f >= UINT_MAX) return f;
  return (float) (unsigned int) (f + 0.5f);
}

This universally used trick is wrong.

An issue when the ULP of the argument is exactly one

The Unit in the Last Place, or ULP for short, of a floating-point number is its distance to the floats immediately above and immediately below it. For large enough floats, this distance reaches one (and keeps growing for larger floats still). In the range where it is exactly one, floats behave as integers.

There is an ambiguity in the above definition for powers of two: the distances to the float immediately above and the float immediately below are not the same. If you know of a usual convention for which one is called the ULP of a power of two, please leave a note in the comments.

int main()
{
  float f = 8388609.0f;
  printf("%f -> %f\n", f, myround(f));
}

With a strict IEEE 754 compiler, the simple test above produces the result below:

8388609.000000 -> 8388610.000000

The value passed as argument is obviously representable as a float, since that's the type of f. However, the mathematical sum f + 0.5 does not have to be representable as a float. In the worst case for us, when the argument is in a range of floats separated by exactly one, the floating-point sum f + 0.5 falls exactly in-between the two representable floats f and f + 1. Half the time, it is rounded to the latter, although f was already an integer and was the correct answer for function myround(). This causes bugs such as the one displayed above.

The range of floating-point numbers spaced every 1.0 goes from 2^23 to 2^24. Half these 2^23 values, that is, nearly 4 million of them, exhibit the problem.
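
To isolate the culprit addition, here is a minimal illustration (not from the original post; it assumes strict IEEE 754 single precision and no extended intermediate precision):

#include <stdio.h>

int main(void)
{
  float f = 8388609.0f;    /* 2^23 + 1, a range where ulp(f) == 1 */
  float sum = f + 0.5f;    /* exact value 8388609.5, a tie, rounded to the
                              even significand 8388610.0f */
  printf("%.1f\n", sum);   /* prints 8388610.0 */
  return 0;
}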

Since the 0.5 trick is nearly universally accepted as the solution to implement rounding to nearest from truncation, this bug is likely to be found in lots of places. Nicolas Cellier identified it in Squeak. He offered a solution, too: switch the FPU to round-downward for the duration of the addition f + 0.5. But let us not fix the problem just yet: there is another interesting input for the function as it currently stands.

An issue when the argument is exactly the predecessor of 0.5f

int main()
{
  float f = 0.49999997f;
  printf("%.9f -> %.9f\n", f, myround(f));
}

This test produces the output 0.499999970 -> 1.000000000, although the input 0.49999997 is clearly closer to 0 than to 1.

Again, the issue is that the floating-point addition is not exact. The argument 0.49999997f is the last float of the interval [0.25 … 0.5). The mathematical result of f + 0.5 falls exactly midway between the last float of the interval [0.5 … 1.0) and 1.0. The rule that ties must be rounded to the even choice means that 1.0 is chosen.

A function that works

The overflow issue and the first non-obvious issue (when ulp(f)=1) can be fixed by the same test: as soon as the ULP of the argument is one, the argument is an integer and can be returned as-is.

The second non-obvious issue, with input 0.49999997f, can be fixed similarly.

float myround(float f)
{
  if (f >= 0x1.0p23) return f;
  if (f <= 0.5) return 0;
  return (float) (unsigned int) (f + 0.5f);
}

A better function that works

Changing the FPU rounding mode, the suggestion in the Squeak bug report, is slightly unpalatable for such a simple function, but it suggests to add the predecessor of 0.5f instead of 0.5f to avoid the sum rounding up when it shouldn't:

float myround(float f)
{
  if (f >= 0x1.0p23) return f;
  return (float) (unsigned int) (f + 0.49999997f);
}

It turns out that this works, too. It solves the problem with the input 0.49999997f without making the function fail its specification for other inputs.
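
As a sanity check, the two working versions can be compared side by side. The driver below is not from the original post; the function names are invented for the comparison, the bodies are the two versions above, and strict IEEE 754 single precision is assumed:

#include <stdio.h>

float myround_half(float f)              /* the version that adds 0.5f */
{
  if (f >= 0x1.0p23) return f;
  if (f <= 0.5) return 0;
  return (float) (unsigned int) (f + 0.5f);
}

float myround_pred(float f)              /* the version that adds 0.49999997f */
{
  if (f >= 0x1.0p23) return f;
  return (float) (unsigned int) (f + 0.49999997f);
}

int main(void)
{
  float inputs[] = { 0.49999997f, 1.4f, 2.5f, 8388609.0f };
  for (int i = 0; i < 4; i++)
    printf("%.9g -> %.9g %.9g\n", inputs[i],
           myround_half(inputs[i]), myround_pred(inputs[i]));
  /* Expected output (ties such as 2.5 are allowed to round either way):
     0.49999997 -> 0 0
     1.39999998 -> 1 1
     2.5 -> 3 3
     8388609 -> 8388609 8388609 */
  return 0;
}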

To be continued

The next post will approach the same question from a different angle. It should not be without its difficulties either.

Wednesday, May 1 2013

A conversionless conversion function

A rant about programming interview questions

Software development is a peculiar field. An applicant for a more traditionally artistic position would bring a portfolio to eir job interview: a selection of creations ey deems representative of eir work and wants to be judged by. But in the field of software development, candidates are often asked to solve a programming problem live, the equivalent of telling a candidate for a photography job “these are nice photographs you have brought, but could you take a picture for me, right now, with this unfamiliar equipment?”

A lot has already been written on programming job interview questions. Googling “fizzbuzz” alone reveals plenty of positions taken, reacted to, and counter-argued. I do not intend to add to the edifice. Taking a step back, however, I notice that many of these posts do not tackle the question of why what works for the traditional arts should not be appropriate for the art of programming.


What I intend to discuss is the poor quality of “do this simple task, but avoid this simple construct that makes it trivial” interview questions. I hate those. Every time I hear a new one, it seems to reach a new high in sheer stupidity. The questions are usually very poorly specified, too. One such question might be to convert a floating-point value to integer without using a cast. Is floor() or ceil() allowed? Are other library functions that solve the problem too directly allowed? May I use a union to access the bits of the floating-point representation? Or memcpy()?

Well, I have solved this particular question once and for all. The conversion function makes up the second part of this post. It uses only floating-point computations, no tricks. Now, one just needs to learn it and to regurgitate it as appropriate at interview time (not that anyone could write a working version of this program on a blackboard anyway). Who is hiring?

A program

The function below requires a strict IEEE 754 implementation. If your GCC is generating x87 instructions, options -msse2 -mfpmath=sse should prevent it from doing so and allow you to run the program:

#include <math.h>
#include <float.h>
#include <stdio.h>
#include <limits.h>

/*@ requires 0 <= f < ULLONG_MAX + 1 ; */
unsigned long long dbl2ulonglong(double f)
{
  if (f < 1) return 0;
  unsigned long long l = 0;

  for (double coef = 3; coef != 0x1.0p53; coef = 2 * coef - 1, l >>= 1)
    {
      double t = coef * f;
      double o = f - t + t - f;
      if (o != 0)
	{
	  l |= 0x8000000000000000ULL;
	  f -= fabs(o);
	}
    }
  l |= 0x8000000000000000ULL;
  for ( ; f != 0x1.0p63; f *= 2) l>>=1;
  return l;
}

int main()
{
  double f = 123456.;
  unsigned long long l = dbl2ulonglong(f);
  printf("result:%.18g %llu\n", f, l);

  f = ULLONG_MAX * 0.99;
  l = dbl2ulonglong(f);
  printf("result:%.18g %llu\n", f, l);

  printf("rounding:%llu %llu %llu %llu\n", 
	 dbl2ulonglong(DBL_MIN),
	 dbl2ulonglong(1.4),
	 dbl2ulonglong(1.5),
	 dbl2ulonglong(1.6));

  return 0;
}

The expected result is:

result:123456 123456
result:1.82622766329724559e+19 18262276632972455936
rounding:0 1 1 1

Incidentally, does anyone know how to write a correctness proof for this function? A formal proof would be nice, but just an informal explanation of how one convinces oneself that o = f - t + t - f is the right formula would already be something.

Saturday, April 6 2013

Non-expert floating-point-using developers need accurate floating-point libraries the most

Quotes on the Internet

In 2012, Lukas Mathis took a quote out of the context of a blog post by Marco Arment and ran with it. The result was a thought-provoking essay. Key quote:

This is a sentiment you often hear from people: casual users only need «entry-level» devices. Even casual users themselves perpetuate it: «Oh, I’m not doing much on my computer, so I always just go with the cheapest option.» And then they buy a horrid, underpowered netbook, find out that it has a tiny screen, is incredibly slow, the keyboard sucks, and they either never actually use it, or eventually come to the conclusion that they just hate computers.

In reality, it’s exactly backwards: proficient users can deal with a crappy computer, but casual users need as good a computer as possible.

Lukas fully develops the idea in his post, Crappy Computers. Go ahead and read it now if you haven't already. This blog will still be here when you come back.

Floating-point libraries

The idea expressed by Lukas Mathis applies identically in a much more specific setting: developing with floating-point computations. The developers who most need accurate floating-point libraries are those who least care about floating-point. These developers will themselves tell you that it is all the same to them. They do not know what an ULP (“unit in the last place”) is, so what difference does it make to them if they get an error of two ULPs where they could have had one, or half of one?

In this, they are just as wrong as the casual computer users who pick horrid netbooks for themselves.

Floating-point-wise, programming environments are not born equal

All recent processors for desktop computers provide basic operations +, -, *, / and square root for IEEE 754 single- and double-precision floating-point numbers. Each operation has its assembly instruction, and since the assembly instruction is the fastest way to implement the operation, compilers have no opportunity to mess things up in a misguided attempt at optimizing for speed.

Who am I kidding? Of course compilers have plenty of opportunities to mess things up.

  1. It may seem to a compiler that a compile-time computation is even faster than the assembly instruction provided by the processor, so that if the program computes x / 10.0, the compiler may compute 1 / 10.0 at compile-time and generate assembly code that multiplies x by this constant instead. This transformation causes the result to be less accurate in some rare cases.
  2. Or a compiler may simplify source-code expressions as if floating-point operations were associative when they aren't. It may for instance optimize a carefully crafted floating-point expression such as a + b - a - b into 0 (a short sketch of the damage follows this list).
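
To make the second point concrete, here is a sketch (not from the original post; it assumes strict IEEE 754 double precision and a compiler that does not re-associate) of why a + b - a - b is worth preserving: evaluated left to right on operands like the ones below, it recovers the rounding error committed by the addition a + b.

#include <stdio.h>

int main(void)
{
  double a = 1e16;               /* ulp(a) == 2, so a + 2.5 is not representable */
  double b = 2.5;
  double err = a + b - a - b;    /* ((a + b) - a) - b */
  printf("%g\n", err);           /* prints -0.5, the rounding error of a + b,
                                    not the 0 a re-associating compiler produces */
  return 0;
}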

Nevertheless, there has been much progress recently in standard compliance for compilers' implementations of floating-point. Overall, for programs that only use the basic operators, the situation has never been better.


The situation is not as bright when it comes to mathematical libraries. These libraries provide conversion to and from decimal, and transcendental elementary functions implemented on top of the basic operations. They are typically part of the operating system. Implementations vary wildly in quality from one operating system to the next.


Expert developers know exactly what compromise between accuracy and speed they need, and they typically use their own functions instead of the operating system's. By way of illustration, a famous super-fast pretty-accurate implementation of the inverse square root function is used in Quake III and has been much analyzed.


The casual developer of floating-point programs, on the other hand, will certainly use the functions provided by the system. Some of eir expectations may be naive, or altogether impossible to reconcile with the constraints imposed by the IEEE 754 standard. Other expectations may be common sense, such as a sin() function that does not return -2.76.

For such a developer, the mathematical libraries should strive to be as accommodating and standard-compliant as possible, because ey needs it, regardless of what ey thinks.

An example

To illustrate, I have written a string-to-long conversion function. It could have been an entry in John Regehr's contest, but since the deadline has passed, I have allowed the function to expect only positive numbers and to fail miserably when the input is ill-formed.

The function looks like this:

long str2long(char *p)
{
  size_t l = strlen(p);
  long acc = 0;
  
  for (size_t i=0; i<l; i++)
    {
      int digit = p[i] - '0';
      long pow10 = pow(10, l - 1U - i);
      acc += digit * pow10;
    }
  return acc;
}

Neat, huh?


I tested this function more than the last function. This time I compiled it and invoked it on a few strings:

  printf("%ld %ld %ld %ld\n",
         str2long("0"),
         str2long("123"),
         str2long("999"),
         str2long("123456789123456789"));

You can download the entire C code for yourself. If you run it, you should get:

0 123 999 123456789123456789

I wrote my function to work for all well-formed inputs that fit in a long (but I only tested it for four values, so do not embed it in your space shuttle, please). Some of the reasons why I expect it to work are implicit: for one, powers of ten up to 10^22 are exactly representable as double-precision floating-point numbers. Also, I happen to know that on the system I use, the mathematical library is one of the best available.


I am not, in fact, a floating-point expert. I could be completely illiterate with respect to floating-point and have written the exact same function. In fact, this happened to StackOverflow user1257. (I am not saying that StackOverflow user1257 is illiterate with respect to floating-point, either. Ey wrote a function similar to mine, after all.)

User1257's function returned 122 when applied to the string "123" !!!

This was so troubling that user1257 suspected a compiler bug. The reality is more likely that on eir computer, the statement long pow10 = pow(10, 2); sets variable pow10 to 99. The function pow() only needs to be inaccurate by 1ULP for this result to come up because of C's truncation behavior (towards zero) when converting from floating-point to integer.
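
To make the failure mode concrete, here is a tiny simulation (not user1257's actual environment) of a pow(10, 2) result that is one ULP below 100.0:

#include <math.h>
#include <stdio.h>

int main(void)
{
  double almost100 = nextafter(100.0, 0.0);     /* the double just below 100.0 */
  long pow10 = almost100;                       /* conversion truncates towards zero */
  printf("%.17g -> %ld\n", almost100, pow10);   /* 99.999999999999986 -> 99 */
  return 0;
}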

Conclusion

My str2long() function would fail just the same if it were run in user1257's compilation environment. I still think that my function is correct and that I should be able to expect results accurate to the ULP from the math library's pow() function. A floating-point expert would never even encounter the issue at all. I might be able to diagnose it and to cope. But the floating-point beginner simply needs an environment in which long pow10 = pow(10, 2); sets pow10 to 100.

If you program, and if you use floating-point at all, beware of relying on the math library equivalent of a crappy netbook.

Sunday, March 3 2013

Correct rounding or mathematically-correct rounding?

CRlibm is a high-quality library of floating-point elementary functions. We used it as reference a long time ago in this blog while looking at lesser elementary function implementations and the even lesser properties we could verify about them.

A bold choice

The CRlibm documentation contains this snippet:

[…] it may happen that the requirement of correct rounding conflicts with a basic mathematical property of the function, such as its domain and range. A typical example is the arctangent of a very large number which, rounded up, will be a number larger than π/2 (fortunately, ◦(π/2) < π/2). The policy that will be implemented in crlibm will be

• to give priority to the mathematical property in round to nearest mode (so as not to hurt the innocent user who may expect such a property to be respected), and

• to give priority to correct rounding in the directed rounding modes, in order to provide trustful bounds to interval arithmetic.

The choice for directed rounding modes is obviously right. I am concerned about the choice made for round-to-nearest. The documentation states the dilemma very well. One can imagine slightly out of range values causing out-of-bound indexes during table look-ups and worse things.


I seldom reason about floating-point programs. I work on static analysis and am only concerned about floating-point inasmuch as it is a requirement for writing a static analyzer correct for programs that include floating-point computations.

However, when I do reason about floating-point programs, I am more often compounding approximations, starting from the base assumption that a correctly rounded function returns a result within 1/2 ULP of the mathematical result, than I am assuming that atan(x) ≤ π/2. The choice the CRlibm implementors made means that suddenly, the reasoning I often make is wrong. The value of atan(x) in the program may not be within 1/2 ULP of the real arctangent of the same x. It can be further away when x is very large and mathematical correctness overrode correct rounding.

Truck drivers fall asleep at the wheel when they face long, dull stretches of straight empty roads. Similarly, it is good to have another special case to consider when reasoning about floating-point computations. With only infinities and denormals to worry about, it can get, you know, a bit too easy.

Oh well, it's only π/2

In this section I rhetorically assume that it is only π/2 for which there is a problem. The CRlibm documentation reminds us that in the case of double precision, we were lucky. Or perhaps it isn't luck, and the IEEE 754 committee took the desirability of the property (double)π/2 < π/2 into account when it chose the number of bits in the significand of the double-precision format.


How lucky (or careful) have we been, exactly? Let us test it with the program below — assuming my compilation platform works as intended.

#include <stdio.h>

#define PI(S) 3.1415926535897932384626433832795028841971693993751##S

float f = PI(f);
double d = PI();
long double ld = PI(L);

int main(){
  printf("   3.14159265358979323846264338327950288419716939937510\n");
  printf("f  %.50f\n", f);
  printf("d  %.50f\n", d);
  printf("ld %.50Lf\n",ld);
}

The result of compiling and executing the program is, for me:

   3.14159265358979323846264338327950288419716939937510
f  3.14159274101257324218750000000000000000000000000000
d  3.14159265358979311599796346854418516159057617187500
ld 3.14159265358979323851280895940618620443274267017841

As you can see, the nearest single-precision float to π is above π, as is the nearest 80-bit long double. The same goes for π/2 because the floating-point representations for π and π/2 only differ in the exponent. Consequently, the issue raised by the CRlibm implementors will come up for both functions atanf() and atanl(), when it is time to get them done. We were not very lucky after all (or careful when defining the IEEE 754 standard).

A subjective notion

But what exactly is the informal “mathematical correctness” notion that this post is predicated upon? Yes, the “innocent user” may expect mathematical properties to be respected as much as possible, but there are plenty of mathematical properties! Let us enumerate some more:


If x ≤ 1 in a program, then exp(x) should always be lower than the mathematical constant e.

So far so good. The above is a good rule for an exponential implementation to respect. We are making progress.


Here is another property:

If x ≥ 1 in a program, then exp(x) should always be greater than the mathematical constant e.


We are decidedly unlucky today, because at most one of these is going to be true of any floating-point function exp(). The programmatic value exp(1) must be either above or below the mathematical constant e (it is never equal to it because the mathematical constant e does not have a finite representation in binary).

Why does it matter anyway?

Let us revisit the argument:

to give priority to the mathematical property in round to nearest mode (so as not to hurt the innocent user who may expect such a property to be respected)

I alluded to a possible problem with a programmer computing an array index from atanf(x) under the assumption that it is always lower than π/2. But how exactly would an innocent user even notice that atanf(1e30) is not lower than π/2? The value π/2 cannot exist in eir program any more than e. The user might innocently write an assertion like:

assert(atanf(x)<=(3.1415926535897932f/2.0f));

This assertion will never trigger! The function atanf() will indeed return at most the single-precision float 3.1415926535897932f/2.0f. It does not matter that this number is actually slightly larger than π/2. For all intents and purposes, in the twisted world of single-precision floating-point, this number is π/2.

Conclusion

There are other scenarios in which the innocent user might genuinely have an unpleasant surprise. The result of a computation may be converted to decimal for humans to read and the user may be surprised to see a value outside the range ey expected. But this user would have the wrong expectations, just as if ey expected 10.0 * atan(x) to always be less than 5π. Plenty of these users and developers can be found. But my opinion, for what it is worth, is that by making special cases you are not helping these users, only feeding their delusions.

The correct way to set expectations regarding the results of a floating-point program is numerical analysis. Numerical analysis is hard. Special cases such as the ones the authors of CRlibm threaten to implement only seem to make it harder.

Thursday, November 29 2012

Solution to yesterday's quiz

Yesterday's quiz was about the expression *(char*)(float[]){x*x} - 63 (for big-endian architectures) or *(3+(char*)(float[]){x*x}) - 63 (for little-endian ones). This post provides an explanation.


First, let us try the function on a few values:

int main(){
  for (unsigned int i=0; i<=20; i++)
    printf("l(%2u)=%d\n", i, l(i));
}

This may provide the beginning of a hint:

l( 0)=-63
l( 1)=0
l( 2)=1
l( 3)=2
l( 4)=2
l( 5)=2
l( 6)=3
l( 7)=3
l( 8)=3
l( 9)=3
l(10)=3
l(11)=3
l(12)=4
l(13)=4
l(14)=4
l(15)=4
l(16)=4
l(17)=4
l(18)=4
l(19)=4
l(20)=4


The construct (float[]){…} is C99's syntax for anonymous arrays, a kickass programming technique. This is an unabated quote.


In the case at hand, the construct converts to float the contents of the braces and puts the result in memory. The function puts the float in memory in order to read its most significant byte. That's *(char*)… on a big-endian architecture, and *(3+(char*)…) on a little-endian one.


One reason to read a single char is to circumvent strict aliasing rules—which do not apply to type char. A simpler version of the same function would have been (*(int*)(float[]){x} >> 23) - 127, but that version would break strict aliasing rules. Also, it would be too obvious.


The most significant bits of a single-precision IEEE 754 floating-point representation are, in order, one sign bit and eight exponent bits. By reading the most significant byte, we get most of the exponent, but one bit is lost. To compensate for this, the operation is applied to x*x, whose exponent is double the exponent of x.
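
For readers who prefer it spelled out, here is an equivalent formulation (a sketch, not the original one-liner) that uses memcpy instead of the anonymous array and extracts the byte arithmetically, so it does not depend on endianness:

#include <stdio.h>
#include <string.h>

int l2(unsigned int x)
{
  float f = x * x;            /* the exponent of x*x is about twice that of x */
  unsigned int u;
  memcpy(&u, &f, sizeof u);   /* allowed by the strict aliasing rules */
  unsigned int msb = u >> 24; /* sign bit (0) and the top 7 exponent bits */
  return (int)msb - 63;       /* (2*log2(x) + 127) / 2 - 63, about log2(x) */
}

int main(void)
{
  for (unsigned int i = 0; i <= 20; i++)   /* reproduces the table above */
    printf("l2(%2u)=%d\n", i, l2(i));
  return 0;
}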


In conclusion, yesterday's one-liner returns an integer approximation of the base-2 logarithm of a reasonably small unsigned int x. On a typical 32-bit architecture, it is exact for powers of two up to 2¹⁵. If x is zero, the function returns its best approximation of -∞, that is, -63.

Wednesday, November 28 2012

C99 quiz

Here is a convenient one-liner:

int l(unsigned int x)
{
  return *(char*)(float[]){x*x} - 63;
}

What does it do on my faithful PowerMac?


The Intel version is not as nice. That's progress for you: *(3+(char*)(float[]){x*x}) - 63

Monday, November 19 2012

Funny floating-point bugs in Frama-C Oxygen's front-end

In a previous post, almost exactly one year ago, before Frama-C Oxygen was released, I mentioned that the then future release would incorporate a custom decimal-to-binary floating-point conversion function. The reason was that the system's strtof() and strtod() functions could not be trusted.

This custom conversion function is written in OCaml. It can be found in src/kernel/floating_point.ml in the now available Oxygen source tree. This post is about a couple of funny bugs the function has.

History

There had been arguments about the inclusion of the “correct parsing of decimal constants” feature in Frama-C's front-end, and about the best way to implement it. My colleague Claude Marché was in favor of using the reputable MPFR library. I thought that such an additional dependency was unacceptable and I was against the feature's inclusion as long as I could not see a way to implement it that did not involve such a dependency.

When I presented my dependency-avoiding solution to Claude, he said: “But how do you know your function works? Did you prove it?”. To which I replied that no, I had not proved it, but I had thoroughly tested it. I had written a quick generator of difficult-to-convert decimal numbers, and I was rather happy with the confidence it gave me.

My generator allowed me to find, and fix, a double rounding issue when the number to convert was a denormal: in this case the number would first be rounded to 52 significant binary digits and then to the lower number of significant digits implied by its denormal status.


I owe Claude a beer, though, because there were two bugs in my function. The bugs were not found by my random testing but would indeed have been found by formal verification. If you want to identify them yourself, stop reading now and start hunting, because the bugs are explained below.

Stop reading here for a difficult debugging exercise

Say that the number being parsed is of the form numopt1.numopt2Enum3 where numopt1 and numopt2 expand to optional strings of digits, and num3 expands to a mandatory string of digits.

The sequences of digits numopt1 and numopt2 can be long. The string numopt2 in particular should not be parsed as an integer, because the leading zeroes it may have are significant.

At this point of the parsing of the input program, we have already ensured that num3 was a string of digits with an optional sign character at the beginning. In these conditions, it is tempting to simply call the OCaml function int_of_string. The function int_of_string may still fail and raise an exception if the string represents a number too large to be represented as a 31- or 63-bit OCaml int.

This is easy to fix: if the program contains a literal like 1.234E9999999999999999999, causing int_of_string to fail when parsing the exponent, return infinity. A vicious programmer might have written 0.000…01E9999999999999999999, but this programmer's hard-drive is not large enough to contain all the digits that would prevent infinity from being the correct answer.

Similarly, if int_of_string chokes because the programmer wrote 1.234E-9999999999999999999, the function can safely return 0.0 which, for the same reason, is always the correctly rounded floating-point representation.


Or so it would seem. The above logic is implemented in function parse_float inside Frama-C Oxygen, and this is where the bugs are.

Stop reading here for an easy debugging exercise

During a code review, my colleague Boris Yakobowski and I found that Oxygen had the following unwanted behaviors. Read on for the solution.

Exhibit one: spurious warning

double d = 0.0E-9999999999999999999;

For the program above, a warning is emitted whereas the constant is, in fact, exactly represented. The only issue here is the spurious warning:

$ frama-c e1.c
[kernel] preprocessing with "gcc -C -E -I.  e1.c"
e1.c:1:[kernel] warning: Floating-point constant 0.0E-9999999999999999999
is not represented exactly.
Will use 0x0.0000000000000p-1022.
See documentation for option -warn-decimal-float

Exhibit two: confusion between zero and infinity

double d = 0.0E9999999999999999999;

This bug is more serious:

$ frama-c e2.c
[kernel] preprocessing with "gcc -C -E -I.  e2.c"
e2.c:1:[kernel] warning: Floating-point constant 0.0E9999999999999999999
is not represented exactly.
Will use inf.
See documentation for option -warn-decimal-float

The culprit is the shortcut described above: when the significand is zero, the constant is exactly 0.0 whatever the magnitude of the exponent, so neither the warning nor the infinite result is justified. These two related bugs are fixed in the development version of Frama-C.

Sunday, November 18 2012

About the rounding error in these Patriot missiles

An old rant: misusing Ariane 5 in motivation slides

I was lucky to be an intern and then a PhD student at INRIA, while it was still called “INRIA” (it is now called “Inria”). This was about when researchers at INRIA and elsewhere were tasked with understanding the unfortunate software failure of the Ariane 5 maiden flight. So I heard the story from people I respect and who knew at least a little bit about the subject.

Ever since, it has been irking me when this example is taken for the purpose of motivating some formal technique or other. Unless you attend this sort of CS conference, you might not believe the non-solutions that get motivated as getting us all closer to a world in which Ariane 5 rockets do not explode.


What irks me is this: even if the technique being motivated were comparably expensive to traditional testing, requiring comparable or a little less time for comparable or a little more confidence, that would not guarantee it would have been used to validate the component. The problem with Ariane 5 was not a failure of traditional tests. The problem was that, because of constraints of time and money, traditional tests were not applied to the component that eventually failed.

If your method is not cheaper and faster than not doing the tests, do not imply that it would have saved Ariane 5. It might have been omitted too.


The report is online. Key quote: “no test was performed to verify that the SRI would behave correctly when being subjected to the count-down and flight time sequence and the trajectory of Ariane 5.”

A new rant: the Patriot missile rounding error

If you attend that sort of conference, you have also heard about this other spectacular software failure, the Patriot missile rounding error. This bug too is used to justify new techniques.

This one did not irk me until today. I did not happen to be in the right place to get the inside scoop by osmosis. Or at the wrong time.

I vaguely knew that it had something to do with a clock inside the Patriot missile measuring tenths of seconds and the constant 0.1 not being representable in binary. When researchers tell the story on a motivation slide, it sounds like a stupid mistake.

There is nothing reprehensible in calculating with constants that are not represented finitely in binary. Other computations inside the missile surely involved the number π, which is not finitely representable in binary either. The designer of the system simply must understand what ey is doing. It can get a little tricky, especially when the software is evolved. Let me tell you about the single-precision cosf() function from the Glibc library in another rant.


As in the Ariane 5 case, a rather good-looking summary of the issue is available. Assuming that this summary is correct, and it certainly looks more plausible than the rash explanations you get at motivation-slide time, the next time I hear a researcher use the Patriot missile example to motivate eir talk, I will ask the questions that follow.

  1. When are you adapting your system to ad-hoc, 24-bit fixed-point computations (not whether it is theoretically possible but when you are doing it)?
  2. When are you adapting your system to ad-hoc, non-IEEE 754, 48-bit floating-point computations?
  3. Will your system then detect the drift between the same computation having gone one path and the other?


If your system is only practical in an alternate universe in which the Patriot missile software is cleanly implemented with good old IEEE 754 double-precision values: sorry, but in that universe, the Patriot missile does not exhibit the problem you are claiming to solve.


Thanks to Martin Jambon for proof-reading this post.

Friday, November 2 2012

Short, difficult programs

When researchers start claiming that they have a sound and complete analyzer for predicting whether a program statement is reachable, it is time to build a database of interesting programs.

Goldbach's conjecture

My long-time favorite is a C program that verifies Goldbach's conjecture (actual program left as an exercise to the reader).

If the conjecture is true, the program never terminates (ergo a statement placed after it is unreachable). If the conjecture is false, the program terminates and moves on to the following statement. The program can be implemented using unsigned long long integers that should be good for counter-examples up to 18446744073709551615. It can alternately use dynamic allocation and multi-precision integers, in which case, depending whether your precise definition of “analyzing a C program” includes out-of-memory behaviors, you could claim that the reachability of the statement after the counter-example-finding loop is equivalent to the resolution of “one of the oldest and best-known unsolved problems in number theory and in all of mathematics”.
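
For concreteness, one possible shape for such a program (a sketch only, since the actual program is left as an exercise above) is:

#include <stdio.h>

/* Trial division; the condition d <= n / d avoids overflowing d * d. */
static int is_prime(unsigned long long n)
{
  if (n < 2) return 0;
  for (unsigned long long d = 2; d <= n / d; d++)
    if (n % d == 0) return 0;
  return 1;
}

int main(void)
{
  for (unsigned long long n = 4; ; n += 2)
    {
      int found = 0;
      for (unsigned long long p = 2; p <= n - p; p++)
        if (is_prime(p) && is_prime(n - p)) { found = 1; break; }
      if (!found)
        {
          /* Reachable if and only if an even number that is not the sum of
             two primes exists within the range of unsigned long long. */
          printf("counter-example: %llu\n", n);
          return 1;
        }
      if (n == 18446744073709551614ULL) return 0;   /* search range exhausted */
    }
}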

Easier programs than that

No-one expects the resolution of Goldbach's conjecture to come from a program analyzer. This example is too good, because it is out of reach for everyone—it has eluded our best mathematicians for centuries. What I am looking for in this post is easier programs, where the solution is just in reach. Examples that it would genuinely be informative to run these sound, complete analyzers on. If they were made available.

With integers

For 32-bit ints and 64-bit long longs, I know that the label L is unreachable in the program below, but does your average sound and complete program analyzer know it?

/*@ requires 2 <= x ;
    requires 2 <= y ; */
void f(unsigned int x, unsigned int y)
{
  if (x * (unsigned long long)y == 17)
    L: return;
}

The key is that with the aforementioned platform hypotheses, the multiplication does not overflow. Label L being reached would mean that the program has identified divisors of the prime number 17, which we don't expect it to.


In the program below, the multiplication can overflow, and not having tried it, I have truly no idea whether the label L is reachable. I expect it is, statistically, but if you told me that it is unreachable because of a deep mathematical property of computations modulo a power of two, I would not be shocked.

/*@ requires 2 <= x ;
    requires 2 <= y ; */
void f(unsigned long long x, unsigned long long y)
{
  if (x * y == 17)
    L: return;
}

With floating-point

Floating-point is fun. Label L in the following program is unreachable:

/*@ requires 10000000. <= x <= 200000000. ;
    requires 10000000. <= y <= 200000000. ; */
void sub(float x, float y)
{
  if (x - y == 0.09375f)
    L: return;
}

Frama-C's value analysis, and most other analyzers, will not tell you that label L is unreachable. It definitely looks reachable: the difference between x and y can be zero, and it can be 1.0. It looks like it could be 0.09375 but it cannot: the subtracted numbers are too large for the difference, if non-zero, to be smaller than 1.0.


So the subtlety in the example above is the magnitude of the arguments. What about smaller arguments then?

/*@ requires 1.0 <= x <= 200000000. ;
    requires 1.0 <= y <= 200000000. ; */
void sub(float x, float y)
{
  if (x - y == 0.09375f)
    L: return;
}

This time the label L is easily reachable, for instance with the inputs 2.09375 for x and 2.0 for y.


What about this third program?

/*@ requires 1.0 <= x <= 200000000. ;
    requires 1.0 <= y <= 200000000. ; */
void sub(float x, float y)
{
  if (x - y == 0.1f)
    L: return;
}

The target difference 0.1f is larger than 0.09375f, so it should be easier to obtain as the difference of two floats from the prescribed range, right? In fact, it isn't. The numbers 0.099999904632568359375 and 0.10000002384185791015625 can be obtained as values of x-y in the above program, for the former as the result of picking 1.099999904632568359375 for x and 1.0 for y. The value 0.1f, on the other hand, cannot be obtained as the subtraction of two floats above 1.0, because some bits are set in its significand that cannot be set by subtracting floats that large.
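
A quick check of the values quoted above (a sketch, assuming strict IEEE 754 single precision) shows the two attainable neighbours and confirms that neither is 0.1f:

#include <stdio.h>

int main(void)
{
  float below = 1.099999904632568359375f - 1.0f;   /* exact subtraction */
  float above = (1.0f + 0.1f) - 1.0f;              /* 1.0f + 0.1f rounds up */
  printf("%.25f\n", below);            /* 0.0999999046325683593750000 */
  printf("%.25f\n", above);            /* 0.1000000238418579101562500 */
  printf("%d\n", below == 0.1f || above == 0.1f);  /* prints 0 */
  return 0;
}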

Conclusion

I expect it will be some time before automatic analyzers can soundly and completely decide, in all two-line programs, whether the second line is reachable. Frama-C's value analysis does not detect unreachability in any of the programs above, for instance (the value analysis is sound, so it soundly detects that L may be reachable when it is). Please leave your own difficult two-line programs in the comments.


The floating-point examples in this post owe quite a bit to my colleague Bruno Marre explaining his article “Improving the Floating Point Addition and Subtraction Constraints” to me over lunch.
