Frama-C news and ideas


Tag - floating-point


Wednesday, September 25 2013

The problem with differential testing is that at least one of the compilers must get it right

A long time ago, John Regehr wrote a blog post about a 3-3 split vote that occurred while he was finding bugs in C compilers through differential testing. John could have included Frama-C's value analysis in his set of C implementations, and then the vote would have been 4-3 for the correct interpretation (Frama-C's value analysis predicts the correct value on the particular C program that was the subject of the post). But self-congratulatory remarks are not the subject of today's post. Non-split votes in differential testing where all compilers get it wrong are.

A simple program to find double-rounding examples

The program below looks for examples of harmful double-rounding in floating-point multiplication. Harmful double-rounding occurs when the result of the multiplication of two double operands differs between the double-precision multiplication (the result is rounded directly to what fits the double format) and the extended-double multiplication (the mathematical result of multiplying two double numbers may not be representable exactly even with extended-double precision, so it is rounded to extended-double, and then rounded again to double, which changes the result).

$ cat dr.c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <float.h>
#include <limits.h>

int main(){
  printf("%d %a %La\n", FLT_EVAL_METHOD, DBL_MAX, LDBL_MAX);
  while(1){
    double d1 = ((unsigned long)rand()<<32) +
                           ((unsigned long)rand()<<16) + rand() ;
    double d2 = ((unsigned long)rand()<<32) +
                           ((unsigned long)rand()<<16) + rand() ;
    long double ld1 = d1;
    long double ld2 = d2;
    
    if (d1 * d2 != (double)(ld1 * ld2))
      printf("%a*%a=%a but (double)((long double) %a * %a))=%a\n", 
	     d1, d2, d1*d2,
	     d1, d2, (double)(ld1 * ld2));
  }
}

The program is platform-dependent, but if it starts printing something like below, then a long list of double-rounding examples should immediately follow:

0 0x1.fffffffffffffp+1023 0xf.fffffffffffffffp+16380

Results

In my case, what happened was:

$ gcc -v
Using built-in specs.
Target: i686-apple-darwin11
...
gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
$ gcc -std=c99 -O2 -Wall dr.c && ./a.out 
0 0x1.fffffffffffffp+1023 0xf.fffffffffffffffp+16380
^C

I immediately blamed myself for miscalculating the probability of easily finding such examples, getting a conversion wrong, or following while (1) with a semicolon. But it turned out I had not done any of those things. I turned to Clang for a second opinion:

$ clang -v
Apple clang version 4.1 (tags/Apple/clang-421.11.66) (based on LLVM 3.1svn)
Target: x86_64-apple-darwin12.4.0
Thread model: posix
$ clang -std=c99 -O2 -Wall dr.c && ./a.out 
0 0x1.fffffffffffffp+1023 0xf.fffffffffffffffp+16380
^C

Conclusion

It became clear what had happened when looking at the assembly code:

$ clang -std=c99 -O2 -Wall -S dr.c && cat dr.s
...
	mulsd	%xmm4, %xmm5
	ucomisd	%xmm5, %xmm5
	jnp	LBB0_1
...

Clang had compiled the test for deciding whether to call printf() into if (xmm5 != xmm5) for some register xmm5.

$ gcc -std=c99 -O2 -Wall -S dr.c && cat dr.s
...
	mulsd	%xmm1, %xmm2
	ucomisd	%xmm2, %xmm2
	jnp	LBB1_1
...

And GCC had done the same. Although, to be fair, the two compilers appear to be using LLVM as their back-end, so this could be the result of a single bug. But this would remove all the salt of the anecdote, so let us hope it isn't.
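
Here is a sketch of the simplification both compilers appear to have applied (my reconstruction, not actual compiler output): the extended-precision product is treated as identical to the double-precision one, so the inequality collapses to p != p, which can only hold when p is NaN, and that is exactly what the ucomisd/jnp pair tests.

#include <stdio.h>

static int simplified_test(double d1, double d2)
{
  double p = d1 * d2;
  return p != p;            /* true only when p is NaN */
}

int main(void)
{
  /* With finite operands the simplified test never fires, which is why
     the loop in dr.c never prints anything. */
  printf("%d\n", simplified_test(3.0, 7.0));
  return 0;
}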


It is high time that someone used fuzz-testing to debug floating-point arithmetic in compilers. Hopefully one compiler will get it right sometimes and we can work from there.

Monday, August 5 2013

Exact case management in floating-point library functions

The documentation of the correctly-rounded CRlibm floating-point library states, for the difficult pow() function (p. 159):

Directed rounding requires additional work, in particular in subnormal handling and in exact case management. There are more exact cases in directed rounding modes, therefore the performance should also be inferior.

The phrase “exact case” refers to inputs that need to be treated specially because no number of “Ziv iterations”, at increasing precisions, can ever resolve which way the rounding should go.


Quiz: Isn't an exact case an exact case independently of the rounding mode? How can exact cases vary with the rounding mode?


If you can answer the above quiz without having to rubber duck through the entire question on an internet forum, you have me beat.

Wednesday, July 24 2013

More on the precise analysis of C programs for FLT_EVAL_METHOD==2

Introduction

It started innocently enough. My colleagues were talking of supporting target compilers with excess floating-point precision. We saw that if analyzing programs destined to be compiled with strict IEEE 754 compilers was a lovely Spring day at the beach, analyzing for compilers that allow excess precision was Normandy in June, 1944. But we had not seen anything, yet.

The notion of compile-time computation depends on the optimization level

One first obvious problem was that of constant expressions that were evaluated at compile-time following rules that differed from run-time ones. And who is to say what is evaluated at compile-time and what at run-time? Why, it even depends, for one and the same compiler, on the optimization level:

#include <stdio.h>

int r1, r2;

double ten = 10.0;

int main(int c, char **v)
{
  ten = 10.0;
  r1 = 0.1 == (1.0 / ten);
  r2 = 0.1 == (1.0 / 10.0);
  printf("r1=%d r2=%d\n", r1, r2);
}

Note how, compared to last time, we make the vicious one-line change of assigning variable ten again inside function main().

$ gcc -v
Target: x86_64-linux-gnu
…
gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1) 
$ gcc -mno-sse2 -mfpmath=387 -std=c99 -O2  s.c && ./a.out 
r1=1 r2=1
$ gcc -mno-sse2 -mfpmath=387 -std=c99  s.c && ./a.out 
r1=0 r2=1

So the problem is not just that the static analyzer must be able to recognize the computations that are done at compile-time. A precise static analyzer that went down this path would in addition have to model each of the tens of optimization flags of the target compiler and their effects on the definition of constant expression.


Fortunately for us, after many, varied complaints from GCC users, Joseph S. Myers decided that 387 floating-point math in GCC was at least going to be predictable. That would not solve all the issues that had been marked as duplicates of the infamous bug 323 over its lifetime, but it would answer the valid ones.

A ray of hope

Joseph S. Myers provided a reasonable interpretation of the effects of FLT_EVAL_METHOD in the C99 standard. The comparatively old compiler we used in the previous post and in the first section of this one does not contain the patch from that discussion, but recent compilers do. The most recent GCC I have available is SVN snapshot 172652 from 2011. It includes the patch. With this version of GCC, we compile and execute the test program below.

#include <stdio.h>

int r1, r2, r3, r4, r5, r6, r7;

double ten = 10.0;

int main(int c, char **v)
{
  r1 = 0.1 == (1.0 / ten);
  r2 = 0.1 == (1.0 / 10.0);
  r3 = 0.1 == (double) (1.0 / ten);
  r4 = 0.1 == (double) (1.0 / 10.0);
  ten = 10.0;
  r5 = 0.1 == (1.0 / ten);
  r6 = 0.1 == (double) (1.0 / ten);
  r7 = ((double) 0.1) == (1.0 / 10.0);
  printf("r1=%d r2=%d r3=%d r4=%d r5=%d r6=%d r7=%d\n", r1, r2, r3, r4, r5, r6, r7);
}

We obtain the following results, different from the results of the earlier version of GCC, but independent of the optimization level and understandable (all computations are done with FLT_EVAL_METHOD==2 semantics):

$ ./gcc-172652/bin/gcc -mno-sse2 -mfpmath=387 -std=c99  t.c && ./a.out 
r1=1 r2=1 r3=0 r4=0 r5=1 r6=0 r7=0

As per the C99 standard, the choice was made to give the literal 0.1 the value of 0.1L. I am happy to report that this simple explanation for the values of r2 and r7 can be inferred directly from the assembly code. Indeed, the corresponding constant is declared in assembly as:

.LC1:
	.long	3435973837
	.long	3435973836
	.long	16379
	.long	0


Quiz: why is it obvious in the above assembly code for a long double constant that the compiler used the long double approximation for 0.1 instead of the double one?


As described, the semantics of C programs compiled with FLT_EVAL_METHOD==2 are just as deterministic as if they were compiled with FLT_EVAL_METHOD==0. They give different results from the latter, but always the same ones, regardless of optimization level, interference from unrelated statements, and even regardless of the particular compiler generating code with FLT_EVAL_METHOD==2. In the discussion that followed between Joseph Myers and Ian Lance Taylor, this is called “predictable semantics” and it is a boon to anyone who wishes to tell what a program ought to do when executed (including but not limited to precise static analyzers).

Implementation detail: source-to-source transformation or architecture option?

Now that at least one C compiler can be said to have predictable behavior with respect to excess precision, the question arises of supporting FLT_EVAL_METHOD==2 in Frama-C. This could be one more of the architecture-dependent parameters such as the size of type int and the endianness.

The rules are subtle, however, and rather than letting each Frama-C plug-in implement them and get them slightly wrong, it would be less error-prone to implement these rules once and for all as a source-to-source translation from a program with FLT_EVAL_METHOD==2 semantics to a program that, when compiled or analyzed with FLT_EVAL_METHOD==0 semantics, computes the same thing as the first one.

The destination of the transformation can be a Frama-C AST

A translated program giving, when compiled with strict IEEE 754 semantics, the FLT_EVAL_METHOD==2 semantics of an existing program can be represented as an AST in Frama-C. Here is how the translation would work on an example:

double interpol(double u1, double u2, double u3)
{ 
  return u2 * (1.0 - u1) + u1 * u3;  
}

Function interpol() above can be compiled with either FLT_EVAL_METHOD==0 or with FLT_EVAL_METHOD==2. In the second case, it actually appears to have slightly better properties than in the first case, but the differences are minor.

A source-to-source translation could transform the function into that below:

double interpol_80(double u1, double u2, double u3)
{ 
  return u2 * (1.0L - (long double)u1) + u1 * (long double)u3;  
}

This transformed function, interpol_80(), when compiled or analyzed with FLT_EVAL_METHOD==0, behaves exactly like function interpol() compiled or analyzed with FLT_EVAL_METHOD==2. I made an effort here to insert only the minimum number of explicit conversions, but a Frama-C transformation plug-in would not need to be so punctilious.
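
As a small sanity check, here is a harness sketch (my own, under stated assumptions): compiled with the predictable FLT_EVAL_METHOD==2 GCC discussed above (e.g. -mno-sse2 -mfpmath=387 -std=c99), the two functions should agree on every input, since interpol_80() only spells out explicitly the excess precision that interpol() receives implicitly. Checking the stronger claim, that interpol_80() under FLT_EVAL_METHOD==0 matches interpol() under FLT_EVAL_METHOD==2, requires two separately compiled binaries.

#include <stdio.h>
#include <stdlib.h>

double interpol(double u1, double u2, double u3)
{
  return u2 * (1.0 - u1) + u1 * u3;
}

double interpol_80(double u1, double u2, double u3)
{
  return u2 * (1.0L - (long double)u1) + u1 * (long double)u3;
}

int main(void)
{
  for (int i = 0; i < 1000000; i++)
    {
      double u1 = rand() / (double)RAND_MAX;
      double u2 = rand() / (double)RAND_MAX;
      double u3 = rand() / (double)RAND_MAX;
      if (interpol(u1, u2, u3) != interpol_80(u1, u2, u3))
        printf("mismatch: %a %a %a\n", u1, u2, u3);
    }
  return 0;
}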

The source of the transformation cannot be a Frama-C AST

There is however a problem with the implementation of the transformation as a traditional Frama-C transformation plug-in. It turns out that the translation cannot use the normalized Frama-C AST as source. Indeed, if we use a Frama-C command to print the AST of the previous example in textual form:

~ $ frama-c -print -kernel-debug 1 t.c
…
/* Generated by Frama-C */
int main(int c, char **v)
{
  /* Locals: __retres */
  int __retres;
  /* sid:18 */
  r1 = 0.1 == 1.0 / ten;
  /* sid:19 */
  r2 = 0.1 == 1.0 / 10.0;
  /* sid:20 */
  r3 = 0.1 == 1.0 / ten;
  /* sid:21 */
  r4 = 0.1 == 1.0 / 10.0;
  …
}

Explicit casts to a type that an expression already has, such as the casts to double in the assignments to variables r3 and r4, are erased by the Frama-C front-end as part of its normalization. For us, this will not do: these casts, although they convert a double expression to double, change the meaning of the program, as shown by the differences between the values of r1 and r3, and between r2 and r4, when one executes our example.

This setback would not be insurmountable but it means complications. It also implies that FLT_EVAL_METHOD==2 semantics cannot be implemented by individual plug-ins, which looked like a possible alternative.


To conclude this section on a positive note, if the goal is to analyze a C program destined to be compiled to the thirty-year old 8087 instructions with a recent GCC compiler, we can build the version of Frama-C that will produce results precise to the last bit. The amount of work is not inconsiderable, but it is possible.

But wait!

But what about a recent version of Clang? Let us see, using the same C program as previously:

#include <stdio.h>

int r1, r2, r3, r4, r5, r6, r7;

double ten = 10.0;

int main(int c, char **v)
{
  r1 = 0.1 == (1.0 / ten);
  r2 = 0.1 == (1.0 / 10.0);
  r3 = 0.1 == (double) (1.0 / ten);
  r4 = 0.1 == (double) (1.0 / 10.0);
  ten = 10.0;
  r5 = 0.1 == (1.0 / ten);
  r6 = 0.1 == (double) (1.0 / ten);
  r7 = ((double) 0.1) == (1.0 / 10.0);
  printf("r1=%d r2=%d r3=%d r4=%d r5=%d r6=%d r7=%d\n", r1, r2, r3, r4, r5, r6, r7);
}
$ clang -v
Apple LLVM version 4.2 (clang-425.0.24) (based on LLVM 3.2svn)
$ clang -mno-sse2 -std=c99  t.c && ./a.out
r1=0 r2=1 r3=0 r4=1 r5=1 r6=0 r7=1

Oh no! Everything is to be done again… Some expressions are evaluated at compile-time with different results from the run-time ones, as shown by the difference between r1 and r2. The explicit cast to double does not seem to have an effect for r3 and r4 as compared to r1 and r2. This is different from Joseph Myers's interpretation, but if it is because floating-point expressions are always converted to their nominal types before being compared, it may be a good astonishment-lowering move. The value of r5 differs from that of r1, pointing to a non-obvious demarcation line between compile-time evaluation and run-time evaluation. And the values of r5 and r6 differ, meaning that our interpretation “explicit casts to double have no effect”, based on the comparison of the values of r1 and r3 on the one hand and of r2 and r4 on the other hand, is wrong, or that some other compilation pass can interfere.

What a mess! There is no way a precise static analyzer can be made for this recent version of Clang (with these unfashionable options). Plus the results depend on optimizations:

$ clang -mno-sse2 -std=c99 -O2  t.c && ./a.out
r1=0 r2=1 r3=0 r4=1 r5=1 r6=1 r7=1

FLT_EVAL_METHOD is not ready for precise static analysis

In conclusion, it would be possible, and only quite a lot of hard work, to make a precise static analyzer for programs destined to be compiled to x87 instructions by a modern GCC. But for most other compilers, even including recent ones, it is simply impossible: the compiler gives floating-point operations a meaning that only it knows.

This is the sort of problem we tackled in the Hisseo project mentioned last time. One of the solutions we researched was “Just do not make a precise static analyzer”, and another was “Just analyze the generated assembly code where the meaning of floating-point operations has been fixed”. A couple of years later, the third solution, “Just use a proper compiler”, is looking better and better. It could even be a certified one, although it does not have to. Both Clang and GCC, when targeting the SSE2 instruction set, give perfect FLT_EVAL_METHOD==0 results. We should all enjoy this period of temporary sanity until x86-64 processors all sport a fused-multiply-add instruction.

Two things I should point out as this conclusion's conclusion. First, with the introduction of SSE2, the IA-32 platform (and its x86-64 cousin) has gone from the worst platform still in existence for predictable floating-point results to the best. It has correctly rounded operations for the standard single-precision and double-precision formats, and it retains hardware support for an often convenient extended precision format. Second, the fused-multiply-add instruction is a great addition to the instruction set, and I for one cannot wait until I get my paws on a processor that supports it, but it is going to be misused by compilers to compile source-level multiplications and additions. Compilers have not become wiser. The SSE2 instruction set has only made it more costly for them to do the wrong thing than to do the right one. They will break predictability again as soon as the opportunity comes, and the opportunity is already in Intel and AMD's product pipelines.

Friday, July 19 2013

The word “binade” now has its Wikipedia page…

… but that's only because I created it.

If you are more familiar than me with Wikipedia etiquette, feel free to adjust, edit, or delete this page. Also, although a Wikipedia account is necessary to create a page, I think it is not required for editing, so you can add to the story too (but if you do not have an account you are perhaps no more familiar than me with Wikipedia etiquette).

Saturday, July 6 2013

On the precise analysis of C programs for FLT_EVAL_METHOD==2

There has been talk recently amongst my colleagues of Frama-C-wide support for compilation platforms that define FLT_EVAL_METHOD as 2. Remember that this compiler-set value, introduced in C99, means that all floating-point computations in the C program are made with long double precision, even if the type of the expressions they correspond to is float or double. This post is a reminder, to the attention of these colleagues and myself, of pitfalls to be anticipated in this endeavor.


We are talking of C programs like the one below.

#include <stdio.h>

int r1;

double ten = 10.0;

int main(int c, char **v)
{
  r1 = 0.1 == (1.0 / ten);
  printf("r1=%d\n", r1);
}

With a C99 compilation platform that defines FLT_EVAL_METHOD as 0, this program prints “r1=1”, but with a compilation platform that sets FLT_EVAL_METHOD to 2, it prints “r1=0”.

Although we are discussing non-strictly-IEEE 754 compilers, we are assuming IEEE 754-like floating-point: we're not in 1980 any more. Also, we are assuming that long double has more precision than double, because the opposite situation would make any discussion specifically about FLT_EVAL_METHOD == 2 quite moot. In fact, we are precisely thinking of compilation platforms where float is IEEE 754 single-precision (now called binary32), double is IEEE 754 double-precision (binary64), and long double is the 8087 80-bit double-extended format.
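One quick way to check these assumptions on a given platform is the following probe (my addition, not from the original post): it prints the evaluation method and the mantissa sizes of the three types; on the compilation platforms we have in mind, the expected mantissa sizes are 24, 53 and 64.

#include <stdio.h>
#include <float.h>

int main(void)
{
  printf("FLT_EVAL_METHOD=%d\n", FLT_EVAL_METHOD);
  printf("FLT_MANT_DIG=%d DBL_MANT_DIG=%d LDBL_MANT_DIG=%d\n",
         FLT_MANT_DIG, DBL_MANT_DIG, LDBL_MANT_DIG);
  return 0;
}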


Let us find ourselves a compiler with the right properties:

$ gcc -v
Using built-in specs.
Target: x86_64-linux-gnu
…
gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1) 
$ gcc -mfpmath=387 -std=c99 t.c && ./a.out 
r1=0

Good! (it seems)

The test program sets r1 to 0 because the left-hand side 0.1 of the equality test is the double-precision constant 0.1, whereas the right-hand side is the double-extended precision result of the division of 1 by 10. The two differ because 0.1 cannot be represented exactly in binary floating-point, so the long double representation is closer to the mathematical value and thus different from the double representation. We can make sure this is the right explanation by changing the expression for r1 to 0.1L == (1.0 / ten), in which the division is typed as double but computed as long double, then promoted to long double in order to be compared to 0.1L, the long double representation of the mathematical constant 0.1. This change causes r1 to receive the value 1 with our test compiler, whereas the change would make r1 receive 0 if the program was compiled with a strict IEEE 754 C compiler.
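The variant described above, written out as a standalone program (my own transcription): with the same GCC and options as before, the division is computed in extended precision and compares equal to 0.1L, so this prints r=1; a strict IEEE 754 compiler (FLT_EVAL_METHOD==0) would print r=0.

#include <stdio.h>

double ten = 10.0;

int main(void)
{
  int r = 0.1L == (1.0 / ten);
  printf("r=%d\n", r);
  return 0;
}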

Pitfall 1: Constant expressions

Let us test the augmented program below:

#include <stdio.h>

int r1, r2;

double ten = 10.0;

int main(int c, char **v)
{
  r1 = 0.1 == (1.0 / ten);
  r2 = 0.1 == (1.0 / 10.0);
  printf("r1=%d r2=%d\n", r1, r2);
}

In our first setback, the program prints “r1=0 r2=1”. The assignment to r2 has been compiled into a straight constant-to-register move, based on a constant evaluation algorithm that does not obey the same rules that execution does. If we are to write a precise static analyzer that corresponds to this GCC-4.4.3, this issue is going to seriously complicate our task. We will have to delineate a notion of “constant expressions” that the analyzer will evaluate with the same rules as GCC's rules for evaluating constant expressions, and then implement GCC's semantics for run-time evaluation of floating-point expressions for non-constant expressions. And our notion of “constant expression” will have to exactly match GCC's notion of “constant expression”, lest our analyzer be unsound.

Clarification: What is a “precise” static analyzer?

This is as good a time as any to point out that Frama-C's value analysis plug-in, for instance, is already able to analyze programs destined to be compiled with FLT_EVAL_METHOD as 2. By default, the value analysis plug-in assumes IEEE 754 and FLT_EVAL_METHOD == 0:

$ frama-c -val t.c
…
t.c:9:[kernel] warning: Floating-point constant 0.1 is not represented exactly.
 Will use 0x1.999999999999ap-4.
 See documentation for option -warn-decimal-float
…
[value] Values at end of function main:
          r1 ∈ {1}
          r2 ∈ {1}


The possibility of FLT_EVAL_METHOD being set to 2 is captured by the option -all-rounding-modes:

$ frama-c -val -all-rounding-modes t.c
…
t.c:9:[kernel] warning: Floating-point constant 0.1 is not represented exactly.
 Will use 0x1.999999999999ap-4.
 See documentation for option -warn-decimal-float
…
[value] Values at end of function main:
          r1 ∈ {0; 1}
          r2 ∈ {0; 1}

The sets of values predicted for variables r1 and r2 at the end of main() each contain the value given by the program as compiled by GCC-4.4.3, but these sets are not precise. If the program then went on to divide r1 by r2, Frama-C's value analysis would warn about a possible division by zero, whereas we know that with our compiler, the division is safe. The warning would be a false positive.

We are talking here about making a static analyzer with the ability to conclude that r1 is 0 and r2 is 1 because we told it that we are targeting a compiler that makes it so.


The above example command-lines are for Frama-C's value analysis, but during her PhD, Thi Minh Tuyen Nguyen has shown that the same kind of approach could be applied to source-level Hoare-style verification of floating-point C programs. The relevant articles can be found in the results of the Hisseo project.

To be continued

In the next post, we will find more pitfalls, revisit a post by Joseph S. Myers in the GCC mailing list, and conclude that implementing a precise static analyzer for this sort of compilation platform is a lot of work.

Tuesday, May 14 2013

Contrarianism

If I told you that when n is a positive power of two and d an arbitrary number, both represented as double, the condition (n - 1) * d + d == n * d in strictly-IEEE-754-implementing C is always true, would you start looking for a counter-example, or start looking for a convincing argument that this property may hold?

If you started looking for counter-examples, would you start with the vicious values? Trying to see if NaN or +inf can be interpreted as “a positive power of two” or “an arbitrary number” represented “as double”? A subnormal value for d? A subnormal value such that n*d is normal? A subnormal value such that (n - 1) * d is subnormal and n * d is normal?

Or would you try your luck with ordinary values such as 0.1 for d and 4 for n?


This post is based on a remark by Stephen Canon. Also, I have discovered a truly remarkable proof of the property which this quick post is too small to contain.

Saturday, May 11 2013

Big round numbers, and a book review

Nearly 15 months ago, according to a past article, this blog celebrated its 15-month anniversary and marked the occasion with the announcement of minor milestones having been reached: 100 articles and 50 comments.

Fifteen months after that, the current count is nearly 200 articles and 200 comments. Also, the blog managed to get 100 subscribers in Google's centralized service for never having to mark the same post as read twice, Reader. This was a close call.


A lot of recent posts have been related to floating-point arithmetic. I would like to reassure everyone that this was only a fluke. Floating-point correctness became one of the Frama-C tenets with our involvement in two collaborative projects, U3CAT and Hisseo, now both completed. Very recently, something must have clicked for me and I became quite engrossed by the subject.


As a result of this recent passion, in the last few days, I started reading the “Handbook of Floating-Point Arithmetic”, by Jean-Michel Muller et al. This book is both thick and dense, but fortunately well organized, so that it is easy to skip over sections you do not feel concerned with, such as decimal floating-point or hardware implementation details. This book is an amazing overview. It contains crystal-clear explanations of floating-point idioms that I was until then painstakingly reverse-engineering from library code. Fifteen years from now, people will say, “… and you should read the Handbook of Floating-Point Arithmetic. It is a bit dated now, but it is still the best reference, just complete it with this one and that one”, just like people might say now about Aho, Sethi and Ullman's Dragon book for compilation.

Except that right now, the book is current. The references to hardware are references to hardware that you might still have, or certainly remember having had. The open questions are still open. If you were offered the chance to read the Dragon book when it came out and was all shiny and new, would you pass? If not, and if there is the slightest chance that you might hold an interest in the mysteries of floating-point computation in the foreseeable future, read this book now, for the bragging rights.

In addition, the book goes down to the lowest levels of detail, with occasional snippets of programs to make it clear what is meant. The snippets are C code, and irreproachable C code.

Thursday, May 9 2013

A 63-bit floating-point type for 64-bit OCaml

The OCaml runtime

The OCaml runtime allows polymorphism through the uniform representation of types. Every OCaml value is represented as a single word, so that it is possible to have a single implementation for, say, “list of things”, with functions to access (e.g. List.length) and build (e.g. List.map) these lists that work just the same whether they are lists of ints, of floats, or of lists of sets of integers.

Anything that does not fit in a word is allocated in a block in the heap. The word representing this data is then a pointer to the block. Since the heap contains only blocks of words, all these pointers are aligned: their few least significant bits are always unset.

Argumentless constructors (like this: type fruit = Apple | Orange | Banana) and integers do not represent so much information that they need to be allocated in the heap. Their representation is unboxed. The data is directly inside the word that would otherwise have been a pointer. So while a list of lists is actually a list of pointers, a list of ints contains the ints with one less indirection. The functions accessing and building lists do not notice because ints and pointers have the same size.

Still, the Garbage Collector needs to be able to distinguish pointers from integers. A pointer points to a well-formed block in the heap that is by definition alive (since it is being visited by the GC) and should be marked so. An integer can have any value and could, if precautions were not taken, accidentally look like a pointer. This could cause dead blocks to look alive, but much worse, it would also cause the GC to change bits in what it thinks is the header of a live block, when it is actually following an integer that looks like a pointer, messing up user data.

This is why unboxed integers provide 31 bits (for 32-bit OCaml) or 63 bits (for 64-bit OCaml) to the OCaml programmer. In the representation, behind the scenes, the least significant bit of a word containing an integer is always set, to distinguish it from a pointer. 31- or 63-bit integers are rather unusual, so anyone who uses OCaml at all knows this. What users of OCaml do not usually know is why there isn't a 63-bit unboxed float type for 64-bit OCaml.
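Here is a sketch, in C, of the tagging scheme described above (my own illustration, not OCaml's actual runtime macros): an unboxed OCaml integer n is stored as 2*n + 1, so its least significant bit is always 1, while heap pointers are word-aligned and end in 0.

#include <stdint.h>
#include <stdio.h>

typedef intptr_t value;

static value tag_int(intptr_t n)   { return (value)(2 * n + 1); }
static intptr_t untag_int(value v) { return ((intptr_t)v - 1) / 2; }
static int is_unboxed(value v)     { return (int)((uintptr_t)v & 1); }

int main(void)
{
  value v = tag_int(42);
  printf("tagged: %ld  unboxed: %d  untagged: %ld\n",
         (long)v, is_unboxed(v), (long)untag_int(v));
  return 0;
}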

There is no unboxed 63-bit floating-point type in OCaml

And the answer to this last question is that there is no particular reason one shouldn't have a 63-bit unboxed float type in OCaml. Defining one only requires carefully answering two intricately related questions:

  • What 63-bit floating-point format should be used?
  • How will the OCaml interpreter compute values in this format?

In 1990, when 64-bit computers were few, Xavier Leroy decided that in his (then future) Caml-light system, the type for floating-point would be 64-bit double precision. The double precision floating-point format did not come close to fitting in the then-prevalent 32-bit word:

Floating-point numbers are allocated in the heap as unstructured blocks of length one, two or three words, depending on the possibilities of the hardware and on the required precision. An unboxed representation is possible, using the 10 suffix for instance, but this gives only 30 bits to represent floating-point numbers. Such a format lacks precision, and does not correspond to any standard format, so it involves fairly brutal truncations. Good old 64-bit, IEEE-standard floating point numbers seem more useful, even if they have to be allocated.

First, a remark: it is not necessary to distinguish floats from ints: that is what the static type-system is for. From the point of view of the GC they are all non-pointers, and that's the only important thing. So if we decide to unbox floats, we can take advantage of the same representation as for integers, a word with the least significant bit set. And nowadays even the proverbial grandmother has a 64-bit computer to read e-mail on, hence the temptation to unbox floats.

Second, the reluctance to truncate the mantissa of any existing format remains well-founded. Suppose that we defined a format with 51 explicit mantissa bits as opposed to double-precision's 52. We could use the double-precision hardware for computations and then round to 51 bits of mantissa, but the sizes are so close that this would introduce plenty of double rounding errors, where the result is less precise than if it had been rounded directly to 51 bits. As someone who has to deal with the consequences of hardware computing 64-bit mantissas that are then rounded a second time to 52 bits, I feel dirty just imagining this possibility. If we went for 1 sign bit, 11 exponent bits, 51 explicit mantissa bits, we would have to use software emulation to round directly to the correct result.

This post is about another idea to take advantage of the double-precision hardware to implement a 63-bit floating-point type.

A truncationless 63-bit floating-point format

Borrow a bit from the exponent

Taking one of the bits from the 11 reserved for the exponent in the IEEE 754 double-precision format does not have such noticeable consequences. At the top of the scale, it is easy to map values above a threshold to infinity. This does not involve any double-rounding error. At the bottom of the scale, things are more complicated. The very smallest floating-point numbers of a proper floating-point format, called subnormals, have an effective precision of less than the nominal 52 bits. Computing with full-range double-precision and then rounding to reduced-range 63-bit means that the result of a computation can be computed as a normal double-precision number with 52-bit mantissa, say 1.324867e-168, and then rounded to the narrower effective precision of a 63-bit subnormal float.

Incidentally, this is the sort of issue that remains even after you have configured an x87 to use only the 53 or 24 mantissa bits that make sense to compute with the precision of double- or single-precision. Only the range of the mantissa is reduced, not that of the exponent, so numbers that would be subnormals in the targeted type are normal when represented in an x87 register. You could hope to fix them after each computation with an option such as GCC's -ffloat-store, but then they are double-rounded. The first rounding is at 53 or 24 bits, and the second to the effective precision of the subnormal.

Double-rounding, Never!

But since overflows are much easier to handle, we can cheat. In order to make sure that subnormal results are rounded directly to the effective precision, we can bias the computations so that if the result is going to be a 63-bit subnormal, the double-precision operation produces a subnormal result already.

In practice, this means that when the OCaml program is adding the numbers 1.00000000001e-152 and -1.0e-152, we do not show these numbers to the double-precision hardware. What we show to the hardware instead is these numbers multiplied by 2^-512, so that if the result needs to be subnormal in the 63-bit format (and in this example it does), then a subnormal double-precision result is computed, with the same number of bits of precision.

In fact, we can maintain this “store numbers as 2^-512 times their intended value” convention all the time, and only come out of it at the time of calling library functions such as printf().

For multiplication of two operands represented as 2^-512 times their real value, one of the arguments needs to be unbiased (or rebiased: if you have a trick to remember which is which, please share) before the hardware multiplication, by multiplying it by 2^512.
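A minimal sketch of this biasing convention in C (my own illustration, not the ideone implementation): every 63-bit value x is represented by the double x * 0x1p-512. Addition works directly on the biased representatives; for multiplication, one operand is multiplied by 0x1p512 first, so that the product carries the bias exactly once and is rounded only once, even when it is subnormal. The example values are the ones from the text above.

#include <stdio.h>

static double to_biased(double x)   { return x * 0x1p-512; }
static double from_biased(double x) { return x * 0x1p512;  }

static double biased_add(double a, double b) { return a + b; }
static double biased_mul(double a, double b) { return (a * 0x1p512) * b; }

int main(void)
{
  /* The sum is subnormal once biased, so the hardware rounds it directly
     to the reduced effective precision, with no double rounding. */
  double a = to_biased(1.00000000001e-152);
  double b = to_biased(-1.0e-152);
  printf("sum     = %a\n", from_biased(biased_add(a, b)));
  printf("product = %a\n", from_biased(biased_mul(to_biased(3.0), to_biased(7.0))));
  return 0;
}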

For division the result must be rebiased after it is computed.

The implementation of the correctly-rounded function sqrt() for 63-bit floats is left as an exercise to the reader.

Implementation

A quick and dirty implementation, only tested as much as shown, is available from ideone. Now I would love for someone who actually uses floating-point in OCaml to finish integrating this in the OCaml runtime and do some benchmarks. Not that I expect it will be very fast: the 63-bit representation involves a lot of bit-shuffling, and OCaml uses its own tricks, such as unboxing floats inside arrays, so that it will be hard to compete.

Credits

I should note that I have been reading a report on implementing a perfect emulation of IEEE 754 double-precision using x87 hardware, and that the idea presented here was likely to be contained there. Google, which is quick to point to the wrong definition of FLT_EPSILON, has been no help in finding this report again.

Definition of FLT_EPSILON

Correct and wrong definitions for the constant FLT_EPSILON

If I google “FLT_EPSILON”, the topmost result is this page with this definition:

FLT_EPSILON the minimum positive number such that 1.0 + FLT_EPSILON != 1.0.

No, no, no, no, no.

I don't know where this definition originates from, but it is obviously from some sort of standard C library, and it is wrong, wrong, wrong, wrong, wrong. The definition of the C99 standard is:

the difference between 1 and the least value greater than 1 that is representable in the given floating point type, b^(1−p)

The GNU C library gets it right:

FLT_EPSILON: This is the difference between 1 and the smallest floating point number of type float that is greater than 1.

The difference

On any usual architecture, with the correct definition, FLT_EPSILON is 0x0.000002p0, the difference between 0x1.000000p0 and the smallest float above it, 0x1.000002p0.


The notation 0x1.000002p0 is a convenient hexadecimal input format, introduced in C99, for floating-point numbers. The last digit is a 2 where one might have expected a 1 because single-precision floats have 23 explicit bits of mantissa, and 23 is not a multiple of 4. So the 2 in 0x1.000002p0 represents the last bit that can be set in a single-precision floating-point number in the interval [1…2).


If one adds FLT_EPSILON to 1.0f, one does obtain 0x1.000002p0. But is it the smallest float with this property?

#include <stdio.h>

void pr_candidate(float f)
{
  printf("candidate: %.6a\tcandidate+1.0f: %.6a\n", f, 1.0f + f); 
}

int main(){
  pr_candidate(0x0.000002p0);
  pr_candidate(0x0.000001fffffep0);
  pr_candidate(0x0.0000018p0);
  pr_candidate(0x0.000001000002p0);
  pr_candidate(0x0.000001p0);
}

This program, compiled and executed, produces:

candidate: 0x1.000000p-23	candidate+1.0f: 0x1.000002p+0
candidate: 0x1.fffffep-24	candidate+1.0f: 0x1.000002p+0
candidate: 0x1.800000p-24	candidate+1.0f: 0x1.000002p+0
candidate: 0x1.000002p-24	candidate+1.0f: 0x1.000002p+0
candidate: 0x1.000000p-24	candidate+1.0f: 0x1.000000p+0

No, 0x0.000002p0 is not the smallest number that, added to 1.0f, causes the result to be above 1.0f. This honor goes to 0x0.000001000002p0, the smallest float above half FLT_EPSILON.

Exactly half FLT_EPSILON, the number 0x0.000001p0 or 0x1.0p-24 as you might prefer to call it, causes the result of the addition to be exactly midway between 1.0f and its successor. The rule says that the “even” one has to be picked in this case. The “even” one is 1.0f.

Conclusion

Fortunately, in the file that initiated this rant, the value for FLT_EPSILON is correct:

#define FLT_EPSILON 1.19209290E-07F // decimal constant

This is the decimal representation of 0x0.000002p0. Code compiled against this header will work. It is only the comment that's wrong.

Saturday, May 4 2013

Rounding float to nearest integer, part 3

Two earlier posts showed two different approaches in order to round a float to the nearest integer. The first was to truncate to integer after having added the right quantity (either 0.5 if the programmer is willing to take care of a few dangerous inputs beforehand, or the predecessor of 0.5 so as to have fewer dangerous inputs to watch for).
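As a reminder, here is a sketch of that first approach (my own transcription, not the earlier posts' exact code), assuming a positive input small enough to fit in an unsigned int. The first variant mishandles a few inputs, such as the predecessor of 0.5f, which it sends to 1 instead of 0; the second adds the predecessor of 0.5f instead, leaving fewer dangerous inputs.

#include <stdio.h>

float myround_add_half(float f)
{
  return (float)(unsigned int)(f + 0.5f);
}

float myround_add_almost_half(float f)
{
  return (float)(unsigned int)(f + 0x1.fffffep-2f);  /* predecessor of 0.5f */
}

int main(void)
{
  /* Prints "1 0": the first variant gets the predecessor of 0.5f wrong. */
  printf("%g %g\n", myround_add_half(0x1.fffffep-2f),
                    myround_add_almost_half(0x1.fffffep-2f));
  return 0;
}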

The second approach was to mess with the representation of the float input, trying to recognize where the bits for the fractional part were, deciding whether they represented less or more than one half, and either zeroing them (in the first case), or sending the float up to the nearest integer (in the second case) which was simple for complicated reasons.

Variations on the first method

Several people have suggested smart variations on the first theme, included here for the sake of completeness. The first suggestion is as follows (remembering that the input f is assumed to be positive, and ignoring overflow issues for simplicity):

float myround(float f)
{
  float candidate = (float) (unsigned int) f;
  if (f - candidate <= 0.5) return candidate;
  return candidate + 1.0f;
}

Other suggestions were to use modff(), which separates a floating-point number into its integral and fractional components, or fmodf(f, 1.0f), which computes the remainder of the division of f by 1.


These three solutions work better than adding 0.5 for a reason that is simple if one only looks at it superficially: floating-point numbers are denser around zero. Adding 0.5 takes us away from zero, whereas operations f - candidate, modff(f, iptr) and fmodf(f, 1.0) take us closer to zero, in a range where the answer can be exactly represented, so it is. (Note: this is a super-superficial explanation.)
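For concreteness, here is a sketch of the modff() suggestion (my own code, not a contributor's verbatim): both components are computed exactly, and the decision is made on the fractional part alone. As elsewhere in the post, f is assumed positive and overflow is ignored; ties go to the integer below, like the variant shown above.

#include <math.h>
#include <stdio.h>

float myround_modff(float f)
{
  float integral;
  float frac = modff(f, &integral);
  return frac <= 0.5f ? integral : integral + 1.0f;
}

int main(void)
{
  /* Prints "0 2 2". */
  printf("%g %g %g\n", myround_modff(0.5f), myround_modff(1.9f), myround_modff(2.1f));
  return 0;
}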

A third method

General idea: the power of two giveth, and the power of two taketh away

The third and generally most efficient method for rounding f to the nearest integer is to take advantage of this marvelous rounding machine that is IEEE 754 arithmetic. But for this to work, the exact right machine is needed, that is, a C compiler that implements strict IEEE 754 arithmetic and rounds each operation to the precision of the type. If you are using GCC, consider using options -msse2 -mfpmath=sse.

We already noticed that single-precision floats between 2^23 and 2^24 are all the integers in this range. If we add some quantity to f so that the result ends up in this range, wouldn't it follow that the result obtained will be rounded to the integer? And it would be rounded in round-to-nearest. Exactly what we are looking for:

  f                 f + 8388608.0f
_____________________________

0.0f                8388608.0f
0.1f                8388608.0f
0.5f                8388608.0f
0.9f                8388609.0f
1.0f                8388609.0f
1.1f                8388609.0f
1.5f                8388610.0f
1.9f                8388610.0f
2.0f                8388610.0f
2.1f                8388610.0f

The rounding part goes well, but now we are stuck with large numbers far from the input and from the expected output. Let us try to get back close to zero by subtracting 8388608.0f again:

  f                 f + 8388608.0f             f + 8388608.0f - 8388608.0f
____________________________________________________________________

0.0f                8388608.0f                              0.0f
0.1f                8388608.0f                              0.0f
0.5f                8388608.0f                              0.0f
0.9f                8388609.0f                              1.0f
1.0f                8388609.0f                              1.0f
1.1f                8388609.0f                              1.0f
1.5f                8388610.0f                              2.0f
1.9f                8388610.0f                              2.0f
2.0f                8388610.0f                              2.0f
2.1f                8388610.0f                              2.0f

It works! The subtraction is exact, for the same kind of reason that was informally sketched for f - candidate. Adding 8388608.0f causes the result to be rounded to the unit, and then subtracting it is exact, producing a float that is exactly the original rounded to the nearest integer.

For these inputs anyway. For very large inputs, the situation is different.

Very large inputs: absorption

  f                 f + 8388608.0f             f + 8388608.0f - 8388608.0f
____________________________________________________________________

1e28f                  1e28f                               1e28f
1e29f                  1e29f                               1e29f
1e30f                  1e30f                               1e30f
1e31f                  1e31f                               1e31f

When f is large enough, adding 8388608.0f to it does nothing, and then subtracting 8388608.0f from it does nothing again. This is good news, because we are dealing with very large single-precision floats that are already integers, and can be returned directly as the result of our function myround().


In fact, since we entirely avoided converting to a range-challenged integer type, and since adding 8388608.0f to FLT_MAX does not make it overflow (we have been assuming the FPU was in round-to-nearest mode all this time, remember?), we could even caress the dream of a straightforward myround() with a single execution path. Small floats rounded to the nearest integer and taken back near zero where they belong, large floats returned unchanged by the addition and the subtraction of a comparatively small quantity (with respect to them).

Dreams crushed

Unfortunately, although adding and subtracting 2^23 almost always does what we expect (it does for inputs up to 2^23 and above 2^47), there is a range of values for which it does not work. An example:

  f                 f + 8388608.0f             f + 8388608.0f - 8388608.0f
____________________________________________________________________

8388609.0f          16777216.0f                        8388608.0f

In order for function myround() to work correctly for all inputs, it still needs a conditional. The simplest is to put aside inputs larger than 2^23 that are all integers, and to use the addition-subtraction trick for the others:

float myround(float f)
{
  if (f >= 0x1.0p23)
    return f;
  return f + 0x1.0p23f - 0x1.0p23f;
}

The function above, in round-to-nearest mode, satisfies the contract we initially set out to fulfill. Interestingly, if the rounding mode is other than round-to-nearest, then it still rounds to a nearby integer, but according to the FPU rounding mode. This is a consequence of the fact that the only inexact operation is the addition. The subtraction, being exact, is not affected by the rounding mode.

For instance, if the FPU is set to round downwards and the argument f is 0.9f, then f + 8388608.0f produces 8388608.0f, and f + 8388608.0f - 8388608.0f produces zero.
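A quick check of this remark, reusing the myround() above (assuming fesetround() succeeds on the target and the compiler does not constant-fold the additions; with GCC, -frounding-math helps keep it honest):

#include <fenv.h>
#include <stdio.h>

float myround(float f)
{
  if (f >= 0x1.0p23)
    return f;
  return f + 0x1.0p23f - 0x1.0p23f;
}

int main(void)
{
  printf("to nearest: %g\n", myround(0.9f));   /* expected: 1 */
  fesetround(FE_DOWNWARD);
  printf("downward:   %g\n", myround(0.9f));   /* expected: 0 */
  fesetround(FE_TONEAREST);
  return 0;
}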

Conclusion

This post concludes the “rounding float to nearest integer” series. The method highlighted in this third post is actually the method generally used for function rintf(), because the floating-point addition sets the “inexact” FPU flag precisely when it is inexact, which is when the function returns an output other than its input, and that is exactly when rintf() is specified to set the “inexact” flag.

Function nearbyintf() is specified as not touching the FPU flags and would typically be implemented with the method from the second post.
