This time for real: verifying function cosine
By pascal on Thursday, March 3 2011, 13:27 - Permalink
This post follows that post, where a typical double-precision implementation of a cosine function is shown. That implementation is not a correctly rounded one, but its
double result is accurate to a level close to the best that the double-precision floating-point representation allows. It uses low-level tricks that made it faster on the computers of the nineties.
In this post, I offer a slightly simplified version of the same function (download). This function does not compute exactly the same results but, according to a few billion tests I ran, it does not seem any more wrong than the original. I expect the simplified version is actually faster than the original on the computers of the tens, but that is not why I modified it. I needed to make changes because the value analysis does not yet handle precisely all the low-level conversions in the original (such as accessing the 32 most significant bits of a
double). Although the value analysis does not say anything false about this idiom, the function can only be analyzed precisely if the idiom is handled precisely.
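For reference, the idiom in question can be sketched as follows. The function name is mine; fdlibm-style code typically obtains the same bits through a macro and pointer casts, but the effect is the same:

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the low-level idiom mentioned above: extracting the 32
   most significant bits of a double (sign, exponent, and the top of
   the mantissa) by reinterpreting its representation. The name
   high_word is mine. */
static uint32_t high_word(double x)
{
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits); /* well-defined type punning */
    return (uint32_t)(bits >> 32);
}
```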
If it were important to analyze, as-is, code that uses this low-level idiom, implementing this treatment could be prioritized according to the results of the experiment in this post. Once the feature is implemented, results on the original function can be expected to match the results for the modified function described here.
I implemented a quick-and-dirty value analysis builtin for the purpose of this verification. It is provided not so much for re-use as for possible comments: it is large enough that a bug in it could invalidate the verification. This builtin checks that its second argument is close enough to the cosine of its first argument, using its third argument as the allowable distance. The builtin displays a log line for each call, so that you can check that it was properly called.
Demonstrating how easy it is to write an unsafe builtin, this implementation assumes, without verification, that it is called on a range where the cosine function is decreasing.
I then wrote the following lines of C:

x = Frama_C_double_interval(..., ...);
m = mcos(x);
Frama_C_compare_cos(x, m, 0x1.0p-21);
Next, I arranged for the value analysis to repeatedly analyze these lines, with bounds passed to
Frama_C_double_interval that, taken together, entirely cover the range from 0 to π/4. I will not say how I did it, because it is not very interesting. Besides, the method I used is convenient enough that revealing it might discourage someone from writing an Emacs Lisp macro to generate sub-division assertions. That would be an unwanted side-effect: the method in fact has advantages complementary to those of the would-be assertion-generation macro, which I recommend someone look into.
The verification used a few hours of CPU time (I did not try very hard to make it fast). One thing I did do was adapt the width of the input sub-intervals to the part of the curve being verified, from two millionths for inputs near zero down to a quarter of a millionth for inputs near π/4.
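One way to enumerate such a covering can be sketched as below. The width schedule here is illustrative (linearly shrinking from 2e-6 near zero to 0.25e-6 near π/4); the exact schedule and driver I used are beside the point:

```c
/* pi/4, the upper end of the range being covered */
#define PI_4 0.785398163397448309616

/* Illustrative width schedule: about 2e-6 near 0, shrinking to
   about 0.25e-6 near pi/4. */
static double subinterval_width(double lo)
{
    return 2e-6 - 1.75e-6 * (lo / PI_4);
}

/* Count the sub-intervals covering [0, pi/4]; each one would become
   one run of the analysis, with its bounds passed to
   Frama_C_double_interval. */
static long cover_range(void)
{
    double lo = 0.0;
    long n = 0;
    while (lo < PI_4) {
        double hi = lo + subinterval_width(lo);
        if (hi > PI_4)
            hi = PI_4;
        n++;
        lo = hi;
    }
    return n;
}
```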
During this experiment, I used the cosine implementation from CRlibm as the reference function. This library, incidentally developed at my alma mater, provides fast, correctly rounded implementations of a number of mathematical functions.
The value analysis was configured with the
-all-rounding-modes option, so the results are guaranteed for arbitrary rounding modes, and remain guaranteed even if the compiler uses supplemental precision for intermediate results (e.g. if the compiler uses the infamous 80-bit historical floating-point unit of Intel processors).
Here are the logs I obtained (warning: I have not made files of this level of boringness publicly available since my PhD, and these are really big, even after compression. Handle with care): log1.gz log2.gz log3.gz log4.gz log5.gz log6.gz log7.gz log8.gz log9.gz log10.gz log11.gz log12.gz log13.gz log14.gz log15.gz. To save you the bother of downloading even the first file, here is what the first three lines look like.
[value] CC -0.0000000000000000 0.0000019073486328 0.9999999999981809 1.0000000000000002 0.9999999999981810 1.0000000000000000 OK
[value] CC 0.0000019073486328 0.0000038146972656 0.9999999999927240 0.9999999999981811 0.9999999999927240 0.9999999999981811 OK
[value] CC 0.0000038146972656 0.0000057220458984 0.9999999999836291 0.9999999999927242 0.9999999999836291 0.9999999999927242 OK
From the fact that the builtin did not complain for any sub-interval, we can conclude that, on the interval 0..π/4, the proposed implementation of cosine is never farther than 2^-21 (about half a millionth) from the real cosine function.
According to the aforementioned billion tests, the implementation seems much more precise than this bound suggests (to about 2^-52 times its argument). The key difference is that I am quite sure it is precise to half a millionth, whereas I have merely observed that it seems precise to about 2^-52 times its argument.
Guillaume Melquiond provided irreplaceable floating-point expertise in the preparation of this post.