Frama-C news and ideas


Friday, April 26 2013

Of compiler warnings discussions

A discussion I often have addresses the question of whether a compiler¹ can warn for all possible illegal actions a program could take at run-time within a specific, well-understood family².

(1) or some other piece of software that receives a program as input and, in finite time, does something with it, for example a static analyzer

(2) for example the family of out-of-bounds memory accesses


For some reason my opinion differs significantly from the consensus opinion, but since this is my^H^H the Frama-C blog, I thought I would nevertheless put it here for exposure.

The Halting problem is only relevant as a metaphor

The Halting problem is defined for Turing machines. A computer is not a Turing machine. A Turing machine has an infinite ribbon, whereas a real computer has finite memory.


Imagine a real, physical computer running a program. Imagine that you sit at the keyboard, typing new inputs indefinitely, swapping the keyboard for a new one when wear and tear make it unusable, putting the computer on a battery and moving it to another planet when the Sun explodes, rigging your will to ensure your descendants will carry on the typing, this sort of thing.

Seems like a glimpse of infinity, right? But the mathematical abstraction for this computer is not a Turing machine, it is a boring Mealy machine (an automaton with inputs and outputs). A Turing machine has an infinite ribbon it can write to and read from again. The above set-up does not. It is a Mealy machine with 256^(10^9) states (assuming a 1TB hard-drive).

A sound analyzer is not impossible

A sound analyzer for out-of-bounds accesses is an analyzer that warns for every out-of-bounds access. Making one is easier than it seems. There is always the solution of warning for every dereferencing construct in the program (in C, these would be *, [] and ->).

A useful sound analyzer is not impossible

That last analyzer, although sound, was pretty useless, was it not? General theoretical results on the undecidability of the run-time behavior of Turing machines would predict —if allowed to be extrapolated to computers and programs— that it is impossible to build a sound and complete analyzer.

A complete analyzer for out-of-bounds accesses warns only for accesses that can really be out-of-bounds for some program inputs.

Indeed, a sound and complete analyzer is difficult. It is impossible for Turing machines, and it is hard even for mere computers and programs.

Luckily, the theoretical difficulty is only in reference to arbitrary programs. It is theoretically difficult to make the promise to warn for all out-of-bounds accesses, and to warn only for real out-of-bound accesses, for arbitrary programs, but an analyzer does not have to do that to be useful. It only needs to deal with programs people are actually interested in (a vanishingly small subset of all programs).

And it does not even need to be sound and complete to be useful. An analyzer that warns for all out-of-bounds accesses with only a few false positives is still useful (and still sound. “Complete” is what it is not).

Conversely, an analyzer that warns only for actual out-of-bounds accesses and only omits a few is still useful and complete, although it is not sound. But I have not spent the last 8 years working on such an analyzer, so it is not my job to advertise one. Whoever wrote one can tout it in their own blog.


This blog contains examples of analyses made using a sound analyzer. The analyzer does not emit so many false positives that it is unusable. The analyses involve a variety of real programs: QuickLZ (there were a few false positives, but a bug was found and was fixed), the reference implementation for Skein (no bug was found, and there were no false positives either), Zlib (in this ongoing analysis there are plenty of false positives, but one extremely minor issue has already been found), …

But this cannot work in presence of dynamically loaded libraries!

Well, yes, in order to do “program analysis”, the program must be available. It is in the name.

Speaking of binary-only libraries or programs, the counter-arguments I have written above apply the same to the analysis of a program in binary form. Analyzing a binary program is significantly harder than analyzing source code, but a binary analyzer does not have to be unsound, it does not have to be incomplete, and it does not have to decide the Halting problem. The examples I have provided are for analyses of source code because that's what I work on, but look for the blog of a binary static analyzer and there is no theoretical reason you won't be pleasantly surprised.

But this cannot work if there are inputs from the user / from a file!

Of course it can. The more you know and don't tell the analyzer, the more its warnings may subjectively feel like false positives to you, but the analyzer can always assume the worst about any inputs:

#define EOF (-1)

/*@ ensures \result == EOF || 0 <= \result <= 255 ; */
int getchar(void);

int main(int c, char **v)
{
  int a[5], i;
  i = getchar();
  a[i] = 6;
}
$ frama-c -pp-annot t.c -val
...
t.c:10:[kernel] warning: accessing out of bounds index [-1..255]. assert 0 ≤ i < 5;

The first four lines would not have to be written every time. They model a function from the standard library and would only need to be updated when the specification of the standard C library changes.

Speaking of which, these four lines are not a very good model of the function getchar(). It is possible to do better than this, and the above is only an example, simplified for clarity but best not re-used.

Conclusion

In conclusion, it is possible to make sound static analyzers, because theoretical undecidability results are about Turing machines, because people are not interested in arbitrary programs, and because a couple of false positives once in a while do not necessarily make a sound analyzer unusable. The theoretical counter-arguments apply equally to binary code, to concurrent programs, and when swapping the adjectives “sound” and “complete”, but I only have examples for the static analysis of sequential C source code.

Monday, November 12 2012

November in Security

Bruce Schneier is, among other things, the author of the blog Schneier on Security. He is also one of the co-authors of the Skein cryptographic hash function, the SHA-3 contestant being verified in Frama-C's value analysis tutorial in the manual and then on this blog. I feel silly introducing him, considering he is rather on the “well-respected” side of the security researcher spectrum. But Bruce did provide this month's cybersecurity link, this essay by Gary McGraw.

Key quote: The kind of defense I advocate (called "passive defense" or "protection" above) involves security engineering -- building security in as we create our systems, knowing full well that they will be attacked in the future. One of the problems to overcome is that exploits are sexy and engineering is, well, not so sexy.


Bruce, in his typical fisheye approach to security, also recently provided this other link about security in fairy wren nests. I thought this made a lot of sense, but then again, Hongseok Yang lent me a copy of Dawkins' The Selfish Gene when we were both post-doc students at KAIST, so your mileage may vary.


Since I came back to France, I bought my own copy of The Selfish Gene to lend others. If you are around and are curious, just ask.

Wednesday, October 10 2012

RERS 2012 competition: our solutions for problems 1-9

Previously on this blog

Although it was so brief that you may have missed it, I previously mentioned here the 2012 RERS Grey Box Challenge, an interesting competition where the questions involve programs in C syntax.


I pointed out that some questions were about the reachability of assertions in the programs. As it is documented, Frama-C's value analysis can guarantee that some assertions are not reachable. When its authors are using it, they can ascertain that no approximation is taking place, and that for this reason, any assertion that the analysis diagnoses as perhaps reachable is indeed reachable for some sequence of inputs. Because of the way it works, the analysis is unable to exhibit that sequence of inputs, but one such sequence definitely exists.

We then moved on to properties expressed as LTL formulas. Some of the LTL formulas in the competition expressed reachability properties. It was easy to diagnose those by applying Frama-C's value analysis to an instrumented program. The other properties were more difficult liveness properties. With more instrumentation, involving either a transition counter or a non-deterministic predecessor state, it was possible to diagnose these too using the value analysis.

Colleagues and I have obtained solutions for all properties in the initial problems 1-9 of the competition. More information about these solutions is available in the last part of this post. First, now that the exercise is done, I would like to revisit its benefits.

Features gained, bugs fixed

New built-in Frama_C_interval_split()

A key part of the main() function used for the verification of any kind of property in the competition was:

  input = unknown_int();
  /*@ assert input == 1 || input == 2 || input == 3 ||
                input == 4 || input == 5 || input == 6 ; */

These three lines are intended to replace the following one from the original main() function provided by the organizers. The original line works for interactive sessions where the operator inputs numbers in the range 1..6.

        scanf("%d", &input);        


The call to unknown_int() could also have been Frama_C_interval(1, 6). Then the value analysis would be able to tell that the assertion is not an assumption, only a hint. Regardless, the assertion is there, together with option -slevel, to make the analyzer study separately what happens in executions where a different value is read for input.


This is a classic trick used in many earlier studies. It is mentioned in the study of the Skein-256 hash function and in nearly all others since. Still, these assertions do get tiring to write when many cases need to be listed. Until now, there was no shortcut (I have been using Emacs macros myself).

Enter Frama_C_interval_split(l, u). It does the same thing that Frama_C_interval(l, u) does, but it automatically causes the individual values between l and u inclusive to be propagated separately, without need for a laborious ACSL assertion. The Frama_C_interval_split() built-in is available in Oxygen (I think), and since it is a true value analysis built-in, the user does not need to remember to include file builtin.c to make it accessible. A simple prototype int Frama_C_interval_split(int, int); will do.

The informational message “unrolling up to ... states” could be off

If you have used option -slevel (and as previously said, if you use Frama-C's value analysis at all, you should definitely use this option from time to time), you know that it sometimes logs a message telling how much superposition has been done on the same program statement. It looks like:

...
[value] Semantic level unrolling superposing up to 100 states
...
[value] Semantic level unrolling superposing up to 200 states
...

The algorithm that takes care of this message was like this:

  target := 100;
  if new number of states attached to the statement ≥ target
  then
    print "Semantic level unrolling superposing up to " target " states"
    target := target + 100
  endif

The intention was that the value displayed would indicate the maximum number of superposed states to the nearest hundred.


Suppose the first wave of propagated states to reach a statement contains a thousand of them. The algorithm displays “superposing up to 100 states”, and updates the target to 200. If the second wave contains 2500 more states, the algorithm then displays “superposing up to 200 states” and updates the target to 300. And so on. If states keep arriving in large numbers, variable target gets irremediably behind.


This had not been noticed until this competition, but in model-checking mode, manipulating tons of states, target never gets a chance to catch up and the number displayed can be much lower than the number of states actually superposed.


This bug is present in Frama-C Oxygen. It is fixed in the development version.

An unnamed option was tested more thoroughly

Last but proverbially not least, thanks to this competition, the value analysis option that caches previous analyses of a function call in order to re-use them without loss of precision has received more testing. Who knows, someday we may even trust it enough to reveal its name.

Conclusion

The next section describes our solutions. If you did not at least attempt the competition yourself, it is unlikely that you will find it interesting: you should stop reading now. Here is my personal conclusion before you go: case studies are fun, and model-checking is a fun sub-field of computer science. At the same time, a piece of software is not an automaton: if you try to specify one as if it was the other you will certainly make mistakes and verify something other than you intended. I think that our experience trying to participate in the competition demonstrates that. If you are not convinced, please try to answer the competition's questions for yourself, and then, and only then, read the next section.

Our solutions to the 2012 RERS challenge

Description

The reachability of assertions is straightforward to answer with the value analysis, so I won't describe that. Here is an archive containing our answers for the LTL properties part. The archive contains almost everything. Many of the files in it were generated from others. In particular, the *.log files are analysis log files that took about one core-month to generate on a 2.66 GHz Intel Xeon X5650 workstation. The last file to be generated was “results”. It contains the predicted statuses of properties as interpreted with the conventions that follow.


A specific LTL property for a specific problem is checked by putting together the problem's transition function (function calculate_output() and its callees), a file buchimgt.c specific to the problem (this file contains the definition of the predecessor state and functions that manipulate it), a file main.c that contains a generic main loop, and a file such as output Z occurs after output U until output W.c that is specific to a property (but independent of the problem to which it is linked).

The problem-specific files are generated by the script genallproblemfiles.

The property-specific files such as output Z occurs after output U until output W.c were generated from templates such as output _ occurs after output _ until output _.c. The templates were hand-crafted by my colleague Sébastien Bardin to work together with the rest of the instrumentation.

The current status of the LTL property is maintained by an automaton designed for this kind of work (called a Büchi automaton). The generic main() function describes our interpretation of the problem. In pseudo-code:

forever
   read input 1..6
   compute output from input and current state
   compute transition in Büchi automaton from input and output
   if the current trace can no longer fail to satisfy the property
      prune 
   if the current trace can no longer satisfy the property
      display current state for verification
      abort analysis
   non-deterministically update predecessor state

The actual C file is designed so that the value analysis will remain complete as well as sound when analyzing it with high -slevel. A secondary objective is for the analysis to be as fast as possible despite current limitations. Readability only ranks third as an objective.

Still, regardless of the strange structure of the program, it is basically C. Any C programmer can read the main() and understand what is being verified. The only non-standard function is Frama_C_interval_split(). The informal description of Frama_C_interval_split() is that the function returns a value between the bounds passed to it and that the results provided by the analyzer hold for all values that it can return. If a simple Büchi automaton is plugged in that fails when output Z is emitted, it is relatively easy to understand on the basis of this main() function what we mean by “Z is emitted” and that Z can never be emitted if the analysis passes without the automaton raising a flag. This is how we can verify, as an example, the LTL property G !oZ.


As you can see on the pseudo-code, the Büchi automaton is updated with an input and the corresponding computed output, which are considered simultaneous. When calculate_output() fails to return an output because of a failed assertion, no pair (input, output) is presented to the automaton. To illustrate, consider the hypothetical property "output X occurs before input A". This property is weird, because each input A-F can happen at each instant, including the first one. It may seem that the property is tautologically false. With our interpretation, this property would be true if the system always failed the instant it read A until it had output X. These inputs would not count because the Büchi automaton would not see them.

The above is what we considered the most reasonable interpretation of what the competition organizers intended, at the time we sent our answers. On October 1, the organizers published example solutions that made us realize that they intended something else, and we had to retract our participation.

Differences with the official interpretation, and remaining unknowns

As the challenge period was coming to a close, organizers published solutions to additional problems 10 and 11 (the challenge was supposed to close at the end of September. The solutions illustrating the semantics were published on October 1 and the deadline extended).

One can only assume that these solutions also define the semantics of LTL formulas for problems 1-9, since no example is available for these. Unfortunately, there are subtle differences between the two families of programs.


Outputs corresponding to invalid inputs

The transition functions in problems 1-9 can return -1 or -2. They only return -2 when the rest of the program state was unchanged. The organizers communicated to us that traces in which -2 is emitted indefinitely should be ignored:

The same holds by the way for invalid inputs, i.e., calculate_output returning 2 [sic]. They are not meant to be part of the traces that LTL properties talk about - a liveness property such as F oW hence does not become false alone by infinitely often executing an invalid input.

I did not see this last point made explicit on the website.

For problems 1-9, since the program state is unchanged when -2 is returned, it seems that the easiest way to comply is to treat -2 as a fatal error that interrupts the trace. We do not lose much of importance in doing so because any trace with -2 outputs can be shortened into a trace without these outputs. We do lose some inputs during such a shortening, so it is still slightly invasive and we would like to be sure that this is what the organizers intended. Indeed, the main() function provided by the organizers does seem to treat -1 and -2 differently, and does call the inputs that lead to -2 “invalid”.

        if(output == -2)
                fprintf(stderr, "Invalid input: %d\n", input);
        else if(output != -1)
                printf("%d\n", output);

The code above matches more or less the “if an unexpected input event is provided, an error message is printed and the ECA system terminates” statement on the competition's website. The code above does not stop on -2 but it seems clear that the “unexpected input” from the explanation is the -2 from the code and that, as informally stated, it interrupts the current trace.


You might expect programs 10-13 to follow the same pattern, but it turns out that instead of sometimes -1 and sometimes -2, they always return -2, both when the program state has been changed by the transition function and when it hasn't. You might still hope to interpret -2 as “error”, but in programs 10 and 11 it happens in the middle of traces that are given as example for the questions on the reachability of assertions. The solution page says:

error_46 reachable via input sequence
[E, A]

And, when you try it:

$ gcc Problem10.c 
$ ./a.out 
5
Invalid input: 5
1
Assertion failed: (0), function calculate_output, file Problem10.c, line 66.
Abort trap

So it does not look like -2 should be treated as an error after all. In the above organizer-provided trace, the error caused by input 5 (E) does not make error_46 unreachable (actually, I have it on good authority that error_46 is unreachable when, in the transition function, return -2; is treated as a trace-interrupting error).


In conclusion, for problems 1-9, we treated -1 as just another output letter and -2 as a trace-interrupting error. This does not correspond to what the organizers did in their example solutions to problem 10. We still do not know what the organizers intended. The transition function returning -2 instead of -1 in problems 10-13 seems particularly inscrutable when the organizers have previously told us by e-mail to ignore the traces with finitely many non-(-2) outputs.

Our aforelinked solutions were computed with this convention. We provide them in case someone wants to use the same convention and compare their results to ours.


Are inputs and outputs simultaneous or are they different events?

Again, we have no idea. In our model, the output happens at the same time as the input it was computed from.

But in example traces provided by the organizers, it looks like they are distinct events!! An example trace is [iD, oV, iB] ([oY, iC, oV, iB])*.

But if input and output are separate events, why is the first LTL formula for problem 10 expressed as (! iA WU (oU & ! iA)) ? The atom (oU & ! iA) makes no sense, because if the event is any sort of output event (U or another), then it cannot be an input event. The atom (oU & ! iA) is just equivalent to oU. All LTL properties that talk about inputs and outputs in the same instant are similarly nonsensical.

In our solutions, influenced by the shape of the competition's LTL formulas which talk about input and output in the same instant, we considered that input and corresponding output are simultaneous. Both are passed to the Büchi automaton at the time of computing a transition. An input can happen together with the output -1. If the transition function aborts because of an assertion or returns -2, then the input is not seen by the Büchi automaton (in the case of the -2 output, this would be easy to change by modifying the main() function. The trace could be interrupted just after the Büchi automaton transition instead of just before).


Is there any reachability LTL property at all in the competition?

The example solutions all provide infinite traces as counter-examples. This is consistent with the explanation “an LTL property holds for a certain ECA program if and only if all possible runs which do not lead to an error satisfy the LTL property”, but it is a very strange way to approach reachability questions (which indeed are no longer reachability questions at all). To answer an LTL question that would ordinarily have been a reachability question, such as “output W occurs at most twice”, in this setting, one must, when W has been encountered three times on a trace, wonder whether this trace is the prefix of at least one infinite trace. If there is a sequence of inputs that keeps the trace alive, then the trace is a counter-example. If all inputs that can be added to extend this prefix eventually lead to an error, even at a very far horizon, then the trace is not a counter-example (remember not to take into account the trace that indefinitely outputs -2 when trying to answer this sub-question).

In short, in this setting, no LTL formula ever expresses a reachability property. This does not prevent us from handling these LTL properties (Sébastien assures me that our method theoretically handles all liveness properties. We certainly handle those that are already liveness properties without the weird “all possible runs that do not lead to an error” quantification).


This discussion gives me an idea for a powerful static analysis tool. I will call it Marvin. You ask it “Is this software safe? Can this dangerous angle be reached?” and it answers “Who cares? The Sun is going to blow up in five billion years. Good riddance if you ask me. Oh god, I'm so depressed…”

Thanks

As the initiator of this attempt to participate in the competition, I would like to thank my colleagues Virgile Prevosto, Boris Yakobowski and Sébastien Bardin, from whom I personally learnt a lot about various aspects of software verification; my warmest thanks go to the organizers of the 2012 RERS Grey Box Challenge.

Friday, August 24 2012

On writing a dedicated model-checker for the RERS competition

In recent posts, I have shown that Frama-C's value analysis could answer many reachability questions, and some questions that weren't originally phrased as reachability questions, about the programs in the RERS competition.

If you are vaguely familiar with the internals of Frama-C's value analysis, and if you tried analyzing some of the competition programs yourself, you may have noticed that these analyses use only a small fraction of the plug-in's capabilities. The analyzer is only ever propagating abstract states that correspond to singletons. It does juggle with many program states, but the programs here have small states that are encoded in just a few variables (the analyzer would have been able to manipulate the states encoded on a much larger number of variables and would efficiently share in memory the values common to several of the explored states). There are no bit-fields, no structs with padding (both of which might make identical states look different if carelessly handled). The programs obviously do not execute any undefined behavior for lack of any operation that might introduce them. There is a single outermost loop. There is no arithmetic at all.

In favor of the general verifier

A specialized verifier designed for just these programs would have a huge opportunity to do a better job on them. On the other hand, the advantage of working on a more general verifier is that it is useful for more tasks, which makes it possible to spend more time improving it. Some of the improvements enhance the analysis of many programs, including the specific programs built only from assignments, equality tests and conditionals considered in the competition. Some of these improvements are too sophisticated to justify implementing in a verifier that only handles programs with assignments, equality tests and conditionals, because such a verifier will never be usable to find that the SHA-3 candidate Skein's reference implementation does not exhibit undefined behavior, that AES may be susceptible to timing attacks (but Skein isn't), where a couple of bugs are in an old undocumented piece of control software, that there is a subtle memory overflow in the compression library QuickLZ, or that units of code have the data and control flows mandated by specification.

What a dedicated verifier might look like

In the particular case of these programs, the same propagation done by the value analysis could be done in C, by a program that would incorporate the transition function directly and execute it as compiled code. This would be much more efficient than what the value analysis does, and in this particular case, it would give the same results. From experiments interpreting Csmith programs, the value analysis slowdown with respect to compiled code can be expected to be of the order of 10000.

Accumulating reachable states

In order to reproduce what Frama-C's value analysis does, a dedicated verifier would need to store states that have already been visited, and to be able to recognize, after applying the transition function once more, whether the obtained state was one that was already visited.

In this particular case, this could be done in constant time with a hashset. Note, however, that computing a hash is only possible in this specialized case because all states are “singleton” states. If some states represented several values at once, e.g. a1 ∈ {1; 2}, a2 ∈ {0; 1}, the right criterion would then be whether the newly computed state is included in one of the previously propagated states. Testing inclusion in one of the previous states cannot be done in constant time with a hashset (hashsets only allow you to test whether the new state is equal to an existing state, and you need to be careful to use compatible equality and hash functions).

Frama-C's value analysis uses a data structure that is generally as efficient as hashsets when the manipulated states are singletons, and that often remains efficient when testing inclusion in one another of states that are not singletons.

Storing states that remain to be propagated

A small thing: the propagation algorithm also requires a workqueue. Any propagation order will do (Frama-C's value analysis propagation order is not specified either), but since C comes with so few data structures, I just thought I would mention it. For a dedicated verifier in C, one would need to find some linked list implementation or write eir own. Frama-C's value analysis may interpret C slower than the code can be executed once compiled, but it comes with ready-to-use data structures for the reached set and for the workqueue. One need not even know they are there to use the analyzer.

Specific drawbacks

Obviously, this ad-hoc verifier could be expected to be much faster than the value analysis. This makes the experiment tempting. What prevents me from starting work on it is the lack of generality of the resulting tool. Some examples follow.

Undefined behavior

Suppose that you had implemented such a specialized verifier for the competition's programs, and that you were looking for more transition functions written in C that the same verifier could apply to. You would certainly find some, but would you in general be certain that the transition function never exhibits undefined behavior (not for the first transition, and not for the transition from any reachable state to a successor)? If one isn't certain that the transition function does not cause undefined behavior, from a formal verification point of view, the tool is worthless. An undefined behavior can cause anything to happen. In particular, an out-of-bounds memory access in the transition function can mess up the verifier's reached set or workqueue, and compromise the results of the verification.


Any automatic verifier is susceptible to the caveat that a bug in the verifier can compromise results. This is different: your implementation could be flawless, but it would still be susceptible to a bug in the transition function, a bug in the system being verified.

Of course, you could, as a preliminary step in your verification, check that the transition function does not have any undefined behavior for any input state. If you find that there are a lot of different undefined behaviors in C and that it's a bother to detect them all, we have a tool that we have been working on. It also answers reachability questions.

General interference between unrelated traces

Even if you have only “frank” run-time errors to fear—undefined behavior that compilers kindly translate to run-time errors, at least when optimizations are disabled—, a run-time error in one transition will still interrupt the entire dedicated analyzer. You will notice when this happens, and you can add a guard to the particular division by zero or NULL dereference, but this iterative process may end up taking as long as the value analysis for the same result. The value analysis, when encountering division by zero or NULL dereference, makes a note of it, considers the trace as terminated, and goes on to the propagation of the next trace. The end result, a list of run-time errors to worry about and a reachable states set, is the same, but it is obtained in a single pass.

There is also the issue of non-termination of the transition function. The value analysis detects cases of non-termination more or less quickly; again, when the infinite loop is detected it simply goes on to the next trace. With an ad-hoc verifier, if you expect the verification to take days, it may take days before you realize that the verifier is executing an infinite loop in its third considered transition.

Conclusion

In summary, considering the general C verification framework we already have, it looks like it wouldn't be a good investment of our time to develop a dedicated fast verifier for the competition—although it would provide an insightful datapoint if someone did.

Perhaps participating in the meeting will convince us that Event Condition Action (ECA) systems are more generally useful than our current experience has led us to believe. We could work on a verifier for them if there is a need. There is not much code to reuse from a general-purpose C abstract interpreter. There would be ideas to reuse, certainly. I will come to one of them in the next post.

Saturday, December 31 2011

Do not use AES in a context where timing attacks are possible

Justification

There recently was a thread in the Frama-C mailing list on verifying the Rijndael cipher, standardized and better-known as AES. Nowadays, AES is mostly famous for being sensitive to timing attacks. An attacker measuring the time it takes to encrypt known plaintext with an unknown key can deduce the key (key quote: "Using secret data as an array index is a recipe for disaster").

This brings me back to an experiment with University of Minho researchers where Frama-C's value analysis was used to verify that cryptographic C functions have constant execution time. The justification and the conclusion are the same as in this blog post that had served as the starting point of that experiment.


There are two aspects to constant-time programming: checking that the execution path does not depend on secrets, and (remember the key quote above) checking that array indices do not depend on secrets. I implemented the first check as an extension of Frama-C's dependencies analysis. Manuel Bernardo Barbosa et al. similarly implemented the second one, and they used both analyses to verify that cryptographic functions in the NaCl library were safe from timing attacks. I had a glance at their implementation, but I had not made the effort to port it to new Frama-C releases (I assume they had not either).

Example

Consider the function f() below:

int t[10] = {0, 1, 2, 2, 1, 8, 9, 17, 54, 79};

int u[10];

int g;
int a;

void f(int n)
{
  int i;
  int *p;

  for (i = 0; i < t[n] * t[n+1] - 3; i++)
    {
      p = u + i + a;
      if (g)
        *p = i;
    }
}

Execution path dependencies analysis relies on standard dependencies analysis (option -deps), which in turn relies on the value analysis. Say you are interested in the execution times of function f() when array t has its initial values, global variables a and g are each 0 or 1, and the argument n is between 0 and 5. You would then create the following analysis context, as you would for any verification based on the value analysis:

main(){
  a = Frama_C_interval(0, 1);
  g = Frama_C_interval(0, 1);
  f(Frama_C_interval(0, 5));
}

Analyzing this complete program for execution path dependencies:

$ frama-c -experimental-path-deps -deps share/builtin.c ep.c
...
Computing path dependencies for function f
Path dependencies of f: t[0..6]; g; n

Within the initial conditions defined by the function main() (and in particular, n between 0 and 5), the execution path depends on the initial values of t[0..6], g and n at the time f() is called. Note that variables p, a and u are not included in this list, although they are used, because the execution path does not depend on them. So if f() was a cryptographic function and variable a contained a secret, you would not have to worry that this secret can be deduced by observing through timing that the function has different execution paths depending on a.


Does this mean that you are safe from timing attacks, though? As demonstrated in the article linked first in this post, you should also worry that a malicious adversary will infer information from the array indices used in f():

$ frama-c -experimental-mem-deps -deps share/builtin.c ep.c
...
Computing mem dependencies for function f
Mem dependencies of f: t[0..6]; a; n

The above option, which I quickly re-implemented when I saw that the verification of cryptographic functions was being discussed in the mailing list, tells which of f()'s inputs influence the computation of array indices, in the same way that option -experimental-path-deps tells which of f()'s inputs influence the execution path inside f(). In this example, it tells us that the contents of input variable a can leak through a cache-timing attack (because a is used in the computation of p, and later p may be dereferenced for writing).

What about AES, then?

$ frama-c rijndael.c -cpp-command "gcc -C -E -I /usr/local/Frama-C/share/frama-c/libc" -no-annot -slevel 9999 mymain.c -experimental-mem-deps -deps
...
Mem dependencies of rijndaelEncrypt: rk; Nr_0; pt; ct; in[0..15]; key[0..39];
                                     Te0[0..255]; Te1[0..255]; Te2[0..255]; Te3[0..255]

The analysis finds that there is a risk that in[0..15] (the input buffer) and key[0..39] (the key) may be leaked through cache-timing attacks. The other variables are Nr_0, the number of rounds, pt and ct, just pointers, and the Tei arrays, the S-boxes. Those aren't secrets so it doesn't matter that they are listed.

Like every other analysis in Frama-C, this analysis always makes approximations on the safe side, so we expected to find this result (since we knew the attack had been realized). The question is: is the analysis precise enough to conclude that another cryptographic function is safe? The answer is yes: the hash function Skein-256 can be shown to have an execution time that depends only on the length of the input buffer, not on its contents.

Implementation

The option -experimental-path-deps has been available as a hidden, proof-of-concept feature inside the official Frama-C distribution roughly since the Valgrind blog post was published in 2010.

I re-implemented the memory access dependencies analysis demonstrated here as -experimental-mem-deps. It has been available since Frama-C version Oxygen.

Note that the feature was very lightly tested. The new option -experimental-mem-deps was written in 45 minutes and the examples in this post are the only testing it received. If you feel that automatic analyses such as these are too automatic (they don't force you to write invariants for loops in the target code, for instance), you can instead spend your time usefully by devising test suites to make sure they work as intended in all cases.

Conclusion

One must be careful with a posteriori studies. There are plenty of bugs for which, once they have been pointed out, it is tempting to say "hey, I could have found it with this or that static analysis tool! My tool finds these bugs. Everyone is so stupid not to be using it". The point is that, much like a dowser being tested, you only found the bug with your tool when someone else had told you where it was. Had you tried to find it before being told, perhaps you would have given up right in the setup phase, or you would have had so many false positives to review that you would never have gotten round to the bug, or you would have gotten round to it but would have dismissed it quickly as another false positive.

So I want to make it absolutely clear that I am not saying that the AES implementation conveniently posted in the mailing list should have been checked with Frama-C when it was written. This is a causal impossibility. However, timing attacks are a recognized family of attacks now, and cryptographic functions analyze rather well (based on experiments with AES and Skein), so Frama-C is definitely a tool that can be used nowadays to verify that new cryptographic functions are being implemented according to the state of the art.


This blog post owes to Adam Langley for describing the method in the context of Valgrind, to Manuel Bernardo Barbosa and his students for courageously writing the first version of the array indices dependencies analysis (which I am sure took them more than 45 minutes) and applying it, and to 罗婷 for posting on frama-c-discuss a version of AES that was ready to analyze.

Thursday, October 27 2011

Covering all interlacings in a single short context

Answering the two questions from last post in reverse order:


What shorter analysis context would test arbitrarily long sequences of interrupts, with repetitions?

There was a hint in the previous post in the link to the Skein-256 tutorial. A stronger hint would have been to link directly to this post, where an analysis context that encompasses many possible executions is offered. The analysis context there is for arbitrary numbers of calls to Skein_256_Update(), and the key part of the analysis context is:

  ...
  Skein_256_Init(&skein_context, HASHLEN * 8);
  while (Frama_C_interval(0,1))
    {
      for (i=0; i<80; i++)
        msg[i] = Frama_C_interval(0, 255);
      Skein_256_Update(&skein_context, msg, 80);
    }
  Skein_256_Final( &skein_context, hash);
  ...

By similarity with the above example, an analysis context for arbitrarily long sequences of interrupts, with repetitions, can be written:

...
void analysis_context(void)
{
  main();
  while (Frama_C_interval(0,1))
    {
       if (Frama_C_interval(0,1)) interrupt_device1();
       if (Frama_C_interval(0,1)) interrupt_device2();
       if (Frama_C_interval(0,1)) interrupt_device3();
    }
}

Value analysis usability pitfall number one: when using Frama_C_interval(), do not forget to include file builtin.c as part of the analysis project.


The first question was actually more difficult:

What shorter analysis context would test, in a single analysis, exactly the same behaviors as the lengthy six entry points ?

Answer: the function below would.

int called1, called2, called3;
...
void analysis_context(void)
{
  main();
  while (!(called1 && called2 && called3))
    {
       if (Frama_C_interval(0,1) && !called1) { called1 = 1; interrupt_device1(); }
       if (Frama_C_interval(0,1) && !called2) { called2 = 1; interrupt_device2(); }
       if (Frama_C_interval(0,1) && !called3) { called3 = 1; interrupt_device3(); }
    }
}

Variables called1, called2, called3 are defined as zero-initialized global variables, and are used to remember whether each interrupt has already been called.

The value analysis will provide results that hold for all possible executions of the above context. It does not matter that some of the possible executions of the above code escape in livelock and fail to terminate. All the executions in which the three interrupts are called in some order and the program then terminates are taken into account, which is what matters for verifying safety.


To reiterate, we are not yet worried about how the verification will be done at this stage. It may turn out later that changing the context code slightly makes it easier while still covering all the same executions. Or the value analysis may be able to brute-force its way through, with an option such as -slevel-function analysis_context:33. We'll let the person in charge of verification make his choices and justify them if he needs to. Writing the analysis context is, in a way, a specification activity. While not completely ignoring the verification's feasibility, it should be done without worrying too much about such details.


And what if the execution of interrupts are in fact interlaced (that is, if they are in fact interruptible)? There is a plug-in for that (a phrase I would like to suggest as new Frama-C slogan). That plug-in is not distributed at this stage, but if you are interested in the verification of comparatively precise properties on code with fine-grained concurrency, write in and we'll tell you about it.


Post-scriptum: here is a complete example:

int t[20], s[30];

int *p, i, r;

main()
{
  p = t;
  i = 0;
}

void interrupt_device1(void)
{
  r = p[i + Frama_C_interval(0,9)];
}

void interrupt_device2(void)
{
  p = (Frama_C_interval(0,1)) ? t : (s + Frama_C_interval(0,10));
}

void interrupt_device3(void)
{
  i = Frama_C_interval(0,10);
}

void analysis_context(void)
{
  main();
  while (Frama_C_interval(0,1))
    {
       if (Frama_C_interval(0,1)) interrupt_device1();
       if (Frama_C_interval(0,1)) interrupt_device2();
       if (Frama_C_interval(0,1)) interrupt_device3();
    }
}

For some reason, this example does not even need the -slevel option to be found safe:

$ frama-c -val -main analysis_context  .../share/builtin.c t.c
...
[value] Values for function analysis_context:
          p ∈ {{ &t ; &s + [0..40],0%4 }}
          i ∈ [0..10]
          r ∈ {0}
...

Oh, variable r remains zero throughout the infinite loop. The silliness of my examples sometimes...

Verifying for all interlacings

A prospective user experimenting with Frama-C wonders whether it is possible to verify the behavior of an interrupt-driven system for all possible interlacing of interrupts.

We assume, for now, that interrupts are themselves uninterruptible. The prospective user wants to verify that nothing wrong happens when each interrupt is called once, in any order. He wrote the analysis context below.

void test_case1(void)
{
	main();
	interrupt_device1();
	interrupt_device2();
	interrupt_device3();
}

void test_case2(void)
{
	main();
	interrupt_device1();
	interrupt_device3();
	interrupt_device2();
}

void test_case3(void)
{
	main();
	interrupt_device2();
	interrupt_device1();
	interrupt_device3();
}

void test_case4(void)
{
	main();
	interrupt_device2();
	interrupt_device3();
	interrupt_device1();
}

void test_case5(void)
{
	main();
	interrupt_device3();
	interrupt_device1();
	interrupt_device2();
}

void test_case6(void)
{
	main();
	interrupt_device3();
	interrupt_device2();
	interrupt_device1();
}

Function main() does a bit of initialization. Each of the functions test_case1(), …, test_case6() is used as an entry point for the value analysis.

This method works: studying the six entry points one after the other, properties about many possible executions can be found. The problem is that this method does not scale well to larger numbers of interrupts.


The next post will answer two questions, that I invite you to think about for yourself. Note that the writing of analysis contexts is mostly orthogonal to the careful selection of options and annotations that help the value analysis conclude to the safety of the code. This is why I am suggesting these exercises without offering the code of interrupt_device1(), …, interrupt_device3(). Only in the most difficult cases (as happened for the verification of the strongest properties for Skein-256) is it necessary to adapt the analysis context to make the verification possible. By the same token, proofreading a verification in order to check that the right property was verified usually only means proofreading the analysis context, not the options and annotations that enabled the value analysis to reach its conclusion. Most options are safe to use, and the value analysis tells whether it uses a user annotation as an assumption or only as a hint.


Here are the questions the next post will focus on:

  1. What shorter analysis context would test exactly the same behaviors as the six entry points above in a single analysis?
  2. What shorter analysis context would test arbitrarily long sequences of interrupts, with repetitions?

Monday, October 17 2011

Features in Frama-C Nitrogen, part 1

Here is a series of posts that highlight interesting features in the recently released Frama-C Nitrogen 20111001. There is new functionality in Nitrogen that we hope will entice both existing and prospective users. Still other improvements will only be meaningful to existing users. I will start off with two items from this second category.

Nitrogen compiles with OCaml 3.10.2.

The only non-optional dependency for compiling Nitrogen is an OCaml compiler. But which one? Despite a now long history, the OCaml language is still evolving relatively rapidly. We have never been happier with our choice of an implementation language: it is evolving in a direction that suits us, and it is evolving conservatively (compare Perl 6, PHP 5, Python 3, …). On the other hand, Frama-C is large and contains tricky code; the plug-in architecture with dynamic loading in general on the one hand, and the hash-consing programming technique on the other hand, are only two examples. It is large enough and tricky enough to reveal many of the subtle differences between any two versions of the OCaml compiler.

Nitrogen compiles with the latest version of OCaml, of course. That's 3.12.1 at the time of this writing. We already know that it won't compile as-is with the future OCaml 3.13 (a patch will be released in due time). Similarly, support for older versions has to be gained, version by version. If you have only written medium-sized OCaml programs, you could easily underestimate the difficulty of this. I was lucky enough not to have to deal with it much during this cycle, but some of my colleagues had to. It always is a struggle. Sometimes the equivalent of #define/#if constructs from C pre-processing would help, but this is not idiomatic in OCaml. Again, the only non-optional dependency for compiling Nitrogen is an OCaml compiler, so we won't use fine solutions such as Cppo.


For Windows and Mac OS X, OCaml is not part of the operating system, so we ship the version we like together with the binary package (if we make one). With Unix, the issues are different: there are too many flavors for us to distribute binaries, but there are efficient package systems and distributions to painlessly bring in required dependencies. Often, Frama-C itself is packaged in binary or source form as part of these distributions, thanks to the work of many volunteers. It may take some time for packages to be created for Nitrogen, and some users do not want to wait. Linus Token, for instance, may rely on a computer he bought two years ago. Frama-C works fine on two-years old computers, as seen in the next section. Linus installed his Unix distribution of choice when he bought his computer, and now he expects Frama-C's sources to compile with the OCaml version from his distribution (OCaml programmers can be expected to have the latest OCaml compiler installed from its own sources, but Linus is not an OCaml programmer). The Unix distribution installed two years ago was on average 3 months old at that time, and it may have been frozen for stability 3 months before being released. For Linus, Frama-C has to compile with a 2.5-year-old compiler. And then there are industrial users who like to trail a little bit on the Linux front, but at the same time want all the latest Frama-C features. For Nitrogen, we chose to retain compatibility with OCaml 3.10.2, released in February 2008, and all OCaml versions released since.

The value analysis is up to four times faster on realistic examples

The second Nitrogen feature I want to highlight today is the value analysis' speed. Here is a quick and dirty comparison for programs that could already be analyzed with Carbon. There are new alarms and precision improvements in Nitrogen, but I made sure that in this comparison, the two versions were doing the same amount of work.

For this comparison, I did not cherry-pick benchmarks. I looked for programs of varying sizes in the archives of the blog, and used the three that came first, having decided in advance that I wouldn't dismiss any results I didn't like. Each analysis was run three times, and the median time was kept. This is on a Core 2 mobile processor, and the frequency is pinned at 1.5GHz through CoolBook. In plain English, the timings are for an oldish but not antiquated notebook.

The options I used were:

-slevel 1000 -val count.c
-slevel 10000 -val -no-results -cpp-command "gcc -C -E -m32 -Dprintf=Frama_C_show_each" sha1.c
-val -slevel 1000 -slevel-function main:0 *.c -cpp-command "gcc -m32 -C -E -I. "

The programs count.c and sha1.c came from this post. For sha1.c I had to disable the endianness detection code, implemented with a switch, to put the Carbon and Nitrogen versions on an equal footing. With the Carbon version, there used to be a difficult choice to make in the presence of switch, remember? I could also have used option -simplify-cfg for the Carbon execution, but then the analyzers would not have been analyzing exactly the same program. The Skein-256 example is the verification that an arbitrary number of calls to Skein_256_Update(..., 80) never cause a run-time error, using Josh Ko's instrumentation.

                             count.c     sha1.c      Skein-256

Carbon                       0m0.491s    0m2.862s    1m10.201s
Nitrogen without Zarith      0m0.410s    0m1.724s    0m37.782s
Nitrogen with Zarith         0m0.313s    0m0.962s    0m16.700s

Total speed-up                 1.5          3            4

How did the value analysis improve so much overall? As exemplified by the timings with and without Zarith, this is the result of many small enhancements and refinements at all levels.

Conclusion

As promised, the two features described here are only worth noting for faithful users. It only matters that Frama-C compiles with OCaml 3.10.2 if you intend to compile it at all, and you only care that it is much faster than before if you were already using it. Even so, some of the existing users may not notice them. This is the kind of feature that I like, because it does not complicate the user interface —the value analysis benchmarks above use the same options and produce the same results— and improves the software nonetheless. Existing users and people who try Frama-C for the first time with Nitrogen will both have a better time of it thanks to the effort spent on the two points described in this post, and on tens of others, big and small, some of which I hope will receive their own minute of spotlight in this series. I have been forced to allude to one other small improvement, a better treatment of switch in the value analysis, that I like even better because it removes the need to learn one option.

Friday, July 22 2011

Animated donut verification

Here's a cool obfuscated C program by Andy Sloane that draws a revolving donut.

You know where this is heading... I am going to suggest that someone should verify it. I will get us started.


1. Download the code


2. Determine what library functions it needs:

$ frama-c -metrics donut.c
[kernel] preprocessing with "gcc -C -E -I.  donut.c"
donut.c:1:[kernel] user error: syntax error

Oops. The donut is a little bit too obfuscated for our front-end, it seems. The problem here is the idiom k; to declare an int variable k. Let's change this:

int          k;double sin()
         ,cos();main(){float A=
       0,B=0,i,j,z[1760];char b[
     1760];printf("\x1b[2J");for(;;
  ){memset(b,32,1760);memset(z,0,7040)
  ;for(j=0;6.28>j;j+=0.07)for(i=0;6.28
 >i;i+=0.02){float c=sin(i),d=cos(j),e=
 sin(A),f=sin(j),g=cos(A),h=d+2,D=1/(c*
 h*e+f*g+5),l=cos      (i),m=cos(B),n=s\
in(B),t=c*h*g-f*        e;int x=40+30*D*
(l*h*m-t*n),y=            12+15*D*(l*h*n
+t*m),o=x+80*y,          N=8*((f*e-c*d*g
 )*m-c*d*e-f*g-l        *d*n);if(22>y&&
 y>0&&x>0&&80>x&&D>z[o]){z[o]=D;;;b[o]=
 ".,-~:;=!*#$@"[N>0?N:0];}}/*#****!!-*/
  printf("\x1b[H");for(k=0;1761>k;k++)
   putchar(k%80?b[k]:10);A+=0.04;B+=
     0.02;}}/*****####*******!!=;:~
       ~::==!!!**********!!!==::-
         .,~~;;;========;;;:~-.
             ..,--------,*/
$ frama-c -metrics donut.c
[kernel] preprocessing with "gcc -C -E -I.  donut.c"
[metrics] Syntactic metrics
           Defined function (1):
             main  (0 call); 
           Undefined functions (5):
             putchar  (1 call); sin  (4 calls); cos  (4 calls); printf  (2 calls);
             memset  (2 calls); 

Functions putchar() and printf() do not influence the rest of the execution, so we can postpone finding a good model for them until the very end. We have a memset() implementation left over from the Skein tutorial. Math functions cos() and sin() can in a first step be specified as returning [-1.0 .. 1.0]. The value analysis has for a long time had a Frama_C_cos() built-in that does a little better than that, and the next release will have a Frama_C_sin(), but these are not necessary yet.

Let's use a file sincos.c such as:

#include "share/builtin.h"

double sin(double x)
{
  return Frama_C_double_interval(-1.0, 1.0);
}

double cos(double x)
{
  return Frama_C_double_interval(-1.0, 1.0);
}


3. Launch an imprecise analysis:

$ bin/toplevel.opt -val share/builtin.c sincos.c donut.c share/libc.c | grep assert

We use the command above and get the results below. There are 9 alarms and 2 of them are redundant with the other 7 (these two would be removed in the GUI).

share/libc.c:51:[kernel] warning: out of bounds write. assert \valid(tmp);
donut.c:14:[kernel] warning: accessing out of bounds index [-7610..9610]. assert 0 ≤ o ∧ o < 1760;
donut.c:14:[kernel] warning: accessing uninitialized left-value: assert \initialized(z[o]);
donut.c:14:[kernel] warning: accessing out of bounds index [-7610..9610]. assert 0 ≤ o ∧ o < 1760;
donut.c:15:[kernel] warning: out of bounds read. assert \valid(".,-~:;=!*#$@"+tmp_7);
donut.c:15:[kernel] warning: accessing out of bounds index [-7610..9610]. assert 0 ≤ o ∧ o < 1760;
donut.c:17:[kernel] warning: accessing uninitialized left-value: assert \initialized(b[k]);
donut.c:17:[kernel] warning: accessing out of bounds index [0..1760]. assert 0 ≤ k ∧ k < 1760;
donut.c:14:[value] warning: overflow in float: [--..--] -> [-3.40282346639e+38 .. 3.40282346639e+38]. assert(Ook)
[scope] [rm_asserts] removing 2 assertion(s)


4. The imprecise analysis was fast. Let us launch a more precise analysis:

$ bin/toplevel.opt -val share/builtin.c sincos.c donut.c share/libc.c -slevel 200 | grep assert
share/libc.c:51:[kernel] warning: out of bounds write. assert \valid(tmp);
donut.c:14:[kernel] warning: accessing out of bounds index [-7610..9610]. assert 0 ≤ o ∧ o < 1760;
donut.c:14:[kernel] warning: accessing uninitialized left-value: assert \initialized(z[o]);
donut.c:14:[kernel] warning: accessing out of bounds index [-7610..9610]. assert 0 ≤ o ∧ o < 1760;
donut.c:15:[kernel] warning: accessing out of bounds index [-7610..9610]. assert 0 ≤ o ∧ o < 1760;
donut.c:15:[kernel] warning: out of bounds read. assert \valid(".,-~:;=!*#$@"+tmp_7);
donut.c:17:[kernel] warning: accessing uninitialized left-value: assert \initialized(b[k]);
donut.c:17:[kernel] warning: accessing out of bounds index [200..1760]. assert 0 ≤ k ∧ k < 1760;
donut.c:14:[value] warning: overflow in float: [--..--] -> [-3.40282346639e+38 .. 3.40282346639e+38]. assert(Ook)
[scope] [rm_asserts] removing 2 assertion(s)

It's still the same assertions. The development version of Frama-C shows the range of the index k (that is, [200..1760]) when it prints the assertion assert 0 ≤ k ∧ k < 1760;. That suggests that pushing the value of -slevel to more than 1760 may eliminate this alarm.


I will let someone else continue from here if they want. This program could be very rewarding to verify (a little birdie tells me that there is at least one true alarm to be found, using the value analysis or any other technique. I have completely lost faith in that little birdie: there may or may not be a bug, and we will know for sure when we are done, but don't stay awake all night looking for one), so I recommend it as a fun exercise for those who are bored on sandy holiday beaches.

Thursday, June 2 2011

Skein tutorial, part 7: not dead, but resting

Do you remember the Skein tutorial? It went off to a good start (it started this blog) and then was never really completed. I blame laziness. Looking back at that first post, I notice that indeed, before Boron, we were shipping software without documentation (just think! Ahem). At the time, I also thought that the extended tutorial would be over in a couple of posts, a mistake I have learnt not to make any more. September 2010, those were the days…

The story so far

The solution provided in this post was one of two promised solutions. It was in fact my own solution. While conceiving it, I was determined not to look at Skein's source code. I could claim that this was to make the verification method completely independent of the code, which is obviously good. It makes it possible, for instance, to swap in another implementation — say, another entry in the NIST SHA3 contest — and hopefully to verify it just by relaunching the same script. The true reason I didn't want to look at the code is more likely written in bold at the top of this post.

Because I didn't want to look at Skein's source code, my solution involved the meta-argument that you can cover all possible executions of the loop

  while (Frama_C_interval(0,1))
  {
      arbitrarize_msg();
      Skein_256_Update(&skein_context, msg, 80);
  }

by analyzing instead the two pieces of code below, which between them do all the same things:

  while (Frama_C_interval(0,1))
  {
      arbitrarize_msg();
      Skein_256_Update(&skein_context, msg, 80);

      arbitrarize_msg();
      Skein_256_Update(&skein_context, msg, 80);
  }
  arbitrarize_msg();
  Skein_256_Update(&skein_context, msg, 80);

  while (Frama_C_interval(0,1))
  {
      arbitrarize_msg();
      Skein_256_Update(&skein_context, msg, 80);

      arbitrarize_msg();
      Skein_256_Update(&skein_context, msg, 80);
  }

The alternate solution

In July 2010, I was teaching at the FLOLAC summer school and I was lucky to have Josh Ko as my Teaching Assistant. He did not have my prejudices against looking at the code being verified. The alternate, and better, solution that follows is his.


We determined, when we elaborated the solution based on pairing the calls to Skein_256_Update(), that the issue was the field skein_context.h.bCnt, which contained 0 before the first iteration, and alternately 16 or 32 after that. Pairing the calls to Skein_256_Update() was one way to force similar calls to be analyzed together and prevent a loss of precision caused by not knowing whether skein_context.h.bCnt was 16 or 32. Josh Ko's solution instead prevents the loss of precision by nudging the value analysis into studying these cases separately by way of an annotation inside Skein_256_Update():

--- skein.c.orig	2011-06-02 21:06:29.000000000 +0200
+++ skein.c	2011-06-02 21:06:50.000000000 +0200
@@ -159,6 +159,8 @@
 
     Skein_Assert(ctx->h.bCnt <= SKEIN_256_BLOCK_BYTES,SKEIN_FAIL);     /* catch uninitialized context */
 
+    /*@ assert ctx->h.bCnt == 0 || ctx->h.bCnt == 16 || ctx->h.bCnt == 32 ; */
+
     /* process full blocks, if any */
     if (msgByteCnt + ctx->h.bCnt > SKEIN_256_BLOCK_BYTES)
         {

We have seen how to provide useful annotations in this post. The above annotation is another example. With it, we can use this main():

void main(void)
{
  int i;
  u08b_t hash[HASHLEN];

  Skein_256_Ctxt_t skein_context; 
  Skein_256_Init(&skein_context, HASHLEN * 8);

      arbitrarize_msg();
      Skein_256_Update(&skein_context, msg, 80);

      arbitrarize_msg();
      Skein_256_Update(&skein_context, msg, 80);

  while (Frama_C_interval(0,1))
    {
      arbitrarize_msg();
      Skein_256_Update(&skein_context, msg, 80);
    }
  Skein_256_Final( &skein_context, hash);
}
$ time frama-c -val -slevel 1000 -slevel-function main:0 *.c -cpp-command "gcc -m32 -C -E -I. " > log2

real	1m48.218s
user	1m45.133s
sys	0m1.325s

This single main() covers all the programs with n>=2 calls to Skein_256_Update(…, 80):

$ grep ssert log2
skein.c:162:[value] Assertion got status valid.

And that's that: no alarm is emitted, and the annotation that we provided for the value analysis is verified before being used (so it is not an assumption).

For the sake of completeness, the cases of 0 and 1 calls to Skein_256_Update(…, 80) should also be checked separately. We have done that before, so there is no need to repeat it here.

Generalizing

The advantage of the method presented in this post over the previous one is that it generalizes more easily to different buffer lengths passed to Skein_256_Update(). The more complete verification I previously alluded to uses, among other annotations, the assertion below.

    /*@ assert
      ctx->h.bCnt ==  0 || ctx->h.bCnt ==  1 ||
      ctx->h.bCnt ==  2 || ctx->h.bCnt ==  3 ||
      ctx->h.bCnt ==  4 || ctx->h.bCnt ==  5 ||
      ctx->h.bCnt ==  6 || ctx->h.bCnt ==  7 ||
      ctx->h.bCnt ==  8 || ctx->h.bCnt ==  9 ||
      ctx->h.bCnt == 10 || ctx->h.bCnt == 11 ||
      ctx->h.bCnt == 12 || ctx->h.bCnt == 13 ||
      ctx->h.bCnt == 14 || ctx->h.bCnt == 15 ||
      ctx->h.bCnt == 16 || ctx->h.bCnt == 17 ||
      ctx->h.bCnt == 18 || ctx->h.bCnt == 19 ||
      ctx->h.bCnt == 20 || ctx->h.bCnt == 21 ||
      ctx->h.bCnt == 22 || ctx->h.bCnt == 23 ||
      ctx->h.bCnt == 24 || ctx->h.bCnt == 25 ||
      ctx->h.bCnt == 26 || ctx->h.bCnt == 27 ||
      ctx->h.bCnt == 28 || ctx->h.bCnt == 29 ||
      ctx->h.bCnt == 30 || ctx->h.bCnt == 31 ||
      ctx->h.bCnt == 32 ; */
