Frama-C news and ideas


Tag - cybersecurity


Monday, January 14 2013

Why verify zlib?

As an interlude in the zlib verification thread, this post asks two questions. Is there any chance of finding a bug in zlib, and does it matter?

Could there be a bug in zlib?

It is not entirely impossible. The previous post in this blog pointed to a relatively minor issue in the zlib source code, and the next post will show a more serious (albeit still minor) one.

In July of 2005, a buffer overflow in zlib was found by Tavis Ormandy. Key quote: “An attacker could construct a malformed data stream, embedding it within network communication or an application file format, potentially resulting in the execution of arbitrary code when decoded by the application using the zlib library.”

If you have been following the zlib thread, this is exactly the context we are investigating: decompression of a maliciously crafted stream.

Does it matter?

In 2005, people did think it mattered. The CVE summary points out that zlib is involved when decompressing a PNG image (most users think of PNG images as innocuous objects that they should be allowed to look at regardless of origin without compromising the host computer). Other examples are the “Windows binary of Virtual Floppy Drive 2.1” and “Official Windows binaries of curl”. I do not think that list is exhaustive. Does the HTTP/1.1 protocol not allow for zlib-compressed data to be exchanged between client and server? A malicious client could cause a buffer overflow on the server with a simple request, and/or a malicious server could cause a buffer overflow in the client by replying to a request with specially crafted pretend-compressed data.

And a benchmark for the future

I am making a deliberate effort not to look at that bug now, but it goes without saying that when the verification of zlib is done, I will check that the same method finds the bug that was in the pre-July-2005 version.

Tuesday, January 1 2013

Software obsolescence and security

A couple of months ago, I packed a computer into a brown plastic bag of the kind usually used for trash. I then carried the carefully packed computer down to the basement. Physically, the computer still works. It is still beautiful (it is an iMac G4). It has been with me for the last ten years: I bought it second-hand in December 2002 from my friend Damien Doligez. I have used it for both work and entertainment for several years, and then gave it to my computerwise-unsophisticated sister who has been using it since. The design, much friendlier than later iMacs, and even than earlier iMacs, evokes the Luxo Pixar character. This computer was built to last: after ten years, the white plastics are still white. I pick this insignificant detail not because I particularly care about the tint of my computing devices, but because I don't. This is not an accident. People took care of the details when designing this computer, all the way down to how it would look as it aged.

The iMac G4 came with the cutest matched loudspeakers. The integrated modem would not find much use nowadays, but the computer also has USB, FireWire and Ethernet ports, and Wi-Fi. The thinking behind the integrated modem remains avant-garde even if the technology is now outdated: this computer was the hub of its buyer's future digital lifestyle. It was intended to be connected, and it was intended not to look like an untidy mess in your living room.


[Photo: an iMac G4 very much like the author's, photographed by Gerhard Piezinger]


I opened it a couple of times to upgrade the hard disk and optical drive. It reminded me of the glimpse I once took, as a student, of the insides of a Sun workstation. I don't think I could explain this particular feeling to someone who has never seen the inside of a Sun workstation—or of an iMac. As a concrete example that does not do justice to the experience: the unused screw holes of the factory-installed optical drive I retrieved from the computer had been taped over, so that air would not be sucked through the drive, lest dust accumulate on the drive's door as years passed.


Unfortunately, one of the functions nearly everyone expects from a computer nowadays is to display e-mail and web pages, and this computer is no longer fit for that purpose. This isn't a case of running new software on old hardware. That has never worked, but one used to be able to use loved hi-tech gadgets way past obsolescence as long as one avoided the mistake of installing new software on them. For lustra, you could keep using the spreadsheet you were used to for accounting, or your simple-but-sufficient word processor. Even games kept working!

The problem nowadays is that the option not to upgrade software is no longer available: software more than a few years old ceases to receive security patches. The iMac G4 in question cannot be upgraded past Mac OS X 10.4. On Mac OS X 10.4, Firefox cannot be upgraded past version 3.6. Firefox 3.6 received its last security update in March 2012 and has now joined the Great Repository in the Sky.


Here is an idea for 2013: how about making software that works? Of course, it would not automatically adapt to new protocols, but as long as the user was satisfied with the old ones, this software would simply keep working. It would not have to be constantly updated with security patches because it would have been made right the first time, just like hardware sometimes is.

I would not want to give the impression that I am unfairly picking on the fine developers of Firefox. Firefox 3.6 continued to be maintained long after Safari for OS X 10.4 had ceased to receive security updates. In fact, the TenFourFox project, a maintained variant of Firefox for precisely the kind of computer I took to the basement, is a testament to the power of Open Source Software. Unfortunately, there is no Adobe Flash support in TenFourFox, and I fear that this might complicate the life of a computerwise-unsophisticated user.


Acknowledgements: besides selling me his computer, Damien Doligez officially owned the first OS X-able Mac I used as a PhD student. Harry McCracken suggested livening up this post with a picture. Gerhard Piezinger photographed an iMac very much like mine, and I used his picture.

Sunday, December 30 2012

December in Security

Robert Graham, of the blog Errata Security, predicts that “vulnerabilities in Acrobat Reader, Adobe Flash, and Java today […] will be announced and patched in 2013”.


As fate would have it, he could safely have included Internet Explorer 8 in his list of software products used by millions to process untrusted inputs.

Thursday, December 6 2012

Formally verifying zlib

In a blog post earlier this year, John Regehr wonders when software verification will finally matter. He means “formal verification”, I am pretty sure. “Verification” is what practitioners of, say, the software development V-cycle have been doing for decades, and it has kept us safe for that long—at least, when it mattered most.

Formal verification, on the other hand, despite having been used here and there, is not yet the big deal that it could be. You'd better read the post; I fear I may not be doing it justice.


In that post, John writes:

For example, I’d be happy to drop a proved-correct gzip into my Linux machine as long as it was respectably fast.


Okay, John, let us do this, and see how far we can go.


We'll start with zlib, which we are going to approach as we did QuickLZ: by verifying that the decompression feature is resilient to malicious inputs. I have described the first potential issue on this code review site. If you have an opinion on this potential issue, you can contribute by expressing it there (registering is a one-click process), or here in the comments if you prefer.
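
To make this concrete, here is a minimal sketch of the kind of analysis driver such a verification involves. It is not the exact driver used in this thread: the buffer sizes are arbitrary, it calls zlib's one-shot uncompress() rather than the streaming inflate() interface, and it assumes Frama-C's Frama_C_interval built-in (whose header has gone by different names across versions).

```c
#include <zlib.h>
#include "__fc_builtin.h"   /* Frama_C_interval(); header name varies by Frama-C version */

#define IN_LEN  512         /* arbitrary sizes, chosen for the sketch */
#define OUT_LEN 4096

int main(void)
{
  unsigned char in[IN_LEN], out[OUT_LEN];
  uLongf out_len = OUT_LEN;

  /* Fill the "compressed" buffer with arbitrary bytes, so that the
     value analysis considers every possible malicious stream at once. */
  for (int i = 0; i < IN_LEN; i++)
    in[i] = Frama_C_interval(0, 255);

  /* Any alarm emitted inside uncompress() then designates an operation
     that some crafted input might cause to misbehave. */
  (void)uncompress(out, &out_len, in, IN_LEN);
  return 0;
}
```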


Edited to add: too late, Andrew Šveikauskas has already determined that our first alarm was a false positive, as you can see on codereview.stackexchange.com. Ah well… It would have been nice if our first alarm had been a true positive, but that was unlikely.

In case you regret missing out on the fun, here is another one: when reaching line inffast.c:269, where from is computed as out - dist to be accessed next, what prevents variable out from pointing one byte into the output buffer and dist from being 2, or 3, or 19? I expect the library must guard against this somewhere in the code above, but this is the end of a long day and I did not find where. Also, this would typically be the kind of relational information that the value analysis fails to remember, so I am not promising there is a bug to find here.
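
To help visualize the pattern in question, here is a deliberately simplified sketch of an LZ77 back-reference copy in the style of inffast.c. The names are illustrative, not zlib's actual code, and the commented check stands for exactly the guarantee whose location in the real code I did not find tonight. (Real zlib may also copy from its sliding window, which this sketch ignores.)

```c
/* Simplified back-reference copy.  `out` points just past the last
   byte written to the output buffer; `beg` points to its start.
   Illustrative names, not zlib's. */
static void copy_match(unsigned char *beg, unsigned char **out_p,
                       unsigned dist, unsigned len)
{
    unsigned char *out = *out_p;

    /* The safety condition under discussion: unless dist <= out - beg
       is enforced somewhere upstream, `from` points before the buffer
       and the copy reads out of bounds. */
    if (dist > (unsigned)(out - beg))
        return;                        /* "invalid distance too far back" */

    unsigned char *from = out - dist;  /* the inffast.c:269 pattern */
    while (len--)
        *out++ = *from++;              /* overlap is intentional: it
                                          repeats the recent output */
    *out_p = out;
}
```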

Monday, November 12 2012

November in Security

Bruce Schneier is, among other things, the author of the blog Schneier on Security. He is also one of the co-authors of the Skein cryptographic hash function, the SHA-3 contestant whose verification begins in the tutorial of Frama-C's value analysis manual and continues on this blog. I feel silly introducing him, considering he is rather on the “well-respected” side of the security researcher spectrum. But Bruce did provide this month's cybersecurity link: this essay by Gary McGraw.

Key quote: “The kind of defense I advocate (called ‘passive defense’ or ‘protection’ above) involves security engineering -- building security in as we create our systems, knowing full well that they will be attacked in the future. One of the problems to overcome is that exploits are sexy and engineering is, well, not so sexy.”


Bruce, in his typical fisheye approach to security, also recently provided this other link about security in fairy wren nests. I thought this made a lot of sense, but then again, Hongseok Yang lent me a copy of Dawkins' The Selfish Gene when we were both post-doc students at KAIST, so your mileage may vary.


Since coming back to France, I have bought my own copy of The Selfish Gene to lend to others. If you are around and are curious, just ask.

Tuesday, October 30 2012

October in security

Today, the New York Times has an homage to Peter G. Neumann.

Many people cite Albert Einstein’s aphorism “Everything should be made as simple as possible, but no simpler.” Only a handful, however, have had the opportunity to discuss the concept with the physicist over breakfast. One of those is Peter G. Neumann, now an 80-year-old computer scientist at SRI International […]

For many of those years, Dr. Neumann (pronounced NOY-man) has remained a voice in the wilderness, tirelessly pointing out that the computer industry has a penchant for repeating the mistakes of the past. He has long been one of the nation’s leading specialists in computer security, and early on he predicted that the security flaws that have accompanied the pell-mell explosion of the computer and Internet industries would have disastrous consequences.


In other news, it appears that South Carolina's Department of Revenue has seen its computers hacked, with 3.6 million social security numbers and 387,000 payment card details leaked.

Monday, October 8 2012

September in security

October is National Cyber Security Awareness Month (if you are in the United States; otherwise, it is Another Country's Cyber Security Awareness Month).


In celebration, here is a short list of recent cyber-security failures:

  1. An iPhone user navigating to a malicious webpage can see eir personal information (address book, browsing history, photos, …) revealed to the attacker.
  2. Security vulnerability after security vulnerability was found in Oracle's Java. These appear to allow a malicious website to plant malware on the computer of an unsuspecting user who navigates to the wrong webpage.
  3. Security vulnerabilities were found in various versions of Internet Explorer. Again, navigate to the wrong webpage, and malware gets installed on your computer.


What do all these items have in common? Firstly, impact: many people have an iPhone or use Internet Explorer as their web browser. They may not know what Java is, but it is likely to be enabled in whatever browser they use—many internet banking sites rely on it, for instance. Secondly, the listed vulnerabilities all made headlines in September 2012.

Friday, March 16 2012

Security and safety

I usually feel uncomfortable diving into the subject of safety vs security, because while I think I have a good intuition of the difference between them, I find this difference hard to formalize in writing. Fortunately, a large “security” “administration” provides:

“We use layers of security to ensure the security of [...]” (source, proudly linked from here)


The difference between safety and security is that if the aforementioned administration were trying to ensure the safety of the traveling public, then accumulating a large enough number of rubbish detection layers would work. In safety, you are working against a harsh but neutral universe. It is okay simply to do your best to ensure that a safety-critical component works as designed, and then to put in some redundancy, and perhaps some fail-safe mechanism, because you are actually multiplying probabilities of failure. Assuming your design works, and naming p₁ and p₂ the (low) probabilities that the component fails and that the fail-safe fails, the probability that two out of three redundant components and the fail-safe fail simultaneously is on the order of some small factor of p₁²·p₂. Accumulate enough layers and you can make the overall probability of failure low enough.
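
To put illustrative numbers on that formula (the figures are invented purely for the sake of the arithmetic): with three redundant components failing independently with probability p₁ = 10⁻³ each, and a fail-safe failing independently with probability p₂ = 10⁻²,

\[ P(\text{catastrophe}) \approx \binom{3}{2}\, p_1^{2}\, p_2 = 3 \times (10^{-3})^2 \times 10^{-2} = 3 \times 10^{-8}. \]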

And that's the difference, really.

The intuition is that good protection can be obtained by accumulating imperfect layers. The intuition works well when the probabilities of failure of each layer are somewhat independent. When they are completely independent, probability theory tells us that they can be multiplied. And the definition of security, as opposed to safety, is that when dealing with a security issue, you cannot assume that the probabilities of failure of your layers are independent, because all the layers tend to fail in the same conditions, namely, when a smart attacker is trying to get through them.
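
As a back-of-the-envelope model, deliberately idealized: call qᵢ the probability that layer i fails, and compare the two extremes.

\[ P(\text{breach}) = \prod_i q_i \ \ \text{(failures independent)}, \qquad P(\text{breach}) = \min_i q_i \ \ \text{(failures perfectly correlated)}. \]

The second case is the security case: the attacker clever enough to defeat the strongest layer defeats all the others in the same stroke, and the extra layers contribute nothing.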

Consider layer number two in the previously linked diagram, “Customs and border protection”. This means requiring the potentially dangerous visitor to fill in a form asking whether ey plans to overthrow the government during eir stay. Terrorists who fail at this layer are the same terrorists who cannot figure out how to get arbitrary quantities of liquid explosive past security 100 ml at a time, or inside their bodies, or how not to use liquid explosives at all. Conversely, and critically, the very same terrorist who can figure out the latter can figure out the correct answer to the government-overthrowing multiple-choice question on eir applicable immigration form.


And if you are still unconvinced, please consider computer security. Tons of partial protection layers have been implemented in recent years: non-executable this and randomized that. If each layer had improved computer security commensurately with its ability to thwart the average hacker, we would be safe browsing the “interwebs” (this is a technical term that security experts use). Unfortunately, all the protection layers are only good at disarming the same hackers: the inferior ones. Each layer adds some security, but not the quantity of security that intuition predicts.
