Posts in category security

On language complexity as authority and new hope for secure systems

Why is the overwhelming majority of common networked software still not secure, despite all effort to the contrary? Why is it almost certain to get exploited so long as attackers can craft its inputs? Why is it the case that no amount of effort seems to be enough to fix software that must speak certain protocols?

The video of The Science of Insecurity by Meredith Patterson crossed my radar several times last year, but I just recently found time to watch it. She offers hope:

In this talk we'll draw a direct connection between this ubiquitous insecurity and basic computer science concepts of Turing completeness and theory of languages. We will show how well-meant protocol designs are doomed to their implementations becoming clusters of 0-days, and will show where to look for these 0-days. We will also discuss simple principles of how to avoid designing such protocols.

In memory of Len Sassaman

In discussion of Postel's Principle, she argues:

  • Treat input-handling computational power [aka input language complexity] as privilege, and reduce it whenever possible.

This is essentially the principle of least privilege, which is the cornerstone of capability systems.
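A minimal sketch of that principle applied to input handling, using Python's standard `re` module (the field name and format here are hypothetical): recognize input with the weakest language class that suffices, a regular expression in this case, and reject anything outside that language before any further processing gets a chance to run.

```python
import re

# Hypothetical example: a date field that only ever needs the form YYYY-MM-DD.
# Keeping the input language regular means the recognizer needs no more
# computational power than a finite automaton -- far less to exploit than a
# full, Turing-complete parser.
DATE_RE = re.compile(r"\A\d{4}-\d{2}-\d{2}\Z")

def recognize_date(raw: str) -> str:
    """Accept the input only if it is in the expected regular language."""
    if DATE_RE.match(raw) is None:
        raise ValueError("input rejected: not in the expected language")
    return raw

recognize_date("2012-09-12")  # accepted
# recognize_date("2012-09-12'; DROP TABLE visits; --")  # would raise ValueError
```

The point is the shape of the check, not the particular regex: validate against the smallest language the application actually needs, and treat anything more expressive as unearned privilege.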

I have been arguing for keeping web language complexity down since I started working on HTML. The official version is the 2006 W3C Technical Architecture Group finding on The Rule of Least Power, but as far back as my 1994 essay, On Formally Unconvertable Document Formats, I wrote:

The RTF, TeX, nroff, etc. document formats provide very sophisticated automated techniques for authors of documents to express their ideas. It seems strange at first to see that plain text is still so widely used. It would seem that PostScript is the ultimate document format, in that its expressive capabilities include essentially anything that the human eye is capable of perceiving, and yet it is device-independent.

And yet if we take a look at the task of interpreting data back into the ideas that they represent, we find that plain text is much to be preferred, since reading plain text is so much easier to automate than reading GIF files (optical character recognition) or PostScript documents (halting problem). In the end, while the sources of various TeX or troff documents may correspond closely to the structure of the ideas of the author, and while PostScript allows the author very precise control and tremendous expressive capability, all these documents ultimately capture an image of a document for presentation to the human eye. They don't capture the original information as symbols that can be processed by machine.

To put it another way, rendering ideas in PostScript is not going to help solve the problem of information overload -- it will only compound the situation.

But as recently as my Dec 2008 post on Web Applications security designs, I didn't see the connection between language complexity and privilege, and had little hope of things getting better:

The E system, which is a fascinating model of secure multi-party communication (not to mention lockless concurrency), [...] seems an impossibly high bar to reach, given the worse-is-better tendency in software deployment.

On the other hand, after wrestling with the patchwork of JavaScript security policies in browsers in the past few weeks, the capability approach in ADsafe looks simple and elegant by comparison. Is there any chance we can move the state of the art that far?

After all, who would be crazy enough to essentially throw out all the computing platforms we use and start over?

I've been studying CapROS: The Capability-based Reliable Operating System. Its heritage goes back through EROS in 1999 and KeyKOS in 1988 to GNOSIS in 1979. After a few hours of study, I started to wonder where the pull would come from to drive the project to completion. Then this headline crossed my radar:

I saw some comments encouraging them to look at EROS. I hope they do. Meanwhile, Capsicum: practical capabilities for UNIX lets capability approaches coexist with traditional Unix security.

These days, the browser is the biggest threat vector, and Turing-complete data, i.e., mobile code, remains notoriously difficult to secure:

The sort of thing that gives me hope is chromium-capsicum, a version of Google's Chromium web browser that uses Capsicum's capability mode and capabilities to provide effective sandboxing of high-risk web page rendering.

Another is Servo, Mozilla's exploration of a new browser architecture built on Rust. Rust is a new systems programming language designed toward concerns of “programming in the large”, that is, of creating and maintaining boundaries – both abstract and operational – that preserve large-system integrity, availability and concurrency.

It took me several hours, but the other night I managed to build Rust and Servo. While Servo is clearly in its infancy, passing a few dozen tests but not bearing much resemblance to an actual web browser, Rust is starting to feel quite mature.

I'd like to see more of a least-authority approach in the Rust standard library. Here's hoping for time to participate.
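To illustrate what a least-authority style means for ordinary library code, here is a hypothetical sketch (in Python rather than Rust, with made-up names): the function receives an already-open stream, a capability to exactly one resource, instead of a path plus ambient authority over the whole filesystem.

```python
import io

def count_words(reader: io.TextIOBase) -> int:
    # The function can read only the stream it was handed; it has no way
    # to name, open, or modify any other resource on the system.
    return sum(len(line.split()) for line in reader)

# The caller decides precisely which single resource to grant.
doc = io.StringIO("the quick brown fox")
print(count_words(doc))  # 4
```

Compare this with an API that takes a filename: the latter implicitly grants the callee the caller's entire filesystem authority, which is exactly the ambient privilege a least-authority standard library would avoid.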

Web Security Best Practices in Medical Informatics: OWASP Top 10

A lot of what the Medical Informatics division does for Frontiers, the KUMC CTSA program, is install, configure, maintain, support, enhance, or, in a few cases, build from scratch systems to manage data, facilitate workflow, and enforce policy in clinical and translational research.

As our development team grows, it's increasingly important that everybody is up to speed on best practices in secure web application development.

A few months ago, I picked up a copy of The Tangled Web by Michal Zalewski because, while I was a long-time participant in the development of the architecture and standards for the Web, I didn't follow a lot of the nitty-gritty details as they developed. Who knew that Internet Explorer would accept back-ticks (`) around attribute values in HTML? I do now, thanks to Zalewski.

I was chatting with a couple of teammates about the risks around Drupal customization, and I suggested that they should read this book too. That seemed daunting, but we agreed that a reading group around the book looked like fun.

When I got out the calendar to plan the first meeting, I looked at the first few chapters and realized that the tour of the foundations of the Web provided there would have been great if we had started a couple of months ago. Plus, the book is much more browser-focused, while a lot of what we do is back-end integration with databases and such.

The OWASP Top 10 Web Application Security Risks looks like a better fit where we are right now:

  1. Injection
  2. Cross-Site Scripting (XSS)
  3. Broken Authentication and Session Management
  4. Insecure Direct Object References
  5. Cross-Site Request Forgery (CSRF)
  6. Security Misconfiguration
  7. Insecure Cryptographic Storage
  8. Failure to Restrict URL Access
  9. Insufficient Transport Layer Protection
  10. Unvalidated Redirects and Forwards
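To make the top item on that list concrete, here is a minimal injection sketch using Python's built-in sqlite3 module as a stand-in database (the table, column, and input are all hypothetical): string concatenation lets attacker-controlled input rewrite the query, while a parameterized query treats the very same input strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # attacker-controlled input

# Unsafe: concatenation splices the input into the SQL text itself,
# so the OR clause becomes part of the query and matches every row.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + name + "'").fetchall()

# Safe: a bind parameter keeps the input out of the SQL text entirely,
# so the whole string is compared as a literal name.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(unsafe)  # [('admin',)] -- the injection succeeded
print(safe)    # []           -- the literal string matches no row
```

The same bind-parameter discipline applies whatever the database or driver; it is the standard mitigation for risk #1.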

I expect we'll follow up with The Tangled Web in due course.

Meanwhile, I notice there's an OWASP Kansas City chapter that meets Wed. Sept 12, 2012 6:30 PM at McCoys Foundry in Westport. That reminds me... we have an open position for a Biomedical Informatics Software Engineer.