Posts in category research

On language complexity as authority and new hope for secure systems

Why is the overwhelming majority of common networked software still not secure, despite all effort to the contrary? Why is it almost certain to get exploited so long as attackers can craft its inputs? Why is it the case that no amount of effort seems to be enough to fix software that must speak certain protocols?

The video of The Science of Insecurity by Meredith Patterson crossed my radar several times last year, but I just recently found time to watch it. She offers hope:

In this talk we'll draw a direct connection between this ubiquitous insecurity and basic computer science concepts of Turing completeness and theory of languages. We will show how well-meant protocol designs are doomed to their implementations becoming clusters of 0-days, and will show where to look for these 0-days. We will also discuss simple principles of how to avoid designing such protocols.

In memory of Len Sassaman

In discussion of Postel's Principle, she argues:

  • Treat input-handling computational power [aka input language complexity] as privilege, and reduce it whenever possible.

This is essentially the principle of least privilege, which is the cornerstone of capability systems.
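
To make that concrete, here is a small sketch (the wire format and size limit are invented for illustration) of what treating input-handling power as privilege can look like: a recognizer for a deliberately simple, non-Turing-complete input language that validates everything up front, so the rest of the program only ever sees well-formed data.

    // Hypothetical wire format: <len: u16 big-endian><payload: len bytes>.
    // The recognizer is not Turing-complete: it does a bounded amount of
    // work and either yields a fully validated payload or an error, before
    // any other part of the program touches the bytes.

    const MAX_LEN: usize = 1024; // privilege ceiling: refuse anything larger

    fn recognize(input: &[u8]) -> Result<&[u8], &'static str> {
        if input.len() < 2 {
            return Err("truncated header");
        }
        let len = u16::from_be_bytes([input[0], input[1]]) as usize;
        if len > MAX_LEN {
            return Err("length exceeds policy");
        }
        let payload = &input[2..];
        if payload.len() != len {
            return Err("length does not match payload");
        }
        if !payload.iter().all(|b| b.is_ascii_graphic() || *b == b' ') {
            return Err("payload outside the allowed alphabet");
        }
        Ok(payload) // only fully validated input escapes the recognizer
    }

    fn main() {
        let msg = [0x00, 0x05, b'h', b'e', b'l', b'l', b'o'];
        match recognize(&msg) {
            Ok(payload) => println!("accepted {} bytes", payload.len()),
            Err(e) => println!("rejected: {}", e),
        }
    }

The details matter less than the shape: the grammar is regular, the work is bounded, and rejection happens before any further privilege is exercised.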

I have been arguing for keeping web language complexity down since I started working on HTML. The official version is the 2006 W3C Technical Architecture Group finding on The Rule of Least Power, but as far back as my 1994 essay, On Formally Unconvertable Document Formats, I wrote:

The RTF, TeX, nroff, etc. document formats provide very sophisticated automated techniques for authors of documents to express their ideas. It seems strange at first to see that plain text is still so widely used. It would seem that PostScript is the ultimate document format, in that its expressive capabilities include essentially anything that the human eye is capable of perceiving, and yet it is device-independent.

And yet if we take a look at the task of interpreting data back into the ideas that they represent, we find that plain text is much to be preferred, since reading plain text is so much easier to automate than reading GIF files (optical character recognition) or PostScript documents (halting problem). In the end, while the source to various TeX or troff documents may correspond closely to the structure of the ideas of the author, and while PostScript allows the author very precise control and tremendous expressive capability, all these documents ultimately capture an image of a document for presentation to the human eye. They don't capture the original information as symbols that can be processed by machine.

To put it another way, rendering ideas in PostScript is not going to help solve the problem of information overload -- it will only compound the situation.
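
As a toy illustration of that point (the record format here is invented, not from the essay): a few lines of code read a declarative plain-text record back into structured data, whereas recovering the same fields from rendered PostScript would mean executing an arbitrary program and guessing at what it drew.

    use std::collections::HashMap;

    // Parse a hypothetical "key: value" plain-text record into a map.
    fn parse_record(text: &str) -> HashMap<&str, &str> {
        text.lines()
            .filter_map(|line| line.split_once(':'))
            .map(|(k, v)| (k.trim(), v.trim()))
            .collect()
    }

    fn main() {
        let record = "title: On Formally Unconvertable Document Formats\n\
                      author: Dan Connolly\n\
                      year: 1994";
        let fields = parse_record(record);
        println!("{:?}", fields.get("title"));
    }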

But as recently as my Dec 2008 post on Web Applications security designs, I didn't see the connection between language complexity and privilege, and had little hope of things getting better:

The E system, which is a fascinating model of secure multi-party communication (not to mention lockless concurrency), [...] seems an impossibly high bar to reach, given the worse-is-better tendency in software deployment.

On the other hand, after wrestling with the patchwork of javascript security policies in browsers in the past few weeks, the capability approach in adsafe looks simple and elegant by comparison. Is there any chance we can move the state-of-the-art that far?

After all, who would be crazy enough to essentially throw out all the computing platforms we use and start over?

I've been studying CapROS: The Capability-based Reliable Operating System. Its heritage goes back through EROS in 1999 and KeyKOS in 1988 to GNOSIS in 1979. After a few hours of study, I started to wonder where the pull would come from to provide energy to complete the project. Then this headline crossed my radar:

I saw some comments encouraging them to look at EROS. I hope they do. Meanwhile, Capsicum: practical capabilities for UNIX lets capability approaches co-exist with traditional unix security.

These days, the browser is the biggest threat vector, and Turing-complete data, i.e. mobile code, remains notoriously difficult to secure.

The sort of thing that gives me hope is chromium-capsicum - a version of Google's Chromium web browser that uses capability mode and capabilities to provide effective sandboxing of high-risk web page rendering.
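
Roughly speaking, that sandboxing follows a broker/worker split. Here is a sketch of its shape (the render-worker binary is hypothetical): the broker keeps the ambient authority and hands the renderer nothing but a pipe; on a Capsicum system the worker would call cap_enter() at startup, so the descriptors it inherited are the only authority it ever holds.

    use std::io::Write;
    use std::process::{Command, Stdio};

    fn main() -> std::io::Result<()> {
        // Untrusted bytes fetched by the privileged broker.
        let untrusted_page = b"<html>...hostile input...</html>";

        // Spawn the renderer with only stdin/stdout wired up -- no file
        // paths, no sockets, no other references passed along.
        let mut worker = Command::new("./render-worker")
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()?;

        worker
            .stdin
            .as_mut()
            .expect("stdin was piped")
            .write_all(untrusted_page)?;

        let output = worker.wait_with_output()?;
        println!("renderer exited with {}", output.status);
        Ok(())
    }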

Another is servo, Mozilla's exploration into a new browser architecture built on rust. Rust is a new systems programming language designed toward concerns of “programming in the large”, that is, of creating and maintaining boundaries – both abstract and operational – that preserve large-system integrity, availability and concurrency.

It took me several hours, but the other night I managed to build rust and servo. While servo is clearly in its infancy, passing a few dozen tests but not bearing much resemblance to an actual web browser, rust is starting to feel quite mature.

I'd like to see more of a least-authority approach in the rust standard library. Here's hoping for time to participate.
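
By way of illustration, here is the kind of shift I mean, sketched with made-up functions rather than anything in std today: the path-taking version wields ambient filesystem authority on behalf of every caller, while the handle-taking version can only read from the one object it was granted.

    use std::fs::File;
    use std::io::{self, Read};

    // Ambient authority: every caller of this function implicitly wields
    // the whole filesystem namespace.
    fn word_count_by_path(path: &str) -> io::Result<usize> {
        let mut text = String::new();
        File::open(path)?.read_to_string(&mut text)?;
        Ok(text.split_whitespace().count())
    }

    // Least authority: the callee gets one readable object and nothing
    // else; the caller decides what that object is allowed to be.
    fn word_count<R: Read>(mut source: R) -> io::Result<usize> {
        let mut text = String::new();
        source.read_to_string(&mut text)?;
        Ok(text.split_whitespace().count())
    }

    fn main() -> io::Result<()> {
        // The caller grants exactly one capability: this open file.
        let granted = File::open("notes.txt")?;
        println!("{} words", word_count(granted)?);
        Ok(())
    }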

"In-Home Monitoring in Support of Caregivers for Patients with Dementia" obtains NSF US-Ignite grant

The U.S. National Science Foundation (NSF) awarded us an exploratory research (EAGER) grant for In-Home Monitoring in Support of Caregivers for Patients with Dementia. The investigator team is:

  • Dr. Russ Waitman, Principal Investigator, is Director of Biomedical Informatics at KU Medical Center.
  • Dr. Kristine Williams, Co-Investigator, is Associate Professor of Nursing and Associate Scientist of Gerontology at the University of Kansas.
  • Dr. James Sterbenz, Co-Investigator, is the lead PI of an NSF GENI project: The Great Plains Environment for Network Innovation (GpENI).

This project develops, integrates, and tests advanced video and networking technologies to support family caregivers in managing behavioral symptoms of individuals with dementia, a growing public health problem that adds to caregiver stress, increases morbidity and mortality, and accelerates nursing home placement. The project builds upon a recent University of Kansas Medical Center (KUMC) clinical pilot study that tested the application of video monitoring in the home to support family caregivers of persons with Alzheimer’s disease who exhibited disruptive behaviors. The proposed project focuses on expanding the in-home technological tools available to strengthen the linkage of patients and caregivers with their healthcare team via multi-camera full-motion/high-definition video monitoring. Google’s deployment this year of a 1 Gbps fiber network throughout Kansas City provides the ideal environment for measuring the impact that ultra-high-speed networking will have on health care.

Figure 2 from the US Ignite EAGER proposal (US Ignite_FINAL_EAGERv_14.docx, 30 Aug 2012)

In a January press release, NSF "announced that it will serve as the lead federal agency for a White House Initiative called US Ignite, which aims to realize the potential of fast, open, next-generation networks."

Our new connection with US Ignite provides access to resources in that community such as Mozilla Ignite and the GENI network lab. If you'd like to get involved, email Dan Connolly and Russ Waitman.