The Secure Internet

Security and usability are enemies.

This is a pithy way of putting the widely held sentiment that making a product more secure inherently makes it less usable, and that making a product more usable often makes it less secure. I'm going to try to convince you that this isn't true.

Security & Usability

First, let's talk a little bit about what security and usability are, in this context. Computers are tools used by humans to achieve human goals; therefore, we can say that usability is how effectively humans can use a particular computer system to achieve their goals. The property of security is a companion to usability, and we can say that security is the property that only the "right" humans can use a system to achieve the "right" goals, where both values of "right" come from society in general[1].

We don't have to look far to find examples from the past of security and usability being in tension. For example, think about the infamous "SSL error" warning page provided by web browsers in case an SSL negotiation fails when connecting to a secure site. In this case, the human goal in play is often something like "buy this thing from this store" or "check how much money is in my bank" or similar. When the SSL negotiation fails, there's some chance it's because of a misconfiguration of the server, and some chance it's because of a hostile third party. If there was a misconfiguration, the SSL error page just reduced the usability of the system by preventing the human from achieving something they could have otherwise done; if there was really a hostile third party, the SSL error page increased the security of the system by preventing credential theft or various other badness.

Here's another example, much in the news lately: malware. One of the most common malware vectors is bundling with legitimate programs, with or without the consent of the legitimate program's author. On current systems, this is a big problem, because once a program is running, there's generally nothing preventing it from replacing or snooping on other parts of the system. The advice security people generally give users is not to download and run programs except from trusted sources, but that carries a usability tradeoff when the user's goal is best served by a program only available from an untrusted source, to say nothing of the difficult problem of evaluating trust.

So what can we do? We could try giving up and declaring the problem not worth solving. We could try blaming the users. We could try designing a system that is both usable and secure! In case you didn't read the first paragraph, that's what I'm going to propose.

The Big Idea

Here's a proposal. This might be heretical, and not the good kind of heretical, because it relies on two distasteful concepts: government-issued digital identity, and computers that are locked down by default.

Nonetheless, stick with me for a bit.

Let's assume a couple of ground rules. We're going to start with the world we have now, more or less, so no sweeping declarations that we will all have a flag day and switch to something new. We're also going to start with the people we have now, which means no assumptions that users can successfully deal with webs of trust, or key rotation, or even reliably memorize more than one or two short secrets.

Now, let's introduce the components of the system:

The User

The user, for this system, is assumed to be reasonably literate and able to keep small physical objects (credit card or passport sized) that they use often reasonably secure, but is not especially hardware- or software-literate. For the sake of the story we'll develop below, we'll call our user David.

The Card

The card is the only component of the system David needs to carry around with him. It's a fraction of an inch thick and roughly the size of a credit card; it has his picture, name, and date of birth printed on one side, a machine-readable version of the same expressed as a QR code, an electrical interface on one end, and a button with an LED in it on the other end. It is issued by the government as photo ID, and it contains a small and slow microcontroller. David carries this card in his wallet. If he loses it or it is stolen, the government will issue him a new one and revoke the old one.

The small and slow microcontroller contains a keypair, burnt into it when it was manufactured, and a digital certificate from a government authority attesting that the keypair was burnt into an ID card owned by David, who is unambiguously identified by a government-assigned number for this use. It also contains two secret keys, burnt in at the same time.

The small and slow microcontroller only knows how to do two operations.

The first is that when it receives a command over its electrical interface, containing a blob of data supplied by its host system, it flashes the LED on the other end of the card. When the button on the other end is physically pressed, the card produces a digital signature with its keypair, asserting that it was asked to sign that blob of data, and appends the stored certificate from the government. David does not know any of this, but he does know that when the light on the card flashes, it's asking him to touch the button.

The second is that when it receives a command over its electrical interface, containing a blob of data supplied by its host system, it can "seal" or "unseal" this data, without any interaction needed from David. When data is sealed, it can only be unsealed later by the same card, and only if not modified since then.
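
To make the card's interface concrete, here's a minimal sketch of its two operations in Python, using Ed25519 signatures and Fernet authenticated encryption as stand-ins for whatever primitives a real card would implement; all the names here are illustrative, and the button press is modeled as a callback.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class Card:
    def __init__(self, certificate: bytes):
        # Burnt in at manufacture. A Fernet key internally combines a
        # signing key and an encryption key, loosely matching the card's
        # "two secret keys".
        self._keypair = Ed25519PrivateKey.generate()
        self._sealer = Fernet(Fernet.generate_key())
        self._certificate = certificate  # issued by the government authority

    def sign(self, blob: bytes, wait_for_button=lambda: None):
        """Operation 1: flash the LED, block until the button is
        physically pressed, then sign the host-supplied blob and
        append the stored certificate."""
        wait_for_button()  # a real card blocks on the physical button here
        return self._keypair.sign(blob), self._certificate

    def seal(self, data: bytes) -> bytes:
        """Operation 2: seal data so that only this card can unseal it."""
        return self._sealer.encrypt(data)

    def unseal(self, sealed: bytes) -> bytes:
        """Unsealing fails (raises InvalidToken) if the data was modified
        since sealing, because Fernet is authenticated encryption."""
        return self._sealer.decrypt(sealed)
```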

The Computer

The computer, at a hardware level, is extremely similar to all existing computers. It has a processor, RAM, permanent storage, network interfaces, a display, and so on. Its firmware, however, is special: it can validate that the kernel of the installed operating system has not been tampered with. David could, if he desired, disable that behavior with a hardware switch, but he's never had any reason to.

The kernel, in turn, ensures that the rest of the operating system has not been modified when starting up. This includes the interface for launching programs, and the interface for managing which programs are installed, both of which are trusted parts of the operating system's interface. David's computer, like most, also has a web browser installed on it, which he uses to interact with services hosted by other computers.
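
A sketch of what the firmware's check might amount to: hash the kernel image and verify a vendor signature over the hash before booting. This glosses over key distribution and the manifest format, which are invented details here.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def firmware_allows_boot(kernel_image: bytes, kernel_signature: bytes,
                         vendor_key: Ed25519PublicKey,
                         hardware_switch_disabled: bool = False) -> bool:
    if hardware_switch_disabled:
        return True  # David could flip the switch, but never has
    digest = hashlib.sha256(kernel_image).digest()
    try:
        vendor_key.verify(kernel_signature, digest)
        return True
    except InvalidSignature:
        return False  # refuse to boot a tampered kernel
```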

The Story

When David sits down to use a computer, it prompts him to plug in his access card. When he does so, the operating system asks the card to sign some random data to prove its identity, and the card begins to blink. David touches the button on the card, and he is logged in instantly. The operating system also asks the card to unseal some keys stored in the computer's persistent storage, which it uses to decrypt David's files stored there and make them available for use.
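
Reusing the Card sketch from above, the login flow might look like the following; verify_signature_and_certificate is a hypothetical helper standing in for whatever certificate-chain validation the operating system performs.

```python
import os


def log_in(card, sealed_disk_key: bytes) -> bytes:
    # Challenge-response: signing a fresh random nonce proves the card is
    # present right now, and prevents replaying an old signature.
    nonce = os.urandom(32)
    signature, certificate = card.sign(nonce)  # LED flashes; David presses
    verify_signature_and_certificate(nonce, signature, certificate)
    # The disk key was sealed by this same card earlier, so the sealed
    # blob is useless to anyone who steals the disk but not the card.
    return card.unseal(sealed_disk_key)


def verify_signature_and_certificate(nonce, signature, certificate):
    """Hypothetical helper: validate the government's certificate chain,
    then verify the signature over the nonce with the certified key."""
    ...
```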

When David navigates to his bank's website, his web browser, a trusted part of the system, asks his card to prove its identity to the website. The card's light flashes, David pushes the button, and he is authenticated to his bank. There are no passwords needed (nor even usernames).

There's a problem, though: the card is pretty dumb. It will happily prove its identity to anyone who asks, as long as David presses the button. The web browser has to help out here, by including the hostname it's authenticating to in the blob of data it asks the card to sign, which lets the bank ensure that the card authenticating to them really thinks it's talking to them, and not some other site.
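
Here's one way that binding might look, under the same illustrative assumptions as the sketches above; the blob format is invented. Because the trusted browser supplies the hostname, a signature solicited by one site can't be replayed to another.

```python
import json


def authenticate_to_site(card, hostname: str, server_challenge: bytes):
    # The hostname comes from the trusted browser itself, not from the
    # page, so a phishing site can't have the card vouch for the bank.
    blob = json.dumps({
        "hostname": hostname,
        "challenge": server_challenge.hex(),
    }).encode()
    signature, certificate = card.sign(blob)  # LED flashes; David presses
    return blob, signature, certificate


def server_accepts(blob: bytes, my_hostname: str, my_challenge: bytes) -> bool:
    # The bank checks that the signed blob names it and echoes the exact
    # challenge it issued. (Signature and certificate verification as in
    # the login sketch above.)
    claims = json.loads(blob)
    return (claims["hostname"] == my_hostname
            and claims["challenge"] == my_challenge.hex())
```

This is essentially the origin binding that WebAuthn authenticators perform today.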

David goes on launching other programs as he needs them. Programs are prevented from interfering with other programs, or with the operating system, by the operating system's security features. Whenever a program asks to use the card's identity features, the operating system hides everything else and displays a dialogue, clearly saying which program wants to use the card and what site it wants to use it for.

There's still a problem, though. Another program could make itself look like David's web browser and ask him to authenticate to his bank's website, and then secretly transfer money out of his account. Ideally, this shouldn't be possible, and we can in fact design it away.

We'll do this by requiring that David use a particular program, not an arbitrary web browser, to communicate with his bank. When David initially visits his bank's website, it prompts him to identify himself with his card, then produces an application, signed by the bank and containing his identity, for him to download. He downloads it, and the operating system makes it available in a separate "trusted apps" section of the interface; all of these apps prominently display the names they are signed with.

This program, generated by his bank, also contains a token required to communicate with the bank, unique to David. A malicious application that had pretended to be David's web browser could have gotten hold of this program, but cannot modify it without invalidating the signature. It could extract the token and create a new, similar application signed with a different certificate, but the application will be displayed with a name other than that of David's bank.

In future, whenever David wants to interact with his bank, he runs that program, which identifies him to the bank using David's card and the token embedded in it when it was generated, and authenticates the bank using a known public key, also embedded in the app. The operating system prevents other programs from interfering with the bank's app, just like it prevents the bank's app from interfering with other programs.
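
Putting the pieces together, here's a sketch of what the generated app might do when it connects, with the pinned bank key and per-user token embedded at generation time. The names and the handshake shape are illustrative, not a real protocol.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Embedded into the app when the bank generated it for David.
EMBEDDED_TOKEN: bytes = ...              # unique to David; elided here
PINNED_BANK_KEY: Ed25519PublicKey = ...  # the bank's known public key; elided here


def connect_to_bank(card, server_challenge: bytes, server_signature: bytes):
    # Authenticate the bank first, against the pinned key rather than any
    # certificate authority; a forged signature raises immediately.
    PINNED_BANK_KEY.verify(server_signature, server_challenge)
    # Then identify David: his card signs the challenge (the light
    # flashes, he presses the button), and the embedded token ties this
    # particular installation to his account.
    signature, certificate = card.sign(server_challenge)
    return {"token": EMBEDDED_TOKEN,
            "signature": signature,
            "certificate": certificate}
```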

Evaluation

There are definitely political and technical difficulties with this system. David's system is necessarily less modifiable, in its default mode, than most systems today are; it's roughly as modifiable as a Chromebook. The identifying card also puts online identity firmly in the hands of the government, which many people are rightly uneasy about. Still, this provides an example of a system that is both usable and secure against many attacks that are effective against today's systems.

You'll note that I've moved the goalposts a bit, here. This system is less of a general-purpose computer than we're used to, and something more like iOS or Chrome OS, which is intended to let the user interact with remote services and install local apps, but not modify the operating system.

[1]: Note that definitions of "right" may be subjective.