A hitchhiker’s guide to the HTML5 + EME maze

W3C’s work on HTML5 and the Encrypted Media Extensions specification keeps drawing criticism and controversy. I spent today attending Amelia Andersdotter’s event at the European Parliament in Brussels about HTML5 and DRM, as an interested individual member of the W3C community who doesn’t speak for anyone but myself.

The topic is fraught with controversy: The W3C Director found “Content Protection” to be in scope for the HTML Working Group; the deliverable that the group is working on under this heading is EME. The specification itself defines a reasonably simple JavaScript API that permits a Web application to hand key material to a Content Decryption Module (the actual DRM black box). The general API leaves the nature of the key material unspecified; in the general case, that’s likely to be key material that is by itself encrypted, and not accessible to the browser. The EME spec defines one very simple CDM, Clear Key, which assumes that key material is accessible to the Web application and the browser (therefore, to the user); this is the sort of not-really-DRM that will later on permit the HTML WG to demonstrate interoperability of the API without having to dive into proprietary CDMs.
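To make the shape of that API concrete, here is a rough sketch of how a Web application might drive EME with the built-in Clear Key key system, written as TypeScript against the browser API. It follows the names of the published API (requestMediaKeySystemAccess, MediaKeySession, and so on), which have shifted between drafts; the key and key ID values are made-up placeholders. Treat it as an illustration of the flow, not as normative code.

```typescript
// Illustrative sketch only: the kid/k values are invented placeholders, and
// real deployments would target a proprietary key system, not "org.w3.clearkey".
const video = document.querySelector("video") as HTMLVideoElement;

// In Clear Key, the "license" is a plain JSON Web Key set, so the key
// material is visible to the page, and therefore to the user.
const CLEARKEY_LICENSE = JSON.stringify({
  keys: [{ kty: "oct", kid: "LwVHf8JLtPrv2GUXFW2v_A", k: "FmY0xnWCPCNaSpRG-tUuTQ" }],
});

async function setUpClearKey(): Promise<void> {
  const access = await navigator.requestMediaKeySystemAccess("org.w3.clearkey", [
    {
      initDataTypes: ["cenc"],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
    },
  ]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // When the media data signals encryption, open a key session and hand the
  // "license" back to the CDM. With a proprietary CDM the request and response
  // would be opaque key material; with Clear Key they are readable JSON.
  video.addEventListener("encrypted", async (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();
    session.addEventListener("message", () => {
      void session.update(new TextEncoder().encode(CLEARKEY_LICENSE));
    });
    await session.generateRequest(event.initDataType, event.initData!);
  });
}

setUpClearKey().catch(console.error);
```

The point of the Clear Key variant is precisely that everything in this exchange stays inspectable; swap in a proprietary key system string, and the payload passed to update() becomes an opaque blob that only the CDM can use.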

As far as one can tell today, EME has significant implementer interest; the motivation there is, of course, to use it as an interface to connect proprietary DRM systems with the Web. As with any controversy, there are plenty of confusing points to go around.

On fundamentals, some argue that content protection is, basically, the same thing as password protection for content that you pay for, or a paywall, or perhaps encryption of confidential material online. That’s a false equivalence: The commercial drivers for standardization of EME are existing DRM systems — the proprietary CDMs that I mentioned above. The attacker against whom content is protected is the user (and the browser code, which could be under the user’s control); the attack is use of content in a way that isn’t explicitly authorized by the rights holder.

The DRM systems used in this context cannot be implemented in Open Source, they are typically patent encumbered, and they arguably are corrosive to the notion of putting general-purpose, modifiable computing into users’ hands. And while it is conceivable to build a watermarking-based system on top of EME, that would sound like a pretty awkward approach, and it isn’t why implementers are interested in EME.

All of that, however, doesn’t mean that EME (the interface) can’t be implemented in open source: EME, together with the ClearKey CDM that’s part of the specification, should be implementable in Open Source software, without royalty, just fine. It just doesn’t provide the protection that rights holders are after; the real deployment of EME is as an interface toward proprietary CDMs that are implemented in closed source software, and partially in hardware.

Some proponents of EME try to make it palatable by pointing out that, just maybe, it could help users protect the privacy of their personal information online — we heard that argument today. That doesn’t sound like it’s very plausible: EME is a pass-through API for browser implementation, tightly coupled to inline media elements in HTML. The basic model is actually very simple. Now, it is true that some in the privacy community have looked at policy enforcement using trusted computing mechanisms. But it doesn’t look like EME specifically, or the CDMs it interfaces with, are even in the same ballpark. I respectfully suggest that we just drop that part of the conversation and focus on the actual reasons for deploying EME.

Another argument that is frequently made is that, because EME is made part of a core Web technology (HTML5), “browsers do not have a choice.” That isn’t exactly true, either: EME is a separate spec from HTML5. The two documents can go to W3C Recommendation (or not) independently of each other. Just because somebody says they implement HTML5, that doesn’t mean they have to implement EME. That debate, however, is ultimately a debate about words, not about substance: The deployment driver is the desire to provide playback of DRMed video content, not the exact nature of the API spec, and how it is split across different documents.

The real focus of the discussion, then, ought to be on the merits (or not) of what EME actually is: A carefully scoped interoperability layer on top of existing, proprietary DRM systems, to enable the designers of Web applications (think YouTube, think Netflix) to pass key material to these CDMs in a way that’s interoperable across multiple browsers. That abstraction layer doesn’t “do” DRM; it can probably be implemented in open source software without royalty; but it isn’t very useful unless we end up in a world with a few widely implemented CDMs that ship with browsers across different platforms, and for which “protected” content on the Web is encoded.

Some of the questions to ask in this context: If EME is successfully standardized by W3C and broadly deployed by browsers — is that, by itself, an improvement over a future in which either of these (standardization, deployment) doesn’t happen? What would other plausible futures for EME or, more generally, for DRMed content sold today even look like? By what criteria would we evaluate those? What’s the impact of these futures on large content providers, small content providers, browser vendors, and innovation for the network?

How does that reasoning change if we assume either of EME being the end of DRM integration into the web platform, or EME being the beginning of DRM integration into the web platform? And which of these is more likely?

What is the weight that we might assign to “goodies” that could come with EME? For example, open APIs further down in the stack (between CDM and browser), or additional transparency into the DRM that gets deployed on the Web? And what is the weight that we might assign to side effects of DRM deployment through EME — such as, perhaps, additional privacy concerns, and serious accessibility issues?

Finally, what does this entire discussion say about the governance model that we collectively want to apply to Web standards — how do we collectively reconcile between W3C as a member-driven organization, its accountability to the broader public, and its stewardship role for the Web?

Technologists and the values of the surveillance state

Along with yesterday’s revelations, Bruce Schneier writes in the Guardian:

Again, the politics of this is a bigger task than the engineering, but the engineering is critical. We need to demand that real technologists be involved in any key government decision making on these issues. We’ve had enough of lawyers and politicians not fully understanding technology; we need technologists at the table when we build tech policy.

What Schneier is getting at is, of course, important: Policy-makers need to understand the technology they’re messing around with, and they need to understand the impact of their decisions.  Technologists might be able to help them understand those points.

But that is too short-sighted: If anything, we’re seeing over and over again that the NSA, and plenty of policy-makers, understand the possibilities of a global network perfectly well — and have learned to wield their resources to turn it into a global instrument of surveillance. With an estimated three billion users of the global Internet (four billion to go, though), the surveillance debate has long transcended the world of just us technologists. And with that many users, the notion that technologists can simply “take back the network” smacks of techno-idealism and techno-elitism — however much, as somebody working on Internet and Web technology, I might like that idea.

The conflict that we’re living through now is more fundamental. It is about the vision we have of a networked society.

In the vision that many of us have been working towards (and that is, in various ways, at the intellectual roots of the Internet), we get the benefits of increasingly seamless communication and collaboration, we get exposure to other views, we have the knowledge and tools available that make us more creative and more productive, that bring us closer to other human beings, and improve our understanding of each other. In this vision, the network turns our society into a better one.  In this vision, we can have trust in the network. In this vision, we can use the network to communicate with our loved ones. In this vision, we can trust the network with our private life and personal secrets. Geopolitically, this Internet is a network that (as a very smart man once put it) serves as a powerful projection of Western values and a civil society across the world. This is for everyone.

We are learning this summer that the network that we have actually built has become a Trojan horse, inspired by a dark and dystopian view of humanity: The dangerous species homo sapiens cannot be trusted with fast, private, perhaps anonymous communication at scale. Communication (a fundamental piece of what makes us human!) needs to be domesticated, for feral communication (and humanity) bears uncontrollable risks.

We are learning this summer that the hidden domestication of communications technology hasn’t just taken the form of attacks on crypto systems or endpoints or network hardware (all of which we would expect): Instead, what we see is an assertion of the primacy of surveillance in the design, deployment, and operation of Internet technologies at global scale, at the expense of the security and privacy of their civilian users.

The crossroads that policy-makers are at is less about understanding technology: It is about understanding that the design of technology is never simply value-neutral.  It is about choosing the values that we embed into the technology we build and deploy.

Are these still the values of an open society? Or are they the values of the oppressive surveillance state?

Keep running

One of my favorite runs in the world is the loop along the Charles River between Boston and Cambridge — connecting MIT, Harvard, BU, and Back Bay, if you run the long version. That’s but a few blocks away from where yesterday’s attack happened.

I’ve never run the Boston Marathon. During the day, I was joking with a friend there about how far each of us was from qualifying. I didn’t quite say “there’s a challenge to compete on, let’s run it next year”, mostly because I didn’t think I’d be in shape to take on that challenge — I’ve never actually run a marathon, and the few half marathons I’ve done were well above 2h. No way I’d qualify.

A few hours later, the news hit Twitter.

We quickly established that some MIT-based colleagues who had been helping with the communication infrastructure around the run were accounted for. A former colleague who used to run the marathon wasn’t in town this year. That was good news. And then, the fog of terror: Was a fire at the JFK library related? (It didn’t seem so.) Had more bombs been found? (None had.) Had the cell phone networks been shut down? (Probably not; also, probably a bad idea.) Traditional media didn’t do much better than the social media rumor mill. Some news sites were down, given all the traffic.

On the day after, the news is full of security taking over, and full of reactions and worries around the world. How can we make sports events secure?

And there is that urge to say something, anything, when one really doesn’t have anything to say — for example, this blog post.

Bruce Schneier has it right: keep calm and carry on. We mustn’t let fear take over public spaces, or our thinking.

Here’s hoping that, next year, the Boston Marathon will be even harder to get into, because more people will want to run it.

Book review: Kissinger, On China

Henry Kissinger’s “On China” is part historical and strategic tour de force, part personal memoir, part political legacy. It’s a book you must read.

Starting with a quick survey of China’s long history, Kissinger sets out to investigate the interaction between China and other powers near and far — from the barbarian management strategies practiced by the Middle Kingdom over millennia, through the unequal treaties of the 19th century, to the subsequent century and a half of turmoil, both foreign and domestic.  Kissinger is at his best as a writer and story-teller when he can mix strategy, history, personal memory, and the explication of diplomacy: The diplomacy and negotiation of the 1960s and 70s are at the heart of the book, and worth the read alone — both as an account of Chinese and US strategic challenges and eventual alignment, and as the story of careful negotiation and diplomacy, with all its absurdities and difficulties.

Yet, all that — and the subsequent material about the country’s stabilization under Deng and the tense relationships post-Tiananmen — is merely the backdrop against which Kissinger sets out, in the book’s epilogues, the strategic imperatives and challenges that the US (and the West more broadly) face in interacting with a resurgent China today:

Both sides run great risks through confrontation; both sides need to concentrate on complex domestic adjustments.  Neither can afford to confine itself to its domestic evolution, important as it is.  Modern economics, technology, and weapons of mass destruction proscribe preemption.  The histories and economies of both countries compel them to interact.  The issue is whether they do so as adversaries or in a framework of potential cooperation. […] history lauds not conflicts of societies but their reconciliations.

Are we facing an inevitable conflict (as Germany and the UK might have before World War I, by some analysis), Kissinger asks, or can we manage to evade conflict, by recognizing what relationships, what histories, and what potential futures are at stake?

Over to WordPress.com

This blog started on self-rolled software (deservedly lost), then moved to Movable Type, then to Posterous.  As a result of Posterous’ untimely demise, it’s now hosted on WordPress.com, but under a domain name under my control.

Two quick notes.

1. It was reasonably easy to redirect the URIs of the old Movable Type instance of this blog to its new version; a sketch of the kind of mapping involved follows after these notes.  Wouldn’t it be nice if Posterous at least gave us a chance to keep old links intact?  Alas, none of that.

2. Why WordPress.com?  I originally looked for something self-hostable.  WordPress is reasonable blogging software, but sufficiently insecure that I don’t want to have to administer it. The paid, cloud-hosted service sounded like the right balance between ease of use, outsourced administration, and the ability to just install the software myself and move on should I wish to.
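As promised in note 1, here is a minimal sketch of the kind of mapping involved, written as a small Node.js/TypeScript redirect shim. The paths and target URL are invented examples, not this blog’s actual mapping, and the real migration may just as well have relied on the old web server’s rewrite rules.

```typescript
// Hypothetical redirect shim: map old Movable Type URIs to their new
// WordPress.com locations with permanent (301) redirects, so old links
// keep working. Paths and targets below are invented examples.
import { createServer } from "node:http";

const redirects: Record<string, string> = {
  "/archives/2011/05/example-post.html": "https://blog.example/2011/05/example-post/",
};

createServer((req, res) => {
  const target = redirects[req.url ?? ""];
  if (target) {
    res.writeHead(301, { Location: target }); // permanent redirect preserves old links
    res.end();
  } else {
    res.writeHead(404, { "Content-Type": "text/plain" });
    res.end("Not found\n");
  }
}).listen(8080);
```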


Questions about Privacy, Decision-making, and Big Data

Inspired by the Big Data panel at this year’s Computers, Privacy and Data Protection conference, a few quick questions.

We know that human cognition is full of bias and fallacy, and that humans aren’t Econs. Among other things, we know that humans confuse correlation with causation, and that machine learning and big data operate on the level of correlations only. We also know that machine learning can generate good hypotheses for what might be a controlling variable, and what might be a useful course of action.

The questions, then: What determines society’s attitude toward the tradeoffs between machine and human decision-making, and is that attitude rational? What are the qualities we seek in these decisions?

And: Who’s said interesting things about these questions since danah boyd’s work in 2010, e.g., in “Privacy and Publicity in the Age of Big Data”?

Stealing my own mobile phone number

When in the US, I’ll usually avoid roaming fees by using a T-Mobile SIM card and a Boston number. Due to a recent phone upgrade, I had to move to a different SIM card form factor.

Imagine my surprise when the interaction at the T-Mobile shop in Berkeley today went, roughly, like this: “What’s your number?” – “857 …” – “Thomas?” – “Yes.” – “Hold on.”

I paid for the new SIM card, in cash. I put it into the recently-acquired phone. It worked. I walked out of the shop. At no point did I have to prove ownership of a SIM card that belonged to that phone number. And at no point did I have to produce any credentials.

Now, I suspect that some of this might be related to my lacking a US street address — I’m just traveling here. But even if they had asked me for an address: Just knowing somebody’s phone number and address, and nodding convincingly when prompted with their first name, doesn’t strike me as a useful way to check that I actually am the owner of that number.

Anybody else see a problem here?