Web Tracking Protection and User Privacy: Next Steps


This item also appears on the W3C blog.

There’s a lot of movement about Web Tracking and User Privacy lately, and it’s been almost two weeks since the last update.

We’ve since announced the W3C workshop on Web Tracking and User Privacy for 28/29 April 2011. The good people at the Center for Information Technology Policy at Princeton have agreed to host us for this workshop. As always with W3C workshops, we’ll seek position papers from a broad community. We’ve lined up a great program committee (thanks all!) that will help us pull together the workshop agenda based on those position papers. Position papers are due by 25 March.

Earlier this week (see Alex Fowler’s announcement over at Mozilla), two relevant Internet-Drafts were published at the IETF. Both are individual submissions, i.e., starting points for a broader community discussion. In the Overview of Universal Opt-Out Mechanisms for Web Tracking, Alissa Cooper and Hannes Tschofenig paint the larger landscape of available opt-out mechanisms — required reading for the April workshop. In Do Not Track: A Universal Third-Party Web Tracking Opt Out (also known as draft-mayer), Jonathan Mayer, Arvind Narayanan (both at Stanford), and Sid Stamm (Mozilla) propose a technical specification for a Do Not Track header.

How does their proposal compare to Microsoft’s Web Tracking Protection Member Submission? A few observations. Most importantly, draft-mayer focuses on the opt-out header; it doesn’t cover either the tracking list idea or the DOM property defined in the submission. Further, the draft distinguishes between three (not two) states: DNT: 1 (“I don’t want to be tracked”), DNT: 0 (“it’s ok to track me”), and no header — the latter case is called out explicitly as “no preference.” Another interesting addition is the use of DNT as an HTTP response header: sites that support “do not track” send the header back when they serve a page, so that clients (and others) can keep statistics about who is respecting the opt-out.
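To make the mechanics concrete, here’s a minimal sketch (Python, standard library only) of how a site might handle the proposed header: read DNT from the request, treat 1 as an opt-out, 0 as explicit consent, and absence as no preference, and echo the value back in the response. The handler name and port are illustrative, not anything prescribed by the draft.

    # Minimal sketch of the DNT request/response handling described above.
    # The handler name and port are illustrative only.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DNTEchoHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            dnt = self.headers.get("DNT")  # "1", "0", or None (no preference)
            if dnt == "1":
                preference = "opt-out: do not track this user"
            elif dnt == "0":
                preference = "explicit consent: tracking is ok"
            else:
                preference = "no preference expressed"

            body = preference.encode("utf-8")
            self.send_response(200)
            # Echo the header back so that clients (and others) can keep
            # statistics about which sites acknowledge the opt-out.
            if dnt is not None:
                self.send_header("DNT", dnt)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), DNTEchoHandler).serve_forever()

A client-side audit could then be as simple as sending a request with DNT: 1 and recording whether the response carries the header back.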

Also worth comparing: The two statements on what “do not track” actually means. At first glance, they’re quite different in scope and in level of detail; Mozilla’s version has a long initial set of exceptions. Drilling down on what direction the definition of “do not track” should take will be an important agenda item for April.

Meanwhile, on the political stage: As the BBC reports, EU Member States aren’t prepared to actually enforce a European Directive about cookies and user tracking. Instead, we can expect the debate about behavioral advertising, opt-outs, and tracking protection lists to take center stage in Europe as well.

All of this suggests some interesting discussions in the Web Tracking space at the April workshop: Which of the tracking protection mechanisms are a good idea? What are the merits of the various design options? How do they interact with different cultural and legal expectations around the globe? Which ones should we take up for standards work at the W3C? What’s the right coordination story for this work?

Serendipitous reuse of data is good. Finality of data collection is good. Discuss.


I’m at the PrimeLife workshop on Open Data and Privacy. We’ve spent all morning just trying to frame the discussion.

Here’s my framing of the interesting space of the discussion:

  • Let’s posit that public datasets are likely to include personally identified or identifiable information.
  • Let’s posit that the datasets are available for re-use, and that there are overwhelming public policy and economic incentives for that to happen.
  • Let’s posit that the data are actually re-used in a way that involves identifying the individuals they are about.

Put differently, let’s assume that we have a hard clash between privacy principles and open data principles. What does a meaningful privacy conversation look like in this space?
