Nesson and Kang: Privacy

<< Getting ready — announcement of blogging training for those interested – 6:00 at the Berkman Center >>

Jerry Kang, introduced by Charlie

(talk using MindManager — a tool to look into, IMHO)

On his WWW site

Why do we care about privacy?

What’s new?

What might we do about it?

Why do we care:

Information privacy — control over how data about me is collected, used, and distributed – confusions: who gets to decide; the territorial claim that "this is mine"

Really, we want to talk about the processing of data about me.

Why do we care about secrecy? Some examples

  • – avoiding embarrassment (we all pee, but we don’t need to see a picture); averting misuse (identity theft);

  • – constructing intimacy (allows secrets to be the currency of social relations);

  • – dignity (privacy to construct our personality, allowable mistakes as we experiment with defining who we are; respecting our autonomy to make choices – if you presume everything you do will end up on the cover of the NYTimes, you will probably edit your actions – distinction between bird-watching and people-watching/surveillance);

  • – democracy — part of the experimentation argument; why the secret ballot? anonymous decisionmaking is an important idea in democracy

There are counter-values, of course:

  • Commerce – information is needed to accomplish much of market clearing; credit; insurance

  • Deception – your public face and your private face; we are allowed to present ourselves differently according to context; deception as a strategy that may be beneficial, or not — we want to defeat some kinds of deceptions

  • Free expression – again, freedom of speech is a protection, too. If I know something about you (e.g., an airline pilot seen drinking before flying), why can’t I share that information?

So, it’s a balancing act.

What’s new with the digital age?

Technology – the technology of cyberspace; the data collected is:

-very detailed — the interaction itself *IS* data

-computer processible – created within a computer; it’s immediately processible

-indexed to the individual – unique identifiers, IP addresses, cookies, registration

-permanent – computers don’t forget; storage is cheap

(compare with picking up a magazine in Barnes and Noble – all this sort of data is collectible, but it’s hard to gather and not really saved. But look at the same magazine online, and you’re always broadcasting who you are and what you’re doing — and a computer is monitoring and saving it)

It’s just become incredibly cheap to collect and save data (the sketch below shows how little code it takes)
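To make “incredibly cheap” concrete, here is a minimal sketch in Python of a clickstream logger; all names and values are hypothetical, invented for illustration rather than drawn from any real site.

```python
# Hypothetical sketch: how a single page view becomes detailed, processible,
# indexed, permanent data. The interaction itself is the record.
import json
import time

LOG = []  # stands in for cheap, effectively permanent storage

def log_page_view(cookie_id: str, ip: str, url: str, referrer: str) -> None:
    """Append one interaction record; nothing about the visit is lost."""
    LOG.append({
        "ts": time.time(),     # very detailed: the exact moment of the click
        "cookie": cookie_id,   # indexed to the individual
        "ip": ip,              # a second identifier, for good measure
        "url": url,            # what you looked at
        "referrer": referrer,  # and how you got there
    })

log_page_view("user-8271", "192.0.2.44", "/books/privacy-law", "/search?q=privacy")
print(json.dumps(LOG, indent=2))  # born computer-processible
```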

Also under technology — the players in the game are different – something like Google, as the one-stop searching shop, knows EVERYTHING that EVERYONE is doing – even if they never share it, they still have it and can act on it. Now that aggregation is easy and doable, some things that privacy used to protect us from are now doable (see the sketch below).
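Continuing the hypothetical log format from the sketch above, this is roughly all the code aggregation takes: group records by identifier and a dossier falls out.

```python
# Hypothetical sketch: aggregation is a few lines once the logs exist.
from collections import defaultdict

records = [
    {"cookie": "user-8271", "url": "/search?q=divorce+lawyer"},
    {"cookie": "user-8271", "url": "/search?q=job+openings"},
    {"cookie": "user-0042", "url": "/search?q=chocolate+cake"},
]

profiles = defaultdict(list)
for record in records:
    profiles[record["cookie"]].append(record["url"])  # one dossier per person

# Even if the collector never shares the data, it now holds the profile
# and can act on it.
for cookie, urls in profiles.items():
    print(cookie, "->", urls)
```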

Another techno-game on the horizon — pervasive computing; RFIDs in everything; sensors everywhere; all networked together in self-organizing mesh networks — so you can never log off, and the economics of data collection in real space will start to match that of data collection online

Politics – the all-seeing eye is a 9-11 thing; note that the state can subpoena private data (irrespective of the published privacy policy). Note that the state also frequently buys data from commercial sources. The public/private boundary is blurring.

Law as a weak constraint – the law is not great at managing this; right now, we largely rely upon self-policing by firms; the online privacy policy notice is about it (FTC/deceptive business practices)

As we “cyberize” real space, this is just going to become a bigger and bigger issue

— comments/questions —

Q: VA general assembly — another countervailing idea to the privacy ideal is accountability/the flip side of deception – sins of omission

A: Agreed

Q: Luke of USC – when things get put into a log/record/dbase entry, they take on a life of their own, irrespective of the correctness/timeliness of the data stored.

A: Which way does this cut? Accountability versus rehabilitation? This is a great example of the balancing problem. Who gets to decide which way the privacy data goes?

Our mental health depends upon our ability to forget/to put things behind us. Yet, this domain forces us to confront this issue.

Q: Another issue — collecting information that’s incorrect, yet difficult to correct – accuracy and correctability as a dimension of working to avert misuse

A: Your digital doppelganger – the data record that follows you around; some say this is something to resolve with BETTER data collection; others say that it means some things should not be collected

Q: As a newsgatherer, there’s the issue of finding the boundaries between legitimate and illegitimate newsgathering. The weak constraint in the law does seem to have some effect here. (Food Lion case — unsanitary food preparation learned of through deception)

A: Here’s the reaction – not an expert — it doesn’t seem obvious to me that there is a clear schizophrenia between news and online data collection. It doesn’t seem that way largely because the salience of privacy invasion is not always apparent — we don’t always have the invasions thrown in our face when we deal with this online, while the newsgatherer is going to throw the findings in your face. As an individual, your power to respond is weaker than that of Food Lion/companies.

Q: The welfare state and privacy – another tension

A: Distributive justice is hard to accomplish without knowledge about individuals – benefits mean giving up some privacy

———————————-

So, what to do about this??

Form of the debate:

A clash of civilizations (America v. Europe)

America – market talk; privacy is a widget; let the market do it; exercise your freedom in the market; exchange it for value; and in a good market, we get allocative efficiency – kind of a caricature, but this is a good short-term mechanism

Europe – dignity talk; privacy is a fundamental human right; we do not auction off babies; we let the law decide what is a fundamental human right.

Substance – turning to the substance suggests that the ultimate elements are the same; at the core, the two approaches converge.

– Dignity talk says consent is required (plus an apparatus to ensure there is a process protecting consent)

– Market talk says clear property rights are needed (so who gets the initial entitlement? many possible results; these days, it’s largely in the commons)

there are good reasons to think that efficiency emerges when you give the entitlement to the individual — the same result as dignity talk (information asymmetries and collective-action theory suggest this works)

Moreover, we also need a lot of mechanisms to deal with intangible property, analogous to the requirements asserted in the dignity talk

So are we likely to end up largely in the same space? Aren’t there some real differences?

Dignity talk hates the market approach because there’s too little control for individuals to exert; individuals have a hard time making a good bargain. Rather, the system is set up to fortify the individual’s position in these situations.

But, market talk can also lead to fortifying individuals; contract law has lots of instruments to do this (objection on my part: isn’t this like Larry’s objection to fair use – you need to litigate to get it?)

Market approach says that the dignity approach is too stilted — there are situations where the balance of interests should go against privacy; the market achieves that balance more efficiently

But dignity talk leads to systems that explicitly generate exceptions to the dignity right within the supporting institutions created.

So what am I saying? In form, there is a huge clash between the human-right and market-widget arguments (the philosopher and the economist); but there are core similarities.

Dignity’s consent can be accomplished by giving the initial entitlement to the individual; then you get essentially the same situation

Dignity’s institutions are going to be needed to protect this new kind of intangible property

Moreover, (see above) the critiques also end up fully parallel.

So, it may be that we will all end up in the same place; and it may be that rather than arguing about which regime is “right” we should move on to the real, mechanical issues that are the same for both.

——— Q&A ————

Q: A general question about the relationship between IP and the privacy regimes that we create to make the market system work.

A: The technology can work – digital privacy management vs. DRM

Moreover, it offers up a pushback — if you want to protect your IP, why can’t I protect my privacy? Is this your self-interest, or is this a principled position?

Q: Amazing that you haven’t raised the reasonableness test — that was at one time a dimension; the “reasonable expectation of privacy” is sort of out the window? Have we lost something?

A: The wiretap introduced the need for this more complex notion of privacy. It still exists, and there may be different statutory regimes. Even though the standard persists, it ends up in a kind of circular argument — the expectation is tied to the technological context; if the culture is not “aware” of the technology, the standard shifts.

This makes people think of this standard as bankrupt; but it’s still in the law. The familiarity with the tech changes the notion of reasonableness

Q: Any sense of how many people have opted out of certain financial data disclosure policy stuff that we get in the mail?

A: Well, I don’t know the data, and I know that I haven’t opted out. This is the so-called “sticky default rule” — the supposedly mutable rule is so hard/messy/time-consuming to actually change that we get into trouble.

This is why the idea of initially allocating the entitlement to individuals matters; possibly even establishing inalienability

—————————-

What can we do to reframe this debate?

1) soft-pedal the market talk/dignity talk dispute – unproductive

2) the substance is something we ought to be focusing on

a) who gets the “thing” – the initial entitlement

b) how will the choices get made, and how to ensure that the decisionmaker is fortified to do it well/effectively – this is where inalienability may emerge (can’t ask, can’t tell) – there are lots of intermediate forms of the way we might frame/constrain the kinds of exchanges that we will allow; ability to correct

c) what are the societal overrides; what are the allowable contexts within which we can override the rights/market actions of individuals. How to pick/adjudicate/etc.

d) How much supporting information infrastructure is needed to enforce this – various flavors

That’s the claim — answer these four questions, rather than talking to me about dignity or markets

Q: Let’s try an example — GMail — how does this work?

A: GMail – you get a Gig of space for your mail; you have to put up with ads in your e-mail; no human sees your mail (unless the government asks for it); the computer reads your mail to pick the ads (a toy sketch of that matching follows below); your address won’t be disclosed without your permission (or if we go out of business)

1 Gig – the system never forgets; you have identified yourself; and your e-mail info is now tied to your searching activity
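A toy sketch of the machine reading just described, assuming a made-up keyword-to-ad table; this is not Google’s actual matching system. The point is that “no human sees your mail” is fully compatible with the mail being read, word by word.

```python
# Toy sketch (not Google's system): a machine "reads" every word of a
# message to choose ads, with no human in the loop.
import re

AD_TABLE = {  # hypothetical keyword -> ad copy
    "mortgage": "Refinance today!",
    "flight": "Cheap fares to Boston",
    "guitar": "Strings, 20% off",
}

def pick_ads(email_body: str) -> list[str]:
    words = set(re.findall(r"[a-z]+", email_body.lower()))
    return [ad for keyword, ad in AD_TABLE.items() if keyword in words]

print(pick_ads("Booking a flight home; also shopping for a new guitar."))
# -> ['Cheap fares to Boston', 'Strings, 20% off']
```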

a) Initial entitlement – you agree to the service; until you decide to get the service, it’s yours. Of course, the email you receive is also scanned, and the sender may not have agreed

b) fortifying the individual; lots of notice; do you allow individuals to say yes to the GMail exchanges that are proposed? Or do we need to fortify the individual?

c) societal overrides – what does the government say about this? The Electronic Communications Privacy Act speaks to this; the opportunity for law enforcement access can be quite troubling

d) supportive infrastructure — create access rights?

In Jerry’s mind, the big question is the fortifying of the individual: can you say yes?

Q: Mitch – It’s still about the reification of information into a moveable form; it’s not the collection, it’s the conclusions drawn from the aggregation of the bits. With lots of processing power, it becomes too easy to generate conclusions that are hard to escape.

A: Yes, reification/digitization makes it awful; the data trail will get bigger, and we will get more comfortable with giving out information. The question is whether we will come to a point where we are prepared to discuss this, and then to draw some lines in the sand.

For example – suppose I could teach my cell phone to let me know when I’m being scanned; the moment for that is now, before the technology evolves past the point where we can still build the architecture we need to engender the kind of political discussion that I’m talking about.
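A deliberately hypothetical sketch of that cell-phone idea: detect_reader_ping() is invented for illustration (no such standard handset API exists today). The sketch only shows what an architecture that fortifies the individual would have to expose.

```python
# Entirely hypothetical: a phone that tells you when you're being scanned.
# detect_reader_ping() is an invented stand-in for radio-layer access that
# today's handsets do not expose.
import time

def detect_reader_ping() -> bool:
    """Stub for a hypothetical detector of RFID-reader interrogations."""
    return False  # a real detector would inspect RF traffic here

def scan_alert_loop(poll_seconds: float = 1.0) -> None:
    """Notify the owner the moment a reader interrogates their tags."""
    while True:
        if detect_reader_ping():
            print("Alert: an RFID reader just scanned you")
        time.sleep(poll_seconds)
```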

Time is of the essence — if we wait too long to engage the development of these technologies, we will miss the opportunity to set up a way to explore these issues, and to set up a possible set of architectural/institutional features that will help fortify the individual in this regime.