June 30, 2003

Lessig and Zittrain - Pornography and Jurisdiction [5:40 pm]

(entry last updated: 2003-06-30 19:09:54)

Starting with a Zittrain-Lessig dialog. (Donna’s notes)

We posit a policy problem: how to solve the pornography problem? Step one: state the problem. Is it the existence of pornography? No. It's that pornography is accessible to children. One solution might be to ban porn outright. [Assume we know pornography when we see it.]

Isn't there a constitutional argument? Doesn't an outright ban go too far? Butler v. Michigan says so - we cannot get rid of all porn for all people, because adults are entitled to access it.

So, let's just make sure that children cannot get porn - figure out how to card people in cyberspace - let's employ identity. For example, says Z, let's demand a credit card, which children generally won't have.

L: So I have to give my credit card to pornographers to get porn? I don't want to give out my credit card - in fact, I don't want any ID associated with my demand for porn.

Z: What about a digital ID? Something that associates some set of information with that ID, one piece of which is your age.
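(An aside from me, to make Z's proposal concrete: a minimal sketch of the one-bit age check, assuming a made-up token format and issuer key. None of these names come from the session, and a real scheme would use public-key certificates rather than a shared HMAC key - the point is just that the site learns "adult or not," not who you are.)

```python
import hmac
import hashlib

# Hypothetical shared secret of a trusted ID issuer (illustration only;
# a real digital ID scheme would use public-key signatures).
ISSUER_KEY = b"issuer-demo-key"

def issue_age_token() -> str:
    """Issuer side: sign the bare claim 'age>=18' - no name, no address."""
    claim = "age>=18"
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "|" + sig

def verify_age_token(token: str) -> bool:
    """Site side: accept the token only if the signature checks out.
    The site learns one bit (adult / not adult), not the visitor's identity."""
    try:
        claim, signature = token.rsplit("|", 1)
    except ValueError:
        return False
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected) and claim == "age>=18"

print(verify_age_token(issue_age_token()))  # True
print(verify_age_token("age>=18|forged"))   # False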

L: That seems to work; it should give me the anonymity the Constitution requires, so the CDA problem is avoided, I think. But there's still a burden of presenting ID to access speech - doesn't the First Amendment protect my access to porn without that burden? After all, the average porn consumer is never carded in realspace; s/he looks old enough without one.

L: Given that there’s a compelling reason to avoid children getting porn, is this really the least egregious burden? After all, the pornographer also has the burden of ensuring a robust mechanism for checking these IDs. Isn’t there a solution that might avoid this burden? If so, then that’s the Constitutionally required approach.

L: How about requiring a kid ID rather than an adult ID? Say, a kid-enabled browser: parents set up their children's computers, so the burden falls on parents instead of on the adult porn consumer.

Z: So we require the porn website to identify itself, and the children's browser looks for that label and blocks it. For example, look at Internet Explorer's Content Advisor, where rating levels can be applied. OOPS - CNN.COM is blocked by a level 4 setting on violence. Looks like trouble.

L: So sites will have to do their own labeling - let's pass a law. Wait - under the law, labeling is required only for pornography, so detailed labeling isn't necessary. Just a porn flag - or, as the goalposts move, a "harmful to minors" flag.

So we get a new burden - what does "harmful to minors" mean? Let's skip that altogether and let the free market generate third-party ratings. Once the ratings appear, browsers will arise that recognize them. Examples: the ADL HateFilter; Net Nanny; RSACi, now owned by the same outfit that does the filtering for the ADL.

This looks like a job for <meta …> tags. Larry points out that labeling can characterize content well beyond the porn/no-porn axis. Is it possible that this is more burdensome?
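(Another aside: roughly what the kid-enabled browser's filter does with such a label. The PICS-style syntax is simplified and the parsing/threshold code is my own illustration, not IE's actual logic; RSACi labels really did carry four axes - nudity, sex, violence, language.)

```python
import re

# A simplified PICS-style label as it might appear in a page's <meta> tag.
PAGE_HTML = """
<meta http-equiv="PICS-Label"
      content='(PICS-1.1 "http://www.rsac.org/ratingsv01.html" r (n 0 s 0 v 4 l 2))'>
"""

def parse_label(html: str) -> dict:
    """Pull the axis/level pairs out of a PICS-style label (simplified)."""
    match = re.search(r"r \(([^)]*)\)", html)
    if not match:
        return {}  # unlabeled page: the browser must pick a default policy
    parts = match.group(1).split()
    return {axis: int(level) for axis, level in zip(parts[::2], parts[1::2])}

def blocked(html: str, limits: dict) -> bool:
    label = parse_label(html)
    if not label:
        return True  # strict mode: block anything unrated
    return any(label.get(axis, 0) > limit for axis, limit in limits.items())

# A parent allows violence only up to level 3 - this page (v 4) is blocked,
# which is exactly the CNN.COM surprise from the demo.
print(blocked(PAGE_HTML, {"n": 0, "s": 0, "v": 3, "l": 4}))  # True
```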

Z: Why isn't filtration along multiple axes better? Parents can police more speech - it may mean less free speech, but kids are protected. It's digital babysitting.

L: What if you criticize Net Nanny? Mightn't they block you for that? Say, Peacefire, which tells you how to evade filtering software?

Z: Zounds! Better block that site; it’s depriving parents of the right to manage their children’s online activity.

Is Cyberspace Burning? - the ACLU's white paper arguing that filters are going to eliminate free speech on the internet.

(Note: this is NOT an easy session to summarize - I’m looking forward to seeing how Donna does this!)

"Harmful to minors" - versus pornography. Larry raises the question, beyond the constitutional issues, of effectiveness - will this really work?

Terry tells us that Posner has upheld the Aimster injunction:

Copyright law and the principles of equitable relief are quite complicated enough without the superimposition of First Amendment case law on them; and we have been told recently by the Supreme Court not only that “copyright law contains built-in First Amendment accommodations” but also that, in any event, the First Amendment “bears less heavily when speakers assert the right to make other people’s speeches.” Eldred v. Ashcroft, 123 S. Ct. 769, 788-89 (2003). Or, we add, to copy, or enable the copying of, other people’s music.

OK - can I get back on track - I doubt it, but let’s see …

We're now on the subject of eBay and Nazi memorabilia. In France, Nazi memorabilia is "pornography" in the sense we defined it above - and the law is written to allow private rights of action. Yahoo! faced exactly this, and went to the US courts asking for a declaration that the French orders were unenforceable, because this restriction on speech would violate the US Constitution.

That brings us to jurisdiction - and to iCraveTV: rebroadcasting TV over the internet was arguably legal in Canada but illegal in the US. A US court held that iCraveTV was engaged in illegal activity, and the service shut down, even though it was in Canada (well, with a small bit in Pittsburgh).

Bringing us to Sealand - and to IP geolocation (Quova). As it turns out, Quova's mapping can filter out roughly 80% of French users from seeing Nazi memorabilia. iCraveTV promised it would reach 99% accuracy, but even that was not acceptable to the US court (copyright restrictions being treated differently from other restrictions).
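(Aside: a sketch of what this geolocation zoning amounts to. The prefix-to-country table is a toy stand-in for the database a service like Quova actually sells; the ~80% figure above is precisely about how often such a lookup is right.)

```python
import ipaddress

# Toy prefix-to-country table standing in for a commercial geolocation database.
GEO_TABLE = {
    ipaddress.ip_network("2.0.0.0/8"): "FR",   # hypothetical French block
    ipaddress.ip_network("12.0.0.0/8"): "US",  # hypothetical US block
}

BLOCKED_FOR = {"FR"}  # jurisdictions where a court has ordered the listing withheld

def country_of(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    for net, country in GEO_TABLE.items():
        if ip in net:
            return country
    return "??"  # unmapped: the ~20% where geolocation simply fails

def serve_listing(addr: str) -> str:
    if country_of(addr) in BLOCKED_FOR:
        return "listing withheld in your jurisdiction"
    return "listing shown"

print(serve_listing("2.1.2.3"))   # withheld (mapped to FR)
print(serve_listing("12.1.2.3"))  # shown (mapped to US)
print(serve_listing("99.1.2.3"))  # shown - the leakage the court weighed
```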

An example from Google: a search on "stormfront" yields different results depending on your location - in Germany, the white supremacist group doesn't appear. Google makes no formal declaration of these filters, so we find the technical world quietly zoning the internet, because the suppliers of content are being asked to do the filtering.

China is another example, except that there it is, essentially, the ISP that blocks sites.

Leading to an internet that is locally defined by the local jurisdiction - "national sovereignty is paramount," per the Treaty of Westphalia - a technologically implemented mechanism to sustain local jurisdictions.

Q&A

  1. Comment: what strategies can be used to deal with the tradeoff between issue advocacy that looks like pornography to filters and truly harmful content? A terribly messy problem.

  2. How does Jonathan get his research results? The trick is to dial up into AT&T Beijing and then try to access sites - expensive phone connections, but informative, though it's truly a game of twenty questions. Eventually the dialup strategy stopped working, so other tricks were used. (A rough sketch of the probing idea follows this list.)

  3. A question of power emerges from this discussion: is it really bad that states don't get to exercise their power? Larry argues that letting technology settle the question co-opts the opportunity to debate the values underlying these choices.

  4. A reiteration of the need for the sovereignty of states. Jonathan argues that what he's most worried about is the internet issues. Larry picks up on something Jonathan says: the effectiveness of the filtering/zoning combination lets lawyers escape the complications of jurisdiction. This easy out means we won't address the hard doctrinal questions that really need to be addressed.

  5. Why is child porn such a big deal compared with, say, bomb-making instructions? Lots of dodging, ducking, and weaving.
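(Aside on question 2: roughly what the probing boils down to - fetch a list of test URLs through the in-country connection and record which ones fail. The URL list and timeout here are illustrative; this is not Jonathan's actual tooling.)

```python
import urllib.request
import urllib.error

# Illustrative test list; the real studies probed many thousands of sites.
TEST_URLS = [
    "http://www.example.com/",
    "http://www.cnn.com/",
]

def probe(url: str, timeout: float = 15.0) -> str:
    """Fetch once through the in-country connection and classify the outcome.
    A timeout or connection reset is the usual signature of blocking."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"reachable ({resp.status})"
    except urllib.error.HTTPError as err:
        return f"http error {err.code}"
    except Exception as err:  # resets, timeouts, DNS failures
        return f"possibly blocked: {err.__class__.__name__}"

for url in TEST_URLS:
    print(url, "->", probe(url))
```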

I think I made a mess of this. No wonder not many people do this…..

Postscript: I confirmed with Jonathan what I thought was going on at the end: there's at least an open question in some minds here (Jonathan's in particular) on whether sovereignty on the internet is a bad thing or a good thing - i.e., should cyberspace mirror realspace in these sorts of questions? While on the one hand we tend to think free speech is generally a good thing, we also have examples showing it can be terribly harmful (threats and hate speech, on up through a host of consequences of the conversion of speech into various forms of commerce).
