Terry and Jonathan: Domain Names

<<About ready to start round 4>>

Terry: The domain name system, starting with Jonathan

Jonathan: Internet governance: domain name law and policy

The difficulty of discussing cyberlaw — not a real thing; “The Law of the Horse” (get ref later) – but a struggle to establish what it means.

Internet governance is a particularly complex thing. Domain names, we all know what they are, but we don’t all know why we should care

So, Internet Corporation for Assigned Names and Numbers — ICANN. What is it?

What does CDT have to say about “Domain Names?” (see cdt.org) Or, let’s look at the ACM’s WWW site on its Internet Governance Project. It looks like a black hole of a topic. Also the World Summit on the Information Society (WSIS). Read the plan of action — hmm, what is there to do? Doesn’t really set you up to do much but form committees

So, is there a “there” there?

Well, one place where the architecture is expected to affect things is domain names. A mess: some instruments to repair the mess.

1) Legal interventions

2) Political interventions

Who runs the internet?

Origin of the mess: Jon Postel, Cerf, Crocker — builders of the ARPANET.

How to set up a network so each computer knows how to find the computer it wants to talk to? Dave Clark points out the power of the committee structure used to keep people on the outside of things — keep moving things around to keep outsiders out. This committee tried to work through this problem of finding and “naming” computers for communication.

So, these committees grew to become the Internet Engineering Task Force — IETF — the name of their activity. You can participate, but you can’t join. (ietf.org) — “we reject kings, presidents and voting — rough consensus and running code is the goal”

The hum as a voting mechanism; the norms of the organization as they work toward setting up the “rules of the road” — protocols and standards. Titled and numbered — the Requests for Comments — RFCs. Reworked iteratively until it stabilizes — and a protocol/standard is set

“… the beginning of a dialog and not an assertion of control” — RFC 2555

So, this process is applied to finding computers on the net. The first approach was a unique number for each computer. (Side story on running out of numbers, IPv6, etc.)

Decided that the numbers should be roughly associated with locations; so names were added to the computer ID system; an indirection instrument as well.
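A back-of-the-envelope illustration of the numbering side story and the name-over-number indirection (my addition, not from the talk; the address used is a reserved documentation address), using Python’s stdlib `ipaddress` module:

```python
import ipaddress

# The "running out of numbers" side story: IPv4 addresses are 32-bit
# integers; IPv6 widens the field to 128 bits.
ipv4_space = 2 ** 32    # about 4.3 billion addresses
ipv6_space = 2 ** 128   # about 3.4e38 addresses

# The indirection point: a name sits on top of a number, so the number
# can change while the name stays stable. (192.0.2.1 is a reserved
# documentation address, used here purely as an example.)
addr = ipaddress.ip_address("192.0.2.1")
print(ipv4_space, ipv6_space, addr.version)
```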

So, we got The List; kept at MIT; got too popular and too hard to update. It would not scale easily.

So a change. In 1984, a hierarchical naming system was set up, with the root list at the top. From there you ask for the .edu list holder, from there who has the harvard.edu list, etc.
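That chain of delegations — root, then .edu, then harvard.edu — can be sketched as a walk down nested lists. A toy illustration (hypothetical data and a made-up address; not the real DNS protocol):

```python
# Each zone knows only who holds the list one level down.
ROOT = {
    "edu": {
        "harvard": {"www": "128.103.60.55"},  # hypothetical address
    },
}

def resolve(name: str, zone=ROOT):
    """Walk the labels right to left: root -> edu -> harvard -> www."""
    node = zone
    for label in reversed(name.split(".")):
        node = node[label]
    return node

print(resolve("www.harvard.edu"))
```

Real resolvers do the same walk over the network, asking each level’s list holder for the next.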

Who gets to run .root — Jon Postel; also some of the top level domains (TLD). Jon ran this until he got bored with it; it was too mechanical/administrative.

Went to the NSF — find someone else to run these lists. NSF found Network Solutions (.com, .net, .org).

Postel, when asked, started offering up more TLDs, starting with .uk, and then found the ISO 3166-1 list and used it to set the TLDs — Jon lets someone else handle the problem (had to grandfather .uk, because its code on the ISO list is GB)

Problems start to arise:

1) Network Solutions started making (*gasp*) money. NSI said, let’s just rent names and the NSF can stop paying contracts. And suddenly people have to start paying for their domain names.

2) Also, time to expand the namespace — who gets to do this, and how?

3) And finally corporations start to discover “cybersquatting”

Legal Intervention — Mop number one


How was NSI doing name allocation? First come, first served.

Seems easy, but has some problems (why should you get “mobil.com”)

First kind of dispute: Josh Quittner registers “mcdonalds.com” — the origin of cybersquatting

Other kinds of disputes here:

  • Typosquatting e.g., Misrocost.com

  • Competitors: Kaplan.com registered by Princeton Review

  • Noncompetitors: The guy, Howard Johnson, registers howardjohnson.com – the hotel chain complains

  • Retailers: webergrills.com registered by the BBQ Pit, a retailer of Weber Grills

  • Commercial v. noncommercial user: reverse domain name hijacking — a child’s nickname, Pokey.org, is registered by his parents — the toy company goes after him

  • Fan sites: brucespringsteen.com, not owned by the Boss, who owns brucespringsteen.net – he wants the .com version

  • Gateway: southafrica.com, owned by a travel agency

  • Parody and commentary: verizonsux.com becomes verizoneatspoop.com, etc….;

  • People Eating Tasty Animals gets peta.com
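The first category above, typosquatting, can be screened for mechanically. A minimal sketch using edit distance (the threshold and the example names are illustrative assumptions, not from the talk; real screening also weighs keyboard adjacency, homoglyphs, etc.):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete from a
                           cur[j - 1] + 1,             # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def looks_like_typosquat(candidate: str, mark: str, max_edits: int = 2) -> bool:
    candidate, mark = candidate.lower(), mark.lower()
    return candidate != mark and edit_distance(candidate, mark) <= max_edits

print(looks_like_typosquat("micosoft", "microsoft"))  # True: one edit away
print(looks_like_typosquat("kaplan", "microsoft"))    # False: unrelated name
```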

The companies disadvantaged by this sort of thing went for legal recourse

Trademark law

  1. Identical marks on competitive products — a non-Nabisco cookie labeled Oreos

  2. Similar mark on competitive product – “Boreos”

    • How close do they look (Squirt/Quirst)

    • How close do they sound (Huggies/Dougies)

    • How similar is their meaning (Apple/Pineapple)
  3. Similar mark on noncompetitive product — Boreo Bottled Water — lots of factors to ascertain whether there is a likelihood of consumer confusion (who’s making it, who’s endorsing it; post-sale confusion — making others think it’s a special product; initial-interest confusion — something to trick you into making the effort to go get something that isn’t there, but since you are there anyway, you go for the second best)

  4. Dilution doctrine — no confusion, but erosion of the power of the plaintiff’s mark (Greatest Show on Earth/Greatest Snow on Earth — “Don’t leave home without it” on a condom wrapper)

Application to domain name disputes not so good — why? No use in commerce; no real consumer confusion; dilution requires that the plaintiff’s mark be famous; judicial proceedings are slow and expensive

Back to “Z”:

Formation of ICANN — the nerds give up and turn it all over to the lawyers

The Internet dog wags the Law tail — tough cookies; so how to reverse the order so the Internet has to follow the law instead

Postel’s problems, again — expand the namespace and resolve the disputes

1) Add some new names and let NSI solve it (rejected)

2) Do an RFC and let the IETF help solve it (rejected again)

3) Convened a committee – the International Ad Hoc Committee (IAHC) – tried this

The IAHC (iahc.org) see the member list

Wrote the gTLD-MoU — the generic top-level domain memorandum of understanding — which died because the IP interests were not satisfied with the outcome

Led to a “constitutional crisis” — no authority, no organization. Moreover, Jon had turned over the primary .root to NSI

NSI started throwing their weight around, and Jon wasn’t allowed to add names. There were mirrored roots; and Jon organized a test of the network to use the non-A root — NSI called the FBI and said that the internet was being hacked. Meltdown

Commerce Department and Ira Magaziner got into the picture – issued a statement of policy – DNS Statement of Policy

The international forum on the white paper met to discuss, surprise, the white paper. A constitutional convention on the internet name space. Jon/IANA didn’t like this — saw this as a way for NSI to get cover for doing what they’re doing anyway.

Instead, Jon Postel and NSI worked out their differences and ICANN was formed; voting, participation, consensus; and a method for resolving domain name disputes

Terry: Let’s then look at the mechanism for domain name dispute resolution

2 components, designed to respond to the problems of the earlier situation: (1) the UDRP and (2) the Anticybersquatting Consumer Protection Act.

So UDRP: forbids abusive registrations and use of domain names

1) Identical or misleadingly similar to a trademark owned by someone else

2) No legitimate interest

3) Used in bad faith

“Bad Faith:”

Grabbed it to sell to the natural owner

Grabbed it to keep others out

To disrupt the competitor’s business

Divert traffic to grab business

“Legitimate interest:”

Used (or prepared to use it) to offer real goods and services

Defendant commonly known by the name

Defendant made legitimate noncommercial or fair use, without intent to misleadingly divert consumers or tarnish the mark
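The UDRP test above composes as a conjunction: the complainant must establish all three elements. A schematic sketch (a deliberate simplification — real panels weigh evidence, not booleans):

```python
from dataclasses import dataclass

@dataclass
class Registration:
    similar_to_mark: bool        # identical or misleadingly similar
    legitimate_interest: bool    # e.g., registrant commonly known by the name
    bad_faith: bool              # e.g., grabbed it to sell to the mark owner

def udrp_transfer_warranted(r: Registration) -> bool:
    # All three elements must be established by the complainant.
    return r.similar_to_mark and not r.legitimate_interest and r.bad_faith

squatter = Registration(similar_to_mark=True, legitimate_interest=False, bad_faith=True)
fan_site = Registration(similar_to_mark=True, legitimate_interest=True, bad_faith=False)
print(udrp_transfer_warranted(squatter), udrp_transfer_warranted(fan_site))  # True False
```

The conjunction is why a legitimate interest (a fan site, a person’s own name) defeats the complaint even when the name matches the mark.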

What does this cover? Any name issued by ICANN – about 30 million domain names. Note, since ICANN is not a sovereign, what is the source of its power?


In practice: the complainant picks a forum (WIPO these days, the most plaintiff-friendly) — 20 days to respond; no additional submissions; arbitrators must issue a decision in 14 more days. Mostly, respondents default

Remedies limited: Cancellation of registration or transfer of registration

As of this month: 9377 proceedings; more than under trademark law, but a small fraction of the total. The rate of filings is declining, mostly because there was a boom in registrations in 2000.

WIPO gets about 70% of the cases; and they tend to favor plaintiffs.

Anticybersquatting Consumer Protection Act — In the US

Resembles the UDRP; a civil action for trademark holders — requires bad-faith intent to profit

Several factors to work through

Remedies are more elaborate — injunctive relief; damages; statutory damages; in rem jurisdiction (you can bring suit against the domain name itself)

Examples of ACPA actions:

  • PETA sued People Eating Tasty Animals; court compelled the parodist to turn the name over to PETA

  • barbiesplaypen.com went from a porn site to Mattel

Emerging doctrine — some straightforward cases go through ICANN mechanisms; the complex ones through trademark mechanisms instead. Sadly, what actually happens is that the complex issues are also brought before the ICANN UDRP process — plaintiffs get to pick forum; no appeal; no review


Complex, unpredictable; TM power; impediments to free speech

Z: Why do we care/should we care/does this really matter?

Increasingly, this looks archaic and beside the point.

1) Well, without a domain name, you can’t be heard on the Internet

2) It matters who is the other person in the contract

3) Historical accident – the internet boom made us care a lot about this; who gets to run it; etc.

Maybe Google is the answer to the problem; and no one is saying that Google should be “run in the public interest”

How upset would you be if Google de-lists you?

What if Microsoft is going to add functions that gut your firm’s business?

What if all documents now have your name permanently attached to them?

What if your ISP won’t let you serve pages?

What if ICANN takes away your domain name?

At what point do you feel that there should be an agency that acts to resolve conflicts and arbitrate disputes?


Q: I worked for TuCows on ICANN-related issues. I agree that ICANN/WSIS is something that people are latching onto, but they’re still not quite sure what it’s supposed to be doing. Once people see that this is just a fad, then they’ll move on to other things.

Q: DWiner — Does Google respond to DMCA requests?

A: See ChillingEffects.org – search on Google – see that they get them all the time

Note that Google indicates that entries in a search were subject to DMCA removal, and then Google points you to the complaint so you can see what the complainant didn’t want you to see

Q: Seems unfair to tie WSIS to the ICANN mess. WSIS did some good things, and the process was important

A: Z: As a way to improve awareness and to articulate needs, WSIS is probably just fine. A place for airing the problems, but it’s still not a great place to set up internet governance.

Terry: I have a reaction to your last slide — let’s survey a variety of things and ask which might be troubling. Compared to many of these, the loss of a domain name is not so terrible.

In the 19th century, there was a set of legal thoughts called the crystallization of the public-private (something) distinction. A demarcation of what was the private domain, and what was the domain of governments. Then the constitution got applied to the public things, and not to the private things. This division has held for over a century, with small exceptions (a company builds a town, for example)

But this story points to an idea that, perhaps, the constitutional freedoms that we apply to public agencies are now something that private agencies like ICANN have to hew to as well.

Z: I like this — it frames things that I’ve been trying to explain about why internet governance is something that does matter — and in a way that perhaps suggests that there’s a need to reframe our assumptions about the role of certain private agencies to work to defend those things that we currently only require governments to maintain.

A great summation

Larry and Yochai: End to End

<<Back from lunch and off again>>

Yochai: This morning we introduced key structuring frameworks for thinking about internet law; the sources of constraint on action and the mapping of the layer structure onto digital communication. We started on the physical layer; now we move to the logical layer

Thus, end to end, or e2e — organization of the network at the logical layer.

Larry’s going to introduce e2e, and then I’ll talk about pressures on the way it works today.

Larry: What could be more tedious than a discussion of technical protocols after lunch *laughter*

1935: Armstrong uses FM to broadcast an organ recital from Long Island to the Empire State Building. Inventor of FM – less static and higher fidelity than AM, the dominant method of broadcasting; traveled longer distances because it penetrated the ionosphere.

RCA was the technology leader in (AM) radio. RCA had asked Armstrong to develop a technology to remove static from AM; but he brought them FM – “a whole damn new industry to compete with RCA”

So, RCA went to the government and got the FCC to carve up spectrum to make FM hard to use. And they fought Armstrong’s patents. In 1954, as the patents were about to expire, RCA offered to settle for a fraction of his legal fees. Armstrong committed suicide.

1964: Paul Baran of RAND was looking for a way to develop redundant communications in the face of a war threat for the DoD. Packet switching as a concept was floated as a key piece of this technology.

AT&T, the dominant network technology holder, was a circuit switcher, not a packet switcher. (a) It won’t work (b) And we won’t let a competitor use our network anyway so forget about it.

Third example: video to computers, versus video to televisions. How to make it work, cuz’ it looks like a great idea. 1998, Excite/AT&T were trying out @Home, and they thought about using IP pipes to deliver broadband. People asked AT&T about delivering video — “we had not spent this much $$ to have the blood sucked from our veins” by creating a competitor to our cable system

Three examples: innovation has an important side effect that we don’t talk about much, even though we really favor it

Kahn/Cerf – grad students

WWW – CERN/Swiss

ICQ – Israeli

Napster – Shawn Fanning

Kids and non-Americans are some of our key innovators in the digital network – the ideas come from outside the core of the institutions that have a vested interest in the past

We should be architecting the opportunity for innovation, without fear of the incumbents — outsiders as innovators who don’t worry about the interests of the status quo

Logical layer – not including the operating system/application layer for the moment

E2E in the logical layer — intelligence at the edge of the network; simple at the core. Not like the AT&T network, where the core gets to decide; rather, the users get to decide how they want to communicate over the net (David Isenberg)

Isenberg refers to the idea of the “stupid network” while working at AT&T – heretical thought; lots of resistance; eventually asked to leave AT&T

The idea was older than Isenberg – Saltzer, Clark and Reed papers – evolution of the thinking

1981, v1 of e2e — functionality of the network can be organized in an e2e way — the function implemented with the knowledge and help of an application at the endpoints (not in the network hardware). Framed as what is possible in the network — a way to think about reliability and security

1998, v2 of e2e — a broader theory — now it’s about what is preferred, rather than what is possible. It’s a policy for architecture design, rather than a rule — do the best you can to preserve e2e as much as possible.

Embedded through a suite of protocols — TCP/IP — the hourglass slide — IP at the core

“Architecture has policy [implications]” (A thesis I’m working on)

Implication 1: Flexibility — The makers were humble enough about their work that they had no complete set of ideas about what the network would be used to do, so they decided to design it so they would stay out of the way — a desire to avoid coordination at the core.

Implication 2: No coordination (required) – we’ll try to engineer a need for coordination out of the network.

For example — voice over IP — a way of using the packet-switched network to send voice telephone calls. Don’t need to ask the FCC whether or not we can make VoIP. Instead, somebody writes a program — and if someone else uses the program, then the two users can talk to each other — and as long as it respects the protocol, it works without permission
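The “works without permission” point can be made concrete: any two endpoints that agree on a protocol can exchange packets with no gatekeeper in the middle. A minimal UDP sketch (localhost only, one hypothetical “voice packet”; real VoIP adds codecs, jitter buffers, and signaling):

```python
import socket

# Receiver: bind to any free local port; no registration with anyone.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))        # the OS picks a free port
recv.settimeout(2.0)
port = recv.getsockname()[1]

# Sender: as long as both ends speak the same protocol, it just works.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello, voice packet", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)
print(data.decode())
recv.close()
send.close()
```

Nothing in the core inspected or approved the application — that is the e2e design at work.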

Implication 3: Rapid innovation

Example – Gopher from the U of Minn. A file distribution and navigation tool. The university started to discuss how to raise money by charging for the use of the gopher protocol — people bailed, and turned to a new protocol – WWW as a superset of protocols (Gopher Manifesto) that embeds many things, including gopher.

Gopher lived and died according to what the USERS wanted/did — makes no difference what the dominant provider wants to have happen

Competitive Implications: Maximize competition — Let’s start with the commons; we need to forget Garrett Hardin’s “tragedy” story. The point is that there are some commons that lead to comedy, rather than tragedy.

For example: English language — its value derives from the number who use it. The more who use it, the more valuable the resource is

There are many such resources — they are, among other things, “non-rivalrous” — your consumption of the resource does NOT prevent me from consuming it too

E2E builds an innovation commons — everyone in the network is free to create without restricting anyone else’s freedom (assuming you play by the rules of the network protocols). And the more who create, the more innovation, the more new things that happen

Competitive Implications 2: Minimize strategic threat — strategic threat, as in the ability of an actor/competitor to act in a way that is harmful to competition. So, rather than competing with competitors, acting to weaken the ability of competitors to compete

Example: Microsoft and Netscape/Java – a possible cross platform compatibility technology. It’s not a question of whether it worked, but if it had, it would have changed the position of the Microsoft O/S

Bill Gates’ memo says – Yikes! This could be trouble! What to do? “!” The answer was to interfere by taking Netscape out of the picture and putting Internet Explorer in place instead – making it impossible for Java to construct a potential competitor

So, Windows was open for many kinds of applications, but not browsers that would run Java or something else that would otherwise compete with Windows

The U.S. Court of Appeals held that this was illegal; again, protection from innovative competition

E2E makes it hard to do this – lowers the cost of innovation

Competitive Implications 3: Consumer-financed growth

While the internet was/is neat, the power of the network comes from putting intelligent machines at the edge of the network; consumer-purchased devices added machines to the network, and financed the growth of the network and its capabilities

Consider 3G v 802.11 — individuals buy cards, base stations, etc. No central organizer setting up and financing the deployment – consumers do it themselves

“e2e is heaven”

Sadly, e2e is being eroded by those who have interests that are not compatible with the implications of e2e

The physical layer and the content layer are putting pressure on e2e, corrupting the core

Example — policy-based routing: there are lots of benign reasons to sell a router that prioritizes traffic on the network — but now the network owner gets to decide which content is privileged. Pick and choose winners on the network — discrimination at the packet level

Example: AT&T and video; another: Xbox and cable. Microsoft rolled out the Xbox with the ability to play games over the network. Cable companies said — “will we let you run your games on our network?” Problem: chilling effect. Of course, Microsoft might be able to throw its weight around to make a deal, but what about the next Shawn Fanning?

From the content side: let’s talk about media concentration.

Concentration in ownership makes it easy to play control games. Ted Turner and Barry Diller have pointed out that, had the industry been structured as it is today, they would not have been able to achieve the innovative success that they managed.

The FCC, to the extent they think about this, say that the Internet will save us from monolithic media. Well, the e2e internet might do this. But if the internet is no longer e2e, then it can’t, because there are now controllers in the system.

Possible solutions:

1) “Open access” — (see Reed Hundt from last year) — you are allowed as an ISP to get access to the pipes that your customers use at a rate that is fair and equitable; so you can offer ISP over DSL — note that cable did not have open access requirements – this way there is competition in access to the network

Success until the 2000 “regime change” — then it weakened quickly. Good and bad reasons. The good reasons were concerns about the financial viability of opening up; the bad reasons were poor and unfair application of regulations.

The Japanese get 100 MBit DSL for $25/mo; the monopoly telecoms were told to open the pipes, so they did.

In the US, the Baby Bells fought, because they were all competing for every scrap — and resistance to letting outsiders in killed it

2) Network neutrality — the FCC needs to make sure that the network works the same for everyone — content should not be privileged. An odd coalition has formed to maintain this (Amazon, Disney, consumer protection agencies)

3) “Free culture”

Architecture is policy — e2e has policy consequences — under threat — there are some responses to this threat

Yochai: Pressures on end-to-end — legitimate pressures

1) Lack of trustworthiness in peers — spam, viruses — there are advantages to putting security and filtering on the network (firewalls, spam filters, etc)

A firewall can screw up protocols; without knowledge, it can be hard to get through

This means that there has to be a commitment at the designer level to ensure that e2e is preserved

2) quality of service – policy routers (real time voice, streaming) – latency as a problem to solve. So, let the network recognize that “the express train is coming through” and get out of the way.

The problem is what to do when a new application comes in.

Consider Skype: seems to work but it takes work by design to preserve e2e

3) ISPs may want to differentiate themselves — say, by improving response. One way: add expensive bandwidth. Alternatively, write anticipating algorithms that examine user actions and predict next actions — and look ahead to cache what a user will probably want next
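The anticipating-algorithm idea can be sketched as a first-order predictor over request history (the URLs are illustrative; a real prefetcher would also weigh bandwidth cost and staleness):

```python
from collections import Counter, defaultdict

class Prefetcher:
    """Toy sketch: learn which page users typically request next,
    and prefetch the most likely successor."""

    def __init__(self):
        self.successors = defaultdict(Counter)  # url -> counts of next urls
        self.last = None

    def record(self, url: str):
        if self.last is not None:
            self.successors[self.last][url] += 1
        self.last = url

    def predict(self, url: str):
        counts = self.successors.get(url)
        if not counts:
            return None
        return counts.most_common(1)[0][0]  # likeliest next request

p = Prefetcher()
for url in ["/home", "/news", "/home", "/news", "/home", "/sports"]:
    p.record(url)
print(p.predict("/home"))  # "/news": seen twice after /home, /sports once
```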

4) Third party interests — employers; government officials; others with legitimate interests in seeing who is using the network, how. (Say, excessive bandwidth hogs, spammers, surfing porn during work hours)

Potentially legitimate

5) Less sophisticated users — provide functions without requiring the user to be an intelligent user (remember your password, configure your computer); maybe the ISP should help the user out — set up a porn filter, spam filter, etc. for the “lusers” (for example, AOL)

Key issue – A Tradeoff Between Freedom and Control

Sounds pejorative, but there are real policy concerns here

Economic regimes

Innovation is destructive and upsetting; there can be reasons to strive for a certain kind of stability, achieve allocation efficiencies

Network externalities — another unavoidable issue; on one hand, it raises value, but there’s a question of distribution — who gets to extract the rent

Political regimes

  • Who gets to decide – first party or third parties

  • Voluntary or involuntary protections — implications for transparency

  • Universality vs balkanization

  • Commercial vs. noncommercial

  • autonomy vs order

  • popular/pluralist/discourse-centered democracy vs. elitist democracy or republicanism

How to reconcile these conflicts? They are a domain for debate

If we stand back and take a more expansive look at the logical layer, there are actually other things that really influence the logical layer – higher level protocols, operating systems, applications — who sets them up, how open are they, etc.

We see the discussion of the trusted computing layer as a way that inserts a choke-point in the logical layer – DRM/CBDTPA/”Palladium” — these are also threats beyond just the TCP/IP discussion that Larry cited above.

But QoS (quality of service) is a persistent issue


Q: Andrew Odlyzko’s papers on price discrimination (note that Varian tells us that price discrimination is not a bad thing). The questioner asks about the immoral dimensions of price discrimination

Larry: I’m not sure that good v evil is a useful context for this sort of discussion. There are better questions

Q: An application developer notes that a bigger threat than the DRM issues is concentration at the application layer — IE v. Mozilla; what happens if Microsoft elects to redefine the WWW — what do those outside the MS world do? If Quicktime becomes the video standard, Linux video programmers/businesses are dead. Any thoughts?

Yochai: This is central to the Microsoft Java/Netscape litigation. If you look at the extent to which alternatives are provided — 2600 and the DeCSS application begins to attack the problem of why a Linux user cannot view a legally acquired DVD — courts have been unsympathetic and demonstrate what the law is doing wrong.

Moz v. IE — this was cited in the Microsoft case; the Microsoft embrace-and-extend of Java was seen as an issue. Has the European Union fine changed Microsoft’s propensity to do things like this? Hard to say; harder with Quicktime

The operating system is a bigger linchpin in the problem

Larry: Dominance at the application layer (not the platform layer). If the trouble is at the platform layer, then that’s a problem. But if it’s just an application, it’s not a concern.

Media Player, for example, was the EU focus. In Larry’s view, he doesn’t get it — Media Player would have to become a platform, the only way to get content, for that to be a problem.

(sorry — this is a little incoherent)

If the 2600 case had been about an open DVD player, rather than a fragment of code, would the court have ruled otherwise?

Yochai: Probably not

Yochai: Larry, lets probe platform as an idea — is MS Word a platform?

Larry: If it’s cross-platform compatible, then concentration in the market is not a problem. If the files can’t be used across platform, then there’s a problem

Dave Winer: Technologists see these issues differently, and there are some tricky things on the horizon and the law moves too slowly to keep the competitors viable — they’ll die first. We need more white hat activities that can help to get the two sides to work together to make some progress on these fronts

Q: Do things like Max WiFi offer us hope to preserve e2e?

A: Yes, but this depends upon coalition building — we need powerful partners whose interests match ours. Sadly, the content layers have influence all out of keeping with their size — their lobbying has been so excellent. Jack Valenti and those of his ilk have done a great job — we need to build a similarly powerful coalition on our side as well

Q: More on platform

Larry: Platforms are applications that people build things on top of — necessary but not sufficient.

(Sorry, I think I blew the writeup of this discussion — hopefully there are better accounts out there that I can refer to <G>)

Let’s Hope That Other Universities Are Watching

Napster gags university over RIAA’s student tax

Ohio University has put up a survey site to see if students are willing to pay $3 per month for the Napster music service. The $3 figure is the first concrete number given by any school indicating how much Napster and its RIAA bully force are looking to muscle out of students. Ohio University believes it will need 5,000 students to pay the $3 fee to make Napster a break-even proposition for the school. Napster has demanded that Ohio University stay silent about the price before anyone catches wind of the cost.

“Napster called us today and said we should not publicize the details or discuss our contract,” said Sean O’Malley, spokesman for Communication Network Services at OU. “The price was an idea they had suggested early on.”

So far, Napster has refused to provide exact details as to how much Penn State University and the University of Rochester are “paying” for the company’s service at their schools. Napster bills the public $10 per month for its service, but both Penn State and Rochester have admitted to getting steep discounts.

Slashdot discussion: Napster Gags University Over Fees

Yochai – Wires & Wireless/Physical Layer

Refresher of the DVDs

Models of communication — three, defined in terms of who talks to whom and who gets to decide

Broadcast model, telephony model and the internet model

Broadcast – intelligence at the core; simple devices at the edges; content at the core;

Telephone: content at the edges; intelligence at the core; communication defined by the architecture and policies at the core; hard to change

Internet: inverts the network intelligence; intelligence at the edges; core is stupid; no control at the core

Stakes in the architecture:

political values and economic considerations

Political values: democracy and autonomy; equality and justice; basic point is that everyone is a pamphleteer; everyone is a participant. Autonomy: who gets to define how the window to the network is designed; increasing transition from consumers to users; creators rather than merely buyers of product

Economic concerns: innovation and efficiency

Finally, the layered view of the information environment: content, logical, physical

notions of control within the layers; open versus proprietary elements

Freedom across all the layers is necessary to achieve “freedom” in this kind of communication — free at some and unfree at others is not going to work

Now, let’s map this to the state of play at the physical layer – not so much on the machine layer, but on the wireless layer. This is where we’re closest to a closed system, but with opportunities to get to an open one. Towards a duopoly in wireless.

As a practical matter, the PC is not a bottleneck on openness (“what about DRM?”). Handhelds (cellphones) are generally proprietary and closed; but the boundary between PCs and handhelds is apparently becoming more porous — the extent to which a cellphone becomes a small PC

CBDTPA (Hollings) and others introduce a legal layer onto hardware to control copyright infringement, thus limiting the features of some of this hardware. Trusted systems — is the PC to become an appliance or remain a tool? CBDTPA would have required that hardware embed controls/make the PC behave according to the interests of a content owner rather than the PC owner.

FCC’s broadcast flag rule is actually working to take PCs that are capable of receiving TV signals, and making them behave according to the dictates of Hollywood. Trying to make a PC more like a TV – things are finished at the core, and the machines at the edge merely pay to use content.

Yochai remains optimistic because the size of the content industries is small compared with the size of the computer and consumer electronics industries. So, Hollywood isn’t going to win, as far as Yochai is concerned – they’d need to buy some more effective lobbyists.

Now, let’s turn to the wires – DSL and cable – and unregulated duopoly.

FCC reports on the state of broadband access in the US (FCC Report on High Speed Services). Cable and ADSL have been growing; satellite has not grown really at all; fixed wireless, the same. Declining, in fact. Electric utilities were also supposed to come in; again no. When you look to the symmetric big pipes, cable is really dominant in advanced services. High speed to homes and businesses, purely DSL and cable. Symmetric to homes, really cable, with some ADSL

Other wire and fiber dominate non-home, non-business applications; cable seems not to have done too well there.

The basic point, while there’s a long list of options, the fact is that there really is little competition for homes and SOHO – telephones and cable really dominate.

Two years ago 66% of SOHO got high speed from cable; 30% from Baby Bells; 2% from other ILECs and 2% from other local exchange carriers. Essentially the same now.

Most homes have a wire from the cable company and one from the phone company. How did we get here?

Historically, telecomm wires are a natural monopoly – the most efficient way to set up a network of physical wires, which is expensive to do – worth licensing a monopoly to minimize upfront capital costs. So, you pick one and license it; you regulate to avoid monopoly rents.

Reality was a bit less rosy – hard for agencies to regulate, easy for companies to trick them. In 1990s, the digital revolution meant that everyone had a new network to develop; no obvious winner; no obvious future. So, let them all into the market and let them compete — let the market establish which tech will be the winner. So, get rid of regulation, which we’re all tired of anyway. Competition becomes the second-best choice over a perfectly regulated monopoly.

(While this is US-centric, it’s pretty much a global set of assumptions about the way to go – some details different, but essentially a template for action)

1996 Act: aggressive regulation of incumbents, requiring them to provide access to their networks. Functional regulation, but a key difference between telephony and cable – cable was less regulated in this respect. Municipalities tried to raise the level of access; courts said that cable was to be regulated by the FCC only, not by municipalities/local franchising authorities. The AOL–Time Warner merger did require more access over cable. But this seems to be eroding in the face of a theory that says there should not be competition within a mode, but competition between modes — i.e., cable and telephone compete, rather than competition within cable service.

So, is this working? Is this competitive/open? Yochai says NO

Ways out –

(1) municipal fiber to the home; not really working. The network as a public service, common good

(2) Open wireless networks

Municipal fiber to the home – Bristol, VA – 17,000 people, income 68% of the national median. FTTH penetration is about 14%. Like rural electrification, fiber as a public good. Development notions, rather than a profit notion – benefits accrue to the municipality because of the good network, like benefits from good roads. Also, a long time horizon/planning – hard to make work in a firm

Other towns: Chicago, Palo Alto, and others are considering this, but it moves slowly. Put in the fiber and then let others drive/compete on the “dark fiber”

Hurdle: incumbents are fighting these approaches. Abilene, TX: the state legislature removed home rule authority to provide telecomm; also in Missouri. The 1996 Telecommunications Act says that states cannot bar “any entity” from providing telecomm; the question went to the FCC; the FCC liked the idea; states, however, can preempt. Missouri cities fought all the way to the Supreme Court (“any entity”) – the Court decided that states can decide what municipalities can do; in fact, municipalities are not legal entities independent of the state, so the state can push them out.

Q: even dark fiber? A: probably, because it’s still providing elements of a telecomm network, so it’s outlawed

So, it’s looking like a tough battle here, let’s turn to open wireless networks

2) Wireless network – a more complex network than the one that we use now in 802.11x. Actually make each wireless device not only a network device, but also a network router/relayer. So the network itself exists in the ether. The network is then owned by the owners of the network devices — the last-mile problem can get solved by this sort of mesh network.

This is a radical concept; and it’s tied to the idea that spectrum ownership is no longer a technological necessity. Interference is not a problem, because we now have devices smart enough to do the digital signal processing to manage signals better. The engineering model of stupid machines (like radios) led to the idea of spectrum ownership/allocation. With DSP/processing power, smarter devices can now distinguish among many signals in the same “space,” and the spectrum may no longer need to be protected/sheltered from the old notions of “interference.” A room full of 802.11x devices illustrates the potential, if not the entire answer.

Why – cheap processing; Claude Shannon’s information theory; and multi-user information theory

Shannon: communication is the probability that the receiver understood what the sender sent – correct decoding at a distance. Smart processing can overcome an increase in the probability of incorrect decoding.
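Shannon’s capacity formula makes the point concrete: a noisier channel doesn’t stop communication, it just lowers the rate at which reliable decoding is possible. A minimal sketch (the bandwidth and SNR figures here are illustrative, not from the talk):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr: float) -> float:
    """Capacity in bits/s of a noisy channel: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr)

# A weak signal still carries information; it just takes more
# processing/coding to decode it reliably -- the point made above.
strong = shannon_capacity(20e6, 100.0)  # high SNR over a 20 MHz channel
weak = shannon_capacity(20e6, 0.1)      # low SNR: lower, but nonzero, capacity
```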

Multi-user information theory: nearby detectors have higher probability of getting the signal, and far away detectors rely upon relaying among the network participants to get the information propagated across the network. Again, with more processing power, the ability to find the right piece of the information stream goes up. Cooperation helps increase bandwidth

“Repeater network” seems to be the working term — note that, in this architecture, the more users in the network, the *better* the network works; not only does cooperation increase bandwidth, but also more participants in the network increases bandwidth.
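A toy model of why more participants make a repeater network better (the link probabilities are made up for illustration): with independent links, every added relay is one more independent chance for the message to get through.

```python
def delivery_prob(p_direct: float, p_hop: float, n_relays: int) -> float:
    """Probability a message from A reaches B either directly or via at
    least one of n cooperating relays (each relay must both hear A and
    reach B; all links assumed independent)."""
    fail_direct = 1.0 - p_direct
    fail_one_relay = 1.0 - p_hop * p_hop  # relay must hear A AND reach B
    return 1.0 - fail_direct * fail_one_relay ** n_relays

# More participants -> higher delivery probability: the network
# improves as it grows.
for n in (0, 2, 8):
    print(n, round(delivery_prob(0.2, 0.5, n), 3))
```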

Displacement — how much do other actions in the network (as well as the physical constraints) affect the ability of others to communicate while A talks to B

Thus, it may be that there really is “no such thing as spectrum” — at least when it comes to the idea of spectrum as a scarce resource

In the face of this technology, does a market give us a way to make it work? What role does the market play in this space? How to establish market clearing prices? The network is far too fluid and adaptive to meter the market; instead think about the market for the equipment that the users acquire to be a part of the network. And the regulation today is in allowing who gets to build machines that exploit certain parts of the spectrum.

User capitalized networks; your wireless card, base station, etc. form the network – no last mile problem.

Downsides: transaction costs and administrative costs are going to be high

However, there are the other benefits of an open network – innovation; welfare; security; robustness — these benefits may make the costs worth putting up with.

Q & A

Q: “Z” – this seems like a great idea; it’s like carpooling to get into the HOV lane. How to avoid free-riding — how to enforce routing? Micropayments as a mechanism?

A: Note that this was not planted – I wrote some papers on this to suggest that these are good questions that are hard to answer. A couple points

1) Firms have built up around the idea that there’s a need for organization to put together capital assets to achieve certain ends/goods/etc. It appears, however, that when the capital asset requirements are low or the assets are commonly held, there are services that can be supplied without formal organization. We have done this before (“can I ask for a favor?”), but we haven’t thought about it formally.

Because we all have a computer, we can collectively accomplish certain services that are “just favors.”

2) Micropayments impose a ton of transaction costs that make it really hard to reap a benefit. This is really a question of norms, not organization. We agree to play nice together

Q: You mentioned the idea that, if we could make automobiles by holding our watches together for a while, we’d organize firms for car making differently. But, makers also give us a liability target – responsibility of makers

A: There’s a theory of the importance of liability, but the practice is a little subtler than that – you don’t sue Microsoft for each PC crash/hang.

So, first there’s the design of the system to reinforce the goals of the community; second there’s the notion that there’s a tradeoff between freedom and manageability. People won’t let themselves be made liable unless they get control over the way the system works — so is that free enough to make us happy? Or should we make room for other things?

Larry and Jonathan — Regulation of Pornography

Larry: Law, Markets, Norms, Architecture — brief review (high speed pass through all the overheads) *laughter* This is Jonathan’s kind of humor – get used to it….

OK – here we go to think about this framework for a particular problem — selective regulation of speech – pornography. A great illustration, not merely titillating, as it covers all the mechanisms that governments use to selectively regulate speech

Remember, the mechanisms of control are adaptive and dynamic, not static. Moreover, they interact, and the law seems to be the most effective instrument because it cooperates with/reinforces the other three instruments. (Of course, this is Harvard Law, so that will be the perspective)

As an American, the notion of regulating speech is immediately alien to our concepts — the Constitution says no law may regulate speech, but in reality there are lots of regulations

Consider obscenity/pornography. If it’s porn, children cannot access it, but adults can. How to accomplish this?

The “all versus some” problem. Let’s start with “real space.” Make a law – can’t sell porn to kids; moreover, there are norms – pornographers generally don’t want to sell porn to kids; the market engenders barriers to entry, again helping to limit access (you need $$$); the real limit, though, is that it’s pretty hard for a kid to disguise the fact that s/he’s a kid — real space gives us a relatively effective mechanism to authenticate age – almost self-authenticating

Now let’s try cyberspace. Assume the same law and the same norms. However, the market limit is weakened – in particular, the cost of porn is much smaller, so the market does not help to reinforce this policy; the fundamental limit, though, is that in cyberspace, no one knows you’re a dog – we have lost our self-authentication.

(Larry greets Berkman Fellow Dave Winer, who Larry apparently wasn’t expecting to see)

Without these reinforcing mechanisms, the effectiveness of the basic law saying “don’t sell porn to kids” goes way down

“Laws affect things that regulate” – so how might the law be changed to increase the regulability of porn on the net – IF (age == minor) AND (content == porn) THEN Block access

IF(Age==Minor) — Communications Decency Act, CDA — if you’re serving harmful to minors speech, then you go to jail. Forced people who wanted to serve that speech to demand IDs – authenticate your age

(Larry asks if Jonathan will speak) – Z: CDA can’t work – ID on the net is very difficult to achieve.

Larry: You said three things

(1) The net cannot be regulated – the net libertarian perspective (so, why worry – it won’t work)

(2) Terrible burden – now it’s terribly expensive to authenticate, and therefore people will either elect not to speak or will make it difficult for those who have speech that looks like it might be problematic (e.g., Abu Ghraib photos on CNN – porn? How to screen) — what about credit card numbers – kids don’t have them; just give the pornographer your credit card number? Z: uh-huh, right

(3) Why should users be burdened with presenting ID? This burden is in the wrong place

OK – lets try another option – how about constraining the users through limits on the account – permissions

Let’s try this — a browser that announces “I’m a kid” — so block access

Problem — a loss of privacy; plus it makes it possible for websites to target content to kids; vulnerable

New try – Accounts again: instead of broadcasting, we limit the browser by giving it a “Kid mode” — the browser will look at content (<HTM> – harmful to minors tag) and block sites that include this tag – how about that?
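A minimal sketch of that “kid mode” check (the &lt;HTM&gt; tag is the hypothetical marker from the discussion; the page strings are made up):

```python
def should_block(kid_mode: bool, page_html: str) -> bool:
    """Block the page only when the browser is in kid mode AND the
    content provider has tagged the page as harmful to minors."""
    return kid_mode and "<HTM>" in page_html

should_block(True, "<HTM><html>...</html>")   # tagged page, kid mode on: block
should_block(False, "<HTM><html>...</html>")  # adults see everything
should_block(True, "<html>...</html>")        # untagged content passes
```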

Problem: how will someone know what’s harmful to minors, and how will the content provider know what to tag?

Let Z come up with ideas: how Microsoft has solved this problem — IE’s options, with a tab called Content, giving us the Content Advisor. I can select levels of control, with terribly subtle terms – “wanton and gratuitous violence”

People have been paid to then rate the WWW sites along these dimensions.

Larry: For example, NetNanny

Z: NetNanny is something that you can install, you get a subscription, and the machine gets a list of out of bounds sites; the browser now knows what should be blocked – a list of banned sites.

The lists are trade secrets – you can’t look at the list – the list is a competitive asset

Seth Finkelstein has undertaken to explore the bounds of NetNanny by scripting his way through the internet to see what’s blocked – for example, with IE’s tool turned on, the anti-censorware site is blocked – so is CNN.Com

So, once you become a critic of censorware, your site is included in the sites that are blocked – hmmmmm

Problem 1: secret lists. Granted, secrecy protects a competitive asset, but this generally seems like a problem

Problem 2: remember that we started with selective blocking of pornography, not with blocking images of violence (cf. the gaming industry). If the goal was blocking sexually explicit speech, these tools go well beyond regulating that kind of speech — OVERINCLUSIVENESS

So, it’s secret and overinclusive

Z: We can solve that – Platform for Internet Content Selection – PICS; a W3C project so that anyone can make up a list and offer it up to anyone. Moreover, you can take intersections or unions of lists; an open system for making lists. And then we can comment on the lists – that beats overinclusiveness, and parents get lots of options
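The union/intersection idea can be sketched with plain sets (the rater names and site names here are hypothetical):

```python
# Two independently published PICS-style blocklists.
rater_a = {"porn.example", "hate.example"}
rater_b = {"porn.example", "gambling.example"}

strict = rater_a | rater_b    # union: block anything either rater flags
cautious = rater_a & rater_b  # intersection: block only where raters agree

# A parent picks whichever combination matches their own tolerances.
```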

L: takes care of the first problem; public lists – but what about overinclusiveness problem?

We can criticize the ratings, but … we still have to cope with a list – it’s a general architecture for filtering, but the law is only about pornography — HORIZONTAL PORTABILITY PROBLEM

Z: HPP – once we solve one problem, we now have an instrument that can be expanded beyond the formal limits set in law — the Anti-Defamation League Hate Filter

Z: Don’t you want to protect your child?

L: Do we want to build an architecture that makes the internet into a perfect censor-machine?

Z: Well, then maybe we need to revert to a simpler focus

L: If we narrow the scope of censoring, then we’re back to the harmful to minors tags – a simple rule, not easy to apply, but not impossible

Problem: how to know who is regulated, and by whom? The jurisdiction problem (see, for example, Alan Davidson’s/CDT’s discussion of the Pennsylvania law)

Z: Jurisdiction in a system that doesn’t honor/effect/display location – how to get the internet to respect local boundaries without overdoing it

Supply side filtering, say — Let’s try Googling StormFront; we get that in the US; in Germany, however, you go to google.de – and google.de does not show you StormFront – google blocks searches that lead to hate speech sites for users in Germany

Google is honoring the law in Germany;
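Supply-side filtering like this amounts to keying a blocklist on the requester’s country; a sketch (the domain names and country rules are hypothetical, not Google’s actual mechanism):

```python
# Hypothetical per-country rules a search engine must honor.
BLOCKED_BY_COUNTRY = {"de": {"stormfront.example"}}

def filter_results(results: list[str], country: str) -> list[str]:
    """Drop results a given country's law requires the engine to hide."""
    blocked = BLOCKED_BY_COUNTRY.get(country, set())
    return [site for site in results if site not in blocked]

filter_results(["cnn.example", "stormfront.example"], "us")  # both returned
filter_results(["cnn.example", "stormfront.example"], "de")  # one dropped
```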

L: the obvious criticism – what about setting up a site offshore (say Sealand) – if we had our servers there, then we could set up a free internet

Z: Sealand, an abandoned antiaircraft platform off England – a principality – passports, foreign affairs – the Prince of Sealand uses Sealand to set up servers in a place whose laws don’t limit content delivery — or impose other kinds of restrictions on networks

We could employ the Powell doctrine and take it over; we could get their IP provider to block them – so there is one law in Sealand – no child pornography

The other approach is to start filtering on the demand side, rather than the supply side. So, let sovereign nations start blocking content using networks that are under their jurisdiction — the Great Firewall of China – NetNanny implemented at the ISP level

Some governments have tried to assist – Anonymizer, under contract to the US government, gives Iranians a way around their firewall. We use Voice of America to tell people the new domain name for Anonymizer – BTW, this is not to give the Iranians access to pornography: the service filters porn keywords, so “ass” blocks Embassy pages, and “hot” means Hotmail is blocked, too

Saudi Arabia: site exists to allow users to suggest blocks/request lifting of blocks

In the United States: Pennsylvania’s law – it allows you to suggest sites to block; an ISP may then be asked to block an address – overinclusiveness leads to things like getting, say, Geocities to clean up its sites

L: Let’s talk about this – Pennsylvania gets to do this? The Attorney General does this? Is there no appeal?

Z: In fact, there really isn’t a way to know what’s going on — and the government won’t even release the list

L: Why aren’t people attacking this?

Z: Child porn is a third rail; CDT has tried this and they are gingerly moving through the courts; but this is not easy. CDT is working through the procedural issues, rather than touching on child pornography

Q: Winer – Abu Ghraib – that’s porn; what’s happening here?

Z: Miller gives us some boundaries: bodily functions; prurient interest; no serious literary, artistic, political, or scientific value according to community standards — so Abu Ghraib

L: CDT’s court challenge is tomorrow – this is a quiet activity – but it is happening

(Alan Davidson, who spoke at MIT on this subject this term, feels that this is a winnable case, but we’ll see)

Some settled norms that we had with professional publishers are lost in the internet — the cost of distribution has diffused the norms in ways that have made this hard (compared with how Playboy managed its business)

“Independently securable neighborhoods?” Do we want to have the state decide what we can see? And what happens to those who develop anti-censorware?

Q: Do the technological solutions rely upon tagging? What are the limits on forcing people to generate this kind of self policing/tagging?

A: Larry: This is a good question that is not yet answered. Stores being asked to put up a sign that says “Pornography here” might have a problem — a burden for the stores, since people will worry about entering a store with such a sign – but HTML tags are hidden in some ways, so this might not be as burdensome a requirement

The same things have been proposed for spam – people suggest tagging spam so that filters can work.

At this point, this is a court problem – until it’s adjudicated, we don’t really have any guidance yet.

Tagging really can’t fly until there’s buy-in

Q: What about some other kind of processing? What if we require image tagging?

A: Z: A related thing – how granular should the tagging be? A neat question.

In the late ’90s, there were dot-coms that analyzed pictures on the fly, and there are certain features that can be used to distinguish porn pictures from others. However, there doesn’t seem to be a commercial tool out there yet

Q: What about the cultural-technical solution – education, monitoring, etc. are sometimes better instruments for attacking this problem. What about them?

A: Larry – the problem of regulation is a problem of trading off the different modalities. Getting them to reinforce, is an approach. There are alternatives – change norms so that children self-police, for example.

The next try — the Child Online Protection Act — drew testimony that pure architecture is a bad solution; people should also develop the moral sense that gets children to recognize that there *are* choices that they need to be trained to make

Q: (SethF) Censorware ends up being a huge privacy problem – moreover, the goal of blocking keeps coming up against a host of other applications online – so censorware keeps adding more and more things to the list that are incidental to the original intent of the blocking (*I think*)

L: Two ways that governments regulate speech: (a) no one ever sees it; (b) the government changes the mix so that you don’t get so much of it

One Madonna album into Budapest isn’t going to be a problem for maintaining cultural integrity; speed bumps to slow things down might be effective enough

So, do you want absolute control or just enough to slow the flow? This balance is hard to strike, but it may be all you need.

Z: “Small fences can corral large mammals” — maybe just a little is enough

Rule and sanction – corrective punishment once a rulebreaking is discovered, rather than absolute blocking? Maybe a better approach

Terry’s “Background for ILaw”

The program began three years ago, here in Cambridge. July 1, 2001 — an effort to show how the legal system influences the development and use of the internet. Surprisingly intense, with attendees from all over the world — setting a tone that makes it important to ensure global inclusiveness.

The Singapore event later that year; then a 2002 event in Cambridge;

Then an event in Brazil

Then the Stanford event last year

And finally, we’re back here in Cambridge.

(Sequential filling of a world map showing participants – Africa, Greenland and Antarctica remain unrepresented)

20 countries represented here today; 1/3 academia; 1/4 business; 15% lawyers; 15% nonprofits

Introductions to the teachers – Terry, Yochai, Larry, Charlie, Jerry Kang, John Palfrey, “Z” – Jonathan

the iLaw Team also introduced

Note that we hit the ground running — that’s why you got the DVDs

Got In!

After much to-ing and fro-ing, I’m into the Harvard network — thanks, Jesse!! It looks like the Mac users are having the most trouble, although Jim Flowers is also having some Windows trouble.

The principals (well, I haven’t seen Charlie yet) are all here, and it’s almost 9:00. Terry’s at the podium, and the slides are up, so we’re off …..