July 1, 2003

ICANN: Fisher and Zittrain [7:41 pm]

(entry last updated: 2003-07-01 20:50:01)

I’m *sure* Donna did better with this: yep - don’t miss her notes

Z: We are going to talk about internet governance today, in the effort to tie everything together.

ICANN - have you heard of it; are you on it? (Ray, next to me). Domain names, why do people care, and so what?

  1. How the mess came about: Start with Cerf, John Postel, et al. Dave Clark: “Well it started out with 12 people in a room…” The nerds having a good time, and trying to keep the rest of us away.

    Requests for Comments, within the IETF. Iteration through the comments until it converged to a protocol. RFC 2555 - “These notes were the beginning of a dialog, and not an assertion of control.”

    So we have an engineering problem: the internet is driven by numbers. Let’s associate a mnemonic name with each number, so that we appear to use easy-to-manage names. A so-called namespace; a lookup list.

    The original list didn’t scale too well, so the dot structure was added. And, at the same time, the list gets distributed according to the dot names. And the top level domains (edu, com, net) are found by asking the .root list. A hierarchy of distributed domain names dynamically resolved into numbers (a toy sketch of this resolution appears at the end of this item).

    Jon Postel maintained the .root at USC; mirrors were set up (A, B, C, …). The Internet Assigned Numbers Authority (IANA).

    In 1993, Jon Postel decided the job was boring and hard. The NSF had been funding him, so they generated an RFP for someone to maintain the list of names. Jon might maintain .root, and the contractor (eventually Network Solutions Inc.) maintains the list. NSI figures out that charging a fee for renting names would offset the NSF fees.

    Problems emerge: NSI making a lot of money; why can’t others? We need more top level domains. And cybersquatting starts to take place, to the horror of corporations.

    Josh Quittner, for example, found that he could get mcdonalds.com. And kaplan.com pointed to the Princeton Review, a competing group, which offered to give it up for a case of beer; Kaplan elected to sue instead. “Kaplan has no sense of humor, no vision, and no beer” (alternative telling).
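
    To make the hierarchy concrete, here is a toy sketch of the lookup (mine, not from the session; all names and addresses are made up): each dotted name is resolved by walking from the root list down through delegated lists until a number comes back.

    ```python
    # Toy model of hierarchical DNS resolution (illustrative only; real DNS
    # adds caching, remote servers queried over the network, and many record types).
    ROOT = {
        "edu": {"harvard": {"www": "128.103.60.28"}},    # made-up addresses
        "com": {"example": {"www": "93.184.216.34"}},
    }

    def resolve(name, root=ROOT):
        """Walk the dot hierarchy right to left: root -> TLD -> domain -> host."""
        zone = root
        for label in reversed(name.split(".")):
            zone = zone[label]            # ask the current list who holds this label
            if isinstance(zone, str):     # reached a number: resolution is done
                return zone
        raise KeyError(name + " did not resolve to an address")

    print(resolve("www.example.com"))     # -> 93.184.216.34
    ```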

  2. Terry Fisher: Clean-up #1, legal intervention - lawsuits as a solution to the problem. The first pass at this was based on the use of trademark law (1993-1999).

    Four kinds of trademark infringement: (a) identical marks on competitive products; (b) similar marks on competitive products (“areo” instead of “oreo”) - will this confuse the consumer? becomes the operative question, measured against the marketing environment; (c) similar marks on non-competitive products - an even more detailed showing is required to establish consumer confusion - possible confusion includes confusion as to the source of the product, confusion as to endorsement (Rolls Royce radio tubes), and confusion post-sale (bolt-on parts to make a car look like a Ferrari; prestige drops; thus confusing the consumer); (d) dilution - the defendant’s behavior dilutes the power of the trademark, either by blurring it or tarnishing it.

    Back to Terry Fisher: In the US, the dilution doctrine is coming into its own (it originated in Germany). The Trade-Related Aspects of Intellectual Property agreement (TRIPS) is leading to coordination across boundaries.

    Trademark law is not designed for domain names, so some manipulation required. The Toeppen cases suggest that squatting might confuse and certainly dilutes. Other countries move in this direction.

    Problems and limitations of trademark as applied to this problem: First, it is expensive to bring suit - thus many disputes simply settled; second, jurisdictional variations make it hard; third, you really have to strain to make the doctrine fit. Dilution, in the US, applies only to famous marks used in commerce, and requires a showing of blurring or tarnishing, which may be hard to accomplish. Problematic as this problem gets revved up.

  3. J. Zittrain: Clean-up #2, political intervention - taking over the domain name system.

    Jon Postel still struggling with the problem, so he thought about changing the technology.

    • Jon could add some new names

    • Put out an RFC, add some new names

    • Form a committee

    The International Ad Hoc Committee (IAHC) is formed, and it generates the gTLD-MOU - the Generic Top Level Domain Memorandum of Understanding. A big boring document in search of consensus, but it did have some good ideas.

    A conflict emerged between NSI and Postel. Jon found it was time to stop running the A .root, so he gave it to NSI - immediately regretting it. So, NSI was the keeper, but Jon was the administrator. And NSI made it clear to Jon that if the gTLD-MOU process was finalized, NSI would not abide by it.

    Jon got half the roots to conduct a test - use the B.root instead of the A.root. A huge brouhaha. The US Government moved in: Ira Magaziner was brought into the problem through Commerce, which issued “The White Paper,” a statement of policy saying it was time to privatize domain names.

    Attempts within the internet community during the summer of 1998 to develop something that would work. The International Forum on the White Paper (IFWP) forms to discuss a response to this initiative. First meeting held in Reston, VA - home of NSI. No real agenda, yet things start to percolate. Jon Postel, meanwhile, starts to think about a new IANA and the rules thereof.

    NSI again wants Postel out of the picture, so they also generate a set of rules. One suggestion was to negotiate at Harvard, canned by Magaziner, so the meeting was held in DC. Thus was formed ICANN (the name IANA was rejected for obscure reasons).

    At the same time, we have changes in the kind of disputes that arise. It’s not just simple cybersquatting anymore. So we get typosquatting - exploiting users’ misspellings (e.g., of microsoft.com); conflicts between non-competitors; retailers exploiting name variants; commercial vs. non-commercial users (pokey.org); fan sites; parody and commentary (peta.org v. peta.com - introducingmonday.co.uk).

    Two new dispute resolution mechanisms were constructed. One is the UDRP - the Uniform Domain-Name Dispute-Resolution Policy. ICANN recognizes certain registrars, and requires that its licensees sign a contract that includes the agreement to be bound by the UDRP - a contract of adhesion that is global in its reach.

    Ostensibly it governs abusive registrations and use of domain names - “legitimate interest” and “bad faith” as key terms of art that govern success in recovery (largely derived from trademark law). Imposes an expeditious process favoring plaintiffs; done by e-mail; 20 days to respond to a complaint; 14 days to decision. Limited remedies: no monies involved, but cancellation of registration or transfer of the name to the plaintiff. You have 10 days to take it to court, which can be difficult.

    See the Mueller report (6/24/2002), Success by Default: A New Profile of Domain Name Trademark Disputes under ICANN’s UDRP.

    In addition to the UDRP, we see an addition to trademark law, known as the Anticybersquatting Consumer Protection Act - the UDRP on steroids, offering up a civil cause of action using much the same language as the UDRP. Similar terms of art, but also a safe harbor provision based on intent. The key difference is that the remedies are serious money - $100,000 per name, potentially.

    Courts seem willing to apply this act in favor of plaintiffs.

  4. How well did it work? - the state of ICANN today. What did we learn, if anything?

    ICANN constructs a board, many committees, and some at-large members. The at-large membership was called for by Magaziner - the Boston Working Group.

    One worldwide election was held; a disaster according to the board; the public interest groups thought it went OK, or was at least fixable. The board adopted the results, and immediately went to work making sure that there would never be another election. “Succession by right of kings,” as it now stands - no real electoral process.

    When asked if they care about this disenfranchisement, the audience sits in (stunned) silence. Not many worried about this, although Ray London does care. (Note: Barbara Roseman is here and refining some of the description of ICANN - the ITU is acting to try to be a part of the process.)

    Alternative roots/other competitive issues

    The changing organizations and the changing rules that are produced out of these groups. Against this, a set of legal evolutions, moving us through a set of new approaches.

    Goals?

    1. First possession

    2. Avoid consumer confusion

    3. Provide incentives to establish good will

    4. Freedom of expression

    5. Identity/Community/Equality

    6. Efficient Web Navigation

    In a world of Google, do we really need domain names anymore? Two puzzles: (a) If you believe that architecture matters at a deep level, and that markets are a problem, note that ICANN could, in fact, act like a government if it wanted to, with real effects on this intrinsically important architecture. (b) Who’s in charge of this anyway? And how can we build a business upon something that is so oddly managed?

    A number of possible actions that affect you a great deal are listed; yet governance is not generally viewed as the path to a resolution of conflict over these actions - other recourses are selected/applied. Why is the issue of how ICANN is governed supposed to be the answer to the problem of domain names? That is the second puzzle in Jonathan’s list.

    Sorry, I think I lost track of something at the close here. I’m sure to come back to this.


Yochai, Les Vadasz and Reed Hundt Panel [4:55 pm]

(entry last updated: 2003-07-01 18:38:32)

(While waiting for the session to start, read this Slashdot flamefest: Bill Gates On Linux, responding to this USA Today article: Gates on Linux - an astounding rewriting of the past 20 years of PC history!)

Larry has called us to order…. (Donna’s notes here)

A panel discussion today from Les Vadasz (ex-Intel) and Reed Hundt (ex-FCC chair, currently at McKinsey). This afternoon we turn from theory to practice. Larry and Yochai will also cross-examine.


(Sorry - Reed Hundt has asked that the postings of his statements be taken down, so I have to put in some work to sanitize this posting. It will be back, but I won’t get to it until tonight…)

Reed Hundt’s office called to request that his remarks be removed from blogs, because apparently he did ask not to be quoted. I realize this is a toughie and that most people don’t like to remove things they’ve blogged, but it would be great if you could consider removing his portion of your entry.

OK - here’s an attempt - note that I will be running this past the Berkman folks to make sure they’re OK with it. But, for the moment at least, my notes are back.


(Since Reed Hundt asked not to be quoted, the following is going to be an interpretation of some of the key things that I thought he said. It should be noted that these are my thoughts on what I thought I heard, and I am the only person responsible for the ideas expressed - you have to rely on my interpretation, which could certainly be wrong, and Mr. Hundt should feel free to contradict me as he sees fit - FRF)

Reed Hundt’s opening remarks were largely focused upon a set of concerns arising out of what appears to be a wholesale reversal of course in the FCC’s actions over the past three years. While the period preceding those three years was certainly not perfect, there were many actions taken that strove to ensure that a competitive market for communication services would be maintained.

In spite of these efforts, the telecomm business has seen essentially no real revenue growth, outside of wireless. Compared with wireless, internet revenue is not even on the radar. Moreover, most of the overall revenue growth in telecomm has not been captured by the carriers; rather, the big winners have been the hardware companies with good technology assets.

The lesson from wireless is that the traditional carriers didn’t make much from the new technology; yet the internet business is being constrained to support the entrenched network owners (cable and telephone) in direct contravention of the lessons of the success of the wireless market. Moreover, a duopoly has been created, where wireless was about offering access to many more firms. And, we have allowed these established firms to use revenues from other business units to support their internet businesses, violating yet another lesson learned in wireless.

The reasons for the creation of this structure remain cloudy at best - there doesn’t appear to be a clear rationale, or a policy imperative that has been articulated that this set of actions would require.

Moreover, there are some policies that should be revisited. In no particular order, these include:

  1. Mandate interconnection at the broadband provider level. Access should not be something that the provider can turn on or off at the provider’s discretion.

  2. Universal service for broadband should be promoted, rather than subsidizing voice interconnection.

  3. The Antitrust Division of the DoJ should revisit the current presumption that vertical integration in telecommunication is not harmful. (Aside: Certainly not if you believe Yochai’s layer model!)

  4. And maybe spectrum should be auctioned to hardware makers, who then supply hardware to intermediary service providers - then the price of the auction is reflected in the price of the hardware and the technologies embedded to employ that spectrum.

====== End of Summary of Reed Hundt’s Opening Remarks =====


Les Vadasz: I have never been in a room with so many Apple logos facing me as I speak - *laughter*

I want to talk about how one firm has worked to push wireless. And you should know that I was not a believer in wireless for a long time. Seeing the rise of the voluntary hot spot, however, I have seen that there is something real there.

A surprise that the innovation has led to this kind of reliable access. I’m not going to argue that it’s not great to have wireless for your PC - or even your Mac! It is, of course, but getting there is not terribly straightforward. It’s not just designing, building and supplying a product. We already had 10,000,000 cards out there; yet there are still lots of ad hoc issues facing those who want to get on the wireless nets today - even with this standard.

Participation in standards committees is needed, so Intel is now a part of them. But you also need to engage the community of users. We do this in several ways. First, we put lots of money into firms in the wireless ecosystem, to promote development of much of the necessary glue - security, antennas, etc.

Before we got to Centrino, we had 15,000 easy access points installed that would work without hassle with our tech. We want to get many more of these access points out.

While we’re doing this, the government issued a report that wireless was unsafe, insecure and unreliable. Terribly helpful, of course. So Intel had to invest in a host of associated technologies, and we made sure that it would work by ensuring that Intel ate its own dog food - wireless is available at every Intel site. Secure enough for work, and probably better than many wired locations in that respect.

Regulatory agencies are unavoidable in communications. There are some needs that emerge out of this effort. We need more unlicensed spectrum. It would be nice if this were reconciled across jurisdictions, but we could cope. It would be really nice, however, if spectrum policy were to reflect the state of the technologies available - in particular, the concept of the non-interfering use of spectrum otherwise allocated is an opportunity that the FCC does seem to be moving toward, albeit slowly.

Ubiquitous access is thus possible; where does that take us? 802.11 is short range; something else is longer distance (I missed the number). This could remedy the tragic inadequacy of current home broadband capabilities, and thus open up a host of new applications - VoIP, entertainment, etc. It should be that one can just buy transport, with applications supplied by a wider range of competitors than we see today.

With luck Centrino pushes us toward this future.


Larry: OK, Les, tell me what you think about what Reed had to say about how the dinosaurs are defeating the innovators.

Reed told a terrible story; will wireless be allowed to blow these dinosaurs away? Will the political fight go your way?

Les: I am not politically savvy, and you ask a great question. The current participants see the world as it is, rather than as it might be. The Internet may very well need its own infrastructure (!). Municipalities are probably going to be the needed base for this sort of political action.

But, it may also be that the Verizons of the world will get interested, and the land grab will be on. We have an opportunity, perhaps.

Yochai: The DoD seems to be playing around with Congress in spectrum allocation, negotiating some sort of non-interference/operating-restrictions game. Similarly, there is unused TV spectrum in particular localities that could be shared right now, so a software radio can be used on otherwise allocated spectrum. And we have Verizon giving out WiFi spots at payphones, used as a selling point to get you to subscribe to Verizon DSL. Three areas where there is action. Are there opportunities/pitfalls?

Les: This is clearly an example of the chaos I spoke about. Intel should do a deal with Verizon - more access is better; I have no idea if they are doing so. More spectrum is nice, but we need to make the most of what we have, too. Since it takes a long time to move the government, and a short time to innovate the technology, we need to do both.

And a small plug, again, to get away from the nonsense of spectrum auction.

Reed: Here he says something about the idea that technology owners who can put that technology into hardware make the money these days. And he doesn’t see that giving away spectrum works any better than selling it.

Larry: Let’s get back to the point Les made about life within the technological world of rationality. But how to get Washington to understand the land grab that he fears?

Reed: Here he says that almost anything could happen - maybe even the cable and telecomm companies can learn to innovate in this space. And maybe spectrum is not getting priced correctly, not to mention that unlicensed spectrum still is costly to manage, a cost being borne by the hardware buyer.

Les: I know more about the computer side than the dynamics of the communication area. But, the mainframe companies would never have gotten into the PC business without competition. It’s very hard to engineer change from within a business. How can one create competitive pressures to get the firm behavior that is needed to innovate? We need market pressures.

Reed: He agrees on the need for market pressure - that’s what leads to innovation. But the government should intervene as little as possible as it works to induce that innovation.

Larry: But we’ve failed, as you’ve said, in the broadband context.

Reed: We’ve collectively reacted oddly to the flight of capital from this market. Just as the response to the Depression was the creation of agencies to create scarcity so that there was a return to capital investment in communication (that’s what the creation of the FCC did). Do we want to create a situation where there are more opportunities to lose money in this market?

A question on vertical integration: Reed points out that the antitrust perspective today doesn’t believe that there is a real economic loss to vertical integration. In network markets, this ought to be revisited.

A question about capital flight from telecomm; it seems to be about foolish decisions, not about interconnection: Reed suggests two models; the government creates a ubiquitous network, or one where interconnection is required. In the latter, more capital is saved.

Q: We spoke of how pipe owners could exert control over content. What might the FCC do today to avoid this problem? Open access or something else? Reed says this is just too hard to solve - there haven’t been good answers to managing this problem at the regulatory level. This may come down to a question of what sort of distribution model should exist.


I’m not sure I really managed this one well, and I didn’t get to ask my question, so you get to see it here: Les talks about the need for market pressure to promote innovation, and Reed is really happy about markets. Is it possible that part of our problem today is that we are seeing the development of markets that don’t exert the necessary pressure?


Larry Lessig - the end-to-end principle - e2e - and the future of ideas [2:28 pm]

(entry last updated: 2003-07-01 15:46:44)

(Larry’s brought the lights down, so it must be time to start the next session) (Donna’s notes are here)

1935 - Edwin Howard Armstrong broadcast an organ recital from a transmitter located on Long Island, demonstrating FM radio. A novel technology, in the face of AM radio. Demonstrably a better technology for sound transmission: no static, higher fidelity, and it penetrates the ionosphere, so lower power is required to transmit over the same distance.

An employee of RCA, Armstrong threatened the company’s ownership of AM - and Sarnoff of RCA fought this all the way: he coopted the FCC and fought Armstrong’s patents for 6 years, until Armstrong was bankrupted and killed himself.

1964 - Paul Baran of Rand came up with “packet switching.” AT&T was shown this technology and said they hated it: we doubt it will work, and we won’t help create a competitor to ourselves. Thus, the internet was delayed.

How about another - let’s stream video over the internet. But we are bandwidth limited until broadband comes into being. In 1998, Excite and AT&T joined up, and some thought to try to do this. Somers of AT&T says no way. Why compete?

Innovators - internet-Cerf/Kahn - students

WWW - CERN/Swiss grad student

ICQ - Israeli kid

Hotmail - Indian immigrant

Napster - BU Students

Note: all foreigners and kids. Whom we might call “outsiders.”

Does the architecture help or hinder this sort of innovation? The key idea is the end-to-end character of the logical layer. "Intelligence at the edge; simple at the core." A design concept.

Contrast with switched networks. If AT&T liked your innovation, it would get deployed. If they didn’t like it, AT&T would keep you out. If profits are challenged, you’re out; if they are enhanced, you’re in. (cf. Baran and the video story).

In an end-to-end network, this sort of control cannot be exerted. The network is blind to the use of the packets - it just routes and delivers the packets. Thus, it’s driven by what the users want, rather than what the network allows.

David Isenberg introduced this idea to AT&T - he noted that the company’s smart network built limitations into the way the network could evolve. He wrote about it and circulated it within the company. Isenberg’s 1997 “stupid network” paper meant he was to go. He left once he was vested, and he left to sell this idea.

This was a reinvention of some MIT ideas by Jerome Saltzer, David Clark, and David Reed (see the van Schewick dissertation for versions of stupid nets). The notion is that network function can be done best by letting the application complete the intent of the transmission. E.g., rather than checking data within the network, let the application manage the problem of data integrity - it will have to anyway.
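
A minimal sketch of that integrity example (my own, not from the talk): the application verifies its own checksum when the data arrives, because only the endpoints can know whether the transfer as a whole succeeded - whatever checking the network did hop by hop.

```python
import hashlib

def send(payload: bytes):
    """Sender: ship the payload plus an application-level digest."""
    return payload, hashlib.sha256(payload).hexdigest()

def receive(payload: bytes, digest: str) -> bytes:
    """Receiver: re-verify end to end, regardless of any per-hop checks."""
    if hashlib.sha256(payload).hexdigest() != digest:
        raise IOError("end-to-end integrity check failed; ask for a retransmit")
    return payload

data, digest = send(b"some file contents")
# ... data crosses a network that may or may not checksum each hop ...
assert receive(data, digest) == b"some file contents"
```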

This evolves into the notion of a preferred design - a bias in the design of the network. To the extent possible, make choices that put functionality at the edge of the network, rather than within the network. Not an explicit rule, but a working design argument.

Implications far beyond the architecture of the machines. The original notion of the internet IP protocols is that it’s as simple as possible.

First, the technical consequences.

  1. Flexibility in the way that the network develops, in part because of the following:

  2. No coordination among network users is needed to try something new (e.g., voice over IP: just take sound, digitize, packetize, drop it into the net, reverse the process - sound; a toy sketch follows this list) - innovation without coordination.
  3. Fast evolution of network applications (e.g., gopher: from 1991 to 1993 it took off - massive growth; in 1993 the first browser is released and UMn decides to charge money for gopher; the gopher dies - “dead gophers anywhere” - no network administrator intervention; demand and innovation took care of it) (gopher manifesto: http://www.scn.org/~bkarger/gopher-manifesto). The network cannot defend itself.
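
The voice-over-IP example in item 2 can be made concrete with a toy sketch (mine, using only Python’s standard socket module; a real system would add a codec, timestamps, and loss handling):

```python
import socket

# Toy "VoIP" path: digitize (fake samples), packetize, drop into the net,
# reverse the process at the far end -- no coordination with the network.
SAMPLES = bytes(range(256)) * 4           # stand-in for digitized sound
PACKET = 256

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                 # receiver at the other edge
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq, off in enumerate(range(0, len(SAMPLES), PACKET)):
    # a sequence number lets the receiver reassemble in order
    tx.sendto(seq.to_bytes(4, "big") + SAMPLES[off:off + PACKET], addr)

chunks = {}
for _ in range(len(SAMPLES) // PACKET):
    pkt, _src = rx.recvfrom(2048)
    chunks[int.from_bytes(pkt[:4], "big")] = pkt[4:]
rebuilt = b"".join(chunks[i] for i in sorted(chunks))
assert rebuilt == SAMPLES                 # sound comes back out at the edge
```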

Competitive consequences:

  1. Maximizes competition: We start with the commons; a resource that everyone has access to, priced or not. All, however, are free in the sense that no one has proprietary control over access to that resource (e.g., language is a commons). Garrett Hardin’s “The Tragedy of the Commons” poisons the concept of a commons in most of the US educational process.

    Lessig’s refutation of Hardin: the tragedy requires a rivalrous resource (if I use it, you can’t). Not all resources are rivalrous (e.g., ideas, language). Language, for example, actually becomes MORE valuable as more use it - a “comedy” of the commons. So, is the resource one that invites a tragedy, or not? Is this a question of tangible vs. intangible assets?

    L: economists keep seeing the commons aspect of the internet, and assuming it will die (Larry fails/elects to consider the network economists, of course).

    Larry proposes an “innovation commons,” as a product of the end to end architecture. Everyone has an equal right to innovate in this space. The need is to maintain the commons, although there are many efforts to break it, generally by introducing property rights, which Larry asserts is only appropriate for rivalrous goods - or, more carefully, property is only appropriate when the benefits of using property exceed the costs of the system

    Competitive power to innovate is maximized in this space, therefore, because no one is locked out.

  2. Minimizes strategic threat: Strategic behavior, in the law, is behavior that undermines the intent of the competitive marketplace. One such strategy is defensive monopolization - the core of the government’s case against Microsoft.

    (The case: Netscape and Java could possibly change competition for applications on the PC platform. With Netscape/Java, applications can be written once, run everywhere - not technically successful yet, it appears. The case asserts that Microsoft decided to attack this strategy by displacing Netscape with something MS controls, like IE. Thus, MS can protect itself from this insidious plan by closing the platform to competition in applications.)

    The US courts found that defensive monopolization is certainly illegal. End-to-end takes this kind of protection out of the game - the network cannot protect itself from innovation.

    This lowers the cost of innovation - barriers to entry go away, so the cost of entry falls.

  3. Consumer financed growth: If you think like a utility company and you think about innovation, your notion of innovation is how expensive is it to deploy this innovation, and how much will you benefit from doing so? However, some of the most rapid innovation (e.g., the internet) takes place in domains where the consumers invest in the deployment of new technology. (Note: always using the internet as the example of rapid innovation is a bit of intellectual monoculture that weakens this discussion).

    Consider 3G v. 802.11. 3G was conceived in the old model of innovation deployment; invest and make the 5 year plan. Now, it’s obsolete at the point of deployment. 802.11 is much faster, cheaper and better for at least some things. The market pull for 802.11, operating in unlicensed space, led to rapid deployment of innovation.

    If this is possible with just this small innovation commons in spectrum, what might happen if there were more space within which to work?

So, “e2e is heaven.” But the pessimism - we’re “on our way to hell.”

What is happening is that the e2e layer is being pressured by the owners of the physical layer and the content layer. Corrupting the core:

  1. Policy-based routing: a layering on of a new technology that allows the physical layer to treat certain packets differently than others (“All pigs are equal, but some pigs are more equal than others”) - a toy sketch of the mechanism follows this list.

    Xbox and cable as an example. Microsoft is now favoring the e2e network: MS wants to use the Xbox to allow gameplaying on the network, and cable companies want to permit (for a fee) gaming on their networks. So, the cable companies want to extract some rent.

  2. Content begins to eat the conduit: Consider media concentration - one company controls/coordinates messages, stifling independent voices. Diller attacked it because of the corruption of the message. Turner attacked it because he could never have competed in this space today.

    The FCC response is that the Internet will solve this problem - it’s an independent source. Of course, this is true only to the degree that the internet remains e2e. But if companies can influence the architecture so that this is no longer true, then the internet solves nothing.

    Does the FCC act to keep the internet e2e? No - the FCC says that businesses should be allowed to innovate in whatever way they want in the internet space - including changing the architecture as they see fit.
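
Here is the toy sketch of policy-based routing promised above (mine; the service names and port numbers are invented) - the core stops being neutral the moment a policy function orders the queue:

```python
import heapq

# Toy policy-based router: packets are queued by a policy that favors
# "partner" traffic instead of first-come-first-served.
PARTNER_PORTS = {3074}    # invented: a gaming service that paid the carrier

def priority(packet):
    return 0 if packet["port"] in PARTNER_PORTS else 1   # 0 is served first

arrivals = [
    {"port": 80,   "data": "ordinary web page"},
    {"port": 3074, "data": "partner game traffic"},
    {"port": 5060, "data": "competitor's VoIP call"},
]
queue = [(priority(p), order, p) for order, p in enumerate(arrivals)]
heapq.heapify(queue)
while queue:
    _, _, pkt = heapq.heappop(queue)
    print("forwarding:", pkt["data"])    # partner traffic jumps the line
```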

Possible solutions: maintain e2e like the electric power grid, rather than turning it into the cable TV network, which is allowed to discriminate in service based on fees or worse. Three debates:

  1. Physical layer - Open access: ISPs shouldn’t be able to block participation in the network hardware. Ensure competition among providers. It failed in the US, but is successful in Japan (100 megabits/sec for $50/month in Japan). NTT was not facing competition, so they did what they were told. In the US, where the Baby Bells were competing to exist, they barely complied and fought all the way.

  2. A logical layer push for regulating a “neutral network.” FCC regulates the net to make sure that pricing discrimination won’t take place among all net participants.

  3. “Free culture” - the power of the content layer to behave strategically - copyright, DRM, CBDTPA, etc.

So, the layers are all represented in the issues that we face, and this framework leads us to a set of policy discussions that we have to consider with care. Changes at any of these layers, leading to lockup, will finish off the end-to-end objectives/benefits.

Do any governments get this? So far, the basic claim is that there is “no need” to preserve this - the “market will take care of it(self).” So, end to end is at risk….

Q&A

  1. Doesn’t the notion of the innovation commons ignore the effects of scale upon innovation? Aren’t there some innovations that require scale that only companies can supply?

    Larry agrees that these firms certainly had great innovators, but there seems always to have been a conflict within these firms between the innovators and the business divisions. Moreover, Larry argues that complex systems should be allowed to self organize, rather than be managed. (This is an interesting hypothesis; and it’s something that Larry defends, although he conflates self-organization with simplicity, I think.)

  2. What happens to the notion of property? Investment was made, either in monies or effort, and the sacredness of property influences all of this discussion.

    Larry argues that the notion of property emerges out of a social need, rather than an absolute right. What has happened has been the loss of the idea that property exists to satisfy certain social needs. What has happened is that we have the cart before the horse. There are many assets that we elect not to “propertize,” and we have to be careful about what we elect to call property.

    In the Eldred case, there was a brief signed by 17 economists, including several Nobel laureates, who asserted that property as a concept requires careful consideration of what the property is expected to accomplish for society. It’s not just efficiency - there are other social objectives that should be served - and a balance has to be struck.


2003 July 1 - Yochai Benkler [12:12 pm]

(entry last updated: 2003-07-01 13:41:22)

(Hubris, I know, but I’m going to try it again today….) (Donna’s links)

(Yochai’s now at Yale, according to Larry’s intro)

Title: The Technical is Political: Access to an open information environment. An extension of the architecture of communication and its implications for the political economy of communication.

  1. Models of Communication - Who gets to say what to whom, and who gets to control that exchange of information. At the extremes we have the broadcast model (special presenters speaking from the core to a receptive periphery) and the internet model (multiple presenters, throughout the system, distributed capital) - with the telephone in between, where content is at the edges, but the distribution follows specific sorts of rules mandated by the capital owners.

    Yochai then puts up a painful graphic, mostly to depict complexity in the modalities of communication. The chain is (a) noise/signal conversion, (b) intelligence production, (c) message production, (d) transmission and (e) reception. Yochai shows that the number of these elements that “belong” to carriers vs. users varies widely along this spectrum of communication models.

  2. The stakes of architecture - at stake is democracy. In particular political democracy. Jonas of IDT is quoted as pointing out that he wants to control content by controlling the pipe. (Note that not everyone agrees that this happens). Versus the internet as a domain where everyone can be a pamphleteer - access makes it possible to change/form opinion from the bottom up.

    Another stake: personal autonomy. Because you can work the architecture, you can manipulate quality of service to move people from one source of information to another - essentially offer preferential capability to make one source more attractive than another. (This page takes forever to load - let’s try another site that is faster.)

    Another stake: innovation. I don’t need permission to innovate in an open network, but I do need it in the controlled networks.

    Another stake: efficiency - in the economic sense that monopoly is less efficient in the marketplace than competition. Also, the efficiency of flexibility - by optimizing a network to achieve one task, it’s now suboptimal for another task. So, deadweight losses as we wait for the network to change.

  3. The state of play at the physical layer - one way to think about the policy problems is to think of the layered model of communication. Yochai chooses three layers to construct his policy discussion: the content layer, the logical layer, the physical layer. (Shades of the class we’re putting together for CMI! This is the working model employed in many communications policy research activities at MIT.) (Yochai points out that the logical layer is also the standards layer - the other two layers should be self-evident; the wires at one end and the content at the other.)

    At the physical layer in the transition to broadband, we have a little wireless, a little more satellite, a little more DSL and a lot of cable - on Yochai’s plot, it looks like about 70% of the bar (not to scale, according to Yochai). Statistics from the FCC’s Third 706 Report (Feb 2002) are cited. Looks like growth, especially in cable and DSL. But maybe not.

    Let’s look at homes - now it’s entirely cable and DSL; all the other wired methods (T1, etc.) are used by bigger institutions. If you go to SOHO, cable is dominant, followed by the Bells’ DSL, then a sliver of other sources of broadband. Is this competition? Not necessarily, according to Yochai. Just the phone companies and the cable companies run the physical layer that runs into the home and small office.

    Historically, of course, communications has been considered to be a natural monopoly - in particular, multiple providers would be more costly than a single one, because of the capital costs of constructing parallel networks. Thus, we had monopolies, with regulators to make sure that the monopolist doesn’t exert its power to extract monopoly rents.

    In the 1990s, a move takes place to change telecommunications law to accommodate the new technologies of wired networks - cable TV and telephones. We need to get them to upgrade their networks to get them to provide broadband. How to do this - with competition or monopoly? Given that the regulators seem to have failed to save us from the inefficiencies of a regulated monopoly, the experiment is to try a little competition instead of an imperfectly regulated monopoly. (glossing of much of network economics here)

    The 1996 Telecommunications Act. Aggressive regulation to require sharing of bottlenecks in the network - to construct intra-modal competition in telephone companies. However, we won’t require cable to share the same way that we require telcos to share.

    The phone companies have fought this in the courts, and have managed to slow the degree to which the Bells have relinquished market to other participants. But the cable companies get a free ride, even though they have to deal with local requirements as cable is deployed. Local jurisdictions tried to set rules in these agreements that match the requirements that the telcos face. This too was fought in court, and the rulings have tended to distinguish cable from communication - thus constructing a loophole that keeps the cables out of this sharing regime.

    In the past year and a half, the requirements to share are being increasingly gutted by the FCC and the DC Circuit Court of Appeals. While there was a vision that there would be competitors on each wire, there are going to be two parallel networks - the telephones and the cables. The shift is to say that competition between the two modes (cable and telcos) is all that’s needed - we don’t need competition within the distribution mode - duopoly.

    So - cable will be controlled by one company in each location; DSL will be controlled by one company in each location; competition will be between the two modes (assuming you have two to choose from). Is duopoly enough?

    Q&A:

    1. A refinement of Yochai’s claim of efficiency in monopolies - it’s more efficient in that the cost to each user is as low as it can be, assuming the regulators do their jobs to keep the monopolist from extracting excess rent. Eventually, this becomes an explanation of natural monopoly - a consequence of the nature of the technologies available to supply the good.

    2. You’re talking about the cable and the telcos, what happened to the internet? Can’t that be regulated too?

      Yochai points out that the internet is not the physical layer - it’s the logical layer. So there’s something else we have to worry about.

  4. Issues

    Let’s talk about another possible physical layer - the open wireless network (the OWL network). Wi-Fi plus something: the plus is that the end-user devices have to route, as well as receive and transmit. An ad hoc infrastructure that can scale without necessarily relying upon substantial infrastructure - end-user needs drive the development of the last mile.

    As you look at the topology, a key point is that there have to be at least some members on the OWL who are also connected to the hardwire internet network.

    How to consider whether this is a good thing to do or not? Note first that there is no owner - the end users make the network by buying the hardware they need. Moreover, there is no license - just equipment that can be acquired which does not require a license.

    Some consequences: the end user equipment will probably be more expensive than it might be otherwise. Because the equipment does not require the user to subscribe to an ISP, the consumer will be willing to pay more for the equipment than they would for something that connects to leased network lines. Also a new notion of valuation is needed to assess the worth of wireless.

    Which brings us to open spectrum - (I’m only going to summarize, because this is pretty widely available online already). Interference in spectrum is partly an issue of the “dumbness” of the receivers. Licensing is set up to protect stupid radio devices, in this scheme.

    Cheap processing means that we can make smart receivers, so they can think harder about what the signals mean. Also, new theories, especially Shannon’s information theory (look up link). So, interference can be tolerated.

    Also, cooperative transmission routing can improve efficiency, by load balancing and by working to minimize transmission inside the range of other transceivers - again minimizing the effects of interference.

    A counterintuitive effect: the addition of users does not reduce the capacity of the network - rather, it increases the capacity - a new repeater in the network and a new cooperator/path opportunity (a toy sketch of this appears at the end of this section). And there are even more technological strategies out there - multiuser detection, spatial diversity, and sharing information about signal structure to ease filtering.

    Note: Much of this is still at the theoretical/model stage. Lots of things to learn to find out whether this will really work.

    So what does this do to displacement in the network? In elements of this technology, we reach a point where there is no displacement of the network resource by a particular communication - and if there is no displacement/loss of network function, then the marginal cost of the communication is $0 - and it should be priced accordingly, i.e., free.

    IOW, spectrum can only be valued once you think about the technology and the local deployment of the technology - meaning that markets in spectrum are possibly far too expensive for what they sell and introduce too much inefficiency into the market for communication.

    The transaction costs in this OWL network are stupendous; Yochai argues that this means there shouldn’t be a market. (But there are also arguments that would say that this means we need a property/market construct to reduce these transaction costs.)

    Yochai argues that there are open and closed pieces of the logical layer - TCP/IP open, MS O/Ses closed, Linux open. Trusted systems introduce a closed layer, as might the CBDTPA. Similarly, there are pieces of the content layer that are open or closed: copyright, KaZaA, censorship, free sharing.

    Yochai argues that we need an OWL or something like it to ensure that there is at least one path, from the physical layer to the content layer, for communication that cannot be blocked through ownership/permissions.
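
    Here is the toy sketch of that cooperative routing promised above (mine, not Yochai’s): every device doubles as a repeater, and traffic reaches the wired internet by hopping neighbor to neighbor toward any gateway node.

    ```python
    from collections import deque

    # Toy OWL: nodes are end-user radios; an edge means "within radio range".
    LINKS = {
        "a": {"b"}, "b": {"a", "c"}, "c": {"b", "gw"},
        "gw": {"c"},             # a member that is also on the wired internet
    }
    GATEWAYS = {"gw"}

    def route_to_internet(src):
        """Breadth-first search for the shortest multi-hop path to any gateway."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] in GATEWAYS:
                return path
            for nbr in LINKS[path[-1]] - seen:
                seen.add(nbr)
                queue.append(path + [nbr])
        raise ValueError("no member of this OWL reaches the wired internet")

    print(route_to_internet("a"))    # ['a', 'b', 'c', 'gw']
    # Adding a node "d" linked to both "a" and "gw" creates a second, shorter
    # path: a new user is a new repeater -- capacity, not just congestion.
    ```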

Q&A

  1. Where does the connection to the global internet fit into this? Doesn’t that still mean that there’s a potential block?

    I don’t know; it might work, but we are currently facing the last mile as the ugliest problem. So far, we don’t see abuses at the backbone, so we may just get lucky.

  2. How might the current providers block this development?

    One thing is the push to make spectrum property, thus prohibiting this technology from being legal. So far, the FCC seems to have made some substantial changes in the direction toward open spectrum, rather than away from it. Several windows in spectrum are being opened up, and we will see whether this really works.

    But there will definitely be regulatory tricks. Look at the Verizon activity in NYC, where their pay phones are going to be wireless access. This may lead to some interesting market plays, either giving Verizon ownership of the OWL (thus allowing them to make you one of their subscribers) or getting you used to having this sort of access, so you’ll start paying more for (Verizon) hardware that will give it to you in the form that you like most.

*Whew!*


2003 July 01 - AM Links [11:22 am]

(entry last updated: 2003-07-01 11:58:47)

  • This week’s Tangled Web from Billboard points out that REM is endorsing the sharing of MP3s of concert performances, etc, a la the Grateful Dead.

  • A trend or just a blip: MP3.com Removes “High-Bandwidth” Streams. The Slashdot article wonders whether this is a consequence of the appearance of iTunes, or whether something else is afoot.

  • Wired News covers the Intel v. Hamidi decision, the 4-3 California Supreme Court decision that limits the degree to which trespass can be used to keep individuals from using company e-mail systems: Ex-Intel Coder Wins E-Mail Case. Here’s the NYTimes article: Intel Loses Decision in E-mail Case [pdf] (note the keyword in the NYTimes URL - with which side of the case do you think the Times’ sentiments lie?)

    Lawmeme: Hamidi wins!

  • Ed Felten joins Derek’s first and later postings in commenting on the AIMster decision: AIMster Loses

    I noted three interesting things in the opinion. First, the court seemed unimpressed with Aimster’s legal representation. At several points the opinion notes arguments that Aimster could have made but didn’t, or important factual questions on which Aimster failed to present any evidence. For example, Aimster apparently never presented evidence that its system is ever used for noninfringing purposes.

    Second, the opinion states, in a surprisingly offhand manner, that it’s illegal to fast-forward through the commercials when you’re replaying a taped TV show. “Commercial-skipping … amounted to creating an unauthorized derivative work … namely a commercial-free copy that would reduce the copyright owner’s income from his original program…”

    Finally, the opinion makes much of the fact that Aimster traffic uses end-to-end encryption so that the traffic cannot be observed by anybody, including Aimster itself. Why did Aimster choose a design that prevented Aimster itself from seeing the traffic? The opinion assumes that Aimster did this because it wanted to remain ignorant of the infringing nature of the traffic. That may well be the real reason for Aimster’s use of encryption.

    Lawmeme also has comments: Aimster Loses!

  • Blubster gets some Wired News ink this morning: Giving Sharers Ears Without Faces, and offers up a little more detail on the how of its workings

    Blubster developer Pablo Soto of Madrid said his music-swapping service relaunched today as a secure, decentralized system providing users with anonymous accounts.

    The MP2P network (short for Manolito Peer-to-Peer) on which Blubster is based consists of more than 200,000 users sharing over 52 million files, according to Soto. The update is also said to include a new, streamlined file-distribution method that disassociates transfers from specific users.

    [...] "The biggest privacy weakness of our previous version was the ability to query a list of shared songs for any user — now that can be disabled," Soto said. "It may be possible to gather IP addresses from the network, but not data about what content specific users are sharing."

    Blubster uses an Internet data transfer protocol known as UDP for content look-up and transfer negotiating. Unlike the TCP protocol that serves this function in other file-sharing networks, UDP is a so-called “connectionless” method that doesn’t reveal links between nodes or acknowledge transmission in an identifiable manner.
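
    The connectionless point is visible right at the socket level. A minimal sketch (mine, using Python’s standard socket module - nothing here is Blubster’s actual MP2P code, and the peer address is made up): TCP binds two endpoints into an explicit connection before data flows, while a UDP datagram is simply addressed and launched.

    ```python
    import socket

    # TCP: a stateful connection ties the two endpoints together before any
    # data moves, so each side necessarily knows whom it is linked to.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))        # handshake establishes the link
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    tcp.close()

    # UDP: no handshake, no connection state -- a datagram is addressed and
    # sent, which is the "connectionless" property described above.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"search query", ("192.0.2.1", 41170))   # made-up peer
    udp.close()
    ```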


2003 June 30 - post ILaw [12:12 am]

(entry last updated: 2003-07-01 01:15:27)

(Note that I will be doing this multiple postings/day thing on the assumption that I can hack the innards of Personal Weblog enough to give me breaks by day in the output - but that isn’t going to happen this week unless I have a really down day)

A good day - if for no other reason than to see that Larry can be optimistic. His closing comments in the last session today, where he reinforced Terry Fisher’s suggestion that blogging’s benefits lie not in the answers blogs might generate, but in the fact that they are evidence of participation by individuals who might otherwise be unconnected to the political process, were a stunner to me. Last year he purposely adopted the negativist perspective (and he promises it for the rest of the week as well), so this was a welcome change. Below, a few pieces of www news that I didn’t get to today.

  • BBC: RealNetworks bags Vodafone deal, over Microsoft.

  • Larry points to the latest broadside from Miriam Rainsford: Musicians Say No To Persecution And Prosecution Of Music Lovers

  • Trespass to chattels; trespass to chattels; trespass to chattels! Now that I’ve gotten that out of my system, you can read about the latest developments in Intel v. Hamidi. Slashdot discussion: Court Rejects Intel Electronic Trespass Charge. The California Supreme Court’s ruling

    Intel’s claim fails not because e-mail transmitted through the Internet enjoys unique immunity, but because the trespass to chattels tort - unlike the causes of action just mentioned - may not, in California, be proved without evidence of an injury to the plaintiff’s personal property or legal interest therein.

    Note that there are two dissenting opinions, and a concurring one. Here’s an excerpt from one of the dissents:

    In my view, the repeated transmission of bulk e-mails by appellant Kourosh Kenneth Hamidi (Hamidi) to the employees of Intel Corporation (Intel) on its proprietary confidential e-mail lists, despite Intel’s demand that he cease such activities, constituted an actionable trespass to chattels. The majority fail to distinguish open communication in the public “commons” of the Internet from unauthorized intermeddling on a private, proprietary intranet. Hamidi is not communicating in the equivalent of a town square or of an unsolicited “junk” mailing through the United States Postal Service. His action, in crossing from the public Internet into a private intranet, is more like intruding into a private office mailroom, commandeering the mail cart, and dropping off unwanted broadsides on 30,000 desks. Because Intel’s security measures have been circumvented by Hamidi, the majority leave Intel, which has exercised all reasonable self-help efforts, with no recourse unless he causes a malfunction or systems “crash.”

  • Slashdot on the EFF’s P2P promotion campaign: EFF Ad Campaign On File Swapping

  • Slashdot on the NYTimes article on TCPA/Palladium or whatever Microsoft wants to call it this week: A Critical Look at Trusted Computing (see the NYTimes URL/pdf in this Furdlog entry)

  • Well, that didn’t take long: enter Blubster. From their press release:

    Optisoft S.L., provider of popular peer-to-peer program Blubster, today announced the launch of Blubster 2.5 in the wake of the latest litigious efforts by the RIAA and MPAA to erode consumer privacy and monopolize control of the P2P entertainment market. As Verizon has been handed a court decision forcing the company to reveal the identity of Internet subscribers accused of music piracy, Blubster has re-launched with a new secure, decentralized, self-assembling network that provides users with private, anonymous accounts. (www.blubster.com).

    [...]

    "If other means of delivering media files could be compared to a postal system with an identifiable sender and receiver, then Blubster’s proprietary MP2P network could be likened to throwing a bottled message into the vast ocean,” said Pablo Soto. “The message may get to a destination, but no one knows the full path of its journey nor what is in each bottle."

  • BMG is ready to dive into DRM for CDs: BMG tinkers with CD copy controls - more customers for SunnComm.

    The Bertelsmann AG division, which produces contemporary artists including Norah Jones, Avril Lavigne and No Doubt, said it plans to begin selling CDs in the United States protected with SunnComm’s MediaMax CD-3 product.

    See the BMG press release here, which includes this key point:

    MediaMax CD-3 is a collection of technologies that provides copy management for CDs and DVDs while simultaneously enhancing and expanding the consumer’s experience. MediaMax CD-3 is tightly integrated with Microsoft’s (NASDAQ:MSFT) Windows Media Platform and the Digital Rights Management capabilities associated with the latest Windows Media Platforms. The company licenses and uses Windows Media Audio DRM capabilities from Microsoft as the security feature for these files.

    See http://www.microsoft.com/presspass/press/2003/jan03/01-20SessionToolkitPR.asp

  • Salon’s Farhad Manjoo writes about putting academic research online rather than in academic journals: The free research movement

    On June 26, Rep. Martin Sabo, a Minnesota Democrat, introduced the Public Access to Science Act, a bill intended to rectify the situation. The act would amend U.S. copyright law to deny copyright protection to all “scientific work substantially funded by the federal government.” Since the U.S. government is the world’s largest sponsor of scientific research — the White House asked for more than $57 billion for science in 2003 — Sabo’s bill would have profound implications for scientific publishing. If passed, it would instantly put a huge swath of newly published research into the public domain, upending the journals’ pay-for-access business models.

  • As Terry Fisher pointed out today, AIMster got slapped with an injunction today. Recording Industry v John Deep includes this closing paragraph:

    Copyright law and the principles of equitable relief are quite complicated enough without the superimposition of First Amendment case law on them; and we have been told recently by the Supreme Court not only that “copyright law contains built-in First Amendment accommodations” but also that, in any event, the First Amendment “bears less heavily when speakers assert the right to make other people’s speeches.” Eldred v. Ashcroft, 123 S. Ct. 769, 788-89 (2003). Or, we add, to copy, or enable the copying of, other people’s music.

    A second read suggests that Posner didn’t really like anyone’s claims on either side, but felt that Deep failed to suggest there were any noninfringing uses of the tool, and that the recording industry faced irreparable harm, so he sustained the injunction. I’m sure I’ll hear more about it this week. See also the CNet coverage: Court: Anonymous P2P no defense.

    More: Derek on the Madster/Aimster ruling

