
March 18, 2006

Local "keep the bad stuff out" problem

We locally had an annoying pollution experience yesterday. Our research group at UM runs an ICD wiki for sharing our research, announcements, &c. Access is pretty open, and sure enough, after about a year in operation, a splogger found us. He or she created an account and added spam links to about 40 pages in the wiki (invisible to us but visible to search engines, to increase the link rankings for the underlying spam sites). One of our grad students, Rick Wash, spent hours cleaning things up for us. What's the solution?...

We haven't thought about any incentive schemes to protect our wiki yet (time to start thinking!). The obvious technological solution is to limit editing access to accounts authorized by a moderator, but that is not a great solution: we have over 120 new master's students entering the program every year, and we want them to be able to participate, but we don't have an automated system in place to give them accounts, so either they get to create their own, or we have to install some more overhead.

We could use the human solution, as Wikipedia does: let anyone in, but keep a close eye on changes, clean them up and disable abusing accounts -- what Rick did this time. But we don't have a lot of hard-core users, and that could become quite a large burden on the few who have wiki-admin skills.

Just a mildly painful reminder that there's a reason for us to be researching these problems!

Posted by jmm at 12:37 PM | Comments (0)

March 17, 2006

ICD with a twist

In a comment on Felten's blog article about false congestion as an incentive to send less traffic, Jim Horning reminded me of a classic article by Coffman and Kleinrock about incentives in scheduling computer resources:

E. G. Coffman and L. Kleinrock. “Computer scheduling methods and their countermeasures.” In AFIPS Conference Proceedings, volume 32, pages 11–21, 1968.

Coffman and Kleinrock argue that users will adapt to any scheduling rule implemented. Therefore, they argue, an incentive-compatible designer should decide which new behavior she wants users to adopt, and then implement the scheduling rule to which that behavior is the best countermeasure. That's a very apt and clever way to express the principle of incentive-centered design!
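A toy illustration of their point (the job lengths here are invented): under non-preemptive shortest-job-first scheduling, splitting a long job into short pieces is the user's best countermeasure.

```python
# Toy sketch of the Coffman-Kleinrock point (hypothetical job lengths):
# under shortest-job-first, a user can finish sooner by splitting one
# long job into several short ones.

def completion_time(my_jobs, other_jobs):
    """Time at which the last of my_jobs finishes under non-preemptive SJF."""
    order = sorted([(t, "me") for t in my_jobs] +
                   [(t, "other") for t in other_jobs])
    clock = my_last = 0
    for length, owner in order:
        clock += length
        if owner == "me":
            my_last = clock
    return my_last

others = [5, 5, 5]                           # everyone else's queued jobs
whole = completion_time([9], others)         # one 9-unit job finishes at t=24
split = completion_time([3, 3, 3], others)   # three 3-unit pieces finish at t=9
```

The scheduler meant to favor short jobs, and users oblige by making all their jobs short; a designer anticipating this would pick the rule whose best countermeasure is the behavior she actually wants.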

Posted by jmm at 01:38 AM

Creating false congestion to selectively discourage sending on the Internet?

My first significant foray into research on incentive-centered design was my work with Hal Varian, Liam Murphy and others in the early-to-mid 1990s on incentives for congestion control in the Internet. Ed Felten, in his popular "Freedom to Tinker" blog, has brought up one of the key issues we in the networking community debated back then -- and the issue is still valid today!

The issue is this: the primary mechanism for congestion control in the Internet (we're talking about the transport of packets here) is technological: a standard protocol called "slowstart" (developed by Van Jacobson at LBL). All compliant TCP stacks implement slowstart. In simple terms, when a server or user starts sending packets (a file, a streaming video, whatever), the flow rate starts slowly, and slowstart checks how long it takes for ACK (acknowledgment) packets to return from the other end. If the return time is within the acceptable range, the flow rate ramps up. But if congestion at any point is high enough to delay the ACKs beyond the threshold, then slowstart immediately drops the sending rate (typically cutting it in half, as I recall), which is known as "backing off".
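In toy form, the adaptation looks something like this (an illustration of the flavor of slowstart only, not the actual TCP algorithm; the threshold and ACK delays are invented numbers):

```python
# Toy sketch of slowstart-flavored rate adaptation (illustration only, not
# real TCP): ramp the sending rate up while ACKs return quickly, and cut
# it in half when the measured delay crosses a threshold.

def adapt_rate(rate, ack_delay_ms, threshold_ms=100.0):
    """Return the next sending rate given the latest ACK delay."""
    if ack_delay_ms > threshold_ms:
        return max(1.0, rate / 2)   # congestion inferred: back off by half
    return rate + 1.0               # no congestion: keep ramping up

rate = 1.0
history = []
for delay in [20, 25, 30, 150, 40, 40, 160, 30]:  # hypothetical delays (ms)
    rate = adapt_rate(rate, delay)
    history.append(rate)
# the rate climbs while ACKs are fast, then is halved at each delayed ACK
```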

If TCP software all implements this protocol, then senders are effectively playing a cooperative game, backing off to reduce traffic when there is congestion.

One problem with this: What if some users use a version of TCP that turns off slowstart? They would then hog all available bandwidth, while everyone else was backing off to give them room. Similarly, there is another protocol on the Internet called UDP that does not use slowstart: it is supposed to be available for a small number of special purposes, but from time to time people have written software that uses UDP to transmit regular traffic, thus hogging bandwidth.
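A similarly hypothetical two-sender sketch (the capacity and rates are made-up numbers) shows why a non-compliant stack is so damaging:

```python
# Hypothetical two-sender link: a compliant sender halves its rate whenever
# the link is congested, while a cheater with backoff disabled keeps
# ramping until it fills the link.

CAPACITY = 10.0

def step(compliant, cheater):
    congested = compliant + cheater > CAPACITY
    compliant = compliant / 2 if congested else compliant + 1  # backs off
    cheater = min(cheater + 1, CAPACITY)                       # never backs off
    return compliant, cheater

compliant, cheater = 1.0, 1.0
for _ in range(20):
    compliant, cheater = step(compliant, cheater)
# the cheater ends up pinned at link capacity, while the compliant sender
# is squeezed toward zero by its own good behavior
```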

Felten raises a variation on this problem: what if a network service provider started discriminating against senders by sending false congestion signals to them (actually, delaying or blocking the ACK packets so the senders think the network is congested)? Those users would back off (if they were slowstart-compliant), bearing more than their share of the cost of congestion. This might, for example, be one step towards implementing a non-neutral net charging scheme, which has been much in the news recently: "pay up or we'll cause you to send less traffic."

Of course, as Felten also points out, if network service providers started doing this, the willingness of senders to abide by the entirely voluntary slowstart protocol might quickly erode, and then everyone would be worse off as congestion skyrocketed.

(Thanks to Rick Wash for pointing out Felten's article.)

Posted by jmm at 01:30 AM | Comments (0)

Esther Dyson on charging for sending

Esther Dyson has a column in today's New York Times (registration required), "You've Got Goodmail," summarizing fairly well the economic incentives arguments for supporting experimentation with systems that charge for sending "priority" email, and also the market-competition argument for why this will ultimately benefit users rather than create monopoly profits. She also discusses a variant which is, essentially, the same as the "Attention Bond Mechanism" proposed by our own Loder, van Alstyne and Wash.

Posted by jmm at 01:05 AM

March 15, 2006

i-Newswire is out, that's who

A couple of days after Rich Wiggins posted his blog story about the ability to place false news stories in Google News, CNN has picked up the story, and Google has now dropped i-Newswire as a source for Google News.

i-Newswire was a user-contributed content (UCC) service, and thus subject to the pollution problem I've been discussing (link and link). More precisely, i-Newswire is an un-moderated or un-edited UCC service (all press-release newswires rely on user-contributed content, but most employ editors to decide whether press releases are legitimate).

Google News, on the other hand, is not a UCC, and is edited: there is central control over which content feeds are included. So, in a crude way, Google can handle the pollution problem: if pollution is coming in through channel A, turn channel A off. Google News may be a case where a technological pollution prevention approach will work pretty well, obviating the need for an incentive system.

Posted by jmm at 10:25 PM | Comments (0)

March 14, 2006

Not even gangsters are safe from spam

Daren Briscoe reported in Newsweek that gangs are using the web to recruit members and communicate. But gosh, they have to deal with spam too:

But the Web has also given rival gangs a new, less violent way to settle scores—flooding each other's sites with junk e-mail. Stalker says he spends hours every week deleting threatening or insulting messages from other gangs from his Web site. Not even a gangster is safe from spam.

I wonder: if you're a gangster, maybe you have a somewhat wider range of incentives you can use to discourage spammers?

Posted by jmm at 09:15 AM | Comments (0)

Digg, Google News...User-contributed "news"

I'm developing an interest in the phenomenon of user-contributed content, and the two fundamental incentives problems that it faces: pollution (keeping the bad stuff out) and the private provision of public goods (inducing contributions of the good stuff). User-contributed "news" is one example to explore.

Digg.com is one currently hot user-contributed news site:

Digg is a technology news website that combines social bookmarking, blogging, RSS, and non-hierarchical editorial control. With digg, users submit stories for review, but rather than allow an editor to decide which stories go on the homepage, the users do.

Slashdot of course is the grande dame. Digg and Slashdot both rely on multiple techniques of community moderation to try to maintain the quality of content (keep out the pollution). For example, proposed stories for Digg are not promoted to the homepage until they have sufficient support from multiple users; and users can report bad entries (apparently to a team of human editors).
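The promotion rule can be sketched roughly like this (the threshold and the class design are invented for illustration, not Digg's actual implementation):

```python
# Rough sketch of threshold-based community promotion (invented threshold;
# not Digg's real system): a story reaches the homepage only after enough
# distinct users vote for it, and anyone can flag it for human editors.

PROMOTE_THRESHOLD = 5

class Story:
    def __init__(self, title):
        self.title = title
        self.voters = set()   # distinct user ids: repeat votes don't count
        self.flags = set()    # queue for the human editor team

    def vote(self, user_id):
        self.voters.add(user_id)

    def flag(self, user_id):
        self.flags.add(user_id)

    @property
    def promoted(self):
        return len(self.voters) >= PROMOTE_THRESHOLD

story = Story("Example submission")
for uid in ["a", "b", "c", "a"]:   # user "a" voting twice still counts once
    story.vote(uid)
# with only 3 distinct supporters, the story stays off the homepage
```

Requiring distinct supporters is itself an anti-pollution measure: a single spammer can't promote their own story by voting repeatedly.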

How effective (and socially costly) are these community moderation techniques? By now we've all heard about Wikipedia founder Jimmy Wales manipulating his own Wikipedia entry, which led to publicity about multiple members of Congress, etc., who have been doing the same thing.

And even if a site has an efficient moderation system to filter out pollution, there is still the problem of inducing people to volunteer time and effort to contribute to the public good by creating valuable content. Obviously, this can happen (see Slashdot, Wikipedia). But suppose you are designing a new user-contributed content service: how are you going to create a community of users, and how are you going to induce them to donate (high quality) content?

Apparently we can now start to count Google News as a site for user-contributed news.

Posted by jmm at 08:18 AM | Comments (0)

Spamming Google News: Who's in, who's out?

An old acquaintance of mine, Rich Wiggins, recently blogged about his discovery of how easy it is to insert content in Google News. He discovered this when he noticed regular press releases published in Google News that were a front for the musings of self-proclaimed "2008 Presidential contender" Daniel Imperato. Who?

Wiggins figured out how Imperato did it, and tested the method by publishing a press release (screen shot) about his thoughts while celebrating his 50th birthday in Florida. Sure enough, you can find this item by searching on "Rich Wiggins" in Google News.

This is (for now) a fun example of one of the two fundamental incentives problems for important and fast-growing phenomenon of user-contributed content:


  1. How to keep the undesirable stuff out?
  2. How to induce people to contribute desirable stuff?

The first we can call the pollution problem, the second the private provision of public goods problem. Though Wiggins's example is funny, will we soon find Google News polluted beyond usefulness? (The decline of Usenet was largely due to spam pollution.)

Blogs, of course, are a major example of user-contributed content. At first glance, they don't suffer as much from the first problem: readers know that blogs are personal, unvetted opinion pages, and so they don't blindly rely on what is posted as truth. (Or do they?) But then there's the problem of splogging, which isn't really a problem for blogs as much as for search engines that are being tricked into directing searchers to fake blog pages that are in fact spam advertisements (a commercial variant on the older practice of Google bombing).

There is a lengthy and informative Wikipedia article that discusses the wide variety of pollution techniques (spamming) that have been developed for many different settings (besides email and blogs, also instant messaging, cell phones, online games, wikis, etc.), with an index to a family of detailed articles on each subtype.

Posted by jmm at 07:44 AM | Comments (0)

March 04, 2006

AOL compromises on sender fees

Nonprofits say AOL "backed down", but actually it is holding its ground: AOL announced that it would provide the higher service class to qualified (non-spamming) nonprofits at no cost. So AOL is holding the line on charging commercial senders, but has made an exception. I call it a compromise: as I noted before, if nonprofit mailings are so low-value that they're not worth a quarter-cent stamp, then maybe the nonprofits should send less -- but at least AOL is going forward with the experiment for commercial senders.

Related entries: "Yahoo! and AOL start charging" and "Backlash to Sender-Pays".

Posted by jmm at 05:07 PM | Comments (0)

Commercializing P2P

Cringely writes about moves toward commercial development and use of peer-to-peer file sharing networks for distribution. The demand-side motivator: mass-market video distribution (think "American Idol" and "Desperate Housewives"). What will make the supply side work: why will users donate their upstream bandwidth to help commercial content distributors?

Most interesting for ICD is Peer Impact, which pays users to participate: part of gross revenues go to nodes that share (provide upload bandwidth) and part to users who do marketing (e.g., a fan site with a link to content). Grid Networks has announced that it will be upgrading its system to allow for a variety of incentives.

Found via Slashdot

Posted by jmm at 01:10 PM | Comments (0)

Backlash to sender-pays email incentives

Not too surprisingly, as soon as AOL and Yahoo! announced they were implementing an (optional) sender-pays email system, there was a huge uproar. So it has ever been since the Internet grew into a public net (out of its early days as a research and military net): anything vaguely smacking of converting real, user-suffered costs into monetary form is reviled as "the end of the Internet as we know it". In this case, the end of the spam-encrusted, diminishing-reliability, low-rent Internet as we know it.

The Electronic Frontier Foundation (EFF) is run by smart people with widespread support, and they've done a lot of good for the Internet over the years, so their rant on this is worth reading.

Now the EFF is organizing a coalition of non-profits to challenge AOL. (See EFFector vol. 19, no. 9, 3 March 2006 -- not online yet but it will be here soon.)

"Over fifty groups with nearly 15 million members joined with us, including Free Press, the U.S. Humane Society, the Gun Owners of America, MoveOn.org, RightMarch.com, the AFL-CIO, and Computer Professionals for Social Responsibility."

My first reaction is: That's right, AOL doesn't "own" user inboxes, it merely provides a commercial service to maintain them. And so, if users don't like AOL's attempt to provide more reliable, low-spam email service, they can switch to another provider. There's a pretty active market for email services: why assume this market is broken? Indeed, there are some pretty smart people at EFF, and they understand this:

"One might trust that the market will eventually sort this out: rewarding ISPs that do not sell access to their users' inboxes and that work to improve deliverability for everyone, not just senders who pay. But the market speaks slowly -- in the meantime, this system will push small speakers into a choice of paying or not being sure that their messages are getting through to their members."

I don't know if this particular sender-pays mechanism is going to work well, but I think we should be encouraging experiments, and that this is one of the most promising in a while. We know filtering will never provide a complete solution. Why not try an incentive-based solution?

In case you don't know the details, the AOL and Yahoo! systems do not require any senders to pay, though it's hard to tell that from the backlash. Rather, they allow senders to pay (one-quarter of a penny per email) to obtain a higher "class" of service (like the difference between first class and bulk class snail mail), which will not be pre-filtered by the email service provider (ESP). Yes, eventually that price could increase (my guess is it has to increase if it is actually going to discourage many spammers!), and yes, eventually AOL and Yahoo! could reduce the quality of the lower service class (say, by filtering it more aggressively so more "good" mail is siphoned off to spam folders). But over that same time frame users can switch to other ESPs if they don't like the service.

There is a bit of disingenuousness in the non-profit organization protests. They want to "speak" at low cost: that is, they want to send spam! They are concerned that they won't be able to afford first-class service for the millions of emails they send out. For example, conservative political organizer RightMarch.com sends out 2 to 3 million email messages a week, and joined the EFF coalition because they are worried "we might not be able to afford sending them" (link). (In fairness, many of these organizations may be sending only opt-in, non-spam bulk email, but if the value of the mails they send is less than a quarter-penny each, is there a great social loss if they send fewer?)
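The arithmetic is easy enough to sketch, using the quarter-penny price and RightMarch.com's stated volume:

```python
# Back-of-the-envelope arithmetic with the numbers from the post: a quarter
# of a penny per message, and RightMarch.com's 2 to 3 million emails a week.

PRICE_PER_EMAIL = 0.0025                    # dollars (a quarter of a cent)
weekly_low = 2_000_000 * PRICE_PER_EMAIL    # about $5,000 per week
weekly_high = 3_000_000 * PRICE_PER_EMAIL   # about $7,500 per week
annual_high = weekly_high * 52              # roughly $390,000 per year
```

At that scale the stamp is real money for a nonprofit, which explains the protests; the economic question is whether each of those millions of messages is really worth even a quarter-cent to its recipient.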

Posted by jmm at 10:03 AM | Comments (0)

March 03, 2006

Incentivized social bookmarking

Why do people bother to put their bookmarks in del.icio.us? Why make the effort to use tags that are familiar to other users (do people make this effort very much)?

The usual first response I get when I ask why people contribute to del.icio.us is that it is a convenient place to do their private bookmarking (largely because it's portable, available from any connected machine), and the public benefits of their effort are just icing on the cake.

CollaborativeRank is a search engine that uses del.icio.us to weight the information returned, and generates those weights based on how expert different users are at finding new content others find interesting (based on how often later users copy the bookmarks they first identified). CollaborativeRank claims that it is providing an incentive for users to be timely and effective del.icio.us contributors:

"There is something missing from PageRank: a reward system for people who create hyperlinks to helpful or timely websites," said CollaborativeRank's developer, Amir Michail, a software engineering researcher at the University of New South Wales in Sydney, Australia.

"Del.icio.us (already) has a reason for users to provide helpful or timely URLs: convenient management of their bookmarks. CollaborativeRank gives them another reason: greater influence over search rankings."
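One guess at the flavor of such a weighting (the actual CollaborativeRank algorithm is not described here; the data and the scoring rule below are invented): a user gains influence when URLs they bookmarked first are later copied by others, and a URL's rank weight sums the influence of the users who saved it.

```python
# Invented sketch of an early-finder weighting (not the real CollaborativeRank
# algorithm): credit users whose early bookmarks are later copied, then
# weight each URL by its bookmarkers' accumulated influence.

from collections import defaultdict

# bookmarks[url] = user ids in the order they saved it (hypothetical data)
bookmarks = {
    "example.com/a": ["alice", "bob", "carol", "dave"],
    "example.com/b": ["alice", "bob"],
    "example.com/c": ["erin"],
}

influence = defaultdict(float)
for url, users in bookmarks.items():
    first, followers = users[0], users[1:]
    influence[first] += len(followers)   # credit the early finder per copier

def url_score(url):
    """Weight a URL by the influence of every user who bookmarked it."""
    return sum(influence[u] for u in bookmarks[url])

# alice found two URLs that others later copied, so anything she bookmarks
# carries her accumulated influence in the rankings
```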

Interesting idea, no? And worth taking a look at CollaborativeRank as a new type of search engine, in any case. Here is a Wired article with more details.

Posted by jmm at 11:23 PM | Comments (1)

Web 2.0 vulnerabilities

Wired ran an article last fall about vulnerabilities becoming apparent in various "Web 2.0" applications (whatever those are). Some are similar to spam in email: for example, splogs (fake blogs created to attract search engine interest and drive viewers to see their Viagra ads).

Many interesting social computing applications have enough openness that they are vulnerable to misuses and manipulations. A traditional approach is to develop technical means to close or limit the vulnerabilities (like filters for spam). We know that the inevitable trade-off between the benefits (even necessity) of some degree of openness for social applications and the resulting vulnerability means technical solutions are unlikely to be 100% satisfactory. That leaves open the room for incentive-based mechanisms to discourage misuse of social computing applications, like the various payment schemes proposed to fight spam. What incentive scheme might reduce splogging, for example?

For many social computing applications, financial incentive schemes may be undesirable, suggesting a growing need to develop effective non-pecuniary incentive mechanisms.

Posted by jmm at 10:56 PM | Comments (0)

March 01, 2006

Aligning incentives with responsibility for peer-to-peer filtering

In its recent Grokster decision, the U.S. Supreme Court said that P2P technologies were not required to implement filtering for copyrighted content, but that failure to install filtering would be considered as one factor in assessing whether the software actively induces infringement (which is illegal).

Pam Samuelson wrote a column in Comm. ACM (access restricted; available to UM readers) in which she points out the many costs and infeasibilities of imposing a filtering duty on software creators (she argues that the blurry line is sufficiently problematic that filtering may become a de facto duty).

Some of the problems she identifies include keeping up with filename modifications, and fingerprint and watermark hash avoidance schemes.

Without endorsing the Supreme Court decision or Pam's fear that this decision will ultimately mean filtering is a de facto requirement for P2P file-sharing software, let's think about how incentives might help solve the filtering problems she identifies.

While I was reading, I thought the problems might not be so difficult...if copyright owners bore some of the responsibility for implementing them. Copyright holders have the incentive to limit distribution of content; file-sharing software providers, if anything, have an incentive to maximize the distribution of content. If, as a matter of policy, we want to maintain some limits on the distribution of copyrighted content, then it is incentive-compatible to put the copyright holders to work on implementing protection technologies. The alternative is to construct artificial incentives (such as jail time) to shift that burden onto file-sharing software providers: why not use the natural incentives already in place?

For example, rather than requiring the P2P software publishers to collect a database of fingerprints, and to keep up with changes, they could implement a simple API that allows copyright owners to contribute the fingerprints of content they own. (A contributor could be required to register and provide a verifiable digital signature to help ensure that they were legitimate copyright holders.) That would largely deal with the problem of "what to filter", and would be consistent with earlier requirements that copyright holders register their copyrights.
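A minimal sketch of what such a contribution API might look like (the class and method names are hypothetical, and the signature check is a stand-in for real digital-signature verification):

```python
# Hypothetical sketch of a fingerprint-contribution API: copyright holders
# register content fingerprints (after a signature check), and a compliant
# P2P client consults the registry before sharing a file.

import hashlib

class FingerprintRegistry:
    def __init__(self):
        self._registered = {}   # content fingerprint -> owner id

    def contribute(self, owner_id, fingerprint, signature_valid):
        """Accept a fingerprint only from a verified copyright holder."""
        if not signature_valid:   # stand-in for digital-signature checking
            raise PermissionError("unverified contributor")
        self._registered[fingerprint] = owner_id

    def is_filtered(self, content: bytes) -> bool:
        """Would a compliant P2P client refuse to share this content?"""
        return hashlib.sha256(content).hexdigest() in self._registered

registry = FingerprintRegistry()
song = b"...copyrighted audio bytes..."
registry.contribute("label-123", hashlib.sha256(song).hexdigest(),
                    signature_valid=True)
# a compliant client would now filter `song` but share anything unregistered
```

The incentive-compatible feature is that the party who wants the filtering (the copyright holder) does the work of populating and updating the database.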

Posted by jmm at 11:06 PM | Comments (2)

Ad-supported web affects content creation incentives

For years I've been interested in the way that the financial model for information content affects the incentives to create, the quality and the diversity of the content.

The basic point is simple: if readers are paying for content (buying a book, subscribing to a service), then presumably the creators are trying to create content valuable to the readers. In the other leading model, advertisers pay for content: then presumably creators are trying to create content that attracts the attention of readers, but isn't necessarily of high value to them.

Does this explain the quality difference between typical broadcast TV shows and the subscriber content like "The Sopranos" and "Six Feet Under"?

I, together with a couple of students, gathered a lot of facts and notes on this topic several years back, but I haven't written much on it. (The idea shows up in passing in a couple of my scholarly articles.)

The Wall Street Journal published a nice column illustrating just this point for current web site content creation.

"If there is a topic in the news, people will be searching on it. If you can get those searchers to land on a seemingly authoritative page you've set up, you can make money from their arrival. Via ads, for instance. Then, to get your site ranked high in search engines, it's best to have 'original content' about whatever the subject of your site happens to be. The content needs to include all the keywords that people might search for. But it can't be just an outright copy of what's on some other site; you get penalized for that by search engines."

The WSJ author contracted as a freelance writer to create content for a site, and found that the assignment was primarily to cut and paste content from elsewhere with enough changes to fool the search engines.

I think there are some important ICD opportunities here for people thinking about creating content portals and other information services.

(Thanks to Rick Wash for pointing me to this column.)

Posted by jmm at 10:50 PM | Comments (0)

Sometimes people die for lack of ICD

According to The New York Times, a study in The Journal of the American Medical Association found that doctors misdiagnose fatal illnesses about 20% of the time, and that this rate has been the same since 1930 (Leonhardt, "Why Doctors So Often Get it Wrong", NYT, 22 Feb 2006). The NYT argues that the problem is largely one of missing incentives:

Doctors, nurses, lab technicians and hospital executives are not actually paid to come up with the right diagnosis. They are paid to perform tests and to do surgery and to dispense drugs.

This problem is a bit off-topic for us, since it doesn't directly concern the design of information or communication technologies. However, the NYT reports that a software program has been developed to help avoid the problem: a diagnostic expert system from Isabel Healthcare that takes as input a list of symptoms, and returns a list of possible diagnoses, to remind doctors of the range of possibilities in case they forget or don't recognize some. However, the NYT suggests, the software is not being adopted widely because it is expensive, and hospitals do not have the right incentives to get the diagnoses right.
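The core of such a diagnostic aid can be sketched in toy form (the disease-and-symptom table below is invented purely for illustration, not medical content): rank candidate diagnoses by how many of the entered symptoms they match.

```python
# Toy sketch of a symptom-to-diagnosis lookup of the kind described
# (invented knowledge base, for illustration only): score each candidate
# diagnosis by its overlap with the entered symptoms.

KNOWLEDGE_BASE = {
    "influenza": {"fever", "cough", "fatigue"},
    "strep throat": {"fever", "sore throat"},
    "mononucleosis": {"fever", "fatigue", "sore throat"},
}

def candidate_diagnoses(symptoms):
    """Return diagnoses sharing at least one symptom, best matches first."""
    scored = [(len(set(symptoms) & syms), name)
              for name, syms in KNOWLEDGE_BASE.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

# entering {"fever", "fatigue"} surfaces all three possibilities, with the
# two-symptom matches listed before the one-symptom match
```

The value of the tool is exactly the reminder function the NYT describes: even a crude ranked list keeps rarer possibilities in front of the doctor.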

Question for thought: how might we introduce incentives for using an expert system like this that might improve the efficiency and success rate of diagnosis? Perhaps insurance companies (which save on drug and treatment costs if incorrect diagnoses are avoided) might offer a credit to hospitals for reducing their mis-diagnosis rate below the average for the past three years?

(Thanks to George Furnas for pointing me to this article.)

Posted by jmm at 12:52 AM | Comments (0)