February 21, 2006
What is unsolicited, unwanted email?
This article in the New York Times, while driving home a sad and painful situation we academics all share (not being able to give our students all the attention they want), also illustrates the problem faced by any incentive system for discouraging unwanted email: where is the boundary?
The article discusses the increase we all have experienced in email from students, sometimes inappropriate or unreasonably demanding (not always!). Clearly, some of this mail we would rather not receive. But what filter or incentive system or other mail management mechanism can tell which mail from our students we don't want to receive? The possibilities for Type II errors are a bit scary.
February 08, 2006
Misleading recommendations: payola
As I mentioned a couple of entries ago, payola ("pay to play") schemes are still ongoing in the radio and music distribution industry. Sony and Warner settled with NY State for millions; now Attorney General Eliot Spitzer says he has proof that some of the largest radio groups have taken payments from top executives in the recording industry.
Lots of anti-spam ideas are crap
Here is a tongue-in-cheek list of flaws in proposed anti-spam technologies / protocols / policies. It was written as a humorous commentary on the immediate flaming most proposals receive, but it is a pretty insightful and potentially useful checklist to run against any serious proposals (from craphound.com):
Your post advocates a
( ) technical ( ) legislative ( ) market-based ( ) vigilante
approach to fighting spam. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)
( ) Spammers can easily use it to harvest email addresses
( ) Mailing lists and other legitimate email uses would be affected
( ) No one will be able to find the guy or collect the money
( ) It is defenseless against brute force attacks
( ) It will stop spam for two weeks and then we'll be stuck with it
( ) Users of email will not put up with it
( ) Microsoft will not put up with it
( ) The police will not put up with it
( ) Requires too much cooperation from spammers
( ) Requires immediate total cooperation from everybody at once
( ) Many email users cannot afford to lose business or alienate potential employers
( ) Spammers don't care about invalid addresses in their lists
( ) Anyone could anonymously destroy anyone else's career or business
Specifically, your plan fails to account for
( ) Laws expressly prohibiting it
( ) Lack of centrally controlling authority for email
( ) Open relays in foreign countries
( ) Ease of searching tiny alphanumeric address space of all email addresses
( ) Asshats
( ) Jurisdictional problems
( ) Unpopularity of weird new taxes
( ) Public reluctance to accept weird new forms of money
( ) Huge existing software investment in SMTP
( ) Susceptibility of protocols other than SMTP to attack
( ) Willingness of users to install OS patches received by email
( ) Armies of worm riddled broadband-connected Windows boxes
( ) Eternal arms race involved in all filtering approaches
( ) Extreme profitability of spam
( ) Joe jobs and/or identity theft
( ) Technically illiterate politicians
( ) Extreme stupidity on the part of people who do business with spammers
( ) Dishonesty on the part of spammers themselves
( ) Bandwidth costs that are unaffected by client filtering
( ) Outlook
and the following philosophical objections may also apply:
( ) Ideas similar to yours are easy to come up with, yet none have ever
been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) SMTP headers should not be the subject of legislation
( ) Blacklists suck
( ) Whitelists suck
( ) We should be able to talk about Viagra without being censored
( ) Countermeasures should not involve wire fraud or credit card fraud
( ) Countermeasures should not involve sabotage of public networks
( ) Countermeasures must work if phased in gradually
( ) Sending email should be free
( ) Why should we have to trust you and your servers?
( ) Incompatibility with open source or open source licenses
( ) Feel-good measures do nothing to solve the problem
( ) Temporary/one-time email addresses are cumbersome
( ) I don't want the government reading my email
( ) Killing them that way is not slow and painful enough
Furthermore, this is what I think about you:
( ) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, asshole! I'm going to find out where you live and burn your
house down!
Screening for Good (email) Actors
Dave Crocker of Brandenburg Consulting wrote a message today to Dave Farber's "Interesting People" mail list in which he made the following observation:
We must continue with efforts to detect and deal with Bad Actors, but there is a separate path that is at least as valuable: We need methods for distinguishing Good Actors. Folks who are deemed "safe". In effect, we need a Trust Overlay for Internet mail, to permit differential handling of mail from these good actors. In general terms, a trust overlay requires reliable and accurate identification of the actor and a means of assessing their goodness.
In other words, authentication and reputation.
Crocker is talking about screening for "good actors": some test that distinguishes trusted senders from the rest (this is not necessarily equivalent to identifying "bad actors" because there may be a vast middle that is neither good nor bad). Screening mechanisms are one of the two categories of fundamental mechanisms for dealing with hidden information problems, the hidden information in this case being the sender's private knowledge of whether she is a good or a bad type.
Part of a good actor screening mechanism, Crocker argues, is a reputation mechanism, which of course is fundamentally an ICD problem. He suggests that there is good progress on authentication mechanisms (he favors DKIM, to which he is contributing, but in his IP Journal article he discusses SPF, SenderID and other variants, too), but he believes that "there is no candidate for standardized reputation reporting."
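Crocker's "trust overlay" can be sketched in a few lines of code. This is a minimal illustration, not anything Crocker proposes concretely: the domain names, scores, and the 0.9 threshold are all hypothetical, and a real system would get the reputation score from a standardized reporting service (which, as he notes, does not yet exist).

```python
# Toy sketch of a trust overlay: combine sender authentication with a
# reputation lookup to decide how to handle a message. Domains, scores,
# and thresholds below are hypothetical illustrations.

# Hypothetical reputation table: authenticated sending domain -> score in [0, 1].
REPUTATION = {
    "university.edu": 0.95,
    "newsletter.example": 0.60,
    "unknown-sender.example": None,  # no reputation data yet
}

def route_message(sender_domain, authenticated):
    """Return a handling decision for an incoming message."""
    if not authenticated:
        # Without reliable identification, a reputation score means nothing.
        return "filter"
    score = REPUTATION.get(sender_domain)
    if score is None:
        return "filter"      # the vast middle: neither good nor bad
    if score >= 0.9:
        return "deliver"     # trusted good actor: skip the spam filters
    return "filter"

print(route_message("university.edu", authenticated=True))          # deliver
print(route_message("university.edu", authenticated=False))         # filter
print(route_message("unknown-sender.example", authenticated=True))  # filter
```

Note that the sketch routes the unscored "vast middle" through the filters rather than rejecting it, which matches the point that screening for good actors is not the same as identifying bad ones.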
Final comment on Crocker: he points out that the Goodmail system that AOL and Yahoo! just announced they will use is trying to implement a good actor screening system by allowing good actors to buy higher class transit for their mail.
(Crocker, by the way, is one of the networking engineers who has been important to the development of Internet protocols since long before the Internet was a commercial, public platform. He was an area director of the IETF 1989-1993, and was one of the authors of early Internet email protocols 1978-82.)
February 07, 2006
Incentives to review
IgoUgo is (apparently) a popular travel review site (I found it mentioned in the NYT article linked in my preceding entry).
They offer to pay incentives to people who post reviews: "Go Points" that can be redeemed for gift cards, frequent-flier miles (natch!), etc., from Amazon, iTunes, and others. Here's a screen shot (in case they change it or take it away):
Incentives to misrepresent
The NYT discusses an increasing problem with informal review and recommendation sites: insincere or misleading postings. Here, they talk about hotels that post fake (positive) reviews about themselves, offer inducements (discounts, etc.) to customers to post positive reviews, or bribe web sites and blogs to remove negative reviews.
The Times suggests this is not a problem for professional reviewers in the traditional media, but of course that's horse manure: restaurant critics who are spotted get special treatment, book and movie reviewers are offered inducements. The "payola" scandals in broadcast radio are famous (and ongoing).
Where is the ICD in this? Well, for starters, those with an interest in the outcome are offering incentives to induce (misleading) behavior. Which drives home the countervailing incentives problem: how do we discourage low-quality reviews and recommendations, and encourage high quality?
(It may be obvious, but this is another example of the growing world of "spam": unsolicited communications that are undesired or misleading. They are not pushed into users' mailboxes, but other than that, how different is it from ads touting the miraculous powers of herbal "viagra"?)
February 05, 2006
Incentives for large email senders: Yahoo and AOL start charging
We've been expecting this for years. Looks like a serious large-scale experiment in email charging is beginning. But first some background...
Spam email has apparently been with us since 1978, but many of us didn't start to worry about it until the famous Canter and Siegel "green card" email in April 1994. This was the first truly large-scale unsolicited commercial email.
The ICD problem: Email recipients don't want many of the emails that reach their inboxes: at a minimum it takes time to read enough to know that they should be deleted (and of course, some contain viruses, etc.). However, senders may find it worthwhile to send (many) emails that recipients don't want: there is little or no cost to the sender of having an already-sent mail deleted, and some small fraction of recipients may purchase an advertised product or take other action desired by the sender. Because the cost of sending is low, we get a lot of undesired, unsolicited mail in our boxes.
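The economics in the paragraph above are worth making concrete. Every number below is a made-up illustration (not data from any study); the point is only the structure of the problem: near-zero sending cost multiplied by enormous volume can be profitable even at vanishingly small response rates.

```python
# Back-of-the-envelope spam economics. All numbers are hypothetical
# illustrations of why low sending cost produces lots of unwanted mail.

messages_sent = 10_000_000
cost_per_message = 0.00001   # hypothetical: sending is effectively free
response_rate = 0.00001      # hypothetical: 1 in 100,000 recipients buys
profit_per_response = 20.0   # hypothetical margin per sale

cost = messages_sent * cost_per_message
revenue = messages_sent * response_rate * profit_per_response
print(f"cost: ${cost:.2f}, revenue: ${revenue:.2f}, profit: ${revenue - cost:.2f}")
```

Even with one buyer per hundred thousand messages, the sender comes out well ahead with these (invented) numbers, which is why incentive schemes target the sender's cost side.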
There are various strategies for reducing undesired, unsolicited email. Many of them are technical in nature: e.g., filters. Others create incentives for senders to reduce production. Legal rules are one example: the CAN-SPAM Act imposes penalties on senders of unsolicited email that violates certain conditions.
Economic incentives have been an obvious idea. For one thing, we're all familiar with traditional "snail" mail systems that charge postage to senders: the added cost of sending reduces the amount of unsolicited mail (though it does not eliminate it!).
Two UM grad students and a former School of Information (UM) faculty member have published a paper describing an economic incentive system (forthcoming in Berkeley Electronic Press's scholarly journal, Advances in Economic Analysis and Policy). Their system would, among other things, require senders to post a financial bond that could be claimed by recipients if recipients are unhappy about receiving mail from that sender. Roughly speaking, this is a form of probabilistic stamp: sometimes the sender pays, sometimes not.
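The "probabilistic stamp" flavor of the bond idea can be simulated in a few lines. This is my own toy sketch, not code or parameters from the paper: the bond size, complaint rates, and sample size are hypothetical.

```python
import random

# Sketch of a claimable per-message bond: each recipient who is unhappy
# claims the bond, so the sender's expected cost per message scales with
# how much recipients dislike the mail. All parameters are hypothetical.

def expected_cost_per_message(bond, complaint_rate, n=100_000, seed=0):
    """Simulate the sender's average cost per message sent."""
    rng = random.Random(seed)
    paid = sum(bond for _ in range(n) if rng.random() < complaint_rate)
    return paid / n

# A sender of wanted mail (few complaints) pays almost nothing;
# a spammer (many complaints) pays close to the full bond per message.
print(expected_cost_per_message(bond=1.0, complaint_rate=0.001))
print(expected_cost_per_message(bond=1.0, complaint_rate=0.8))
```

This is the appealing property of the mechanism: the stamp is nearly free for good actors and expensive for bad ones, without anyone having to classify senders in advance.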
Which brings us back to the announcement by AOL and Yahoo! that they are going to start using Goodmail's system. Senders can pay Goodmail on the order of a quarter cent, and their mail will be marked as "Certified". AOL and Yahoo! say they will give preferred treatment to mail that is certified. AOL is most explicit at this point: it will not put such mail through its spam filters, and will not strip off attachments and images, or de-activate embedded links. Thus, mail without a Goodmail stamp may be filtered off to a "spam" folder, or may be degraded.
I found amusing a complaint by the CEO of a competing service (Bonded Sender): "A lot of e-mailers won't be able to afford it." Does a claim that companies sending mass mailings can't afford a quarter of a cent per message tell us something about the value they think their mail has? Put another way, the whole point is that many e-mailers won't find it worthwhile to send low value unsolicited mail.
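The screening logic in that last point reduces to one line of arithmetic: certification is worth buying only if the expected value of a delivered message exceeds the fee. The conversion value below is a hypothetical number, not anything from the Goodmail announcement.

```python
# The quarter-cent stamp as a screen. A sender buys certification only if
# fee <= response_rate * value_per_response, i.e. only if the mail is
# valuable enough. The value figure is a hypothetical illustration.

fee = 0.0025               # $0.0025 per certified message
value_per_response = 25.0  # hypothetical value of one conversion

break_even_rate = fee / value_per_response
print(f"break-even response rate: {break_even_rate:.6f}")
```

With these numbers a sender needs better than one response per ten thousand messages to break even, which is exactly the kind of bar that low-value bulk mail fails to clear.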
February 04, 2006
What is Incentive-Centered Design (ICD)?
ICD is the science of designing systems or institutions that align participants’ (individual) incentives with overall system (social) goals. Incentive-centered design is fundamental for modern information systems because performance of distributed and collaborative systems depends critically on the strategic choices users make when interacting with the system and with each other, yet mismatch between individual interests and system goals is pervasive. Careful attention to individual incentives can lead to vast improvements in systems and institutions. This approach necessarily builds equally on the social sciences that address motivated human behavior, cognition and group processes, and on the engineering sciences that address computation and communications system design. We take a broad view of individual motivations for strategic behavior, drawing on economic, psychological, and sociological theories, and combine these with the design and engineering sciences of artificial intelligence, software, operations research and networking.
We apply ICD to
- user-contributed content
- reputation systems
- public goods provision
- recommender systems
- online auction design
- prediction markets
- matching systems
- social computing
ICD in its various forms is gaining interest from many overlapping research communities. Nevertheless, as a coherent field ICD is still quite young, and its potential as a multidisciplinary foundation for research on information system problems has not yet fully developed.
For the past several years, I've been one of the leaders of a group of faculty and students at UM (and beyond) developing "incentive-centered design" (ICD) as a core intellectual field for information science.
I'm going to experiment with keeping a blog to express thoughts I have about ICD, gather links to relevant stories in the websphere, point to research projects, etc. I doubt that I am going to try to attract a lot of readers, or to create a lot of content. Low volume, and I hope high quality, or at least things that will be useful to a small coterie of fellow travelers.
Whoami: Jeff MacKie-Mason (or Jeff Mason in my non-professional persona), jmm.