April 26, 2010
Humans paid by the robots
Macduff Hughes, at Google, captures the main point I've been making for years: screening out unwanted intruders is an economic problem, and CAPTCHAs are an economic (signaling) mechanism that tries to raise the price of entry high enough to keep the bad guys out.
November 26, 2008
New UCC opportunity, new opportunity for manipulation and spam
Google has made available a striking set of new features for search, which it calls SearchWiki. If you are logged in to a Google account, when you search you will have the ability to promote or delete individual results (changes that persist the next time you run the same search), re-order the results, and post comments (which can be viewed by others).
But the comments are user-contributed content: this is a relatively open publishing platform. If others search on the same keyword(s) and select "view comments" they will see what you entered. Which might be advertising, political speech, whatever. As Lauren Weinstein points out, this is an obvious opportunity for pollution, and (to a lesser extent in my humble opinion, because there is no straightforward way to affect the behavior of other users) manipulation. In fact, he finds that comment wars and nastiness started within hours of SearchWiki's availability:
It seems inevitable that popular search results in particular will quickly become laden with all manner of "dueling comments" which can quickly descend into nastiness and even potentially libel. In fact, a quick survey of some obvious search queries shows that in the few hours that SearchWiki has been generally available, this pattern is *already* beginning to become established. It doesn't take a lot of imagination to visualize the scale of what could happen with the search results for anybody or anything who is the least bit [...]
Lauren even suggests that lawsuits are likely by site owners whose links in Google become polluted, presumably claiming they have some sort of property right in clean display of their beachfront URL.
August 23, 2008
Good stuff in, bad stuff out
A fun ad from IBM that makes the point... (Thanks to Mark McCabe)
March 29, 2008
Presentation at Yahoo! Research on user-contributed content
Yahoo! Research invited me to speak in their "Big Thinkers" series at the Santa Clara campus on 12 March 2008. My talk was "Incentive-centered design for user-contributed content: Getting the good stuff in, Keeping the bad stuff out."
My hosts wrote a summary of the talk (that is a bit incorrect in places and skips some of the main points, but is reasonably good), and posted a video they took of the talk. The video, unfortunately, focuses mostly on me without my visual presentation, panning only occasionally to show a handful of the 140 or so illustrations I used. The talk is, I think, much more effective with the visual component. (In particular, it reduces the impact of the amount of time I spend glancing down to check my speaker notes!)
In the talk I present a three-part story: UCC problems are unavoidably ICD problems; ICD offers a principled approach to design; and ICD works in practical settings. I described three main incentives challenges for UCC design: getting people to contribute; motivating quality and variety of contributions; and discouraging "polluters" from using the UCC platform as an opportunity to publish off-topic content (such as commercial ads, or spam). I illustrated with a number of examples in the wild, and a number of emerging research projects on which my students and I are working.
January 08, 2008
MetaFilter manipulated by nonprofit that reports on honesty and reliability of nonprofits
This is particularly piquant because the manipulator founded his organization (GiveWell) as a nonprofit to help people evaluate the quality (presumably, including reliability!) of nonprofit charitable organizations, and GiveWell itself is supported by charitable donations.
The manipulation was simple, and reminiscent of the well-publicized book reviews by authors and their friends on Amazon: the executive pseudonymously posted a question asking where he could go to get good information about charities, and then under his own name (but without identifying his affiliation) answered his own question by pointing to his own organization.
When discovered, the GiveWell board invoked old-fashioned incentives: they demoted the Executive Director (and founder), docked his salary, and required him to attend a professional development training program. Of course, the expected cost of being caught and punished was not, apparently, a sufficient incentive ex ante, but the organization apparently hopes by imposing the ex post punishment he will be motivated to behave in the future, and by publicizing it other employees will be similarly motivated. The publicity provides an additional incentive: the ED's reputation has been severely devalued, presumably reducing his expected future income and sense of well-being as well.
January 07, 2008
UCC search arrives...manipulation and pollution to follow soon
Jimmy Wales announced the release of the public "alpha" of his new, for-profit search service, Wikia Search. The service is built on a standard search engine, but its primary feature is that users can evaluate and comment on search results, building a user-contributed content database that Wikia hopes will improve search quality, making Wikia a viable but open (and hopefully profitable) alternative to Google.
Miguel Helft, writer for the New York Times, reports:
Like other search engines and sites that rely on the so-called "wisdom of crowds," the Wikia search engine is likely to be susceptible to people who try to game the system, by, for example, seeking to advance the ranking of their own site. Mr. Wales said Wikia would attempt to "block them, ban them, delete their stuff," just as other wiki projects do.
The tension is interesting: Wikia promotes itself as a valuable alternative to Google largely because its search and ranking algorithms are open, so that users know more about why some sites are being selected or ranked more highly than others.
"I think it is unhealthy for the citizens of the world that so much of our information is controlled by such a small number of players, behind closed doors," [Wales] said. "We really have no ability to understand and influence that process."
But, although the search and ranking algorithms may be public, whether or not searches are being manipulated by user contributed content will not be so obvious. It is far from obvious which approach is more dependable and "open". Wikia's success apparently will depend on its ad hoc and technical methods for "blocking, banning and deleting" manipulation.
September 08, 2007
Op-ed in Wall Street Journal advocates hybrid solution to spam
Three researchers published an op-ed in today's Wall Street Journal (subscription only) suggesting that two practical methods to greatly reduce spam are now technically workable, but will not be implemented without cooperation on standards by the major email providers. They urge the providers to agree on a hybrid system:
To break this logjam, we advocate a hybrid system that would allow email users to choose their preferred email system. Those who want anonymity and no incremental cost for email can continue to send emails under the current system, without authentication and without sender bonds. Those who want the lowest costs and don't care about anonymity (most legitimate businesses would likely fall into this category) can send email that is user authenticated, but not bonded. People who want anonymity but are willing to pay to demonstrate the value they place on the recipient's attention can post a bond. Payment could be made anonymously via a clearinghouse, using the electronic equivalent of a tiny traveler's check bundled with each message. Those with especially high-value messages can make them both authenticated and bonded.
The authors are Jonathan Koomey (Lawrence Berkeley National Labs), Marshall van Alstyne (Boston U) and Erik Brynjolfsson (MIT Sloan).
The ideas are not new; they are trying to create public pressure. The authentication system in play is DKIM, a standard approved by the IETF earlier this year. The sender bond method was detailed in a paper by Thede Loder, Rick Wash and van Alstyne. Loder has started a company offering the service (Boxbe); Wash is currently one of my Ph.D. students (though he did this research while working with van Alstyne while Marshall was my colleague at Michigan).
January 06, 2007
Spam as security problem
Here is the blurb Rick Wash and I wrote for the USENIX paper (slightly edited for later re-use) about spam as a security problem ripe for ICD treatment. I've written a lot about spam elsewhere in this blog!
Spam (and its siblings spim, splog, spit, etc.) exhibits a classic hidden information problem. Before a message is read, the sender knows much more about its likely value to the recipient than does the recipient herself. The incentives of spammers encourage them to hide the relevant information from the recipient to get through the technological and human filters.
While commercial spam is not a traditional security problem, it is closely related due to the adversarial relationship between spammers and email users. Further, much spam carries security-threatening payloads: phishing and viruses are two examples. In the latter case, the email channel is just one more back door access to system resources, so spam can have more than a passing resemblance to hacking problems.
December 24, 2006
Spamming Web 2.0
The New York Times today ran a short note highlighting CNET's story about commercial spamming of Digg.com and similar sites. There are companies being paid upwards of $15,000 to get a product placed on the front page of Digg, and most recently a top 30 Digger admitted that he entered an agreement to help elevate a new business to the front page of Digg (and solicited the other top 30 Diggers to participate).
The world was pretty darned excited when it discovered email (for most people, in the early 1990s). Spam followed in a big way within a year or two. It's clear to me that we're on the same trajectory with user-contributed content sites on the Web. There is an ever-increasing need for incentive-centered designs to help keep the bad stuff out.
August 02, 2006
Spam as cockroach (welcome to blog spam)
Cockroaches: doesn't matter how many times you think you've killed them all and blocked their entry points, they keep coming back.
For those who think (foolishly, in my opinion) that Gmail has "solved" the spam problem...it's not just email: Blogosphere suffers spam explosion. This isn't hot news, exactly, but it's a nice comment on the growing problem of splogging (and notice that Frauenfelder uses "pollution", my favorite word for characterizing the problem). It's a cost imposed on a bystander by the self-serving activities of another (in this case, usually advertising products for sale).
April 08, 2006
Spam economics: Private stamps vs. repudiable bond payments to recipients
After years during which everyone talked about economic incentives to better align sender and receiver interests in unsolicited email, we may finally be seeing the dawn of the incentive-centered design era for email.
AOL and Yahoo! this winter announced they were adopting the Goodmail system to create a special class of incoming mail: senders that paid the Goodmail fee per message would have their mail placed directly in user inboxes, with no server-side filtering or blocking by the ESP (email service provider, AOL and Yahoo! in this case). Mail without the Goodmail stamp will receive traditional treatment, being filtered and possibly placed in the user's spam folder.
A rather loud debate immediately followed, focused primarily on one concern: AOL and Yahoo! would tighten the filtering screws on unstamped email, eventually shoving so much of it into the spam folder that everyone would be "forced" to pay for the Goodmail stamp or likely have their mail discarded, unopened by users (or users would be forced to treat their spam folder as a regular inbox, and lose the benefits of the filtering). Nonprofits in particular howled because, they claimed, their mail is valuable, but they are too poor to pay for the stamps. (If members of non-profits aren't willing to pony up a quarter-cent per email in member fees, just how valuable are the millions of pieces of mail that non-profits want to send?)
But rather than get into that debate right now (see "Backlash to sender-pays email incentives"), I want to discuss the economics of two different but related approaches to using financial incentives to economically filter spam: the private stamp (Goodmail) approach, and the use of recipient-repudiable bonds ("stamps" vs. "bonds" for short).
The bond approach is similar to stamps, with critical differences. The sender pays for a (digitally-signed) stamp; mail with that stamp goes directly into the reader's inbox, unfiltered. However, after opening the message, the reader can either keep the stamp (push a button in the mail client to "deposit stamp"), or relinquish it back to the sender, which can be interpreted as a message that "I valued this mail, you can send more like it in the future."
How do the differences matter? First, an implementation issue: it is relatively easy for a third-party provider like Goodmail to implement a payment deposit system; it is not nearly so easy, at least right now, for individual email users to receive a micropayment attached to every email and deposit it. Email clients aren't programmed for this, and in any case, the necessary micropayments infrastructure just doesn't exist (yet) at that level of granularity.
Assuming that technical detail can be solved in the near future, how else are the two different? One of the most important differences is the very limited role for recipient preferences in the private stamp approach. A stamp of, say, $0.01, will discourage senders from sending email that is worth less than $0.01 for the sender. But the threshold is being set by the third party (Goodmail, perhaps together with an ESP like AOL), not by individual users, and thus does not directly reflect the value to the recipient of receiving unsolicited email (or not). Arguably, competition between ESPs would push the stamp price to about the right average level over time, but it would not reflect heterogeneity in user preferences.
A bond system could with little or no cost allow each user to set their own threshold for the required size of bond, thus allowing recipients to customize their own mail preferences.
Another problem with the stamp approach is that every message that goes through this channel pays for a stamp. For mail that both sender and recipient agree is desirable, that incurs unnecessary expense. But perhaps more important, it will prevent some desirable mail from being sent. Suppose the stamp is $0.01, and a sender has mail to send that the sender values at only $0.005 if delivered, but that the recipient values at $0.02 if received. The sender won't be willing to buy the stamp, and the mail won't get sent. With a repudiable bond, however, the sender might send a trial message, and if the recipient returns the bond, the sender will know the recipient values the mail and will allow similar messages to arrive without a bond payment in the future.
Why won't recipients always keep the bond payment? Well, first, this would just make the system work the same as stamps (except that users get the money, not a third party), so that's not a reason why bonds are worse. However, it also doesn't make sense in the example above. If I want to receive, say, an electronic catalog, but I keep the bond, then the sender may stop sending to me, and I lose out.
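The stamp-vs-bond comparison above can be sketched as a toy decision rule. This is only an illustration of the post's worked example, using its made-up dollar figures; the function names and the assumption that a wanted mail's bond is always returned are mine, not part of any real system:

```python
# Toy comparison of the two mechanisms discussed above. All numbers
# are the post's illustrative values, not data from any deployment.

STAMP_PRICE = 0.01  # non-refundable private stamp (Goodmail-style)

def sends_with_stamp(sender_value):
    """Sender buys a stamp only if the mail is worth the stamp to them."""
    return sender_value >= STAMP_PRICE

def sends_with_bond(sender_value, recipient_value, bond=0.01):
    """Sender posts a refundable bond; assume the recipient returns it
    when the mail was wanted, so wanted mail costs the sender ~nothing."""
    expected_cost = 0.0 if recipient_value >= bond else bond
    return sender_value >= expected_cost

# The post's example: sender values delivery at $0.005,
# recipient values receipt at $0.02.
print(sends_with_stamp(0.005))        # False: mutually valuable mail is blocked
print(sends_with_bond(0.005, 0.02))   # True: bond comes back, mail flows
print(sends_with_bond(0.005, 0.0))    # False: spam deterred, bond would be kept
```

The sketch also shows why per-recipient thresholds matter: the `bond` parameter could be set by each recipient, whereas `STAMP_PRICE` is fixed by the third party.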
This is a very quick review of the two approaches, and yes, of course the issues can be more subtle. See Loder et al. for a scholarly discussion of the two.* Vanquish Labs, a vendor of a bond system, has an online article that critiques the Goodmail stamp approach (February 2006 : CertifiedMail = Certified Disaster).
*Thede Loder, Marshall Van Alstyne, and Rick Wash. "An economic solution to unsolicited communication". Advances in Economic Analysis and Policy, 6 (1), 2006.
Keeping bad stuff out: Making a play on social news sites?
About a month ago, rumors were circulating that Google was about to acquire Sun Microsystems. The rumors got hot when blog stories claiming an acquisition was imminent were promoted to the front page of the community/social news site Digg.com. It pretty quickly became clear that the rumors were largely unfounded. What hasn't been quickly resolved is whether or not someone tried to manipulate Digg, possibly to cash in on speculative trading in Google or Sun stock.
The basic idea is simple: get enough shill users to vote for a financially-significant rumor to promote it to the front page, thus automatically getting more widespread attention, and hope that the burst of attention causes a temporary stock price adjustment that can be exploited. (For example, in an acquisition the price of Sun would almost surely increase, and thus gullible readers might start buying it and bidding it up; the scam artist could purchase shares in advance to sell at the inflated price, or sell it short at the bubble price and collect when price returns to normal.)
Digg claims that it almost surely was not manipulated, but it seems clear that such manipulation is possible in user-contributed content news sites. Recall how Rich Wiggins found that people could get flim-flam press releases fed into Google News (here and here), and how authors using pseudonyms have promoted their own books with favorable "reviews" on Amazon.com.
It appears that in the past Digg has been manipulated (though apparently as an experiment, not to manipulate stock prices).
April 07, 2006
Another technical "screen" without incentives design facing trouble
CAPTCHAs ("completely automated public Turing test to tell computers and humans apart"!) are an increasingly common technical device to try to "keep the bad stuff out" of many services: users are asked to type in a word displayed with distortion that is easy for most humans to read, but difficult for image recognition software, before they can access various services.
Well, not too surprisingly, programmers have taken on this challenge, and software is getting better at doing the recognition and getting around the CAPTCHAs. And it turns out there's a lot of cheap labor available for spammers and others to hire to thwart them, too.
As a general matter, technical screens are not incentive-compatible, and when the value of getting past them is reasonably high, we can expect that they will get caught up in "arms races" and rarely be highly effective against pollution.
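The economics of a technical screen like a CAPTCHA can be put in back-of-the-envelope form: the screen holds only while every way of beating it (better OCR, hired labor) costs the attacker more than a pass is worth. The dollar figures below are invented for illustration:

```python
# Back-of-the-envelope screening economics: a CAPTCHA deters an
# intruder only if beating it costs more than a pass is worth.
# All cost figures here are made-up assumptions.

def screen_holds(value_per_pass, solver_costs):
    """The screen 'holds' only if every method of beating it costs
    more than one pass is worth to the attacker."""
    return value_per_pass < min(solver_costs)

# Hypothetical per-solve costs: OCR software vs. hired human labor.
costs = {"ocr_software": 0.002, "hired_labor": 0.001}

print(screen_holds(0.0005, costs.values()))  # True: a spam account is worth too little
print(screen_holds(0.05,   costs.values()))  # False: the arms race begins
```

The cheapest solver sets the effective price of the screen, which is why falling OCR costs and cheap labor markets undermine CAPTCHAs even when the puzzle itself gets harder.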
March 18, 2006
Local "keep the bad stuff out" problem
We locally had an annoying pollution experience yesterday. Our research group at UM runs an ICD wiki for sharing our research, announcements, &c. Access is pretty open, and sure enough, after about a year in operation, a splogger found us. He or she created an account and added spam links to about 40 pages in the wiki (invisible to us but visible to search engines, to increase the link rankings for the underlying spam sites). One of our grad students, Rick Wash, spent hours cleaning things up for us. What's the solution?...
We haven't thought about any incentive schemes to protect our wiki yet (time to start thinking!). The obvious technological solution is to limit editing access to accounts authorized by a moderator, but that is not a great solution: we have over 120 new master's students entering the program every year, and we want them to be able to participate, but we don't have an automated system in place to give them accounts, so either they get to create their own or we take on more administrative overhead.
We could use the human solution, as Wikipedia does: let anyone in, but keep a close eye on changes, clean them up and disable abusing accounts -- what Rick did this time. But we don't have a lot of hard-core users, and that could become quite a large burden on the few who have wiki-admin skills.
Just a mildly painful reminder that there's a reason for us to be researching these problems!
March 17, 2006
Esther Dyson on charging for sending
Esther Dyson has a column in today's New York Times (registration required), "You've Got Goodmail", summarizing fairly well the economic incentives arguments for supporting experimentation with systems that charge for sending "priority" email, and also the market competition argument for why this will ultimately benefit users, not create monopoly profits. She also discusses a variant which is, essentially, the same as the "Attention Bond Mechanism" proposed by our own Loder, van Alstyne and Wash.
March 14, 2006
Not even gangsters are safe from spam
Daren Briscoe reported in Newsweek that gangs are using the web to recruit members and communicate. But gosh, they have to deal with spam too:
But the Web has also given rival gangs a new, less violent way to settle scores -- flooding each other's sites with junk e-mail. Stalker says he spends hours every week deleting threatening or insulting messages from other gangs from his Web site. Not even a gangster is safe from spam.
I wonder: if you're a gangster, maybe you have a somewhat wider range of incentives you can use to discourage spammers?
Spamming Google News: Who's in, who's out?
An old acquaintance of mine, Rich Wiggins, recently blogged about his discovery of how easy it is to insert content in Google News. He discovered this when he noticed regular press releases published in Google News that were a front for the musings of self-proclaimed "2008 Presidential contender" Daniel Imperato. Who?
Wiggins figured out how Imperato did it, and tested the method by publishing a press release (screen shot) about his thoughts while celebrating his 50th birthday in Florida. Sure enough, you can find this item by searching on "Rich Wiggins" in Google News.
This is (for now) a fun example of one of the two fundamental incentives problems for important and fast-growing phenomenon of user-contributed content:
- How to keep the undesirable stuff out?
- How to induce people to contribute desirable stuff?
The first we can call the pollution problem; the second, the private provision of public goods problem. Though Wiggins's example is funny, will we soon find Google News polluted beyond usefulness? (The decline of Usenet was largely due to spam pollution.)
Blogs, of course, are a major example of user-contributed content. At first glance, they don't suffer as much from the first problem: readers know that blogs are personal, unvetted opinion pages, and so they don't blindly rely on what is posted as truth. (Or do they?) But then there's the problem of splogging, which isn't really a problem for blogs as much as for search engines that are being tricked into directing searchers to fake blog pages that are in fact spam advertisements (a commercial variant on the older practice of Google bombing).
There is a lengthy and informative Wikipedia article that discusses the wide variety of pollution techniques (spamming) that have been developed for many different settings (besides email and blogs, also instant messaging, cell phones, online games, wikis, etc.), with an index to a family of detailed articles on each subtype.
March 04, 2006
AOL compromises on sender fees
Nonprofits say AOL "backed down", but actually it is holding its ground: AOL announced that it would provide the higher service class to qualified (non-spamming) non-profits at no cost. So, AOL is holding the line on charging commercial senders, but has made an exception. I say it's a compromise because, as I noted before, if nonprofit mailings are so low value that they're not worth a quarter-cent stamp, then maybe they should reduce how much they send, but at least AOL is going forward with the experiment for commercial senders.
Backlash to sender-pays email incentives
Not too surprisingly, as soon as AOL and Yahoo! announced they were implementing an (optional) sender-pays email system, there was a huge uproar. So it has ever been since the Internet grew into a public net (out of its early days as a research and military net): anything vaguely smacking of converting real, user-suffered costs into monetary form is reviled as "the end of the Internet as we know it". In this case, the end of the spam-encrusted, diminishing-reliability, low-rent Internet as we know it.
The Electronic Frontier Foundation (EFF) is run by smart people with widespread support, and they've done a lot of good for the Internet over the years, so their rant on this is worth reading.
Now the EFF is organizing a coalition of non-profits to challenge AOL. (See EFFector vol. 19, no. 9, 3 March 2006 -- not online yet but it will be here soon.)
"Over fifty groups with nearly 15 million members joined with us, including Free Press, the U.S. Humane Society, the Gun Owners of America, MoveOn.org, RightMarch.com, the AFL-CIO, and Computer Professionals for Social Responsibility."
My first reaction is: That's right, AOL doesn't "own" user inboxes, it merely provides a commercial service to maintain them. And so, if users don't like AOL's attempt to provide more reliable, low-spam email service, they can switch to another provider. There's a pretty active market for email services: why assume this market is broken? Indeed, there are some pretty smart people at EFF, and they understand this:
"One might trust that the market will eventually sort this out: rewarding ISPs that do not sell access to their users' inboxes and that work to improve deliverability for everyone, not just senders who pay. But the market speaks slowly -- in the meantime, this system will push small speakers into a choice of paying or not being sure that their messages are getting through to their members."
I don't know if this particular sender-pays mechanism is going to work well, but I think we should be encouraging experiments, and that this is one of the most promising in a while. We know filtering will never provide a complete solution. Why not try an incentive-based solution?
In case you don't know the details, the AOL and Yahoo! systems do not require any senders to pay, though it's hard to tell that from the backlash. Rather, it allows senders to pay (one-quarter of a penny per email) to obtain a higher "class" of service (like the difference between first class and bulk class snail mail), which will not be pre-filtered by the email service provider (ESP). Yes, eventually that price could increase (my guess is it has to increase if it is actually going to discourage many spammers!), and yes, eventually AOL and Yahoo! could reduce the quality of the lower service class (say, by filtering it more aggressively so more "good" mail is siphoned off to spam folders). But over that same time frame users can switch to other ESPs if they don't like the service.
There is a bit of disingenuousness in the non-profit organization protests. They want to "speak" at low cost: that is, they want to send spam! They are concerned that they won't be able to afford first-class service for the millions of emails they send out. For example, conservative political organizer RightMarch.com sends out 2 to 3 million email messages a week, and joined the EFF coalition because they are worried "we might not be able to afford sending them" (link). (In fairness, many of these organizations may be sending only opt-in, non-spam bulk email, but if the value of the mails they send is less than a quarter-penny each, is there a great social loss if they send fewer?)
March 03, 2006
Web 2.0 vulnerabilities
Wired ran an article last fall about vulnerabilities becoming apparent in various "Web 2.0" applications (whatever those are). Some are similar to spam in email: for example, splogs (fake blogs created to attract search engine interest and drive viewers to see their Viagra ads).
Many interesting social computing applications have enough openness that they are vulnerable to misuses and manipulations. A traditional approach is to develop technical means to close or limit the vulnerabilities (like filters for spam). We know that the inevitable trade-off between the benefits (even necessity) of some degree of openness for social applications and the resulting vulnerability means technical solutions are unlikely to be 100% satisfactory. That leaves open the room for incentive-based mechanisms to discourage misuse of social computing applications, like the various payment schemes proposed to fight spam. What incentive scheme might reduce splogging, for example?
For many social computing applications, financial incentive schemes may be undesirable, suggesting a growing need to develop effective non-pecuniary incentive mechanisms.
February 21, 2006
What is unsolicited, unwanted email?
This article in the New York Times, while driving home a sad and painful situation we academics all share (not being able to provide our students with all the attention they want), also illustrates the problem faced by any incentive system to discourage unwanted email: where is the boundary?
The article discusses the increase we all have experienced in email from students, sometimes inappropriate or unreasonably demanding (not always!). Clearly, some of this mail we would rather not receive. But what filter or incentive system or other mail management mechanism can tell which mail from our students we don't want to receive? The possibilities for Type II errors are a bit scary.
February 08, 2006
Lots of anti-spam ideas are crap
Here is a tongue-in-cheek list of flaws in proposed anti-spam technologies / protocols / policies. It was written as a humorous commentary on the immediate flaming most proposals receive, but it is a pretty insightful and potentially useful checklist to run against any serious proposal (from craphound.com):
Your post advocates a
( ) technical ( ) legislative ( ) market-based ( ) vigilante
approach to fighting spam. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)
( ) Spammers can easily use it to harvest email addresses
( ) Mailing lists and other legitimate email uses would be affected
( ) No one will be able to find the guy or collect the money
( ) It is defenseless against brute force attacks
( ) It will stop spam for two weeks and then we'll be stuck with it
( ) Users of email will not put up with it
( ) Microsoft will not put up with it
( ) The police will not put up with it
( ) Requires too much cooperation from spammers
( ) Requires immediate total cooperation from everybody at once
( ) Many email users cannot afford to lose business or alienate potential employers
( ) Spammers don't care about invalid addresses in their lists
( ) Anyone could anonymously destroy anyone else's career or business
Specifically, your plan fails to account for
( ) Laws expressly prohibiting it
( ) Lack of centrally controlling authority for email
( ) Open relays in foreign countries
( ) Ease of searching tiny alphanumeric address space of all email addresses
( ) Asshats
( ) Jurisdictional problems
( ) Unpopularity of weird new taxes
( ) Public reluctance to accept weird new forms of money
( ) Huge existing software investment in SMTP
( ) Susceptibility of protocols other than SMTP to attack
( ) Willingness of users to install OS patches received by email
( ) Armies of worm riddled broadband-connected Windows boxes
( ) Eternal arms race involved in all filtering approaches
( ) Extreme profitability of spam
( ) Joe jobs and/or identity theft
( ) Technically illiterate politicians
( ) Extreme stupidity on the part of people who do business with spammers
( ) Dishonesty on the part of spammers themselves
( ) Bandwidth costs that are unaffected by client filtering
( ) Outlook
and the following philosophical objections may also apply:
( ) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) SMTP headers should not be the subject of legislation
( ) Blacklists suck
( ) Whitelists suck
( ) We should be able to talk about Viagra without being censored
( ) Countermeasures should not involve wire fraud or credit card fraud
( ) Countermeasures should not involve sabotage of public networks
( ) Countermeasures must work if phased in gradually
( ) Sending email should be free
( ) Why should we have to trust you and your servers?
( ) Incompatibility with open source or open source licenses
( ) Feel-good measures do nothing to solve the problem
( ) Temporary/one-time email addresses are cumbersome
( ) I don't want the government reading my email
( ) Killing them that way is not slow and painful enough
Furthermore, this is what I think about you:
( ) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, asshole! I'm going to find out where you live and burn your house down!
Screening for Good (email) Actors
Dave Crocker of Brandenburg Consulting wrote a message today to Dave Farber's "Interesting People" mail list in which he made the following observation:
We must continue with efforts to detect and deal with Bad Actors, but there is a separate path that is at least as valuable: We need methods for distinguishing Good Actors. Folks who are deemed "safe". In effect, we need a Trust Overlay for Internet mail, to permit differential handling of mail from these good actors. In general terms, a trust overlay requires reliable and accurate identification of the actor and a means of assessing their goodness.
In other words, authentication and reputation.
Crocker is talking about screening for "good actors": some test that distinguishes trusted senders from the rest (this is not necessarily equivalent to identifying "bad actors" because there may be a vast middle that is neither good nor bad). Screening mechanisms are one of the two categories of fundamental mechanisms for dealing with hidden information problems, the hidden information in this case being the sender's private knowledge of whether she is a good or a bad type.
Part of a good actor screening mechanism, Crocker argues, is a reputation mechanism, which of course is fundamentally an ICD problem. He suggests that there is good progress on authentication mechanisms (he favors DKIM, to which he is contributing, but in his IP Journal article he discusses SPF, SenderID and other variants, too), but he believes that "there is no candidate for standardized reputation reporting."
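To make the screening idea concrete, here is a minimal sketch of how authentication and reputation might combine in a trust overlay. The domain names, scores, and thresholds below are invented for illustration; real deployments pair DKIM/SPF-style authentication with reputation services that, as Crocker notes, have no standardized reporting format.

```python
# Toy "trust overlay": first authenticate the sending domain, then
# route mail on a reputation score. REPUTATION entries and thresholds
# are assumptions for this sketch, not real data or a real protocol.

REPUTATION = {"goodlist.example": 0.95, "bulkblast.example": 0.10}

def route(sender_domain, authenticated):
    """Decide how to handle a message given identity and reputation."""
    if not authenticated:
        return "quarantine"   # unverified identity: no reputation can apply
    score = REPUTATION.get(sender_domain, 0.5)  # unknown senders = neutral middle
    if score >= 0.8:
        return "inbox"        # screened-in good actor: bypass filters
    if score <= 0.2:
        return "reject"       # known bad actor
    return "filter"           # the vast middle: normal spam filtering

print(route("goodlist.example", True))    # inbox
print(route("bulkblast.example", True))   # reject
print(route("anything.example", False))   # quarantine
```

Note how the "vast middle" gets ordinary filtering: screening in good actors is not the same as screening out bad ones.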
Final comment on Crocker: he points out that the Goodmail system AOL and Yahoo! just announced they will use attempts a good-actor screen by letting senders buy higher-class transit for their mail.
(Crocker, by the way, is one of the networking engineers who has been important to the development of Internet protocols since long before the Internet was a commercial, public platform. He was an area director of the IETF 1989-1993, and was one of the authors of early Internet email protocols 1978-82.)
February 07, 2006
Incentives to misrepresent
The NYT discusses an increasing problem with informal review and recommendation sites: insincere or misleading postings. Here, they talk about hotels that either post fake (positive) reviews about themselves, or that offer inducements (discounts, etc.) to customers to post positive reviews, or that bribe web sites and blogs to remove negative reviews.
The Times suggests this is not a problem for professional reviewers in the traditional media, but of course that's horse manure: restaurant critics who are spotted get special treatment, book and movie reviewers are offered inducements. The "payola" scandals in broadcast radio are famous (and ongoing).
Where is the ICD in this? Well, for starters, those with an interest in the outcome are offering incentives to induce (misleading) behavior. Which drives home the countervailing incentives problem: how do we discourage low-quality reviews and recommendations, and encourage high quality?
(It may be obvious, but this is another example of the growing world of "spam": unsolicited communications that are undesired or misleading. They are not pushed into users' mailboxes, but other than that, how different are they from ads touting the miraculous powers of herbal "viagra"?)
February 05, 2006
Incentives for large email senders: Yahoo and AOL start charging
We've been expecting this for years. Looks like a serious large-scale experiment in email charging is beginning. But first some background...
Spam email has apparently been with us since 1978, but many of us didn't start to worry about it until the famous Canter and Siegel "green card" email in April 1994. This was the first truly large-scale unsolicited commercial email.
The ICD problem: Email recipients don't want many of the emails that reach their inboxes: at a minimum it takes time to read enough to know that they should be deleted (and of course, some contain viruses, etc.). However, senders may find it worthwhile to send (many) emails that recipients don't want: there is little or no cost to the sender of having an already-sent mail deleted, and some small fraction of recipients may purchase an advertised product or take other action desired by the sender. Because the cost of sending is low, we get a lot of undesired, unsolicited mail in our boxes.
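The sender's calculus can be made concrete with a back-of-the-envelope expected-profit computation. All of the figures below are assumptions chosen for illustration, not measured data:

```python
# Illustrative spammer economics: profit stays positive even at tiny
# response rates, because the marginal cost of sending is near zero.
# Every number here is an assumption for the sketch.

def spam_profit(messages, cost_per_message, response_rate, revenue_per_response):
    """Expected profit from a bulk mailing."""
    cost = messages * cost_per_message
    revenue = messages * response_rate * revenue_per_response
    return revenue - cost

# 10 million messages at $0.00001 each; 1 in 100,000 recipients
# buys a $20 product: $2,000 revenue against $100 of sending cost.
print(spam_profit(10_000_000, 0.00001, 1e-5, 20.0))
```

Raising the cost per message is exactly the lever that postage-like proposals pull: at even a fraction of a cent, the example above flips to a loss.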
There are various strategies for reducing undesired, unsolicited email. Many of them are technical in nature: e.g., filters. Others create incentives for senders to reduce production. Legal rules are one example: the Can-SPAM Act imposes penalties on senders of unsolicited email that violates certain conditions.
Economic incentives have been an obvious idea. For one thing, we're all familiar with traditional "snail" mail systems that charge postage to senders: the added cost of sending reduces the amount of unsolicited mail (though it does not eliminate it!).
Two UM grad students and a former School of Information (UM) faculty member have published a paper describing an economic incentive system (forthcoming in Berkeley Electronic Press's scholarly journal, Advances in Economic Analysis and Policy). Their system would, among other things, require senders to post a financial bond that could be claimed by recipients if recipients are unhappy about receiving mail from that sender. Roughly speaking, this is a form of probabilistic stamp: sometimes the sender pays, sometimes not.
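The bond-as-probabilistic-stamp idea can be sketched numerically. The parameters below are invented for illustration; the mechanism in the paper is considerably richer:

```python
# A sender posts a bond per message; an unhappy recipient can claim it.
# The sender's expected cost is therefore bond * P(recipient claims),
# which works like a stamp priced by recipients. Numbers are assumed.

def expected_cost_per_message(bond, seize_probability):
    """Expected payment by the sender for one message."""
    return bond * seize_probability

# A wanted newsletter: recipients almost never seize the bond.
print(expected_cost_per_message(0.05, 0.01))   # ~ $0.0005 per message
# Untargeted spam: most recipients seize it.
print(expected_cost_per_message(0.05, 0.90))   # ~ $0.045 per message
```

The same bond is nearly free for a welcome sender and roughly a hundred times costlier for a spammer, which is the point: sometimes the sender pays, sometimes not.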
Which brings us back to the announcement by AOL and Yahoo! that they are going to start using Goodmail's system. Senders can pay Goodmail on the order of a quarter cent, and their mail will be marked as "Certified". AOL and Yahoo! say they will give preferred treatment to mail that is certified. AOL is most explicit at this point: it will not put such mail through its spam filters, and will not strip off attachments and images, or de-activate embedded links. Thus, mail without a Goodmail stamp may be filtered off to a "spam" folder, or may be degraded.
I found amusing a complaint by the CEO of a competing service (Bonded Sender): "A lot of e-mailers won't be able to afford it." Does a claim that companies sending mass mailings can't afford a quarter of a cent per message tell us something about the value they think their mail has? Put another way, the whole point is that many e-mailers won't find it worthwhile to send low value unsolicited mail.
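That point can be put in numbers. Under assumed (purely illustrative) response rates and revenues, a quarter-cent stamp is trivial for a sender whose recipients actually value the mail, and prohibitive for untargeted bulk mail:

```python
# Certification pays only if the expected value of a delivered message
# exceeds the stamp price. Rates and revenues below are assumptions.

STAMP = 0.0025  # dollars per message, roughly the quoted quarter cent

def worth_certifying(response_rate, revenue_per_response):
    """True if expected revenue per message covers the stamp."""
    return response_rate * revenue_per_response > STAMP

print(worth_certifying(0.02, 5.00))   # engaged customer list: True
print(worth_certifying(1e-5, 20.0))   # untargeted spam: False
```

Which is to say: the stamp screens out exactly the senders whose mail recipients value least.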