
April 26, 2010

Humans paid by the robots

Not a new story, but the New York Times reports some interesting details (including prices) about the human farms hired by robots (well, not really) to solve CAPTCHAs.

Macduff Hughes, at Google, captures the main point I've been making for years: screening out unwanted intruders is an economic problem, and CAPTCHAs are an economic (signaling) mechanism that tries to raise the price of entry high enough to keep the bad guys out.
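To put rough numbers on that logic, here is a back-of-the-envelope sketch in Python. The per-CAPTCHA rate and the value of a spam account are my own illustrative assumptions, not figures from the Times article:

```python
# Back-of-the-envelope CAPTCHA economics. All figures are assumed
# for illustration, not taken from the NYT article.

solving_cost_per_1000 = 1.00  # assumed price a human solving farm charges per 1,000 CAPTCHAs
value_per_account = 0.002     # assumed value to a spammer of one CAPTCHA-protected account

cost_per_captcha = solving_cost_per_1000 / 1000

# The screen deters the attacker only if solving costs more than
# the account behind the CAPTCHA is worth.
if cost_per_captcha > value_per_account:
    print("Screen works: solving costs exceed the account's value to the spammer.")
else:
    margin = value_per_account - cost_per_captcha
    print(f"Screen fails: the spammer nets ${margin:.4f} per account after paying solvers.")
```

At a tenth of a cent per solved CAPTCHA, the screen only keeps out attackers for whom an account is worth less than that, which is exactly the point: it's a question of prices, not puzzles.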

Posted by jmm at 07:58 AM | Comments (0)

April 19, 2010

Even academics pollute Amazon reviews (updated)

[Oops. Turns out that Orlando Figes himself was the poison pen reviewer, and that he simply compounded his dishonesty by blaming his wife. That's got to put a bit of strain on the home life.]

That people use pseudonyms to write not-arm's-length book reviews on Amazon is no longer news.

But I couldn't resist pointing out this new case, if nothing else as an especially fun example to use in teaching. Dr. Stephanie Palmer, a senior law (!) lecturer at Cambridge University (UK), was outed by her husband, Prof. Orlando Figes, as the author of pseudonymous reviews that savaged the works of his rivals, while praising a book by her husband as a "beautiful and necessary" account, written by an author with "superb story-telling skills." Story-telling, indeed.

A closing comment by the editor of the Times Literary Supplement, which broke the story: "What is new and is regrettable is when historians use the law to stifle debate and to put something in the paper which is untrue....[Figes's] whole business is replacing a mountain of lies with a few truths".

Via The Guardian.

Posted by jmm at 11:13 PM | Comments (0)

April 06, 2010

Yelp's new idea

Yelp, the user-contributed local business review site, has a well-known set of incentive problems that invite manipulation. First, businesses might want to write overly positive reviews of themselves (under pseudonyms). Second, they might want to write negative reviews of their competitors. Third, they might want to pay Yelp to remove negative reviews of themselves. This last has received a lot of attention, including a class action suit alleging that some of Yelp's sales people extort businesses into paying to have unfavorable reviews removed.

Yelp has always filtered reviews, trying to remove those it suspects are biased, whether too positive or too negative. But of course the filter makes both Type I and Type II errors, and some of the Type II errors (filtering out valid reviews) may be at the root of some of the extortion claims (or not).
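For readers who don't keep the error types straight, here is a toy sketch of the distinction as I'm using it. The reviews and the filter's decisions below are invented for illustration; Yelp's actual filter is, of course, proprietary:

```python
# Toy evaluation of a review filter, using the post's convention:
#   Type I  error: a biased review the filter fails to remove
#   Type II error: a legitimate review the filter wrongly removes
# Ground truth and filter decisions are invented for illustration.

reviews = [
    # (is_actually_biased, filter_removed_it)
    (True,  True),   # biased, removed      -> correct
    (True,  False),  # biased, kept         -> Type I error
    (False, False),  # legitimate, kept     -> correct
    (False, True),   # legitimate, removed  -> Type II error
    (False, True),   # legitimate, removed  -> Type II error
]

type_1 = sum(1 for biased, removed in reviews if biased and not removed)
type_2 = sum(1 for biased, removed in reviews if not biased and removed)

print(f"Type I errors (biased reviews left up):      {type_1}")
print(f"Type II errors (legitimate reviews removed): {type_2}")
```

It's the Type II errors, good reviews that quietly vanish, that can look like extortion to the business owner on the receiving end.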

Yelp has now made a rather simple change that I suspect will be quite favorable: it is making all filtered reviews visible (on a separate page; see http://mashable.com/2010/04/06/yelp-extortion-claims/). This transparency, it hopes, will let users see that it is even-handed in its filtering, and that its errors are not themselves biased (or influenced).

Embracing transparency is a strategy that seems to work more often than not in this Web 2.0 age of the Internet. I think it will here. Most folks will never bother to look at the filtered-out reviews, and thus will rely on the very reviews that Yelp thinks are most reliable. Those who do look, if Yelp is indeed being even-handed, will probably find the filtering interesting, but will ignore the filtered reviews in choosing which business to frequent. The main risk to Yelp is likely that imitators will be better able to reverse-engineer its filtering formula.

Posted by jmm at 12:57 AM | Comments (0)