
May 10, 2006

CAPTCHAs (2): Technical screens vulnerable to motivated humans

A particularly interesting approach to breaking purely technical screens, like CAPTCHAs, is to provide humans with incentives to end-run the screen. The CAPTCHA is a test that is easy for humans to pass, but costly or impossible for machines to pass. The goal is to keep out polluters who rely on cheap CPU cycles to proliferate their pollution. But polluters can be smart, and in this case the smart move may be "if you can't beat 'em, join 'em".

Say a polluter wants many free email accounts from Yahoo! (from which to launch pollution, such as spam). The approach is to have a computer step through Yahoo!'s signup process and repeat it many times to accumulate accounts. In many similar settings it is easy to write code that automatically navigates a signup (or other) service, as in the sketch below.
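A minimal sketch of such a signup bot, assuming a hypothetical endpoint and made-up form fields (a real bot would also handle cookies, redirects, and confirmation pages):

```python
import requests  # third-party HTTP library

SIGNUP_URL = "https://mail.example.com/signup"  # hypothetical endpoint

def mass_register(n):
    """Replay the signup form n times. With no CAPTCHA in the flow,
    each account costs the polluter a fraction of a second of CPU time."""
    for i in range(n):
        form = {  # field names are made up for illustration
            "username": f"throwaway{i:06d}",
            "password": "n0t-a-real-pw!",
        }
        resp = requests.post(SIGNUP_URL, data=form, timeout=10)
        print(i, resp.status_code)

mass_register(10_000)
```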

CAPTCHAs make it very costly for computers to aid polluters: most programs fail outright, or take a very long time, when trying to decode a CAPTCHA.
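To make the cost asymmetry concrete, here is a toy CAPTCHA generator, sketched with the Pillow imaging library (real CAPTCHAs apply far heavier distortion than this):

```python
import random
import string
from PIL import Image, ImageDraw

def make_captcha(n_chars=6):
    """Render a random string with jitter and speckle noise: trivial for a
    human eye, but hard for naive OCR to segment and classify."""
    answer = "".join(random.choice(string.ascii_uppercase) for _ in range(n_chars))
    img = Image.new("RGB", (40 * n_chars, 60), "white")
    draw = ImageDraw.Draw(img)
    for i, ch in enumerate(answer):
        # jitter each character's position to frustrate segmentation
        draw.text((12 + 40 * i + random.randint(-4, 4),
                   20 + random.randint(-8, 8)), ch, fill="black")
    for _ in range(500):
        # speckle noise to defeat simple thresholding
        draw.point((random.randrange(img.width), random.randrange(img.height)),
                   fill="gray")
    return answer, img

answer, img = make_captcha()
img.save("captcha.png")  # the server keeps `answer` and serves the image
```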

As I discussed in my CAPTCHAs (1) entry, one approach for polluters to get around the screen is to improve the ability of computers to crack the CAPTCHA. But another is to give in: if humans can easily pass the screen, then enlist large numbers of human hours to pass the test repeatedly. There are at least two ways to motivate humans to prove, over and over, that they are human: pay low-wage workers (usually in developing countries) to sit at screens all day solving CAPTCHAs, or offer (higher-wage) users some other currency they value; the most common in-kind payment has been access to a collection of pornography in exchange for each CAPTCHA solved.
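Structurally, the relay is simple: the bot downloads the CAPTCHA image from the target site, forwards it to whichever humans it has motivated, and types their answer back in. A sketch, with the solver endpoint entirely hypothetical:

```python
import requests

SOLVER_URL = "https://solver.example/job"  # hypothetical human-relay service

def solve_via_humans(captcha_png: bytes) -> str:
    """Hand the target's CAPTCHA to a human (a paid worker, or a porn-site
    visitor who must solve it to see the next image) and relay the answer.
    The bot never decodes the image itself."""
    resp = requests.post(SOLVER_URL, files={"image": captcha_png}, timeout=120)
    return resp.text.strip()  # the human's transcription

# Inside the signup loop above, the bot would fetch the CAPTCHA image,
# call solve_via_humans(), and submit the answer with the rest of the form.
```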

This puts us back in the usual problem space for screening: how to come up with a screen that is low cost for desirable human participants, but high cost for undesirable humans?
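One way to state that requirement (my notation, not from the post): if passing the screen costs each type c and admission is worth v to them, the screen separates the two groups only when

$$ c_{\text{good}} < v_{\text{good}} \quad\text{and}\quad c_{\text{bad}} > v_{\text{bad}} $$

The human-relay schemes above collapse c_bad to the per-solve wage (or the cost of the in-kind payment), so the second inequality fails whenever a polluter values an account at more than the price of one human solve.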

The lesson is that CAPTCHAs may be able to distinguish humans from computers, but only if the computers act like computers. If they enlist humans to help them, the CAPTCHAs fail.

Ironically, enlisting large numbers of humans to solve problems that are hard for computers is an example of what Luis von Ahn (one of the inventors of CAPTCHAs) calls "human computation".

Posted by jmm at May 10, 2006 12:37 AM
