
October 15, 2007

Nobel prize to three for incentive-centered design

Today the Royal Swedish Academy of Sciences awarded the Nobel Prize in Economics to Roger Myerson, Eric Maskin and Leonard Hurwicz for their (independent) contributions to the theory of mechanism design, which is the theoretical branch of economics that provides a good bit of the intellectual foundation for incentive-centered design. Here is a copy of the NYT article announcing the award.

Myerson, among other things, is credited with formalizing the Revelation Principle, on which we rely heavily to solve analytically for desirable mechanisms. He is also co-author of the famous Myerson-Satterthwaite paper, in which they prove the impossibility of a mechanism that is guaranteed to simultaneously satisfy efficiency, budget balance, incentive compatibility, and voluntary participation (individual rationality) for a vast class of interesting problems (basically, anything that involves bilateral private information).
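
For readers who want something closer to the formal statement, here is a rough sketch of the bilateral trade version (my paraphrase and notation, not theirs): a buyer privately knows her value v, drawn from a distribution F, and a seller privately knows his cost c, drawn from G, with overlapping supports. Myerson and Satterthwaite show that no trading mechanism can be simultaneously Bayesian incentive compatible, interim individually rational, budget balanced (no outside subsidy), and ex post efficient, where efficiency means the trade rule

\[
p(v,c) \;=\; \mathbf{1}\{\, v \ge c \,\},
\]

that is, trade happens exactly when the buyer's value exceeds the seller's cost. Something has to give -- typically efficiency or the no-subsidy condition -- which is why the result looms so large over practical mechanism design.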

Maskin (a teacher of mine when I got my Ph.D. at MIT), among other things, is co-author (with Drew Fudenberg) of the formal proof of the famous "folk theorem", which demonstrates that almost any feasible, individually rational outcome (including fully collusive outcomes) is an equilibrium of an infinitely repeated game if the players are sufficiently patient (that is, if the rate of discounting is low enough). He has also made important contributions to auction theory, such as his 1984 paper with John Riley in which they solved for revenue-maximizing auction designs for a monopoly seller facing risk-averse buyers.
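
Stated loosely, and suppressing the technical conditions in the Fudenberg-Maskin proof (for example, full dimensionality of the feasible payoff set), the theorem says: for every feasible payoff vector $v$ with $v_i > \underline{v}_i$ for all players $i$ (where $\underline{v}_i$ is player $i$'s minmax payoff), there is a threshold discount factor $\bar\delta < 1$ such that

\[
\delta \in (\bar\delta,\, 1) \;\Longrightarrow\; v \text{ is a subgame-perfect equilibrium payoff of the infinitely repeated game.}
\]

The closer $\delta$ is to one -- the more patient the players -- the easier it is to sustain such outcomes, collusive ones included.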

Hurwicz is widely regarded as the founder of mechanism design theory, starting with his 1960 paper in which he set out the formal definition of a mechanism. In 1972 he was the first to introduce the formal concept of incentive-compatibility, which is of course central to incentive-centered design.
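
A minimal formal sketch of that concept, in the Bayesian form most of us use today (the notation is mine): a direct mechanism $f$ mapping reported types to outcomes is incentive compatible if truth-telling is an equilibrium, i.e., no agent can gain in expectation by misreporting her type:

\[
\mathbb{E}_{\theta_{-i}}\!\left[\, u_i\big(f(\theta_i,\theta_{-i}),\,\theta_i\big) \,\right]
\;\ge\;
\mathbb{E}_{\theta_{-i}}\!\left[\, u_i\big(f(\hat\theta_i,\theta_{-i}),\,\theta_i\big) \,\right]
\qquad \text{for all agents } i \text{ and all types } \theta_i,\ \hat\theta_i .
\]

This is also exactly the constraint that Myerson's Revelation Principle lets us impose, without loss of generality, when we search for desirable mechanisms.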

This is not the first time the Nobel has been awarded for contributions to incentive-centered design. James Mirrlees and William Vickrey were awarded the prize in 1996 for their independent contributions to "the economic theory of incentives under asymmetric information", which is the central problem addressed by mechanism design (and both of their cited contributions were specifically mechanism design solutions to the problem: Mirrlees with tax system design, and Vickrey with auction design). Also, in 2001 Akerlof, Spence and Stiglitz shared the prize for their independent contributions to the economics of asymmetric information.

Posted by jmm at 12:25 PM | Comments (0)

October 10, 2007

Is cheaper monitoring technology a good thing?

In settings in which the effort or actions of one party to a transaction (say, an employee, or a professional services provider like a consultant or a lawyer) cannot be easily observed or verified, we have a situation known as the hidden action problem. The asymmetry of information means that contracts and interactions will likely not be as efficient or productive as they would be if the action were observable (and verifiable) by both sides.
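
The textbook way to formalize this (a stylized sketch, not tied to any particular paper): a principal offers a wage schedule $w(\cdot)$ contingent on an observable, noisy outcome $x$; the agent then chooses an unobservable effort level $e$ at personal cost $c(e)$. The principal's design problem is

\[
\max_{w(\cdot),\,e}\ \mathbb{E}\big[\, x - w(x) \mid e \,\big]
\quad\text{s.t.}\quad
e \in \arg\max_{e'}\ \Big\{ \mathbb{E}\big[\, u(w(x)) \mid e' \,\big] - c(e') \Big\}
\quad\text{and}\quad
\mathbb{E}\big[\, u(w(x)) \mid e \,\big] - c(e) \;\ge\; \bar u .
\]

Because the wage can be conditioned only on the noisy outcome $x$ rather than on $e$ itself, the principal ends up loading risk onto the agent (or leaving rents) that would be unnecessary if effort were observable; that wedge is the efficiency loss referred to above.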

In the very large literature on this problem (in economics, management science and law), one standard bit of received wisdom is that there is a direct trade-off between costly monitoring and providing costly incentives to perform that are self-enforcing (that is, the incentives make it in the self-interest of the party with the information advantage to act as if the information were equally shared). (See, e.g., Eisenhardt, K. (1989). Agency Theory: An Assessment and Review. Academy of Management Review, 14(1), 57-74.) As the costs of technology or systems to monitor behavior fall for certain applications, principals will want to rely more on monitoring and less on inducement.
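
A back-of-the-envelope way to see that trade-off (purely illustrative; the notation and the stark either/or framing are mine, not from the agency literature): suppose the principal can either pay a monitoring cost $m$ to observe effort directly and pay a flat reservation wage $\bar w$, or skip monitoring and use outcome-contingent incentive pay, which forces a risk-averse agent to bear risk and therefore requires a risk premium $\rho$ on top of $\bar w$. Comparing the two total costs,

\[
\underbrace{\bar w + m}_{\text{flat wage plus monitoring cost}}
\quad\text{vs.}\quad
\underbrace{\bar w + \rho}_{\text{flat wage plus risk premium for incentive pay}}
\qquad\Longrightarrow\qquad
\text{monitor whenever } m < \rho ,
\]

so as monitoring technology gets cheaper ($m$ falls), the balance tips toward monitoring in more and more applications -- exactly the shift the received wisdom predicts.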

One of my graduate students, Pieter Kleymeer, pointed out a company selling Cataphora, which it claims is "the most comprehensive electronic evidence management and analysis platform available". They promote it both for litigation applications (who said what to whom, about what, and when?) and, for example, for monitoring the electronic communications of one's own employees.

Pieter asked me the following very good question: "What does this say about the inherent trust built into a contract? If we don't need contracts to incentivize behavior and instead directly monitor behavior, don't we lose trust or good faith between the parties?"

What Pieter raises is the possibility that following the cost curve down for monitoring technology may have adverse consequences. When people behave honestly because they want to (positive incentives) rather than because they can't get away with untrustworthy behavior and will be punished (monitoring), perhaps there is more and better social capital, which creates a positive feedback loop and other social benefits. (*Maybe* -- it's not necessarily the case that people respond "better" or differently to positive rather than negative incentives -- but it's a plausible conjecture that they might.) The immediate savings from using less expensive monitoring technology rather than providing costly incentives to encourage loyalty, honesty and trustworthiness may be outweighed by losses from allowing a culture to develop in which people are only trying to avoid getting caught. I don't know whether there is any research literature on this possible structural cost of relying more on monitoring.

One of my colleagues in the Economics Department (Joel Slemrod) is fond of saying that the US has a voluntary income tax system: monitoring is so low that it makes rational sense for most people to cheat (more) on their taxes, yet compliance appears (based on lots of studies) to be relatively high. Somehow we have established a system of trust and civic responsibility in which enough people *want* to honestly pay their taxes that we raise enough revenue without more draconian monitoring and enforcement.

Posted by jmm at 12:30 PM | Comments (0)