
August 08, 2009

Threads of Research on Generic CDA Strategies

Research on trading strategy for generic continuous double auctions (CDAs) seems to take place on four parallel and minimally interacting threads. By "generic CDA", I mean models of two-sided continuous trading of an abstract good, as distinct from strategies for predicting movements in financial markets.
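
To fix ideas, here is a bare-bones sketch of the mechanism these models share (the names are mine, and real implementations add order cancellation, multi-unit orders, and time priority): traders submit limit orders asynchronously, and any order that crosses the best standing order on the opposite side trades immediately at the standing quote.

    import heapq

    class GenericCDA:
        """Minimal continuous double auction for one abstract good, unit orders."""

        def __init__(self):
            self._bids = []   # max-heap of standing bids, stored as (-price, trader)
            self._asks = []   # min-heap of standing asks, stored as (price, trader)

        def submit(self, trader, side, price):
            """Process a unit limit order the moment it arrives.

            Returns (buyer, seller, trade_price) if the order crosses the best
            standing order on the other side; otherwise the order rests in the
            book and None is returned.
            """
            if side == 'buy':
                if self._asks and self._asks[0][0] <= price:
                    ask_price, seller = heapq.heappop(self._asks)
                    return trader, seller, ask_price      # trade at the standing quote
                heapq.heappush(self._bids, (-price, trader))
            else:  # side == 'sell'
                if self._bids and -self._bids[0][0] >= price:
                    neg_bid, buyer = heapq.heappop(self._bids)
                    return buyer, trader, -neg_bid
                heapq.heappush(self._asks, (price, trader))
            return None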

1. Auction Theory. The static or one-shot double auction is well-characterized in auction-theoretic terms. Work on the dynamic case is much rarer and less successful. The pinnacle of this work, as far as I can tell, is a 1987 paper by Robert Wilson, which is heroic and insightful but does not reach definitive conclusions.

2. Artificial Trading Agents. Given the limited success of game-theoretic treatments, researchers have encoded strategies computationally in artificial trading agents, and evaluated them in simulation. Prominent efforts in this category include work by Gjerstad, Cliff, Tesauro, and others (a minimal example of such a heuristic strategy is sketched just below this list). Julian Schvartzman and I have one of the latest contributions on this thread.

3. Agent-Based Finance. There is a substantial literature that also simulates heuristic agent strategies, but with the aim of analyzing global properties of market dynamics (e.g., reproducing qualitative phenomena from financial markets), rather than identifying superior strategies. Blake LeBaron is a leading researcher in this area, and has written a fairly recent survey.

4. Market Microstructure. The finance literature addresses trading strategy, primarily from the perspective of market makers or liquidity providers. Their models differ from those above in that the traders do not have a fundamental private value for the abstract good.
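
To give a flavor of the strategies studied in threads #2 and #3, here is a sketch of about the simplest heuristic in this family, a zero-intelligence-constrained quote in the spirit of Gode and Sunder; the price bounds are arbitrary placeholders of mine. Strategies like Gjerstad's and Cliff's ZIP are considerably more sophisticated, conditioning their quotes on observed market history.

    import random

    def zi_constrained_quote(private_value, side, price_floor=0.0, price_ceiling=200.0):
        """Zero-intelligence-constrained quote: random, but never at a loss."""
        if side == 'buy':
            return random.uniform(price_floor, private_value)   # never bid above value
        return random.uniform(private_value, price_ceiling)     # never ask below cost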

Threads #2 and #3 share some common heritage in the Santa Fe Double Auction Tournament of 1990, and some strategies proposed in one thread are used in the other. In my recent work I am attempting to connect #1 and #2 by performing game-theoretic analysis of trading agents. Thread #4 seems the most isolated, though in principle its results should be quite relevant to the other threads, and vice versa.

So the question is: what are the most promising approaches for unifying these threads? Or is there some reason (beyond differences of their primary academic communities) they should develop independently?

Posted by wellman at 11:50 AM | Comments (0) | TrackBack

August 04, 2009

Technology Enablers of Latency Arbitrage

Ralph Frankel, CTO of Solace Systems, has a fascinating article on the technology for shaving microseconds and milliseconds off the market-response times that trading firms can achieve for high-frequency trading functions. He classifies "tricks of the trade" into five categories, each incorporating sophisticated specializations that combine to provide a significant edge. (And he aptly labels this latency arbitrage, as distinguished from statistical arbitrage and any other technique that might be employed in algorithmic trading.)

He concludes by raising the question: "Is latency arbitrage fair?", ultimately answering with "Yes—if you’re willing to invest in the same technology".

The engineer in me is deeply impressed with what the systems from Solace can apparently do. The economist in me is horrified by the waste of resources and talent. Never mind fairness--the latency-reduction arms race entails substantial costs (a boon for Solace Systems) but no consequent benefit in overall market performance. We can pay the transaction cost in computer software and hardware, or pay it in latency arbitrage, but either way it's a transaction cost.

(Thanks to the Felix Salmon blog for the link to Frankel's illuminating article.)

Posted by wellman at 03:13 PM | Comments (3) | TrackBack

August 03, 2009

Short-Lived Dark Pools

Felix Salmon cited my original post on employing one-second call markets as a counter to high-frequency trading. He ended his post by raising the following question for his readers (far more numerous than mine) to consider:

Would this plan essentially give everybody in the market the advantages of being in a dark pool which only exists for one second? On its face, I think it’s a good idea. What would the downside be?

To answer the first question: Yes, I think that moving everybody into a short-lived dark pool is a good way to think about this. Why not provide dark pool access for the masses?
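
For concreteness, here is a rough sketch of the clearing rule such a periodic call market might use; the one-second batching interval and the midpoint pricing are illustrative assumptions on my part, not a worked-out proposal. Orders accumulate during the interval, then as many buy and sell orders as possible are crossed at a single uniform price.

    def clear_call_market(bids, asks):
        """Clear one batch of a hypothetical one-second call market.

        bids, asks: limit prices of the unit orders collected this interval.
        Returns (quantity, price): the number of trades and a uniform clearing
        price consistent with every submitted order.
        """
        bids = sorted(bids, reverse=True)   # highest bids first
        asks = sorted(asks)                 # lowest asks first
        q = 0
        while q < len(bids) and q < len(asks) and bids[q] >= asks[q]:
            q += 1
        if q == 0:
            return 0, None                  # no crossing orders this interval
        # Any price in [lo, hi] works: matched buyers pay no more than they bid,
        # matched sellers receive no less than they asked, and no excluded
        # order would want to trade at that price.
        lo = max(asks[q - 1], bids[q]) if q < len(bids) else asks[q - 1]
        hi = min(bids[q - 1], asks[q]) if q < len(asks) else bids[q - 1]
        return q, (lo + hi) / 2.0           # take the midpoint, for concreteness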

Some of Felix's commenters weighed in on potential downsides (and upsides, and side-sides...). These included a good fraction of non sequiturs, but certainly there are many practical questions to be addressed before such a sweeping change could be implemented. I hope to address some of these in future posts, as well as in more thorough scholarly work.

Posted by wellman at 02:49 PM | Comments (0) | TrackBack

August 01, 2009

Google Faculty Summit

I just returned from the Google Faculty Summit, a gathering in Mountain View of ~100 professors (mostly computer scientists) from universities across North America, and a few from South America as well. Google holds this event annually as part of its university relations effort, fostering recruiting channels and research collaborations between the company and academia.

One of the highlights of the 1.5-day program was an informal talk by Larry Page, Google co-founder (and U.Michigan alumnus). Larry discussed a wide range of topics, devoting surprising attention to the topic of artificial intelligence, which he argued was being under-emphasized in computer science research these days. He expressed his opinion that much current AI research lacks the ambition to tackle the really fundamental problem, which he suggested would ultimately be solved with simple ideas and a huge engineering effort to bring them to scale. (I suspect that most AI researchers would broadly agree with this, though they might quarrel with sweeping characterizations of the field.) Larry also asserted that Google's algorithms for placing ads on content pages (a fundamental operation of their AdSense service) came out of their early efforts on more general AI text understanding problems. Thus, he credited AI research with half of Google's current revenue.

Visiting innovative companies is of course a most worthwhile way for even the busiest academics to spend their time. In addition to learning about emerging technologies and making valuable connections, we get a glimpse of what problems these companies think are important, and where the real technical challenges are. Reflecting on all the impressive work from Google we saw presented at this meeting (Google Earth, Flu Trends, Book Search, Statistical Machine Translation, Wave, just to name a few), however, it occurred to me that once Google recognizes that a problem is important and ripe for innovation, they are probably well on their way to producing great solutions. Perhaps a better strategy for academic researchers like myself is to try to diagnose where Google (and other companies) may have a blind spot--problems that are important and solvable, but that the industry players just don't see yet, or perhaps do not see how the advances would benefit the company. Regardless, it's clearly advantageous to be aware of what the capable people at Google and other cutting-edge technology companies are up to.

Posted by wellman at 07:55 AM | Comments (0) | TrackBack