August 08, 2009

Threads of Research on Generic CDA Strategies

Research on trading strategy for generic continuous double auctions (CDAs) seems to take place on four parallel and minimally interacting threads. By "generic CDA", I mean models of two-sided continuous trading of an abstract good, as distinct from strategies for predicting movements in financial markets.

1. Auction Theory. The static or one-shot double auction is well-characterized in auction-theoretic terms. Work on the dynamic case is much rarer and less successful. The pinnacle of this work, as far as I can tell, is a 1987 paper by Robert Wilson, which is heroic and insightful but does not reach definitive conclusions.

2. Artificial Trading Agents. Given the limited success of game-theoretic treatments, researchers have encoded strategies computationally in artificial trading agents, and evaluated them in simulation. Prominent efforts in this category include work by Gjerstad, Cliff, Tesauro, and others. Julian Schvartzman and I have one of the latest contributions on this thread.
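The simplest strategy in this literature is Gode and Sunder's zero-intelligence-constrained (ZIC) trader, which submits offers drawn uniformly at random, constrained only so the agent never offers to trade at a loss. A minimal sketch (the price bounds here are illustrative, not from any particular study):

```python
import random

# Zero-intelligence-constrained (ZIC) trader, after Gode & Sunder (1993):
# each offer is drawn uniformly at random, constrained so the agent never
# offers to trade at a loss relative to its private value (buyer) or cost
# (seller). Price bounds are illustrative.

def zic_offer(role, limit, price_floor=0.0, price_ceiling=200.0):
    """role: 'buyer' or 'seller'; limit: private value (buyer) or cost (seller)."""
    if role == 'buyer':
        return random.uniform(price_floor, limit)    # bid at most the private value
    else:
        return random.uniform(limit, price_ceiling)  # ask at least the cost

bid = zic_offer('buyer', limit=120.0)
ask = zic_offer('seller', limit=80.0)
```

Strategies like ZIP (Cliff) and GD (Gjerstad-Dickhaut) refine this baseline by adapting offers to observed market activity.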

3. Agent-Based Finance. There is a substantial literature that also simulates heuristic agent strategies, but with the aim of analyzing global properties of market dynamics (e.g., reproducing qualitative phenomena from financial markets), rather than identifying superior strategies. Blake LeBaron is a major representative researcher in this area, and has written a fairly recent survey.

4. Market Microstructure. The finance literature addresses trading strategy, primarily from the perspective of market makers or liquidity providers. Their models differ from those above, in that the traders do not have fundamental private value for the abstract good.

Threads #2 and #3 share some common heritage in the Santa Fe Double Auction Tournament of 1990, and some strategies proposed in one thread are used in the other. In my recent work I am attempting to connect #1 and #2 by performing game-theoretic analysis of trading agents. Thread #4 seems the most isolated, though in principle its results should be quite relevant to the other threads, and vice versa.

So the question is: what are the most promising approaches for unifying these threads? Or is there some reason (beyond differences of their primary academic communities) they should develop independently?

Posted by wellman at 11:50 AM | Comments (0) | TrackBack

August 04, 2009

Technology Enablers of Latency Arbitrage

Ralph Frankel, CTO of Solace Systems, has a fascinating article on the technology for shaving microseconds and milliseconds off the market-response times achievable for high-frequency trading functions. He classifies "tricks of the trade" into five categories, each incorporating sophisticated specializations that combine to provide a significant edge. (And aptly labels this latency arbitrage, as distinguished from statistical arbitrage and any other technique that might be employed in algorithmic trading.)

He concludes by raising the question: "Is latency arbitrage fair?", ultimately answering with "Yes—if you’re willing to invest in the same technology".

The engineer in me is deeply impressed with what the systems from Solace can apparently do. The economist in me is horrified by the waste of resources and talent. Never mind fairness--the latency reduction arms race entails substantial costs (a boon for Solace Systems), but no consequent benefit in overall market performance. Traders can pay the transaction cost in computer software and hardware, or pay it in losses to latency arbitrageurs, but either way it's a transaction cost.

(Thanks to the Felix Salmon blog for the link to Frankel's illuminating article.)

Posted by wellman at 03:13 PM | Comments (3) | TrackBack

August 03, 2009

Short-Lived Dark Pools

Felix Salmon cited my original post on employing one-second call markets as a counter to high-frequency trading. He ends his post by raising the following question for his readers (far more numerous than mine) to consider:

Would this plan essentially give everybody in the market the advantages of being in a dark pool which only exists for one second? On its face, I think it’s a good idea. What would the downside be?
To answer the first question: Yes, I think that moving everybody into a short-lived dark pool is a good way to think about this. Why not provide dark pool access for the masses?

Some of Felix's commenters weighed in on potential downsides (and upsides, and side-sides...). These included a good fraction of non sequiturs, but certainly there are many practical questions to be addressed before such a sweeping change could be implemented. I hope to address some of these in future posts, as well as more thorough scholarly works.

Posted by wellman at 02:49 PM | Comments (0) | TrackBack

July 30, 2009

Cost/Benefit of High-Frequency Trading

On Marginal Revolution, Tyler Cowen discusses high-frequency trading and gets to the nub of the issue:

The philosophical question is why it might possibly be beneficial to have market prices adjust within five seconds rather than within fifteen. One second rather than five? 0.25 rather than one?

Of course, it's an economic question, not just philosophy. A commenter named "a student of economics" argues persuasively that there is in fact no benefit at these short time scales:

The social benefit is the net present value of having "accurate" prices a few fractions of a second earlier. This benefit is a function of the discount rate, which is customarily expressed in annual terms, and the value of the changes in decision-making that might result under the new, more "accurate" prices.

I submit that the social benefit is trivial.

The private benefit is in the billions, but that is almost 100% rent-seeking. It is gained almost entirely at the expense of another party whose computer or trading algorithm might be fractionally slower.

This is a nice example of almost pure rent-dissipation. It's a case where the invisible hand of the market does more harm than good, by directing enormous capital resources and some of our most brilliant minds toward an activity that creates essentially zero value.

We should set up the rules of the game to maximize incentives for value creation and minimize incentives for rent dissipation. That suggests banning or heavily taxing HFT.
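A quick back-of-the-envelope calculation shows just how trivial the time-value benefit is. Assuming (illustratively) a 5% continuously compounded annual discount rate applied over one second:

```python
import math

r = 0.05                        # annual discount rate (illustrative assumption)
seconds_per_year = 365.25 * 24 * 3600
dt = 1.0 / seconds_per_year     # one second, as a fraction of a year

# Present value of $1 of "accurate price" arriving one second later:
discount_factor = math.exp(-r * dt)
forgone = 1.0 - discount_factor  # time-value benefit of being one second earlier
# forgone is on the order of 1.6e-9 dollars per dollar -- effectively zero
```

Even scaled up to billions of dollars of trading volume, the pure discounting benefit of one-second-earlier prices amounts to a few dollars a year.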

Rather than banning or taxing, we could adopt market rules that eliminate the benefit of HFT, as in the discrete-time trading scheme (one-second call markets) advocated in my previous post.

Posted by wellman at 01:54 PM | Comments (0) | TrackBack

Countering High-Frequency Trading

The recent NYT article by Charles Duhigg on high-frequency trading (HFT) has set off a flurry of argument about the benefits and threats of this activity to financial trading systems. The revelation that some systems provide advance information (exposing incoming orders 30-500 milliseconds before they are submitted to the general market) to select HFT systems has drawn particular fire. Some have suggested that rapidity of response capability per se could open up manipulation possibilities or is otherwise destabilizing. We have also seen questions about whether diverting trade surplus toward whoever builds the biggest, fastest network is an efficient use of resources, and the implications for perceptions of fairness across the trading public.

Let us start from the premise that asymmetry of information about incoming orders is inherently undesirable. Leveling the playing field in information promotes efficiency and lowers the cost of entry for the broader investing public.

The root of the problem, in my view, is the system's support for continuous-time trading. In a continuous market, trades are executed instantaneously whenever there are matching orders, and introduction of an unmatched order likewise causes an instantaneous update to the information available to traders.

An alternative would be a discrete-time market mechanism (technically, a call market), where orders are received continuously but clear only at periodic intervals. The interval could be quite short--say, one second--or aggregate over longer times--five or ten seconds, or a minute. Orders accumulate over the interval, with no information about the order book available to any trading party. At the end of the period, the market clears at a uniform price, traders are notified, and the clearing price becomes public knowledge. Unmatched orders may expire or be retained at the discretion of the submitting traders.
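The clearing rule described above can be sketched in a few lines. Buy orders are sorted from highest price to lowest, sell orders from lowest to highest, and the market clears as many crossing pairs as possible at a single uniform price (the midpoint rule here is one common choice, not the only one):

```python
# Minimal sketch of one clearing of a periodic call market.
# Orders accumulate over the interval; at the close, the market clears
# at a single uniform price.

def clear_call_market(bids, asks):
    """bids, asks: lists of limit prices. Returns (num_trades, clearing_price)."""
    bids = sorted(bids, reverse=True)   # most aggressive buyers first
    asks = sorted(asks)                 # most aggressive sellers first
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1                          # k-th bid crosses k-th ask: matchable
    if k == 0:
        return 0, None                  # no crossing orders this interval
    # Uniform price: midpoint of the marginal matched bid and ask.
    price = (bids[k - 1] + asks[k - 1]) / 2
    return k, price

# Example: orders that accumulated during one one-second interval.
trades, price = clear_call_market(bids=[10.2, 10.0, 9.8], asks=[9.7, 9.9, 10.5])
# trades == 2, price == 9.95
```

Note that every matched trader transacts at the same price, so within an interval there is nothing to gain from arriving earlier than anyone else.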

Even with a period as short as one second, the call market totally eliminates any advantage of HFT systems. It does not eliminate the opportunities for algorithmic trading in general--just those that come from sub-second response time. No party has privileged information about order flow, and no party benefits by getting a shorter wire to the "trading floor".

Moreover, the call market eliminates the disparity in risk incurred by those who dangle limit orders in continuous trading systems. Given the latency in retracting an order, an HFT system can take advantage of existing orders when new information becomes available. Because of this, systems need to provide extra incentives for the so-called "liquidity providers" who are willing to maintain a limit order. In a discrete-time system, nobody can hang back and prey on liquidity providers. In order to trade, one must incur the one-second (or whatever) exposure. But everyone incurs it, and no information about orders becomes available during that interval anyway, so the risk is spread evenly.

Finally, call markets also have efficiency advantages over continuous mechanisms. Aggregating orders over time eliminates some of the noise in matching based on arbitrary arrival sequences, thus producing higher surplus overall, and less price volatility. There is a classic tradeoff here between efficiency and execution delay. But with clearing intervals as short as a second, it seems hard to argue that this delay has very much real cost.

Switching over to discrete-time systems entails some technological changes, all well within feasibility in my view. The conceptual change of mindset required of the trading community may be a more serious challenge, but the benefits could be immense.

Posted by wellman at 09:21 AM | Comments (1) | TrackBack

July 29, 2009

2009 Trading Agent Competition Results

The tenth annual Trading Agent Competition was completed earlier this month at the IJCAI-09 conference in Pasadena, California. TAC-09 featured three games: the supply chain management (SCM), market design (reverse TAC, or "CAT"), and Ad Auction (AA) games. Preliminary rounds began in June, involving 42 teams (13 in SCM, 14 in CAT, and 15 in the new AA game) from 14 countries. A full list of participating teams is available.

Based on the qualifying and seeding rounds, the 14 CAT agents, 15 AA agents, and 12 of the SCM agents proceeded to the final tournament, on 13–14 July. The first day (semifinals) coincided with the Workshop on Trading Agent Design and Analysis (TADA-09), where researchers presented their latest results on trading agent technology.

The competition was as always an exciting event. Although the table below lists only the top-scoring few agents for each game, many entries exhibited strong trading strategy. Complete SCM-09 score tables are also available.

Game        Rank  Agent       Association
----------  ----  ----------  ----------------------------
CAT         1     Jackaroo    University of Western Sydney
            2     CUNY.CS     City University of New York
            3     IAMwildCAT  University of Southampton
SCM         1     Deep Maize  University of Michigan
            2     TacTex      University of Texas
            3     MinneTAC    University of Minnesota
Ad Auction  1     TacTex      University of Texas
            2     AstonTAC    Aston University
            3     Schlemazl   Brown University

A few articles and other descriptions of TAC-09 agents are currently available. We will post additional reports as the participants provide post-tournament agent descriptions and analyses.

TAC-09 would like to thank our bronze sponsor, Advanced Technologies Integration, Inc.

The University of Michigan also acknowledges the generous support of Microsoft, Yahoo!, and Google enabling the production of the TAC Ad Auctions game.

Posted by wellman at 08:12 PM | Comments (0) | TrackBack