Behavior Based Algorithmic Trading Strategy Identification

Author:
Yang, Steve, Systems Engineering - School of Engineering and Applied Science, University of Virginia
Advisors:
Scherer, William, Systems and Information Engineering, University of Virginia
Beling, Peter, Systems and Information Engineering, University of Virginia
Abstract:

Electronic markets have emerged as popular venues for the trading of a wide variety of financial assets, and computer-based algorithmic trading has asserted itself as a dominant force in financial markets across the world. Identifying and understanding the impact of algorithmic trading on financial markets has become a critical issue for market operators and regulators. We propose to characterize traders' behavior in terms of the reward functions most likely to have given rise to the observed trading actions. Our approach is to model trading decisions as a Markov Decision Process (MDP) and to use observations of an optimal decision policy to find the reward function. This problem is known as Inverse Reinforcement Learning (IRL), and a variety of solution approaches exist. Our IRL-based approach to characterizing trader behavior strikes a balance between two desirable features: it captures key empirical properties of order book dynamics yet remains computationally tractable. Using an IRL algorithm based on linear programming, we achieve more than 90% classification accuracy in distinguishing High Frequency Trading from other trading strategies in experiments on a simulated E-Mini S&P 500 futures market.
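As a concrete illustration of the linear-programming step, the sketch below recovers a reward vector from an observed optimal policy on a small finite MDP, in the spirit of the classic LP formulation of Ng and Russell. The three-state MDP, transition matrices, and all parameter values are hypothetical toy inputs, not the thesis's market model.

# A minimal sketch of LP-based IRL: recover a reward function consistent
# with an observed optimal policy on a toy finite MDP (hypothetical inputs).
import numpy as np
from scipy.optimize import linprog

def lp_irl(P, policy, gamma=0.9, l1=1.0, r_max=1.0):
    """P: (n_actions, n_states, n_states) transition matrices.
    policy: length-n_states array, policy[s] = observed optimal action."""
    n_actions, n_states, _ = P.shape
    # Transition matrix induced by the observed (assumed optimal) policy.
    P_star = np.array([P[policy[s], s] for s in range(n_states)])
    inv_term = np.linalg.inv(np.eye(n_states) - gamma * P_star)

    # Variables x = [R (n), t (n), u (n)]; maximize sum(t) - l1 * sum(u),
    # i.e. minimize -sum(t) + l1 * sum(u).
    c = np.concatenate([np.zeros(n_states), -np.ones(n_states),
                        l1 * np.ones(n_states)])
    I, Z = np.eye(n_states), np.zeros((n_states, n_states))
    A_ub, b_ub = [], []
    for a in range(n_actions):
        M = (P_star - P[a]) @ inv_term        # value-difference operator
        rows = [s for s in range(n_states) if policy[s] != a]
        if not rows:
            continue
        A_ub.append(np.hstack([-M[rows], I[rows], Z[rows]]))  # t - M R <= 0
        A_ub.append(np.hstack([-M[rows], Z[rows], Z[rows]]))  # -M R <= 0
        b_ub.extend([np.zeros(len(rows))] * 2)
    # |R| <= u, encoded as R - u <= 0 and -R - u <= 0.
    A_ub.append(np.hstack([I, Z, -I]))
    A_ub.append(np.hstack([-I, Z, -I]))
    b_ub.extend([np.zeros(n_states)] * 2)

    bounds = ([(-r_max, r_max)] * n_states    # R
              + [(None, None)] * n_states     # t
              + [(0, None)] * n_states)       # u
    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=bounds)
    return res.x[:n_states]

# Toy 3-state, 2-action MDP; action 1 is assumed optimal in every state.
P = np.array([[[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]],
              [[0.1, 0.1, 0.8], [0.1, 0.1, 0.8], [0.0, 0.0, 1.0]]])
print(lp_irl(P, policy=np.array([1, 1, 1])))

In the thesis setting, the states and actions would instead encode order book conditions and trading decisions, and the recovered reward vector becomes the behavioral signature used for classification.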

Furthermore, we investigate and address the issues of incomplete observations and non-deterministic policies that arise with real market data. To this end, we also develop models based on Gaussian Process Inverse Reinforcement Learning. The primary objective of this study is to model algorithmic trading behavior using Bayesian inference under the framework of inverse reinforcement learning (IRL). We model traders' behavior as a Gaussian process in the reward space. With incomplete observations of different market participants, we aim to recover the optimal policies and the corresponding reward functions that explain their behaviors under different circumstances. We show that algorithmic trading behavior can be accurately identified using the Gaussian Process Inverse Reinforcement Learning (GPIRL) algorithm developed by Qiao and Beling (Qiao and Beling [2011]), and that it is superior to the linear feature maximization approach. Experiments on real market data using the GPIRL model consistently give more than 95% trader identification accuracy with a support vector machine (SVM) based classification method. We also show that there is a clear connection between the existing summary statistic based trader classification (Kirilenko et al. [2011]) and our behavior-based classification. To account for potential changes in trading behavior over time, we propose a score-based classification approach that accommodates variations in algorithmic trading behavior under different market conditions. We further conjecture that, because our behavior-based identification better reflects traders' choices of actions and value propositions under different market conditions than the summary statistic based method, it is more informative and robust, and it is well suited for discovering new behavior patterns of market participants.
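To make the classification step concrete, the following sketch shows an SVM-based trader classifier of the kind described above. The synthetic Gaussian feature vectors stand in for features derived from the recovered reward functions; the class labels, sample sizes, and hyperparameters are illustrative assumptions only.

# A minimal sketch of the SVM classification step: each trader is a feature
# vector (here synthetic placeholders for GPIRL-recovered reward parameters),
# and an SVM separates HFT-like behavior from other strategies.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_traders, n_features = 200, 10          # hypothetical sizes

# Placeholder "reward-space" features: two behavior classes, shifted means.
X = np.vstack([rng.normal(0.0, 1.0, (n_traders // 2, n_features)),
               rng.normal(0.8, 1.0, (n_traders // 2, n_features))])
y = np.repeat([0, 1], n_traders // 2)    # 0 = other strategies, 1 = HFT-like

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
acc = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"mean CV accuracy: {acc.mean():.3f}")

In the thesis setting, X would plausibly hold the reward-function parameters recovered by GPIRL for each observed trader, with labels drawn from the summary statistic based classification of Kirilenko et al. [2011].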

Overall, we confirm the hypothesis that algorithmic trading strategies can be accurately identified using behavior-based modeling techniques under the Inverse Reinforcement Learning framework, and that such identification, based on observations of individual trading actions, can support market surveillance and other economic research on the impact of different algorithmic trading strategies on financial market quality in general.

Degree:
PhD (Doctor of Philosophy)
Keywords:
Algorithmic Trading, High Frequency Trading, Inverse Reinforcement Learning, Optimization, Gaussian Process
Language:
English
Rights:
All rights reserved (no additional license for public reuse)
Issued Date:
2012/05/03