Time-Series Analysis Using Neural Networks
July 19, 2006, posted by jbarseneau in Uncategorized.
The role and importance of time-series analysis in finance was recognized in 2003, when the Nobel Prize in Economics was awarded to Robert Engle for his work on the Autoregressive Conditional Heteroscedastic (ARCH) model, which he pioneered in 1982. Detecting trends and patterns in financial time series, however, has been of great interest to the finance world for decades. So far, the primary means of detecting them have been statistical methods such as clustering and regression analysis and, more recently, the ARCH model mentioned above and its generalization, the Generalized ARCH (GARCH) model, which are today among the most widely applied time-varying models. The mathematical models associated with these methods of economic forecasting, however, are linear and may fail to forecast the turning points in economic cycles, because in many cases the data they model are highly nonlinear.
Time-series analysis is the fitting of stochastic processes to time series. Any associative array of times and numbers can be viewed as a time series, and the times need not be spaced at regular intervals. For example, the historical fluctuations in the price of a NYMEX gold contract form the time series for NYMEX gold. Analysts throughout the economy use the tools outlined here to aid in managing their businesses; energy traders, for example, often attempt to forecast power consumption from both weather normals and short-term weather forecasts.
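The idea that a time series is just an associative array of times and values, with no requirement of regular spacing, can be sketched in a few lines of Python. The ticker, timestamps, and prices below are illustrative, not real NYMEX data:

```python
from datetime import datetime

# A time series as an associative array mapping timestamps to values.
# Note the intervals are irregular: no fixed sampling frequency.
gold_prices = {
    datetime(2006, 7, 17, 9, 30): 653.10,
    datetime(2006, 7, 17, 14, 5): 655.40,
    datetime(2006, 7, 18, 10, 0): 651.75,
}

# Iterate in chronological order regardless of insertion order.
for t, price in sorted(gold_prices.items()):
    print(t.isoformat(), price)
```

Any analysis that assumes evenly spaced observations (as many classical models do) has to resample or interpolate such data first; methods that accept inhomogeneous series can consume it directly.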
Intelligent agents have been used in the financial industry for a variety of applications, from Simple Network Management Protocol (SNMP) monitoring tools for operational infrastructure management to more sophisticated functions like sorting, sifting, and digesting the mountains of information that surround consumers. In the ‘always on, real-time access’ networks of the future, these agents will pick out relevant information as they autonomously roam the always-connected network.
In computer science, an intelligent agent (IA) is a software agent that exhibits some form of artificial intelligence that assists the user and will act on their behalf, in performing repetitive computer-related tasks. While the working of software agents used for operator assistance or data mining (sometimes referred to as bots) is often based on fixed pre-programmed rules, “intelligent” here implies the ability to adapt and learn. In some literature IAs are also referred to as autonomous intelligent agents, which means they act independently, and will learn and adapt to changing circumstances. –Wikipedia
Computational intelligence has promised many things over the last three decades: automated stock picking, portfolio optimization, neural prostheses, predictive models for complex systems, and many more. To say the very least, these methods have come up short on all fronts. There are furious debates in academia about why this is so; I am only concerned with moving the field forward and delivering some of the computational scalability these methods show in theory. So instead of tackling all the different reasons why they do not deliver at theoretical scale, I am going to concentrate on some obvious ways to improve them.
I have done a lot of work with the self-organizing Kohonen feature map (SOM), a neural network that resembles the way the visual cortex self-organizes the features presented to it by the visual system. These networks have proven very useful in simulation mode or at low data frequencies. I initially used the method to reverse-engineer 500 KLOC of COBOL code into an object-oriented representation for the European Space Agency. The system performed extremely well and recognized 80% of the objects that human engineers agreed on. But it was very slow. And what I want to do now needs high performance: the analysis of Level II quotes in real time for the full NASDAQ market.
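For readers unfamiliar with the SOM, here is a minimal sketch of the core training loop. The grid size, learning rate, and decay schedule are illustrative choices, not the parameters of any system described above:

```python
import numpy as np

def train_som(data, grid_w=4, grid_h=4, epochs=50, lr=0.5, sigma=1.5, seed=0):
    """Minimal Kohonen SOM: a grid of weight vectors self-organizes so
    that neighboring nodes come to respond to similar inputs."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates, used to compute neighborhood distances on the map.
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)  # shrink lr and neighborhood over time
        for x in data:
            # Best-matching unit (BMU): node whose weights are closest to x.
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood centered on the BMU.
            dist2 = (gy - by) ** 2 + (gx - bx) ** 2
            h = np.exp(-dist2 / (2 * (sigma * decay) ** 2))
            # Pull the BMU and its neighbors toward the input.
            weights += (lr * decay) * h[..., None] * (x - weights)
    return weights
```

The expensive part is the BMU search over the whole grid for every input, which is why a naive SOM struggles at the data rates of a full real-time market feed.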
Advances in computing have profoundly changed our society, providing us with the ability to capture and process massive volumes of meaningful data. Never before have we had this much financial data available for analysis, nor the processing power to analyze a complete market in real time. Combined with advances in computational methods, this has only recently equipped us to examine financial data in real time and more efficiently than ever before.
It is proposed that there is large scientific and commercial relevance exposed by the convergence of four advanced and diverse fields: (i) model-driven trading, (ii) computational intelligence, (iii) the availability of high-frequency market data, and (iv) the evolution of enabling technologies such as commodity 64-bit processors, high-performance data managers, and grid computing.
We should be able to demonstrate the unique power of this technology convergence by analyzing quote depth, which is still not commercially available in historical form, in real time and identifying important non-seasonal patterns. One can examine the BID-ASK depth of the NASDAQ cash equity market by loading and committing inhomogeneous time-series market data into cache memory. By applying the dataset to a continuously adaptive, biologically inspired computational method, high-speed pattern recognition can be conducted. The resulting patterns should indicate market anomalies and form stylized facts that can, in turn, supply a paradigm for model-driven trading. Because of the technology barriers to entry and the high level of domain-specific knowledge required, the method described here has not been attempted by any known large non-bank entity and is truly groundbreaking.
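One way such inhomogeneous depth data could be prepared for a pattern recognizer is to flatten each order-book snapshot into a fixed-length feature vector. The snapshot fields, the three-level depth, and the choice of spread and imbalance features below are all assumptions for illustration, not a description of any actual feed:

```python
import numpy as np
from collections import namedtuple

# One inhomogeneous-time snapshot of BID-ASK depth.
# Field names and the number of levels are hypothetical.
DepthSnapshot = namedtuple("DepthSnapshot", "ts bid_px bid_sz ask_px ask_sz")

def to_feature_vector(snap):
    """Flatten a snapshot into the fixed-length vector a method like a
    SOM expects: per-level spreads and size imbalances, not raw prices."""
    bid_px, bid_sz = np.asarray(snap.bid_px), np.asarray(snap.bid_sz)
    ask_px, ask_sz = np.asarray(snap.ask_px), np.asarray(snap.ask_sz)
    spread = ask_px - bid_px                            # per-level spread
    imbalance = (bid_sz - ask_sz) / (bid_sz + ask_sz)   # depth imbalance
    return np.concatenate([spread, imbalance])

snap = DepthSnapshot(
    ts=1153315800.25,  # irregular timestamp, seconds since epoch
    bid_px=[24.10, 24.09, 24.08], bid_sz=[500, 800, 300],
    ask_px=[24.12, 24.13, 24.15], ask_sz=[400, 200, 900],
)
vec = to_feature_vector(snap)  # length 6: 3 spreads + 3 imbalances
```

Streaming such vectors into an adaptive method as snapshots arrive, rather than batching them, is what keeps the analysis in real time; the irregular timestamps stay attached to each vector so detected patterns can be located in market time.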