

Norgate Data for Python

Norgate Data is expanding their services to include support for Python.

Norgate is a supplier of high-quality end-of-day data.  I have been using their data for several years and recommend it highly.  They have recently released an API (currently for Windows, with plans to add support for Linux in the future) that enables direct access to the Norgate data files from Python.  I have been one of the testers.

The API provides access to all of the Norgate data fields: price, dividends, splits, index membership, and more.  Data can be imported in a variety of formats, including NumPy arrays and pandas DataFrames.

Begin by visiting the NorgateData PyPI page, where you will find extensive documentation and fully operational examples.

Install the library using pip from the command prompt.

> pip install norgatedata
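
Here is a minimal sketch of pulling end-of-day prices into a pandas DataFrame.  The function and parameter names follow the examples I recall from the NorgateData PyPI page; verify them against the current documentation before relying on them.

import norgatedata

# Settings follow the examples on the NorgateData PyPI page; the names
# used here are from memory and should be checked against that page.
symbol = 'AAPL'
priceadjust = norgatedata.StockPriceAdjustmentType.TOTALRETURN
padding = norgatedata.PaddingType.NONE

# Pull end-of-day prices directly into a pandas DataFrame
prices = norgatedata.price_timeseries(
    symbol,
    stock_price_adjustment_setting=priceadjust,
    padding_setting=padding,
    start_date='2015-01-01',
    format='pandas-dataframe',
)

print(prices.tail())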

Stationarity

A reader asked this question related to stationarity.

Hi Howard,

I’m following up on a topic that you know well – stationarity. I’ve viewed your video on the topic and done quite a bit of reading. Reading more has perhaps clarified some of the research and viewpoints, but it certainly hasn’t provided clarity from a trading system perspective.

I did dive into running ADF tests on some issues in various time frames. I’ve included one sample from XLY. Most of the tests for the 6-month samples returned non-stationary results. You mentioned in your book that one needs to experiment with various timeframes to find what may work best for testing. If the timeframes consistently show non-stationarity, where does one go from there? I hope this doesn’t sound like I am looking for a specific answer, just trying to understand better how you handle the stationarity issue in your trading system development. It’s a fascinating topic on top of so many others in developing trading systems, so I will keep on learning. Any thoughts you have on the topic are greatly appreciated (as always).

Kind regards,
Anon


Greetings, and thanks for your question —

Stationarity is a feature of a distribution. Prices of financial data series are rarely stationary themselves. Some can be made stationary through differencing, some through detrending. Price change or log of price change is sometimes stationary.

What we want is stationarity for the distribution of the signal and following trade. Rather than focus on price, form series based on the signal you are using, and test it for stationarity. If it is stationary, good; if it is not, apply a variety of transformations in search of a stationary distribution. Consider and test differencing, detrending, position-in-range (such as stochastic or percent rank), technical indicators (such as RSI), statistical indicators (such as deviation from the mean), distributional transformations (such as Box-Cox or softmax).
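
To make that concrete, here is a minimal sketch using the Augmented Dickey-Fuller test from statsmodels, applied to a synthetic price series and two candidate transformations.  The series and the particular transformations shown are placeholders for illustration, not a recommendation.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Placeholder price series; substitute your own data
rng = np.random.default_rng(0)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 1500))))

def adf_report(series, label):
    # Augmented Dickey-Fuller test: small p-value suggests stationarity
    stat, pvalue, *_ = adfuller(series.dropna(), autolag='AIC')
    print(f'{label:20s}  ADF stat {stat:8.3f}   p-value {pvalue:.4f}')

# Raw price is usually non-stationary
adf_report(close, 'close')

# Candidate transformations of the series the signal is based on
adf_report(np.log(close).diff(), 'log price change')

pos_in_range = close.rolling(20).apply(
    lambda w: (w[-1] - w.min()) / (w.max() - w.min()), raw=True)
adf_report(pos_in_range, 'position in range')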

You will probably still find that the distribution drifts over time, requiring monitoring and periodic adjustment.

Finding a stationary signal is necessary, but not sufficient. We also need the trades that follow the signal to have good characteristics — sufficiently profitable without excessive risk.

Best regards, Howard

Supercomputers for the masses

I have recently completed building a new computer for machine learning.  The capabilities of computers available to hobbyists are amazing.

The metric by which scientific computers are measured is FLOPS: floating point operations per second.  The higher, the better.  By that metric alone, the human brain rates only a few FLOPS.  Taken as a whole, the human brain performs about 10^16 operations per second, very few of which are floating point.

Coming out of graduate school in 1972, I was manager of an engineering company's Control Data Corporation 6400 supercomputer.  It had its own room with a glass-panel front wall, raised floor, conditioned power, a dedicated air conditioner, and a staff of two CDC technicians.  At installation, its price was about $5 million (in 1972 dollars).  It could execute about 200 kiloFLOPS (200,000 floating point operations per second, or 2 x 10^5).

Computer scientists and engineers have always looked forward to the next generation of hardware.  In 1965, Gordon Moore, who later co-founded Intel, noted that the density of transistors (transistors per square inch) doubled about every year, a figure he later revised to every two years.  Moore's Law, as it has become known, now suggests that computer chip performance doubles about every 18 months due to increases in both density and execution speed.  Note the chart on the Moore's Law page.  Capability is approximately a straight line on a semi-log chart, showing that the increase in density has been exponential and consistent.  When I first began working with individual chips in the early 1970s, the Intel 8008 and Motorola 6800 were available to hobbyists, each with a few thousand transistors on a chip about the size of a postage stamp.  Current Intel i7 chips (in the upper right corner of that chart), such as the one you probably have in your home computer, have several billion transistors on a chip about the size of a book of matches.

World War II created the immediate need for electronic digital computers.  One early application was computation of artillery firing tables.  Another was simulation of the decay process of uranium in the design of the atomic bomb.  The early computers, the Harvard Mark I and ENIAC, filled large rooms and could perform a few operations per second.  The simulation profession has always anticipated faster processors with more memory at lower cost.  Supercomputers have been expensive, affordable mainly to government laboratories.  Capabilities, measured in FLOPS, have increased from 2 gigaFLOPS for the Cray in 1985, to 93 petaFLOPS (93,000,000 gigaFLOPS) for the Sunway in 2016, and 200 petaFLOPS for the US Summit in 2018.  The annual increase is a multiplier of about 1.75, close to Moore's Law.  Along the way, the Lawrence Livermore Laboratory had the top supercomputer in the world in 2001: the 12.3 teraFLOP, $110 million ASCI White.

The computer I have built from off-the-shelf parts, at a cost of about $3,500, runs at about 12 teraFLOPS.  The key to the speed is the GPU board, a GTX 1080 Ti with about 3,500 processing units.  Benchmarks I have run show an increase in speed by a factor of about 100 for matrix operations, the kind central to machine learning.  Programs that took one hour on a single CPU now take about 30 seconds using the GPU.  Speed depends, in part, on the mix of instructions and the amount of data relative to the size of memory.
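
For readers who want to reproduce that kind of comparison, here is a minimal sketch that times a large matrix multiplication on the CPU with NumPy and on the GPU with CuPy.  CuPy is the library chosen for the illustration, not necessarily the one used for the benchmarks above, and the matrix size is arbitrary.

import time
import numpy as np
import cupy as cp   # requires a CUDA-capable GPU and the cupy package

n = 8000
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

# CPU timing
t0 = time.perf_counter()
c_cpu = a @ b
cpu_time = time.perf_counter() - t0

# Copy the matrices to the GPU
a_gpu = cp.asarray(a)
b_gpu = cp.asarray(b)

# Warm up the GPU; the first call includes one-time initialization cost
_ = a_gpu @ b_gpu
cp.cuda.Stream.null.synchronize()

# GPU timing; synchronize so the clock stops when the kernel finishes
t0 = time.perf_counter()
c_gpu = a_gpu @ b_gpu
cp.cuda.Stream.null.synchronize()
gpu_time = time.perf_counter() - t0

print(f'CPU: {cpu_time:.2f} s   GPU: {gpu_time:.2f} s   speedup: {cpu_time / gpu_time:.0f}x')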

This is a game changer.  Machine learning applications, such as deep learning and complex ensembles, that were not practical before, are now available to small organizations at reasonable cost. 

FAQ:  safe-f and CAR25

In response to a reader's request for clarification of the metrics safe-f and CAR25, and their use —

Greetings —

Use CAR25 as an objective function while developing trading systems, and also as an objective function while managing the trading of those systems.

CAR25 is a metric that estimates future profit potential for any set of trades. It is a dominant metric. Each of the alternatives being evaluated and ranked has the same probability of loss of capital. If this has been done correctly, the trader or investor should be indifferent among them and willing to accept any of the alternatives. CAR25 ranks the alternatives. When measuring and managing daily (mark to market daily), rank each alternative daily so that there is a signal for the next day's action every day. Each day, focus on the top-ranked alternative and follow its signal for the next one-day action. Using funds to take signals and trades from alternatives with lower CAR25 is a suboptimal use of funds.

All of that being said, we are looking for the best return from a portfolio of risk-normalized alternatives. These will be trading systems, each of which trades a single issue long/flat or short/flat. CAR25 ranks them daily. The rotation is among a portfolio of systems, rather than a portfolio of issues. Include among the systems a (nearly) risk-free alternative — such as certificates of deposit or money market funds — which can also have its risk and CAR25 computed. When the risk-free alternative is top-ranked, stay risk-free.
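
As a sketch of that daily rotation, suppose each risk-normalized system, including the near-risk-free alternative, reports its current CAR25.  The system names and numbers here are hypothetical.

# Hypothetical CAR25 values computed today, one per risk-normalized system.
# 'risk_free' stands in for the certificate-of-deposit / money-market alternative.
car25_today = {
    'system_A_long_flat': 14.2,
    'system_B_short_flat': 9.8,
    'risk_free': 2.1,
}

# Trade tomorrow's signal from the top-ranked alternative only
best_system = max(car25_today, key=car25_today.get)
print(f"Take tomorrow's signal from: {best_system}")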

The risk evaluation has four steps. Revisit the video, The Four Faces of Risk.

Follow the steps in this order:

1. Risk assessment begins with a personal statement of risk tolerance. All alternatives will be normalized so that each carries the same probability of a loss of this magnitude.

2. Each series of prices has risk associated with it, even before a trading model is applied. This is the risk the “data prospector” assesses. We are looking for tradable data series that have enough variation to offer profit, but not so much that there will be excessive risk no matter which model is applied.

3. Given a series with those “Goldilocks” features of volatility, try to develop a model that identifies profitable trades. Use the scientific method: fit in-sample data and validate using out-of-sample data. When a promising model has been fit, analyze the out-of-sample trades, computing safe-f and CAR25 (a sketch of one way to compute them follows this list). If the development was done correctly, the out-of-sample results are the best estimate of future performance.

4. Financial data is not stationary. As the data changes, the fit between the model and the data will change, and the distribution of trades will change. Regularly update the set of trades used to calculate safe-f and CAR25. When the model and data lose synchronization, safe-f will drop and CAR25 will drop.
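
Here is a minimal sketch of one way safe-f and CAR25 can be estimated from a set of out-of-sample trades by Monte Carlo resampling.  The trade list, tolerance settings, forecast horizon, and grid search are assumptions chosen for illustration; the full procedure is described in the books and papers.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical out-of-sample trade results, each expressed as the
# fractional gain or loss per trade (0.02 means a 2 percent gain)
trades = rng.normal(0.004, 0.03, 120)

# Hypothetical risk tolerance: no more than a 5 percent chance of a
# drawdown greater than 20 percent over a two-year forecast horizon
max_dd_tolerated = 0.20
tail_probability = 0.05
trades_per_year = 60
horizon_years = 2
n_sims = 1000

def simulate(fraction):
    """Resample the trades, trade them at the given fraction of equity,
    and return (maximum drawdown, compound annual return) for one run."""
    picks = rng.choice(trades, trades_per_year * horizon_years, replace=True)
    equity = np.cumprod(1.0 + fraction * picks)
    peak = np.maximum.accumulate(equity)
    max_dd = np.max(1.0 - equity / peak)
    car = equity[-1] ** (1.0 / horizon_years) - 1.0
    return max_dd, car

def tail_drawdown(fraction):
    """Drawdown at the tolerated tail probability (here the 95th percentile)."""
    dds = [simulate(fraction)[0] for _ in range(n_sims)]
    return np.quantile(dds, 1.0 - tail_probability)

# safe-f: the largest fraction whose tail drawdown stays within tolerance.
# A coarse grid search stands in for a proper root finder.
fractions = np.arange(0.05, 2.01, 0.05)
safe_f = max((f for f in fractions if tail_drawdown(f) <= max_dd_tolerated),
             default=0.0)

# CAR25: the 25th percentile of compound annual return when trading at safe-f
cars = [simulate(safe_f)[1] for _ in range(n_sims)]
car25 = np.quantile(cars, 0.25)

print(f"safe-f: {safe_f:.2f}   CAR25: {car25:.2%}")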

Continue monitoring CAR25 day by day, taking signals from the top-ranked system.

Best regards, Howard

A new paper: “Assessing Trading System Health”

The paper “Assessing Trading System Health” has been added to the Papers page.  The paper describes a technique for validating that the system developed in-sample does have predictive value and will probably be profitable in future trading.  It continues with further techniques for assessing the ongoing health of the system and adjusting position size to reduce the risk of a drawdown that exceeds the trader’s tolerance.  Complete details, including code, are presented.

This is a link to the Papers page.