Quantpedia Update – 2nd June 2016

2.June 2016

Two new strategies have been added:

#309 – Subsidiary – Parent Equity Momentum
#310 – Headquarter Location Momentum

Two new related research papers have been included in existing strategy reviews, and two additional related research papers have been included in existing free strategy reviews during the last two weeks.


Forecasting the VIX to Improve VIX-Derivatives Trading

25.May 2016

A related paper has been added to:

#198 – Exploiting Term Structure of VIX Futures

Authors: Donninger

Title: Forecasting the VIX to Improve VIX-Derivatives Trading

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2771019

Abstract:

Konstantinidi et al. state in their broad survey of Volatility-Index forecasting: "The question whether the dynamics of implied volatility indices can be predicted has received little attention". The overall result of this and the quoted papers is: the VIX is predictable only to a very limited extent (R² is typically 0.01), and the effect is not economically significant. This paper confirms this finding if (and only if) the forecast horizon is limited to one day. But there is no practical need to do so. One can – and usually does – hold a VIX Future or Option for several trading days. It is shown that a simple model has highly significant predictive power over a longer time horizon. The forecasts improve realistic trading strategies.

Notable quotations from the academic research paper:

"Konstantinidi et. al. investigate in [E. Konstantinidi., G. Skiadopoulos, E. Tzagkaraki: Can the Evolution of Implied Volatility be Forecasted? Evidence from European and U.S. Implied Volatility Indices. Draft from 18/12/2007] different models for forecasting several volatility indexes one day ahead. There is no practical need to restrict the forecast to one day. The one day convention is for trading purposes unusual. One either trades intraday or over a longer time horizon. It is well known that the VIX has a mean-reverting behavior. Mean-reversion is swamped in the short run by the high volatility of the index. But it should be possible to exploit mean-reversion in the long run. The best – and most practical – model I have found is:

VIXret(h) = a0 + a1*VIX(t) + a2*VXV(t) + a3*IVTS(t)

VIXret(h) is log(VIX(t+h)) – log(VIX(t)), where h is the forecast horizon in trading days.
VIX(t) is the current VIX value.
VXV(t) is the 3-month volatility index.
IVTS(t) is the implied-volatility term structure, defined as VIX(t)/VXV(t).

The model uses the current VIX level; VXV can be interpreted as a smoothed version of the VIX, and the IVTS is a measure of the current term structure.
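In code, the regression above could be estimated roughly as follows – a minimal sketch in Python, assuming aligned daily close series for the VIX and VXV; the function name, the 20-day default horizon, and the use of statsmodels OLS are illustrative choices, not the paper's implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_vix_forecast(vix: pd.Series, vxv: pd.Series, h: int = 20):
    """Fit VIXret(h) = a0 + a1*VIX + a2*VXV + a3*IVTS by OLS.

    `vix` and `vxv` are daily closes on a shared DatetimeIndex;
    `h` is the forecast horizon in trading days (20 is an arbitrary example).
    """
    ivts = vix / vxv                              # implied-volatility term structure
    vixret = np.log(vix.shift(-h)) - np.log(vix)  # h-day log return of the VIX
    X = sm.add_constant(pd.DataFrame({"VIX": vix, "VXV": vxv, "IVTS": ivts}))
    data = pd.concat([vixret.rename("VIXret"), X], axis=1).dropna()
    return sm.OLS(data["VIXret"], data.drop(columns="VIXret")).fit()
```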

Campasano & Simon proposed in [J. Campasano, D. Simon: The VIX Futures Basis: Evidence and Trading Strategies. June 27, 2012] a simple VIX Futures strategy to exploit the positive bias.

The daily roll of a VIX-Future is defined as:

R(t) = (VXF(t) – VIX(t))/TTS(t)

VXF(t) is the VIX Futures price.
TTS(t) is the number of trading days till settlement (expiry).

One enters a short VIX Futures position if R(t) is above a given threshold and buys the Futures back if the basis is either below a lower threshold or one is close to expiry. One can replace the current VIX value with the VIX forecast at expiry. The strategy with the plain VIX has a P&L of 110.2% with a Sharpe ratio of 0.93 and a maximum relative drawdown of 18.2%. The forecast improves this to a P&L of 156.2%, a Sharpe ratio of 1.12 and a drawdown of 16.8%."


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see the performance of the trading systems we have described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About


A Global Macroeconomic Risk Explanation for Momentum and Value

19.May 2016

A related paper has been added to:

#28 – Value and Momentum across Asset Classes

Authors: Cooper, Mitrache, Priestley

Title: A Global Macroeconomic Risk Explanation for Momentum and Value

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2768040

Abstract:

Value and momentum returns and combinations of them are explained by their loadings on global macroeconomic risk factors across both countries and asset classes. These loadings describe why value and momentum have positive return premia and why they are negatively correlated. The global macroeconomic risk factor model also performs well in summarizing the cross section of various additional asset classes. The findings identify the source of the common variation in expected returns across asset classes and countries, suggesting that markets are integrated.

Notable quotations from the academic research paper:

"U.S. macreconomic risk factors can successfully describe the return premia on both value and momentum strategies, and combinations of them across both countries and asset classes. In addition, it can explain the negative correlation between these two return premia. We present three main results.

First, the positive return premia on value and momentum, across both asset classes and countries, can be explained by the estimated prices of risk and loadings on the global risk factors. For example, the value, momentum, and combination return premia that are aggregated across all asset classes and all countries are 0.29%, 0.34%, and 0.32% per month, respectively, and they are statistically significant. The global macroeconomic factor model produces expected returns that are 87%, 109%, and 103% of the actual return premia, respectively, with small and statistically insignificant pricing errors. We find similar results for separate asset classes and across different countries, thus offering a unified macroeconomic risk explanation of value and momentum return premia.

The second result is that the negative correlation between the return premia can be explained by their differing factor loadings. For example, for the aggregated value, momentum, and combination return premia, the factor loadings on the global industrial production factor are -0.34 for value, 1.77 for momentum, and 0.80 for the combination. For global unexpected inflation they are -2.20, 7.81, and 3.16. For the change in expected inflation they are -1.69, 3.92, and 1.31. For global term structure they are 0.35, -0.01, and 0.17, and for global default risk they are -0.04, 0.17, and 0.07. Based on these loadings, we calculate the expected returns of the return premia and compare the expected return correlations with the correlations of the return premia. For example, remaining with aggregated value and momentum across all asset classes and markets, the actual correlation between the value and momentum strategies is -0.48, whereas the implied correlation of the two strategies from their expected returns is -0.47. We also observe differing factor loadings within each asset class and country. These differences in the factor loadings allow us to match the actual negative correlation between value and momentum return premia with a negative correlation between the expected returns of value and momentum strategies across asset classes and countries.

The third result shows that the global macroeconomic factor model does a good job in explaining the return premia on the combinations of the value and momentum strategies both in the time series and cross section. This is interesting since Asness, Moskowitz, and Pedersen (2013) note that because of the opposite sign exposure of value and momentum to liquidity risk, the equal-weighted (50/50) combination is neutral to liquidity risk. However, we show that this 50/50 combination is not neutral to global macroeconomic risk even if the value and momentum return premia have opposite sign exposures with respect to the global macroeconomic factors. These exposures have different magnitudes and this is clearly seen when we examine the loadings of the combination strategies."
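To illustrate the second result, the sketch below shows how opposite-signed loadings on common factors mechanically imply a negative correlation between the factor-driven components of the two premia. The loadings are the aggregated ones quoted above; the diagonal factor covariance matrix is a made-up placeholder, so the output is illustrative only, not the paper's -0.47.

```python
import numpy as np

# Loadings of the aggregated value and momentum premia on the five global
# factors (industrial production, unexpected inflation, change in expected
# inflation, term structure, default risk), as quoted above.
beta_value    = np.array([-0.34, -2.20, -1.69,  0.35, -0.04])
beta_momentum = np.array([ 1.77,  7.81,  3.92, -0.01,  0.17])

# Placeholder diagonal factor covariance (made-up variances; the paper
# estimates the factor dynamics from macroeconomic data).
cov_f = np.diag([0.50, 0.02, 0.03, 0.40, 0.30])

def implied_corr(b1: np.ndarray, b2: np.ndarray, cov: np.ndarray) -> float:
    """Correlation between the factor-driven return components b1'f and b2'f."""
    c12 = b1 @ cov @ b2
    return c12 / np.sqrt((b1 @ cov @ b1) * (b2 @ cov @ b2))

# Every element-wise product of these loadings is negative, so any diagonal
# positive covariance yields a negative implied correlation here.
print(implied_corr(beta_value, beta_momentum, cov_f))
```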


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see the performance of the trading systems we have described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About


Quantpedia Update – 18th May 2016

18.May 2016

Two new strategies have been added:

#307 – Reversal During Earnings-Announcements
#308 – Short-Term Momentum in Currencies

Two new related research papers have been included in existing strategy reviews, and two additional related research papers have been included in existing free strategy reviews during the last two weeks.


Cliff Asness’s (AQR) View on Factor Timing

11.May 2016

Cliff Asness (AQR Capital Management) on Factor Timing:

Authors: Asness

Title: The Siren Song of Factor Timing

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2763956

Abstract:

Everyone seems to want to time factors. Often the first question after an initial discussion of factors is “ok, what’s the current outlook?” And the common answer, “the same as usual,” is often unsatisfying. There is a powerful incentive to oversell timing ability. Factor investing is often done at fees between those of active management and cap-weighted indexing, and these fees have been falling over time. Factor timing has the potential of reintroducing a type of skill-based “active management” (as timing is generally thought of in this way) back into the equation. I think that siren song should be resisted, even if that verdict is disappointing to some. At least when using the simple “value” of the factors themselves, I find such timing strategies to be very weak historically, and some tests of their long-term power to be exaggerated and/or inapplicable.

Notable quotations from the academic research paper:

"Finding a factor with high average returns is not the only way to make money. Another possibility is to “time” the factor. To own more of it when its conditional expected return is higher than normal, and less when lower than normal (even short it if its conditional expected return is negative). An extreme form of factor timing is to declare a previously useful factor now forever gone. For instance, if a factor worked in the past because it exploited inefficiencies and either those making the exploited error wised up or far too many try to exploit the error (factor crowding) one could imagine the good times are over and possibly not coming back. I think of these as the “supply and demand” for investor error!7 Factor efficacy could go away either because supply went away or demand became too great.

Why do I call factor timing a “siren song” in my title? Well, factor timing is very tempting and, unfortunately, very difficult to do well. Nary a presentation about factors, practitioner or academic, does not include some version of “can you time these?” or “is now a good time to invest in the factor?” I believe the accurate answer to the first question is “mostly no.” However, my answer is usually met with at least mild disappointment and even disbelief. Tempting indeed.

I argue that factor timing is highly analogous to timing the stock market. Stock market timing is difficult and should be done in very small doses, if at all. For instance, Asness, Ilmanen, and Maloney (2015) call market timing a “sin” and recommend, using basic value and trend indicators, to only “sin a little.” The decision of how much average passive stock market exposure to own is far more important than any plausibly reasonable amount of market timing. Given my belief in the main factors described above – that is I do not think they’re the result of data mining or will disappear in the future – the implication is to maintain passive exposures to them with small if any variance through time. Good factors and diversification easily, in my view, trump the potential of factor timing.

While I believe that aggressive factor timing is generally a bad idea, there is one possible exception. Perhaps the only thing of interest in these value spreads would be if and when we see things unprecedented in past experience. The 1999-2000 tech bubble episode focused on by AFKL was indeed such a time. If timing were ever to be useful it would be at such extremes. Factors being “arbitraged away” or an extreme version of “factor crowding” would likely entail observing such extremes. In the extreme crowding case we’d see spreads in the opposite direction of what value experienced in 1999-2000 when the value factor looked much cheaper than any time in history. So, an “arbitraging away” would lead to a factor looking much more expensive than any time in history. To date, the evidence that this has already occurred is weak and mixed. For example, if you look at the “value spread” of the factors through time to judge them as cheap or expensive, you get very different answers depending on whether you use, say, book-to-price or sales-to-price. For instance, if you use book-to-price you’d find the value factors currently look cheap versus history (though nowhere near the levels of 1999-2000) and the non-value factors (things like momentum, profitability, low beta) look expensive. However, if instead you use sales-to-price to make this judgment you find current levels are far closer to historical norms.

In sum, here’s what I would suggest. Focus most on what factors you believe in over the very long haul based on both evidence (particularly out-of-sample evidence, including that in other asset classes) and economic theory. Diversify across these factors and harvest/access them cost-effectively. Realize that these factors, like the stock market itself, are now well-known and will likely “crash” at some point again. So, invest in them if you believe in them for the long term and be prepared to survive, not miraculously time, these events, sticking with your long-term plan. If you time the factors, and I don’t rule it out completely, make sure you only “sin a little.” Continue to monitor such things as the value spreads for signs these strategies have been arbitraged away – like value spreads across a diversified set of value measures being much less attractive and outside the reasonable historical range – signs that, as of now, really don’t exist."
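The “value spread” diagnostic discussed above – judging a factor as cheap or expensive by comparing the valuations of its long and short legs against their own history – could be sketched as follows; the data layout, the use of medians, and the z-score normalization are assumptions for illustration, not AQR’s methodology.

```python
import pandas as pd

def value_spread(valuation: pd.DataFrame, long_mask: pd.DataFrame,
                 short_mask: pd.DataFrame) -> pd.Series:
    """Per-date ratio of the long leg's median valuation metric (e.g.
    book-to-price or sales-to-price) to the short leg's. Inputs are
    dates x stocks DataFrames; the boolean masks flag leg membership."""
    long_median = valuation[long_mask].median(axis=1)
    short_median = valuation[short_mask].median(axis=1)
    return long_median / short_median

def spread_zscore(spread: pd.Series) -> pd.Series:
    """Spread relative to its own history: large positive values mean the
    factor looks historically cheap, large negative values historically
    expensive (a possible sign of being 'arbitraged away')."""
    return (spread - spread.mean()) / spread.std()
```

As the quoted text notes, the same factor can look cheap on book-to-price and normal on sales-to-price, so a diversified set of valuation measures should be checked before drawing conclusions.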


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see the performance of the trading systems we have described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About


Quantopian’s Academic Paper About In vs. Out-of-Sample Performance of Trading Algorithms

4.May 2016

A really good academic paper from the team behind Quantopian:

Authors: Wiecki, Campbell, Lent, Stauth

Title: All that Glitters Is Not Gold: Comparing Backtest and Out-of-Sample Performance on a Large Cohort of Trading Algorithms

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2745220

Abstract:

When automated trading strategies are developed and evaluated using backtests on historical pricing data, there exists a tendency to overfit to the past. Using a unique dataset of 888 algorithmic trading strategies developed and backtested on the Quantopian platform with at least 6 months of out-of-sample performance, we study the prevalence and impact of backtest overfitting. Specifically, we find that commonly reported backtest evaluation metrics like the Sharpe ratio offer little value in predicting out-of-sample performance (R² < 0.025). In contrast, higher-order moments, like volatility and maximum drawdown, as well as portfolio construction features, like hedging, show significant predictive value of relevance to quantitative finance practitioners. Moreover, in line with prior theoretical considerations, we find empirical evidence of overfitting – the more backtesting a quant has done for a strategy, the larger the discrepancy between backtest and out-of-sample performance. Finally, we show that by training non-linear machine learning classifiers on a variety of features that describe backtest behavior, out-of-sample performance can be predicted at a much higher accuracy (R² = 0.17) on hold-out data compared to using linear, univariate features. A portfolio constructed on predictions on hold-out data performed significantly better out-of-sample than one constructed from algorithms with the highest backtest Sharpe ratios.

Notable quotations from the academic research paper:

"For the first time, to the best of our knowledge, we present empirical data that can be used to validate theoretical and anecdotal claims about the ubiquity of backtest overfitting and its impact on algorithm selection. This was possible by having access to a unique data set of 888 trading algorithms developed and tested by quants on the Quantopian platform. Analysis revealed several results relevant to the quantitative finance community at large – practitioners and academics alike.

Most strikingly, we find very weak correlations between IS and OOS performance for most common finance metrics, including the Sharpe ratio, information ratio, and alpha. This result provides strong empirical support for the simulations carried out by Bailey et al. [2014]. More specifically, it supports the assumption underlying their simulations that no compensatory market forces are present which would induce a negative correlation between IS and OOS Sharpe ratios. It is also interesting to compare different performance metrics in their predictability of OOS performance. The highest predictability was achieved by using the Sharpe ratio computed over the last IS year. This feature was also picked up by the random forest classifier as the most predictive feature.

Additionally, we find significant evidence that the more backtests a user ran, the bigger the difference between IS and OOS performance – a direct indication of the detrimental effect of backtest overfitting. This observed relationship is also consistent with Bailey et al.'s [2014] prediction that increased backtesting of multiple strategy variations (parameter tuning) would increase overfitting. Thus, our results further support the notion that backtest overfitting is common and widespread. The observed significant positive relationship between the amount of backtesting and the Sharpe shortfall (IS Sharpe – OOS Sharpe) provides support for a Sharpe ratio penalized by the amount of backtesting (e.g. the "deflated Sharpe ratio" by Bailey & Lopez de Prado [2014]). An attempt to calibrate such a backtesting penalty based on observed data is a promising direction for future research.

Together, these sobering results suggest that a reported Sharpe ratio (or related measure) based on backtest results alone cannot be expected to prevail in future market environments with any reasonable confidence.

While the results described above are relevant by themselves, overall predictability of OOS performance was low (R² < 0.25), suggesting that it is simply not possible to forecast the profitability of a trading strategy from its backtest data alone. However, we show that machine learning together with careful feature engineering can predict OOS performance far better than any of the individual measures alone. Using these predictions to construct a portfolio of strategies resulted in competitive cumulative OOS returns with a Sharpe ratio of 1.2, better than most portfolios constructed by randomly selecting strategies. While it is difficult to extract an intuition about how the Random Forest is deriving predictions, we have provided some indication of which features it deems important. It is interesting to note that among the most important features are those that quantify higher-order moments, including skew and tail-behavior of returns (tail-ratio and kurtosis). Together, these results suggest that predictive information can indeed be extracted from a backtest, just not in a linear and univariate way. It is important to note that we cannot yet claim that this specific selection mechanism will work well on future data, as the machine learning algorithm might learn to predict which strategy type worked well over the specific OOS time period most of our algorithms were tested on (for a more detailed discussion of this point, see the limitations section). However, if these results are reproducible on an independent data set, or the strategies identified continue to outperform the broad cohort over a much longer time frame, it should be of high relevance to quantitative finance professionals who now have a more accurate and automatic tool to evaluate the merit of a trading algorithm. As such, we believe our work highlights the potential of a data-scientific approach to quantitative portfolio construction as an alternative to discretionary capital allocation."
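A minimal sketch of the paper's core comparison – a univariate linear baseline versus a non-linear model over many backtest features – assuming a table with one row per algorithm; the column names, feature set, and random-forest settings are placeholders, not the authors' exact pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def compare_predictors(df: pd.DataFrame, feature_cols: list,
                       target_col: str = "oos_sharpe"):
    """Compare hold-out R² of a univariate linear model (IS Sharpe only)
    against a random forest over all backtest features.
    Column names such as 'is_sharpe' (which must be in `feature_cols`)
    are illustrative placeholders."""
    X_train, X_test, y_train, y_test = train_test_split(
        df[feature_cols], df[target_col], test_size=0.3, random_state=0)

    # Univariate linear baseline: backtest Sharpe ratio alone.
    lin = LinearRegression().fit(X_train[["is_sharpe"]], y_train)
    r2_linear = r2_score(y_test, lin.predict(X_test[["is_sharpe"]]))

    # Multivariate non-linear model over all backtest features.
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(X_train, y_train)
    r2_forest = r2_score(y_test, rf.predict(X_test))

    return r2_linear, r2_forest
```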


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see the performance of the trading systems we have described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About
