A Global Macroeconomic Risk Explanation for Momentum and Value Thursday, 19 May, 2016

A related paper has been added to:

#28 - Value and Momentum across Asset Classes

Authors: Cooper, Mitrache, Priestley

Title: A Global Macroeconomic Risk Explanation for Momentum and Value

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2768040

Abstract:

Value and momentum returns and combinations of them are explained by their loadings on global macroeconomic risk factors across both countries and asset classes. These loadings describe why value and momentum have positive return premia and why they are negatively correlated. The global macroeconomic risk factor model also performs well in summarizing the cross section of various additional asset classes. The findings identify the source of the common variation in expected returns across asset classes and countries suggesting that markets are integrated.

Notable quotations from the academic research paper:

"U.S. macroeconomic risk factors can successfully describe the return premia on both value and momentum strategies, and on combinations of them, across both countries and asset classes. In addition, they can explain the negative correlation between these two return premia. We present three main results.

First, the positive return premia on value and momentum, across both asset classes and countries, can be explained by the estimated prices of risk and loadings on the global risk factors. For example, the value, momentum, and combination return premia that are aggregated across all asset classes and all countries are 0.29%, 0.34%, and 0.32% per month, respectively, and they are statistically significant. The global macroeconomic factor model produces expected returns that are 87%, 109%, and 103% of the actual return premia, respectively, with small and statistically insignificant pricing errors. We find similar results for separate asset classes and across different countries, thus offering a unified macroeconomic risk explanation of value and momentum return premia.

The second result is that the negative correlation between the return premia can be explained by their differing factor loadings. For example, for the aggregated value, momentum, and combination return premia, the factor loadings on the global industrial production factor are -0.34 for value, 1.77 for momentum, and 0.80 for the combination. For global unexpected inflation they are -2.20, 7.81, and 3.16. For the change in expected inflation they are -1.69, 3.92, and 1.31. For global term structure they are 0.35, -0.01, and 0.17, and for global default risk they are -0.04, 0.17, and 0.07. Based on these loadings, we calculate the expected returns of the return premia and compare the expected return correlations with the correlations of the return premia. For example, remaining with aggregated value and momentum across all asset classes and markets, the actual correlation between the value and momentum strategies is -0.48, whereas the implied correlation of the two strategies from their expected returns is -0.47. We also observe differing factor loadings within each asset class and country. These differences in the factor loadings allow us to match the actual negative correlation between value and momentum return premia with a negative correlation between the expected returns of value and momentum strategies across asset classes and countries.
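The mechanics of this argument can be sketched with a few lines of numpy. The loadings and premia below are the ones quoted above; the factor risk prices and factor covariance are illustrative placeholders, since the paper's own cross-sectional estimates are not quoted here.

```python
import numpy as np

# Aggregated loadings quoted above (factor order: global industrial production,
# unexpected inflation, change in expected inflation, term structure, default risk).
b_value    = np.array([-0.34, -2.20, -1.69,  0.35, -0.04])
b_momentum = np.array([ 1.77,  7.81,  3.92, -0.01,  0.17])

# Quoted monthly return premia (% per month): value 0.29, momentum 0.34.
premia = np.array([0.29, 0.34])

# Back out one internally consistent set of factor risk prices with least
# squares. Purely illustrative -- the paper estimates the prices of risk
# cross-sectionally, which is not reproduced here.
B = np.vstack([b_value, b_momentum])
lam, *_ = np.linalg.lstsq(B, premia, rcond=None)

# Model-implied expected returns E[r] = b'lambda recover the premia exactly.
er_value, er_momentum = B @ lam

# Every pair of loadings above has opposite signs, so the factor-implied
# covariance b_v' Sigma_f b_m is negative for any diagonal factor covariance
# (Sigma_f below is a hypothetical one) -- the mechanism behind the
# quoted -0.48 actual correlation.
sigma_f = np.diag([0.5, 0.02, 0.03, 0.1, 0.05]) ** 2
cov_vm = b_value @ sigma_f @ b_momentum
```

Because value and momentum load with opposite signs on each common factor, a negative factor-driven covariance falls out regardless of which (positive) factor variances are assumed.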

The third result shows that the global macroeconomic factor model does a good job in explaining the return premia on the combinations of the value and momentum strategies both in the time series and cross section. This is interesting since Asness, Moskowitz, and Pedersen (2013) note that because of the opposite sign exposure of value and momentum to liquidity risk, the equal-weighted (50/50) combination is neutral to liquidity risk. However, we show that this 50/50 combination is not neutral to global macroeconomic risk even if the value and momentum return premia have opposite sign exposures with respect to the global macroeconomic factors. These exposures have different magnitudes and this is clearly seen when we examine the loadings of the combination strategies."


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see performance of trading systems we described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About

Cliff Asness's (AQR) View on Factor Timing Wednesday, 11 May, 2016

Cliff Asness (AQR Capital Management) on Factor Timing:

Authors:
Asness

Title: The Siren Song of Factor Timing

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2763956

Abstract:

Everyone seems to want to time factors. Often the first question after an initial discussion of factors is “ok, what’s the current outlook?” And the common answer, “the same as usual,” is often unsatisfying. There is powerful incentive to oversell timing ability. Factor investing is often done at fees in between active management and cap-weighted indexing and these fees have been falling over time. Factor timing has the potential of reintroducing a type of skill-based “active management” (as timing is generally thought of this way) back into the equation. I think that siren song should be resisted, even if that verdict is disappointing to some. At least when using the simple “value” of the factors themselves, I find such timing strategies to be very weak historically, and some tests of their long-term power to be exaggerated and/or inapplicable.

Notable quotations from the academic research paper:

"Finding a factor with high average returns is not the only way to make money. Another possibility is to “time” the factor: to own more of it when its conditional expected return is higher than normal, and less when lower than normal (even short it if its conditional expected return is negative). An extreme form of factor timing is to declare a previously useful factor now forever gone. For instance, if a factor worked in the past because it exploited inefficiencies, and either those making the exploited error wised up or far too many try to exploit the error (factor crowding), one could imagine the good times are over and possibly not coming back. I think of these as the “supply and demand” for investor error! Factor efficacy could go away either because supply went away or demand became too great.

Why do I call factor timing a “siren song” in my title? Well, factor timing is very tempting and, unfortunately, very difficult to do well. Nary a presentation about factors, practitioner or academic, does not include some version of “can you time these?” or “is now a good time to invest in the factor?” I believe the accurate answer to the first question is “mostly no.” However, my answer is usually met with at least mild disappointment and even disbelief. Tempting indeed.

I argue that factor timing is highly analogous to timing the stock market. Stock market timing is difficult and should be done in very small doses, if at all. For instance, Asness, Ilmanen, and Maloney (2015) call market timing a “sin” and recommend, using basic value and trend indicators, to only “sin a little.” The decision of how much average passive stock market exposure to own is far more important than any plausibly reasonable amount of market timing. Given my belief in the main factors described above – that is I do not think they’re the result of data mining or will disappear in the future – the implication is to maintain passive exposures to them with small if any variance through time. Good factors and diversification easily, in my view, trump the potential of factor timing.

While I believe that aggressive factor timing is generally a bad idea, there is one possible exception. Perhaps the only thing of interest in these value spreads would be if and when we see things unprecedented in past experience. The 1999-2000 tech bubble episode focused on by AFKL was indeed such a time. If timing were ever to be useful it would be at such extremes. Factors being “arbitraged away” or an extreme version of “factor crowding” would likely entail observing such extremes. In the extreme crowding case we’d see spreads in the opposite direction of what value experienced in 1999-2000 when the value factor looked much cheaper than any time in history. So, an “arbitraging away” would lead to a factor looking much more expensive than any time in history. To date, the evidence that this has already occurred is weak and mixed. For example, if you look at the “value spread” of the factors through time to judge them as cheap or expensive, you get very different answers depending on whether you use, say, book-to-price or sales-to-price. For instance, if you use book-to-price you’d find the value factors currently look cheap versus history (though nowhere near the levels of 1999-2000) and the non-value factors (things like momentum, profitability, low beta) look expensive. However, if instead you use sales-to-price to make this judgment you find current levels are far closer to historical norms.
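As a rough illustration of the "value spread" diagnostic discussed above, the sketch below compares a factor's current spread to its own history. The function names and numbers are hypothetical, and, as the text stresses, the verdict depends heavily on which cheapness measure (book-to-price vs. sales-to-price) is plugged in.

```python
import numpy as np

def value_spread(metric_long, metric_short):
    """Value spread of a factor: valuation of the long leg over the short leg.
    metric_* hold a cheapness measure (e.g. book-to-price) for the stocks in
    each leg. A higher spread means the factor itself looks cheaper."""
    return np.median(metric_long) / np.median(metric_short)

def spread_zscore(current, history):
    """Where today's spread sits versus its own history; extremes like
    1999-2000 are the only readings the text treats as actionable."""
    history = np.asarray(history, dtype=float)
    return (current - history.mean()) / history.std(ddof=1)

# Hypothetical data: book-to-price for the value leg vs the growth leg.
bp_long  = np.array([0.9, 1.1, 0.8])
bp_short = np.array([0.2, 0.3, 0.25])
spread_today = value_spread(bp_long, bp_short)   # median 0.9 / median 0.25
hist = [2.0, 2.5, 3.0, 2.2, 2.8]                 # hypothetical past spreads
z = spread_zscore(spread_today, hist)
```

Repeating the same computation with sales-to-price instead of book-to-price is exactly the cross-check Asness describes, and he reports the two measures currently disagree.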

In sum, here’s what I would suggest. Focus most on what factors you believe in over the very long haul based on both evidence (particularly out-of-sample evidence including that in other asset classes) and economic theory. Diversify across these factors and harvest/access them cost-effectively. Realize that these factors, like the stock market itself, are now well-known and will likely “crash” at some point again. So, invest in them if you believe in them for the long-term and be prepared to survive, not miraculously time, these events sticking with your long term plan. If you time the factors, and I don’t rule it out completely, make sure you only “sin a little.” Continue to monitor such things as the value spreads for signs these strategies have been arbitraged away – like value spreads across a diversified set of value measures being much less attractive and outside the historical reasonable range – signs that, as of now, really don’t exist."



Quantopian's Academic Paper About In vs. Out-of-Sample Performance of Trading Algorithms Wednesday, 4 May, 2016

A really good academic paper from the team behind Quantopian:

Authors:
Wiecki, Campbell, Lent, Stauth

Title: All that Glitters Is Not Gold: Comparing Backtest and Out-of-Sample Performance on a Large Cohort of Trading Algorithms

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2745220

Abstract:

When automated trading strategies are developed and evaluated using backtests on historical pricing data, there exists a tendency to overfit to the past. Using a unique dataset of 888 algorithmic trading strategies developed and backtested on the Quantopian platform with at least 6 months of out-of-sample performance, we study the prevalence and impact of backtest overfitting. Specifically, we find that commonly reported backtest evaluation metrics like the Sharpe ratio offer little value in predicting out of sample performance (R² < 0.025). In contrast, higher order moments, like volatility and maximum drawdown, as well as portfolio construction features, like hedging, show significant predictive value of relevance to quantitative finance practitioners. Moreover, in line with prior theoretical considerations, we find empirical evidence of overfitting – the more backtesting a quant has done for a strategy, the larger the discrepancy between backtest and out-of-sample performance. Finally, we show that by training non-linear machine learning classifiers on a variety of features that describe backtest behavior, out-of-sample performance can be predicted at a much higher accuracy (R² = 0.17) on hold-out data compared to using linear, univariate features. A portfolio constructed on predictions on hold-out data performed significantly better out-of-sample than one constructed from algorithms with the highest backtest Sharpe ratios.

Notable quotations from the academic research paper:

"For the first time, to the best of our knowledge, we present empirical data that can be used to validate theoretical and anecdotal claims about the ubiquity of backtest overfitting and its impact on algorithm selection. This was possible by having access to a unique data set of 888 trading algorithms developed and tested by quants on the Quantopian platform. Analysis revealed several results relevant to the quantitative finance community at large – practitioners and academics alike.

"Most strikingly, we find very weak correlations between IS and OOS performance for most common finance metrics, including the Sharpe ratio, information ratio, and alpha. This result provides strong empirical support for the simulations carried out by Bailey et al. [2014]. More specifically, it supports the assumption underlying their simulations that no compensatory market forces are present which would induce a negative correlation between IS and OOS Sharpe ratios. It is also interesting to compare different performance metrics on their predictability of OOS performance. The highest predictability was achieved by using the Sharpe ratio computed over the last IS year. This feature was also picked up by the random forest classifier as the most predictive feature.

Additionally, we find significant evidence that the more backtests a user ran, the bigger the difference between IS and OOS performance – a direct indication of the detrimental effect of backtest overfitting. This observed relationship is also consistent with Bailey et al.'s [2014] prediction that increased backtesting of multiple strategy variations (parameter tuning) increases overfitting. Thus, our results further support the notion that backtest overfitting is common and widespread. The observed significant positive relationship between the amount of backtesting and the Sharpe shortfall (IS Sharpe - OOS Sharpe) provides support for a Sharpe ratio penalized by the amount of backtesting (e.g. the "deflated Sharpe ratio" of Bailey & Lopez de Prado [2014]). An attempt to calibrate such a backtesting penalty on observed data is a promising direction for future research.
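The link between the number of backtests run and the IS/OOS Sharpe gap is, at heart, a selection effect: report the best of n noisy estimates and you overstate skill. A hypothetical Monte-Carlo sketch (all parameters illustrative, not the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(0)

def best_of_n_sharpe(n_backtests, n_trials=2000, true_sharpe=0.0, se=0.5):
    """Average best in-sample Sharpe when a quant tries n_backtests variants
    of a zero-skill strategy. Each estimated Sharpe ~ N(true_sharpe, se^2);
    se is a hypothetical standard error for the backtest length. The OOS
    Sharpe of the chosen variant still centers on true_sharpe."""
    draws = rng.normal(true_sharpe, se, size=(n_trials, n_backtests))
    return draws.max(axis=1).mean()

few  = best_of_n_sharpe(5)    # modest inflation after a handful of tries
many = best_of_n_sharpe(100)  # large inflation after heavy parameter tuning
```

The gap between the reported (max) IS Sharpe and the true zero OOS Sharpe grows with the number of backtests, which is exactly the shortfall pattern the paper documents and the motivation for a backtest-count penalty.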

Together, these sobering results suggest that a reported Sharpe ratio (or related measure) based on backtest results alone cannot be expected to prevail in future market environments with any reasonable confidence.

While the results described above are relevant by themselves, overall, predictability of OOS performance was low (R² < 0.25) suggesting that it is simply not possible to forecast profitability of a trading strategy based on its backtest data. However, we show that machine learning together with careful feature engineering can predict OOS performance far better than any of the individual measures alone. Using these predictions to construct a portfolio of strategies resulted in competitive cumulative OOS returns with a Sharpe ratio of 1.2 that is better than most portfolios constructed by randomly selecting strategies. While it is difficult to extract an intuition about how the Random Forest is deriving predictions, we have provided some indication of which features it deems important. It is interesting to note that among the most important features are those that quantify higher-order moments including skew and tail-behavior of returns (tail-ratio and kurtosis). Together, these results suggest that predictive information can indeed be extracted from a backtest, just not in a linear and univariate way. It is important to note that we cannot yet claim that this specific selection mechanism will work well on future data as the machine learning algorithm might learn to predict which strategy type worked well over the specific OOS time-period most of our algorithms were tested on (for a more detailed discussion of this point, see the limitations section). However, if these results are reproducible on an independent data set or the strategies identified continue to outperform the broad cohort over a much longer time frame, it should be of high relevance to quantitative finance professionals who now have a more accurate and automatic tool to evaluate the merit of a trading algorithm. As such, we believe our work highlights the potential of a data scientific approach to quantitative portfolio construction as an alternative to discretionary capital allocation."



A New Analysis of Commodity Momentum Strategy Tuesday, 26 April, 2016

A related paper has been added to:

#21 - Momentum Effect in Commodities

Authors: Bianchi, Drew, Fan

Title: Microscopic Momentum in Commodity Futures

Link: https://www120.secure.griffith.edu.au/research/file/0a572b95-132b-419d-9a71-310420fad143/1/2015-10-microscopic-momentum-in-commodity-futures.pdf

Abstract:

Conventional momentum strategies rely on 12 months of past returns for portfolio formation. Novy-Marx (2012) shows that the intermediate return momentum strategy, formed using only returns from twelve to seven months prior to portfolio formation, significantly outperforms the recent return momentum formed using returns from six to two months prior. This paper proposes a more granular strategy termed ‘microscopic momentum’, which further decomposes the intermediate and recent return momentum into single-month momentum components. The novel decomposition reveals that a microscopic momentum strategy generates persistent economic profits even after controlling for sector-specific or month-of-year commodity seasonality effects. Moreover, we show that the intermediate return momentum in commodity futures must be considered largely illusory, and all 12 months of past returns play important roles in determining the conventional momentum profits.

Notable quotations from the academic research paper:

"In this study, we propose a third type of momentum strategy termed Microscopic Momentum, which further decomposes the recent (6 to 2 months) and intermediate (12 to 7 months) momentum of Novy-Marx (2012) into 12 single-month individual momentum components. As a consequence of the decomposition, we are able to take a glimpse at momentum profits at a month-by-month, microscopic scale. For the first time, this novel approach not only reveals a striking new discovery of a momentum-based anomaly, but also allows us to pinpoint whether specific months in the past play a more significant role in determining conventional and echo momentum profits; hence it offers fresh insights into our understanding of momentum in commodity futures.

The proposed granular analysis of microscopic momentum makes four major contributions to the commodity futures literature. First, in the commodity futures markets, the ‘11,10 microscopic momentum strategy’, constructed using only the 11- to 10-month return prior to formation, produces an annualised average return of 14.74% with strong statistical significance. The superiority of the 11,10 strategy is not driven by sector-specific or month-of-year commodity seasonality effects and is robust across sub-periods and in out-of-sample analysis.
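A minimal sketch of such a single-month component, assuming a simple terciles long/short construction (the paper's exact portfolio rules are not quoted above):

```python
import numpy as np

def microscopic_signal(monthly_returns, k=11):
    """Single-month 'microscopic' momentum weights: rank assets on their
    return k months before formation only. k=11 corresponds to the paper's
    '11,10' strategy, i.e. the one-month return from t-11 to t-10."""
    r = np.asarray(monthly_returns)        # shape (months, assets)
    past = r[-k]                           # the single formation-month return
    ranks = past.argsort().argsort()       # 0 = biggest loser
    n = len(past)
    # Long the top tercile, short the bottom tercile, equal-weighted.
    w = np.where(ranks >= n - n // 3, 1.0,
                 np.where(ranks < n // 3, -1.0, 0.0))
    return w / max((w != 0).sum(), 1)      # dollar-neutral weights

# Hypothetical commodity returns: 12 months x 6 futures.
rng = np.random.default_rng(1)
rets = rng.normal(0.0, 0.05, size=(12, 6))
w = microscopic_signal(rets, k=11)
```

Summing signals like this over k = 2..6 or k = 7..12 recovers the recent and intermediate momentum strategies the decomposition starts from.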

Second, when the RNM echo momentum is regressed against its microscopic components, RNM intermediate momentum can be completely subsumed by the 11,10 microscopic momentum. Thus, the superior performance of intermediate momentum claimed by RNM may be an illusion created by the 11,10 microscopic momentum. This implies that for tactical asset allocation decisions, CTAs and commodity fund managers must not consider intermediate momentum a viable substitute for conventional momentum strategies. Instead, the 11,10 microscopic strategy, which offers similar profits in magnitude but unique return dynamics relative to conventional strategies, may be a feasible alternative.

Third, around 77% of the variation of returns in the JT conventional momentum strategy can be explained by its microscopic decomposition. However, since no dominance is found for any individual month, all past months are important in determining the conventional commodity momentum profits.

Fourth, echo and microscopic momentum are partially related to the U.S. cross-sectional equity momentum and to the returns of broad commodity futures, but are not related to stocks, bonds, foreign currency risks, or macroeconomic conditions. Consistent with Asness et al. (2013), this finding implies that there may indeed be a common component in momentum across asset classes."



Analysis of US Dollar Carry Trades in the Era of 'Cheap Money' Wednesday, 20 April, 2016

A related paper has been added to:

#129 - Dollar Carry Trade

Authors: Shehadeh, Erdos, Li, Moore

Title: US Dollar Carry Trades in the Era of 'Cheap Money'

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2765552

Abstract:

In this paper, we employ a unique dataset of actual US dollar (USD) forward positions against a number of currencies taken by so-called Commodity Trading Advisors (CTAs). We investigate to what extent these positions exhibit a pattern of USD carry trading or other patterns of currency trading over the recent period of the ultra-loose US monetary policy. Our analysis indeed shows that USD positions against emerging market currencies are characterised by a pattern of carry trading. That is, the USD, as the lower yielding currency, is associated with short positions. The payoff distributions of these positions, moreover, are found to have positive Sharpe ratios, negative skewness and high kurtosis. On the other hand, we find that USD positions against other advanced country currencies have a pattern completely opposite to carry trading which is in line with uncovered interest parity trading; that is, the lower (higher) yielding currency is associated with long (short) positions.

Notable quotations from the academic research paper:

"In the wake of the 2007-2008 financial crisis, many countries, especially developed countries including the USA, have adopted unconventional loose monetary policies with the purpose of stimulating their sluggish and unstable economies. This period is termed in the financial press as “the era of cheap money”. On the other hand, other countries, especially emerging markets, have maintained relatively high interest rates over the same period. Because of the potential impact of this policy divergence on the trading decisions of FX traders, it is worthwhile to consider currency trading in general, and USD carry trading in particular, over the sample period of the paper.

In light of this, the crux of the paper is to analyse our dataset of USD forward positions to find out to what extent they show characteristics of USD carry trading, or of another trading strategy, over the recent period of record-low US interest rates. In other words, we investigate whether these positions respond to the very low US interest rates with a pattern of USD carry trading, or whether other trading patterns can be identified across different currency markets. The distinctive feature of this study is that we have access to a dataset of daily-aggregated USD forward positions against a number of advanced and emerging currencies. It is collected from a Swedish investment specialist, Risk & Portfolio Management AB (RPM), a fund of hedge funds investing in Managed Futures strategies, also known as Commodity Trading Advisors (CTAs). CTAs engage in various strategies like trend-following, short-term trading, and global macro, which often employs carry trading as a sub-strategy. By exploiting and analysing our private dataset, we find significant long-run equilibrium relationships which directly relate the USD forward positions to the forward premium. The relationships point to different trading strategies for emerging and advanced market currencies. For emerging currencies, we find that these relationships are consistent with carry trading. That is, the lower-yielding currency (the USD) is associated with short positions, and vice versa. This carry-trading pattern of forward-shorting the lower-yielding currency is induced by the expectation that the lower-yielding currency will not actually appreciate on average as much as the forward rate implies, or will even depreciate. This in turn implies a profit on average at maturity. On the other hand, we find that the reverse holds between the USD and advanced currencies. In other words, we find a pattern of “fundamentals-based” trading consistent with the uncovered interest parity condition. That is, the lower (higher) yielding currency is associated with more long (short) positions. These anti-carry positions can reflect the unattractiveness of carry trading between advanced currencies and the USD due to the increased uncertainty and narrow interest differentials in these markets over the period following the recent crisis.

Given that our data set is collected from FX traders which are mainly trend-followers, these results on the different trading strategies for emerging and advanced market currencies shed some light on the trading behaviour of this group of FX market participants. On the one hand, the characteristics of carry trades in EM currencies, which involve going long the high-interest currency against the low-interest currency, reflect a trend-following strategy based on the expectation that the high-interest currency is going to appreciate, i.e. based on the appreciation trend of the high-interest-rate currency. On the other hand, the characteristics of “fundamentals-based” trades in AM currencies, which involve going long the low-interest currency against the high-interest currency, reflect a trend-following strategy based on the expectation that the low-interest-rate currency is going to appreciate, i.e. based on the appreciation trend of the low-interest-rate currency. This is in line with the heterogeneous agents model developed by Spronk et al. (2013). The model demonstrates that, depending on the dominant trend in the market, FX trend-followers can be aligned with either carry traders or fundamentalists. In this sense, our results provide some insights into these features of FX trend-following traders."
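The carry-vs-UIP distinction the authors draw boils down to the sign of the long-run relation between positions and the interest differential. A toy classifier, with hypothetical sign conventions and data (not the paper's cointegration-based estimation):

```python
import numpy as np

def trading_pattern(positions, fwd_premium):
    """Classify the dominant pattern in a positions series. Conventions
    (illustrative): positions > 0 means long USD; fwd_premium > 0 means the
    USD is the lower-yielding currency. Carry trading shorts the low-yield
    USD when the differential favors the foreign currency, so positions and
    premium co-move negatively; UIP-style trading does the opposite."""
    corr = np.corrcoef(positions, fwd_premium)[0, 1]
    return "carry" if corr < 0 else "uip"

# Hypothetical daily series: against an EM currency the desk shorts USD
# in proportion to the interest differential -> classified as carry.
premium   = np.array([0.5, 0.8, 0.3, 0.9, 0.6])
em_positions = -premium
# Against an advanced currency the positions move with the differential
# instead -> classified as UIP ("anti-carry"), as the paper finds.
am_positions = premium
```

With real data the correlation would sit well inside (-1, 1), and the paper works with long-run equilibrium relationships rather than a raw correlation, but the sign logic is the same.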

