Factor Attribution of Jim Cramer’s ‘Mad Money’ Charitable Trust Performance

3.June 2016

Weekend reading, on a lighter note:

Authors: Hartley, Olson

Title: Jim Cramer's ‘Mad Money’ Charitable Trust Performance and Factor Attribution

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2778724

Abstract:

This study analyzes the complete historical performance of Jim Cramer’s Action Alerts PLUS portfolio from 2001 to 2016, which includes many of the stock recommendations made on Cramer’s TV show “Mad Money”. Both since inception of the portfolio and since the start of “Mad Money” in 2005 (when it was converted into a charitable trust), Cramer’s portfolio has underperformed the S&P 500 total return index and a basket of S&P 500 stocks that does not reinvest dividends (both on an overall returns basis and in Sharpe ratio). These findings contrast with previous studies which analyzed Cramer’s outperformance in short windows before the 2008 financial crisis. Using factor analysis, we find that Cramer’s portfolio returns are primarily driven by underlevered exposure to market returns and, in some specifications, tilting toward small cap stocks, growth stocks and stocks with low quality of earnings. These results have broad implications for market efficiency, the usefulness of single name stock recommendations made on television, financial education, and the implementation of academic factors thematic in Cramer’s portfolio.

Notable quotations from the academic research paper:

"The usefulness of the financial advice from CNBC financial markets commentator Jim Cramer and other television finance personalities has historically been one of controversy.

Returns data for the Action Alerts PLUS portfolio are provided by TheStreet.com and are also made available to the public (see Table 1, Figure 1). Subscribers are also given access to portfolio holdings data, which we use to confirm some of the findings of our risk factor analysis.

The results of the regressions are reported in Table 2. Analyzing the entire history of the portfolio, our CAPM specification finds a CAPM beta of approximately 0.95 (statistically significant at the 1% level) and a negative alpha of -2.38% that is statistically significant (at the 10% level). Being underleveraged (underinvesting in the market portfolio) may in part be a result of the portfolio’s policy of not reinvesting cash dividends.

Across almost all of our specifications, the results demonstrate that underleverage explains most of the portfolio’s relative underperformance, given the S&P 500’s positive absolute performance over the period. This is also confirmed by the portfolio holdings data, which indicate that the AAP portfolio often holds a significant cash position, largely to make its annual cash distribution in March for charitable contributions.

In our Fama-French (1993) three factor specification, we do find that the portfolio has some exposure to small caps, given that the SMB factor is statistically significant at the 10% level, something confirmed by the portfolio holdings data. We do not find such statistical significance when looking only at the history of Mad Money from 2005 onward.

Also, when controlling for momentum in our Carhart (1997) four factor specification, the statistical significance of the size factor disappears, and we find no evidence of statistically significant exposure to momentum stocks.
However, when analyzing the March 2005 to March 2016 time period, we do find that adding the extra size, value and momentum factors of the Fama-French (1993) and Carhart (1997) four factor regressions makes the statistically significant negative alpha of -3.06%, found in the CAPM for the same period, disappear.

When we include the Frazzini and Pedersen (2014) Betting-Against-Beta factor and the Asness, Frazzini and Pedersen (2013) Quality Minus Junk (QMJ) factor, we find some evidence that Cramer tilts toward growth stocks and away from stocks with high quality of earnings.

Using the factor analysis results obtained above, we also construct a “robo-Cramer” portfolio that uses the same factor loadings as estimated from the regressions. The systematic Cramer-style portfolio is constructed from the same regressions of monthly excess returns, namely the Carhart Four Factor regression using data over the entire time period (August 2001 to March 2016). The portfolio is rebalanced annually at year-end to keep constant weights. The explanatory variables are the monthly returns of the standard size, value, and momentum factors. Note that such a synthetic portfolio outperforms Cramer’s actual cumulative returns for the entire period.
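As a sketch of the replication logic: estimate the four-factor loadings on monthly excess returns, then hold the factors at those constant weights. Below is a minimal illustration, assuming monthly factor returns with columns named in the Ken French data library convention; `aap_excess`, the portfolio's monthly excess-return series, is a placeholder input, and the paper's exact rebalancing details may differ.

import numpy as np
import pandas as pd

def robo_portfolio(aap_excess, factors):
    # Carhart four-factor design matrix with an intercept.
    X = factors[["Mkt-RF", "SMB", "HML", "MOM"]].values
    A = np.column_stack([np.ones(len(X)), X])
    coefs, _, _, _ = np.linalg.lstsq(A, aap_excess.values, rcond=None)
    betas = coefs[1:]                      # drop the estimated (negative) alpha
    # Constant-weight factor-replicating returns: the "robo-Cramer" series.
    return pd.Series(X @ betas, index=factors.index, name="robo_cramer")

Dropping the alpha is the point of the exercise: the synthetic portfolio keeps only the systematic factor exposures, which is why it can outperform the actual track record when the estimated alpha is negative.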

"


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see performance of trading systems we described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About


Forecasting the VIX to Improve VIX-Derivatives Trading

25.May 2016

A related paper has been added to:

#198 – Exploiting Term Structure of VIX Futures

Authors: Donninger

Title: Forecasting the VIX to Improve VIX-Derivatives Trading

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2771019

Abstract:

Konstantinidi et al. state in their broad survey of Volatility-Index forecasting: "The question whether the dynamics of implied volatility indices can be predicted has received little attention". The overall result of this and the quoted papers is: the VIX is predictable only to a very limited extent (R² is typically 0.01), and the effect is not economically significant. This paper confirms this finding if (and only if) the forecast horizon is limited to one day. But there is no practical need to do so. One can – and usually does – hold a VIX future or option for several trading days. It is shown that a simple model has highly significant predictive power over a longer time horizon. The forecasts improve realistic trading strategies.

Notable quotations from the academic research paper:

"Konstantinidi et. al. investigate in [E. Konstantinidi., G. Skiadopoulos, E. Tzagkaraki: Can the Evolution of Implied Volatility be Forecasted? Evidence from European and U.S. Implied Volatility Indices. Draft from 18/12/2007] different models for forecasting several volatility indexes one day ahead. There is no practical need to restrict the forecast to one day. The one day convention is for trading purposes unusual. One either trades intraday or over a longer time horizon. It is well known that the VIX has a mean-reverting behavior. Mean-reversion is swamped in the short run by the high volatility of the index. But it should be possible to exploit mean-reversion in the long run. The best – and most practical – model I have found is:

VIXret(h) = a0 + a1*VIX(t) + a2*VXV(t) + a3*IVTS(t)

VIXret(h) is log(VIX(t+h)) – log(VIX(t)) where h is the forecast horizon in trade days.
VIX(t) is the current VIX-value.
VXV(t) is the 3-months volatility index.
IVTS(t) is the implied-volatility-term-structure defined as VIX(t)/VXV(t).

The model uses the current VIX level; VXV can be interpreted as a smoothed version of the VIX, and the IVTS is a measure of the current term structure.
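A minimal sketch of the estimation follows, assuming daily pandas Series `vix` and `vxv` aligned on trade dates and a horizon `h` in trade days; Donninger's exact estimation procedure may differ.

import numpy as np
import pandas as pd

def fit_vix_model(vix, vxv, h):
    # IVTS(t) = VIX(t)/VXV(t); VIXret(h) = log(VIX(t+h)) - log(VIX(t)).
    ivts = vix / vxv
    y = np.log(vix).shift(-h) - np.log(vix)
    data = pd.DataFrame({"y": y, "VIX": vix, "VXV": vxv, "IVTS": ivts}).dropna()
    A = np.column_stack([np.ones(len(data)),
                         data[["VIX", "VXV", "IVTS"]].values])
    coefs, _, _, _ = np.linalg.lstsq(A, data["y"].values, rcond=None)
    return coefs    # a0, a1, a2, a3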

Campasano & Simon proposed in [J. Campasano, D. Simon: The VIX Futures Basis: Evidence and Trading Strategies. June 27, 2012] a simple VIX Futures strategy to exploit the positive bias.

The daily roll of a VIX-Future is defined as:

R(t) = (VXF(t) – VIX(t))/TTS(t)

VXF(t) is the VIX futures price.
TTS(t) is the number of trade days till settlement (expiry).

One enters a short VIX futures position if R(t) is above a given threshold and buys the future back if the basis is either below a lower threshold or one is close to expiry. One can replace the current VIX value with the VIX forecast at expiry. The strategy with the plain VIX has a P&L of 110.2% with a Sharpe ratio of 0.93 and a maximum relative drawdown of 18.2%. The forecast improves this to a P&L of 156.2%, a Sharpe ratio of 1.12, and a drawdown of 16.8%.
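A minimal sketch of the threshold rule follows, assuming daily series for the front future (`vxf`), the index (`vix`), and trade days till settlement (`tts`); the entry/exit thresholds here are illustrative, not the paper's calibrated values.

import pandas as pd

def short_roll_positions(vxf, vix, tts, enter=0.10, exit_=0.05, min_tts=3):
    # Daily roll R(t) = (VXF(t) - VIX(t)) / TTS(t).
    roll = (vxf - vix) / tts
    position, out = 0, []
    for r, days in zip(roll, tts):
        if position == 0 and r > enter:
            position = -1          # short the future when the basis is rich
        elif position == -1 and (r < exit_ or days <= min_tts):
            position = 0           # buy back on a cheap basis or near expiry
        out.append(position)
    return pd.Series(out, index=roll.index, name="position")

Donninger's improvement is then to replace `vix` in the roll with the model's forecast of the VIX at settlement, rather than the spot value.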

"


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see performance of trading systems we described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About


A Global Macroeconomic Risk Explanation for Momentum and Value

19.May 2016

A related paper has been added to:

#28 – Value and Momentum across Asset Classes

Authors: Cooper, Mitrache, Priestley

Title: A Global Macroeconomic Risk Explanation for Momentum and Value

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2768040

Abstract:

Value and momentum returns and combinations of them are explained by their loadings on global macroeconomic risk factors across both countries and asset classes. These loadings describe why value and momentum have positive return premia and why they are negatively correlated. The global macroeconomic risk factor model also performs well in summarizing the cross section of various additional asset classes. The findings identify the source of the common variation in expected returns across asset classes and countries suggesting that markets are integrated.

Notable quotations from the academic research paper:

"U.S. macreconomic risk factors can successfully describe the return premia on both value and momentum strategies, and combinations of them across both countries and asset classes. In addition, it can explain the negative correlation between these two return premia. We present three main results.

First, the positive return premia on value and momentum, across both asset classes and countries, can be explained by the estimated prices of risk and loadings on the global risk factors. For example, the value, momentum, and combination return premia that are aggregated across all asset classes and all countries are 0.29%, 0.34%, and 0.32% per month, respectively, and they are statistically significant. The global macroeconomic factor model produces expected returns that are 87%, 109%, and 103% of the actual return premia, respectively, with small and statistically insignificant pricing errors. We find similar results for separate asset classes and across different countries, thus offering a unified macroeconomic risk explanation of value and momentum return premia.

The second result is that the negative correlation between the return premia can be explained by their differing factor loadings. For example, for the aggregated value, momentum, and combination return premia, the factor loadings on the global industrial production factor are -0.34 for value, 1.77 for momentum, and 0.80 for the combination. For global unexpected inflation they are -2.20, 7.81, and 3.16. For the change in expected inflation they are -1.69, 3.92, and 1.31. For global term structure they are 0.35, -0.01, and 0.17, and for global default risk they are -0.04, 0.17, and 0.07. Based on these loadings, we calculate the expected returns of the return premia and compare the expected return correlations with the correlations of the return premia. For example, remaining with aggregated value and momentum across all asset classes and markets, the actual correlation between the value and momentum strategies is -0.48, whereas the implied correlation of the two strategies from their expected returns is -0.47. We also observe differing factor loadings within each asset class and country. These differences in the factor loadings allow us to match the actual negative correlation between value and momentum return premia with a negative correlation between the expected returns of value and momentum strategies across asset classes and countries.
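One simple way to see how opposite-signed loadings generate a negative model-implied correlation is the standard factor-model identity. Below is a minimal sketch, assuming the five global macro factors in the order industrial production, unexpected inflation, change in expected inflation, term structure, and default risk; the covariance matrix `S` of the factors is a placeholder input, and the loading vectors are the aggregated numbers quoted above.

import numpy as np

b_value = np.array([-0.34, -2.20, -1.69, 0.35, -0.04])
b_momentum = np.array([1.77, 7.81, 3.92, -0.01, 0.17])

def implied_corr(b1, b2, S):
    # corr = b1' S b2 / sqrt((b1' S b1)(b2' S b2))
    return (b1 @ S @ b2) / np.sqrt((b1 @ S @ b1) * (b2 @ S @ b2))

With mostly opposite-signed loadings, the cross term b1' S b2 is negative for plausible factor covariances, matching the negative correlation the paper reports.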

The third result shows that the global macroeconomic factor model does a good job in explaining the return premia on the combinations of the value and momentum strategies both in the time series and cross section. This is interesting since Asness, Moskowitz, and Pedersen (2013) note that because of the opposite sign exposure of value and momentum to liquidity risk, the equal-weighted (50/50) combination is neutral to liquidity risk. However, we show that this 50/50 combination is not neutral to global macroeconomic risk even if the value and momentum return premia have opposite sign exposures with respect to the global macroeconomic factors. These exposures have different magnitudes and this is clearly seen when we examine the loadings of the combination strategies."


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see performance of trading systems we described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About


Cliff Asness’s (AQR) View on Factor Timing

11.May 2016

Cliff Asness (AQR Capital Management) on Factor Timing:

Authors: Asness

Title: The Siren Song of Factor Timing

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2763956

Abstract:

Everyone seems to want to time factors. Often the first question after an initial discussion of factors is “ok, what’s the current outlook?” And the common answer, “the same as usual,” is often unsatisfying. There is powerful incentive to oversell timing ability. Factor investing is often done at fees in between active management and cap-weighted indexing and these fees have been falling over time. Factor timing has the potential of reintroducing a type of skill-based “active management” (as timing is generally thought of this way) back into the equation. I think that siren song should be resisted, even if that verdict is disappointing to some. At least when using the simple “value” of the factors themselves, I find such timing strategies to be very weak historically, and some tests of their long-term power to be exaggerated and/or inapplicable.

Notable quotations from the academic research paper:

"Finding a factor with high average returns is not the only way to make money. Another possibility is to “time” the factor. To own more of it when its conditional expected return is higher than normal, and less when lower than normal (even short it if its conditional expected return is negative). An extreme form of factor timing is to declare a previously useful factor now forever gone. For instance, if a factor worked in the past because it exploited inefficiencies and either those making the exploited error wised up or far too many try to exploit the error (factor crowding) one could imagine the good times are over and possibly not coming back. I think of these as the “supply and demand” for investor error!7 Factor efficacy could go away either because supply went away or demand became too great.

Why do I call factor timing a “siren song” in my title? Well, factor timing is very tempting and, unfortunately, very difficult to do well. Nary a presentation about factors, practitioner or academic, does not include some version of “can you time these?” or “is now a good time to invest in the factor?” I believe the accurate answer to the first question is “mostly no.” However, my answer is usually met with at least mild disappointment and even disbelief. Tempting indeed.

I argue that factor timing is highly analogous to timing the stock market. Stock market timing is difficult and should be done in very small doses, if at all. For instance, Asness, Ilmanen, and Maloney (2015) call market timing a “sin” and recommend, using basic value and trend indicators, to only “sin a little.” The decision of how much average passive stock market exposure to own is far more important than any plausibly reasonable amount of market timing. Given my belief in the main factors described above – that is I do not think they’re the result of data mining or will disappear in the future – the implication is to maintain passive exposures to them with small if any variance through time. Good factors and diversification easily, in my view, trump the potential of factor timing.

While I believe that aggressive factor timing is generally a bad idea, there is one possible exception. Perhaps the only thing of interest in these value spreads would be if and when we see things unprecedented in past experience. The 1999-2000 tech bubble episode focused on by AFKL was indeed such a time. If timing were ever to be useful it would be at such extremes. Factors being “arbitraged away” or an extreme version of “factor crowding” would likely entail observing such extremes. In the extreme crowding case we’d see spreads in the opposite direction of what value experienced in 1999-2000 when the value factor looked much cheaper than any time in history. So, an “arbitraging away” would lead to a factor looking much more expensive than any time in history. To date, the evidence that this has already occurred is weak and mixed. For example, if you look at the “value spread” of the factors through time to judge them as cheap or expensive, you get very different answers depending on whether you use, say, book-to-price or sales-to-price. For instance, if you use book-to-price you’d find the value factors currently look cheap versus history (though nowhere near the levels of 1999-2000) and the non-value factors (things like momentum, profitability, low beta) look expensive. However, if instead you use sales-to-price to make this judgment you find current levels are far closer to historical norms.
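As a rough illustration of the monitoring Asness describes, a minimal sketch follows, assuming monthly series of a valuation ratio (say, book-to-price) for the cheap and expensive legs of a factor; the function and variable names are illustrative, not from the paper.

import numpy as np

def value_spread_z(cheap_bp, expensive_bp):
    # Log valuation gap between the cheap and expensive legs of a factor.
    spread = np.log(cheap_bp / expensive_bp)
    # Z-score against its own history: a 1999-2000-style episode shows up as
    # a large positive extreme; "arbitraged away" would be the opposite tail.
    return (spread - spread.mean()) / spread.std()

As the text notes, the answer is measure-dependent, so any such monitor should be computed across a diversified set of value measures rather than a single ratio.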

In sum, here’s what I would suggest. Focus most on what factors you believe in over the very long haul based on both evidence (particularly out-of-sample evidence, including that in other asset classes) and economic theory. Diversify across these factors and harvest/access them cost-effectively. Realize that these factors, like the stock market itself, are now well-known and will likely “crash” at some point again. So, invest in them if you believe in them for the long term, and be prepared to survive, not miraculously time, these events by sticking with your long-term plan. If you time the factors, and I don’t rule it out completely, make sure you only “sin a little.” Continue to monitor such things as the value spreads for signs these strategies have been arbitraged away – like value spreads across a diversified set of value measures being much less attractive and outside the historical reasonable range – signs that, as of now, really don’t exist."


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see performance of trading systems we described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About


Quantopian’s Academic Paper About In vs. Out-of-Sample Performance of Trading Algorithms

4.May 2016

A really good academic paper from the guys (and girl) behind Quantopian:

Authors: Wiecki, Campbell, Lent, Stauth

Title: All that Glitters Is Not Gold: Comparing Backtest and Out-of-Sample Performance on a Large Cohort of Trading Algorithms

Link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2745220

Abstract:

When automated trading strategies are developed and evaluated using backtests on historical pricing data, there exists a tendency to overfit to the past. Using a unique dataset of 888 algorithmic trading strategies developed and backtested on the Quantopian platform with at least 6 months of out-of-sample performance, we study the prevalence and impact of backtest overfitting. Specifically, we find that commonly reported backtest evaluation metrics like the Sharpe ratio offer little value in predicting out of sample performance (R² < 0.025). In contrast, higher order moments, like volatility and maximum drawdown, as well as portfolio construction features, like hedging, show significant predictive value of relevance to quantitative finance practitioners. Moreover, in line with prior theoretical considerations, we find empirical evidence of overfitting – the more backtesting a quant has done for a strategy, the larger the discrepancy between backtest and out-of-sample performance. Finally, we show that by training non-linear machine learning classifiers on a variety of features that describe backtest behavior, out-of-sample performance can be predicted at a much higher accuracy (R² = 0.17) on hold-out data compared to using linear, univariate features. A portfolio constructed on predictions on hold-out data performed significantly better out-of-sample than one constructed from algorithms with the highest backtest Sharpe ratios.

Notable quotations from the academic research paper:

"For the first time, to the best of our knowledge, we present empirical data that can be used to validate theoretical and anecdotal claims about the ubiquity of backtest overfitting and its impact on algorithm selection. This was possible by having access to a unique data set of 888 trading algorithms developed and tested by quants on the Quantopian platform. Analysis revealed several results relevant to the quantitative finance community at large – practitioners and academics alike.

Most strikingly, we find very weak correlations between IS and OOS performance for most common finance metrics, including the Sharpe ratio, information ratio, and alpha. This result provides strong empirical support for the simulations carried out by Bailey et al. [2014]. More specifically, it supports the assumption underlying their simulations that no compensatory market forces are present which would induce a negative correlation between IS and OOS Sharpe ratios. It is also interesting to compare different performance metrics in their predictability of OOS performance. The highest predictability was achieved by the Sharpe ratio computed over the last IS year. This feature was also picked up by the random forest classifier as the most predictive feature.

Additionally, we find significant evidence that the more backtests a user ran, the bigger the difference between IS and OOS performance – a direct indication of the detrimental effect of backtest overfitting. This observed relationship is also consistent with Bailey et al.'s [2014] prediction that increased backtesting of multiple strategy variations (parameter tuning) would increase overfitting. Thus, our results further support the notion that backtest overfitting is common and widespread. The observed significant positive relationship between the amount of backtesting and the Sharpe shortfall (IS Sharpe – OOS Sharpe) provides support for a Sharpe ratio penalized by the amount of backtesting (e.g. the "deflated Sharpe ratio" of Bailey & Lopez de Prado [2014]). An attempt to calibrate such a backtesting penalty based on observed data is a promising direction for future research.

Together, these sobering results suggest that a reported Sharpe ratio (or related measure) based on backtest results alone cannot be expected to prevail in future market environments with any reasonable confidence.

While the results described above are relevant by themselves, overall, predictability of OOS performance was low (R² < 0.25) suggesting that it is simply not possible to forecast profitability of a trading strategy based on its backtest data. However, we show that machine learning together with careful feature engineering can predict OOS performance far better than any of the individual measures alone. Using these predictions to construct a portfolio of strategies resulted in competitive cumulative OOS returns with a Sharpe ratio of 1.2 that is better than most portfolios constructed by randomly selecting strategies. While it is difficult to extract an intuition about how the Random Forest is deriving predictions, we have provided some indication of which features it deems important. It is interesting to note that among the most important features are those that quantify higher-order moments including skew and tail-behavior of returns (tail-ratio and kurtosis). Together, these results suggest that predictive information can indeed be extracted from a backtest, just not in a linear and univariate way. It is important to note that we cannot yet claim that this specific selection mechanism will work well on future data as the machine learning algorithm might learn to predict which strategy type worked well over the specific OOS time-period most of our algorithms were tested on (for a more detailed discussion of this point, see the limitations section). However, if these results are reproducible on an independent data set or the strategies identified continue to outperform the broad cohort over a much longer time frame, it should be of high relevance to quantitative finance professionals who now have a more accurate and automatic tool to evaluate the merit of a trading algorithm. As such, we believe our work highlights the potential of a data scientific approach to quantitative portfolio construction as an alternative to discretionary capital allocation."
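The selection mechanism is straightforward to sketch. Below is a minimal illustration, assuming a feature matrix X of per-algorithm backtest statistics (Sharpe ratio, volatility, maximum drawdown, kurtosis, tail ratio, number of backtests run, ...) and the OOS Sharpe ratio as the target y; the paper trains classifiers on the Quantopian dataset, whereas this sketch uses a regressor, and all hyperparameters are illustrative.

from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def predict_oos_sharpe(X, y):
    # Hold out 30% of algorithms; fit a forest on backtest-derived features.
    X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X_tr, y_tr)
    r2 = r2_score(y_ho, model.predict(X_ho))
    return r2, model.feature_importances_

The feature importances are what let the authors report which backtest statistics (e.g. tail ratio, kurtosis) carry the predictive information.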


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see performance of trading systems we described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About


A New Analysis of Commodity Momentum Strategy

26.April 2016

A related paper has been added to:

#21 – Momentum Effect in Commodities

Authors: Bianchi, Drew, Fan

Title: Microscopic Momentum in Commodity Futures

Link: https://www120.secure.griffith.edu.au/research/file/0a572b95-132b-419d-9a71-310420fad143/1/2015-10-microscopic-momentum-in-commodity-futures.pdf

Abstract:

Conventional momentum strategies rely on 12 months of past returns for portfolio formation. Novy-Marx (2012) shows that the intermediate return momentum strategy, formed using only returns from twelve to seven months prior to portfolio formation, significantly outperforms the recent return momentum formed using returns from six to two months prior. This paper proposes a more granular strategy termed ‘microscopic momentum’, which further decomposes the intermediate and recent return momentum into single-month momentum components. The novel decomposition reveals that a microscopic momentum strategy generates persistent economic profits even after controlling for sector-specific or month-of-year commodity seasonality effects. Moreover, we show that the intermediate return momentum in commodity futures must be considered largely illusory, and all 12 months of past returns play important roles in determining the conventional momentum profits.

Notable quotations from the academic research paper:

"In  this  study,  we  propose  a  third  type  of  momentum  strategy  termed Microscopic Momentum, which further decomposes the recent (6 to 2 months) and intermediate (12 to 7 months)  momentum  of  Novy-Marx  (2012)  into  12  single-month  individual momentum components. As a consequence of the decomposition, we are able to take a glimpse at momentum profits under a month-by-month, microscopic scale. For the first time,  this  novel  approach  not  only  reveals  a  striking  new  discovery  of  a  momentum based anomaly, but also allows us to pinpoint whether specific months in the past play a more significant role in determining conventional and echo momentum profits, hence it offers fresh insights into our understanding of momentum in commodity futures.

The proposed granular analysis of microscopic momentum makes four major contributions to the commodity futures literature. First, in the commodity futures markets, the ‘11,10 microscopic momentum strategy’, constructed using the 11- to 10-month return prior to formation, produces an annualised average return of 14.74% with strong statistical significance. The superiority of the 11,10 strategy is not driven by sector-specific or month-of-year commodity seasonality effects and is robust across sub-periods and out-of-sample analysis.
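A minimal sketch of the decomposition and the 11,10 strategy follows, assuming a DataFrame `rets` of monthly commodity futures returns (one column per contract); the indexing convention (component k is the single month of return earned k months before formation) and the quintile breakpoints are assumptions, not the paper's exact construction.

import pandas as pd

def microscopic_ls(rets, k=10, q=0.2):
    # Component k of the decomposition: the single month of return earned
    # k months before formation (k = 10 is the month from t-11 to t-10,
    # i.e. the "11,10" signal).
    signal = rets.shift(k)
    ranks = signal.rank(axis=1, pct=True)
    longs = (ranks >= 1 - q).astype(float)
    shorts = (ranks <= q).astype(float)
    w = (longs.div(longs.sum(axis=1), axis=0)
         - shorts.div(shorts.sum(axis=1), axis=0))
    return (w * rets).sum(axis=1)   # equal-weight top-minus-bottom return

Running this for k = 1 through 12 reproduces the full month-by-month decomposition that the regressions below are built on.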

Second, when the RNM echo momentum is regressed against its microscopic components, RNM intermediate momentum can be completely subsumed by the 11,10 microscopic momentum. Thus, the superior performance of intermediate momentum claimed by RNM may be an illusion created by the 11,10 microscopic momentum. This implies that for tactical asset allocation decisions, CTAs and commodity fund managers must not consider intermediate momentum as a viable substitute for conventional momentum strategies. Instead, the 11,10 microscopic strategy, which offers similar profits in magnitude but unique return dynamics relative to conventional strategies, may be a feasible alternative.

Third, around 77% of the variation of returns in the JT conventional momentum strategy can be explained by its microscopic decomposition. However, since no single month dominates, all past months are found to be important in determining the conventional commodity momentum profits.

Fourth, echo and microscopic momentum are partially related to U.S. cross-sectional equity momentum and the returns of broad commodity futures, but are not related to stocks, bonds, foreign currency risks, or macroeconomic conditions. Consistent with Asness et al. (2013), this finding implies that there may indeed be a common component in momentum across asset classes."


Are you looking for more strategies to read about? Check http://quantpedia.com/Screener

Do you want to see performance of trading systems we described? Check http://quantpedia.com/Chart/Performance

Do you want to know more about us? Check http://quantpedia.com/Home/About
