The Expected Returns of Machine-Learning Strategies

29.July 2024

Does the investment in sophisticated machine-learning research and development pay off? It is an important question, especially given the rising costs of such R&D and the possibility of decaying returns for methods developed in the more distant past. A recent paper by Azevedo, Hoegner, and Velikov (2023) evaluates the expected returns of machine learning-based trading strategies after accounting for transaction costs, post-publication decay, and the current high-liquidity environment. The hurdles are not trivial, but the research suggests that, despite high turnover rates, some machine learning strategies continue to deliver positive net returns.
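
For intuition on why turnover matters so much here, below is a minimal back-of-the-envelope sketch of the net-return arithmetic. All numbers are hypothetical placeholders, not estimates from Azevedo, Hoegner, and Velikov (2023):

```python
# Hypothetical illustration: net return of a high-turnover ML strategy
# after transaction costs. All figures are placeholders, not the
# authors' estimates.

gross_annual_return = 0.12   # 12% p.a. gross, before costs (assumed)
monthly_turnover = 1.50      # 150% of portfolio value traded per month (assumed)
one_way_cost_bps = 10        # assumed one-way trading cost in basis points

# Annual cost drag = monthly turnover * 12 months * cost per unit traded
annual_cost_drag = monthly_turnover * 12 * one_way_cost_bps / 10_000

net_annual_return = gross_annual_return - annual_cost_drag
print(f"Annual cost drag:  {annual_cost_drag:.2%}")   # 1.80%
print(f"Net annual return: {net_annual_return:.2%}")  # 10.20%
```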


Less is More? Reducing Biases and Overfitting in Machine Learning Return Predictions

13.November 2023

Machine learning models have been successfully employed to predict the cross-section of stock returns using lagged stock characteristics as inputs. The analyzed paper challenges the conventional wisdom that more training data leads to superior machine learning models for stock return prediction. Instead, the research demonstrates that training a separate machine learning model for each market-capitalization group can yield superior stock-level return predictions and long-short portfolios. The paper also showcases the impact of model regularization and highlights the importance of careful model-design choices.
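
The core idea, fitting one model per size bucket rather than a single pooled model, can be sketched in a few lines. The synthetic data, characteristic names, and the choice of ridge regression below are our own assumptions for illustration, not the paper's code:

```python
# Sketch: size-group-specific return prediction (our illustration).
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "size_group": rng.choice(["micro", "small", "large"], n),
    "mom_12m": rng.normal(size=n),          # assumed characteristic
    "book_to_market": rng.normal(size=n),   # assumed characteristic
})
# Toy forward-return target with a weak link to the characteristics
df["ret_fwd"] = 0.01 * df["mom_12m"] + rng.normal(scale=0.05, size=n)

features = ["mom_12m", "book_to_market"]

# Fit one regularized model per market-cap group instead of pooling
models = {}
for group, sub in df.groupby("size_group"):
    model = Ridge(alpha=1.0)  # regularization matters in smaller group samples
    model.fit(sub[features], sub["ret_fwd"])
    models[group] = model

# Score each stock with the model trained on its own size group
df["pred"] = 0.0
for group, model in models.items():
    mask = df["size_group"] == group
    df.loc[mask, "pred"] = model.predict(df.loc[mask, features])
```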


Decreasing Returns of Machine Learning Strategies

10.November 2023

Traditional asset-pricing literature has yielded numerous anomaly variables for predicting stock returns, but real-world outcomes often disappoint. Many of these predictors work best in small-cap stocks, and their profitability tends to decline over time, particularly in the United States. As market efficiency improves, exploiting these anomalies becomes harder. The fusion of machine learning with finance research offers promise: machine learning can handle extensive data, identify reliable predictors, and model complex relationships. The question is whether that promise translates into more accurate stock return predictions…


Exploring the Factor Zoo with a Machine-Learning Portfolio

3.August 2023

The latest paper by Sak, H., Chang, M. T., and Huang, T. delves into the world of financial anomalies, exploring the rise and fall of characteristics in what researchers refer to as the “factor zoo.” While significant research effort is devoted to discovering new anomalies, the study highlights the lack of attention given to the evolution of these characteristics over time. Leveraging machine learning (ML) techniques, the paper conducts a comprehensive out-of-sample factor zoo analysis, seeking to uncover the underlying factors driving stock returns. The researchers train ML models on a vast database of firm and trading characteristics, generating a diverse range of linear and non-linear factor structures. The ML portfolio formed on these findings outperforms established factor models, presenting a novel approach to understanding financial anomalies. Notably, the paper identifies two subsets of dominant characteristics, one related to investor-level arbitrage constraints and the other to firm-level financial constraints, which alternately play a significant role in generating the ML portfolio's return.
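
To make the portfolio-formation step concrete, here is a generic sketch of sorting stocks on a model's return forecast and going long the top decile and short the bottom. The data are synthetic and the decile construction is a common convention in this literature, not the authors' exact procedure:

```python
# Generic decile long-short construction from ML forecasts (our sketch,
# not the authors' code). One row per stock: a 'pred' return forecast
# and a realized next-period return 'ret'.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
panel = pd.DataFrame({
    "pred": rng.normal(size=n),                       # toy forecasts
    "ret": rng.normal(loc=0.01, scale=0.08, size=n),  # toy realized returns
})

# Sort into deciles on the forecast; long decile 10, short decile 1
panel["decile"] = pd.qcut(panel["pred"], 10, labels=False) + 1
long_ret = panel.loc[panel["decile"] == 10, "ret"].mean()
short_ret = panel.loc[panel["decile"] == 1, "ret"].mean()
print(f"Long-short (equal-weighted) return: {long_ret - short_ret:.2%}")
```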


Top Models for Natural Language Understanding (NLU) Usage

27.July 2023

In recent years, the Transformer architecture has experienced extensive adoption in the fields of Natural Language Processing (NLP) and Natural Language Understanding (NLU). Google AI Research’s introduction of Bidirectional Encoder Representations from Transformers (BERT) in 2018 set remarkable new standards in NLP. Since then, BERT has paved the way for even more advanced and improved models.

We discussed the BERT model in our previous article. Here we would like to list alternatives for all readers who are considering running a project using a large language model (as we do 😀 ), would like to avoid ChatGPT, and want to see the alternatives in one place. So, presented here is a compilation of the most notable alternatives to the widely recognized BERT language model, specifically designed for Natural Language Understanding (NLU) projects.
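
One practical point when comparing these models: most BERT-family alternatives load interchangeably through the Hugging Face transformers Auto classes, so swapping models is usually a one-line change. A minimal sketch (the checkpoint names are examples, not recommendations):

```python
# Minimal sketch: encode a sentence with a BERT-family model via the
# Hugging Face transformers library. Swapping the checkpoint string
# (e.g. "roberta-base", "albert-base-v2") is typically all that is
# needed to try an alternative NLU model.
from transformers import AutoModel, AutoTokenizer

checkpoint = "bert-base-uncased"  # replace with any alternative checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

inputs = tokenizer("Quantitative trading strategies.", return_tensors="pt")
outputs = model(**inputs)
# last_hidden_state holds one contextual embedding per input token
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 6, 768])
```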

