Adaptive Asset Allocation Revisited (with QuantConnect code)

October 19, 2023

Introduction

Finding investment strategies that adapt to changing market conditions is paramount. This is where adaptive asset allocation steps in. Unlike static allocation methods, which maintain a fixed allocation regardless of market fluctuations, adaptive strategies are designed to respond to shifts in economic, financial, or market variables. This proactive approach aims to optimize portfolio returns by dynamically adjusting asset allocations, keeping your investments aligned with the evolving market landscape.

In this post, we evaluate such systematic strategies, originally published by ReSolve Asset Management in 2016 (paper). They show that, over 1997 to 2016, an equal-weighted portfolio of 10 major global asset classes is drastically improved by adding volatility weighting, momentum overlays, and minimum variance portfolio optimization. Here, we take the perspective of retail investors who followed the proposed strategies post-publication and see how their portfolios fared. We also include QuantConnect code for each of the strategies for you to play around with.

Universe

We selected the ETFs that most closely track the proposed universe of 10 major global asset classes (also shown as a Python mapping after the list):

  • US stocks: ITOT
  • Japanese stocks: EWJ
  • EU stocks: VGK
  • Emerging market stocks: EEM
  • US REIT: VNQ
  • International REIT: RWX
  • US 7-10 year treasuries: IEF
  • US 20+ year treasuries: TLT
  • Commodities: DBC
  • Gold: GLD
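
For convenience, here is the same universe expressed as a Python mapping (the labels follow the list above; the variable name is ours):

```python
# The 10-asset universe, keyed by asset class.
UNIVERSE = {
    "US stocks": "ITOT",
    "Japanese stocks": "EWJ",
    "EU stocks": "VGK",
    "Emerging market stocks": "EEM",
    "US REIT": "VNQ",
    "International REIT": "RWX",
    "US 7-10 year treasuries": "IEF",
    "US 20+ year treasuries": "TLT",
    "Commodities": "DBC",
    "Gold": "GLD",
}
```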

Experimental Setup

We evaluate five dynamic strategies and compare them to an equal-weighted portfolio of all 10 assets and the classic US 60/40 benchmark. The backtests were done using the Python package vectorbtpro; they start on January 1, 2016, and run until October 1, 2023. Portfolios were rebalanced on the first trading day of every month. Data is sourced from Yahoo Finance. Additionally, we backtested each strategy on QuantConnect, for which the code can be found here. For a full side-by-side comparison of performance metrics, refer to the conclusion.
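
To give a rough sense of the scaffolding (this is not the actual vectorbtpro or QuantConnect code), here is a minimal sketch that pulls daily prices from Yahoo Finance via the yfinance package and derives the monthly rebalance dates; all variable names are ours:

```python
import pandas as pd
import yfinance as yf

TICKERS = ["ITOT", "EWJ", "VGK", "EEM", "VNQ",
           "RWX", "IEF", "TLT", "DBC", "GLD"]

# Daily adjusted closes over the full backtest window.
prices = yf.download(TICKERS, start="2016-01-01", end="2023-10-01",
                     auto_adjust=False)["Adj Close"]
returns = prices.pct_change().dropna()

# The first trading day of every month serves as the rebalance date.
rebalance_dates = prices.groupby(prices.index.to_period("M")).head(1).index
```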

Inverse Volatility Weighting

This strategy allocates capital to assets in inverse proportion to their risk. This way, exposure to risky assets is reduced, decreasing overall portfolio risk. We follow the paper’s methodology and measure risk using the simple standard deviation of past returns with a lookback of 60 days. If you want to delve deeper into these types of portfolios, you can do so here (naive risk parity).
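
A minimal sketch of the weighting rule, assuming the `returns` DataFrame from the setup above (the 60-day lookback follows the paper; the function name is ours):

```python
import pandas as pd

def inverse_volatility_weights(returns: pd.DataFrame, lookback: int = 60) -> pd.Series:
    """Allocate proportional to 1/sigma, estimated over the trailing window."""
    vol = returns.tail(lookback).std()  # simple standard deviation of past returns
    inv_vol = 1.0 / vol
    return inv_vol / inv_vol.sum()      # normalize so weights sum to one
```

At each monthly rebalance, the weights would be recomputed on the trailing 60 days of returns and held until the next rebalance.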

Although this allocation strategy beats the equal-weighted portfolio (EW) in the original paper, that result does not seem to hold in the period after publication (2016 onwards). The dynamic approach obtained a Sharpe ratio of only 0.33, whereas the EW achieved 0.40. Moreover, neither was able to outperform the 60/40 benchmark, which posted a Sharpe ratio of 0.53.

We suspect the disappointing performance is partly due to being underweight equities in a period when they did incredibly well. Indeed, only 40% of the universe consists of equities, which received a reduced allocation due to their riskier nature. Moreover, we know from the literature that this weighting scheme tends to work only for equity or credit assets (source), whereas the remaining 60% of our universe consists of assets that do not benefit from it.

Momentum Overlay

Momentum is perhaps the most famous factor in the investment world. The idea is simple: buy the assets with the most positive returns in recent months (and sell the biggest losers). You can read more about momentum here. The authors suggest buying (equal-weight) the top five assets based on their return over the last six months. The next equity curve shows the result of this strategy.
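
In code, the selection step might look like this (a sketch; we approximate six months by 126 trading days, and the function name is ours):

```python
import pandas as pd

def momentum_top_n(prices: pd.DataFrame, lookback: int = 126, top_n: int = 5) -> pd.Series:
    """Equal-weight the top_n assets by trailing total return; zero elsewhere."""
    window = prices.tail(lookback + 1)
    total_return = window.iloc[-1] / window.iloc[0] - 1.0
    winners = total_return.nlargest(top_n).index
    weights = pd.Series(0.0, index=prices.columns)
    weights[winners] = 1.0 / top_n
    return weights
```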

This time we were able to beat the equal-weighted portfolio, which is perhaps unsurprising as momentum has proven its effect time after time in practice. Moreover, the strategy was also able to outperform the 60/40 benchmark with a Sharpe ratio of 0.58 versus 0.53. Although momentum is a great tool, it can be slow to pick up changes in market environments. For example, it severely lagged the 60/40 recovery after the COVID crisis in 2020.

Momentum Overlay and Minimum Variance

In addition to buying the top five assets with the most positive returns over the last six months, weight allocation is now driven by a minimum variance portfolio optimizer. By taking past return correlations and volatilities into account, the optimizer aims to produce a portfolio with increased overall diversification and thus reduced risk. Read more about minimum variance here. The authors did not specify the lookback period for calibrating the model; we opted for a 60-day lookback, matching the one used for inverse volatility weighting. Note that performance was fairly robust to changes in this lookback window.
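
A sketch of the long-only minimum variance step (the solver choice is ours; the paper does not prescribe one), which at each rebalance would be applied to the trailing returns of the five momentum winners:

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize

def min_variance_weights(returns: pd.DataFrame, lookback: int = 60) -> pd.Series:
    """Long-only, fully invested minimum variance weights over the trailing window."""
    cov = returns.tail(lookback).cov().values
    n = cov.shape[0]
    x0 = np.full(n, 1.0 / n)  # start from equal weights
    result = minimize(
        lambda w: w @ cov @ w,                                  # portfolio variance
        x0,
        bounds=[(0.0, 1.0)] * n,                                # no shorting
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    )
    return pd.Series(result.x, index=returns.columns)
```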

Despite the results reported in the study, adding a minimum variance optimizer on top of the momentum overlay did not seem to improve results in the period after publication. Although the maximum drawdown is slightly lower at 21% versus 23% for the plain momentum strategy, the Sharpe ratio drops from 0.58 to 0.54.

Conclusion

We conclude this post with a table summarizing all strategy performance measures: 

Although the momentum strategy outperformed the 60/40 benchmark on risk-adjusted performance measures, it did so only slightly in terms of absolute returns over the last eight years (2016-2023). One could argue that applying such an adaptive asset allocation method might not be worth the effort. However, we expect that these strategies will continue to outperform, especially in the long run.

Note that this replication shows that reported results have to be taken with a grain of salt. We did not necessarily disprove the results published in the original paper, but our results do show that strategies can sometimes perform abysmally for quite a long time before picking up again.

Ultimately, we conclude that the simple equal-weighted momentum strategy is the most robust out-of-sample. More complicated weighting schemes typically come with an array of challenging estimation problems (e.g., correlations, volatilities, …). How do we measure volatility and correlation? How far back do we look? Do we weight each data point equally, or give more importance to recent data? The scope of the exercise quickly blows up, which in turn increases the probability of finding false positives (overfitting) among the tested strategies.
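
To make that last point concrete, even "the" volatility of a return series depends on these choices; the small sketch below contrasts an equally weighted rolling estimate with an exponentially weighted one (parameter values purely illustrative):

```python
import pandas as pd

def rolling_vol(returns: pd.Series, window: int = 60) -> pd.Series:
    # Every observation in the window counts equally.
    return returns.rolling(window).std()

def ewma_vol(returns: pd.Series, halflife: float = 30.0) -> pd.Series:
    # Recent observations count more; older data decays exponentially.
    return returns.ewm(halflife=halflife).std()
```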