22 January 2021
GAM Systematic’s Dr Daniele Lamponi and Dr Lars Jaeger highlight the importance of alternative risk premia (ARP) models evolving over time and adapting to changing conditions.
Two main theories of biological evolution developed in the 19th century: Charles Darwin assigned greater importance to fortuitous, undirected random variations that provide the material for natural selection over time, while Jean-Baptiste de Lamarck believed that evolution is primarily driven by non-randomly acquired, beneficial variations prompted by changes in the external environment. Both fought against the “standard theory” of the biological world of their time: that God created all living entities on Earth in the best possible way, and that they have existed in today’s forms since the beginning of life itself. While Lamarck is mostly forgotten today, Darwin’s theory became one of the cornerstones of modern biology. Contrary to today’s consensus view of biological evolution, evolutionary changes in the ARP space rather follow the Lamarckian theory: they mainly occur under the influence of the environment. And they certainly do not follow the creationist theory of having been created once in their optimal version, to live on in that form forever.
Since we started investing in ARP in the early to mid-2000s, they have become widely accepted and have grown to rightfully claim a main role in the universe of liquid alternatives. It is our strong belief that risk premia are here to stay. This belief is anchored in a strong theoretical foundation: risk premia play a major role in the functioning of capital markets, which is based on spreading and transferring risk across market participants. It is also supported by long-term empirical evidence spanning decades of real performance.
But we have also long argued that quantitative models must evolve and adjust to changing conditions, and ARP strategies are no exception. In the minds of some there might be an apparent conflict here: if risk premia are immutable over time, why do we need to adapt the harvesting models and algorithms? After all, one could claim, once a harvesting algorithm has been created, it will last as long as the particular risk premium lasts, ie indefinitely (apart from cases of structural changes in the risk transfer functionality). One must thus assume that there exists a best harvesting algorithm one can design, and at most we could tolerate an evolution taking the form of a convergence towards that “best” algorithm over time. While we see some merit in this argument, overall we believe it is incorrect. Harvesting algorithms need to evolve and adjust to changing external conditions, as they are the result of a balance of multiple objectives, such as the purity of the harvested risk premium, transaction costs, the liquidity and availability of the instruments used, and the operational complexity of the model implementation and execution.
But how do models adapt to changes in the underlying markets? As anticipated, contrary to the consensus views of biological evolution we embrace the Lamarckian theory here: evolutionary changes in the ARP space mainly occur under the influence of the environment. Lamarckian evolution of an ARP practitioner’s models is of paramount importance to the investor, as it has a direct and measurable outcome: performance.
At the beginning of our ARP journey in the early 2000s we spent most of our research time on “new” risk premia1. As in every research process, successes and failures alternated: some research projects led to robust strategies, others were discarded because of difficult implementation or excessive transaction costs, and still others because of weak empirical support or the lack of evidence for the assumed risk transfer in the theoretical background. Almost two decades later, most of our time is dedicated to enhancing existing strategies and reviewing harvesting algorithms. Of course, it is thrilling to re-examine a strategy with an expensive or difficult implementation, or to follow a new lead with great intellectual promise. Both may come with the creation of new financial instruments or an increase in the liquidity of existing ones. But, again, in today’s established framework this is more the exception than the rule: the new frontier of ARP research, once a practitioner has established themselves (which normally takes a few years or even a decade), is implementation efficiency rather than the constant discovery of new premia2. Therein lies the real innovation, and a marked border between successful and unsuccessful harvesting algorithms (as unsexy as this sounds in client pitchbooks).
What is common, then and now, is the methodology employed. At GAM, our research process is fundamentally based on understanding the risk and performance drivers of strategies. The focus of our research is not creating great returns in backtests, but identifying risk drivers and risk transfer mechanisms. The final part is then the design of an algorithm to most efficiently harvest the premia. As already anticipated, this last step implies compromises: the purity of the harvested risk premium must be balanced against the cost of the harvesting process, the availability and liquidity of traded instruments, and the complexity of the implementation.
In order to emphasise this continuous evolution in the definition of the harvesting algorithm, we consider the following two examples. Both illustrate why model evolution is needed and show that harvesting alternative risk premia portfolios requires experience and continuous attention to detail. The first example relates to the hedging instrument in our minimum variance strategy, while the second relates to the impact of central bank monetary policy on our algorithm harvesting the bond momentum premium.
Example 1: hedging instrument in the minimum variance strategy
GAM’s minimum variance strategy is market neutral: it is constructed by creating a long portfolio of equities and balancing this with a short futures position on indices in the relevant market. The securities in the long only portfolio are selected according to a minimum variance optimisation3, while the size of the futures hedge is computed by imposing a neutrality constraint on directional market exposure (beta hedge). Figure 1 gives a graphical representation of the overall strategy. As we state repeatedly, the devil of such an implementation lies in the details and in the multiple choices faced while developing the harvesting algorithm. Specifically, even once all the constraints on the long only portfolio and a methodology to compute the beta hedge are fixed, the question of which futures contract to use as the hedge remains open. For example, in the Japanese market both Nikkei and Topix futures are traded4. Both are liquid; trading costs are lower for the Nikkei, albeit only marginally. The Nikkei index also has marginally higher volatility and beta, thus requiring fewer contracts to beta hedge the long only portfolio. At the same time the Nikkei is more concentrated, ie it bears higher idiosyncratic risk (which explains some of the higher volatility). The choice to trade one contract or the other is a fine balance between transaction costs, netting possibilities in the portfolio, liquidity, and quality of hedge. Furthermore, the composition of the indices changes over time, which implies that the choice should be reconsidered regularly. Experience and a clear understanding of the return drivers of the long and short portfolios are needed in order to select the right contract to trade. Figure 2 shows the difference in performance of the two futures contracts over the period December 2015 to December 2020 and the difference in exposure (sector allocation) between the two Japanese indices.
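The beta hedge step described above can be sketched as follows. This is our own minimal illustration, not GAM’s actual algorithm: beta is estimated as the covariance of portfolio and index returns over the index variance, and the return series, portfolio value and futures notional below are synthetic assumptions.

```python
import numpy as np

def beta_hedge_ratio(portfolio_returns, index_returns,
                     portfolio_value, futures_notional):
    """Estimate beta as cov(portfolio, index) / var(index) and derive the
    number of futures contracts needed to neutralise directional exposure."""
    cov = np.cov(portfolio_returns, index_returns)
    beta = cov[0, 1] / cov[1, 1]
    n_contracts = beta * portfolio_value / futures_notional
    return beta, n_contracts

# Illustrative synthetic daily returns (not real Nikkei/Topix data):
# a long book with a true beta of ~0.8 to the index
rng = np.random.default_rng(0)
index = rng.normal(0.0, 0.01, 250)
portfolio = 0.8 * index + rng.normal(0.0, 0.005, 250)
beta, contracts = beta_hedge_ratio(portfolio, index,
                                   portfolio_value=100_000_000,
                                   futures_notional=2_000_000)
```

A higher-beta, higher-volatility contract such as the Nikkei would mechanically require fewer contracts for the same notional, which is one of the trade-offs discussed above.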
Figure 1. Graphical representation of our minimum variance strategy.
The choice of futures on Topix or Nikkei is undoubtedly a small one, a small detail in oceans of details, but it could cost the strategy5 around 15% over the considered period!
Figure 2. Topix versus Nikkei Indices.
Example 2: bond momentum and monetary policy
The change in behaviour catalysed by the introduction of very low interest rates is an example of changes in market environment and structure prompting the need to adapt a harvesting algorithm. In recent years central banks across the developed economies have implemented unprecedented measures to fight deflationary scenarios, with interest rates repeatedly cut to eventually reach negative levels, and quantitative easing becoming the new normality. As we have discussed elsewhere6, the intervention of central banks has prompted structural changes in expected interest rate distributions and term structures, requiring a review and adjustment of the algorithm harvesting the momentum risk premium in developed bond markets. On the one hand, the statistical properties of bond time series have changed significantly in the last few years because interest rates are at levels where further decreases are less likely. On the other, the behaviour of the term structure of interest rates is also structurally different, as short-term rates have limited freedom of movement in this new framework. Both structural changes are illustrated in Figure 3. Panel A shows the evolution of one-year rolling volatility for the Euro-Bund (10-year tenor) and Euro-Schatz (2-year tenor) futures contracts, while Panel B shows the regression coefficient (beta) of the Euro-Bund versus the Euro-Schatz time series. Both point to structural changes in the volatility regime of the shorter-dated contract and in the relationship between the two contracts. The new market conditions forced us to question and closely monitor the use of volatility as an indicator in the harvesting algorithm, whether employed in signal generation or in the definition of the weighting scheme.
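The two diagnostics behind Figure 3 (one-year rolling volatility and a rolling regression beta between a long-tenor and a short-tenor contract) can be sketched as below. This is a generic illustration with synthetic return series, not the actual Euro-Bund and Euro-Schatz data or GAM’s production code.

```python
import numpy as np

def rolling_vol(returns, window=252):
    """Annualised rolling volatility over a one-year (252-day) window."""
    out = np.full(len(returns), np.nan)
    for t in range(window, len(returns) + 1):
        out[t - 1] = returns[t - window:t].std(ddof=1) * np.sqrt(252)
    return out

def rolling_beta(y, x, window=252):
    """Rolling regression coefficient of y on x, eg a Bund proxy vs a
    Schatz proxy: cov(y, x) / var(x) over the trailing window."""
    out = np.full(len(y), np.nan)
    for t in range(window, len(y) + 1):
        ys, xs = y[t - window:t], x[t - window:t]
        out[t - 1] = np.cov(ys, xs)[0, 1] / np.var(xs, ddof=1)
    return out

# Synthetic series (assumed, not market data): a long-tenor proxy with a
# true beta of ~2 to a short-tenor proxy
rng = np.random.default_rng(1)
schatz = rng.normal(0.0, 0.001, 600)
bund = 2.0 * schatz + rng.normal(0.0, 0.001, 600)
vol_schatz = rolling_vol(schatz)
beta_bund = rolling_beta(bund, schatz)
```

A collapse in the short-tenor volatility, as seen after rates reached their effective floor, would show up in `vol_schatz` and mechanically push the rolling beta higher, which is why such indicators need close monitoring when used in signal generation or weighting.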
Figure 3. Example of structural changes in bond markets prompted by central bank monetary policy.
As we have argued many times and will never tire of repeating, the process of successfully harvesting single risk premia requires experience and continuous attention to detail. Both are of paramount importance in handling the complexity arising from the multiple choices one faces while designing and implementing an algorithm, concerning for instance data, investable universes, signals and portfolio construction methodologies7. But they are also crucial in finding the right balance between the purity of the risk premium and the costs involved, and in identifying the most efficient way to use tradable instruments. Considering the astounding dispersion in returns across ARP providers, not just in 2020, it seems many providers have not yet found this balance. Maybe it is time to appreciate the Lamarckian framework and the time it takes for ARP portfolios to evolve towards a powerful balance, rather than believing in God-given optimal ARP strategies.
2Actually, most claims of new risk premia discovery can be traced back either to in-sample optimisation or to already known return drivers.
3The objective of our minimum variance optimisation is to select securities such that the resulting long only portfolio has the minimum risk (as measured by variance).
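The unconstrained version of this objective has a well-known closed-form solution, with weights proportional to the inverse covariance matrix applied to a vector of ones. The sketch below is our own illustration with a made-up covariance matrix, using a crude long-only clip rather than the constrained optimiser a real implementation would require.

```python
import numpy as np

def min_variance_weights(cov, long_only=True):
    """Closed-form minimum variance weights w ~ inv(Sigma) @ 1, optionally
    clipped to long-only and renormalised (a crude stand-in for a solver)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # solves Sigma @ w = 1
    w /= w.sum()
    if long_only:
        w = np.clip(w, 0.0, None)
        w /= w.sum()
    return w

# Hypothetical 3-asset covariance matrix: the low-volatility, low-correlation
# asset should receive the largest weight
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
```

The resulting portfolio variance is below that of the least risky single asset, which is the point of the optimisation.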
4The Nikkei Index is comprised of the country's top 225 stocks. It is a price-weighted index, which means the index is an average of the share prices of all the companies listed. The Topix Index is a capitalization-weighted (free float) index that lists all firms in the "first section" of the Tokyo Stock Exchange, a section that organizes all large firms on the exchange into one group. The number of securities in the index is currently 2173.
5Minimum Variance on Japanese market, beta hedged
6D. Lamponi. Beyond the rear-view mirror. GAM Insights 2019
7D. Lamponi and A. Schorr. ARP: There is no such thing as generic algorithms. GAM Insights 2019.
The information in this document is given for information purposes only and does not qualify as investment advice. Opinions and assessments contained in this document may change and reflect the point of view of GAM in the current economic environment. No liability shall be accepted for the accuracy and completeness of the information. Past performance is not an indicator of current or future developments.