Although classification schemes for Alternative Risk Premia (ARP) strategies might differ slightly across managers, a broad consensus has developed around the categories of generally agreed risk premia that make up the universe. At GAM Systematic, we classify ARP strategies across two dimensions: strategy (which we term value, momentum, carry) and asset class (where we include currencies, bonds, equities, commodities). This classification scheme has proved useful in managing our portfolios and in researching individual premia to deliver on the premise of ARP – the sustainable and repeatable harvesting of risk premia across non-traditional exposures to diversify the simple equity and bond risk inherent in the vast majority of investment portfolios.
The landscape was quite different when certain individuals within the GAM Systematic team began to pioneer the concept of ARP investing in 2004. Back then there was still considerable mystique surrounding alternative return streams, much of it created by the hedge fund industry to support its claims of generating “alpha”. Investors generally accepted high fees and a “black box” lack of transparency as the cost of entry to the hedge fund wizards and their promises of market-proof returns. That old “alpha” word is largely gone, eroded in part by years of mediocre hedge fund returns and insufficient diversification during challenging markets. Out of its remnants emerged the concept of alternative risk premia, also known as alternative beta. This shift has happened mostly because investors have come to appreciate that a large portion of “alpha” returns can in fact be captured by ARP investments. Moreover, ARP strategies do not typically charge exorbitant fees, nor do they come wrapped in a “black box”; they simply seek to systematically extract “alternative” risk premia across capital markets (above and beyond those of long-only equity and bond holdings) using straightforward trading tools. By virtue of extensive research on the subject1, it is today generally agreed that ARP are the economic driver of a large part of hedge fund returns.
A reasonable question to ask is: if the ARP approach uses straightforward tools and benefits from academic credibility, must the universe not be generic or commoditised? And, if this is indeed the case, surely the selection of a manager or provider is unimportant? A look at the spectrum of managers and providers, and the spread of their returns over the last few years, provides an answer of sorts. It is certainly the case that the categorisation and descriptive heterogeneity within the ARP space is more contained than that of the hedge fund industry – there are generally agreed categories (like value, momentum and carry) and, similarly, the agreed single ARP model descriptions within those categories are typically consistent. However, a simple examination of track records clearly demonstrates that, historically, some managers have consistently outperformed others. One can also find a significant spread in the performance of offered ARP that share the same descriptor. We at GAM Systematic believe that long-term relative outperformance using the ARP approach is an achievable objective. The salient question is whether such results are predicated on the quality of the models or whether they stem from a distinctive asset allocation framework, one which may, for example, use expected drawdown as its risk measure rather than the more broadly employed volatility. As is often the case with binary questions, in our view the answer most likely lies in a healthy mix of both.
ARP models are actually not as generic as the average investor may believe. While, for example, specifying an FX carry strategy might sound quite straightforward, as so often in the world of investments the devil lies in the details: how exactly does one define the interest differential (based on real or nominal yield differences)? What universe of FX rates is optimal (eg should emerging market currencies be included, and if so which ones)? How should risk positions be sized (on spread, signal, inverse volatility, etc)? Questions like these, and the lack of clear answers, require ongoing research into existing as well as potentially new ARP opportunities. This is why we at GAM Systematic are committed to reviewing and challenging existing models on an ongoing basis, even those that have been particularly successful in the past.
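To make these design choices concrete, the sketch below shows one possible resolution of two of the questions raised: nominal interest-rate differentials as the carry signal, and inverse-volatility position sizing. This is a minimal illustration, not a description of any actual GAM Systematic model; the currencies, rates and volatilities are entirely hypothetical.

```python
# Minimal FX carry sketch: rank currencies by nominal rate differential,
# go long the highest-carry names, short the lowest, size 1/volatility.
# All inputs are hypothetical, chosen purely for illustration.

def carry_positions(rate_diff, vol, top_n=2):
    """rate_diff: {ccy: nominal rate minus funding rate}
    vol: {ccy: annualised volatility}
    Returns positions normalised to unit gross exposure."""
    ranked = sorted(rate_diff, key=rate_diff.get, reverse=True)
    longs, shorts = ranked[:top_n], ranked[-top_n:]
    pos = {c: 1.0 / vol[c] for c in longs}        # inverse-vol sizing
    pos.update({c: -1.0 / vol[c] for c in shorts})
    gross = sum(abs(v) for v in pos.values())
    return {c: v / gross for c, v in pos.items()}

pos = carry_positions(
    rate_diff={"AUD": 0.020, "NZD": 0.018, "JPY": -0.005,
               "CHF": -0.008, "CAD": 0.004},
    vol={"AUD": 0.10, "NZD": 0.11, "JPY": 0.08,
         "CHF": 0.07, "CAD": 0.06},
)
```

Every line of this sketch embodies a choice (nominal rather than real differentials, a fixed number of names per side, inverse-vol rather than signal-proportional sizing) that a different, equally plausible model might make differently.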
What does this research process look like? In particular, how do we source new ideas for implementation and determine the model signals, parameters and risk rules? And, perhaps most importantly of all, how can we avoid the risk of creating and believing overly optimistic back-tested results? The following paragraphs aim to take a real-life look at research questions like these, based on a discussion of the market-neutral equity value model.
There are many reasons why managers should constantly review existing models, such as, for example, the availability of new trading instruments, shifts in market structure or changes in trading liquidity profile. However, it is also essential to find simple ways of improving the models’ ability to harvest specific risk premia and / or to mitigate the noise arising from unwanted exposures. As markets and their participants continuously evolve, investment strategies also need to adapt to reflect new realities. These changes might lie in very different dimensions, such as changes in tax policy or accounting practices, disruptive technologies or industry maturity. A further consideration is that emerging markets are becoming more developed in profile, while the related currencies can shift from being unpegged to pegged or vice versa.
Value investing is a well-known investment strategy, where investors buy cheap assets and sell expensive ones. This investment approach as a philosophy dates back to the book Security Analysis, written by Benjamin Graham and David Dodd in 1934. Since then, numerous studies have empirically proven that value stocks outperform. The rational explanations mentioned in the literature2 are that value stock prices co-move with risk factors such as distress risk or extreme event risk (which tends to correlate with people’s income and wealth). As such, exposure to the value factor typically delivers higher average returns in the form of risk compensation.
So historically value investing, or buying cheap, has been a successful philosophy. But how do we define cheap? Fundamental value investors attempt to identify cheap securities by analysing a company’s underlying fundamentals to estimate its intrinsic value and then comparing that to the market price of its shares. The difference between the intrinsic value and the market value shows how cheap (or expensive) the company is on this measure. We at GAM Systematic agree with this in principle and believe we can harvest the value risk premium by systematically screening large investable universes using fundamental metrics. These include price-to-book ratios, as well as a few others, which help in determining whether a company’s shares appear cheap or not. In this process we rely heavily on computational and data sciences. However, it would be wrong to assume that computers are driving our security selection, as the detail of our investment rationale (which forms the backbone of our investment philosophy) lies within the very definition of the algorithm itself and also the chosen metrics.
For value in its simplest form, one would typically select an investable universe (which in our case is restricted to very liquid securities belonging to a given index, like the S&P 500) and rank the securities according to one or more metrics (such as price-to-book). The strongest securities (cheapest) are bought (long) while the weakest (most expensive) are sold (short).
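The construction just described can be sketched in a few lines. This is a deliberately simplified illustration of the generic ranking approach, not our production model; the tickers and price-to-book ratios are invented for the example.

```python
# Value in its simplest form: rank a universe by price-to-book and
# go long the cheapest quantile, short the most expensive, equal-weighted.
# Universe and ratios below are hypothetical.

def value_portfolio(price_to_book, quantile=0.2):
    """price_to_book: {ticker: P/B ratio}. Returns {ticker: weight},
    positive weights are longs, negative weights are shorts."""
    ranked = sorted(price_to_book, key=price_to_book.get)  # cheapest first
    k = max(1, int(len(ranked) * quantile))
    w = 1.0 / k
    weights = {s: w for s in ranked[:k]}          # long the cheapest
    weights.update({s: -w for s in ranked[-k:]})  # short the most expensive
    return weights

# Hypothetical ten-stock universe
pb = {"AAA": 0.8, "BBB": 1.1, "CCC": 2.5, "DDD": 4.0, "EEE": 6.3,
      "FFF": 0.6, "GGG": 3.2, "HHH": 1.9, "III": 5.1, "JJJ": 2.2}

weights = value_portfolio(pb, quantile=0.2)  # two longs, two shorts
```

Note that the long and short legs offset, so the portfolio is market-neutral by construction in notional terms.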
The classical framework is based on the use of price-to-book as the leading or single measure of the value risk premium. Book value refers to the value of company assets and the measure is therefore an expression of how cheap or expensive a stock is relative to those assets. For a long time this was the standard methodology used in academic research and by the investment community. Its prominent supporters were Fama & French, who included it in their famous three-factor model (the three factors being market risk, company size premium and value premium). Interestingly, the first broadly implemented risk premia models used in harvesting the equity value risk premium were based on the same approach. The academic rationale was “simply” complemented by meticulous attention to trading costs, liquidity and investability3. However, today it is accepted that this price-to-book approach is dated and imperfect. Book value is most useful in industries where assets are typically tangible (such as older industries). In contrast, patents, brands and goodwill, which often represent a much larger proportion of the true value of constituents in newer and fast-growing industries, are difficult to quantify and are poorly captured by book value. Furthermore, book values might not mean much at all in some sectors, such as services. That said, the model was splendid in its simplicity. Over time other metrics besides the price-to-book ratio have been increasingly used in academic literature as well as among practitioners. Examples of these include price-to-earnings and price-to-cashflow indicators.
Chart 1: Value factor dispersion (10 year average)
Some ten years ago we pointed out the shortcomings of the first simple value models, as the dispersion among typical value factors was increasing. In our view, this called for a revision of the original model – although it was still working and continued to be based on the same strong rationale, the noise was increasing and the strength of the signal was decaying. We can illustrate this by looking at a simple measure of return dispersion among value metrics. For example, in Chart 1 above we report the 10-year averages of the difference between the maximum and the minimum monthly returns of portfolios constructed with the price-to-book, price-to-cashflow and price-to-earnings metrics. The higher dispersion over time indicates that the value metrics are no longer all telling the same story (ie the old agreed signals or metrics are diverging in helpfulness).
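The dispersion measure described above is simple to state in code: for each month, take the spread between the best and worst performing metric portfolio, then smooth with a trailing average (120 months for the 10-year window used in Chart 1). The return series below are invented purely to show the mechanics.

```python
# Dispersion among value metrics: per-month spread between the best
# and worst performing metric portfolio, then a trailing average.
# Monthly return series are hypothetical.

def metric_dispersion(returns_by_metric):
    """returns_by_metric: {metric: [monthly returns]} -> per-month
    max-minus-min spread across the metric portfolios."""
    months = zip(*returns_by_metric.values())
    return [max(m) - min(m) for m in months]

def trailing_mean(series, window):
    """Average over the trailing `window` observations (shorter at the start)."""
    return [sum(series[max(0, i - window + 1): i + 1]) / min(i + 1, window)
            for i in range(len(series))]

# Four hypothetical months of returns for three value-metric portfolios
rets = {
    "price_to_book":     [0.010, -0.004, 0.012, 0.003],
    "price_to_cashflow": [0.006,  0.001, 0.008, 0.009],
    "price_to_earnings": [0.004, -0.002, 0.015, 0.001],
}

spread = metric_dispersion(rets)           # per-month dispersion
avg = trailing_mean(spread, window=120)    # ~10-year trailing average
```

A persistently rising `avg` series is the quantitative symptom Chart 1 depicts: the metrics are no longer interchangeable proxies for the same premium.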
Given that the metrics used to predict value have been diverging, adding complementary metrics has proved to be an efficient way to reduce the noise in the model and, at the same time, to partially address the sector bias resulting from the use of the price-to-book metric. The evolution of the market structure has pushed us to further review the way we harvest the value risk premium, leading us to today consider using different metrics for different industry sectors and / or regions. For example, we might use EV (enterprise value) / EBITDA (earnings before interest, tax, depreciation and amortisation), price-to-cashflow and dividend yield ratios to assess the value of companies in the telecommunication services industry, while price-to-book and return-on-equity remains a more pertinent combination to use in relation to the financials sector.
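One way to express this sector-dependent approach is a lookup from sector to its relevant metrics, with each stock scored on the average rank across those metrics. The mapping below mirrors the two examples in the text; the fallback metric and all rank values are hypothetical and purely illustrative.

```python
# Sector-dependent value scoring: each sector uses its own set of
# metrics, and a stock's score is the mean of its percentile ranks
# on those metrics (lower = cheaper). Mapping and values are examples.

SECTOR_METRICS = {
    "telecommunication_services": ["ev_ebitda", "price_to_cashflow",
                                   "dividend_yield"],
    "financials": ["price_to_book", "return_on_equity"],
}

def value_score(sector, metric_ranks):
    """metric_ranks: {metric: percentile rank in [0, 1]}.
    Averages only the metrics relevant to the stock's sector."""
    metrics = SECTOR_METRICS.get(sector, ["price_to_book"])  # hypothetical fallback
    return sum(metric_ranks[m] for m in metrics) / len(metrics)

# A hypothetical financials stock, cheap on P/B, middling on ROE
score = value_score("financials",
                    {"price_to_book": 0.2, "return_on_equity": 0.4})
```

The design choice here is that metric selection, not just metric weighting, varies by sector, which directly addresses the book-value shortcomings discussed earlier.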
A clear example of the need to consider the evolution in markets is the weights and characteristics of the industry groups in the investable universes. Some industries are declining while others become more mature, changing their growth prospects. Chart 2 shows the evolution of the weights of industry groups within the S&P 500 Index. For example, the weight of the technology sector at the end of August 2018 was 26% against 19% in 2010 and less than 10% in the 1990s. The financials sector constituted more than 17% before the crisis of 2008, but has since declined to 14%.
Chart 2: Sector evolution of the S&P 500 Index
Meanwhile, the telecom weighting declined to such a degree that the GICS (Global Industry Classification Standard) sector classification used by MSCI and S&P amongst others had to be drastically modified in September 2018. A new sector (‘communications services’) was created that included the telecommunication services group in its entirety as well as certain companies from the technology and consumer discretionary groups. These 2018 changes (not reflected in the preceding chart) had the biggest impact on the sector landscape in GICS history, as around 10% of the S&P 500 weights were re-classified. This reflects how telecommunications, media, and select internet companies have evolved and converged.
Developments and factors like those discussed in this paper are continuously monitored by our team and motivate us to constantly review our approach to harvesting equity risk premia, particularly the value risk premium.
The model review and the slightly increased complexity find their justification in the evolution of the market structure, as well as in the necessity to refine the harvesting procedure (algorithm) by controlling for unwanted (ie unrewarded) risks. However, increasing complexity also means increasing back-testing biases. This is where models start to become excellent describers of the past and terrible predictors of the future. Should the complexity level become too high, over-fitting dominates and outcomes tend to disappoint or worse. Thus, we are confronted with a conflicting target function. Should we allow or even aim for complexity? This is clearly not an objective in its own right, at least not for us. However, we do feel the need to add some minimal complexity if it allows us to meaningfully reduce noise in the algorithm harvesting the risk premia. What we do not do is increase the complexity in order to increase the back-tested “fit” or “over-fit” to allow for all the historical twitches in the market. Of course, in addition to model review, the old meticulous attention to trading costs, liquidity and investability has become, with time and experience, even more important.
This example of equity value premia investing illustrates that the dynamics of global markets require constant attention to new details that can play a significant role in the quality of the risk premium extraction engine. Thus, we believe constructing some generic ARP model and then leaning back to enjoy its merits is the wrong approach to risk premia investing. Markets evolve and risks change. Therefore, if models are not thoughtfully reassessed over time, there is an obvious danger of being left behind in the race to find rewards for taking calculated risks.
In summary, we believe the way equity analysis has evolved since Fama & French’s three-factor model was published (little more than 25 years ago) clearly demonstrates the importance of constantly re-evaluating investment models. In our view, this is also true of other asset classes. We believe the persistent challenging of existing models, as well as a commitment to the evaluation of new opportunities, has proven, and should continue to prove, a differentiator in the ARP space across different timeframes.