Dynamic Asset Allocation for Practitioners
Part IV: Naive Risk Parity
Our last ‘prequel’ article explored the creation of a policy portfolio that utilizes a framework of structural diversification to hedge against the four major market regimes – inflationary boom, deflationary boom, stagflation and deflationary bust. In the conclusion of that article we said we would investigate a variety of quantitative methods of risk diversification to complement the more theoretical construct of structural diversification. This next instalment introduces naive risk parity methods.
Figure 1 illustrates how the estimates and assumptions we bring to bear in pursuit of the optimal portfolio dictate the type of optimization that we might choose to use. The structural diversification concept does not rely on any estimates per se for return or risk, but it does make strong assumptions about assets’ ambient volatility and the theoretical structure of correlations between the assets in the portfolio. In contrast, a true 1/n or equal weight portfolio makes no explicit estimates whatsoever about any asset in the portfolio. Implicitly, however, choosing an equal weighting scheme embeds the assumption that all assets are likely to exhibit similar return and risk characteristics.
The strategic portfolio built around structural diversification, and the equal weight portfolio, are highly susceptible to poor universe specification. For example, a portfolio consisting of nine high volatility equity-like assets and one low volatility ‘risk off’ asset will derive almost no diversification benefit from the risk off asset. The risk based optimizations that we will introduce over the course of the next few posts will be successively more robust to messy asset universes, but the quantitative methods necessarily trade off universe assumptions for estimate risks.
In this post we will deal with naive risk weighting or naive ‘risk parity’. Note that naive risk parity only requires that we derive estimates for each asset’s relative linear risk. In other words, naive risk parity makes no explicit quantitative allowance for diversification potential. As a result, the approach implicitly assumes that all assets are likely to exhibit similar Sharpe ratios. We have shaded out the right hand portion of the diagram dealing with co-risk (correlation) and return estimates because these concepts will be introduced at length in our next article.
Figure 1. Inside the Optimization Machine
In practice, forecasting returns is by far the hardest component of the investment process. Figure 2, from our thoughtful friends at Newfound, quantifies the rank stability of returns versus volatility and correlations in a U.S. equity universe using walk-forward testing. Note that volatility rank estimates appear to be approximately twice as stable as correlation estimates, and about four times more stable than return estimates. Despite this, most of the time and energy committed to the investment process is devoted to attempts to estimate relative returns. Investment banks and asset management firms hire armies of analysts to scrutinize every dimension of a firm’s operations in the hope of unearthing any pearl of insight that might offer an edge. Sadly, the record for these analysts is grim at best, which has increasingly compelled portfolio managers to seek alternative methods to improve portfolio outcomes that rely less on return forecasts.
Naive Risk Parity
Where an investor believes that all assets in his investment universe are likely to deliver an equal amount of excess return per unit of risk – in other words, assets are expected to exhibit equal Sharpe ratios – but where correlations cannot be estimated with confidence, a naive risk parity, or inverse risk, approach may be optimal. This approach requires that assets be allocated weight in the portfolio in proportion to the inverse of their respective risk: higher risk assets receive lower portfolio weights, and lower risk assets receive higher weights. Note that risk may be measured in a variety of ways, including volatility, variance, VaR, CVaR, CDaR, maximum daily loss, etc. We will cover naive risk parity weighting approaches in this article, along with some tests below.
Figure 3 shows, for a simple 1/n equal weight portfolio, how each asset’s nominal contribution to portfolio volatility changes through time (note that this is a simple plot of the proportional rolling 60 day volatility, so it does not show marginal risk – that’s for another post).
Figure 3. Nominal asset volatility contribution through time, 60 day rolling observations
Data Source: Bloomberg
If we hold all assets in equal weight regardless of their relative volatility, this creates a situation where the ‘lunatics run the asylum’. Notice in Figure 3 how IEF (the intermediate Treasury bond ETF, represented in orange) contributes much less to total portfolio risk over time than high volatility assets like emerging market stocks (EEM), European stocks (IEV) or U.S. real estate (ICF). When an asset class contributes so little volatility, its diversification potential is rendered ineffective despite its low average correlation with the risky assets in the portfolio. Think of a choir where the soloist is drowned out by the chorus.
In contrast, our goal with risk weighting is to ensure that, to the greatest degree possible, each asset contributes an equal amount of risk to the portfolio. To facilitate this, each asset must be weighted in the portfolio in proportion to the inverse of its risk, so that assets with high risk have a small allocation in the portfolio and low risk assets have a large allocation in the portfolio. Figure 4 clearly shows how a portfolio with the goal of equalizing asset level volatility in the portfolio requires a large weight in intermediate Treasuries (IEF), and relatively low weights in Emerging (EEM) and European (IEV) stocks through time.
Figure 4. Equal volatility weighting through time, 60 day rolling observations
Data Source: Bloomberg
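The weighting scheme behind Figure 4 can be sketched in a few lines. The sketch below uses randomly generated returns purely for illustration (our actual tests use the Bloomberg data described above, not this toy sample): compute each asset’s trailing volatility, invert it, and normalize.

```python
import numpy as np

# Hypothetical 60-day return histories for four assets (rows = days),
# generated with different volatilities purely for illustration.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, [0.02, 0.01, 0.015, 0.005], size=(60, 4))

# Inverse volatility weighting: each asset is weighted by the inverse
# of its trailing volatility, normalized so the weights sum to one.
vol = returns.std(axis=0, ddof=1)
inv = 1.0 / vol
weights = inv / inv.sum()

print(weights.round(3))
```

The lowest volatility series receives the largest allocation, just as the low volatility IEF dominates the weights in Figure 4.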
As naive risk parity methods do not account for the potential for many assets in the portfolio to be highly correlated, they are also vulnerable to poor universe specification. For example, a portfolio with ten equity-like assets and one fixed income asset will be completely dominated by the equity assets despite the inverse risk weighting. We took some care to bring our experience and knowledge of asset markets to bear in the creation of a well specified universe for this series of posts, and in fact we would argue that the universe of ETFs we chose for this article series is itself a source of value.
The objective of this article is to extend the framework constructed throughout articles 1 through 3 by introducing methods to weight assets in a portfolio based on their individual risk contributions.
We examine six ways of measuring and allocating based on asset level risk: volatility, variance, Value at Risk (VaR), Conditional Value at Risk (CVaR), Conditional Drawdown at Risk (CDaR) and Maximum Daily Loss. It’s worth noting that, if asset returns were truly normally distributed, these measures of risk would, on average, render the same results. Given that the different measures manifest in measurably different results, we can safely assert that asset returns are not normally distributed, a point few experienced investors would choose to debate.
Once we have calculated a risk measure for each asset, we weight each asset in inverse proportion to that measure and normalize so the weights sum to one. The six risk measures are defined as follows:
- Volatility – this is the granddaddy of risk measures, used in almost every formal risk model in use today. Volatility is defined as the standard deviation of log price changes (or, in the form most used in practice, the standard deviation of percent price changes). Many beginner quants make the easy mistake of running the volatility calculation on the raw price series rather than on the returns, so watch out for that. To calculate volatility in Excel we simply use the STDEV(A1:A60) function, where the range A1:A60 contains asset returns from time t=1 through t=60.
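A minimal sketch of the volatility calculation on a short hypothetical price series; note that the standard deviation is taken on the log price changes, not on the prices themselves:

```python
import numpy as np

# Hypothetical daily closing prices for one asset.
prices = np.array([100.0, 101.5, 100.8, 102.2, 103.0, 101.9, 104.1])

# Correct: volatility is the standard deviation of log price changes...
log_returns = np.diff(np.log(prices))
vol = log_returns.std(ddof=1)

# ...not the standard deviation of the raw price series, which
# conflates the price level with its variation.
wrong = prices.std(ddof=1)

print(round(vol, 4))
```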
- Variance – this is simply the volatility measure squared; we drop the square root from the calculation above.
The Excel implementation for variance is VAR(A1:A60).
- Value at Risk (VaR) – we could write a whole book on the nuances of VaR, as there are a wide variety of definitions and potential assumptions. Essentially, VaR represents the magnitude of loss which is expected to be exceeded only α percent of the time. For the purpose of this analysis we will use the empirical distribution of historical returns and focus on 5th percentile losses, so α = 0.05. If L represents all loss observations l over the lookback period, then VaR at level α is simply the α-quantile of L.
The Excel implementation for VaR at α = 0.05 is PERCENTILE(A1:A60,0.05)
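The same historical VaR calculation, sketched in Python on hypothetical returns, mirroring the Excel PERCENTILE formula:

```python
import numpy as np

# Hypothetical 60-day return sample for one asset.
rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=60)

# Historical VaR at alpha = 0.05: the 5th percentile of observed
# returns, mirroring PERCENTILE(A1:A60, 0.05) in Excel.
var_05 = np.percentile(returns, 5)

print(round(var_05, 4))
```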
- Conditional Value at Risk (CVaR) – this is an attempt to capture the often non-normal character of returns at the extreme left tail of the loss distribution. To calculate CVaR one must first calculate VaR, because CVaR is the average of all loss observations below the VaR threshold. Often this average is materially different from what would be expected under Gaussian assumptions, so this measure is considered more robust than traditional VaR. Frankly, the formal mathematical definition is beyond my ken so I am not going to bother to publish it, but the following is the pseudocode:
i. find the VaR threshold at percentile = α
ii. sort returns from highest to lowest, and discard all returns > the VaR threshold
iii. find the average of the remaining returns
The Excel implementation for CVaR is AVERAGEIF(A1:A60,"<"&PERCENTILE(A1:A60,0.05))
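The CVaR pseudocode above translates directly, again on a hypothetical return sample:

```python
import numpy as np

# Hypothetical 60-day return sample for one asset.
rng = np.random.default_rng(2)
returns = rng.normal(0.0005, 0.01, size=60)

# CVaR: the average of all returns below the VaR threshold, as in
# AVERAGEIF(A1:A60, "<" & PERCENTILE(A1:A60, 0.05)).
var_05 = np.percentile(returns, 5)
cvar_05 = returns[returns < var_05].mean()

print(round(cvar_05, 4))
```

By construction CVaR sits deeper in the tail than VaR, since it averages only the observations beyond the VaR threshold.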
- Conditional Drawdown at Risk (CDaR) – this is a close cousin to CVaR except that, where CVaR captures independent observations in the negative tail of the return distribution without regard for the sequence of those returns, CDaR measures average losses during the worst drawdown periods. Definitionally, this means that CDaR quantifies the potential for negative sequences of returns which lead to large losses in aggregate, and is therefore a potentially more robust measure than CVaR. In this post we use an α value of 0.2. Again, the formal mathematical definition for CDaR is not very useful for those of us who are not math PhDs, so we will present the pseudocode and Excel implementation logic:
i. generate the time series of rolling drawdowns
ii. find the drawdown threshold at percentile = 1 - α (the 80th percentile of drawdown magnitudes when α = 0.2)
iii. sort drawdowns from highest to lowest, and discard all drawdowns below the threshold
iv. find the average of the remaining (worst) drawdowns
The Excel implementation requires two steps:
1. To generate the time series of drawdowns, place the asset’s daily prices (or cumulative equity curve) in column A, then drag the following formula down rows 1 through 60 in column B: ABS(A1/MAX(A$1:A1)-1)
2. To find CDaR, average the drawdowns at or above the 80th percentile: AVERAGEIF(B1:B60,">="&PERCENTILE(B1:B60,0.80))
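The two spreadsheet steps can be sketched end-to-end on a hypothetical return series; here drawdowns are expressed as positive magnitudes, so the worst 20% of observations are those at or above the 80th percentile:

```python
import numpy as np

# Hypothetical daily returns for one asset.
rng = np.random.default_rng(3)
returns = rng.normal(0.0003, 0.01, size=60)

# Step 1: build the equity curve and the rolling drawdown series,
# mirroring ABS(price / running-max - 1) in the spreadsheet.
equity = np.cumprod(1.0 + returns)
drawdowns = np.abs(equity / np.maximum.accumulate(equity) - 1.0)

# Step 2: CDaR at alpha = 0.2 is the average of the worst 20% of
# drawdown observations (those at or above the 80th percentile).
threshold = np.percentile(drawdowns, 80)
cdar = drawdowns[drawdowns >= threshold].mean()

print(round(cdar, 4))
```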
- Maximum Daily Loss – this is really an extension of VaR and CVaR in the extreme case where the quantile = 0. In other words, we simply weight portfolios according to the inverse of the magnitude of the maximum daily loss during the lookback period.
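To make the differences between these six measures concrete, the sketch below computes each of them on a single hypothetical fat-tailed return series; the distribution and parameters are purely illustrative:

```python
import numpy as np

# Hypothetical fat-tailed daily returns (Student-t, purely illustrative).
rng = np.random.default_rng(4)
r = rng.standard_t(df=3, size=60) * 0.01

# Drawdown series from the cumulative equity curve.
equity = np.cumprod(1.0 + r)
dd = np.abs(equity / np.maximum.accumulate(equity) - 1.0)

# All six risk measures, expressed as positive magnitudes.
measures = {
    "volatility": r.std(ddof=1),
    "variance": r.var(ddof=1),
    "VaR(5%)": -np.percentile(r, 5),
    "CVaR(5%)": -r[r < np.percentile(r, 5)].mean(),
    "CDaR(20%)": dd[dd >= np.percentile(dd, 80)].mean(),
    "max daily loss": -r.min(),
}
for name, value in measures.items():
    print(f"{name}: {value:.4f}")
```

On non-normal returns the measures disagree in magnitude, which is why the inverse-risk weighting schemes below produce measurably different portfolios.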
For the most part this post is for illustrative purposes only. It is instructive to investigate the sensitivity of naive risk optimization to universe specification, so we have taken the step of running tests for each weighting method on two different universes. Our first universe is simply the universe we have been using throughout articles 1 – 3 so far:
Our 10 asset universe:
- Commodities (DB Liquid Commodities Index)
- U.S. Stocks (Fama French top 30% by market capitalization)
- European Stocks (Stoxx 350 Index)
- Japanese Stocks (MSCI Japan)
- Emerging Market Stocks (MSCI EM)
- U.S. REITs (Dow Jones U.S. Real Estate Index)
- International REITs (Dow Jones Int’l Real Estate Index)
- Intermediate Treasuries (Barclays 7-10 Year Treasury Index)
- Long Treasuries (Barclays 20+ Year Treasury Index)
The universe above is very well specified by many important measures, so it does not do a very good job of demonstrating the inherent weaknesses of the naive risk parity framework. In contrast, the universe below, which consists of 35 different equity indices along with REITs, gold, commodities, and a single intermediate-term Treasury index, clearly shows the limitations of naive risk parity, as the massive overweight in equities swamps the diversification of alternative asset classes. Where possible we have extended the horizon of ETFs back through time using their respective total return indices. We will call this our ‘Dog’s Breakfast’ universe, for lack of a better name.
Dog’s Breakfast Universe (alternative assets marked with an asterisk)

| Ticker – Asset | Ticker – Asset |
| --- | --- |
| VTI – Total U.S. Stock Market | EIRL – Ireland |
| TUR – Turkey | ECH – Chile |
| THD – Thailand | EEM – Emerging Markets |
| QQQ – Nasdaq | DBC – Commodities* |
| IFN – India | ACWI – All Cap World Index |
| IDX – Indonesia | VGK – Europe |
| GLD – Gold* | RSX – Russia |
| GREK – Greece | RWX – Int’l RE* |
| EWZ – Brazil | IYR – U.S. Real Estate* |
| EZA – South Africa | IEF – 7-10 Yr Treasuries* |
| EWW – Mexico | EWM – Malaysia |
| EWY – South Korea | EWK – Belgium |
| EWT – Taiwan | EWL – Switzerland |
| EWU – United Kingdom | EWJ – Japan |
| EWQ – France | EWH – Hong Kong |
| EWS – Singapore | EWI – Italy |
| EWO – Austria | EWD – Sweden |
| EWP – Spain | EWG – Germany |
| EWN – Netherlands | EWA – Australia |
| EFA – EAFE | EWC – Canada |
Note that all Sharpe ratios in the performance tables below are net of the 3 month T-bill rate.
It is clear from the performance summary above that naive risk based optimizations are vulnerable to large drawdowns even on well specified universes during periods like 2008. Again, this is because naive optimizations cannot control for periods when assets which are relatively uncorrelated under ambient conditions become highly correlated – and highly volatile – under periods of extreme financial stress. That said, it is clear that the more conservative risk optimizations, like inverse variance and inverse CDaR, do a materially better job of optimizing returns per unit of volatility and drawdown, as evidenced by their higher MAR and Sharpe ratios.
It is easy to see that the Dog’s Breakfast universe is poorly specified for a naive risk parity approach because the equity and equity-like assets completely swamp the ability of non-correlated assets like Treasury bonds to exert their diversification benefits. As a result, the performance of naive risk parity based portfolios is barely distinguishable from the simple equal weight version.
Conclusions and Next Steps
In this post we identified several methods of weighting assets in the portfolio to extend the concepts we explored in our ‘prequel’ article on structural diversification.
A variety of naive risk weighting or ‘risk parity’ techniques were introduced to balance the nominal risk contribution across assets in the portfolio, and the techniques were tested on our usual well specified universe of 10 assets as well as a poorly specified ‘Dog’s Breakfast’ universe consisting of 35 equity indices, REITs, gold, commodities and a single Treasury ETF. Naive risk parity clearly demonstrated a limited ability to leverage the diversification potential available from these two universes.
In our next instalment we will introduce the concept of robust risk parity, which leverages the covariance matrix in order to emphasize the diversification potential of assets in the portfolio. As an interesting twist, we will explore the concept of risk ‘clusters’ which quantitatively group assets into similar sources of risk using a novel decomposition of the covariance matrix. This method is all the more interesting for how it empirically validates the structural diversification concept that we introduced in our last article.
Figure 7. Current asset class clusters