Today we interview Lars Kestner, a Managing Director at a European investment bank. Over his 20+ year career on Wall Street, he has led teams that have managed derivative risk across a vast range of market environments. He is the author of Quantitative Trading Strategies, a cutting-edge text on systematic trading. Lars designed and employed his first systematic trading system to trade 30yr bond futures before entering college.

We discuss two papers that Lars released on his website, satquant.com, in the last few weeks. His paper “Preferred Portfolios” describes a novel framework for assembling strategies with wildly different characteristics into a coherent and resilient portfolio. We discuss how to sort strategies into Boosters, Defenders, Diversifiers and Selectors based on a novel quantitative method. We then go on to examine the theoretical limits of diversification, and the importance of aligning strategy composition with investor psychology and goals to minimize the potential for abandonment.

We also discuss a brand-new paper called “Replicating CTA Positioning: An improved method”, which proposes a method to peer into current CTA portfolio positioning. This is of value because CTA trend-followers are often the marginal buyer in markets at certain points. The ability to identify concentrated risk positioning and/or potential turning points may offer investors a unique edge.

Lars is clearly passionate about using quantitative methods to maximize investment results in the real world and he offers a variety of valuable nuggets for the perceptive listener. Please enjoy my conversation with Lars Kestner.


Lars Kestner
Managing Director at Deutsche Bank Securities

Lars Kestner is a Managing Director at a European investment bank. Over his 20+ year career on Wall Street, he has led teams that have managed derivative risk across a vast range of market environments. He is the author of Quantitative Trading Strategies, a cutting-edge text on systematic trading. Lars designed and employed his first systematic trading system to trade 30yr bond futures before entering college.

Lars’ approach to markets has always been quantitative in nature. In a recent paper entitled Preferred Portfolios, Lars introduces a blueprint for combining strategies into a robust portfolio. Continued research, correctly classifying individual strategies, understanding the benefits and limits of diversification, and having patience to allow well-constructed portfolios time to perform are highlighted as necessities for success.

Among topics of Lars’ interest are the role that convexity plays in trading strategies, proper portfolio construction to balance multi-dimensional risk, optimal leverage in the presence of non-normal distributions, and improving performance measurement of strategies.

Lars recently established satquant.com as a repository for his views, writings, and research.

Transcript

Rodrigo Gordillo:        00:00:06         Welcome to Gestalt University hosted by the team of ReSolve asset management, where evidence inspires confidence. This podcast will dig deep to uncover investment truths and life hacks you won’t find in the mainstream media. Covering topics that appeal to left brain robots, right brain poets and everyone in between. All with the goal of helping you reach excellence. Welcome to the journey.

Speaker 2:                   00:00:29         Mike Philbrick, Adam Butler, Rodrigo Gordillo and Jason Russell are principals at ReSolve Asset Management. Due to industry regulations, they will not discuss any of ReSolve’s funds on this podcast. All opinions expressed by the principals are solely their own opinions and do not express the opinion of ReSolve Asset Management. This podcast is for information purposes only and should not be relied upon as a basis for investment decisions. For more information visit investresolve.com.

Adam:                         00:00:55         Hello and welcome to ReSolve Asset Management’s Gestalt University podcast. This is Adam Butler, Chief Investment Officer at ReSolve. Today I interview Lars Kestner, a managing director at a European investment bank. Over his 20 plus year career on Wall Street, he has led teams that have managed derivative risk across a vast range of market environments. He’s also the author of Quantitative Trading Strategies, a cutting-edge text on systematic trading. That’s 15 years old now. Lars designed and employed his first systematic trading system to trade 30 year bond futures before entering college. Today we discuss two papers that Lars released on his website satquant.com in the last few weeks. First, his paper Preferred Portfolios describes a novel framework for assembling strategies with wildly different characteristics into a coherent and resilient portfolio. We discuss how to sort strategies into Boosters, Defenders, Diversifiers, and Selectors based on a novel quantitative method, the theoretical limits of diversification, and aligning strategy composition with investor psychology and goals to minimize the potential for abandonment along the way. We also discuss a brand new paper called Replicating CTA Positioning: An Improved Method, which proposes a technique to peer into current CTA portfolio positioning. This is of potentially pretty high value because CTA trend followers are often the marginal buyer in markets at certain inflection points. So the ability to identify concentrated risk positioning and/or potential turning points may offer investors a unique edge. Lars is clearly passionate about using quantitative methods to maximize investment results in the real world. And he offers a variety of valuable nuggets for the perceptive listener. Please enjoy my conversation with Lars Kestner.

All right, today we’ve got Lars Kestner. Lars, we’re going to talk a lot about portfolio assembly and diversification, and you’re working on a draft version of a really neat new paper, but it’s going to be relatively heavily quant. So I think you should go ahead and give our viewers and listeners a sense of your background and why you are well qualified to weigh in on some of these topics.

Backgrounder

Lars Kestner:               00:03:11         Sure, of course, and thanks for having me on the podcast today. So by way of background, I’ve got about 20 years of experience in the equity derivative space. I am the author of Quantitative Trading Strategies, a text on systematic trading strategies which was published about 15 years ago. And really my background I would probably best describe as the crossover of managing equity derivatives risk and thinking about markets from a systematic point of view, really focusing on merging the real world and the risk neutral analysis to construct positive edge trading strategies.

Adam:                         00:03:43         So you published that book 15 years ago, that is astonishing. It feels like a long time ago.

Lars Kestner:               00:03:49         Exactly, it’s fun to go back and see the progression of the ideas from then to now and many are actually still at the core beliefs of how I look at the systematic side, and some have dated themselves a little bit as is the case. And we’ll get into the discussion today.

Adam:                         00:04:04         Well, I think it’s worthwhile pulling on that a little bit. So what would you say are some of the concepts that you were discussing in your book 15 years ago that have stood the test of time? And what are some that you think looking back, you have revised your views on pretty substantially?

Standing the Test of Time, Or Not

Lars Kestner:           00:04:20         Sure. I think the constant is approaching the markets from a systematic point of view. For myself, a little discretion around the edges is maybe okay, but being a fully discretionary trader just doesn’t work for me. I need a framework that is quite systematic in its format and in its basis, and that has not changed in my philosophy over those 15 years since publishing the book. Also being honest with yourself about what works and to what extent something works, and if it doesn’t, that’s fine, move on. Don’t try to curve fit something. Don’t try to over optimize a back test or simulation just to get a positive result and pat yourself on the back and say yeah, this is great. Because guess what, you’re just going to be disappointed out of sample when you apply it to the real world. So the philosophy has stayed the same. The methods, I would say, have shifted: where I looked at, let’s call it momentum, from 20 or 30 different ways, whether it’s channel breakouts, moving average crossovers, volatility breakouts, they lead to more or less a correlated result. And so I’ve relaxed the hunt for the exact time series momentum signal that’s going to be so much better than everything else, and instead, and again we’ll get into this, I’m looking for more diversifying strategies. I’ve got my trend, great, what else can I add to it and move forward, instead of just pinpointing the exact trend signal that I think is the absolute best?

Adam:                         00:05:50         Yeah, we tend to describe that as a view to trying to be generally correct while avoiding the risk of being specifically wrong. And I think that’s been a real evolution in our thinking as well. I’m curious about how systematic managers come to become systematic thinkers. What is it about their life experience that led them to having strong confidence in systematic methods over discretionary approaches? What led you to that?

Lars Kestner:               00:06:24         So for me now we’re going to go even further back than the 15 years ago of my book. I was kind of a math kid growing up, I was good with numbers. It was something I liked, I truly enjoyed. My father, when I was 12 years old, gave me a book called Technical Analysis of Stock Trends by Edwards, considered the Bible of discretionary technical analysis, which I really enjoyed and, even as a 12, 13 year old, ate up. And then over time I discovered, even as a 15 year old, systematic trading strategies. I remember I got something in the mail on a bond system: follow these rules and it makes a lot of money, and awesome, we’re all millionaires and everything’s fine. And so I followed that and became really interested, and when I was, I would say, 16 years old, I developed a trading system. I’ll call it that with air quotes, because it was a very simple range based system: look over the past 10 to 14 days, if we’re in the lowest 20% we’re buying, if we’re in the upper 80% we’re selling. And this was before my experience with more heavy duty back testers, if you will, on the software side. But I did paper trade it for a month, probably even two months. And eight trades in a row, tons of money. It was great and-

Adam:                         00:07:35         Oh, wow.

Lars Kestner:           00:07:37         Yeah, eight in a row.

Adam:                         00:07:38         Just imagine the lesson that you’re learning, the reinforcements. You get trade after trade goes, right?

Lars Kestner:           00:07:44        Yeah, exactly. And literally I did what a lot of people do: well, if this success continues, my goodness, in three years I’m going to be richer than Bill Gates. My parents, who were just incredibly supportive of my endeavors, opened up a futures trading account with me. And I was going to trade the 30 year futures. Put in my first trade, followed the rules, and immediately lost money. And I panicked. And I said, oh my goodness, what happened? After eight in a row, how could I lose? I thought I had figured it all out. And that failure, if you will, was really a great reflection of, okay, wait a minute, I’ve got to think about how to do this better. What can I do to have even more confidence than two months of a back test? And I went to college, went to the University of Pennsylvania, where the finance programmes are very strong and very academic. And so between the academic nature of finance and my practical nature, I started to really get interested in systematic trading even in college. And at that point you had more public software for back testing, and I started to do data acquisition and further back tests of five years, 10 years, which compared to what I was doing in high school was a giant leap on the analysis side. It kind of continued from there. I was always interested. It just kind of progressed.

Adam:                         00:09:05         I sort of realised the merits of systematic thinking the hard way. It took me kind of three major failures of discretionary trading to at least open my mind to the merits of systematic thinking, and then the work of Philip Tetlock and, to perhaps a lesser extent, Nassim Taleb with his sort of narrative fallacy and some of the concepts around that. But especially Philip Tetlock’s Expert Political Judgment was just an incredible revelation for me. And it arrived at the right time in the evolution of my thinking. And I think you either have had an experience, or a series of experiences, that just make systematic thinking the only reasonable way to approach this complex domain, or you are awaiting the point at which you have an experience that leads to that conclusion. There’s really only two states of being in my view. So there’s a lot to unpack actually in some of the experiences that you just pointed out. One of them is just the idea of the robustness of a trading strategy. So you’ve got a new paper out, it’s called Preferred Portfolios. We’ve already sort of dug into some of your initial thrusts regarding the importance of quantitative methods, or leaning into quantitative methods in general. But then you start to talk about this idea of temporal validity. And I wonder if you can just describe what you mean by that. And maybe tell me if my intuition’s right that it connects to just in general the idea of robustness?

Temporal Validity in Preferred Portfolios

Lars Kestner:               00:10:42         Yes, it absolutely does. Yeah, temporal validity is a concept, really from the social sciences, from psychology, that I only recently stumbled upon in some of my readings two or three years ago. It caught my attention particularly from a systematic trading point of view, and the failures and successes that we’re all going to have in the somewhat randomness of what we have. Data is very noisy; the signal that we can extract from it is not very large compared to the noise in the background. And so while we can create and best craft our strategies and our portfolios based on what’s happened in the past, and, being systematic traders, expect many of those tendencies to continue, we have to know that the future is going to be even bumpier, it’s going to be noisier in the future than it was in the past. And so the idea of temporal validity caught my attention as an ability to do a study and have a result that not only stands up in that moment of time that you’re performing that study, but, as small characteristics change in the sort of microcosm of the data or the background of whatever you’re studying, and again, whether this is finance, psychology, medicine, how robust are the things that you’re finding, and will they be robust subject to small changes over time that are going to happen? Markets change, markets evolve, participants come in, they trade differently, they change the way signals are going to behave to their trading. And while we’ve got core beliefs and core strategies that we expect to continue, they’re not going to continue exactly as we expect. And we want to build portfolios with strategies such that, even though the future will be a little bit different, our results should still hold up relative to what we’ve seen in the past.

Adam:                         00:12:24         Again, there’s two or three things in there that I want to pull on. So one of the dimensions of this idea of temporal validity is that a strategy will persist through time. It’s consistent in time, and it is likely to persist in the future out of sample. So there’s this idea of how do we evaluate whether a strategy is still relevant out of sample. I don’t know if you have anything new or interesting to say on that. For us, that falls into the category of a hard problem, certainly for strategies that are traded at frequencies that most of us are familiar with, sort of anything at or greater than a daily trading horizon. The evaluation of out of sample performance relative to in sample performance, just due to sample size, I think is a hard problem. So I’ll throw that over, wonder if you have anything to say on that.

Lars Kestner:               00:13:23         Sure. And actually, the first topic of the Preferred Portfolios piece that I published is that you should expect your signal to decrease over time. And the stronger it is, the better that you see it, whether you’ve found something that no one else has, or you’re able to access a market in a way that no one else has. People are smart. There’s a lot of really smart people looking for edge. And sooner or later, they’re going to start to dig into it, and it likely won’t decay to zero overnight. But the stronger it is, the faster it’s going to decay. And that doesn’t mean that everything is going to go to zero eventually. I think because we’re competitive as a group of people in finance looking for these edges, eventually, once some of these signals have degraded, money starts to leave, and people start to chase the hot idea again. And so there is kind of some baseline where strategies ultimately trough, if you will. But if you’ve found something that’s very effective, you need to monetize it, you need to trade it, you need to take advantage of it, and expect that over time that signal is going to degrade. Now, to your point, how do you exactly measure that and what’s the expectation? That’s very tricky. And I have yet to find something perfect to describe mechanically that expected deterioration of edge, but you’ve got to figure it’s out there.

Adam:                         00:14:37         Yeah, or even just measure it. It’s one thing if you’ve got a neat chart in your paper that plots a theoretical decay function. It’s sort of like an alpha decay function, and so a strategy starts out being unheard of. It probably has a limited capacity, if you really want to preserve that really strong edge. It leaks out, it becomes known, papers get published, it sort of enters the belly of the curve. It’s still relatively attractive and enters the domain of your typical two and 20 hedge fund. They’re seeking both performance and capacity. And eventually it enters the mainstream, the sort of zeitgeist, the Overton window, and it migrates into this kind of smart beta commoditized version. It’s asymptotic to whatever that baseline edge might look like. I think that that’s a reasonable framework to think about the evolution of trading edges over time. It’s an endless curiosity for us as to how to think about evaluating it.
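The decay curve described here, an edge that starts strong and fades asymptotically toward a commoditized baseline rather than to zero, can be sketched with a simple exponential form. This is a minimal illustration, not the function from Lars' paper; the initial Sharpe, baseline Sharpe, and half-life values are assumed placeholders:

```python
import math

def expected_edge(t_years, initial_sharpe=1.5, baseline_sharpe=0.3, half_life=4.0):
    """Hypothetical alpha-decay curve: the edge fades exponentially from its
    initial level toward a non-zero 'smart beta' baseline as it becomes known."""
    # Fraction of the original excess edge remaining after t_years.
    decay = math.exp(-math.log(2) * t_years / half_life)
    return baseline_sharpe + (initial_sharpe - baseline_sharpe) * decay

# The edge halves its distance to the baseline every half_life years,
# and is asymptotic to the baseline rather than to zero.
print(round(expected_edge(0), 2))   # 1.5 at discovery
print(round(expected_edge(4), 2))   # 0.9 one half-life later
```

The half-life here stands in for the speed at which an edge leaks into the mainstream; a more crowded or easily replicated signal would correspond to a shorter half-life.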

I mean, one of the things that you identified in your paper, which we’ve spent a lot of time on is that the marginal benefit of diversification, it decays fairly rapidly. So you keep adding new strategies, but every new strategy that you add has diminishing marginal utility. And so if that’s true, then there comes a point relatively quickly actually where it pays very substantially to be able to identify which strategies are likely to outperform others, and just remove the ones or substantially de-emphasize the ones that you think are likely to have decayed to a meaningful degree and then emphasize the ones that obviously you’ve got greater confidence in, and to be able to wrap some kind of quantitative framework around that to measure that is I think an ongoing question. If you don’t have a solution to it, then I’m not surprised because I don’t think I’ve encountered anyone with a holy grail on that.

Lars Kestner:               00:16:35         No, and that was a very eye opening result of mine. And when I write these papers, a lot of it is the conclusions I found, which very often are … for me, and I just like to share it because frankly I get back so much from people like yourself and others. But the one to me was diversification, and the fact that it’s the only free lunch that’s out there, which is absolutely true. But the benefits, and there are some remarkable graphs in there, decay very quickly. In particular, one example is you’ve got strategies that each have a correlation with each other of point five; by the time you get to seven or eight, you’ve pretty much hit that asymptote in terms of how much extra benefit you’re going to get. And if you’re going from seven to eight to 50 strategies, you’re probably better off spending your time either taking those top 10 strategies and making them better, or looking for something not correlated, as opposed to going to your 10th and 20th and 30th strategy with very high correlation with each other. It just doesn’t add up. It’s not worth it. And I think the paper brings it up as a theoretical idea: if you are a systematic manager, and you need new research for new strategies, where do you want to spend your time? You need to think about what you’ve got, and what the value add going forward is, of edge or alpha versus correlation to your existing portfolio suite.

Adam:                         00:17:52         The other side of that, of course, is that if you can legitimately find strategies that are uncorrelated, so have a zero correlation, then that point of diminishing marginal utility is surprisingly far out in terms of the number of strategies that are still materially accretive. You can get into the sort of 20, 30, 40 strategies, where that marginal extra strategy still may add an accretive Sharpe ratio even at the margin. So, the difference between zero and point one average correlation is way larger than people might imagine intuitively, right?
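The arithmetic behind both points, the fast asymptote at correlation point five and the surprisingly large gap between zero and point one, follows from the standard result for an equal-weight portfolio of strategies with equal standalone Sharpe and uniform pairwise correlation. A minimal sketch, assuming a placeholder standalone Sharpe of 0.5:

```python
import math

def portfolio_sharpe(n, rho, strategy_sharpe=0.5):
    """Sharpe of an equal-weight portfolio of n strategies, each with the same
    standalone Sharpe and a uniform pairwise correlation rho."""
    # Variance of the average of n unit-variance streams is
    # (1/n) + ((n - 1)/n) * rho, so Sharpe scales by the inverse square root.
    return strategy_sharpe * math.sqrt(n / (1 + (n - 1) * rho))

for rho in (0.5, 0.1, 0.0):
    sharpes = [round(portfolio_sharpe(n, rho), 2) for n in (1, 8, 40)]
    print(f"rho={rho}: Sharpe at n=1, 8, 40 -> {sharpes}")
```

At rho of 0.5 the jump from 8 strategies to 40 adds almost nothing, while at rho of 0 the Sharpe keeps growing as the square root of n, and even rho of 0.1 gives up a large share of that growth, which is the gap Adam is pointing at.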

Lars Kestner:               00:18:28         Yes. And again, anyone who says I’ve got 50 strategies that have absolutely no correlation to each other, I’d be curious how that happened, or what their experience was over the past three to four months. Because again, in normal times, maybe, but correlated factors, correlated risk among assets that look very different can pop up, and unfortunately at the worst time.

Adam:                         00:18:48         Yeah. It’s hard to really dig into the conditional correlations of all of these strategies as well. So they may look uncorrelated on average, but, just as a really accessible example, take stocks and treasury bonds. On average, over the very long term, they’ve had a correlation of zero, but they go through multi decade periods where they have a negative correlation, and then another multi decade period where they have positive correlation. So you need to be aware of the correlation dynamics at each point in time in order to be able to fully take advantage. It requires a dynamic approach in order to maximise the diversification. We sort of discussed one dimension of that idea of temporal validity, but you did sort of bring up another, which is not so much temporal, where temporal refers to time, but is still a dimension of robustness: perturbing the specification of the strategy. So a strategy needs to work in different economic environments, or at different points in time, and be fairly persistent. But it also needs to be resilient to small or even sort of moderate changes in how the strategies are specified, the parameterizations of them. So would you include that in your idea of temporal validity, or is that something different?

Lars Kestner:               00:20:12         Yeah, no, absolutely. Part of, I would say, the culmination of my research philosophy wants to better classify strategies than just calling it stocks, bonds and alts. I’m exaggerating a little bit, but you get the idea. And the idea there being, as these alternative risk premiums pop up, and more and more entire asset class strategies come to life, we can’t just say they’re long, short, they’re flat, let’s just put them in a bucket all together. I would say probably the best example of that is managed futures and trend followers, which had such a great run in the great financial crisis of 2008 and 2009. A great diversifier, not only protecting capital when equity and risk assets were very strongly going the other way, but actually making a fair amount of money. So fast forward through 2010, and with the benefit of the hindsight mirror, more and more allocation goes into these trend following products with the expectation that if something happens to risky assets, they will perform very well. Not just uncorrelated but anti correlated, if you will. And we get to, coming back a little closer to the here and now, February 2018 and Volmageddon, the very sharp selloff in the S&P 500 and equities, and trend followers generally speaking didn’t quite perform. The answer is they probably shouldn’t have. You had a market that was near its highs, long exposure, as they should be on a time series momentum basis. And while they didn’t get crushed, they didn’t provide the diversification value that they did in 2008, 2009. March 2020, kind of the same thing. You had such a strong and swift speed of selloff that it took time for signals to flip.

Bucket Monikers

My point in all this is that’s not a failure of the strategy. The strategy has variability in its beta, if you will, to the S&P or to risky assets. Long short momentum is another one where, depending on the characteristics of equity markets, bull and low volatility versus bear and very volatile, the characteristics, depending on the portfolio construction obviously, but generally speaking, are variable. You can’t expect it to be, by definition, a negative beta crisis alpha product. It depends, you just don’t know. And so when I started to classify the strategies, I created four monikers for bucketing everything. One is a Booster, something that performs well when equity and risky assets, risk on, is working. A Defender goes the other way, performing when equity markets sell off and we have sort of these risk off periods. And then you had the non-correlated strategies to the S&P, and those are a little bit trickier because, again, on one hand you’ve got time series momentum, which, depending on where we are in regimes, could act risk on or risk off, it will depend on where their signals are, versus say gold or a long short value strategy, which is typically a little closer to home in terms of correlations always being near zero.

And so I split the lowly correlated strategies into two monikers, one being a Selector, so something that selects its beta if you will over time based on the strategy. So managed futures, trend followers fell into that, long short equity factor momentum fell into that. And then you’ve also got what I call the true Diversifiers. Things like gold, equity value, equity quality from the factor side. And again, it was a nice idea to classify every single strategy but when you get into the paper there’s a graph that I think is the proof in the pudding.

Adam:                         00:23:54         This is the correlation versus the standard deviation or dispersion.

Lars Kestner:               00:23:58         Yeah exactly.

Adam:                         00:23:59         I really liked that framework. Say more about that.

Lars Kestner:               00:24:01         That’s the framework for deciding which bucket something falls into, a strategy falls into. But then right below that we look at the average correlation over time, depending on the buckets. And what you find is that the boosters are very consistently correlated to the S&P 500. So, call it correlations of point 6, point 8. They don’t really move.

Adam:                         00:24:21         Lars, why don’t you try and go ahead and share your screen on that? I think it would be useful and if you’re struggling with it, I can share it.

Lars Kestner:               00:24:28         Yeah, actually, if you want to try to give it a shot. That would be nice. While you’re doing that, I’ll go through that. But the idea is when you actually looked at the correlations over time, not over the full sample, but a rolling window of one year, they behave as you would expect, and that to me gave me a lot of comfort that this is probably the right way to classify the strategies.

Adam:                    00:24:52         Yeah, agreed. This is the chart that you’re referring to, right Lars? So maybe you can walk through how to interpret that chart.

Lars Kestner:               00:25:00         Sure, exactly. So this is a time period from 2008 to 2019. I picked, I believe, 19 asset classes or strategies, nothing particularly selective about them, just a representative population of things that are being traded in the mainstream these days. And I took the regular returns of each return stream and calculated a 52 week rolling correlation between each of those assets and the S&P 500. I picked the S&P 500 just as-

Adam:                         00:25:34       Equity beta or risk on.

Lars Kestner:               00:25:35         Yeah. Equity beta, exactly. Nothing wonderful about the S&P versus, like, an MSCI World. That was just the proxy I picked. And so we’ve got the 52 week rolling correlations for each strategy or each asset class.

Adam:                         00:25:48         And that’s on the y axis.

Lars Kestner:           00:25:50         Yeah. So from this, I take the average of that correlation over the 10 year period. And that gives us essentially the average beta correlation: is it risk on? Is it risk off? And if you look at the top, you will see assets up there that you would expect – international equities, a short volatility strategy, high yield and REITs, which again is one of those that when things are calm may not look very equity-like, but when we hit periods of stress those assets do become very equity-like. On the other side, the assets and strategies that had negative correlation to the S&P 500: not really surprising, a long volatility strategy; an equity low volatility strategy, which was actually dollar-neutral, long low beta, short high beta, so ex ante, by construction, short beta, no surprise there; and then long rates, which was a longer dated US Treasury total return series. So those with average correlation to the S&P 500 below negative point three are Defenders, and those with average correlation to the S&P above point three are Boosters. So you’re left with the stuff in the middle, and this is what we were talking about before. They’re not zero correlation to the S&P, but they’re also not terribly high. And so the classification system was based on the x axis, which is the volatility of the correlation, meaning if something never moves with the S&P, not only is the average correlation over time very low, but the standard deviation of that correlation will be low too.
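The two statistics Lars describes, the mean and the standard deviation of a rolling correlation against the S&P 500, are easy to sketch in code. This is an illustration of the classification idea only, on simulated returns; the exact thresholds used for the Selector/Diversifier split are an assumption here, not taken from the paper:

```python
import numpy as np

def classify_strategy(strategy_rets, spx_rets, window=52,
                      corr_cut=0.3, vol_cut=0.2):
    """Bucket a weekly return stream using two statistics: the mean and the
    standard deviation of its rolling correlation with an equity benchmark.
    The corr_cut of +/-0.3 follows the discussion; vol_cut is an assumed
    illustrative threshold for how much the correlation is allowed to wander."""
    corrs = []
    for i in range(window, len(strategy_rets) + 1):
        s = strategy_rets[i - window:i]
        b = spx_rets[i - window:i]
        corrs.append(np.corrcoef(s, b)[0, 1])
    corrs = np.array(corrs)
    avg, vol = corrs.mean(), corrs.std()
    if avg > corr_cut:
        return "Booster"
    if avg < -corr_cut:
        return "Defender"
    # Low average correlation: split on how much the correlation moves around.
    return "Selector" if vol > vol_cut else "Diversifier"

# Toy example: a high-beta stream should land in the Booster bucket.
rng = np.random.default_rng(0)
spx = rng.normal(0, 0.02, 520)                      # ~10 years of weekly returns
high_beta = 0.8 * spx + rng.normal(0, 0.01, 520)    # tracks the index closely
print(classify_strategy(high_beta, spx))            # expect "Booster"
```

A trend-following stream would show a low average correlation but a high standard deviation of that correlation, because its beta flips with its signals, which is exactly what routes it into the Selector bucket rather than the Diversifier bucket.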

                                               

Rodrigo Gordillo:        00:00:06         Welcome to Gestalt University hosted by the team of ReSolve asset management, where evidence inspires confidence. This podcast will dig deep to uncover investment truths and life hacks you won’t find in the mainstream media. Covering topics that appeal to left brain robots, right brain poets and everyone in between. All with the goal of helping you reach excellence. Welcome to the journey.

Speaker 2:                   00:00:29         Mike Philbrick, Adam Butler, Rodrigo Gordillo and Jason Russell our principles at ReSolve Asset Management. Due to industry regulations, they will not discuss any of ReSolve’s funds on this podcast. All opinions expressed by the principals are solely their own opinions and do not express the opinion of ReSolve Asset Management. This podcast is for information purposes only and should not be relied upon as a basis for investment decisions. For more information visit investresolve.com.

Adam:                         00:00:55         Hello and welcome to ReSolve Asset Management’s Gestalt University podcast. This is Adam Butler, Chief Investment Officer at ReSolve. Today I interview Lars Kestner, a managing director at a European investment bank. Over his 20 plus year career on Wall Street, he has led teams that have managed derivative risk across a vast range of market environments. He’s also the author of Quantitative Trading Strategies, a cutting edge text on systematic trading. That’s 15 years old now. Lars designed and employed his first systematic trading system to trade 30 year bond futures before entering college. Today we discuss two papers that Lars released on his website satquant.com in the last few weeks. First, his paper Preferred Portfolios describes a novel framework for assembling strategies with wildly different characteristics into a coherent and resilient portfolio. We discuss how to sort strategies into Boosters, Defenders, Diversifiers and Selectors based on a novel quantitative method, the theoretical limits of diversification, and aligning strategy composition with investor psychology and goals to minimize the potential for abandonment along the way. We also discuss a brand new paper called Replicating CTA Positioning: An Improved Method, which proposes a technique to peer into current CTA portfolio positioning. This is of potentially pretty high value because CTA trend followers are often the marginal buyer in markets at certain inflection points. So the ability to identify concentrated risk positioning and/or potential turning points may offer investors a unique edge. Lars is clearly passionate about using quantitative methods to maximize investment results in the real world. And he offers a variety of valuable nuggets for the perceptive listener. Please enjoy my conversation with Lars Kestner.

All right, today we’ve got Lars Kestner. Lars, we’re going to talk a lot about portfolio assembly and diversification, and you’re working on a draft version of a really neat new paper, but it’s going to be relatively heavily quant. So I think you should go ahead and give our viewers and listeners a sense of your background and why you are well qualified to weigh in on some of these topics.

Backgrounder

Lars Kestner:               00:03:11         Sure, of course, and thanks for having me on the podcast today. So by way of background, I’ve got about 20 years of experience in the equity derivative space. I am the author of Quantitative Trading Strategies, a text on systematic trading strategies which was published about 15 years ago. And my background I would probably best describe as the crossover of managing equity derivatives risk and thinking about markets from a systematic point of view, really focusing on merging the real world and the risk neutral analysis to construct positive edge trading strategies.

Adam:                         00:03:43         So you published that book 15 years ago, that is astonishing. It feels like a long time ago.

Lars Kestner:               00:03:49         Exactly, it’s fun to go back and see the progression of the ideas from then to now. Many are actually still core to how I look at the systematic side, and some have dated themselves a little bit, as is the case. And we’ll get into that in the discussion today.

Adam:                         00:04:04         Well, I think it’s worthwhile pulling on that a little bit. So what would you say are some of the concepts that you were discussing in your book 15 years ago that have stood the test of time? And what are some that you think looking back, you have revised your views on pretty substantially?

Standing the Test of Time, Or Not

Lars Kestner:           00:04:20         Sure. I think the constant is approaching the markets from a systematic point of view. For myself, while maybe a little discretion around the edges is okay, being a fully discretionary trader just doesn’t work for me. I need a framework that is quite systematic in its format and in its basis, and that has not changed in my philosophy over those 15 years since publishing the book. Also, being honest with yourself about what works and to what extent something works, and if it doesn’t, that’s fine, move on. Don’t try to curve fit something. Don’t try to over optimize a back test or simulation just to get a positive result, to pat yourself on the back and say, yeah, this is great. Because guess what, you’re just going to be disappointed out of sample when you apply it to the real world. So the philosophy, I think, has stayed the same. The methods, I would say, have evolved. Where I looked at, let’s call it, momentum in 20 or 30 different ways, whether it’s channel breakouts, moving average crossovers, volatility breakouts, they lead to more or less a correlated result. And so I’ve relaxed the pinpointing of what exact time series momentum signal is going to be so much better than everything else, and instead, and again, we’ll get into this, am looking at more diversifying strategies. I’ve got my trend, great, what else can I add to it and move forward, instead of just pinpointing the exact trend signal that I think is the absolute best?
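As an aside for quantitatively inclined readers, the point that superficially different trend rules give correlated results is easy to demonstrate. The sketch below is purely illustrative, written in Python with simulated prices and arbitrary lookbacks; none of the parameters come from Lars’s book or papers. It compares a moving average crossover to a channel breakout on the same series:

```python
import numpy as np
import pandas as pd

# Illustrative only: simulated daily prices with a slow cyclical drift so
# that trend signals have something to latch onto.
rng = np.random.default_rng(7)
rets = rng.normal(0.0002, 0.01, 2000) + 0.002 * np.sin(np.arange(2000) / 150)
prices = pd.Series(100 * np.exp(np.cumsum(rets)))

# Trend rule 1: moving average crossover (+1 long when fast MA > slow MA).
crossover = np.sign(prices.rolling(20).mean() - prices.rolling(100).mean())

# Trend rule 2: channel breakout -- long on a new 100-day high, short on a
# new 100-day low, otherwise hold the previous position.
upper = prices.shift(1).rolling(100).max()
lower = prices.shift(1).rolling(100).min()
channel = pd.Series(
    np.where(prices > upper, 1.0, np.where(prices < lower, -1.0, np.nan)),
    index=prices.index,
).ffill()

# Strategy returns: yesterday's position times today's asset return.
asset_ret = prices.pct_change()
corr = (crossover.shift(1) * asset_ret).corr(channel.shift(1) * asset_ret)
print(round(corr, 2))
```

Because both rules are long in uptrends and short in downtrends, the two return streams come out strongly positively correlated, which is the point: the tenth variation on trend adds far less than one genuinely different strategy.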

Adam:                         00:05:50         Yeah, we tend to describe that as a view to trying to be generally correct while avoiding the risk of being specifically wrong. And I think that’s been a real evolution in our thinking as well. I’m curious about how systematic managers come to become systematic thinkers. What is it about their life experience that led them to having strong confidence in systematic methods over discretionary approaches? What led you to that?

Lars Kestner:               00:06:24         So for me, now we’re going to go even further back than the 15 years ago in my book. I was kind of a math kid growing up, I was good with numbers. It was something I liked, I truly enjoyed. My father, when I was probably 12 years old, gave me a book called Technical Analysis of Stock Trends by Edwards, considered the Bible of discretionary technical analysis, which I really enjoyed, and even as a 12, 13 year old ate it up. And then over time, I discovered, even as a 15 year old, systematic trading strategies. I remember I got something in the mail on a bond system: follow these rules and it makes a lot of money, and awesome, and we’re all millionaires and everything’s fine. And I became really interested, and when I was, I would say, 16 years old, I developed a trading system. I’ll call that with air quotes, because it was a very simple range based system: look over the past 10, 14 days, if we’re in the lowest 20% we’re buying, if we’re in the upper 80% we’re selling. And this was before my experience with more heavy duty back testers, if you will, on the software side. But I did paper trade it for a month, or probably even two months. And eight trades in a row, tons of money. It was great and-

Adam:                         00:07:35         Oh, wow.

Lars Kestner:           00:07:37         Yeah, eight in a row.

Adam:                         00:07:38         Just imagine the lesson that you’re learning, the reinforcement you get. Trade after trade goes right?

Lars Kestner:           00:07:44        Yeah, exactly. And literally I did what a lot of people do: well, if this success continues, my goodness, in three years I’m going to be richer than Bill Gates. My parents, who were just incredibly supportive of my endeavors, we opened up a futures trading account. And I was going to trade the 30 year futures, put in my first trade, followed the rules and immediately lost money. And I panicked. And I said, oh my goodness, what happened? After eight in a row, how could I lose? I thought I had figured it all out. And that failure, if you will, was really a great reflection of, okay, wait a minute, I’ve got to think about how to do this better. What can I do to have even more confidence than two months of a back test? And I went to college, went to the University of Pennsylvania, whose finance program is very strong and very academic. And between the academic nature of finance and my practical nature, even in college I started to really get interested in systematic trading. And at that point, you had more public software for back testing, and I started to do data acquisition and further back tests of five years, 10 years, which compared to what I was doing in high school was a giant leap on the analysis side. It kind of continued from there. I was always interested. It just kind of progressed.

Adam:                         00:09:05         It took me three major failures of discretionary trading to at least open my mind to the merits of systematic thinking, and then the work of Philip Tetlock, and to perhaps a lesser extent Nassim Taleb with his sort of narrative fallacy and some of the concepts around that. But especially Philip Tetlock’s Expert Political Judgment was just an incredible revelation for me. And it arrived at the right time in the evolution of my thinking. And I think you either have had an experience, or a series of experiences, that just make systematic thinking the only reasonable way to approach this complex domain, or you are awaiting the point at which you have an experience that leads to that conclusion. There’s really only two states of being in my view. So there’s a lot to unpack actually in some of the experiences that you just pointed out. One of them is just the idea of the robustness of a trading strategy. So you’ve got a new paper out, it’s called Preferred Portfolios. We’ve already sort of dug into some of your initial thrusts regarding the importance of quantitative methods, or leaning into quantitative methods in general. But then you start to talk about this idea of temporal validity. And I wonder if you can just describe what you mean by that, and maybe tell me if my intuition is right that it connects to, just in general, the idea of robustness?

Temporal Validity in Preferred Portfolios

Lars Kestner:               00:10:42         Yes, it absolutely does. Yeah, temporal validity was a concept, really from the social sciences, from psychology, that I only recently stumbled upon in some of my readings two, three years ago. It caught my attention particularly from a systematic trading point of view, given the failures and successes that we’re all going to have and the somewhat random environment we operate in. Data is very noisy; the signal that we can extract from it is not very large compared to the noise in the background. And so while we can craft our strategies and our portfolios based on what’s happened in the past, and, being systematic traders, expect many of those tendencies to continue, we have to know that the future is going to be even bumpier, it’s going to be noisier than the past was. And so the idea of temporal validity caught my attention as an ability to do a study and have a result that not only stands up in that moment of time that you’re performing that study, but as small characteristics change in the sort of microcosm of the data, or the background of whatever you’re studying, and again, whether this is finance, psychology, medicine, how robust are the things that you’re finding, and will they be robust subject to small changes over time that are going to happen? Markets change, markets evolve, participants come in, they trade differently, they change the way signals are going to behave to their trading. And while we’ve got core beliefs and core strategies that we expect to continue, they’re not going to continue exactly as we expect. And we want to build portfolios with strategies such that, even though the future will be a little bit different, our results should still hold up relative to what we’ve seen in the past.

Adam:                         00:12:24         Again, there’s two or three things in there that I want to pull on. So one of the dimensions of this idea of temporal validity is that a strategy will persist through time. It’s consistent in time, and it is likely to persist in the future, out of sample. So there’s this idea of how do we evaluate whether a strategy is still relevant out of sample. I don’t know if you have anything new or interesting to say on that. For us, that falls into the category of a hard problem, certainly for strategies that are traded at frequencies that most of us are familiar with, sort of anything at or greater than a daily trading horizon. The evaluation of out of sample performance relative to in sample performance, just due to sample size, I think is a hard problem. So I’ll throw that over, wonder if you have anything to say on that.

Lars Kestner:               00:13:23         Sure. And actually, the first topic of the Preferred Portfolios piece that I published is that you should expect your signal to decrease over time. Whether you’ve found something that no one else has, or you’re able to access a market in a way that no one else has, people are smart. There’s a lot of really smart people looking for edge. And sooner or later, they’re going to start to dig into it, and while it likely won’t decay to zero overnight, the stronger it is, the faster it’s going to decay. And that doesn’t mean that everything is going to go to zero eventually. I think because we’re competitive as a group of people in finance looking for these edges, eventually, once some of these signals have degraded, money starts to leave, and it starts to chase the hot idea again. And so, there is kind of some baseline where strategies ultimately trough, if you will. But if you’ve found something that’s very effective, you need to monetize it, you need to trade it, you need to take advantage of it, and expect that over time that signal is going to degrade. Now, to your point, how do you exactly measure that, and what’s the expectation? That’s very tricky. And I have yet to find something perfect to describe mechanically that expected deterioration of edge, but you’ve got to figure it’s out there.

Adam:                         00:14:37         Yeah, or even just measure it. It’s one thing to describe it: you’ve got a neat chart in your paper that plots a theoretical decay function, sort of an alpha decay function. A strategy starts out being unheard of. It probably has a limited capacity, if you really want to preserve that really strong edge. It leaks out, it becomes known, papers get published, it sort of enters the belly of the curve. It’s still relatively attractive and enters the domain of your typical two and 20 hedge fund, seeking both performance and capacity. And eventually it enters the mainstream, the sort of zeitgeist, the Overton window, and it migrates into this kind of smart beta commoditized version, asymptotic to whatever that baseline edge might look like. I think that’s a reasonable framework to think about the evolution of trading edges over time. It’s an endless curiosity for us as to how to think about evaluating it.
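To make the decay curve Adam describes concrete, one simple way to model it, and this is purely an illustration with invented numbers, not a formula from Lars’s paper, is exponential decay of a strategy’s edge toward a non-zero, commoditized baseline:

```python
import math

def edge_decay(years, initial=1.5, baseline=0.3, half_life=3.0):
    """Hypothetical alpha-decay curve: a strategy's Sharpe decays
    exponentially from its initial level toward a commoditized,
    'smart beta' baseline as the idea leaks out and gets published.
    Every parameter here is invented for illustration."""
    k = math.log(2) / half_life
    return baseline + (initial - baseline) * math.exp(-k * years)

for y in (0, 3, 6, 12):
    print(y, round(edge_decay(y), 2))  # 1.5 -> 0.9 -> 0.6 -> 0.38
```

The half-life knob captures Lars’s earlier point that a stronger, more visible edge can be expected to decay faster, while the baseline captures the idea that strategies trough rather than go to zero.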

I mean, one of the things that you identified in your paper, which we’ve spent a lot of time on is that the marginal benefit of diversification, it decays fairly rapidly. So you keep adding new strategies, but every new strategy that you add has diminishing marginal utility. And so if that’s true, then there comes a point relatively quickly actually where it pays very substantially to be able to identify which strategies are likely to outperform others, and just remove the ones or substantially de-emphasize the ones that you think are likely to have decayed to a meaningful degree and then emphasize the ones that obviously you’ve got greater confidence in, and to be able to wrap some kind of quantitative framework around that to measure that is I think an ongoing question. If you don’t have a solution to it, then I’m not surprised because I don’t think I’ve encountered anyone with a holy grail on that.

Lars Kestner:               00:16:35         No, and that was a very eye opening result of mine. And when I write these papers, a lot of it is the conclusions I found, which very often are … for me, and I just like to share it, because frankly I get back so much from people like yourself and others. But the one to me was diversification, and the fact that it’s the only free lunch that’s out there, which is absolutely true. But the benefits, and there are some remarkable graphs in there, decay very quickly. In the one example, you’ve got strategies that each have a correlation with each other of point five, and by the time you get to seven or eight, you’ve pretty much hit that asymptote in terms of what extra benefit you’re going to get. And if you’re going from seven to eight to 50 strategies, you’re probably better off spending your time either taking those top 10 strategies and making them better, or looking for something not correlated, as opposed to going to your 10th and 20th and 30th strategy with very high correlation with each other. It just doesn’t add up. It’s not worth it. And I think the paper brings it up as a theoretical idea: if you are a systematic manager, and you need the new research, the new strategies, where do you want to spend your time? You need to think about what you’ve got, and what the value add going forward is, of edge or alpha versus correlation to your existing portfolio suite.

Adam:                         00:17:52         The other side of that, of course, is that if you can legitimately find strategies that are uncorrelated, so have a zero correlation, then that point of diminishing marginal utility is surprisingly far out in terms of the number of strategies that are still materially accretive. You can get into the sort of 20, 30, 40 strategies, where that marginal extra strategy still may add accretive Sharpe ratio even at the margin. So, the difference between zero and point one average correlation is way larger than people might imagine intuitively, right?
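The plateau Lars and Adam are describing falls out of a standard closed-form result: an equal-weight portfolio of n strategies with identical volatility, identical standalone Sharpe S, and constant pairwise correlation rho has Sharpe S * sqrt(n / (1 + (n - 1) * rho)). A quick sketch (the 0.5 standalone Sharpe is an arbitrary illustrative choice, not a number from the paper):

```python
import math

def portfolio_sharpe(n, rho, s=0.5):
    """Sharpe of an equal-weight portfolio of n strategies, each with
    standalone Sharpe s, equal volatility, and pairwise correlation rho."""
    return s * math.sqrt(n / (1 + (n - 1) * rho))

for rho in (0.5, 0.1, 0.0):
    row = [round(portfolio_sharpe(n, rho), 2) for n in (1, 2, 4, 8, 16, 32)]
    print(f"rho={rho}: {row}")
```

At rho of 0.5 the portfolio Sharpe is essentially flat after seven or eight strategies (the asymptote is S / sqrt(rho), about 0.71 here), while at rho of zero it keeps growing as sqrt(n), and even rho of 0.1 caps it near 1.58: exactly the gap between zero and point one that Adam is pointing at.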

Lars Kestner:               00:18:28         Yes. And again, anyone who says I’ve got 50 strategies that have absolutely no correlation to each other. I’d be curious how that happened, or what their experience was the past three to four months. Because again, in normal times, maybe, but correlated factors, correlated risk among assets that look very different can pop up and unfortunately at the worst time.

Adam:                         00:18:48         Yeah. It’s hard to really dig into the conditional correlations of all of these strategies as well. So they may look uncorrelated on average, but, just as a really accessible example, take stocks and Treasury bonds. On average, over the very long term, they’ve had a correlation of zero, but they go through multi decade periods where they have a negative correlation, and then another multi decade period where they have positive correlation. So you need to be aware of the correlation dynamics at each point in time in order to be able to fully take advantage. So it requires a dynamic approach in order to maximise the diversification. We sort of discussed one dimension of that idea of temporal validity, but you did sort of bring up another, which is not so much about time, but is still a dimension of robustness, which is sort of perturbing the specification of the strategy. So a strategy needs to work in different economic environments, or at different points in time, and be fairly persistent. But it also needs to be resilient to small or even sort of moderate changes in how the strategies are specified, the parameterizations of them. So would you include that in your idea of temporal validity, or is that something different?

Lars Kestner:               00:20:12         Yeah, no, absolutely. Part of, I would say, the culmination of my research philosophy was to classify strategies better than just calling them stocks, bonds and alts. I’m exaggerating a little bit, but you get the idea. And the idea there being, as these alternative risk premiums pop up, and more and more entire asset class strategies come to life, we can’t just say they’re long, they’re short, they’re flat, let’s just put them in a bucket all together. I would say probably the best example of that is managed futures and trend followers, which had such a great run in the great financial crisis of 2008 and 2009. A great diversifier, not only protecting capital when equity and risk assets were very strongly going the other way, but actually making a fair amount of money. So fast forward through 2010, and in the hindsight mirror, more and more allocation goes into these trend following products, with the expectation that if something happens to risky assets, they will perform very well. Not just uncorrelated, but anti correlated, if you will. And coming back a little closer to the here and now, February 2018, Volmageddon, and the very sharp selloff in the S&P 500 and equities: trend followers generally speaking didn’t quite perform. The answer is they probably shouldn’t have. You had a market that was near its highs, and they had long exposure, as they should on a time series momentum basis. And while they didn’t get crushed, they didn’t provide the diversification value that they did in 2008, 2009. March 2020 was kind of the same thing. You had such a strong and swift selloff that it took time for signals to flip.

Bucket Monikers

My point in all this is that’s not a failure of the strategy. The strategy has variability in its beta, if you will, to the S&P or to risky assets. Long short momentum is another one where, depending on the characteristics of equity markets, bull and low volatility versus bear and very volatile, and depending on the portfolio construction, obviously, the characteristics generally speaking are variable. You can’t expect it to be, by definition, a negative beta crisis alpha product. It depends, you just don’t know. And so when I started to classify the strategies, I created four monikers for bucketing everything. One is a Booster, something that performs well when equity and risky assets, when risk on, is working. A Defender goes the other way, performing when equity markets fall and we have sort of these risk off periods. And then you had the non-correlated strategies to the S&P, and those are a little bit trickier, because again, on one hand, you’ve got time series momentum, which, depending on where we are in regimes, could act risk on or risk off, it will depend on where their signals are, versus say gold or a long short value strategy, which typically is a little closer to home in terms of correlations always being near zero.

And so I split the lowly correlated strategies into two monikers, one being a Selector, so something that selects its beta if you will over time based on the strategy. So managed futures, trend followers fell into that, long short equity factor momentum fell into that. And then you’ve also got what I call the true Diversifiers. Things like gold, equity value, equity quality from the factor side. And again, it was a nice idea to classify every single strategy but when you get into the paper there’s a graph that I think is the proof in the pudding.

Adam:                         00:23:54         This is the correlation versus the standard deviation or dispersion.

Lars Kestner:               00:23:58         Yeah exactly.

Adam:                         00:23:59         I really liked that framework. Say more about that.

Lars Kestner:               00:24:01         That’s the framework for deciding which bucket something falls into, a strategy falls into. But then right below that we look at the average correlation over time, depending on the buckets. And what you find is that the boosters are very consistently correlated to the S&P 500. So, call it correlations of point 6, point 8. They don’t really move.

Adam:                         00:24:21         Lars, why don’t you try and go ahead and share your screen on that? I think it would be useful and if you’re struggling with it, I can share it.

Lars Kestner:               00:24:28         Yeah, actually, if you want to try to give it a shot. That would be nice. While you’re doing that, I’ll go through that. But the idea is when you actually looked at the correlations over time, not over the full sample, but a rolling window of one year, they behave as you would expect, and that to me gave me a lot of comfort that this is probably the right way to classify the strategies.

Adam:                    00:24:52         Yeah, agreed. This is the chart that you’re referring to, right Lars? So maybe you can walk through how to interpret that chart.

Lars Kestner:               00:25:00         Sure, exactly. So this is a time period from 2008 to 2019. I picked, I believe, 19 asset classes or strategies, nothing particularly selective about them, just a representative population of things that are being traded in the mainstream these days. And I took the regular returns of each return stream and calculated a 52 week rolling correlation between each of those assets and the S&P 500. I picked the S&P 500 just as-

Adam:                         00:25:34       Equity beta or risk on.

Lars Kestner:               00:25:35         Yeah, equity beta, exactly. Nothing wonderful about the S&P versus, like, an MSCI World. That was just the proxy I picked. And so we’ve got the 52 week rolling correlations for each strategy or each asset class.

Adam:                         00:25:48         And that’s on the y axis.

Lars Kestner:           00:25:50         Yeah. So from this, I take the average of that correlation over the 10 year period. And that gives us essentially the average beta correlation: is it risk on? Is it risk off? And if you look at the top, you will see assets up there that you would expect: international equities, a short volatility strategy, high yield and REITs, which again are ones that when things are calm may not look very equity-like, but when we hit periods of stress, those assets do become very equity-like. On the other side, the assets and strategies that had negative correlation to the S&P 500: not really surprising, a long volatility strategy; an equity low volatility strategy, which was actually dollar-neutral, long low beta, short high beta, so ex ante, by construction, short beta, no surprise there; and then long rates, which was a longer dated US Treasury total return series. So those with average correlation to the S&P 500 below negative point three are Defenders, and those with average correlation to the S&P above point three are Boosters. So you’re left with the stuff in the middle, and this is what we were talking about before. They’re not zero correlation to the S&P, but they’re also not terribly high. And so the classification system was based on the X axis, which is the volatility of the correlation, meaning if something never moves with the S&P, not only is the average correlation over time very low, but the standard deviation of that correlation will be low too.

Let’s say you’ve got a different strategy that picks long or short on the S&P, flip of a coin, you name what it is, and it does it for 10 weeks in a row, and then it flips the coin and maybe goes the other way. Well, that average correlation over time, because sometimes it’s long and sometimes it’s short, is going to be near zero. But at any point in time it’s correlation one or correlation negative one. So again, it randomly flips a coin, it says I’m long S&P or I’m short S&P, and it does each roughly equal amounts of time over the period. Your average correlation is in fact going to be zero. But at any point in time, your correlation is going to be plus one or minus one, and so when you look at that variability of correlation over time, it is going to move from a rolling 52 week time series point of view. And so the split between what I call Diversifiers and what I call Selectors was based on the consistency of that correlation. So for the assets that weren’t correlated very much to the S&P over time, and intra-period didn’t show much variability, I call those Diversifiers. The other assets, which on average don’t show much correlation to the S&P but have very high variability, sometimes they’re correlated and sometimes they’re very much anti correlated, those I considered Selectors, and those have a higher standard deviation of the correlation. And so that’s the framework for creating each of the four classifications of strategies.
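For readers who want to reproduce the bucketing mechanically, here is a sketch of the classification rule as just described: compute a 52 week rolling correlation to the S&P 500, then bucket on its mean and standard deviation. The plus/minus 0.3 average-correlation cut is the one from the conversation; the correlation-volatility cut of 0.25 and all the simulated return streams are illustrative guesses, not Lars’s data:

```python
import numpy as np
import pandas as pd

def classify(avg_corr, corr_vol, corr_cut=0.3, vol_cut=0.25):
    """Bucket a strategy from the mean and std of its rolling 52-week
    correlation to equity beta.  corr_cut follows the paper's +/-0.3;
    vol_cut is an illustrative guess."""
    if avg_corr > corr_cut:
        return "Booster"
    if avg_corr < -corr_cut:
        return "Defender"
    return "Selector" if corr_vol > vol_cut else "Diversifier"

# Ten years of simulated weekly returns, illustrative only.
rng = np.random.default_rng(0)
n = 520
spx = pd.Series(rng.normal(0.002, 0.02, n))
noise = lambda s: pd.Series(rng.normal(0, s, n))

# A long/short exposure that flips once a year, like the coin-flip example
# above: average correlation near zero, but +/-1 at any moment.
flip = pd.Series(np.where((np.arange(n) // 52) % 2 == 0, 1.0, -1.0))

candidates = {
    "intl_equity": 0.9 * spx + noise(0.01),   # persistently risk on
    "long_vol": -0.8 * spx + noise(0.01),     # persistently risk off
    "gold_like": noise(0.02),                 # genuinely unrelated
    "trend_like": flip * spx + noise(0.005),  # regime-switching beta
}

for name, r in candidates.items():
    rolling = r.rolling(52).corr(spx)
    print(name, classify(rolling.mean(), rolling.std()))
```

On this simulated data the four streams come out as Booster, Defender, Diversifier and Selector respectively: the last two have similar average correlation but very different correlation volatility, which is exactly the distinction the X axis of the chart is drawing.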

Adam:                         00:28:45       I think this chart really demonstrates the merit of that framework. It just shows that the different categories exhibit the exact characteristic that you were going for when you divide them using this framework of average correlation versus standard deviation of correlation. So maybe just spend a quick minute going through this to illustrate it.

Lars Kestner:               00:29:07         Sure. And this is the proof of the pudding. The sort of navy blue line that’s on top is the average correlation of the strategies that were considered Boosters in the previous graph, and they are positive and they are consistently positive. They are risk on, and that’s just what they are. The very light blue line at the bottom is the average correlation of the three Defenders, that being rates, the equity low volatility and the long volatility strategy. They are negative and they are negative throughout, they don’t really move, these are risk off assets. Where it gets interesting is the purple line, the Selectors, the ones where that variability moves: it gets very high and it gets very negative. You can see the average of that purple line over time is kind of near zero, but-

Adam:                         00:29:53         One thing that really stood out to me, just parenthetically, is just how regular the periodicity of that purple line is.

Lars Kestner:               00:29:59         Yes, it does feel like a sine wave. That’s very interesting.

Adam:                    00:30:02         It could be just the time period. But that is astonishingly regular.

Lars Kestner:               00:30:06         Sure, but you see the variability of the purple line of the Selectors. At any point in time you throw a dart at this graph, they may very well be risk on or they may very well be risk off. Whereas that middle line, that sort of light blue line, the average correlation of the Diversifiers, doesn’t move. It doesn’t budge much from 0.2. It’s very consistent. You throw a dart there, you kind of know what you’re going to get.

Adam:                         00:30:28         Yeah. I think that’s great. So one of the things that I think you mentioned, but didn’t drill into, it didn’t seem to be a primary theme of the paper, but you mentioned it a couple of times, is this idea that, because we should acknowledge that every strategy has some kind of decay function over time as more investors become aware of it and build models to harvest it or arbitrage it away, a major source of value that is often not recognized in the quant community, or rather by investors investing in quant strategies, is the role of the innovative process. You need to be constantly scouring the investment universe and every other conceivable source for ideas about other edges that you may be able to identify, either edges that have always existed but hadn’t been identified yet, or edges that are new because of new regulations or new structural interventions in markets. The need to constantly innovate, drive creativity, and dispose of the old, or de-emphasise the old, and bring in the new. Maybe speak about how big a role that should play in what investors pay quantitative managers for.

The Role of the Innovative Process

Lars Kestner:               00:31:51         I think it’s huge, and it’s huge in quantitative management of assets. It’s huge in real life, whether you’re running a business or anything else. If you are wildly successful at what you do, you’re going to have people chasing you, copying what you do, trying to take away your market share, your revenue, whatever it is. It’s just the nature of the world that things are going to be harder tomorrow than they are today. And if, instead of fretting over it, you just realize it, okay, I’ve got to figure something out next, it becomes a little bit less daunting, because you don’t have to reinvent your process overnight. This is where I really think it’s key: it’s incremental steps to stay just a little bit ahead of where you were yesterday. So I think that’s incredibly important. And if you’re not innovating, and if you’re not willing to go out a little bit on the risk spectrum of the strategies you’re looking at, and you’re left with what was in the mainstream 10 years ago, you’re in trouble. I really do believe that. You’ve just got to innovate on the research side. And by the way, when we say research, maybe it’s portfolio construction, maybe it’s thinking about the buckets; maybe the strategies are fine but there’s a better way to put them together, portfolio craftsmanship, in your words. The innovation part, the research part, the thinking about how we can do this better tomorrow than today, is just incredibly important.

Adam:                         00:33:07         Okay, well, I’m going to just continue on and then you can answer a question while I try to rectify my technical issues at my end. But I did want to come full circle and just sort of say, is there a general practical takeaway for investors on some concrete steps that they could take with their current portfolios to make use of the framework that you have outlined in this Preferred Portfolios paper before we move on to your CTA replication idea?

The Preferred Portfolios Paper

Lars Kestner:           00:33:37       Sure. I think one of the interesting things about the Preferred Portfolios paper is that if you’re running a multi-strategy hedge fund, it’s probably right in your wheelhouse, but the concepts are important for anyone. If you’re an individual investor, think about how your portfolio fits together: okay, I’ve got bonds, I think those are Defenders. I’ve got an equity allocation, I think that’s a Booster. Do I have any gold? Do I want to have gold? Do I have alternative strategies? And how are those alternative strategies going to behave in a time of market peril, which we probably have a good window into given what we’ve gone through in 2020? But also, how are they going to perform if we see irrationality come in and bubbles happen on the upside? All of those things can very much happen, and you’ve just got to think about how the individual pieces of your portfolio fit together in different market climates, including some extremes. That’s probably the one thing we don’t do enough. No one thought a market down 30% in a month and a half was probable at the beginning of this year, but it happened. We have to deal with it, make adjustments where they make sense, and continue taking risk.

Adam:                         00:34:46         I like that. Is there a clear mapping between the Preferred Portfolios idea and a more generalised risk parity framework? Do you see a direct mapping there? Are there major distinguishing features between the two that you would want to highlight? I see this idea of the Boosters and Defenders framework as an overlay on the general idea of risk parity, and then classifying strategies using this average correlation versus standard deviation of correlation idea. I really like that. Do you agree that there’s a lot of parallel or overlap there?

Lars Kestner:           00:35:25        Essentially, I’ve never thought of that, but you’ve got a point. When you consider the all-weather portfolios that managers have put together, whether or not they’re explicitly risk parity, it is pretty similar. Certainly at a high level: how is an asset going to perform in different macro environments, and then making sure we’ve got enough of all those buckets. And obviously the million dollar question is how much of each bucket to hold to perform under all scenarios and maximise our ending wealth, if you will. But yeah, there is a fair amount of overlap there.

Adam:                         00:35:58         Well, theoretically, coming back to your idea of alpha decay: if we acknowledge that strategies all decay to some sort of asymptotic long term average, then I guess that would be a long term average Sharpe ratio. So with risk parity you’re identifying markets where, in general, investors perceive themselves to have a reasonably good grasp of the population distribution of the Sharpe ratio for the major asset classes, as an example. You’re layering on relatively well known factor premia for which there’s already a fair amount of capital in the market harvesting those premia, and so they’ve driven the expected Sharpe ratio of those premia closer to that asymptotic minimum. So then you get to this idea that they all have approximately the same expected Sharpe ratio, and if you’ve got a good grasp of how they fit together, either structurally or statistically, that drives towards this risk parity concept where all of the major sources of return have the same expected Sharpe ratio. And so you’re just trying to maximise the number of bets across those sources, and the expected diversification between them. So I like how those pieces fit together, and that may be a reason why your Preferred Portfolios framework really resonates with me: it maps relatively well onto this idea that we’ve leaned into pretty heavily and that I’ve come to really embrace over the last 10 or 12 years. So I think that’s great.
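The arithmetic behind “maximising the number of bets” can be sketched in a few lines. This is a generic illustration, not taken from either paper: for unit-volatility strategies with the given Sharpe ratios and a common pairwise correlation, the equal-weighted portfolio Sharpe has a simple closed form, and with zero correlation and equal Sharpe S across N strategies it reduces to S times the square root of N.

```python
import numpy as np

def combined_sharpe(sharpes, corr=0.0):
    """Sharpe of an equal-vol-weighted portfolio of strategies with the
    given individual Sharpes and a common pairwise correlation.
    Assumes unit volatility per strategy, so portfolio mean is the sum
    of Sharpes and portfolio variance is n + n*(n-1)*corr."""
    s = np.asarray(sharpes, dtype=float)
    n = len(s)
    return s.sum() / np.sqrt(n + n * (n - 1) * corr)
```

For example, four uncorrelated strategies with Sharpe 0.5 combine to a portfolio Sharpe of 1.0, while perfectly correlated ones stay at 0.5. With any positive common correlation, adding more strategies plateaus near S divided by the square root of the correlation, which is one way to see the theoretical limit of diversification.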

CTA Replication

All right. Well, I want to move on to the CTA Replication paper, because I think it’s really neat. We spent quite a lot of time with an intern, I think it might have been last summer... no, it was two summers ago, doing some work investigating machine learning concepts, and we had this idea of deriving the optimal window shape for trends and that sort of thing. So we thought CTA replication was a good case study for that concept. So I was really intrigued when you mentioned that you wrote this, and I was really keen to dig into it. So let’s do that. The first thing you mention is that this is a hot topic. There are a lot of research shops, especially sell side research shops, that are constantly publishing on CTA positioning, and a lot of investors, especially macro investors, use this, the relative positioning of CTAs, some of the … reports, that sort of stuff, to feed into their looser models on how to position in markets. So why is this a thing? Why are investors in general interested in this?

Lars Kestner:               00:38:44         I think there are a number of strategies out there that have grown so much in assets, relative to the rest of the market, that their trading can actually be an influence on what’s going on. And the three that I think get the most press, the most market analysis, are, first, systematic option sellers: vol premium harvesting, variance risk premium harvesting, whether it’s via variance or selling short dated optionality. Essentially, that leads its way into the market makers who run equity derivatives desks, who are buying low and selling high, and depending on the supply of options out there, there’s either a lot to buy on the way down and sell on the way up, or it can actually go the other way. So that’s one topic. Second, volatility target funds, particularly the ones embedded in variable annuities, where I’ve got a leverage number to the S&P that moves based on realised volatility: as volatility goes up they de-lever, and as volatility goes down they lever back up. And then the third one is the trend following CTAs specifically. Very often you’ll see the research shops combine all three of these together with their take on what’s out there now: How is it affecting markets? How much is long? How much needs to be purchased on a 1% decline? It’s relevant. I think the sizes of these flows are big enough that if I’m not deep into the strategies behind them, I probably want to know. So I get the reason why, and I actually think it’s probably value added to have it out there. For the research, I just think it needs to be done correctly. That was my point.

Adam:                         00:40:20         Yeah, I agree. And so the way people may trade around this information is that markets peak, for example, when there’s no one left to buy. The CTA is often perceived, I think by macro traders, as a marginal buyer, and a fairly predictable marginal buyer if you’ve got the right tools to predict it. So if you can tell when CTAs, as an important marginal active trader in markets, are maximally positioned, in other words, there are no more dollars from CTAs going in a particular direction, into a specific beta or market sector, et cetera, that can be an indicator that we may be near a shorter term peak. So how do these research shops typically model CTA positioning, and what are some of the issues that you found with their models?

Lars Kestner:           00:41:13         Sure. My biggest issue is that many of them do it through a rolling window of a very vanilla linear regression. Meaning, I’ve got a benchmark, and in my paper I use the SG Trend Index, which follows the returns of a number of the larger trend following CTAs. They use that as their dependent variable and the S&P as their independent variable, figure out the beta of one to the other over time using a rolling window of three months, six months, one month, whatever it is, and just blindly use the outputs of the regression as, okay, here’s where the exposure is at the moment.
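The vanilla approach being described can be sketched in a few lines. This is an illustrative rolling OLS beta estimate, not the method from the paper; the 63-day window is an arbitrary choice for the example.

```python
import numpy as np

def rolling_beta(trend_rets, spx_rets, window=63):
    """Estimate CTA equity exposure the 'vanilla' way: a rolling OLS
    of trend-index returns (dependent) on S&P returns (independent).
    Returns an array of betas, NaN until a full window is available."""
    n = len(trend_rets)
    betas = np.full(n, np.nan)
    for t in range(window, n + 1):
        x = np.asarray(spx_rets[t - window:t])
        y = np.asarray(trend_rets[t - window:t])
        cov = np.cov(x, y)                 # 2x2 sample covariance matrix
        betas[t - 1] = cov[0, 1] / cov[0, 0]  # cov(x, y) / var(x)
    return betas
```

The lag problem Lars raises falls straight out of this construction: every beta is an average over the whole window, so a position flip takes a large fraction of the window to show up in the estimate.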

Adam:                         00:41:51         Okay, yeah. And one of the major challenges, of course, is that CTA positioning, by definition, changes through time. It’s non-stationary, and it’s a non-trivial question how to create linear regression models where the betas change through time. So I guess this is what we identify as the major challenge in these models. Your analysis, and we’ll dig in a little bit later to the extent to which it’s true, shows that they often get it quite wrong, at the wrong times, and therefore the reports that use these methods are not of much practical use.

Lars Kestner:               00:42:32         Yeah, and again, I don’t mean to criticize factor models in general. For certain investment managers, where the reporting periods are sparse or we don’t really know much about the underlying process, it may be the best you can do, albeit with lag. If that’s the case, it may be a valid way to measure exposures, but you do need to be cognizant of the lag you’re going to have, as positions can flip from long to short very quickly. If you’re running a model looking at past returns over, let’s call it 20 weeks, you may need another 10 or 15 weeks before you really pick up that change.

Adam:                         00:43:07         Yeah, exactly. There’s a few firms out there. I know Markov Process has a method that they use to try to map the non-stationary betas to different funds. So that’s interesting. But I have to say that I am much more aligned with the way that you’ve approached it. I think this is certainly closer to how we approach the problem as well. So why don’t you go ahead and take us through it?

Lars Kestner:               00:43:30         Sure. The funny thing is, the history behind this is that after seeing some of the research, I said, all right, we know how CTAs trade in general. Do I know how any specific one trades? No. But in aggregate, we know they’re buyers of strength and sellers of weakness, they are likely to ensemble over a number of periods in terms of their trend speeds, and they’re likely to ensemble across a number of markets. Knowing that, can we do a little bit better in terms of, not fitting their returns, but replicating them? Meaning, if I’m a CTA, how would I do this? How do I think I would set up my signals? And again, I’m not trying to maximise performance. I’m trying to maximise the replication of the CTA benchmarks. I want to do my best to predict how they’re going to trade and how their returns are going to behave. Literally, as I mentioned, this started as a two hour, just-for-fun research project. I began with a couple of very simple momentum models, volatility adjusting each market. To keep things very tight, I only picked 16 markets.

So four specific markets across each of four asset classes: equities, interest rates, foreign exchange and commodities. I then applied what I guess we would call naive risk parity, so inverse volatility weighted signals for each of the markets. I did include a trend intensity level with a little bit of a parameter. What I mean by that is, if momentum turned from negative to positive, I used a level of positioning that was based on the intensity: a flip from zero to 0.001 didn’t necessarily create much of a position, while a flip from zero to positive one, in a sort of normalised space, created a bigger position, and as momentum increased, the position would get bigger. I put all this together, came up with what I thought my replication return stream would look like, compared it against the SG Trend Index, and I got a correlation that was way higher than I thought was possible. So I scrapped it and said, all right, hold on, I must have done something wrong. Let me do it all again from scratch. I went through it again: same result. Okay. So now this went from just a fun project to, let’s put a little bit more rigor into this. Let’s think about how we can make sure I’m not just getting lucky. Let’s see if we can figure out which momentum look backs these CTAs are mostly using, and from there create a robust replication model.
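A single-market sketch of the signal construction just described might look like the following. This is a hedged illustration, not Lars’s code: the parameter values are stand-ins (160 trading days approximates a 32-week look back), and the exact normalisation he used is not specified in the conversation.

```python
import numpy as np

def replication_signal(prices, lookback=160, vol_window=90, cap=1.0):
    """Illustrative per-market trend signal: a normalised momentum
    score (think z-score), capped at +/- `cap` so intensity matters
    up to a limit, then sized inversely to recent volatility
    (naive risk parity). `prices` is a 1-D array of daily closes."""
    rets = np.diff(np.log(prices))
    # annualised recent volatility over the vol window
    vol = rets[-vol_window:].std() * np.sqrt(252)
    # momentum over the look back, scaled by the vol expected over
    # that horizon, so the score behaves like a z-score
    momentum = np.log(prices[-1] / prices[-lookback])
    score = momentum / (vol * np.sqrt(lookback / 252))
    score = np.clip(score, -cap, cap)   # trend intensity cap
    return score / vol                  # inverse-volatility sizing
```

A portfolio version would average such signals across the 16 markets, which is the “inverse volatility weighted signals for each of the markets” step above.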

Adam:                         00:46:07         So essentially, just going through your paper, you’ve got five momentum look backs, and how many caps did you test?

Lars Kestner:               00:46:16         Five different caps.

Adam:                         00:46:17         Five different caps.

Lars Kestner:               00:46:18       I think of those as almost like a z score.

Adam:                         00:46:22         Yeah, so I was going to dig into the z score: is it just the standardised return? In other words, a z score or a Sharpe ratio?

Lars Kestner:               00:46:29         Yeah.

Adam:                         00:46:30         Okay, perfect. And so there are five different caps, which presumably range between zero and three or something, in increments between zero and two? And five different look back parameters for the estimation of volatility.

Lars Kestner:               00:46:44         Mm-hmm.

Adam:                         00:46:45         So really, what you did, if I understand, is you created 125 different potential trend following strategies on those markets, based on every combination of those three dimensions of parameterization, with five parameters each. And then you found the combination that had the greatest explanatory power on the SG Trend Index.

Lars Kestner:               00:47:13         Yeah, although what I first wanted to figure out was: of the three parameters, the momentum look back, the z score position cap and the volatility window, which were most important? So the first, and probably most interesting, graph takes the R squared, the coefficient of determination between my replication and the SG Trend Index, sorts them high to low, and looks for break points. And the first thing we see very easily, using the look backs (there’s a four week, eight week, 16 week, 32 week and 52 week), is that the four week look back, a very super short term one month, has very little predictive power. Not really a surprise. The eight week is a little bit better, but still pretty far away. The 16, 32 and 52 week were certainly the next leg up, and while the 32 week had a little bit better predictive power than the 16 or 52, on average they were more or less a plateau, well above the four week and eight week look backs. Again, we’re talking about the momentum side. The position cap is not so important, and the volatility look back is not so important. There were some peaks here and there that were a little preferable to others, but far and away, and not terribly surprising, the length of the momentum look back was the most important factor in achieving high replication.
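The grid search being described is mechanically simple. Here is a minimal sketch: the look back values are the ones named in the conversation, while the cap and volatility-window grids are illustrative guesses (1.0, 0.01 and 90 days are mentioned; the rest are assumptions). `build_returns` stands in for whatever function maps a parameter combination to a replication return series.

```python
import itertools
import numpy as np

LOOKBACKS   = [4, 8, 16, 32, 52]           # weeks, per the paper
CAPS        = [0.01, 0.5, 1.0, 2.0, 3.0]   # normalised momentum caps (illustrative)
VOL_WINDOWS = [30, 60, 90, 120, 180]       # days of volatility estimation (illustrative)

def r_squared(replication, benchmark):
    """Coefficient of determination between two return series."""
    return np.corrcoef(replication, benchmark)[0, 1] ** 2

def rank_models(build_returns, benchmark):
    """Run all 125 parameter combinations through `build_returns`
    (a function (lookback, cap, vol_window) -> return series) and
    sort them high to low by R^2 against the benchmark, which is
    the sorted bar chart discussed here."""
    results = []
    for lb, cap, vw in itertools.product(LOOKBACKS, CAPS, VOL_WINDOWS):
        r2 = r_squared(build_returns(lb, cap, vw), benchmark)
        results.append(((lb, cap, vw), r2))
    return sorted(results, key=lambda kv: kv[1], reverse=True)
```

Sorting all 125 R² values high to low and eyeballing the break points is exactly the “first and probably most interesting graph” described above.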

Adam:                    00:48:35         So you did identify that the look back parameter was by far the most important or most explanatory variable in your model, right?

Lars Kestner:               00:48:47         Yes, for sure. And that being the 16 week, the 32 week and the 52 week. So an intermediate to longer term trend following system, as you would expect.

Adam:                         00:48:57         Yeah, one thing I was wondering: we know, or rather our hypothesis, which we’ve tested at least a little bit, is that the parameterization of trend funds has, on average, changed over time. So how do we think about that in terms of updating your model, or refitting it on an ongoing basis, on a rolling window or something like that?

Lars Kestner:               00:49:24         Yes, it’s an interesting point. That is part of the reason I selected only five years of data. If you went back over 10 or 20 years, where we’ve got the appropriate benchmarks, you’re definitely going to have some not-as-good fits, because trend speed has slowed down a little bit over the past 10 to 15 years. Again, as I understand it, and as trend speed has been studied a little bit more in depth over the past five years or so.

Adam:                         00:49:46         Yeah, so I can imagine a model where you’re running a rolling regression. You’d add an extra parameter here, which is a look back window on your model fit. Maybe you’ve got five different potential look back horizons on the model fit, so now you go from 125 to five times 125, which is 625 different models that you’re testing. You’re just adding that model fit horizon to account for the degree to which-

Lars Kestner:           00:50:19         Changes over time.

Adam:                         00:50:21       Change over time. I think it’s great. So walk us through these charts.
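The model count Adam arrives at is just the product of the grid sizes: a fourth five-value dimension multiplies the 125 combinations by five. A trivial sketch, where the fit-window values are hypothetical (the first three grids reuse the illustrative values from earlier):

```python
import itertools

LOOKBACKS   = [4, 8, 16, 32, 52]           # weeks, per the paper
CAPS        = [0.01, 0.5, 1.0, 2.0, 3.0]   # illustrative
VOL_WINDOWS = [30, 60, 90, 120, 180]       # illustrative
FIT_WINDOWS = [26, 52, 104, 156, 260]      # weeks of history to refit on (hypothetical)

# every candidate model in the extended grid: 5 * 5 * 5 * 5 = 625
grid = list(itertools.product(LOOKBACKS, CAPS, VOL_WINDOWS, FIT_WINDOWS))
print(len(grid))  # 625
```

Each fit window would be used to re-estimate which parameter combination best replicates the benchmark over that trailing period, letting the chosen trend speed drift as the industry’s does.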

Lars Kestner:               00:50:25         Sure. So the top one on your screen now is just the R squared, if you will, from all 125 models, sorted from high to low. From those I started to nail down some specific parameters. The bar chart on the bottom holds two of the parameters constant, a normalised momentum cap of one (again, think of that as a z score) and a volatility look back of 90 days. I’ve scaled the left hand axis, the y axis, from zero to one in all of these, so they’re like for like. You see a pretty big improvement as we go from four weeks to eight weeks to 16 weeks, and then we see the plateau: it looks like 32 is a little bit better than 16 or 52, but not so sharply that we can say for sure it is 32 weeks and not 16 or 52. We’ll come back to that later. Then, if you’ve got a chance to scroll to the next page: in the top graph the 32 week look back is constant and the 90 day volatility look back is constant, and we vary the normalised momentum cap. It’s hard to tell that there’s any difference there. Again, the peak is at around 1.0, but not so-

Adam:                         00:51:39         It’s not particularly sensitive to that now.

Lars Kestner:               00:51:40         No, it’s not. And then in the bottom graph, again holding the momentum look back at 32 weeks and the cap at 1.0, and varying the volatility look back: same thing, not terribly sensitive, all things considered.

Adam:                         00:51:54         It’s pretty remarkable that 0.01, as a maximum threshold, still continues to give you signal. I guess if you’re normalising all signals to 0.01, then they end up being somewhat relative, so maybe the magnitude doesn’t matter as part of the regression. I need to think through that, but I do think that’s interesting. And then obviously the volatility estimation window also has very little sensitivity, which is also kind of interesting. Did you have any hypotheses going into this, on what you’d expect the model to say?

Lars Kestner:               00:52:30         So the look back is kind of as we expected, and certainly I didn’t expect to see much on a four week. I guess I am more surprised that there was not wider variability across the normalised momentum caps or the volatility look backs. The peak is not terribly surprising; I just thought there would have been more differentiation, as these are pretty widely varied parameters. Despite the wide variation in them, there doesn’t seem to be a whole lot of difference. That probably surprised me.

Adam:                         00:53:02         Yeah, now that is interesting. How do you expect investors might make use of this type of model? I mean, there’s the obvious: do a better job, as a sell side analyst, of actually providing information about current CTA positioning. But are there systematic ways that investors can incorporate these types of models, or build models that are informed by the results of these types of analyses?

Lars Kestner:           00:53:28         Yeah, I think it’s tricky to take the outputs literally: here’s the positioning of CTAs, so here’s what’s going to happen next. Who knows? If you can tell me where the market moves, I’ll tell you which way they have to trade. For me, the more interesting point of view is: if we have an idea of how to replicate these, what happens, and we kind of alluded to this earlier, you did, Adam, when they hit their maximum exposures? What does the conditional return series look like for these CTA benchmarks when they hit their maximum, or their minimum? Is there something to trade off of there? That, to me, is the next step of this analysis. The replication part is interesting, but I think the true value add is going to be looking at, now that we have a way to summarise how they’re trading, what happens when they’re at max long? What happens when they’re at max short? And does that conditionally affect the returns of the models? That, to me, is the next step.
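The conditional analysis Lars sketches can be prototyped in a few lines. This is an illustrative implementation of the idea, not from the paper: given an estimated positioning series and the benchmark returns, compare the average next-period return when positioning sits near its extremes. The quantile threshold is an assumption.

```python
import numpy as np

def conditional_returns(benchmark_rets, positions, threshold=0.95):
    """Average next-period benchmark return conditional on the
    estimated CTA positioning being near its maximum (max long)
    or its minimum (max short). Positioning at time t is paired
    with the return at t+1, so the signal is known in advance."""
    r = np.asarray(benchmark_rets, dtype=float)
    p = np.asarray(positions, dtype=float)[:-1]   # position before each return
    nxt = r[1:]                                   # the following return
    lo, hi = np.quantile(p, [1 - threshold, threshold])
    return {
        "after_max_long":  nxt[p >= hi].mean() if (p >= hi).any() else np.nan,
        "after_max_short": nxt[p <= lo].mean() if (p <= lo).any() else np.nan,
    }
```

If either conditional mean differs reliably from the unconditional one, there may be “something to trade off of there”, which is exactly the question posed above.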

Adam:                         00:54:22         Yeah, I can see two or three different ways to incorporate this information. One of them is to take the time series, the finite difference of the change in positioning over time, and ask, does that contain information? Can you front run CTAs a little bit? Because you have some forecast ability: there’s a probability cone of where the signal will almost certainly be over the next day, five days, 10 days, et cetera. So we can quasi-forecast where CTA positioning will go, within a potential cone. Then I think you could add these time series as features in a machine learning model, where CTA positioning may contain information about future returns, either on its own or in combination with other features. And there’s another direction you could go: if you want to build a pure alpha type machine learning model, you’ve now got a better understanding of the underlying mechanics of the benchmark. You know that institutions have an allocation to this as a benchmark, on average, and they want alpha strategies that are uncorrelated to it. Therefore, you’re going to, by design, build strategies that are accretive, that have marginal Sharpe relative to this strategy, because you’ve got a better understanding of what CTA strategies are from this type of model. Fantastic. Well, I think we covered a lot of ground. We managed to overcome some technical difficulties, which hopefully we’ll have been able to edit out for the most part through the production process.
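The feature ideas Adam lists, the level of positioning, its finite difference, and flags for being pinned at the extremes, can be derived mechanically from an estimated positioning series. A minimal sketch, with arbitrary naming:

```python
import numpy as np

def positioning_features(positions):
    """Turn an estimated CTA positioning series into candidate
    model features: the one-step finite difference (the 'flow'),
    plus boolean flags for sitting at the series max (max long)
    or min (max short)."""
    p = np.asarray(positions, dtype=float)
    flow = np.diff(p, prepend=p[0])        # day-over-day change, 0 on day one
    at_max = p >= p.max() - 1e-12          # pinned at max long
    at_min = p <= p.min() + 1e-12          # pinned at max short
    return flow, at_max, at_min
```

Any of these columns could then be fed, alongside other features, into the kind of machine learning model described above.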

I want to thank you for sharing some charts and tables and some really great papers to discuss. So where can investors or readers learn more about these ideas and find these papers? When do you expect to publish the CTA paper?

Lars Kestner:           00:56:16         Yeah, sure. So the CTA paper should be up momentarily, hopefully by the time the podcast is up. I’ve created a website that’s literally just a repository of papers and thoughts, which I will be adding to over time. It’s www.satquant.com and it’s all there.

Adam:                         00:56:33         Outstanding. All right, Lars, it was great. There are 15 or 20 years of quantitative experience packed into this hour. I’m sure there are a number of other directions we could go, but I’m glad we had a chance to talk about the stuff you’re currently working on; I think it’s timely and important. I would look forward to doing this again sometime.

Lars Kestner:               00:56:51         Absolutely. It’s been a lot of fun. I appreciate you having me on the podcast.

Adam:                         00:56:54         All right. Well, let’s leave it at that, and let’s do it again sometime. Thanks again. See ya.

Lars Kestner:               00:56:57         Bye.

Rodrigo Gordillo:        00:56:58         Thank you for listening to the Gestalt University podcast. You will find all the information we highlighted in this episode in the show notes at investresolve.com/blog. You can also learn more about ReSolve’s approach to investing by going to our website, and research blog at investresolve.com where you will find over 200 articles that cover a wide array of important topics in the area of investing. We also encourage you to engage with the whole team on Twitter by searching the handle @investresolve and hitting the follow button. If you’re enjoying the series, please take the time to share us with your friends through email, social media and if you really learned something new, and believe that our podcast would be helpful to others, we would be incredibly grateful if you could leave us a review on iTunes. Thanks again and see you next time.

Adam:                         00:29:53         One thing that really stood out to me just as parenthetically is just how regularly the periodicity of that purple line is.

Lars Kestner:               00:29:59         Yes, it does feel sine wave. That’s very interesting.

Adam:                    00:30:02         It could be just the time period. But that is astonishingly regular.

Lars Kestner:               00:30:06         Sure, but you see the variability of the purple line of the selectors. At any point in time you throw a dart at this graph, they may very well be risk on or they may very well be risk off. Whereas that middle line, that sort of light blue line is the average correlation to the reverse fires doesn’t move. It doesn’t budge that much. 0.2. It’s very consistent. You throw a dart there, you kind of know where you’re going to get.

Adam:                         00:30:28         Yeah. I think that’s great. So one of the things that I think you mentioned, but you didn’t sort of drill in, it didn’t seem to be a primary theme of the paper. But you had mentioned it a couple of times this idea that because we should acknowledge that every strategy has some kind of decay function over time as more investors become aware of it and build models to harvest that, or arbitrage it away. That a major source of value that is often not recognized in the quant community, or rather with investors investing in quant strategies is the role of the innovative process. You need to be constantly scouring the investment universe and every other conceivable source for ideas about other edges that you may be able to identify, either edges that have always existed but that hadn’t been identified yet or edges that are new because of new regulations or new structural interventions and markets, but the need to constantly innovate, drive creativity and dispose with the old or de-emphasise with the old and bring in the new. Maybe speak about how big a role that should play in what investors pay quantitative managers for.

The Role of the Innovative Process

Lars Kestner:               00:31:51         I think it’s huge, and it’s huge in quantitative management of assets. It’s huge in real life, whether you’re running a business. If you are wildly successful at what you do, you’re going to have people chasing you, copying what you do, trying to take away your market share, your revenue, whatever it is. It’s just the nature of the world that things are going to be harder tomorrow than they are today. And if instead of fretting over it you just realize, okay, I’ve got to figure out something next, it becomes a little less daunting, because you don’t have to reinvent your process overnight. This is where I really think it’s key: it’s incremental steps to stay just a little bit ahead of where you were yesterday. So I think that’s incredibly important. And if you’re not innovating, if you’re not willing to go out a little bit on the risk structure of the strategies you’re looking at, and you’re left with what was in the mainstream 10 years ago, you’re in trouble. I really do believe that. You’ve just got to innovate on the research side. And by the way, we say research, but maybe it’s portfolio construction, maybe it’s thinking about the buckets. Maybe the strategies are fine, but there’s a better way to put them together, portfolio craftsmanship, in your words. The innovation part, the research part, the thinking about how we can do this better tomorrow than today, is just incredibly important.

Adam:                         00:33:07         Okay, well, I’m going to just continue on and then you can answer a question while I try to rectify my technical issues at my end. But I did want to come full circle and just sort of say, is there a general practical takeaway for investors on some concrete steps that they could take with their current portfolios to make use of the framework that you have outlined in this Preferred Portfolios paper before we move on to your CTA replication idea?

The Preferred Portfolios Paper

Lars Kestner:           00:33:37       Sure. I think one of the interesting things about the Preferred Portfolios paper is that if you’re running a multi-strategy hedge fund, it’s probably right in your wheelhouse, but the concepts are important for anyone. If you’re an individual investor, think about how your portfolio fits together: okay, I’ve got bonds, I think those are defenders; here I’ve got an equity allocation, I think that’s a booster. Do I have any gold? Do I want to have gold? Do I have alternative strategies? And how are those alternative strategies going to behave, not only in a time of market peril, which we probably have a good window into given what we’ve gone through in 2020, but how are they going to perform if we see irrationality come in and bubbles happen on the upside? Because all those things can very much happen, and you’ve just got to think about how the individual pieces of your portfolio fit together in different market climates, and in some extremes. That’s probably the one thing we don’t do enough. No one thought a market down 30% in a month and a half was probable at the beginning of this year, but it happened. We have to deal with it, make adjustments where they make sense, and continue taking risk.

Adam:                         00:34:46         I like that. Is there a clear mapping between the Preferred Portfolios idea and a more generalised risk parity framework? Do you see a direct mapping there, or are there major distinguishing features between the two that you would want to highlight? I see this idea of the boosters and defenders framework as an overlay on the general idea of risk parity, and then classifying strategies using this average correlation versus standard deviation of correlation idea. I really like that. Do you agree that there’s a lot of parallel or overlap there?

Lars Kestner:           00:35:25        Essentially, I’ve never thought of that, but you’ve got a point. When you consider the all-weather portfolios that managers have put together, which are risk parity, or pretty directly risk parity, it is pretty similar. Certainly from a high level: how is an asset going to perform in different macro environments, and then making sure we’ve got enough of all those buckets. And obviously the million dollar question is how much of each bucket to hold to perform under all scenarios and maximise our ending wealth, if you will. But yeah, there is a fair amount of overlap there.

Adam:                         00:35:58         Well, I think just theoretically, coming back to your idea of alpha decay: if we acknowledge that strategies all decay to some sort of asymptotic long-term average, then I guess that would be a long-term average Sharpe ratio. So, to the extent that for risk parity you’re identifying markets where investors in general have a reasonably good grasp of the population distribution of the Sharpe ratio for the major asset classes, and you’re layering on relatively well-known factor premia for which there’s already a fair amount of capital in the market harvesting those premia, they’ve driven the expected Sharpe ratio of those premia closer to that asymptotic minimum. So you get to this idea that they all have approximately the same expected Sharpe ratio. And if you’ve got a good grasp of how they fit together, either structurally or statistically, that drives towards this risk parity concept where all of the major sources of return have the same expected Sharpe ratio, and you’re just trying to maximise the number of bets, the expected diversification, between those different sources. So I like how those pieces fit together, and that may be a reason why your Preferred Portfolios framework really resonates with me: it maps relatively well onto this idea that we’ve leaned into pretty heavily, that I’ve come to really embrace over the last 10 or 12 years. So I think that’s great.
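The diversification arithmetic behind this reasoning can be sketched in a few lines. This is the textbook result rather than anything from the paper: for an equal-risk portfolio of n strategies that each carry Sharpe ratio S and average pairwise correlation ρ, the portfolio Sharpe is S·n/√(n + n(n−1)ρ).

```python
# A minimal sketch of the diversification math implied above (a textbook
# identity, not taken from the paper): n equal-risk strategies, each with
# Sharpe ratio `sharpe` and average pairwise correlation `avg_corr`.
import numpy as np

def portfolio_sharpe(sharpe: float, n: int, avg_corr: float = 0.0) -> float:
    # Mean return scales with n; volatility scales with the square root of
    # the summed covariance: n variance terms plus n(n-1) covariance terms.
    return sharpe * n / np.sqrt(n + n * (n - 1) * avg_corr)
```

With zero correlation the portfolio Sharpe grows as √n (four uncorrelated 0.5-Sharpe strategies give 1.0), while any positive average correlation caps it at S/√ρ no matter how many bets are added, which is one way to state the theoretical limit of diversification.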

CTA Replication

                                                All right. Well, I want to move on to the CTA Replication paper because I think it’s really neat. We spent quite a lot of time with an intern, I think it might have been last summer… no, it was two summers ago, doing some work investigating machine learning concepts, and we had this idea of deriving the optimal window shape for trends and that sort of thing. So we thought CTA replication was a good case study for the concept, and when you mentioned that you were writing this, I was really keen to dig into it. So let’s do that. The first thing you mention is that this is a hot topic. There are a lot of research shops, especially sell side research shops, that are constantly publishing on CTA positioning, and a lot of investors, especially macro investors, use this relative positioning of CTAs, some of the … reports, that sort of thing, to feed into their looser models of how to position in markets. So why is this a thing? Why are investors in general interested in this?

Lars Kestner:               00:38:44         I think there are a number of strategies out there that have grown in assets so much, relative to the rest of the market, that their trading can actually be an influence on what’s going on. The three that I think get the most press, the most market analysis, are these. First, systematic option sellers: vol premium harvesting, variance risk premium harvesting, whether via variance or selling short-dated optionality. Essentially that leads its way into the market makers who run equity derivatives desks, and they’re buying low and selling high, and depending on the supply of options out there, there’s either a lot to buy on the way down and sell on the way up, or it can actually go the other way. So that’s one topic. Second, volatility-targeting funds, particularly the ones embedded in variable annuities, where I’ve got a leverage number to the S&P that moves based on realised volatility: as volatility goes up they de-lever, as volatility goes down they lever back up. And then the third one is the trend-following CTAs specifically. Very often you’ll see the research shops put a combination of all three of these together, with their take on what’s out there now. How is it affecting markets? How much is long? How much needs to be purchased on a 1% decline? It’s relevant. I think the sizes of these flows are big enough that if I’m not deep into the strategies behind them, I probably want to know. So I get the reason why, and I actually think there’s probably some value add in having it out there. For the research, I just think it needs to be done correctly. That was my point.

Adam:                         00:40:20         Yeah, I agree. And so the way people may trade around this information is: markets peak, for example, when there’s no one left to buy. The CTA is often perceived, I think by macro traders, as a marginal buyer, and a fairly predictable marginal buyer if you’ve got the right tools to predict it. If you can tell when CTAs, as an important marginal active trader in markets, are maximally positioned, in other words there are no more dollars from CTAs going in a particular direction, into a specific beta or market sector, et cetera, that can be an indicator that we may be near a shorter-term peak. So how do these research shops typically model CTA positioning, and what are some of the issues that you found with their models?

Lars Kestner:           00:41:13         Sure. My biggest issue is that many of them do it through a rolling window of a very vanilla linear regression. Meaning, I’ve got a benchmark, and in my paper I use the SG Trend Index, which follows the returns of a number of the larger trend-following CTAs. They use that as the dependent variable and the S&P as the independent variable, figure out the beta of one to the other over time using a rolling window of three months, six months, one month, whatever it is, and just blindly use the outputs of the regression: okay, here’s where the exposure is at the moment.
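For concreteness, the vanilla approach being described can be sketched as a rolling covariance-over-variance beta. The window length and the toy data below are illustrative assumptions; the point is how far the estimate lags when the true exposure flips.

```python
# Sketch of the vanilla rolling-regression approach: estimate CTA equity
# exposure as the rolling beta of trend-index returns on S&P returns.
# Window length and the simulated data are illustrative assumptions.
import numpy as np
import pandas as pd

def rolling_beta(trend_rets: pd.Series, spx_rets: pd.Series,
                 window: int = 63) -> pd.Series:
    """Rolling OLS beta: Cov(trend, spx) / Var(spx) over a trailing window."""
    cov = trend_rets.rolling(window).cov(spx_rets)
    var = spx_rets.rolling(window).var()
    return cov / var

# Toy example: a "fund" that is +0.5 beta for 250 days, then -0.5 beta.
rng = np.random.default_rng(0)
spx = pd.Series(rng.normal(0.0, 0.01, 500))
true_beta = np.where(np.arange(500) < 250, 0.5, -0.5)
trend = pd.Series(true_beta * spx.values + rng.normal(0.0, 0.002, 500))

est = rolling_beta(trend, spx, window=63)
```

In this toy run the rolling estimate is close to the true beta deep inside each regime, but after the flip at day 250 it takes a large fraction of the 63-day window before the sign catches up, which is exactly the lag problem with these reports.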

Adam:                         00:41:51         Okay, yeah. And so one of the major challenges, of course, is that CTA positioning definitionally changes through time. It’s non-stationary. And it’s a non-trivial question how to create linear regression models where the betas change through time. So I guess this is what we identify as a major challenge in these models, and I think your analysis, and we’ll dig in a little bit later to the extent to which it’s true, shows that they often get it quite wrong at the wrong times. And therefore these reports that use these methods are not of much practical use.

Lars Kestner:               00:42:32         Yeah, and again, I don’t mean to criticize factor models in general. For certain investment managers, where the reporting periods are sparse or we don’t really know much about the underlying process, it may be the best you can do, albeit with lag. If that’s the case, it may be a valid way to measure exposures, but you do need to be cognizant of the lag you’re going to have, as positions can flip from long to short very quickly. And if you’re running a model looking at past returns over, let’s call it 20 weeks, you may need another 10 or 15 weeks before you really pick up that change.

Adam:                         00:43:07         Yeah, exactly. There are a few firms out there; I know Markov Processes has a method they use to try to map non-stationary betas to different funds. So that’s interesting. But I have to say that I am much more aligned with the way that you’ve approached it, and it’s certainly closer to how we approach the problem as well. So why don’t you go ahead and take us through it?

Lars Kestner:               00:43:30         Sure. The funny thing is, the history behind this is that after seeing some of the research, I said: all right, we know how CTAs trade in general. Do I know any specific one’s trades? No. But in aggregate, we know they’re a buyer of strength and a seller of weakness, they’re likely going to ensemble over a number of periods in terms of their trend speeds, and they’re likely to ensemble across a number of markets. Knowing that, can we do a little bit better, in terms of not fitting their returns but replicating them? Meaning: if I’m a CTA, how would I do this? How would I set up my signals? And again, I’m not trying to maximise the performance. I’m trying to maximise the replication of the CTA benchmarks. I want to do my best to predict how they’re going to trade and how their returns are going to behave. Literally, as I mentioned, this was a two-hour, just-for-fun research project, and I started with a couple of very simple momentum models, volatility-adjusting each market. To keep things very tight, I only picked 16 markets.

                                                So, four specific markets across each of four asset classes: equities, interest rates, foreign exchange and commodities. I then applied what I guess we would call naive risk parity, so inverse-volatility-weighted signals for each of the markets. I did include a trend intensity level with a little bit of a parameter, and what I mean by that is: if momentum turned from negative to positive, I used a level of positioning that was based on the intensity. A flip from zero to 0.001 didn’t necessarily create much of a position; a flip from zero to positive one, in a sort of normalised space, created a bigger position, and as that momentum increased, the position would get bigger. I put all this together, came up with what I thought my replication return stream would look like, compared it against the SG Trend Index, and I got a correlation that was way higher than I thought was possible. So I scrapped it and said, all right, hold on, I must have done something wrong. Let me do it all again from scratch. Went through it again, same result. Okay. So now this went from just a fun project to: let’s put a little bit more rigor into this. Let’s make sure I’m not just getting lucky. Let’s see if we can figure out which momentum look backs these CTAs are mostly using, and then from there create a robust replication model.
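A minimal sketch of this kind of replication model might look as follows. The parameter values (a 32-week momentum lookback, a z-score cap of 1, a 90-day volatility window) are illustrative assumptions, not the paper's exact specification.

```python
# Minimal sketch of an aggregate-CTA replication model: per-market
# normalised momentum signals, capped, inverse-volatility weighted.
# All parameter defaults are assumptions for illustration.
import numpy as np
import pandas as pd

def replication_returns(prices: pd.DataFrame,
                        mom_weeks: int = 32,
                        cap: float = 1.0,
                        vol_days: int = 90) -> pd.Series:
    rets = prices.pct_change()
    horizon = mom_weeks * 5  # weeks -> trading days
    # Normalised momentum: trailing return scaled by its own volatility,
    # so a flip from 0 to 0.001 is a tiny position while a flip to +1
    # in z-score space is a full-sized one.
    vol = rets.rolling(vol_days).std()
    signal = (prices.pct_change(horizon) / (vol * np.sqrt(horizon))).clip(-cap, cap)
    # Naive risk parity: inverse-volatility weighting of the signals,
    # renormalised so gross exposure sums to one.
    weights = signal / (vol * np.sqrt(252))
    weights = weights.div(weights.abs().sum(axis=1), axis=0)
    # Trade on yesterday's signal to avoid look-ahead bias.
    return (weights.shift(1) * rets).sum(axis=1)
```

Running this over a panel of futures prices, one column per market, produces the "replication return stream" that would then be correlated against the SG Trend Index.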

Adam:                         00:46:07         So essentially, just going through your paper, you’ve got five momentum look backs. And how many caps did you test?

Lars Kestner:               00:46:16         Five different caps.

Adam:                         00:46:17         Five different caps.

Lars Kestner:               00:46:18       I think of those as almost like a z score.

Adam:                         00:46:22         Yeah, so I was going to dig into the z score. Is it just the standardised returns? In other words, the z score or the Sharpe ratio?

Lars Kestner:               00:46:29         Yeah.

Adam:                         00:46:30         Okay, perfect. And so there’s five different caps which presumably range between zero and three or something? They’re increments 0 and 2. And five different look back parameters for the estimation of volatility.

Lars Kestner:               00:46:44         Mm-hmm.

Adam:                         00:46:45         So really, what you did, if I understand, is you created 125 different potential trend following strategies on those markets, based on every combination of those three dimensions of parameterization with five parameters each. And then you found the combination that had the greatest explanatory power on the SG Trend Index.

Lars Kestner:               00:47:13         Yeah, although what I first wanted to figure out is: of the three parameters, the look back, the z score position cap and the volatility window, which were most important? So the first and probably most interesting graph takes the R squared, the coefficient of determination between my replication and the SG Trend Index, sorts them high to low, and looks for break points. And the first thing we see very easily, using the look backs, there’s a four week, eight week, 16 week, 32 week and 52 week, is that the four week look back, a super short term one month, has very little predictive power. Not really surprising. The eight week is a little bit better, but still pretty far away. The 16, 32 and 52 were certainly the next leg up, and while the 32 week had a little better predictive power than the 16 or 52, on average they were more or less a plateau, well above the four week and eight week look backs. Again, we’re talking about the momentum side. The position cap’s not so important; the volatility look back’s not so important. There were some peaks here and there that were a little preferable to others, but far and away, and not terribly surprising, the length of the momentum look back was the most important factor in achieving high replication.

Adam:                    00:48:35         So you did identify that the look back parameter was by far the most important or most explanatory variable in your model, right?

Lars Kestner:               00:48:47         Yes, for sure. And that being the 16 week, the 32 week and the 52 week. So an intermediate- to longer-term trend following system, as you would expect.

Adam:                         00:48:57         Yeah, one thing I was wondering is because we know that, or rather our hypothesis which we’ve tested, at least a little bit, is that the parameterization of trend funds on average has changed over time. So how do we think about that in terms of updating your model or I guess refitting it on an ongoing basis or on a rolling window or something like that?

Lars Kestner:               00:49:24         Yes, it’s an interesting point. That is part of the reason I selected only five years of data. If you went back over 10 or 20 years, where we’ve got the appropriate benchmarks, you’re definitely going to have some not-as-good fits, because trend speed has slowed down a little bit over the past 10 to 15 years. Again, as I understand it, and as trend speed has been studied a little more in depth over the past five years or so.

Adam:                         00:49:46         Yeah, so I can imagine a model where you’re running a rolling regression. You’d add an extra parameter here, which is a look back window on your model fit. Maybe you’ve got five different potential look back horizons on the model fit; now you go from 125 to five times 125. I should know that, what is it? 625. 625 different models that you’re testing now, and you’re adding that model-fit horizon just to account for the degree to which-

Lars Kestner:           00:50:19         Changes over time.

Adam:                         00:50:21       Changes over time, yeah. I think it’s great. So walk us through these charts.

Lars Kestner:               00:50:25         Sure. So the top one on your screen now is just the R squared, if you will, from all 125 models, sorted from high to low. From those I started to nail down and pick some specific parameters. The bar chart on the bottom holds two of the parameters constant: the normalised momentum cap of one, again, think of that as a z score, and the volatility look back at 90 days. And I’ve scaled the left hand axis, the y axis, from zero to one on all of these so that they’re like for like. You see a pretty big improvement as we go from four weeks to eight weeks to 16 weeks, and then we see the plateau, and it looks like 32 is a little bit better than 16 or 52. But again, not so sharply that we can say for sure it is 32 weeks and not 16 or 52. We’ll come back to that later. And then if you’ve got a chance to scroll to the next page: in the top graph the 32 week look back is constant, the 90 day volatility look back is constant, and we look at varying the normalised momentum cap. Hard to tell that there’s any difference there. Again, the peak is at around 1.0 but not so-

Adam:                         00:51:39         It’s not particularly sensitive to that, no.

Lars Kestner:               00:51:40         No, it’s not. And then the bottom, again holding the momentum look back at 32 weeks, holding the cap at 1.0, and varying the volatility look back: same thing, not terribly sensitive, all things considered.

Adam:                         00:51:54         It’s pretty remarkable that 0.01 as a maximum threshold still continues to give you signal. I guess if you’re normalising all signals to 0.01, then they end up being somewhat relative, so maybe the magnitude doesn’t matter as part of the regression. I need to think through that, but I do think that’s interesting. And then obviously the volatility estimation window also shows very little sensitivity, so I think that’s actually kind of interesting. Did you have any hypotheses going into this on what you’d expect the model to say?

Lars Kestner:               00:52:30         So the look back’s kind of as we expected, and certainly I didn’t expect to see much on a four week. I guess I am more surprised that there was not wider variability in the normalised momentum cap or the volatility look backs. The peak is not terribly surprising; I just thought there would have been more differentiation, as these are pretty widely varied parameters. Despite the wide variation in them, there doesn’t seem to be a whole lot of difference. That probably surprised me.

Adam:                         00:53:02         Yeah, that is interesting. How do you expect investors might make use of this type of model? I mean, there’s the obvious: do a better job as a sell side analyst of actually providing information about current CTA positioning, maybe. But are there systematic ways that investors can incorporate these types of models, or build models around them, informed by the results of these types of analyses?

Lars Kestner:           00:53:28         Yeah, I think it’s tricky to take the outputs specifically: here’s the positioning of CTAs and what’s going to happen next. Who knows? If you can tell me where the market moves, I’ll tell you which way they have to trade. I think for me the more interesting point of view, and we kind of alluded to this earlier, you did, Adam, is: if we have an idea of how to replicate these, what happens when they hit their maximum exposures? What does the conditional return series look like for these CTA benchmarks when they hit their maximum, when they hit their minimum? Is there something to trade off of there? That to me is the next step of this analysis. The replication part is interesting, but I think the true value add is going to be looking at: now that we have a way to summarise how they’re trading, what happens when they’re at max long? What happens when they’re at max short? Does that conditionally affect the returns? That to me is the next step.
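That next step can be sketched directly: bucket the benchmark's forward returns by whether the replicated aggregate position sits in its top or bottom decile. The horizon and quantile thresholds below are illustrative assumptions.

```python
# Sketch of the conditional-return analysis described above: compare the
# benchmark's forward returns when replicated positioning is near its
# maximum versus its minimum. Horizon and quantiles are assumptions.
import numpy as np
import pandas as pd

def conditional_forward_returns(position: pd.Series,
                                bench_rets: pd.Series,
                                horizon: int = 5,
                                q: float = 0.9) -> pd.DataFrame:
    # Forward return over the next `horizon` days from each date.
    fwd = bench_rets.rolling(horizon).sum().shift(-horizon)
    hi = position >= position.quantile(q)        # near max long
    lo = position <= position.quantile(1 - q)    # near max short
    return pd.DataFrame({
        "max_long": [fwd[hi].mean()],
        "max_short": [fwd[lo].mean()],
        "unconditional": [fwd.mean()],
    })
```

If the conditional means differ meaningfully from the unconditional one, there may be something to trade off of, which is exactly the question posed here.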

Adam:                         00:54:22         Yeah, I can see two or three different ways to incorporate this information. One of them is to take a time series, the finite difference of the change in positioning over time, and ask: does that contain information? Can you front-run CTAs a little bit, because you have some forecastability? There’s a probability cone of where the signal will almost certainly be over the next day, five days, 10 days, et cetera, so we can quasi-forecast a cone of where CTA positioning will go. And then I think you could add these time series as features into a machine learning model, where CTA positioning may contain information that is informative about future returns, either on its own or conditional on, in combination with, other features. So there are a few interesting directions you could go. Or, if you want to build a pure-alpha-type machine learning model, you’ve now got a better understanding of the underlying mechanics of the benchmark. You know that institutions have an allocation to this as a benchmark, on average, and they want alpha strategies that are uncorrelated to it. Therefore you’re going to, by design, build strategies that are accretive, that have marginal Sharpe, relative to this strategy. You’ve got a better understanding of what CTA strategies are because of this type of model. Fantastic. Well, I think we covered a lot of ground. We managed to overcome some technical difficulties, which hopefully we’ll have been able to edit out for the most part in the production process.

                                                I want to thank you for sharing some charts and tables and some really great papers to discuss. So where can investors or readers learn more about these ideas and find these papers? When do you expect to publish the CTA paper?

Lars Kestner:           00:56:16         Yeah, sure. So the CTA paper should be up momentarily, hopefully by the time the podcast is up. I’ve created a website that’s literally just a repository of papers and thoughts, and I’ll be adding to it over time. It’s www.satquant.com and it’s all there.

Adam:                         00:56:33         Outstanding. All right, Lars. Well, it was great. There’s 15 or 20 years of quantitative experience packed into this hour. I’m sure there are a number of other directions we could go, and I’m glad we had a chance to talk about the stuff you’re currently working on, which I think is timely and important. I would look forward to doing this again sometime.

Lars Kestner:               00:56:51         Absolutely. It’s been a lot of fun. I appreciate you having me on the podcast.

Adam:                         00:56:54         All right. Well, let’s leave it at that and let’s do it again sometime. Thanks again. See ya.

Lars Kestner:               00:56:57         Bye.

Rodrigo Gordillo:        00:56:58         Thank you for listening to the Gestalt University podcast. You will find all the information we highlighted in this episode in the show notes at investresolve.com/blog. You can also learn more about ReSolve’s approach to investing by going to our website, and research blog at investresolve.com where you will find over 200 articles that cover a wide array of important topics in the area of investing. We also encourage you to engage with the whole team on Twitter by searching the handle @investresolve and hitting the follow button. If you’re enjoying the series, please take the time to share us with your friends through email, social media and if you really learned something new, and believe that our podcast would be helpful to others, we would be incredibly grateful if you could leave us a review on iTunes. Thanks again and see you next time.