David Berns: Modern Asset Allocation for Wealth Management
Today’s conversation is with David Berns, an emerging thought leader in asset allocation and behavioral finance, who is pushing the boundaries on how to best measure real-world behavioral preferences and build optimal portfolios that actively incorporate this extra information.
We open the conversation with a discussion of his involvement in the design and launch of three new ETFs, which seek to deliver an investment experience that addresses both traditional mean-variance preferences, and also accounts for real investor behavioural and cognitive biases. We then discuss core themes from his new book, “Modern Asset Allocation for Wealth Management”, which marries advances in portfolio optimization with a scientific approach to measuring real investment risk preferences.
I was impressed with David’s thinking about fundamental investment principles and his driving ambition to go beyond traditional portfolio formation techniques to account for loss aversion, reflection, and higher moments of the return distribution. It’s clear David genuinely cares about client satisfaction with their investment experience and is advocating for ways to treat clients as unique individuals with novel preferences and goals.
This discussion has something for everyone, from advisors to portfolio managers to planners and even end investors.
David M. Berns, PhD
CIO & Cofounder, Simplify Asset Management
David M. Berns, PhD, is an innovative thinker and emerging thought leader in asset allocation and behavioral finance, and is pushing the boundaries on how to best measure real-world behavioral preferences and optimally build portfolios with that information.
Currently, he works as Co-founder of Simplify ETFs, where he builds novel investment products for real-world client needs. Dr. Berns holds a doctorate in physics, specializing in quantum computation, from M.I.T.
Adam:00:00:01I am Adam Butler, the Chief Investment Officer of ReSolve Asset Management. I have with me today, David M. Berns, the author of a new book. David, what’s your book called?
David:00:00:10It’s Modern Asset Allocation for Wealth Management.
Adam:00:00:14Excellent. And just for the benefit of listeners, maybe go through what you do for your day job, and maybe how it relates to your motivation for writing this book?
David:00:00:24Sure. Well, I am the CEO of PortfolioDesigner.com, which is the software that was built for users and readers of the book. But more importantly, now I am the CIO at Simplify Asset Management, which is an asset manager that’s advising Simplify ETFs. And we just launched our first three ETFs about a week ago on NYSE and, yeah, exciting times.
Adam:00:00:54Very interesting. So, which came first, the software or the investment practice?
David:00:01:01Yeah, so the book and the accompanying software came from my background in asset allocation. So, I started my career, after being a physicist in quantum computation, which is another whole lifetime ago. I went into asset allocation in the wealth management space, focused on quantitative solutions, building asset allocation systems for advisors. And I did that for seven years at a wealth management firm called Athena Capital Advisors, great shop, we were managing around $5 billion when I was there, across liquid and illiquid asset classes. And my job there was really to focus on building a framework for asset allocation for the advisors to use for the clients there. And that’s really sort of how this book started. Basically, the book was a look back on what I had built over the seven years at Athena, and what I thought we could potentially build over the next 10 to 20 years, and what the future of wealth management could hold in the realm of asset allocation.
Adam:00:02:05Okay, so your current firm. So, you’re currently at Simplify Asset Management. Is that an advisory firm? An asset management firm? Both? I mean, obviously, you just launched some ETFs, so you’re definitely in the asset management space. But are you also in the advisory space?
David:00:02:20No, we’re just focused on the ETF complex. Our first three just came out, we just listed a handful more, and there are some new prospectuses that we just filed. So yeah, we have a lot of exciting things coming down the pipeline at Simplify.
Adam:00:02:35So, what are the themes of the ETFs? And do they map to the themes in the book?
David:00:02:42Yeah, that’s a great question. So, there’s definitely a connection between the book and the ETF company. In the book, one of the core principles is going beyond sort of the standard mean-variance approach and considering things like loss aversion. And when you do that, it naturally begs the question: are there products that you can use to help build portfolios that can handle these sorts of nonlinearities and asymmetric preferences? The book is focused on the asset allocation machinery, on how to account for asymmetric preferences. But if you look, there aren’t a lot of investable assets, a lot of building blocks, that you can deploy to actually build portfolios for very asymmetric preferences.
Adam:00:03:36So, the ETFs then, and we’re skipping ahead a little bit in your book, which by the way, I thought was great. So, one of the chapters in your book is focused on selecting a coherent investment universe, which is sufficiently large so as to capture all of the orthogonal dimensions of risk and return, but reduced to the point where there is a minimal amount of overlap between the different bets in the portfolio. So, the ETFs that you just launched, are they meant to be these sort of orthogonal constituents of a portfolio?
David:00:04:20They are. Our first suite of products is labelled US Equity Convexity products. And what we’re trying to do there really is go from a normal distribution of an equity risk premium, or a relatively normally distributed equity risk premium. The S&P 500 is our core equity premium benchmark, and we’re trying to say, can we modify the moments of the distribution to create something that is pretty orthogonal to the core risk premium? So, kind of what you would expect out of alternative risk premia, where you’re taking on higher moments. But at Simplify, we’re trying to really preserve the core equity risk premium when we do this. So we’re always invested around 98% in US equities, and then we have this 2% convexity overlay that’s meant to really chop off left tails, add to right tails, and add skew and co-skew and these higher moments, while still giving you the core equity premium.
Adam:00:05:25So, do you want to take a minute and go into, how you manufacture that profile?
David:00:05:31Sure. So, we’re doing this purely with an option-based overlay. So, we start with a 98% investment in a core US equity ETF, you know, low fee, very efficient, all that. And then we’re overlaying. We have three products: one that’s trying to add convexity on the downside and the upside, so it’s symmetric. Then we also have a product that is trying to add convexity just on the downside, trying to cut off the left tails only. And then we have a product that’s trying to add to the right tail only, which is our upside convexity product. So, let’s just take the downside product as an example, in terms of what this option overlay would look like. Basically, we start by saying, what do drawdowns generally look like in markets? And how can options be used to help mitigate them? So, we basically do a clustering analysis of what drawdowns look like. And we’ve identified the two horizons and magnitudes of drawdowns in history that are most common. And then we’re buying really deep out-of-the-money puts that’ll help hedge against those types of market drawdowns. And then we have a bunch of strategy components that help really effectively create those modified distributions: we’ll monetize options early, we’ll roll them before expiry. So, there’s a lot of components that go into the strategy. We’re really thinking of options as surgical blades, where we’re trying to take a core risk premium that everyone loves and knows, and we’re trying to carve up the return distribution, take weight out in certain places, and then add it to other places.
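As a back-of-the-envelope illustration of the downside-convexity idea David describes, not the fund's actual methodology (the strike, premium, and weights below are invented for the sketch), a protective put overlay reshapes a one-period return roughly like this:

```python
# Sketch: 98% equity plus a fixed 2% annual option budget spent on a deep
# out-of-the-money put. All numbers are illustrative, not the fund's terms.

def overlay_return(equity_ret, strike_ret=-0.20, premium=0.02, weight=0.98):
    """One-year portfolio return for equity plus a put struck 20% down.

    The put pays off the drop below the strike; the premium is the fixed
    annual option budget, a constant drag in calm markets.
    """
    put_payoff = max(strike_ret - equity_ret, 0.0)  # kicks in below -20%
    return weight * equity_ret + put_payoff - premium

# A severe drawdown is truncated near the strike:
crash = overlay_return(-0.40)   # -0.40*0.98 + 0.20 - 0.02 = -0.212
# ...while a normal year just pays the small premium drag:
calm = overlay_return(0.10)     # 0.10*0.98 + 0.00 - 0.02 = 0.078
```

The left tail is "chopped off" (a 40% equity crash becomes roughly a 21% portfolio loss) at the cost of a small certain drag, which is the asymmetric, convex shape the overlay is after.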
Adam:00:07:35So are you selling options to fund the purchase of other options? Or are you expecting there to be a general sort of rolling cost in terms of the purchase of option premia in order to fund the different distributions?
David:00:07:53Exactly, we’re not selling any options. We’re not capping upside or capping any direction. We’re just spending a set 2% annual budget on the entire overlay.
David:00:08:08And so when you’re talking about the symmetric product, we’re spending 1% a year on each side. And if you’re talking about just the downside or the upside, we put the entire 2% into the option overlay for the one side. And so yeah, as long as the math works out that, with the amount you’re spending, you can add enough value during those extreme moments, when convexity is really valuable, then you should be in good shape.
Clients and Utility Functions
Adam:00:08:33Okay, well, that actually leads really nicely, I think, into the theme of your book, which, if I could kind of summarize it, you’re asserting that typical mean-variance optimization assumes investors have exclusively mean and variance preferences. So, on the risk side, we’re only concerned about volatility. And what you’ve acknowledged explicitly with your model is the findings from behavioural economics, and specifically Prospect Theory, Kahneman and Tversky, their acknowledgement that investors are more sensitive to downside risk than upside risk, and that they also exhibit this behavioural property or characteristic called reflection, right. And so, the utility function needs to account for all three dimensions: risk aversion, aversion to downside losses, and potentially this reflective behaviour as well. Right? So that’s a core theme of your book. And there’s other things too, I think, that relate directly to maybe how you’ve manufactured this line-up of ETFs. So, why don’t you go into how you think about this utility maximization problem?
David:00:10:01Yeah. So, well, before I forget, you’re exactly right. I think, if you really asked the question of, what are human beings’ preferences? Then this sort of mean-variance rational investor, whose utility function was created in the early 1900s and economists just ran with it, doesn’t really represent human beings well. And we know that well now. So right, when you get into that, you really quickly beg the question of, where are the investment products? Right, you can build the asset allocation framework, but should we be focusing on investment products that can more directly speak to these asymmetric preferences? So spot on, on that. So, I guess, let’s roll back to the very beginning of the book, which is asking the question, what does it mean to build an investment portfolio for someone? How do you take someone’s brain and soul and whatever and say, this is the right portfolio of investment holdings for you? And, to our best knowledge, the way to do that is to define a utility function for a human being, and then try to maximize that utility. Now, I don’t know if that’s the best thing to do. I don’t know if that’s going to be around 500 years from now, but to the best of our knowledge today, in economics, that’s how you map a person to an investment portfolio. And the core way that we operate today in wealth management is, we really don’t account for the two dimensions of the utility function that Prospect Theory adds, the reflection and the loss aversion; we really just have that singular risk aversion parameter. And the whole industry runs off of that, and people are sort of sorted into risk tolerance bins, and, you know, there might be four or five portfolios that a firm offers, and you’re just sort of slotted into one. But we know humans aren’t like that.
And so, the first step in going from A to Z to build a portfolio for someone is to say, what is your utility function? That’s the first step. And then, like you said, there’s a bunch of other steps we have to go through. But that’s the starting point. And there’s also a huge topic of how do you actually measure three parameters? Right? I think everyone is very familiar with how people measure your risk tolerance today, but I would challenge that the way that’s done probably has massive error bars on it. And I don’t think that there has been nearly enough research into that. And, you know, ask your neighbour, ask anyone if they filled out a risk tolerance questionnaire at their broker, and they’ll generally give the same answer, which is, yeah, I was asked a couple of questions: you know, am I scared of losing money? Or do I like gambling? And from my perspective, being a scientist, it’s really hard to imagine how that’s a real, sort of medical-grade test. So, I think there are two really important things to consider here that the book is trying to take a step forward on, which is, what is the starting utility function we should consider? How many parameters are in it? And then how do we measure it? And, yeah, there’s a ton of loss aversion in the world. And when we don’t correctly account for it, we’ll build portfolios that are not appropriate for people. You know, think about 2008. Think about the stories of people who went to cash in 2008 and didn’t get back invested into equities for 5, 6, 7 years. We’ve all heard the stories. And I think, really, what you’re seeing there is that people’s loss aversion was misdiagnosed, and they got incredibly scarred. And they would have been way better off having a proper diagnosis of loss aversion, having their equity allocation probably dialled back from 60% to 30% or 40%. And then they would have stayed in, and they would have done a lot better.
Adam:00:14:34Right. So, I think it’d be helpful, and I was actually trying to log in to see if I could show the chart from your book, because I found the chart in your book to be helpful in understanding these concepts, right. So, a typical utility curve is shaped sort of like a parabola, because the idea is that the marginal utility of an extra dollar of wealth declines as an investor’s wealth increases, right. So, if you offer an investor who’s a multimillionaire the opportunity to play a lottery for $10 or $1,000, the value of that extra thousand dollars is less to that multimillionaire than it would be to somebody who lives below the poverty line, right. So, you get this sort of diminishing return on extra wealth. And what I found interesting in the chart that you showed was where you introduce this idea of loss aversion: in the part of the curve where you’re losing wealth, the utility drops off very, very steeply. Much more steeply than a normal log utility curve. Right. And then for this reflective quality, I guess the curve almost has a bit of an S-shape.
Adam:00:16:11So, I think the loss aversion parameter is a little easier to comprehend visually and internalize, while the reflective one, I think, requires a little more explanation. So maybe pull on that thread a little bit.
David:00:16:25Yeah, this is kind of like a fear of missing out type of behaviour, and it’ll actually put you into higher risk portfolios. And, you know, I think the easiest way for me to think about it is, let’s say you have an asset in your portfolio that’s down 30%. Do you want to sell it and lock in the loss, knowing this thing could be cut in half from there, or do you want to kind of roll the dice and see if you can get back to even, or even better? So, I think, holding on to losers. You know, we’re all told to cut losers and hold on to winners. And this is the opposite, right? And this is what a lot of people do demonstrate. They hold on to losers, hoping to get back to even or still be proven correct.
David:00:17:21And so, that’s exactly what that risk-seeking behaviour in the loss domain is.
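The curve shape discussed above, concave for gains, convex for losses (reflection), and much steeper in the loss domain (loss aversion), is the standard Kahneman-Tversky value function. A minimal sketch, using Tversky and Kahneman's published 1992 median parameter estimates rather than anything fit to a specific client or taken from the book's software:

```python
def pt_value(x, risk_aversion=0.88, reflection=0.88, loss_aversion=2.25):
    """Prospect-theory value of a gain/loss x relative to a reference point.

    Gains are valued concavely (diminishing marginal utility), losses
    convexly (reflection, i.e. risk seeking in the loss domain), and
    losses are scaled up by the loss_aversion multiplier.
    Defaults are Tversky & Kahneman's 1992 median estimates.
    """
    if x >= 0:
        return x ** risk_aversion               # concave upper arm of the S
    return -loss_aversion * (-x) ** reflection  # convex, steeper lower arm

# A $100 loss hurts about 2.25x as much as a $100 gain feels good:
# abs(pt_value(-100)) / pt_value(100) == 2.25
```

With three separate parameters (gain curvature, loss curvature, loss-aversion multiplier), the kink at zero and the S-shape Adam describes both fall out of this one function.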
Adam:00:17:27Right. So, I thought that was very intuitive. And I thought it was neat how you incorporated directly the findings from behavioural economics. One thing I was curious about, I mean, having worked with clients for many years directly, one thing that we noticed, because we’ve always been focused on mean-variance optimization and perhaps overly myopic on that objective from a practice standpoint, is that investors are also very attuned to what the market is doing. Right. So, we’ve always contemplated, and I know Corey Hoffstein, for example, has done some work on this, on the “weird portfolio”, which jointly optimizes mean-variance utility but also mean-tracking-error utility against, for example, the S&P 500 or the US 60/40 portfolio. Did you contemplate this? Is this something that you’ve observed in the literature the way we have experienced it directly with clients, and did you contemplate including it in your model? If so, why? If not, why not?
David:00:18:45Yeah, I didn’t. And, to be fair, it’s not something I’ve looked at exhaustively. But coming out of the academic literature, and the process I went through to try to put together this sort of best-practice workflow from end to end, it didn’t fall out as the path I wanted to take to manage clients. So, for better or worse, I was probably going through an overly pedantic, and potentially a bit academic, exercise to create this end-to-end workflow. And that’s why it probably smells like that.
Adam:00:19:31Yeah, no, I think, again, it was intuitive, and I appreciated the connection to behavioural economics. And I also think it would have been interesting to explore adding an extra dimension to the optimization to account for tracking error aversion, just adding an extra term. And to your point, you could probably measure this using similar lottery-oriented questions to capture the beta of an investor’s utility curve to that specific dynamic as well. Right. Which dovetails nicely into…
David:00:20:10Like another dimension of behaviour, I think.
Adam:00:20:12Exactly. Right. Exactly.
David:00:20:15And it’s like, let’s go from three to four parameters. Dave, let’s do this.
Adam:00:20:17It’s shockingly important, though. And I agree, obviously, moving into more and more parameters, you get into dimensionality challenges with the amount of data you have. And I mean, there’s lots of issues on both sides. But this has been an especially interesting observation for us, as, you know, people who’ve worked directly with clients over the years, and we’ve run mostly strategies that are not really oriented towards traditional portfolios. So, this has been acutely, urgently in our sights.
David:00:20:55More sensitive to it.
David:00:20:57See, you might be more sensitive to it. Yeah, that’s fair.
Adam:00:20:59For sure. For sure. But either way, I was really enamoured also with the way you try to attack measuring a person’s risk preferences using the lottery-based surveys. Maybe go into that. What’s your thinking on that? How do they work?
David:00:21:16Yeah, I think, you know, intuitively, the sort of Grable/Lytton questions that the industry has just grossly adopted over the last two decades seemed very qualitative. And there’s a lot of self-diagnostics in there; they just didn’t hit me as having the potential of being very accurate. And at the same time, I think about the way people have aggregated different questions and just sort of squished them together: let’s say you answer four questions on risk tolerance; if you flag as a four out of four on each one, you add up to 16. Just adding up a series of questions also doesn’t seem like a very rigorous kind of aggregation system, especially if there are multiple dimensions in there. So, if you look carefully at the risk preference diagnostic literature for these Grable/Lytton type questions, they actually have loss aversion questions buried in the list, and they identify them as such. But as you know, since loss aversion makes the utility function fall, it can make it fall off a cliff really aggressively, way faster than risk aversion can. Then, if you have 10 questions on a list, and two are about loss aversion, and you flag as very loss averse, well, that’s just two out of 10. So, it’s only going to bring down your effective risk aversion by maybe 20%. But yet, we know the utility function can fall three times as far. So, there’s also this really obvious aggregation challenge in the industry with these types of questions. So, the only way I could see from the literature to do this accurately is to diagnose each parameter very independently, and really try to specifically diagnose each parameter. And these lottery-style questions are out there. And you can map each one of these questionnaires to a very specific parameter.
So I know that I’m not just taking a fluffy question about diagnosing your own risk and then having to map it somehow qualitatively again, so two qualitative steps to a parameter. This thing is incredibly precise, as long as I can solicit the response properly from someone in a good state of mind. I mean, there’s plenty of other error that can come in, but at least I know, for instance, on the loss aversion questionnaire, that if you’re willing to take a bet that’s a two-to-one payoff, but not one that is 1.5-to-one, then I know your loss aversion is around two-to-one. It’s just the ratio of those two things; it’s very mathematical.
David:00:24:11So that was also really attractive. So independently measuring them and then also trying to use a questionnaire that is at least trying to be accurate from the start.
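The switch-point logic David describes can be made concrete. In a hypothetical ladder of 50/50 coin-flip bets that risk losing $100 against an increasing gain, the gain/loss ratio at which the client first accepts brackets their loss aversion (the helper and the answer ladder below are made up for illustration, not the book's actual questionnaire):

```python
def implied_loss_aversion(answers):
    """Infer lambda from a ladder of 50/50 bets: lose $100 vs. gain 100*ratio.

    `answers` maps each offered gain/loss ratio to True (bet accepted) or
    False (rejected). Under prospect theory, a client with loss aversion
    lambda accepts roughly when ratio >= lambda, so lambda is bracketed
    between the highest rejected and the lowest accepted ratio.
    Assumes at least one bet in the ladder was accepted.
    """
    accepted = [r for r, took in answers.items() if took]
    rejected = [r for r, took in answers.items() if not took]
    lo = max(rejected, default=0.0)
    hi = min(accepted)
    return (lo + hi) / 2  # midpoint of the bracket

# Client turns down 1.5-to-1 but takes 2-to-1, pinning lambda near 2:
answers = {1.0: False, 1.5: False, 2.0: True, 2.5: True}
# implied_loss_aversion(answers) == 1.75 (midpoint of the 1.5-2.0 bracket)
```

A finer ladder of ratios narrows the bracket, which is what makes this style of question quantitative rather than a two-step qualitative mapping.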
Adam:00:24:21Yeah, no, I thought that was really intuitive. You do discuss at length, and I think very thoughtfully, this idea of the validity of a test, right. And I think you acknowledge explicitly, and I’d be interested in your take on this, but one of the reasons why traditional surveys are not very effective is that, A, they don’t measure explicitly the right dimensions of risk, but also because investors are not very good at conceptualizing or identifying their true emotional triggers, you know. Like, you ask a person in a calm emotional state to evaluate how they will feel when they’re on tilt? And those don’t often, I mean, in my experience, they just don’t map very well, in many cases. And I know that your attempt at this doesn’t make that any worse; there’s nothing about your method that is any worse at capturing that than any other method. That is just an omnipresent challenge. Do you feel that your approach closes the gap, though, in a way that some of the more prosaic surveys don’t?
David:00:25:46I hope so. That’s definitely one of the hopes. And I’m working closely with a psychometric testing firm. And throughout the book, we’ve surveyed the questionnaires, and we’ve done a lot of data work on that. And we’re going to be publishing that soon. The psychometric testing company is called ACS Ventures, and their expertise is in reliability, validity and studying these things. And so, we’ll be releasing some data and analysis on that soon. And I think we’re doing a pretty good job, and we stack it against Grable/Lytton type questions and everything. So, I think we’ll have some exciting news to share on that front.
Adam:00:26:26So how do they measure that? I’m curious. I mean, I’m not sure just how deeply you’ve been involved with what they do. But I would expect that they would measure people’s responses to the surveys, and then observe their behaviour in different market conditions and see how well their responses map to their behaviour in those conditions? Or is there another method that they use?
David:00:26:48I think there’s a bunch of ways to do it, which is a whole other podcast, and it goes off the deep end a bit quickly. But the thing I’ll bring up on this, that I think is overwhelmingly interesting, and you probably saw this part of the book, is what I think is an interesting challenge to the whole industry: the part where I went through how, when you give one of these questionnaires with small amounts of fake dollars, you’re not soliciting accurate answers; then if you scale those numbers way up, but they’re still fake, you’re still not soliciting accurate answers; but if they’re real dollars, and very large, then all of a sudden the answers change, and you start to get more real and accurate answers. And so, I find this really funny but very interesting. You know, the way you should do your first meeting with your client is, you snap your fingers and their bank account’s on the table, and then you snap your fingers and your bank account’s on the table, and you say, look, to do this well, somebody’s going to lose money right now. But we’re going to go through these questionnaires. This is as real as it gets.
Adam:00:28:17It’s literally the only way to actually get close to a person’s true risk preferences. And even then, it assumes that risk preferences are stationary in time, right? Like, that people don’t get more or less risk averse in different market conditions or different phases of their life or what have you. Right. So, I think you did a good job of suggesting that advisors should be administering these tests to clients regularly, so that, you know, they can sort of triangulate the true preferences over time.
David:00:28:48Minimally around big market events and big life events; you know, personal events can really change things. So, I think as long as you try to administer it around those big events, you’ll be in pretty good shape. And a lot of it is also educational for them, just to be able to say, hey look, all of my clients’ loss aversion went up 50% over the last two months; you’re not different. Here’s a chart of my whole client base. And, you know, that’s incredibly interesting education. And also, just on the three different parameters, educating clients on loss aversion, for instance, I think, is something that can be really educational for clients. And that’s what I’ve seen thus far from advisors who have used the software and have deployed this with clients. I think clients really feel like the advisor is getting to know them a lot better, and the client is seeing a window into the advisor caring, really getting involved, and explaining things to them. So, for instance, the story I like to tell on this front is, I was doing a demo with some advisors. And we measured everyone’s loss aversion. And then we had everyone take their phones out of their pockets and look at their cell phone cases. And there was a really nice correlation: the highly loss-averse people had these very large cases on their phones. And then the guy that had absolutely no loss aversion, maybe even the opposite somehow, he had two iPhones with no cases on them.
And so when you start to tie these parameters into real life, and the advisor really gets to understand what these parameters are and they can make those parallels and clients can start to see how they exhibit loss aversion in their everyday life, how much insurance do you take out for X, Y, or Z and what’s the, you know, what’s the national average, and there’s a lot of really interesting content and it really empowers the advisor and the client to really engage the portfolio I think in a more meaningful way.
Adam:00:31:12I agree. And I think you acknowledge that this may be the most important feature of the approach that you advocate, this idea that you are able to have deeper, more informed, more educational conversations with clients. And the client genuinely feels like you’re getting to know them, and they’re not just getting bucketed, as if there are three kinds of snowflakes and they’re snowflake A, right? But rather, you’re really getting a sense of their unique snowflake characteristics and are able to map those qualities to other decisions that they make in their life. And that crystallizes the concept for them, I think, in a way that you aren’t able to do using other techniques. So, I think that’s a great quality of this approach.
David:00:32:05Yeah, and I think another interesting thing on this topic is… I’m going to give a shout out to a company, Riskalyze, which I’m sure you’ve heard of. They have this single risk number. And so, you might sort of intuitively say, “Okay, they’re kind of like everyone else. They just have a single number, and you’re somewhere” or whatever. But if you look at their questions and how they actually build the number up, they’re actually using these quantitative questions, and they’re actually building out a full utility function. They’re asking these questions that sort of help map out the entire utility function. But what they do is, instead of deconstructing it into a series of parameters like I do, instead of parameterizing with three parameters, they just fit one function over it, and they just fit it with this risk aversion number, a single number. And they parameterized this sort of constant second derivative function with one parameter.
David:00:33:03But their methodology is very numerate. And they have all the underlying data. But they’re just deciding not to showcase the underlying data and expose it in this way. It’s very intriguing to me how they made that business decision to have the deeper power of all these really nice quantitative questions, which should have higher validity, but then hide it. What they do is they make this effective risk aversion, a single parameter that also accounts for loss aversion. So, if your loss aversion is really steep, they’ll just basically increase the risk aversion to help bring this curvature in.
David:00:33:50And I find that to be really interesting, but it gets to the question of what is best for the client. Is it better to expose these underlying parameters and behaviours, or not? And my book went one way, but as long as we’re measuring it right, I think that’s a great first step. So, Riskalyze is certainly doing that.
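The "effective risk aversion" compression David describes can be sketched numerically: fit a one-parameter CRRA-style curve to a kinked, loss-averse utility by least-squares grid search. The wealth grid, gamma grid, and the piecewise-linear target below are illustrative assumptions, not Riskalyze's actual method:

```python
import math

def crra(w, gamma):
    """One-parameter CRRA utility of wealth w (reference wealth = 1)."""
    if abs(gamma - 1.0) < 1e-12:
        return math.log(w)
    return (w ** (1 - gamma) - 1) / (1 - gamma)

def kinked(w, lam=2.5):
    """Loss-averse utility: dollars below reference wealth hurt lam times more."""
    return (w - 1) if w >= 1 else lam * (w - 1)

def effective_risk_aversion(lam=2.5):
    """Grid-search the single gamma whose CRRA curve best mimics the kink."""
    wealths = [0.70 + 0.05 * i for i in range(13)]  # wealth 0.70 .. 1.30
    grid = [g / 2 for g in range(31)]               # gamma 0.0 .. 15.0

    def sse(g):
        return sum((crra(w, g) - kinked(w, lam)) ** 2 for w in wealths)

    return min(grid, key=sse)
```

With no loss aversion (lam = 1) the target is linear and the best-fit gamma is zero; as lam rises, the fitted curvature rises to absorb the kink, which is exactly the "bend the one parameter to account for loss aversion" trick, and also why the separate parameters are lost in the compression.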
Adam:00:34:18And then you’ll communicate those dimensions in an intuitive way to clients. So, it’s an extra layer of value that…
David:00:34:24Yeah. I would love to see them say, “Hey look, we have this effective number, but if we look at your utility function, you have some really interesting behaviour going on under the surface. Let’s expose that. Let’s educate you on it.” I think that could be incredibly cool because I know they have the data.
David:00:34:42But then you can take it one step further and say, if there was a firm that started to offer higher moment building blocks, and we started to get outside of playing just in volatility land, now it gets even more deeply interesting to have some of those parameters really isolated because an effective risk aversion won’t really be enough now. If there’s higher moments in the building blocks themselves, having just an effective risk aversion isn’t good enough. You need those isolated parameters, so that in your expected utility optimization, it can really be sensitive to those more interesting higher moment building blocks.
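A tiny numerical illustration of David's point (the payoffs are contrived, and the value function is the standard Kahneman-Tversky form with 1992 median parameters, not anything from the book's software): two gambles with identical mean and variance but mirrored skew receive different expected utilities, a difference a single effective risk aversion applied to mean and variance alone cannot see.

```python
def pt(x, alpha=0.88, lam=2.25):
    """Standard prospect-theory value function (1992 median parameters)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Two equally likely three-outcome gambles: same mean (0), same variance,
# opposite skew (one big gain vs. one big loss).
pos_skew = [-10, -10, 20]   # long right tail
neg_skew = [10, 10, -20]    # long left tail

eu_pos = sum(pt(x) for x in pos_skew) / 3
eu_neg = sum(pt(x) for x in neg_skew) / 3

# The two expected utilities differ, even though any mean-variance summary
# of the gambles is identical. Here reflection (risk seeking in losses)
# makes two small losses hurt more in value terms than one large one.
```

So once building blocks carry real higher moments, the isolated parameters change the optimizer's ranking of assets, not just its overall risk budget.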
Adam:00:35:27Yeah, no, you’re right. I haven’t met anybody who isn’t interested in learning more about themselves, right? Which is why these personality tests are so popular, and there’s all these automated Myers-Briggs-type tests online, where you get a full report. It’s automatic, it’s computer generated, it’s formulaic. So, a person can fill out a questionnaire. You can imagine a person filling out a Riskalyze questionnaire. I think this is where you’re going, right? And not only does Riskalyze get a very granular understanding of their utility function, but they also produce a report that informs the client about some of the interesting things that maybe they don’t know about themselves and would find very interesting to learn. That’s a really good point.
David:00:36:22I’ll give you a personal example very dear to me. I was prop trading for a few years, and I found out about my loss aversion after the fact. And it was discretionary prop trading. I wish I knew that beforehand.
Adam:00:36:39So, what happened? You want to share?
David:00:36:41It made for a… What was that?
Adam:00:36:42What happened exactly? Do you want to share?
David:00:36:44Yeah, no. I was trading futures, highly levered, very short term, couple of days. And my loss aversion was just a constant challenge. To put on these discretionary trades with a lot of leverage where… It was a systematic approach, but you have to stick with the system.
David:00:37:09But when you have discretion and you have a lot of loss aversion, it’s the same exact thing we were talking about with the portfolios. You have to stick with it, and you can’t just go to cash and get scared. I wish I knew about my loss aversion before. It affected my returns for sure, so yes.
Adam:00:37:26And as you describe in the book, right? If you know this about yourself, you can design an investment strategy that maps to your specific set of risk preferences, right? And so, maybe describe the process that you go through to translate the three different risk preference parameters, and obviously a target return, into an optimal portfolio. And I know you do this in sort of two chapters in the book. And I like the way that you did that, where one chapter is, well, we need to find a parsimonious investment universe, right? And then we’re going to feed that parsimonious investment universe into our optimizer, because if not, then we’re going to optimize on the error term, and we get very sensitive and fragile results, right? So, maybe start with how to think about creating a parsimonious investment universe.
David:00:38:26Yeah, so I think the first thing to recognize is, like we mentioned earlier, we need to go beyond mean variance if we really want to properly account for these Prospect Theory type terms that are going to have higher moment sensitivities. So, we’re going to do something called returns-based optimization. So, we’re going to maximize the expected utility, but we’re going to be bringing in the full return distribution of every asset. We’re doing monthly returns, so we can see, every month, how each asset interacts. And so, that’s sort of the first step you need to take to be able to pull this off properly. And then, to be able to enable advisors to actually run this mathematical tool, there are two things that we need to help them with. First, we need to minimize the estimation error in our capital market forecasts. Okay? The way the book proposes to do this, to first order, just trying to keep it simple for advisors to actually do this at home without a team of PhDs in their office with them, is we’re going to use the last 50 years of historical data. And we can get error bars pretty low. And we’re making a huge assumption: we’re making an assumption that the risk premia we’re going to invest in for the next 10, 20, 30 years are going to roughly look like the last 50 years. So, we’re assuming a stationary stochastic process, and the moments are going to be consistent. And if you’ve been earning 7-8% by taking on equity risk, that risk premium is going to stay there and still be there, and someone’s going to compensate you for taking those risks. So, same with duration, same with all the other risk premia. So, we assume those risk premia are relatively static and consistent. And so, by using the 50 years of data, you can have pretty low error bars when you estimate the return distribution, to use as your capital market forecast.
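Returns-based optimization as David describes it can be sketched very compactly: instead of optimizing on summary statistics like means and covariances, you maximize the average utility of the portfolio's historical monthly returns, so every moment of the joint distribution flows through. This is a minimal sketch under assumed long-only, fully-invested constraints (the book's actual constraints may differ), with log utility standing in for the full behavioural utility function.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_weights(R, utility):
    """R: T x N matrix of historical monthly returns (the full return
    distribution). Maximizes the sample mean of utility(portfolio return),
    long-only and fully invested."""
    T, N = R.shape
    neg_eu = lambda w: -np.mean(utility(R @ w))       # sample expected utility
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    res = minimize(neg_eu, np.full(N, 1.0 / N),
                   bounds=[(0.0, 1.0)] * N, constraints=cons)
    return res.x
```

Because the objective sees every month's joint return, co-skewness and co-kurtosis between assets are priced in automatically, with no closed-form moment expansion required.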
And then, the next thing we need to do to make this optimizer behave well for us and not be kooky is that we want to minimize the optimizer’s sensitivity to estimation error. So, there are two different steps. And the best way to do that is to make sure you don’t use assets that are very similar. Right? So, think about it. An optimizer, an AI, whatever, if you have two very similar things, it’s just going to fry and not know how to choose between one or the other. But if we only offer very distinct choices, then the sensitivity to estimation error itself is going to be low. So, we really want to tackle it on both fronts. I’ve talked about the 50 years of historical data with low estimation error; the way we avoid co-movement, assets that have a lot of co-movement, in the book and in the software, is we basically build these mimicking portfolios. What you do is, if you have, let’s say, six assets that you want to consider for optimization, for each asset you build a portfolio from all the other assets on the list, not including the one you’re interested in. And you build a portfolio that mimics that one asset as much as possible. And if you can build a mimicking portfolio with very low tracking error to that asset, then you don’t need that single asset, and you can throw it off the list. Because we’re not just sensitive to assets that are similar. We’re also sensitive to a whole other list of assets, aggregated together in some way, being similar to a single asset. So, it’s not just a one versus one question. It’s also whether the rest of the portfolio list can replicate a single asset. And so, I go through this tool where you look at the mimicking portfolio tracking error with the rest of your assets, and you want to just avoid things that have very low mimicking portfolio tracking error.
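The mimicking-portfolio screen David walks through can be sketched as follows: for each candidate asset, find the portfolio of all the *other* assets that tracks it most closely, and measure the residual tracking error. A low number means the asset is redundant and can be dropped from the optimization universe. This is an illustrative sketch, assuming long-only, fully-invested mimicking portfolios; the book's exact constraints and thresholds may differ.

```python
import numpy as np
from scipy.optimize import minimize

def mimic_tracking_error(R, i):
    """Tracking error (monthly return std of the difference) of the best
    long-only, fully-invested portfolio of all other assets that mimics
    asset i's return series. R is a T x N matrix of monthly returns."""
    target = R[:, i]
    others = np.delete(R, i, axis=1)       # everything except asset i
    N = others.shape[1]
    te = lambda w: np.std(others @ w - target)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    res = minimize(te, np.full(N, 1.0 / N),
                   bounds=[(0.0, 1.0)] * N, constraints=cons)
    return res.fun
```

Running this across the candidate list and dropping the assets with the lowest mimicking tracking error is one concrete way to arrive at the parsimonious universe the optimizer needs.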
Adam:00:43:02Lost you for a second. Keep going.
Adam:00:43:05Yup. Yeah. So, you were saying that a cluster of… You want to make sure that you’re seeing whether or not another asset is additive from an orthogonality standpoint to a cluster.
David:00:43:20To the rest, right. So…
David:00:43:22Let’s say, there’s 10 assets that you’ve been taught that are really interesting and compelling and should be in every portfolio. I challenge you to put them into my software, and look at the mimicking portfolio tracking error, and tell me if any of the tracking errors are very low for any of the assets. So, I think the classic example that I give in the book is high yield bonds. They have a lot of duration. There’s a lot of sort of, equity type credit risk. And so, the mimicking portfolio tracking error of high yield bonds when you also have equities and duration in the portfolio, can be really low. And so, it becomes less compelling and even if you can justify it, it’s the question of will the optimizer freak out. And so, to empower people to actually run an optimizer for a given set of risk parameters and a set of assets, we just want to avoid those things. So, we want to keep it very high level to avoid that craziness.
Adam:00:44:21Right, so the example used in the book, for example, is: you’ve got typical mean variance optimization, you’ve got a market cap weighted stock portfolio, and you’ve got a small value stock portfolio. And because they’re so highly correlated, with a very small change in expected mean you go from holding 100% cap weighted to 100% small value, right? And so obviously, this is a well-known challenge with optimization. And I’m wondering, are you familiar with Kritzman’s work on… Because what’s interesting about this, right? People talk about optimization being an error maximization process. And that is absolutely true in weight space, for the reasons that you described, right? So, the weights themselves are extremely fragile.
David:00:45:09Right, right. I know, but not returns wise.
Adam:00:45:12But the frontier is actually not fragile at all, right? It ends up being virtually identical, even though you’re completely switching the asset you’re holding. The actual mean variance character of the portfolio is very consistent, right? Because you’re not changing the portfolio character, the individual constituent characteristics very much, right?
Adam:00:45:36So, there’s that point. And then the other point is, I know that your method of identifying whether an asset co-moves with other assets accounts for more than just correlation, right? You’re also accounting for co-skewness and co-kurtosis, which is great. What you’re not accounting for I think, is the difference in means. Right? So, imagine you’ve got a small value portfolio that… it doesn’t really change the correlation or the co-kurtosis or the co-skewness, but it has a very substantial… it can have a very large difference in mean over extended periods. How do you address that in the optimization?
David:00:46:29I don’t have a good answer to that. It’s a good question. I like it. I’ll have to get back to you on that.
Adam:00:46:37So, one way that we’ve attempted to address that is by just… So, you’ve got this optimization, which is a returns-based optimization; you’re optimizing to a custom utility function. And so you’re finding, using the actual empirical distribution, what combination of assets empirically maps most closely to this utility function, right? And you actually go through a process of resampling in the book. So, I was wondering, why didn’t you just take that one step further, and actually run a bootstrapped optimization? So, you randomly draw row-wise returns for all the assets. You optimize, and now you’ve got an optimal weight vector. Then you do this again and again and again, 1000 times. Then you’ve got 1000 different optimal weight vectors that account for the true distribution of not just variances, covariances, co-skewness, and co-kurtosis, but also differences in means over time. So, you actually don’t need to do this dimensionality reduction in the beginning where you’re weeding out assets. You can actually leave all the assets in there, and then just resample the optimization process, and take the average of all the optimal weights, which is kind of like, sort of like what…
Adam:00:48:08… does but he uses the multivariate distribution for resampling. And instead, you’re just using an actual bootstrap of the empirical distribution.
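The row-wise bootstrap Adam proposes can be sketched directly: resample whole months (rows) with replacement, which preserves each month's cross-asset co-movement, optimize on each resample, and average the resulting weight vectors. This is an illustrative sketch, assuming the same long-only, fully-invested setup as before, with log utility standing in for the behavioural utility function.

```python
import numpy as np
from scipy.optimize import minimize

def bootstrap_optimize(R, utility, n_draws=1000, seed=0):
    """Row-wise bootstrap of a T x N monthly return matrix: resampling
    whole months keeps co-skewness/co-kurtosis across assets intact,
    while the draws capture uncertainty in the means. Returns the
    average of the per-resample optimal weight vectors."""
    rng = np.random.default_rng(seed)
    T, N = R.shape
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, 1.0)] * N
    weights = []
    for _ in range(n_draws):
        Rb = R[rng.integers(0, T, size=T)]          # resample months with replacement
        res = minimize(lambda w: -np.mean(utility(Rb @ w)),
                       np.full(N, 1.0 / N), bounds=bounds, constraints=cons)
        weights.append(res.x)
    return np.mean(weights, axis=0)
```

Unlike multivariate-normal resampling, the empirical bootstrap never imposes a parametric shape on the return distribution, so all the higher moments the utility function cares about survive into each resample.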
David:00:48:17Yeah. Yeah, I’ll have to think more about that too. I’ve steered clear. I guess I didn’t really talk about that much in the book. But I steered clear of sort of like the … style resampling, just because of its well-known issues, where things that should be getting zero weight end up with some kind of small average weight, and there’s these weird anomalies that happen when you…
Adam:00:48:45But it is neat too, right? Because you actually do a great job, I love this about your book, you actually show the error bars on the expected asset returns, and also on the weights. And so, you see a lot of the time where the error bars are sort of between zero and 14%, or between zero and 8%, or whatever, on some asset classes like commodities or on the long/short ARP strategy, right? And so, you don’t want it to be necessarily zero. You want it to be within this sort of error bar. And so, this way, actually, you’re using the empirical distribution, you’re giving asset classes weights. Maybe they are just small weights, right? But they are consistent with the true empirical distribution of what an investor might expect going forward. And I just love it because you’re still capturing all the different moments of the utility function, right? So, anyway, something to explore.
David:00:49:44Yeah, okay, cool. I like it. I like it, yeah.
Adam:00:49:47So, what do you hope to achieve with this book, right? I mean, obviously, you’ve got this asset management business. Are you also building an advisory business, or are you still only focused on providing software consulting services to advisors?
David:00:50:05Yeah, the software… I wanted to give the software away for free, and my wife said no. She cried a little bit too, and then asked me not to give it away for free. So yeah, I built it with my own money, and I just really didn’t want to put another book out that was talking about empowering advisors and teaching advisors without actually letting them do it. They can’t go code this stuff up.
Clients and Risk Reflection
David:00:50:34And so, I absolutely wanted to empower. That was really the goal. But I also wanted to modernize, because things are starting to feel pretty stale from 1952, when modern portfolio theory was invented. So, my hope is to get conversations like this happening. You know, see if Riskalyze emails you or me or whatever, see if we get anyone thinking there. I just want to be pretty altruistic, and I want to help push things along. I have a physicist’s view on this wealth management space. And I’m just trying to contribute to help move things forward and empower advisors. I really don’t like how handcuffed advisors are in running their practice. Without the right tools and education, they just can’t do it, and they have to take these big box asset allocation models, or whatever. And so, yeah, hopefully the software will slowly evolve with time. Hopefully, with ACS Ventures, we’ll do a lot of good work on this behavioural data and how it factors into building portfolios in a smarter way, and just get people talking about some really important things. There’s a lot of advisors out there who don’t really want to be asset managers. They want to be relationship people. So, some of the heavy lifting stuff, in sort of building custom portfolios, they won’t be interested in. But maybe the risk profiling and stuff like that they will still really care about, because they want to have great relations with their clients and really build something that’s good for them. So, yeah.
Adam:00:52:26Yeah, I agree. There’s actually something in there for planners and advisors with quite a wide spectrum of different foci, right? So, there is stuff in there for planners, and we did skip over the section of the book that goes into the lifestyle risk preference, or lifestyle risk… um…
David:00:52:51Standard of living risk.
Adam:00:52:53Standard of living risk, right. I know all the planners are out there going, “This is all great theoretical hocus pocus. But what about clients with real goals?” And I want to assure you that, while this is not… I don’t spend a lot of time on the planning side. But this is absolutely dealt with very coherently…
David:00:53:12Actually, I’ve read some of your work on the planning side and saving for retirement and all the risks associated with it so.
Adam:00:53:19Yeah, yeah. That was, whatever, 7, 8, 10 years ago. But you’re right, we do have. We have lots, and I just don’t spend a lot of time on that these days. But I just want to reassure planners, who do focus more on the hard sort of financial goals side of the planning exercise, that there are some very concrete methods in the book on how to modify or moderate the utility function to accommodate very specific financial goals. And I think you do a good job of erring on the side of: if a client has enough risk capacity, that’s when their preferences become important; but if they don’t have the capacity to optimize on non-wealth-maximizing preferences, then those preferences actually can’t play much of a role. So, I thought that was very astute.
David:00:54:12Yeah, yeah. Thanks to Michael Pompian, for sort of motivating me on that. His book on behaviour and how to account for it sort of… I basically took a lot of his stuff, took some of his figures and changed the wealth axis to SLR. Just a slight change, but yeah, that’s where a lot of that came from.
Adam:00:54:32Nice. Credit where credit is due. Having written a book and written lots of papers, I know that the second it goes to publication, that you start to think of things that you would have done a little differently. Anything that you wish you had added to or maybe said a little differently or just changed in the book?
David:00:54:521000%. Let me get out my scroll. No. It actually hasn’t been that bad thus far. There is one thing, though. Reflection was pretty new to me. And the way I parameterized it in the book is not going to be how it lives in perpetuity. So, in the book, I parameterized reflection as just binary, on or off, like you have it or you don’t. And for the amount of curvature on the loss side of things, that S-shape curvature, I just used the risk aversion parameter as the curvature. And that’s not accurate, and I’ve learned that with the work we’ve done with ACS Ventures on the testing. And now that I look back at other literature, I think the signs were there. Riskalyze knows this well. The reflection is very muted in a curvature way, relative to the positive side. So, there’s a lot more curvature in risk aversion generally than there is in reflection. The reflection is kind of flattish. It’s kind of shallow, it’s not that intense. So, to just assign the same amount of curvature, if you’re expressing reflection, as you do on the positive side, is not so accurate. So, you really need the same questionnaire as you do on risk aversion, just kind of inverted for losses. And then you can measure that parameter very, very explicitly.
Adam:00:56:27Yeah, there was almost like a state change in the shape of the function when you add in the reflection. Which I thought was shocking, how much the shape of the utility curve changes.
David:00:56:42And the portfolios.
Adam:00:56:43And the portfolios. Yeah. Dramatic.
David:00:56:45100% equities. And so, that should have been my flag before finalizing the publication. But no, I think that’ll soften. So, that’s like, I think, just an important lesson learned from the data.
Adam:00:56:58Absolutely. Nice. Great. Well listen, I really appreciate you. First of all, thanks for writing a book. I really enjoyed it, and I definitely would recommend it for more quantitatively oriented planners and advisors. There is a fair amount of technical material in the beginning. You do a really good job of sort of stepwise mapping it out, and what the different terms are and why they’re important and how they function. And certain terms are positive, and certain terms are negative. And I think you did as good a job as you could have done with that. There is a little bit of up-front lift there, in terms of the math. But for anybody who’s remotely quantitatively minded or just wants to learn a little bit more about how to merge the twin concepts of rational expectations and behavioural economics, I think you do a great job with that. And then, lots of really good material to use in a bunch of different dimensions of the practice. So, thanks for writing it. And thanks for coming on the show and explaining it and explaining your motivation and telling us a little bit about where you want to go with this.
David:00:58:06Yeah, thanks for having me. It was fun. Anytime.
Adam:00:58:10Alright, awesome. Well, have a great afternoon and thanks for listening.