Feature Engineering, Strategy Mining, Economic Value and The Profitability Rule with Michael Harris.

Michael Harris started trading rates and derivatives 30 years ago. He is the Founder of Price Action Lab and the developer of the first commercial software for identifying parameter-less patterns in price action 20 years ago. In the past 10 years he has worked on the development of DLPAL, which is a software program designed to identify short-term anomalies in market data for use with fixed and machine learning models.

In this wide-ranging interview we discuss:

  • Back testing and generating trading signals on an Atari Console
  • Lossy Compression and Forecasting
  • Data Mining – microfeatures, micropatterns and trend following
  • The hit rate, payoff ratio, profit factor and the Profitability Rule
  • Data Mining Bias and Data Snooping
  • The Seven Weekly Strategies
  • The direction of probability 

The conversation, hosted by Adam Butler of ReSolve Global*, was interesting and enlightening, as Michael provided new insights into emerging financial technologies. We hope you enjoy it. Thank you for listening.


Mike Harris
Founder of Price Action Lab

Michael Harris started trading commodity and currency futures 30 years ago. He is the developer of the first commercial software program for identifying patterns in market price action. In the last seven years he has worked on the development of software for hedge funds that identifies features in time-series for use with machine learning models.

Michael holds two master’s degrees, one in Mechanical Engineering from SUNY at Buffalo with emphasis in control systems and optimization and another in Operations Research from Columbia University with emphasis in forecasting and financial engineering. Michael has worked for several years in the field of robotics and then in the financial sector where he developed a bond portfolio optimization program and strategies for trading futures and equities. He has also worked as a long/short equity and ETF trader for a Swiss-based hedge fund.

He is the author of the books “Short-Term Trading with Price Patterns” (1999), “Stock Trading Techniques with Price Patterns” (2000), “Profitability and Systematic Trading” (2008) and “Fooled By Technical Analysis: The perils of charting, backtesting and data-mining” (2015). Mike has also published many articles in trading magazines and in his blog, Price Action Lab Blog.

TRANSCRIPT

Adam:00:00:00All right. I’m here with Mike Harris, currently of Price Action Lab. Mike has many years of experience in trading markets, machine learning and investment strategy design. I’m excited to hear how Mike found himself in this grizzly business, the path that he took to found Price Action Lab and some of the lessons that he learned along the way. Before we get started, Mike, maybe introduce yourself and give us a little bit of your background.
 

Backgrounder

Mike:00:00:35Okay, thank you for the invitation and nice meeting you Adam and Ani. Ani, right? My name is Michael Harris. I started trading about 30 years ago actually, I’m old. I was a student at Columbia and I was working for AT&T in robotics for about 8 years. I designed high speed robots for electronics assembly. I have a patent with AT&T, and at some point I was working on my PhD on large space structures. At some point I really got tired of the educational environment after so many years. I already had two Masters and I was working on my PhD. I was working for AT&T and I thought to make a transfer to the financial world. After the 1987 crash, the markets became popular, because before the crash no one cared. Apart from value investing, as you know, some CTAs and some trading in stocks, it was almost non-existent, and they traded futures, currencies and commodities. I took another degree at Columbia and I studied financial engineering, portfolio management, forecasting. It was my favorite subject. And I joined Wall Street. My first job was a bit boring because it was the money markets desk, and I had to calculate the gap of the desk every day, write a computer program, assets minus liabilities, deposits minus loans and how much they had to finance. I had my Bloomberg terminal, and back then Bloomberg was in Fortran, if you know.

So I had my Bloomberg terminal and I was also in charge of looking at basis trading in bonds, the difference between the cash bond, the latest cash bond that was trading, and the futures. There was some arbitrage there, some money to be made for big portfolios, for big pockets. And I was looking into that, that’s how I started. Basically I was not a retail trader or someone who read a book about technical analysis. Next to me were the Forex guys, they would be swearing and throwing telephones at one another all day, and I learned so many things about Forex trading. I still have nightmares about those Forex traders.

Adam:00:03:35I believe you.

Mike:00:03:37Brave people, and many of them women too at that time. 50% were women, very astute, very smart, they would trade billions every day, face value of course. But most of it was commercial transactions for companies that were buying overseas and had to get the currency. It was getting boring. I thought I had learned the latest portfolio management theories, fixed income. I started in fixed income. So I moved to developing algos for optimization of portfolios in fixed income. I used a hill climb algorithm for linear programming optimization of portfolios to create…you’re familiar with all this I’m sure, to create index tracking portfolios. You go for minimum duration or maximum yield or whatever. If you want to track some arbitrary portfolio, I could do that. That was a very good algorithm, I still have it today. It runs in DOS. I had to learn everything from …, books. I had to learn everything about duration and convexity and program these things for floating rate notes, for any conceivable zero coupon. But I really liked it. In this program you could put your portfolio of bonds, maybe 100 bonds, and say, now minimize the duration or maximize the yield. And then the program would tell you which bonds to sell and which bonds to buy from the universe. And that’s how I started.

Introducing Constraints

Adam:00:05:26 I’d love to hear more about that. So for example, you could also introduce some constraints. I’d like to maximize yield, subject to the portfolio having a duration less than five or something like that.

Mike:00:05:40Not only that, exactly. But not only that, also constraints on individual bonds, like, say, because of the credit rating of a corporate bond, the department said we are not going to put more than 2% of the portfolio in it.

Adam:00:06:00So box constraints on individual bonds and sectors of bonds, that kind of stuff.

Mike:00:06:04Exactly. Now, I came to understand everything about optimization, because it was a huge linear programming optimization subject to multiple constraints, and the nice thing about the hill climb is that you don’t need an initial feasible solution. That’s why I chose it, and I wrote it myself from scratch, and it could converge to a feasible solution. It was beautiful. I mean, bonds are so structured, mathematical, orderly. I mean, equations for …, so nice. If you go to the stock market, it’s chaos.
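The kind of optimization Mike describes can be sketched with a toy hill climb. The bond universe below is hypothetical (names, yields and durations are invented), and this is a sketch of the idea rather than the actual DOS program: maximize portfolio yield subject to a duration cap and per-bond box constraints, starting from equal weights and accepting only feasible, improving weight shifts.

```python
# Hypothetical bond universe: (name, yield %, duration in years).
BONDS = [
    ("T-2028", 4.1, 3.5),
    ("T-2033", 4.4, 7.2),
    ("T-2043", 4.6, 12.8),
    ("Corp-A", 5.3, 6.0),
    ("Corp-B", 5.8, 8.5),
]

def portfolio_stats(weights):
    """Weighted portfolio yield and duration."""
    y = sum(w * b[1] for w, b in zip(weights, BONDS))
    d = sum(w * b[2] for w, b in zip(weights, BONDS))
    return y, d

def hill_climb(max_duration=8.0, max_weight=0.6, step=0.05, iters=500):
    """Greedy hill climb: shift weight between bonds while the move
    improves yield and keeps the portfolio feasible."""
    n = len(BONDS)
    w = [1.0 / n] * n                      # start from equal weights
    best_yield, _ = portfolio_stats(w)
    for _ in range(iters):
        improved = False
        for i in range(n):                 # take weight from bond i...
            for j in range(n):             # ...and give it to bond j
                if i == j or w[i] < step or w[j] + step > max_weight:
                    continue
                trial = list(w)
                trial[i] -= step
                trial[j] += step
                y, d = portfolio_stats(trial)
                if d <= max_duration and y > best_yield + 1e-12:
                    w, best_yield, improved = trial, y, True
        if not improved:
            break                          # local optimum reached
    return w, portfolio_stats(w)

weights, (y, d) = hill_climb()
print([round(x, 2) for x in weights], round(y, 2), round(d, 2))
```

In this simplified version the equal-weight start happens to be feasible, which keeps the sketch short; the appeal Mike mentions, that a hill climb can work its way from an infeasible start toward a feasible solution, would need a penalty term on constraint violations.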

Adam:00:06:44It is an absolute mess. Absolutely.

Mike:00:06:47And now, let’s make a quantum jump. You have all these guys and analysts that treat bonds like stocks. All they say is yields went up, bond holders could be destroyed. Come on, you guys. I did immunization of portfolios, immunization. The insurance companies, they hold the bonds to maturity because they have liabilities, pensions to pay. They never sell the bonds, they never lose money if the yields go up. What do they do? They immunize with futures, swaps. Now there are infinite products, they can use structured products to immunize. Immunization is like a vaccine for interest rates. And still, someone told me the other day that bond holders lose.

Adam:00:07:43Not for the major bond owners, they don’t.

Mike:00:07:46The United States government has to pay the principal when the bond matures, except if it’s callable, that’s another thing. So I worked on this, it was nice, mathematical. And then this is what changed my whole course. I had a classmate at Columbia, a very wealthy woman, very smart. Her husband approached me and said, look, I know you work on Wall Street, can you help me back test a trading system? That was ’90 to ’91.

Back Testing on Atari

Adam:00:08:24Now, had you been back testing bond trading systems, or were you just working on arbitrage problems and portfolio optimization problems until that point?

Mike:00:08:31I was not back testing them. That was the next step. I was talking with an investment banker at Salomon Brothers, as it was called then, about going there to actually back test these things. And I didn’t get to that, because I met this person and he said, I have a good idea and we have the money, can you help me back test an idea about trading currency futures? We’re going to make a contract and you’re going to get 20% of the profits. I said very good idea, 20%. But the catch was that in the beginning, we were going to put up the money to test it. So I back tested this idea. It was like trend following, he had this idea about trend following, and it was not working well.

Something interesting, I started writing all these Basic programs to back test, and then a program came on the market, it was already on the market, called System Writer Plus, to back test. The company was called Omega Research. Do you know what the company is called today? TradeStation. It was the Cruz brothers. They came up with the first program to back test in 1990 and we bought two copies, we were among the first customers. It had a huge database with futures data, very well written. You had EasyLanguage, almost the same as it is today, give or take a few changes. We bought this so we could back test fast without having to write the code ourselves. I started back testing and I found a similar system that looked good.

Adam:00:10:27So using primarily trend as your features?

Mike:00:10:32Yeah, trend following. I will get into what happened to trend following during that time, there was a regime change happening during that time. At that time … Simons was just beginning. Actually Simons had a lot of problems during that time. So I tested it and I said, okay, we’re going to put the money down…Anyway, I forgot my disclaimer: nothing I say in this podcast is financial advice, for everyone to know. I guess it’s obvious, because I’m not a financial advisor. So we put down $200,000 in an account at an investment bank in the World Trade Center, and we started trading. It was February.

Adam:00:11:26Of 1990.

Mike:00:11:261991. And what did I do? I was traveling because I had a job, and how would I keep up? I bought an Atari computer like the one they had in Terminator, and I downloaded the program in Basic to the Atari. Now, the company thought it would not work, I called them. They said it will not work, but it did. So I had my trend following program in the Atari and it still works today, I still have it. I would call the broker at the close when I was traveling, and he would give me the four closing prices of the four major futures, British Pound, Swiss Franc, Yen, and the Deutsche Mark. I would plug them in and it would tell me two things: if I had a new order for the open and where the price should go at the next close to have a … So I would feel calm, just projecting the moving averages and the indicators and everything, it’s obvious. A lot of people do that.

Adam:00:12:34 I want to press pause on this because I think it’s just absolutely remarkable. So you’ve got this TradeStation that would normally run on, what was it built to run on back in 1991?

Mike:00:12:42DOS.

Adam:00:12:42DOS okay, so like Windows PC, and you were able to install TradeStation on an Atari game console?

Mike:00:12:51I back tested the strategy in System Writer Plus, the product of the Cruz brothers, and then I copied the algo in Basic and I downloaded the code to the Atari.

Adam:00:13:12So you could travel and it’s almost like the Atari was like a modern laptop and you would just plug it into your TV in the hotel room or something and bring a keyboard with you and you were able to travel with this makeshift laptop and run trade signals from your hotel.

Mike:00:13:24Yeah, people thought I was a lunatic.

Adam:00:13:28That’s fantastic. I love that.

Mike:00:13:31By November, we had made 60%. He sits down and tells me ‘we have a lot of connections, we can raise a lot of money’. Of course you’re going to make less because there are many brokers and everything. We are not going to make 20%, we are going to make less, but it’s going to be a lot of money. I said wait a second. I was one of the good students in probability and statistics at Columbia University. The class started with like 50 people inside, a graduate probability and statistics course. After 15 minutes there were only three, four people left in the class.

Adam:00:14:20They saw the writing on the wall in terms of the mathematical rigor and the concepts and realized they weren’t prepared.

Mike:00:14:24They walked out, they went to the cafeteria. It was nuts. The course was difficult, but it was one of my favorites, probability and statistics. People couldn’t take it. They just dropped out. The teacher came inside and said look, I have to tell you guys, no one will get an A, a few will get a B, most will get a C and many will get a D in this course. So I tell him, wait a second, are we going to manage other people’s money? I’m not going to do that. I don’t want to manage any money. I tell him, look, we were lucky, 100%. It’s not the strategy, we just happened to be lucky, because you guys are nice people, you have positive energy, and we were lucky, and it has nothing to do with the strategy, with the moving averages, the RSI, the MACD and all this. He said, no, it’s a very good system. I said, look, let’s do a Monte Carlo simulation. This can produce about 80% drawdown in the future. He said no way, I can use stops. I said, I’ll take my money and go to California to meet my girlfriend, because I was going to stay there forever.

Bootstrap With Replacement

Adam:00:15:55I just want to make sure we’re clear. You did a Monte Carlo analysis of the live returns of the seven or eight months’ live returns.

Mike:00:16:05Bootstrap with replacement. I asked my professor first at Columbia. I went to see him. I parked outside of the university and someone broke into my car. Monte Carlo simulation inside, breaking into the car outside.

Adam:00:16:22So you had like 200 bars of data and you just wrote a bootstrap, resampling the 200 bars of data with replacement?

Mike:00:16:30Yes.
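The resampling Mike describes can be sketched as follows: draw daily returns with replacement from the observed series, rebuild an equity curve, and look at the distribution of maximum drawdowns across many simulated paths. The return series here is a synthetic stand-in for the roughly 200 live bars, not the actual trading record.

```python
import random

def max_drawdown(returns):
    """Largest peak-to-trough loss of the compounded equity curve."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        worst = max(worst, 1.0 - equity / peak)
    return worst

def bootstrap_drawdowns(daily_returns, n_paths=2000, seed=7):
    """Resample the observed daily returns with replacement and
    collect the max drawdown of each simulated path."""
    rng = random.Random(seed)
    n = len(daily_returns)
    return [max_drawdown(rng.choices(daily_returns, k=n))
            for _ in range(n_paths)]

# Hypothetical stand-in for ~200 live daily returns at high volatility.
rng = random.Random(1)
live = [rng.gauss(0.003, 0.025) for _ in range(200)]

dds = sorted(bootstrap_drawdowns(live))
print(f"median drawdown {dds[len(dds) // 2]:.1%}, "
      f"95th percentile {dds[int(0.95 * len(dds))]:.1%}")
```

The tail of this drawdown distribution is what justifies a statement like "this can produce about an 80% drawdown in the future" even when the live track record itself shows no such loss.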

Adam:00:16:32Yeah. So what kind of volatility were you running at? 40%, 50% vol?

Mike:00:16:36We were overtrading, we were taking big risks. And I said, if you drop your risk down to a reasonable region, 10%, 11%, the return drops too. And he goes, hey…he thought he was like the God of trading. If you drop it down to 2%, better to put your money in a zero coupon. The rates were like 7-8% at the time. Put it in a zero coupon, get the discount, take your wife and go to the Bahamas.

Adam:00:17:14So the reason that was the case, imagine, I’m just going to make numbers up, but imagine it’s 60% return at a volatility of 40%. If you were to take that down to a reasonable level of volatility, then the expected return on the portfolio was worse than what you would get on a 10 year treasury bond or a high grade corporate bond.

Mike:00:17:36Yeah, something like that, maybe better. But then you can’t justify it to investors, they have high expectations. And I said, no, I will take my cut and take off. I don’t want the 20%, I don’t want to manage money. So I take my money and I go to California. I said, here is the system, I give it to you for free. It was his idea originally, I fixed it. Take the system and I wish you the best of luck. I left for California; the job market there was not good for financials.

Adam:00:18:181991, 1992, there was a major commercial real estate collapse, the banking sector was in rough shape and it was a very bad climate if you were looking for work in the financial sector.

Mike:00:18:29Anyway, I liked it in California, I lived in Santa Monica. I had my system there. I was trading bond futures for myself. I loved bond futures. After two years, of course…life in California, you either become a surfer or you leave, you go back to real life. I returned to real life. And you know what happened to the guy? Someone told me he almost lost all the money.

Adam:00:19:00Were you tracking the performance? Like were you also able to see what the portfolio would have done?

Mike:00:19:05Yeah, he lost the money, but had he stayed in, he would have recovered. The drawdown was like 50%, 55%, 60%. The investors told him stop, at least we get half of the money back. So he lost the money and he didn’t continue. Then I worked for a company forecasting freight rates. That was my next job, the market was not very good. I had a nice job and they also traded commodities. So I was doing my analysis and everything, and this is how I started, and then I realized that trend following was dead.

The Death of Trend Following

Adam:00:19:50So how did you come to that realization?

Mike:00:19:52It’s not my realization, because I’m a small guy. If you listen to Jim Simons’ interview at TED, the famous Simons, I have an article in my blog. He says trend following was good in the 60s and in the 70s, then by the 80s it was not very good. And he shows an example of a chart of a commodity and he says, I could use a 25 day moving average and make money in the 80s, you could use a 20. But then that stopped because it became overcrowded. All the smart guys realized it.

Adam:00:20:37How did you realize that? You weren’t listening to Jim Simons?

Mike:00:20:41No, that’s recent.

Adam:00:20:43How did you come to that conclusion at the time? What analysis were you doing and what type of thinking were you applying that allowed you to recognize?

Mike:00:20:50It was just a feeling. Actually, I wrote an article in Active Trader in about 2001, that you can either go for trend following, or you can realize the same returns with shorter trend trades. So at that time, I saw the drawdown coming, also from simulations and everything, from testing trend following systems, and what happened. In the past, they worked because serial correlation in the returns was very high, and when you have high serial correlation, like 0.5, 0.6, everything you do with moving averages, even fast moving averages, works. For instance, the TA people, classical TA, they got fooled by the serial correlation and they attributed forecasting power to patterns.

Adam:00:22:02I can see that, because classical TA, obviously you’ve got a lot of stuff like cup and handle patterns and double bottoms and head and shoulders and all this stuff. But a lot of it is just drawing trend lines. I guess the idea was they were observing serial correlation in the form of these drawings, these trend lines, and assuming that the trend was going to persist. Is that where you’re coming from there?

Mike:00:22:28It’s more subtle. What happened is the following. TA was an effort at lossy compression. Lossy compression is when you try to describe a stochastic process with an alphabet. The alphabet is the patterns: the head and shoulders, the pennants, the trend lines, everything. And this is a compression scheme for the purpose of forecasting. You try to compress the data and you lose information about volatility and every other thing, but this is not what is important. They talk about confirmation of the pattern, because in TA there is this thing that it has to be confirmed. You have a head and shoulders and it breaks the neckline. And they say if it breaks the neckline, the market will go down. Yes, but if you have serial correlation, what does it mean? It means that there is high probability that lower prices are going to be followed by lower prices. So the alphabet is redundant.

Actually, they were arbitraging serial correlation. And this is why serial correlation disappeared by 2000. Because TA became so popular, it was so easy to teach and learn, everybody started using it. People made computer models to use it, like pattern identification. It would tell you when a head and shoulders was forming, or a pennant, or a trend line, and the serial correlation disappeared from the market. That was the main driver of this transformation.

Adam:00:24:20You’ve mentioned that before I remember we’ve discussed this back and forth informally over Twitter or what have you. And I’ve certainly observed that over the last 10 years, I was a little confounded with your statement that you observed this in 2001. Because I think if I recall looking at the trend following indices that 2001 to say late 2008, diversified trend following anyways was actually quite good and it was certainly a good portfolio diversifier. But is that not what you were observing, or are we just talking past each other?

Mike:00:24:53I didn’t observe it. It was my feeling to go with data mining, for short term anomalies in price action, rather than trend following. It was my feeling. And I also found it intellectually stimulating and challenging to go with that, rather than waiting for the moving average cross and then sitting there for 275 days until I get the exit signal. So what happened with TA is, by the mid 90s there was no serial correlation, and after that it became negative correlation. And recently we have extreme anti-correlation in the stock market returns, and mean reversion works. I’ll give you a statistic which you may find interesting. In the last 252 days, almost 70% of down days in the S&P 500 were followed by up days. That’s an all time record. I have data since 1940. This is buy the dip. And trend following of the old kind, the Jim Simons kind, does not work in this environment. What you have to do is increase the lag. And that’s why modern trend following talks about the 12 month moving average. The problem is that you open yourself to large drawdowns. If the moving average lag is large, by the time the system responds, you already have a large drawdown.
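The two quantities Mike keeps returning to, lag-1 serial correlation and the fraction of down days followed by up days, can be computed on any return series with a few lines. The demonstration below uses a synthetic, negatively autocorrelated series as a stand-in, not actual S&P 500 data.

```python
import random

def lag1_autocorr(returns):
    """Lag-1 serial correlation of a return series."""
    n = len(returns)
    mean = sum(returns) / n
    cov = sum((returns[i] - mean) * (returns[i + 1] - mean)
              for i in range(n - 1))
    var = sum((r - mean) ** 2 for r in returns)
    return cov / var

def down_then_up_rate(returns):
    """Fraction of down days that were followed by an up day."""
    pairs = [(a, b) for a, b in zip(returns, returns[1:]) if a < 0]
    return sum(1 for _, b in pairs if b > 0) / len(pairs)

# Synthetic AR(1) series with negative serial correlation, one year
# of daily returns -- a stand-in for the mean-reverting regime.
rng = random.Random(0)
rets, prev = [], 0.0
for _ in range(252):
    prev = -0.4 * prev + rng.gauss(0, 0.01)
    rets.append(prev)

print(round(lag1_autocorr(rets), 2), round(down_then_up_rate(rets), 2))
```

On a series like this the lag-1 autocorrelation comes out negative and well over half of the down days are followed by up days, which is the signature of the mean-reverting regime Mike describes; on a 1970s-style trending series both numbers would flip.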

Adam:00:26:40My own understanding or thinking on this is that the longer the moving average, the more you’re capturing the long term drift, and the less you’re capturing whatever serial correlation dynamics you think you were seeking. It’s like capturing the drift with stop losses; that is the character you’re getting with longer term moving averages. And the shorter you go, the more it begins to resemble a straddle type strategy or some kind of long volatility type strategy. But with the longer look back, it ends up mostly just capturing the drift. Would you generally agree with that characterization?

Mike:00:27:13Well, in the past it worked, you see. Now it doesn’t, so you have to go with what you’re saying anyway, the long moving average. And the problem is, look how fast the corrections are, like the COVID correction. With the moving average, if you are using the monthly moving average and you are exiting at the next month’s open, it depends whether that open will be right at the moving average or much below the moving average. The difference can be like a 15% drawdown.

Adam:00:27:51We published all kinds of research on that. I know you have too. It drives me crazy.

Data Mining

Mike:00:27:55You quantify it very well, I’m just doing some brief analysis. So I started doing data mining, trying to identify short-term anomalous price action.

Adam:00:28:10What year did you begin to work on the data mining?

Mike:00:28:14About ’95.

Adam:00:28:16Wow. ’95, great. And then so what were you using and what was the framework you were using at the time in order to find features? What markets were you looking to trade and how are you thinking about the experimental design?

Mike:00:28:29I did some analysis and I found out that in most markets you have to go to like 50 … or longer term timeframes. The noise was too big, the signal to noise ratio was going to zero below that, so you have to go to like 50, but I prefer the daily timeframe, end of day. So I started testing some neural networks and machine learning algos. I used support vector machines and similar algorithms, but I couldn’t find anything that would give me more than I could understand.

Adam:00:29:11And that was important to you.

Mike:00:29:13It was important, I wanted to see the code and understand why this particular strategy should work.

Adam:00:29:23What features were you using in these models?

Mike:00:29:27There is a hierarchy of features. You start with the open, high, low, close, and you move up from the primitive features of the open, high, low, close. Anyway, there is nothing else in price action. If you want to deal with price, there is nothing else. Anything else is an abstraction, is a hypothesis. If you try to look for cause and effect, there is none. There is no cause and effect in price action.

So you start building features from that, identifying the features from the data, and this is a very dangerous practice as opposed to a unique hypothesis, like rates go up, and I expect bonds to go down. That’s a unique hypothesis. Of course, it’s not unique in the sense that you made it first; many have made it. The problem with data mining is very deep, besides the deep learning. First of all, when you data mine, you have your own bias because you’re trying to identify features. But at the same time, many other people are doing the same thing. So I have come up with this idea of collective data mining bias: you may identify a good strategy today, and in the next five minutes some other person who identified it arbitrages out the return.

Adam:00:31:08There’s an element of simultaneous discovery there. So you’re assuming that if you’re looking at things that are triggering your imagination and causing you to think about potential explanatory variables, then in all likelihood many other researchers with the same objectives are seeing similar articles and pieces of data that are triggering similar thoughts and therefore you’re all going to eventually and actually rather quickly converge on the same types of models?

Mike:00:31:36Exactly. You made a very nice abstract for a paper. So what is the key? The key is to be not only unique in the hypotheses you generate, but also unique in the way you generate them.

Adam:00:31:59So say more about that. I’m very curious about what you mean there.

Mike:00:32:02You have all these companies, the hedge funds, they get smart graduates out of school, they play machine learning like Mozart plays the piano. But they all use the same techniques, they get the same libraries and they apply them to the same data, they all read the papers that they all write, and then you have the same production. And essentially, I have shown in some articles that the very complicated methods they use replicate a fast moving average, like a three or four day moving average. Low pass filters, they replicate. I mean, they go through this extremely complicated analysis and application of machine learning, and a five day moving average could do the same thing.

Adam:00:33:05They just discovered basic convolutions.

Mike:00:33:08Yeah, convolution. So I thought I had to develop my own proprietary machine learning algo that, besides the identification of features, would work on principles about how the market operates, microstructure. So I did that. I sat down and it took me five years to develop this, working with a person from a university, a very nice guy who knew C++ very well. I had a diploma from AT&T, where the C language was developed, but I was already behind.

Adam:00:33:50You had been programming in Fortran.

Mike:00:33:53No, C language.

Adam:00:33:55 But before that, when you were building your back testers, I guess you were using TradeStation.

Mike:00:33:59Before that I was using Basic, it was faster. Just to program moving averages you only have to add the numbers.

Adam:00:34:06The simplest conceivable floating point type computation. Yep.

Mike:00:34:09There was nothing mysterious. So we worked together, I was telling him what to do, and I came up with this algorithm and I built this program. It was the first one, and it had the following features. It could identify these micro patterns and also generate code for TradeStation and MetaStock, you remember MetaStock?, and some other platforms automatically, which you could take, back test and trade if you want. And I have an article about it in Active Trader Magazine and in Stocks & Commodities, like in 2002. Now, many people told me, why didn’t you patent it? If you patent, you have to disclose the algorithm. And then how do you know who is using it? It’s not a car. It’s not a Tesla driving down the road that you can stop and say, who are you? You are driving the car I developed? You wouldn’t know who’s using it. How can you enforce it if 1 million people take the algo from the patent application? It’s a trade secret.

Adam:00:35:29I’ve always wondered about that, because there are some. Choueifaty has patented the ‘maximum diversification’ algorithm and Michaud has patented the ‘resampled efficient frontier’ and stuff like that. But when you publish it, then you no longer control it. Now everyone is using it against you.

Mike:00:35:50How do they know what a portfolio manager in China is trading? And when are they going to find out, and how? Models are underdetermined by prices; you cannot go back from market behavior and say this model produced it.

Adam:00:36:08It’s like the metaphor that Nassim Taleb uses to describe the narrative fallacy: if you’ve got an ice sculpture and the ice sculpture melts and you’ve got a puddle of water, it’s like trying to infer the shape of the ice sculpture from the puddle of water.

Mike:00:36:26I’m glad you mentioned Nassim, because in every conversation, he should be mentioned at least once.

Adam:00:36:36I’d love to hear more about…I don’t know if you’re willing to share, which is totally fine. We obviously run models, and we’re not willing to share all of the details either. But I am curious if you can say more about what you mean by, and I may have gotten the word wrong, micro features or micropatterns or something.

Mike:00:36:52In every timeframe, for instance the daily timeframe, the maximum is nine bars. Some of them look like traditional ones, like island reversals and head and shoulders, but some are quite strange.

Adam:00:37:08So it’s like the idea of I just want to make sure I’m understanding. So you’ve got open high low close data, you’ve got nine bars and it’s almost like the number of potential permutations of…

Mike:00:37:20No, that’s the naive way. You can do this, but it’s the naive way to do it. The way I’m doing it, it has to be driven by an economically viable metric. Now, I can talk more, but it has to be, because permutation is going to give you billions of combinations, and the data mining bias when you choose is huge. Because you are choosing survivors, even in the out of sample. If you test billions or trillions of permutations, even if you test out of sample, you are going to find 5% of them that work. But if you do a Bonferroni correction, a data mining bias correction, your significance is zero.

Adam:00:38:14With a Bonferroni correction, yeah. Because there’s such a large sample of runs, there’s got to be some economic causality, or potential economic causality, that you’re looking for?

Mike:00:38:25No causality. There has to be an economic underpinning to these. And the first way to minimize data mining bias is to use fewer features. I see these people and they use like 150 features: GDP, price to earnings, price to book, the return for one day, the return for five days, the return for the year. You can think of many, and the more indicators you use, the higher the data mining bias, because it is easier to find a combination of things that works, even in the out of sample. So the problem with data mining bias and the features extracted from the data is to have a sound process of extracting them that makes economic sense. The whole idea of machine learning is that the features have to have economic value, because the emphasis in the literature, as you know, is on advanced algos. ‘Deep learning’. We are now at the deep learning stage; we started from binary logistic regression and we moved to deep learning, but this is not where the value is.

I found out that if the features have economic value, even a binary logistic regression will give good results. And this is the key. So the whole idea, as you know, is feature engineering and construction. If your features have value, you are okay. I have called this ‘machine learning cannot find gold where there is none’: it doesn’t matter how much you data mine noise, you’re going to end up with noise. So at the top level, you have to choose your markets carefully, and you have to choose your process, your machine learning algorithm, to be based on a process that has economic value, that is compatible with the market, not just an arbitrary neural network that gives you an abstract model you don’t know the meaning of.

Economic Value

Adam:00:40:57So can you give me an example of maybe a simple or obvious pattern or feature that has economic value to help cement the concept?

Mike:00:41:08Economic value. Let me try to qualify that. Conditioned on the data we have, you never know if in the future it will have economic value. And this is the problem, because if you knew that, you’d be printing money.

Adam:00:41:26When you say features that have economic value, my immediate assumption was that you were leaning a little bit into the more traditional empirical finance literature, almost like a factor lens, where you’re saying you would expect higher risk securities to produce higher returns because there’s economic value in having a higher required return, or some other economic intuition that would explain why a feature should explain returns.

Mike:00:41:54Perfect example, let me take it down to micro features. If I see a feature that is mean reverting in a market with strong negative serial correlation, I know it has some economic value. As I said before, 70% of the time down days are followed by up days in the S&P 500. So when people backtest moving averages, they assume positive autocorrelation. And it just happens that they work because, now we’re jumping, they have been very lucky, and the US markets have recovered from large bear markets or corrections with good V bottoms. Now, do you know what will happen to trend followers if the US market goes into a five year sideways chop?

Adam:00:42:57Trend following on US equities will obviously suffer.

Mike:00:43:01Approximately 40% losses. And you can see an analogue in the emerging markets: from 2011 to 2016 they moved sideways, and I have examples on my blog. The most popular trend following models lost from 30 to 50% in that period, because in a chop you don’t go anywhere. Trend followers in the US market have been very lucky because of the large rebound.

Adam:00:43:41You’re describing more long only or long flat trend following.

Mike:00:43:47Long/short does not work anymore.

Adam:00:43:50So it really is like you’re capturing the drift and you’ve got a stop on being long invested in the index, by virtue of using a moving average to exit.

Mike:00:44:00Yeah, superimposing yourself on a geometric Brownian motion. The log of the prices is a random walk plus drift, and you just pray essentially that there is positive … that there is not a crash. It’s all praying, every day, praying and praying. Now, let me give you this, if you have time.

Adam:00:44:29Of course, please, I’ve got lots of time. I’m finding this really fascinating.

Mike:00:44:33From 1960 to 1999, if you went long equities when the return was positive, stayed until the return went negative, switched to short when the return went negative, and then repeated this alternating strategy, you could have made a 30% annualized return before commissions.

Adam:00:45:08Over what time horizon when you say the market went up or the market went down?

Mike:00:45:13 1960 to 2000, 40 years.

Adam:00:45:15When you say that the market went up, you mean it went up over the last day?

Mike:00:45:20That’s because the serial correlation was high, and smart people showed that, and that’s why the serial correlation was arbitraged away. Now, the same system from 2000 to 2020 has generated a minus 15% annualized return, because the serial correlation switched to negative. Talking about trend following now in the CTA spectrum, where you’re trading commodities, it’s a different thing, because in commodities there are always trends, because of economics. But the problem there is the space got too crowded, and they are not able to get alpha, only beta strategies. And in the last 10 years they have generated a 2% annualized return, because it’s too crowded, limited capital. And how much money can you…? Thousands of CTAs. That’s why many of them have switched to equities now; they have liquidity constraints in the CTA space.

Micro-features, Micro-patterns and Trend Following

Adam:00:46:42So I just want to make sure we’re tying this back to micro features or micro patterns, and to this narrative on trend following, and I want to see if I understand how you’re tying the two together. If you identify that there has been no serial correlation, or serial correlation has been negative, and yet you’re finding trend following type strategies that appear to be working, then those trend following strategies are probably noise; there’s no serial correlation for them to pick up on. So it’s just picking up on random patterns, and there’s no economic value there. Am I close on that?

Mike:00:47:20That’s one aspect. If in the past you showed that there was momentum, in theory you wouldn’t trade something that went against it. They say no, but the algorithm has to do it automatically, based on the features. Maybe that’s what you were asking: the algorithm should be able to identify the conditions of the market, and this is what I mean by economic value. Now, let’s go one step further: you don’t use one feature. In our long/short strategies we combine these into ensembles. I don’t care about the bias, I care about the variance.

Adam:00:48:07So say more about that, what do you mean by bias and variance for the people that are not maybe not familiar with these terms in this context?

Mike:00:48:12Well, in machine learning you trade off bias against variance. You either get a strategy that has a high return, that follows the market and makes more than buy and hold, but at the same time can give you very large drawdowns and volatility. Or you can elect to go with a strategy that gives you less return, so you miss all this drift, you don’t fit to it, but you get less variance in the returns. Especially if you’re a professional these days and you have a drawdown of 20%, you are in trouble. ARKK, when they got to 20%, the whole of Twitter was about it, because that ETF has very high variance.

Adam:00:49:13To say the least.

Mike:00:49:15So I will sacrifice my return, especially when you’re going into the market neutral domain, long/short. I will sacrifice my return, I will not follow the drift of the market, in exchange for lower volatility. When you average volatile random variables, the volatility is reduced by the square root of their number.
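The square-root effect is easy to verify with simulated numbers. A toy sketch, with invented return streams and volatilities: averaging 16 uncorrelated streams cuts volatility by a factor of sqrt(16) = 4.

```python
import numpy as np

rng = np.random.default_rng(1)

n_streams = 16        # number of uncorrelated strategies in the ensemble
n_days = 100_000

# Independent return streams, each with ~1% daily volatility.
r = rng.normal(0.0005, 0.01, size=(n_streams, n_days))

single_vol = r[0].std()              # volatility of one strategy
ensemble_vol = r.mean(axis=0).std()  # volatility of the equal-weight average

# For uncorrelated streams, ensemble vol ~ single vol / sqrt(n_streams).
print(single_vol, ensemble_vol, single_vol / np.sqrt(n_streams))
```

The simulated ensemble volatility comes out very close to a quarter of the single-strategy volatility, which is the diversification benefit being described.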

Adam:00:49:45Goes right back to Claude Shannon and the ‘information coefficient’, and then the ‘fundamental law of active management’.

Mike:00:49:50So I combine these features into an ensemble and I get the directional bias. Now, one feature may say, I’m 100% sure the market will go up. Another feature says, I’m 75% sure. Another says, I’m 55% sure. At the end of the day you get something that says the probability the market goes up is 53%. You do all this in ensembles, and then this is classification, essentially. And then you say, okay, the directional bias, or using the machine learning terminology, the target probability, is high enough when you score a new day. Because in machine learning, what are we doing exactly? People don’t understand: in machine learning we’re not looking at a month.
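The vote-averaging just described can be sketched in a few lines. The feature probabilities and the 0.52 trading threshold here are hypothetical, chosen only to reproduce the 53% example:

```python
# Hypothetical feature "votes": each feature emits a probability that the
# next day's return is positive.
feature_probs = [1.00, 0.75, 0.55, 0.30, 0.45, 0.40, 0.38, 0.41]

# The ensemble's directional bias (target probability) is the average vote.
directional_bias = sum(feature_probs) / len(feature_probs)

# Trade only when the bias clears a threshold in either direction.
threshold = 0.52  # hypothetical cutoff
if directional_bias > threshold:
    signal = "long"
elif directional_bias < 1 - threshold:
    signal = "short"
else:
    signal = "flat"

print(f"P(up) = {directional_bias:.2f} -> {signal}")  # P(up) = 0.53 -> long
```

A real implementation would weight the votes, for instance by each feature’s out-of-sample reliability, but a plain average already shows how strongly confident and weakly bearish features net out to a modest directional bias.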

Adam:00:50:48We don’t have nearly enough data at a monthly scale to get economic or statistical significance on anything.

Mike:00:50:55We are looking at the next day return, the probability of the next day return. And this is much different from the analysts who look at prices and say, Tesla is too high, or the market is too high. This is useless in quant analysis; a high price doesn’t mean anything, it’s the distribution that you care about. And actually, that’s the basic difference: traditional market analysis is the study of price and volume, and quant analysis is the study of the distribution of returns, its moments, whether the distribution and the moments even exist, all these things where Nassim Taleb is the expert.

Bias and Variance in Machine Learning

Adam:00:51:45Exactly. So I just want to make sure that we cover this idea of bias and variance sufficiently in the machine learning context. I think you did a good job of making it approachable for people who understand Sharpe ratios: you want to harness the drift and minimize the variance, and I think that’s useful. I’ve always thought about the bias variance trade off in terms of model building as: high bias models tend to be simple. They’ve got a high probability of generalizing, but you’re potentially leaving a lot of the information on the table; there’s too much compression. Whereas with high variance you are fitting the data to a very high degree, you’re extracting a lot of information, but you’re also likely mistaking a lot of what is noise for information, and therefore the model is less likely to generalize on data that it hasn’t seen before. Correct me if I’m wrong, but I think you’re saying that in finance it’s much more preferable to err on the side of having too much bias than it is to err on the side of having too much variance. Is that a fair assertion?

Mike:00:52:46Excellent scientific description of the subject, exactly like you see it in books. And this is the idea. The Sharpe ratio, the problem: the Sharpe ratio, if we say the risk-free rate is zero, is roughly the mean return divided by the volatility. So let’s say that we have a 10% mean return divided by 10% volatility. How much is this? One. A very nice Sharpe ratio; if you have a strategy with a Sharpe ratio of one or more, it’s very nice. Now let’s say that you have 50 divided by 50. This is also one. So now you get your 50%, but at the risk of losing 50%, and still the Sharpe ratio is one. Of course you have to consider… I don’t like the Sharpe ratio. I use the MAR: annualized return divided by maximum drawdown.

Adam:00:53:53I presume you use a Monte Carlo MAR.

Mike:00:53:57Yeah, with Monte Carlo you can find the distribution. I use that, and I don’t use the maximum drawdown itself, I usually use 1.5 times the maximum drawdown. So going back to machine learning: the ultimate strategy you can find with machine learning, and the one that broke Quantopian (you remember Quantopian?), is the long/short market neutral. What do I mean by this? You have a big universe of securities and you data mine, and you find: go long 100 of them and go short 100 of them, and you are market neutral and dollar neutral, and you don’t care if the market goes up or down, the strategy makes money. And Quantopian spent several years on this, because it was the objective of the person who financed it.
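A bootstrap version of the MAR idea can be sketched as below. The return series is simulated, and every parameter (10 years of daily data, 500 resamples, the 95th-percentile drawdown, the 1.5x multiplier) is an assumption for illustration, not the actual procedure described in the interview:

```python
import numpy as np

def max_drawdown(returns):
    """Maximum peak-to-trough drawdown of a compounded equity curve."""
    equity = np.cumprod(1.0 + returns)
    peaks = np.maximum.accumulate(equity)
    return 1.0 - (equity / peaks).min()

rng = np.random.default_rng(2)

# Hypothetical strategy: ~12% annualized return, ~10% annualized volatility.
daily = rng.normal(0.12 / 252, 0.10 / np.sqrt(252), size=252 * 10)

# Bootstrap the history to get a distribution of max drawdowns,
# rather than trusting the single realized one.
dds = np.array([
    max_drawdown(rng.choice(daily, size=daily.size, replace=True))
    for _ in range(500)
])

ann_return = (1.0 + daily).prod() ** (252 / daily.size) - 1.0
realized_mar = ann_return / max_drawdown(daily)

# Conservative variant in the spirit of "1.5 times the maximum drawdown":
stressed_mar = ann_return / (1.5 * np.percentile(dds, 95))

print(realized_mar, stressed_mar)
```

The stressed MAR penalizes the annualized return by a drawdown the strategy has not yet shown but plausibly could, which is the point of not taking the realized maximum drawdown at face value.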

Adam:00:55:04None other than Steve Cohen. Yep.

Mike:00:55:06The baseball man, yes, very smart. And they spent all this time, and they had, they claim, 100,000 quants. They didn’t make it. This is the idea. So the best machine learning strategies are the ones with high bias and low variance. Like in 2017, you remember, there was extremely low volatility. The VIX got down to nine, and you would read on Twitter all day: the market hasn’t had a 0.5% day in the last two months. How can you make a profit? … made a profit by chance, because they didn’t care. It doesn’t matter how much the market went up in 2017; machine learning, market neutral, could not make any money. Now my friend and customer Alex, he was on Twitter but he left, he has given me very good ideas about machine learning, he’s an expert, I’m 10% of what he is. He told me, machine learning has to have a component of bias. So there is an implicit market view in everything you do, even in machine learning. Machine learning is not the cure for people’s inability to understand markets: we’re going to get the library from Google, we’re going to put in some numbers, and we’re going to become rich. This is not how it works. You have to take risk, and eventually, as we know, because you are a professional, what makes a difference is the market view you have and how that ties in with the strategy you have, because all strategies can be killed.

In the long/short I can say, instead of going long 100 and short 100, I will go long 100 and short 20. The machine learning algorithm will not tell you what to do; you will tell the machine learning what to do. I remember you wrote an article once, you or someone else from your organization, that in 2017 many fund managers switched to the short volatility area, the XIV, until they got the turkey problem. Things seem good, then sudden death one day.

Adam:00:57:50I use that all the time and I draw a little Turkey. Love it.

Mike:00:57:55It was like February, I remember, February something, right? Suddenly, it was silence. Low volatility was a good feature, the best feature at the time, and all the algos were taking this feature and actually going short volatility, with some features from the yield curves. And suddenly what happened? Boom. XIV went down 96% in one day. I have a long/short volatility strategy for UVXY that shows 302% returns a year. If you do the bootstrap and the analysis and everything, there is a 5% probability of a 95% drawdown. It’s crazy; 5% is not small.

Adam:00:58:49Would you agree that there may be a role for those types of strategies as very small components of a much broader ensemble of strategies where you’re constantly rebalancing? If they have sufficient negative correlation, and especially conditional negative correlation, not deterministic but potential at certain times, against other strategies in your portfolio, I might be able to see an argument for a very small allocation if you’re going to rebalance.

Mike:00:59:17Like Bitcoin, for instance. Bitcoin is like a volatility trade. But why risk losing money? Buffett, rule number one: ‘never lose money’.

Adam:00:59:28I know, of course, Buffett lost 40-plus percent six or seven times in his career, but I know he constantly cites that rule. I actually wanted to probe on one thing that you said in passing, because I think it’s really interesting to explore. By virtue of the way that academic finance operates, almost by necessity it focuses on strategies that have very high in sample performance. It’s always struck me that strategies that have high bias, in other words a relatively simple specification, and have high Sharpe ratios are almost certain to be arbitraged away first. In markets where everyone is constantly trying to arbitrage away opportunity, the most sustainable opportunities are in more esoteric strategies that have lower Sharpe ratios. They’re below the threshold of what would normally be considered interesting by academic finance. So you have large ensembles of those barely significant or barely attractive strategies that you put together, and when you put a bunch of marginally attractive strategies together, the ensemble can be quite attractive. But you need a different type of experimental design to engineer that strategy than what the empirical finance community is able to publish papers on. Do you see where I’m going with this? Does that make sense?

Mike:01:01:01Yeah, it makes sense. And actually, you can have two losing strategies and get a winning strategy; you can have all sorts of things. And you actually brought up a very good example. But the point is, do you have enough good ones, or do you need to add the Bitcoin and the volatility? If you don’t have enough good ones then you have to add those too, to get the extra bias, and make sure that the others will reduce the variance of that one in case something goes wrong. And actually, that’s how many funds work. They have a complete plethora of strategies, and they try to switch, and actually a research area in machine learning is voting algorithms, to choose at every point which strategy or which group to use. They use Markov processes and very advanced mathematics: let’s say you have a pool of 100 strategies, and you can use machine learning like a meta strategy to choose which ones to use in the ensemble at every point. Now, you have to separate here: these are good for the large funds, but retail, the person who has a small account, cannot go into these complex schemes. It’s purchasing power; the retail trader will try to find one or two strategies with good prospects. They cannot use 10 strategies, and also the logistics.

Adam:01:02:43But that leaves them exposed to high variance.

Mike:01:02:47And this is why there is so much withdrawal from the retail space. It’s not because the strategies are bad. Like when I started with trend following: they get a large drawdown, they quit. And then when they look afterwards: if I had stayed in, I would have recovered, like in the stock market. In the stock market after 2000 there are two maximum drawdowns of more than 50%, 50 and 55. And then you have a few drawdowns of 25%, and then recently 33%, and a few weeks later the price is higher. What is going on?

Adam:01:03:31I’m particularly fascinated with this idea of, so there’s the feature engineering step, there’s a mining step, let’s just assume you’ve identified some features. I’m understanding that you’re mining for strategies and you’re eliminating strategies that are inconsistent with the regime, which presumes that you’re somehow able to estimate the regime that we’re in. So for example, if you’ve got some trend following strategies that appear to be useful but you’re in a mean reverting regime, then you would eliminate the trend following strategies. So am I right so far? Is that generally the thinking?

Mike:01:04:11Yeah, but the algo will do it. Not you personally. What I’m saying is, it should be able to do it to have economic value.

Using Metrics

Adam:01:04:18But how are you identifying that it’s a mean reverting regime or a positive serial correlation regime, and over what number of bars, and all that kind of stuff? How are you thinking about those kinds of things?

Mike:01:04:29Okay, excellent question. It has to do with the metrics you use to evaluate performance in the in sample and in the out of sample. If you use metrics like in typical machine learning, they use the area under the curve, the usual confusion matrix and, I don’t remember all the terminology, I have gotten away from this, because these don’t mean anything. You can have a very high hit rate; it doesn’t mean anything.

Adam:01:05:07Yeah, the problem with the confusion matrix approach is that you’re giving equal weight to positive errors and negative errors. Obviously, there’s a different penalty for missing an opportunity than there is for taking a trade and being wrong.

Mike:01:05:24Exactly. In trading, what counts, there are three quantities that are inextricably related: the hit rate, the payoff ratio, and the profit factor. And I derived the formula 20 years ago that relates them: the win rate is equal to the profit factor divided by the profit factor plus the payoff ratio. I called it ‘the profitability rule’ in one of my books. And with this profitability rule, if the profit factor is equal to one, you get hit rate equals one divided by one plus the payoff ratio, and this gives you the lower bound of the hit rate for a given payoff ratio to break even. Now, this is how I started to build the algorithms; you have to understand this. This formula also explains why trend following is hard. Trend following is hard because the payoff varies. It’s variable, stochastic, a random variable. So if you have a certain hit rate, trend followers say, I can be 35% right and still make money because there are trends. Wait a second: if the payoff ratio is not enough, you’re not going to make money. Maybe you need a hit rate of 60% to make money. And what determines the payoff ratio? The volatility of the trends.
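The relationship follows from the definitions: with win rate w and payoff ratio R (average win over average loss), the profit factor is PF = wR / (1 - w); solving for w gives w = PF / (PF + R), and setting PF = 1 gives the breakeven bound w = 1 / (1 + R). A minimal sketch, with illustrative example numbers:

```python
def required_win_rate(profit_factor, payoff_ratio):
    """Harris's profitability rule: the win rate implied by a given
    profit factor and payoff ratio (avg win / avg loss)."""
    return profit_factor / (profit_factor + payoff_ratio)

def breakeven_win_rate(payoff_ratio):
    """Lower bound on the win rate: set the profit factor to 1."""
    return required_win_rate(1.0, payoff_ratio)

# A trend follower whose wins are 3x its losses breaks even at a 25% hit
# rate; if chop compresses the payoff ratio to 1, breakeven jumps to 50%.
print(breakeven_win_rate(3.0))      # 0.25
print(breakeven_win_rate(1.0))      # 0.5
# Win rate needed for a profit factor of 1.5 at a payoff ratio of 2:
print(required_win_rate(1.5, 2.0))  # ~0.43
```

This makes the trend following point concrete: the 35% hit rate only works while the payoff ratio stays above roughly 1.9, and a stochastic payoff ratio moves that breakeven around underneath you.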

Adam:01:07:10That will determine the magnitude of the payoffs when you’re right.

Mike:01:07:13You may hit the stop loss if the volatility… because trend followers use stop losses. Mean reversion traders do not use stop losses, because a stop loss in mean reversion destroys the profitability right away. That’s why after 2000, when the markets became mean reverting, they also got extremely risky, because if you put in a stop loss, your profitability is gone in a mean reversion regime. In trend following it is necessary, because you may get a chop and you have to protect yourself. So, more losses, but in the next long trend you make up for them, and then you make your alpha.

Now, I wrote an article, ‘Trends and trend following are not the same thing’. Some people think trend following and trends are the same thing: if there are trends, trend followers will make money. No, it’s not the same thing. Likewise, mean reversion and mean reversion strategies are not the same thing, because you may get mean reversion but the market can go so low that when it bounces back and you get your exit signal, it hasn’t really recovered the loss. So, to return to your original question, a machine learning algorithm has to implicitly account for all these things, not explicitly, implicitly, and have that state. Now, to go to the next step, the bias… tell me when you are out of time, because this can go on for many days. Wherever there is data mining there could be bias. Data mining bias. The problem is how much. Some people say, I want to measure the data mining bias. You’re wasting your time, I’m sorry. You have to have an algorithm that minimizes the data mining bias in the first place, and then pray to your gods, whatever they are, because everything you do is conditioned on past data.

Then another important thing: the more you reuse the data, the more ‘data snooping’ you get. So I wrote a book, ‘Fooled by Technical Analysis’, and there I have a diagram where I show that data mining bias has three components. Overfitting: you tweak the moving average periods, for example, until you get your result. Selection bias: you select the best system that uses those moving averages. And data snooping bias: every time you reuse the data, you increase the probability of finding a random system by chance, and this probability goes to one if you are very persistent. I wrote an article once: people learn in school to be persistent in life, but in backtesting people should not be persistent. If they find something that doesn’t work, they should forget it immediately. Don’t try to fix it; you’re just increasing the data mining bias. Use the out of sample only once. Say you came up with an idea, a unique hypothesis: if the Kardashians speak today, and Cathie Wood says I love Bitcoin, and Elon Musk says to the moon, then the market will go up. And you test it for the period these people existed in the markets and were popular, and it doesn’t work.
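The “probability goes to one” claim about reusing data is just multiple-comparisons arithmetic. A toy sketch, under the simplifying assumption that each reuse is an independent trial with a 5% chance of a spurious pass:

```python
alpha = 0.05  # chance a worthless system passes a single test by luck

def p_false_discovery(k, alpha=alpha):
    """P(at least one spurious pass after k reuses of the same data)."""
    return 1.0 - (1.0 - alpha) ** k

# Persistence is punished: keep reusing the data and finding a "winning"
# random system becomes almost certain.
for k in (1, 10, 50, 100):
    print(k, round(p_false_discovery(k), 3))
```

After 10 reuses the chance of a false discovery is already about 40%, and after 100 it exceeds 99%, which is why the out of sample should be touched only once.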

Adam:01:11:27How do you determine that something doesn’t work? What’s the right experimental design to test hypotheses in your opinion?

Mike:01:11:36For me, as a retail trader, I’m a practitioner above all, and I don’t care about fancy terminology. I don’t care about papers. I don’t care about what professors say. For me, ‘doesn’t work’ means I lose money.

Adam:01:11:52But over what timeframe and how are you observing losing money? Is it like only in live trading or is it in the out of sample or how are you setting up the design?

The Strategies

Mike:01:12:01I have the seven weekly strategies, and I have a few customers, very dedicated people. They look at them; I don’t know what they do. Probably they laugh; that’s why they pay a subscription, to laugh at me. People go to see a comedy, they pay, because it’s entertainment. One is trend following with volatility adjustment, because trend following should be part of every portfolio; that’s where the big money comes from. Now, that one is not data mining; data mining is one of the strategies. Another one is mean reversion. These are on the weekly timeframe; all the entries and exits are Monday morning, if you get a good price, and it’s mean reversion in that space. And then I have cross sectional momentum, three of them, using different ETFs and some of the same. Because cross sectional momentum, in my opinion, should be in my portfolio. Cross sectional momentum is a very powerful concept; the people who discovered it were brilliant. And what does it do? It moves around, it tries to find, across many markets… right now the strategy that is doing best year to date is a cross sectional momentum strategy, and the mean reversion is doing well.

But the mean reversion only stays in the market about 40% of the time; the problem with the cross sectional ones is they are 100% in the market. Then I have a mean reversion in Dow stocks, which is working well, and a long/short with data mining. So this is a portfolio of strategies, and it is an ensemble. The volatility of the ensemble is always very low, at most about 50% of the worst case of the individual strategies.

Now, these strategies for me have some economic value. But if these strategies don’t work, to go back to your question, how do I decide they don’t work? I set a maximum sink threshold; it’s where I start sweating. And for me that’s 10% for each strategy. If they lose 10%, they go into pause mode, and then I monitor them for two or three months, and if they continue to deteriorate I say oops, mean reversion at the strategy level, out. And I find another one to replace it. But it has already been part of the ensemble.

Adam:01:15:08What do you do to replace it with a new one? How are you finding a new one? And you mentioned in sample and out of sample; I’m just wondering how that process of testing goes for you.

Mike:01:15:19That’s apart from the data mining. The data mining is the long/short. For the other strategies I have some ideas about the market, like all of us. For instance, I put the dollar, commodities, oil and bonds in the same cross sectional strategy, because I have noticed that when yields rise, commodities start going up. These are general ideas, and I have a Price Action Lab database with strategies for this, and I test them in sample first, then out of sample, if I use it the …, but I try to use the indexes to test them, to go further back. Then more data: for instance, I like the strategies between low volatility S&P 500 and high beta. High beta was doing very well, and now there is a switch, suddenly low volatility became popular after being beaten up for a while. And there is not enough data; those ETFs started like in 2006, I don’t remember. But the indexes go back a while, so I can test with more data. I think, from this point of view, that unique hypotheses like value versus growth and low volatility versus high volatility are two areas where a lot of rotation could take place in the future. The data mining is every day: I run the S&P 500 stocks through my program, I determine the features, and then I select five for the long side and five for the short.

Now, this strategy did extremely well in the downturn last year, daily returns of 2%, 3%, 6%. But then by September, when the short covering started and the stocks started rebounding, it stagnated. But the long/short provides a little bit of positive convexity in case of a huge downturn, at least you hope.

Adam:01:17:47Is that the way you’ve designed it to produce positive convexity?

Mike:01:17:51No, it’s not; it’s because of the market. The short side always falls a lot more in downturns than the long side. If I have five longs and five shorts and I manage to get four of the shorts right and one of the longs right, it makes a huge difference because of the averages. So there were days where it got four correct on the long side and five correct on the short side. In a downturn, if I can get one stock that will go up, it makes a big difference to the return.

Adam:01:18:34So when you run your software, what’s going on under the hood there, whatever you think you can share?

Mike:01:18:39My website describes a lot of things, but not the algorithm. It loads the data; it has a testing sample and a training sample. It finds these features for every stock. Each stock uses 10 clusters of features; by clusters I mean features with common characteristics. For instance, even if you have open, high, low, close, you can have some features that use only the close. If you have first bar up, second down, third up, fourth down, that’s a mean reversion type of thing. It calculates all these thousands of features, and then it finds the ensembles and determines the probability. I call it ‘direction of probability’, just to make sense. The academics call it many different fancy names, like ‘target labels’ and everything.

And if you want to do machine learning, it generates files, and you can go ahead with your algorithms and take the files. But I usually don’t do that, because I have found that if something doesn’t work at the basic level, it doesn’t matter how sophisticated you get on top of it. If something doesn’t work at the fundamental level, it doesn’t matter how hard you try on top of it, you’re not going to get anything. It has to be sound at the fundamental level. So, it doesn’t matter if I then take the final scenario and some machine learning to do the ‘labelling’ and all this new fancy terminology: if the features don’t already have economic value, chances are very low you will get something.

Adam:01:20:42And you’re determining whether they have economic value automatically, using the algorithm?

Mike:01:20:47The algorithm is designed with economic value in mind. It conveys the understanding of the developer of the algo about how the market works. It’s not a generic algo; it’s special purpose. You remember the RISC processors? You have the general CPUs, and then you have the RISC processors that do something specific, like signal processing, Kalman filters; they use them in communications. Using a generic machine learning algo is like using a general CPU. My algorithm is like a special purpose processor; I have put in my understanding of the market. So it’s esoteric, idiosyncratic, and that’s how it works with the data mining. The demand is very low for these things, and the reason is that people are not willing to put in the effort necessary to understand them and work with them. People want things served on a golden platter. They call you and say, tell me the secret. Okay, buy Bitcoin. Okay, thank you, we’re going to be rich.

Adam:01:22:13That is not investment advice. 

Mike:01:22:16Yeah, I didn’t say that. 

Adam:01:22:22I know, I was just joking. 

Mike:01:22:27So, nothing I say here constitutes investment advice and everybody knows to do their homework. I wish I could go into more details but there is a set …

Adam:01:22:36Absolutely fascinating. Lots to chew on at my end. I’m sure you are going to give listeners a great deal to think about at their end as well, and probably prompt an enormous flurry of questions…

Mike:01:22:47And angry people.

Adam:01:22:47Maybe, but you have been extremely generous with your sharing and with your wisdom. Thank you so much for spending two hours with me and I hope that at some point in the near future we can continue this conversation.

Mike:01:23:02Thank you very much Adam, I appreciate it. Thank you for the invitation, and I wish you the best success with your trading and everything you do.

Adam:01:23:10Thanks Mike, we’ll carry on other conversations on Twitter and in the meantime all the best and thanks very much.

Mike:01:23:16Thank you. Bye.

 


*ReSolve Global refers to ReSolve Asset Management SEZC (Cayman) which is registered with the Commodity Futures Trading Commission as a commodity trading advisor and commodity pool operator. This registration is administered through the National Futures Association (“NFA”). Further, ReSolve Global is a registered person with the Cayman Islands Monetary Authority.