In this episode I speak with Charles McGarraugh, Chief Investment Officer of Altis Partners.

Charlie finds himself at the helm of Altis from a non-traditional route. His career began at Goldman, where his experience spanned everything from asset backed securities to liquid commodities. He then started a firm specializing in machine-learning driven sports betting before moving into cryptocurrency markets. Today, Charlie is betting that alternative strategies will play an increasingly important role for investors over the coming decade.

We spend the majority of our conversation talking about Altis’s investment stack, which comprises two components: an upstream signal layer and a downstream strategy layer. The signal layer is responsible for ingesting data and constructing a prediction curve for different futures markets. The strategy layer ingests these prediction curves and constructs a portfolio. Charlie discusses the types of signals Altis relies on, how they turn prediction curves into trade signals, and where risk management fits into the equation.

I hope you enjoy my conversation with Charles McGarraugh.

Transcript

Corey Hoffstein  00:00

Charlie, you ready?

Charles McGarraugh  00:01

I’m ready.

Corey Hoffstein  00:01

All right. Three, two, one. Let’s jam. Hello and welcome, everyone. I’m Corey Hoffstein, and this is Flirting with Models, the podcast that pulls back the curtain to discover the human factor behind the quantitative strategy.

Narrator  00:22

Corey Hoffstein is the co-founder and chief investment officer of Newfound Research. Due to industry regulations, he will not discuss any of Newfound Research’s funds on this podcast. All opinions expressed by podcast participants are solely their own opinion and do not reflect the opinion of Newfound Research. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Newfound Research may maintain positions in securities discussed in this podcast. For more information, visit thinknewfound.com.

Corey Hoffstein  00:53

In this episode, I speak with Charles McGarraugh, chief investment officer of Altis Partners. Charlie finds himself at the helm of Altis from a non-traditional route. His career began at Goldman, where his experience spanned everything from asset-backed securities to liquid commodities. He then started a firm specializing in machine-learning-driven sports betting before moving into cryptocurrency markets. Today, Charlie is betting that alternative strategies will play an increasingly important role for investors over the coming decade. We spend the majority of our conversation talking about Altis’s investment stack, which comprises two components: an upstream signal layer and a downstream strategy layer. The signal layer is responsible for ingesting data and constructing a prediction curve for different futures markets. The strategy layer ingests these prediction curves and constructs a portfolio. Charlie discusses the types of signals Altis relies on, how they turn prediction curves into trade signals, and where risk management fits into the equation. I hope you enjoy my conversation with Charles McGarraugh. Charlie, welcome to the podcast. Excited to have you here for this discussion, which I think at face value some people might say is just another managed futures podcast. They know I love managed futures. But I would urge listeners to really hold on, because I think this is going to go in a really interesting, very different direction than other podcasts I’ve done in this space. So excited to have you on. Thank you for joining me.

Charles McGarraugh  02:26

Well, thanks a lot for having me. I’m excited too.

Corey Hoffstein  02:29

So Charlie, you have a pretty non-traditional banking background. Most people who go through the banking channel end up much more specialized and on a much more linear path. But your background, if we take a look, you actually jumped across jobs and assets quite a bit. And I was hoping you could maybe quickly walk us through your background and share some of your thoughts on your approach to getting up to speed so quickly in all these different departments and asset classes that you worked in.

Charles McGarraugh  02:57

Yeah, I jumped around sectors and products quite a lot, but I did have a good degree of career continuity. I spent my first 16 years in the business as a trader at Goldman Sachs, across emerging markets, credit derivatives, structured products like structured reinsurance, mortgages, and then eventually I became the head of metals trading in the commodity department. So yes, it’s been a pretty circuitous route. And then after that, I’ve been a tech entrepreneur, both in sports betting and crypto, and I’ve been working on managed futures for quite some time. And this definitely seems like it’s all over the place, but I’d just draw a couple of continuities through this circuitous journey. The first is I tended to be where the action was, whether it was the Argentina default right at the beginning of my career as an analyst on the emerging markets desk, or the major downgrades of General Motors and Ford in ’05, which was the so-called correlation crisis in the OTC credit market, or being a trader on the subprime desk at Goldman Sachs in the summer and fall of ’08, which was super crazy. I’ve just been around lots of interesting businesses. And personally, in the beginning phase of my career, I think it was just that I was rangy and a quick study, and so my superiors felt free to move this kind of fungible human resource around. Later on, I really enjoyed moving between markets, because it’s a journey of discovery. Markets are interesting, right? There’s all kinds of crazy things happening. And so I guess I was just more willing to take the nonlinear pathway, because I felt like the operative principle was: there’s no higher return on equity in the long run than investment in human capital, especially your own. Now, that’s definitely a debatable point, because the specialized career path oftentimes pays the best, and you become more obvious operating leverage for whoever’s building a business in a particular vertical. But being between markets as a generalist, I think, gives you more perspective that’s transportable. I’ve definitely seen a lot of crazy stuff up close and personal, and I’ve traded a lot of markets successfully, and I think that’s also good. I’ve also, by the way, made a lot of mistakes of various different flavors in lots of markets, and that also, I guess, involves its own growth pathway.

Corey Hoffstein  05:14

Are there any tips or tricks you would offer listeners who are considering a career switch to a different asset class?

Charles McGarraugh  05:22

Yeah, I would say in my experience, it takes about two years to get up the curve, at least to maybe the inflection point of the S-curve of knowledge. You’ll never know everything there is to know, even if you spend 30 years in one market, but it takes about two years to get to a working, professional-grade practitioner’s level. And in the early phase of my career, the only way that was possible was with good teachers and mentors; the financial industry is, by and large, an apprenticeship system. As you mature and get more experience, you kind of know what to look for and understand some of the basics about how markets work transactionally, the different kinds of game theory involved in negotiating prices, the OTC business workflows. There’s quite a lot to know, but some of these skills are transposable. And so I would say in later years the pivots have been easier to do as a self-starter. And in the seven years since I left Goldman, there’s been quite a lot of self-directed study, and then working on new things as a colleague, as opposed to as a junior person being mentored. But yeah, it takes about two years, I’d say, to really get to the point where you start to find your sea legs.

Corey Hoffstein  06:32

I want to stick on that concept of transposable skills for a moment, because your formative years were in OTC markets and structured products, and you really only moved to liquid products later in your career. Curious how this early experience shaped how you view the liquid macro space?

Charles McGarraugh  06:53

Yeah, so there’s a lot to unpack there. I’d say the first thing is, generally, at least in fixed income, a lot of what happens as an OTC trader in a bank is you get very, very specialized in your vertical. You know things that are orthogonal to all of your colleagues, because you are so specialized, especially as a trader. And there’s only a couple of big thematic calls to make because your lane is so narrow, so you end up pretty undiversified. But to protect yourself from that, you also have the kind of constant flux of transaction-doing, of just normal flow, which puts a margin of safety into your annual P&L. And a big part of your mandate is just earning the bid-offer, facilitating liquidity, and anticipating supply and demand in a specialized way. And then another big part of your job is making sure that you’re ingesting information from the other specialists around you and their generalist managers. Now, from a liquid markets perspective, look, the OTC markets have varying degrees of liquidity, so not all instruments are illiquid. But as a sort of stylized fact, I would say the positions are more frequently a marriage than a date, versus, say, systematic trading in futures or equities or something. And when you’re at the pace of human execution, and it is a more complex transactional process to source and remove the risk, the velocity in your main risk moments is slower. And that forces you to think thematically. It is harder, although not impossible, to subject stuff to the crucible of statistical validation. It also tends to breed a focus on relative value, because you’re obsessing constantly over liquidity and pricing, with some kind of inference of value from things that are more visible. And so that matrix of value and cross-sectional relative value becomes just a natural instinct within your specialization, as a discretionary kind of OTC guy or gal. And then one of the things that was the hardest, I would say, in the transition away from micro fixed income, which is really where I grew up (I was one of the early structured RMBS traders at Goldman back in the day), is that generally those OTC markets have complicated products but simple market structures. So it’s maybe hard to get the trade done. In liquid markets, one of the hardest things to wrap your head around was that the product might be simple, but the market structure is way more complex. And on top of that, unlike trading, let’s say, OTC credit derivatives, it is a whole lot easier to frequently and judiciously exercise the option to be flat. So learning how to fold your cards at the table, and then re-up when conditions are more auspicious, is just a very different exercise in truly liquid macro markets than it is in more complex OTC stuff. And that whole framework of how do I get flat, and thinking less about sourcing entry points and exit points and more about time series prediction and trade sizing: I think there’s a pretty big difference.

Corey Hoffstein  10:16

Can you expand on what you mean for a second, when you say a market structure is more or less complex?

Charles McGarraugh  10:22

Yeah. So there are sort of two dimensions of complexity that you might have; I’m simplifying a bit. You can have a really complicated product, like, let’s say, non-agency RMBS. That’s a complicated product: you have to understand the whole-loan pools, you have to understand the prepayment speeds, you have to understand the various nuances of how you service nonperforming loans in the various transition phases of nonperformance. It’s complicated. And then from there, you can infer a kind of error bound around what you think the vector of cash flows on the security looks like. Then you have some sort of valuation methodology, which amounts to a discount rate, an appropriate riskless rate plus a spread of some kind, and some volatility rolling around that. That’s a complicated animal. But generally speaking, at least post-crisis, there weren’t that many people holding these things, and there weren’t that many people who could buy them. So you were a phone call away from understanding the vast majority of the flows in the system. Take metals, where I was the global head of metals trading at Goldman, as a contrast. The product is super simple: it’s a lump of molecules, or a listed futures contract on a lump of molecules, which is a little bit more complicated, but not that much more complicated. And there are some nuances about exchange delivery and warehouses, so there is some complexity in the product. But the market structure is literally every human being on planet Earth. And in particular, in the metals markets of the mid-2010s, the dominance of totally opaque flows onshore in China in leading pricing meant that it was pretty hard to be a phone call or two away from really understanding the flow, especially when you add in how many electronic traders are in there, too. So a much more complex set of market participants with varying degrees of incentives, driving a lot more complexity in the price action. It’s just a totally different thing.
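
To make the valuation arithmetic Charlie sketches here concrete, below is a minimal example of discounting a projected cash-flow vector at a riskless rate plus a spread. All numbers are illustrative assumptions, not figures from the episode.

```python
# Minimal sketch: value a security as the present value of a projected
# cash-flow vector, discounted at a riskless rate plus a spread.
# All inputs are illustrative assumptions, not figures from the episode.

def present_value(cash_flows, riskless_rate, spread):
    """Discount annual cash flows at (riskless_rate + spread)."""
    r = riskless_rate + spread
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical amortizing, RMBS-style cash flows over five years.
projected = [120, 110, 95, 80, 60]
print(present_value(projected, riskless_rate=0.04, spread=0.03))  # ~389.6
```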

Corey Hoffstein  12:16

So before we jump to your work with Altis today, I can’t help but touch on two points you briefly mentioned, which were a move into sports betting as well as a move into crypto, and I can’t let those go. So let’s start with the sports betting, because I think there’s a lot of fun stuff to unpack there. I’m curious, maybe first and foremost, how the sports betting market compared to other markets that you had worked on previously, and maybe some of your biggest lessons learned in transitioning from traditional financial markets into the sports betting markets?

Charles McGarraugh  12:46

For sure. So, I mean, it was quite different. But firstly, how did it happen? Well, I was ready to move on from being a trader at a bank, I wanted to do something entrepreneurial, I was ready for the next phase. And I had a friend who founded a sports betting business that was focused on using machine learning for sports prediction, which seemed like a great idea, so I backed him. Sports are pretty data-intensive, and it’s a reasonably stationary problem, right? The rules don’t change; it’s kind of the same thing over and over again. And for me personally, I was really intrigued by the idea of exploring the contact surface between a human and a computer, because I had done very well as a discretionary OTC guy, and in my first foray in commodities I was fine on the fundamental stuff, but all the tape-reading stuff was a foreign skill set, as I just described. So taking the next leg of my career to work on electronic trading and machine learning seemed like an interesting thing. This was before machine learning and AI were cool; this is like 2015, 2016. And sports seemed like a really good way to do that, because we could get paid to learn. And also, we could hire the talent: data science talent back then was more scarce than it is now, and if you tell people they can wear jeans and a T-shirt and think about sports all day, then maybe you can access the talent at a different price point than if you’re making them think about the Treasury bond basis or something. So that was how we ended up there, as the sort of first step on my own. And it was a totally different market. I would say a couple things. The first is, number one, it’s an asset with basically a two-hour duration, which means the need for capital to intermediate the flow is intrinsically constrained by the short time horizon. Basically, a sports betting business is all income statement and no balance sheet, to be kind of hand-wavy about it. That also means the barrier to entry is low, because basically for every statistics autodidact in the world, the first thing they do after they learn Python is, oh, maybe I can make money betting on sports. So sports betting as a trading business is like a zero out of ten on business model innovation, even if it’s like a ten out of ten on technical challenge. And then because of the seasonality, the entropy changes throughout the season: you know a lot more by the end of the season than you do at the beginning. But basically, the market never gets dumber at predicting sports, because it’s totally stationary. And so because of that combination of low capital intensity, low barrier to entry, and stationarity, it’s actually very hard to get an edge in sports betting. And then finally, and I think this is more on the market structure, sports are basically an entertainment product more than they are a financial market, which means the vast majority of the flow does not ever see the light of day in secondary market trading. It just goes into a primary B2C operator, where they’re a lot more focused on retaining the customer and getting second and third deposits than they are on risk-managing any one line item, at least within reason; some huge concentrated thing is a different matter. But basically, even if the customer wins once, they’re probably going to just re-up and then lose later.
And so the whole idea of a big liquidity intermediator, I don’t think it was that necessary. So it was a bit of a thesis violation, even though it seemed like a really interesting use case for machine learning. Now, I think there are a number of successful high-frequency sports betting shops, but it’s really about high-frequency liquidity provision, really focused on in-play betting. And yeah, it just didn’t seem as interesting once we dove in as it had seemed before.

Corey Hoffstein  16:34

So, you went from one interesting market to another with your pivot to crypto in 2018, and candidly, the timing of that seems pretty interesting. For people who know the history of the crypto market, that was pretty much right after the 2017 ICO bust; the market was not in a boom at that point, it was in a bust. Curious what some of the initial opportunities you pursued were, what was the catalyst for the transition, and how did that opportunity set change over time as the market evolved?

Charles McGarraugh  17:02

Yeah, so because of all those drawbacks of being a sort of secondary-market sports specialist that I described. I should also add, there’s one other crucial aspect of sports trading, which I think is really fundamentally different from most financial markets: because it’s a 90-minute-duration asset, if you think of a market as an interplay between the buildup of expectations and a resolution to fundamentals, that resolution to fundamentals happens incredibly fast in sports. And you have to have basically low-latency action data, because you’re trading up to fundamentals constantly, at a very quick wavelength. And in any event, you’re always done at the end of two hours, basically. And again, the participants in the sports market would say, why would I hedge? I’m just trying to turn the velocity over as fast as possible. In crypto, by contrast, we sort of watched the 2017 bubble unfold, and it looked a lot like blow-off tops that I’d seen in many other markets over the years. And that looked like an opportunity. I might be the only person in the world whose first trade in crypto was a short, in early 2018. It was just like, wow, this is a blow-off top that’s in the process of resolving. Bigger picture, it was like, wow, I have a team of data scientists and engineers who are good at integrating with pretty non-standardized, shall we say, platforms on the internet, and this might be a good use case, because literally nobody in crypto ever said, why would I hedge? It’s an infinite-duration asset. I shouldn’t say literally nobody ever said that, because now it looks like there are some people who did not say that. But you take my point. And so it seemed like a really interesting and dynamic market that was very liquidity-constrained, and one that a lot of the techniques we were thinking about might be applicable to. And so I decided to pivot the business to crypto. I think the other really interesting thing about crypto, in addition to the market dynamism, is the complex and definitely non-stationary market structure. That nonstationarity means there’s a lot of value for a human beyond just project-managing engineering. There’s a lot to understand, and it changes. So that was interesting. But in addition to all that, I had the experience of looking into crypto and going, okay, sure, this feels like a really wild, high-velocity online gambling market. There are certainly aspects of that. But also, there’s a really deep intellectual there there. The deeper you dig in, the more you find. And even now, I feel like crypto gets a pretty bad rap, in light of the unwinding of the bubble and all the crazy things that have happened in crypto, but there is an intellectual there there. There are fundamentally pro-social value propositions, and I like that as well. Building an asset-based pass-through system around atomized balance sheets and self-custody is potentially a technological solution for too-big-to-fail. Making low-friction value transfer a ubiquitous feature in software, and transactionalizing the internet with price discovery, that’s also a good idea. The problem is not that it’s a bad idea; the problem is maybe that it’s too good an idea. A very senior person at Goldman once told me the same was true about securitization after subprime: the problem is not that it’s a bad idea, it’s that it’s too good an idea. So I had that experience of going deep and going, wow, there’s a lot here.
And there’s something really powerful at the human level about the fact that it grew up organically, fundamentally, as a retail movement, where engineers are saying, maybe we could do a better job on the system. I think that was super compelling and remains compelling. It just comes with a lot of baggage, because it’s a whole cross-section of humanity with pretty low barriers to entry, again, so you’re gonna get a lot of craziness in there, too.

Corey Hoffstein  20:53

All right, let’s finally dive into what you’re doing today with Altis. And to set the table, can you give us a quick background on the firm, both before you got there and since you’ve gotten there?

Charles McGarraugh  21:05

Yeah, so the firm where I’m now CIO is a pretty long-established managed futures specialist asset manager. It was founded in 2000 with a big focus on trend following, and one of the partners there had been a board member at my sports betting company, so I’d gotten to know them. Altis ultimately had a very successful run as a trend follower (trend did well in the 2000s) and then a less successful run as a trend follower in the 2010s, post-QE. You’ve discussed in many of your other podcasts, I’m sure, why that may have happened as a broad market theme, but it certainly happened to Altis. At the top tick, the assets were just short of $2 billion; then performance was poor and a lot of assets redeemed, and the team did a lot of work basically refactoring the systems into multi-risk-premia, and did an admirable job with the relaunched, refactored version of the strategy in the late 2010s. And then, tragically, the CIO of Altis died in a freak accident, caught in an avalanche while skiing. And the remaining partners reached out to me and said, hey, we think there’s a lot of IP here, it needs a new kind of captain of the ship, do you want to have a look? And I spent a lot of time looking at it. As I said, my mission in terms of long-term professional development was, and remains, being the best trader I can possibly be. And I thought, gosh, here’s a platform that’s totally operationally burned in. It’s really tight with all of its workflow processes and regulatory footprint, and there’s good IP in here. And I’ve got a chance to acquire a stake in it at a fraction of replacement cost and save myself quite a lot of time. And this particular bit of IP is a good complement to my own core skills that I’ve developed over the previous 20 years in the market. I’m a decent thematic discretionary guy, a decent relative value guy; I understand valuation, I understand market structure, I understand negotiation. And here I’ve got a computer that’s good at reading the tape and good at sizing trades. So that felt like a very natural extension of my skill set and a good filling-in of the gaps. And also, a lot of the IP feels pretty generalizable to other markets, including crypto. And so I acquired a stake in Altis and began work on improving the futures trading system alongside the team at Altis. And then, as my time in the crypto market came to an end and that market retrenched, and with my own conviction that we are on the eve of a commodity supercycle and that it’s probably a bull market in liquid alternatives, it just made sense to go full time on this, given my view structurally on the markets. So that’s what I’m doing.

Corey Hoffstein  23:55

I want to talk a bit about that IP that you saw at Altis. In our pre-call, you described the tech stack to me as really comprising two different parts: what you described as an upstream signal layer and a downstream strategy layer. And I want to attack each of these parts individually. But first, I was hoping maybe we could take a step back and you could talk about why the choice of this design. What are the benefits of thinking about this upstream-downstream design, and what are potentially some of the drawbacks?

Charles McGarraugh  24:27

I’ll start with the drawback: anything that’s, I guess, rigidly componentized is going to potentially constrain your thought process. It may reduce the spanning set of the design space that you’re operating within, and there may be missed opportunities, rigidities that you regret later. And that is certainly a risk. But there are many advantages to having a thoughtfully componentized system. I suspect our throughput is not that different from most: we have data; we build signals out of it; with the signals we make predictions; we combine the predictions into a sort of aggregated prediction. And then downstream, once we have all these predictions, we have a trading strategy that takes trades based on the predictions. And one of the nice things about a system like that is, if you improve any component, you’re likely to improve the overall strategy. And that’s nice, because what it means is you can get really focused in a very specific way on the scope of work of what you’re doing, and begin to ask questions that are functionally portable. As opposed to, if you were to take a sort of waterfall, straight-through strategy development process, what you might find is you have lots and lots of things that are bolted on, they work for reasons that may or may not be transparent, and you have this hard problem of how do I allocate capital between them. With the kind of design that we have, which is basically data, feature, prediction, portfolio construction, execution, and monitoring, you can get really focused on a more narrowly defined problem and get really good at it, and then recognize that it can uplift the whole thing. It’s also quite flexible for building other things, which is also good.
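
As a rough sketch of what that componentized flow could look like in code: the stage names follow Charlie’s description, but the interfaces are hypothetical, not Altis’s actual system.

```python
# Hypothetical sketch of a componentized data -> signal -> prediction ->
# portfolio pipeline; interfaces are illustrative, not Altis's stack.
from typing import Protocol
import numpy as np

class SignalLayer(Protocol):
    def predict(self, data: np.ndarray) -> np.ndarray:
        """Ingest upstream data, return per-asset predictions."""
        ...

class StrategyLayer(Protocol):
    def construct(self, predictions: np.ndarray,
                  cov: np.ndarray) -> np.ndarray:
        """Turn predictions plus a risk model into portfolio weights."""
        ...

def run_stack(signal: SignalLayer, strategy: StrategyLayer,
              data: np.ndarray, cov: np.ndarray) -> np.ndarray:
    # Because each stage has a narrow contract, improving any one
    # component in isolation tends to uplift the whole strategy.
    return strategy.construct(signal.predict(data), cov)
```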

Corey Hoffstein  26:20

Can you talk a little bit about how the signal layer operates, and what the actual output of this layer looks like?

Charles McGarraugh  26:27

Yeah. So we believe that there are multiple risk premia on offer in the market at any given moment, and we want to try to use them to predict future returns. So what we do is, we look at some of these alphas which are pretty well known, like trend and carry. We have our own approach to how to do that properly, but trend and carry are the obvious alphas that interplay with each other, and so on. We look at trend and carry, we look at inter-market lead-lag relationships, which we think are also important, and then we have some cross-sectional relative value stuff that we spend a lot of time thinking about, in an attempt to, I guess, get more predictive power in a way that’s not correlated to trend. And that’s an attempt really to do two things. One is to solve some of the problems that are well known with trend following, which is that you’re not on your high watermark most of the time. You have this really nice skewness in trend, but of course, sometimes you also feel like you’re the dumb money and the last guy at the party staying too long, the dumb guy who buys the top and sells the bottom. And so what we’ve tried to do is have some degree of alpha that works at other times, to attempt to smooth those returns. And also to have more alpha, right, just have more alpha, because if you’re on a Sharpe of one or one and a half, life’s a lot easier than if you’re on a Sharpe of 0.5. So we have these alphas, we’ve spent a lot of time on them, trying to make sure that they are robust and not overfit, and we try to make sure that they’re not correlated, at least historically, going back over big, robust datasets. And then it’s basically: how do I blend these indicators into some kind of overall regression that creates predictions? And the way we think about our predictions is on multiple time horizons. So we think about predicting a day forward, both risk and return, then one day into five days, and then five days into 15 days, so we’re predicting up to 20 business days out. And we parameterize that prediction for every asset as a curve of information ratios. So we can get a sense of the relationship of the signal strength between markets, and this normalizes to risk, so it allows you to make apples-to-apples comparisons between instruments.
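
A minimal sketch of that output format: a per-asset curve of information ratios over the horizon buckets named above. The return and risk forecasts here are placeholders, not a real model.

```python
# Sketch: parameterize a prediction as a curve of information ratios
# (expected return / expected risk) over multiple horizons. The mu and
# sigma forecasts below are placeholder numbers, not a real model.
import numpy as np

horizons = ["0-1d", "1-5d", "5-20d"]

def ir_curve(mu, sigma):
    """Normalizing by risk makes signal strength comparable
    apples-to-apples across instruments."""
    return np.asarray(mu) / np.asarray(sigma)

mu = [0.0004, 0.0015, 0.0040]      # expected return per horizon bucket
sigma = [0.010, 0.022, 0.045]      # expected volatility per bucket
print(dict(zip(horizons, ir_curve(mu, sigma).round(3))))
```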

Corey Hoffstein  29:02

In our pre-call, you said to me, quote, reliance on market data alone for generating signal is a potential weakness in a world of increasing nonstationarity. What did you mean by that?

Charles McGarraugh  29:15

Yeah. Okay. So I’ll tell you a story, I guess, in human terms rather than quant terms first, which is: the world is changing. And the world is changing more quickly, I would argue, than it did in the pre-COVID world. And that’s basically because from ’08 to 2020 you had this dominance of the Fed input, and a policy framework that is designed for volatility suppression. I think the ’08 crisis was scary enough for policymakers that it was like, okay, our goal is to stop transitions in institutional structures. We let Lehman go, and that was a really scary mistake, because pretty soon we got scared that the ATMs would run out of money. So our goal is basically to ossify incumbency. And that’s really what QE did. As a function of that, it also made the cost of capital really cheap, which allowed a lot of these risk premia to compress, and the returns on offer from actively trading and attempting to rebalance them were perhaps not that great relative to transaction costs, not that much alpha in them overall, which I think is a good part of the reason why trend following did so poorly. After COVID, we’ve lived in a world where there’s a greater threat of deglobalization, a greater threat of geopolitical instability, much more aggressive policy interventions in markets, industrial policy. And all of these things, from a statistical perspective, smell like regime change. They smell like big coefficients that matter a lot changing, like the correlation coefficient between bonds and equities, now that inflation is out of the bag. They also feel much more like a world with multiple binding constraints in terms of what policymakers can do and what market participants can do. And of course, the most binding constraint of all, from a market participant perspective, at least in liquid markets, is just the cost of capital. Because how much leverage can you run at five percent versus at one percent, the risk-free benchmark against which every cost of capital is priced? And so we think about that, and it’s like, okay, the world is changing, and the structure of all these economic relationships, as the global supply chain graph and the global capital flow graph continue to rewire, could change quite a lot. And that means a lot of these statistical relationships could break down or become different. And so I think, as a design principle in building trading strategies, you’re going to want to be more adaptive than you were previously. In the old world of recent memory, it was sort of like, well, in the end, no matter what sort of high-minded quant language I use, if in the end I’m betting on financial asset inflation, I’m going to make money: multiple expansion and carry basically both make money. In a world where the discount rate is all over the place and not trending, or perhaps trending in the wrong direction, and stabilizing with a whole lot more volatility, that’s a totally different animal. So yeah, that’s what I meant, I guess, by being prepared for nonstationarity. And it’s a design principle: the idea is greater adaptiveness in the system. And that comes with risk, because if you’re going to fit response functions to data with shorter windows of data, you’re going to end up with less statistical robustness, more memorization of noise in your system, which probably means bigger predictions.
And so the risk of getting too aggressive on that greater adaptiveness is definitely high, from an overfit perspective and from an overconfidence perspective in terms of trade sizing. It’s not just the true negatives on your predictions that hurt; it’s also the false positives. But that doesn’t mean it’s the wrong idea, because the market structure is moving faster, and I believe that the pace of these nonstationarity events is going to increase over the next couple of years as we go through our fourth turning and rearrange institutional relationships. And it also means opportunity, of course, because big dislocations in the market obviously mean that there are big price changes that you can lever up in futures markets and take advantage of. And futures are really attractive as an asset class for that, because the leverage is cheap and it’s well defined. And yes, from a structural perspective, it’s risky, but it’s not as risky as OTC leverage provision, because clearinghouse margin is pretty high up the waterfall in terms of counterparty risk and all that.

Corey Hoffstein  33:43

So what about data sources that aren’t directly related to the market? How are you thinking about ingesting alternative datasets?

Charles McGarraugh  33:51

Yeah, so alternative data is really exciting, because it offers the potential for information content that is not in just the tape of prices. You’re allowing a system to ingest new, orthogonal information, and that seems really potentially helpful in raising your predictive edge when you’re predicting markets. Now, that said, there’s a whole bunch of challenges with it. The first is that lots of alternative data may already be captured by the price action. It is possible that the market has already ingested the information, and you could do just as well slicing something off market data at a fraction of the cost. Second, it’s fiddly, and it’s often non-contemporaneous. So just getting to the point of, how do I get an apples-to-apples ingestion of this in a way that is properly out-of-sample versus the forward-looking returns and so forth, that’s just a big, complicated ETL job, which can be a challenge. And then there are some more challenges we found in integrating it, which is that a lot of these datasets don’t have the same amount of history. And if you have an asynchronous amount of history, then if you’re building systems that are learning from data, you are faced with this really difficult challenge: given my different sample counts and my different potential indicator streams, how do I think about blending them in a way that is thoughtful, not too heavy on recent experience, thinks about statistical robustness, but also accounts for nonstationarity? So adjusting the temperature of learning is a big challenge there. And then you have another problem, which is that a lot of these sources of information may only pertain to certain markets in the cross-section of what you’re dealing with. And then you have to think, okay, I’ve got something that seems pretty good in this specific case, but it means I need to think about its interaction with everything else in only a subset of the cross-section of my assets, which means I’m potentially losing robustness and sample count, and I’ve got to think about applying that. So all those things are pretty big challenges. And for what it’s worth, we’ve done quite a lot of work on that but have not yet launched any signals from alternative data. But it is in our ambition to do that, because of course we would like to be able to predict the market better, with more granular information content.
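
One plausible way to handle the asynchronous-history problem described above is to down-weight short-history indicators by their sample count. The square-root confidence rule below is an illustration of the general idea, not Altis’s method.

```python
# Sketch: blend indicators with different history lengths by
# down-weighting short-history signals by sample count. The sqrt(n)
# confidence rule is an illustrative assumption, not Altis's method.
import numpy as np

def blend(signals, n_obs, n_full):
    """Confidence-weighted average: weight each signal by the square
    root of its share of the full history."""
    signals, n_obs = np.asarray(signals), np.asarray(n_obs)
    conf = np.sqrt(np.minimum(n_obs, n_full) / n_full)
    return float(np.sum(conf * signals) / np.sum(conf))

# Three indicators: ~20 years, ~5 years, ~1 year of daily history.
print(blend([0.3, -0.1, 0.8], [5000, 1250, 250], n_full=5000))
```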

Corey Hoffstein  36:10

So you have this market data and this non-market data all being ingested by the signal layer, the output being these prediction curves that you create, and that’s what the strategy layer is ultimately ingesting. So, at that strategy layer, how do you go from prediction curve to portfolio?

Charles McGarraugh  36:29

Yeah, so we’re pretty simplistic about it in a certain sense. But I guess the first insight that we build everything around is that we think it’s important to predict over multiple horizons, because we live in a world with transaction costs. And in a world of transaction costs and dynamism in the market, there isn’t a single optimal portfolio; there’s just a trade-off between transaction costs and the signal horizon you’re attempting to monetize. And so we think of portfolio construction as a dynamic scheduling problem that is intrinsically related to the friction of the underlying markets that we’re trading, which of course is not constant, gets bigger the bigger your scale is, and is very different across markets. You can imagine, if there were no transaction costs, you would not be in the long-term prediction game. You’d just try to predict the next time step, for however tiny a bit of edge you have, and then rebalance everything, and you’d go to as short a horizon as you can, because that takes your bet count up, and as long as there’s any positive edge, you’ll extract some money from that. But in the real world of transaction costs, you’re an active trader. We’re not in the passive market-making game at all at Altis Partners; it’s just too infrastructure-intensive, and we’re never going to go up against the Virtus of the world. As an active trader taking the market, we need to be thinking about the time horizon that we’re expecting a return over. So what we do is, we look at all the different time horizons, we look at the expected returns relative to the transaction costs, and we optimize to maximize the expected growth rate of our capital. That’s geometric mean maximization; you can think of it as a fractional Kelly criterion betting system. And the upshot of that is there’s no volatility target on our strategy, or our strategies I should say, since we’re not talking about any one product here. There’s no particular volatility target, because we’re going to take more risk when we think we have a greater statistical edge, and less risk when we don’t. We think that’s a really desirable property of a trading strategy in general, because you don’t want to force your system to take risk if it doesn’t have edge, because of the volatility drag: volatility is the enemy of compounded returns. So I think that’s a pretty important thing. And no volatility target does not mean we have no risk control. What we do instead is we use the margin-to-equity number on our system as basically the high-level leverage control knob. We specify a cap on our margin-to-equity, and then we just optimize. We effectively tell the system: go do the geometric mean maximization as if your assets were only equal to your NAV times the margin-to-equity cap, which makes it a transaction-cost-aware, intertemporal, fractional Kelly betting system. So that’s, at a high level, how we do it. And you can imagine the inputs for all those things are basically a transaction cost model, your existing positions, a return expectation on every instrument in the investable universe, and the covariance of the assets across the investable universe. And it is effectively a mean-variance optimization, just one that’s dynamic.
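
To make the sizing logic concrete, here is a minimal one-period sketch of a transaction-cost-aware, fractional-Kelly step with a margin-to-equity cap, in the spirit of what Charlie describes. The objective form, the smooth-absolute-value trick, and every number are illustrative assumptions, not Altis’s model; the production problem is dynamic and multi-horizon.

```python
# Sketch: one-period fractional-Kelly (geometric-mean) step with linear
# transaction costs and a margin-to-equity cap. All numbers illustrative;
# the real problem is dynamic and multi-horizon.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.02, 0.01, -0.015])       # expected returns per period
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])      # return covariance
tcost = np.array([0.002, 0.003, 0.004])   # cost per unit of turnover
margin = np.array([0.05, 0.08, 0.12])     # margin per unit of notional
w_prev = np.zeros(3)                      # current positions
frac = 0.5                                # fraction of full Kelly
me_cap = 0.25                             # margin-to-equity cap

def sabs(x):
    return np.sqrt(x * x + 1e-9)          # smooth |x| for the optimizer

def neg_growth(w):
    # Fractional Kelly: scale the variance penalty up by 1/frac.
    growth = w @ mu - 0.5 * (w @ cov @ w) / frac
    return -(growth - tcost @ sabs(w - w_prev))

cons = [{"type": "ineq", "fun": lambda w: me_cap - margin @ sabs(w)}]
res = minimize(neg_growth, w_prev, constraints=cons, method="SLSQP")
print(res.x.round(3))  # risk scales with edge; no fixed volatility target
```

Note the property Charlie emphasizes: if the forecast edge `mu` shrinks toward zero, the optimal positions shrink with it, so risk is taken in proportion to edge rather than to a fixed volatility budget, with the margin constraint acting as the high-level leverage knob.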

Corey Hoffstein  39:49

So let’s say all your signals create a strong prediction for the same asset in the same direction. You talked about not having that explicit volatility target, using that margin-to-equity risk system, and wanting to take risk when you think you have alpha. If you have all the signals leaning in the same direction, is this a case where risk ultimately should get pressed? Or is this a case where we should actually take a step back and say, no, actually, we need a second check, risk should actually be somewhat modulated, or we might get out over our ski tips here?

Charles McGarraugh  40:27

Yeah, so it is possible that all the signals align, and then our weighting creates a more aggressive aggregate prediction. That is possible. And in that case, we will take more risk, because that’s what is Kelly-optimal. Now, that said, getting concentrated in any one thing is still going to add a lot of volatility to the portfolio, so it’s still naturally going to be reined in, especially because the overall amount of edge you have is still quite small relative to the overall risk involved. I would say our best predictions maybe predict 0.15 of a monthly volatility. So we’re not saying, oh yeah, we’re 90% confident in something; we might be 15 or 20% confident or something, and even then we might be wrong. So yes, we’ll press the risk, but it will still naturally be curtailed. There’s another sort of interesting thing, too, which is, anytime you’re using a kind of naive mean-variance optimization, your system is of course going to build really big spread positions in correlated things if you think they have different return expectations. And generally, that can be spooky, because if you’ve got something wrong, you might be really levered and have a problem. That’s the road to perdition from a risk management perspective. There are lots of ways that people handle this. You might use a two-factor risk management system, like capping the aggregate leverage by instrument or by sector. You might do something like put a ridge bump on the covariance matrix and boost the volatility of individual assets, sort of telling your system that idiosyncratic risk counts for more, basically, than you’re actually observing in the market. We don’t like to do that, because we think it’s important to have an accurate representation of our best guess of what the measured risk really is. So that kind of ridge bump, or correlation clustering, or that kind of stuff, we don’t really do. What we like to do instead, in plain English, is to say: well, geez, you’ve got these really divergent opinions on these really correlated things, but history says that that’s pretty unlikely. Maybe your opinions should not be more granular than your risk management methodology. And so we end up topping and tailing stuff in relation to the cross-section of other stuff, and that naturally enforces diversification. It clips a lot of the tall trees in terms of the predictions and basically forces more diversification in the system. But we do it at the prediction layer, by trying to be humbler. Our research suggests that that kind of approach, where you’re letting additional cross-sectional information into your predictions pretty late in the processing game, is a really good thing to do for the risk management side. But it also seems to have a really nice result, which is that it seems to uplift the predictive accuracy of the system. It’s like taking an ensemble of information and thinking about how everything should relate to each other: it makes your opinions smaller, but you end up predicting more. And that’s a really interesting result, less is more. So that’s how we think about that.
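
One plausible reading of "topping and tailing" at the prediction layer is a cross-sectional winsorization of forecasts. The sketch below is an illustration of that idea; the clip level is an assumption, not Altis’s parameter.

```python
# Sketch: "top and tail" forecasts against the cross-section, so that
# opinions are never more granular than the risk methodology supports.
# The clip level k is an illustrative assumption.
import numpy as np

def top_and_tail(preds, k=2.0):
    """Winsorize forecasts at +/- k median-absolute-deviations from
    the cross-sectional median; clipping the tall trees enforces
    diversification at the prediction layer."""
    preds = np.asarray(preds, dtype=float)
    med = np.median(preds)
    mad = np.median(np.abs(preds - med)) + 1e-12
    return np.clip(preds, med - k * mad, med + k * mad)

raw = [0.05, 0.02, -0.01, 0.60, -0.45]    # two "tall trees"
print(top_and_tail(raw))                   # extremes pulled toward the pack
```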

Corey Hoffstein  43:39

In our pre-call, you said, quote, as a design principle of the system, we believe that change in the market is accelerating; we want the system to be more adaptive than less adaptive to this market state. I think you started to touch on this a little bit earlier, but I wanted to ask it as a standalone question, because I think it’s a powerful view that you are holding. So the question here is really: what is your thesis for this view, and how does it actually impact the design of your systems?

Charles McGarraugh  44:09

Right. Well, the kind of broad-based macro thesis I think I covered a little bit before, but the basic idea is we’re in a world with a greater degree of institutional volatility. People are reassessing how they’re all going to get along. So every dependency graph in the world, whether that’s the capital flow, or the commodity flow, or the product flow with customers, that stuff is in a state of flux. And crucially, behind the scenes, the main policy choices, which had been volatility minimization, are now, in many recent examples, volatility maximization. An old man gets upset and decides to invade a country, and guess what, all hell breaks loose in the market. Some other guy decides to massively constrict the cost of capital, and guess what, everything in the system changes. Somebody decides, and this will be controversial, that they don’t want to lie about a public health issue, and then suddenly everything changes. So you’re just looking at a world where policy decisions are being taken, and they’re no longer being taken for vol minimization. And that involves structural shifts. And of course, trend, broadly defined, ought to benefit, I guess from first principles, because a trend is the manifestation of persistent fundamental change in the world’s relationships as reflected by prices. A trend is the process by which a market ingests information to reflect change: persistent change, not just noise. So trend ought to do okay, you would think. But I think one of the questions is, can we do better? And like we described with some of the alphas, there are these other premia out there, and as a general principle, capital is more constrained, so probably you can be paid more to take all these premia. But in the end, what is a premium, really, but a predictive edge on prices? And so thinking about how to blend your predictive edges on different kinds of premia is a pretty important part of the game. And then, to get to your question on adaptation, it’s basically: if I learned how to blend these in 2014, maybe the world is different enough, because of all these nonstationarities, that I need to be conditioning my methodology in some way. A good example would be during the March 2020 COVID implosion. The system we had running at the time, which is not the current system, took too long to stop out of equities. It was like, well, we can see what’s coming, this is really scary. And it’s like, yeah, but I’ve got a long window for calibrating my volatility expectation and my VaR and whatnot. And then it crashed, and it was like, oh gosh, the trend is really terrible, I’m out. And then it was like, okay, well, now I’m going to take too long to step back in. Because now that the Fed’s backstopped the market, the market’s up, vol’s coming down, but only once that spike in vol is out of the lookback window will I buy it. And it’s like, well, I’d prefer to buy it before it goes up 20%, I don’t know about you. So that responsiveness seemed kind of wrong. So one example of what we’ve done is we’ve taken multiple lookback windows for volatility computations on a realized basis and taken the greater of them. And what that will do is, the spikes will come into scope and you’ll stop out sooner, and then they’ll leave the scope sooner and you’ll step back in sooner, too. And that’s, again, an implementation of the idea that the velocity of what you’re doing should be faster.
Now, that’s a design trade-off, because it could come at the expense of greater portfolio turnover and false breakouts. If you’re going to adapt more quickly, like I said, you could overfit, you could churn, you could pay too much in transaction costs. But wherever that design trade-off sat before, in the QE era, my strong view is that it’s further toward the adaptive side now, because the pace of fundamental change is increasing. So if you fit something to that dataset in the QE world, you might want to think about making whatever your response functions are more elastic now. And probably that’s not a bad thing.
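
A minimal sketch of the "greater of multiple lookbacks" volatility idea Charlie describes; the window lengths and the simulated returns are illustrative assumptions.

```python
# Sketch: a "greater of" realized-volatility estimate over multiple
# lookback windows, so vol spikes come into scope quickly. Window
# lengths and the simulated returns are illustrative assumptions.
import numpy as np

def adaptive_vol(returns, windows=(10, 30, 90)):
    """Annualized realized vol: the max across several lookbacks."""
    vols = [returns[-w:].std(ddof=1) * np.sqrt(252)
            for w in windows if len(returns) >= w]
    return max(vols)

rng = np.random.default_rng(0)
calm = rng.normal(0.0, 0.005, 200)    # quiet regime, ~8% annualized
shock = rng.normal(0.0, 0.03, 5)      # five days of crisis-sized moves
series = np.concatenate([calm, shock])
print(adaptive_vol(calm), adaptive_vol(series))  # estimate jumps fast
```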

Corey Hoffstein  48:01

So one of the things we haven’t spoken about at all yet is any specific product concept, right? This has all been very high-level, design-based thinking, which I really appreciate. So I want to take it a little more granular: maybe not talk about a specific product so much as product concepts, or how you think about product design, because you do offer strategies in a number of structures, including hedge funds and ETFs. So I wanted to get your thoughts on your approach to productizing your investment stack, and on how the actual structure you choose impacts the ultimate design.

Charles McGarraugh  48:38

Yeah, that’s a great question. And I’ll go in a couple of parts. So one question is the product management question, which is: why choose which particular design for which particular job? And then the second bit is, okay, within those product management buckets, what do you end up with? On the first point, again, my strong view is that passive investing is probably the wrong toolkit for the next 10 years. And that’s because we are no longer in a broad-based asset inflation world only; there are more risk premia in the world. So if you think, again, from a design principle of an investment strategy or a product, it’s: okay, if I’m going to put capital to work, how much information about the current state of the world should I incorporate in that decision? The passive investor says, I should ignore all of it, because statistically it’s in the too-hard bucket, plus the market always goes up in the end. That’s a view, right? Now, I might say, well, that’s predicated on the sample span of, like, one 50-year baby-boomer lifespan, which is not a very large sample count. So "stocks always go up" might be an overfit. But let’s say you’re bullish on human enterprise and all that, and stocks do always go up. You might still say, well, okay, on the one end I could completely ignore the state of the world, and on the other end I could have some super, super complicated, heavily conditioned, massively overfit thing. My view is that right now, neither of those extremes is the right answer. We all agree that the massively overfit thing is probably not the right thing. But the completely blind, state-independent decision making, I would also argue, is wrong. And we don’t even think about it, because it’s the water that we swim in. In financial markets, there’s however many trillion of passive assets, and we just don’t even think that it’s there, because we don’t feel it, but it’s massive. So our view as a house is: on that design spectrum of ignore everything versus take everything into account with massive overconfidence, there’s a happy medium somewhere along the line, which means judicious use of well-proven risk premia phenomena, like trend and carry and so forth. And probably you can do a bit better. So that’s the high level. So as a product manager, what you’re trying to do is deliver something like that, that you can hang your hat on and feel confident about, or rather that the customer can, and that has a clear use case for the customer. Which is: I get it, you want absolute returns, you want diversification, you want liquidity, you want transparency, you want good governance, you want all these things. So firstly, with the strategy, how can I give you that? Well, a positive absolute return and a negative correlation to all the liquid things you already own are probably pretty desirable, which naturally argues for trend, volatility arbitrage broadly defined, and carry. But is it really the 40 in the 60/40? Maybe, maybe not. But maybe it’s helpful inasmuch as it can give you some insight into how to dynamically trade the market. And so as a product manager, I want to give you that negative correlation and positive return in the context of a world that’s likely to benefit active trading more than recent memory would suggest. So then the question is, okay, cool.
Well, if we do that in hedge fund format, there’s a pretty wide design space in the choices we can make. We can trade a lot of instruments, we can trade them at varying frequencies, including intraday, and we can deliver you these statistical alphas, or risk premia, whatever you want to call them. Maybe the beta is the alpha; but there are risk premia in there, and there’s a fair amount of alpha in how to switch between them. And the value proposition is the diversification, to a high degree, because futures are really good at moving a lot of VaR around really efficiently. And so managed futures are great if your goal is to actively trade and move VaR around, and you can deliver that diversification without too much specialization. So that solves that kind of product design question. And then, in terms of the different product buckets, they come with different limitations. Like I said, in a hedge fund there’s quite a lot that you can do. In an ETF there’s less that you can do: you’re on a daily rebalance, best case; and generally speaking, the universe of tradables is more constrained, because it needs to be tractable for market makers. Also, it’s pretty hard to build a monoline, factor-specialized product in ETFs, because you can’t really gather critical mass from an AUM perspective. So the strategy probably needs to work as an absolute-return buy-and-hold, so that the client can kind of set it and forget it. I think that’s important. And you’re trying to solve a problem for a customer, which is: give me this anti-correlation with transparency and liquidity. So that’s really the design philosophy. But again, the underlying thesis is that liquid alts have a really important role to play, and that role is increasing in importance in this environment for the foreseeable future. And then the nice thing about the ETF structure in particular is that it’s a disintermediated distribution model, by and large. It’s listed, so you’ve effectively outsourced a lot of the manual effort of due diligence to listing requirements. And it’s got to be tight, obviously. And with that disintermediated distribution model, it can scale, and it’s also more discoverable, because it’s listed. And so our bet working on ETF strategies is: we want to deliver a product that’s really good, that satisfies the needs of the return-stacking world, and ultimately the client’s need for a true diversifier that actually has positive expected return. So that’s what we’re striving to do. Now, obviously, past performance is not indicative of future results, but that’s at least what we’re trying to do. In the hedge fund, I think you can go a little more for just the highest-octane kind of absolute return you want, and then modify it with SMAs, basically, on a per-client basis, depending on what a client’s specific needs are. So it’s a little bit more customizable as well.

Corey Hoffstein  54:54

So you shared quite a few thoughts about why you think alternatives are appropriate for this market environment, or perhaps more appropriate for the coming market environment. But I’m curious why, specifically, you’re looking toward ETFs as a vehicle for delivering alternative strategies. Why not just focus on the hedge fund delivery mechanism?

Charles McGarraugh  55:17

Well, look, the hedge fund mechanism is obviously really flexible, and of course we have a hedge fund. But with ETFs, we think there is potential to deliver a pretty superior value proposition to a much larger total addressable market, because of the ease with which advisors can allocate to them. Not every advisor has access to hedge funds, and even if they do, the matching process and figuring it all out is quite hard; it’s quite nice to just have a much more streamlined process that’s disintermediated, with some standardization. I think that’s a really important consideration. And then, when you think about the value proposition of these types of active strategies for a super-high-end hedge fund, people sometimes conflate two separate things, especially on the multi-manager platforms. One aspect is the returns enhancement and risk reduction coming from diversification across lots of different factors and lots of different markets. And I think there’s a misperception, by the way, that if you have two liquid markets, they have to be kind of contemporaneous, with no arbitrage between them. But even if the markets are liquid, oftentimes the client bases that drive the flows in them are not, and so there is a structural opportunity in cross-market relative value, I think just because of the way the markets are set up. That’s one of the byproducts. So that’s the multifactor diversification. And then, of course, the other thing is that in a really high-end hedge fund, you’ll have a lot of specialists with really deep domain expertise, scraping every little bit out of what’s on offer in the market by virtue of their security-level expertise, or at least that’s the claim. Now, whether that is worth paying for is a different question, because obviously it’s expensive to have that kind of operation set up, but there are some people who’ve run things like that to great success. So at the higher end, the super-premium product, those two USPs, specialized alpha plus factor diversification, are at the end of the day the USP, with good risk management, obviously. In the ETF wrapper, at least a proportion of that can still be delivered, which is the factor diversification benefit and the returns enhancement from it. And when you think about the factors that people bet on in a 60/40 model, it’s basically the excess return from carry and duration, and the excess return from the equity risk premium. We’ve just come off a 40-year period where real yields went from super high to super low, and it may be the case that we’ve just seen an outlier-to-the-upside returns profile for those specific factors. But in the super long run, as some of your other podcasts have alluded to, there are other factors in the market that also have positive expected return risk premia associated with them, but may require active management to access: things like trend, or cross-sectional relative value. And those things can be delivered. For the next decade or two, when we’re in a world with much higher inflation volatility and just a higher cost of capital in general, I think it’s debatable whether you’re going to see a top-decile outcome in the returns to the carry premium and the ERP.
And so what we think is that there is likely to be some structural pressure for different diversifiers, creating fund flows into other kinds of active factor product that delivers the only free lunch really on offer in financial markets, to the best of my knowledge anyway, which is diversification. And to that end, we’ve done a JV with Simplify in order to stand up some of our IP in that more accessible and scalable wrapper. And, you know, we’re optimistic on that.
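As a rough illustration of why diversification is the "free lunch" here, the sketch below computes the volatility of a two-asset book as the correlation of the second sleeve moves from positive to negative. The weights, vols, and correlations are made-up illustrative inputs:

```python
import numpy as np

def portfolio_vol(w1, vol1, vol2, corr):
    """Annualized vol of a two-asset portfolio; weights sum to 1."""
    w2 = 1.0 - w1
    var = (w1 * vol1) ** 2 + (w2 * vol2) ** 2 \
        + 2 * w1 * w2 * vol1 * vol2 * corr
    return np.sqrt(var)

# A 60/40-style book (15% vol sleeve + 6% vol sleeve): portfolio vol falls
# as the second sleeve's correlation drops, with no change in the inputs' vols
for corr in (0.3, 0.0, -0.3):
    print(f"corr {corr:+.1f}: portfolio vol {portfolio_vol(0.6, 0.15, 0.06, corr):.4f}")
```

The same arithmetic is what makes a negatively correlated absolute-return sleeve valuable: it lowers total risk without requiring any single line to earn more.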

Corey Hoffstein  59:04

One of my favorite questions to ask some of the potentially more opaque portfolio managers that come on this show is what due diligence question they think people should be asking them. And when I asked you that, you said the question people should ask you is, quote, "How do you think about non-stationarity risk?" So what I wanted to ask you is: why do you think that’s an insightful question, and how would you answer it?

Charles McGarraugh  59:32

I think it’s an important question because it really gets to the heart of some of the biggest debates in investing. One of the biggest, of course, is active versus passive, and another is quant versus discretionary. I think it was one of your recent podcasts where your guest said that the discretionary guys have problems with salience bias, over-weighting whatever they’re focused on; they have problems with trade sizing; problems like ignoring the underlying factor exposures and factor edges that are out there. All true. At least in my experience as a reasonably successful discretionary trader, those are definitely problems in my skill set. And then, on the flip side, there’s a totally different thing, which is an arrogance of belief in the model, an over-reliance on data, an under-appreciation for how much markets can change and how fat the tails really are. Generally speaking, if you’re data driven, you’re always, in some sense, assuming past performance is indicative of future results. So to me, if you have somebody who’s flogging data-driven strategies, a really legit question is just: what do you do if the world changes? And one good answer to that is: well, if I’m in the trend following business, if the world changes, eventually I’m going to bet on it changing even more. I think that’s actually a pretty good answer, because that’s what makes trend attractive. Basically, it’s got this skewness, it’s got this anti-correlation to regime change, right? It’s one of the few strategies that makes money on average, let’s say, in non-stationary events. But again, it gets to this principle of being more thoughtful about how adaptive you really are, and I think it’s an open question. Think about the whole workflow that a quant often takes, which is: take in a bunch of data, attempt to learn something from the data, and backtest how I might have done, with varying degrees of believability depending on how good a job we’ve done and how hygienic we’ve been about out-of-sample and walk-forward and all that. What you’re trying to do, broadly defined, is learn generalized patterns from the past without accidentally memorizing the noise. And when I think about the future, I’m like: okay, well, the future is some combination of repeated patterns from history, and then the genuinely novel. So it makes sense, structurally, that any strategy, when it goes live, will underperform its backtest. Because if the future is repeated patterns from the past, I’ll probably do a good job; and if it’s genuinely novel, unfolding as the future turns into the present, then at best it’s a push, or maybe I’ve even gotten something wrong. So it makes sense, in some sense, for any backtest to underperform live, and it may not be from alpha decay; it just might be that this time it’s different. It’s not entirely different, but it’s also not entirely the same. So one question is, again, that adaptiveness in the response function, whatever you’re building from the data: well, how do I do that? And then, as a principle of strategy design, you’re asking: when the future seems novel versus the data set I fit over, is history a circle, or is it an arrow? Are we just repeating the patterns of some other regime? So things like an exponential decay of my data to get my signal...
...that might not be right, because a signal that appertained 35 years ago might actually be more relevant now, given a regime shift, right? Or is it, wow, something totally new, an unknown unknown? So how do you think about that cyclicality versus linearity of history? Because it’s an important concept in the design process. And then what we do is, we try to think of economic first principles that probably don’t change at all, like, ever. For example, if you run out of storage, the price of spot goes a lot lower. Right? So you’re looking for pockets of stationarity at a low enough level of the primitive that you can then build up what appear to be non-stationary regimes in the data, but they’re still on solid ground from first principles. And so when we think about the R&D agenda, we’re spending a lot of time thinking about that kind of thing, which is either a top-down inference of the ways things can change, and an attempt to measure that and infer the market state, or building something up from relationships that you can confidently say are definitely stationary. And even they might not be, because nothing is definite, obviously, in trading or in the future. But yeah, that’d be the kind of answer I would like a PM to tell me. But of course I think that, because this is the thing we built.
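Since the exponential decay of data comes up here as a concrete design choice, below is a minimal sketch of an exponentially weighted trend signal in which the half-life is the adaptiveness knob: short half-lives forget old regimes quickly, long ones remember them. The half-lives and simulated prices are illustrative assumptions, not anything Altis has disclosed:

```python
import numpy as np

def ewma_trend_signal(prices, half_life):
    """Trend signal as the sign of the latest price versus its exponentially
    weighted average. half_life (days) controls how fast old data decays."""
    lam = np.exp(np.log(0.5) / half_life)        # per-day decay factor
    # Most recent observation (last element) gets weight lam**0 = 1
    weights = lam ** np.arange(len(prices))[::-1]
    ewma = np.sum(weights * prices) / np.sum(weights)
    return np.sign(prices[-1] - ewma)            # +1 uptrend, -1 downtrend

# Hypothetical drifting price series; compare fast vs slow decay choices
prices = np.cumsum(np.random.default_rng(1).normal(0.05, 1.0, 500)) + 100
for hl in (10, 60, 250):
    print(f"half-life {hl:>3}d: signal {ewma_trend_signal(prices, hl):+.0f}")
```

The design tension in the transcript is exactly this parameter: too short and you chase noise; too long and a pattern from decades ago can dominate even after the world has moved on.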

Corey Hoffstein  1:04:13

Well, you led me nicely to where I wanted to go next, which is a question about what your research pipeline looks like today. So there’s explicitly that question, but maybe with a little bit more nuance, because you do have this separation between the signal layer and the strategy layer. As someone who’s helping set the course for research, how do you think about dedicating resources to research in the signal layer versus research in the strategy layer?

Charles McGarraugh  1:04:39

Right. So there’s a risk management piece to this, because research takes time, which means it takes resources; it’s an investment decision that may or may not pay out, because research is risky. You may do a lot of work and then discover it wasn’t worth doing. Then there’s also a product management question, which is: we have investors who’ve been sold on a particular product idea or strategy, and if you come out with something that’s totally different, it may not do what it says on the tin, and that could be a problem. So when we think about prioritizing R&D, we want a mix. There are the obvious incremental things to do that just make you a better professional operator in your lane. There’s always a to-do list of those things, but they offer different degrees of marginal improvement, and how much marginal improvement they offer might be condition dependent, so you take a view on that. For example, right now we’re doing a lot of work on intraday data, because we want to reduce the response latency in our systems, from market signal to action, under that basic increase-adaptiveness principle. Things are going fine, but it would be nice to be better. And we think that might not even just be an alpha generation thing; we also think it’s important from a risk management perspective. Even if it doesn’t show up in the expected return, if it changes the skewness and kurtosis, especially on the left tail, that’s a great result, right? That makes the product better. So I’d say incremental improvements to execution, to risk modeling, and just to latency all seem like obvious things to do that anybody would want, no matter what direction you’re taking your system. We prioritize the incremental stuff the highest, because there’s not going to be any question of product-market fit from a product management perspective, and it’s table stakes for just getting better. After that, looking a little bit further ahead at the generalized thing, I guess the question becomes: well, the game ultimately is to ingest information and make predictions on one side, and then, subject to having predictions, trade the best you can given the risk and the transaction costs. Right now, I would say, on the trade-the-best-you-can bit and the risk management stuff, for the kind of trading we’re doing, which is diversified directional basket trading, the mean-variance framework we have seems pretty good from a risk management perspective, though reducing latency would make it better. But if we were going to do a different style of trading, like really big cross-sectional spreads, or curve trades, or calendar spreads, or less liquid markets, we’d have to be a lot more thoughtful about liquidity modeling, and I can imagine that changing the risk management profile downstream in the strategy layer a lot. But right now we’ve got a diverse enough world and enough ideas on alpha generation that we’re really more focused on the prediction layer, beyond those incremental improvements to the system. And when we think about that, we brainstorm as a team, and we try to justify what we’re doing with economic logic or domain-specific expertise. That’s another benefit of being generalists: as a team, we’ve seen a lot of things besides just futures time series over the course of our careers.
And so maybe that gives us a bit of a different perspective on attempting to articulate frameworks that the computer can then listen to. And I would say, in general, we’re getting more thoughtful about, again, adaptiveness, which ultimately means increased conditioning of predictions based on market state, even though that road, in extremis, is the road to the perdition of overfitting. Clearly, on that design spectrum, you probably want incrementally more of that right now than you did before. And so that’s really what we’re focused on.
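For readers who want the mean-variance framework mentioned above made concrete, here is a minimal unconstrained sketch of sizing a directional basket from prediction-layer views. The expected returns, covariance matrix, and risk-aversion parameter are hypothetical stand-ins, not Altis's model:

```python
import numpy as np

def mean_variance_weights(mu, cov, risk_aversion=4.0):
    """Unconstrained mean-variance sizing: w = (1/lambda) * inv(Sigma) @ mu.

    mu: vector of expected returns (the prediction layer's output).
    cov: covariance matrix of the instruments.
    risk_aversion: lambda, an illustrative assumption here.
    """
    # Solve (lambda * Sigma) w = mu rather than inverting Sigma explicitly
    return np.linalg.solve(cov * risk_aversion, mu)

mu = np.array([0.04, 0.02, -0.03])   # hypothetical views on three futures
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.02, 0.00],
                [0.00, 0.00, 0.09]])
print(mean_variance_weights(mu, cov).round(3))
```

Note what this framing leaves out, which is the speaker's point: a plain mean-variance step captures variance but not skewness or kurtosis, so improvements that reshape the left tail would not show up in this objective at all.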

Corey Hoffstein  1:08:32

So we’ve come to the last question of the episode. I know you’ve listened to the podcast before, and you know that every season I ask a new question, a consistent question I ask every guest. What you don’t know is that you are actually the first guest of season seven. I’ve decided that we’re starting a new season, you’re the first guest, and this question is inspired by you and something you said on our pre-call. I love this idea of obsessions, what people are obsessed with. So the question is going to be: what are you obsessed with? You already told me on the pre-call: your recent obsession is this concept of asset duration. As the last question of this episode, I was hoping you could explain to me, what is it about asset duration that has caught your fascination, and why is it such an obsession for you right now?

Charles McGarraugh  1:09:21

Yeah, well, you’re right, that is my obsession right now as a market practitioner. I think it was Warren Buffett who famously said that in the short term the market is a voting machine, but in the long term the market is a weighing machine. To put that more formally, there’s a process of expectations formation, which is all about feedback within the crowd. And my theory, and I can’t back this up with data yet, is that that feedback is more powerful the less it’s subjected to the crucible of realizations of information. So when real yields were negative, dollars in 10 or 20 years’ time were worth more than dollars today, which means that the stories you tell about the indefinite future matter a lot in PV terms. And because there’s no data point you can look at to validate them, other than pretty soft stuff, basically the only thing people go on is what everybody else thinks. So when the discount rate is really, really low or negative, the expectations formation loop can really go hog wild. That’s really interesting. And the tenor of the stories you’re telling probably matters a lot; then, of course, as the discount rate changes, the market is going to have a greater or lesser opinion about the stories you’re telling at different tenors. So I think that’s one thing. And then the second thing is, although Buffett says that in the long term the market is a weighing machine, in different assets the weighing machine operates at different wavelengths. In spot electricity, it operates at wavelength zero: it is not storable, supply and demand match, and that’s that. In gold, the storage cost is super cheap relative to the value, so when do fundamentals need to true up in the gold market? I don’t know. It’s basically a Rorschach test of what people’s opinions are. That doesn’t mean the duration is always constant. But that idea of the interplay between the cost of capital, the wavelength at which fundamental realizations are playing out, and the non-stationary wavelength of expectations feedback, that’s something we’re spending a lot of time thinking about. It may also impact whether short-term trend or long-term trend matters more in terms of informational content. Like, what is the trend? The trend is the market telling you, via its price, "I want to go to a new place." But that might be different depending on the wavelength of the asset you’re trading. And so that’s really something we’re spending a lot of time on, and it’s intimately related to the cost of capital.
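As a quick worked example of the duration point, the snippet below prices $1 received in 20 years at a few real yields. With a negative yield, the present value exceeds a dollar, which is why stories about the far future dominate valuation when discount rates are low or negative; the yields chosen are illustrative:

```python
# PV of $1 received in 20 years at various real yields. When the yield is
# negative, far-future dollars are worth MORE than dollars today, so
# long-duration "stories" carry enormous present value.
for r in (-0.01, 0.0, 0.02, 0.05):
    pv = 1.0 / (1.0 + r) ** 20
    print(f"real yield {r:+.0%}: PV of $1 in 20y = {pv:.2f}")
```

Running this shows the PV swinging from roughly $1.22 at a -1% real yield down to about $0.38 at 5%, which is the mechanism behind the "expectations formation loop going hog wild" when discounting is cheap.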

Corey Hoffstein  1:11:48

Charlie, this has been a fascinating discussion, both from a high-level thesis perspective and from a bottom-up, practitioner-in-the-weeds perspective. I really appreciate you joining me today. Thank you.