In this episode I speak with Adam Butler, co-founder and CIO of ReSolve Asset Management. For full disclosure, at the time of recording I am personally an investor in one of ReSolve’s private funds.

Adam last joined the show in Season 1, where we discussed his background and philosophy of diversification. This episode begins with a discussion of how Adam’s thinking and process has evolved over the last four-plus years, much of which is centered around the idea of experimental design. Adam discusses the adoption of machine learning techniques, the spectrum of complexity between zero- and strong-prior signals, and how proper experiment design allows for greater process diversification.

The back half of the conversation dances across a few subjects. We discuss topics such as seasonality, carry, the operational burdens of introducing a full-stack machine learning process, and the difficulties allocators face in introducing multi-strategy alternatives into their portfolios.

I hope you enjoy this episode with Adam Butler.

Transcript

Corey Hoffstein  00:00

All right, my friend. Are you ready to do this? I was born ready. All right, 3, 2, 1. Let’s jam. Hello and welcome, everyone. I’m Corey Hoffstein, and this is Flirting with Models, the podcast that pulls back the curtain to discover the human factor behind the quantitative strategy.

Narrator  00:22

Corey Hoffstein is the co-founder and chief investment officer of Newfound Research. Due to industry regulations, he will not discuss any of Newfound Research’s funds on this podcast. All opinions expressed by podcast participants are solely their own opinion and do not reflect the opinion of Newfound Research. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Newfound Research may maintain positions in securities discussed in this podcast. For more information, visit thinknewfound.com.

Corey Hoffstein  00:54

If you enjoy this podcast, we’d greatly appreciate it if you could leave us a rating or review on your favorite podcast platform and check out our sponsor this season. It’s, well, it’s me. People ask me all the time, Corey, what do you actually do? Well, back in 2008, I co-founded Newfound Research. We’re a quantitative investment and research firm dedicated to helping investors proactively navigate the risks of investing through more holistic diversification. Whether through the funds we manage, the exchange-traded products we power, or the total portfolio solutions we construct, like the Structural Alpha model portfolio series, we offer a variety of solutions to financial advisors and institutions. Check us out at www.thinknewfound.com. And now, on with the show. In this episode, I speak with Adam Butler, co-founder and CIO of ReSolve Asset Management. For full disclosure, at the time of recording I am personally an investor in one of ReSolve’s private funds. Adam last joined the show in season one, where we discussed his background and philosophy of diversification. This episode begins with a discussion of how Adam’s thinking and process have evolved over the last four years, much of which is centered around the idea of experiment design. Adam discusses the adoption of machine learning techniques, the spectrum of complexity between zero- and strong-prior signals, and how proper experiment design allows for greater process diversification. The back half of the conversation dances across several subjects. We discuss topics such as seasonality, carry, the operational burdens of introducing a full-stack machine learning process, and the difficulties allocators face in introducing multi-strategy alternatives into their portfolios. I hope you enjoy this episode with Adam Butler. Butler, it’s been a long time since I’ve had you on the podcast. It’s a little weird for me; I feel like you and I talk at least once a week. But I’m looking back here: you were episode one of season one, the first guest I ever had on this podcast. You set the tone. I actually had someone tell me that listening to that episode, with some of the things you said, changed their entire view on how to invest.

Adam Butler  03:17

Wow, I don’t know if that’s good or bad. I’d be interested to follow up and see how that worked out for them.

Corey Hoffstein  03:23

It means we can have a high bar for this episode, my friend. A high bar indeed. And I’m excited to have you back on, because it’s not going to be more of the same. When you were a guest in the first season, the conversation was so much about maximizing diversification, not just across fundamental economic exposures and asset classes, but around investment process and time and things, you know, I love, like rebalance timing luck. And I know that there’s been a big evolution of process since then. Not that you threw away any of those fundamental concepts, but the process by which you think about that has really changed. Let’s just start: have the fundamental views of portfolio construction changed over time, and if so, how?

Adam Butler

I think we’ve just continued to push as far as we can go in the same direction, which is seeking diversity. When we chatted in season one, either we were still primarily focused on ETFs or we had just recently migrated some of our strategies from ETFs to futures. Seeking diversity was a big motivation for that, because there’s just such an enormous diversity of liquid, uncorrelated sources of risk and return in futures that it’s much more difficult to get in ETFs or other securities. When you combine that with the fact that with futures you also get insanely cheap funding, and also this mosaic of other types of price-adjacent features; there’s an enormous amount of information contained in the term structure of futures, for example. You can also introduce diversity by creating synthetic securities, like calendar spreads, very easily, without some of the operational complexities of shorting securities. It’s just trivial to take longs and shorts in futures by virtue of how futures are constructed. So you can create relative value strategies, whether it’s pairwise or baskets; there’s a diversity of instruments, synthetic instruments. And the information ecosystem is that much richer. You’ve got stuff like Commitment of Traders reports, and you’ve got all of these sorts of economic reports that are relatively high frequency, like the throughputs on pipelines and the data that comes out of the energy agencies, etc., that you can use either as sources of direct information or to condition the primary sources of information and relationships that help you to forecast returns. So that has been obviously a major step. Our chief propellerhead Andrew Butler has also completed his PhD in robust optimization, so our ability to bring to bear real innovation and best practices in robust portfolio construction, we’ve made pretty substantial strides there. Another source of diversity is just in framework of thought. Most of the canon of empirical finance (think Chicago school, starting with Fama and French in the early 90s) developed a framework for how to tease out relationships between certain characteristics of markets or securities and their ability to explain future returns. That canon is largely derived from having strong priors. They do their best, they do all kinds of gymnastics, to try and conform to the type of experimental design that somebody in the hard sciences would adhere to: somebody doing physics experiments, or biology experiments, or medical experiments. But the social sciences just don’t lend themselves to that kind of experimental framework. They do, but they require the ability to have strong theoretical priors that inform your hypothesis testing.
So while a lot of our earlier work was steeped in some of that empirical finance tradition, largely driven by the Chicago school, we have also taken a pretty substantial departure into experimental frameworks that require weaker priors. The ability to do that largely comes out of best practices in data science and operations research. So we brought that to bear, and with Andrew’s background in machine learning for, for example, wealth optimization, that sort of stuff, we have a natural background in that. And we have just allowed that kind of thinking to take on a greater role in the organization. That’s diversity: strategies that adhere to a framework of strong priors and other strategies that adhere to a framework of weaker priors. Not no priors, but weaker priors. That is a source of diversity, along with a general acknowledgement of the fact that every month, quarter, and year that goes by, we as a team recognize that we can know, or even infer, a lot less about the world, and specifically the world of investing, than we might like, and than we may have previously thought we could. We just continue to bring more and more humility to the process at every level of research and decision making, and that lends itself to greater diversity in every dimension.

Corey Hoffstein

I think you set the table there really nicely for what will likely be the rest of the conversation. You laid out a lot of topics we can go really deep on. And so let me try to peel back the onion layer by layer. Maybe we’ll start with the last couple of years: I know that you and your partners, as a firm, have made a really meaningful reinvestment into incorporating a lot of that machine learning logic into your process. Curious as to whether there was a catalyst for making that decision and what that catalyst was. And if we compare the new approach of incorporating that machine learning process versus the old research approach, maybe you can explain what really has changed, both philosophically and practically.

Adam Butler  09:25

As humans, we don’t do a very good job of consistently examining where our most basic views on the world come from. For example, while my academic background is originally in psychology and the hard sciences, I did my CFA, and after my CFA I took a deep dive into the traditional canon of empirical finance, and that is defined by a set of very nuanced but extremely critical assumptions about how to think about designing experiments and drawing conclusions from experiments. Once you’re embedded in that way of thinking, you don’t stop to question the fundamental assumptions that go into it. By contrast, Andrew, our head quant, came up with much more of an empirical background: allowing the characteristics of the underlying data to inform the types of models and the types of experiments that you want to conduct in order to be able to draw conclusions. It was always very uncomfortable for Andrew, and I kept having to hammer away at him as we designed our earlier strategies that were much more informed by the academic empirical finance literature; I kept having to nudge him in the direction of those techniques. And it was always very uncomfortable for him to have those strong priors, notwithstanding what seemed like extremely intuitive theory that supposedly informed a lot of the priors that we relied upon. There were a few catalysts. For example, some people that joined the team, or that were adjacent to the team, nudged us in the direction of at least acknowledging the potential for us to see what kind of results emerge from experiments that rely on a different set of priors, and that was largely based on best practices in data science. And when we started to experiment in that way, some incredible conclusions and results emerged from that line of thinking. It was sort of an existence proof. It was, first of all, Andrew’s natural proclivity to think in that way, encountering traders that had tremendous success designing strategies in that way, mostly in the higher frequency space, but then applying some of those design features to our set of explanatory variables on daily bars and our markets, and observing that there are some really interesting things there. That existence proof motivated more research in that direction, and eventually allowed us to create an experimental design and a research framework to see how this line of thinking would work when you combine it across all markets, and using some of the feature set that we have higher confidence in as explanatory variables. That was the evolution. And then we spent the last three or four years fully immersed in that way of thinking. Over the last little while we’ve begun to revisit: well, there are strong priors, and there are weak priors, and how can we combine the thinking from both of those to deliver diversity in a number of dimensions? And it happens that combining those actually delivers even better results, which is a really nice discovery.

Corey Hoffstein  12:43

I sort of interpret the strong priors versus weak priors, in the context of traditional empirical finance, as almost an obsession with avoiding false positives, at the cost of being willing to accept false negatives. And you and I have discussed this in the past, and I actually think we may have even discussed this on the first podcast way back then: this idea that false positives are just going to be potentially random noise minus trading costs. That’s going to be your cost of incurring a false positive. Maybe the other cost is opportunity cost, that you’re deploying some capital you could have been deploying elsewhere. But the cost of missing a false negative is all that potential alpha you could have had. So the trade-off perhaps becomes asymmetric here. Curious how you think about that trade-off, and is that adjacent to your concept of strong priors and weak priors?

Adam Butler  13:35

I think it’s right in the sweet spot of the discussion about strong priors and weak priors. In theory, if you trade a strategy based on random signals, then your expected return is going to be random noise minus trading costs. But as you also acknowledged, you also have to carve off some amount of capital or risk budget for those strategies. So there’s also an opportunity cost in deciding to trade strategies informed by random noise rather than strategies that are informed by skill. The challenge is that most of the things that matter in investing and in life are a result of complex dynamics interacting with random noise. At the timescales or trading frequency scales that most of us interact with, in other words non-very-high-frequency scales, it’s reasonable to say it’s next to impossible to discern purely empirically whether a strategy is a true signal or is noise. So let’s think about what a zero-priors research process would look like. Well, effectively, you’re saying you have no idea what kinds of information or features or variables might explain or forecast next period returns. So you’re going to feed all of the available data that you have at your disposal into a machine, with every conceivable transform, and you’re going to allow the machine to tell you which of those variables and transforms explain next period returns. In all of the experiments that we’ve done in trying to tease out which types of variables from the classical literature (so, for example, all the different value specifications or momentum specifications, or investment or profitability) are more or less likely, comparatively, to go on and explain future returns, based on a huge variety of different empirical tests, nothing we have experimented with has allowed us to determine that strategy A is better than strategy B in out-of-sample tests. I would classify that as a hard problem. For me, again, at anything daily bars or higher, the no-priors approach is hard. There are techniques like the model confidence set, for example, that may allow you to take a subset of a very large sample of different strategies and say that there’s a 90% chance that you’ll capture all of the true strategies in a proportion of the set, that you can eliminate the bottom 5% based on some Pareto frontier of different explanatory performance variables, etc. But the edge that you get from that process is so small that it’s almost not worth it. So you’re left having to at least have some set of priors about the kinds of variables that you think there’s a reasonable probability should inform future returns. Then the question is, do you want to impose specific types of relationships? For example, let’s say you assume that investors herd, they chase returns, they overreact to changes in information about the environment. This sets up a momentum or trending dynamic in markets, and therefore past returns have a positive relationship with future returns. So if past returns are positive, we would expect returns in the next period to also be positive, and vice versa. That sets up both the strong prior, in terms of past returns explaining future returns, and also an even stronger prior, in terms of that explanation always having a positive slope. You can then take one step towards weaker priors and say, I believe that past returns explain future returns, but I don’t know if that explanation has a positive or negative slope. There is some literature on short term reversal.
For example, if we go back to the early 90s and short term reversal, it was identified that the one month returns for individual securities were actually negatively predictive of returns over the subsequent period. So strong returns in this month predict negative returns, or low returns, in the next month. A slightly weaker prior type of experimental design says we’ll allow the slope of that relationship to emerge from the data. And then you need a different type of experimental design to determine if you can select, using in sample data, whether the slope is positive or negative when you apply it to out of sample data. So it requires a slightly different kind of experimental design, because now you’re having to select models, instead of imposing a certain structure on the models, like is done typically in the traditional canon of empirical finance. Already, we’ve made another assumption, and that is that the relationship is linear. Is that a safe assumption? We can weaken our priors even further and allow for a higher degree of complexity by saying, maybe there are nonlinear types of relationships between this explanatory variable and future returns. Maybe when returns are extremely negative, or extremely positive, that relationship has a negative slope, but when they’re slightly negative or slightly positive, it has a positive slope. So you need another kind of experimental design to, first of all, determine how many bends in those slopes you will allow. Are we able to forecast an optimal level of complexity, or the number of bends, or the number of times that the explanatory function goes through the x axis, that sort of thing? And if you can identify an optimal level of complexity, can you identify nonlinear models that are more explanatory than simple models? That requires a different kind of experimental design. You can see how we’re going from very strong priors, which are typical of the traditional canon of empirical finance, towards weaker priors, but not zero priors, and that as you move along that continuum, it requires that you bring to bear different experimental tools.
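
As a concrete illustration of that continuum, here is a minimal Python sketch. The data is simulated and every model choice (a linear model with the slope sign imposed, a linear model whose slope is learned from the data, spline models with more or fewer bends) is just a stand-in for the kinds of structures Adam describes, compared on a walk-forward, out-of-sample basis; none of this is ReSolve's actual process.

```python
# Strong-to-weak prior continuum: impose the slope sign, free the slope,
# then allow "bends" via splines, and score each on a walk-forward basis.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import SplineTransformer
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 2500                                  # roughly ten years of daily bars
feature = rng.standard_normal(n)          # e.g. trailing 6-month return, z-scored
next_ret = 0.05 * np.tanh(feature) + rng.standard_normal(n)   # weak, nonlinear signal

def walk_forward_corr(model, x, y, train=1250, step=250):
    """Fit on a rolling window, predict the next block, and return the
    out-of-sample correlation between forecasts and realized returns."""
    preds, actual = [], []
    for start in range(0, len(x) - train - step + 1, step):
        tr = slice(start, start + train)
        te = slice(start + train, start + train + step)
        model.fit(x[tr].reshape(-1, 1), y[tr])
        preds.append(model.predict(x[te].reshape(-1, 1)))
        actual.append(y[te])
    preds, actual = np.concatenate(preds), np.concatenate(actual)
    return np.corrcoef(preds, actual)[0, 1]

candidates = {
    # strongest prior: positive slope imposed, only the scale is free
    "fixed positive slope": LinearRegression(positive=True),
    # weaker prior: slope sign learned from the data
    "free linear slope": LinearRegression(),
    # weaker still: allow a few bends via a spline basis
    "spline, 3 knots": make_pipeline(SplineTransformer(n_knots=3), LinearRegression()),
    "spline, 8 knots": make_pipeline(SplineTransformer(n_knots=8), LinearRegression()),
}

for name, model in candidates.items():
    print(f"{name:>22s}: OOS corr = {walk_forward_corr(model, feature, next_ret):+.3f}")
```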

Corey Hoffstein  19:56

I like that you keep using the word complexity. I hear a lot of people who push back on the machine learning approach dismiss it potentially as data mining. And in some cases in the high frequency space, I mean, you and I both know traders that are explicitly data mining, using machine learning and having success in the high frequency space; much less so, as you point out, at lower frequencies, daily bars and beyond. But I like this way of framing it as a continuum of simplicity to complexity in the experimental design. For many, simplicity is more robust, and we don’t want to use a complex approach because it’s inherently more fragile. Curious how you think about that trade-off of trying to optimize for complexity, and, given that you are having success with it, why do you think other investors think about it the wrong way?

Adam Butler  20:48

Especially when you first start in investing, and I think many people that are really steeped in that traditional empirical finance canon find it harder to think this way. But the fact is that markets are just not deterministic. If you can’t determine empirically cause-effect relationships, if there’s an enormous amount of randomness, if even where there are relationships the signal to noise ratio is really small, if there’s a lot of conditionality in the relationships (so the relationship between this variable and forecast returns has a positive slope under this condition, but a negative slope, or is not explanatory, under different conditions), or if the variables are non-stationary, then, trying to decide what type of simple model is most robust over time and over different markets, and then acknowledging all of the different subtle parameterizations that happen between identifying an explanatory variable and deploying an investment strategy, I just think it’s naive to think that you can specify a simple, very particular model that is going to be most effective in explaining returns over such a wide variety of different financial conditions, across a wide variety of different markets, at different time horizons. In an environment where there is extremely slow learning, where it’s dominated by randomness, and where it’s virtually impossible to identify and make decisions about cause-effect relationships, the best approach is to identify a very large number of independent agents. In our case, our agents are models. Think about Millennium, for example, as a hedge fund, where they hire a bunch of independent agents, which are just independent traders, hundreds, perhaps thousands of them, all trading their own independent strategies. Some of them are systematic, some of them are discretionary, whatever, but they’re all trading based on different information using different models. Whether it’s the Millennium approach, hiring a bunch of different traders, or more of a quant approach, where you’re building a large number of independent models, each of those models is going to make a forecast, and then you’re going to take an average of the forecasts across all of these models, which is like having all of these independent agents. Well, you’ve got a much better chance of converging on an optimal forecast if you’re drawing on information from a very wide variety of models at different points on that strong versus weak prior spectrum, different points on that simple to complex spectrum, drawing from different sources of information, forecasting over a variety of different horizons, employing a variety of different experimental designs, employing a variety of different types of model creation methods, on a wide variety of different instruments. To me, that’s by far the best approach to ensure that your forecasts are going to converge, in general, on something that is generally correct. Whereas when you try to identify the best model, or even just a model that is as good as any other model (which is another kind of argument that people who embrace simplicity make), then you run a very large risk of happening to pick the model that was specifically wrong. The method that I described tries to err on the side of converging on a solution that is generally correct, and minimizing the risk of converging on a solution that is specifically wrong.
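
A toy illustration of the "generally correct rather than specifically wrong" point: with purely hypothetical, independent forecast errors, averaging many models shrinks the forecast error roughly with the square root of the number of models. Everything here is simulated for illustration.

```python
# Why averaging many diverse, weakly correlated forecasts beats picking one model.
import numpy as np

rng = np.random.default_rng(1)
true_ret = 0.02                                  # the (unknown) next-period return
n_models, n_trials = 50, 10_000

# each model's forecast = truth + its own independent error
errors = rng.normal(0.0, 0.10, size=(n_trials, n_models))
single_model = true_ret + errors[:, 0]           # rely on one model
ensemble = true_ret + errors.mean(axis=1)        # average all 50 models' forecasts

print("single-model forecast RMSE :", np.sqrt(np.mean((single_model - true_ret) ** 2)))
print("50-model ensemble RMSE     :", np.sqrt(np.mean((ensemble - true_ret) ** 2)))
# with independent errors the ensemble RMSE falls by roughly sqrt(50)
```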

Corey Hoffstein  24:28

One of the phrases you’ve used a number of times so far in this episode is experimental design. I was hoping we could put a pause on the conversation and have you explain what you mean by experimental design and discuss a little bit of the continuum that you see in potential experimental designs?

Adam Butler  24:46

Well, let’s start with the way that most empiricists that are published in finance journals approach experimenting with financial data. There’s some sort of theory. Maybe it’s based on an observation about barriers to arbitrage; maybe they’re regulatory or structural, in terms of the way investment organizations are designed or oriented. Or maybe there’s a risk factor that needs to be priced and is not priced in previous models. Whatever it is, there is some sort of inefficiency or unpriced risk factor that’s identified. And then the researchers hypothesize that this risk factor should explain some portion of returns that are not already explained by other risk factors or other inefficiencies or anomalies. And based on this intuition, they specify a hypothesis test. And to do that you need to go from, I believe, for example, that equity long only mutual fund managers are compensated on information ratio, and not on absolute performance, so that they pay special attention to tracking error over volatility. If they’re motivated by tracking error, it stands to reason that if they’re taking active bets against their benchmark in certain positions, they may want to take offsetting positions as well in the portfolio that minimize their tracking error with the benchmark. If that’s true, maybe they want to own a series of high beta positions, assuming they’re leverage constrained, that minimizes their tracking error to the benchmark. If that’s true, maybe high beta securities are overpriced relative to low beta securities, and if you go short high beta and long low beta, you can earn a premium. So now you’re converging on a hypothesis test. Maybe you’ve got to define beta. So how are you defining beta? You’ve got to define a lookback horizon for covariance, so you define that. How are you going to construct the portfolios? Are you going to take the top 10% of beta against the bottom 10% of beta, the top 20%, the top third, the top half? How often are you going to rebalance it, etc.? All of these design decisions need to be factored into the hypothesis test, but they rarely are. Typically, the researchers specify this factor definition, so we’re going to be sorting on beta. Then they’re defining the beta, but they’re going to experiment with a number of different definitions of beta, maybe beta over the past three years, or beta over the past one year or five years. They’re going to try decile sorts, they’re going to try different investment universes. So we’re only going to use the Russell 1000, because you want them to be liquid enough so that it can be a practical investment strategy. Maybe we’re going to run it separately on high cap weighted stocks and, in another completely distinct set, on low cap weighted stocks, and we’re just going to average the results from both of those. There are so many different design decisions that go into this. The researchers are experimenting with all of these different parameterizations, none of which, for the most part, goes into the final journal article. What goes into the final journal article purports to be a simple, parsimonious set of decisions that were made to express this particular hypothesis. But it’s not; there were dozens, if not hundreds, of different tweaks to this model that occurred beneath the surface that were not reported. You’ve got these subtle multiple hypothesis testing challenges that go into even traditional types of investigative frameworks that are then peer reviewed and go into traditional finance journals.
The data science best practices are that you keep track of all of these different decisions. And there are experimental frameworks to determine if you have any skill in selecting things like whether you should be selecting from the cap weighted universe, the large cap universe, the small cap universe, or all caps; the degree of concentration of the portfolio; how often you should rebalance, which comes down to how effectively you can forecast over different investment horizons. All of these are types of decisions that empiricists in the traditional canon leave behind on the cutting room floor. All of these are incorporated: you determine, using in sample and out of sample test techniques, which ones you have skill in making decisions about, and where you don’t have skill, you end up determining the amount of variance that occurs with different parameterizations of these things. If there’s a lot of variance between selecting the top and bottom decile and selecting the top and bottom third of the data, alarm bells should probably be going off. How do you manage that? Do you use all these different types of specifications? Which is more parsimonious, top decile or top half? These are all decisions that empiricists in the traditional canon take for granted, and that practitioners with a data science orientation account for explicitly, both in terms of acknowledging the potential dispersion of the outcomes of their data and setting expectations for out of sample results, and in decision making and designing the ensembles of models that they eventually deploy.
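
A minimal sketch of that bookkeeping practice, with made-up design choices and a placeholder backtest function: enumerate every specification explicitly, score each one, and examine the dispersion across them, rather than reporting a single hand-picked result.

```python
# Track every design decision and look at the dispersion across parameterizations.
from itertools import product
import numpy as np

rng = np.random.default_rng(2)

design_grid = {
    "universe": ["large_cap", "small_cap", "all_cap"],
    "beta_lookback_days": [252, 756, 1260],
    "sort_breadth": ["decile", "tercile", "half"],
    "rebalance": ["monthly", "quarterly"],
}

def backtest_sharpe(universe, beta_lookback_days, sort_breadth, rebalance):
    """Placeholder for a real walk-forward backtest of one parameterization.
    It ignores its arguments and returns noise so the bookkeeping pattern is visible."""
    return rng.normal(0.3, 0.2)

results = []
for combo in product(*design_grid.values()):
    params = dict(zip(design_grid.keys(), combo))
    results.append({**params, "oos_sharpe": backtest_sharpe(**params)})

sharpes = np.array([r["oos_sharpe"] for r in results])
print(f"{len(results)} parameterizations tested")
print(f"median OOS Sharpe {np.median(sharpes):.2f}, "
      f"inter-quartile range {np.percentile(sharpes, 75) - np.percentile(sharpes, 25):.2f}")
# wide dispersion across specifications is the alarm bell described above
```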

Corey Hoffstein  30:10

I absolutely love that way of thinking, of which of these designs you have a potential edge in, where in the information you have some sort of accuracy in your forecasts, versus those where you don’t. I was hoping to just tease it out a little bit further, and maybe we can stick with a particular example, like forecast horizon itself, just to make sure I’m fully following and to try to create a little bit more transparency for listeners. What you’re talking about: holding all other parameters, or design specifications, constant, we look at one variable, like forecast horizon, and we try to determine both over which horizon we have the greatest forecast accuracy as well as the variance of those horizons in which we have forecast accuracy. Can you explain a little bit further, given that information, what the implications are? Let’s say we have high accuracy with low variance, or we have medium accuracy with high variance. How does that ultimately change the implementation when it comes to portfolio construction?

Adam Butler  31:11

It’s more of a question of which forecast horizons we appear to have some edge in our ability to forecast, and typically it is a range. You’re holding everything else constant, and you’re using these models to forecast out of sample over different forecast horizons. So typically, what you’ll see is that there’s a range, netting out costs. Keep in mind, when you’re forecasting over a one day horizon, typically your turnover is going to be a lot higher than if you’re forecasting over a five day horizon or a 20 day horizon. And it’s not just, is real forecast accuracy higher, but can you capture that edge after netting out expected trading costs? That factors into it. But notwithstanding that, you’re observing a wide variety of different forecast horizons, and you’re saying, we can’t really distinguish between the ability to forecast over one day, two days, three days, four, five days, but we can say it doesn’t look like there’s any forecast edge out beyond a certain forecast period. We’ll just take all models over the forecast horizons where we appear to have an edge. And then, as we’re setting expectations, we’re using all of the information from the variance that we observe in each of those forecast horizons to expand the cone of potential expectation that we want to see as we observe this strategy in live trading. So that cone of potential expectation expands and contracts as you learn more about the variables that explain future returns, and all of the hyperparameters that are required in order to translate a traditional kind of empirical finance hypothesis test into a final investment strategy that you deploy in live trading.
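
Here is a small, self-contained sketch of scanning forecast horizons on simulated data: gross edge, turnover, and edge net of an assumed trading cost, so that shorter horizons have to pay for their extra trading. The cost number and signal strength are invented purely for illustration.

```python
# Scan holding horizons: gross edge vs turnover vs edge net of costs.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
signal = rng.standard_normal(n)
noise = rng.standard_normal(n)
# the signal predicts the NEXT day's return with a small edge
daily_ret = np.concatenate([[0.0], 0.02 * signal[:-1] + noise[1:]])

cost_per_unit_turnover = 0.0005       # illustrative cost in return units
for horizon in (1, 2, 3, 5, 10, 20):
    # rebalance only every `horizon` days: hold the signal's sign from the last rebalance
    held = np.repeat(np.sign(signal)[::horizon], horizon)[:n]
    pnl = held[:-1] * daily_ret[1:]                # today's position earns tomorrow's return
    turnover = np.abs(np.diff(held)).mean()        # average daily position change
    net = pnl.mean() - cost_per_unit_turnover * turnover
    print(f"horizon {horizon:2d}d: gross {pnl.mean():+.5f}/day, "
          f"turnover {turnover:.3f}, net {net:+.5f}/day")
```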

Corey Hoffstein  32:58

Going back to this spectrum of simplicity to complexity, and the varying degrees of complexity which you can introduce into your process, you hinted at something at the beginning, and I want to make sure my interpretation of it is correct: that you still rely on a couple of major categories of signals. So I know, just from talking to you, your taxonomy is relative value, trend, seasonality, and carry. So I interpret your process in many ways as having all of these potential degrees of complexity, but conditioned on the prior that you want to work within these sort of high level signals. First of all, I’m curious why those particular categories, and two, how do you think about potentially introducing new categories, as well as signals within those categories?

Adam Butler  33:49

I would classify this as the hard problem. The question of which explanatory variables you believe in is, I think, the hardest problem in empirical finance, because it’s almost impossible to identify ex post which explanatory variables are true and which explanatory variables are spurious. I personally have strong confidence in the theoretical rationale for why past returns would have explanatory power over future returns, because the vast majority of investors, as they’re making decisions in markets, incorporate the trajectory of past returns into their decision making. For better or worse, investors are looking at past performance, looking at the trajectory of past returns, and they’re using that either as a signaling mechanism (so the crowd believes that something is happening, and I’m going to place at least some confidence or faith in the average of the crowd’s opinions to inform my own views), or it just feels good to invest in something that’s going up, or, for some who are wired as more contrarian, it feels good to buy something that has gone down a lot. I have a fundamental belief, knowing how I feel as I look at time series in markets, and the behavior that I’ve observed, my own psychology background, etc., that gives me strong confidence that there’s information in past returns. I hesitate to say that we necessarily employ past returns in a traditional trend framework, so I might call this more time series than trend.

Corey Hoffstein  35:19

Adam is using air quotes there with trend and time series.

Adam Butler  35:23

Yeah, exactly. Because some of those relationships are counter trend, some of them are nonlinear. It’s more so relying on time series. But I have strong conviction that there are strong structural and regulatory and behavioral arguments for there to be seasonality effects in markets. Banks and institutions have a seasonality to how they need to shore up their regulatory capital, there are accounting seasonalities, there are calendar effects related to the behavior of people in institutions and managers and their incentives, etc., that drive behavior. The term structure of futures carries a lot of information because it describes the urgency and motivations of different kinds of investors at different points in the future. So there are reasons to have conviction that these should be explanatory. And it’s nice that we observe that they’re explanatory at all levels of complexity. They’re explanatory using simple models, and they’re explanatory using models of different levels of complexity. Again, this is just on the strong prior to weak prior spectrum. These are variables in which we have strong prior beliefs and high confidence and conviction. So how do you introduce new sources of information? Well, you can introduce them as new direct explanatory variables, and you’re going to build pure models that are informed explicitly and only by those variables. You can also create variables that act as conditions for other explanatory features. For example, if you’re in crude oil and crude oil has a positive return over the past six months, does that information carry the same forecast over the next week when inventories of crude oil are higher than average versus below average, when they’ve been rising over the last month versus when they’ve been falling over the last month, when the Commitment of Traders in crude oil shows that speculators have high long speculative positioning or extremely low speculative positioning? These types of conditional relationships are also interesting to explore. So you’ve got these direct ways to use explanatory variables and indirect ways to use explanatory variables. We prefer to use variables which we have fundamental reasons to believe should be explanatory under certain conditions as context, rather than use them as direct variables. But the fact is, in the end, the variables that you choose to be explanatory derive from a set of theories and observations and intuitions. In our experience, it’s very hard to just throw all of the information you can at your model making process and let the empirical relationships alone tell you which features are explanatory and which ones aren’t.
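
A small sketch of that crude-oil style conditioning on synthetic data: measure whether a trend feature forecasts next-week returns differently when an inventory feature is above versus below its average. The column names and the embedded relationship are made up for illustration only.

```python
# Use a feature as *context* for another feature rather than as a direct signal.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 1500
df = pd.DataFrame({
    "trend_6m": rng.standard_normal(n),        # trailing 6-month return, z-scored
    "inventory_z": rng.standard_normal(n),     # crude inventories vs their seasonal average
})
# synthetic ground truth: the trend signal only "works" when inventories are below average
df["next_week_ret"] = np.where(df["inventory_z"] < 0, 0.10, -0.02) * df["trend_6m"] \
                      + rng.standard_normal(n)

for below_avg, subset in df.groupby(df["inventory_z"] < 0):
    label = "inventories below avg" if below_avg else "inventories above avg"
    edge = subset["trend_6m"].corr(subset["next_week_ret"])
    print(f"{label}: trend vs next-week-return correlation = {edge:+.3f}")
```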

Corey Hoffstein  38:11

So I was having a really interesting conversation with a very large, systematic macro firm, who was talking about their use of machine learning. And it was interesting to me, because it’s in many ways the opposite of your approach, where, again, it seems like you’re conditioning some of your complexity on the existence of the signals. Whereas this firm took the exact opposite approach and said, left unconstrained, machine learning largely rediscovers the signals anyway within the data. So what they wanted to use machine learning for, very explicitly, was finding things outside of these signals. So they were conditioning in an orthogonal way, saying, we only want to search through the data for stuff that is unrelated to relative value, seasonality, carry, and trend. Which might strike most as being a little bit more of a data mining exercise, because the hypothesis of why we would have confidence that these signals should continue to work perhaps isn’t necessarily there. But I’m just generally curious as to your thoughts and feedback on that type of approach versus the way you’re attacking the problem?

Adam Butler  39:15

Well, there are ways to approach adding new variables that explicitly orthogonalize off of existing variables. Certainly one of the approaches that we take, and this is an approach that is taken in a variety of off-the-shelf machine learning methods, is residualization. So you learn a model based on one feature, and then you go back to all the other features, and you learn the model that best explains the residual returns that are not already explained by the feature that you had previously modeled. Let’s say you’ve got the highest conviction in time series. You go through your time series features, you generate the models, you get the residual returns, and then you go to your next set of features. Let’s say it’s term structure, and you allow your term structure models to learn off the returns that the time series models have not already explained. And then for the next set of features, maybe it’s seasonality, you allow the seasonality models to learn off the returns that have not already been explained by the time series models and the futures term structure models. And there’s absolutely merit in this type of learning. And it’s great, because you can start with different variables first: you’ll get slightly different models when you start with trend and residualize down versus when you start with seasonality and residualize down. And I think it is worthwhile seeking to add explanatory variables based on how well they explain residual returns rather than how well they explain the raw returns, because so much of the explanatory power of any individual variable, or many individual variables, is conflated with the explanatory power of other variables, and often they’re conflated with the explanatory power of the variables that everybody already knows about. So that idea of searching for new explanatory variables based on residualization makes a lot of sense to me. All I would say is this: we observe all the time, for example, that if you take literally random features, take 1000 features that are random signals, you’re going to get some subset of them that will exhibit a Sharpe ratio over 10 years in excess of four. You absolutely will. And then you’ll build a model and you’ll apply them out of sample, and you’ll have absolutely zero explanatory power. There are different types of meta features that you can use, besides just the in sample performance, to determine if a model or a feature does have legitimate explanatory power, or may be complementary to the set of other features that you’re using, that I won’t go into. Even taking a Pareto frontier of the best meta features, the explanatory power is extremely low. And so, with that process of adding new features to the data that you don’t already have reasonably strong priors on, now you’re in a situation where you’re adding noise and extra trading. You’re not adding noise in a linear fashion, because of the process; I mean, if you add random features together, you’re only adding noise at the square root of the number of new random features that you’re adding, because of diversification. But you’re still devoting risk budget to features that you have lower conviction on, regardless of what you observe in their in sample performance. You need to proceed in that direction with more extreme caution.
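
A hedged sketch of that residualization sequence, with simple linear models standing in for the real model stacks: the highest-conviction feature family is fit first, and each subsequent family learns only from the returns the earlier ones left unexplained. All features and coefficients are synthetic.

```python
# Sequentially fit each feature family on the residuals of the previous ones.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 2000
time_series_feats = rng.standard_normal((n, 3))   # e.g. trend / reversal transforms
term_struct_feats = rng.standard_normal((n, 2))   # e.g. carry / curve-slope transforms
seasonal_feats = rng.standard_normal((n, 2))      # e.g. calendar dummies
returns = (time_series_feats @ [0.04, 0.02, 0.0]
           + term_struct_feats @ [0.03, 0.0]
           + rng.standard_normal(n))

target = returns.copy()
for name, feats in [("time series", time_series_feats),
                    ("term structure", term_struct_feats),
                    ("seasonality", seasonal_feats)]:
    model = LinearRegression().fit(feats, target)
    explained = model.predict(feats)
    # R^2 is measured against the *current* target, i.e. what earlier families left over
    print(f"{name:>15s}: R^2 on residual target = {model.score(feats, target):.3f}")
    target = target - explained      # the next family learns only what is left
```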

Corey Hoffstein  42:42

I want to spend a little bit of time talking about seasonality and carry specifically, and carry a little bit more, because I know how much you’ve been beating the drum lately about carry perhaps being the ultimate risk premium. So we’ll save that for a little bit later, because I know we’ll dive in deep there. But let’s start with seasonality. There are all sorts of documented seasonality effects with various sorts of explanations. You touched upon a bunch of them, ranging from rebalancing flows to corporate needs for liquidity, institutional effects, seasonality in economic data itself: all sorts of reasons why seasonality might exist. I was hoping you could talk about specifically what you mean by seasonality, and why you think it’s a valuable addition to a managed futures program.

Adam Butler  43:28

We already talked about some of the reasons to believe that there are seasonal effects due to regulatory and structural and organizational design factors and incentives and behavioral effects around different calendar events. And the fact is that we observe an unbelievable amount of patterns in the historical seasonal data. Like, if you stand up an ensemble of seasonal models against models derived from time series data or term structure data, they stand up beautifully, empirically. They’re just as strong and just as sustained as what we observe in these other feature categories. We think about seasonality in terms of stuff like day of the week, month of the year, quarter of the year, day of the month, days till option expiry. You could argue that some of these seasonal effects may require relatively high turnover. But this is something that I haven’t also mentioned: a lot of these effects, when you’re forecasting over one to five to 10 to 20 days, obviously day of the week sounds like, wow, you’re going to be long something today and short something tomorrow; it seems like the turnover would overwhelm, the trading costs would overwhelm, whatever the edge is. An important thing to recognize in these types of strategies is that you’re combining hundreds of different edges. They’re all voting on the position that you want to have today. So while day of the week effects may have extremely high turnover on their own, and may be unattractive to trade as an independent strategy, when you combine them with hundreds of other different strategies that are using different information at different time horizons, and you’re averaging them all, then it’s just that little tiny nudge in one direction or another that all of these different strategies are giving from day to day. And you get this profound trade netting effect that dramatically reduces the turnover across all of the different strategies, which allows you to trade strategies that, untradeable on their own, become highly effective at the margin when you combine them with a wide variety of other diverse strategies. When you’re trading across 80 plus different markets, with all the diversity of different market participants in those markets, and the different regulatory regimes, they answer to different regulators, there are different seasonal effects in terms of how the economy runs, when do we need gasoline, winter wheat versus spring wheat, all these types of effects. You can imagine that a wealth of patterns emerges at different timeframes that the right type of experimental framework can pick up on and utilize as part of a large ensemble.
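
A toy illustration of the trade-netting effect described here: a day-of-week signal flips constantly on its own, but once its vote is averaged with a couple of hundred other (here purely random, illustrative) signals, the combined position barely moves from day to day.

```python
# Trade netting: high-turnover signals become cheap to hold inside a large ensemble.
import numpy as np

rng = np.random.default_rng(6)
n_days, n_other = 1000, 200
day_of_week = np.tile([1, -1, 1, 1, -1], n_days // 5).astype(float)  # crude weekly pattern
other_signals = np.sign(rng.standard_normal((n_other, n_days)))       # other model votes

def turnover(pos):
    """Average absolute day-over-day position change."""
    return np.abs(np.diff(pos)).mean()

combined = (day_of_week + other_signals.sum(axis=0)) / (n_other + 1)  # average of all votes

print("day-of-week signal alone, daily turnover:", round(turnover(day_of_week), 3))
print("combined book of 201 signals, turnover  :", round(turnover(combined), 3))
```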

Corey Hoffstein  46:00

So let’s talk a bit about carry now, because, and I don’t want to put words in your mouth, but you’ve been beating the drum, calling it the ultimate risk premia strategy, I think, is what I heard you call it in your podcast with Antti Ilmanen, and everyone should go listen to that podcast, by the way; it is a fantastic podcast. But I suspect when people think of carry, they’re often thinking of that picking-up-pennies-in-front-of-a-steamroller trade: that horrible, huge left skew that’s always looming out there, that’s going to offset your positive expectancy and throw you back years on your trades. And I don’t think many people would consider that the ultimate risk premia strategy. So the floor is yours. I’d love for you to expand your thoughts, maybe both on how you define carry, because it might not be that traditional interpretation, and why you think allocators should be considering it.

Adam Butler  46:45

I define carry as the return that you expect on an investment if the price doesn’t change. So in securities markets, let’s say equities, that is the dividend yield. In bonds, it’s the coupon; in bond futures, it’s the imputed coupon and the roll yield. In commodity futures, it’s the roll yield. In currencies, it’s the difference between the interest rate that’s paid on a cash deposit in one currency and the interest rate that must be paid as interest on a loan when you’re borrowing in another currency. Traditional carry strategies were deployed exclusively in currency markets. And it was often an emerging market currency with a high yield, due to high levels of inflation or a lack of confidence in the fiscal prudence of the government; that emerging market would have a high yield on its cash, and you’ve got a developed market, often Japan for the last 20 or 30 years, that has had an extremely low and relatively managed yield on its currency. So you can borrow in the lower yielding currency and then invest in this higher yielding currency, knowing that, during times of crisis, these emerging markets (they’ve borrowed in external currencies, or they’ve over-borrowed in domestic currencies) often get into some kind of capital flows crisis, and that currency drops relative to the funding currency, and you end up being offside. And so you get this large left tail in that kind of isolated currency carry. That is one limited facet of carry. But equity investors are investing in equity carry: they’re investing in the dividends that securities are going to pay over time, and they’re capitalizing those dividends back to the present. Bond investors are doing the same thing with coupons. You can apply the same thing we just talked about to currencies. You can also apply it to commodities. Commodities have a term structure; when commodities are in backwardation, where the second or third month futures are trading lower than the current future, those back-month futures get drawn up towards the spot price. That’s the roll yield you get on commodity futures. That all makes sense, except that sometimes the direction of carry in equities or bonds or commodities or currencies inverts. So sometimes the dividend yield in equities is below the yield on cash, in which case why are you taking equity risk when the capitalized value of dividends is below what you would earn on cash? Sometimes the yield curve is inverted. Why are you taking duration risk by going up the curve if you’re getting paid less to own a 10 year bond than you are to own a T-bill? It’s one thing to want to be long commodities when a commodity is in backwardation and you’re expecting to earn positive roll yield, but often commodities are in contango, where you’re expecting to earn negative roll yield. Often the carry relationship in currencies is inconsistent, because the inflation expectations offset what you’re expected to earn on the difference in yields. The difference between carry and traditional long only investing is: yeah, sure, you want to be long only, invested in risk premia, so long as the long version has a positive expected risk premium. If that expected premium is negative, you probably want to be short. You just want to be invested in the direction of the risk premium.
So if the yield curve is inverted, to whatever extent, whatever term in the yield curve is inverted with treasuries, you probably want to be short that term in treasuries. If whatever equity indices are earning an expected dividend yield below that of their domestic treasury bonds or T-bills, maybe you should be short those equity markets and hold cash instead. Same for commodities, etc. It’s taking the risk premium, but in the direction of the risk premium, rather than just assuming that that risk premium is always positive. And of course, as you diversify out across all global equity markets, all liquid global bond markets, all major commodity markets, currencies, etc., and even volatility, you’re now invested in a variety of risk premia that are moving into positive and negative premia territory based on a wide variety of different economic and financial factors. And empirically, that diversified carry approach doesn’t have the procyclicality or the crisis type of profile that the old fashioned, traditional currency carry has. It does have a slightly positive beta to liquidity risk. It does have a slightly positive beta to financial crisis. But it’s not nearly as procyclical as a traditional endowment portfolio. And the long term expected Sharpe ratio is dramatically higher, net of transaction costs, on the order of two to two and a half times what you expect to get from a long only version of these same diversified premia.
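
A minimal sketch of "take the premium in the direction of the premium": compute a simple carry measure per market from illustrative, made-up yields, then go long where it is positive and short where it is negative. The market names and numbers are placeholders, not live data or ReSolve's definitions.

```python
# Simple cross-asset carry: yield earned on the asset minus the funding/cash rate,
# with the position taken in the direction of the resulting carry.
import pandas as pd

markets = pd.DataFrame({
    # equity index: dividend yield vs the local cash rate
    "SPX": {"yield_on_asset": 0.015, "funding_or_cash": 0.045},
    # 10y bond future: imputed coupon plus roll-down vs cash
    "UST10": {"yield_on_asset": 0.048, "funding_or_cash": 0.045},
    # commodity future: annualized roll yield from the curve (negative = contango)
    "CRUDE": {"yield_on_asset": 0.060, "funding_or_cash": 0.000},
    # currency pair: deposit rate earned minus borrow rate paid
    "MXNUSD": {"yield_on_asset": 0.110, "funding_or_cash": 0.053},
}).T

markets["carry"] = markets["yield_on_asset"] - markets["funding_or_cash"]
markets["position"] = markets["carry"].apply(lambda c: "long" if c > 0 else "short")
print(markets)
```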

Corey Hoffstein  51:46

There’s been a lot of great literature published over the last couple of years about conditioning different signals on each other. So, for example, conditioning trend on carry has proven to be one of those ideas that has worked historically, and out of sample, seemingly quite well. We were talking earlier about conditioning the machine learning complexity based upon these high level signals. I’m curious how you think about the potential interaction effects across these high level signals of carry, seasonality, trend, and relative value: the potential benefit, but also the added complexities in the research process that you have to address.

Adam Butler  52:24

We spend a lot of time on this. There is a tremendous opportunity to add value here, but it comes from a counterintuitive direction. The first question to answer is: is there information in the interaction between variables that is not already captured by simply combining information from one variable and the other variable together in a linear way? So, for example, if we were to condition a trend signal on whether the carry is in the same direction, is there any added value to that over simply trading a trend strategy and combining it with a carry strategy, so that you’re taking a position in every market that is the average of the position that would be dictated by the trend strategy and the carry strategy? In most cases, there is no benefit to conditioning; all of the information is captured by simply trading trend as a strategy and trading carry as a strategy and combining them with equal risk. In the right kind of experimental framework, there are absolutely ways to tease out interaction effects. But the approach to that is different than simply creating two dimensional or three dimensional models and hoping that the two dimensional or three dimensional models provide extra information after accounting for just combining the one dimensional models from both of those different individual features.
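
A sketch of that test on synthetic data where, by construction, the trend and carry effects are purely additive: conditioning trend on carry agreeing adds nothing over simply averaging the two strategies. The signals, returns, and scaling are all invented for illustration.

```python
# Does conditioning trend on carry beat a simple linear combination of the two?
import numpy as np

rng = np.random.default_rng(7)
n = 5000
trend = np.sign(rng.standard_normal(n))
carry = np.sign(rng.standard_normal(n))
ret = 0.02 * trend + 0.02 * carry + rng.standard_normal(n)   # purely additive effects

linear_combo = 0.5 * (trend + carry)                  # trade both, equal risk
conditioned = np.where(trend == carry, trend, 0.0)    # trade trend only when carry agrees

for name, pos in [("linear combination", linear_combo),
                  ("trend conditioned on carry", conditioned)]:
    pnl = pos * ret
    sharpe = pnl.mean() / pnl.std() * np.sqrt(252)
    print(f"{name:>27s}: annualized Sharpe ~ {sharpe:.2f}")
```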

Corey Hoffstein  53:57

So I know that this has been a multi-year evolutionary process for you and the team, both the research and the implementation, and it shouldn’t go understated how difficult the implementation of this sort of research process is. And it’s a continuing, ongoing evolution. But that said, I know you implemented a number of changes in your mandates towards the end of 2020. If I were to look at the excess returns generated by a couple of the mandates that you manage, they’ve actually ended up looking a lot like the SocGen CTA Trend Index out of sample. So in other words, despite having value, carry, seasonality, and trend signals, it has actually ended up looking like an ensemble of trend followers. Let me take a step back: if I’m in the seat of an allocator, and for full disclosure to listeners, I am an allocator to one of ReSolve’s private funds, how do I interpret this result, knowing you have all these signals, but at the end of the day it sort of just looks like trend following anyway?

Adam Butler  54:53

It’s a really great question, and I think for an allocator it’s a very hard problem. Since October 2012, many of our pure alpha strategies looked like trend following; let’s call it a point eight correlation with the SocGen Trend Index for a good chunk of that time. Over the last couple of months, it’s diverged substantially, and so that correlation has come down a lot, now closer to the long term historical expectation. But for an allocator, the reality is, you cannot derive expectations about a strategy as an allocator by looking at the live returns. You may be able to derive some intuition about risk, but without having an extremely deep understanding of what exactly is going on under the hood, there’s only so much that you can learn or conclude about how to think about the long term character of a strategy from observing its live performance. It’s perfectly normal, for example, for seasonality effects, or relative value effects, to align with trend effects for months and quarters at a time. This happened to be one of those times. During the period when our alpha strategies looked most like the trend index, our best performing set of models was actually from the seasonality feature family. Our trend models, or time series models, also performed well, but the seasonality models were just off the charts good. If two different strategies are performing well, in the same way, at the same time, how are you going to distinguish one from the other? You just can’t from the empirical results. It’s helpful to be able to go back and have an experimental framework that allows you to observe theoretical out of sample performance over a much longer historical period, and then you can see the distribution of the sensitivity of the strategy, or the correlation of the strategy, to stuff like equity indices, bond indices, other types of factors, and strategies like trend. If you know that the long term average correlation with the trend index is point five or point four, you’ve spent the last year at point eight, and the historical distribution is somewhere between point two and point nine over any rolling 12 month period, you probably shouldn’t be that concerned that it has exhibited a point eight correlation over the last year; that’s well within expectations. And then, as you talk to the manager, and you’re able to dig one step deeper and say, well, here’s the performance of the underlying model stacks, and you can see that this other model stack has been our best performer, and it happens that we were also deriving strong performance from the time series stack, that should allow you to get some extra comfort. But again, I see this as a very hard problem for allocators. And the only real guidance I would offer is: don’t pay much attention to the live returns in seeking this kind of information. Most of the important information is going to come from the conversations with the managers about what’s going on under the hood.
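
A small sketch of the kind of context an allocator could ask for: the backtested distribution of rolling 12-month correlations to a trend index, against which the most recent reading is judged. The return series here are random placeholders standing in for the real streams.

```python
# Judge the latest rolling correlation against its full historical distribution.
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
n_months = 240
trend_index = pd.Series(rng.standard_normal(n_months))
# strategy built to have ~0.5 long-run correlation with the trend index
strategy = 0.5 * trend_index + np.sqrt(1 - 0.5 ** 2) * pd.Series(rng.standard_normal(n_months))

rolling_corr = strategy.rolling(12).corr(trend_index).dropna()

print(f"long-run average 12m correlation : {rolling_corr.mean():+.2f}")
print(f"5th-95th percentile range        : {rolling_corr.quantile(0.05):+.2f} "
      f"to {rolling_corr.quantile(0.95):+.2f}")
print(f"latest 12m reading               : {rolling_corr.iloc[-1]:+.2f}")
```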

Corey Hoffstein  57:57

I think this scenario presents a really interesting problem for asset managers, because you have this situation where multiple signals can coincide. You would know this better than I do, but my interpretation of the last 18 months is that seasonality, carry, and trend signals largely coincided across the same markets. And you have two potentially conflicting ideas. One, you’re losing diversification: if all of these signals are pushing into the same markets in the same direction, you’re losing diversification. But on the other hand, there’s potentially conditional information about expected returns in those markets when all of the signals are pushing into the same markets at the same time. And so you might have two camps. One camp might say, we’ve lost diversification, and therefore we need to de-risk, and the other camp might say, no, there’s conditional information here, the forward information ratio is actually higher, and therefore we can press risk. I’m curious as to how you interpret the scenario.

Adam Butler  58:54

We approach it by examining the information that is shared by the underlying models, rather than the current bets that are being expressed in the portfolio. So suppose you’ve got sufficient diversity in the sources of information along all of the different axes that we’ve already talked about in terms of how you can diversify within a portfolio, and your observation is that, empirically, the underlying models have almost no correlation with one another, both in terms of the models forecasting an individual market, and in terms of how the average of all models forecasting a market is correlated with the models on other markets. If all of those models are largely uncorrelated, then when you have a confluence of edges that are all pointing towards concentration in a certain market or sector, what that’s telling you is that you have a large number of independent sources all converging on high conviction, so you probably shouldn’t take steps to dilute the opinions arriving from all of those diverse sources of information and all of those diverse models. Absolutely, from a risk management standpoint, it behooves a manager to set constraints on portfolio concentration at the market level, at the sector level, and perhaps at other levels, to apply robust optimization, and to set VaR or expected-shortfall constraints. But given the strategy design that I have described, you want to allow concentration to the greatest extent possible within your risk constraints.
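As a rough sketch of that diagnostic, assuming you can export each underlying model’s daily forecasts for a market as columns of a DataFrame (a hypothetical setup, not ReSolve’s research code), the average pairwise correlation tells you whether agreement is many independent votes or one bet counted several times.

```python
import numpy as np
import pandas as pd

def forecast_diversity(forecasts: pd.DataFrame) -> pd.Series:
    """Summarize how correlated a set of model forecasts are.

    forecasts: rows = dates, columns = one forecast series per model
               (e.g. signed position targets for a single market).
    """
    corr = forecasts.corr()
    n = corr.shape[0]
    # Average pairwise correlation, excluding the diagonal.
    avg_off_diag = (corr.values.sum() - n) / (n * (n - 1))
    return pd.Series({
        "n_models": n,
        "avg_pairwise_corr": avg_off_diag,
        "max_pairwise_corr": corr.where(~np.eye(n, dtype=bool)).max().max(),
    })

# If avg_pairwise_corr is near zero, a confluence of bullish forecasts reads as
# many independent sources converging on conviction rather than a single bet.
```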

Corey Hoffstein  1:00:38

Staying on the thread of risk management for a moment here, and push back on me if my interpretation is wrong, but the way I think about the evolution of your process is that you are, in many ways, leaning more heavily into the historical distribution of realized returns in developing the quantitative models that you have. One pushback on that approach would be that it opens you up to potential outliers, or conditions that the model hasn’t seen before. How do you think about that, and about managing that sort of risk?

Adam Butler  1:01:09

One important step is having an experimental framework that allows you to observe how models built in one epoch generalize to other epochs that are defined by very different economic and financial characteristics, and obviously we’ve done a huge amount of work in that domain. It still doesn’t guard against true outlier events, like an earthquake that carves California off into the ocean, or a nuclear strike. We can conjure a wide variety of different instantaneous shocks where the participants in the markets, the markets themselves, and other economic factors can’t give you any guidance on what to expect over your forecast horizons. For that reason, I think it’s prudent, and we do internally impose hard constraints at the individual market level, at the individual sector level, in some cases at the asset class level, and in some cases at the risk-on/risk-off level, depending on the product category, to just acknowledge that if this entire sector gets cut in half tomorrow, or goes to zero, then the portfolio won’t be wiped out, we can live to see another day, and we won’t be in substantially worse shape than every other conceivable strategy or portfolio that an investor might also have considered investing in. So this is just about being responsible, acknowledging that a lot more things could happen than have happened in the historical sample, and doing your best to limit the potential catastrophic impact of those types of outcomes.
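A toy illustration of hard caps at the market and sector level follows; the symbols, sector labels, and limits are hypothetical, and in practice such constraints would sit inside the optimizer rather than as a post-hoc clip.

```python
import pandas as pd

def apply_hard_caps(weights: pd.Series, sector_map: dict,
                    market_cap: float = 0.10, sector_cap: float = 0.25) -> pd.Series:
    """Clip weights so no single market or sector can sink the portfolio.

    weights: signed portfolio weights indexed by market symbol.
    sector_map: market symbol -> sector label.
    """
    # Cap each individual market's absolute weight.
    capped = weights.clip(lower=-market_cap, upper=market_cap)

    # Scale down any sector whose gross exposure exceeds the sector cap.
    sectors = pd.Series(sector_map)
    for sector in sectors.unique():
        members = capped.index.intersection(sectors[sectors == sector].index)
        gross = capped.loc[members].abs().sum()
        if gross > sector_cap:
            capped.loc[members] *= sector_cap / gross
    return capped

# Hypothetical usage:
# caps = apply_hard_caps(pd.Series({"CL": 0.18, "NG": 0.15, "ES": 0.05}),
#                        {"CL": "energy", "NG": "energy", "ES": "equity"})
```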

Corey Hoffstein  1:02:41

Can you talk a little bit about the actual operational burden, and what it’s been like to move towards a full-stack machine learning approach, how long it took, and lessons learned along the way? I know it’s not as simple as just opening up Python and importing TensorFlow, partially because you guys are an R shop anyway, but I’d love to discuss the experience for you.

Adam Butler  1:03:01

We started out by building a really simple sandbox, building models where you could look under the hood very transparently and see what was going on. Our models are still fully transparent; you can absolutely look at exactly the response functions that are motivating individual models, individual positions, and individual markets. But over time, as you employ techniques like boosting and bagging and residualization, the technical design of the underlying stack needs to evolve. You need to involve devs with different types of expertise, and you need to build a much more robust execution engine, monitoring engine, and reconciliation engine, because once you combine ensembles of models, and especially nonlinear models, in many cases you can’t just look at the behavior of the market and the term structure and intuit what your position is. Oftentimes the positioning in the portfolio is highly counterintuitive. It’s not where I would prefer to be based on my own macroeconomic sensibilities, or it doesn’t make sense based on the past three-, six-, and twelve-month trend in the market, or what you would traditionally expect from the current futures term structure, etc. You need to build an operational and technical framework that is maybe an order of magnitude more reliable, with checks and balances to make sure that the models are operating correctly, that they’re translating to the trade blotter correctly, that that’s getting translated to the traders correctly, that the trades are being reconciled correctly, and that everything’s being accounted for correctly. So we’ve had to build out our tech team pretty substantially with DevOps and data-oriented people, and we’ve deployed a lot of stuff in Rust at the execution level, with a messaging layer, Arrow, etc. You can imagine the evolution in technical skills, to programming a full-stack, enterprise-level technical backbone, versus five years ago when you were kind of using a set of scripts, or seven years ago using scripts and web scraping. Your data architecture now is airtight. There have been several complete overhauls of the full soup-to-nuts tech stack, requiring the addition of new types of technical and research experts at each layer, to migrate to the current level of operational robustness. And we see a clear path in 18 to 24 months towards intraday models and intraday trading, which is going to require another level of rigor in the tech stack. So it’s kind of a continuous process: moving from running relatively high-turnover strategies on a few tens of millions, to a few hundreds of millions, to the billions, requires different levels of tech stack and execution. We’ve already grown dramatically as a firm. We’re at 600 million, we’re onboarding another couple hundred million, and by the end of the year we expect to be in the billion-dollar range. It just requires an evolution of operational and executional robustness as you keep adding size and complexity to your strategies.
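For a flavor of why ensemble positions are hard to eyeball, here is a generic bagging sketch: many models fit on bootstrap resamples, with the position taken from the averaged view. This is a simplified illustration under assumed inputs and hypothetical parameter choices, not the actual model stack.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.utils import resample

def bagged_position(X: np.ndarray, y: np.ndarray, x_today: np.ndarray,
                    n_models: int = 50, seed: int = 0) -> float:
    """Average the forecasts of many models fit on bootstrap resamples.

    X: feature matrix (rows = days), y: next-day returns, x_today: today's features.
    Returns a signed position in [-1, 1] via a squashed confidence score.
    """
    rng = np.random.RandomState(seed)
    forecasts = []
    for _ in range(n_models):
        Xb, yb = resample(X, y, random_state=rng)       # bootstrap resample
        model = Ridge(alpha=1.0).fit(Xb, yb)            # one weak learner per sample
        forecasts.append(model.predict(x_today.reshape(1, -1))[0])
    # The ensemble's net position need not look like any single model's view,
    # nor like the naive read of trend or term structure.
    return float(np.tanh(np.mean(forecasts) / (np.std(forecasts) + 1e-9)))
```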

Corey Hoffstein  1:06:19

I wanted to ask you a little bit about the future. You mentioned that you’re thinking about intraday signals and the changes required. What is next? Is it a focus on new models? Is it a focus on new datasets, or potentially even non-traditional, non-forward, non-futures markets, thinking about OTC markets or things like inflation swaps that could potentially be introduced, that aren’t captured by most traditional CTAs but where there’s certainly potential diversification?

Adam Butler  1:06:47

There’s, I think, a huge opportunity in the feature space: using market-derived data like bid-ask spreads, the depth of the order book, information derived from the options surface, and dealer positioning; honestly, I could go on and on. And that’s just in the equity and fixed income space. There’s a huge number of alternative types of features based on real-time flows across different pipelines and different flow stations for natural gas, and for different grades of crude oil in different regions, or using planting information in grains. So there’s a whole feature-space direction. Then there are just so many different ways to create synthetic assets, based on trading one asset long against another short, trading baskets of longs against shorts, or employing calendar spreads at different points along the futures curve. And then there are different ways of building models. Our head of quant, Andrew Butler, just finished his PhD, and a huge number of potential directions came out of that: machine learning at the total portfolio level, using all of the different features from all of the different markets to inform the optimal positioning not just of each individual market on its own, but of the total portfolio; conditional features at the portfolio level. You’re in the best possible situation as a firm if you’ve got far more prospective research directions than you have the resources to explore. Hopefully there’s this nice tension where you’re continuing to build out the research team in proportion to the new directions that you have from a research perspective, you’re not going too quickly, you’re prototyping along the way, you’re properly prioritizing the most prospective projects, assigning them appropriately to existing resources, and deciding where we need to bring on new resources. But we’re definitely in a situation where we’ve already prototyped a handful of extremely promising directions that we’re just continuing to deploy resources to, to flesh out and operationalize. So it’s a really exciting time.
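To illustrate the synthetic-asset idea, here is a minimal sketch of building a long-short spread and a calendar spread return series from existing price data; the function names and inputs are hypothetical, and real implementations would handle notional scaling, roll dates, and costs.

```python
import pandas as pd

def long_short_spread(long_prices: pd.Series, short_prices: pd.Series) -> pd.Series:
    """Return series of a synthetic asset long one market and short another,
    with both legs held at equal (unit) notional."""
    return (long_prices.pct_change() - short_prices.pct_change()).dropna()

def calendar_spread(front: pd.Series, back: pd.Series) -> pd.Series:
    """Synthetic asset long the front contract and short a deferred contract
    on the same underlying, capturing moves in the shape of the curve."""
    return (front.pct_change() - back.pct_change()).dropna()

# Either spread can then be fed to trend, carry, or seasonality models
# exactly as if it were a primitive market.
```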

Corey Hoffstein  1:08:53

For the last couple of questions here, I want to take the opportunity to step back, because as much as you are an expert in this space, there are other things I know you have passionate thoughts about. One of those areas that you’ve long been a proponent of is risk parity, and you’ve written some really wonderful articles over the years, both exploring risk parity from a theoretical perspective and covering all the practical differences that emerge from different risk parity implementations. The question I receive all the time, and the question I want to pose to you, is: how should risk parity investors allocate to commodities? More specifically, if I can, you and I maybe have a slight difference in our fundamental belief around, say, a commodity risk premium. We might both agree it’s a little more tenuous than, say, the equity risk premium or the bond risk premium. But given the potential diversification of commodities, are investors better off with a well-diversified commodity sleeve or, say, an actively managed futures program?

Adam Butler  1:09:51

Why choose? There are mathematical reasons to believe that an optimally diversified and rebalanced commodity portfolio can deliver a rebalancing premium over time; some of our research suggests that it may be on the order of the duration premium or the global equity risk premium on a risk-adjusted basis. But the point is, why stop there? Risk parity often gets lumped into this traditional asset allocation framework; for my sins, it’s often applied as an equity-bond asset allocation strategy, basically 80% bonds and 20% stocks, which is just completely misguided and completely jumps the shark on the theoretical background for risk parity, which requires a sleeve that responds positively to inflation. The whole idea of risk parity is to find as many diverse sources of premia, or edge, or independent risk and return as possible. To the extent that you don’t have strong views about whether the equity premium is stronger than the bond premium, which is stronger than the commodity rebalancing premium, which is stronger than the trend premium, which is stronger than the value premium, or any of the different edges that we deploy that are nonlinear and informed by weak priors, the point is: if you don’t have a strong prior about which of them is stronger than the other on a risk-adjusted basis, you want to hold all of them with an equal risk budget. If you do have some idea that commodities are more correlated with CTAs, maybe you don’t want to hold equal risk weight in commodities and CTAs; rather, you want to acknowledge that correlation in the optimization. I don’t think you want to lean too heavily on those correlations, but where they’re a bit more obvious, I think you may want to make adjustments. But in general, the idea is to find as many diverse sources of return as possible, and combine them in a portfolio with approximately equal risk budgets. If you do that, you’re maximizing the probability of having a positive outcome over whatever your investment horizon is. I don’t know if I skirted your question there. I like the idea of having some risk budget allocated to diversified, long-only commodities. There’s a macroeconomic reason for that, and the last 18 months have kind of been a good case study; there’s a mathematical reason for it with the rebalancing premium. But I think that commodities are also enormously useful in a carry framework, a trend framework, and a seasonality framework. We should be applying all of these different frameworks together, along with all of the other ones that you have some sort of confidence in and that are different from all of the other sleeves in the portfolio, and we should have the ability to hold them in approximately equal risk budgets, so that they all have the opportunity to express their different personalities and characteristics over time and maximize the efficiency of the portfolio.
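A compact sketch of the equal-risk-budget idea: given a covariance matrix of sleeve returns, iterate toward weights where every sleeve contributes roughly the same risk. The damped fixed-point update below is a simple heuristic chosen for illustration, not the optimization ReSolve actually runs, and the example numbers are hypothetical.

```python
import numpy as np

def equal_risk_budget_weights(cov: np.ndarray, n_iter: int = 500) -> np.ndarray:
    """Approximate equal-risk-contribution weights (long-only, summing to 1)
    via a damped multiplicative fixed-point iteration."""
    n = cov.shape[0]
    w = np.ones(n) / n
    for _ in range(n_iter):
        marginal = cov @ w            # marginal risk of each sleeve
        contrib = w * marginal        # each sleeve's contribution to variance
        # Shrink sleeves contributing more than average, grow the rest.
        w *= np.sqrt(contrib.mean() / contrib)
        w /= w.sum()
    return w

# Hypothetical example: three sleeves with 10% vol each, where the first two
# are 0.6 correlated; the uncorrelated third sleeve earns a larger weight.
vols = np.array([0.10, 0.10, 0.10])
corr = np.array([[1.0, 0.6, 0.0],
                 [0.6, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
cov = np.outer(vols, vols) * corr
print(equal_risk_budget_weights(cov).round(3))
```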

Corey Hoffstein  1:12:47

My concern is that if we start talking about the rebalancing premium, this podcast will go another two hours, only for us to end up in a place where we both just basically agree with each other.

Adam Butler  1:12:56

I agree. Absolutely. Yes.

Corey Hoffstein  1:12:58

But let me pose a slightly more specific question, because again, this is one I get all the time, where people say: look, I want to build a risk parity portfolio, global equities, global bonds, global commodities, and I want to hold them on an equal risk basis. And then I want to introduce a multi-strategy managed futures program. The question they ask is: how do I think about adding that? Do I have to understand its correlation to the asset classes and treat it as if it’s a fourth asset class? Is it something I think about layering on top? How do I think about that from a portfolio construction perspective, understanding that its correlations to those asset classes are going to fundamentally change over time? Should I be looking at long-term dynamics or short-term positioning? How do we think about risk managing it? What’s the right way to introduce these, for lack of a better phrase, style premia into a risk parity framework?

Adam Butler  1:13:49

Again, it’s a “why choose?” Short-term or long-term covariances? Heuristic expectations? Do I think about this style premia fund as one big alternative bet, or do I think about it as a combination of the three, four, or five independent strategies that are running underneath it, which the fund combines into one large ensemble? I would be inclined to want to look under the hood at all of this stuff and say: I’ve got equities, I’ve got bonds, I’ve got commodities; I’ve got trend, seasonality, carry, and relative value in futures; I’ve got market-neutral value in securities, momentum, profitability, investment, low vol. Call it a style premia fund: is that style premia fund one bet, or is it an agglomeration of four or five different bets? I tend to think of it more as four or five different bets. Take a classic risk parity fund: stocks, bonds, commodities, broadly, long only. Maybe you’ve got one that holds them at static strategic weights, like risk parity, and another one that holds them at dynamic weights, so it is constantly observing the covariance matrix and changing the weights to always maintain optimal risk parity between the constituents, and also scaling the exposure of the total portfolio up or down in response to changes in those covariances and total market volatility. Combine them both. Now you’ve got a procyclical risk parity strategy and a countercyclical risk parity strategy. Sounds fantastic. I have no idea which one of them is going to outperform over the long term; it’s going to depend on whether markets are mostly mean reverting or mostly trending over the time horizon that you’re investing. That combination has three different bets. Now add a systematic macro strategy that allocates to trend, carry, seasonality, and relative value. I just added four bets, and each of those bets should probably have its own equal risk budget. So now I’ve got seven bets. I’m going to add a style premia fund; that’s another five bets, let’s say. Now I’ve got twelve bets in the portfolio. So my equities should probably get 1/12 of the risk budget, bonds 1/12, long-only commodities 1/12, 4/12 to the systematic macro, and 5/12 to the style premia, just as an example. But some people are going to have different levels of confidence. I know lots of people who are diehard trend followers and place a lot of extra confidence in that specific, narrow definition of trend, almost like cultists; they might want to have a much higher allocation to pure trend. At the limit, maybe they only want diversified trend in the portfolio. Others might want diversified classic risk parity combined with just trend; that’s fine. Even that is much better than just owning 60/40 stocks and bonds. Challenges do emerge here, some of which are addressed, thank goodness, through return stacking: the ability to use some of the new products out there that provide leveraged exposure and free up some capital space for you to slide in some of these other alternative exposures. Ideally, you’d have one fund that provides the classic risk parity strategy and all of these other exposures on top, and is able to maximize the capital efficiency of the portfolio and maximize all of the trade netting that comes from holding all of these different securities. Classic risk parity says you want to be long this equity market; carry says you want to reduce your exposure; trend says you want to increase your exposure; seasonality says you want to increase it even further; relative value says you want to attenuate it a little bit.
So you’re trading a small marginal delta each day, rather than having all of these different strategies trading in different products, where they all need to put their full trade on each day and you don’t get to take advantage of these trade-netting opportunities. So there are a lot of reasons to want to hold all of these together, and institutions do trade all of these different styles and exposures in one big portfolio that they oversee, and then they can net trades across it. But for individuals, it’s still highly beneficial to use these products that provide leveraged exposure to some of your core passive betas, like stocks and bonds, and then build some of these alternatives around that using the capital that’s freed up by that type of leverage. And we talked about that, obviously, in our return stacking paper and other types of content that we’ve released.
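A toy sketch of the trade-netting point: several sleeves propose targets in the same market, and only the net marginal delta versus the current position is sent to the trader. The sleeve names and numbers below are made up for illustration.

```python
import pandas as pd

# Hypothetical target weights for one equity index future from each sleeve.
sleeve_targets = pd.Series({
    "risk_parity":     0.20,   # long the market
    "carry":          -0.05,   # wants to trim
    "trend":           0.10,   # wants to add
    "seasonality":     0.05,   # wants to add a bit more
    "relative_value": -0.03,   # wants to attenuate slightly
})

current_position = 0.24
net_target = sleeve_targets.sum()            # 0.27 combined target
marginal_trade = net_target - current_position

print(f"net target {net_target:+.2f}, trade to send {marginal_trade:+.2f}")
# Run separately, each sleeve would have to put on its full position in its own
# account; run together, only a +0.03 marginal delta needs to be traded.
```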

Corey Hoffstein  1:18:03

Well, you’re touching on one of the fundamental challenges that both of us have faced: the fact that most investors are never going to purchase a single-fund solution for the majority of their wealth. The stakeholders of our businesses are typically financial advisors. They’re the ones who are buying the funds and strategies that we manage, and they are looking to build a holistic portfolio. Often, when they are looking at the funds that we manage, they are looking at them for their potential expected contribution to their holistically designed portfolios. But the reality is that performance is evaluated on a line-item basis, frequently by the advisor and often by the end investor, who doesn’t necessarily get the holistic picture. You and I share clients, and I know this has been true for you guys in the past; this is not a trivial problem. I’d love to get your thoughts on how allocators really should be tackling this selection, allocation, and oversight problem, especially in recognition that there is massive asymmetry in understanding how these strategies are managed.

Adam Butler  1:19:09

Let’s talk about the line-item risk. A lot of advisors like to invest in niche strategies, and the idea is that you’re going to own a trend manager, you’re going to own a value manager, you’re going to own a momentum manager, you’re going to own a growth manager, and so on. At a portfolio level this seems intuitive, because if investors are looking at the entire portfolio, these should be zigging and zagging at different times for different reasons, and the overall character of the portfolio through time should be more efficient. Instead, what happens, of course, is that value goes through ten years of underperformance, trend goes through ten years of underperformance, and every year you meet with the client and the client asks, why do we have this dead weight in the portfolio? It’s just detracting from our growth funds, or what have you. One answer is to combine a number of different alternative exposures into one fund, which then has a much lower probability of going through these long multi-year stretches of negative performance. A challenge with that is that now you’re reducing the number of line items that the advisor gets to present to the client. The client perceives, in some cases, that the advisor’s value is somehow related to the number of line items they have introduced into the portfolio. So there’s this kind of tension. But the benefit is that the client doesn’t see these long streaks of underperformance, and they’re more likely to stick with these alternative exposures over time. That’s one thing. Another thing is that, historically, advisors have had to reduce exposure to their core allocations in order to gain exposure to their target alternative allocations. The great thing is that over the last couple of years, more funds have emerged that combine a core portfolio of underlying strategic exposures, for example a constant 50% or 100% allocation to global equities, and then overlay alternative strategies on top of that. Let’s say you have a client who wants 100% in global equities, and you take 20% of that portfolio and allocate it to alts. Well, if that alts strategy also has a 100% allocation to global equities, you’re not losing any of your allocation to global equities, but you are gaining this overlay of a full allocation to alternatives. The same thing applies within multi-strat alternatives. For example, our multi-strats combine a full allocation to global risk parity with a full allocation to a global trend strategy, a full allocation to a global carry strategy, a full allocation to global relative value, and a full allocation to global seasonality. It’s like holding five or six different strategies, all at full weight, but in one portfolio. We’re able to achieve that by combining them all, recognizing that when you combine a number of uncorrelated sources of return in a portfolio you get a really low-risk portfolio, and then using the unbelievably cheap leverage, or funding costs, within the futures market to scale that portfolio up to target a level of expected return and risk that most clients prefer. Innovation within the product space now gives advisors alternatives that allow them to have their cake, in terms of preserving clients’ preferred core allocations, while layering on diverse alternatives in extremely efficient ways.
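The capital-efficiency arithmetic can be written out explicitly. Here is a toy sketch, with hypothetical fund exposures, of how moving 20% of an equity allocation into a fund that stacks 100% equities plus 100% alternatives preserves the full equity exposure while adding an alts overlay.

```python
# Hypothetical notional exposures per dollar invested in each vehicle.
funds = {
    "plain_equity_fund": {"global_equities": 1.0, "alternatives": 0.0},
    "stacked_fund":      {"global_equities": 1.0, "alternatives": 1.0},  # 100/100 stack
}

# Client starts 100% in plain equities, then moves 20% into the stacked fund.
allocation = {"plain_equity_fund": 0.80, "stacked_fund": 0.20}

exposure = {"global_equities": 0.0, "alternatives": 0.0}
for fund, weight in allocation.items():
    for sleeve, notional in funds[fund].items():
        exposure[sleeve] += weight * notional

print(exposure)  # global_equities: 1.0, alternatives: 0.2
# The client keeps the full 100% equity exposure and gains a 20% alts overlay.
```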

Corey Hoffstein  1:22:33

The cat’s out of the bag. I know the season’s already been released, so you know the final question and you’ve gotten to prepare more than other guests. The final question of the season has been: looking back on your career to date, what do you think the luckiest break you’ve had has been?

Adam Butler  1:22:49

There are so many. Meeting my first financial backer was unbelievably fortunate; I met him on my honeymoon. And then maybe I should give my wife credit for being an unbelievable partner. Meeting some of the incredible team members along the way, especially Mike and Rodrigo, and Andrew and Carmen, and Jason and Nick; I can’t mention all the members of the team, but the different members that have joined the group over the years continue to deliver unbelievable value. It’s been an amazing journey. And your journey is milestoned by the people along the way who bring joy to your life, but also who, like a bowling ball, sort of ping you off in different directions of thinking. I’ve had the good fortune of meeting a lot of extremely smart and talented people along the way who have continued to nudge me along highly productive lines of inquiry, which have led to where we are today, and which I think are going to lead to even more amazing things down the road.

Corey Hoffstein  1:23:52

Oh, my friend, congratulations on all the recent success. You know I’m rooting for you. This has been an absolutely fantastic episode that I suspect people will have to listen to more than once to really get all the value out of it. Thank you for joining me.

Adam Butler  1:24:05

Well, next time we need to involve you more in the conversation.

Corey Hoffstein  1:24:09

The whole point is that I don’t have to talk.

Adam Butler  1:24:13

Everyone wants more. Corey, thank you so much for having me, for the great questions, and for giving me room to ramble on at times. My pleasure, my friend.