In this episode I speak with Kevin Cole, CEO and CIO of Campbell & Company.

In the first half of the conversation, we discuss Campbell’s flagship systematic multi-strategy program. We cover topics including trend-following versus multi-strategy, the taxonomy of alpha signals, the concept of edge when you’re running hundreds of models, the process for introducing and sunsetting signals, and risk management.

With such a strong focus on quantitative research, we spend the latter half of the conversation discussing how Campbell organizes its research team and process. Kevin explains how the team is organized and how the agenda is set. He also introduces the management process they’ve adopted called “Pulse,” providing the framework within which the team operates.

Please enjoy my conversation with Kevin Cole.


Corey Hoffstein  00:00

Alright, Kevin, are you ready to go? I am. Let’s do it. All right, 3, 2, 1, let’s go. Hello and welcome, everyone. I’m Corey Hoffstein, and this is Flirting with Models, the podcast that pulls back the curtain to discover the human factor behind the quantitative strategy.

Narrator  00:22

Corey Hoffstein is the co-founder and chief investment officer of Newfound Research. Due to industry regulations, he will not discuss any of Newfound Research’s funds on this podcast. All opinions expressed by podcast participants are solely their own opinion and do not reflect the opinion of Newfound Research. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Newfound Research may maintain positions in securities discussed in this podcast. For more information, visit thinknewfound.com.

Corey Hoffstein  00:53

If you enjoy this podcast, we’d greatly appreciate it if you could leave us a rating or review on your favorite podcast platform, and check out our sponsor this season. It’s, well, it’s me. People ask me all the time, “Corey, what do you actually do?” Well, back in 2008, I co-founded Newfound Research. We’re a quantitative investment and research firm dedicated to helping investors proactively navigate the risks of investing through more holistic diversification. Whether through the funds we manage, the exchange-traded products we power, or the total portfolio solutions we construct, like the Structural Alpha model portfolio series, we offer a variety of solutions to financial advisors and institutions. Check us out at thinknewfound.com. And now, on with the show. In this episode, I speak with Kevin Cole, CEO and CIO of Campbell & Company. In the first half of the conversation, we discuss Campbell’s flagship systematic multi-strategy program. We cover topics including trend-following versus multi-strategy, the taxonomy of alpha signals, the concept of edge when you’re running hundreds of models, and the process for introducing and sunsetting signals. With such a strong focus on quantitative research, we spend the latter half of the conversation discussing how Campbell organizes its research team and process. Kevin explains how the team is organized and how the agenda is set. He also introduces the management process they’ve adopted called Pulse, providing the framework within which the team operates. Please enjoy my conversation with Kevin Cole. Kevin, welcome to Flirting with Models. Really excited for this episode. I think it’s very timely this year with the reemergence of managed futures programs. And suddenly, after a decade, everyone’s interested again. So a very timely episode. Excited to have you here. Thank you for joining me.

Kevin Cole  02:54

Thank you, Corey. It’s great to be here.

Corey Hoffstein  02:56

So let’s start where I always like to start. For folks who may not know you, and may or may not know of Campbell & Company, can you maybe start with a little bit of your background? And I know you’ve been at Campbell, I think, for almost 20 years now. Once you get to Campbell, maybe you can tie that into the evolution of where Campbell was and where it is today.

Kevin Cole  03:15

Going way back, right after I completed my undergrad degree, I was fortunate to get a job at the Federal Reserve Bank of New York. And that was a great place for me to work; it was actually my first exposure to financial markets and thinking about markets from a research perspective. I was there for two years, and then I went on to UC Berkeley to get a PhD. And then, not that long after finishing my PhD, I came to Campbell in 2003. I joined as a researcher, boots on the ground, helping to build out a lot of the strategies. It was an exciting time to see the expansion of some of the strategies at Campbell that we’ll talk about. Fast forward to the summer of 2017: I took over as head of research and ultimately the CIO role. And then I stepped into the CEO role at the end of 2021. Just to give an overview of Campbell, we’re a quant hedge fund. We have about 60 team members based in Baltimore, Maryland. Today we have about three and a half billion dollars under management, and our investors range from large public pensions to individual investors. Our flagship program is Campbell Absolute Return, which we call CAR, so I might refer to CAR during the podcast. We think of it as a systematic multi-strat fund. It’s got about 130 individual models, or alphas, in the portfolio today; it trades in about 130 derivatives markets, and it trades about 5,000 cash equities. We also have a managed futures program that is essentially a carve-out of the futures and derivatives portion of CAR, and we also have a quant equity sustainable program that’s a carve-out of the equities piece of CAR. And then finally, we have some trend-only mandates for investors that are looking for that risk-mitigation profile.

Corey Hoffstein  04:54

So let’s maybe start with the trend side, because I do know Campbell has deep, deep roots as a CTA in that sort of pure-trend style. And lots of CTAs over the last decade started to de-emphasize trend signals in their models. I think a lot of that was born from maybe a bit of frustration in how trend was performing, and trying to adapt to what the market wanted. But Campbell was actually very early on that trend: I think in the early 2000s, you started to really broaden the scope of signals that you employed. As you mentioned, today you guys are using over 100 alpha signals. So can you take me back to maybe the early 2000s? Walk me through the evolution of the mandate and how it got to where it is today.

Kevin Cole  05:38

Campbell was founded back in 1972, and definitely during the early days, the firm was very focused on liquid futures markets. In the early days that was mainly commodities, and then it expanded with the market universe, trading trend- and momentum-type strategies. In those days, I think Campbell was kind of a pioneer in that area. But it was around 2000 that the firm really began to expand our conception of who we were and what we could do, and began to think of ourselves not as just trend followers, but as systematic managers that could apply the same kind of rigorous research and systematic approach to a broader set of markets, asset classes, different investment styles, time horizons, and so on. So when I joined in 2003, that was during the early days of that build-out, and in a lot of those early years, one area of intense focus was quant macro. That was something that I was involved in, building out a diversified set of macro signals in the 2000s. Parallel to that, we started trading quant equity strategies in 2001, and the initial focus there was stat arb. That was actually around the time that we first launched the Absolute Return program, in 2002, to be that mix of managed futures plus cash equities. And then around 2009, that was the next phase, where we started making a push into short-term trading. We can get into more what that means, but for us, that’s intraday data, statistical models, with about a one-week holding period. By the early 2010s, it seemed the firm had put in a great deal of effort in building out this set of diversified strategies, well beyond trend. At that point, we really had, I’d say, four major styles that we still have today, which would be quant macro, short-term, trend, and equity market neutral. But in the early 2010s, most of Campbell’s AUM was still kind of heavily tilted towards trend-heavy managed futures.
And even the core portfolio, in terms of its risk allocation, was heavily tilted towards trend; it was about 60% trend at that point. So we took a look at essentially all of the work we’d done, and what that resulted in, in terms of this diversified set of models, and then matched that against the risk allocation and said, there seems to be an opportunity here. So it was in the summer of 2014 that we actually made a strategic shift in the allocation within CAR, to be roughly equally balanced between those four investment styles. And over the years since, I think we saw the result of that in terms of delivering what we wanted, which was strong returns on a relative basis compared to a lot of the peers, really well-managed risk, and the diversification we’re looking for versus traditional assets. So that gave us the conviction that that was the right direction to go in, and subsequently we moved even our managed futures portfolio to a more balanced mix of the strategies within that. The other piece is that we’ve continued to build out the strategies since that time. So in any given year, we typically add about five to ten new models to the portfolio, and that’s what gets us to about 130 alphas in CAR today.
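The “roughly equally balanced” risk allocation described here can be sketched as a simple risk-budgeting exercise. This is an illustrative toy, not Campbell’s methodology: the vol numbers are made up, and the styles are assumed uncorrelated so that inverse-vol weighting gives exactly equal risk contributions.

```python
import numpy as np

# Hypothetical standalone vols for the four styles (illustrative numbers only).
styles = ["quant macro", "short-term", "trend", "equity market neutral"]
vols = np.array([0.12, 0.08, 0.15, 0.06])

# Weight each style inversely to its vol so each contributes equal risk
# (exact only under the zero-correlation assumption).
weights = (1.0 / vols) / np.sum(1.0 / vols)

# Each style's share of total portfolio variance:
risk_share = (weights * vols) ** 2 / np.sum((weights * vols) ** 2)
print(dict(zip(styles, risk_share.round(3))))  # each style contributes 25%
```

With correlated styles, the same idea requires a full equal-risk-contribution solve against the covariance matrix rather than simple inverse-vol weights.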

Corey Hoffstein  08:23

So I never like to put a time stamp on my episodes; I like them to be as timeless as they possibly can be, recognizing that process and philosophy are bound to evolve over time. But I couldn’t help but sort of chuckle, because yesterday Cliff Asness wrote a piece about managed futures. And he basically said, look, the whole point of managed futures is two things: one, to have positive returns on average, and two, to offer attractive returns during equity market sell-offs. Part of the argument there was that if you’re running a managed futures program, you should stay pure to trend. Somewhat ironically, towards the end of the piece, he talks about different ways in which they dilute trend with other, maybe, trend signals or trend-like signals. But this is a very common thing trend purists say, where they argue that the convexity benefits exhibited by trend-specific programs are too valuable to be diluted by other signals, particularly when you consider them in the context of the larger portfolio construction that happens. So I’d love, (a), your opinion on that, if you happened to see the piece, or if not, just your general thoughts; and (b), how do you think the transition from, say, pure trend to a systematic multi-strat program changes how the strategy is ultimately viewed and implemented by allocators?

Kevin Cole  09:38

Yeah, I did actually read Cliff’s piece. I think there are some points in it that we agree with, and maybe some where we have a little bit of a different perspective. But I would start by saying, we’re certainly not going out and turning our back on trend. We still have super high conviction in trend, both in our flagship, and also we do have standalone trend mandates for investors that are looking for that. And I guess that’s the place I would start: I think this question comes down to understanding the objectives of the allocator. So if an investor is looking for crisis protection, or if their top priority is maybe mitigating risks from their larger portfolio, say they have a large long equity exposure or exposure to risk-on-type assets, then a pure trend allocation can be a good solution. And again, we have mandates that do deliver that. But one thing that I would say is, pure trend can be hard to hold for some investors through the full cycle. And I think when Cliff talked about that dual mandate, I’m not sure whether, in some of those periods, trend funds delivered on the mandate of strong returns throughout the cycle. The real tragedy is if an investor maybe bought into trend after a crisis, expecting to get both of those benefits, and then maybe they lost conviction in it when there were years where it was underperforming their other investments, and then they didn’t have it in their portfolio at the time when it was needed, a year like this. So we want to be careful that if an investor is getting an allocation to pure trend, they understand that trade-off, and understand that maybe the expected risk-adjusted returns would be lower than a portfolio that has a more diversified set of strategies. But I think more generally, a lot of investors are looking beyond that kind of profile, and they’re looking for an allocation that not only will diversify their traditional investments, but also will have a high standalone Sharpe.
And for that, we believe that a systematic multi-strat approach is a good fit. The other consideration that I would mention is that we can take advantage of the embedded leverage in the markets we trade to really create a portfolio that can, I don’t want to say get the best of both worlds, but can actually get a lot of the upside benefits of trend, but also have some of the benefits of the other approach. And so we actually wrote a paper a few years back on this idea of using embedded leverage to your advantage. One way to think about it is, suppose an investor were to allocate to four individual managers specializing in each of the four investment styles we have. And just for the sake of argument, let’s say each of those delivered 10% annual vol and zero correlation to the other managers. And again, just to make the math easy, let’s suppose each of those delivers a 0.5 Sharpe. A lot of the quants listening to this podcast are going to be able to do the math on this in their head; it’s pretty straightforward to show that the combined allocation would have a Sharpe of one, which is pretty attractive. But the combined vol would drop to 5%, because of the diversification effect with zero correlation, which would mean the combined return would be 5%. So an investor would be getting a higher Sharpe, but not getting the benefit of the return. But because we’re running these in a single portfolio, we can manage the notional leverage of the underlying strategies, and we can ensure that we hit the target vol at the portfolio level. If we go back to the comparison to the trend-only portfolio, this embedded leverage effect means that we can capture more of the upside of an individual style like pure trend than the allocation might imply. And I think we’ve seen that: we saw that in the research we did before we made the allocation shift, and I think we’ve seen it in live trading.
And even in a year like this, if you look at our publicly available mutual fund, which does have a balanced mix among the three derivative strategies, the performance is actually higher year to date than, say, the NewEdge Trend Index or the NewEdge CTA Index, which I think is what we’re looking for from the program.
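The arithmetic in the four-manager example can be checked in a few lines. All numbers here come from the stylized example in the conversation (10% vol, 0.5 Sharpe, zero correlation), not from any fund’s actual statistics:

```python
import numpy as np

# Four hypothetical managers: 10% vol, 0.5 Sharpe each, zero cross-correlation.
n, vol, sharpe = 4, 0.10, 0.5
ret = sharpe * vol                      # 5% expected return per manager
w = np.full(n, 1.0 / n)                 # equal-weight allocation across managers

combined_ret = float(np.sum(w * ret))                  # still 5%
combined_vol = float(np.sqrt(np.sum((w * vol) ** 2)))  # shrinks to 5% with zero correlation
combined_sharpe = combined_ret / combined_vol          # 1.0

# Running the sleeves in one portfolio lets you lever back up to the 10% vol
# target, doubling the return while keeping the Sharpe of 1.
leverage = 0.10 / combined_vol          # 2x notional
levered_ret = combined_ret * leverage   # 10%
print(combined_sharpe, combined_vol, levered_ret)
```

This is the “embedded leverage” point in miniature: the separate-manager allocation earns a Sharpe of 1 at only 5% vol, while the single-portfolio version earns the same Sharpe at the full 10% vol target.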

Corey Hoffstein  12:59

You’ve started to mention a little bit about the different categories you trade in, the 130 different alpha signals that you guys employ. It’s been my experience, when I talk to different absolute return managed futures programs, that they actually tend to have different taxonomies as to how they think about their signals. Some are much more style-driven: carry, seasonality, trend, relative value. Others have absolutely unique ways of categorizing their signals. I’m hoping maybe you could talk a little bit about what the major taxonomies are at Campbell & Company, and why you think about categorization the way you do.

Kevin Cole  13:37

Well, the most important taxonomy for us is that broad style level. So we think of those four investment styles as being macro, short-term, trend, and equity market neutral. There are certainly other ways that you could categorize; I mean, you could use markets or asset classes, you could use trading speed. But we like to think primarily in terms of those investment styles because, number one, they are somewhat distinct in terms of the types of investment theses that they access, the types of data or frequency of data, even in some cases the infrastructure that’s used to generate the signals. And also, we believe each of those styles has a pretty distinct role to play in delivering the overall objectives of the portfolio. And it’s actually a good way to organize the teams; we think that it’s more efficient, at least for us, to organize the teams around these investment styles, both for new research and production oversight, as opposed to splitting by asset classes. Beneath that top level of the investment styles, we have a number of sub-styles, or sub-portfolios. There are different ways of slicing these. But one thing that we find is that once we get down below that top level, that granular taxonomy becomes less and less useful, and a lot of the newer strategies we’ve been building are actually harder and harder to classify in terms of those traditional boxes. And sometimes that’s frustrating, because we want to be able to explain exactly where something fits in, in a clean, easy way. But actually, one thing that I find is that I view it as a sign of successful innovation when we’re bumping up against the edge of those categories.

Corey Hoffstein  15:02

All right, you’re gonna have to expand on that last sentence for me. What do you mean by, one sign of successful innovation is when you’re bumping up against the edges of categories?

Kevin Cole  15:11

I think a lot of the quant world has become pretty well defined, in boxes that are well understood. I think the whole push towards risk premia over the last 10 or so years was a push in that direction. And I think for the investment world, that was a good thing: it clarified a lot, it provided some transparency, it increased understanding among a broader set of investors. But the potential drawback of that is that it makes it easier for those areas, over time, to become commoditized. As more money goes into them, there are more people competing for that same source of return. And that means that to stay at the forefront, and to make sure that we’re not degrading our opportunities, we need to continue to innovate and look in those new areas. And what we find is that a lot of the most attractive areas are the ones that are not fitting cleanly in a single box. I think one advantage for us, you know, at Campbell, is our collaborative culture, which allows us to work together across teams, and maybe across some core competencies, to find those areas of edge. One example of that, that we’ve worked on over the last couple of years, is taking ideas from the macro space that we’ve traded for a number of years, that historically would have been using daily data, with holding periods for a model that might be a couple of months on average, and thinking about ways that we can speed that up, and actually, in some cases, use intraday data and short-term-type infrastructure to trade those much more quickly. So we actually did a research project last year that was a joint effort between the short-term team and the macro team to implement a number of those alphas, and we deployed those. And so that’s a direction we’re continuing to push: looking for those areas that might not fit cleanly in a single category.

Corey Hoffstein  16:46

We’ve mentioned a couple of times this idea of over 100 different alpha signals, reminiscent to me of the idea of the factor zoo, right? Over 500 factors in the factor zoo in the world of equities. But the critique of that, often, is that many of these factors are really just characteristics that are cousins of each other, that maybe fall into the same style category: is price-to-book really all that different than price-to-earnings or price-to-free-cash-flow? Curious how you and the research team think about the dividing line between what is a new model, what’s a different specification of the same model, and what is truly a new alpha signal. As an example, if you use the same trend model, but one is implemented with a short-term parameterization and one’s a long-term parameterization, are those different alpha sources, or the same alpha model? How do you guys think about that?

Kevin Cole  17:37

For us, we generally distinguish a model by its having either a unique investment thesis or some distinct mathematical formulation. Sometimes the dividing line between a model versus a different specification is a little bit nebulous, or a little bit of a judgment call. But when in doubt, we would look at things like correlation to decide. So generally, different parameterizations of the same underlying investment thesis, or the same mathematical model, would not be counted as a separate model. The other thing I would say is that this question is interesting, and I think it’s useful, but our goal is not to be out there trumpeting the number of models for their own sake. You know, that’s not the point; our point is to deliver strong investment returns. So we do think about how to classify these models, but at the end of the day, the portfolio construction approach that we follow would also consider things like correlation among models. For those cases where maybe two different models have some positive correlation, that would be accounted for and discounted in terms of their risk allocation in the portfolio.
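That correlation test can be illustrated with a toy example. The signal definition below is my own stand-in (a simple moving-average trend rule on a synthetic random-walk price series), not Campbell’s model; the point is only that two parameterizations of one thesis tend to produce correlated signals, which argues for counting them as one model.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 2000))  # synthetic price series

def ma_trend_signal(prices: np.ndarray, lookback: int) -> np.ndarray:
    """Toy trend rule: +1/-1/0 by sign of price minus its trailing moving average."""
    ma = np.convolve(prices, np.ones(lookback) / lookback, mode="valid")
    return np.sign(prices[lookback - 1:] - ma)

fast = ma_trend_signal(prices, 50)    # "short-term parameterization"
slow = ma_trend_signal(prices, 200)   # "long-term parameterization"

# Align the two signal histories and measure their correlation; a high value
# suggests one model with two settings rather than two distinct alphas.
m = min(len(fast), len(slow))
corr = np.corrcoef(fast[-m:], slow[-m:])[0, 1]
print(round(corr, 2))
```

In practice a threshold on this kind of correlation (alongside the shared-thesis judgment call) is one reasonable way to draw the model-versus-specification line.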

Corey Hoffstein  18:32

How do you define, or maybe even quantify, your edge when you’re managing a collection of 100-plus strategies across different styles and asset classes and time horizons?

Kevin Cole  18:45

To answer that, I want to start just by giving a little bit of background on how we think about markets; I guess you could call these our organizing principles. The background is that we don’t believe there’s any holy grail, or any single model, that’s going to be the answer in markets. I mean, that was an attractive idea, I think, for a lot of us when we came into this space, that you were going to find the holy grail model, but I think we understand that’s simply not the case. Markets are complex adaptive systems. They’re highly competitive, they’re nonlinear, the signal-to-noise ratio is quite low. And I think after being in the space for a while, you understand that markets are not perfectly efficient, but I do like the phrase “efficiently inefficient.” What that means to me is that small edges can be found through hard research, but those edges do get competed away over time, and what was once alpha becomes widely known, gets commoditized, and becomes beta. Similar to what we talked about with the earlier question, what that means to us is we need to continually evolve the portfolio; we can’t just have a static set of models today if we want to stay ahead of the competition. The next organizing principle, I think, is one that’s near and dear to the hearts of most quants, which is that diversification is the closest thing we have to a free lunch. So it means that we do have kind of the marching orders to continue to find additional signals that have low correlation, and combine a lot of ideas across these different markets and time horizons and investment styles and so on. The third organizing principle, I think, is that the central risk we face as quants is overfitting: it’s pretty easy to find a good backtest, but just because you find artifacts in past data doesn’t mean they’re going to hold going forward.
I had a colleague who once shared a quote that really resonated with me. He said, just because you uncovered dinosaur bones in an archeological dig, it doesn’t mean that dinosaurs still roam the earth. And I think that’s very true for what we do. So it means you really do have to have a rigorous process for evaluating and rejecting ideas. Where does this take us in terms of Campbell’s edge? For me, our edge is in what we call a meta-model that we’ve built and refined over the past 20 years or so; it’s not any one particular model in that portfolio. For us, this meta-model is the idea that we can do high-quality research, we can evaluate new ideas and reject the ideas that don’t hold up under scrutiny, and we can apply this across a wide range of markets, investment styles, time horizons, and so on. I should also say, in terms of ways of expressing trades, we haven’t talked about RV versus directional, but about half of our trades are relative value. And so continuing to look for those new small sources of edge, and continuing to evolve the portfolio to make sure we’re staying at the forefront, works hand in hand with the review process that we’ve put in place to make sure we’re subjecting models to appropriate scrutiny.

Corey Hoffstein  21:19

A little bit earlier, you mentioned introducing five to ten new models a year, maybe on average. I was hoping you could talk us through the process you have to go through when adding a model. But also, I’m curious about the process of sunsetting models. I presume not all of these models stick around, that some do eventually have to be sunset. I’d be curious about not only the frequency with which you are sunsetting models, but how you’re ultimately making that decision, and that trade-off of needing to add something new or removing something. What are those conditions you’re looking for, to ultimately give up on a signal?

Kevin Cole  21:53

So let’s start by answering the question of how we add new signals. The research process begins informally within an individual team, and we can go back maybe later and talk about how we source ideas to start with. But let’s suppose that a team has a concept for a model. Typically, at the beginning, they would set forth what their investment thesis is, and decide what data they need and what other resources they need to answer the question. We also would want to, in some way, pre-register the idea: maybe set forth what out-of-sample data would be held back, and what criteria we would use for evaluating whether the model is ready to go. During that early stage, that would happen more informally within the team. If the idea shows initial promise, then we would take it to a more formal peer review, where we would invite researchers from other teams to participate, have access to the code and the data, and have a set of meetings where they’re able to challenge the ideas and test for robustness. During that peer review process, in the later stages, we would look at the out-of-sample data, which is always an exciting and nervous moment, when you get to see if it holds up. But assuming that it makes it through the peer review process, then it would go to the investment committee for final approval.
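The out-of-sample holdback described here can be sketched in a few lines. Everything below is illustrative, not their actual review criteria: fake returns, an arbitrary 80/20 split, and a simple annualized-Sharpe metric.

```python
import numpy as np

rng = np.random.default_rng(1)
daily = rng.normal(0.0004, 0.01, 2500)   # fake daily model returns for the sketch

# Hold back the final 20% of history; fit and tune only on the in-sample window.
split = int(len(daily) * 0.8)
in_sample, out_sample = daily[:split], daily[split:]

def annualized_sharpe(r: np.ndarray) -> float:
    return float(r.mean() / r.std() * np.sqrt(252))

# The in-sample figure guides development; the out-of-sample figure is looked
# at once, late in peer review -- the "exciting and nervous moment."
print(annualized_sharpe(in_sample), annualized_sharpe(out_sample))
```

The key discipline is procedural rather than mathematical: the holdout window is defined before research begins and examined only once, so it cannot silently become part of the fitting process.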
At that stage, we would typically deploy it for a period of maybe one to two weeks with de minimis capital, just to make sure the pipes are running properly and things are running smoothly in production. Then there would be a period, typically of about two to three months, where we would run it with what we call launch capital, which would be maybe one-tenth of the full target allocation, where we just monitor it and make sure, again, everything’s running as intended, that it’s behaving properly in terms of trading costs, risk characteristics, and so on. That amount of time won’t give you extreme confidence as a quant in terms of whether it’s delivering the Sharpe, but you can at least look at the range of expectations and make sure it’s within the distribution of your expected outcomes. At that point, at the end of that few months, we’d get back together with the investment committee for what we call a post-launch review. Assuming everything’s on track, then we would decide whether it’s ready to go to its target allocation. So that’s the process going in. And again, typically in a given year, five to ten models would go into the portfolio in that sense. Now, the other question: how do we decide when to sunset or remove models? It’s always tricky, because I think we understand, again as quants, that there’s statistical noise around these models; the Sharpe of any particular model may not be extremely high. And so we have to accept that some variation, and some periods of underperformance, is natural, especially when we’re looking to make sure we’re preserving that diversification among models. You don’t want to always be cutting something immediately when it loses and end up with a concentrated portfolio. But on the other hand, we know we don’t want to just let a model that is clearly broken continue trading indefinitely. So we use statistical guidance to help with that. For each model that’s deployed, we set forth guidelines for expected Sharpe, the vol the model’s running at, and some other features like that, so that we’re able to judge, in live trading, whether it’s within the range of expectations of its outcomes in terms of performance or drawdown or measures such as that. And then, when a model goes below a certain threshold in that distribution, typically we would cut that model: initially, maybe give it a 50% haircut, and continue to monitor. We also have criteria that say when it goes back above a higher threshold, we would redeploy to full capital. That’s typically what happens. But in cases where the model continues to underperform, it could be removed from trading. So that’s one way a model could be de-allocated and ultimately cut. Another would be, maybe the thesis has changed: there are cases where something about market circumstances has changed, and we decided the model is no longer valid. We’ve removed models for that reason. And the third would be, sometimes a model that we’ve done research on and deployed might be a better or more effective implementation of an existing model, and the new model might, over time, replace the old. So typically, in a given year, we might remove one to two models; but on net, you can imagine the rate of inflow of new models is higher than the rate of removal, and that’s how we grow the number of models over time.
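The haircut-and-redeploy logic can be written as a small state machine. The thresholds below are placeholders of my own choosing, not Campbell’s actual limits: the expected-Sharpe and dispersion parameters, and the z-score cutoffs, are all hypothetical.

```python
def next_allocation(realized_sharpe: float, current_weight: float,
                    expected_sharpe: float = 0.5, sharpe_stdev: float = 0.4) -> float:
    """Toy monitoring rule: haircut to 50% capital when realized performance
    falls far below its expected distribution; restore once it recovers above
    a higher threshold (hysteresis avoids flip-flopping at one boundary)."""
    z = (realized_sharpe - expected_sharpe) / sharpe_stdev
    if z < -2.0:                            # well below expectations: haircut
        return 0.5
    if current_weight < 1.0 and z > -1.0:   # recovered above the higher bar
        return 1.0
    return current_weight                   # otherwise leave the allocation alone

w = 1.0
for s in [0.4, -0.6, -0.5, 0.3]:   # a sequence of rolling realized-Sharpe readings
    w = next_allocation(s, w)
print(w)  # haircut after the -0.6 reading, restored after the 0.3 reading
```

Using two different thresholds for cutting and restoring is the standard way to keep such a rule from oscillating when performance hovers near a single boundary.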

Corey Hoffstein  25:47

Do you find that certain types of models have a longer expected lifespan than others? For example, does the time horizon affect the lifespan, shorter-term signals versus longer-term signals, or directional signals versus relative value signals?

Kevin Cole  26:02

I can’t say that we’ve found that, and that’s sometimes frustrating, because you’d like to be able to find these rules of thumb that would be very clear about efficacy in that way. But I don’t think that we’ve necessarily seen that.

Corey Hoffstein  26:13

So we’ve spent a lot of time talking about alphas, which is always the fun, juicy part of the conversation. But let’s take a little pivot here to something equally important: risk management. I have to imagine, again with 100-plus models that you’re trying to keep your arms around, I would love to know, to what extent is your risk management process systematic? To what extent do you require human oversight and intervention?

Kevin Cole  26:39

The vast majority of our risk management process is systematic. Maybe to give a little background, we think of three layers of risk. We think about what we would call normal risk, which would be the day-to-day variations and risks that can be well modeled with statistical distributions. We can use the measures that quants love, vol-type measures, to size positions and to hit the particular vol target that we’re looking for at the portfolio level. That runs, of course, in the background very smoothly, and doesn’t require a lot of oversight other than just making sure that things are running. The next layer would be tail risk, which would be things that involve outsized moves, but in a way where, pooled across markets, we understand what the features of those tail moves might be. And again, we can apply statistically driven constraints like CVaR limits, sector loss limits, concentration risk limits, that type of thing. And again, that’s systematic; it’s something that over time we evolve, and we look for new ways to do that, but that’s pretty well understood and run systematically. The final group would be those unknown risks, the areas that really can’t be well modeled with a systematic process. That might be events that crop up, going back to Brexit in 2016, or an election, the early days of the onset of COVID, the Russia-Ukraine crisis. A lot of those are things you can’t build permanently into your system, necessarily. And for those, I would say, I’ll shift gears now and then come back to the systematic part, but we do have an investment committee that meets every day. Most of the investment committee’s role is very passive and observational. Typically, we’re just going through our set of analytical tools that summarize the various exposures in the portfolios. And as you said, there’s a lot to consider there, so we’ve developed a set of tools that help to boil that down into factor exposures, sector exposures, and so on, and help us to make sense of that.
So most of the day to day is going through those reports, thinking about how positions have changed, how our risk posture has changed, thinking about what economic or market news might have come out that might affect us. But we're also thinking about the emerging risks on the horizon that maybe the normal risk management process would not be able to handle on its own, and whether there are additional layers of protection we would need to put in. Going back to the case of Brexit: in the winter and spring of 2016, we could already see that this event was coming, and we were able to think about things like, let's build risk factors that represent the market risks of Brexit, so we can measure and quantify our exposures to it. So we did that, put in place these risk factors that helped us monitor that exposure, put in place some additional measures of volatility, like using implied vol, and that helped us manage through that experience. That's been the playbook we've followed for similar event risks since. Over time, we've learned from that and found ways to quantify some of the things that maybe would have been difficult to quantify in the past. So we now have in place what we call contextual risk management, which takes our proprietary risk factor library, hundreds of risk factors related to market exposures, macro exposures, and it can incorporate things like geopolitical risks; we can build risk factors around that. Then we can interact that with measures of crowding in markets: we can take crowding at the market level and roll it up to exposures at the factor level, and identify times where maybe our portfolio has concentration in a particular area that is also concentrated in terms of crowding in the market. At those times, we can clamp down as needed on those exposures.
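The "crowding interacted with factor concentration" idea can be sketched roughly as follows. Everything here is hypothetical: the factor names, scores, limits, and scaling rule are invented for illustration, since the conversation describes the concept only at a high level.

```python
def contextual_clamp(exposures, crowding,
                     exp_limit=0.5, crowd_limit=0.7, scale=0.5):
    """Scale down factor exposures that are both concentrated and crowded.

    exposures: dict of factor -> portfolio exposure (illustrative units)
    crowding:  dict of factor -> crowding score in [0, 1], rolled up from
               market-level crowding to the factor level
    Returns adjusted exposures; a factor breaching BOTH limits is scaled by `scale`.
    """
    adjusted = {}
    for factor, exp in exposures.items():
        concentrated = abs(exp) > exp_limit
        crowded = crowding.get(factor, 0.0) > crowd_limit
        adjusted[factor] = exp * scale if (concentrated and crowded) else exp
    return adjusted

# Hypothetical portfolio: factor exposures and crowding scores
exposures = {"energy_beta": 0.8, "rates_duration": 0.3, "event_gbp": 0.6}
crowding  = {"energy_beta": 0.9, "rates_duration": 0.4, "event_gbp": 0.2}

adj = contextual_clamp(exposures, crowding)
# Only energy_beta is both concentrated (0.8 > 0.5) and crowded (0.9 > 0.7),
# so only it gets clamped; the others pass through unchanged.
```

The key design point the sketch tries to capture is the interaction: neither concentration alone nor crowding alone triggers the clamp, only their coincidence.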
So that's an example where we've looked to learn from those episodes where maybe the Investment Committee had to go above and beyond the normal systematic process, and then systematized those. Our goal is to minimize any kind of intervention, and indeed, it's rare that we have to intervene. But we also need to be prepared so that if there is an unknown unknown, something that comes out of nowhere, we're ready to reduce risk in the portfolio. And that's one reason why having that human oversight is useful.

Corey Hoffstein  30:27

You know, as quants, we have this sort of ever-growing library of what I would call structural risk factors that we're all very aware of, so I really love this idea of emergent risk factors that seem to come into the market. The two maybe most obvious ones in my fairly short career have been the risk-on/risk-off factor environment we were in post-2008, and then, I feel, a very easily identified one was the COVID-on/COVID-off regime that totally changed how certain baskets of securities traded against each other depending on the news flow. I would love if you could draw that out a little bit more: talk to me about how you're identifying these emergent risk factors at any point in time, working with the team to try to capture them quantitatively, and how that flows through the risk management process.

Kevin Cole  31:19

We're working with our trading desk, with our counterparties, with the research team, thinking about what risks might be emerging, what we might be missing, and whether we might be able to quantify that. So going back to January 2020, it wasn't called COVID at the time, but when the first news was coming out about the risks of a virus, we were able to make use of some bank research that was beginning to come out about the markets that would be affected; during the early days that was very focused on Asian markets. We were able to build risk factors around that, and then that evolved through time. We followed a similar process with Russia and Ukraine, where we were able to identify which markets would be most exposed to news about how that crisis evolved. Now, in some cases we don't automatically put this into the systematic process; we need to decide whether it rises to the level that we want to put it into the contextual risk management process. But we're able to do that if we determine it's necessary.

Corey Hoffstein  32:12

Changing gears a bit: one of the areas of particular interest to me personally over the last couple of years has been this idea of what I call strategy versus structure. I know that Campbell's flagship program is offered as a managed account, in a pooled vehicle structure, as well as within a mutual fund. I'm curious how you think about the trade-offs, challenges, and opportunities presented by managing a similar strategy in all of these different structures.

Kevin Cole  32:44

On the one hand, it's in some ways not too much of a challenge, because the underlying strategies and markets are all liquid. And we do see it as being good from a business perspective, because it allows us to diversify products and fee structures and meet the needs of a lot of different clients. On the institutional side, it's allowed us to customize products to meet a particular client's objectives, and I think that's almost the price of entry for a lot of large institutional clients, as they need that ability to customize mandates. From that perspective, I think it's necessary for us. Some of the considerations we think about when we're structuring a particular program, or adapting a program across different structures, would be the tracking error that results from any restrictions in a particular structure. If you're doing a UCITS product, for example, you might have constraints, whether excluding commodities or fixed income constraints set by regulatory requirements, that might lead to tracking error, and we want to make sure investors understand that. We'd also want to make sure the fee structures are coherent across different products or structures, even if there might be a different mix between management fees in some cases and performance fees in others. The other piece for us is making sure we have a good understanding of the capacity of the underlying strategies, because sometimes people will ask me what the capacity of a particular product or offering is. We don't think about it that way, because different underlying strategies or models are applied across a lot of these different programs, structures, and offerings. So you need to make sure you're thinking about capacity holistically across all of that, especially if, like us, you have an allocation to capacity-constrained strategies like short-term strategies. That's an important consideration.

Corey Hoffstein  34:20

Now, with the constant evolution of the strategy, ongoing research is obviously incredibly important to Campbell, and as CIO, part of your job is to oversee the structure of the research organization. I was hoping maybe you could take us up to a high level and explain how you think about organizing the research team.

Kevin Cole  34:41

I'll start by emphasizing what I think is the most important feature of our culture, which is collaboration. Some quant firms seem to be organized in silos or maybe in pods, and, not to denigrate that, it certainly seems to work for some organizations, but we've found that for us at Campbell a collaborative team approach really works best. That means sharing of ideas and sharing of IP across the team, and we've seen a lot of benefits from that: in terms of the rigor of the research, making sure the highest quality ideas make it through the research process, in terms of cross-pollination of ideas from one area to another, and in terms of team retention. So collaboration is really important to us. When we think about organizing the teams, as I mentioned earlier, we organize around functional responsibilities, along the lines of those different investment styles or risk management. Beyond that, we have a team that's focused on data engineering and data science, and also a team on the engineering side that focuses on core infrastructure. When I think about the traits or qualities we look for in the research team, there are a few things that are important. It goes without saying that you want high intelligence and a good quantitative background; those are essentially table stakes for what we do. But we're not looking for a particular pedigree or a particular degree. We've found that the skill set we need in terms of math, stats, and programming is found across a range of STEM fields, so it's not so much what field you have your degree in, and we do like the diversity of perspectives that comes from a lot of different academic backgrounds. But the soft skills are also pretty important. We're looking for curiosity; the ability to generate original ideas is very important.
We want to make sure that team members are resilient, because the research process is hard: you run into a lot of obstacles, and you want to make sure people are going to be able to overcome them. We're looking for researchers that are skeptical, that don't take ideas at face value. A collaborative mindset, obviously, is important. And finally, we want people that are results oriented. We do have people with strong academic backgrounds, but we don't want an ivory tower mindset; we want people that are excited about seeing their ideas make an impact on the portfolio. The other thing I'll say is that our team has really been built in-house from the ground up. We don't bolt on teams, or at least we've not found the opportunity to do that. And again, in terms of that collaborative culture and cohesion, that's been an important piece for us.

Corey Hoffstein  36:56

I think one of the biggest challenges for any CIO is figuring out how to set the research agenda. Can you maybe walk us through how you think about setting the cadence, determining what projects to pursue or not pursue, and how you take into account the potential costs of these different projects?

Kevin Cole  37:15

I think it starts, in the background, by making sure that you have a pretty rich set of ideas seeded that look like good opportunities. You almost want the research team to feel frustrated that there are more ideas we could explore than we have time to explore; I think that's a good thing if you have it. That comes from having a process in the background that encourages early-stage exploration. For example, we have a research discussion group that meets every week to share early-stage research and ideas, where somebody shouldn't be afraid to get up and present something that may feel kind of half-baked, because a lot of great ideas come from that. But then from there, we want to make sure that we're defining the agenda of new initiatives on some regular cycle. For us, that's a yearly cycle, where at the end of each year we get together with the research leadership, talking with the entire research team about the potential ideas in the pipeline and thinking about which of those ideas have the opportunity to make the greatest impact on the portfolio. That results in a list of projects or initiatives for the upcoming year. Once we've got that list, and we've made sure that we've got the resources to carry out those projects, we hand it over to the individual teams to carry out. And we've actually put in place a process that we call Pulse, which I would say is a lightweight project management framework that we've found to be really useful in making sure that we're effective at taking ideas from early stage through the entire process, and also at deciding when it's time to move on and, say, kill a project that's just not panning out.

Corey Hoffstein  38:42

Can you expand a little bit more on that Pulse framework that you've implemented?

Kevin Cole  38:45

We put Pulse in place about five years ago. It's an approach for managing all the projects in research, and it follows a biweekly cycle; that's kind of the pulse of Pulse, and that's how we look at the segments of any given project. It culminates every other Friday morning, where we get together for the full Pulse meeting, about two hours, and the entire research team is invited to participate if they're interested. Each project team comes and has five to ten minutes, basically to give an update on their project: to talk about what's working, the progress they made over the last two weeks, maybe where they hit some roadblocks, whether they need additional resources, or how they might need to interact with other teams going forward. It's not meant to be a research-focused meeting; there might be highlights of results found over the last two weeks, but it's not meant to go deep. There are other venues for that. It's really meant to make sure the questions are answered: Are we on track? Do we have the resources we need? And is this the best use of time for this individual or group? There's natural apprehension when you begin something like that, because in managing a research team there's maybe a feeling that any kind of project oversight may be heavy-handed, so we wanted to make sure that when we put it in place it was not that, and we've continued to evolve it over the years. I think one reason it's been successful for us is that it's not seen as a top-down, Big Brother view of "are you working hard enough," because that's not the model we want to have. We understand that we have really high-quality, motivated people here; we know their time is valuable. So we want to make sure they understand that they're empowered to say, "This is not going where I would like with this project," and it's okay to say, "Let's kill it."
It's seen as a peer-based approach to accountability rather than accountability to management. It's been pretty effective for us; I think it's part of the reason why we've continued to be effective in deploying a number of new models. I'd also say that it was really helpful during the COVID work-from-home period, to make sure that everybody was able to stay focused. In fact, doing Pulse by videoconference was very effective, and we've actually continued that even as we've gone back to a hybrid approach in the office.

Corey Hoffstein  40:53

One of the big risks that I often see in any research organization, whether in finance or outside of it, is that new ideas aren't necessarily properly scrutinized, for political, social, and sunk-cost-fallacy reasons. I'm curious how you make sure that you're adequately rejecting ideas that really do need to be rejected.

Kevin Cole  41:16

I think it starts with team culture. You need to make sure that you're hiring and retaining team members that have a collaborative mindset and don't let their ego get in the way of getting to the right answer. You need to make sure that you're defining research success as getting to the right answer and doing high-quality research, wherever that leads, and then making a decision whether to reject or approve the idea, not setting up incentives where somebody feels pressure to complete a project and deploy it if they don't feel confident it's going to be successful. You also need to build formal steps into the process to make sure that you're encouraging rejection of ideas for the right reasons, and our peer review process is really focused on that. And finally, it's important to think in terms of opportunity costs. In isolation, it may seem tempting to keep hammering away at an idea, thinking that a breakthrough is just around the corner, and you don't want to kill an idea too quickly; there is a risk of that. But you also have to recognize that that time may be better spent moving on to other areas, especially when you have a lot of other promising ideas in the pipeline. So you want to make sure that people don't feel it's a failure to put something aside and move on.

Corey Hoffstein  42:20

To risk the cliche, learning from failure can often be just as important as learning from success. You've clearly had great success so far with this Pulse process that you've embedded within your research organization. I'm curious if there are any processes you've tried that haven't worked, and maybe lessons learned there?

Kevin Cole  42:40

Well, I'll build on a couple of things we've discussed before, maybe in slightly different ways. First, I really do think we've learned over time the importance of focus. Maybe there were times, going back years, when we didn't have as much focus as we could have. It's natural, I think, for a research organization to fall prey to that, because there are so many interesting ideas you can explore at any point in time, and it's easy to get distracted by whatever the latest shiny object is. It can be exacerbated if the leadership of the organization is throwing out questions to the team without appreciating the effort it takes to answer them, and usually that's done with good intentions; there are a lot of good ideas out there. But I think it's important to make sure we stay focused and don't lose sight of the priorities in front of us, and we've probably learned more over time about how to do that. The other thing I'd mention is the peer review process. That's something that has evolved a lot over time. During the early days when we put it in, going back to, let's say, the late 2000s, the process was pretty overly formalized, and it got rigid at a certain point, where there was a checklist of all the steps in the process. What we found is that that's overly rigid: no two reviews are alike, and you need to be able to evolve the process to meet the needs of whatever particular project you've got. So we've found it's better to be a little more nimble while still having some overall structure.

Corey Hoffstein  44:00

Sticking on the topic of failure for a second, and taking this conversation full circle back to the evolution of trend strategies: as we mentioned earlier, a lot of managed futures programs started to dabble in expanding the palette of signals they were employing through the 2010s, and a lot of them had varying degrees of success. My suspicion is that the breakdown was really in the research process itself; these were firms that were not set up to research signals in these different areas. I'd love to know where you think some of these firms may have gone wrong in making this evolution to their managed futures programs.

Kevin Cole  44:39

I can't speak for other firms, but my guess would be that there are a few issues they stumbled on. One would be back to what we said earlier about looking inside the boxes versus pushing outside the boxes. Some of those firms, if they had large AUM, would have needed to focus on highly liquid strategies, and maybe they weren't experienced in looking at some of those newer areas. The first place to look would be all the research that was coming out at the time on risk premia, so they may have been looking at standard carry or standard value type factors, and as we said, some of those got pretty commoditized because there was a lot of money chasing them. If you were just looking in those well-defined areas, that might not have been the most fruitful, and it takes time to build up the expertise to trade a lot of these strategies and learn where the nuances are. I mentioned earlier that in the period between the early 2000s and 2014, we did a lot of work building out that strategy set, and it really wasn't matched by the allocation. In some ways that was frustrating, but at the same time it gave us the time to make some mistakes in some of the strategies without high consequences, and to learn from those mistakes. So not rushing, or not having a knee-jerk reaction to switch your portfolio, particularly in response to underperformance of your existing strategy, is important. And finally, it's important to make sure that you're aligned with investors on what you're offering and on how you're changing your program. We know investors are very sensitive to style drift, so you need to make sure that conversation is happening in a very transparent way.

Corey Hoffstein  46:11

In preparation for this interview, you and I had a pre-call, and you mentioned something that I took very specific note of, which is that during COVID, your research team was able to implement an epidemiology model. I thought that was really interesting for two reasons. One, it highlighted the ability of the team to rapidly pivot what they were working on to what was most pertinent at the time. And two, it's a really interesting example of the research team working on a model for which there is no obvious validation dataset: there's no obvious backtest you can perform to figure out whether the model is valid or not. I'd love to hear you talk about what that experience was like and how you and the team tackled these types of research ideas.

Kevin Cole  47:04

As background, I would say, of course, we love backtests at Campbell; they're the bread and butter of any good quant, and the more historical data, the better. If you think about those papers, like 100 years of trend following, that's a very comforting place for a quant to be. But going back to some of the discussion we had earlier, there's an inherent tension: the ideas that have the most robust empirical support are probably also at the most risk of being commoditized, and perhaps the concepts that have the greatest opportunities are those that can't be validated across long backtests and a broad set of markets. So a couple of years ago, I gave the research team a challenge to think about areas that might still fit within a systematic framework but would challenge us in terms of our ability to backtest. It might be an emergent phenomenon; it might be something episodic or contextual in markets that just doesn't fit in the standard box. Just to be clear, these models don't represent a major allocation in our portfolios today; they're a relatively small allocation in aggregate, and something like the COVID model came in and then went out once that period had passed. But it's my view that we need to be open to incorporating these kinds of ideas and to continue to push the boundaries of the traditional quant comfort zone. So how do we tackle these ideas? First of all, in most cases you're going to have to rely more heavily on the investment thesis to start. You need clear criteria for when that thesis is in play, and also for when it might no longer be valid and you might need to remove the model. And you need to look for ways to validate the thesis even if you don't have a long backtest. One way to think about this is that you might be able to find other empirical tests you can do around the predictive relationship.

The predictive relationships we're looking at tend to be pretty noisy, which is why you need a lot of data to establish any kind of statistical confidence. But often the contemporaneous relationship may be stronger. So you might be able to use the contemporaneous relationship, not to establish a profitable trade, but just to test something about the thesis: is there a meaningful relationship in play there? That's the kind of thing we would look at: think about the thesis, think about more creative ways to test it, and then think about what it would take to implement the strategy in live trading. For something like COVID epidemiology data, that meant working with the data engineering team on whether we could scrape the data we needed, ingest it into our data pipelines, and so on, and then thinking about how to monitor the model to decide when conditions have changed and it's time to pull it. The COVID epidemiology model was an interesting experience, a learning experience for us, and I want us to continue to push further in those directions.
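The idea of validating a thesis through the contemporaneous rather than the predictive relationship can be illustrated on synthetic data. Everything below is an assumption for illustration: the "driver" series stands in for something like an epidemiology index, and the coefficients and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Synthetic "thesis driver" (e.g. an epidemiology-style index) and a market return.
driver = rng.normal(size=n)
noise = rng.normal(size=n)
# The market responds to the driver contemporaneously, buried in a lot of noise.
market = 0.3 * driver + noise

def corr(x, y):
    """Pearson correlation between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

# Contemporaneous relationship: is the thesis in play at all?
contemp = corr(driver, market)

# Predictive relationship: does yesterday's driver forecast today's market?
pred = corr(driver[:-1], market[1:])

# Here the contemporaneous link is much stronger than the (near-zero) predictive
# one, so it can support the thesis even when the tradable signal is weak.
assert abs(contemp) > abs(pred)
```

The point of the sketch is the asymmetry: a data-generating process with a genuine contemporaneous link but no lead-lag structure will show statistical significance in the first test long before the second, which is why the contemporaneous test is the cheaper way to check whether "a relationship is in play" at all.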

Corey Hoffstein  49:50

We've come to the last question of the episode, and it's the same question I'm asking everyone this season, which is to reflect upon your career and tell me what the luckiest break you had was.

Kevin Cole  50:03

The luckiest break was actually the coincidence of me ending up at Campbell. I was in Washington, DC interviewing for a job back in 2003, and I happened to hear through the grapevine about Campbell. I didn't know much about them, but almost on a whim, I made a phone call on the morning of my last free day in DC and was invited by the head of research to stop in and meet the team. It went from there: I ended up joining Campbell, and everything followed from that. So I'm thankful that luck was on my side that day.

Corey Hoffstein  50:38

From coincidence to CEO. That's not a bad lucky break. Well, Kevin, thank you so much for joining me. This has been absolutely fantastic.

Kevin Cole  50:45

Thank you, Corey. It's been great being here.