In this episode I speak with Asif Noor, Portfolio Manager at Aspect Capital where he oversees the firm’s Multi-Strategy Program.

Asif has spent the last 25 years of his career developing systematic macro strategies, giving him a depth and breadth of experience to understand what it takes to remain competitive in the space.

While a handful of low frequency signals may have been sufficient a few decades ago, today Aspect’s Multi-Strategy Program incorporates hundreds of alpha forecasts ranging from intraday to several months. But this evolution also brings new challenges, which we discuss at length in this episode. For example, how are new alphas introduced and old alphas sunset? How do you unify alphas of different magnitudes and convictions? Or, how do you manage risk across so many signals?

This conversation is chock-full of the practical, real-world experiences of running a multi-strategy program.

Please enjoy my conversation with Asif Noor.

Transcript

Corey Hoffstein  00:00

3, 2, 1. Let's do it.

Corey Hoffstein  00:06

Hello and welcome, everyone. I'm Corey Hoffstein, and this is Flirting with Models, the podcast that pulls back the curtain to discover the human factor behind the quantitative strategy.

Narrator  00:19

Corey Hoffstein is the co-founder and Chief Investment Officer of Newfound Research. Due to industry regulations, he will not discuss any of Newfound Research's funds on this podcast. All opinions expressed by podcast participants are solely their own opinion and do not reflect the opinion of Newfound Research. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Newfound Research may maintain positions in securities discussed in this podcast. For more information, visit thinknewfound.com.

Corey Hoffstein  00:50

In this episode, I speak with Asif Noor, Portfolio Manager at Aspect Capital, where he oversees the firm's Multi-Strategy Program. Asif has spent the last 25 years of his career developing systematic macro strategies, giving him a depth and breadth of experience to understand what it takes to remain competitive in this space. While a handful of low frequency signals may have been sufficient a few decades ago, today Aspect's Multi-Strategy Program incorporates hundreds of alpha forecasts ranging from intraday to several months. But this evolution also brings new challenges, which we discuss at length in this episode. For example, how are new alphas introduced and old alphas sunset? How do you unify alphas of different magnitudes and convictions? Or, how do you manage risk across so many signals? This conversation is chock-full of the practical, real-world experiences of running a multi-strategy program. Please enjoy my conversation with Asif Noor.

Corey Hoffstein  01:51

Asif, thank you for joining me today. I don't like to put too much of a timestamp on these episodes because I like them to be evergreen, but we've got the FOMC today, and I know it's a bit stressful, so I appreciate you taking the time. I know you've got a lot going on on your side working on systematic macro programs, and there are obviously a lot of systematic macro things going on in the market, so I think this is a very timely conversation to have. But let's start where I always start: for the listeners who maybe are not familiar with your work, can you give us a bit about your background and how you got to where you are today, leading the Multi-Strategy Program at Aspect?

Asif Noor  02:32

Thank you, Corey, and thank you for inviting me to be on this podcast. It's a pleasure and an honor to be here. I've been listening to some of your podcasts, and they're quite different to some of the content out there, so it's a pleasure to be here. Yeah, it's a busy day, but it's also a very nice, sunny day in London, so that offsets some of the stress, I suppose. So, my name is Asif Noor. I've been building systematic strategies for about two and a half decades, about 25 years roughly. I started my career building global macro strategies: purely relative value strategies within the broad sectors, equity futures, bond futures and currency forwards. And over the last 25 years that's expanded to now managing the Multi-Strategy Program at Aspect Capital. For those of you who don't know Aspect Capital, we've been in business for about 25 years, initially building trend following, price-based, directional trend-following strategies for a lot of our history, managing financial futures, non-financial futures, commodities. And for the last seven to ten years we've been focusing on building non-price-based strategies as well, and a suite of what I would call convergence strategies, using data sources that are non-price-based, and price-based as well when it comes to mean reversion strategies. So my focus currently is managing global macro strategies and relative value macro strategies at Aspect, as well as managing client solutions and the multi-strategy programs.

Corey Hoffstein

I never like to waste an opportunity where our guest has a real depth of experience in an area. I would love to get your perspective on how systematic global macro strategies have changed over the last 25 years of your career.

Asif Noor

Yeah, so I'm going to try to avoid saying "that's an interesting question," Corey, because I think that's one of the most used phrases in interviews and live meetings, so I will try to avoid it. If I don't say it's an interesting question, don't take it the wrong way. But yeah, 25 years is a long time, actually. When we first started building macro strategies, we were mainly using non-price-based data, with some of the restrictions that we had: basically not building directional strategies, building strategies that had no correlation to equity markets, no beta. We tried to build strategies that were pure alpha, pure relative value. It felt a lot easier then than it has been for the last 10 to 15 years. And that's mainly because I think a lot of the strategies we built back in those days had a few biases. Perhaps we would categorize them as risk premia strategies today. There was a lot of carry probably in there,

Asif Noor  04:59

a bit of trend, and a lot of value, but sort of correlated strategies, really. Especially when you think about forecasting currency markets pre-financial-crisis, I think a lot of managers trading in that space had similar bets, because cash yields in the UK and Australia were in the range of five to seven percent. So you had the carry currencies, and those commodity-heavy currencies also had high growth rates. From a fundamental macro perspective, you were long the same markets, and they had been trending the same way; the dollar had been weakening for a long time. So in a way, you were in similar bets. But over the last, I'd say, eight to ten years, things have changed a lot. A lot of these exogenous shocks that you didn't have to really cater to and worry about have been happening more and more, or it feels like they're happening more and more, as we enter what we constantly keep referring to as a new environment or a challenging environment. So whereas previously we used to think about a static allocation of risk to a number of forecasting models, we have to be a bit more dynamic. Markets are pricing information in a lot quicker. With information technology, there's a lot more information available to investors, which means anomalies get arbitraged away a lot quicker, and you have to dig a little deeper to find the forecasting relationships or anomalies that exist to forecast these markets. Back then we didn't have the concept of alternative data; a medium-term frequency to build macro models was okay, monthly frequency data was fine, monthly rebalancing was fine. And the breadth of assets has increased as well. So it's technology, with data, and with more of these exogenous shocks, more of these dislocations that we see, it does make systematic investing more challenging.

Corey Hoffstein  06:40

I wanted to start with that perspective of history, because we are going to spend quite a bit of time going deep on Aspect's Multi-Strategy Program, which you lead. And I think this is one of those areas where listeners will want to stay to the end of the conversation, because there is a tremendous amount actually going on behind the scenes as we peel back the onion. But I wanted to start not only with the perspective of history: maybe, at a very high level, you could explain what the actual objective of the program is, and how those objectives translate into the overarching, more general strategy design.

Asif Noor  07:18

Yeah, no, Corey, I think that's quite important. Because when I was asked by the Aspect board to take over the multi-strat program, we spent quite a few sessions actually just narrowing down what the objectives are. I think you've hit the nail on the head: the objectives should really feed into the design of the program. In a number of other cases I've come across, it doesn't. You sort of fall into, "okay, this is an interesting program, and now let's retroactively figure out what the utility function of this program is." But I've learned over so many years of building systematic strategies that actually nailing down the utility function, what niche it's trying to cater towards, what the client utility is here, matters. Especially when you are looking at a firm like Aspect that has a number of systematic strategies, it becomes quite important to establish what the utility function of each of our programs is, because we don't want to muddy the water. We want to be quite clear, to offer investors the right product. So the objective here is, in my mind, quite simple. It's a multi-strat program, but it's not a combination of all the models that we have at Aspect. We currently have over 250 individual forecasting models, and it's not a combination of everything that we do. That's because we are trying to deliver our investors a smooth, consistent return profile, with diversification to any of the risk factors: more broadly, to equity markets, to bond markets, to trend and macro or any of the common hedge fund indices, and to any of the common systematic factors that you come across. It's a very simple objective, but it's not as easy to achieve, and it gets harder with every passing day. So what do I mean by smoothness and consistency in the return profile? With systematic strategies, the favorite question of clients and prospects is: which environment is the strategy supposed to work in, and which environment is it not supposed to work in? But by definition, if I'm trying to deliver you a consistent, smooth return profile, it's supposed to work in all environments. Is it an all-weather product? Well, I'm not very fond of that term, all-weather, because it sort of discounts what we're trying to do here. We're trying to combine a set of alpha models, forecasting models, return forecasters, to deliver a consistent return stream in every environment the world throws at us.

Corey Hoffstein  10:00

One of the phrases we're going to use quite a bit in this conversation is "alpha signal." And one of the things that maybe surprised me after doing six seasons of this podcast is that when you talk to a large number of systematic quants, the phrase alpha signal can actually mean very different things depending on who you're talking to. So, just to make sure you, the listeners, and I are all on the same page: when we say the phrase alpha signal in this conversation, can you maybe define what that means to you and provide a naive example of what that might look like?

Asif Noor  10:10

Sure. Again, I agree with you completely on that. Some people think carry is not an alpha factor; some people think momentum is not an alpha factor. And I think, Corey, it's dynamic. To be honest with you, it depends on the product that we're talking about. For the multi-strat program, the way I judge an alpha signal is really by the correlation of its return profile to other things, like equity beta, for example, or to any of the risk factors that I'm going to be looking at. So I don't mind having a systematic carry factor in my book, as long as it's doing something dynamic, something different. A carry factor may move away from a simple premium to an alpha factor in the way we construct it. We may structure it by having an element of timing: a dynamic allocation that doesn't capture carry through every time period, but only over certain time periods. You may also have some non-linearity in terms of how you build it. In that sense it becomes an alpha opportunity as opposed to a simple premium factor. So I'm not fond of this alpha-beta separation, because it's really subjective; it depends on which product you're referring to and how you actually construct it. You can't just label things and say my momentum, my value, my carry are going to be my non-alpha factors, and my true alpha is my shorter-term arbitrage opportunities. Because even in your shorter-term arbitrage opportunities, you'd be surprised to see how much carry comes into a lot of these models. In reality, there's a lot of directionality, a lot of beta that comes in. So you have to be very careful with how you define and how you construct these anomalies.

Corey Hoffstein  11:46

Based upon that answer, I'm a little hesitant to ask this next question because it might be redundant, but I want to ask it anyway. When you think about the alpha signals that are being used to drive the multi-strategy program, and you mentioned things like carry and momentum, do you find that they fall into the natural taxonomy that I think most quants would be familiar with: things like value, momentum, carry, seasonality? Or do you find that most of the alpha signals you focus on are truly orthogonal to traditional research?

Asif Noor  12:18

I’ve given an example right, going back sort of two decades, when we started building these models, we had 18 of them 18 to 20 different models. And to make it simple for our clients to understand we went value technical sentiment, because that made sense. We just bucketed them into these broad categories. You could take those value technical sentiment models, and bucket them into carry value, seasonality. And so on. Today’s implementation today, the way I categorize these alpha opportunities in the multi site program is within 14 broad investment themes. And those 14 broad investment themes are really designed to capture cross correlation between those models. So if you take our 110 individual models we have in the multi strap program and run a simple clustering algorithm on them, you know, we can cluster them into 14 broad opportunities. And then you sort of look at those bins. And you say, Okay, what are these bins roughly trying to capture? Well, interestingly, bin number one has a lot of models that try to capture sentiment using option markets. Well, let’s call that option market sentiment, we’ve been to looks like a lot of momentum models, but they’re on the faster scale faster horizon, that that means that they’ll quite slowly correlate to a medium term momentum, let’s call them fast momentum, you know, you can go down, you know, some of them look at sort of curves across commodities, fixed income, then currencies, sort of twisting around in a time series fashion. So we can call them curve dynamics. So for me, it’s, you know, you can call them bin one bin to bin three banane. It’s really the clustering and the relationship between them, because ultimately, what I don’t want this, all of them to line up and lose money at the same time. So as a result, I guess to answer your question directly, we don’t sort of categorize or label them in the normal topology, because we don’t need to write because we’re doing even when you call when you think about seasonality. Seasonality factors today, are quite different to what they were 1015 years ago. And I say factors and models, I use them use that interchangeably. So you don’t need to, you know, it’s gonna seasonality could be done using machine learning. Seasonality could be done using different data sources. So if you forecast models trying to capture seasonality using non traditional data sets, does that fall in the old data category? Or if it’s done using some machine learning as a form of machine learning category? So it’s really just, you know, we don’t need to sort of label them. And I guess 15 years ago, 20 years ago, a lot of investors when it came to quant strategies, there was a lot less understanding of quant strategies. We used to be called in a black box approach don’t want to touch it or understand it. And so to simplify things, we would categorize them into high level categories like I mentioned value sentiment.

Asif Noor  14:59

technical. We don't need to do that anymore. You know, we have ChatGPT; everybody can use ChatGPT to open that black box. So yeah, we actually label things in a way that's intuitive and driven by the clustering methodology.
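(An illustrative aside for readers: below is a minimal Python sketch of the kind of correlation-based clustering Asif describes, grouping model return streams into broad themes. The 110 models and 14 themes come from the conversation; the data, the distance metric, and the linkage choice are assumptions for illustration, not Aspect's actual methodology.)

```python
# Minimal sketch (not Aspect's code): cluster model return streams into
# broad "investment themes" via hierarchical clustering on correlations.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
model_returns = rng.standard_normal((2500, 110))  # placeholder daily returns

corr = np.corrcoef(model_returns, rowvar=False)
dist = np.sqrt(0.5 * (1.0 - corr))  # correlation distance: similar models sit close
np.fill_diagonal(dist, 0.0)

# Ward linkage on the condensed distance matrix, cut into 14 clusters.
links = linkage(squareform(dist, checks=False), method="ward")
themes = fcluster(links, t=14, criterion="maxclust")

for theme_id in np.unique(themes):
    print(f"theme {theme_id:2d}: {np.sum(themes == theme_id)} models")
```

With real model returns, each resulting cluster would then be inspected and named by what its members roughly capture (option market sentiment, fast momentum, curve dynamics, and so on).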

Corey Hoffstein  15:12

You somewhat alluded to this in your last answer, but the number of alpha signals that you would incorporate into a program like this has grown by an order of magnitude over the last 10-15 years: call it 10-15 signals maybe 20 years ago to potentially hundreds of signals today. Can you talk a little bit about your process for what it takes to introduce a new alpha signal? And then, conversely, how do you think about sunsetting old alpha signals?

Asif Noor  15:40

Yeah. So I think that's the most important piece of investment philosophy that investors should focus on. They talk about past performance, great; they talk about what made you money the last few years, fine. But ultimately it's a quantitative strategy, a systematic strategy, so your research process is quite important. Investors always ask: what should be a red flag in terms of losing confidence in the programs you manage? And that should really be when you stop introducing new alpha sources or, in fact, stop sunsetting old models, because that means you're not really evaluating your current set of models. Our firm belief is that no set of models is actually good forever. You don't just develop models, release them, allocate risk to them, and then let them be for the next decade and go to the beach or whatever. It's about constant evolution, because if you think about what things are driving markets today versus 10 years ago, it's a different set. Maybe some of the models are overlapping, but there are a lot of different sources. So step one for us is actually a high-level blue-sky session. By high level, I mean researchers and the PMs sitting together and thinking about ways to forecast markets, with some sort of envelope around that in terms of the strategic direction of where we want to go. If it's a macro program that focuses on non-price-based models, that's the agenda, that's sort of the envelope. Or the focus is: we really need to beef up our emerging currency exposures, our emerging currency models. That's a little bit of direction. But essentially we are thinking about ways to forecast currency markets, equity markets, bond markets, commodity markets; not what would have forecast markets over the last three, five, ten, fifteen years, but what we think will drive returns in the foreseeable future. It doesn't need to be a decade out; it can be the next three years. That, I think, Corey, is very important, and actually a good exercise to get everybody involved. We hire quite talented individuals, and one of the key things I want from them is intuition and an understanding of what we're trying to do, and this idea generation process actually gets a lot of that out. So step one: pool a number of ideas that have some economic rationale, some philosophy behind them, and some raw data that we can utilize to build those models. Out of that set of ideas, we then filter down to a smaller set, a high-conviction list of models that may deserve a research allocation, a resource allocation. Every researcher then gets allocated a project; the timeline is usually two to four months to take that project from start to finish. During that time period, we have meetings on a weekly or fortnightly basis, and each researcher is encouraged to present the findings of their empirical investigation, and the other researchers' job is to actually try to find weaknesses in that research. So if you think about it, you have ten researchers investigating about 30 to 40 ideas a year, and our hit rate is not very high: it's about eight to 12% a year. So you're roughly introducing three to five models a year. And, given how many models we have, and given how markets evolve and change, we also have a lot of models that are on the watch list or under review.

So we'll end up retiring one to three models a year. But the process of retiring is a lot harder than the process of introducing new models. With new models, you're constantly improving the process, because the tools and techniques you have to do the research constantly get upgraded. So we have an extended process, a feedback loop of researchers defending and presenting their findings, with ways of bootstrapping and robustness checks. And then the final stage is to present that new research, that new model, and its allocation to the Investment Committee, which then goes through its oversight responsibility to make sure they're comfortable with the model that's being introduced.

Corey Hoffstein  19:38

I always think that this is where the real alpha in the conversation is. People always want to talk about the signals, but I think it's the process element, when you talk to quants, that is hugely valuable when it comes to actually running a quant operation. One of the things that immediately comes to mind when you talk about only introducing, call it, eight to 12% of new research into the portfolio in any given year is that you're going to have a pretty substantial research graveyard. Three or four years from now, you might be considering a new alpha research project that actually has relevance to something that was researched four or five years ago. How do you think about maintaining that archive so that it's useful to future researchers?

Asif Noor  20:20

Yeah, that’s a difficult one. And, you know, we have some researchers that have been here many, many years. And, you know, they’re the deep pools of knowledge of those graveyards. You know, a lot of times, somebody in the Investment Committee will say, ask XYZ person, because I’m sure they’ve done this five, six years ago, you know, Courier, even that graveyard has a half life to it, right? Because something that you investigated five, six years ago, probably deserves an investigation now, because, as I said, the quality of data may have changed, the investment horizon may have changed. And the way it was investigated by may have changed as well. I mean, we do a lot more machine learning than we did now than we did five years ago. So actually, you don’t really need a very extensive historical log of ideas you investigated, and you do need some sort of memory and you know, you can we have a database of things that we keep that have been investigated. It’s unlikely that something that has been categorized and fair research of the last three, four years, we’ll come back and work, because there will be quite fundamental issues with that research. As an example, you know, we look for consistency across the different markets that we forecast. So if you’re forecasting 20, equity markets, and this particular model only forecasts in some statistical significance, only 10% of this market. Well, that’s a little bit odd, right? It’s a little bit difficult to overcome. Of course, you can think about exposed why those two markets work. But that’s not how we do research. You know, if you think that off the 20 equity markets, we forecasting only two will have alpha, because of using this raw data set, you should have written that in the economic in the investment and idea generation process, you should have just said, actually, this will only work for NASDAQ and s&p because the US centric data, and there’s a lot more better quality data in terms of the data quality as an example. So yeah, going back to your original question, I don’t mind reopening a research project for an idea that we had more than five, six years ago, because that might still have some opportunity there. As long as ideas still relevant.

Corey Hoffstein  22:25

When it comes to alpha forecasts, my experience is that one of the things that's crucially important is the certainty around the forecast: you can have a very high conviction signal and you can have a very low conviction signal. How does that uncertainty in a forecast factor into the portfolio construction process for you?

Asif Noor  22:48

Yes, and this has changed as well, Corey. As I mentioned, when we first started building systematic strategies, pre-financial-crisis, we had the firm belief that the size of the forecast, the size of the score as we call it, is related to the conviction. Higher conviction, higher signal strength, would mean more alpha opportunity. So you can actually size your risk based on that: if you have a very small score, you allocate less risk, and when you have a higher, stronger signal, you take higher risk. That's one way of doing things. The other way is the direction-versus-magnitude argument. You can say: well, 70 to 90% of my alpha comes from getting the direction right, and the rest of it is magnitude. So I don't really mind if my signal strength is really small; I'm still going to blow that up to full size. In some cases that becomes a bit silly, right? Because if you have a very small magnitude of a score and you're still blowing that up, it doesn't make much sense. So the way we do this is on a case-by-case basis; it depends on each of the different models we look at. We evaluate them based on directionality versus magnitude, and we also look at signal strength versus alpha opportunity. In most cases, actually, direction has more alpha opportunity than magnitude. But it really depends on whether you're building a divergent strategy or a convergent strategy, and also on the size of the investment pool. If you're forecasting 20 equity markets, it's unlikely that all of the scores are going to shrink toward zero; whereas if you're forecasting only two markets, magnitude doesn't really matter, and you might run them at constant risk, always plus one or minus one. So it really is a case-by-case basis.
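(An illustrative aside: a minimal sketch of the two sizing philosophies contrasted above, magnitude-aware sizing versus direction-only sizing that "blows up" even a small score to full conviction. The function names, the cap, and the scaling scheme are assumptions for illustration only.)

```python
# Minimal sketch (illustrative only) of the two sizing philosophies:
# magnitude-aware sizing vs direction-only sizing.
import numpy as np

def size_by_magnitude(score: np.ndarray, cap: float = 2.0) -> np.ndarray:
    """Position proportional to signal strength, capped to avoid extremes."""
    return np.clip(score, -cap, cap) / cap

def size_by_direction(score: np.ndarray) -> np.ndarray:
    """Sign only: even a tiny score gets 'blown up' to full conviction."""
    return np.sign(score)

scores = np.array([0.1, -1.5, 2.8, -0.05])
print(size_by_magnitude(scores))  # [ 0.05  -0.75   1.    -0.025]
print(size_by_direction(scores))  # [ 1.    -1.     1.    -1.  ]
```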

Corey Hoffstein  24:35

In our pre-call, you mentioned that when it comes to allocating towards different alpha models, one of the approaches you lean heavily into is hierarchical risk parity, which I believe was originally proposed by Marcos Lopez de Prado. That's an approach that relies meaningfully on some measure like a covariance matrix; whether it's explicitly a covariance matrix or not, it usually starts, at least in the naive literature, with the covariance matrix. And when you start to talk about hundreds of alphas, the estimation error of that sort of self-similarity across alphas becomes a numerical problem: understanding how to actually do that clustering is an important issue. I'm curious if you could talk a little bit about how you think about tackling this problem, making sure that when you go to allocate risk across these different alpha clusters, you're not unintentionally over-allocating risk because you have this estimation issue.

Asif Noor  25:38

Sure. So I’ve come across various bits of literature and also various debates internally about how to solve basically, what we’re talking about is estimation errors within the covariance, it’s not just within HRP, it’s across a lot of things in terms of risk targeting that we do. And this is our statistical techniques, Robusta phi, your covariance estimates. And then you can do that, in some cases, we do apply those techniques. But ultimately, what you’re referring to is, your algo is telling you to allocate risk in a certain way. And actually, for whatever reason, the discretionary trader, and you’re saying, I’m not quite comfortable with that, right. So you overlay some biases, and you can make yourself feel comfortable that those biases are justified, because you can shove them in sort of risk management. But ultimately, that’s what they are using, well, the covariance isn’t quite accurate. And actually, it doesn’t really know everything I know. So I’m not comfortable with that allegation. And the way I solved that problem, maybe it’s a good thing that is a bad thing depends on who you are. But it’s a set of constraints around those mins. And Max bounds ranges around what I’m comfortable allocating risk, because ultimately, I’m the portfolio manager, the risk is on my shoulders to manage that portfolio. And if I think that the algo will come up with some corner solutions on allocations that I’m not comfortable with, I need the authority to override that. So putting envelopes around allocations of risk to individual models, cluster of models, types of strategies is how we are looking to solve it. That’s one element of it. The second element of it, I think, is using a reasonably long look back in your covariance estimates to smooth out a lot of that data. And the third element could be to make the change in allocations a bit sticky. So once you are actually, you know, let’s say you run the process in a quarterly basis, you have your ERP rates, based on your limits that you’re comfortable with, you only let the next quarter that change be a certain percentage from that from you know, sort of have a starting point. So over a period of X quarters, you move away from that, you know, gradually, but you minimize the risk of whipsawing between the allegations up and down because certain change in covariance estimation,

Corey Hoffstein  27:48

When you pull alpha signals from different sources, one of the problems you run into is that those alpha signals can have different units or very different magnitudes. For example, if we were just looking at naive momentum scores, a longer-term momentum score likely has a much bigger magnitude than a short-term momentum score. Or if you're looking at momentum scores versus, say, value or carry scores, again, they're going to be measured in entirely different units. Given that you're trying to combine all these alpha scores across a variety of different sources, how do you think about normalizing them so that they are cross-comparable, without necessarily losing valuable information?

Asif Noor  28:34

Yeah, so this is a challenge when you move away from building pure RV strategies, pure RV portfolios, to portfolios that have some directionality. Especially when you look at models that have directionality based on momentum, for example, which is the point you bring up, because obviously you're thinking: the stronger the momentum signal, the more conviction in that trend continuing. So if you normalize and standardize to your risk target every time, you're squashing some of that conviction, right? It goes back to the conviction point. For the RV portfolio, we can run constant risk, and it's less of a challenge there; it's not so much of an issue in terms of normalizing or standardizing. For the momentum, or the trend, or the directional bit of the portfolio, we do allow a little bit of flexibility in terms of risk targeting. It's not fully uncapped risk targeting, but there are various ways of modulating the risk target around your benchmark to make sure you're not completely giving away that information. Let's say you want to target 10% annualized risk in your portfolio. It really comes down to the maximum risk you're comfortable taking: it's not the floor that matters, it's really the cap. So if you're comfortable taking 15%, you put a 50% band around your target, say, and allow your directional models, the ones that do have that tendency to outperform based on signal strength, to deliver higher-than-average risk in certain environments. In some of the research we've been doing more recently, even on the macro side, we've seen a lot more of that sort of conviction filter, a non-normalized distribution of risk, in a way, for the models. But I haven't come across any super-sophisticated way of doing it; just targeting constant volatility will squash the conviction embedded in the forecast.
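(An illustrative aside: a minimal sketch of the banded risk targeting described above, a 10% target with a 50% band, so positions are only forcibly scaled down when portfolio volatility breaches the 15% cap. The numbers follow the example in the conversation; the function itself is an assumption for illustration.)

```python
# Minimal sketch (assumed function, numbers from the conversation): a 10%
# target with a 50% band; scaling is only forced at the 15% cap, and
# conviction-driven risk inside the band is left alone.
import numpy as np

TARGET_VOL = 0.10
CAP_VOL = TARGET_VOL * 1.5  # 50% band above the target

def scale_positions(raw_positions: np.ndarray, portfolio_vol: float) -> np.ndarray:
    if portfolio_vol > CAP_VOL:
        return raw_positions * (CAP_VOL / portfolio_vol)  # enforce the cap
    return raw_positions  # inside the band: keep conviction-driven sizing
```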

Corey Hoffstein  30:22

Can you talk a little bit about your approach to combining alpha signals? For example, are you building unique portfolios for each alpha signal and blending those portfolios? Are you combining the signals into a single forecast for an asset class and then running some sort of optimization? Or is it something entirely different?

Asif Noor  30:41

Yeah. I mean, we do a little bit of a hybrid of the two things that you mentioned, and in an unconstrained framework those two are similar things. Think of each of our 100 models as different fund managers: each of them has a view on the set of markets that they follow and forecast, and they give that view in whatever format. One may have rankings, another may have portfolio weights, a third may have buy or sell signals. Our job on the multi-strat side is to take those alpha views, those forecast scores, and normalize them and put them in a framework where they can be compared cross-sectionally. That's the first step. Each of those 100 models is treated as a long/short portfolio, and it can be directional or non-directional, at a standardized risk target, or in some cases with a range of volatility around the target. So step one is to get a handle on each of the models so they can be cross-sectionally compared and hence combined into a portfolio. And because we are, at that point, in an unconstrained framework, we can actually reverse-optimize those views and create forecasts. The next step is to combine those 100 portfolios into one global portfolio, which then gets turned into implied forecasts and put through an optimizer, mainly because we don't live in a world that's unconstrained. We have various levels of risk limits and constraints in place, which is why we use an optimizer. So: combine those 100 portfolios, make sure we're able to allocate risk to them in a way that's comparable, and then feed them into an optimizer to apply the limits.
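(An illustrative aside: a minimal sketch of the pipeline described above. Each model's book is standardized to a common ex-ante risk, summed into one unconstrained portfolio, implied forecasts are backed out via reverse optimization (alpha = lambda * Sigma * w), and the result is re-solved under position limits. The risk-aversion parameter and the bounds are assumptions for illustration.)

```python
# Minimal sketch (lambda and bounds are assumptions): standardize each
# model book to a common ex-ante risk, sum, back out implied forecasts
# (reverse optimization), then re-solve under position limits.
import numpy as np
from scipy.optimize import minimize

def combine(model_books, sigma, risk_target=0.01):
    """Scale each model's positions to the same ex-ante vol, then sum."""
    return sum(w * (risk_target / np.sqrt(w @ sigma @ w)) for w in model_books)

def implied_alpha(w, sigma, risk_aversion=2.0):
    """Reverse optimization: forecasts that make w mean-variance optimal."""
    return risk_aversion * sigma @ w

def reoptimize(alpha, sigma, bound=0.10, risk_aversion=2.0):
    """Re-solve mean-variance with per-market position limits applied."""
    objective = lambda w: -(alpha @ w) + 0.5 * risk_aversion * (w @ sigma @ w)
    res = minimize(objective, np.zeros(len(alpha)),
                   bounds=[(-bound, bound)] * len(alpha), method="SLSQP")
    return res.x

# Toy example: two "fund manager" books across three markets.
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
books = [np.array([1.0, -1.0, 0.0]), np.array([0.0, 0.5, -0.5])]
w = combine(books, sigma)
print(reoptimize(implied_alpha(w, sigma), sigma))
```

With no binding constraints, the re-optimized portfolio recovers the combined book, which is the sense in which blending portfolios and blending forecasts are "similar things" in an unconstrained framework.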

Corey Hoffstein  32:28

How do you think about unifying alpha forecasts that are over different time horizons?

Asif Noor  32:34

So that is the challenging part of my job as a multi-strat portfolio manager, though given how Aspect is structured, I think it's a little bit easier than it would be elsewhere. We believe in collaboration across the different investment teams: we have various research teams, including the multi-strat team, a macro team, a trend-following team and a risk premia team, and collaboration is quite important in my role. For example, if we're getting an alpha forecast or model from the macro team, beyond just giving me their positions for the multi-strat portfolio, they have to give me a lot of other information that helps me with the portfolio construction process: the alpha decay properties, basically how they want to roll that model, and whatever trading algo they'd like to use for it. That becomes quite integral to how we construct the portfolio. Over a 24-hour timeline, we optimize the multi-strat portfolio 12 to 15 times a day, mainly because we get alpha forecasts over different horizons. We may have a macro model with a medium-term horizon that doesn't really mind being executed over a VWAP over several hours, or a TWAP, any of these simple slicer algos. Whereas we have short-term opportunities that come in at random times of the day and need to execute within a certain timeframe, otherwise the alpha is no longer available. So the key for us is information sharing: build that into the portfolio construction process, and then pass that information to the dealing desk, which is then able to execute those trades based on where each trade has come from. The challenge comes when you have a trade that's on the same side but has come from two different sources, two different alpha forecasts with conflicting horizons. You may have a trade to buy 1,000 lots of S&P 500, where 500 is from a shorter-term, high-alpha-decay signal and 500 is from a slower-alpha-decay signal. Does that mean we should do all 1,000 at the same time, because the shorter-term signal is telling you there's alpha there, so let's get it all done? We don't take that view. We actually do the 500 that comes from the shorter-term signal separately from the 500 for the longer term, because ultimately, somewhere in the research process, we've baked in market impact and slippage. We know that it's okay to implement this 500 without incurring the market impact and slippage which may destroy the alpha if you go beyond those 500 lots. That's only possible in the framework that we have, where we share information and it's one umbrella building these portfolios.
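(An illustrative aside: a minimal sketch of keeping same-side orders from different alpha horizons separate, each slice carrying its own execution instruction rather than being netted into one parent order, since market impact was budgeted per model in research. The data structure and algo names are hypothetical.)

```python
# Minimal sketch (hypothetical names): same-side orders from different alpha
# horizons stay separate, each carrying its own execution instruction.
from dataclasses import dataclass

@dataclass
class OrderSlice:
    market: str
    lots: int
    source_horizon: str  # e.g. "intraday" or "medium_term"
    exec_algo: str       # e.g. "IMMEDIATE" or "VWAP_4H"

def route(slices):
    """Hand each slice to the desk with its own urgency; never net them."""
    for s in slices:
        print(f"{s.market}: {s.lots} lots via {s.exec_algo} ({s.source_horizon})")

route([
    OrderSlice("S&P 500", 500, "intraday", "IMMEDIATE"),   # fast alpha: do now
    OrderSlice("S&P 500", 500, "medium_term", "VWAP_4H"),  # slow alpha: slice
])
```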

Corey Hoffstein  35:14

Somewhat on the same theme of thinking about these signals over different horizons is the conditional interaction between these signals. As an example, when you look at long-term forecasts, very rarely is it the case, at least in my experience, that the alpha is actually realized continuously over that period. Let's say you have a one-month forecast: almost never do you get all the alpha in equal increments every day. Rather, what often happens is that it's realized in a much shorter, unpredictable window, which in theory can make it a lot more sensitive to what the short-term alpha signals happen to be saying at the exact time the long-term alpha is realized. I was wondering if you could talk a little bit about how you think about the conditional interaction between these different timeframes?

Asif Noor  36:03

I’m not sure I agree with the point that actually because of longer term alphas, we don’t know when that longer term alpha will be realized, if it’s something that over a three month horizon is one three, that we realized that month, one of the shorter term signals are telling you that there’s opportunity there, we should then make sure we execute that longer term alpha, I think it goes back to what that alpha opportunity is what the data set has been used to build that model, which doesn’t, I don’t think that it’s a hard one, really, because what you’re saying is that the short term alpha has more information than the longer term alpha, because it’s telling you that actually, it’s not a three month horizon opportunity, it’s a one month horizon opportunity. And the shape of that return opportunity is also different, it may have little return profile return opportunity over the first month, but the last two, three months, it may have the most of the opportunity. So we take the view of sticking strictly with implementing the opportunity, based on what the underlying investment team informs us, and actually don’t speed up or slow down the execution based on on the other signals.

Corey Hoffstein  37:10

When you talk about having hundreds of alpha signals, a question of marginal benefit comes to mind. So I'm going to propose a hypothetical to you. You have a choice: your research team can find a totally new, independent alpha signal, but it's low accuracy, so you have low confidence in it; or they can slightly enhance an existing alpha source, and that enhancement is high confidence. Which would you pick, and why?

Asif Noor  37:43

I would pick the enhancement of the existing strategy. When you say the new model has low accuracy: I already don't believe in backtests that much, right? All backtests look really good, a 45-degree line from bottom left to top right, and you need to discount a backtest by a large factor anyway. So if your starting premise is, "I'm not quite convinced about this thing that has never traded outside simulation," well, that gives me zero confidence in that signal, really. Whereas the one you're talking about enhancing is already in your process: it's delivered some returns, it has a live history, an out-of-sample period, which is much more crucial for me. And now you're saying you can improve it even further. So, probability-wise, I would think I have a better chance of that model that's already live delivering returns in the future.

Corey Hoffstein  38:39

With this next question, you've already touched on it a bit, but I don't want to make any presumptions; I want to draw it out. You've mentioned the constraints you put within your allocation process, and we're going to touch on that here: how do you think about balancing conviction versus risk management? For example, let's say there's a disproportionate number of, and I want to say "independent" in air quotes here, independent alpha signals that all converge on the same trade, whether directional or non-directional. Do you think that's a time to lean into conviction, when all of these theoretically independent signals are saying, "hey, you should execute this very similar trade"? Or is that something that should be tapered for risk-control reasons?

Asif Noor  39:29

Yeah, basically, that worries me a little bit. If all of my signals are telling me that this is the right trade, then as a portfolio manager, as somebody who manages risk, I'm more worried about the downside, right? I'm less excited about the fact that things are all lining up and telling me to go long S&P, short FTSE; I'm more worried about what happens if that trade is loss-making, now that I've added a lot more risk. So I would really lean more towards risk management. Because, think about it philosophically: we've built these hundreds of models because they're supposed to have low correlation between them. If, for some reason, that correlation structure has broken down and they're now highly correlated, then you have a bit of concentration risk going on, and the downside is not just costly but asymmetric. If this were 15 years ago, I would probably have a different answer. But I've been doing this a long time, and you learn a lot from drawdowns and left-tail risk.

Corey Hoffstein  40:23

I know that everyone always wants to talk about alphas; that's sort of the sexy part of the conversation. But can you talk about maybe some of the most difficult challenges you face outside of generating unique alpha signals?

Asif Noor  40:36

The key to generating new signals is bringing in new talent, new ways of thinking, avoiding groupthink. For me, to keep ahead of the competition and on top of all the technological advances, it's about having a motivated, innovative research team that continues to build good return opportunities, good model sets. So identifying new talent, bringing them onto the team, and evolving: I think that's probably the harder part. It goes hand in hand, possibly, with generating unique alpha signals, but making sure that we have a robust process and a talented, motivated team is a challenge. It's a challenge I enjoy, but I think it's important.

Corey Hoffstein  41:25

I suspect another area, if I kept pressing you for more answers, might be the operational side, particularly with short-term signals, and my guess is that's an area that could be a podcast unto itself: the difficulty of implementing that sort of stuff. Can you talk to maybe some of the challenges of implementing short-term alphas versus longer-term alphas, and the rising importance of estimating execution costs and those sorts of concepts?

Asif Noor  41:50

Yeah, I mean, we’re quite fortunate at aspect, because you know, we have a dedicated execution research team. And you know, their job is to continuously make sure that we are honest, in terms of a backtest. So part of the process is to actually kick the simulation of especially our shorter term efforts to the execution research team and make sure they kick the tires. And we’ve been investing quite a large amount last two, three years on technology and databases to make sure we have good timestamp data and ways of processing that information. So it’s not just about having the ideas, but it’s actually having the infrastructure to be able to trade those ideas. The challenge is very often assuming a certain execution window, or certain time stamped raw data to trade those anomalies. And actually, in reality, you don’t have that you have some event that’s happened, that’s delayed your execution, or the actual quoted data, this is different to the word data that you downloaded. So data is a big challenge in shorter term. And the accuracy of that is a big challenge here, there’ll be a building a lot more on that side to circumvent that challenge. And actually before, especially on the shorter term. First, before we start allocating external risks, external capital to those models, we go through an extended period of incubation, to make sure that we are actually the data that we’re using to build those models in live trading is exactly what we used in back test and simulation. And you wouldn’t believe but the processing time is also a challenge, you know, because you’re looking at Mass loads of information. So any delay in processing can cost you a lot of alpha. So a lot of those operational things become very, very important. But there are ways around it. And it goes again, back to your actual idea, and the rationale behind what you’re trying to capture. We’re not in the in the, in the game of actually capturing very, very high frequency technical arbitrage opportunities.

Corey Hoffstein  43:39

As someone who has been doing this for 25 years and seen the evolution of the space go from slower-horizon signals, 10-15 core models, to something where you are now looking at hundreds of signals of varying lengths and time horizons, what are you most excited about in the evolution of this space over the next 3-5 years?

Asif Noor  44:06

There are a number of things, Corey. Data, probably, if I had to highlight one thing: the types of datasets that we now have access to is mind-boggling. Ten, fifteen years ago, when I used to do client meetings, I used to give examples of how we build our systematic strategies. I would say: well, one way to forecast economic growth could be business travel, right? If you see how many planes are going across the Atlantic from the UK to the US, maybe business travel is a leading indicator of how equity markets will behave. Back then, we didn't have that data. Now we have that data and more. Macroeconomic forecast data is available; we have shipping data; we have satellite images. There's a whole host of information that allows us to test relationships that we never imagined we could test. And then we have GPT, AI. So I think that makes our job a lot more exciting. We're not just looking at causal relationships in historic inflation data; we can actually use nowcasting to predict what inflation will be, or what growth will be. So that makes the job more exciting, and actually gives us a shot at beating the benchmark consistently. So yeah, you just have to be willing to take a risk and look into the harder-to-uncover, harder-to-find places to find those data sets that can be predictive.

Corey Hoffstein  45:28

Asif, thank you for joining me. This has been fantastic.

Asif Noor  45:31

Thank you so much, Corey. I really appreciate it.