In this episode I chat with Mads Ingwar and Martin Oberhuber, co-founders of Kvasir Technologies, a systematic hedge fund powered by a full-stack application of machine learning.

By full-stack I mean every layer of the process, including data ingestion, signal generation, portfolio construction, and execution, which gives us a lot to talk about.

Our conversation covers topics ranging from the limitations of machine learning and hard lessons learned to how to keep up in a rapidly evolving field and thoughts about managing model risk.

Given the niche knowledge in a field like machine learning, some of my favorite answers came when I asked how they might perform due diligence upon themselves or where they think other adopters of machine learning go wrong. For allocators, I think these answers are priceless.

I hope you enjoy my conversation with Mads and Martin.

Transcript

Corey Hoffstein  00:00

Three, two, one. Hello and welcome, everyone. I'm Corey Hoffstein, and this is Flirting with Models, the podcast that pulls back the curtain to discover the human factor behind the quantitative strategy.

Narrator  00:18

Corey Hoffstein is the co-founder and chief investment officer of Newfound Research. Due to industry regulations, he will not discuss any of Newfound Research's funds on this podcast. All opinions expressed by podcast participants are solely their own opinion and do not reflect the opinion of Newfound Research. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Newfound Research may maintain positions in securities discussed in this podcast. For more information, visit thinknewfound.com.

Corey Hoffstein  00:50

This episode I chat with Mads Ingwar and Martin Oberhuber, co-founders of Kvasir Technologies, a systematic hedge fund powered by a full-stack application of machine learning. By full stack I mean every layer of the process, including data ingestion, signal generation, portfolio construction, and execution, which gives us a lot to talk about. Our conversation covers topics ranging from the limitations of machine learning and hard lessons learned to how to keep up in a rapidly evolving field and thoughts about managing model risk. Given the niche knowledge in a field like machine learning, some of my favorite answers came when I asked how they might perform due diligence upon themselves, and where Mads and Martin think other adopters of machine learning go wrong. For allocators, I think these answers are priceless. I hope you enjoy my conversation with Mads and Martin. Gentlemen, thank you for joining me today. I'm really excited about this episode. As I was telling you before we hit record, machine learning is a topic that a lot of people ask me to record more episodes about. I think it's really one of those fascinating topics, because first of all it's so broad reaching (when you say machine learning, it is such a wide area), but it's also one that's rapidly evolving. The state of the art has changed so much over the last decade, and I know we're going to talk a little bit about that. And I think you guys are going to bring a really fascinating perspective to this, because you started your earliest days actually as consultants, with your backgrounds in computer science. And that is actually where I want to start. Because you're not originally from the finance industry; you sort of worked your way into it, which I think gives you a unique perspective. So could you begin maybe by giving us a little bit of your backgrounds and how you got to where you are today?

Mads Ingwar  02:40

Yeah, sure. We're happy to be here today, Corey. For myself, I have a background as an engineer. I did machine learning research for my graduate studies and then went on to do a PhD at UCL, where my research was based on machine learning for time series prediction and for unstructured data analysis. And UCL at the time was quite a fascinating place, where many of the advances we see today in deep learning, and machine learning in general, were happening at a fast pace. It feels like half of the faculty there went to DeepMind afterwards, and the other half went and started businesses. I fell in the latter category and founded a company based on my research, where we deployed convolutional neural nets for time series forecasting. And through that lens, I got to work with many of the large institutional allocators and asset managers, gained unique insight into how these billion-dollar portfolios were managed, and had a chance to work with everything from alternative datasets to trading and risk models and execution infrastructure on a large scale.

Martin Oberhuber  03:51

Yeah, thanks, Corey, for having us. It's a pleasure to be here. My background is in computational finance. After graduating, I started working for a high-frequency trading firm as a quantitative researcher, mostly focusing on building machine learning models for European equities and futures. And already at the time, I started thinking about using similar techniques to exploit longer-term patterns, and started playing around with a few ideas on how to implement those strategies. I later moved more into data science consulting, where I worked on several different projects in different industries, but all ultimately around machine learning predicting something. And this is also where I met Mads and Mike. Later on, when I joined Goldman Sachs, I was focusing on machine learning applications for the securities division, and I was primarily responsible for optimizing risk and turnover. My journey with trading started soon after I left the HFT shop: I started essentially building out strategies for futures and trading my own money with that, and that developed more and more into a solid framework, which ultimately we're still using, obviously in a more sophisticated way, to this day. But the early start goes back to 2016.

Corey Hoffstein  05:08

And you just mentioned the firm name, which I'm sure anyone who listened to the intro of this podcast knows I've at least butchered. And you told me, before we hit record, the great story behind the firm name. So I would love for you guys to tell that again while we're recording.

Mads Ingwar  05:22

Yeah, for sure, Corey. As you know, when you set out to start a hedge fund, you look at Greek mythology, and you quickly find out that all the good names are already taken. So we had to look a little outside of that. With my background in the Nordics, we obviously looked at Norse mythology and decided on Kvasir. In Norse mythology, Kvasir is a demigod being that knows the answer to all questions and, unlike the Oracle of Delphi, say, is a benevolent creature that travels from village to village and disperses knowledge into the world. We thought that would be fitting. What I forgot from my Norse mythology classes back in primary school is that Kvasir eventually gets killed and butchered by the dwarves for dispersing too much knowledge into the world. So that won't be the case today.

Corey Hoffstein  06:14

I love that story. And talking about a small world, Martin, you and I realized before recording that we actually went to the same graduate school program, just a year apart. So I love when those connections are made. I want to start this conversation maybe with a bit of an open-ended question, but it's one I'm sure you face: the headwinds of skepticism against machine learning. There's just a very healthy amount of skepticism in the industry. It seems like machine learning has been adopted very strongly in other places, like operations research and pipeline management, all these other areas where it has just been accepted, and you see these huge efficiency gains. And yet in the realm of quantitative finance, there does seem to be a large degree of skepticism. How do you rebut that skepticism? Because I have to imagine it's something you come across in talking to allocators and institutions all the time.

Mads Ingwar  07:06

Yeah, I think a lot of the skepticism stems from the view that machine learning and AI are somehow a black box that just churns out predictions. And while it's true that some people apply it that way, and it has applications in that realm, that's not necessarily the universal truth. I think finance has also been slow to adopt the advances we've seen in machine learning over the years. One of the reasons is that people tend to use linear models in finance, which I think is a bit of a crutch. I mean, we've spent the last 50 years in finance, since the seminal Fama-French paper, arguing over which factors exist in the market, but we've spent very little time discussing whether using linear models to capture them is the right approach. And I think that's quite interesting, actually, because one of the few things we can agree upon is that financial time series are nonlinear and non-stationary in nature. But I think it's also worth noting that, despite all the hype, machine learning in finance is not a silver bullet that will function out of the box. It's also not a substitute for clear critical thinking and having a sound economic theory. Modeling financial time series is, in my view, harder than both self-driving cars and facial recognition, and I've worked in both of those areas. The reason is that the signal-to-noise ratio in financial data is extremely low, and then, of course, we work in this non-stationary and even adversarial environment.

Martin Oberhuber  08:42

Yeah, and machine learning techniques generally are certainly often considered black-box techniques. But ultimately, the power that they have allows you to pick up highly nonlinear patterns that are often prevalent in financial markets, which simple techniques are just not able to do. Ultimately, it's the responsibility of the researchers who apply these techniques to apply the right amount of complexity for a given problem, and not use too many degrees of freedom in the modeling approach and end up overfitting. And I think the reason machine learning has such a tricky reputation in the industry is because it's all too easy to overfit. Probably many mistakes have been made by researchers who ended up deploying models and trading a lot of money, and the models didn't really add any value, simply because the researchers thought the results they saw in a backtest were realistic when they were completely overfit. Getting this right isn't easy, and therefore it has probably gathered some negative reputation over the years.

Corey Hoffstein  09:46

Why do you think, at least in my experience, the acceptance that markets are non-stationary and that there's a low signal-to-noise ratio has people conceptually fall back on the idea that simpler models are going to be wrong but robust, whereas more complex models may be able to capture those nuances but are going to be caught completely offsides during a regime change, or might pick up on more noise than signal? You started to touch upon that a little bit, Martin, and I'm going to go off script here, but I want to press on that point, because I do think it is one that is so important, and one that trips up people on their journey into machine learning. How do you think about finding that balance of being able to model the complexities without necessarily ending up with a model that's inherently fragile?

Martin Oberhuber  10:37

Yeah, no, that's a good point. I think it's a continuum, right? It's not black or white, where you're either using traditional simplistic techniques or highly complex machine learning. Machine learning itself is a bit of a loaded word; people don't exactly know what falls into that bucket. But from my perspective, it's easier to think of all of these techniques as some form of statistical learning. Those techniques can start out very simple, with a linear regression, perhaps with a regularized linear regression, which adds a little bit more value, especially for financial data, and then go all the way to ensembles of decision trees and deep learning. It's a continuum, and the researcher's task is essentially to pick the right amount of complexity for a given problem. And because it's a continuum, there isn't really a huge risk that one has to make a binary decision between a very simple and a very complex model. Really finding that sweet spot is what using machine learning in finance is all about: walking the fine line between extracting as much signal as possible while not overfitting. And that line is very fine. It requires quite a bit of experience to approach it properly and to know where that line is, or how one can measure where it is. Given those difficulties and the relatively high barrier to entry, that's probably part of the reason why it has gathered a little bit of a negative reputation over time.
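
To make that continuum concrete, here is a minimal sketch using scikit-learn on synthetic noisy data. The models, parameters, and data are illustrative assumptions for this write-up, not Kvasir's actual pipeline.

```python
# A sketch of the complexity continuum: linear -> regularized -> boosted trees,
# scored with walk-forward (time-series) cross-validation. Illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(42)
n, k = 2000, 10
X = rng.standard_normal((n, k))
# Weak nonlinear signal buried in noise, mimicking a low signal-to-noise ratio.
y = 0.05 * X[:, 0] + 0.05 * np.tanh(X[:, 1] * X[:, 2]) + rng.standard_normal(n)

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=10.0),  # regularization tames estimation variance
    "boosted trees": GradientBoostingRegressor(
        n_estimators=100, max_depth=2, learning_rate=0.05),
}
cv = TimeSeriesSplit(n_splits=5)  # walk-forward splits: never train on the future
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
    print(f"{name:>14}: mean out-of-sample R^2 = {score:.4f}")
```

On data this noisy, the extra complexity only pays if the nonlinearity is real; on pure noise, the more flexible models simply overfit, which is the fine line described above.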

Corey Hoffstein  12:03

I want to start getting into the weeds a little bit, and not just talk high-level machine learning but actually talk about application. I was hoping, for context, we could start off with you telling us a bit about the investment strategies themselves that you manage, which I think will help inform the rest of the conversation.

Mads Ingwar  12:18

We manage a long-short equities strategy and a futures strategy, and we trade them in our fund in a multi-strategy approach. We run a fully systematic pipeline that uses machine learning end to end. That means basically everything from data ingestion to trading signal generation to risk and portfolio optimization is handled by machine learning algorithms. We trade equities globally and optimize both the equities and the futures portfolios together. The strategy is market neutral, and we seek to have our returns uncorrelated to the market, and also to your typical factor exposures.

Martin Oberhuber  12:57

Yeah, and when you look at it from the strategy's perspective, we model individual names, and we ultimately trade individual names, but as Mads was saying, we optimize them from a global portfolio perspective. When we model these individual names, we split it up into modeling the latent aspects of how markets behave. Think of it as factors: not necessarily your traditional factors, but underlying latent structures that make markets or individual names move together, like within a sector or the market as a whole. And then, in addition to that, we also model the idiosyncratic drivers for each stock individually. Finding that mix is certainly the way to go, for us at least.
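
One common way to separate those two layers, though not necessarily theirs, is to extract latent factors statistically and treat the remainder as idiosyncratic. A minimal sketch, assuming plain PCA on a synthetic returns panel:

```python
# Separate a latent common driver from idiosyncratic moves via PCA
# (eigendecomposition of the return covariance). Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_names = 500, 50
market = 0.01 * rng.standard_normal(n_days)            # one latent driver
betas = 0.5 + rng.random(n_names)                      # each name's exposure
returns = np.outer(market, betas) + 0.02 * rng.standard_normal((n_days, n_names))

demeaned = returns - returns.mean(axis=0)
cov = demeaned.T @ demeaned / (n_days - 1)
eigvals, eigvecs = np.linalg.eigh(cov)                 # ascending eigenvalues
loadings = eigvecs[:, -1]                              # first principal component
factor_returns = demeaned @ loadings                   # factor time series

# Idiosyncratic residuals: strip each name's exposure to the factor.
exposures = demeaned.T @ factor_returns / (factor_returns @ factor_returns)
idio = demeaned - np.outer(factor_returns, exposures)
print("variance explained by first factor:", round(eigvals[-1] / eigvals.sum(), 3))
```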

Corey Hoffstein  13:38

Mads, you mentioned there's sort of a fully systematic pipeline, everything from data ingestion to signal generation to portfolio execution all the way at the end. Are there different machine learning techniques that are more or less appropriate for different parts of the pipeline? For example, do you find that deep learning might be good for signal generation, whereas it doesn't work for the data gathering that you need?

Mads Ingwar  14:05

For sure. It also relates to how much data is available and what the resolution of that data is, and, for us, also whether to utilize supervised or unsupervised machine learning techniques. In some of the data ingestion and data processing, we use both supervised and unsupervised machine learning models, to either pick up on signals already there or detect anomalies in the data, such as structural breaks that may be occurring. That allows us to ensure we have a dataset that is as clean and stable as we can make it, so that we have the best data foundation for our price predictions and other models. It also means ingesting price data, tick data, and fundamental data from multiple different vendors, ingesting it ourselves and combining it already at the ingestion step, so that we have a dataset we can use throughout the rest of the pipeline. Then price prediction is inherently a supervised problem: you want to learn the future expected returns and the future volatility of both single-name equities and futures, and also baskets or groups of those. But then how we calculate and optimize position sizing, bet sizes, and the portfolio mixes up those techniques.
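
That unsupervised hygiene step can be as simple as flagging suspicious observations for review before anything downstream sees them. A hedged sketch: the features and the use of scikit-learn's IsolationForest are assumptions, not a description of their ingestion stack.

```python
# Flag anomalous prints in a price series (e.g. a bad vendor tick) with an
# unsupervised anomaly detector. Data and thresholds are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(1000)))
prices[500] *= 1.5                      # inject a bad tick

log_ret = np.diff(np.log(prices))
features = np.column_stack([log_ret, np.abs(log_ret)])
flags = IsolationForest(contamination=0.005, random_state=0).fit_predict(features)
print("days flagged for review:", np.where(flags == -1)[0])  # includes ~499-500
```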

Martin Oberhuber  15:27

Yeah, and perhaps to add two concrete examples. As you were saying before, it's definitely crucial to understand which techniques to apply to which datasets. For instance, for the price prediction problem in finance, at least when you trade at lower frequencies like we do, with relatively low turnover, you ultimately don't get a lot of independent samples. We're thinking in terms of daily data, but our holding periods are actually much longer than that. So you can't easily apply deep learning techniques there, because you simply don't have enough data. But then you have problems like using news data to augment the predictive power. There are a lot of news articles out there, and you can even use pre-trained models from the deep learning community and apply them to your problem, in which case you have humongous amounts of data, and deep learning can actually be extremely helpful there. So it's all about finding the right technique for the right problem.
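
As an illustration of that pre-trained route: score news headlines with an off-the-shelf transformer rather than training one from scratch. This sketch assumes the Hugging Face transformers library and its default sentiment model; the headlines and the signed-score convention are made up for illustration.

```python
# Turn headlines into a crude signed sentiment score with a pretrained model.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default pretrained model
headlines = [
    "Company X beats earnings expectations, raises guidance",
    "Regulators open probe into Company X accounting",
]
for h in headlines:
    result = sentiment(h)[0]
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    print(f"{signed:+.2f}  {h}")
```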

Corey Hoffstein  16:24

I picked up on a phrase you mentioned, and I don't even know if you meant to mention it, but you said signal generation is inherently a supervised problem. I was wondering, with the acceleration and changes in machine learning over time, do you think there are certain problems that will always be inherently supervised versus unsupervised? Or do you think there could be times in the future where even something like signal generation could become an unsupervised approach?

Mads Ingwar  16:51

I think that definitely could happen. And it goes back to what your outlook is as well. In the models we train and deploy, we seek to model some sort of underlying cause and effect and base our work on economic principles, instead of taking the more black-box route: we actually know the ingredients that go into our model. So we're very much for the scientific method, where we start with a theory we know is sound and then use machine learning models to test it and apply it in the best way possible. And certainly, machine learning can be applied to pick up on unknown structures in the market. The problem with that is that you won't necessarily know whether those will hold out of sample, and given that we can't generate more financial time series data, it can be very difficult to validate whether they will hold. I think a good example is from the actually pretty good book about Jim Simons and the Medallion fund, where in the midst of the dot-com bust they were losing a significant amount of money, and it was consistent. What it turned out to be was that one of the black-box signals they were trading was a model that had picked up on the fact that the NASDAQ was going up, and as the NASDAQ was going up, it was buying more. But when things were crashing all around them, the model was losing a lot of money, and they didn't know why. So suddenly you're faced with having to pick apart a black-box model in the midst of a crisis. There's also a spectrum of how nonlinear you want to go, and we prefer to stay on the side where we know the ingredients we've put into the machine learning model are sound, and then have the model improve upon the way these things are combined and exploit the nonlinear relationships that exist in the data. A good analogy here would be the technique that DeepMind used in their AlphaGo and AlphaZero algorithms, the machine learning algorithms that beat the best players in the game of Go. What they started out with was a model that ingested all the best players' games, going back thousands of years, looking at each little strategy or tactic that these players, at the highest level of Go, were deploying against each other in game after game after game. That model was able to look at all of these games, pick out all of these strategies, and then replay them in a way that actually allowed the model to beat some of the best Go players out there. The next iteration, of course, was to allow the model to start combining these learned patterns in ways that were completely non-intuitive to human players. Many of the moves the algorithm made were seen as ludicrous but ended up being quite phenomenal, and that model beat some of the world champions of Go quite consistently. The next evolution of that, and where I think we may be going in finance, though it will take a little while, is that they then took that model and had it play itself and other models over and over and over, generating thousands upon thousands of games, each one slightly refining those strategies and coming up with completely new ones. That's the model that's currently unbeatable. We do a similar thing in finance; it's just a little more difficult for us to generate more data, since we have to wait for the closing bell every day. But our models do get better every day, and we're able to pick up on more patterns in the market and validate them as we see new data.

Martin Oberhuber  20:40

To expand on that, Corey: you asked supervised versus unsupervised, but with Mads's example, the distinction that's probably more concrete in finance is supervised versus reinforcement learning. The DeepMind example Mads just mentioned is really an example of reinforcement learning, where you interact with a system and the system gives you feedback based on your interactions with it. Many problems, the way they have been traditionally approached in finance, and still are, use a supervised approach for an environment that is actually ideally suited for a reinforcement-learning-type approach. For instance, when we try to predict future asset prices, it's a different problem than, say, another supervised problem such as predicting the weather. The weather doesn't care whether you predict it correctly or take actions based on it. But once you take actions on your asset price predictions, you will either remove the inefficiency you found, or other players will start reacting to your trades, and very soon you have generated feedback that your supervised model was not suited to treat properly. The issue is that applying reinforcement learning techniques, where you actually learn while you interact with the system, is very tricky for firms like ours, with our longer-term strategies, because of the collection of sample points we would need to really come up with a robust strategy. So reinforcement learning techniques are probably more appropriate for players in the high-frequency space than for us. But as Mads was saying, as we are trading, we are obviously collecting those samples of how the market reacts, especially as it relates to market impact, and we feed that back into our portfolio optimization.
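
To illustrate the feedback loop that makes this reinforcement learning rather than supervised learning, here is a toy tabular Q-learning sketch in which the agent's own trades move the price it is trying to exploit. The states, rewards, and impact coefficient are invented; as Martin notes, RL at their horizon is impractical, so treat this as conceptual only.

```python
# Toy Q-learning on a mean-reverting mispricing where trading has impact:
# buying pushes the price up, eroding the very signal being exploited.
import numpy as np

rng = np.random.default_rng(7)
q = np.zeros((3, 3))                    # 3 states x 3 actions (sell/hold/buy)
alpha, gamma, eps = 0.1, 0.9, 0.1
price = 0.0                             # price minus fair value

def state(p: float) -> int:
    # discretize the mispricing: 0 = cheap, 1 = fair, 2 = rich
    return 0 if p < -0.5 else (2 if p > 0.5 else 1)

s = state(price)
for _ in range(50_000):
    a = int(rng.integers(3)) if rng.random() < eps else int(q[s].argmax())
    trade = a - 1                       # -1 sell, 0 hold, +1 buy
    reward = trade * (-price)           # profits if the mispricing reverts
    price += 0.2 * trade                # feedback: our own trade moves the price
    price = 0.9 * price + 0.1 * rng.standard_normal()  # mean reversion + noise
    s2 = state(price)
    q[s, a] += alpha * (reward + gamma * q[s2].max() - q[s, a])
    s = s2

print(np.round(q, 2))  # learned policy: buy when cheap, sell when rich
```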

Corey Hoffstein  22:22

Reinforcement learning is a really interesting case, where in a game like Go there's a very obvious my-state versus your-state, and the feedback is very direct: I make an action, you make an action, and there's that pattern. To your point, when you're making longer-term portfolio changes that might be incremental over time, you might be able to measure what occurs in the market instantaneously, what sort of fills you get in the execution, and that might play out in that part of the pipeline. But I would imagine it's very hard to disentangle, for a position that might last months, what impact it's ultimately having in the market and how the market is reacting to it. But that does lead me to the next question I want to get to, which is a little open-ended, but it is about the ongoing research process for you and the team. Because you do have this full-stack machine learning approach, everything from data ingestion to signal generation to execution and portfolio optimization: how do you think about where new ideas come from, how that ongoing research takes place, and how to prioritize those projects?

Mads Ingwar  23:29

That's a really good question. I think we start with, basically, a question to answer. And then, following that scientific method, that means having a hypothesis about what we think works in the market. That can be fundamental drivers, technical drivers, other things we want to look at, or datasets we want to explore. We don't believe there is a single golden strategy out there that will work forever, especially since, as you say, markets and economies evolve. So the key is having a flexible setup where we are able to take these ideas and look at integrating them into our existing pipeline and existing models. We use an ensemble of models, and that allows us to plug in and expand upon that structure and see whether adding a new dataset or a new feature to the setup will give us higher risk-adjusted returns or improved Sharpe ratios. And what we use there is something we call a feature framework, which allows us to take that data and specify it at a higher level. Then we actually have a set of underlying models that will go and do the actual specification of it: make sure the hyperparameters are well specified, the model is stable, and we don't have model risk in one area or the other. That will then feed into the entire pipeline and continuously improve our ensemble of models, giving hopefully better returns out of sample.

Martin Oberhuber  25:04

Yeah, and we have a huge pipeline of projects. So, to your question regarding how we pick or prioritize these projects: it's a difficult problem, ultimately. We have a big backlog of ideas, some more fleshed out than others. Prioritizing them is, on its own, almost an optimization problem. We have a limited amount of resources, and we need to figure out how to best allocate them to really collect the best short-, mid-, and long-term ROI, because every project may lead to follow-on projects, or may be more focused on building out infrastructure versus going down a specific route of, say, just testing out a new dataset. So we have to think of it across these different time horizons as well. Ultimately, it's a constant investigation and brainstorming by the whole team, trying to figure out what makes sense to approach next. And we are very much in the stage of using an 80/20 approach, where we're not trying to squeeze the last bit out of an idea, but rather set something up; if it works, we refine it, but otherwise we move on to the next thing, to ensure we're not going down a rabbit hole.

Corey Hoffstein  26:09

Chris Meredith, who was on my podcast last year from O'Shaughnessy Asset Management, has a framework that I love when it comes to project management. All the projects his research team works on get proposed in the framework of Sharpe ratio. So it's a question of: does this project enhance returns, does it reduce costs, or does it reduce risk, and how much is that going to change the sort of overall project Sharpe ratio? I thought it was a really fascinating way to look at it. But that's neither here nor there.

Mads Ingwar  26:36

He really should have used a citation there, shouldn't he?

Corey Hoffstein  26:39

That's right. He should have, he should have. So, given that you guys have this sort of models-all-the-way-down framework, how do you think about managing model risk? Because it seems like you could have something misspecified at the earliest stage, and given that these are highly nonlinear models, your errors can propagate in a very nonlinear way throughout the entire pipeline. How do you think about either catching those model errors, or recognizing that model risk is inherent and you need to design for failure?

Mads Ingwar  27:11

Yeah, that's also a good question. I think we draw a little bit on our background here, where collectively, as a team, we've worked on productionizing machine learning and deep learning models for some of the largest Fortune 500 companies out there. So there's a good chance that you, and maybe the listeners, have already interacted with a model we deployed for some of the big banks or asset managers out there. Being able to deploy deep learning and machine learning models in an environment where there is very, very little room for error, where these are business-critical areas and you need to ensure things will continue to work and improve the overall process, means, for us, doing a lot of stability analysis on the models and making sure that any specification we do as researchers is done with as few assumptions as possible. Anywhere we as researchers would jump in and pick, say, a single parameterization or a single hyperparameter, we'll instead have a model that, in a rigorous way, will go and test all the different combinations and see whether there is convergence, whether things are stable across different specifications. Going back to a lot of the work that you have done on trend following: you don't want a scenario where using a 100-day lookback window is fantastic but using 120 days is absolute chaos. Similarly, in specifying the machine learning models, we want to do away with all of that specification risk that is very inherent in these machine learning models. Then, for actual risk propagating inside the models, what we use is an ensemble of models. The data pipeline, and the models that work at that stage, generally work to improve the data quality and make sure we have a good data foundation. Then the prediction models look at that and make forecasts for each individual stock or futures contract, along with an associated volatility estimate. And we actually have models that then look at those estimates, as well as the conviction the model has in that estimation. That allows us to look at the entire range of potential outcomes instead of relying on single point estimates, which gives us, in the portfolio and in the risk assessment, a much better idea of the inherent risks of individual predictions as well as the opportunities that we have. And that helps us build a more stable portfolio and have more stable returns over time.
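
The lookback-stability point lends itself to a small sketch: scan a trend signal across neighboring lookbacks and check that performance varies smoothly rather than spiking at one magic parameter. The signal, data, and grid below are assumptions for illustration.

```python
# Stability scan of a simple trend signal over a grid of lookback windows.
import numpy as np

rng = np.random.default_rng(3)
prices = np.exp(np.cumsum(0.001 + 0.01 * rng.standard_normal(5000)))
rets = np.diff(prices) / prices[:-1]

def trend_sharpe(lookback: int) -> float:
    # position = sign of the trailing return, applied to the NEXT day's return
    signal = np.sign(prices[lookback:-1] / prices[:-lookback - 1] - 1)
    pnl = signal * rets[lookback:]
    return pnl.mean() / pnl.std() * np.sqrt(252)

for lb in range(60, 181, 20):
    print(f"lookback {lb:3d}d: Sharpe {trend_sharpe(lb):+.2f}")
# A robust specification shows a plateau across neighbors; a lone spike at,
# say, 100d flanked by chaos at 80d and 120d is a red flag.
```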

Martin Oberhuber  30:00

It's also crucial to have a modular approach to doing research. We have a fairly standardized, locked-down backtesting system that researchers aren't touching that often, ensuring that whenever you try out a new idea, you compare it to other simulation results that went through the exact same testing environment. Thereby you ensure that, if you mess something up, at least your backtesting environment tests the model in the exact same way as before, rather than giving researchers the freedom, every single time they come up with an idea, to also tweak the backtester, perhaps in their favor. So it's important that you compare apples to apples. It will still not protect you from overfitting, but at least it will protect you from underlying fundamental issues your ideas may have, because you're going to be tested against a validated backtesting system that doesn't change too often.
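
A minimal sketch of that locked-down harness idea: one shared scoring function every candidate strategy goes through, so results stay comparable. The interface and cost model are assumptions, not their system.

```python
# A standardized backtest harness: researchers pass signals in, never edit this.
import numpy as np

def run_backtest(weights: np.ndarray, returns: np.ndarray,
                 cost_bps: float = 5.0) -> dict:
    """weights[t] is held over returns[t]; turnover is charged a fixed cost."""
    assert weights.shape == returns.shape
    pnl = (weights * returns).sum(axis=1)
    turnover = np.abs(np.diff(weights, axis=0, prepend=weights[:1])).sum(axis=1)
    pnl -= turnover * cost_bps / 1e4
    return {"sharpe": pnl.mean() / pnl.std() * np.sqrt(252),
            "avg_daily_turnover": turnover.mean()}

# Usage: every candidate is scored by the exact same code path.
rng = np.random.default_rng(5)
rets = 0.01 * rng.standard_normal((1000, 20))
w = np.sign(rets)                        # a deliberately naive signal...
w = np.roll(w, 1, axis=0); w[0] = 0      # ...lagged a day to remove lookahead
print(run_backtest(w, rets))
```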

Corey Hoffstein  30:48

I often find that some of the best lessons learned come from realized risks: when a risk materializes that you weren't aware was there, you either learn really quickly or you go out of business. Both of you now have at least a decade of experience in different aspects of machine learning, both outside of financial markets and within them. I was wondering whether you could share any of the hard lessons you've learned about applying machine learning.

Mads Ingwar  31:15

I think the biggest challenge is that overfitting is lurking behind every corner, especially in finance. Even if you think you're looking at out-of-sample data, there's a good chance that you aren't. Even just being aware of how a market has behaved means you may view the model through your own experience, which will then contaminate the research process. For many of us who have gone through, say, '08 and '09, or even just the last couple of months here, those experiences will be imbued in the models, simply from us having had those experiences. So that's definitely one area where you, as a machine learning researcher, have to be very careful. And then, of course, we've seen and tried and done a lot of things that haven't worked, or didn't work out, for a variety of reasons, be that datasets or different models. I think at one point we counted every tree on Borneo with deep learning nets, to see whether that had a big impact on commodity prices. There are all of these things where there may be signal, but maybe the signal doesn't align with the investment horizon we have in the fund and our core strategies, or maybe the signal-to-noise ratio just isn't there, or we think the opportunity may be too transient to capitalize on. I like that the O'Shaughnessy guys have that sort of research graveyard, where you take all these things you've tried that didn't work and put them to bed in a good way. That also holds for what we do. And then, of course, anything we deploy that can minimize complexity is a positive, right? If we can remove or simplify areas of our pipeline or process, that's a huge positive as well.

Martin Oberhuber  33:13

I think the biggest challenge is, as Mads was saying, really around managing overfitting properly, and that is definitely true when it comes to model selection. I've worked within several teams that over and over made similar mistakes, in the sense of: let's say you run hundreds or thousands of experiments, and each time you evaluate a certain performance metric of your model. It could be the Sharpe ratio, but it could be something else entirely; it doesn't really matter. What I've certainly seen is that when you run that many experiments, your performance metric is a random variable, and if you end up blindly picking the best model, statistically it's the same as choosing the maximum of many draws of a random variable. You end up selecting a model that is way out in the tail from a performance-metric perspective, but the expected value of that distribution is much lower; in fact, it could even be negative. So you deploy a model that performs a lot worse in expectation than what you observed in backtesting. Understanding the discount factor you need to apply between the performance you see in model selection and what you should expect in completely out-of-sample, real-life trading is crucial. And therefore, again, it comes back to how important it is to manage overfitting properly.
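
That maximum-of-a-random-variable point is easy to demonstrate: generate many pure-noise strategies, pick the best backtest Sharpe, and compare it with the expectation of any single one. A minimal simulation, with the counts and horizon chosen arbitrarily:

```python
# Selection bias demo: the best of N noise backtests looks great; its true
# expected Sharpe is zero.
import numpy as np

rng = np.random.default_rng(11)
n_strats, n_days = 1000, 1260                            # ~5 years of daily data
daily = 0.01 * rng.standard_normal((n_strats, n_days))   # pure noise returns
sharpes = daily.mean(axis=1) / daily.std(axis=1) * np.sqrt(252)

print(f"best backtest Sharpe of {n_strats} noise strategies: {sharpes.max():.2f}")
print(f"mean Sharpe (what deployment actually buys you):    {sharpes.mean():.2f}")
# The gap between the two is the "discount factor" Martin mentions.
```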

Corey Hoffstein  34:31

It's been my experience that, for those people who survive the quantitative journey long enough, almost everyone comes to view everything as a random variable cloaked in a distribution. And I don't know whether we're all converging on the same thoughts because that's how you survive, or whether we're all going to go extinct at the same time. But I agree. The approach when you first get into quantitative finance is, very naturally, to try to optimize on a point estimate. But when you realize the candidates are all statistically indistinguishable once you're looking at the distribution, it suddenly changes your perspective on the whole idea of model generation.

Martin Oberhuber  35:09

There is just a lot of noise out there, right? And researchers, as they become practitioners, will realize that it's very hard to extract signal, especially in finance.

Corey Hoffstein  35:18

Do you think there are areas, either areas of the markets or types of strategies, that machine learning is better suited to, and areas where machine learning may just never be able to be successful?

Martin Oberhuber  35:33

Yeah, as it relates to finance, I would say one concrete example we've seen over the last couple of years is the rise of deep learning, which has taken some areas by storm: areas where you have a high-dimensional input space, like computer vision, speech recognition, or natural language processing, and arguably also areas where you have a lot of data and a reasonably strong signal-to-noise ratio. For those areas, deep learning has just blown all records out of the water completely. Are these techniques suitable for finance? Maybe yes, occasionally no, but they are much harder to get working for finance, partly, again, because of that issue of having a very low signal-to-noise ratio and, in many cases, not having a lot of independent sample points. Still, these techniques, if applied responsibly, can be useful for finance too. Ultimately, we view machine learning as an extension of our brain: we use it as a tool to essentially help us prove or disprove our hypotheses, using statistical methods based on data. And I would argue that there shouldn't be any area where data is involved, and hypotheses or decision-making are involved, that isn't considering using machine learning. Because ultimately, if you choose the appropriate techniques, they can always be supportive and helpful in trying to help you either prove or disprove your ideas, or optimize them, by optimizing the parameters you give to the model that supports your hypothesis.

Corey Hoffstein  37:07

The techniques that are considered state of the art in the field of machine learning have changed considerably over the last decade. I remember when I first started with machine learning in my undergraduate computer science studies, the state of the art was support vector machines. And now I think there have been something like four iterations past that, to random forests and deep neural networks, all made possible by advancements in both computational technology and the algorithms powering them. How do you think about keeping up with, and balancing, the potential risks and benefits of incorporating these new techniques?

Mads Ingwar  37:47

I think it is an evolving space, and we continually try to be at the forefront of it. Going back to SVMs and the other techniques you mentioned: I had the pleasure, about halfway through my PhD, of being able to take the last three or four years of research and replace all of the traditional machine learning methods, the mixture-of-Gaussians models, Kalman filters, and all of that, with a single neural network using convolutional neural nets. When you experience that, and you see the power it can bring, it gives you a window into how this space can evolve. I started out doing a lot of image and video analysis, where convolutional neural networks are especially applicable; in the early days, that was really one of the core drivers of deep learning research. One of the cool things about convolutional neural nets is that each convolution, or each layer, will pick up on different structures in the images. And because it's visual data we're dealing with, and we are very visual creatures, we can inspect it. Even though the combinations are very nonlinear, and not necessarily intuitive to the human practitioner, we can see what the model is doing and picking up on. One of the things that is quite interesting: if we look at, say, facial recognition, there's the seminal paper by Viola and Jones, and afterwards people spent years in the image analysis field, in the different imaging groups around the world, specifying small features that would detect, say, an eyebrow or a nose, or how to pick out a mouth from an image. We came up with all of these small features and kernels that would pick those up in the images. And what we've seen is that if you, with no prior knowledge, start applying a convolutional neural net, each of the layers will actually detect different features. One will specialize in eyebrows, one in different textures, being able to detect skin or hair, and one will be able to map out different colors. So all of these features that we as humans spent years defining by hand, a neural network is able to pick up on automatically. And that's really one of the key drivers that we want to take into finance: going back to some of the other things we talked about, being able to start with those key economic principles and sound economic theories that we spent years looking at, but using machine learning and deep learning to refine, validate, and apply them in a different domain.

Martin Oberhuber  40:41

And with that, it's hugely important for us to stay up to date. It does make sense, because if you are able to get a bleeding-edge technology working as one of the first players, you're able to reap the benefits very quickly, and at a potentially dramatic scale. So it's absolutely crucial for us to be aware of what's happening with these advancements, especially in deep learning nowadays. But you mentioned risk, and of course there are some risks. One big risk for us is devoting resources to projects that potentially don't lead anywhere, and that's a big risk for us because we're a small team. The other risk, of course, is again deploying a model that is completely overfit because of its dramatic complexity. But hopefully our frameworks allow us to control for that reasonably well.

Corey Hoffstein  41:23

I know you just came back from a conference where you spent a lot of time talking to allocators, explaining your process. I want you to think for a moment: if you were in the allocator's seat, looking to invest money with a fund that was claiming to use machine learning techniques, what would be the questions you would ask to do due diligence?

Mads Ingwar  41:47

That's such a good question, and we have some recent experience. I think it depends on the allocator's level of sophistication as well. Some have a good understanding of quant in general but a limited understanding of machine learning, and others may never have allocated to quantitative strategies at all. So trying to frame where the allocator is in terms of sophistication is obviously important. For us, one of the best questions we've gotten was from a quite sophisticated allocator, who basically said that what they'd like us to explain was how we calculated the covariance matrix. We had prepared this entire presentation and everything, and then we ended up talking about covariance matrices with them for two and a half hours. It should be stated that these were some pretty smart folks, with master's degrees from Stony Brook, so we were really, really getting into the weeds there. But for them, that was really one of the key areas, and if people could articulate it well, then generally that meant a lot of the other things they were doing were also sound. In principle, though, doing due diligence on a machine learning fund is not too different from traditional diligence on a traditional manager. You need to understand the process, how well the team can articulate it, whether what they're doing is also what they say they're doing, and whether they have a sound process for how they go about their research. As a machine learning fund, we have the benefit that the approach that normally sits inside the head of a traditional PM is codified and made more tangible in the way we deploy it in machine learning models. But in essence, the approach to assessing whether we've done that in a sound way remains the same between us and, say, a traditional manager.
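
The episode never reveals how Kvasir actually builds its covariance matrix, but as an example of the kind of detail that question probes: the sample covariance of many names over few days is near-singular and unusable in an optimizer, and shrinkage is one standard fix. A sketch assuming scikit-learn's Ledoit-Wolf estimator on synthetic returns:

```python
# Compare the raw sample covariance with a Ledoit-Wolf shrunk estimate.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(21)
n_days, n_names = 252, 100          # more names than one year of data supports
returns = 0.01 * rng.standard_normal((n_days, n_names))

sample_cov = np.cov(returns, rowvar=False)
lw = LedoitWolf().fit(returns)      # blends sample cov toward a structured target

print("shrinkage intensity:", round(lw.shrinkage_, 3))
print("condition number, sample: %.1e  shrunk: %.1e"
      % (np.linalg.cond(sample_cov), np.linalg.cond(lw.covariance_)))
```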

Corey Hoffstein  43:42

So I have to ask, only because you brought it up: how do you think about building the covariance matrix for two hours? Maybe we'll get back to that question someday. Are there any questions that you get frequently from investors and allocators that perhaps aren't as insightful as they think they are?

Mads Ingwar  44:08

We get a lot of questions along the lines of the interpretability of models. That goes back to this notion of machine learning as a black box, and that spectrum where, on one end, no reasoning can be extracted, and on the other end, sound components have gone into the model but the combination of them may be nonlinear. I think that misconception may be fueled by other areas of machine learning, where you are happy to accept a more black-box approach with better predictions rather than more explainability in the models. For us, it's similar to this: I've done a lot of flying around, not so much anymore with the coronavirus, but when I get on a plane, remember that I'm an engineer originally, so I remember a little bit from physics about how the shape of the wings lifts the plane up and all of that. But for all intents and purposes, a plane just remains a black box to me. I really hope that, for the engineers who built it, it isn't a black box, and that they know, down to the finest detail, how that thing stays up there. And I can appreciate that. So for us, toward many allocators, the key component is really underlining that we build models on sound economic principles, and we then use heavily nonlinear machine learning techniques to get the optimal performance out of that, set the optimal portfolio, and deliver the best returns possible.

Corey Hoffstein  45:50

Are there parts of your process that, in discussions with investors and allocators, you wish you got to spend more time discussing, and that maybe you think are more important than the weight they're given in the conversation?

Mads Ingwar  46:04

I think especially the areas around optimization, both of the process, but also of hyperparameters and the portfolio construction. Going back to why the covariance matrix question was such a good question: it's because it touches a little bit on all of those areas. So, exploring that whole part: if you, as a researcher, have an idea, how can you actually incorporate it into the existing framework? How do you validate whether there's signal in the data you're exploring? How do we validate whether that holds out of sample? And how do we ensure it's not something that detracts from the existing models? The dark side of machine learning, if you will, is that in inexperienced hands these algorithms are incredibly easy to overfit on the data, and that leads to this divergence between the in-sample performance of the model and the out-of-sample performance. So really honing in on that, and asking what the perceived generalization error of the model is, I think, is a key line of questioning that many allocators should probably be poking at when they do their due diligence.

Martin Oberhuber  47:19

I definitely personally prefer to be tested, or poked, on my technical abilities, because that's how we're really able to show that we understand the underlying concepts, as opposed to a higher-level due diligence process where the technical questions are kept to a minimum. So we definitely prefer being grilled on covariance matrix construction for two hours over not having technical conversations at all.

Corey Hoffstein  47:45

Maybe we'll do a follow-up podcast all about the covariance matrix. Talk to me about this: as more and more firms start to adopt machine learning, we're seeing varying degrees of success. Machine learning is an area you guys have spent a lot of time with over the last decade. What do you think other adopters get wrong?

Martin Oberhuber  48:07

I would say you just need a lot of experience to get it right, really. Initially, it's inevitable that you make mistakes when you apply machine learning methods, especially in finance, and even more so when you come from a different field where you've seen all of these very complex models work really well, and then all of a sudden you realize, wow, there's actually not much signal to extract. One big mistake, especially from less experienced practitioners, is to immediately reach for very complex techniques, trying to do the most sophisticated modeling possible while not really keeping track of overfitting. So validation and overfitting are among the things that go wrong all the time in such environments. Then there's hyperparameter optimization: not setting up the experiments properly, scientifically, to really understand what type of parameter space you should be looking at and how to optimize over it. And in general, neglecting certain aspects of a trading pipeline. For instance, people may initially focus solely on alpha prediction, while our experience has been that portfolio optimization is just as important, to control your risk, to maximize diversification, and so forth. And lastly, an aspect that people first starting out with quantitative methods very much neglect is modeling market impact properly. It's crucial to understand the trading costs, not just the fees and so on, but really the market impact, and that is what allows you to exploit opportunities as best as possible. I would say practitioners initially just neglect those aspects and immediately focus on alpha prediction, where that is obviously an important part of the story, but certainly not everything.
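
On the market-impact point, a common stylized cost model (sometimes called the square-root law) makes the intuition concrete: cost grows with the square root of participation, so capacity shrinks faster than naive backtests assume. The coefficient and spread below are illustrative assumptions.

```python
# Stylized square-root market impact model, quoted in basis points.
import numpy as np

def impact_bps(order_shares: float, adv_shares: float, daily_vol: float,
               spread_bps: float = 2.0, k: float = 0.5) -> float:
    """Half-spread plus k * sigma * sqrt(order / ADV)."""
    participation = order_shares / adv_shares
    return spread_bps / 2 + k * daily_vol * 1e4 * np.sqrt(participation)

# Trading 5% of a name's average daily volume at 2% daily volatility:
print(f"{impact_bps(50_000, 1_000_000, 0.02):.1f} bps")   # roughly 23 bps
# Doubling the order raises cost by sqrt(2), not 2: this is why impact-free
# backtests overstate capacity.
```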

Corey Hoffstein  49:49

For firms that have yet to adopt machine learning but really want to: are there any incremental introductions they can make, some low-hanging fruit they can start to build into their process, or is this really a wholesale philosophical change they need to make?

Mads Ingwar  50:07

I think there are many applications of machine learning outside of just the short-term price predictions that seem to get most of the press. Ultimately, you have to take the plunge; otherwise, there will always be a barrier to entry. And for people who are used to deploying, say, classical statistical techniques, machine learning is a different world. Some of the low-hanging fruit could be starting out with simpler linear models or regularized regression models, and maybe moving on to more nonlinear models like gradient-boosted methods, which offer a bit of an easier start. And then I think one of the key areas is always looking at the data you generate yourself as a firm and how you can use it. Maybe that's around understanding how your existing process works: your data, even your market impact or your execution process. That can oftentimes be an area where there are some immediate improvements to be made.
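
A sketch of that "data you generate yourself" idea: model the slippage of your own fills, first with a regularized linear model and then with gradient boosting. The synthetic data, features, and parameters are assumptions for illustration.

```python
# Model execution slippage from (assumed) order features, stepping up the
# complexity ladder described above: ridge first, then boosted trees.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(8)
n = 5000
X = np.column_stack([
    rng.uniform(0, 0.1, n),            # order size as a fraction of ADV
    rng.uniform(0.005, 0.04, n),       # daily volatility of the name
    rng.integers(0, 2, n),             # 1 if traded near the close
])
slippage_bps = (3000 * np.sqrt(X[:, 0]) * X[:, 1]   # sqrt-impact interaction
                + 2 * X[:, 2] + rng.standard_normal(n))

X_tr, X_te, y_tr, y_te = train_test_split(X, slippage_bps, shuffle=False)
for model in (Ridge(alpha=1.0),
              GradientBoostingRegressor(n_estimators=200, max_depth=3)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, "MAE:", round(mean_absolute_error(y_te, pred), 3))
# The boosted model captures the sqrt interaction the linear fit misses: the
# kind of incremental, self-contained win described above.
```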

Martin Oberhuber  51:10

Yeah, and ultimately, hiring the right people who can apply these techniques to finance is absolutely crucial, and so is giving them the resources they need in order to do their job. And perhaps, for a very young team adopting these techniques: maybe don't focus all your energy on just applying off-the-shelf algos. That stuff is fairly easy, and everyone can do it. Really think about how to design an infrastructure that allows you to efficiently test strategies and ideas, focusing on frameworks that standardize certain aspects, like backtesting, to ensure that researchers don't build their own ad hoc research environments and end up unable to really compare the results they generate. Really focusing on infrastructure and frameworks that support research is just as crucial as applying the actual machine learning techniques for alpha generation.

Corey Hoffstein  52:00

Last question for you guys: as this is a rapidly evolving space, what do you think the future of machine learning in finance looks like?

Mads Ingwar  52:10

I think we'll see more investors allocate to, and also adopt, quant methods, and I think the evolution will be fueled by many of the advances we've seen in other areas of machine learning. I think we've reached the point where there's really no excuse for not applying these models in finance. Machine learning allows you to add more and more detail to your existing process and your existing ideas. That also means moving more and more into the very deep networks and the deep learning side of things, where the algorithms developed for other applications, be that reinforcement learning or other deep learning networks from other areas, will continue to find new applications in finance and on time series data. And that will be incredibly interesting to be part of, and to see how it evolves.

Martin Oberhuber  53:06

You may end up in a scenario where machine learning models trade against each other, or invest against each other, and there's no human left in the process. But that may not necessarily be a bad thing, because ultimately, finance exists to optimize the distribution of capital, really. That is an optimization problem, so why not solve it with machines? I don't really see a problem in the investment side becoming more and more owned by machine learning models, with, obviously, human oversight.

Corey Hoffstein  53:33

My real last question here, actually. It's one I've been thinking about since we first got on together, and I know my listeners, sorry, listeners, can't see this, but given this whole quarantine situation, I'm over here looking like Tom Hanks in Castaway, and yet, Mads, you're looking like George Clooney. How is your hair so well maintained right now? That's the only thing I have left to ask.

Mads Ingwar  53:53

I just have one answer, and that is testing, testing, testing, testing. I love it. I love it. In the Nordics, I think we actually have an interesting A/B test going on. We talked a lot about that aspect of machine learning, but we have a live situation going on right now, where between Denmark and Sweden we've had what I can characterize as a slight rivalry over the years, going back to the Viking age and waging wars for quite a bit of time. In Denmark, we started an early lockdown of the entire country, and in Sweden they didn't. And we've seen how that has played out in real time, which has been quite fascinating to watch as a quant. Obviously you can't really say one approach was better than the other, because the assumptions and the knowledge you had at the time were just not good enough to make a final conclusion. But it's quite interesting to see in real time. What it has meant, for various reasons, is that Denmark has actually seen less excess mortality than we normally have during this time, so we recently reopened, among other things, hairdressers. So that helped a little bit with the look. Thank you for noticing.

Corey Hoffstein  55:03

I look forward to that day. Gentlemen, this has been fantastic. And I know the listeners are going to get a ton out of this discussion. So I can’t thank you enough for your time and I look forward to the next discussion about the covariance matrix.

Martin Oberhuber  55:15

Excellent. Thanks so much for having us.

Mads Ingwar  55:17

Sounds very good. It’s a pleasure to be here.