What the AI?!... NGP Capital
The Drawdown (TDD): What is Q and how has it changed the way you work?
Atte Honkasalo (AH): The reality is that the market of investable companies is much wider than any human team can keep track of without automation. The purpose of Q is to get a good view of that market.
I joined NGP roughly four years ago for the purpose of building Q. Essentially, it's a data platform that we use to source and monitor interesting companies and also to understand the markets we are investing in.
Having your own view of the market makes you a proactive firm, and it frees your team to focus on more interesting tasks.
TDD: How has having an internal deal-sourcing tool changed operations at NGP?
Christian Noske (CN): From a resource allocation perspective, having come from a different fund when I joined NGP two years ago, I would say that a tool like Q can probably save the equivalent of half to one full-time analyst for a $200-500m fund in the European region.
It’s easier to measure Q’s efficiency when looking at how it does a typical analyst’s work of finding companies, but it’s worth saying that we also use Q to manage and support existing portfolios. So, if you think about market research, portfolio support and due diligence, there are lots of work packages where Q can be used.
As a result, our team works more efficiently, but it now spends more time on qualitative work, such as understanding the bias within the data. We have a quarterly meeting where we go through, let’s say: How many companies in Germany were added? How many of those companies did we actually proceed with? Is our outreach strong enough? All of that drives a very different kind of thinking and different work.
TDD: As a result, are you hiring fewer analysts, or are you looking for a different skillset?
CN: At NGP we have associates rather than analysts. They do more than desk research, as they also reach out directly to firms. But I do not think the junior role has been eliminated because of Q; it has just become more interesting. Anecdotally, one of our associates has appreciated the reduction in manual labour needed to carry out her work. We find our associates are talking to more companies directly rather than doing the theoretical work.
TDD: Why did you decide to implement the tool internally rather than outsource its creation?
AH: We needed a tool for a very specific use case, and we couldn’t find one on the market that did what we required. A lot of VC knowledge is something you can't read in a book or go online and search for. We built Q to look at companies that are aligned with our investment thesis. We wanted to classify companies against that thesis and understand which ones would be the best to look at. This led us to develop the tool internally. But to do that effectively, you need to be very close to the investors to get a tight feedback cycle. You also need an agile work process so you can enact requests within days and weeks rather than months.
TDD: One of the disadvantages of creating an AI tool is the inherited bias from the information and data it processes. How have you tackled this with Q?
AH: The problem with any algorithm is that its training data will carry some level of bias, no matter what you do. For example, if you are looking at gender bias in VC funding, the easiest fix seems to be removing founder gender as a feature in your model. But that's not necessarily enough, because other factors are involved. If female founders tend to get less funding, the model can learn that pattern from other, correlated variables even without gender as an explicit input.
So, removing bias is a continuous job. It involves constantly looking at your model and testing it to identify any existing or new biases. You can't remove it all, but hopefully you are doing much better than you would without that effort. Above all, human oversight of the tool is key to monitoring this risk.
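The proxy effect Honkasalo describes can be shown with a few lines of code. The sketch below uses entirely synthetic, hypothetical data (the sector proxy, the funding rates and the correlation strengths are all invented for illustration, not drawn from NGP's data): even after the gender column is dropped, a correlated feature lets a model reproduce the historical gap.

```python
# Synthetic illustration: dropping a protected attribute does not remove
# bias when a correlated proxy variable remains in the feature set.
import random

random.seed(0)

rows = []
for _ in range(1000):
    female = random.random() < 0.5
    # Hypothetical proxy feature (e.g. a sector) correlated with gender:
    # present for ~80% of female founders, ~20% of male founders.
    proxy = 1 if random.random() < (0.8 if female else 0.2) else 0
    # Historical funding outcome biased against female founders.
    funded = random.random() < (0.3 if female else 0.6)
    rows.append((female, proxy, funded))

# "Model" built WITHOUT the gender column: it scores on the proxy alone.
funded_rate_proxy1 = (sum(f for g, p, f in rows if p == 1)
                      / sum(1 for g, p, f in rows if p == 1))
funded_rate_proxy0 = (sum(f for g, p, f in rows if p == 0)
                      / sum(1 for g, p, f in rows if p == 0))

# The proxy segment that is mostly female shows a lower funding rate, so a
# model trained on it inherits the gender gap despite never seeing gender.
print(f"funded rate, proxy=1 (mostly female): {funded_rate_proxy1:.2f}")
print(f"funded rate, proxy=0 (mostly male):   {funded_rate_proxy0:.2f}")
```

This is why, as he notes, the fix is continuous testing rather than a one-off column deletion: the bias has to be measured in the model's outputs, not just its inputs.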
TDD: One of the main drawbacks of generative AI is its tendency to hallucinate, or fabricate information that appears correct. How do you prevent this from happening in your tool?
AH: One of the ways we use external LLMs is to formulate company descriptions. We feed in company data and ask for a 150-word description of the company. We noticed straight away that if you just fed the tool the company name, it would produce a false description without knowing anything about the company.
It’s something to be mindful of. Without appropriate context, the tool is not useful, but asking specific questions with the right level of detail can create efficient and timely summaries, especially if you are asking the tool to structure unstructured data.
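The pattern Honkasalo describes, grounding the prompt in the structured facts you actually hold rather than the bare company name, can be sketched as below. The function name, the field names and the prompt wording are illustrative assumptions, not NGP's actual implementation; the point is only that the model is given context and told not to go beyond it.

```python
# Hedged sketch: building a grounded summarisation prompt from structured
# company data, so the LLM describes what we know rather than guessing
# from the name alone. All names and fields here are hypothetical.

def build_description_prompt(company: dict, max_words: int = 150) -> str:
    # Serialise only the structured facts we actually hold.
    facts = "\n".join(f"- {key}: {value}" for key, value in company.items())
    return (
        f"Using ONLY the facts below, write a description of the company "
        f"in at most {max_words} words. If a detail is not listed, "
        f"do not invent it.\n\nFacts:\n{facts}"
    )

# Hypothetical example company.
prompt = build_description_prompt({
    "name": "Acme Robotics",
    "sector": "industrial automation",
    "headquarters": "Munich, Germany",
    "founded": 2019,
})
print(prompt)
```

The instruction not to invent unlisted details is the prompt-side counterpart of the point above: without appropriate context the tool is not useful, but with it, the task becomes restructuring data the firm already trusts.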
TDD: Should we be excited or worried about future developments in this field?
AH: I think it's an industry requirement to see more opportunities than risks, but if you look at the bigger picture, we are really just at the beginning of the generative AI journey. At this point, these tools are, at their core, text-prediction algorithms. That is something I like to keep in mind when hearing all the talk about existential risks.
CN: On the positive side, I think VC is doing a lot of work that can be automated, from data collection to investment models.
I think generally, AI can do a really great job at doing the non-value-adding tasks and that is very exciting.
For me, explainability and security are the biggest risks. Innovation is happening in that area too, with experts looking into those factors, but I don’t see any concrete solutions yet.