What the AI?! Accelex
At the end of last month, Accelex launched its portfolio analytics and reporting platform for investors and asset servicers as part of its extraction and analytics services.
We spoke to Nicole Weder, chief product officer, and Boris Lavrov, product director, to understand how Accelex uses AI to give investors visibility across every level of their portfolio at the click of a button.
The Drawdown (TDD): What does AI mean for Accelex?
Nicole Weder (NW): We started Accelex with the intention of giving private markets investors real-time analytics tools. We quickly realised that to create such an analytical solution, we needed to build a flexible unstructured data extraction tool, as most data in this market is locked up in document form. This is where our AI stack does the heavy lifting for us. Our AI engine has been built internally over the last five years, using the latest publicly available research and models. Our data scientists continuously analyse the performance of the top machine vision, natural language processing (NLP) and multimodal (visual + NLP) models, then fine-tune and apply them to our specific use cases. We then started to build the analytical side as well; making use of the extracted data was a logical evolution of our product offering.
It was really important to us not to create a generic AI tool, because private markets behave differently from more traditional asset classes. You don’t see many generic data extraction AI tools trying to compete anymore, because such a tool wouldn’t really understand the industry-specific context required to create accurate structured data that can be readily used for portfolio analytics.
Boris Lavrov (BL): I’d add that we have developed a tool that can locate exactly where certain data points sit within a document. You could not do that with a simple template. Instead, we used AI to create geotags, meaning every metric we extract can be precisely located, allowing the user to click back and forth between the source and the analysis. Clients are then satisfied that there is an audit trail of validated data analytics on a single platform.
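The geotag idea Lavrov describes can be pictured as each extracted figure carrying a pointer back to its exact position in the source document. The following is a minimal sketch of that data shape; all names, fields and coordinates are our own illustration, not Accelex’s actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeoTag:
    """Hypothetical source location for one extracted metric."""
    document_id: str
    page: int
    bbox: tuple  # (x0, y0, x1, y1) in page coordinates

@dataclass(frozen=True)
class ExtractedMetric:
    """An extracted data point plus its audit trail back to the source."""
    name: str
    value: float
    tag: GeoTag

# An analytics view can resolve any figure back to its source page,
# enabling the click-through between analysis and document.
metric = ExtractedMetric(
    name="net_irr",
    value=0.182,
    tag=GeoTag(document_id="q3_report.pdf", page=4, bbox=(72, 310, 188, 324)),
)
```

Keeping the location alongside the value, rather than in a separate lookup, means the audit trail survives wherever the metric flows downstream.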
TDD: How is AI used to analyse data for your clients, who are predominantly investors?
BL: We aim to increase efficiency, data scale and availability for our clients. We use the bulk of our AI capabilities to structure the data, which is then fed into our analytics suite. Investors can set up an automatic document feed into our platform; documents are instantly classified and assigned to the relevant investment entities the client already tracks and understands (funds, investment vehicles). Required data points are then pulled from those documents based on the client’s preferences and workflows. That data can be locked up in unstructured text or complex tables – our job is to cater for the full range of formats and language used in those documents, and to standardise it all into a single private markets-specific data model.
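The flow described above – classify an incoming document, then decide which data points to pull based on its type – can be sketched as follows. The document types, field names and keyword-based classifier here are hypothetical illustrations, not Accelex’s actual model; a production system would use trained ML models rather than string matching.

```python
def classify(document_text: str) -> str:
    """Stand-in classifier routing on keywords; illustrative only."""
    text = document_text.lower()
    if "capital call" in text:
        return "capital_call_notice"
    if "quarterly report" in text:
        return "quarterly_report"
    return "other"

# Fields a client might configure per document type (hypothetical names).
REQUIRED_FIELDS = {
    "capital_call_notice": ["call_amount", "due_date"],
    "quarterly_report": ["nav", "net_irr", "dpi"],
}

def fields_to_extract(doc_type: str) -> list:
    """Look up which data points to pull for this document type."""
    return REQUIRED_FIELDS.get(doc_type, [])
```

The key design point is that extraction is driven by configuration per document type, so client preferences and workflows shape what gets pulled without changing the pipeline itself.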
We also have data pipelines to eliminate the possibility of data duplicates and to flag certain anomalies. This further strengthens the validity of the data as clients don’t have to assess the credibility of the data themselves – but they can see where the data comes from should they wish to.
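The two safeguards mentioned – eliminating duplicates and flagging anomalies – can be sketched with a content fingerprint for deduplication and a simple standard-deviation check for outliers. This is our own minimal illustration of the idea, assuming nothing about Accelex’s actual pipelines.

```python
import hashlib

def content_fingerprint(record: dict) -> str:
    """Deterministic hash of the fields that identify a data point."""
    key = "|".join(str(record[k]) for k in sorted(record))
    return hashlib.sha256(key.encode()).hexdigest()

def deduplicate(records: list) -> list:
    """Keep only the first occurrence of each identical record."""
    seen, unique = set(), []
    for record in records:
        fp = content_fingerprint(record)
        if fp not in seen:
            seen.add(fp)
            unique.append(record)
    return unique

def flag_anomalies(values: list, tolerance: float = 3.0) -> list:
    """Flag values more than `tolerance` standard deviations from the mean."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [v for v in values if std and abs(v - mean) > tolerance * std]
```

Flagged anomalies would then be surfaced for review rather than silently dropped, which is what preserves the auditability clients rely on.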
TDD: Should we be excited or worried about advancements in AI?
BL: We are always excited about new developments in this space because this is exactly the process that allowed businesses and products like ours to become possible in the first place. What we are doing today in unstructured data processing was simply not possible 20 years ago, and what we can do today is a great deal more advanced than what we could do with the technology available even five years ago.
Experimentation, research and development are a natural part of AI’s evolution. For example, for a long time it was implicitly assumed that the main way to improve the performance of machine learning algorithms was to retrain and fine-tune the underlying models on larger and more specialised datasets. We would say this is no longer (always) the case. New models come onto the market so quickly that one of the most effective ways to improve performance on specific tasks may actually be to replace the underlying models as soon as better ones appear, and to make sure you always use the best techniques available in the open-source community.
TDD: What developments can we expect to see in the next year or so?
NW: While it’s hard to predict as we usually focus on how to leverage AI ourselves, I think we can expect to see significant developments in large language models (LLMs). But will they transform private markets? I doubt it because the private nature of the industry means you have to be incredibly careful with how you leverage these models. You can’t just plug private and confidential data into something like ChatGPT, because you won’t know where that will end up.
We are particularly excited about, and closely monitoring, multimodal developments in LLMs, because these relate directly to our particular use case. Essentially, we expect a big jump in capability once LLMs are able to ingest both visual and linguistic information at the same time – a capability demonstrated in the early GPT-4 demo earlier this year, but not yet released to the public. If and when LLMs allow us to interpret visually rich documents with accuracy and at scale, it would bring a great deal of value to our clients. That part of AI has accelerated at such a pace that we expect to see quite a bit of activity in that area.