According to The Economist, some 54% of large financial institutions (FIs) had already adopted artificial intelligence back in 2020, so imagine where those numbers stand today. To add to that proliferation, 86% of financial executives say that they plan on increasing AI investment through 2025. And in another survey, 81% said that unlocking value from AI would be the key differentiator between winners and losers in the banking industry.
“There’s clearly a very strong value case to be made for AI in financial institutions”, said Athena. “Investment banks are perhaps the earliest adopters and beneficiaries of machine learning technology in the algorithmic trading space. After all, 70% of FIs now use machine learning for fraud detection, credit scoring, or predicting cash flow events, and conversational AI is commonly used in retail banking and insurance. Yet despite this, many FIs fall short when it comes to productionising their AI projects to deliver concrete, enterprise-wide value.”
Athena explained the main challenges to AI project success and how to overcome them:
Investing in core technology and data management
For Athena, one of the key difficulties FIs face is that their core technology is built for traditional operations, such as payments, lending and claims management. “Legacy IT stacks don’t have the flexibility to deploy AI skills. The computational capacity for data management and analytics that you need in a closed-loop AI application just isn’t there, and testing and developing AI technologies can take days or even months – prohibitive when you’re trying to be innovative. The solution? Change core technologies: move to cloud computing.
“A cloud environment can reduce the time it takes to test and develop AI solutions down to a few minutes, thanks to managed services”, assures Athena. “A bank I worked with started transitioning into the cloud two years ago, and their innovation rate has increased by about 49% according to their own KPIs. That might seem small, but for an incumbent, monolithic institution, it’s quite revolutionary.”
Another facet of this challenge is investing in data management – both in terms of data quality and data access. In FIs, data is siloed across various business units and divisions. As a result, data isn’t standardised, quality is hard to manage, and there’s no single source of truth, so stakeholders are unsure if the underlying data of proposed projects is trustworthy. “Investment in modern data governance and data management practices is crucial for FIs”, insists Athena. “And a key component of that is what we call an Enterprise Data Model, or EDM. It’s not an IT concept, but a way of describing and logically organising your data – all of your data – in business-relevant language – a kind of business glossary, if you will, that streamlines data quality management for all certified users.”
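An EDM as Athena describes it is an organisational artefact rather than software, but a minimal sketch may make the idea concrete. The entry fields (owner, source system, quality rule) and the “Active Customer” term below are illustrative assumptions, not part of any standard model:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GlossaryEntry:
    """One business term in a (hypothetical) Enterprise Data Model."""
    term: str              # business-relevant name, e.g. "Active Customer"
    definition: str        # plain-language meaning agreed across divisions
    owner: str             # accountable business unit (single source of truth)
    source_system: str     # certified system the data must come from
    quality_rule: Callable[[dict], bool]  # automated check on each record

# Illustrative entry: the term, rule and systems are assumptions for the sketch.
active_customer = GlossaryEntry(
    term="Active Customer",
    definition="A customer with at least one transaction in the last 90 days",
    owner="Retail Banking",
    source_system="core_banking",
    quality_rule=lambda rec: rec.get("days_since_last_txn", 9999) <= 90,
)

def certify(entry: GlossaryEntry, records: list[dict]) -> list[dict]:
    """Return only the records that pass the entry's quality rule."""
    return [r for r in records if entry.quality_rule(r)]

records = [
    {"id": 1, "days_since_last_txn": 12},
    {"id": 2, "days_since_last_txn": 400},
]
print([r["id"] for r in certify(active_customer, records)])  # [1]
```

The point of the glossary is that every division reads the same definition and every pipeline applies the same quality rule, which is what makes a single source of truth possible.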
The final part of this challenge is data access.
“Data is the most valuable raw material any organisation possesses; the key to leveraging its value is having access to analytics at scale, at the point of decision making. That’s especially difficult in banks due to data confidentiality. An innovative solution is to create API-enabled databases for more effective and secure data access, at scale and in real time, to fulfil your business objectives.”
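One way to picture the API-enabled approach is as a thin access layer that mediates every query. In the minimal sketch below, an in-memory dictionary stands in for the database, and field-level entitlements per role stand in for the confidentiality controls a real API gateway would enforce; all roles, fields and identifiers are assumptions:

```python
import json

# Hypothetical in-memory "database"; a real deployment would sit behind
# a managed database service and an authenticated API gateway.
_ACCOUNTS = {
    "C-100": {"balance": 2500.0, "iban": "DE99 9999 9999", "risk_score": 0.12},
}

# Field-level entitlements per role: analysts see metrics, not account details.
_ENTITLEMENTS = {
    "analyst": {"balance", "risk_score"},
    "relationship_manager": {"balance", "iban", "risk_score"},
}

def get_account(customer_id: str, role: str) -> str:
    """API handler sketch: return only the fields the caller's role may see."""
    allowed = _ENTITLEMENTS.get(role)
    if allowed is None:
        return json.dumps({"error": "forbidden"})
    record = _ACCOUNTS.get(customer_id)
    if record is None:
        return json.dumps({"error": "not found"})
    return json.dumps({k: v for k, v in record.items() if k in allowed})

print(get_account("C-100", "analyst"))  # response contains no IBAN
```

Because every consumer goes through the same handler, access rules are enforced once, centrally, rather than re-implemented in each silo.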
Implementing a future-oriented operating model
The second challenge for financial institutions lies in the operating model they use. Most are organised according to business divisions, often with centralised IT functions, impeding their ability to innovate. Business leaders set their own agendas and AI strategies, resulting in fragmented teams and a waterfall approach that leads to delays, cost overruns, suboptimal performance and a total lack of a test-and-learn mindset. FIs must be able to work in an iterative manner to continuously innovate and improve – a necessity in order to scale AI, because no one ever gets it right the first time.
“Instead, we at Artefact propose a more agile and flexible future-oriented operating model based on data products. A data product is essentially a set of data solutions that directly address a business challenge or business outcome. Each data product is developed by a dedicated team that has their own budget, assets, and KPIs.”
“For example, say you have a client 360 team of business, IT and data stakeholders. They can provide several data products to the business, as well as to external customers, such as a customer 360 analytics layer. Data scientists and engineers can use this analytics layer to test and learn with AI/ML solutions. You could also have a client 360 dashboard with relevant KPIs for your frontline sales colleagues and use it to improve customer lifetime value. Or you could provide your marketing team with data for optimisation and personalisation to help them better spend their budgets.”
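The client 360 example can be sketched as a simple data-product record: one cross-functional team with its own budget and KPIs, plus several named outputs. The field names, budget and KPI values below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A data product in Artefact's sense: data solutions tied to one business
    outcome, owned by a dedicated team with its own budget, assets and KPIs."""
    name: str
    team: list[str]
    budget_keur: int                 # hypothetical annual budget, in kEUR
    kpis: dict[str, float]           # KPI name -> current value (illustrative)
    outputs: list[str] = field(default_factory=list)

client_360 = DataProduct(
    name="Client 360",
    team=["business owner", "data engineer", "data scientist", "IT architect"],
    budget_keur=500,
    kpis={"customer_lifetime_value_uplift_pct": 0.0},
    outputs=[
        "customer 360 analytics layer",   # test-and-learn base for AI/ML
        "client 360 sales dashboard",     # frontline KPIs
        "marketing optimisation feed",    # smarter budget allocation
    ],
)

print(f"{client_360.name}: {len(client_360.outputs)} outputs")  # Client 360: 3 outputs
```

The design point is ownership: because the team, budget and KPIs live on the product itself, accountability no longer fragments across divisional silos.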
The possibilities are endless, but in essence, a modular operating model allows your teams to better collaborate and work towards a common strategic goal, rather than in the silos that currently divide FIs – as well as a myriad of companies across all sectors where product teams are not yet a reality.
Proactively considering AI ethics and regulation
Investment in AI ethics and regulation is crucial for financial institutions right now. In its review of the European Commission’s proposed Artificial Intelligence Act, the European Data Protection Supervisor (EDPS) considers that stronger protection of individuals’ fundamental rights is necessary, including the rights to privacy and to the protection of personal data.
Regulatory restrictions are to be imposed on anyone who uses any software associated with biometric technology in financial institutions, human capital management or credit assessment of individuals. As things stand, this will affect almost all FIs. While the full extent of future AI regulation is not yet clear to anyone, what is evident is that regulations will be ethics-based. But many leaders in the financial services industry feel their companies don’t understand the ethical issues associated with AI.
Artefact proposes developing an ethical in-house AI governance framework that covers all aspects of AI ethics, including bias, data management, model training and retraining, and AI explainability. To do this, expert advice may be useful, but what’s really needed is a two-part mindset shift.
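One way such a framework might be made operational is a pre-deployment gate that checks every model against each ethics dimension. The check names below mirror the dimensions in the text; the thresholds, model-card fields and the SHAP example are assumptions made for the sketch, not prescribed values:

```python
# A minimal, illustrative pre-deployment governance gate.
CHECKS = {
    "bias": lambda m: m["disparate_impact"] >= 0.8,       # hypothetical fairness floor
    "data_management": lambda m: m["data_lineage_documented"],
    "training": lambda m: m["retraining_schedule_days"] <= 90,
    "explainability": lambda m: m["explanation_method"] is not None,
}

def governance_gate(model_card: dict) -> list[str]:
    """Return the names of failed checks; an empty list means 'approved'."""
    return [name for name, check in CHECKS.items() if not check(model_card)]

model_card = {
    "disparate_impact": 0.85,
    "data_lineage_documented": True,
    "retraining_schedule_days": 30,
    "explanation_method": "SHAP",
}
print(governance_gate(model_card))  # [] -> approved
```

Encoding the framework as an automated gate is what turns an ethics policy from a document into something every deployment must actually pass.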
The first shift requires large-scale stakeholder buy-in: stakeholders must let go of the siloed mentality, divisions and operating models that are preventing them from productionising AI. The second is moving from a risk-averse to a pioneering mindset. This requires a deep cultural change in which the entire organisation attains a high level of literacy on the impact of AI, its applications and its ethics, in order to be innovative without being irresponsible.
“It isn’t easy, especially in an industry where risk aversion is deeply embedded. But ultimately, when it comes to AI adoption, I don’t think financial institutions have much choice. It’s no longer a question of whether AI can add value to your business; it’s about how you can embed AI in your day-to-day operations in order to remain relevant and competitive in a rapidly changing global marketplace.”