Artificial Intelligence: Opportunity, risk, and regulation in financial services

In this article, we explore the opportunities, risks, and regulatory landscape associated with the use of Artificial Intelligence (AI) within the financial services sector.

Across all sectors, interest in and use of AI is increasing as firms seek to realise its benefits. Tesco, for example, has recently collaborated with a leading university research team, using AI and big data in the early detection of ovarian cancer. Bloating and indigestion are common symptoms of this type of cancer, yet are typically dismissed as benign complaints. The research team used consumer purchasing data from Tesco's Clubcard programme to identify buying patterns associated with these symptoms among customers who were later diagnosed with ovarian cancer. Early detection like this means treatment can be offered sooner, improving patient outcomes.

AI and machine learning are increasingly being used by financial services firms to identify consumer trends, predict potential financial downturns, and assess borrowers' ability to make loan repayments. Recent advances in AI present a significant opportunity to improve customer outcomes, but there are also risks that firms should be aware of as they look to implement AI-based solutions.

What are the potential opportunities associated with AI?

Enhanced data analysis and insights: AI algorithms can process vast amounts of data at high speeds, allowing firms to generate actionable insights from complex datasets. This can result in better decision-making processes and a deeper understanding of market dynamics and consumer behaviour.

Automated customer service: Chatbots and virtual assistants can now provide 24/7 customer service, improving client interactions, particularly in answering FAQs. This reduces the need for human intervention, freeing staff to focus on more complex queries.

Improved risk management: Predictive algorithms can identify potential financial risks, helping firms proactively assess and mitigate their risk exposures.

Fraud and money laundering detection and prevention: AI can identify and flag irregular patterns or transactions in real time within high-volume transaction processing, improving the detection of potential fraud and money laundering (a minimal sketch of this approach follows this list).

Operational efficiency: Automating manual, time-consuming, and routine tasks can result in higher productivity, efficiency gains, and cost savings.

Tailored financial products: By analysing customer data, firms can offer personalised financial products and services, enhancing the user experience and increasing client retention.
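
To make the fraud and money laundering point above concrete, the following is a minimal sketch of unsupervised anomaly detection on transaction data, using scikit-learn's Isolation Forest. The feature names, values and parameters are illustrative assumptions rather than a production fraud model, which would combine multiple models, rules, labelled data and human review.

```python
# Minimal sketch: flagging anomalous transactions with an Isolation Forest.
# All features and values are illustrative assumptions, not a real fraud model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic transaction features: amount, hour of day, merchant risk score.
normal = np.column_stack([
    rng.lognormal(3, 0.5, 5000),   # typical transaction amounts
    rng.integers(8, 22, 5000),     # mostly daytime activity
    rng.uniform(0.0, 0.3, 5000),   # low-risk merchants
])
suspicious = np.array([[9000.0, 3, 0.9]])  # large amount, 3am, risky merchant

# contamination is the assumed share of anomalies; tune to the alert budget.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for anomalies and 1 for inliers.
print(model.predict(suspicious))  # expected: [-1], i.e. flag for review
```

In practice, the output of a model like this feeds an alert queue for human analysts rather than triggering automatic action, consistent with the oversight points discussed later in this article.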

What are the potential risks associated with AI?

Data privacy: As AI relies heavily on data, protecting data privacy is of heightened importance, given the potential for misuse of personal information and for cyber security breaches.

Over-reliance on automation: Heavy reliance on AI may lead to missed human insights, resulting in suboptimal decisions or overlooked risks.

Job displacement: As AI continues to automate various tasks and processes, there is a heightened risk to job security.

Underlying data risks: AI models are only as good as the underlying data that supports them; incorrect or biased data can lead to inaccurate predictions or suboptimal decisions by AI models.

Systemic herd behaviour: Where many firms adopt similar AI models, there is an increased risk of ‘herd behaviour’ within financial markets, possibly intensifying market volatility and sensitivity to shocks.

Ethical and inclusion concerns: AI-driven decisions, especially without proper oversight, could lead to unfair, biased or discriminatory outcomes. Firms need to consider their reputation, impact on customers, and regulatory compliance, particularly around data bias concerning protected characteristics, underrepresented groups or the treatment of vulnerable customers.  

Technical failures: Like any technology, AI systems can malfunction or be vulnerable to cyberattacks, leading to potential financial losses, regulatory sanction or reputational damage. Cyber security frameworks should be revisited to assess AI-specific vulnerabilities and their mitigation.

What is the regulatory landscape around AI?

In October 2022, the Bank of England (including the PRA) and the FCA published a Discussion Paper (DP5/22) requesting feedback on how the regulators can facilitate the safe and responsible adoption of AI in UK financial services. This was published in response to the AI Public-Private Forum (AIPPF) final report, which made clear that the private sector wants regulators to have a role in supporting the safe adoption of AI in UK financial services.

On 26 October 2023, the FCA and PRA published a feedback statement (FS2/23) outlining the key responses to DP5/22, which had been designed to initiate a debate about the risks of AI and how regulators could respond. Some of the key themes from the feedback include:
  • Respondents felt the current regulatory landscape on AI is fragmented and complex, and thus a synchronised approach and alignment amongst domestic and international regulators would be particularly helpful.
  • Many participants emphasised the need for more uniformity, especially when tackling data concerns such as fairness, bias, and the management of protected characteristics.
  • Regulatory and supervisory attention should prioritise consumer outcomes, with a particular emphasis on ensuring fair and ethical outcomes. 
  • Respondents noted that existing firm governance structures (and regulatory frameworks such as the Senior Managers and Certification Regime (SM&CR)) may be sufficient to address AI risks.
Looking ahead, the European Parliament and EU member states are expected to agree the final text of the EU AI Act by the end of 2023, with UK regulators expected to produce further guidance by the end of March 2024.

Other considerations for firms

The use of AI in any sector carries significant ethical considerations, though these are especially pronounced within financial services. 

Transparency and data privacy

A recent article by the ICAEW explored the ethics of data privacy and consent in relation to AI. It highlighted the existing use of AI-based insurance risk assessments in dynamic pricing, based on customer responses to health questionnaires. Mitigating threats to customer outcomes is critical here, especially within the insurance sector, where dynamic pricing models can embed bias and sensitive data can be exposed through leaks.

Therefore, transparency in AI, including the ability to delve into an AI model and understand its decision-making process, is crucial in building trust. This can enable consumers to better understand and challenge decisions and outcomes. However, as it stands for many AI models (including ChatGPT), transparency is weak, leading to the current ‘black box’ paradigm, whereby systems are viewed in terms of inputs and outputs, without sufficient knowledge of internal workings and methodology.
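
As a concrete illustration of probing a black box, the sketch below uses permutation importance from scikit-learn, a model-agnostic technique that shuffles each input in turn and measures the drop in performance, revealing which features a model actually relies on. The dataset and feature names are illustrative assumptions; in practice firms may also use techniques such as SHAP values or counterfactual explanations.

```python
# Minimal sketch: probing a 'black box' credit-style classifier with
# permutation importance. Data and feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(seed=1)
n = 2000

# Synthetic applicant features: income, debt ratio, missed payments.
X = np.column_stack([
    rng.normal(40_000, 12_000, n),
    rng.uniform(0.0, 1.0, n),
    rng.poisson(0.5, n),
])
# Default risk driven mainly by debt ratio and missed payments (by design).
y = ((X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.2, n)) > 1.0).astype(int)

model = GradientBoostingClassifier(random_state=1).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for name, imp in zip(["income", "debt_ratio", "missed_payments"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

An inspection like this does not fully explain any individual decision, but it gives compliance and risk teams a first, model-agnostic view of what is driving outcomes.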

Bias, discrimination and ESG

AI models trained on historical data can inadvertently perpetuate or amplify existing biases, as discussed in our previous article on algorithmic bias and discrimination. The well-publicised example of CV screening at Amazon showed bias against women because the tool reflected the bias in the human-led CV screening process it was trained on. Similarly, AI credit scoring or pricing systems might disadvantage certain demographic groups if past data reflects biases against them. This has the potential to directly contradict firms' efforts towards promoting diversity, equity and inclusion (DEI), where cognitive, conscious, and unconscious biases affect the training data.
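
One simple, concrete check in this area is to compare model outcomes across demographic groups. The sketch below computes a demographic parity gap (the difference in approval rates between groups) with pandas; the group labels and decisions are illustrative assumptions, and a single metric is no substitute for a full fairness review covering measures such as equalised odds.

```python
# Minimal sketch: a demographic parity check on credit decisions.
# Groups and outcomes are illustrative assumptions, not real data.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Approval rate per group; a large, persistent gap warrants investigation.
rates = decisions.groupby("group")["approved"].mean()
print(rates)
print("parity gap:", rates.max() - rates.min())
```

A gap on its own does not prove discrimination, but it is a signal to investigate the underlying data and model before decisions reach customers.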

In a best-case scenario, where underlying data is sufficiently free of bias, there is an opportunity for AI to enable organisations to understand inequalities and reduce bias in decision making. AI can be used to better monitor, and help reduce, greenhouse gas emissions, for instance by optimising energy generation and consumption across commercial premises. 

Job displacement

Automation through AI could reduce demand for certain roles as technology becomes able to replicate these activities, particularly for more junior roles performing manual tasks. The ethical considerations here include the societal implications of displacement, the responsibility of firms to their employees, and the impact on recruitment, staff development, talent management, and succession planning. However, initial estimates by the World Economic Forum suggest that whilst AI could eliminate over 80 million roles, it could create almost 100 million new ones, so the net effect appears positive.

What’s next for AI in financial services?

It is evident that the role of AI will continue to grow, offering clear opportunities for firms to innovate, streamline processes, and sharpen their competitive edge, amongst many other benefits. As firms look to keep up with the competition in the race to deploy AI solutions, there are several significant risks that they will need to manage, which if left unchecked could lead to heightened regulatory scrutiny, litigation, fines, and reputational damage. Establishing the right control environment and governance arrangements early is therefore fundamental to managing the risks of AI.

What should firms be doing when it comes to AI?

AI presents both opportunities and serious risks for firms, particularly where models are implemented unchecked and without due consideration of the risks involved. There are a number of key governance and risk management considerations for firms, including:
  • Firms should stay informed about the types of AI in use in their business and have an established system of internal controls to identify and manage the associated risks. As new AI models are rapidly introduced, controls over the experimental use of AI should be strengthened correspondingly.
  • Firms should consider the appropriateness of, and enhance where relevant, their governance and oversight arrangements in relation to AI. 
  • Senior leadership and the Board should consider and understand the relevant risks of the use of AI in the firm, alongside their roles and responsibilities in regard to the oversight of AI.
  • There should be sign-off for technology at a senior level, ensuring that senior leadership understands both the opportunities and risks of the technology and the proposed control framework, promoting informed decision-making.
  • Another crucial factor firms should consider is the effect of AI on customer outcomes and its role in delivering good customer outcomes. As such, firms should commit to ongoing review and measurement of the impact on customer outcomes and any potential unintended consequences resulting from AI.

How we can help

BDO's governance and risk management experts can support firms with AI implementation, planning and control, including:
  • Reviewing the internal control and governance frameworks around AI;
  • Reviewing the accountability framework, including roles and responsibilities of senior personnel, regarding AI; and
  • Developing and delivering tailored board training plans.
If you would like to know more or start a conversation, please reach out to Shrenik Parekh or Jennifer Cafferky. We also have extensive experience in the review of ethical risks and considerations, including from a DEI perspective. For further information, please contact Sasha Molodtsov, our Head of Financial Services DEI Advisory.

Sources
  1. https://www.icaew.com/insights/viewpoints-on-the-news/2023/jun-2023/AI-takeover-part-2-ethical-and-regulatory-implications-for-financial-services
  2. https://www.bankofengland.co.uk/prudential-regulation/publication/2023/october/artificial-intelligence-and-machine-learning
  3. https://www.bankofengland.co.uk/prudential-regulation/publication/2022/october/artificial-intelligence
  4. Shop loyalty card data may help spot ovarian cancer - BBC News
  5. https://www.ft.com/partnercontent/societe-generale/how-companies-can-assess-artificial-intelligence-through-an-esg-lens.html