The article below was written by Pauline Guénot as part of her work experience placement with WA.
According to a study published in 2017, over 50% of the activities currently undertaken in the global economy could be replaced by automation within the next 40 years. To some, this may seem a startling estimate, but the Covid-19 pandemic has undoubtedly accelerated some trends towards automation and catalysed the adoption of data-driven solutions.
With the availability of data continuing to expand, and ever more sophisticated analytical tools available, the financial sector is well placed to capitalise on the potential benefits offered by artificial intelligence.
The potential applications of AI are wide-ranging, and often rest on the ability of these methods to harvest, manipulate and analyse data beyond the capacity of traditional techniques. AI tools can, for instance, enable higher loan approval rates with fewer credit losses for lenders. Building accurate predictive models on the basis of large data sets can help banks to identify and assess borrowers considered “at risk” of default, such as millennials or small-business loan applicants. Such models naturally rely on the quality of their input data; the dataset must be large and representative enough to return accurate predictions.
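To make the idea concrete, the following is a minimal sketch of the kind of predictive default model described above, trained on synthetic data. The features, labelling rule and thresholds are invented purely for illustration and do not reflect any real lender’s scoring criteria.

```python
# Illustrative sketch only: a toy logistic-regression default model trained on
# synthetic borrower data. All feature names and coefficients are invented for
# the example and do not reflect any real lender's scoring criteria.
import math
import random

random.seed(42)

def make_borrower():
    """Generate one synthetic borrower: (scaled features, defaulted?)."""
    income = random.uniform(15, 120)          # annual income, GBP thousands
    debt_ratio = random.uniform(0.0, 0.9)     # debt repayments / income
    history_years = random.uniform(0, 20)     # length of credit history
    # Hidden "true" rule used only to label the synthetic data.
    risk = 2.5 * debt_ratio - 0.01 * income - 0.03 * history_years
    defaulted = 1 if risk + random.gauss(0, 0.2) > 0.3 else 0
    return [1.0, income / 100, debt_ratio, history_years / 20], defaulted

data = [make_borrower() for _ in range(2000)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit by plain batch gradient descent on the logistic loss.
weights = [0.0] * 4
lr = 0.5
for _ in range(300):
    grad = [0.0] * 4
    for x, y in data:
        err = sigmoid(sum(w * xi for w, xi in zip(weights, x))) - y
        for i, xi in enumerate(x):
            grad[i] += err * xi
    weights = [w - lr * g / len(data) for w, g in zip(weights, grad)]

def default_probability(features):
    """Predicted probability that a borrower defaults."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, features)))

correct = sum(1 for x, y in data
              if (default_probability(x) > 0.5) == (y == 1))
print(f"training accuracy: {correct / len(data):.2f}")
```

The point of the sketch is the dependency the article highlights: the model is only as good as the synthetic data it is fitted to, and a skewed or undersized sample would produce equally confident but wrong predictions.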
AI could offer significant benefits to the industry given its capacity to improve anti-money laundering and fraud detection. The traditional risk documentation process is expensive and time-consuming, while an approach based on both pattern recognition and intelligence-based models could diminish the administrative burden. Ayasdi, a US-based predictive analytics platform, reported that one of its clients saw a 20% reduction in financial crime investigation cases after adopting its services.
According to the UK Payment Markets Report 2020, while 58% of all payments in 2009 were in cash, this proportion was only 23% in 2019. Since the beginning of the pandemic, there has been a 60% decline in cash usage. With more and more transactions processed electronically, identifying fraud and other illegal activities with rapid, real-time techniques will become all the more important.
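A real-time check of the kind described can be sketched very simply: flag any transaction whose amount deviates sharply from the account’s recent history. The window size and threshold below are invented for the example; production fraud engines combine many more signals than transaction amount alone.

```python
# Illustrative sketch only: a minimal streaming anomaly check that flags a
# transaction when its amount is a statistical outlier versus a rolling
# window of the account's recent transactions.
from collections import deque
from statistics import mean, pstdev

class RollingFraudCheck:
    """Flag transactions whose amount is an outlier vs. recent history."""

    def __init__(self, window=50, z_threshold=4.0):
        self.history = deque(maxlen=window)   # recent transaction amounts
        self.z_threshold = z_threshold

    def check(self, amount):
        """Return True if the transaction looks anomalous, else False."""
        flagged = False
        if len(self.history) >= 10:               # need a baseline first
            mu = mean(self.history)
            sigma = pstdev(self.history) or 1.0   # avoid division by zero
            flagged = abs(amount - mu) / sigma > self.z_threshold
        self.history.append(amount)
        return flagged

checker = RollingFraudCheck()
# Typical card spending, then one suspiciously large payment.
for amt in [12.5, 8.0, 23.0, 15.0, 9.5, 31.0, 14.0, 11.0, 19.5, 22.0]:
    checker.check(amt)
print(checker.check(950.0))   # prints True: a clear outlier
```

Because each `check` call does constant work over a fixed-size window, the test can run as payments arrive rather than in an overnight batch, which is what makes real-time screening of electronic transactions feasible.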
Whilst these techniques – implemented well – can reduce exposure to credit risk and increase confidence in the financial system, they undoubtedly come with their own risks. The most obvious is that poor input data will, almost certainly, yield poor results – the classic “garbage in, garbage out” refrain – and this is all the more relevant for AI techniques, which might be expected to proceed with comparatively less supervision than traditional methods. A further risk is that, if consumers learn how the model works, they may then seek to mimic “correct” behaviour to get a loan or achieve their objective under false pretences.
Given these risks, investors and financial services providers will want to take a close interest in a potentially changeable regulatory environment for AI.
Companies must build the right data partnerships to develop unique products, insights and experiences that differentiate them from their competitors. However, big tech companies remain critical sources of data and customer experience, and as these platforms entrench their position in financial services, smaller firms are left at a disadvantage. Earlier this month, the government announced the launch of a new regulator, the Digital Markets Unit, housed within the Competition and Markets Authority, to enforce a “new pro-competition regime to cover platforms with considerable market power”. Companies such as Google or Facebook, designated as having “strategic market status” and funded by digital advertising, will be monitored by regulators.
Financial firms could use alternative data, as discussed at the second Artificial Intelligence Public Private Forum last March, but they must have clear due diligence processes to ensure that such data comes from trusted sources. Financial services firms can also draw on the data standards developed under the open banking regime when applying existing standards to AI. They must align with existing requirements such as the European Banking Authority’s guidelines on outsourcing, ensuring that their systems are transparent and explainable.
Finally, the government has announced that “a new plan to make the UK a global centre for the development, commercialization and adoption of responsible AI will be published this year”, as AI could deliver a 10% increase in UK GDP by 2030. The European Commission will also propose new EU regulations on AI on 21 April 2021. Embracing artificial intelligence is therefore a priority for financial firms, but the prospect of reforms means that they must monitor regulatory developments to ensure continuity of services globally.