The Financial Conduct Authority (FCA) has long had a focus on treating customers fairly. With the cost-of-living crisis pushing great swathes of the population into debt – 7.4 million people were struggling with bills and credit repayments in January, according to FCA data – customer vulnerability has come into specific focus.
Building on work started in 2021, when it issued guidance on how financial services businesses should treat vulnerable customers, the regulator has begun investigating whether the industry is meeting the standards it has set. A 40-part questionnaire issued to regulated entities in June 2024 will inform its next steps, with the regulator seeking information on everything from how financial services firms identify which of their clients are vulnerable, to how they design products and communications to meet client needs and measure outcomes.
The regulatory crackdown is designed to ensure businesses are adhering to the standards laid out in the FCA’s Consumer Duty rules, which were published last year. For Kirsty McKenna, Innovation Programme Manager at FinTech Scotland, while the main purpose of the duty is to tighten consumer protections, it has also created the opportunity for greater innovation in the financial services sector.
“At a very high level the Consumer Duty is putting the onus on the financial services industry to do everything it possibly can to create better outcomes for all consumers,” she says. “That covers whether firms know enough about their customers to make sure they offer them the right products and services as well as charging them a fair price. It has opened up lots of avenues for greater use, or more insightful use, of the data those organisations hold and in doing so has created opportunities for AI to be applied and for increased collaborations with new fintech solutions.”
Inicio Chief Executive and Co-Founder Rachel Curtis began thinking about those opportunities while working at a challenger bank, where she was alerted to the case of a pregnant customer who had attempted to take her own life because she was unable to get on top of her spiralling debts. Those debts were spread across a number of different organisations and, while each business was attempting to help the customer come up with a plan to clear them, the first step in that process was to have her fill out an income and expenditure (I&E) form. Dealing with those forms, Curtis says, was what almost pushed that particular customer over the edge.
“The form is quite long, and it can be intimidating for those who may be less confident about finances, and that's a real barrier for a lot of consumers,” she says. “What was happening to this lady was she was being sent forms to do on her own that she wasn't confident enough to do by herself. The support she was being offered was to ring a call-centre agent to be talked through the form, but she couldn’t because of the sheer embarrassment of having got into that mess with her money and feeling she would be judged when going through what she was spending her money on – she thought the agent was going to say ‘wow, you're paying for Netflix, cancel Netflix and pay us back’, which is not necessarily how it works. There was just this fear of the unknown and she felt she didn’t really have any options in life, which is terribly sad.”
The case led Curtis to investigate how widespread the problem was and, having carried out market research, she says a “really consistent message” came back.
“Filling out an I&E form was a really critical first step in the process for people who were struggling, who were vulnerable, but there wasn't a way for them to get the support they needed,” she says. “That was the period when a lot of banks were putting in chatbots to cut operational costs, but they were actually creating quite a clunky customer experience. I really didn't buy into that whole chatbot sort of approach, but I thought that maybe a use-case for conversational AI technology was to create a virtual agent to give that support. That's where we started.”
Inicio’s solution is an online agent called Budgie, which helps customers fill in the I&E form by asking a series of soft questions that enable it to begin tailoring the form’s typical c.80 fields. It is trained to jump around the form if customers “blurt out random facts about their money” and interacts in whatever way best suits the client, for example enabling them to text chat, ask questions or fill in information. That, says Curtis, builds users’ confidence before they panic and close the form down, which in turn enables the organisations that use Budgie – everything from utilities companies and banks to credit unions and debt collection agencies – to offer the kind of tailored solutions required to properly help people deal with their debt.
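The non-linear, slot-filling behaviour Curtis describes can be illustrated with a short sketch. The field names, keyword mappings and parsing below are invented for illustration – a real I&E form runs to around 80 fields and Inicio’s implementation is not public – but they show how “blurted out” free text can populate a form in any order, with the agent only asking about the gaps that remain.

```python
import re

# Illustrative subset of an income & expenditure (I&E) form;
# real forms run to roughly 80 fields. These names are assumptions.
FIELDS = {
    "monthly_income": None,
    "rent_or_mortgage": None,
    "utilities": None,
    "subscriptions": None,
}

# Rough keyword-to-field mapping so free text lands in the right
# slot even when facts are offered out of order.
KEYWORDS = {
    "monthly_income": ("salary", "wage", "paid", "income"),
    "rent_or_mortgage": ("rent", "mortgage"),
    "utilities": ("gas", "electric", "water"),
    "subscriptions": ("netflix", "spotify", "subscription"),
}

AMOUNT = re.compile(r"£?(\d+(?:\.\d{1,2})?)")

def absorb(utterance: str, form: dict) -> None:
    """Fill whichever fields the utterance mentions, in any order."""
    text = utterance.lower()
    for field, words in KEYWORDS.items():
        for w in words:
            idx = text.find(w)
            if idx != -1:
                # pair the keyword with the first amount stated after it
                m = AMOUNT.search(text, idx)
                if m:
                    form[field] = float(m.group(1))
                break

def next_question(form: dict) -> str | None:
    """Ask a soft question about the first field still missing."""
    prompts = {
        "monthly_income": "Roughly how much comes in each month?",
        "rent_or_mortgage": "About how much goes on rent or your mortgage?",
        "utilities": "And gas, electric and water together?",
        "subscriptions": "Any subscriptions, like streaming services?",
    }
    for field, value in form.items():
        if value is None:
            return prompts[field]
    return None  # every field captured: the form is complete

# A customer "blurting out random facts" still populates the form:
absorb("my rent is £650 and netflix is only £11", FIELDS)
absorb("I get paid about £1400 a month", FIELDS)
print(next_question(FIELDS))  # -> asks about utilities, the one gap left
```

In practice a conversational model, rather than keyword matching, would do the extraction, but the form-filling logic has the same shape: capture whatever the customer volunteers, then only ask about what is still missing.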
Fintech business Aveni also offers a specially tailored solution via its speech-analysis tool, Aveni Detect. The technology is trained to listen in to customer calls and flag potential vulnerabilities to the people handling them. The organisations using it can choose for that to happen immediately, via question prompts to call handlers, or for conversations to be assessed in their entirety post-call to determine whether a follow-up is required.
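That choice between in-call prompts and whole-conversation review maps onto two simple integration patterns, sketched below. This is a generic illustration rather than Aveni’s API: the scoring function, thresholds and types are assumptions standing in for a proprietary model.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Stand-in for a trained vulnerability model; the real scoring is
# proprietary, so this signature and the thresholds are assumptions.
Scorer = Callable[[str], float]

@dataclass
class Flag:
    utterance: str
    score: float

def live_prompts(utterances: Iterable[str], score: Scorer,
                 threshold: float = 0.8) -> Iterable[str]:
    """Real-time mode: alert the call handler the moment an
    utterance crosses the risk threshold, while the call is live."""
    for u in utterances:
        if score(u) >= threshold:
            yield f"Possible vulnerability signal: {u!r}"

def post_call_review(transcript: list[str], score: Scorer,
                     threshold: float = 0.6) -> list[Flag]:
    """Post-call mode: assess the conversation in its entirety and
    return ranked flags so a reviewer can decide on follow-up."""
    flags = [Flag(u, score(u)) for u in transcript]
    return sorted((f for f in flags if f.score >= threshold),
                  key=lambda f: f.score, reverse=True)

# Toy scorer for demonstration only: counts risk phrases.
def toy_scorer(utterance: str) -> float:
    risky = ("can't cope", "behind on", "struggling")
    return min(1.0, 0.5 * sum(w in utterance.lower() for w in risky))

transcript = ["Hi, I'm calling about my bill",
              "I'm struggling and I'm behind on everything"]
print(post_call_review(transcript, toy_scorer))  # flags the second line
```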
Yet while it is proving to be a powerful tool, Aveni Chief Client Officer Robbie Homer-Plews says there is only so much AI can do on its own, adding that while machine learning has evolved hugely in recent years, its potential to create false positives means it should always be used to enhance rather than replace human interactions.
“Until a few years ago most vulnerability monitoring using technology would largely be using transcription and building tools to look out for particular keywords like dementia or Alzheimer's or cancer,” he says. “However, machine learning has come a long way in the past few years in particular and that's the technology we leverage when it comes to vulnerability identification. The reason for this is that it's very easy to take specific words said in isolation out of context using more basic methods.”
“We rank vulnerability on a sliding scale and perceived risk of harm is right at the top of that scale. I've seen numerous examples over the years of a customer talking about committing suicide. Using simplistic regular expression-based models, this would be flagged up immediately for a human review. However, in the vast majority of cases, when you understand and analyse the surrounding context, these are not true instances of vulnerability. They might be talking hypothetically about missing out on a lottery jackpot because they forgot to buy their weekly ticket, which always carried the same numbers, joking that they’d kill themselves if that happened. Scenarios such as these can trigger lots and lots of false positives, which then means you have to spend more time dealing with the noise than actually getting to the truly vulnerable population.”
“To get the most out of the technology available, you have to be able to cater for specific use-cases and specific sectors and apply different underlying models to get accurate and trustworthy outputs. If you try to use a one-size-fits-all approach to technology across sectors, what's going to come out of a million analysed conversations is half a million showing some sort of vulnerability characteristics, and then you're going to have to employ more people to trawl through that data. You bring in technology to enhance your monitoring and give your team a machine line of defence, but it ends up costing more money and generating more work for your teams.”
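The pitfall Homer-Plews describes is easy to reproduce. The snippet below is a deliberately naive keyword flagger of the kind he contrasts with contextual models – the terms and call snippets are invented for illustration – and it flags all three calls even though only the first is a genuine risk signal.

```python
import re

# Naive keyword flagging: transcripts matched against a fixed
# list of high-risk terms, with no awareness of context.
HIGH_RISK = re.compile(
    r"\b(suicide|kill (?:myself|himself|herself)|dementia|cancer)\b",
    re.IGNORECASE,
)

calls = [
    "I can't see a way out and I've been thinking about suicide.",
    "If I'd forgotten my lottery ticket and my numbers came up, "
    "I'd kill myself, ha!",
    "Mum's cancer is in remission now, thanks for asking.",
]

for call in calls:
    if HIGH_RISK.search(call):
        print("FLAGGED:", call)

# All three calls are flagged, yet only the first is a genuine risk
# signal. A contextual model weighs cues such as hypothetical framing
# ("if...") or a joking register before routing a call for human
# review, which is what keeps reviewers focused on the truly
# vulnerable population rather than the noise.
```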
Like many businesses, the abrdn Group is exploring how AI and other technologies could be used across its businesses and enabling functions. Jan Buchan, Head of Legal, Business Services and Transformation at the organisation, says abrdn has taken a three-dimensional approach to AI: (a) the creation of an AI Council and a Practitioners’ Group to build broader awareness of opportunities, risks and best practice, and to monitor use-case development across abrdn; (b) use-case generation to drive efficiency and/or create customer or business value; and (c) clear “AI Guardrails” to help colleagues think carefully about risks when developing their use cases and to best ensure responsible and ethical practices.
More specifically, abrdn provides a wide range of products and services to a broad base of customers and clients, and it’s clear that the issues highlighted above are at the forefront of its thinking. Alison Rennie, Head of Adviser, Legal at abrdn, says one of the main issues financial services businesses are grappling with is defining and identifying what vulnerability can encompass. Indeed, people do not have to be struggling with debt to be in a vulnerable position in relation to their finances – simply being too busy to properly search for the best insurance deal could technically make someone a vulnerable customer. So, while some customers will know their circumstances make them financially vulnerable and will seek assistance accordingly, others will not. Knowing how to deal with the wide variety of causes which may lead to vulnerability creates layers of complexity, Rennie says.
“Vulnerability can be caused by a range of factors, and I think that's probably the key aspect to this because it's not a one-size-fits-all,” she says. “We would take into account things like capacity, illness, or financial circumstances at any particular time. For example, age can be part of it, but it shouldn't be the sole factor – being over a certain age does not automatically make you vulnerable.”
Identifying potential scenarios where a customer might be vulnerable can be relatively straightforward, for example when the business receives a pension-sharing order from a divorcing couple or when a power of attorney document is filed. Neither should automatically be viewed as a marker of vulnerability, Rennie says, but businesses need to be aware that a combination of these types of events with other facts or circumstances might lead to a customer being vulnerable, and should therefore look more closely at the circumstances in which those customers are making key financial decisions. A customer may be going through a very tough divorce, perhaps within an abusive marriage, and could be vulnerable from that perspective.
“Or you could have an older person signing a power of attorney for their son or daughter, which again is part of a normal planning process and not an automatic flag for vulnerability, but it could be if combined with a succession of very quick sell-downs of all the assets or a competing instruction from a different person – then immediately that would be a flag for us to look at this more closely. Granting powers of attorney is not normally an issue, but if we think there's an accompanying instruction or other factor that's unusual, we will look at it more closely.”
The challenge around AI is how to cover the wide number of circumstances which either singly or in combination with other factors could mean a customer is vulnerable.
While AI offers significant potential to enhance the detection and support of vulnerable customers, it needs to be implemented carefully.
When adopting AI vulnerability tools, organisations will need to consider data protection compliance, and in particular the lawful basis for collecting and processing a customer’s personal data.
Murray Cree, Partner in Technology & Commercial at Burness Paull, notes that many vulnerability identifiers (such as health data) are likely to be “special category” data, which in most cases cannot be processed without consent. “Best practice is to notify users that their conversation may be monitored or recorded by AI tools and give them the opportunity to object.”
Recent statistics indicate that only 32% of UK adults believe AI will benefit them, particularly in areas like healthcare and education. This optimism is more prevalent among younger people and those with higher education, so vulnerable customers are likely to be more sceptical about the use of new technologies. Accordingly, firms will need to ensure that their customer service advisors are properly trained and able to clearly explain the benefits of the AI tools.
Cree advises: “An alternative to consent is to rely on the ‘substantial public interest’ exemption. This includes ‘safeguarding the economic wellbeing of certain individuals’. However, this can only be relied upon where the data subject is at economic risk and where a firm cannot reasonably be expected to obtain consent (for example, where the customer does not have capacity to do so).
“Firms will also need to update their privacy notices to explain the legal basis for processing, what data is being processed, and how the data will be used. Failure to comply with these requirements can lead to enforcement action from the Information Commissioner’s Office (ICO), including significant financial penalties.”
It is likely to be the end of this year before the FCA publishes the results of its vulnerable-customer survey, meaning it is not yet known whether it will seek to tighten up the rules firms are expected to abide by. However, an enforcement case from earlier this year shows just how seriously the regulator is taking the issue. In May the FCA fined HSBC £6.2m for failing to take into account the individual circumstances that led to customers missing repayments, something Burness Paull Partner Caroline Stevenson – Head of the firm’s Financial Services Regulatory team – describes as a “cookie-cutter approach”.
“The failings were caused by inadequate systems and controls being in place, which was exacerbated by poor staff training. In reality, the forbearance all customers receive should be tailored to their individual circumstances, under the overarching umbrella of being treated fairly. This did not happen here,” Stevenson says.
The FCA found that the bank’s approach “put 1.5 million people at risk of greater financial harm” and, in addition to the £6.2m fine, HSBC had to pay the affected customers a total of £185m in redress. With the bank also having to spend £94m to identify and address the issues that led to the unfair treatment in the first place, Stevenson says the case should act as a reminder to all organisations of just how far the regulator will be prepared to go to ensure that vulnerable clients and those in financial difficulty are properly treated, and that firms comply with their regulatory obligations.
“The FCA has a broad range of enforcement powers – they can effectively stop you in your tracks by suspending your ability to trade,” she says. “Amongst other things, they can impose huge fines and can prosecute individuals in the criminal courts. One of the most effective tools, however, is public censure – the FCA can publicly reprimand firms or individuals which can cause serious reputational damage.”
“To avoid getting to enforcement, there are lots of required layers of accountability within the management of regulated firms known as the Senior Managers and Certification Regime (SMCR). However, this means that where issues arise, the buck lands with particular senior managers who can get struck off. The SMCR is supposed to ensure that people throughout regulated firms promote better conduct to reduce the risk of consumer harm. It’s hoped that through initiatives like the SMCR, those senior execs at the top of organisations embed and promote a culture of fairness and transparency throughout the layers of a firm to make sure that customers are getting good outcomes and being treated fairly. And to ensure that the right systems and controls are in place to support this.”
Given how high the stakes are, Homer-Plews says it is unlikely that any financial services businesses would consider trying to replace humans with AI any time soon. But, given the potential AI has to improve the way in which businesses treat their vulnerable clients, it is a useful tool for them to have at their disposal.
“In certain circumstances AI can be a means of removing people from the equation and that can cut costs, but it's most effective if it's used in conjunction with the power of people,” he says. “People have skills that can't be mimicked yet by AI. There's still a way to go for AI to truly replicate human behaviours and human tendencies. AI does carry significant power and there's lots of capability there that organisations should be leveraging, but for now it's best to have it working in conjunction with people's expertise.”
Jamie Gray, a Partner in the Financial Services Regulatory team at Burness Paull, agrees and confirms that even the regulator has indicated it is supportive of AI. Gray notes that the FCA wants to “promote the safe and responsible use of AI in UK financial markets and leverage AI in a way that drives beneficial innovation”. He adds: “Harnessing AI for the benefit of firms and ultimately customers has to be a key strategic priority for firms, but it’s not easy. The novel technology sometimes creates a tension with law and regulation, which can lag behind the speed at which tech evolves. Firms need to embed any new tech in a safe way which brings their customer base along with them.”
In a constantly evolving digital age – the age of the “Trust Economy” – the use and accessibility of technology are impacting financial services customers, who experience a wide spectrum of vulnerability. While there are acknowledged risks associated with the use of AI, there are opportunities for the fintechs developing innovative solutions and the firms using this new technology to support these customers. Outcomes from the FCA’s survey will inform next steps around regulation and guidance, but for now financial services organisations must continue to do all they can to earn and keep the trust of consumers and deal with vulnerability appropriately.
This paper is part of Burness Paull’s “Trust Economy” series, which supports and informs organisations exploring how to win earned trust and benefit from growth in the trust economy. You can read the other papers in the series – The Trust Economy and Trust in the Age of Artificial Intelligence – or speak to our dedicated Tech & Financial Services team about how to innovate in this space whilst remaining compliant.
Whether your business creates, sells or is enabled by technology, as a digitally native legal firm, we give thoughtful and precisely informed advice.
To discuss how to ensure you have the right assurances and protections in place to win earned trust and benefit from growth in the trust economy, get in touch. We’d love to have a conversation.
Written by
Caroline Stevenson
Partner
Financial Services Regulatory