It’s safe to say AI is finding its way into the advice process more and more. Based on our data, adoption is also happening at a faster rate than might have been expected.
Almost a third of respondents to our State of the Advice Nation (SOTAN) research are already using AI, a figure that has nearly doubled since the same time last year. More still plan to be using AI in their businesses within the year.
Most use cases for AI that we hear about sit at what might be described as the ‘outer edges’ of the advice process, mostly geared around note-taking and summarising meetings. SOTAN respondents cited no fewer than 14 different AI tools they use to support this, though the actual number of AI systems advice professionals have to choose from is likely even greater.
Other firms go further, using AI to populate fact-finds or to build and improve their suitability reports, stripping out jargon and making them more client-friendly, as well as for research and marketing. Anecdotally, still other firms are embracing AI wholesale, embedding it in their workflows as a key driver of business efficiency.
The other side of the AI coin
So that’s the good stuff, and the kind of benefits of AI we’re probably all pretty familiar with. But advice professionals are also wary about AI on a number of different fronts.
There are the outright sceptics who don’t believe the claims about the admin burden magically being outsourced to AI from inside your back office. We have also heard from individuals who know there is a growing number of AI tools to choose from, and who are reluctant to commit to one only to see it acquired and have to change systems again further down the line.
In both our research and at our events, concerns have been raised about data security, as well as about what difference AI is actually making for clients. This is something Alex Lannin of Westcotts raised when we were designing the questions for SOTAN.
Alex said that while new AI-powered tech is clearly getting in front of advisers and planners, from what he hears the results are mixed: some firms are seeing efficiencies while others are having to deal with inaccuracies.
He said: “There’s a lot of talk about how efficient we can all become with the use of AI but not that much about client outcomes and how these effort/time savings can really benefit the client as opposed to just increasing the bottom line of advice firms.”
Alex added his main concern is whether some firms are diving into AI tools without thinking about what’s happening to client data, or how to protect this.
The view from a tech provider
Max Anderson, founder of WealthSpace and Chat.Redact on AI and data confidentiality
“Privacy agreements with an AI company, and privacy arrangements that prevent sensitive data reaching an AI company, are very different things. Both are largely compliant in a loose sense, but it’s the former that is asking for trouble.
“For example, using ChatGPT to help with route planning, I got a response starting where I live to the destination. Yet to do so it either needed to know my previous activity and/or my current location – both things supposedly suppressed by enabled privacy settings!
“When pressed, AI essentially responded saying it was a lucky guess. Hmmm.
“Many early-mover AI technologies are sharing data with the likes of ChatGPT verbatim. While it’s true they have privacy arrangements in place, it’s worth understanding that the data still reaches the large corporations running the large language models (LLMs), which are themselves predicated on the consumption of data to learn.
“When you consider the type of data being shared by clients with professionals in the finance industry, particularly those entrusted with financial planning, this feels like a privacy time bomb waiting to go off.
“If you asked your clients if they were comfortable with everything they told you being shared with Google, Meta and the like, they would probably take exception to that.
“AI is super powerful and will help the finance industry immensely. But we must consider what data is being shared, with whom, and when. Where possible, protect yourself by making sure data is anonymised before it is consumed by an AI system. After all, big corporates are doing exactly that, and you should too.”
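To make the anonymisation advice concrete: a minimal sketch of what redacting data before it reaches an AI system can look like. This is an illustrative assumption, not the approach of any product mentioned above; the patterns shown catch only a few obvious identifiers (emails, UK phone numbers, National Insurance numbers), and production tools use far more robust detection, including named-entity recognition for client names.

```python
import re

# Illustrative sketch only: regex-based redaction of a few common PII
# patterns before text is sent to an external AI service. Note that
# names such as "Jane Doe" are NOT caught by simple regexes; detecting
# them reliably requires named-entity recognition, beyond this sketch.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b(?:0|\+44\s?)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace each match of a PII pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Client Jane Doe (jane.doe@example.com, 07700 900123) asked about pensions."
print(redact(prompt))
# → Client Jane Doe ([EMAIL], [UK_PHONE]) asked about pensions.
```

The placeholders keep the prompt useful for the AI system while the identifying details never leave the firm, which is the distinction Max draws between privacy agreements with an AI company and arrangements that stop sensitive data reaching one in the first place.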