Well, it turns out that filling the Update with a heady mix of politics and pensions leads to what I can only call a Bulging Sack, mainly of people keen to take Tom to task for his views. This is excellent and I can’t begin to describe how much I am here for it; please know that every piece of feedback is remitted to him directly and the best bits are pinned in a team chat, much like Uhtred-son-of-Uhtred-son-of-Uhtred pins enemies’ heads on stakes pour encourager les autres.
There are pensions stories this week and there are politics stories this week but you can definitely have too much of a good thing, so we shall shun all that and instead head off to the realms of Artificial Intelligence, and in particular consider the pronouncements yesterday of Geoffrey Hinton, a cognitive scientist and neural network specialist who stepped down from Google to be the equivalent of that scientist James Cromwell played in I, Robot.
First of all, and this isn’t the thing, but Dr Hinton is a sprightly 75, and is making the news for, amongst other things, retiring. Early. I’m just saying – all those seminar presentations about how retirement is changing weren’t wrong. They mainly sucked, but they weren’t wrong. (Also, anyone who uses ‘change is the only constant’ in a seminar like that should be subject to the Atomic Wedgie).
Also – not only is the generation in its 60s and 70s not tech-averse, it’s the generation that built most of the foundations of the tech we use today. Try telling Dr Hinton that he needs to have a paper-based annual client review pack because he’s 75 and see where that gets you.
More seriously – whatever we think about the potential dangers of neural networks which learn both differently and much more quickly than us, they aren’t going away. A 6-month moratorium while we work out a Three Laws Safe code of conduct isn’t going to do much. So it behoves (posh) us to think about what we’re going to do with these technologies in our sector.
Most discussions about how new technologies will interact with the practice of financial planning bump into a flurry of denialist stuff about how nothing will ever replace human contact, empathy and understanding – and on that point, at least, the denialists seem right to me. Turing tests aside, dealing with a simulacrum of a human on sensitive issues – one planner told me recently of the emotional drain of dealing with a client with a terminal illness – is unlikely ever to be as good as real human connection.
But beyond that, what if we could train a large language model to write suitability letters? What if you recorded your client meeting, fed that into a system that transcribed the recording into something the AI could work from, and then got a suitability report spat back out at you 42.3 seconds later? You might need half an hour to check it through and brush it up, but with suitability letters taking anything from 3-7 hours to complete depending on which research you read, I’d take the 00:30:42.3 over the 07:00:00 version any time.
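For the technically curious, that pipeline is really just three stages bolted together. Here’s a toy sketch in Python – everything in it is illustrative: in the real world the transcription step would be a speech-to-text service and the drafting step a large language model, not the keyword-matching stand-ins below, and the field names are made up for the example.

```python
# Toy sketch of the meeting-to-suitability-report pipeline:
# record -> transcribe -> extract facts -> draft report -> human review.
# The extraction and drafting steps here are deliberately dumb stand-ins.

from dataclasses import dataclass

@dataclass
class MeetingFacts:
    client_name: str
    objective: str
    risk_profile: str

def extract_facts(transcript: str) -> MeetingFacts:
    """Stand-in for the 'turn the recording into something the AI can
    work from' step: pull out the facts a report needs."""
    facts = {}
    for line in transcript.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            facts[key.strip().lower()] = value.strip()
    return MeetingFacts(
        client_name=facts.get("client", "the client"),
        objective=facts.get("objective", "not recorded"),
        risk_profile=facts.get("risk", "not recorded"),
    )

def draft_report(facts: MeetingFacts) -> str:
    """Stand-in for the LLM drafting step: fill a suitability template.
    A human still checks and brushes up the output afterwards."""
    return (
        f"Suitability report for {facts.client_name}\n"
        f"Objective discussed: {facts.objective}\n"
        f"Agreed risk profile: {facts.risk_profile}\n"
    )

transcript = "Client: A Person\nObjective: retire at 60\nRisk: balanced"
report = draft_report(extract_facts(transcript))
print(report)
```

The point isn’t the code, it’s the shape: each stage is replaceable, and the half-hour human check at the end stays in the loop regardless of how clever the middle bit gets.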
What about annual review reports? What if the AI could just look at the client record in your practice management system and Do The Report? It can’t make up for bad data, but neither can you, and the process of getting data up to date for a report should be improving anyway, or so Consumer Duty tells us.
This stuff isn’t moonbeams any more. I’ve had two conversations this week with firms who are building elements of it and one which has a working version. Is the output ready for live-fire exercises yet? No. But that’s what AI does – it comes out with rubbish to start with, you show it how it could do it better, and it learns.
Imagine feeding an AI every single suitability letter produced by UK IFAs in the last 10 years. It would take it, oooh, about until lunchtime to chew on all that, and with the right training I bet you’d get something pretty decent back straight away. Train it for 6 months and I suspect – apart from the occasional little human touch – you’d have something at least as good as the majority of what’s produced now.
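If you’re wondering what “feeding it every letter” would actually look like, it’s less mystical than it sounds: you reshape each historic letter into a training pair of inputs (the client facts) and desired output (the finished letter). The sketch below is an assumption-laden illustration – the field names and the JSONL prompt/completion layout are common conventions for fine-tuning data, not any particular vendor’s API.

```python
# Illustrative only: shaping historic suitability letters into
# prompt/completion training pairs, serialised as JSONL - one JSON
# object per line, the usual format for fine-tuning datasets.

import json

def build_training_pairs(letters):
    """Each historic letter becomes one (client facts -> finished
    letter) example the model can learn the house style from."""
    pairs = []
    for letter in letters:
        pairs.append({
            "prompt": f"Client facts: {letter['facts']}\n"
                      "Write a suitability letter.",
            "completion": letter["text"],
        })
    return pairs

# A single made-up letter standing in for ten years of real ones.
letters = [
    {"facts": "age 58, objective: retire at 60, risk: balanced",
     "text": "Dear client, following our meeting..."},
]

jsonl = "\n".join(json.dumps(p) for p in build_training_pairs(letters))
print(jsonl)
```

Scale that from one made-up letter to ten years of real ones and you have the lunchtime-sized meal referred to above; the hard part is the data hygiene, not the plumbing.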
Here’s the thing though – that learning would be available to every single user of the AI; that’s how neural networks work. Right now large organisations are trying to create ‘forked’ versions of these systems, where they get the model to learn from their own data but ring-fence what it learns, so the rest of the network stays dumber and they keep a competitive advantage. Imagine a garden that someone else plants and tends; you throw a wall up around a big bit of it, stick a bench in and say it’s yours.
There are upstream issues in this sector which will take a long time to fix, mainly in how money moves around. But transformational gains at the pointy end are coming – for those that want them. Just don’t make the mistake of thinking that it’ll only be the young’uns that will be involved. If that’s what you think, Dr Hinton wants a word with you and he’s bringing his double-secret highly malevolent AI with him that knows exactly what’s in your search history and isn’t afraid to use it.
- Verrrry interesting from platform tech provider GBST here, which has been busy transforming itself into a cloud-based SaaS organisation in the last few years. You may not know GBST well, but I think that might change soon.
- Congrats to Ray Adams, late of Cashcalc, who is off to play in his Speedos. Dear Lord, supply me a psychic wire brush and Dettol to get that image out my mind. Congrats to Ray.
- Cashflow modelling news part the second: this looks pretty interesting as a model…
- And your music choice this week: well, after such a strong reaction (from one person) to a cracking thrash track last week, let’s go more deathy this week. Now, a 9 minute melodic death metal track might be asking a lot of you, but I think you’ve got it in you, and if you do then you will be richly rewarded. Here’s Fires In The Distance with Crumbling Pillars of a Tranquil Mind. Incredibly, English is their first language.
Enjoy, and may your pillars remain uncrumbled. I personally avoid my pillars crumbling by having a mind that’s anything but tranquil, but your mileage may vary.
See you next week