
AI Isn't New. What's Changed Is Everything Else.

By Bill Wilkins · March 24, 2026 · 6 min read


The Foundations Were Already There

The insurance industry has been using artificial intelligence for longer than most headlines suggest. Fraud detection models, actuarial pricing engines, underwriting algorithms — all operational for years, in some cases decades. What's happened recently isn't the invention of AI. It's the removal of every barrier that used to keep it out of reach.

The intellectual foundations go back further still. McCulloch and Pitts modelled the first artificial neuron in the 1940s. Turing asked whether machines could think. Decades of incremental progress followed: expert systems, machine learning, the deep learning breakthroughs driven by researchers like Geoffrey Hinton that quietly transformed what computers could do with complex data. None of this arrived suddenly. The field has been building steadily for the better part of a century.

The Great Acceleration of 2024–2025

What changed in 2024 and 2025 was a great acceleration that transformed both access and capability. The large language models underpinning tools like ChatGPT and Claude became commercially commoditised almost overnight, giving mass access to a technology built at great expense. Just as significant was what the technology could actually do: reasoning, synthesis, and language understanding reached a level of sophistication that previous generations of AI simply couldn't match.

Frameworks that previously required significant engineering investment became freely available, and capabilities once reserved for the largest technology companies were embedded into the software every team already uses. The barrier to entry effectively disappeared.

Insurers are now as likely as technology companies to report active AI use. The gap between early adopters and everyone else is no longer about access — it's about execution.

That changes the competitive question considerably. Previous generations of AI required a specialist to unlock; this one doesn't. The playing field has been levelled and creativity can be unleashed. Exciting and scary in equal measure.

AI Amplifies What's Already Broken

Key Insight

"AI doesn't fix broken processes. It amplifies them."

— Bill Wilkins, Chief Technology & Product Officer, Vitesse

Layering sophisticated analytics onto systems still held together by spreadsheets and manual workflows doesn't produce better outcomes. It risks producing faster errors on a greater scale. The organisations that have struggled with AI pilots aren't struggling because the technology failed them. They're struggling because the data wasn't clean, the processes weren't clearly defined, and the infrastructure beneath the ambition wasn't ready.

Fixing data quality, integration, and system modernisation gaps isn't optional groundwork — it's the prerequisite for AI to function at all. According to Deloitte's 2026 Global Insurance Outlook, despite widespread enthusiasm, only 7% of insurers have successfully scaled AI beyond pilot projects to achieve measurable business outcomes.

That 7% figure is stark. Enthusiasm is everywhere. Outcomes are rare. The distance between the two is almost always an infrastructure problem, not an AI problem.

Where the Money Moves, the Rules Change

This matters with particular force in financial services, where the stakes of getting it wrong are not theoretical. Most AI applications in insurance to date have operated at the descriptive or analytical layer. That's a forgiving environment for a technology that is, by its nature, probabilistic. The same query can return different outputs on different days. At the transactional layer, it's a different problem entirely.

The distinction is not a technical nuance. It's the central design challenge for any financial infrastructure provider moving into AI-augmented operations. Where the money moves, the tolerance for error is zero.
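That design challenge has a recognisable shape: let the probabilistic layer propose, and let deterministic checks decide. The sketch below is purely illustrative — it is not Vitesse's implementation, and every function name, field, and heuristic in it is a hypothetical stand-in — but it shows how a variable AI suggestion can be fenced off from a zero-tolerance transactional step.

```python
from decimal import Decimal

def probabilistic_suggestion(invoice: dict, payments: list) -> dict:
    """Stand-in for an AI model proposing the most likely matching payment.
    Its output may vary between runs, so it must never move money directly."""
    # Illustrative heuristic: pick the payment closest in amount.
    return min(payments,
               key=lambda p: abs(p["amount"] - invoice["amount"]),
               default=None)

def deterministic_guardrail(invoice: dict, payment: dict) -> bool:
    """Exact, repeatable checks that must all pass before funds move."""
    return (
        payment["currency"] == invoice["currency"]
        and payment["amount"] == invoice["amount"]   # exact match, not approximate
        and payment["reference"] == invoice["reference"]
    )

def reconcile(invoice: dict, payments: list) -> tuple:
    """Probabilistic propose, deterministic dispose: anything the guardrail
    cannot verify is escalated to a human rather than executed."""
    candidate = probabilistic_suggestion(invoice, payments)
    if candidate is not None and deterministic_guardrail(invoice, candidate):
        return ("matched", candidate["id"])
    return ("escalate_to_human", None)
```

The point of the pattern is that the AI's variability is contained at the suggestion stage; the step that touches money is pure, exact comparison, so the same inputs always produce the same outcome.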

Regulation, Risk, and the Road Ahead

PwC's Reinventing Insurance report notes that 57% of insurance executives now list generative and agentic AI as their top technology investment priority for 2026 — 22 points ahead of the next closest option. At the same time, governance frameworks are still catching up to the pace of adoption.

AI is being adopted faster than the frameworks governing its use are being written. The NAIC's model bulletin has been adopted by 24 states, with Colorado and New York actively drafting their own legislation. For companies operating across regulated markets, that gap demands careful navigation.

The same capabilities that reduce manual effort and improve decision quality will accelerate cybercrime and amplify fraud if deployed without proper controls. Embracing AI and trusting AI unconditionally are not the same thing.

The organisations that benefit most over the next few years won't be the ones that adopted fastest. They'll be the ones applying AI to foundational tasks — verification, reconciliation, workflow efficiency, exception handling — with the appropriate guardrails in place from the start.

Getting the Sequencing Right

At Vitesse, we've been working through exactly this — not as a theoretical exercise, but as a live operational question for a company that sits at the centre of regulated financial infrastructure. We're investing in AI, testing it across specific and bounded use cases, and being deliberate about where the guardrails need to be before moving it closer to the money.

We published our thinking on how infrastructure must underpin AI adoption in insurance — you can read it on our insights hub. That's not caution for its own sake. It's what responsible adoption looks like when you're moving funds on behalf of insurers operating in some of the most tightly governed environments in financial services.

1. Clean the Data: Data quality and integration gaps must be resolved before AI is layered on top. No shortcuts.

2. Define the Guardrails: Governance must precede deployment at the transactional layer — especially in regulated financial environments.

3. Scale What Works: Move from bounded pilots to measurable outcomes. The 7% who got here got the sequence right.

The gains from AI are real, and so are the risks. The organisations that come out ahead won't be the ones who moved fastest. They'll be the ones who got the sequencing right.

My team has been working through these questions for more than a year: what's safe to build on, what isn't ready, what we were confident about three months ago that we've since had to revisit. We don't have a settled view — but we'll share how we're thinking about it as we go.
