AI in Insurance Requires Governance Before Scale
Artificial intelligence has rapidly become one of the most discussed forces shaping the insurance industry. From claims automation to underwriting insights, insurers are experimenting with tools that promise efficiency, improved customer service, and deeper use of data. Yet behind the excitement sits a quieter and more fundamental question: how should insurers govern the use of AI?
For an industry defined by prudence, governance is not an afterthought. It is the foundation.
Insurance has always operated within a dense regulatory environment. In some markets the constraints are lighter, in others more extensive, but the underlying principle remains the same: trust in the system requires discipline, transparency, and accountability. AI, by contrast, emerged in a largely unregulated environment. This created an unusual situation in which one of the most regulated industries began adopting a technology that was, at least initially, governed mostly by internal judgment.
For insurers, this produced a period of exploration.
At first, AI was treated primarily as a tool. Companies examined how it could improve parts of the value chain, enhance customer service, or optimize the use of data. Projects were launched cautiously. Proofs of concept appeared across underwriting, claims management, fraud detection, and operational efficiency.
Despite the absence of formal frameworks, insurers rarely approached the technology recklessly. Most institutions applied common sense and internal discipline. Data protection rules already in place within insurance regulation shaped how information could be used. Boards and executives asked pragmatic questions before approving investment. If a project risked falling outside the likely direction of regulation, it simply would not proceed.
In practice, governance often preceded legislation.
As the use of AI expanded, the need for internal rules became more evident. Within many organizations, discussions intensified about what employees could do with emerging tools, how subsidiaries should experiment with them, and where the boundaries should be drawn. These debates were not always smooth. Conflicting interpretations, uncertainty, and sometimes fear accompanied the rapid pace of technological change.
Yet these tensions were also constructive.
They signaled that insurers understood the stakes. Technology capable of transforming decision-making cannot operate without clear responsibility. The more AI moved from experimentation toward operational use, the more essential it became to define processes, oversight, and accountability.
This is why the arrival of more explicit regulation is, somewhat paradoxically, a welcome development.
In many sectors, excessive regulation can slow innovation. Insurance leaders often make this argument with good reason. But with AI, the situation is different. Clear regulatory guidance creates a shared reference point. It helps boards, executives, and operational teams understand what is acceptable and what is not.
More importantly, it brings calm to internal discussions.
When organizations debate the adoption of new technology without a framework, every decision becomes contentious. Once a regulatory structure exists, companies can focus on implementation rather than speculation. The conversation shifts from whether AI should be used to how it can be used responsibly.
For boards of directors, this clarity is particularly valuable.
AI touches every dimension of an insurer’s activity. It influences internal operations, customer interactions, partnerships with service providers, and even the design of products. Without governance, each of these areas carries uncertainty. With governance, leaders can evaluate opportunities with greater confidence.
The presence of a framework does not eliminate debate, nor should it. Healthy discussion remains essential when technology evolves faster than regulation. But it ensures that debate happens within boundaries that all stakeholders understand.
There is, however, a caution that the industry should keep in mind.
Regulation must evolve intelligently. If legislative bodies respond to technological change by layering new rules on top of existing ones without coherence, the result may become unmanageable. A regulatory environment built from accumulating circular letters and fragmented provisions risks creating more confusion than clarity.
For insurers, this would be counterproductive.
Artificial intelligence has the potential to improve operational efficiency, strengthen customer service, and support innovation across the insurance ecosystem. If governance becomes excessively complex, the industry may spend more energy interpreting rules than delivering value.
The objective should therefore be balance.
Regulators must provide guidance without stifling progress. Insurers must develop internal governance that aligns with legal expectations while remaining flexible enough to adapt as technology evolves. Dialogue between regulators and the industry will be essential to maintain this equilibrium.
After several years of experimentation, the sector now stands at an important moment.
Insurers have learned what AI can and cannot do. They have tested tools, encountered limitations, and begun to understand the operational implications. At the same time, regulatory frameworks are beginning to take shape.
This convergence is an opportunity.
With clearer governance, insurers can move beyond isolated pilots and begin integrating AI into their operations in a sustainable way. The technology will not replace the human judgment at the heart of insurance, but it can enhance it.
If managed carefully, AI will not undermine the discipline of the industry. It will strengthen it.
And in insurance, discipline has always been the prerequisite for trust.
François Jacquemin
P.S.: Want to watch the video version of this article? Go to https://www.francoisjacquemin.com/covered/ai-in-insurance-is-no-longer-a-technology-question-it-is-a-governance-one