Why insurers rely on basic research to understand tomorrow’s risks


Speaking to the SNSF, Iwan Stalder from Zurich Insurance Group explains the vital role of basic research in building resilient insurance solutions – from climate science and toxicology to neural networks and cyber risk modelling.

“Many of our current insights would not exist without basic research – and it remains indispensable for understanding the risks of the future,” says Iwan Stalder, Head of Group Accumulation Management at Zurich Insurance Group. Stalder and his team are responsible for identifying, quantifying and aggregating risks – that is, combining them to understand the company’s overall exposure – across its global non-life portfolio, from natural perils to man-made incidents.
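Aggregating risks into a view of overall exposure, as described above, can be illustrated with a minimal Monte Carlo sketch. All distributions and figures here are hypothetical placeholders, not Zurich's actual methodology; real accumulation models account for correlations, reinsurance and far richer peril structures.

```python
import random

random.seed(42)

def simulate_annual_loss(mean_log, sigma_log):
    """Draw one year's loss (in millions) from a simple lognormal sketch."""
    return random.lognormvariate(mean_log, sigma_log)

def aggregate_exposure(n_years=10_000):
    """Combine per-peril losses into a single portfolio-level figure:
    the simulated 1-in-100-year aggregate annual loss."""
    totals = []
    for _ in range(n_years):
        # Two hypothetical perils, simulated independently for brevity
        hurricane = simulate_annual_loss(3.0, 1.0)
        industrial = simulate_annual_loss(2.0, 1.5)
        totals.append(hurricane + industrial)
    totals.sort()
    # 99th percentile of simulated years = 1-in-100-year aggregate loss
    return totals[int(0.99 * n_years)]
```

The point of the sketch is the aggregation step: individual perils are simulated jointly so that a single tail metric for the whole portfolio can be read off.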

Originally focused on natural catastrophes, the team’s scope widened when the 2015 explosion in the port of Tianjin showed how industrial accidents can suddenly take on catastrophic dimensions. Since then, scenarios such as pandemics and industrial accidents, or the question of what might become the “next asbestos”, have become part of their portfolio.

When it comes to managing complex risks, insurers are among the actors that are most dependent on scientific knowledge. Climate change, natural disasters, pandemics or cyber incidents can only be assessed with the help of mathematical, physical and climate models. These models form the basis for quantifying risks, allocating capital and designing insurance products that provide protection and stability for society and the economy.

Data, models and uncertainty

At the heart of any risk assessment lies data. One of the most valuable sources is an insurer’s own history of past cases – the record of damages suffered by clients and the payouts made to cover them. In the United States, for example, decades of experience with hurricanes, tornados and hailstorms have built up a detailed picture of how often such events occur and how much damage they cause. This information – referred to as claims or loss data – allows vulnerabilities to be identified and risk models to be refined.
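Two of the most basic quantities derived from claims data are event frequency and average severity. A minimal sketch, with purely hypothetical claim records:

```python
# Hypothetical claims records: (year, loss in millions)
claims = [(2019, 12.5), (2019, 3.1), (2020, 45.0), (2022, 7.8), (2022, 2.2)]

def frequency_severity(claims, years_observed):
    """Derive two basic model inputs from historical loss data:
    how often events occur, and how costly they are on average."""
    n_events = len(claims)
    total_loss = sum(loss for _, loss in claims)
    frequency = n_events / years_observed   # events per year
    severity = total_loss / n_events        # average loss per event
    return frequency, severity

freq, sev = frequency_severity(claims, years_observed=5)
# freq → 1.0 events/year, sev → 14.12 million average loss
```

Real loss databases carry far more detail (location, peril type, policy terms), but these two numbers are the starting point for refining a risk model against experience.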

Different vendors’ models can, however, produce very different results. When an insurer feeds in its own history of hurricane-related losses, it quickly becomes clear which model comes closest to reality. No model is perfect, as Stalder points out, but comparing them against real-world experience shows which ones are most reliable. And the work doesn’t stop there: every model requires regular adjustment. Where past loss data are available, Stalder’s team uses them to recalibrate assumptions; where they are missing, alternative models, scientific insights and expert judgement fill the gap.
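Benchmarking vendor models against an insurer's own loss history can be sketched as a simple error comparison. The vendor names and figures below are invented for illustration; in practice validation involves full loss distributions, not point estimates.

```python
def mean_absolute_error(modelled, observed):
    """Average absolute gap between modelled and observed annual losses."""
    return sum(abs(m - o) for m, o in zip(modelled, observed)) / len(observed)

# Hypothetical annual hurricane losses (millions) estimated by two vendors,
# alongside the insurer's own observed claims for the same years.
observed = [10.0, 55.0, 5.0, 120.0]
vendor_a = [12.0, 40.0, 8.0, 150.0]
vendor_b = [30.0, 90.0, 1.0, 60.0]

# Pick the model whose estimates track real-world experience most closely
best = min(("vendor_a", vendor_a), ("vendor_b", vendor_b),
           key=lambda pair: mean_absolute_error(pair[1], observed))
```

Here vendor A's estimates sit closer to the observed history, so it would be weighted more heavily, exactly the "comparison against real-world experience" the text describes.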

Catastrophe models are therefore not static but evolve continuously as new scientific insights become available, building codes adapt, new risks emerge, and the climate itself changes. Even U.S. hurricane models, optimised over the span of a generation, are still being revised today. As Stalder explains, “everything is in motion; we are constantly learning.”

There is always some degree of uncertainty, but it increases when data are scarce. Natural perils are comparatively well researched, yet the exact impact of climate change on their frequency and severity remains difficult to project. For cyber incidents the uncertainty is even greater: there have been no extreme events to serve as reference points. The closest precedent, the so-called “CrowdStrike event” of July 2024, was a global IT outage caused by a faulty software update. Although not an attack, it demonstrated how millions of systems can be disrupted almost instantaneously.

The role of basic research

In insurance, basic research proves essential. Climate science provides the scenarios that feed into catastrophe models: rising sea levels, increasingly heavy rainfall or changing storm tracks. Toxicological studies on substances such as PFAS – the so-called “forever chemicals” that have recently caused farming bans in Switzerland – highlight how new liability exposures emerge. “Research of this kind, often supported by institutions such as the SNSF, forms the scientific groundwork on which our practical risk models depend,” notes Stalder.

The connection between scientific inquiry and practice often runs through the data, methods and scenarios that basic research produces. One example of an SNSF-backed project is the scClim collaboration between the University of Bern, ETH Zurich and Agroscope, which has developed high-resolution simulations of supercell thunderstorms in the Alpine region to improve forecasting of intense, localized storm events.

Technological research is equally crucial. Artificial intelligence, particularly neural networks, is increasingly used to identify patterns in vast datasets – yet its roots lie in theoretical groundwork from the 1950s and 1960s. Early use cases were limited in scope and efficiency, but the surge in computing power and the evolution of algorithms over the past decade have enabled systems to operate with far greater speed and precision. “Some of our current risk insights would be unthinkable without neural networks. They are a direct result of decades of basic research,” emphasises Stalder. An SNSF-funded example in this field is an ongoing project by the University of Basel and IBM, which develops new types of quantum-enhanced neural networks.

From model validation to in-house development

The connection between science and practice is also reflected in how insurers apply models. For years, Stalder’s team has licensed, validated and calibrated models – a practice that Zurich pioneered in 2004 and that has since become standard. Over time, the team went further, developing its own scenarios or probabilistic models for terrorism, liability catastrophes, crops, pandemics and cyber incidents. These in-house models provide transparency on assumptions and calculations, which is particularly important from a regulatory perspective.

Exchange with the scientific community is a key part of this work. Zurich has established the Advisory Council for Catastrophes, which brings together leading researchers to discuss the latest findings. Topics range from climate change, seasonal hurricane forecasts and earthquake early-warning systems to the reliability of prediction models. The sessions also involve insurance experts from risk management, underwriting, risk engineering and claims, ensuring that scientific insights are translated directly into business practice. “We want to understand where the research stands and assess how to use these insights to improve our risk view and our products,” states Stalder.

Contributing to resilience

Ultimately, the issue goes beyond the stability of a single company: the functioning of whole economies is at stake. Insurance allows investments even in uncertain environments – be it building a billion-dollar factory or financing renewable energy projects. It creates predictability by making risks quantifiable and distributable.

Basic research, therefore, is far more than an academic pursuit. It provides the foundation for models that make complex risks manageable and for innovations that strengthen economic and societal resilience. As Stalder puts it: “Basic research gives us the tools not to eliminate uncertainty but to take informed decisions under uncertainty.”