If everyone uses the same risk model, is it still useful?

Posted by Ivan Maddox on Apr 4, 2018 8:03:01 AM


One recurring theme of this blog is the usefulness and the limitations of risk models. This post explores the implications of the widespread, in some cases universal, use of these models. Is there a limit to a model's usefulness if everyone is using it? And how can that limitation be overcome?

When a peril is well modeled, and that model is applied comprehensively throughout a market by both carriers and reinsurers, it becomes very difficult to differentiate coverage because everyone has priced the risk similarly. The implications of this blanket usage begin to manifest when nothing happens for a while, i.e., when no significant catastrophe fulfills the model's predictions. Everyone collects the capacity to cover the same expected loss, and with no claims to release the capital, the market gets soft. Competition tightens, and carriers must look for new markets, or entirely new activities, to maintain a constant level of premium.

This recent article from Intelligent Insurer explores this phenomenon in the current reinsurance market. The big boys are moving into specialty reinsurance and even primary insurance amid a very soft market. Naïve capital accumulates, and the only outlet is an unexpected, i.e., unmodeled, catastrophe that releases the excess capacity through claims exceeding predictions.

One way to reduce the impact of uniform application of the same models is to introduce variety. If carriers apply the same models differently, the problem begins to dissolve: diverse views of risk become possible, and competition blossoms from differing experience, expertise, choice of analytics, risk appetite, and augmenting datasets. The market as a whole becomes much more resilient to capacity overflow because each carrier has collected premium on different risks, based on its own focused efforts.
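The intuition can be sketched with a toy Monte Carlo simulation (all distributions, error sizes, and carrier counts below are hypothetical, chosen only for illustration). When every carrier prices with the identical model, every carrier under-prices exactly the same events, so their shortfalls are perfectly correlated; when each carrier brings its own view, the errors are independent and the shortfalls spread out across the market.

```python
import random

random.seed(7)

N_EVENTS, N_CARRIERS = 1000, 10

# "True" losses for a set of hypothetical catastrophe events
# (heavy-tailed, as catastrophe losses tend to be).
true_loss = [random.lognormvariate(0, 1) for _ in range(N_EVENTS)]

def shortfall(priced, actual):
    """Total under-pricing: realized loss in excess of priced risk."""
    return sum(max(a - p, 0.0) for p, a in zip(priced, actual))

# Scenario 1: every carrier uses the identical model, so every
# carrier carries the identical pricing error on every event.
shared_err = [random.gauss(0, 0.3) for _ in range(N_EVENTS)]
shared = [
    shortfall([l * (1 + e) for l, e in zip(true_loss, shared_err)], true_loss)
    for _ in range(N_CARRIERS)
]

# Scenario 2: each carrier applies its own view of risk,
# so pricing errors are independent across carriers.
diverse = []
for _ in range(N_CARRIERS):
    err = [random.gauss(0, 0.3) for _ in range(N_EVENTS)]
    diverse.append(
        shortfall([l * (1 + e) for l, e in zip(true_loss, err)], true_loss)
    )

def spread(xs):
    return max(xs) - min(xs)

# With one shared model, every carrier's shortfall is identical
# (spread of zero): the whole market is short on the same events.
# With diverse models, shortfalls differ carrier to carrier.
print("shared-model spread: ", spread(shared))
print("diverse-model spread:", spread(diverse))
```

The zero spread in the shared-model scenario is the point: a single unmodeled event hits every balance sheet the same way, whereas diversified views distribute the surprise.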

Flood insurance in the U.S. is a good example of this effect. Almost the entire market bases its flood policies, partially or completely, on FEMA data. If carriers began to apply alternative flood models, extra data (meteorological, topographic, hydrographic), and unique analytics, the market would become more dynamic and resilient almost overnight. Capacity would be bound to different risks and different events, and reinsurers could compete for the accumulated risks based on their own interpretation of what the carriers have done.

A dynamic underwriting environment, with a unique view of risk from each carrier, is a much saner way to keep market forces firm than hoping for an unimagined catastrophe to wreak enough havoc to shed excess capital. Standard application of standard models leads to stagnation. Introducing variety into the use of models, with results that can be understood and applied to solid actuarial work, is a recipe for success for those carriers who can use the available tools and information intelligently.


Topics: Insurance Underwriting, Risk Management, Other Risk Models



To see how Intermap delivers analytics tailored to your underwriting, visit our InsitePro page.
