As previously discussed, risk scoring is a powerful way to evaluate risk by including disparate and complex datasets, business rules, and experience — all prioritized and weighted — in an algorithm. While it’s not a new idea, it is underutilized. Therefore, I think it’s worth looking at a few examples of how it’s done. (Since it’s a vast subject, this post will concentrate on flood risk scoring.)
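To make the idea concrete, here is a minimal sketch of such a scoring algorithm. The factor names and weights are purely illustrative assumptions, not taken from any real flood model:

```python
# Hypothetical sketch of a weighted risk-scoring algorithm.
# Factor names and weights are illustrative, not from any real model.

def risk_score(factors, weights):
    """Combine normalized factor scores (0-1) into a single 0-100 score."""
    total_weight = sum(weights.values())
    weighted = sum(factors[name] * w for name, w in weights.items())
    return round(100 * weighted / total_weight, 1)

# Example: invented flood risk factors, each already normalized to 0-1,
# where higher means riskier.
weights = {"elevation": 0.4, "distance_to_river": 0.3,
           "historical_claims": 0.2, "drainage_quality": 0.1}
factors = {"elevation": 0.8, "distance_to_river": 0.6,
           "historical_claims": 0.3, "drainage_quality": 0.5}

print(risk_score(factors, weights))
```

The interesting work in a real scoring model is not the arithmetic but the prioritization: deciding which factors matter, how they are normalized, and what weight each deserves.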
Insurance is an industry built on a general inability to predict what is going to happen, and it is hyper-competitive: the winners are those who can best predict the unknowable…or at least be less wrong than their competition. Underwriting is the process of pricing that unknowable, and it is necessarily performed with rigorous processes underpinned by vast amounts of data. However, underwriting is never perfect, and the gap between actual underwriting and perfection is called underwriting leakage.
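One way to picture leakage is as the gap, across a book of business, between the premium actually charged and the premium a "perfect" view of the risk would have warranted. The policies and numbers below are invented for illustration:

```python
# Illustrative sketch of underwriting leakage: the gap between the premium
# actually charged and the premium perfect knowledge of the risk would
# warrant. All figures are made up for illustration.

def leakage(actual_premium, perfect_premium):
    """Positive leakage means underpricing; negative means overpricing."""
    return perfect_premium - actual_premium

book = [
    {"policy": "A", "actual": 1200.0, "perfect": 1500.0},  # underpriced
    {"policy": "B", "actual": 900.0,  "perfect": 850.0},   # overpriced
]

# Net leakage across the (toy) book.
total = sum(leakage(p["actual"], p["perfect"]) for p in book)
print(total)
```

In practice no one knows the "perfect" premium, which is exactly the point: better data and better models shrink the gap without ever closing it.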
Many solution providers in the market today use addresses as input to determine the risk from multiple perils at a specific location.
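The shape of such a service can be sketched as follows. The geocoder, the peril database, and every score in it are stand-ins I invented for illustration; a real provider would call a geocoding API and query curated peril layers:

```python
# Hypothetical sketch of an address-based multi-peril lookup.
# Real providers geocode the address and query peril data layers;
# here a hard-coded function and dict stand in for both.

PERIL_DB = {
    # (lat, lon) on a coarse grid -> invented peril scores (0-1)
    (29.8, -95.4): {"flood": 0.72, "wind": 0.55, "wildfire": 0.05},
}

def geocode(address):
    """Stand-in for a real geocoding service."""
    return (29.8, -95.4)

def perils_for(address):
    """Return the peril scores at the location the address resolves to."""
    return PERIL_DB.get(geocode(address), {})

print(perils_for("123 Main St, Houston, TX"))
```

The key design point is that the address is only an entry key; the quality of the answer depends entirely on the peril data behind it.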
One of the recurrent themes of this blog is to explore the usefulness and limitations of risk models. This post explores the implications of the widespread — in some cases, universal — use of these models. Is there a limit to a model’s usefulness if everyone is using it? How can a model’s limitations be overcome?
When a peril is well modeled, and that model is comprehensively applied throughout a market by both the carriers and re-insurers, it becomes very difficult to differentiate coverage because everyone has priced the risk similarly. The implications of this blanket usage begin to manifest when nothing happens for a while; i.e., when no significant catastrophe fulfills the model’s predictions. The capacity to cover the expected loss is collected by everyone, and with no claims to release the capital, the market gets soft. Competition becomes tighter, and it becomes necessary to look for new markets, or entirely new activities, to maintain a constant level of premium.
This recent article from Intelligent Insurer explores this phenomenon in the current reinsurance market. The big boys are moving into specialty reinsurance and even primary insurance amid a very soft market. Naïve capital accumulates, and the only outlet is an unexpected — i.e., unmodeled — catastrophe that releases the excess capacity through claims exceeding predictions.
Modeling how a river will flood, or how the sea will rise over the shore, or how rain will accumulate and flood an area, is a tricky thing to do. The variables are limitless, and of course an act of nature is by definition unpredictable. Algorithms can get more and more complicated as these variables are accounted for in the model, but in the end it’s impossible to model things like trees accumulating under bridges to create ad hoc dams. What can really help a flood model’s quality is the foundation upon which it’s built: the empirical ingredients, like elevation data.
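The role elevation data plays can be seen in miniature with a toy digital elevation model (DEM): given ground elevations and a modeled water level, the flooded cells fall out by simple comparison. The grid values and water level below are invented; real models use high-resolution DEMs and far richer hydraulics:

```python
# Minimal sketch of how elevation data underpins a flood model: compare a
# modeled flood water level against ground elevations from a toy DEM grid.
# All values are invented for illustration.

dem = [  # ground elevation in meters for a small grid of cells
    [12.0, 10.5, 9.8],
    [11.2,  9.5, 9.1],
    [10.8,  9.2, 8.7],
]

def inundated_cells(dem, water_level):
    """Return the (row, col) of every cell the water level would submerge."""
    return [(r, c)
            for r, row in enumerate(dem)
            for c, elev in enumerate(row)
            if elev < water_level]

flooded = inundated_cells(dem, water_level=9.6)
print(flooded)
```

However sophisticated the hydraulics layered on top, an error of half a meter in the underlying elevations moves the flood boundary; that is why the empirical foundation matters so much.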