When it comes to flood risk, the Federal Emergency Management Agency's (FEMA) Flood Insurance Rate Maps (FIRMs) are the authority — and rightly so. They have decades of engineering intelligence built into them and are refined regularly. In the end, though, they are based on flood models, and as statistician George Box famously wrote, "essentially, all models are wrong, but some are useful."
A recently published report investigating how well the FIRMs predicted flooding from Superstorm Sandy in 2012 illustrates that while they performed well in some areas, there is also room for improvement. That's because no flood model, or model of any natural catastrophe, will be 100% accurate. In fact, predicting 100% of a flood event is usually a sign of a weak flood model: imagine a model that labels everything within 100 miles of water as High Risk. It would catch every flood, but only by flagging vast areas that will never see water. Flood models show their quality in the zone between "you don't need a model to know that's a flood risk" and "that place will never flood." In statistical terms, the sweet spot is catching around 75% or 80% of the flooding. That is about where FEMA's FIRMs were on Sandy, but there is more to understanding flood risk than statistics.
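To make that trade-off concrete, here is a minimal sketch using entirely made-up parcel data and thresholds (nothing here reflects FEMA's actual maps or any real model). It compares the "everything within 100 miles is High Risk" approach against a hypothetical more selective cutoff, scoring each on how much actual flooding it catches and how much dry land it needlessly flags.

```python
# Toy illustration (hypothetical data, not FEMA's): why "predicts 100% of flooding"
# can be the mark of a weak model. Each parcel is (distance to water in miles,
# whether it actually flooded in the storm).
parcels = [
    (0.1, True), (0.3, True), (0.5, False), (1.0, True),
    (2.0, False), (5.0, False), (10.0, True), (25.0, False),
    (40.0, False), (80.0, False), (95.0, False), (120.0, False),
]

def score(predict, parcels):
    """Return (share of actual floods caught, share of dry parcels wrongly flagged)."""
    floods = [p for p in parcels if p[1]]
    dry = [p for p in parcels if not p[1]]
    hit_rate = sum(predict(dist) for dist, _ in floods) / len(floods)
    false_alarm = sum(predict(dist) for dist, _ in dry) / len(dry)
    return hit_rate, false_alarm

# "Weak" model from the text: everything within 100 miles of water is High Risk.
naive = lambda dist: dist <= 100
# A more selective model (the 1.5-mile cutoff is invented for illustration only).
selective = lambda dist: dist <= 1.5

for name, model in [("within 100 mi", naive), ("within 1.5 mi", selective)]:
    hits, alarms = score(model, parcels)
    print(f"{name}: catches {hits:.0%} of floods, flags {alarms:.0%} of dry parcels")
```

On this toy data the naive model catches 100% of the flooding but also flags nearly every dry parcel, while the selective one catches 75% of the flooding and flags very little that stays dry — which is the sense in which a sub-100% hit rate can signal a more useful model, not a worse one.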