Why do FloodFlash use a sensor for our parametric flood insurance? Are there more high-tech solutions available? Why, for example, do we not use satellite data for our parametric policies? Parametric cover has come a long way from bottomry contracts and Roman ship protection. In this article, FloodFlash Catastrophe Risk Analyst Henry Bellwood reviews the options available for those designing modern parametric policies – exploring why we landed on our award-winning sensor technology.
Every parametric insurance policy needs a metric, a measurement, a parameter. The parameter acts as the trigger mechanism, unlocking a claim. Parametric flood policies are most often determined by the depth of flooding. The correlation between parameter and loss is simple: the greater the flood, the higher the costs.
Accurate and transparent measurement is vital for parametric insurance. If there is doubt around the parameters or how they are measured, trust can dissolve. In this blog, we explore the ways in which different parametric providers might measure their triggers. Each approach has a huge impact on accuracy, claim speed, premium size and the oft-reported parametric boogeyman: basis risk.
Basis risk, basis risk, basis risk
Basis risk is often touted as the Achilles' heel of the parametric movement. Defined (loosely) as a disparity between costs and payout, basis risk is characterised as a shortfall or a surplus payout when it comes to a claim. In other words, the payout doesn't match the losses.
The risk manifests in several ways for parametric insurance. If the parameter isn't properly defined, a client might incur losses but not meet a trigger. That may lead to a shortfall in payout or even no claim at all. Similarly, parametric policies require clients to pre-select their payout amounts. This leaves room for over or underinsurance.
Indemnity insurance traditionalists might claim a victory over parametric cover. That misses the point though, as parametric insurance isn’t about perfect dollar for dollar replacement (protecting the balance sheet). Parametric covers are much better for rapid-payout cash injections to guarantee survival (protecting cash flow). What’s more, basis risk is just as prevalent in traditional covers – exclusions, loss-adjustment and deductibles all prime offenders in causing a mismatch between costs and payouts (we explore basis risk in more detail in our article – what is basis risk and why should I care?).
Clients are doomed to basis risk exposure regardless of their insurance choices. All is not lost, though. The real question is how much a policy exposes you to basis risk. That's why it's hugely important to pay attention to the holy trinity of parametric policy design:
- How you measure your parameter(s)
- What triggers you choose
- How much you get when your policy triggers
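The mechanic behind that holy trinity is simple enough to sketch in a few lines. Here's a minimal, hypothetical illustration (the function name, depths and payout figure are all illustrative, not FloodFlash policy terms): a pre-agreed payout unlocks the moment the measured parameter meets the trigger.

```python
def parametric_payout(measured_depth_m: float,
                      trigger_depth_m: float,
                      payout_amount: float) -> float:
    """Pre-agreed payout unlocks when the measured parameter
    meets or exceeds the trigger depth; otherwise nothing."""
    if measured_depth_m >= trigger_depth_m:
        return payout_amount
    return 0.0

# A 0.5 m flood against a 0.3 m trigger pays in full;
# a 0.1 m flood pays nothing - there is no loss adjustment step.
print(parametric_payout(0.5, 0.3, 100_000.0))  # 100000.0
print(parametric_payout(0.1, 0.3, 100_000.0))  # 0.0
```

Everything that matters – how `measured_depth_m` is obtained, where the trigger sits, and how big the payout is – is decided before the flood, which is exactly why the measurement method deserves scrutiny.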
Trigger and payout value choices largely depend on the individual client. At FloodFlash we provide close support and free surveys through expert partners to assist clients. For this blog we’re taking a closer look at the first part of any parametric policy – how to measure the trigger.
Let’s have a look at the contenders…
#1: Stake in the ground – IoT sensors
The first form of data collection we'll explore is the one we use. Much like the telematics policies widespread in car insurance, sensor-based parametric policies rely on a device installed at the property a client is looking to protect. These IoT (Internet of Things) sensors connect to the internet wirelessly to send live flood data. When the trigger depth is reached, the sensor starts the claim.
The benefit of these smart sensors is that they are:
- cheap: each FloodFlash sensor costs £100 per year
- accurate: measure real-time flood depths to millimetre accuracy
- reliable: memory chips record data even if networks are down
- fast: we’ve got a reading of a flood before you can say ‘claims-assessor’ three times
- minimal basis risk: because the sensor is installed at the insured property – parameters correlate very closely to losses
Casting modesty aside for a moment: we are immensely proud of our award-winning sensor [link to Pete’s blog]. There is simply no-one else out there with a more reliable trigger mechanism for parametric flood insurance. Let’s look at the other options to see how they compare.
#2: Cat-in-a-box – third party statistical data
Some parametric products use what is known as 'cat-in-a-box' triggers. These policies typically rely on data from agencies like the EA (river gauges) or NOAA (tidal gauges), often taken miles away from the client property. If the gauge data doesn't directly correlate with the loss at the property, this leaves a large window for basis risk to creep in.
These off-the-shelf trigger mechanisms can reduce policy set-up costs, but they fall short in other regards:
- setting prices – cat-in-a-box policies rely heavily on models to predict the relationship between the measuring gauge and the location of the risk, leaving room for uncertainty
- modelling requirements – because of the above uncertainty, each policy requires expensive, bespoke modelling adding to the cost of the policy
- lack of precedent – the relationship between historical gauge data and local flooding is not always linear, making it difficult to write policies without broad (and expensive) contingencies
- covering multiple sources of risk – river and tidal gauges aren't always great at capturing flooding caused by intense rain. Either more triggers are required or, worse, certain types of flood go uncovered
#3: Eye in the sky – parametric satellite
And now we come to the main event – parametric insurance based on satellite data. With high-profile efforts from Elon Musk (Starlink), Jeff Bezos (Project Kuiper) and the UK government, satellite talk is everywhere. In fact, we hear about satellites so often we wonder if we should launch one ourselves.
The proliferation of LEO (Low Earth Orbit) satellites is increasing our remote-sensing capabilities. Big names in the insurance industry are gearing up new parametric products based on these data. It's an exciting development for the field and one that could certainly shake things up. That said, there are a few reasons we aren't launching the FloodFlash space programme (yet).
The first is simple – it ain’t cheap. It currently costs $10m+ to contribute to the space junk floating above our heads. Think of all the FloodFlash sensors you could buy! (Ed. the conservative answer is 100,000 – and to be honest we’d probably provide a bulk discount).
Getting the metal up there is just the start. It isn't cheap to licence this data either. The huge burden of real-time data analysts and scientists required to record, process, and package the measurements quickly adds up. (You also need not one, but a fleet of satellites for a single parametric policy – for reasons I'll go into shortly.) Putting aside the financial hurdles for a moment, let's imagine a world where low-cost satellites (and data) become a reality. There are still two big issues: the reliability and accuracy of the data. In our view, this means satellites aren't the solution for parametric products just yet. Let's explore why:
Parametric satellite data problem #1: lag-lead time
Unlike a ground sensor, LEO satellites aren't stationary. They travel at high velocity, typically in near-polar orbits, which means they pass over a different point on Earth roughly every 90-120 minutes.
The upshot is that even with a fleet of 6+ satellites, the chances of capturing the moment when a flood peaks are slim. So, what do you need to do to derive the peak flood depth? You model it, you make assumptions, you do the best you can with the available data. But you do not, cannot, measure it.
With parametric measurement based on satellites, you have a snapshot of the flood at an undetermined point in the flood’s timeline. There are a lot more steps required before you can confidently claim you have an idea of what the peak could be. And each one contains assumptions and uncertainty.
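To make the sampling problem concrete, here is a toy sketch (the hydrograph shape, peak depth and revisit timing are all invented for illustration): a flash flood that peaks at 0.6 m over a few hours, observed by satellite snapshots 90 minutes apart. Whatever the phase of the passes, the snapshots bracket the peak rather than capture it.

```python
import math

def flood_depth(t_hours: float) -> float:
    """Hypothetical flash-flood hydrograph: water rises and falls
    within a few hours, peaking at 0.6 m around t = 2 h."""
    return 0.6 * math.exp(-((t_hours - 2.0) ** 2) / 0.5)

# A ground sensor effectively samples continuously...
true_peak = max(flood_depth(t / 100) for t in range(0, 601))

# ...while a satellite sees snapshots every ~1.5 h at some
# arbitrary phase offset relative to the flood.
satellite_snapshots = [flood_depth(0.75 + k * 1.5) for k in range(4)]
observed_peak = max(satellite_snapshots)

# The satellite's best observation undershoots the real peak;
# the gap must be filled by modelling, not measurement.
print(f"true peak: {true_peak:.2f} m, best snapshot: {observed_peak:.2f} m")
```

Shift the snapshot timings and the undershoot changes size, but it rarely disappears: the peak sits between passes, and recovering it means modelling the hydrograph rather than reading it off an instrument.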
Parametric satellite data problem #2: it’s not just satellite data in satellite data
This brings me onto my second point: if you’re not measuring the exact peak flood height, what other datasets do you need to get to a number?
- Ground elevation: any measure of flood height is useless if you don't know the ground elevation underneath. So, you need a DTM (digital terrain model). These are often compiled using other satellite imagery or LIDAR (light detection and ranging – essentially laser scanning) measurements, and they are only so accurate themselves. There are well-documented issues in urban areas – unfortunately exactly where most of your parametric insurance will be, because that's where the property is.
- Flood hydrograph: you also need to reverse-engineer the flood hydrograph from the two nearest measurements, depending on the lag-lead time described above. This isn't a simple extrapolation exercise. To do this well you need to all but build your own flood model. To do that, you'll need land-cover usage data, information on drainage systems and soil types, and corroboration with river gauges and on-the-ground measurements, to mention a few. That's a lot of data, all of which is fallible at one level or another.
So you have your satellite data, ground elevation and flood hydrograph. Great – does that give you the peak of flooding? No, it gives you a modelled version of it. You and your clients will need to trust the satellite, the data and the models to know whether the policy has triggered. What's more, the time and resource burden to run these models, incorporate the data, process and validate the results and package it into something that makes sense is huge. And you'll have to do all of that before you pay any claims.
Parametric satellite data problem #3: accuracy and uncertainty
When things are simple, it's easier to be sure. The more complex a solution becomes, the more uncertainty can creep in, both for the underwriter and the customer. Insurers are often seen as omniscient bodies that have all the insight and hold all the cards. That isn't the case though. When underwriting a policy, every dataset used and assumption applied carries uncertainty. The more data and assumptions involved, the less certain the result.
In our example, flood-depth measurements derived from satellites and models carry huge bands of uncertainty, potentially upwards of 30-40cm (a big difference in flood depth). FloodFlash clients could have three distinct triggers sitting within those bounds, so using this data would be untenable both for their claims and for the calculation of their premium.
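You can put rough numbers on what that uncertainty band does to trigger confidence. The sketch below is hypothetical (the function name, the Gaussian error assumption, and the sigma values are ours for illustration): it asks how sure you can be that a trigger was actually met, given a depth estimate with satellite-scale error versus sensor-scale error.

```python
from statistics import NormalDist

def prob_trigger_met(estimate_m: float, trigger_m: float,
                     sigma_m: float) -> float:
    """Probability the true depth met the trigger, assuming the
    depth estimate carries Gaussian error with std dev sigma_m."""
    return 1.0 - NormalDist(mu=estimate_m, sigma=sigma_m).cdf(trigger_m)

# A 0.45 m estimate against a 0.30 m trigger:
satellite = prob_trigger_met(0.45, 0.30, 0.35)   # ~35cm band: far from certain
sensor = prob_trigger_met(0.45, 0.30, 0.005)     # mm-scale accuracy: near certain
print(f"satellite: {satellite:.2f}, sensor: {sensor:.4f}")
```

With a 35cm error band, an estimate 15cm above the trigger still leaves a roughly one-in-three chance the trigger wasn't actually met – and that ambiguity is exactly what has to be priced into the premium or argued over at claim time.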
This has two big implications. Firstly, we're back to our old friend basis risk. Would you risk your job, or should your client risk their home or business, on a best guess? Secondly, how do you translate that uncertainty into an insurance premium? If you aren't sure when, or whether, the trigger has been met, how do you arrive at a price? Err on the side of caution and the customer is hit by potentially unfair premiums. Be gutsy and you risk paying more claims than you ought to. Neither position translates to sustainable success.
And the winner is…
Criticism aside, we at FloodFlash are delighted that more insurers and capacity providers are embracing parametric insurance. If parametric insurance driven by satellite data flourishes, then great: more people will be covered with innovative risk transfer products. Technology advances rapidly, so whilst we don't use satellites for our cover now, we certainly follow their development closely.
For the time being, satellites don't provide the level of reliability that we require for our version of parametric flood insurance. Until that day comes, FloodFlash will be sticking with what works and keeping things simple – a state-of-the-art sensor designed by a former Dyson product engineer.
That's enough about parametric insurance based on satellite data for now! Find out more about the FloodFlash sensor in our article all about the tech. Want to learn about our record-breaking flood insurance? Visit our homepage.