We often see extreme storm and flood events referred to as the ‘100-year storm’ or the ‘50-year flood’, but what does this actually mean?
The return period, also referred to as the ‘recurrence interval’, is a term adopted by scientists and policy makers to estimate the likelihood and severity of extreme events (such as cyclones/hurricanes, flooding, and earthquakes). It is based on the statistical analysis of data (such as historical climatic records, flood measurements, or earthquake frequency data) to provide a probability that an event of a given magnitude will occur in any given year. This probability is often used to assess the risk these events pose to human populations. The concept is based on the magnitude-frequency principle: large-magnitude events (such as major cyclones) are comparatively less frequent than smaller-magnitude events (such as rain showers).
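One common way to turn a historical record into return periods is the Weibull plotting-position formula, T = (n + 1) / m, where n is the number of years of record and m is the rank of the event (largest = 1). The sketch below applies it to a short, entirely hypothetical series of annual peak river flows; the flow values are made up for illustration only.

```python
# Estimate return periods from annual maxima using the Weibull
# plotting-position formula: T = (n + 1) / m, where n is the number of
# years of record and m is the rank of the event (largest = 1).
# The flow values below are hypothetical, for illustration only.

annual_peak_flows = [310, 270, 450, 520, 290, 330, 610, 280, 400, 350]  # m^3/s

n = len(annual_peak_flows)
ranked = sorted(annual_peak_flows, reverse=True)

for rank, flow in enumerate(ranked, start=1):
    return_period = (n + 1) / rank    # estimated return period in years
    annual_prob = 1 / return_period   # chance of exceedance in any given year
    print(f"{flow:4d} m^3/s  rank {rank:2d}  T = {return_period:5.1f} yr  P = {annual_prob:.1%}")
```

With only 10 years of data, the largest observed flow is assigned a return period of just 11 years; longer records allow larger, rarer events to be characterised.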
What does this mean in real terms?
Let’s consider the likelihood of a large hurricane occurring in any given year. Hurricane intensity is measured using the 5-point Saffir-Simpson Scale (Table 1).
If it has been calculated, based on past hurricane records, that a Category 5 hurricane has a return period of 100 years, this essentially means that there is a 1 in 100 chance (1%) that such a storm will occur in any single year. Correct interpretation is key: a return period of 100 years does not mean that we can’t get two ‘100-year hurricanes’ in two or three consecutive years. It is all based on probability, regardless of when the last similar event occurred (see Table 2). For example, a 1% chance in any one year means that, over 1,000 years, we would expect around 10 such events on average. Those 10 Category 5 hurricanes could fall in 10 consecutive years, or be spread across the whole 1,000-year time frame.
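The arithmetic here can be checked directly. Treating each year as an independent 1% trial, the expected number of events over 1,000 years is simply 0.01 × 1,000 = 10, while the chance of seeing at least one such event within a shorter window follows 1 − (1 − p)^n. A minimal sketch:

```python
# A '100-year' event has probability p = 0.01 of occurring in any given year.
# Assuming years are independent, we can compute the expected number of
# events over a long window and the chance of seeing at least one.

p = 0.01  # annual probability of a Category 5 hurricane (1-in-100)

# Expected number of events over 1,000 years:
expected = p * 1000
print(expected)  # 10.0 events on average

# Chance of at least one event within a given number of years:
def prob_at_least_one(p, years):
    return 1 - (1 - p) ** years

print(f"{prob_at_least_one(p, 30):.1%}")   # ~26.0% within 30 years
print(f"{prob_at_least_one(p, 100):.1%}")  # ~63.4% within 100 years
```

Note that even over a full 100 years, the chance of experiencing at least one ‘100-year’ event is only about 63%, not 100% — another reminder that the return period is a statement of probability, not a schedule.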
How can policy makers use this information?
The recurrence interval concept is widely used by policy makers and planners to assess the risks associated with extreme events and to develop suitable management strategies. We can use the calculated probabilities to engineer our environment in such a way as to reduce the impacts of these events. For example, we can use historical records of flood frequency and maximum stage (height) to develop appropriate flood defences (such as levees or barrages), to ensure that we do not develop the land too close to flood-prone areas, and to ensure that our bridges are of sufficient height to withstand flood events.
Let’s take the Thames Barrier in London, UK as an example. This is one of the largest flood barrages in the world: it comprises 10 steel gates spanning 520 m across the River Thames, and was built to protect 125 km² of central London from potential tidal surges. The barrier was designed to protect the city from a ‘1,000-year flood event’ (i.e. a probability of 0.1% in any given year) until the year 2030. This lifespan was calculated from past records of flood events on the River Thames, and also takes into account projections of rising sea level, using a maximum estimate of 8 mm of sea level rise per year. The barrier is tested monthly to assess its ongoing suitability to protect London from rising sea level and associated flooding.
Are large magnitude (extreme) events going to become more frequent under a changing climate?
To fully understand or predict the occurrence of future extreme events we need to understand the issue in three dimensions: event magnitude, return period, and spatial scale (Lane, 2008). With the predicted increase in global air temperature it is now widely expected that the magnitude of extreme events will increase (Mitchell et al., 2006; IPCC, 2013). This means that hurricanes or cyclones may become more intense, and floods may become larger. This is a result of changes to the hydrological cycle and behaviour of air masses across the globe.
It is also possible that these events will become more frequent. In order to accurately assess any changes in the return period of these events, scientists use long-term monitoring programmes. However, there is one major complication here. As extreme events are, by their nature, comparatively rare (as explained by the magnitude-frequency principle outlined above), observing enough of them to draw statistically robust conclusions is going to take many years.
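To see why short records are so fragile, one can simulate what a few decades of observation would actually capture. The Monte Carlo sketch below (with hypothetical parameters) repeatedly draws 50-year records of a ‘100-year’ event and counts how many events each record contains:

```python
import random

# Simulate how many '100-year' events (p = 0.01 per year) occur in a
# 50-year observational record. Repeating the experiment shows why short
# records give unstable frequency estimates: counts vary a lot between runs.

random.seed(42)  # fixed seed so the example is reproducible

def count_events(p, years):
    return sum(1 for _ in range(years) if random.random() < p)

counts = [count_events(0.01, 50) for _ in range(10_000)]

# On average we expect 0.5 events per 50-year record...
print(sum(counts) / len(counts))

# ...but most individual records contain no event at all, so a single
# 50-year record tells us very little about the true event frequency.
share_with_none = counts.count(0) / len(counts)
print(f"{share_with_none:.0%} of simulated records saw zero events")
```

Roughly 60% of the simulated 50-year records contain no event at all, which is why reliably detecting a *change* in the frequency of rare events demands records spanning many decades or longer.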
For example, measurements of storm intensity are very accurate from the 1970s to the present day, thanks to the increasing use of satellites and the monitoring networks run by agencies such as NOAA. Prior to this, however, our records of storm occurrence and frequency are far less accurate, owing to incomplete written records and the fact that many storms occur over the oceans, far from any human populations who might monitor and record them. What is more, our global records are highly spatially variable, as data from sparsely inhabited or developing regions are scarce. Only once the quality and length of our records improve will we be able to more fully understand, and better predict, extreme event frequency.