Sunday, 10 January 2016

After the storm comes the calm…

This is my last blog before taking a break, but I just wanted to take a moment and use one of my favourite paintings as an inspiration for a few final words.

Figure 1: Snow Storm - Steam-Boat off a Harbour’s Mouth by Joseph Mallord William Turner. Source: Tate.org.uk


‘Snow Storm’ by Turner is a perfect way to end this blog. He is one of my favourite painters and inspired me to go to art school before studying science. He also helped me along the way to becoming a weather forecaster (a career lasting for 10 years) and now to being obsessed with big storms (although the Great Storm of 1987 in the UK also had a strong imprinting effect).

The painting shows his interpretation of a storm; one of many pictures he painted on the subject of vortices. Legend has it that he strapped himself to the mast of a ship during a storm to experience extreme weather first-hand, risking his own life for his art. Whether true or not, it's a great story.

It strikes me as a poignant image in light of today's debate on climate change. To me, it represents the belligerent march of industry as the environment carries on in its naturally ferocious, unforgiving, but ultimately beautiful way. It also motivates me to consider more of what we can do to mitigate and adapt to a future of more severe extreme weather – a future which is likely to have been brought about by the actions of a few industrializing nations.

Even though I'm signing off from this blog for now, I feel it is certainly the beginning of many more. I have learnt a lot about climate change while writing these posts, and I sincerely hope that you have enjoyed what you have read. I will continue to grow in my knowledge of climate issues, and try to synthesize the science with an even and balanced view.

If you too are finding out more about our natural environment and have any climate-related comments or questions about anything I have posted, then please feel free to leave a comment.


But for now, goodbye and happy blogging!

Thursday, 7 January 2016

The Great Hurricane Debate

I have talked about the past in my palaeotempestology series, and some more about how models can help us and what they are capable of representing at different scales. But what can we really say about the present-day climate and recent changes, regarding tropical cyclones like the devastating Hurricane Katrina (Figure 1)?

In this blog, I ask: What is the current thinking on tropical cyclones in a changing climate? 

Figure 1: Hurricane Katrina just before landfall. Source: NASA


When we talk about storms and global warming there is a lot of uncertainty. Although we cannot attribute any particular storm to global warming, we can talk about a shift in the likelihood of certain types of storm. Figure 2 below shows this concept (using temperatures). The graphs show a distribution of possible futures, with the central vertical line representing the most likely. The curves falling away on both sides represent a lower probability of occurrence as we head towards the extremes.

With climate change, what is normal now is likely to be shifted one way or another (warmer, in terms of global temperature), which will change the likelihood of what we currently perceive as extremes. The phrase 'new normal' is an evocative way of emphasising the shift. Below is an example of how different changes in probability distributions affect temperature, but any variable can be displayed in this way, for example the probability of occurrence of storms.

Figure 2: Different changes in probability distributions of temperature. Source: Kodra and Ganguly 2014.
Figure 2 is useful in explaining probability shifts on a normal (Gaussian) distribution. Don't worry about the small text:
  • The top graph (a) shows that shifting the distribution of possible events to the left or right increases the likelihood of extremes (in the shallow tails of the curve) in the direction of the shift.
  • The middle graph (b) shows the effect of changing the maximum probability of occurrence (the most frequently occurring, average conditions) and the resulting 'fattening' of the extremes (also known as 'fat tails'), which represents a change in variability.
  • And the bottom graph (c) shows how changing the shape of the curve may affect average conditions but leave the extremes largely untouched.

This is how we need to think about possible climates in the future, and how distributions of storm intensity may change.
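To make this concrete, here's a minimal sketch in Python (my own illustration, using made-up numbers rather than anything from Kodra and Ganguly) of how a modest shift or widening of a normal distribution changes the chance of exceeding a fixed 'extreme' threshold:

```python
# A toy example of the Figure 2 idea: how shifting the mean (graph a) or
# widening the distribution (graph b) changes the probability of exceeding
# a fixed "extreme" threshold. All numbers are hypothetical.
from scipy.stats import norm

mean, sd = 15.0, 5.0        # hypothetical baseline (e.g. temperature in deg C)
extreme = mean + 2 * sd     # call anything 2 standard deviations above the mean "extreme"

scenarios = {
    "baseline":         norm(mean, sd),
    "shifted mean":     norm(mean + 1.0, sd),   # graph (a): whole curve moves right
    "more variability": norm(mean, sd * 1.3),   # graph (b): fatter tails
}

for label, dist in scenarios.items():
    # survival function sf(x) = P(X > x)
    print(f"{label:16s} P(exceed {extreme:.0f}) = {dist.sf(extreme):.2%}")
# baseline ~2.3%, shifted mean ~3.6%, more variability ~6.2%
```

Notice how a shift of just one fifth of a standard deviation, or a 30% increase in spread, roughly doubles or triples the frequency of 'extremes' - exactly the point the figure is making.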

Dicing with extremes

Another way to put it is with a dice analogy. On a ten-sided die (yes, I do have a ten-sided die, as I used to play Dungeons & Dragons), let's say rolling a 1 or 2 represents a category 1 hurricane and a 9 or 10 a category 5 hurricane (obviously not real probabilities). If we assume global warming is shifting storm intensities towards the more severe, all we are saying is that a storm in the future might be a category 5 if we roll an 8, 9 or 10 on that die, with a reduced chance of what we know as a category 1 storm, now occurring only if we roll a 1.
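If you'd rather see the dice rolled than take my word for it, here's a quick brute-force check in Python (again, purely illustrative probabilities of my own invention):

```python
import random

# Simulate the d10 analogy: today a category 5 means rolling 9 or 10 (20%);
# in the warmed world it means rolling 8, 9 or 10 (30%). Purely illustrative.
random.seed(42)
rolls = 100_000

cat5_today = sum(random.randint(1, 10) >= 9 for _ in range(rolls)) / rolls
cat5_warmed = sum(random.randint(1, 10) >= 8 for _ in range(rolls)) / rolls

print(f"P(category 5), today's die: ~{cat5_today:.0%}")   # ~20%
print(f"P(category 5), warmed die:  ~{cat5_warmed:.0%}")  # ~30%
```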

So what’s the story for tropical cyclones?

There has been much debate over the years. Theoretical reasoning relies largely on the impacts of increasing temperatures on sea surface temperature, and extra water vapour in the air, which are key factors in generating and developing tropical cyclones. However, the climate is not so simple. In a 'Perspectives' piece in Science, Kevin Trenberth (2004) explains how there is large variability in hurricane activity linked to ENSO, but as sea surface temperature and water vapour increase, they could enhance convection and therefore affect the intensity and rain-making potential of tropical cyclones. He also comments that trends in tracks and activity rates are harder to quantify.

It seems that much recent study focuses on changes in the frequency, rather than the intensity, of storms. In a previous blog, I noted that climate models can represent the atmospheric conditions that encourage tropical cyclone genesis; intensity, however, is more difficult, due to the small-scale features involved in storm development and convection that govern exactly how strong the wind becomes or how deep the pressure falls in the eye of the storm. This means that although we may be able to identify trends, we are unlikely to be able to quantify the changes, which limits application.

Through the Coupled Model Intercomparison Project, now on its fifth round of comparisons of the world's biggest climate models (CMIP5), we are steadily making progress. Bellenger et al. (2014) describe how the latest models can capture modes of climate variability that influence tropical cyclone formation and evolution. The paper highlights an improvement in a previous cold bias of sea surface temperatures in the Pacific Ocean, but generally not much difference elsewhere, allowing the use of both CMIP3 and CMIP5 models when assessing ENSO. Climate models now also capture monsoon rains with high confidence (IPCC: WG1 Summary for Policy Makers). Historically, CMIP models have developed vortices that represent tropical cyclones, but they are generally too weak (IPCC: WG1 AR4 Chapter 8).

Turning up the dial on Tropical Cyclone Intensity

The general consensus seems to be that tropical cyclones are not necessarily expected to increase in frequency, but they are likely to increase in severity. A shift in the intensity of storms towards stronger wind speeds is a likely impact of global warming according to Holland and Bruyere (2013), as we can see from Figure 3.

Figure 3: Saffir-Simpson scale hurricane category proportion of total North Atlantic Tropical Cyclones (including Tropical Storms), changing through time (years indicated in the legend). Source: Holland and Bruyere (2013)
  
Back in 2005, Kerry Emanuel also discussed trends in tropical cyclone activity, and how their destructiveness had increased over the previous 30 years. A recent paper by Estrada et al. (2015) concurred with this finding, linking US$2 billion to $12 billion of the losses incurred during the busy 2005 hurricane season to the effects of climate change. Tom Knutson (2004) found similar results by studying the choice of climate models used to define the CO2-related warming, and the choice of parameterization schemes for convection in hurricanes. He found increases in tropical cyclone intensities linked to high-CO2 environments (anthropogenically warmed simulations) in his model analysis.

Christopher Landsea, of the National Hurricane Center in Miami, questioned many of Emanuel's methods in an article in Nature (2005). A debate ensued that caused a rift in the meteorological community. Landsea agreed with Will Gray in concluding that most of the variability in tropical cyclone frequency and intensity derives from natural variability, or at least that the observed data cannot support a significant link to global warming.

Gray preferred a theory linking hurricane intensity to the Thermohaline Circulation in the world’s oceans. An interesting article in the Wall Street Journal in 2006, highlights some of the awkward moments surrounding this passionate debate. Another more scientific angle from the guys at RealClimate.org, highlights where the different sides of the argument were formed regarding global warming’s effect on tropical cyclones.
Looking back now, it seems apparent that the extra attention from two active hurricane seasons in a row, 2004 and 2005, may well have added fuel to the fire of the debate (Trenberth 2005).

Poleward Bound?

The IPCC synthesis of past and future global changes in tropical cyclone frequency provides only low confidence (IPCC: Summary for Policy Makers); however, there are regional patterns that have been elucidated.

Another interesting recent paper, by James Kossin et al. (2014), found a slow poleward migration of the latitude at which tropical cyclones reach their lifetime maximum intensity; a metric which is relatively insensitive to past data uncertainty. The trend is fairly small and covers only the last 30 years, so I wonder whether this is indeed another anthropogenic signal or part of natural variability. The main implication of such a poleward shift in the region affected by tropical cyclones is that areas that have never experienced them before (and are therefore perhaps not built to withstand their destructive force) may become exposed in the future. Conversely, areas nearer the Equator that currently lie in tropical cyclone-affected regions may see a lower frequency of events.

Interesting stuff, and I look forward to more papers on this subject.

Latest models

Mizuta et al. (2012) showed how recent high-resolution climate models (at around 20 km resolution) have improved the characterisation of tropical cyclone intensity, at least to the extent of being able to examine distribution shifts within their own outputs, but they still cannot tell us exactly how future storms will look in the year 2100. Models are also able to represent variability in yearly activity rates at fairly low resolutions (around 100 km) (IPCC: WG1 AR5 Chapter 9). This is impressive when you consider that most tropical cyclones are themselves only a few times bigger than those grid cells.

It's an exciting time for climate modelling. I remember that only five or so years ago, as a forecaster, the resolution of the operational global weather forecasting models was around 20 km. It's amazing to think that this resolution is now being used to experiment with future climates over years and decades.

Conclusion

Recent findings echo the higher-intensity theory, hence increases in tropical cyclone intensity in the late 21st century being described as "more likely than not in the Western North Pacific and North Atlantic" (IPCC: Summary for Policy Makers). The latest IPCC report also concludes that it is "virtually certain" that there have been increases in intense tropical cyclone activity in the North Atlantic since 1970, but has low confidence that this is anthropogenic in origin.

An increase in intensity certainly makes sense to many in the field – more water vapour and higher sea surface temperatures in the system may not create more tropical cyclones, but may well allow them to become stronger. After all, tropical cyclones are only trying to redistribute heat towards the poles, so more heat potential has to end up somewhere… right? But then, if global warming is affecting the poles more than the tropics, surely this counteracts the effect somewhat by reducing the gradient. Perhaps the global gradient has something to do with changes in frequency. This will have to be a future blog subject too!

Ultimately, most of the scientists studying tropical cyclones around the world agree that global warming is happening, and that it is very likely to be anthropogenic in origin (IPCC: Summary for Policy Makers). Although some still contend that there may not be enough evidence to confidently maintain that the intensity of tropical cyclones is increasing globally, there is a strong signal that tropical cyclones have already increased in intensity. Furthermore, there are strong hints that intensity will continue to increase in the future.


Hopefully, we’ll get to a point where the science is settled and we can get on with adapting to the consequences of our changing climate. It certainly would be better to have more study to prove the idea beyond reasonable doubt.

Tuesday, 5 January 2016

Thoughts after COP21 and the role of risk assessment and insurance

I recently published a post on my employer's blog site summarising some of the outcomes of the COP21 meeting in Paris. I focused on some of the developments in the financial world that may be able to help with adapting to the impacts of global warming in the coming decades. You can find the post here if you're interested.

The various schemes and forums that have been set up will no doubt broaden the reach of the insurance industry, whether through government risk pools or through microinsurance, and many are quite new and innovative for the industry. Some of the successful new options are built as bespoke parametric insurance products, which can be very quick to pay out since they are based solely on a defined parameter. The main thing required for such a product is a reliable dataset upon which to build a relationship to losses or costs.

Some financial support for African farmers

A very relevant example, given this year's strong El Niño (see another of my company blogs here) and droughts in some parts of Africa (Figure 1), is the African Risk Capacity (ARC).

Figure 1:  Pumping well-water from a borehole in the village of Bilinyang, near Juba, South Sudan. Source: World Bank/Arne Hoel

ARC's approach uses a drought index and an agreed precipitation threshold which triggers payouts to help small farmers via their governments. The number of African countries signed up is growing year on year, and it is a great example of the insurance sector efficiently providing financial stability for farmers who would otherwise be at risk of losing their livelihoods in extreme drought. According to the ARC website, an analysis by Boston Consulting Group for ARC showed that the potential benefit of running the scheme is 4.4 times the cost of emergency response in times of drought.

Basically, every dollar spent through ARC saves four dollars and forty cents in emergency response costs, and the money will be where it needs to be in a matter of days, rather than the weeks or months it can take for governments to reassign funds or wait for international aid. This is a great example of the financial sector providing a cushion against potential climatic impacts, which may well get worse in the future.
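To show just how mechanically simple a parametric product can be, here's a deliberately stripped-down sketch in Python. To be clear, the trigger levels, limit and linear payout scale below are all numbers I've invented for illustration; ARC's actual drought index and terms are far more sophisticated:

```python
# A toy parametric drought payout: nothing above the trigger, the full
# limit at or below the exit point, scaled linearly in between.
# All figures are hypothetical, not ARC's real terms.
def parametric_payout(seasonal_rainfall_mm: float,
                      trigger_mm: float = 300.0,       # payouts start below this
                      exit_mm: float = 100.0,          # full limit at/below this
                      limit_usd: float = 10_000_000.0) -> float:
    if seasonal_rainfall_mm >= trigger_mm:
        return 0.0
    if seasonal_rainfall_mm <= exit_mm:
        return limit_usd
    shortfall_fraction = (trigger_mm - seasonal_rainfall_mm) / (trigger_mm - exit_mm)
    return shortfall_fraction * limit_usd

print(parametric_payout(350.0))  # 0.0        : rainfall above trigger, no payout
print(parametric_payout(200.0))  # 5,000,000  : halfway between trigger and exit
print(parametric_payout(80.0))   # 10,000,000 : full limit
```

Because the payout depends only on the measured index, not on a loss adjuster visiting every farm, the money can move as soon as the rainfall data are in - which is exactly where the speed comes from.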

Global initiatives from COP21

As I explain in my company blog, the UN Secretary-General Ban Ki-moon announced his climate resilience initiative named A2R, which stands for Anticipate, Absorb, Reshape. Much of the scientific endeavour of projecting climate change, while understanding and providing early warnings for current climate extremes, broadly fits within the 'Anticipate' strand of the initiative. 'Absorb' seems to fit naturally with financial mechanisms, as well as building resilient infrastructure (see here for a link to a fellow MSc blogger) and mitigating actions to reduce CO2. And 'Reshape' is again about resilience, but with more focus on the future: building partnerships between the public and private sectors to foster sustainable growth and better decision-making for future infrastructure.

There are many complementary initiatives getting started in this push for resilience. A special Task Force on Climate-Related Financial Disclosures has been set up, chaired by Michael Bloomberg, who has been an ardent supporter of building climate resilience, galvanised no doubt by having seen first-hand the impacts of severe weather on New York while presiding as mayor during Hurricane Sandy. This aligns well with a UN-endorsed initiative called the '1-in-100 initiative', which aims to encourage companies to better assess and disclose their 'tail' risk (the risk of a 1% probability loss), giving them a financial incentive to become more resilient in order to attract investment.

Public/private sector partnerships

There seems to be a groundswell of activity in the private sector. While COP21 was underway I was invited, through my employer, to attend a meeting in Paris on climate resilience, hosted by one of my company's clients. The hosts are a large international management, engineering and development consultancy, and are therefore interested in how different industries and sectors plan to approach the challenges global warming will bring. It was a Chatham House Rule session so I won't go into any details, but the meeting involved delegates from the World Bank, the European Investment Bank, the Rockefeller Foundation, the Global Sustainability Institute at Anglia Ruskin University, and a senior professor from our very own UCL Geography Department, to name but a few - an interesting line-up indeed.

Discussions covered a number of topics, from city resilience to financial stability with respect to climate change, but my general feeling from the event was that there is a tangible motivation to deal with the future impacts of global warming sooner rather than later. There was a recognition that there is good business opportunity in building sustainable cities, and in offering risk assessment products and services in areas that will see increasing climate risk.

I feel the key to helping climate resilience is to engage all parties, ideally within mutually beneficial partnerships. Initiatives such as the UN's A2R or the Insurance Development Forum (IDF, also announced at COP21) can help. My own job is part of this too: I may have mentioned it before, but my MSc studies are part-time, alongside my day job in the risk and (re)insurance sector. I work with business users and academics to try to match up their needs and capabilities, and work towards tangible outputs through research and internal client-related projects - an applied science coordinator of sorts, coordinating a network of academic institutions working with my company on a wide range of risk-related subjects, including climate extremes.

There are also academic-led partnerships, such as the Engineering for Climate Extremes Partnership (ECEP) hosted by the National Center for Atmospheric Research (NCAR), which aims to "strengthen society's resilience to weather and climate extremes" (ECEP website: About). I also have a separate blog on this on my company website here. This vision can only truly be achieved through partnerships between the public and private sectors.

The power of partnerships is examined in the Stern Review, which also highlights the potential economic downsides of not adapting to climate change. Furthermore, a recent paper by Estrada et al. (2015) shows the economic costs of climate change in terms of hurricane damage. They estimate that 2 to 12% of the normalized losses from the busy 2005 hurricane season in the U.S. are attributable to climate change. It is an interesting finding, but since they also found an increase in both the frequency and intensity of storms in the geophysical data, where other papers have found only an increase in intensity, it seems a finding worth more exploration in a future blog.

In summary, it seems clear that the financial world certainly has a key part to play, and when fully committed to investing in new technology and research, it can act as a powerful driver for change in terms of building resilience and financial stability in the face of changing climate extremes.

Afterthought

I know this blog is supposed to be about storms, but I’m starting to realise just how much climate change is a multidisciplinary challenge and so to focus on one subject, one problem, or one solution can reduce our ability to bring together different expertise and opportunity.

I think it’s healthy to take a step back and look at the wider interaction of various adaptation and mitigation initiatives, and then perhaps work out how they can fit into your own area of expertise and capability to do something useful for society.


Sunday, 3 January 2016

Disastrous Return Periods

When talking about return periods, it's easy to assume that a 1 in 100 year event will occur only once in 100 years. This can lead to misconceptions in the understanding of risk and, consequently, to poor decision-making. The stakes can be fairly high: misconceptions may affect whether or not you invest in flood protection for your home, or, when communicating with decision-makers, may influence engineering and building code regulations. The reality is that a 1 in 100 year storm can happen once in 100 years, twice in the same 100 years, three times, or even not at all! At the risk of ranting, it strikes me as a hugely misleading communication tool which continues to pervade the risk management world, and communications in the general media today.
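To see why, treat each year as an independent 1% chance (a simplification, since it ignores clustering and climate variability) and count the events in a 100-year window - the count follows a binomial distribution. A quick sketch in Python:

```python
from math import comb

# Number of "1 in 100 year" events in a 100-year window, modelling each
# year as an independent 1% chance (a simplification).
n, p = 100, 0.01
for k in range(4):
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"P(exactly {k} events in {n} years) = {prob:.1%}")
# 0 events: 36.6%, 1 event: 37.0%, 2 events: 18.5%, 3 events: 6.1%
```

So the 'once in 100 years' outcome is barely more likely than no event at all, and there's roughly a one-in-four chance of two or more.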

I am not alone in this. Francesco Serinaldi, an applied statistician at Newcastle University, wrote a paper in 2014 called 'Dismissing Return Periods!'. Using an exclamation mark in the title of an academic paper gets a thumbs up from me! He goes into much more detail than I could on this subject, describing how univariate frequency analysis can be prone to misconceptions when return period terminology is used.

Serinaldi also suggests better alternatives for engineering and risk modelling applications: the use of probability of exceedance, and the risk of failure over the lifetime of a project (or perhaps the average life expectancy of a person). These describe more objective and robust quantifications of the frequency of specific events, or categories of events, defined by a parameter or index.

Perception of safety

Another example is described by Greg Holland in his blog on the Engineering for Climate Extremes Partnership (ECEP) website. Discussing Hurricane Katrina, he suggests that describing the levee protection as able to withstand a 1 in 100 year storm was misleading, evoking a false 'sense of safety'. He elaborates (as I mentioned above) that a 1 in 100 year storm simply has a 1% chance of happening in any given year (irrespective of climate variability). He explains that this means there is a 65% chance that such a storm would occur in the next 100 years, and, changing the time period, a 25-30% chance that it would occur within the next 30 years. This starts to concern a much wider group of stakeholders, including small businesses and home owners.
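Under the same independent-1%-per-year simplification as before, we can check those figures directly; I get ~63% over 100 years (close to the 65% Holland quotes) and ~26% over 30 years:

```python
# Probability of at least one "1 in 100 year" event over various horizons,
# assuming an independent 1% chance each year.
p = 0.01
for years in (30, 50, 100):
    at_least_one = 1 - (1 - p) ** years
    print(f"{years:3d} years: P(at least one event) = {at_least_one:.0%}")
# 30 years: 26%, 50 years: 39%, 100 years: 63%
```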

Return periods can also vary widely depending on the spatial scale of an event. The Great Storm of 1987 in the UK was reported as a 1 in 200 year event for many of the southern counties, whereas for parts of the south coast it has been assessed as more like 1 in 10 years! This is misleading in that the storm itself was large enough to cover a broad swathe of land with severe impacts, yet within that one storm the return period estimates vary. It depends on how the calculations are conducted, which data are used, and the thresholds assigned to define the event.

One generalised return period statistic is not adequate to clearly describe the risks associated with a storm. The more objective methods suggested by Serinaldi are an alternative for engineering applications.

Public perception tangent…

This is a bit of a tangent, but it has made me think about public communications too. I think that the use of analogues or a narrative, to recreate conditions in a viewer's mind using past experiences, is very powerful in changing perceptions and behaviour, much more so than a misleading return period estimate. I find it fascinating how perception can change based on storytelling: one thing at which humans have always excelled.

An interesting paper by Lowe et al. (2006) studies the effects of blockbuster movies such as 'The Day After Tomorrow', which can skew perceptions of risk, but can also sensitise viewers and increase their motivation to act on climate change. The paper also notes a lack of knowledge on how to use this new-found Hollywood-induced motivation. This is an interesting area of research in its own right.

Too many blog subjects, not enough time!

Saturday, 2 January 2016

What's in a name?

Hurricane Katrina left a scar on the American psyche. It was a huge storm that ravaged the Gulf Coast and flooded much of New Orleans and the surrounding area. The handling of the aftermath received much criticism, with responses slow and emergency services overwhelmed. The flooding was exacerbated by inadequate levee protection, unable to deal with over 30 feet of storm surge (according to the NOAA post-event report), with catastrophic consequences.

I do find it odd, then, that presidential candidate Jeb Bush decided to use "Hurricane Katrina" as a nickname for State Senator Katrina Shealy, as a form of praise.

Despite the characterisation of a "strong" and "fierce" storm apparently matching State Senator Shealy's personality, it does seem in poor taste, even though it has been ten years since Hurricane Katrina struck. I doubt this would have been attempted soon after the event. Furthermore, I wonder whether this nickname would have been used in the worst-affected areas around New Orleans. Which raises the question: was it acceptable in Florida, which was also struck by Hurricane Katrina?

Ultimately, the intent was not malicious, and so I would hope it has not offended anyone, but I wonder if the general population also sees this as poor judgement, and whether it may have affected opinion. Perhaps it was simply a further attempt to make the news. Jeb's stance on anthropogenic climate change seems to be more accepting than his brother George's, at least.

It also reminded me of a project in 2014, by 350 Action, to lobby the World Meteorological Organisation to change the names of severe storms to those of famous climate deniers, in an attempt to associate their unwillingness to accept the science linking global warming to anthropogenic fossil fuel burning with actual disasters.

Here is a short presentation explaining the rationale behind the video released by 350 Action:



According to the project website, the video went viral, gaining over 3 million views.

I do hope that a little more judgement is used when talking about extreme storms, as their impacts last way beyond the superficial recovery. This is why tropical cyclone names are retired when a storm has been particularly deadly or costly. However, if we accept the comment by Jeb Bush, perhaps some other politicians would be happy to take on the name of a storm during an event? I think not.


Tuesday, 29 December 2015

Will GCMs really tell us everything we need to know about climate change?

In a previous blog, I discussed General Circulation Models (GCMs) at varying resolutions.

Here, I’ll highlight a few limitations, especially when looking at tropical cyclones.

Even though GCMs are able to capture tropical cyclone tracks and storm formation, providing hugely valuable forecasts for public safety, we should be aware of their limitations when looking at climate-scale variability and change. For example, looking seasons or years ahead into a climate projection, GCMs have less ability to say how many storms there might be and how intense. Hurricane season forecasts are put together using a variety of statistical and GCM-based techniques, and we get a lot of value from both approaches. But there is only so much that we can say.

However, papers by Deser et al. (2012) and Done et al. (2014) are useful in determining what can be explained on a seasonal or decadal timescale. James Done found, based on one season of regional climate model experiments, that around 40% of the variability in tropical cyclone frequency in the North Atlantic is simply natural variability, not associated with forcing from greenhouse gases, volcanoes, aerosols or solar variability (external forcing). He notes, from Deser et al. (2012), that at regional scales internal variability can become greater than externally forced variability. This also highlights the difficulty in attributing a single regional event to changes in climate on a global scale.

To sum up, GCMs:
  • as numerical weather prediction models, offer great ability to provide operational forecasts and warnings on a day-to-day basis;
  • as global/regional climate models, allow us to experiment with the atmosphere and explore sensitivities in the processes that bring about extremes of climate, global climate variability or climate change.


When looking at seasonal or longer timescales, GCMs run at lower resolution and so lose the ability to capture the small-scale features that drive tropical cyclones; instead we have to model the large-scale influences to look at more general shifts in the probabilities of single or seasonal phenomena (e.g. hurricanes or droughts).

Deser et al. (2012) also call for greater dialogue between science and policy/decision-makers to improve communication and avoid raising expectations of regional climate predictions. I totally agree. Better communication between scientists and stakeholders is important because talking about storms and climate change is highly political. Poor communication can lead to gross misrepresentations, both by those aiming to mitigate and adapt to climate change and by those who do not accept that climate change is a concern.

Future for GCMs?

I can see how GCMs have great ability in helping us understand the sensitivities of the climate system, and as they improve and as computing power increases (along with big data solutions), so too should our understanding of various climate processes. In fact, growth in GCM capabilities may well increase the level of uncertainty as we start to model more and more complexity. I do wonder where the next big step will be, though. Between CMIP3 and CMIP5 (two rounds of climate model comparison projects – see previous blog), Bellenger et al. (2014) showed some progress, but also commented that overall there were limited improvements in how ENSO (a dominant mode of climate variability) is characterised.

An interesting article by Shackley et al., back in 1998, called "Uncertainty, Complexity and Concepts of Good Science in Climate Change Modeling: Are GCMs the Best Tools?", raises a range of interesting discussion points asking whether GCM-based climate science is actually the best approach from a number of perspectives. Are there alternative types of models that could allow us to better engage with the public, with policy makers or with the private sector? There are certainly alternatives that show promise, as discussed on Judith Curry's blog; she is of the opinion that climate modelling is in a "big expensive rut." I hope I can find time to expand on this interesting topic here.


Personally, I am a big fan of GCMs. It's amazing that they can represent the atmosphere with such high fidelity, but it's good to ask these questions and not to forget alternative approaches which may be much more practical and 'fit-for-purpose' in particular situations.

In a future blog, I’ll discuss a little about how we talk about probability of future events, and then follow on with a blog on how we currently stand on tropical cyclones and climate change. 

Saturday, 26 December 2015

A Model Family

Many of my recent blogs have been quite focussed on the past. It seems clear that we have a few useful methods that can help us understand storm frequency, with less certainty about how severe storms have been. As powerful as palaeotempestology might be, it is sadly unlikely to provide enough data for us to compare climate proxy outputs with the fidelity with which we have been observing storms over the last 100 or so years, especially since we began to use satellites to observe the weather.

However, as an ex-professional in the world of weather forecasting, I often get asked about the chances of a certain intensity of storm occurring: could we see another Hurricane Katrina, will the Philippines see another Typhoon Haiyan, or, closer to home (UK), when will we see another Great Storm of 1987 (aka 87J)? Of course, these questions are difficult to answer, unless a storm of similar characteristics is starting to form and is picked up in numerical weather prediction models such as the UK Met Office's Unified Model (UM) or the U.S. NOAA's Global Forecast System (GFS) (there are many more).

This blog will talk a little about what I know of the types of models that are based on the physical laws at work in the atmosphere and oceans, and take supercomputers bigger than my flat (not saying much) to run.

General Circulation Modelling – the granddaddy of physical modelling

General Circulation Models (GCMs) focus on the actual physical dynamics of the atmosphere and model them by building a system of grid cells (Lego-like blocks) which talk to each other regarding momentum and heat exchanges. The size of these grid cells defines the scale of the weather phenomena that can be modelled.

However, there is a trade-off between three facets of a GCM configuration. With limited computing resources, a balance must be struck between complexity (the physics included in the model, in the actual lines of code), resolution (the size of the grid cells) and run-length (how much time the model represents, i.e. how far into the future, or a period in the past perhaps). Basically, climate models use Duplo bricks, and high-resolution models use normal Lego bricks. The analogy also works because they can fit together nicely (Figure 1).

Figure 1: Larger Duplo (climate models) bricks and smaller Lego (weather forecasting models) bricks working together. Source: Wiki Commons Contributor: Kalsbricks
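To give a feel for why that trade-off is so brutal, here's a back-of-envelope scaling in Python. This is my own rough rule of thumb, not from any of the sources above: halving the horizontal grid spacing quadruples the number of grid columns and (via the CFL stability condition) roughly halves the allowable timestep, so cost grows roughly with the cube of the refinement factor:

```python
# Rough compute-cost scaling for refining a model's horizontal grid,
# relative to a 100 km baseline. Cost ~ (x-points) * (y-points) * (timesteps),
# so it grows with the cube of the refinement factor. A back-of-envelope
# rule only: real models also change vertical levels, physics, etc.
def relative_cost(dx_km: float, dx_ref_km: float = 100.0) -> float:
    refinement = dx_ref_km / dx_km
    return refinement ** 3

for dx_km in (100, 50, 20, 1):
    print(f"{dx_km:4d} km grid: ~{relative_cost(dx_km):>12,.0f}x the cost of the 100 km run")
# 100 km: 1x, 50 km: 8x, 20 km: 125x, 1 km: 1,000,000x
```

Which is why a weather model can afford Lego-brick resolution for a ten-day forecast, while a century-long climate run has to settle for Duplo.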

I wonder what type of modelling is analogous to Meccano? Thoughts on a postcard, please, or in the comments section below.

In case you were wondering, the Lego analogy came about since that's what I bought my three-year-old nephew, Harry, for Christmas. The present that keeps on giving! Merry Christmas, by the way!

Lego Bricks

High-resolution configurations of some of the big GCMs have been built that can, for example, capture the small-scale eddies around the headlands of the Isle of Wight in the UK (run by the Met Office during their involvement in the London 2012 Olympics). Models with grid-scales of the order of a few hundred metres are used for this detailed work and are run over a very small region.

Another example of high-resolution modelling: a regional model was employed to reanalyse Cyclone Megi from 2010, which had one of the lowest central pressures ever recorded. The comparison shows satellite imagery alongside a model run (by Stuart Webster at the Met Office), with amazing detail in the eye structure and outer bands of convection. Because of the presentation of the model data, the two are difficult to distinguish for the untrained eye (Figure 2).


Figure 2: Cyclone Megi simulation (top) showing the eye-wall and convective bands, compared to the similar location and overall size of the real storm in a satellite image from MTSAT-2. Source: Met Office.

Duplo bricks

GCMs traditionally struggle to match the intensity of storms in climate model configurations, as described in the IPCC AR5 chapter on the evaluation of climate models (IPCC WG1 AR5: 9.5.4.3), but examples such as the Met Office's Megi simulation, and other models with resolutions of 100 km or so, show that the science is still able to model many features of tropical cyclone evolution.

GCMs are also used to model the large-scale planetary interactions that govern phenomena such as ENSO, which are captured well according to the selection of models used in the Coupled Model Intercomparison Project (CMIP). CMIP is currently on its fifth incarnation, CMIP5, which is used by the IPCC to understand future climate change. A paper by Bellenger et al. (2014) shows some of the progress made in recent years between CMIP versions; however, due to their similar ability to represent large-scale features when examining ENSO, both CMIP3 and CMIP5 models can be used in conjunction for a broader comparison.

Assembling the ensemble

The "ensemble" is a technique in which a model is run multiple times with slightly different starting conditions to capture a range of uncertainty in the outputs. No model is perfect, so its products shouldn't be taken at face value, but ensembles can help by showing the range of possibilities as we try to represent what we don't know in the input data.

This addresses some of the observational uncertainty. GCM starting points are based on the network of observations connected up throughout the world and standardised by the World Meteorological Organisation (WMO) for weather forecasting. These observations include ground-based observations (manual and automatic), radar imagery of precipitation, satellite images, aircraft reconnaissance (with tropical cyclones), sea surface readings, weather balloon ascents (and more), which are all assimilated into an initial condition and gradually stepped forward in time by the gridded global model. The starting point is also called 'the initialisation' in a forecasting model. For climate models, the starting point can be the current climate, or whatever version of the climate is relevant to the experimental design.

Regardless of how a model is started on its time-stepping through a defined period, ensembles provide an idea of the range of possible outcomes through minor perturbations in observed conditions, or even in how certain physical processes are handled (i.e. through different parameterisation schemes for features too small to be represented at a given resolution). In my forecasting days at the Met Office, looking at the solutions from a variety of the world's big weather modelling organisations (NOAA, Met Office, ECMWF, JMA) was colloquially termed 'a poor man's ensemble', as normally an ensemble will consist of many tens of solutions. A similar concept, although not using GCMs, is found in risk modelling applications such as catastrophe loss modelling, where many tens of thousands of simulations are performed to try to statistically represent extreme events, using extreme value theory and statistical fits to the rare events on a probability distribution. A useful paper reviewing loss modelling methods for hurricanes is Watson et al. (2004).
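To make the ensemble idea tangible, here's a toy example in Python using the Lorenz-63 equations (a classic stand-in for chaotic atmospheric dynamics, nothing like a real GCM): start ten 'members' from near-identical initial states and watch how quickly tiny perturbations grow into a wide spread of outcomes:

```python
import random

# A toy "ensemble" on the chaotic Lorenz-63 system: ten members whose
# initial x-values differ by at most 0.001, integrated with simple
# Euler steps. The spread between members grows dramatically with time.
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

random.seed(1)
members = [(1.0 + random.uniform(-1e-3, 1e-3), 1.0, 20.0) for _ in range(10)]

for step in range(1, 2001):
    members = [lorenz_step(*m) for m in members]
    if step % 500 == 0:
        xs = [m[0] for m in members]
        print(f"step {step:4d}: spread in x = {max(xs) - min(xs):8.4f}")
```

The perturbations start four orders of magnitude smaller than the state itself, yet before long the members disagree wildly - the same reason forecast ensembles fan out with lead time, and why the spread itself is useful information.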

And the weather today...

So numerical weather prediction models used for day-to-day forecasting are run at high resolution and high complexity, but can only go a week or so into the future. Their accuracy has improved greatly in the last few decades: a forecast for three days ahead is now as accurate as a forecast for one day ahead was in the 1980s, according to the Met Office. Below (Figure 3) is the European Centre for Medium-Range Weather Forecasts' (ECMWF) verification of different forecast ranges over the decades.


Figure 3: ECMWF’s verification scores for a range of forecast ranges. Source: ECMWF.
Climate models, on the other hand, are run with lower complexity and lower resolution, allowing them to be run out to represent decades. Since large-scale climate modes such as ENSO (or the AMO, the MJO, and many others) can influence storm activity, intensity and track, GCMs are invaluable tools in helping us understand the broader climate, as well as the small-scale processes.


Basically, GCMs can be run at different resolutions with different input data depending on the application (e.g. weather forecasting or climate experimentation). The available computing power dictates how these model configurations perform and the range at which they can produce outputs in a reasonable run time. They have developed into the key tool for understanding our weather and climate and their interactions with the Earth's surface (via other modelling approaches such as land surface models or ocean circulation models).