Many of my recent blogs have been quite focussed on the past. It seems clear that we have a few useful methods that can help us
understand storm frequency, with less certainty about how severe those storms have been.
As powerful as palaeotempestology might be, it is sadly unlikely to be able to
provide enough data for us to truly compare the climate proxy outputs at the
fidelity with which we have been observing storms in the last 100 or so years,
especially since we began to use satellites to observe the weather.
However, as an ex-professional in the world of weather forecasting,
I often get asked about the chances of a certain intensity of storm occurring,
such as, could we see another Hurricane Katrina, or will the Philippines see
another Typhoon Haiyan, or closer to home (UK), when will we see another Great Storm
of 1987 (aka 87J). Of course, these questions are difficult to answer, unless a
storm of similar characteristics is already starting to form and is picked up in numerical
weather prediction models such as the UK Met Office’s Unified Model (UM), or
the U.S. NOAA’s Global Forecast System (GFS) (there are many more).
This blog will talk a little about what I know of the types
of models that are based on physical laws at work in the atmosphere and oceans,
and take supercomputers bigger than my flat (not saying much) to run.
General Circulation Modelling – the granddaddy of physical modelling
General Circulation Models (GCMs) focus on the actual
physical dynamics of the atmosphere and model them by building a system of grid
cells (Lego-like blocks) which talk to each other regarding momentum and heat
exchanges. The size of these grid cells defines the scale of the weather phenomena
that can be modelled.
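To make that idea a bit more concrete, here is a tiny, purely illustrative sketch (my own toy Python, nothing like a real GCM) of a row of grid cells exchanging heat with their neighbours each time step:

```python
import numpy as np

# Toy "atmosphere": a 1-D row of grid cells, each holding a temperature (deg C).
# A real GCM solves the full 3-D equations of motion; this only illustrates the
# idea of neighbouring cells exchanging heat each time step.
n_cells = 10
temps = np.zeros(n_cells)
temps[4:6] = 20.0            # a warm patch in the middle of the row
exchange_rate = 0.1          # fraction of the difference mixed per time step

for step in range(100):
    # Each interior cell nudges towards the average of its two neighbours
    # (a crude stand-in for the diffusion terms a real model integrates).
    neighbour_mean = (np.roll(temps, 1) + np.roll(temps, -1)) / 2.0
    neighbour_mean[0], neighbour_mean[-1] = temps[0], temps[-1]   # fixed ends
    temps = temps + exchange_rate * (neighbour_mean - temps)

print(np.round(temps, 1))    # the heat has spread out along the row
```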
However, there is a trade-off between three facets of a GCM
configuration. With limited computing resources, a balance must be struck
between complexity (the physics that are included in the model in the actual lines
of code), resolution (size of grid cells) and run-length (how much time the model represents, i.e. how far into the future, or perhaps a period in the past). Basically, climate models use Duplo bricks, and high-resolution models use normal Lego bricks. The analogy also works because they can fit together nicely (Figure 1).
Figure 1: Larger Duplo bricks (climate models) and smaller Lego bricks (weather forecasting models) working together. Source: Wiki Commons, contributor: Kalsbricks.
I wonder what type of modelling is analogous to Meccano? Thoughts on a postcard, please, or in the comments section below.
In case you were wondering, the Lego analogy came about since that's what I bought my three-year-old nephew, Harry, for Christmas. The present that keeps on giving! Merry Christmas by the way!
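Back to the trade-off for a moment: a back-of-envelope sketch (my own simplification, not a real cost model) shows why resolution is so expensive. Halving the grid spacing roughly quadruples the number of horizontal cells and also forces a shorter time step, so each halving costs something like an order of magnitude more computing:

```python
# Back-of-envelope scaling for halving the horizontal grid spacing
# (illustrative only; real models scale differently in detail).
# Cells in x and y each double (x4) and the time step roughly halves (x2),
# so the cost grows by roughly a factor of 8 per halving.
base_spacing_km = 100.0   # hypothetical climate-model grid spacing
spacing, cost = base_spacing_km, 1.0

while spacing > 1.0:
    spacing /= 2.0
    cost *= 8.0
    print(f"{spacing:7.2f} km grid -> roughly {cost:>12,.0f}x the computing cost")
```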
Lego Bricks
High-resolution model configurations of some of the big GCMs have been built that can, for example, capture the small-scale eddies around the
headlands of the Isle of Wight in the UK (by the Met Office during their involvement in the London 2012 Olympics). Models with grid-scales on the order of a few hundred metres are used for this detailed work, and are run over a very small region.
Another example of high resolution modelling: A regional
model was employed to reanalyse Cyclone Megi from 2010, which had one of the lowest
central pressures ever recorded. The comparison shows satellite imagery
alongside a model run (by Stuart Webster at the Met Office) with amazing detail of the eye-structure
and outer bands of convection. Because of the presentation of the model data,
the two are difficult to distinguish for the untrained eye (Figure 2).
Figure 2: Cyclone Megi simulation (top) showing eye-wall and convective bands, compared to similar locations and overall size of the real storm in a satellite image from MT-SAT 2. Source: Met Office.
Duplo Bricks
GCMs traditionally struggle to match the intensity of storms
in climate model configurations, as described in the IPCC AR5 chapter on evaluation
of climate models (IPCC
WG1 AR5: 9.5.4.3), but examples such as the Met Office’s Cyclone Megi simulation, and other models with resolutions of 100 km or so, show that the science is still able to model many features of tropical cyclone evolution.
GCMs are also used to model the large-scale planetary interactions that govern phenomena such as ENSO, which are captured well by the selection of models used in the Coupled Model Intercomparison Project (CMIP). CMIP is currently on its fifth incarnation, CMIP5, which is used by the IPCC to understand future climate change. This paper by Bellenger et al. (2015) shows some of the progress made in recent years between CMIP versions; however, because of their similar ability to represent large-scale features when examining ENSO, both CMIP3 and CMIP5 models can be used in conjunction as a broader comparison.
Assembling the ensemble
The “ensemble” is a technique in which a model is run multiple times with slightly different starting conditions to capture a range of uncertainty in the outputs. No model is perfect, so its products shouldn’t be taken at face value, but ensembles can help us by showing the range of
possibilities as we try to represent what we don’t know in the input data.
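As a toy illustration (my own sketch using the classic Lorenz-63 equations, a famous chaotic system, rather than a real GCM), tiny perturbations to the starting conditions grow into a wide spread of outcomes:

```python
import numpy as np

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz-63 system, a classic chaotic toy model."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

rng = np.random.default_rng(42)
n_members, n_steps = 20, 2000
final_x = []
for _ in range(n_members):
    # Each ensemble member starts from the same state plus a tiny perturbation,
    # mimicking uncertainty in the observed initial conditions.
    x, y, z = 1.0 + rng.normal(0.0, 1e-4), 1.0, 1.0
    for _ in range(n_steps):
        x, y, z = lorenz_step(x, y, z)
    final_x.append(x)

print(f"spread of final x across members: {min(final_x):.1f} to {max(final_x):.1f}")
```

Despite the perturbations being tiny, the members end up scattered right across the possible states, which is exactly why a single deterministic run can be misleading at longer ranges.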
Ensembles address some of the observational uncertainty. GCMs’ starting points are based on the network of observations that are connected up
throughout the world, and standardised by the World Meteorological Organisation
(WMO) for weather forecasting. These observations include ground-based observations
(manual and automatic), radar imagery of precipitation, satellite images,
aircraft reconnaissance (with tropical cyclones), sea surface readings, and
weather balloon ascents (and more), which are all assimilated into an initial condition and gradually stepped forward in time by the gridded global model. The starting point is also called ‘the initialisation’ in a forecasting model. For climate models, the starting point can be the current climate, or whatever version of the climate is relevant to the experimental design.
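The assimilation step itself is a huge field in its own right, but the core idea can be sketched very simply (a deliberately simplified, hypothetical example, nothing like the variational or ensemble schemes the big centres actually use): blend the model's previous short forecast (the 'background') with a new observation, weighting each by how much you trust it.

```python
def assimilate(background, observation, background_var, observation_var):
    """Blend a model background value with an observation, weighted by error
    variance. This is the scalar form of the classic 'optimal interpolation'
    (Kalman-style) update."""
    gain = background_var / (background_var + observation_var)
    analysis = background + gain * (observation - background)
    analysis_var = (1.0 - gain) * background_var
    return analysis, analysis_var

# Hypothetical numbers: a short model forecast says 12.0 C at a site, while a
# weather station there reports 14.0 C, and we trust the station a bit more.
analysis, analysis_var = assimilate(background=12.0, observation=14.0,
                                    background_var=1.5, observation_var=1.0)
print(f"analysis: {analysis:.2f} C (error variance {analysis_var:.2f})")
```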
Regardless of how a model is started on its time-stepping through a defined period, ensembles provide an idea of the range of possible outcomes through minor perturbations in the observed conditions, or even in how certain physical processes are handled (i.e. through different parameterisation schemes for features too small to be represented at a given resolution). In my forecasting days at the Met Office, looking at the solutions from a variety of the world’s big weather modelling organisations (NOAA, Met Office, ECMWF, JMA) was colloquially termed ‘a poor man’s ensemble’, as normally an ensemble will consist of many tens of solutions. A similar concept, although not using GCMs, is found in risk modelling applications such as catastrophe loss modelling, where many tens of thousands of simulations are performed to try to statistically represent extreme events, using extreme value theory and statistical fits to the rare events on a probability distribution. A useful paper reviewing methods in loss modelling for hurricanes is Watson et al. (2004).
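To give a flavour of that statistical side, here is a minimal sketch with made-up numbers using a peaks-over-threshold fit, one common extreme value approach (it is not the method of any particular catastrophe model): fit a Generalised Pareto Distribution to losses above a high threshold and read off a rare 'return level'.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up event losses (arbitrary units) standing in for a historical record.
losses = rng.lognormal(mean=2.0, sigma=1.0, size=500)

# Peaks-over-threshold: keep only losses above a high threshold and fit a
# Generalised Pareto Distribution (GPD) to the exceedances.
threshold = np.percentile(losses, 90)
exceedances = losses[losses > threshold] - threshold
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)

# Loss exceeded once in 100 events, given ~10% of events exceed the threshold.
p_exceed = (losses > threshold).mean()
quantile = 1.0 - (1.0 / 100) / p_exceed
loss_1_in_100 = threshold + stats.genpareto.ppf(quantile, shape, loc=0.0, scale=scale)
print(f"threshold: {threshold:.1f}, estimated 1-in-100 loss: {loss_1_in_100:.1f}")
```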
And the weather today...
So numerical weather prediction models used for day-to-day forecasting
are run at high resolution and high complexity, but can only go a week or so into the future. Their accuracy has improved greatly in the last few decades. A forecast for three days ahead now is as accurate as a forecast for one day ahead was in the 1980s, according to the Met Office. And below (Figure 3) is a picture of the European Centre for Medium-Range Weather Forecasts’ (ECMWF) verification scores for different forecast ranges over the decades.
Figure 3: ECMWF’s verification scores for a range of forecast lead times. Source: ECMWF.
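For the curious, one of the headline scores behind charts like Figure 3 is the anomaly correlation coefficient, which measures how well forecast departures from climatology line up with the observed departures. A rough sketch of the calculation (with made-up numbers, not ECMWF's actual verification code) looks like this:

```python
import numpy as np

def anomaly_correlation(forecast, analysis, climatology):
    """Anomaly correlation coefficient (ACC): how well forecast departures from
    climatology line up with the observed (analysis) departures. 1.0 is perfect;
    roughly 0.6 is conventionally taken as the limit of useful skill."""
    f_anom = forecast - climatology
    a_anom = analysis - climatology
    return np.sum(f_anom * a_anom) / np.sqrt(np.sum(f_anom**2) * np.sum(a_anom**2))

# Made-up 500 hPa height values (metres) at a handful of grid points.
climatology = np.array([5500.0, 5520.0, 5540.0, 5560.0, 5580.0])
analysis    = climatology + np.array([30.0, -20.0, 10.0, -40.0, 25.0])
forecast    = climatology + np.array([25.0, -15.0,  5.0, -30.0, 35.0])

print(f"ACC = {anomaly_correlation(forecast, analysis, climatology):.2f}")
```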
Climate models, on the other hand, are run with lower complexity and lower resolution, allowing them to be run out to represent decades. Since large-scale climate modes such as ENSO (or the AMO, or the MJO, or many
others) can influence storm activity, intensity and track, GCMs are invaluable
tools in helping us understand the broader climate, as well as the small-scale
processes.
Basically, GCMs can be run at different resolutions with
different input data depending on the application (e.g. weather forecasting or
climate experimentation). The computing power dictates how these model
configurations perform and the range at which they can produce outputs in a
reasonable run time. They have developed into the key tool for understanding our
weather and climate, and their interactions with the Earth’s surface (via other modelling approaches such as land surface models or ocean circulation models).