
Climate Models Assignment


Climate models parameterize many unresolved processes.  Why does this lead to uncertainty in predictions of future climate change and what approaches could be taken to reduce these uncertainties?

To tackle this question, one first has to be familiar with what the concept of parameterisation actually means. Broadly speaking, it is a way of representing the aggregate effect of unresolved processes through simplified functions of the resolved model variables. In climate modelling specifically, parameterisation is used where the underlying processes driving the behaviour are too small in scale (such as small-scale radiation or convection processes), or too complex, to handle efficiently within the model. Instead of, for example, explicitly calculating the absorption and emission behaviour of every air molecule, these processes are averaged over larger scales and “wrapped up” into functions of the grid-scale variables. The GARP notes that a parameterisation has succeeded when it provides a quantitative treatment that accurately reproduces the location, frequency and intensity of small-scale processes on a resolvable grid scale.

Clouds are one phenomenon that is commonly parameterised. Cumulus clouds in particular, which typically have scales of less than one kilometre, cannot be resolved by most climate modelling grids. Additionally, the processes that govern cloud formation and behaviour are very complex, making parameterisation necessary.

In early parameterisations of this kind, an air column was considered unstable if its temperature fell with height faster than a critical (adiabatic) lapse rate, in which case the column was vertically mixed. More complex schemes recognise that a column of air hardly ever overturns in its entirety, and also account for entrainment. Measures of temperature and moisture are usually the most relevant inputs for cloud parameterisation.
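As an illustration of this kind of scheme, the following is a minimal sketch of a dry convective-adjustment check, assuming a simple column of temperatures and heights and a critical (dry adiabatic) lapse rate; the function name and example values are hypothetical.

```python
import numpy as np

# Minimal sketch of a dry convective-adjustment check (illustrative only).
# Assumed inputs: temperature T (K) and height z (m) on a vertical column.
DRY_ADIABATIC_LAPSE = 0.0098  # K per metre (g / c_p for dry air)

def is_unstable(T, z, critical_lapse=DRY_ADIABATIC_LAPSE):
    """Return True for layers whose lapse rate exceeds the critical value."""
    lapse = -np.diff(T) / np.diff(z)   # observed lapse rate of each layer
    return lapse > critical_lapse      # unstable layers would be mixed

# Example column: the surface is much warmer than the air aloft.
z = np.array([0.0, 1000.0, 2000.0, 3000.0])
T = np.array([300.0, 288.0, 281.0, 275.0])
print(is_unstable(T, z))   # [ True False False] -> mix the lowest layer
```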


Stratiform clouds tend to be more easily parameterised, as they generally form when the air humidity reaches a high enough value. However, even if a grid box of the model has not reached this value, sub-grid-scale clouds can still form, since the local humidity in part of the box may be higher than the mean and sufficiently high to allow condensation. Many models therefore set this threshold humidity lower than the value actually required for saturation. Even so, a near-binary treatment of each box (either there are clouds, or there aren’t) removes much of the more detailed but relevant interaction between clouds and the environment. For example, clouds have a major impact on radiative processes and the energy balance, as one may consider a cloud’s blackbody-like behaviour of absorption and emission.
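One widely used diagnostic form in this spirit (in the manner of Sundqvist-type schemes) maps grid-mean relative humidity above a critical threshold onto a cloud fraction. The sketch below is illustrative only; the threshold value and function name are assumptions.

```python
import numpy as np

def cloud_fraction(rh, rh_crit=0.8):
    """Diagnostic sub-grid cloud fraction from grid-mean relative humidity.

    Clouds start to form once grid-mean RH exceeds rh_crit (< 1), reflecting
    that parts of the box may already be saturated; full cover at RH = 1.
    """
    rh = np.clip(rh, 0.0, 1.0)
    frac = 1.0 - np.sqrt((1.0 - rh) / (1.0 - rh_crit))
    return np.clip(frac, 0.0, 1.0)

print(cloud_fraction(np.array([0.7, 0.85, 0.95, 1.0])))
# approx. [0.  0.13  0.5  1.]  (values below rh_crit give zero cover)
```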

As alluded to above, this process of parameterisation can lead to uncertainty. It arises, for example, because parameterisations are not wholly precise, “glossing over” smaller details or absorbing them into larger processes; representing a line spectrum in terms of wide bands, for instance, may obscure some of the finer features that give rise to interesting behaviour. A small uncertainty in one of the model’s parameters can then affect subsequent modelling steps like dominoes. Parameters to which the model is very sensitive (for example, those that enter exponentially) can have especially large knock-on effects.
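A toy illustration of this sensitivity, assuming hypothetical linear and exponential dependencies on a single parameter:

```python
import numpy as np

# Perturb a parameter by 1% and compare the response of a linear and an
# exponential dependence (both functional forms are made up for illustration).
a = 5.0                 # some parameter value
da = 0.01 * a           # 1% uncertainty

linear = lambda a: 2.0 * a
expon  = lambda a: np.exp(a)

for name, f in [("linear", linear), ("exponential", expon)]:
    rel_change = (f(a + da) - f(a)) / f(a)
    print(f"{name:12s}: 1% input error -> {100 * rel_change:.1f}% output error")
# linear      : 1% input error -> 1.0% output error
# exponential : 1% input error -> ~5.1% output error (roughly a times the 1%)
```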

Additionally, uncertainties arise from the model resolution. For low resolutions this is intuitive: if one were to model the Earth as two grid boxes, one for each hemisphere, and then assign a percentage cloud cover to each, much information is lost: where do these clouds occur? How large do they tend to be? How do winds affect them? Conversely, using a very high resolution gives rise to errors associated with the statistical treatment of processes within each box. Many statistical assumptions become inaccurate for sample sizes that are too small, as there is no guarantee the sample accurately reflects the underlying parent distribution. This is particularly relevant for convective processes, which rely heavily on statistical physics. Stratiform clouds, specifically, tend to be overrepresented due to their size and the oversimplification of their thermal behaviour.

What can be done to reduce this uncertainty? A simple answer may be to be more explicit in the modelling, but this defeats the purpose of parameterisation in the first place. The issues with simply increasing resolution have also been touched on.

To implement parameterisations with low uncertainties, one must calibrate the model. Calibration involves finding both the structural form of the parameterisation (eg, whether variables are related linearly, exponentially, etc) and the values of the parameters themselves; in essence, it means finding both the type of function and the coefficients of that function that best describe the observations. This can be done, for example, by running a high-resolution model of the explicit calculations, determining the main drivers, and making sure they are accurately and precisely represented in the parameterised model by comparing its output to reality. Precise observations are therefore helpful in obtaining a good calibration.
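A toy illustration of this calibration step, fitting two candidate structural forms to synthetic reference data (standing in for output from a high-resolution run or observations); the data and function names are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy calibration: choose between candidate structural forms and fit their
# coefficients against reference data (here synthetic).
rng = np.random.default_rng(0)
x_ref = np.linspace(0.0, 2.0, 50)
y_ref = 1.5 * np.exp(0.8 * x_ref) + rng.normal(0.0, 0.05, x_ref.size)

def linear(x, a, b):
    return a * x + b

def exponential(x, a, b):
    return a * np.exp(b * x)

for name, form in [("linear", linear), ("exponential", exponential)]:
    params, _ = curve_fit(form, x_ref, y_ref, p0=[1.0, 1.0])
    rmse = np.sqrt(np.mean((form(x_ref, *params) - y_ref) ** 2))
    print(f"{name:12s} fit: params={params.round(2)}, RMSE={rmse:.3f}")
# The form with the lower RMSE (and physically sensible parameters) is kept.
```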

Returning to the example of clouds, parameterising formation simply as an overturning column of air, and using the dry and moist lapse rates to quantify the air’s water saturation, tends to predict excessive precipitation under a number of conditions.

Instead, a mass flux scheme can be introduced (ie, a better “function” to describe the behaviour), which represents temperature and humidity in terms of large-scale, resolved variables. It also better encompasses the microphysics of convection via updrafts and evaporatively driven downdrafts.
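In bulk mass-flux schemes, the unresolved vertical transport of a quantity (call it phi) is commonly approximated in a form along the lines of the following standard textbook expression, which is not specific to any particular model:

$$\rho\,\overline{w'\phi'} \;\approx\; M_u\,(\phi_u - \overline{\phi}), \qquad M_u = \rho\, a_u\, w_u,$$

where M_u is the updraft mass flux (air density times the fractional area covered by updrafts times the updraft vertical velocity), phi_u is the in-updraft value of the quantity, and the overbar denotes the grid-box mean.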

In summary, while parameterisation is a useful and often necessary tool, it comes with a number of issues attached: chiefly the need to calibrate the model, and uncertainties that grow larger the poorer the calibration is. However, the processes usually represented by parameterisation are important factors in the climate (eg clouds), and as such the effort is well worth it.

 

To have more accurate representation of future climate change we should increase the resolution of climate models to below 1km.  Present arguments for and against this point of view.

Over the past decades, with the advent of more and more powerful computers, it has been possible for climate scientists to use higher and higher resolutions when modelling the Earth’s climate. While in 1990, resolutions of 500km were common, by 1996, this had improved to 250km, by 2001, to 180km, and by 2007 to 110km, as the IPCC reports. Nowadays, resolutions in the tens or even single digits of kilometres can be used. But to what purpose?

A major, obvious advantage of using higher resolutions in climate models is that smaller and smaller scale phenomena can be accurately modelled. This is particularly relevant for eddies, currents, cyclones and similar processes at the km scale. This increased level of detail leads to a more complex simulation of the climate and its intricacies, and thus one that is more true to reality. This, in turn, means that the model can be more confidently compared to any existing observations.

In a similar vein, aside from representing processes in more detail, a higher resolution also allows for a more intricate and thus more realistic representation of topographical features such as mountains and coastlines. Mountains can have a large impact on climate because they affect airflow, so representing them accurately benefits the simulation as a whole. While a coarse grid may capture the high- and low-altitude areas of the simulation domain, a higher resolution better represents steep gradients, which cause different responses in air circulation patterns than shallow gradients.

Further, many processes that occur on small scales but are relevant to larger-scale processes currently have to be parameterised. An example is clouds, which play a major role in radiative processes and are generally represented via parameterisation. One could write a whole essay about the issues arising from parameterisation, but in short, it tends to introduce errors, as physical processes are not explicitly modelled. Large convective clouds (eg thunder clouds) could be directly modelled with a finer grid, albeit with some caveats. Firstly, the microphysical processes governing the formation of the cloud itself (condensation of water vapour) still need to be parameterised. Secondly, shallow cumulus clouds in particular usually fall below the 1 km range, meaning they would also still need a parametric representation.

Finally, a finer grid, and thus more data, has a number of statistical benefits. It helps with accurately describing the mean and variance of data, while also better representing extreme values that may occur at local scales, which decreases the model bias. More data also generally leads to a better and more accurate model fit. Additionally, effects arising from finite difference schemes such as diffusion are reduced, as we move towards the infinitesimal limit of the scheme.

As good as all of these benefits may sound, high-resolution climate models have a major drawback: computational cost, with all its associated problems. This expresses itself in the time taken to run a model, the computer hardware required, data storage, and power draw.

Generally, the time a model run takes is equal to the time the computer needs for one floating point operation, times the number of operations per equation, times the number of equations per cell, times the number of cells, times the number of timesteps, times the total number of model runs. One can quickly see that increasing the resolution from, for example, 100 km to 10 km would mean a 100-fold increase in grid boxes (in 2D), as well as likely requiring more vertical levels. Implicitly, a smaller spatial grid size also requires a smaller time step in order to satisfy the CFL condition: keeping the time step large while using a very fine grid leads to instabilities. Physically, one can imagine this as something occurring in a box at t=0 and, at the next time step, appearing five boxes to the north-east; based on this, we do not know whether the phenomenon moved there in a straight or curved line, or perhaps collapsed and re-formed. The decrease in time step further increases the total computational cost.
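A back-of-envelope version of this scaling, under the simplifying assumptions noted in the comments (2-D horizontal grid, fixed domain, CFL-limited time step, everything else unchanged):

```python
# Illustrative cost scaling for a resolution increase from 100 km to 10 km,
# assuming a 2-D horizontal grid over a fixed domain, a CFL-limited time step,
# and an unchanged number of vertical levels and equations per cell.
old_dx, new_dx = 100.0, 10.0              # grid spacing in km

cells_factor    = (old_dx / new_dx) ** 2  # 10x per horizontal dimension
timestep_factor = old_dx / new_dx         # CFL: dt must shrink with dx
total_factor    = cells_factor * timestep_factor

print(f"grid boxes: x{cells_factor:.0f}, time steps: x{timestep_factor:.0f}, "
      f"total cost: x{total_factor:.0f}")
# grid boxes: x100, time steps: x10, total cost: x1000
```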
The time-efficiency of a model can be measured in terms of simulated years per wall clock day. For long-term runs to be viable, we aim for 3-5 simulated years per wall clock day – and even present-day best efforts with thousands of CPUs still fall short of that.

As well as taking longer to run, these high-res models also have higher power requirements. A 1 km resolution run has a power requirement of ~600 MWh for a single simulated year, which translates to 22 GWh for a 30-year simulation – enough to power 22 million UK homes for an hour. Generally, increasing the resolution from 50 km to 2 km comes with a roughly 15,000-fold increase in power requirements. The ethics of this are questionable: is it right to use energy equivalent to thousands of barrels of oil (1 barrel of oil is equivalent to ~1.7 MWh) in order to better predict climate change?
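Taking the figures quoted above at face value, the oil-barrel comparison works out roughly as:

$$\frac{22\ \text{GWh}}{1.7\ \text{MWh per barrel}} \approx 1.3 \times 10^{4}\ \text{barrels of oil}$$

for a single 30-year, 1 km simulation.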

Some ways to try to compensate for the huge computational requirements include hardware improvements, such as specialist, powerful processors, or changes to the model so that only certain regions are modelled at high resolution. For example, one may, in spectral space, run high spectral wave numbers at lower precision. Further, nested models can be used, where a certain region of interest uses a finer grid than its surroundings. Usually, the high-resolution area is driven from the low-resolution base grid (one-way nesting), as the converse is problematic.

Lastly, while not a disadvantage in itself, the usefulness of expending a great deal of processing power on a high-res simulation has to be assessed. Regardless of resolution, major, large-scale features tend to be represented correctly, and thus a finer grid may be unnecessary, depending on the goal. Further, even infinitely high resolution cannot solve fundamental problems with the models themselves: inaccuracies will be propagated, and errors in the underlying GCM or in the parameterisations used lead to errors in the simulation. Increasing the resolution is not a universal fix.

In conclusion, while a high-resolution grid may bring a great number of benefits, its main drawback, as of now, is computational feasibility. In the end, climate modelling is a matter of balance between accuracy and computational efficiency. Even as this may change with the advancement of technology, it is not enough to make the task of creating better and more precise simulations a purely technical one – continued improvement of the boundary conditions and parameterisations used is just as key to developing a great climate model.

 

Outline some approaches used to evaluate climate models. What are the strengths and weaknesses of the different approaches?  

Climate models allow us to predict climatic evolution over the spans of years or even decades. But in order to draw confident conclusions from any model, one has to make sure that the model is capable of actually predicting the true evolution of processes. This can be done via a range of methods, including comparison to observations, or statistical techniques using performance metrics.

At its most basic, the former means comparing models to the observational record to assess their capability of reproducing real phenomena. The simplest way of doing this is to take observed distributions of, for example, temperature or precipitation, and compare these data to the distributions generated for the same time and place by the model.

A more complex approach is to isolate individual regimes. Rather than averaging over time or space, results are averaged within distinct regimes of the overall system, such as circulation regimes, which can then be directly compared with observations of that specific regime. Additionally, one may wish to isolate individual processes (rather than whole regimes) and compare them to the record, or even to detailed models of the process in question. These approaches are useful for determining the validity of individual parts of the model, but do not necessarily provide much information on their interplay.

Satellites provide a good source of observational data, as they cover large areas of the globe while sampling a variety of variables. Traditionally, one would convert satellite observations into the model-equivalent form, but this requires a range of assumptions, for example about the limitations of the satellite’s sensors. More recently, studies have used radiative transfer models to simulate the instrumental data the satellite would provide. This avoids the conversion problem, but requires assumptions about microphysics and has its own limitations, and is thus most useful in combination with other techniques.

Another method for determining uncertainties is the use of combined climate and weather models. As both use similar atmospheric models, and generally may be started from the same initial conditions, it is possible to test parameterised sub-grid-scale processes without introducing complications from feedbacks. Such studies have found that many systematic errors within models develop very quickly, highlighting the importance of fast, parameterised processes such as clouds.

Aside from assessing model performance based on adherence to physical observations, it is also possible to utilise ensemble approaches to research uncertainty in models stemming from internal variability, boundary conditions, parameterisations etc. These ensemble approaches include Multi-Model Ensembles (MME) and Perturbed Parameter Ensembles (PPE).

An MME is a collection of simulations from multiple centres and can be used to sample structural uncertainty and internal variability. However, the MME sample size is generally small, a situation made worse by the fact that modelling centres often share models and components with each other, further reducing the number of truly independent models; an MME is therefore not a true random sample. Commonly, MMEs also take the arithmetic mean of the results, giving equal weight to each model regardless of its ensemble size, its independence from other models, or its performance in other, more objective tests.
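A minimal sketch of the difference between this unweighted multi-model mean and a (hypothetical) performance-weighted alternative; the projections and skill scores below are invented for illustration.

```python
import numpy as np

# Hypothetical MME of projected warming (K) from five models, with made-up
# skill scores (higher = better agreement with observations).
projections = np.array([2.1, 2.8, 3.4, 2.5, 4.0])
skill       = np.array([0.9, 0.7, 0.4, 0.8, 0.2])

unweighted = projections.mean()
weighted   = np.average(projections, weights=skill)

print(f"unweighted MME mean: {unweighted:.2f} K")
print(f"skill-weighted mean: {weighted:.2f} K")
# The unweighted mean treats every model equally, regardless of independence
# or demonstrated performance; weighting is one proposed alternative.
```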


PPEs are based on a single model, using parameter perturbations to explore specific variations. While they allow statistical methods to be used to explore the major drivers of uncertainty, they do not account for structural uncertainty – an error in the underlying model can affect results “behind the scenes”. Generally, it is advised to explore uncertainty in both structure and parameters through a combination of MME and PPE approaches – but even this does not account for systematic errors.

Statistical methods can be powerful tools for assessing climate models. A rank histogram approach has been proposed to evaluate whole ensembles, with the purpose of determining whether the observations are statistically significantly different from the model ensemble. While MMEs are generally reliable in exploring variance, the dispersion of single-model ensembles (such as those used in the group project) tends to be too narrow.
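A sketch of the rank histogram idea, using synthetic data: for each observation, its rank within the sorted ensemble is recorded, and a histogram piled up at the extremes (U-shaped) indicates an under-dispersive ensemble. The numbers here are assumptions chosen to make the ensemble deliberately too narrow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic example: 1000 "observations" and, for each, a 9-member ensemble
# whose spread is deliberately too narrow (standard deviation 0.5 vs 1.0).
obs = rng.normal(0.0, 1.0, size=1000)
ensembles = rng.normal(0.0, 0.5, size=(1000, 9))

# Rank of each observation among its ensemble members (0 .. n_members).
ranks = (ensembles < obs[:, None]).sum(axis=1)
counts = np.bincount(ranks, minlength=ensembles.shape[1] + 1)

print(counts)  # piled up at the ends (U-shaped) -> under-dispersive ensemble
```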

Bayesian statistics allow us to better quantify model inadequacies and to combine a range of metrics within one statistical test, for example by combining priors with previous model performance to compute an ensemble’s uncertainty. Additionally, new computational techniques involving machine learning can assess the performance of parameterisations, but as this is a new field, further study is required to understand its full benefits and limitations.

In conclusion, there is no “best” metric for evaluating model performance, as different techniques explore different components of the climate model. The most recent IPCC report uses a variety of observational, ensemble and statistics-based performance metrics, as a combination of methods generally yields the most sound results. A further limitation arises from the comparison to observations itself, as the observational record comes with its own uncertainty and internal variability; this effect can be mitigated somewhat by the use of a multitude of independent observations.

In closing, while these methods provide an idea of how well models mirror observations, they give very little insight into their suitability for predicting future climate, specifically with the advent of new, unexplored conditions such as very high greenhouse gas concentrations.

 

Outline some ways in which climate models have been used to study climates of the past.  Identify strengths and weaknesses of the different approaches.

Most people are familiar with the concept of using climate models to simulate and explore the future evolution of the global climate system, especially with regard to climate change. However, climate models can also be applied to past processes in order to get an idea of global conditions many years ago. Insights thus acquired can be useful for predicting future climate as well – for example, the most recent IPCC report concludes that current atmospheric CO2 and CH4 concentrations far exceed those of the last 650,000 years, while also noting that past periods with CO2 concentrations higher than present were likely warmer than the present, exemplifying the link between greenhouse gases and global warming.

This process of paleoclimate modelling can be difficult for several reasons. Firstly, data, where available at all, is limited. While deep ice cores, tree rings or sediment cores can hold a wealth of information on past climate, not every process and condition leaves a permanent trace on our planet. For example, while the CO2 record can be inferred from leaf structures, methane is usually constrained using ice cores, and this record only extends back about a million years. This lack of data leads to poorly constrained initial and boundary conditions, introducing large uncertainties early in the process, which in turn lower our confidence in the results obtained. Additionally, we do not know how sensitive the climate may have been to certain conditions, as there is no observational record to cement our understanding of processes and their precise effects on each other. These problems become worse the further back in time one goes, as data becomes more and more sparse.

At the base level, two kinds of models have been used to explore past climates: Energy Balance Models (EBMs) and General Circulation Models (GCMs). Each comes with its own specific characteristics and uses.

The EBM is the simpler of the two, utilising, as the name implies, the balance between incoming solar radiation and the energy reflected and re-emitted by the Earth. It is mostly used to simulate the evolution of the climate system over large time scales, averaging over days or even years. The results returned tend to be simple, which can be useful for determining which specific areas or epochs to run more in-depth models on, without wasting a great deal of computational time and power on less interesting places and times. Results also tend to be more stable, and are sufficient for the analysis of slowly changing environments (such as ice sheets). The lower level of complexity can also make it clearer which specific processes affected the outputs.
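The simplest, zero-dimensional member of this family balances absorbed solar radiation against outgoing longwave radiation (a standard textbook form):

$$C\,\frac{dT}{dt} \;=\; \frac{S_0}{4}\,(1-\alpha)\;-\;\epsilon\,\sigma\,T^{4},$$

where C is an effective heat capacity, S0 the solar constant, alpha the planetary albedo, epsilon an effective emissivity and sigma the Stefan-Boltzmann constant. Adjusting S0 or alpha to reflect past orbital configurations or ice cover then gives a first estimate of the equilibrium temperature response.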

In contrast, GCMs use the equations of motion to simulate processes. While this leads to more detail in the simulations, it also requires smaller time steps. As a result, GCMs are much more computationally expensive, especially when coupling between processes is included, and hence tend to be applied as “zoom-ins” on specific areas or time periods. While coupling makes the model much more expensive to run, it leads to much more accurate results, while also somewhat reducing the need for boundary conditions, as the coupled components provide boundary conditions for each other.

Both models have been used for various purposes, some of which will be explored in the following.

The changes in received solar radiation arising from Earth’s orbital parameters were explored for the early Holocene (9,000 years ago) and for today using a GCM, and the results compared. As no ocean temperature data was available for the early Holocene, modern values were used; this, however, should have only a minor impact, as theoretical and current observational data show that the oceans’ thermal response is much more moderate than that of land masses. The model implied a more intense summer and winter monsoon season over Africa and South Asia; this was later confirmed by a newer model using a higher resolution as well as more realistic parameters.

A period that has been explored in some detail is the Cretaceous. In 1982, Barron and Washington explored how its atmospheric circulation patterns differed from today’s. The prevailing hypothesis suggested sluggish circulation in the Cretaceous, but the model suggested otherwise. At a minimum, knowledge of the surface temperatures and lapse rates is required to confidently compute circulation intensity, and thus direct evidence is needed to draw confident conclusions. A study using the NCAR model with no ocean circulation and no seasonal cycle concluded that the average global temperature increased by 4.8 K; an EBM, however, found an increase of only 1.5 K, illustrating the difference in sensitivities of the different model types. The warming in the Cretaceous is only slight in the tropics but very pronounced at the poles. This implies a reduction in the equator-to-pole surface temperature gradient, but not necessarily a change in the atmospheric temperature gradient. It has been suggested that the warming in the Northern Hemisphere is in part due to continental motion, while in the South the size of the Antarctic Ice Sheet plays a major role.

In conclusion, the choice of model depends on the purpose of the study. EBMs, while limited in their output, are computationally cheaper and can be used to identify specific areas of interest. These can then be explored in more detail with GCMs, which are expensive but provide more detail. Both are often limited by observational uncertainties: the further back in time one wishes to explore, the less observational climate evidence is available.

 

What are the advantages and disadvantages of spectral methods?

Spectral methods use Fourier analysis to represent data as truncated sums of orthogonal functions, rather than applying finite difference methods. This has a number of advantages and disadvantages compared to grid point methods.

The main advantage is that spectral methods decompose the calculations at hand into sine and cosine functions, which can be differentiated exactly. Thus, spatial derivatives can be computed analytically rather than numerically, increasing precision and accuracy while avoiding problems with aliasing and dispersion. Compared to grid point methods, spectral methods also do not suffer from the nonlinear instabilities caused by finite differences representing derivatives inexactly.
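A small numerical illustration of this exactness, comparing an FFT-based derivative of a smooth periodic field with a second-order centred finite difference (a numpy-based sketch; the grid size and test function are arbitrary choices):

```python
import numpy as np

# A spectral (FFT-based) derivative of a smooth periodic field is accurate to
# machine precision, whereas a centred finite difference is not.
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
f = np.sin(3.0 * x)
exact = 3.0 * np.cos(3.0 * x)

# Spectral derivative: multiply Fourier coefficients by i*k.
dx = x[1] - x[0]
k = np.fft.fftfreq(N, d=dx) * 2.0 * np.pi
spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Second-order centred finite difference on the same periodic grid.
finite_diff = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

print("spectral max error:      ", np.abs(spectral - exact).max())     # ~1e-13
print("finite-difference error: ", np.abs(finite_diff - exact).max())  # ~0.04
```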

Further, spectral methods remove an issue that grid-based methods have: due to the spherical shape of the Earth, grid-based methods tend to fail the CFL condition near the poles, leading to instabilities and spurious errors. While grid-based methods require filtering or other ad hoc techniques to solve this problem, it does not arise in the first place with spectral methods, which allow the use of spherical basis functions (assuming the Earth is a sphere) that are well known and naturally give uniform resolution over the sphere.

Another advantage of spectral methods is that they conserve area-averaged mean square kinetic energy as well as vorticity, unlike finite-difference methods. In general, solutions conserve 1st and 2nd order quantities, and even higher order if there are enough points in the grid.

Additionally, they allow us to “pick and choose” the waves to be investigated, allowing some flexibility in terms of analysis and parameterisation; this depends on the physical process in question.

An obvious disadvantage arising from the use of Fourier series is the Gibbs phenomenon. As computers cannot store infinite series (which would be required to reproduce the data exactly), the series has to be truncated somewhere. This truncation results in the Fourier series over- and undershooting the true function, particularly at discontinuities and steep gradients; the overshoot persists however many terms are retained, and the affected region is wider the earlier the series is truncated. The Gibbs phenomenon is particularly problematic for positive-definite variables; for example, descriptions of water vapour are particularly affected, especially near mountains.
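A short sketch of the Gibbs phenomenon for a step function, showing that the overshoot near the discontinuity does not shrink as more terms are retained (the test function and term counts are arbitrary):

```python
import numpy as np

# Truncated Fourier series of a step function overshoots near the jump; the
# overshoot converges to roughly 9% of the jump rather than vanishing.
x = np.linspace(-np.pi, np.pi, 4001)
step = np.sign(x)  # square-wave-like step at x = 0

for n_terms in (5, 25, 125):
    # Fourier series of the odd step: (4/pi) * sum sin((2k+1)x) / (2k+1)
    series = sum(4.0 / np.pi * np.sin((2 * k + 1) * x) / (2 * k + 1)
                 for k in range(n_terms))
    overshoot = series.max() - step.max()
    print(f"{n_terms:4d} terms: overshoot = {overshoot:.3f}")
# The overshoot stays near ~0.18 (about 9% of the jump of 2) regardless of
# truncation; only the width of the affected region shrinks.
```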

In addition to the Gibbs phenomenon, another issue with using finite series is that different truncation methods have their own distinct properties. There are many methods with various advantages and drawbacks, which complicates the use of spectral models. However, this issue is mitigated for very high-resolution models, where the differences between truncation methods are minor.

Another issue arises from the complication of calculations when using series. As data is represented as series, many usually trivial calculations become non-trivial, such as the evaluation of any non-linear terms. This can be somewhat mitigated by using the transform method, ie transforming the equations to a different space, solving them there, and then transforming back again, but depending on the equation in question this may also be complicated or computationally expensive. The transform method can also simplify the inclusion of parameterised processes (eg clouds), which are otherwise very difficult to include in a spectral model.

A further major disadvantage of spectral models is their relative efficiency. As resolution is increased, the number of floating point operations required at each timestep grows faster than in grid point models. This is due to the cost of the Legendre transform, which scales as the number of points cubed: spectral models scale roughly as ~N^3, while grid point models scale as N^2, where N is the number of grid points in one dimension.

Finally, unless the transform method is used, spectral models cannot easily be used for domain-specific modelling: one has to solve the equations for the whole Earth and later extract the relevant data for the desired area, which leads to a cumbersome simulation process with a lot of wasted processing power and time. However, the spectral element method can be used to circumvent this somewhat, by splitting the globe into a grid of spectral zones. These nested spectral models allow computational efficiency closer to that of grid point methods.

In closing, there are both drawbacks and advantages to using spectral models, and their specific effects have to be weighed and considered for the system being simulated. The majority of climate modelling centres use spectral methods, but there is a move towards finite element methods as computers become more powerful and allow for better resolutions.

 
