Chapter Two: Literature Review



The literature review will cover the relevant theory applicable to the dissertation. It will lay the foundation for the subsequent chapters by, firstly, providing a grounding on which the later chapters can expand and thus gain deeper meaning. Secondly, it will provide a concise, holistic view of the topic by exploring and linking the various factors relating to it. Often these factors are not obvious, and part of the purpose of the literature review is to bring them to the fore.

Economics of Ports

Economic Function and Purpose of Ports

Ports have been a part of human endeavour for millennia and have functioned as conduits of wealth and prosperity for many of the world's cultures, both ancient and modern. They allow the seamless trade of necessary, valuable, costly and rare items from otherwise unreachable regions. But what is the definition of a seaport and what is its exact function? Goss defines a seaport as “...acting as a gateway through which goods and passengers are transferred between ships and the shore” (Goss, 1990, pg. 208). Goss later notes that this is in fact too narrow a definition and adds that “the basic function of a port is to minimise the generalised costs of through transport” (Goss, 1990, pg. 210). Goss goes on to state that the economic purpose of seaports is “...to benefit those whose trade passes through them, i.e. through providing increments to consumers' and producers' surpluses” (Goss, 1990, pg. 207). The rest of this section will examine and explain the role and purpose of ports, using the above quotations as starting points.

The demand for the services of ports is a derived demand several times over, since ports are not demanded as a utility-satisfying good in themselves but rather as a means to an end. Ports serve broader functions and purposes, and as such their need arises from what is aptly described as a "double derived demand". Ports exist only to serve ships, and as such the demand for shipping services is closely linked to the demand for seaports. The first and probably most significant of the demands affecting the shipping function is Gross Domestic Product (GDP) growth, from both a national and an international perspective. An increase in GDP is synonymous with an increase in productive capacity which, ceteris paribus, would lead to more goods being in circulation. This would lead to more goods being exported, via increases in production, as well as more goods being imported, owing to an increase in local income. More ships would consequently be demanded to transport the increase in goods, and this would lead to an increase in demand for shipping services. Other factors affecting the demand for the shipping function, among them technology and industrialisation, will be examined in the next section. (Goss, 1999 and Jones, 2002)

Port economics is no different from any other branch of economics in terms of the raging debate between the classicalists and the reformists. These differing views span issues such as the economic role of ports, government intervention, employment generation, location, subsidisation and pricing, to name a few. The Anglo-Saxon or classicalist school has an unswerving faith in the price-signalling mechanism of the market and feels that government intervention should be kept to an absolute minimum. Ports are considered ordinary commercial enterprises and as such profit maximisation should be their primary goal. There is a clear preference for ports being self-financed entities whereby users pay the full cost of the services they utilise. In this light, subsidies are frowned upon as being trade, competition and cost distorting, and as such would reduce the scope for efficiency gains. In opposition to this is the European model, which subscribes to the reformist or socialist school of thought, whereby profit maximisation is not the primary objective and the port's broader economic impacts are considered as, if not more, important. The European model acknowledges the presence of market failures such as externalities, public goods and imperfect competition. Government intervention in the form of administration and subsidies is considered necessary, since ports are viewed as drivers of economic activity and precursors to social development, whereby employment and infrastructure additions are led by the port. From this angle, ports can be considered instruments of regional policy, a cog in the greater machine that is the economy. Robinson (2002) concurs with this point of view and defines the role of ports as “links in supply chains” which help deliver and sustain value for ships. In line with this thinking, Notteboom and Winkelmans (2001) state that port authorities should move beyond the facilitator role and become a catalyst for social development by using “value-added logistics” that facilitate greater interaction with the logistics chains of port users. (Robinson, 2002; Goss, 1999; Jones, 2002 and Notteboom and Winkelmans, 2001)

Port Models

Port models can take varying forms depending on factors such as pricing, administration, industry structure and subsidies. Jones (2002) defines ports in a hierarchical format according to their characteristics. The first and most primitive is the industrial port, whose characteristics include a single commodity shipped, a bulk orientation and a strong location advantage. The next step of port evolution is the common user port, which is more diversified, with more bulk lines and perhaps some general cargo. The next stage is that of a liner port, which incorporates a more diversified traffic base and multiple terminals. The fourth stage of the evolution is the transhipment hub port, which mainly focuses on container operations and can accommodate very large vessels. The fifth and final stage is that of a main port, which has regional dominance, highly diversified traffic, advanced superstructure and infrastructure, competitive advantage and numerous supporting industries in close proximity to the port. Van Klink (1998) defines ports according to four stages of development. The first is that of the port city: here the port is a centre of trade, has little intercontinental transport, is general-cargo and labour intensive, and the port authority's role is limited. The second stage is the port area, in which the port functions as an industrial complex and is characterised by increased intercontinental trading, increased bulk trade, capital-intensive port operations and a still limited port authority role. The third stage, or port region, contains most of stage two plus containerised transport, increased inter-port competition, increased port authority functions and increased demand for space. The fourth stage is the port network, envisioned as a hub port in a seamless globalised world that emphasises logistics management, cost reductions, environmental considerations, societal concerns, diffuse port ancillary industries and greater network linkages. (Van Klink, 1998 and Jones, 2002)

Jones (2002) names three different structures according to port authority control, namely landlord, tool and operating ports. The figure below illustrates the various port models based on port authority control. In theory, the division between public and private operation of infrastructure and services appears technically easy to assign on an economic efficiency and costing basis; in the real world, however, it is rarely so. The first type of port is the landlord port, which has the most limited form of public sector involvement. The port authority's jurisdiction covers only the marine infrastructure, the seaward side of the port, thus allowing private enterprise to control landside activities like cargo handling and superstructure. The second type is the tool or hybrid port, which is structured so that the port authority controls both the marine infrastructure and the superstructure on land. In this model the private sector controls cargo handling and stevedoring, while the port authority remains in control of fixed equipment and storage facilities. The final model of port governance is the operating port, where the port authority controls the full range of port activities. Brooks (2004) presents similar port governance models to those of Jones, the one exception being the 'private service port', which is described as an entirely private operation. On the international scene, various forms of the above models are incorporated in ports. For example, in the United Kingdom ports more closely represent a fully private sector model, whereas in South Africa there is far more government intervention. (Brooks, 2004 and Jones, 2002)

Port Costs and Efficiency

Total port costs account for only a fraction of the total costs associated with the logistics chain. This, combined with the double derived demand phenomenon, results in an inelastic demand schedule for port services and infrastructure. Recalling a port's main function of minimising generalised costs, if a port increases its efficiency in the form of lower costs it will ultimately increase welfare. The figure below illustrates how a decrease in costs, or an increase in efficiency, can reduce the cost of sea transport. The red line signifies land-based transport costs and the green/blue line sea-based transport costs; the y axis measures cost and the x axis distance. It is evident that although sea transport has higher fixed costs than land-based transport, as the distance travelled increases it becomes more economical to use sea transport. The green line represents a situation in which an increase in transport efficiency occurs. As a result of this increase in efficiency, sea transport becomes even more attractive. Even though port costs are a relatively small part of the transport cost chain, they should not be overlooked. Radelet and Sachs (1998) illustrate this point by showing that high shipping costs can reduce the rate of growth of both manufactured exports and GDP per capita. The authors claim that a doubling of shipping costs reduces annual growth by roughly half a percentage point. (Suykens and Van de Voorde, 1998; Radelet and Sachs, 1998 and Goss, 1999)
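The logic of the figure can be made concrete with a small numerical sketch. The per-kilometre rates below loosely follow the Clark et al (2004) figures cited in the next section, while the fixed (handling and port) cost components are assumed purely for illustration; the point is the shape of the comparison, not the specific numbers.

```python
# Minimal sketch of the land-versus-sea cost comparison described in the figure.
# Per-km rates loosely follow the Clark et al (2004) figures cited later;
# the fixed (handling/port) cost components are assumed for illustration only.

LAND_FIXED, LAND_PER_KM = 200.0, 1.38    # low fixed cost, high cost per km
SEA_FIXED, SEA_PER_KM = 1500.0, 0.19     # high fixed (port) cost, low cost per km

def land_cost(distance_km: float) -> float:
    return LAND_FIXED + LAND_PER_KM * distance_km

def sea_cost(distance_km: float, sea_fixed: float = SEA_FIXED) -> float:
    return sea_fixed + SEA_PER_KM * distance_km

def break_even_km(sea_fixed: float = SEA_FIXED) -> float:
    """Distance beyond which sea transport becomes the cheaper mode."""
    return (sea_fixed - LAND_FIXED) / (LAND_PER_KM - SEA_PER_KM)

for d in (500, 1500, 3000):
    cheaper = "sea" if sea_cost(d) < land_cost(d) else "land"
    print(f"{d:>5} km: land ${land_cost(d):,.0f} vs sea ${sea_cost(d):,.0f} -> {cheaper} cheaper")

print(f"Break-even distance: {break_even_km():.0f} km")
# Greater port efficiency (a lower fixed sea-side cost) pulls the break-even
# point closer, making sea transport attractive over shorter distances.
print(f"With more efficient ports: {break_even_km(sea_fixed=1200.0):.0f} km")
```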


Does an increase in port efficiency lead to a decrease in costs, and which are the most relevant determinants of transport costs? Clark et al (2004) conduct an empirical analysis in an attempt to identify the key determinants of cost in relation to sea transport and ports. The first factor is geography, or more specifically distance. Intuitively, the greater the distance between two locations, the higher the expected transport cost for that voyage. The authors find that an extra 1,000 km raises transport costs by 8%. Additionally, travelling an extra 1,000 km by sea increases costs by $190, whereas travelling the same distance by land raises costs by $1,380. The next factor is directional imbalance, which occurs when many carriers haul empty containers back, resulting in price distortions and either imports or exports becoming more expensive. An example is given for trade between the USA and the Caribbean in 1998, where 72 per cent of containers sent from the Caribbean to the US were empty, leading to prices 83% above average for the goods being transported. The maritime sector, like many other economic sectors, exhibits increasing returns to scale in the long run. These economies of scale occur both at the vessel level and at the port level. The port of Buenos Aires in Argentina is a prime example, where the cost of using the port is $14 per container for a 1,000 TEU vessel but five times more, at $70 per container, for a 200 TEU vessel. (Clark et al, 2004)
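The Buenos Aires figures quoted above are consistent with a largely fixed cost of a port call being spread over the number of containers carried. A minimal sketch of that arithmetic follows; the US$14,000 call cost is inferred from the cited per-container figures and is not taken from the source.

```python
# Per-container port cost when a (largely) fixed port-call cost is spread
# over vessel size -- consistent with the Buenos Aires figures cited above.
# The US$14,000 call cost is an inferred, illustrative assumption.

FIXED_CALL_COST = 14_000.0  # assumed fixed cost of a port call, in US$

def cost_per_container(vessel_size_teu: int) -> float:
    return FIXED_CALL_COST / vessel_size_teu

for size in (200, 1000, 4000):
    print(f"{size:>5} TEU vessel: ${cost_per_container(size):.2f} per container")
# 200 TEU -> $70.00 and 1,000 TEU -> $14.00: economies of scale at the port level.
```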

Another key determinant is containerisation. The development of containerised transport has allowed large cost reductions in cargo handling, and this exhibits a significant negative effect on transport costs. This phenomenon has firstly induced an increase in sea transport and secondly hastened the creation of hub ports; both of these effects further enhance increasing returns to scale. The idea behind this result is that containerisation reduces service costs, such as cargo handling, and therefore total maritime charges. The availability and quality of land infrastructure is another crucial determinant of transport costs, and the authors find that it constitutes 40% of predicted transport costs for coastal countries, and as much as 60% for landlocked ones. Clark et al (2004) use an instrumental variable to capture economies of scale, namely the level of trade that goes through a shipping route; this instrumental variable exhibits a significant and negative coefficient. Limao and Venables (2001) show that a 10% increase in transport costs is correlated with a 20% drop in volume traded. But what of causality? De and Ghosh (2003) find that improved port efficiency and lowered costs cause improved traffic flow and volumes traded, and not vice versa. Estache et al (2002) establish that ports which operate autonomously perform more efficiently in the short run. The authors state that private port ownership increases productive competition between ports, which enhances efficient operation, and further illustrate that decentralisation and competition amongst Mexican ports enhanced efficiency. Brooks (2004) concurs with this view and states that privatisation has a propensity to improve efficiency. Clark et al (2004) find that regulations at the port level influence port effectiveness in a non-linear way, implying that a certain level of regulation can be beneficial and increase efficiency, but only up to a point. (Clark et al, 2004; Brooks, 2004; Estache et al, 2002 and Limao and Venables, 2001)

Bennathan and Wishart (1983) illustrate that as the number of berths increases, so too can the average occupancy rate across those berths rise. The reason for this phenomenon is the randomness in arrivals of both ships and cargoes. As the number of berths increases, the probability of a ship finding an empty berth increases, even in high-traffic ports. This decreases the waiting time of ships at sea and, since ships are chartered on a daily rate, it decreases the ship's opportunity costs. Ports by their nature and purpose try to maximise berth occupancy, whereas ships try to secure minimal waiting time. Increasing the number of berths serves to meet both these requirements more effectively, hence moving the port closer to a Pareto-efficient solution. The authors show that a typical 1-berth facility functions at approximately 50% occupancy, a 5-berth facility at 65% occupancy and a 10-berth facility at 80% occupancy. Additionally, a higher number of berths has an economies-of-scale effect on port output. (Bennathan and Wishart, 1983)
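The mechanism can be illustrated with a simple queueing sketch. The occupancy levels are calibrated to the Bennathan and Wishart figures above, while the arrival and service assumptions (Poisson arrivals, exponential handling times averaging one ship-day) are purely illustrative.

```python
import random

def simulate_port(n_berths, arrival_rate, mean_service, n_ships=20000, seed=1):
    """First-come-first-served multi-berth queue with Poisson ship arrivals
    and exponentially distributed handling times.
    Returns (average berth occupancy, average waiting time per ship)."""
    rng = random.Random(seed)
    free_at = [0.0] * n_berths          # time at which each berth next becomes free
    t = total_wait = busy_time = 0.0
    for _ in range(n_ships):
        t += rng.expovariate(arrival_rate)              # next ship arrives
        service = rng.expovariate(1.0 / mean_service)   # its handling time
        k = min(range(n_berths), key=lambda i: free_at[i])  # earliest-free berth
        start = max(t, free_at[k])
        total_wait += start - t
        free_at[k] = start + service
        busy_time += service
    horizon = max(free_at)
    return busy_time / (n_berths * horizon), total_wait / n_ships

# Occupancy levels taken from the Bennathan and Wishart figures above;
# handling time is normalised to one ship-day (an assumed unit).
for berths, occupancy in ((1, 0.50), (5, 0.65), (10, 0.80)):
    occ, wait = simulate_port(berths, arrival_rate=occupancy * berths, mean_service=1.0)
    print(f"{berths:>2} berths: occupancy {occ:.0%}, mean wait {wait:.2f} ship-days")
# Despite running at much higher occupancy, the 5- and 10-berth ports keep
# ship waiting times well below those of the single-berth case.
```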

Maritime Economic Theory

Demand and Supply for Sea Trade

At their core, ports exist to service ships with respect to the movement of goods, and as such maritime theory and port theory are closely interlinked. The demand for shipping is a derived demand whereby ships are a means to an end, in terms of being a mode of transport for products, and not an end in themselves. Consequently, the demand for ports is a double derived demand, since ports exist to service ships and their cargo. As such, maritime economic theory will be analysed with the aim of providing further insight, understanding and clarity on ports. Additionally, the seaborne industry is of particular significance to South Africa, since approximately ninety-five percent of the country's trade volume is transported by the shipping industry and the country accounts for six percent of global tonne-miles. (Stopford, 1997; Jones, 2004 and Jones, 2002)

As stated above, the demand for sea trade is a derived demand and as such its demand function is determined by a preceding demand. The first and probably most significant of these preceding demands is Gross Domestic Product (GDP) growth, from both a national and an international perspective. An increase in GDP is synonymous with an increase in productive capacity which, ceteris paribus, would lead to more goods being in circulation. This would lead to more goods being exported, via increases in production, as well as more goods being imported, owing to an increase in local income. Since the mode of preference for the mass transport of goods is the shipping industry, for reasons of cost, speed and predictability, the consequent demand for shipping services is closely linked to GDP. The second factor that strongly affects the demand for shipping is the globalisation and industrialisation process, whereby developing countries undergoing industrial migration and greater trade openness have an ever increasing demand for raw materials, finished products and semi-finished products. The figure below illustrates the strong growth trajectories of developing countries like India and China as well as the continual growth of most other countries. This bodes well for the shipping industry, since there is a strong correlation between GDP growth and the demand for shipping services, for the reasons stated above. (Stopford, 1997 and Jones, 2004)

Whereas the demand for sea transport is exogenous, being driven by factors outside its control, the supply of sea transport is endogenous. The most important factor is the size and growth of the world fleet. Shipbuilding is a long and expensive venture and can have a lag effect of between one and four years. In 1970 there were 326 million DWT of ships available worldwide; this figure had increased to 1,043 million DWT by the beginning of 2007. In 2008, vessel orders were at their highest level ever, with 10,053 vessels on order. The continual addition of new tonnage to the world fleet, at a rate exceeding that at which vessels are withdrawn from operation, is the primary factor behind the declining average age of the world fleet. The supply of the world's ships is fairly predictable, since it is not a situation that can change drastically in a short period of time. In recent years the demand-supply ratio has been very tight, with surplus tonnage declining from 9.7% in 1990 to 0.7% in 2005. The figure below illustrates demand, supply and surplus tonnage of the world merchant fleet. (UNCTAD, 2008)

Another important factor is the changing composition of the world fleet whereby, owing to various fundamentals, the type of vessels being demanded, and consequently being produced, can change from one decade to the next. Over the past few decades there has been substantial growth in containerised cargo movements. Containerised cargo represented 54 per cent of world general cargo trade in 1999, compared with 48 per cent in 1995 and 37 per cent in 1990. The phenomenal revolution of the container industry is testament to this fact, with container traffic expected to reach 287 million TEU by 2015. Containers are rapidly becoming the mode of choice, especially for heterogeneous and break-bulk goods, for reasons of safety, predictability and manageability, as well as their increased usage for agricultural products. The increase in efficiency brought on by containers allows more goods to be transported more often at lower rates. The figure below illustrates the growth in container ships, across all TEU classes, for selected years versus that of tankers and dry bulk carriers. (UNCTAD, 2008 and Stopford, 1997)

The Growth in Sea Trade

Worldwide seaborne trade surpassed 8 billion tonnes in 2007. This was largely due to the spectacular growth of developing economies such as China and India. The strong performance of the world economy over the past few years has more than proportionately affected sea trade growth. The diagram below illustrates the strong link between GDP growth and sea trade activity. The year 1994 is the base year, with the preceding and succeeding years indexed against it. (UNCTAD, 2008)

The last 17 years have seen a doubling of tonnage transported, from 4,008 million tonnes in 1990 to 8,022 million tonnes in 2007. The diagram below illustrates the total tonnage shipped for various years. The trend represented in both the above and the below diagrams is an upward one. (UNCTAD, 2008)
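On the figures quoted above, this doubling corresponds to a compound annual growth rate of (8022/4008)^(1/17) - 1, or roughly 4.2 per cent per annum over the period.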

Composition of Shipping Costs

The shipping industry is not exempt from economic forces and as such can also benefit from the economies of scale phenomenon, whereby the average and marginal costs of a ship's operations can be lowered by increasing vessel size. As a consequence, there is an international trend towards ever bigger vessels being both ordered and produced, with 36% of all container ships scheduled to be built being larger than 7,400 TEU; these post-Panamax vessels now represent about 75.8% of the container order book. The reason is that these larger vessels have lower average total costs, which is the essence of economies of scale. An order for a 9,600 TEU ship has already been recorded, and a South Korean shipbuilder, Samsung Heavy Industries, has completed testing for a 12,000 TEU ship, with the design of a 14,000 TEU ship in the pipeline. The new breed of larger ships will require specialised infrastructure such as deeper and wider approach channels and berths as well as larger container terminals. The South African Department of Transport has stated that increasing the average vessel size up to 3,100 TEU could decrease the costs of sea transport by up to 17%. The diagram below illustrates the falling average costs as a vessel gets larger; as can be seen, the returns to scale become quite small from the 6,500 TEU mark. (American Shipper, 2005 and ISL Bremen, 2004)
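The shape of that diagram can be reproduced with a simple, assumed cost function in which part of the voyage cost is fixed and the remainder grows less than proportionally with capacity. The parameter values below are illustrative only and are not drawn from the sources cited.

```python
# Illustrative (assumed) cost function showing why cost per TEU falls with
# vessel size but the gains flatten out at very large sizes.

def voyage_cost(teu: int, fixed=2.0e6, alpha=900.0, beta=0.75) -> float:
    """Assumed total voyage cost: a fixed element plus a component that
    grows less than proportionally with capacity (beta < 1)."""
    return fixed + alpha * teu ** beta

for size in (1000, 3100, 6500, 9600, 12000):
    unit = voyage_cost(size) / size
    print(f"{size:>6} TEU: ~${unit:,.0f} per TEU slot")
# The per-slot cost keeps falling with size, but each additional thousand TEU
# saves less than the last, mirroring the diminishing returns described above.
```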

Micro and Macro Economic Theory

Perfect Competition Theory

Perfect competition is an ideal state in economics in which markets function efficiently. This means that resources are utilised in a way that is Pareto efficient, thus allowing an economy to be efficient in both distribution and rationing. Prices are disseminators of information and guide the factors of production towards society's most beneficial point. The diagram below illustrates the conditions necessary for a state of perfect competition to exist.

If all of these conditions hold, the result will be an environment of perfect competition. Under perfect competition there are many buyers and many sellers and no economic participant can influence the price level. Additionally, there are no barriers to entry and information is freely available. However, if even one condition is not met, the consequence will be market failure. If market forces are allowed to interact freely via the price-signalling mechanism, then a Pareto-efficient situation will in theory be reached. However, when the free market is not functioning properly, social welfare is not maximised and public intervention is needed. It is important to remember that sometimes not all effects are reflected in market prices. This phenomenon is known as market failure, which is a total or partial failure of the price-signalling mechanism. The types of market failure analysed here are public goods, externalities and monopolies. (Parkin et al, 2006)

An example of a public good is the earth's atmosphere which is a global, open access resource that is both non-rival and non-excludable in disposition thereby making it a public good. In a port context, a lighthouse would be a classic example, whereby one cannot prevent a ship from enjoying its usage and the use of it by a ship does not prevent other ships from using it in the same instance. Because it lacks well-defined property rights, a free rider situation arises whereby the lighthouse can be used without paying compensation. With rights and responsibilities difficult to outline and agreements a challenge to reach, markets will not develop and as such, a situation of market failure arises. It may therefore fall to governments and other organizations to develop direct interventionist policies for addressing the situation. Because the causes and consequences of such change are often complex in nature, effective policies will require extensive cooperation among countries and industries with diverse conditions and priorities. Governments are not immune to inefficiency and may themselves fail to effectively allocate resources. Cross border disputes and interest only further compound this problem, making international cooperation a tricky affair. (Stern et al, 2006)

Externality Theory

The next form of market failure is that of externalities. An externality is defined as a cost or benefit, arising from the production or consumption of a good or service, which accrues to someone who is not directly involved in the production or consumption of that good or service. Externalities can be broadly divided into two categories, namely negative externalities and positive externalities. Negative externalities, or external diseconomies, impose an external cost on individuals who are not directly involved in that activity. An example of a negative externality is an individual smoking a cigarette and nearby people having to breathe second-hand smoke. Positive externalities, or external economies, confer an external benefit on individuals who are not directly involved in that activity. An example is that of one's neighbour employing a security guard, which will benefit the safety of both households. (Pearce and Turner, 1996)

External costs are detrimental to global economic, social and environmental optimisation goals since they prevent market mechanisms from operating efficiently by interfering with price signals. Thus, identifying and quantifying the external costs of energy systems is essential in achieving sustainable development. Economists, in the context of sustainable development goals, increasingly acknowledge the relevance of recognising, assessing and internalising external costs. In a purely competitive market, where externalities do not exist, prices represent the instrument for efficient resource allocation, on both the production and consumption sides of the economy. External costs resulting from market imperfections, as is the case for clean air and fresh water, prevent optimal resource allocation. Market prices cannot give the right signals to economic agents and policy makers as long as externalities exist. The equations below describe the notion of externalities in mathematical form. (Stern, 2006; NEA, 2000 and Pearce and Turner, 1996)
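In standard notation, with MEB and MEC denoting the marginal external benefit and marginal external cost respectively, the two conditions referred to below can be written as:

(1)   MSB = MPB + MEB

(2)   MSC = MPC + MEC

where MSB and MPB are the marginal social and private benefits of consumption, and MSC and MPC are the marginal social and private costs of production. When MEB = MEC = 0 there is no externality, and MSB = MPB and MSC = MPC.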

Equations (1) and (2) are two conditions that pertain specifically to externalities. Equation (1) pertains to externalities in consumption and shows that an externality exists when MSB exceeds MPB; if there are no externalities involved then MSB = MPB. A consumption externality is illustrated in the diagram below, where it is assumed that the externality is constant and therefore does not vary with output.

Equation (2) pertains to externalities in production and shows that an externality exists when MSC exceeds MPC. This is related to the true cost reflection for a product or service. If the free market is operating efficiently then MSC = MPC; however, if there is market failure then an externality will exist equal to MSC less MPC. The diagram below illustrates the difference between MSC and MPC.

Thus, social costs comprise private and external costs and represent a true account of the cost of a good or service. External costs are the uncompensated side effects of a good or service but are nonetheless relevant.

Monopoly Power

The last form of market failure is that of a monopoly, or single firm. The key defining feature of a monopoly is that it has absolute market power and as such can sell products at a point where the price exceeds marginal revenue. All firms produce at the output at which marginal revenue equals marginal cost. In a perfectly competitive setting this point coincides with price being equal to marginal revenue; in a monopoly setting, however, price exceeds marginal revenue at this point. This enables the monopolist to earn abnormally high profits, which decreases social welfare. The two primary conditions that allow a monopoly to operate are barriers to entry and the absence of close substitutes. Barriers to entry are factors which make it difficult for other firms to enter the market; these are wide-ranging and could be anything from financial factors to legal constraints. No close substitutes simply means that a firm has a unique product which, for some reason, other firms cannot imitate. (Parkin et al, 2006)
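A minimal worked example, assuming a linear demand curve and constant marginal cost with purely illustrative parameters, makes the welfare comparison concrete.

```python
# Minimal sketch with an assumed linear demand curve P = a - b*Q and
# constant marginal cost c, contrasting competitive and monopoly outcomes.

a, b, c = 100.0, 1.0, 20.0   # assumed illustrative parameters

# Perfect competition: price is driven down to marginal cost.
q_comp = (a - c) / b
p_comp = c

# Monopoly: output where marginal revenue (a - 2bQ) equals marginal cost,
# with price then read off the demand curve, so price exceeds MR and MC.
q_mono = (a - c) / (2 * b)
p_mono = a - b * q_mono

deadweight_loss = 0.5 * (p_mono - c) * (q_comp - q_mono)

print(f"Competition: Q = {q_comp:.0f}, P = {p_comp:.0f}")
print(f"Monopoly:    Q = {q_mono:.0f}, P = {p_mono:.0f} (P > MR = MC = {c:.0f})")
print(f"Deadweight loss (lost social welfare): {deadweight_loss:.0f}")
```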

A special form of monopoly that sometimes arises is the natural monopoly, in which the long-run average cost curve lies beneath the demand curve over the entire region of operational output. This means that a single firm can supply the entire market at a lower price than firms operating under conditions of perfect competition. Consequently, a natural monopoly experiences economies of scale at every point along the demand curve. Examples of natural monopoly industries include electricity, telecommunications and ports. (Parkin et al, 2006)

Keynesian Multiplier

The distinction between the short and long run is an important theoretical consideration in economics. In the long run all inputs are variable, whereas in the short run at least one input is fixed. There are no fixed criteria for the duration of the long and short run, and the transition period can be as little as a year or, as in the case of ports, a hundred years or more. (Jones, 2004)

A useful short-run tool that can help quantify the final income resulting from an initial investment or spending impetus is the Keynesian multiplier. The ratio between the eventual change in income and the initial investment is called the multiplier. The size of the multiplier depends on the fraction of the additional income generated in each round that is spent in the next round, and this fraction is known as the marginal propensity to consume (MPC). The box below sets out the mathematical equation of the Keynesian multiplier. It is important to understand the philosophy behind Keynesian economics and its consequent application. Keynesian economics is based on a world of excess supply and underutilised resources, in which the price level is fixed or, at most, very slow to adjust. Additionally, demand is deemed to create supply, which is in direct opposition to Say's law. With prices fixed, the analysis remains in the realm of the short run. As such, an increase in investment or spending can be multiplied throughout an economy without prices adjusting to maintain market equilibrium. There are few trade-offs to an increase in spending or investment in the short-run world, and the short-run supply curve is very elastic. (Parkin et al, 2006)
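In its simplest closed-economy form, the multiplier k and the resulting change in income ΔY from an initial injection ΔI are:

k = 1 / (1 - MPC),   ΔY = k · ΔI = ΔI / (1 - MPC)

so that, for example, with an MPC of 0.8 the multiplier is 5, and an initial injection of R100 million raises income by R500 million.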

Trade Liberalisation

It is not hard to fathom the link between trade, shipping and ports, and as such trade liberalisation is an important topic with regard to a country's maritime sector. Whether trade liberalisation benefits or harms an economy in a dynamic sense is beyond the scope of this dissertation, but a brief analysis of trade theory will be undertaken. Trade liberalisation is a highly contested area of economics, with numerous opponents and supporters. Intuitively, trade allows an economy to be more efficient, since with no trade a country must be self-sufficient and must then maximise its well-being based only on local production; trade therefore increases a country's production possibilities. The reduction in tariffs encourages more trade, which consequently promotes the flow of raw materials and finished goods. Additionally, the process of globalisation encourages and aids the sourcing of materials from around the globe. These goods need a cheap, efficient mode of transport, and this is where the sea transport industry fits in.

As shown previously, growth in GDP is the primary driver of the demand for sea trade, since the demand for sea trade is a derived demand. Consequently, if trade liberalisation can be shown to be growth inducing, then it will increase the demand for ships as well as ports. Many economists view trade liberalisation as growth inducing, and some of their papers will now be discussed. The results of Frankel and Romer's (1999) tests show that trade opening raises income: a rise of one percentage point in the ratio of trade to GDP is shown to cause a roughly one-half percent increase in income, with income positively affected through the accumulation of physical and human capital. The possibility of reverse causation, from growth to trade, is addressed by the use of instrumental variables. Sachs and Warner (1995) examine the experience of countries that have liberalised their trade since 1975, and conclude that higher growth occurs two years after liberalisation relative to the pre-liberalisation years. Harrison (1996), using time series data, finds that openness is indeed a robust and significant factor in relation to growth; strong and sustained liberalisation episodes result in rapid growth of exports and real GDP. Dollar (2001) shows that increased trade is related to accelerated growth; he controls for changes in other policies and addresses reverse causation with internal instrumental variables. Overall, there seems to be sufficient evidence that trade liberalisation does indeed cause growth. The diagram below, taken from a World Bank study, shows the close positive relationship between trade and growth. (Sachs and Warner, 1995; Harrison, 1996; Dollar, 2001 and Frankel and Romer, 1999)

Sustainable Development

Defining Sustainable Development

There are many definitions of sustainable development (SD), but the one used here is the 1987 statement by the World Commission on Environment and Development: “Sustainable Development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs” (Mason, 1988, pg. 8). Thus, if non-decreasing welfare is maintained across time periods or generations, with a given stock of factor endowments, then a sustainable path can be identified. The report emphasised that significant and internationally threatening environmental problems were the mutual consequence of poverty in the developing world and excessive consumption in the developed world. Issues of intra- and inter-generational equity were introduced, whereby intra-generational equity is equity within generations and inter-generational equity is equity between generations. (Mason, 1988)

Additionally, the commission stated that the escalating threats and consequences of development on the environment could not be solved without considerable international collaboration. It stated that the future welfare of developed countries was not only dependent upon them changing their development path towards more sustainable practices, but would fail unless developing countries were also prepared to adapt and make the necessary changes. The commission believed that the global economy had to meet people's needs and desires, but that growth had to simultaneously respect the planet's limits and carrying capacity. The report identified two key concepts: firstly, the needs of the poor should be a priority and policy should not further disadvantage them; secondly, the limitations that technology and social organisation impose on the environment's ability to meet present and future needs. (Heinrich Boell Foundation, World Summit for Sustainable Development, Johannesburg, South Africa, 2002)

Another way of looking at the sustainable development approach is through the term “security of intergenerational access to resources”. The meaning of intergenerational is quite clear in the South African context, where the resources in question are for the benefit of both the present and subsequent generations. Security refers to a reasonable certainty that the future will not involve a significant reduction in people's access to resources. Access implies three qualities: that the resource remains available in sufficient quantity and quality; that people can use it as needed, or to the same extent as in the past; and that equity exists in the regulations governing its use and distribution. Resources means natural resources such as forests, streams, lakes, agricultural lands, fisheries or anything in nature that has or could have productive potential and/or provide ecological or cultural services in forested landscapes. (Pierce et al, 2002)

Though sustainability theory is quite broad in scope, it can roughly be divided into two approaches, namely the neoclassical approach and the ecological approach. The neoclassical approach, which is the general school of thought subscribed to in this dissertation, will be discussed first, followed by the ecological approach.

Different Approaches to Sustainable Development

The Hartwick rule, devised by John Hartwick in 1977, is considered a suitable starting point for the neoclassical approach. The rule ensures non-declining consumption through time when an economy makes use of exhaustible resources, such as coal, uranium and gas, as part of its production processes. The rule states that as long as the stock of capital is non-decreasing through time, non-declining consumption is also possible. The stock of capital can be held constant by reinvesting the rents received from the natural capital stock into man-made capital stock; thus, as the natural capital stock falls, the man-made stock replaces it. The rents received are Hotelling rents, whereby rent per ton grows at the rate of interest and one unique path will maximise social welfare. This unique path is found by using the terminal condition and the stock constraint. Solow (1986) has stated that a falling capital stock can sustain increasing consumption if technology improves sufficiently. (Hanley, Shogren and White, 1997)
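A compact statement of the rule, assuming an economy with production F(K, R, L) that uses man-made capital K, an exhaustible resource flow R and labour L, is that investment in man-made capital equals the rent earned on the resource in each period:

dK/dt = F_R(K, R, L) · R

that is, the marginal product of the resource multiplied by the quantity extracted is fully reinvested in man-made capital, which is what keeps the total capital stock, and hence consumption, from declining.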

The assumptions of the Hartwick model are, firstly, that natural and man-made capital are perfect substitutes, implying that the elasticity of substitution is equal to one. Secondly, as the natural resource depletes, continually smaller quantities of it can be used; this implies that as the amount of the non-renewable resource decreases to zero, its average product goes to infinity, thus ensuring that natural resource depletion does not act as a constraint upon growth. Also, preferences are exogenously determined and market prices are assumed to reflect scarcity over time. This approach presupposes that, as long as there is a properly functioning price-signalling mechanism and a free interaction of buyers and sellers, scarcity will be reflected by prices in the economy. The last assumption is that discount rates are positive. (Hanley, Shogren and White, 1997)

Given the above assumption that man-made and natural capital are infinitely substitutable, the Hartwick-Solow and neoclassical approach is usually termed the weak sustainability approach to sustainable development. Weak sustainability (WS) is concerned with simply maintaining the overall stock of capital, which includes both natural and man-made capital. The degree of substitutability between natural and man-made capital is the key principle which separates the weak and strong sustainability approaches. (Hanley, Shogren and White, 1997)

Economists who share reservations regarding the pure neoclassical interpretation of SD generally fall into the second group. They represent a loosely assembled body of thought that criticises the inability, or perceived inability, of the neoclassical school to integrate ecological essentials into its welfare measures. Overall, the alternative approaches agree that the neoclassical approach rests on too many unrealistic assumptions. Extraction of non-renewable resources should be consistent with substitute development and reinvestment, in strict accordance with the Hartwick rule for non-renewable resources. Ecological economists speak of non-declining natural capital as the requirement for achieving SD. A greater sense of caution is emphasised under this approach, since there is a high level of uncertainty about how ecological systems work and the possibility of causing irreversible damage. One of the key differences between the ecological approach and the neoclassical approach is the idea of infinite substitution between different types of capital. (Pearce, Hamilton and Atkinson, 2002)

The ecological school of thought also wants to keep the stock of capital constant through time, but it emphasises keeping natural capital itself constant. According to ecological economists, natural capital is different from man-made capital because some assets are essential for human life and well-being and cannot be replaced by man-made capital; capital of this importance is given the name critical natural capital. Because of this, the ecological approach is labelled the strong sustainability approach: man-made and natural capital are not perfect substitutes and under some circumstances can even be considered complements. It is concerned with non-declining natural capital, where some portion of natural capital is regarded as untouchable. This would leave future generations no worse off in terms of natural resource availability than the current one, safeguarding against irreversible ecological loss. Strong sustainability also stresses the discontinuities in many ecological functions and hence in the external costs realised through environmental stress. Thus, strong sustainability is concerned with conserving certain critical components of the natural stock as well as the overall stock. (Pearce, Hamilton and Atkinson, 2002)


Discounting

Discounting is an important element of sustainable development theory as well as of other branches of economics such as cost benefit analysis and econometric forecasting. In the context of this dissertation, it will be seen that various discount rates are used in the cost calculation process. Thus, a basic understanding of the logic and application of the discounting process is needed.

Discounting is an extremely contentious issue because of its broad scope of application, which can include ethical, philosophical and economic particulars. These aspects can involve the environmental and resource needs of both present and future generations. To understand the process of discounting properly, one must first grasp the concept of compounding. Compounding is the process whereby a principal amount is grown at a certain interest rate and the interest earned on the principal is continually reinvested. (Hartwick and Olewiler, 1994)

The discount rate is, in essence, the reversal of compounding: it involves obtaining a present value from a future value. Discounting allows a comparison of two different future values, whereby both are discounted to their present value and then compared directly. It is thus the rate used to calculate the present value of future cash flows. By not discounting we are saying that a loss or benefit today is valued the same as a loss or benefit tomorrow. In addition, the higher the discount rate, the less the future is valued. (Pearce, Hamilton and Atkinson, 2002)
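Formally, with a discount rate r and a horizon of t years, the two operations are mirror images of each other:

FV = PV × (1 + r)^t   (compounding)

PV = FV / (1 + r)^t   (discounting)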

There are two rationales for discounting, namely the Social Opportunity Cost of Capital (SOC) and the Social Time Preference Rate (STPR). The SOC states that because capital grows over time, people will expect some form of additional compensation if they are to forgo current consumption. The opportunity cost of capital also implies that we could invest an amount equal to the discounted cost now, so that it builds up to an amount equal to the cost in the future. Thus, the social opportunity cost of capital is merely how much society benefits from saving today to gain higher welfare in the future; it is worth waiting for these extra funds if the cost of waiting is less than the future benefits. The second rationale for discounting is the STPR. It is generally considered that people have a preference for present consumption over future consumption. The reason is based on the uncertainty of one's life span: it would be logical to maximise one's utility now, since there is no guarantee of our existence tomorrow. Since we only have one lifetime, this approach would be rational. However, since saving is merely delayed gratification, people expect to be compensated for not consuming in the present. Accordingly, people's preferences are important and cannot simply be ignored; if people's preferences matter, then it can be stated that people prefer the present to the future. Society as a whole also has a time preference; it is simply the cumulative time preference of individuals, averaged. However, this is more than just pure time preference. It may also reflect the expectation that future societies will be richer than the present one: R1 gained today is worth more in utility terms than R1 gained in 10 years' time. We thus have a situation of diminishing marginal utility of consumption, and this forms another reason for discounting. (Pearce, Hamilton and Atkinson, 2002)

There are, however, objections to using discounting when appraising investments. The higher the discount rate, the more society favours the present over the future. By this reasoning, many advocate lowering the discount rate as a means of being considerate of the future's needs. Models that are said to account for future generations' utility are more accurately accounting for what the current generation believes is important. It is argued that investment now should be undertaken with the goal of allowing future generations the maximum scope of choice. The problem is then how to ensure that the future's utility is not negatively affected. One proposed way of countering this problem is to purposefully lower the discount rate. The question thus becomes: by how much should the discount rate be lowered? One argument is for a zero discount rate, so that consumption today is valued the same as consumption tomorrow. This approach seems illogical because it takes no account of the opportunity cost of capital or of time preference theory. A negative discount rate would also be nonsensical, since consumption would be continuously postponed for future generations. (Pearce, Hamilton and Atkinson, 2002)

Since discount rates are primarily determined by the interaction of market forces, they are said to reflect current generation needs and not the needs of future generations. The opposite side of this argument, however, states that the determination of interest rates reflects the preference structure of economic agents. It may be that we have a natural care for the future and that this is reflected in our current decision making; that is, we may have a utility function that contains an argument for future generations' considerations. If this is true, then lowering the discount rate would be a pointless exercise, since the future is already considered in the current interest rate. Another argument against lowering the discount rate is the effect it could have on consumption: since consumption is inversely related to the interest rate, lowering it could lead to increased current consumption at the expense of the future. Also, the current interest rate is determined by the monetary authorities or central bank, which uses it as a targeting tool, normally to keep inflation under control but also in some cases to manage the exchange rate or to affect employment. Thus, interfering with the discount rate to manipulate consumption for the future could be counterproductive and cause havoc in an economy. Environmental scientists are often very critical of discounting. They claim it discriminates against future generations, since the higher the discount rate, the more a future cost is reduced. A common example is that of decommissioning a nuclear power plant: if the future cost is estimated at R1bn in today's money, then, depending on the discount rate employed, its present value over a 50-year horizon can be quite minuscule. The box below illustrates the powerful influence discounting can have on a projected cost. (Bruggink and Van der Zwaan, 2001)
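A sketch of the calculation behind such a box follows, discounting a R1 billion cost incurred 50 years from now at a range of assumed rates.

```python
# Present value today of a R1 billion decommissioning cost incurred in
# 50 years' time, at a range of assumed discount rates.

future_cost = 1_000_000_000  # R1bn, from the example above
years = 50

for rate in (0.00, 0.03, 0.05, 0.10):
    present_value = future_cost / (1 + rate) ** years
    print(f"discount rate {rate:.0%}: present value ~ R{present_value:,.0f}")
# At 0% the full R1bn is counted; at 10% it shrinks to under R9 million,
# which is why critics argue that high rates discount the future away.
```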

Goodin (1982) more succinctly explains the above by introducing four arguments and counter-arguments for discounting, broadly defined as psychological discounting, discounting under uncertainty, diminishing marginal utility and the opportunity cost argument. Goodin (1982) describes the psychological argument as 'people's psychological propensity to attach less importance to future payoffs' (Goodin, 1982, pg. 54). As less importance is given to future payoffs, individuals will tend to prefer a current payoff to a future one. The next approach is the uncertainty argument, which attempts to justify itself by stating that uncertainty is a function of temporal distance: as a future cost or benefit becomes more distant, there is less certainty about its magnitude. The argument of diminishing marginal utility, which likewise does not provide an answer to how we should discount different goods over time, is made on the idea that economies will continue to grow in the future and therefore future generations will be better off; it states that the current, relatively poor generation will receive more satisfaction from the consumption of a particular good than the future, relatively rich generation will. The last approach is the opportunity cost argument, which states that the discount rate should be 'the opportunity cost in terms of the potential rate of return on alternative uses on the resources that would be utilised by the project' (Goodin, 1982, pg. 58). The project will thus only be undertaken if its calculated rate of return is greater than the opportunity cost, or rate of return, on capital. (Goodin, 1982)

Cost Benefit Analysis

Due to the scarcity of resources and the infiniteness of wants, decisions must constantly be made about which goods and services are to be produced. Many of these decisions have far-reaching and complicated consequences across time and space. One way of assigning weightings to various gains and losses is the Cost Benefit Analysis (CBA) process. CBA is an economic tool used to evaluate the economic merits of a particular project. It is intended to improve the quality of public policy decisions by assigning a monetary measure to the aggregate change in individual well-being resulting from a policy decision. This section comprises two parts: firstly, the CBA process and its rationale will be discussed and explained, after which road accident literature will be examined with emphasis on the CBA approach.

CBA involves the evaluation of alternative investment measures and requires the placing of monetary values on the benefits and costs of different actions. Project evaluation commonly employs both economic and financial analyses. Financial analysis focuses primarily on market prices, whereas economic analysis includes the total economic value of the effects that a project has, whether reflected in the market place or not. Both direct and indirect effects need to be incorporated for a fuller economic analysis of alternatives. Neoclassical welfare economics evaluates projects on the basis of changes in net social welfare, and there are a number of implicit assumptions in this approach. It is assumed that societal welfare is the sum of individual welfare and that individual welfare can be measured. It is also assumed that individuals maximise their welfare by choosing the combination of goods and services that yields the highest possible total utility given their income constraints. It is initially assumed that the marginal utility of income is the same for all individuals. In reality, the marginal utility of income usually diminishes as income rises, but for easier comparability a constant marginal utility of income is normally assumed. (Kopp, Krupnick and Toman, 1997)

The CBA process can be broadly divided into four stages: the identification of costs and benefits, the valuation of each cost and benefit, the discounting of future streams into present value terms, and finally the calculation of the net social benefit. In the initial identification phase, benefits and costs are only included if they are incremental outcomes of the project; that is, their effects would occur only if the project were embarked upon. Sunk costs are those incurred before the project and do not change the net social benefit of new projects; as a consequence, they are excluded from the analysis. By the same token, fixed costs are also excluded, since they apply to all alternatives under consideration. With regard to costs, all changes in costs, both negative and positive, must be included. Importantly, a benefit may arise from a project in the form of reduced costs, and vice versa. In all cases it is necessary to ensure that the costs included are truly opportunity costs rather than transfer payments, since transfer payments do not measure benefits or costs from goods or services. Double counting must also be avoided; it occurs when an impact of a project can be measured in two or more ways. Taxes and subsidies are market distortions that can artificially raise or lower the market price of a good or service; the decision to exclude or include them is project specific. CBA can be used in both the private and the public domain, but with differing applications and perspectives: a public CBA includes externalities whereas a private CBA does not, and the public CBA focuses on societal well-being whereas the private CBA focuses on firm or individual welfare maximisation. (Sinden and Thampapillai, 1994)
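The final stage can be sketched as follows, with an entirely hypothetical stream of annual benefits and costs; the point is only the mechanics of discounting net benefits and the sensitivity of the decision to the chosen rate.

```python
# Minimal sketch of the final CBA stage: discount each year's (benefits - costs)
# to present value and sum to obtain the net social benefit (figures assumed).

def net_social_benefit(benefits, costs, discount_rate):
    """Net present value of a stream of annual benefits and costs.
    Year 0 is the present; later years are discounted."""
    return sum((b - c) / (1 + discount_rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Hypothetical project: heavy up-front cost, benefits arriving later (Rm per year).
benefits = [0, 50, 70, 90, 90, 90]
costs    = [250, 10, 10, 10, 10, 10]

for r in (0.05, 0.08, 0.12):
    nsb = net_social_benefit(benefits, costs, r)
    verdict = "accept" if nsb > 0 else "reject"
    print(f"discount rate {r:.0%}: net social benefit = R{nsb:6.1f}m -> {verdict}")
# The same project passes at 5% and 8% but fails at 12%, showing how the
# chosen discount rate can decide the outcome of the appraisal.
```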

Once all the costs and benefits have been identified, the next step involves converting them into monetary measures. CBA indicates that an investment should be undertaken if the benefits are larger than the costs. In order to compare benefits and costs, all factors must be converted to a common scale, which is usually monetary. In a competitive market, prices are disseminators of information and indicate the true worth of a good or service. When market prices govern customers' purchases, this reveals their willingness to pay (WTP) for a good and shows that the good is at least as valuable to them as the money they forgo. The utility gained from a good by the individual concerned is the maximum he or she would be willing to pay for the use or ownership of that good. (Sinden and Thampapillai, 1994)

Consumers will increase their consumption of a good or service up to the point where the benefit of an extra unit is equal to the marginal cost to them of that same good or service. The marginal benefit declines with the amount consumed, since the utility gained decreases as the quantity of that product increases. The market's answer to this is to decrease prices to allow consumers to obtain a greater quantity of the commodity. This association between the market price and the quantity consumed is illustrated by the demand schedule, or WTP curve. This makes the CBA process easier, since all values are already expressed in terms of their true economic worth. Under a market situation, Pareto efficiency is expected, since trade is governed by choice and no rational person chooses to become worse off. However, not all goods have market prices, and the term describing this situation is market failure. Examples include the atmosphere, the oceans and human life. (Sinden and Thampapillai, 1994 and Kopp, Krupnick and Toman, 1997)

A public CBA includes externalities, and as such the total economic value (TEV) that a project has for society as a whole, whether reflected in the market place or not, is measured and analysed. When market prices cannot be used directly for valuation, it may be possible to use them indirectly through surrogate market techniques, in which the market prices of substitute or complementary goods are used to value goods or services that do not have market prices. Goods without market prices cannot be excluded, since the lack of an organised market does not imply that consumers place no value on them. The selection of the appropriate valuation technique will depend on many factors, including the project to be valued and the availability of financial resources, data and time. The challenge is to identify all the effects of the relevant project and to correctly incorporate the valuation of their benefits and costs into the analysis. (Sinden and Thampapillai, 1994 and Kopp, Krupnick and Toman, 1997)

Some of the non-market valuation methods will now be discussed. This process can take many differing routes depending on the data, the econometric model and the status of the commodity or resource being analysed. The final choice of model will depend on various factors such as direct value, indirect value, option value, existence value and whether single or group observations are used. An important starting point for non-market valuation is the pair of concepts willingness to pay (WTP) and willingness to accept compensation (WTAC). The utility gained from a good by an individual is the maximum he or she would be willing to pay to acquire the good if the individual does not own the right to that good. Conversely, the relevant utility measure for the loss of a good owned by the individual is the minimum the individual would be willing to accept as just compensation, that is, the amount that would restore the individual to his or her utility level before the loss of the good. Theoretically, WTP and WTAC should be similar in magnitude for most goods that have close substitutes and for which the income effect is small. Empirical evidence, however, has shown that WTAC is on average two to five times larger than WTP for the same good. Garrod and Willis (1999) suggest six reasons that could explain this divergence, and these will be briefly explained. Firstly, the theory that WTP and WTAC should be similar for close substitutes may in fact be correct, with the divergence of the two measures being a function of the inadequate empirical procedures used to elicit them, such as poor questionnaire design and interviewing techniques. The second argument states that the WTAC measure is faulty: psychologists argue that ownership itself makes a good more valuable, resulting in a higher selling price or WTAC. Thirdly, consumers may act strategically when formulating their WTP or WTAC bids, especially when stating the minimum they would be willing to accept as compensation to restore them to their original utility level. A fourth argument, which relates closely to the second, is that the disparity between WTAC and WTP might be real: an asymmetry of value is created by the fact that people demand much more to give up an object than they would offer to acquire it, so that individuals ultimately show a disproportionate preference for the status quo of retaining what they already own. The fifth argument is that the difference can be explained in theory where there is an absence of substitutes for the good being valued. The last argument is that the gap between WTAC and WTP may be due to a lack of financial incentives and experience. (Hanley and Spash, 1993 and Pearce, 2001)

The contingent valuation method (CVM) is an example of a technique that is used when market prices are not available and is one of the most popular valuation techniques. CVM is a survey technique that attempts to extract information about individual preferences for a good or service by asking respondents how much they are willing to pay (WTP) for that good or service. The CVM falls under a set of approaches known as 'subjective assessments of possible damage expressed or revealed in real or hypothetical market behaviour', and it differs from market and surrogate-market techniques in that estimates are not based on observed behaviour but rather on inference. A CVM exercise can be split into a number of well-defined steps. The first step, found in most contingent valuation literature as the logical starting point for the exercise, is to set up a hypothetical market for the environmental good or service in question. This involves creating a detailed scenario or description of the policy or project that the respondents are being asked to value. All relevant information should be given to the respondents so that they fully understand the consequences of their valuation; most importantly, why they are valuing the good or service and how it is going to affect them. A bid vehicle must also be decided upon, indicating how funds will be raised for the project, for example property taxes, entry fees or income taxes. The second stage of the CVM pertains to obtaining bids from the respondents: once the survey instrument has been set up, the survey is administered. The choice of sample size will play an important part in the precision of the outcome, the rule of thumb being that the larger the sample, the smaller the variation in the mean WTP and/or WTAC. The survey can be conducted as a face-to-face interview, which provides the most scope for detail but has the potential for interviewer bias; this bias occurs when indirect signals influence the respondent's WTP or WTAC. Another elicitation method is known as the trade-off game, in which participants must choose between different bundles of goods. A mix of money and a quantity of an environmental good is offered, followed by a second offer in which the environmental good in the mix is increased and the money decreased; this decrease in money is essentially the price paid for the environmental good. The respondent may then choose between these two alternatives, and the price of the increase in the environmental good is varied until the respondent is indifferent between them, which yields the WTP for that good. (Hanley and Spash, 1993 and Pearce, 2001)
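A brief Python sketch of the bid-processing part of the second stage follows: it computes the mean WTP bid and its standard error for a small and a larger set of hypothetical survey responses, simply to illustrate the rule of thumb that a larger sample reduces the variation in the mean. The bid values and the helper name are assumptions for exposition only and do not come from any study cited here.

# Illustrative sketch only: the bid values are invented survey responses,
# not data from any study cited in the text.
from statistics import mean, stdev
from math import sqrt

def summarise_bids(bids):
    """Return the mean WTP bid and its standard error, which shrinks
    as the sample size grows (the rule of thumb noted above)."""
    n = len(bids)
    return mean(bids), stdev(bids) / sqrt(n)

small_sample = [12, 8, 15, 10, 20]      # five hypothetical bids
large_sample = small_sample * 20        # same spread of answers, n = 100
print(summarise_bids(small_sample))
print(summarise_bids(large_sample))

With the same spread of answers, the mean is unchanged but the standard error falls sharply as the sample grows, which is the precision gain the rule of thumb refers to.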

Another approach is the travel cost method (TCM), a method for valuing the non-market benefits of outdoor recreation resources that require expenditure for their consumption; the user values of these recreation resources can therefore be estimated. If the good is a recreational resource, this approach provides useful but limited information, because pools, even pools with waves, are not complete substitutes for an ocean beach, just as zoos do not replace seeing animals in the wild. Surrogate marketed goods can therefore provide only limited estimates of the benefits from many environmental services, and great care must be taken to ensure that other non-marketed or intangible benefits are not ignored. Hanley and Spash assert that the TCM seeks to place a value on non-market environmental goods by using consumption behaviour in related markets. Specifically, the cost of consuming the services of the environmental asset is used as a proxy for price; these consumption costs include travel costs, entry fees, on-site expenditure and the outlay on capital equipment necessary for consumption. (Hanley and Spash, 1993)
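To make the proxy-price idea concrete, the minimal Python sketch below totals hypothetical trip costs for three visitor zones and fits a simple least-squares slope of visit rates against cost, tracing the demand-style relationship that the TCM relies on. All zones, figures and names are illustrative assumptions rather than material from Hanley and Spash.

# Illustrative sketch only: the zones, visit rates and cost components are
# hypothetical figures used to show the TCM logic, not real survey data.

def trip_cost(travel, entry_fee, on_site, equipment_share):
    """Total cost of consuming the site's services, used as a proxy price."""
    return travel + entry_fee + on_site + equipment_share

# (visits per 1,000 population, total trip cost) for three hypothetical zones
observations = [
    (90, trip_cost(5, 2, 3, 1)),    # nearby zone, cheap trip
    (60, trip_cost(15, 2, 3, 1)),   # mid-distance zone
    (25, trip_cost(35, 2, 3, 1)),   # distant zone, expensive trip
]

# Simple least-squares slope of visit rate on cost: the negative slope
# traces out a demand-style relationship between the price proxy and use.
n = len(observations)
mean_v = sum(v for v, c in observations) / n
mean_c = sum(c for v, c in observations) / n
slope = sum((c - mean_c) * (v - mean_v) for v, c in observations) / \
        sum((c - mean_c) ** 2 for v, c in observations)
print(round(slope, 2))   # visits lost per extra unit of trip cost

The negative slope plays the role of a demand response: as the cost of reaching the site rises, use falls, and that relationship is what a full TCM study would use to estimate the site's user value.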

The hedonic price method is based on an alternative consumer theory in which a good or service within a particular commodity class can be described as a vector of characteristics. The value of the good or service is the sum of the values of the characteristics or attributes embodied in it. A good or service thus consists of a bundle of attributes, and differences in these characteristics are reflected in differences in the prices of differentiated goods or services. For instance, the value of a house will be made up of the value of its standard components, such as the number of bedrooms, as well as additional characteristics, such as a Jacuzzi. So one might pay a certain amount, X, for a three-bedroom simplex, but would pay X plus a premium for the same simplex with air-conditioning units installed; the premium is the extra value of the air-conditioning, ceteris paribus. Thus the price of a