The Economics Of Globalisation: an Introduction

Paper for the 2005 Conference of the New Zealand Association of Economists, June 29-July 1: Christchurch.

Keywords: Globalisation & Trade; Growth & Innovation

Introduction

The Royal Society of New Zealand has awarded me a grant from the Marsden Fund to study globalisation. The study is a continuation of my earlier research program, especially that which is summarised in my book In Stormy Seas with its central message that the fate of New Zealand will be largely a consequence of what happens overseas, together with our ability to seize the opportunities and manage the problems those events create.

The ultimate output of the Marsden Research will be a book. This paper describes the proposed chapters of the book, and the intuitions of their underlying economics. The study is based on five primary principles.

1. Globalisation is the economic integration of economies – regional and national economies.
2. Globalisation began in the early nineteenth century, so the phenomenon is almost two centuries old. Since globalisation is an historical phenomenon, focusing on just the last few decades throws away a rich source of insights.
3. Globalisation is caused by the falling cost of distance: transport costs, plus the costs of storage, security, timeliness, information, and intimacy. This gives a driver for the globalisation process. I have much more to say about this below.
4. Globalisation is not solely an economic phenomenon. It has political, social and cultural consequences.
5. The policy issue is not being for or against globalisation, but how to harness it to give desirable outcomes.

The structure of the book is as follows. Part I develops the economic analytics using a minimum of mathematics. The bulk of this presentation describes Part I, albeit at a more complex level of analysis than the book will use.

Once the economic underpinnings have been established, Part II of the book explores political and social consequences such as nationalism, sovereignty, policy convergence, cultural convergence, and diasporas.

The original Marsden submission said the study would focus on New Zealand in a globalising world. As the project developed I realised that would trap the research into a narrow framework. So the book is being written for an international audience as well as a New Zealand one. This does not mean New Zealand (and the Pacific) will be neglected. Leaving aside that there will be some chapters devoted to New Zealand, many chapters draw on New Zealand material to supplement the themes. For instance, the opening chapter contrasts the colonial experiences of Hawaii and Samoa, but the experiences of the New Zealand Maori are also used.

The Underlying Economics

Underpinning this analysis are some recent major developments in international (and regional) trade theory, particularly the addition of economies of scale. An excellent exposition is The Spatial Economy: Cities, Regions and International Trade by Masahisa Fujita, Paul Krugman and Anthony J. Venables (FKV). It is an exciting development because the new theory gives economics a geographical dimension which has thus far been largely missing. While economics has some history of a spatial dimension in the economy (going back to Johann von Thünen in the nineteenth century), it has been very limited. As Krugman observed, little attention is paid to the geographical clustering of economic activity. Maps are a rarity in most economic texts, while standard international trade theory is not greatly concerned whether two countries share a common border or sit on opposite sides of the world. Now we can start thinking systematically about economic activity in spatial terms.

A second major influence is represented by Globalization in Historical Perspective: a National Bureau of Economic Research Conference Report, edited by Michael Bordo, Alan Taylor and Jeffrey Williamson. It summarises the enormously valuable research on the economic history of globalisation, enabling one to think of the analysis in a practical historic context.

Once the costs of distance are introduced, location becomes an explicitly vital element of economic behaviour, especially where there are economies of scale. Their interaction generates outcomes which seem quite different from those of the standard theory, which has no costs of distance and only diminishing returns. This is frontier of economic theory stuff. If economists can get its analytics right and its intuitions understandable, it represents a major shift in the paradigm.

The new theory is quite difficult. Much of the mathematics is intractable (that is, there is no analytic solution, although the model can be explored numerically), so results often rely on simulations, which may not give general conclusions. The underlying mathematics abandons one of the key assumptions that has informed much economic analysis: that (plant and industry) economies of scale are not important. Their introduction changes the shape of the mathematical spaces, and undermines the intuitions that go with them. There are new intuitions, but we are not sure they are robust to minor changes of assumptions.

The analysis is further complicated by factor mobility. Elementary trade theory assumes that factors are immobile. It is readily extended by incorporating mobility of some or all factors, although the transition is not easy to analyse. The real complication, however, is to the political economy when labour is mobile, for that means votes as well as labour power are moving. Moreover, not everyone benefits from an opening of trade (be it by reductions in border protection or in the cost of distance), which complicates the political economy even further.

The Costs of Distance

The costs of distance are more than transport costs: they include storage, security, timeliness, information, and the loss of intimacy that separation causes.

Perhaps the best, if partial, attempt to measure them is the paper ‘Trade Costs’ by James Anderson and Eric van Wincoop, who calculated that the average American manufactured good carries a mark-up of 55% from the factory door to the final domestic retail price. The price of an export involves a 170% mark-up, or 74% more than the domestic sale. (The measures compound multiplicatively.)
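The multiplicative arithmetic can be checked in a few lines; this minimal sketch uses only the figures quoted above.

```python
# Ad valorem margins compound multiplicatively, not additively.
def compound(*margins):
    """Compound ad valorem margins (given as fractions) into one total margin."""
    total = 1.0
    for m in margins:
        total *= 1.0 + m
    return total - 1.0

domestic = compound(0.55)       # factory door to domestic retail
export = compound(0.55, 0.74)   # the extra 74% international margin on top
print(f"domestic mark-up: {domestic:.0%}, export mark-up: {export:.0%}")
# 1.55 * 1.74 = 2.697, i.e. a total export mark-up of roughly 170%
```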

Such averages are but ‘gee whiz’ indicators that the costs of distance are substantial; they vary greatly not only by destination but also by product. A classic example is the Barbie doll, costing $1 to make in Asia and selling for $10 in the US, a mark-up of 900%.

The study attributes about a third of the export trade costs to transport costs, divided between direct freight costs and the time value of goods in transit. The other two thirds of export trade costs are due to border-related barriers: language barriers, currency barriers, information barriers, contracting costs and insecurity, and policy barriers (tariffs and NTBs). Policy barriers contribute about a seventh of export trade costs, behind currency barriers, freight costs and the time value of goods in transit, and only slightly more than information costs and language costs. Malcolm Purcell McLean, the pioneer of container ships, may have made a far larger contribution to international trade than any director-general of GATT or the WTO.
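The ‘about a third’ can be made concrete with a multiplicative decomposition. The component figures below (a 21% transport margin and a 44% border-related margin) are round numbers of the order the paper reports, used here for illustration; log shares are a natural way to apportion a compound margin.

```python
import math

# Decompose the export margin multiplicatively into transport and
# border-related components. Figures are illustrative round numbers.
transport, border = 0.21, 0.44
export_margin = (1 + transport) * (1 + border) - 1   # ~74%

# Each component's share of the (log of the) compound export margin:
transport_share = math.log(1 + transport) / math.log(1 + export_margin)
print(f"export margin: {export_margin:.0%}")
print(f"transport's share: {transport_share:.0%}")          # about a third
print(f"border barriers' share: {1 - transport_share:.0%}")  # about two thirds
```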

Because trade costs are poorly measured, it is difficult to assess the degree to which the unit costs of distance have diminished and are diminishing. Aggregate costs may not be falling, because as unit costs decline more expensive destinations become profitable. The problem is nicely illustrated by air freight: today half of American shipments go by air, which reduced the tax equivalent of time costs from 32% to 9% over the 1950 to 1998 period, yet many of those shipments would probably not have occurred at all had air freight not been available.
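The ‘tax equivalent’ of time costs can be thought of as a per-day ad valorem cost (interest, depreciation, spoilage risk) compounded over the journey. A minimal sketch, with an assumed per-day figure of the right order of magnitude rather than a number quoted from the trade costs paper:

```python
# The 0.8%-per-day cost of time in transit is an assumption of the right
# order of magnitude, not a figure from the Anderson-van Wincoop paper.
PER_DAY = 0.008

def time_tax_equivalent(days):
    """Ad valorem tax equivalent of `days` spent in transit, compounded."""
    return (1 + PER_DAY) ** days - 1

for days, mode in [(25, "mid-century ocean"), (10, "containerised ocean"), (1, "air")]:
    print(f"{mode:>19}: {days:2d} days -> {time_tax_equivalent(days):.1%}")
```

Cutting a month-long voyage to a day’s flight removes most of this implicit tax, which is the sense in which air freight shrank the 32% figure towards 9%.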

But if we cannot measure the fall in the costs of distance precisely, we can observe it schematically. In 1855 it took around three months to get from New Zealand to Britain, whether one was sending a package, a person, or a message. Let’s represent that time by a line across the page:

************************************************

Today it takes only about a month to get to Britain by ship, because ships are faster and can go through the Panama Canal. That line now looks like:

**************

But that is misleading in regard to people and light valuable goods. Once they too went by ship to London. Today they can fly there in less than two days. Compared to 1855 the world looks like:

*

Yet information can be sent in vast quantities almost instantaneously via the world wide web. On the same scale that time is represented by something smaller than the full stop which ends this sentence.
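The schematic lines above can be generated mechanically by scaling each journey time against the roughly three-month 1855 voyage (the travel times are the approximate figures used in the text):

```python
# Scale each journey time to a line of asterisks, with the ~90-day
# 1855 voyage set to 48 characters.
times_in_days = {"1855 (ship)": 90, "today (ship)": 30, "today (air)": 2}
scale = 48 / 90   # characters per day

for mode, days in times_in_days.items():
    stars = max(1, round(days * scale))
    print(f"{mode:>13}: {'*' * stars}")
# Information over the web travels in seconds: on this scale its line
# would be far smaller than a single printed character.
```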

The Australian historian Geoffrey Blainey recognises, in the latest (2002) edition of his famous book The Tyranny of Distance, that distance is no longer the tyranny it once was. However, as the trade costs paper reminds us, reports of its death are exaggerated.

Analysing the Costs of Distance

Anderson and van Wincoop value their trade costs as if they were tariffs. In principle that means we can apply the theory of international trade, as it applies to tariffs, to all costs of distance. (Recall that the costs of distance are sometimes called ‘natural protection’.) Some modifications are necessary, however.

Where the distance costs are very high, making trade prohibitive, the analysis is similar to that for a tariff which prohibits imports. However, where there are some imports, the analyses differ. Tariffs add to the government’s revenue, whereas a fall in the cost of distance has little effect on the public revenue. On the other hand, a fall in the costs of distance releases resources which may be used for other purposes. Often tariff theory treats the revenue effect as minor; we could similarly assume that a reduction in the costs of distance releases only negligible resources. Another possibility is that the resources are supplied by a third party and do not impact directly on the resources available to the country (or countries) under consideration.

It is analytically important that the tariff rates – the costs of distance – vary by product. Here is a schematic list of some of the most important distinctions.

Information, where the costs by line are near zero relative to the cost of the product;

People and Light Valuable Goods shipped internationally (and continentally) by air;

Heavy Goods shipped or trucked (including rail) where typically the transport cost is high relative to the cost of the goods;

Intimacy, where face to face contact is necessary. This applies both to some business activities as well as to personal interactions.

The Turangawaewae – where one’s heart is, where one stands tall – which hardly moves at all.

(Note that this list is incomplete. The New Zealand economy requires particular attention to the needs of fresh and frozen exports.)

Consequently, with the possible exception of some information transmission, costs of distance do not fall to (or near) zero, and they fall at different rates. Thus the relevant analysis parallels a partial reduction in tariffs. Since the costs of distance do not fall uniformly by product (or by distance), the parallel analysis is with non-uniform partial tariff reductions. Regrettably, their analytics are not as simple as those for the total elimination of tariffs.
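The point about non-uniformity can be put numerically. In the hypothetical sketch below, the distance costs on two products both fall, but at different rates, so the relative delivered price (and hence the pattern of trade) shifts even though every cost moved in the same direction. All figures are invented for illustration.

```python
# Two products with identical f.o.b. prices but different ad valorem
# costs of distance. All figures are invented for illustration.
FOB = 100.0

def delivered_price(distance_cost):
    return FOB * (1 + distance_cost)

# Before: both products carry a 40% cost of distance.
before = delivered_price(0.40) / delivered_price(0.40)   # relative price 1.00
# After: information costs collapse; heavy-goods costs fall only a little.
after = delivered_price(0.01) / delivered_price(0.30)
print(f"relative delivered price: before {before:.2f}, after {after:.2f}")
# Even though BOTH costs fell, the relative price moved by over 20%,
# which is what drives the reallocation of production and trade.
```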

I won’t pursue the analysis further here, except to say that the relevant theory is far more complicated and far more subtle than the crude theories that underpin free trade. For instance, a reduction in the costs of distance between regions may result in one region experiencing a reduction in its per capita income, together with factors flowing out of it. This is additional to the standard conclusion that a shift to free trade may make some owners of particular factors of production worse off.

For instance, we know from the theory of partial trade arrangements that one of a pair of countries can actually be worse off as the result of some, but not all, tariff elimination. Thus a reduction in the cost of distance could make one economy worse off. This is not inevitable, but it can happen.

Such distributional matters are not trivial for those interested in the political economy of globalisation. We can see a parallel between workers demanding a protective tariff to prevent the loss of jobs from the offshoring of production as costs of distance fall, and those demanding that an existing tariff not be reduced. However, it is possible that an entire country may be worse off. I shall report a Fujita-Krugman-Venables result which may be interpreted that way.

Third, elaborating the first point, and perhaps offsetting some of the caveats of the second, a fall in the costs of distance is a technological change which releases resources for other economic activities. The models I have been describing largely ignore economic growth, studying a world in which the production technologies (and labour and capital supply) are largely given, a reasonable research strategy given that they wish to explore other analytic issues. But I want to apply their insights to actual history, where there is economic growth. One source of growth is the productivity gain when distance costs fall. Since I don’t want to write the general theory of everything, I am fudging this issue, treating the technological changes which drive growth as exogenous and largely outside the narrative.

I now proceed through the book in chapter sequence. (The chapter titles are in the appendix.)

Part I: The Economic Model

Chapter 1.1: The Significance of Location: Samoa and Hawaii

The opening chapter sets a context for the book by contrasting the experiences of Hawaii and Samoa, demonstrating the importance of location, and how effective location changes with technology, with cultural, political and social, as well as economic, consequences.

Chapter 1.2: When Distance Changed: New Zealand Refrigeration

The introduction of refrigeration changed the effective cost of distance for meat and dairy products from prohibitive to negligible. The historical experience illustrates the standard theory of trade model, but the chapter goes on to explore the dynamic impacts on technology and on political and social development.

Chapter 1.3: Regional Integration and Plant Economies of Scale: Nineteenth Century America

The falling costs of distance – roads, canals, railroads and telegraph – integrated the isolated regions of early nineteenth century America, contributing to the development of the most formidable economy in the world. (Other factors were technology, resources, land and immigration.) This chapter elaborates the previous chapter’s analysis by adding plant economies of scale, and shows how falling costs of distance boost growth.

The chapter goes on to criticise the Robert Fogel conclusion that the impact of the railroads on American economic growth was small. He focussed on agricultural production but the effects of reductions in the cost of distance are far more spectacular where there are economies of scale. The growth of the US manufacturing sector is one of the spectacular developments of the nineteenth century, and while falling costs of distance are not the only reason for this growth, it is clearly one of them.

Among the other functions of the chapter are

– to draw attention to the high degree of labour and capital mobility within the US (and from across the Atlantic). If the economy of any location surged, it could obtain the necessary factor inputs. Later chapters show that labour mobility is a key issue in globalisation.

– to explore the US as an early example of a possible globalised world, where factors of production are mobile, and whose localities (the states) have little economic and fiscal independence.

Chapter 1.4: Cities and Industry Economies of Scale: New York

This might seem a repeat of the previous chapter, albeit with greater detail when it describes how New York was an important harbour on the West Atlantic seaboard and how that dominance was reinforced by the Erie Canal which connected the Hudson River to the flourishing Mid-west via the Great Lakes. However the chapter also introduces industry economies of scale (agglomeration), in which as an industry increases in size in a location it experiences falling costs. Industry economies of scale are more analytically tractable than firm economies of scale, because the individual firms are numerous and competitive.

Agglomeration was a super-multiplier, which enhanced the initial superiority of New York’s location. The chapter explores whether such industry economies of scale are decisive, or whether they become exhausted and congestion costs take over. Presaging a future theme, the issue of the cost and quality of future governance is also raised. An implication of such super-multipliers is that the location of economic growth may be path dependent, a theme explored later.

Chapter 1.5: When Services Become Tradeables: Bangalore

Economists have traditionally divided the economy into the primary, secondary, and tertiary sectors. The justification is long forgotten, but geographically, the primary sector – farming, fishing, forestry and mining – had to be close to the resources it processes, while the tertiary sector – services – had to be near to its customers. Secondary industries – manufacturing – have more locational choice, which is why much policy has been directed towards influencing where they are established.

This categorisation was never perfect. Tourism is in the service sector, but it brings its customers to its location. Some other service activities – such as the education of foreign students and health services for foreigners – also bring the customer to them.

In recent years, as telecommunications costs have collapsed, other parts of the service industry no longer need be located near the customer, although outsourcing is not confined to services. (An early example was the purchasing of components for assembly, with just-in-time production a further refinement, again made easier as distance costs decline.) However the most public concerns have been about the outsourcing of services, particularly their offshoring, where the supplier is ‘overseas’. The book explores the offshoring phenomenon through Bangalore.

Such offshoring is not a new phenomenon, but a variation of the relocation of manufacturing which has been going on for a couple of centuries. Perhaps the new political difference is that professional workers are more affected. The chapter looks at the particular circumstances which have made the offshoring of particular services to Bangalore attractive, and contrasts the different circumstances which have offshored manufacturing to China and other parts of East Asia. It concludes by speculating on the extent to which other services can also be offshored.

Chapter 1.6: The Indeterminacy of Location: Finland’s Nokia

Why should Nokia, the world’s largest mobile phone company, be located in Finland? Journalists offer post hoc explanations, but the introduction of economies of scale and low distance costs opens the possibility that some industries simply occur in particular locations as the result of accidents of history and path dependence. (The chapter also cites Fisher & Paykel as another example, for there is no obvious reason why New Zealand should be good at whiteware.)

The indeterminacy of location is an uncomfortable analytic conclusion, deriving from the complexity of the mathematical spaces which underpin the analytics, for they are no longer purely convex. Ultimately the models’ outcomes may be the consequences of chaotic processes.

However they do explain another recent development in economics, the role of competitive advantage. Comparative advantage generates deterministic locations. But suppose accident and path dependence locate an industry in a particular place. How does it maintain its predominance, when other businesses and locations can replicate its production processes? In a changing industry, dominance can be maintained by the first mover if it continues to innovate ahead of its rivals. That is the core of competitive advantage.

Chapter 1.7: Intra-Industry Trade: To be decided (the motor vehicle industry?)

Intra-industry trade (IIT) occurs when two economies export and import the same product. Whereas it hardly existed internationally in 1950, it is now thought that about a quarter of the international trade in goods is of this form. (The other quarters are oil, primary commodities, and other manufactures).

While not initially intuitive, IIT is a consequence of falling costs of distance once there is a degree of product differentiation. This chapter will be a good place to make Robert Reich’s point that the nationality of products is becoming ambiguous. He uses cars, which, being visible to the reader, are probably the best example (not to mention the substantial learned literature on the industry).

Chapter 1.8: Multinationals: To be decided (McDonalds?)

A chapter on multinationals follows. McDonalds has the merit that globalisation has been described as ‘McDonaldization’, although the recent struggles of the company suggest the equation is too simple.

The final chapters of Part I examine the international mobility of the factors of production (other than land). They are not yet in draft form so the exposition here is very tentative.

Chapter 1.9: Labour Mobility: Mexico and The United States

Labour mobility is explored initially at the Mexican-US border, where there is one of the greatest cross-border income differentials. It leads to consideration of one of the central ideas of international trade theory – that trade flows are an alternative to labour flows – via the North American Free Trade Area. But the greater purpose of the chapter is to contrast the much higher labour mobility in the nineteenth century with the more restricted mobility today, because of the restrictions applied by nation states. The chapter discusses why they were introduced in the early twentieth century, and how this contributed to the period of stagnation of globalisation up to 1950.

Chapter 1.10: Foreign Investment: To be decided

The content and illustration of the chapter on foreign investment have yet to be decided. Obviously it will distinguish between foreign direct investment and short-term financial flows. (How to tie it into the multinationals chapter? Hmm.)

Chapter 1.11: Technology Transfer: To be decided (Japan?)

The content of the technology chapter has also yet to be settled. The general view is that technology is highly mobile internationally, but the particular choice of technology is affected by factor proportions, the quality of the human capital, and managerial skill. There is a cross-national study of nineteenth-century cotton mills which illustrates this point. However the book really needs a twentieth-century illustration. I have yet to identify a suitable example, although Japanese post-war economic growth looks promising. In particular it might explore the degree to which the Japanese stagnation of the 1990s is a consequence of industry moving offshore to the Asian Tigers, because of the ease of transferring technology to lower-cost countries.

Chapter 1.12: The Convergence Club: (Argentina and an Asian Tiger)

Baumol has observed that countries which are high income at one point in time, tend to remain high income thereafter. Thus almost all the high income countries of a century ago still belong to this ‘convergence’ club.

Top membership has been relatively stable. In 1900 it consisted of North America, Western and Northern Europe, Japan, Australasia, and the Southern American Cone. A century later they remained its members, except for Argentina, Chile and Uruguay. Perhaps a few countries from South-East Asia and East-Central Europe have joined, or will join soon.

(The convergence puzzle is reinforced by the contrast with business. Very few companies which were at the top a century ago, or even a quarter of a century ago, even exist today. Economists are comfortable with this high business turnover. Why don’t countries experience a similar turbulence in their rankings?)

It is relatively easy to offer an explanation of why the characteristics which give good economic performance at one time will persist, so that the economy continues to grow at a rate similar to other high-performance economies. Postwar revivals suggest that these factors are important.

However the explanation seems to rule out the exceptions where countries leave ‘the club’. This needs to be explored.

There is much work to do on this chapter, including on definition and measurement, and how it relates to the phenomenon discussed in the final chapters.

Part II: The Consequences

Part II explores the consequences of the economic processes described in the first Part, including the policy responses which modify them.

Chapter 2.1: Nationalism: Germany

The nation-state, indirectly as an economy and directly as a controller of the border, appeared throughout Part I. However it is a relatively recent phenomenon, no more than a couple of centuries old. The chapter illustrates this by reference to Germany which did not exist as a state in 1805 and two hundred years later has many of the state’s ‘traditional’ functions subordinated to the European Union. The chapter argues that nation states are a response to the falling costs of distance, which made smaller communities aware of being a part of larger communities. The response was imperfect, as the ambiguous boundaries of Germany show. The chapter concludes that the nature of the state is changing, but points out that while the German state may not last long, German culture is much older and is likely to persist beyond the demise of any state.

Chapter 2.2: Is Cultural Convergence Inevitable?: Canada and the United States

This chapter explores the degree to which globalisation causes all cultures to converge to a single one, a matter of some public anxiety, although there is a confusion arising from a nostalgia which blames all cultural change on the forces of globalisation. The US-Canada border carries the world’s greatest volume of cross-border trade, so we can explore to what extent the two countries are culturally converging. (The question is complicated by the internal Canadian issue of Quebec, although the US is also not a single culture but a combination of regional ones.)

Chapter 2.3: Diasporas: To be decided (Samoa and ?)

This chapter turns the previous chapter’s argument the other way round, by asking whether a nation can exist outside a place, for an implication of the falling cost of distance is that the location of ‘citizens’ may become less important. The illustrations have not been chosen. Samoa is a possibility, with over half of Samoans living outside the Republic of Samoa and the local economy very dependent upon its diaspora’s remittances. But an example of a diaspora from a more economically sustainable nation state is also needed.

Chapter 2.4: The Meaning of Sovereignty: The Globalisation of Time

By describing the development of international calendar and time standards, the chapter introduces the distinction between de facto and de jure sovereignty, which it applies to economic agreements such as the Multilateral Agreement on Investment.

Chapter 2.5: Is Policy Convergence Inevitable?: Health Care

‘Policy convergence’ occurs as countries adopt the same policies. Thus countries are finding their ability to assist domestic industry increasingly limited. But is convergence also a feature of social policies? This chapter shows how a standard public health policy – taxation of alcohol to reduce harm – is being limited in the EU by the requirement of free flows of goods.

Chapter 2.6: International Trade: Agriculture

On the other hand, in some areas there is considerable policy convergence. I would like to illustrate this with international trade. It depends a bit on the outcome of the Doha Round, I suppose.

Chapter 2.7: A Race to the Bottom?: The Social Market Economy

The social market economies of the original members of the European Union are under considerable pressure as a result of the enlargement of the EU, both because of the low-cost new economies and because of the influx of cheaper labour from them. Are social market economies viable under globalisation, or are all economies forced into individualistic modes of regulation?

Chapter 2.8: The Role of Borders: The Implications of Migration

I have not written this chapter yet so I don’t know what it will conclude, but it seems likely there will be policy convergence in some areas, partial policy differences in others, and more freedom in others. But what are the dividing lines?

However, I am increasingly thinking that labour mobility is key to many of the politico-economic policy issues which globalisation raises. Which leads naturally on to the next chapter.

Chapter 2.9: Kinds of Nations

By considering whether New Zealand should join Australia, the United States or the European Union – or stay outside – we can explore different sorts of options for nations in a globalised world. There will be supplementary examples, including Canada and Norway, and probably some consideration of micro-states in the Pacific.

Chapter 2.10: It’s Not Easy Being Small: New Zealand

The Size of Nations by Alberto Alesina and Enrico Spolaore argues that middle-sized countries (New Zealand would be at the lower end of the group) tend to have a better economic performance than larger economies, because they are simpler to govern. Much of the research on economic growth focuses on the market sector, ignoring the public sector, while there is a strong ideological strain which dismisses the public sector as a source of poor economic performance. This theory suggests that the public sector is important, and that the less heterogeneous the population, the more efficient it is.

Middle sized countries face a tradeoff of a more efficient public sector with less access to economies of scale in market production. The resolution is specialisation in production, with international trade converting the production into a more general mix of consumption. The paradox, lurking throughout the book, is that smaller nation-states may be able to survive, but only by abandoning some sovereignty by participating in international trade.

The chapter may have also to look at sub-states, regions of nations which have greater autonomy – possibly illustrated by Scotland and Quebec.

The Book’s Conclusions

Chapter 4.1: The Pattern of World Development

This chapter brings together the analysis of Part I with economic history to reach what may be its most stunning conclusion. I sketch it here, on the understanding that what I have to say may be substantially revised as the research progresses.

I start off with two well-established long-term trends. The first arises from the sharp divergences in income and productivity in the world today. We might expect their distribution to be clumped in the middle with a few extremes at the top and bottom. Instead it is U-shaped: a large group of people and countries sits at the bottom, another (albeit smaller) group at the top, but very few in the middle.

(Despite all the angst, New Zealand is – and always has been – a member of the top club. Its production relativity has almost certainly sunk in the post-war era, but it is still probably close to the mean, allowing for measurement error. The claim that New Zealand could join the third world is rhetoric, although a rhetoric which could become a reality were we to follow it.)

The reality is that those countries which were poor a hundred years ago are typically still poor, unlike in the era before globalisation, when the relative differences between the top and bottom income countries were tiny in comparison to what they are today. We might have expected economic forces – international trade, international migration, international investment and technology transfer – to have diminished the differences. They have not. Why?

The second long-term trend is the shift in the geographical distribution of manufacturing since 1750. As the following table shows, before globalisation began two countries, India and China, produced more than half of the world’s manufactures. Almost two hundred years later their contribution was insignificant. The stark contrast is with the far greater stability of world population shares.

Manufacturing Output by World Share (Percent), 1750-1938

Year    India   China   Rest of Periphery   Developed Core
1750    24.2    32.8    15.7                27.0
1800    19.7    33.3    14.7                32.3
1830    17.6    29.8    13.3                39.5
1880     2.8    12.5     5.6                79.1
1913     1.4     3.6     2.5                92.5
1938     2.4     3.1     1.7                92.8

Source: Simmons (1985), p.600.

It would be easy to dismiss the change as a consequence of the transformation from handcraft industry to factory production. But as we have already seen in earlier chapters, the location of manufacturing-type activities may be driven by accident and path dependence. Recall that manufacturing is important because, unlike primary production, it need not be located near a resource base, nor, unlike (traditional) services, need it be located near its customers.

I am now going to apply the Fujita, Krugman and Venables model to historical experience, pushing it beyond its analytic limits. FKV caution against too bold an interpretation of their work, but economists are equally bold in applying the rigorous neoclassical model to historical experience.

Their simplest model describes what happens to manufacturing (relocatable industry with economies of scale) in two identical countries as the costs of distance fall. Initially, the two economies have the same level of manufacturing, but as trade costs decrease, the share of manufacturing passes through a critical point (as in catastrophe theory) and suddenly one country (arbitrarily) ends up with all the manufacturing, and the other with none.

The former country is better off. In the formal model the labour force qualities are identical, but after the bifurcation workers in one country are paid more than those in the other, because of the higher productivity of manufacturing and the restriction on labour migration between the countries. In effect, the costs of distance enable the prosperous workers to capture the rents from the economies of scale. (Note the importance of labour mobility – or the lack of it – again.)

At this point we honour Joan Robinson, by observing that while we have (almost) described the phenomenon as occurring through time, the modelling actually makes static comparisons. Through time there will be transition dynamics, which are very complicated and not captured by the modelling. But if Robinson and FKV will allow us, we might describe the beginning and end of the nineteenth century as reflecting the two sides of the bifurcation, albeit one involving more economies of different sizes, and many more sectors of production. We can see the economic forces which generated this branching in the examples of the growth of New York and Nokia.

However, as the costs of distance continue to fall, the model’s behaviour suddenly snaps back through another bifurcation, to equal shares of manufacturing for the two countries. The intuition is that when the costs of distance are near zero, manufacturing settles where the population is, and where wages are lower.
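The symmetry-agglomeration-symmetry sequence can be caricatured in a toy simulation. To be clear, this is not the FKV model itself: the hump-shaped agglomeration force A*phi*(1-phi), the constant dispersion force C, and all parameter values are illustrative assumptions of my own, chosen only to reproduce the qualitative story of the bifurcation and the snap-back.

```python
# Toy caricature of the snap-back story: NOT the FKV model itself.
# phi is 'freeness of trade' (0 = prohibitive costs of distance, 1 = costless).
# Assumed: a hump-shaped agglomeration force A*phi*(1-phi) against a
# constant dispersion force C (reflecting immobile, cheaper labour).
A, C = 5.0, 1.0  # illustrative parameters, not calibrated to anything

def final_share(phi, s=0.55, dt=0.1, steps=20000):
    """Long-run share of manufacturing in country 1, starting from a
    small arbitrary asymmetry (s = 0.55)."""
    for _ in range(steps):
        # Firms drift towards the more attractive location when the
        # agglomeration force outweighs dispersion; the s*(1-s) term
        # keeps the share inside [0, 1].
        s += s * (1 - s) * (s - 0.5) * (A * phi * (1 - phi) - C) * dt
    return s

# High costs of distance: dispersion wins, equal shares.
# Intermediate costs: agglomeration wins, one country takes (nearly) all.
# Near-zero costs: manufacturing spreads back to equal shares.
for phi in (0.1, 0.5, 0.95):
    print(phi, round(final_share(phi), 3))
```

The real FKV analysis of sustain and break points is far subtler than this reduced form, but the sketch captures the key qualitative claim: the symmetric outcome is unstable only for an intermediate range of the costs of distance.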

The FKV model involves constant technologies (falling costs of distance aside). Adding technological change will generate a pattern in which the economies which dominate manufacturing grow far more quickly than those which are left behind. One might speculate that, under some realistic conditions, incomes in the loser economy may stagnate or even decline. Paul Samuelson’s recent paper on offshoring shows that this can happen in the standard model, via terms-of-trade shifts.

This sounds suspiciously like what may be happening in South and East Asia. Perhaps the economic dominance of the developed core is but a transition in the history of the world economy. A multiple-country, variable-size FKV model may have many bifurcations. The accelerated growth we have seen among some East Asian economies in the last two decades may be an example of this shifting from one branch of the overall bifurcation to another. (Can the bifurcation go the other way? FKV don’t identify a mechanism, but the practical experience of the Southern Cone countries suggests ‘yes’.)

We cannot be sure how long it will take to return to the end solution of manufacturing shares more closely reflecting population shares. Given that it took one hundred years to create the developed core, it seems likely that it will take at least a hundred years to end it; the latter transition may take even longer. Today’s core economies have accumulated advantages in physical, social and human capital, but these need not persist forever, while the core’s capital and technologies are likely to flow to the new centres of economic activity. Note that while in recent years good government has been seen as a prerequisite for high economic performance, the history of the last two centuries shows that some economies which experienced considerable turbulence – including war and civil war – nevertheless performed very well by standard economic measures.

And of course, some other factors may slow down or reverse the process. There may be a minimum to how low the costs of distance can go. Perhaps costs will rise with higher fuel costs or the need for security against terrorism. Path-dependent theories of growth precipitated by exogenous events leave open many possibilities. It will be our descendants, four and more generations on, who may be able to be more definitive. Certainly this chapter will be speculative, but it is speculation constrained by the discipline of models.

Chapter 4.2: Gainers and Losers

I am not sure what will be in this chapter. It may be subsumed in, or a continuation of, the previous one. There is a vigorous debate about the extent to which recent globalisation has resulted in increasing or decreasing poverty and income inequality. The factual differences in the dispute appear to rest upon what is going on in China and how to interpret it. I am happy to let the dispute evolve further before I write it up, particularly as it assumes that the Chinese income and production statistics are reliable, even though nobody with any expertise seems to trust them.

Chapter 4.3: Distance Looks Our Way

It is premature to guess what will be in the final chapter. Nor am I yet sure of the book’s policy implications. Anyone with a policy agenda can use the available analysis on globalisation to support theirs. My approach, as always, is to push the analysis as far as I can, before coming to policy conclusions, if any.

This Paper’s Conclusions

The purpose of this paper has been to describe the Marsden-funded project on globalisation, and to give a flavour of its underlying analysis and tentative conclusions. It has covered a lot of material. Rather than try to summarise it all, let me reaffirm the five principles with which the paper opened and which have formed the context in which its analysis has been presented.

I am most fortunate to have the opportunity to pursue the project which is both intellectually challenging and relevant to the future of New Zealand and the World.

Ultimately, the cheerful conclusion may be that size need not be a handicap to New Zealand (and any distance handicap is diminishing). But there are other international forces shaping how New Zealand will evolve. Hopefully the book will help us better understand how to respond to them.


Some Nationbuilding Economists

Paper to the History of New Zealand Economics Session at the June 2005 conference of the New Zealand Association of Economists.

Keywords: History of Ideas, Methodology & Philosophy; Political Economy & History;

In the course of describing the evolution of the New Zealand political economy between 1932 and 1984, my book, The Nationbuilders, highlighted four economists: Bernard Ashwin, Bill Sutch, Bryan Philpott and Henry Lang. This paper looks at those economists from the earlier phase of the period, thereby leaving Philpott and Lang and others for a later assessment. By focussing on the period in which economics first became important in the New Zealand policy process, it adds to the first two a number of other economists: particularly Horace Belshaw, Dick Campbell, Douglas Copland, and James Hight.

Gordon Coates was key to this development. He was not an economist. Indeed he was not a university graduate, although he played a key role in establishing Massey University College, honouring his mentor Bill Massey. Yet he understood the importance of expert advice in the policy process. Because the economics profession was developing at that time, and because some of Coates’s toughest problems involved economics, he introduced economists into government (it would, of course, have happened anyway, but later).

Joseph Gordon Coates (1877-1943) had entered parliament in 1912, representing Kaipara, where he farmed. Following a distinguished war record, he became a cabinet minister in 1919, showed competence and energy, and was Massey’s choice as successor when Massey died in 1925. Coates won the 1925 election, although he was to lose the 1928 election (despite his party winning the most votes).

As Prime Minister, Coates made two important innovations which promoted the development of the economics profession. The first began in June 1926 when he appointed Dick Campbell as an advisor in the prime minister’s office.

Richard Mitchelson Campbell (1897-1974) studied at Victoria University College part-time while working in the Department of Education, graduating with an LLB in 1923 and an MA in economics in 1926. He left the Prime Minister’s office in August or September 1927, on a scholarship to LSE, obtaining a PhD for a thesis on imperial preference in 1929. There followed a couple of years in the United States at Cornell University, the Brookings Institution and travelling. He returned to Wellington in June 1931, initially on the secretariat of the abortive all-party Special Economic Committee. When Coates became deputy prime minister on the formation of the coalition government in September 1931, Campbell rejoined his office. Among Coates’s portfolios were works and unemployment. The Minister of Finance was the lawyer Downie Stewart, who had earlier been Minister of Finance in the Coates government.

Coates’s other innovation was the National Industrial Conference of early 1928, which is noteworthy for three reasons.

First, while tripartite conferences of business and unions together with parliamentarians and officials are common enough today, the NIC seems to have been the first of its kind in New Zealand.

Second, not only were five academic economists involved (Professors Horace Belshaw (Auckland), Alan Fisher (Otago), Barney Murphy (Victoria) and Bert Tocker (Canterbury), plus David Williams, a lecturer at Massey), but they dominated the conference by opening the discussion after the ceremonials and providing half of the presented papers (by length), while the valedictory session was closed by Murphy, speaking on their behalf.

Third – even more astonishingly from today’s perspective – there were nine official heads of departments, including all the usual suspects, except there was no one from the Treasury. As Malcolm McKinnon reports, in those days the Treasury was essentially an accounting agency, its focus perhaps more gloriously described as ‘fiscal management’. It was certainly not involved in economic management. That would come later, and is a part of the Nationbuilders story.

Coates returned to office near the deepest point of the Great Depression. Within a few months the new government was divided over the direction of economic policy. On the one hand, the agricultural sector, with Coates as its leader, wanted to share the burden of the fall in the terms of trade across the whole economy, and so promoted a devaluation; on the other, the financial sector and Stewart wanted ‘sound’ money and opposed devaluation. (The alternative was a disinflation, which presumably would have involved some extremely harsh measures.) Writing in 1936 John Beaglehole said that ‘Coates and his staff sped with eager elasticity from specific to specific. Mr Stewart saw with distinctiveness the disadvantages of everything.’

The political infighting is not well documented. Indeed the biographies of Coates as Minister of Finance are disappointing, perhaps because the writers are historians and do not follow the economic issues. A critical clue may be in a letter Campbell wrote forty years later recalling that on 22 January 1932, ‘Coates was writing to [prime minister George] Forbes to insist on a raised exchange-rate or (safe, then) let it find its own level. This, with a threat of “reconsidering my position …” A VITAL LETTER THIS WBS’ (letter to W.B. Sutch, 6 March 1973, original’s capitals). This is one of the few occasions in the correspondence where he offers a precise date. On at least two other occasions in his correspondence he also refers to the letter, in one case inserting inside ‘Coates said’ the aside ‘(Campbell at his elbow, Belshaw at Auckland not surprisingly – and indeed superfluously – duly encouraging)’. (6 March 1973) The implication is that the events surrounding it were seared in his memory.

Campbell’s remarks have all the hallmarks of the discreet public servant signalling that he was a key player in the decision. Biographies of politicians often give the impression that their heroes came to their great conclusions all by themselves. Those familiar with contemporary policy making are acutely aware of how influential official advisers can be. As this instance shows, the practice is a long-established one, if ignored by biographers. This not only gets history wrong, but overlooks that Coates was a superb leader of people who may have been brighter than he was, but whose ideas he was able to articulate into policy. No wonder they worshipped him.

Twenty-one days later, on 12 February 1932, a Cabinet minute established a committee ‘to examine the economic and budgetary position of the Dominion’. The committee met the following day and ‘each successive day’, submitting its report on 24 February. Presumably it was set up a little earlier (given that four of the five members did not live in Wellington). The members were James Hight (chair), Belshaw, Douglas Copland, Alexander Park who was secretary of the Treasury, and Tocker.

Despite nary a reference to his economic achievements in the Dictionary of New Zealand Biography, James Hight (1870-1958) was one of New Zealand’s most important early economists, appointed to the chair in economics and history at Canterbury University College in 1909, having been a lecturer from 1901. In 1919 the chair was divided, John Condliffe becoming the professor of economics, with Hight remaining professor of history until 1948. It is this and his rectorship of the College from 1928 to 1941 which the DNZB, perhaps understandably, highlights. Yet in the couple of decades he was an economics teacher he brought on such extraordinary students as Condliffe, Copland and James McIlraith (best known for his Course of Prices 1852-1910). Condliffe dedicated his The Making of New Zealand to Hight, and both he and Copland write generously of their first teacher in Hight’s Festschrift, Liberty and Learning: Essays in Honour of Sir James Hight, particularly recalling his commitment to scholarship and research. In the public arena, Hight was a member of the 1912 Royal Commission on the Cost of Living (McIlraith was an expert witness), and the 1919 British Board of Trade Commission on the New Zealand coal industry.

One assumes that Hight was appointed to chair the 1932 Economic Committee because of his status (and perhaps because Forbes was a Cantabrian). Certainly the committee was independent, but the selection was done, one suspects, to rule out particular conclusions. Belshaw was already known to support devaluation, as was Copland (who must have been visiting New Zealand as he was Professor of Economics at the University of Melbourne from 1924 – earlier he had been professor at the University of Tasmania – going on to the vice-chancellorship at ANU in 1944). Copland had published articles advocating a devaluation in various New Zealand newspapers, which were brought together in a pamphlet as New Zealand Exchange and the Economic Crisis in 1931.

It is unlikely that Hight had any public position on the exchange rate question. But did Tocker? Tocker (1884-1964) famously wrote ‘The Monetary Standards of New Zealand and Australia’ (Economic Journal 34 (December 1924): p.556-75), in which he applied Keynes’ analysis of the Indian monetary system based on sterling balances to New Zealand. He had graduated from Victoria University College in 1912 (but not in economics, which was not introduced until 1919), was an assistant lecturer at Canterbury under Condliffe from 1921 to 1925, became professor after Condliffe resigned, and held the chair until 1950 (also following Hight as Rector from 1943 to 1948).

The obvious outlet for his opinions was The Canterbury Chamber of Commerce Economic Bulletin, which was published monthly from 1925, with the text usually attributed to ‘The Department of Economics of Canterbury College’. From 1927 to 1938 it consisted of Tocker and George Lawn (1882-197?). (Lawn was a director of the Reserve Bank of New Zealand from 1936-1952 and became its (first fulltime) Economic Adviser from 1937 to 1951. Colin Simkin was his CUC successor.) We do not know how the preparation of the Bulletins was shared between the two.

Whatever the case, the Bulletin contained surprisingly little discussion of exchange rate policy, in contrast to Copland’s activities in 1931. There is nothing in 1931. The February 1932 issue, which was probably written before the Committee sat, does not offer an opinion on the level of the exchange rate, centring instead on the establishment of the Compulsory Exchange Pool in December 1931. In the year following the Committee’s report, when the exchange rate level was being vigorously debated nationally, the Bulletin raises the exchange rate only in the context of ‘The Empire Currency Question’ in the August 1932 issue. There is no opinion on the appropriate level of the exchange rate.

Reading the Bulletin gives a sense that Tocker belonged to the finance wing of the economics profession of the time. Even so, he and Hight supported Belshaw and Copland, with Park dissenting on the exchange rate change, although the rest of the report was unanimous.

Despite his leadership role at the 1928 conference, and despite being the local professor, Murphy was not on the committee. He belonged to the finance wing. In a 1973 letter Campbell dismisses his old teacher as ‘For me, Barney’s counsel ever was “Like the lovely Akatarawa tree, the economy of NZ just needs to be LEFT ALONE.”’ (1 July 1973, Original caps). These were interventionist days.

Park was not an economist, but an accountant, for this was the accountants’ era of the Treasury. However the report says that Bernie Ashwin was ‘constantly in attendance’. Ashwin was the Treasury’s first and lone economist, and would be so for another decade.

Bernard Carl Ashwin (1896-1975), a year older than Campbell, also studied at Victoria University College. He returned from overseas service to do his accounting qualifications and then, so he said, ‘accounting was becoming so popular … it was clearly advisable to go further’, to an M.Com in economics, possibly VUC’s first economics postgraduate. Like Campbell he started in the Department of Education, but he transferred to the Treasury in 1922, and by 1931 was second assistant secretary, a position which seems to have been specifically created for this promising young public servant. In 1930 he had given a paper to the local economic society, which was subsequently published as ‘Banking and Currency in New Zealand’ in The Economic Record (Nov 1930, p.188-204). A curious feature is that it cites Ashwin’s institution as Victoria University College. This may have been a laundering to separate it from a Treasury view – not the last time that this sort of thing occurred. But did Ashwin have some connection in 1930 other than being a graduate? Perhaps he was a part-time teacher. (His Masters thesis was on Public Finance. It probably no longer exists, but is unlikely to be connected with the Economic Record article.)

Campbell is as dismissive of Park as he is laudatory of Ashwin. ‘If Ashwin had been there, instead of subordinate to Park and [G.C.] Rodda [the Secretaries of the Treasury before Ashwin], at least there’d been an above-average intelligence at the top. There wasn’t.’ (16 August 1973)

No doubt too, Ashwin helped draft Park’s dissent against the devaluation. But what were his own views?

Campbell reports that Ashwin ‘reluctantly’ agreed to the devaluation. One is not surprised. His interests were more in finance than in agriculture (although he came from a rural background). It is reasonable to assume that he was more influenced by his teacher Murphy, particularly as unlike Belshaw, Campbell, and Copland he had no overseas university experience, even though he was as able as they. But his record shows he was a pragmatist rather than ideologue, and could be persuaded by analysis and evidence.

Coates, with Campbell as an adviser, and Stewart were away overseas for most of 1932, at the Ottawa conference, so the devaluation did not occur until January 1933. Stewart promptly resigned, and Coates became Minister of Finance for the next three years.

He added Bill Sutch to his advisory team in August 1933 and Belshaw in December 1934, creating what was colloquially known as the ‘Brains Trust’ – a not unreasonable description, in that he was employing three of the four economists in the public service and they all had doctorates. Campbell said Coates would have liked to employ Ashwin too, but recognised the Treasury needed him more. It seems likely that the separation was not great. Both the Cabinet and the Treasury were in the same building, and so the Brains Trust and Ashwin would have been bumping into one another and discussing issues informally (as still happens in Wellington today) even were there no formal meetings. The Public Service Commissioner, the older Paul Verschaffelt (he was 46 in 1933), was also a de facto member of the Brains Trust, which was wielding the economic management functions a decade before the Treasury did.

Because he presided over the depression and because his biographers have been weak on economics, the successes of Coates and his team during the era when he was Minister of Finance have tended to be overlooked. There was no Keynesian expansion (it would have been difficult without a central bank bedded in), but from the devaluation on there were numerous other measures the collective effect of which was to realign the prices, interest rates and debt in the economy to the post-terms-of-trade collapse reality. In doing so it made the fiscal expansion of the subsequent Labour government much more effective. Keynesianism in an open economy requires a coherent price structure.

Coates lost power in December 1935 when Labour was swept to office, although he became a valued member of the war cabinet, dying in office in May 1943.

Campbell left New Zealand to become economic advisor in the London High Commission in February 1935, and to marry his Scottish sweetheart, whom he had met when he was doing his doctorate seven years earlier. Although he fades out of the New Zealand economists’ story, he returned as chairman of the Public Service Commission from 1946 to 1953, then went back to the High Commission and retired in Sussex, ‘more English than Antipodean’ as he said of himself. Yet his contribution to New Zealand economic policy during the tumultuous depression years exceeded that of all other economists, with the possible exception of Ashwin.

Ashwin went on to be one of the youngest, longest serving, and most distinguished Secretaries of the Treasury from 1939, presiding over its shift from accountants to economists, for by the time he retired in 1955 there were numerous economists in the Treasury. Initially they learned their profession part-time while working as public servants in the Treasury or associated agencies such as the Economic Stabilisation Commission. Later, economic graduates were recruited.

William Ball Sutch (1907-1975) played only a minor role in the period this paper has focussed upon. Being at least a decade younger than the other economists mentioned, he turns up later, although he made up much of that decade, for he graduated from Victoria University College in 1928 (MA) and 1931 (B.Com), and obtained his PhD from Columbia in 1932 at the age of 25, only three years later than Campbell.

Sutch is seen as a controversial economist because he wrote so much, including about the 1930s, and because his activities as an economist lasted into the 1970s, whereas those of the others considered here had ceased by 1960. Moreover, as with every other well-functioning economist, his ideas developed with events and maturation. (One can, for instance, trace his changing views of industrialisation in the 1950s and 1960s, while he was in the Department of Industries and Commerce, reflecting his increasing understanding of the problems facing New Zealand.) To criticise him as an economist, as distinct from criticising one of his particular policy stances, requires an evaluation in terms of his times and its orthodoxy. (Other events in his life added to the controversy, but here we are concerned with him qua economist.)

In fact, Sutch was an orthodox economist for his times. In the 1930s, the Columbia Department of Economics was the most important in the United States, teaching the dominant economic paradigm, ‘institutionalism’, which Yuval Yonay characterises as ‘empirical research, suspicion towards deductive theory, emphasis on the changing nature of economic institutions, habits, and norms, special attention to the divergence of market values (prices) from social values, and the belief in the reality of informed concerted action to improve human welfare.’ Institutionalists trace their origin to Thorstein Veblen. Among the best-known are Gunnar Myrdal, John Kenneth Galbraith and John Maurice Clark, who led interwar Columbia economics, where Sutch was trained.

Yonay is implicitly criticising the neoclassical synthesis (often abbreviated to ‘neoclassical’) – a combination of the macroeconomics that Keynes pioneered and the modern theory of markets (including a welfare economics which emphasises their beneficial outcomes), strongly laced with mathematical techniques – which became important in economics only after the Second World War. Even so, Paul Samuelson, the key neoclassical innovator and doyen of MIT economics, which replaced Columbia’s supremacy after the war, said in 1999: ‘I do not come today to bury institutionalism nor to dispraise it. I believe it lives on as a lively element inside today’s mainstream economics, …’

Sutch could not have learned much neoclassical economics (other than that which derives directly from Alfred Marshall, who influenced both paradigms). To evaluate him according to the current neoclassical paradigm would be anachronistic. Today he is criticised most because he is the most vocal and the most remembered, but he was part of the mainstream of his times.

Sutch of course was on the left, although Campbell may have been further to the left in the early 1930s. But neither Sutch nor Campbell was a Marxist, both being strongly influenced by Fabian socialism, which was anti-Marxist. Bill Sutch’s philosophical foundations may, like British socialism’s, have been more influenced by Methodism than by Marx.

Ashwin was to their political right, but all these economists were within the same paradigm. When the orthodoxy moved, all may have been a little stranded. I have seen interventionist policies agreed to by Ashwin which even today’s leftish economists would have doubts about. The interwar generations had much less faith in the market than today’s economists. We should not overlook their experience of the extraordinary failure of the Great Depression.

So while we may not agree with everything they did, the first economists to enter the public service served New Zealand well. The foundations they constructed for an economy shocked by the Great Depression were the basis of our best decade of economic performance ever. And they established a foundation for the profession upon which today’s New Zealand economists stand.


Developing International Guidelines for Estimating Avoidable Costs

By David Collins and Helen Lapsley: Remarks on the Draft

Workshop on Guidelines for Estimating the Avoidable Costs of Substance Use and Abuse, sponsored by Health Canada, June 22-23, 2005, Ottawa.

Keywords: Health;

Thank you for the invitation to attend what is proving to be a very interesting seminar in, if I may say so, a pleasing and attractive city. As one would expect, David Collins and Helen Lapsley have contributed a valuable paper, albeit, as they insist, a preliminary draft.

It behoves me, as the first speaker this morning, to go back to the first workshop in 1995, which led to the International Guidelines for Estimating the Costs of Substance Abuse published by the World Health Organisation in 2003. Those guidelines are the foundation for this paper. The progress reflects both new databases and new analytical developments, together with the addressing of loose ends which had not been considered sufficiently in the earlier report.

It is worth recalling that the initial intention was a gold standard for the guidelines, an ideal grounded in standard economic theory, which would enable the social cost estimates to be extended to other areas of economics such as the System of National Accounts and the evaluation of policy options, including substance abuse prevention and treatment. Systematising and measuring the notion of avoidable costs enables us to go a step further towards systematic policy evaluation. That it can, in part, is a consequence of the gold standard decision. Had we not been so disciplined, this new work would be much less potentially valuable.

However, while the gold standard is the underlying principle, we have all found it difficult to apply the standard for various practical reasons, typically because of data deficiencies. So while keeping the gold standard in mind, we have had to use silver, copper and pewter. This is, of course, inevitable but sometimes we forget.

That is my first recommendation to David and Helen. I would like them to be more explicit in their paper as to where there is a gold standard, and where, because of the difficulties of implementing it, they are using a lower standard.

Let me illustrate this by reference to the paper’s advocacy of the Arcadian norm. A number of participants have argued that there are deficiencies with the measure. One problem was nicely illustrated by the comparison of the high suicide rate for Australia and the lower one for Greece. We know that not only are suicide rates culturally sensitive, but so is the recording of suicides. This problem for valid international comparisons, caused by different recording practices, applies to other conditions too, as various contributions have instanced.

Even were there not these difficulties, economists have had problems using an Arcadian-type norm in at least two other areas that I know of. One is identifying the most productivity-efficient firm; the other is identifying the most cost-efficient health care provider. The parallel is that Armstrong’s Arcadian norm attempts to identify the most health-efficient public health configuration.

Economists have put some effort into trying to improve the quality of their Arcadian measures – especially in regard to production frontiers. (The first, and crudest, method is to average the three most efficient observations.) No doubt those methods could be applied to the Armstrong Arcadian norm too. But while I would welcome such improvements in due course, my point here is a different one.
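As a concrete sketch of that first, crudest frontier method – averaging the three most efficient observations – the following uses entirely hypothetical age-standardised rates; the country labels and numbers are invented purely for illustration:

```python
# Hypothetical rates per 100,000 population for six countries
# (invented data, purely to illustrate the method).
rates = {"A": 12.0, "B": 7.5, "C": 9.1, "D": 15.3, "E": 6.8, "F": 11.0}

def arcadian_norm(rates, k=3):
    """Benchmark: the mean of the k lowest (most 'efficient') observed
    rates, rather than the single minimum, to damp the influence of
    recording differences and measurement noise."""
    best = sorted(rates.values())[:k]
    return sum(best) / len(best)

print(arcadian_norm(rates))  # the mean of the three lowest rates
```

Averaging over the best three, rather than taking the single best observation, is precisely the sort of crude robustness device the production-frontier literature began with.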

At this stage, the gold standard for the minimum level of disease is totally impractical. So the paper recommends a lower standard. Whether it is the best practical one is initially the decision of this workshop, and later will be settled by the international research community. The point here is that the report has to recommend a procedure which, whatever it is, is not on the gold standard.

All I am asking is that the report clearly state when it is recommending a lower standard as a practical response to data inadequacies. In particular, in a decade or so, people need to be able to look at the report and recognise where in it there are (eternal) analytical truisms, and where there are proposals relevant for these times which have since been superseded.

The report also needs to say something about where better data already exists. Were I to be looking at avoidable costs for smoking cessation, I would go to the various longitudinal empirical studies which have followed the health record of those who have given up smoking. Ideally there would be a meta-study. Without one I would in effect create one, albeit a little differently from the industry standard, because I would need slightly different information.

Of course I would check my conclusions against Armstrong’s Arcadian norm. If there was a major disagreement, I’m not sure what I would do – it would depend upon an assessment of why the inconsistency arose.

Although the epidemiological data for this strategy is available for tobacco and alcohol it is not generally available for illicit drugs. For them – and inter-country comparisons – I may need to go back to something like the Arcadian norm as a backstop, when the available research fails us.

Even so, I wonder if the norm can be avoided for illicit drugs. One argument is that anti-drug policies lead to substitution, perhaps as users move from heroin to cocaine. The Arcadian norm is a way of dealing with these shifts of use. But would it not be better to estimate the degree of substitution directly? Indeed, is that not what a policy maker really wants? The argument that we can’t know the degree of substitution precisely is not valid, insofar as the method implicitly assumes it anyway. The need is to be explicit.

The primary justification for my strategy is that it uses better quality data. Were I to use inferior data for, say, evaluating the impact of a hike in tobacco excise, you may be sure that someone subsequently doing, say, the impact of a public education campaign, would use another one – judged better – and the two results would be incomparable, which is exactly what policy makers do not want.

But I would also use this data because I need a good time-response profile. As much as the conclusions may be sensitive to the choice of the Arcadian norm or whatever, I suspect they may be even more responsive to the profile. This is because the evaluation discounts events, so that those near the asymptote have far less quantitative significance than those close to the policy’s initial impact.

That depends, of course, on the size of the discount rate. Someone has to say that because DALYs already have a discount rate incorporated in them – 5 percent p.a., I think it is – they can’t be used for avoidable cost estimates, unless by coincidence the evaluation is using the same discount rate. (The NZ rate is 10 percent p.a.) To be useful in this context, a DALY has to be broken down into its component parts, becoming QALY-like.

(DALYs also use an age adjustment. But public policy may not accept the validity of treating different people differently according to their age. Again the underlying data upon which DALYs are based may be more useful, again leading us back to the more neutral QALYs.)

If the response to the policy under consideration is rapid, then irrespective of the discount rate, the avoidable cost estimate becomes very much like the total cost estimate. That needs to be said, doesn’t it? As does the point that, in the long run, the total cost configuration may look very similar to the avoidable cost configuration.

Actually there is much that needs to be added to this draft report to remind the neophyte reader that it is based on the International Guidelines. But here I am sure that I am counselling the writers on what they have already planned.

Finally, I have not mentioned the criminology. While the paper pays some attention to the social costs of drug-induced crime, that seems to me to be an issue for the International Guidelines, and should be incorporated in the next edition. At this stage – I am open to persuasion – I don’t see that it raises particular issues for avoidable cost estimates which do not occur elsewhere. I may be wrong, in which case the final report will correct me. If I am correct, we are most fortunate that this report is filling the gap until we get the third edition of the Guidelines.

Their paper opens up a new set of issues. One might say that it opens up too many issues: those who commissioned the paper must be well pleased with it, for it covers much more than they might have expected from their commission. But that only reflects the enthusiasm and commitment we have come to expect from David and Helen.


Amorous or Amiable?

Transtasman banking regulation is a pressure point in our relations with Australia.

Listener: 18 June, 2005

Keywords: Macroeconomics & Money;

More than other overseas investors, banks can quickly withdraw their key assets. The foreign owner of a factory, farm, forest or beach-house can go off in a huff, but the physical entity remains. Financial assets are much more mobile.

Once we may have thought that banking regulation needed only transparent publication of each bank’s financial status, and the Basle rules – international requirements that set out minimum standards in a bank’s balance sheet. But consider a big bank getting into a right proper muddle elsewhere, and desperately needing cash to shore up its position. So it raids its New Zealand branch, converting its assets into international currency, and leaving the branch calling in loans to rebalance its accounts. How are we to prevent an overseas mistake screwing up the New Zealand economy (as the collapse of the City of Glasgow Bank did in 1878, precipitating our long depression)?

Of course, such an event is very unlikely to happen, but we can’t simply rely on the good sense and protocols of the banks, especially if many are overseas-owned. Some 85 percent of our banking assets are held by Australian banks. Local boards need to have duties to, and be accountable for, their bank in New Zealand. Not so long ago, Prime Minister and Minister of Finance Rob Muldoon would ring bank boards and bully them for his short-term political objectives. Would today’s boards be better able to withstand bullying from their overseas head offices? Without New Zealand regulation, there would be little to stand in the way.

The banks grumble that they face two sets of regulators – here and Australia. The two countries have different regulatory regimes. Australia addresses the same issue with much more restrictive regulation: foreign-ownership control of Australia’s major banks is banned altogether. More subtly, the banks appear to be less politically effective with our government than with Australia’s. Certainly, Australian Treasurer Peter Costello is taking a tougher line on a single transtasman regulator than our Minister of Finance Michael Cullen.

That was evident in their public presentation. A journalist asked them if it was a case of an ambitious Australia and a cautious New Zealand. Costello replied, “No, you’re facing two amorous friends who move and respond in the way good relationships operate.” Cullen shot back: “Everyone knows I am not quite as ambitious as Treasurer Costello” – a reference also to Costello’s hopes of replacing John Howard as Australian Prime Minister. “Not quite as amorous, either,” Costello said.

Costello’s ambitions extend to placing banking regulation at the top of the transtasman policy agenda. Unless it is resolved, there may be no progress made on other matters. Suppose we conclude that we need a local base for the regulation. Would it matter if we were obdurate?

We can overstate the importance of the Australian economy to us. (We are not very important to Australia. A recent London Economist supplement on Australia did not even mention the New Zealand economy.) Certainly, it is our largest single export market, but it is only a fifth of the total. (In some sectors, such as banking, the Australian investment penetration is somewhat greater.)

But as recently as five years ago, the European Union was our largest export market, and I shan’t be surprised if in 10 years’ time both are trumped by China. A little commented upon effect of a successful Doha Round is that the tariff preferences we give to one another will be diluted. That means that both countries will export less across the Tasman and import more from Asia. Of course we will export more there, too.

On many non-economic dimensions, such as international diplomacy, Australia and New Zealand need to work together – and play together: I am a fan of Closer Cultural Relations. But, as in military matters, we have quite different objectives. Australia may see itself as an American deputy sheriff – a little American economy in our part of the world. We will never be so big, and have to think small and act smarter. The transtasman relationship needs to be like two good friends, who, depending on circumstances and objectives, sometimes go together, sometimes go their different ways. Not amorous, but amiable.

Does GDP at Purchasing Power Parity Prices Measure Production?

Paper presented to OECD, 10 June, 2005

Keywords: Statistics;

GDP valued at purchasing power parity prices is widely treated as a measure of production, even though it is calculated on the expenditure side of the national accounts. This paper shows that GDP and GDE (or GDI) are not generally equal (although they are if they are measured in transaction prices). It suggests that we should relabel the measure as GDI at purchasing power parity prices, which is what is actually being measured, or, better still, measure GNI.

Introduction: Repricing GDP

It is an elementary truism that nominal Gross Domestic Product can be measured on the production side (that is in terms of the products of firms) and on the expenditure side (that is in terms of the final purchases of spenders) and that the two aggregates are exactly equal to one another (although in practice there will be a measurement error – the ‘statistical discrepancy’).

This equality arises from the properties of the relationships between the products and the prices on the two sides. However this mathematical congruency does not apply when a different set of prices is applied to the production and expenditure sides. That means that GDP at these prices is not necessarily equal to GDI at these prices, even if the prices are consistent.

Economists apply different prices from those at which the actual transactions take place. Over time they want to compare volume (or real, or constant price) GDP, where the effect of the changing price level is eliminated. Between countries they want PPP-adjusted GDP, which applies a common set of prices to each country’s production.

It has long been known that, through time, the application of prices different from the actual transaction ones results in estimates of the two GDP sides which are not conceptually equal, particularly where there is a change in the terms of trade. The SNA recognises this by identifying two volume measures:

Constant price GDP measured on the production side is called RGDP or Real Gross Domestic Product;
and
Constant price GDP measured on the expenditure side is called RGDI, or Real Gross Domestic Income.

Neither measure is to be preferred over the other. Rather they have different purposes. RGDP indicates what is occurring on the production side of the economy, while RGDI is a measure of the resulting spending power. A lift, say, in the terms of trade means that the domestic spending power increases more than production, because the (exported) products are able to purchase more imports and hence give the purchasers more purchasing power.

However the problem of the divergence, once non-transaction prices are used, is not confined to this case. Conceptually, PPP-adjustment has broadly the same mathematical structure, that is, it is the application of another set of prices to the two sides of GDP, although in this case the prices come from the same time but a different country, rather than a different time and the same country.

This paper provides a rigorous formulation of the phenomenon. It does so by separating out the production side and its prices and products from the expenditure side and its prices and products. Thus butter in a shop appears on the expenditure side as a single item, but on the production side it appears as the result of the activities of a chain of firms: the farm produces the milk, the dairy factory turns it into butter, the transport system distributes it and the shop adds a retail margin. This chain (for every expenditure item) is characterised by a matrix Γ. The analysis shows that a divergence between RGDP and RGDI arises when a new set of prices is applied to which, as is likely, a different Γ matrix corresponds.

This exercise is done initially for a closed economy and then generalised to an open one, where the terms of trade effect becomes evident as a part of the effect. However, a further term arises, reflecting the distinction between domestic product prices and the border prices of international tradeables, the latter assumed to be the same in all economies (other than the scaling effect of the exchange rate). The analysis incorporates a Λ matrix which converts the price of the goods at the border to the domestic product price, the difference reflecting such things as protection (such as tariffs) on imports and subsidies (and other assistance) on exports.

The formal model shows that GDP is no longer equal to GDI except in special circumstances, typically involving particular conditions on the Γ and, where applicable, the Λ matrices. (The analysis also looks at the effects of internal indirect taxes and subsidies.)

We now set out the findings mathematically.

The Formal Model for a Closed Economy

In an economy firms produce products (which are measured on the production side of the economy), the quantities of which in a period are represented by an (n x 1) column vector P, where n is the number of products.

These products get transformed into expenditure items (which are measured on the expenditure side of the economy), the quantities of which in a period are represented by an (m x 1) column vector E, where m is the number of expenditure items (and n is not generally equal to m).

The relation between P and E is given by

(1) P =Γ.E,

where Γ is an (n x m) matrix.

The prices of the products are given by an (n x 1) column vector pp, and the prices of the expenditure items are given by an (m x 1) column vector pe. It follows from (1) (and various routine economic assumptions) that

(2) pe’ = pp’.Γ

Nominal GDP and GDE are given by

(3) GDP = pp’.P
and
(4) GDE = pe’.E

Substitution from (1), (2), (3), (4) gives

(5) GDE = pe’.E =( pp’.Γ ).E = pp’.(Γ .E) = pp’.P = GDP

So GDE = GDP

Now suppose another set of prices are applied. The prices might be from another year of the closed economy (as a part of constructing a constant price series), or from another country (as a part of constructing a PPP adjusted measure). Call these new prices pp* and pe*, and the equivalent of equation 2 is

(6=2*) pe*’ = pp*’.Γ*

Now

(7=5*) GDE* = pe*’.E = (pp*’.Γ*).E = pp*’.(Γ.E) + pp*’.(Γ* – Γ).E
= GDP* + pp*’.(Γ* – Γ).E

So generally, GDE* = GDP* only if (Γ* – Γ) = 0.
It is usual to assume for constant price comparisons through time that for practical purposes
(8) (Γ* – Γ) almost equals 0,
so in such cases GDE* almost equals GDP* in a closed economy.

In the case of PPP comparisons, the assumption in equation (8) appears to be less true.
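The identity in (5) is purely mechanical, and a short script can confirm it for arbitrary consistent numbers. This sketch (illustrative values only; the helper functions stand in for the matrix algebra) encodes equations (1)-(5):

```python
# In transaction prices the identity GDE = GDP holds for any Gamma, E and pp.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n, m = 3, 2                  # 3 products, 2 expenditure items (assumed sizes)
Gamma = [[1.0, 0.0],         # (n x m): product requirements per expenditure item
         [2.0, 1.0],
         [0.5, 3.0]]
E = [10.0, 4.0]              # expenditure quantities
pp = [1.0, 3.0, 2.0]         # product prices

P = [dot(Gamma[i], E) for i in range(n)]                             # eq (1)
pe = [sum(pp[i] * Gamma[i][j] for i in range(n)) for j in range(m)]  # eq (2)

GDP = dot(pp, P)   # eq (3): production side
GDE = dot(pe, E)   # eq (4): expenditure side
assert GDP == GDE  # eq (5): exactly equal in transaction prices
```

The divergence only appears once a different price regime, with its own Γ*, is applied, as the following illustration shows.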

Diversion: An Illustration

The following is a very simple illustration of the result in the previous section.

Suppose an economy consists of two input goods, ig1 and ig2, and a final good, fg, each unit of which is composed of one unit of each input good. We suppose the economy produces one hundred units of the final good.

In the notation of the previous section

Γ = (1, 1)
E = (100)

From which it follows that
P = Γ.E = (100, 100),
so the economy produces one hundred units of each input good.

Suppose the price of each input good is $1. Then
pp = ($1, $1)
and so
pe’ = pp’. Γ = ($2)
So GDI = $200 and GDP = $200.

We can summarise the economy with the following simple tabulation:

First Year (1)

Item Final good Input good 1 Input good 2
Quantities 100 100 100
Prices $2 $1 $1
Values $200 $100 $100

GDI = $200;
GDP = $100 +$100 = $200.

Now suppose in the following year the Γ matrix – that is, the way final goods are composed of input goods – changes, so that it now takes 1 unit of ig1 but only half a unit of ig2 to produce 1 unit of fg. (For instance, if the second input good is transportation, a new method of transportation or a new route may be found which reduces the required input.) Again assume final production is 100 units.

Denoting second year variables by an #,

Γ# = (1, ½)
E# = (100)

From which it follows that
P# = Γ#.E# = (100, 50)

Suppose the price of each input good remains at $1. (There are numerous reasons why, despite the productivity gain in the use of input good 2, its price might not fall correspondingly.)
pp# = ($1, $1)
and so
pe#’ = pp#’.Γ# = ($1½)
So GDI# = $150 and GDP# = $150.

The next year is tabulated as

Next Year (2)

Item Final good Input good 1 Input good 2
Quantities 100 100 50
Prices $1.50 $1 $1
Values $150 $100 $50

GDI = $150;
GDP = $100 + $50 = $150.
Now apply year 2 prices to year 1 production.

GDI (or GDE) in year one valued at year two prices is one hundred units of the final good times the year two price of $1½ = $150.

However GDP in year one valued at year two prices is one hundred units of input good 1 valued at $1 each and one hundred units of input good 2 valued at $1 each, or $200.

Thus valued in year 2 prices, year 1 GDP does not equal year 1 GDI.

The tabulation is

Year (1) at Year (2) prices

Item Final good Input good 1 Input good 2
Quantities 100 100 100
Prices $1.50 $1 $1
Values $150 $100 $100

GDI = $150;
GDP = $100 + $100 = $200.

This illustrates the general principle that, under a different but consistent price regime, GDI valued at these prices need not equal GDP valued at these prices.
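The tabulations above can be checked with a few lines of arithmetic. The following sketch simply encodes the illustration’s own quantities and prices (pure Python, no data beyond the example’s numbers):

```python
# The two years of the illustration, plus year 1 revalued at year 2 prices.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Year 1: Gamma = (1, 1), 100 units of the final good
E1, P1 = [100.0], [100.0, 100.0]     # expenditure and production quantities
pp1, pe1 = [1.0, 1.0], [2.0]         # product and expenditure prices
assert dot(pp1, P1) == dot(pe1, E1) == 200.0   # GDP = GDI = $200

# Year 2: Gamma# = (1, 1/2), the input good 2 requirement halves
E2, P2 = [100.0], [100.0, 50.0]
pp2, pe2 = [1.0, 1.0], [1.5]
assert dot(pp2, P2) == dot(pe2, E2) == 150.0   # GDP = GDI = $150

# Year 1 quantities revalued at year 2 prices: the identity breaks
gdi_1_at_2 = dot(pe2, E1)   # 100 x $1.50 = $150
gdp_1_at_2 = dot(pp2, P1)   # $100 + $100 = $200
assert gdi_1_at_2 != gdp_1_at_2
```

The divergence of $50 is exactly the pp*’.(Γ* – Γ).E term of equation (7).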

The Formal Model for an Open Economy (through time)

Suppose the economy has the same variables as in the closed economy, plus the additional opportunity of importing and exporting products. The quantities internationally traded are represented by an (n x 1) column vector T. Elements in the vector may be positive (in which case the product is exported), zero (in which case it is not traded), or negative (in which case the product is imported).

In the following we shall assume that
(9) pp’.T = 0,
that is, the current external account is in balance, and so GDE still equals GDP.

The relation between P and E is given by
(10) P – T = Γ.E,
it being unnecessary to identify, for these purposes, what determines T.

Substitution using equations (1), (2), (3), (4), (9) and (10) gives
(11) GDE = pe’.E = ( pp’.Γ).E = pp’.(Γ.E) = pp’.(P – T) = GDP – pp’.T = GDP.
so GDE = GDP

Now suppose another set of prices, pp* and pe*, is applied as previously. In which case, using (6) and (10):
(12) GDE* = pe*’.E = (pp*’.Γ*).E = pp*’.(Γ.E) + pp*’.(Γ* – Γ).E
= GDP* – pp*’.T + pp*’.(Γ* – Γ).E.

So even if Γ* = Γ, then generally, GDE* = GDP* only if pp*’.T = 0.

That pp’.T = 0 provides no guarantee that pp*’.T = 0. In practice pp*’.T may differ greatly from zero for a country which experiences significant terms-of-trade changes. As a result the constant price GDP series can show a different pattern depending on whether it is measured on the production or the expenditure side. Hence the distinction which the SNA recognises.
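A minimal numeric sketch of this point, assuming (as a convention) that elements of T are positive for exports and negative for imports, with trade balanced at the first set of prices; the numbers are invented:

```python
# Balanced trade at today's prices need not be balanced at another period's
# prices: the wedge pp*'.T is the terms-of-trade effect of equation (12).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

pp = [1.0, 1.0]       # today's prices: export good, import good
T = [50.0, -50.0]     # export 50 units, import 50 units (exports positive)
assert dot(pp, T) == 0.0        # equation (9): external account in balance

pp_star = [1.5, 1.0]  # in the other period the export price is 50% higher
wedge = dot(pp_star, T)         # pp*'.T = 25.0, no longer zero
# so GDE* and GDP* differ by this term even when the Gamma matrix is unchanged
```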

Diversion: The New Zealand Experience with the Terms of Trade

The problem arises because the goods and services consumed in an open economy differ from those which are produced, since some of the domestic production is exchanged for foreign production – exported in exchange for imports. The ratio of the exchange values can vary, and that leads to the difference when, for instance, constant price GDP estimates are made over time. This exchange ratio can be measured as the ratio of export prices to import prices. Note that the exchange rate does not directly influence the terms-of-trade ratio, providing the prices are measured either all in the local currency or all in the international currency.

The difference has long been understood in New Zealand, which is a small open multi-sectoral economy much prone to changes in its terms of trade as the following Chart shows.

There can be substantial changes in the terms of trade both on a year to year basis and secularly.

The issue was important practically in the 1950s and 1960s, when the Court of Arbitration made a General Wage Order, which changed all wage rates across the economy. There was no express reference to the terms of trade in the law guiding the court, but it was required to take into consideration ‘any increase or decrease in productivity and in the volume and value of production in the primary and secondary industries of New Zealand’ (as well as changes in consumer prices).[1] How then to measure productivity? To simplify, it could be measured as volume GDP per unit of labour input, but that makes the different measures of volume significant. In any case workers were keen to share any benefit from a rise in export prices, while businesses were keen to share any fall.

The resolution was to have two measures of volume GDP. Today they would be called RGDP and RGDI, although the latter was then called ‘effective GDP’. RGDP was calculated on the production side, indicating what had been produced after adjusting for price changes. RGDI, calculated on the expenditure side, was a measure of the purchasing power the production generated. It differed from RGDP by valuing exports in terms of the imports they would purchase. The relationship between them was

RGDI = RGDP – (volume of exports) + (value of exports)/(import price index)
or
RGDI = RGDP + (volume of exports) x (terms of trade – 1)
= RGDP x (1 + (volume of exports/RGDP) x (terms of trade – 1)). [2]
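As a rough numeric sketch, reading the terms of trade as an index with base 1 (so 1.1 means a 10 percent improvement) and assuming the relation can be written RGDI = RGDP + exports x (terms of trade – 1); the figures are invented, loosely NZ-like:

```python
# An economy exporting a quarter of its output sees a terms-of-trade gain.
rgdp = 100.0
export_volume = 25.0     # exports are a quarter of output (assumption)
terms_of_trade = 1.10    # 10% above the base period

rgdi = rgdp + export_volume * (terms_of_trade - 1.0)
# spending power rises by 2.5% of GDP with no change in production
```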

In an economy with exports as a low proportion of output, and not prone to major changes in its terms of trade, the difference between RGDP and RGDI would be small. But in an economy of New Zealand’s characteristics the effect is not insignificant as the following Chart of the ratio of RGDI to RGDP shows.

The overall pattern is a downward trend in the ratio, implying that RGDI has grown more slowly than RGDP, by about 0.1 percent a year. The downward trend reflects the deteriorating terms of trade New Zealand faced in the post-war era. (There were two main drivers. The largest traditional export, wool, was undercut by synthetics, while the other two traditional exports, meat and dairy products – still the largest goods exports today – are subject to widespread international protectionism, including restrictions of access in affluent markets and dumping by producers in those affluent markets into third markets.) The effect is that RGDI has grown less than RGDP in the post-1950 era by about 5.5 percent in total.

There is considerable variation around this postwar decline, again reflecting swings in the terms of trade. The standard deviation of the year-to-year changes is 2.0 percent, compared to an average annual change in RGDP and RGDI of 1.5 and 1.4 percent respectively. In about half the years the growth of RGDP and RGDI diverged by more than their trend growth rate.

Thus for New Zealand the distinction between RGDP and RGDI is important in the short run and the long run. The production story is quite different from the income/expenditure story.

The Formal Model for an Open Economy (cross-national (PPP) comparisons)

For comparisons of open economies, we need to represent the prices of tradeable products at the border. The (n x 1) price vector is pb.

Equation (9) is now replaced by

(13) pb’.T = 0,

Additionally there are international prices, represented by pi, where

(14) pb = e.pi, and e is the exchange rate. [3]

Border prices do not always equal domestic prices. The simplest case is when there is a tariff (on an import) or subsidy (on an export), although the generalisation to a tariff/subsidy equivalent is not difficult. We characterise the relationship between pb and pp as follows:

(15) pp = Λ.pb,
and also
pp’ = pb’.Λ (since Λ is symmetric),

where Λ is an n-square diagonal matrix (all off-diagonal elements are zero) in which the elements on the diagonal are
1+t if the product is an import (where t is the tariff rate)
1+s if the product is an export (where s is the export subsidy)
1 if the product is a non-tradeable.

Since, as demonstrated in equation (11), GDE = GDP in the prices of the day, it follows from (3), (11) and (15) that
(16) GDP = pb’.Λ.P = GDE.

Applying a set of prices from another country, as before, the result is as for (12) but applying (15):
(17) GDE* = GDP* – pp*’.T + pp*’.(Γ* – Γ).E
= GDP* – pb*’.Λ*.T + pp*’.(Γ* – Γ).E
= GDP* – pb*’.(Λ* – I).T + pp*’.(Γ* – Γ).E,

noting that from (14)
pb* = e*.pi = (e*/e).pb so that pb*’.T = (e*/e).pb’.T = 0.

So even were Γ* – Γ = 0, GDE* would not equal GDP* unless Λ* = I, that is, unless there were no tariffs or export subsidies in the economy whose expenditure prices are being used.
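A sketch with made-up numbers: trade balances at border prices and Γ* = Γ, yet a tariff (Λ ≠ I) still opens a wedge of the form pb’.(Λ – I).T between the two sides, as in equation (17):

```python
# Diagonal of Lambda: 1+t on imports, 1+s on exports, 1 on non-tradeables.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

pb = [1.0, 1.0]       # border prices: export good, import good
T = [50.0, -50.0]     # exports positive (convention); trade balanced
assert dot(pb, T) == 0.0            # equation (13)

lam = [1.0, 1.25]     # no export subsidy; a 25% tariff on the import good
pp = [pb[i] * lam[i] for i in range(2)]   # equation (15): pp = Lambda.pb

# the wedge contributed by border interventions:
wedge = sum(pb[i] * (lam[i] - 1.0) * T[i] for i in range(2))
# 1.0 * 0.25 * (-50) = -12.5, so GDE* != GDP* even though Gamma* = Gamma
```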

The parallel with the constant price distinction between GDE and GDP should not go unnoticed. It suggests that

PPP-adjusted GDP measured on the production side is analogous to RGDP, and should be called “PPP-adjusted RGDP”;
while
PPP-adjusted GDP measured on the expenditure side is analogous to RGDI, and should be called “PPP-adjusted RGDI”.

It is to be noted that the standard estimates of PPP-adjusted GDP are measured on the expenditure side and so are more analogous to Gross Domestic Income (RGDI). To obtain RGDP they need to be adjusted for the border interventions, characterised by (Λ* – I).

The Effect of a Sales Tax

Suppose that there is a sales tax on expenditure items so (2) becomes

(2s) pe’ = pp’.Γ.(I + Σ),
where Σ is an (m x m) diagonal matrix with the rates of sales tax on the diagonal (they may be negative if there is a subsidy) and zeros off the diagonal. (Note Σ = Σ’.)
In which case (5) becomes
(5s) GDE = GDP + pp’.Γ.Σ.E,
so now GDE in market prices equals GDP in basic (or factor) prices plus a second term of tax revenue, which is standard in SNA accounting.

The next equation adapts (7) when there is a sales tax to
(7s) GDE* = GDP* + pp*’.(Γ* – Γ).E + pe*’.(Σ* – Σ).E.

Thus there is a need for a further adjustment, representing the difference between the two sales tax regimes. Where Σ = Σ*, as often happens through time, the adjustment is zero, but that is much less likely in cross-national comparisons.
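A closed-economy sketch of (2s), using the earlier two-input illustration and a hypothetical 12.5 percent sales tax on the single expenditure item (invented numbers):

```python
# GDE at market prices = GDP at basic prices + indirect tax revenue.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

Gamma = [[1.0], [1.0]]   # one final good made from one unit of each product
E = [100.0]
pp = [1.0, 1.0]          # basic (pre-tax) product prices
sigma = [0.125]          # diagonal of Sigma: 12.5% sales tax (assumption)

# equation (2s): pe' = pp'.Gamma.(I + Sigma)
pe = [(pp[0] * Gamma[0][j] + pp[1] * Gamma[1][j]) * (1.0 + sigma[j])
      for j in range(len(E))]

P = [Gamma[0][0] * E[0], Gamma[1][0] * E[0]]
GDP = dot(pp, P)             # 200.0 at basic prices
GDE = dot(pe, E)             # 225.0 at market prices
tax_revenue = GDE - GDP      # 25.0, the tax-revenue term of (5s)
```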

Conclusion

The mathematics shows that RGDP is no longer equal to RGDI except in very special circumstances involving particular conditions on the Γ and, where applicable, the Λ and Σ matrices. The implication for constant price GDP series through time is reasonably well known: if RGDP is to be derived from RGDI, there has to be an allowance for the impact of the terms of trade. Less well known is that there is a parallel effect for PPP-adjusted GDP.

In practice, the internationally accepted estimates of PPP-adjusted GDP are derived from the expenditure side, and correspond to RGDI. They do not, therefore, reflect the production side of the economy, even though that is often the way they are presented. The problem is likely to be most serious for small economies (because they contribute little to the PPP prices) and for small open economies subject to substantial fluctuations in their terms of trade.

Also, where there are considerable market distortions in a country’s export markets – as for the agricultural goods New Zealand produces – there may be considerable divergence between GDI and GDP, perhaps in the order of 10 percent. Essentially, protection against efficient agricultural producers reduces their income relative to their production, and raises the income relative to production of the inefficient protected producers. The income transfer from this protection does not change productivity, of course, but it contaminates the method of using GDI to estimate GDP.

Towards a Resolution

Thus using GDI as an estimate of GDP is misleading. Consequently, any productivity comparisons between countries may be very misleading. Can anything be done?

The first strategy might be to estimate the Γ, Λ and Σ matrices directly, and adjust GDI to GDP. In the case of the Γ matrix that may prove quite challenging, because of the problem of the wholesale and retail trade sector, discussed further below. Estimating the Λ matrix may be more straightforward, not only because it has a simpler structure (off-diagonal elements are zero), but because the tariffication of protection where there have been quotas and prohibitions gives direct estimates of the on-diagonal elements. Estimating the Σ matrix may be similarly straightforward, although even more tedious, because it involves the collection of an even larger database. (An alternative might be to value expenditure before taxes.)

A second strategy would be to estimate GDP at PPP prices directly on the production side. There has been an understandable reluctance to do this, because the required database is much larger, and there is the fundamental problem of the treatment of the wholesale and retail trade sector, as indicated by its being separated out in input-output tables rather than being incorporated in the expenditure chain. International comparisons of the domestic trade sector are notoriously difficult, something avoided (or suppressed) in the GDE estimates, but which would have to be confronted in a direct GDP estimate. Nevertheless, in my view there is a case for doing direct production comparisons in those sectors where the method is reasonably tractable. I have done some rough ones for New Zealand, which leave much of the non-government service sector as a residual. Comparing them with the official PPP-adjusted GDP figures suggests that there may be something seriously wrong with the latter.

The third strategy would be to abandon the estimation of PPP-adjusted GDP via the expenditure side, at least temporarily until the previous two strategies can be implemented. Instead the current estimates should be given their conceptually correct name, GDI, and their use for productivity comparisons discouraged.

I would go a step further, adjusting to GNI or, were it possible, to NNI (or National Income). I observe that the World Bank prefers this notion. It is a better indicator of material welfare. It answers a different question from that which GDP does, but as this paper shows, the current measure of PPP-adjusted GDP does not answer production questions either.

Notes
[1] See In Stormy Seas (p.93) for more details. The Court was also required to take into consideration ‘relative movement in the incomes of different sections of the community’ and ‘all other considerations that the court may deem relevant’, either of which could also refer to the terms of trade.
[2] The equation ignores the details of the price bases.
[3] The prices of non-tradeable products in pb present a problem, since there is no international price. The following analysis could be done with partitioned vectors and matrices. Or the elements representing those prices could be set at infinity, reflecting the price at which the non-tradeables could become tradeable. However we shall set them at the local non-tradeable price, which avoids both inelegant partitioning and calculations of the form (0 x ∞). We can do this because the only place where it matters is in the expression pb’.T, where the price element for a non-tradeable product multiplies a zero, since there is no trade in it.


Economic Report: Why Is the Economy Not Growing Faster?

Listener: 4 June, 2005.

Keywords: Growth & Innovation;

Earlier this year, the Ministry of Economic Development and the Treasury jointly published Economic Development Indicators 2005, intended to provide a basis for a public conversation about economic performance. It is part of the process by which the government monitors its performance closely and reports its findings to the public.

The report identifies six areas that contribute to economic growth. In two – enterprise and economic foundations (including regulation) – it concludes that New Zealand performs above the OECD average. In three – investment, international connections, and skills and talents – it thinks we are in the middle. In only one, innovation, does the assessment consider that we are low in the OECD. And yet New Zealand per capita GDP is below the OECD average. One might have thought that, given the overall excellence of these contributing areas, the economy should be doing better. Admittedly, there is a widespread view that the economy is now growing faster than the OECD average, but only slightly faster.

That we are doing well on the conditions for economic growth and poorly on the outcomes suggests there is something wrong with the implicit theory. One possibility is that there are measurement difficulties. Certainly, per capita GDP is subject to measurement error, while world protection against our agricultural, and other resource-based, exports biases the New Zealand production figure down.

A second possibility is that some important area has been omitted. The report does not discuss taxation, although it does discuss the closely related notion of the size of the government sector. The evidence that high levels of tax damage economic growth is spotty and unconvincing. In any case, properly measured, it is not evident that our tax levels are above average. Given the simplicity of our tax system, we must be one of the better taxed OECD countries. Perhaps the next report will address this area more directly.

The third possible explanation as to why we have good conditions for economic performance but poor outcomes is that we have the wrong theory. The compilers of the report would argue that they are using the standard theory of economic growth used overseas. Can we do better? I have already mentioned that the approach is not rigorous enough about the quality of the data.

We need to look at the performance of individual sectors, rather than the entire economy as in the current approach. Productivity (output per worker) may be about a sixth (16 percent) below the OECD average.

Which sectors are below average in performance? Some – such as farming and farm processing – we expect to be above the OECD productivity average. (We know the government sector is average, but this is an artefact of the measurement procedure, because it assumes all public sectors are equally efficient for the resources provided to them.) Some sectors must be below the OECD average – well below, apparently. Which? International comparisons at the sectoral level are difficult, but unless we can answer this question, we cannot progress our understanding of what is happening or what to do about it.

We also need to pay more attention to the legacy of the past. In 1984, New Zealand per capita GDP was at about the OECD average. In the following decade, we grew markedly slower: for seven years the economy stagnated, the longest such period on record. We never discuss this, for too many people associated with the stagnation remain publicly active. But whatever happened then, it may be very hard to catch up any deficit.

Suppose there was a magic Recipe X, which enabled New Zealand to grow faster than the rest of the OECD. Recipe X would work for other countries, too. Soon all of them would also adopt it, and grow faster, with the result that the OECD average would rise, and we would all be growing at the average rate again.

There are always quacks promising Recipe X, but of course they never deliver what they promise. We don’t need their “reforms”, we need steadily improved and implemented policies. Unsurprisingly then, the report raises more questions than it answers, but this need not be a bad thing if it generates a thoughtful public conversation.

The Caring Tax: Why Do We Rate Minding Sheep Ahead Of Raising Children?

Listener: 21 May, 2005.

Keywords: Regulation & Taxation; Social Policy;

There is no consensus as to whether mothers should or should not go out to paid work and put their young children into childcare. The research says that it depends on family circumstances and the availability of care, together with the culture of the society. The policy conclusion must be that we should leave the decision to the primary carer and her or his family. But if we pursue a strategy of such neutrality, we should ensure that other aspects of public policy do not bias the family decision.

Does not the taxation regime do that? The carer who goes out to “earnwork” (as economics Nobelist Robert Fogel calls it) seems to be double taxed. She (or he) pays taxation on earnings, including the part paid for childcare. There would be an outcry if sheep farmers were treated the same way, unable to deduct the cost of their shepherds when their income tax liability was calculated. Sheep care is deductible because it is a cost that enables the production of the farmer’s income. But is not childcare a cost incurred in enabling the parent to earn an income? Should it not also be deducted? Even if looking after children is less important than looking after sheep – a view I certainly do not hold – the logic of equal tax treatment for shepherding and childcare remains.

Our tax system is a broad-based one, so deducting the cost of childcare would narrow the tax base, raising average rates. But were a wide tax base the only criterion, sheep farmers would be taxed on their gross turnover rather than on their net income – where production costs are deducted from gross turnover. Income taxation should be imposed on a theoretically rigorous notion of income, and I cannot see how the purchase of childcare to enable one to work is a part of one’s income.

There is an argument that perhaps we should also deduct other costs of earnwork, like the food, clothing and travel to work. But since all earnworkers experience such costs in roughly the same proportion, the deduction is not worth the compliance costs. Not all earnworkers purchase childcare, so its omission has a grossly unfair impact.

There would have to be some restrictions on how much could be deducted. The childcare would have to be registered, and pay taxation and ACC levies and so on. Some argue that only public agencies should provide childcare, but private provision can be perfectly adequate, although there has to be a system of quality checks. Sometimes the best childcarer may be the neighbour. Providing she (or he) is registered and paying tax, why not?

Sometimes the best carer is the parent at home. In which case the other parent could purchase the childcare for his or her child from the homeworker who, it is to be recalled, would pay taxes and ACC levies and all. That would maintain the neutrality of the system between earnworkers and homeworkers, as well as between parents and sheep farmers (and other earnworkers).

Such a change in the tax regime would represent a substantial transfer of the nation’s income to families, since whether they were earnworking or homeworking the family would be paying less tax. (Some of the loss of tax revenue would be offset by the child-carers paying tax, too.) Given the amount of child poverty, that need not be a bad thing.

This proposal is conceptual, based on rigorously defining the income of parents. To implement it would require a lot of detailed analysis, including integrating it with the benefit system. I would not support the proposal in practice until I had seen this detail.

But if the deduction for childcare were implemented there would be an odd outcome. A lot of homeworkers looking after their own children would suddenly become earnworkers. The Government Statistician would calculate the value of their contribution to market production, and add them into the total production of the economy, its Gross Domestic Product. Even if everybody kept to exactly the same childcare arrangements they currently practise, GDP would increase, perhaps even raising our place in the international per capita GDP stakes. Which says something about how useful GDP is for measuring what we are actually doing.

Should We Rename the Marsden Fund?

Revised version of a letter to Royal Society Alert 374, 20 May, 2005

Keywords: Miscellaneous;

I am a fortunate recipient of a grant from the Marsden Fund, administered by the RSNZ – fortunate because there are so many deserving researchers who are missing out because of the limited funds. My good fortune has led me to recognise a problem with the Fund’s name.

My subject is globalisation, which involves considerable interaction with overseas colleagues. When I mention I am Marsden funded, they look blank. (The one exception was a Sydneysider who wondered what the missionary Samuel Marsden had to do with research.)

The Fund is named after Ernest Marsden (1889-1970). I think of him every day, when I switch on a radio or cell phone inside a room. It was Marsden’s 1909 measurements from which his supervisor, Ernest Rutherford (1871-1937), deduced the nuclear model of the atom, which suggested those solid-looking walls are largely empty. Rutherford described the findings “as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you”.

The Fund’s naming commemorates Marsden’s role in founding the DSIR which, ironically, was dismantled just before the Fund was established. But neither feat is known to my international colleagues nor, I expect, to most of those with whom other Marsden grantees are interacting. Marsden should be honoured for his contributions to New Zealand science, but perhaps it is not in the interests of the science he so fostered that our premium general research fund should be named after him.

There are a number of possible titles. To progress the discussion I suggest it be renamed ‘The Rutherford Fund’, instantly recognisable to scientists all over the world and to all New Zealanders. (He is already acknowledged by the Rutherford Medal, the highest award instituted by the Royal Society of New Zealand. There is already a Marsden Medal.)

Rutherford’s international status is an appropriate reason for naming a fund which enables researchers to engage with their international colleagues. But there are also domestic reasons.

The highest-value note of the realm is the Rutherford. The Fund is top-dollar research. Yet the note is only $100, reminding us of Rutherford’s “We don’t have the money, so we have to think”.

Even more pertinently, in 1916 he said he thought it would not be possible to unlock the energy of the nucleus efficiently. By the time he died, the research program of which he was an integral part had shown it was possible and, of course, Rutherford had changed his mind.

The point here is not that science is about making mistakes and correcting them, true though that may be. Rather, Rutherford was driven by a curiosity about the world. Yet the understandings his, and others’, curiosity generated led to practical applications which transformed the world we live in. I can think of no better justification for the curiosity driven research with which the Fund is concerned.

John Campbell helped me check Rutherford’s facts; he may not agree with the content though. The Rutherford website

The following was in the Royal Society Alert 382, 14 July, 2005.

A couple of months ago we published a letter from Brian Easton MRSNZ suggesting that the Marsden Fund, whose name commemorates Ernest Marsden’s role in founding the DSIR, should be renamed, as Marsden’s name is not widely known in international circles.

Two or three readers agreed with Brian’s comment. As one said: “Marsden” does take quite a bit of explaining, and even the explanation can soon be forgotten by those who don’t quite take it in the first time! However, the letter provoked a comment in reply from Sir Ian Axford, who wrote:
“The choice of Marsden as the name of our research fund was made because, although born in Lancashire, he made a very great contribution to the development of scientific research in New Zealand. In contrast, Rutherford, although born here, left at the age of 23 with a Master’s degree. Marsden can fairly be regarded as having pushed New Zealand into science, whereas Rutherford did little more than recommend some of his students to chairs at our Universities.

New Zealanders have a serious problem with their forelocks. We like nothing better than to tug them deferentially towards our countrymen who have made good in the world outside, if they are remembered, even when they retain only a slight connection with New Zealand itself. Thus we revere not only Rutherford but other icons such as Wilkins, Popper, Wilson, Tinsley, Pickering, Buck, Mansfield, Gillies and Condliffe, who lived and worked abroad, but have neglected Aitken, Comrie, and Williams. We knew Ernest Marsden too well to offer a forelock to him.

Marsden’s name was chosen for the Fund, not because he was a great scientist but because he made science work effectively in New Zealand. The 1990s, it is true, saw the end of Marsden’s DSIR but, since businessmen (and economists no less) had suggested that we might be better off to ‘purchase’ our research abroad, it was also a time to remember that good things can be done in New Zealand.”

Globalisation and the Public Health

For Annual Conference of the Royal Australasian College of Physicians, 10 May 2005.

Keywords: Globalisation & Trade; Health;

Introduction

The Royal Society of New Zealand has awarded me a grant from the Marsden Fund to study globalisation. The ultimate output will be a book. Today I want to set out the economists’ framework for thinking about globalisation, and to use it to consider the problem of alcohol control and the interaction between countries.

First, we need a definition. Globalisation is the economic integration of economies – regional and national economies. This is not to say it is solely an economic phenomenon. Economic globalisation has political, social, cultural and, as we shall see, health dimensions.

Second, globalisation itself is a consequence of the falling cost of distance: transport costs, plus the costs of storage, security, timeliness, information, and intimacy.

For an economist, globalisation is about where things happen. But while it pervades many aspects of our lives, we should not attribute everything to it. I spent quite a lot of time last year studying the pharmaceutical industry. It is a global industry, but its interesting issues are not particularly related to globalisation. Attributing everything to globalisation makes the notion meaningless rhetoric.

Third, most economists think globalisation began in the early nineteenth century, so the phenomenon is almost two centuries old. (There is an argument that it begins in 1492, which might attract the medical fraternity, as from that date certain diseases began being shipped around the world.)

The fourth and final general point is the policy conclusion. As long as the costs of distance fall, globalisation will continue, economies and societies will integrate and we must face those consequences.

Some globalisation issues are not that interesting, although they may be important. Accreditation of qualifications between jurisdictions is a matter of some concern to organisations such as this Royal College, but is worth perhaps only a sentence. There is also a burgeoning problem of biosecurity because of the increase in cross-border transactions, and the speed at which they occur. It vastly complicates disease control, but I don’t know that we economists have much to contribute to its understanding, once we have pointed out the role of the falling costs of distance.

The Policy Convergence Problem

Given the limitations of time, today I want to consider the issue of policy convergence under globalisation. Policy convergence occurs where policies in different jurisdictions converge to the same practices. Often this may occur because of the identification of best practice, so that we might expect the treatment of many medical conditions to be much the same throughout the world, subject to resource availability, as the profession discards less efficient treatments as it learns more about them. That has little to do with globalisation.

However, in many policy areas there is not an agreed best practice, nor is there likely to be. Sometimes this reflects cultural and other situational particularities, but often we just do not know what is best. We don’t know how best to organise a health care system. There are those who know what they think is the best system, but there is no majority agreement. Pragmatists discuss policies for improving the current situation, and they will look elsewhere to learn lessons. But there is no overall agreement as to the optimal system, and there is not likely to be in the near future.

The fear is that policy convergence to common policies may be forced on countries by the competitive forces which globalisation unleashes. For instance economies today have much less freedom to assist industry (and thereby create employment in the assisted industries) than they once did. It is easy to blame this on GATT (the General Agreement on Tariffs and Trade) and its successor WTO (the World Trade Organisation), but it is a consequence of the multilateral trade which globalisation induces, where it is necessary to have a kind of economic disarmament to avoid the economic warfare of assistance measures. (GATT and WTO are institutions to make this possible.)

There are, of course, those who say there should be no industry assistance. They argue that this policy convergence is a good thing, and that globalisation drives us towards what they deem to be best practice. Their ‘best practice’ is a very individualistic commercial arrangement, so they are arguing that globalisation drives out the collectivist solutions which they deplore. Those who argue that, at least in some areas, public solutions are more efficient worry that a globalisation which drives all policy to such commercialist solutions is detrimental to the public interest.

However, we need first to ask whether globalisation necessarily causes a policy convergence. In many policy areas pertaining to business it does. But there are other areas where the convergence does not seem to be so inevitable.

A good example is that Canada and the United States have very different health care systems, despite there being more goods and services crossing the US-Canada border than any other border in the world. Of course the two systems influence one another, but they remain largely different systems, and are likely to remain so. If there is a policy convergence it is a very slow one.

Many other activities lie somewhere between the extremes of commerce and health care systems. So where do we draw the line which separates where policy convergence is, or can be, happening from where it won’t? The short answer is that no one knows, and in any case the line probably shifts over time with changing technologies and distance costs. Any long answer involves detailed analysis of particular circumstances.

Alcohol Control in the European Union

Country      Preferred  Consumption     Liver Disease  Control Index  Excise Duties
             Drink      (litres abs     (per 100,000)  1950    2000   (€/litre abs
                        alcohol/adult)                                alcohol)
Austria      Beer       12.6            18.3           4.0     7.0    4.0
Belgium      Beer       10.1            11.8           6.0     11.5   5.9
Denmark      Beer       11.9            21.8           4.0     11.5   5.9
Finland      Beer+      11.9            21.8           4.0     8.5    12.1
France       Wine+      13.5            13.4           1.0     12.5   3.6
Germany      Beer+      12.5            17.0           4.0     8.0    4.3
Greece       Wine*+     9.3             5.0            2.0     7.0    3.7
Ireland      Beer+      14.5            5.8            8.0     12.0   22.8
Italy        Beer+      9.1             13.6           7.0     13.0   3.6
Luxembourg   Wine       17.5            12.8           n.a.    n.a.   1.0
Netherlands  Beer+      9.7             4.5            6.0     13.0   6.6
Portugal     Wine       12.5            14.1           1.0     8.0    1.4
Spain        Wine*+     12.3            10.5           0.0     10.0   2.2
Sweden       Beer*+     6.9             5.4            17.5    16.5   30.9
UK           Beer       10.4            10.4           8.0     13.0   21.4
EU           Beer*+     10.8            12.7           4.7     11.0   7.8

* Less than 50% of absolute alcohol consumption (all more than 40%)
+ Spirits make up more than 20 percent of absolute alcohol consumption.

Alcohol control policies in the European Union give an indication of the complexity of the problem. Such policies are a country responsibility in the European Union, understandably so given that drinking is partly culturally determined. Moreover, the EU has a principle of ‘subsidiarity’: that governance should occur at the lowest possible level consistent with efficiency.

The above table tells the broad story of alcohol consumption and control in the European Union (before its recent extension). [1]

Preferred Drink

Most European countries prefer beer. A handful of “Mediterranean’ countries prefer wine. The latter tend to be low excise duty countries. In many countries more than 20 percent of absolute alcohol consumption is from spirits.

Consumption (in litres of absolute alcohol per adult – over 15 – in 2001)

Consumption ranges from Sweden at 6.9 litres an adult a year to Ireland at 14.5 litres a year, averaging 10.8 litres a year in the European Union. A male drinking three standard drinks a day – the maximum recommended level, although some dry days are also recommended – would consume 10.6 litres of absolute alcohol a year; the equivalent for a woman, at two drinks a day, is 7.1 litres a year. (Australian adults consumed 9.9 litres a year on average (1999), and New Zealand adults 8.8 litres a year (2001).)
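The annual figures are simple arithmetic, which can be sketched as below; the 9.7 ml of absolute alcohol per standard drink is the size the quoted figures imply, not an official definition (standard-drink definitions vary by country):

```python
ML_ABS_ALCOHOL_PER_DRINK = 9.7  # implied by the figures above; an assumption

def litres_per_year(drinks_per_day):
    """Annual absolute alcohol (litres) from a constant daily intake."""
    return drinks_per_day * ML_ABS_ALCOHOL_PER_DRINK * 365 / 1000

print(round(litres_per_year(3), 1))  # 10.6 - the male maximum
print(round(litres_per_year(2), 1))  # 7.1 - the female maximum
```

On these figures the EU average of 10.8 litres sits almost exactly at the recommended male maximum.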

Chronic Liver Diseases and Cirrhosis (all ages per 100,000)

Alcohol causes harm. The rate of chronic liver diseases and cirrhosis is but one indicator, for it does not capture such harm as drunk driving, alcoholism or alcohol-induced violence. The measure ranges from 4.5 per hundred thousand for the Netherlands to 21.8 for Denmark, around an EU average of 12.7.

Control Intensity (out of 20)

Control intensity is a summary indicator of the effects of all alcohol control policies, constructed by Esa Österberg and Thomas Karlsson. [2] Its six sub-components cover control of production and wholesale, control of distribution, personal control, control of marketing, social and environmental control, and public policy, but exclude taxation, which is treated separately. The index attempts to capture the intensity of public policy responses aiming to limit alcohol consumption or harm. In 2000 the index ranged from a relaxed 7 for Austria and Greece to an intense 16.5 for Sweden, averaging 11.0 for the whole of the EU. When interpreting the index we need to be careful about causation: the index may be high because there is a lot of harm in the country, or it may be high because it is reducing harm.

Österberg and Karlsson also calculated their index for 1950. The control levels were much lower then, averaging 4.7 instead of today’s 11.0, presumably reflecting today’s greater willingness to tackle alcohol harm. The spread was greater in the past; two countries, Sweden and Finland, had even more controlling policies in 1950 than in 2000. Thus there has been a policy convergence, although it is probably better explained by shifts to best practice and changing attitudes. Globalisation, other than through the spreading of information and attitudes, probably has a small role.

Taxation of Alcohol (Euros per litre of absolute alcohol).

Taxation is often seen as the most effective way to reduce alcohol consumption, and thereby reduce harm. It does so clumsily, because it also reduces drinking that is not harmful, and may even be benign. However, most health professionals in Australasia would argue that, at a minimum, we need high excise duties on alcohol insofar as other control measures are only partially effective, or take a long time to be effective.

That may not be the view in all European countries, for we see there a wide variation of excise duties and other taxes, ranging from almost zero rates in (typically) wine-drinking countries to a punitive 33.3 euros per litre in Finland (and a tendency for Northern European countries to levy the highest taxation rates). (The current New Zealand rates, including GST, are 14.4 euros for drinks of less than 14 percent absolute alcohol by volume and 26.3 euros above that.)

While historically excise duties on alcohol were used for revenue purposes, increasingly, raising taxes in order to increase prices and reduce harmful consumption has been a public policy goal in many countries. Within the tabulated data there is a crude correlation: low-tax countries tend to be high-consumption countries.

Any correlations in the table are crude, and perhaps not worth pursuing given the limited number of observations. But there is a larger message from the table. The individual countries have very different drinking practices, and their alcohol control policies have responded also individually.

However, there is a tension between that principle for alcohol control and the principle of freedom of movement of goods and services within an economic union. What is to be done about travellers who cross national borders with goods taxed in the departing jurisdiction at a lower rate than in the arriving one?

The EU abolished duty free allowances for travellers between its member countries, but allows the traveller to carry sufficient purchases for their personal use. In the case of alcohol the effective personal allowance is up to
110 litres of beer, plus
10 litres of spirits, plus
90 litres of wine, plus
20 litres of fortified wine.

That amounts to almost 23 litres of absolute alcohol, more than two years’ average consumption for a European Union citizen, or two years of the maximum recommended male consumption rate.
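The arithmetic behind the 23-litre figure can be reconstructed with illustrative alcohol strengths; the ABV values below are my assumptions for typical drinks, not part of the EU rules:

```python
# Personal allowance: (litres allowed, assumed alcohol by volume)
allowance = {
    "beer": (110, 0.048),
    "spirits": (10, 0.40),
    "wine": (90, 0.11),
    "fortified wine": (20, 0.18),
}

# Absolute alcohol = litres of drink x alcoholic strength, summed over drinks
total = sum(litres * abv for litres, abv in allowance.values())
print(round(total, 1))  # about 22.8 litres of absolute alcohol - almost 23
```

The total is sensitive to the strengths assumed, but any plausible set puts it in the low twenties of litres.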

Under the EU rules, travellers can purchase those quantities in a low-tax country and consume them in a high-tax country. The biggest differential would be for a person travelling from Spain or Italy, where the allowance would carry a tax burden of 46 and 53 euros respectively, to Sweden or Finland, where the tax would be 703 and 688 euros respectively. The differentials are smaller for contiguous countries, but four Swedes returning from Germany in a light van laden with their ‘personal’ allowances would clear over 2400 euros. (Apparently Finnish alcohol control policy is having a similar problem with its neighbour and new EU member, Estonia. Alcohol is so cheap there that Brits fly over for the weekend, get wasted, and fly home – if they remember.)
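Using the excise rates from the table, the Swedish-van example can be checked; the 23-litre figure for the full allowance is taken from the text above:

```python
ALLOWANCE_LITRES_ABS = 23.0  # approximate absolute alcohol in a full allowance

def excise_saved(origin_rate, destination_rate, people=1):
    """Excise (euros) avoided by buying the allowance at origin-country rates.

    Rates are in euros per litre of absolute alcohol, as in the table."""
    return (destination_rate - origin_rate) * ALLOWANCE_LITRES_ABS * people

# Four Swedes returning from Germany: Germany 4.3, Sweden 30.9 euros/litre
print(round(excise_saved(4.3, 30.9, people=4)))  # over 2400 euros
```

The per-person saving of roughly 600 euros is what makes such "booze cruises" worth the trip.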

The opportunity travel presents to avoid excise duties on alcohol (and tobacco) is considerable. Enough, it would seem, to pay for some trips. Of course the rule is that the alcohol must be used for personal consumption and not on-sold or exchanged. Yeah, right! (For public health purposes it would be better that they did.)

Conclusion

Individual jurisdictions may continue to pursue independent policies for most of the elements covered by the control index. While varying minimum drinking ages may affect the drinking opportunities of young travellers, that will not markedly undermine the alcohol control policies of the home country. The exceptions are that advertising limitations are being undermined by media which cross international boundaries – television today, as well as radio and the print media – and that it may be difficult to sustain public monopoly provision of alcoholic drinks because of WTO and EU rules.

There may well be a convergence of these controls over time, as best practice becomes clearer, but that will be the voluntary decision of the countries involved, and hardly attributable to globalisation other than via the sharing of information.

Even so, given the pressures from the personal shipment of alcohol within the European Union there may be some convergence of alcohol tax regimes. The likelihood is a convergence to lower excise rates. It would seem, then, that the globalisation inherent in the European Union puts pressure on high excise-duty countries to scale back their tax rates. Were the rates there just for revenue purposes, that would be a matter of fiscal concern. But insofar as they are for reducing harmful drinking, Europe is losing one of its most effective policy instruments for public health where alcohol is concerned.

The World Health Organisation’s Framework Convention on Tobacco Control might suggest how such differences could be handled. (But it contains nothing about international trade in tobacco, other than illicit trade.) Such a solution, while leaving considerable freedom to individual countries, is a form of policy convergence, although it need not be one racing to the bottom.

The European Union has such a protocol, but it is honoured more in the breach, since it involves low-duty countries raising their rates. It is not hard to set out the political economy of those who have resisted such changes. Presumably public health lobbies, such as this College, will have to become more internationally cooperative with the like-minded in other jurisdictions, to offset the resisters and develop effective international conventions.

Since Europe is far away from Australasia, this example may seem a curiosum. However, the European Union is a sort of mini-globalised world, and this case study illustrates the possibility of tension between the free trade of goods and services and public health. The tension does not apply just to alcohol (and tobacco), including their advertising. Other examples include genetically modified foods, unapproved pharmaceuticals and, of course, psychotropic drugs. What is to happen if we require an additive such as iodine in salt but others don’t? Australia and New Zealand are currently contemplating a Trans-Tasman agency for drug approvals. No doubt there are other examples, and new ones will arise.

My work has yet to identify the circumstances where policy convergence as a result of globalisation is inevitable, or those circumstances, if any, where a jurisdiction may have significant independence to follow its own policy judgements. At this stage I merely conclude that because international boundaries are increasingly porous, policy areas such as public health face new challenges which at first seem to have little to do with jurisdictional boundaries. John Donne famously said that ‘no man is an island’. Nor, increasingly, is any public policy.

Notes
[1] Luxembourg is omitted from the text. Its data tend to be distorted by the number of people who live outside its borders but work within them, together with the cross-boundary problems that are discussed.
[2] Esa Österberg and Thomas Karlsson (2002) Alcohol Policies in EU Member States and Norway: A Collection of Country Reports.


Taxing Spending: Should We Think About Introducing a Progressive Expenditure Tax?

Listener: 7 May, 2005.

Keywords: Regulation & Taxation;

I have long been intrigued by Nicholas Kaldor’s proposal for an Expenditure Tax. Instead of taxing income (what one puts into the economy), why not tax expenditure (what one takes out)? Should not those on the same income who can live more frugally pay less tax than the profligate? (Can I hear you saying, “Easton, you are a puritan”? I plead guilty, but am also attracted for environmental reasons.) As Kaldor pointed out, advocates for such a tax have included Thomas Hobbes, John Stuart Mill, Alfred Marshall and Irving Fisher, which shows that supporting it is not a matter of being politically left or right.

I confess that, more contentiously, I also support a wealth tax (with a large exemption), as did Kaldor, because wealth gives political and social power. I favour lots of private wealth, but the power more evenly distributed. Wealth taxes are not on the current policy agenda (and involve some nasty technical problems). I’d go for an expenditure tax, even without a wealth tax.

GST is a sort of expenditure tax, but it is proportional to spending rather than progressive, under which one would pay proportionally more tax as expenditure rises. It suffers from various loopholes, including the fact that GST is not charged if one spends directly overseas as a tourist or is a modest personal importer.
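The difference between a proportional and a progressive expenditure tax can be sketched in a few lines. This is an illustration only: the 12.5 percent rate was the actual GST rate at the time, but the progressive brackets and rates are invented for the purpose.

```python
# Proportional (GST-like) versus progressive expenditure tax.
# The progressive brackets below are hypothetical, for illustration only.

def proportional_tax(spend, rate=0.125):
    """A flat rate on all spending, like GST (12.5% at the time)."""
    return spend * rate

def progressive_tax(spend, brackets=((20000, 0.05), (50000, 0.15), (float("inf"), 0.30))):
    """Marginal rates rise with spending, so the average rate rises too."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        taxable = max(0.0, min(spend, upper) - lower)  # spending inside this bracket
        tax += taxable * rate
        lower = upper
    return tax

# A frugal and a profligate spender face the same average rate under GST,
# but different average rates under the progressive schedule.
for spend in (10000, 60000):
    print(spend, proportional_tax(spend), progressive_tax(spend))
```

Under the flat rate both spenders pay 12.5 percent of whatever they spend; under the hypothetical progressive schedule the average rate climbs as expenditure rises.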

During the big tax reforms of the 1980s, when GST was introduced, I looked at a progressive expenditure tax as a replacement for income tax. But transitional arrangements make it extremely difficult to introduce overnight. There are just too many opportunities for tax evasion and fiscal instability, while transitional compliance costs would be high.

An expenditure tax is essentially an income tax with savings deducted from income so they are exempted from being taxed. We could move towards an expenditure tax, by subtracting some savings when income is calculated for tax purposes. This is particularly attractive these days when New Zealand is desperately short of domestic savings. As shown by the rising current account deficit – how much we borrow overseas – we need to save more. It is not just that the nation may be increasingly owned by foreigners, but we are also increasingly vulnerable if there is an international financial crisis.

Will subsidies raise the private savings rate by more than they reduce public savings? The empirical evidence as to how people save is murky. It suggests that people don’t behave according to that mainstay of economic theory, rational economic man. Significant savings are most likely to occur when people lock themselves into long-term savings plans. This suggests that, given the national savings deficit, we might use government subsidies to get people to commit themselves. That is a major attraction of the September 2004 Harris Committee’s proposal to encourage workplace-based schemes, a voluntary version of the 1975 New Zealand Superannuation Scheme, in which workers commit themselves to investing some of their wages and salaries directly into pension accounts. The government is expected to announce its response (together with measures fostering home ownership) in the Budget.

The proposal will not rule out other measures to promote private savings. We could exempt from taxation income deposited in certain sorts of long-term savings accounts. When someone does their tax return they would deduct those deposits from their income liable for tax. However, when they withdrew savings from the account, the withdrawals would be taxed as if they were income.

Would this be an incentive to save? The scheme defers tax, rather than avoiding it altogether. (Experts call it an “EET” scheme, since deposits and interest are “exempt” but withdrawals are “taxed”. Our current approach is “TTE”.)

Deferral has its attractions. First, the savings are likely to be withdrawn when one is retired on a lower income, so the tax rate would be lower. Second, since nobody fully trusts the government, it is better to avoid paying tax now than to rely on a promise of avoiding it in the future, which is what the current arrangement offers.
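The deferral arithmetic can be illustrated numerically. The sketch below is not a model of any actual scheme: the tax rates, investment return and saving period are all invented assumptions, chosen only to show why EET compounding beats TTE even before the lower retirement rate is counted.

```python
# Final value of $1,000 of gross earnings saved for 20 years under the
# two treatments. All rates here are hypothetical.

def tte_final(gross, r, years, t):
    """TTE: deposit taxed up front, returns taxed each year, withdrawal exempt."""
    deposit = gross * (1 - t)                 # tax paid before saving
    return deposit * (1 + r * (1 - t)) ** years  # after-tax return compounds

def eet_final(gross, r, years, t_retire):
    """EET: deposit and returns exempt, withdrawal taxed as income."""
    fund = gross * (1 + r) ** years           # compounds untaxed
    return fund * (1 - t_retire)              # taxed once, on withdrawal

gross, r, years = 1000.0, 0.05, 20
t_work, t_retire = 0.33, 0.21                 # assumed lower rate in retirement

print("TTE:", round(tte_final(gross, r, years, t_work), 2))
print("EET:", round(eet_final(gross, r, years, t_retire), 2))
```

Even if the retirement rate equalled the working rate, EET still comes out ahead, because the fund compounds free of the annual tax on returns; the lower retirement rate is a second, separate gain.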

Any deferral means the government receives less tax revenue, so it saves less even if we save more. I can think of ways of maintaining fiscal discipline. Introducing a partial expenditure tax will involve a lot of thinking. We should think about it.

Governing in an MMP Era

Leadership New Zealand Programme Retreat, May 5-6.

Keywords: Political Economy & History;

The speakers were asked to respond to the proposition that ‘today, without an assured parliamentary majority, the government has to consult over its policies rather than impose them’.

The Whimpering of the State: Policy after MMP

The introduction of MMP by the 1993 referendum was the single most important step in the development of the New Zealand constitution in the twentieth century. Its ramifications will echo throughout much of the twenty-first century.

Big-step constitutional changes are unusual in New Zealand, since incrementalism has been the way it typically progresses here. The revolution occurred out of the electorate’s frustration that it could not get the government to do what it wanted. In 1978 and 1981 the Labour Party had won more votes than National, but the peculiarities of the Winner-Takes-All electoral system gave more seats to National. When Labour became government in 1984 it ignored its election promises, as it did again after the 1987 re-election. Turning to National in 1990, the electorate found a government which intensified, rather than reversed, the policies the electorate hated. So in desperation, it voted for a constitutional revolution in 1993.

I am not sure the electorate knew exactly what it was doing. Many people told me they voted for MMP because the majority of politicians and the business sector were against it. They were tired of the dictatorship that the existing system generated, and lashed out.

The elective dictatorship, as Quintin Hogg – later a British Lord Chancellor – called it, based on a Winner-Takes-All electoral system, worked when the government operated in a spirit of consensus. By Rob Muldoon’s time we had run out of such enlightened dictators. The unenlightened dictators said they knew what was right, although the economic records of Muldoon, Roger Douglas and Ruth Richardson gave the rest of us little confidence that they did.

I called the old system ‘Winner-Takes-All’, because First-Past-the-Post is misleading. There is no post to pass. There may once have been in a two-party race, but post-war New Zealand society was becoming more complex, and many parties evolved reflecting different interests. In 1993 the National government was formed with 35 percent of the vote, less than it got in 1984 when it was thoroughly defeated. But it was still a WTA system, and because National won more seats, despite almost two-thirds of the electorate rejecting it, it remained the government.

One of the oddities of the old system was that there was hardly any formal recognition of political parties before the 1990s, although they had been an informal part of the constitution for a century. We pretended that each MP was an independent spirit who judged issues on their merits, in the tradition articulated by Edmund Burke that ‘your representative owes you, not his industry only, but judgment; and he betrays, instead of serving you, if he sacrifices it to your opinion.’ The reality was that party considerations dominated MPs’ choices. Unenlightened dictators used the leverage of party loyalty to pursue policies for which they had no electoral mandate and for which they would not have had a majority in the house on a free vote. Too frequently it was a take-no-prisoners approach.

The point about MMP, then, is not how we choose the MPs, but how they vote in Parliament. In particular it is now unlikely that any party will win a simple majority of seats in the house again. That means that every statutory policy measure requires the support of at least two independent party caucuses.

Exactly how the system works depends upon whether there is a coalition of parties (as happened from 1999 to 2002) or whether there is a minority government which depends on one or more parties for supply and confidence, parties which do not guarantee to support any particular policy (the current arrangement). In either case the executive has less power, although of course the prime minister and cabinet remain the greatest power in the country (the blind, deaf and dumb market may be even more powerful).

So unlike when there were unenlightened dictators, the government has to be more consultative. It needs to persuade other caucuses, who will in turn consult the public. Policies submitted to parliament have to be better prepared. If they are not, parliament will amend them, in a way it did not in the past. The highly ideological, imperfectly worked-through policies which were common under the unenlightened dictators are much rarer today. You may disagree with the policy and the desired outcome – a good portion of the public usually will – but assessed in terms of policy quality we are doing much better than a couple of decades ago.

The parliamentary process has changed. Helen Clark, like Jim Bolger before her, is a political manager, making sure she has a coalition to enable her to govern. They still have policies they want to implement – Clark has taken to heart Bolger’s regret that his government did not spend more on the arts. But, like John Ballance, who was perhaps the most radical prime minister New Zealand ever had (he governed from 1891 to 1893, when he died in office), they will counsel their caucus that while they support certain policies, the public is not ready for them. Jenny Shipley famously said that every morning when she was prime minister she counted heads. What she was doing was seeing whether she had the support of over half of the New Zealand people, as indicated by their representatives.

However, the current system of government based on MMP is not perfect. Fundamental constitutional change takes a long time to bed in. Many journalists, public servants, academics and even politicians seem to assume we are still in a WTA world. As my Whimpering of the State pointed out, we are not.

My second caution is that while policy development is of better quality today than it was a decade ago, it can still be improved. CP Scott, the long-time editor of The Manchester Guardian, famously said that comment is free but facts are sacred. Our policies are still over-dependent upon opinion, lacking good analysis. Quality social science has always been fragile in this country, and the unenlightened dictators brutally repressed social scientists when they pointed out – rightly, we know with hindsight – that the dictators were mistaken. New Zealand social science has yet to recover. If MMP is a system for generating quality decision making, it needs quality analysis to underpin it.

The third caution arises from the possibility that MMP government generates policy stasis. I discuss this in the book. The argument is that the policy process is limited to incrementalism and blocked by interest groups. Dictators claim they can see past interest groups to the interest of the public. But that does not mean they get it right. Between them, Douglas and Richardson caused a seven-year stagnation of the New Zealand economy, the longest we have ever had.

Better quality policy making would alleviate but not eradicate the possibility of stasis. What is needed is the ‘vision thing’. Interestingly, Helen Clark has a vision. It is set out in Growing an Innovative New Zealand. But she does not often present it, and I would bet that the average government adviser does not know of its existence.

Leaders with a vision acceptable to the public have been rare in New Zealand, even under WTA – John Ballance, Peter Fraser and Norman Kirk. We are probably going through a time when the vision thing is not on the public agenda. It may be that what New Zealanders currently want is solid incremental policy development dealing with problems as they arise, buttressed by good political management. In the longer run such a demand could lead to stasis – whether we have an MMP or a WTA parliament.


Globalisation and Little Old Nelson

“Spirited Conversations”, Nelson, April 27.

Keywords: Globalisation & Trade;

Introduction

The Royal Society of New Zealand has awarded me a grant from the Marsden Fund to study globalisation. What I am going to do this evening is set down the framework economists use when we study globalisation.

First, the definition: globalisation is the economic integration of economies – regional and national economies. It is accepted that globalisation is not solely an economic phenomenon. It has political, social and cultural consequences.

Many people treat globalisation as if it were a particular phenomenon, such as brands, or the World Trade Organisation, or foreign investment. But different people think of different phenomena, whereas the economists’ approach is to provide an overall framework into which each can be fitted. I have no time tonight to deal with every possible phenomenon, so the aim is to introduce the framework to you, with some illustrations, in the hope that you can use it to explain a problem which is puzzling you.

Second, economists think globalisation began in the early nineteenth century, so the phenomenon is almost two centuries old. I use lots of history to illustrate the framework. I’ll do so tonight.

We forget that we are the children of globalisation. That’s why we are here, for the New Zealand we know is a consequence of it. Our non-Maori ancestors came here as a part of the process of globalisation. The shape of the economy is the consequence of refrigeration, which reduced the cost of sending meat, dairy products and apples to Britain from prohibitive to negligible. New Zealand might have been the Falklands of the South Pacific without it.

Because humans are naturally conservative, each generation accepts the globalisation which happened in the past, yet is apprehensive about what is happening now and in the future. History provides some comfort that our grandchildren will think the same.

Third, globalisation is caused by the falling cost of distance: transport costs, plus the costs of storage, security, timeliness, information, and intimacy.

To see how these costs have fallen, consider that 150 years ago it took about three months to get from New Zealand to Britain, whether you were sending a package, a person or a message. Let’s represent the time by a line across the page:

***********************************************

Today it takes only a month to go to Britain by ship. That is because ships are faster, and they can go through the Panama Canal. That line now looks like:

**************

But a person can fly to Britain in a couple of days, while the holds of the planes carry goods with high value to weight ratios. That line looks more like

*

But today one can send information in vast quantities almost instantaneously via the world wide web. On the same scale that time is represented by something smaller than the full stop which ends this sentence.

Distance costs continue to fall. Did you know that our third biggest export port by value is Auckland International Airport? It is also the second biggest import port. And those calculations do not include our single biggest export earner – tourism. The US sends as much by air as by sea.

The final general point is the policy conclusion. The issue is not being for or against globalisation. It is how this force shapes the world economy and the societies in which we live, and how it can be harnessed to give desirable rather than detrimental outcomes. That will be my final theme.

This is a very compact summary of a lot of economic thinking. The best I can do this evening is give you a sense of how some of these principles develop.

What The Falling Costs of Distance Means to a Region

In order to see the complexity of the issue, I am going to look at the impact of a local change in distance costs, leading to a greater integration of two regions.

Suppose a state highway were built between Hamner and Tophouse along the already existing route. It would reduce the road travelling time between Nelson and Christchurch by over two hours, turning what is close to a full-day trip into a half-day one. What would the effect of this closer distance be on the two regions?

Let’s put aside collateral impacts on other regions, such as:
– Maruia Springs and Murchison would experience a loss of custom;
– travellers on the Picton to Christchurch road would suffer less congestion, because Nelson travellers would not use it;
– residents of Hamner might be quite divided between those who like the extra custom and those who dislike the extra traffic through the town;
– and so on.
But the collateral effects remind us that, on the world scale, third countries and regions may be affected. For instance, the airports of Shannon in Ireland and Gander in Newfoundland at first thrived as the jumping-off points for cross-Atlantic air travel, and then suffered when aircraft range increased and the planes overflew them.

So how would the road impact on the two regions? My list includes:
– some goods would now be supplied to Nelson from Christchurch businesses, and the previous Nelson suppliers would close down or change;
but also
– some Nelson businesses would start supplying goods or more goods to Christchurch.
Meanwhile
– Nelson tourism in all its various facets would flourish, because more Christchurch visitors, and overseas visitors based in Christchurch, would come.

Would all that result in more or fewer jobs in Nelson? I don’t know.

One source of new jobs would be that some Nelson exporters would pack airfreight containers of fresh and just-in-time products, and road them down to Christchurch International Airport (which is seventh as an export goods port and sixth as an import goods port, although it will be going up the rankings). In order to survive, New Zealand will have to export more high value-to-weight ratio goods. Moreover, new long-range planes will soon be able to fly New Zealand-New York and New Zealand-India, so the share of the world we can supply by air freight is going up, and we have to seize the opportunity. Nelson producers are among those who might seize it, creating more added value and more jobs in the province.

Incidentally, the port of Nelson is already a smaller exporter and importer by value than Christchurch airport (12th and 13th in 2003), and it is possible it would lose business to Lyttelton if the Hamner-Tophouse road were in place. There could be more, or there could be fewer, flights between Nelson airport and Christchurch.

So would Nelson be worse or better off with the road? What we do know is that some people in Nelson would be better off and some may be worse off. Many of those initially worse off would be better off in the long run as they move to new jobs, but those jobs may not be in Nelson.

The conclusion that some people will be better off and some worse off may at first seem a weak one, but it is central to understanding the globalisation debate. Those who are worse off from some aspect of globalisation say they are agin it, and those who are better off say they are for it. They are both correct, about themselves, but they lack an overall perspective. That is why the globalisation debate is so confusing. We are talking about a very complex phenomenon, which lacks simple yeses or nos.

Consider refrigeration. Was that a good thing for every New Zealander? (It was bad for English hill farmers, who found their business undercut by cheap New Zealand lamb.) It is so long ago it is hard to remember that some of the big New Zealand sheep stations specialising in wool found themselves at a disadvantage when their workers went off to start up their own farms. And they lost political power too. (The biggest losers happened to be the North Island Maori, who had their underutilised land taken in order to settle Pakeha farmers on it.)

Before looking at the international parallel, there is another phenomenon I want to spend a little time on. Given a Hamner-Tophouse road, I am fairly sure you would find more Christchurch people buying second homes in Nelson, driving up on Friday evening and returning at the end of the weekend or longer holiday. Some permanent Nelson dwellers would benefit: builders and local shops, for instance. You would have out-of-towners paying a share of your local authority rates – probably more than the share of resources they would use. Moreover, those of you who have a Nelson house you want to sell would have more interested buyers, so you would get a better price.

But are the higher prices a good thing for Nelson as a whole? Waterfront prices would go up most, and Nelson locals would find it harder to live by the sea. The higher house prices would also impact on the poor, who would be less able to buy their own houses, and their rents would probably be higher. And to complicate things, there would also be more jobs for the poor.

Same story as I was telling you before. Some people would be better off and some people would be worse off. Again we can argue the balance, knowing there is no simple scientific method which tells us whether the region is better or worse off. What is important about this extension, is that it involves investment by outsiders, which also happens internationally.

You might say that it is one thing to have Christchurch people crowding out the housing: it is quite another to have foreigners. After all, you may quite like Cantabrians – even be a Crusaders fan – and you may have lived there once yourself, or friends and relatives do. On the other hand, you may not. And in any case, the outsiders may vote differently on matters of local importance. Again there are no simple answers.

What Falling Costs of Distance Mean Internationally

There are a couple of important differences between the regional and international responses to falling costs of distance.

The first is that whatever one’s view of the people of Christchurch, you think you have more in common with them than with those overseas. Moreover, if your job closes down and moves to Christchurch you may be able to follow it. There will be an upheaval, but it is not impossible. On the other hand, if the extended-range planes (or the internet) mean your job moves to New York or Bangalore, it is not only very difficult for you to follow, but migration restrictions may not let you anyway.

There is also the matter of fiscal redistribution. If a job moves to Christchurch, the government is likely to redistribute some of the additional taxation it gathers in Christchurch back to Nelson, especially to cover transition costs such as unemployment benefits. That won’t happen if the job moves to New York or Bangalore.

Related to this are the notions of nation and sovereignty. We have to be very careful when we are discussing these notions, for they are often idealised beyond the practicalities. Yet there is a sense in which we New Zealanders are a nation. But it is not hard to see that both the nation and its ability to act in its own interests are changing. Nostalgia says things are getting worse. Historians would remind us that they always have been – nostalgia is not what it used to be. Economists would remind us that usually any loss of sovereignty is a tradeoff for other benefits. I have written on this elsewhere – two papers are on my website. We could talk about the issue later, if you like.

Falling Cost of Distance as a Driver

Perhaps the most important difference from my local example is that Nelson is not going to have a lot of influence on the falling costs of distance generally. If Nelson does not want the Hamner-Tophouse road, it won’t be built. If Nelson does not want extended-range planes, local feelings will not make one iota of difference to whether they are built or not. We can, of course, ban them. A ban from Nelson airport would not make a lot of difference, but New Zealand could ban them from all our airports. In which case New Zealanders would fly to New York via Sydney (at a cost of a few extra hours of flying time): a way can often be found round prohibitions on new technologies.

Will the costs of distance continue to fall? They may rise. Oil prices may go up; terrorism could restrict transportation. The cost increases, however, would probably have to be enormous. Were the avgas price to double, then because of the improved efficiencies of planes, air travel costs would only go back to where they were in 1990. But even were there to be no new physical technologies, the indications are that new soft technologies – managing and organising the hard technologies better – will continue to press down on the costs of distance.

There is quite a debate about offshoring in the information technology and communications industry. (Offshoring is outsourcing that goes overseas, deriving from the dramatic reduction in the costs of moving information. In effect it makes part of the service industry as internationally mobile as manufacturing.) I am struck by how clumsy current management practices of offshoring are, and expect that as they improve the costs of offshoring will come down, even if the wages in the offshored industry continue to rise.

Insofar as globalisation – the closer integration of national and regional economies – is driven by the falling costs of distance, and given that those costs are likely to keep falling, for some products anyway, the pressures from that extraordinary force which we call globalisation will persist for some time, and so will the consequences. So what are we going to do about it?

So What Are We Going to Do About Globalisation?

History provides some guidance. In the nineteenth century globalisation and industrialisation were intimately connected. Manufacturing processes, once performed at home, moved to increasingly larger factories, as the falling costs of distance made possible the reaping of their economies of scale. Our ancestors moved from their villages into the slums of Britain and Europe. There was an extraordinary destruction of the environment, as ‘dark satanic mills’ polluted air and water, and the urban crush created cesspools. Deteriorating conditions caused many to travel from their homes to the other side of the world. Economic historians still debate whether living standards rose or fell over the nineteenth century. They probably rose for some, fell for others. Yet while the industrialisation caused much personal trauma to those involved, their descendants of today have benefited, although not all of the environmental damage has been reversed.

Over time, mankind learned to harness the new technologies by creating social institutions which regulated them. Factory Acts prevented the use of child labour. Public infrastructure dealt with the disposal of waste and public hygiene. Public income support protected the weakest. Workers’ compensation started in Germany as a response to factory accidents and was copied in New Zealand in 1901. So gradually the capitalist tiger unleashed by nineteenth-century technological change was tamed – in part anyway. Mankind learned to control the forces and make them work in our interests.

At an early stage of nineteenth century globalisation and industrialisation, a major dispute took place between philosophers of the political Left. On one side was the French anarchist, Pierre-Joseph Proudhon. Appalled by the human costs of the changes, he argued for a reversion to the way of life which preceded these changes, with a nostalgia for an Arcadia which never existed, but which he hoped could be recreated. Another version of the Arcadian nostalgia was in some of the reasons people came to New Zealand. They thought they could escape the trauma of European industrialisation by coming to a green and pleasant land, starting afresh to create a utopia (not, one adds, always sensitive to the indigenous people already here).

The best-known opposition to Proudhon came from Karl Marx, a man we often see through the perspective of his twentieth-century followers, many of whom misrepresent him. Marx argued that industrialisation and globalisation were essentially progressive forces. The processes, he said, were unstoppable, even though they caused misery to the workers caught up in the transformation. But, Marx went on to argue, ultimately the outcome would benefit workers with the creation of a communist state, in which they would enjoy the fruits of their labour.

With hindsight we can see that Marx was broadly correct. Sure, we have not reached any communist state – Marx himself was a bit vague about what he meant by the notion. But ultimately the workers of the world are better off for the industrialisation. Had they retreated to the nostalgia of Proudhon’s Arcadia, they would not be, for they would be isolated from the benefits of the technology which drove globalisation and industrialisation. Admittedly there has not been much equity in the sharing of the fruits of the transformation. Among those who have benefited least were those in the continents of Africa and Asia. But even they are probably glad to be here today, rather than in the pre-globalisation economy of 1800 (HIV-Aids aside).

For no matter how awful some of the effects of nineteenth-century industrialisation were, our ancestors, informed more by democratic socialism than by the ideas of Marx or Proudhon, learned to control it and to benefit from it. That is the challenge and the prospect for this bout of globalisation too: to harness the forces of globalisation, not to deny them or pretend we can reverse them.

Marx famously wrote “philosophers have only interpreted the world in various ways; the point is, to change it.” But first we have to understand the implications of the reductions in the costs of distance. I am most fortunate that the Marsden Fund has made that possible for me, and I’m glad to be able to share my understanding with you.


Fiscal Management

Finance Minister Michael Cullen faces the political reality of election year

Listener: 23 April, 2005.

Keywords: Macroeconomics & Money;

How to slow an economy that is pressing on its capacity to produce? A couple of decades ago, the government might have imposed a credit squeeze, reducing private investment. Additionally, it would have tightened restrictions on consumer credit (especially hire purchase), raised taxes to reduce household consumption (but not in election year), reduced government current spending and told government agencies to defer their investment plans. Today, it appears to rely only on the Reserve Bank to squeeze credit by raising interest rates.

In the past, expenditure was restrained across the board, easing pressure on all production. Today, the direct restraint is only on that affected by interest rates – mainly private investment, leaving three-quarters of expenditure untouched. The change came about in the late 1980s, when the Reserve Bank was given the task of restraining inflation, via its monetary policy.

Unfortunately, the way that this was presented seemed to imply that fiscal co-ordination was unnecessary. But the resulting credit squeeze has to be much tougher, and interest rates higher, than if the government also restrained private consumption and public spending, sharing the burden across the entire economy rather than imposing it just on the investment sector. Of course there will be collateral damage, as those paying interest will report. Perhaps they will cut back their consumption, which will ease the pressure on available resources. Damage also occurs when higher interest rates drag up the exchange rate, making exporting less profitable, so that exporters reduce their production. That also eases the pressure on resources for production, but, together with the cutback in investment, the cost is New Zealand’s long-term growth prospects.

The policy was adopted in the 1980s, in part because extremist monetarism was then fashionable. We were besotted by the US experience, where fiscal management – especially a tax hike – is difficult because the President does not command Congress in the way that our Prime Minister commands Parliament. So, Americans have to rely on monetary policy exercised by the Federal Reserve (chaired by Alan Greenspan). We have more possibilities.

Those who uncritically adopted the American way, despite our different governance, argued that fiscal management was unnecessary. Not having to be fiscally disciplined makes political management easier, since politicians do not have to make the tough decisions. Not surprisingly, during much of the late 1980s the fiscal position was far too slack, a major reason why the economy stagnated for seven years, while the rest of the world prospered.

None of the above will surprise Minister of Finance Michael Cullen. He knows that, with low unemployment and high utilisation of productive capacity, the government fiscal stance has to be restrained. But he faces the political reality of election year. Other Cabinet ministers are keen to spend more: many of their proposals are worthy – no doubt, Cullen secretly thinks so, too. Meanwhile, there is the clamour of other political parties promising substantial tax cuts and public expenditure increases. How to resist the siren calls?

It will be easier than in 1990, when the Labour government, not expecting to get re-elected, made numerous fiscally extravagant promises. Cullen expects to be Minister of Finance after the election, and will not want to have to reverse his pre-election decisions. We may see a Budget that will spend only a little more and tax only a little less, if at all. But it will promise more spending and tax cuts in the future. Expect those promises to be played up during the election campaign, rather than the admirable fiscal discipline the government has shown.

Need we take measures to restrain the economy? The automatic adjustment mechanism is a little more inflation, which may move outside its target range in the next few quarters. The Reserve Bank is not required to worry about such short-term fluctuations. But, in practice, it is deeply concerned that a higher inflation track will build in higher inflation expectations among the public. In which case, it won't be just a little more inflation, but a little more, and still more. Ideally, the Reserve Bank would like to hold interest rates at their current level, although overseas events could change its plans. Hopefully, onshore events – including the Budget – will not.

PS. As I filed this column, Cullen indicated that some public capital investment would be deferred and some public expenditure trimmed.

The Gains from Reducing Waiting Times

Paper to the Wellington Health Economist’s Group, 21 April 2005.

Keywords: Health;

Introduction

This paper demonstrates that there can be substantial health benefits – as valued by economists – from reducing waiting times: far more than from the single earlier treatment necessary to get the reduction under way. For while that individual benefits from the treatment, all those who follow her or him also benefit from earlier treatment, even though no additional resources are necessary.

Waiting times have long interested me, because their mathematics involves the interaction between stocks and flows. However, it was the medical injury problem that led me to apply, for the first time, standard health care evaluation techniques. To my surprise, they show there are spectacular benefits from reducing waiting times. It is this analysis I share today.

The conclusion is based on a mathematical formulation, albeit a relatively simple one. Even so, not everyone may be able to follow all the mathematics. They may be comforted that this paper follows the rules of Alfred Marshall, a good mathematician and one of the greatest economists. Writing to Arthur Bowley in 1906 he said:
“(1) Use mathematics as a shorthand language, rather than as an engine of inquiry.
(2) Keep to them till you have done.
(3) Translate into English.
(4) Then illustrate by examples that are important in real life.
(5) Burn the mathematics.
(6) If you can’t succeed in (4), burn (3).

This last I did often.”

The mathematics in this paper is shorthand. And the conclusions follow intuitively and practically.

A Formal Model

We divide time into periods.

Assume that the treatment is recommended to be done in period 1, but it is actually done in period 2, so there is a waiting time of one period.

We assume the cost of the treatment, which is the same in either period, is C.

A treatment in the first period has a net benefit B, where B is a discounted sum of the streams of net benefits to the patient together with the net resource savings to the health system and the rest of the economy.

(We know B > C, otherwise the treatment should not proceed.)

We assume that if the treatment occurs in the second period the net benefit is rB, where r < 1, because treatment delay leads to fewer benefits and/or greater costs.[1] (Note that rB is evaluated from period 2; we discount it when we do the evaluation from the perspective of period 1.) (Since the delayed treatment goes ahead, we know that rB > C. It need not be. Suppose the waiting patient died: in which case r ≤ 0, and the treatment would not proceed.)

We take the discount rate between periods as d, so the net benefit of the treatment in period 2 is rB/(1+d).

Suppose in period 1 the agency had found some extra resources to do one more treatment, so that someone who would normally have waited until period 2 gets treated in period 1. (Perhaps the treatment team works overtime; perhaps a new team comes in.) The immediate net gain is B - rB/(1+d). But that releases the resources used for treatment in period 2, which means that a patient who has just gone onto the waiting list can be treated immediately. So there is another gain of B - rB/(1+d), to the second patient, although for evaluation purposes that has to be discounted back to period 1. Note this benefit occurs without additional resources, since they were already available. And the same applies to period 3 and a third patient, and period 4 and a fourth patient, and so on.

The following table sets the situation down, where Scenario A is where each patient gets their treatment in the period they go onto the list, but in Scenario B they wait a period for treatment. Scenario A, however, requires a one-off injection of resources so that the first patient does not wait.

Cost and Benefit Comparison Between Scenario A and Scenario B

         ------- Scenario A ------   ------- Scenario B ------   -- Difference --
Period   Patient   Cost   Benefit    Patient   Cost   Benefit    Cost    Benefit
1        1         C      B          -         0      0          C       B
2        2         C      B          1         C      rB         0       B-rB
3        3         C      B          2         C      rB         0       B-rB
4        4         C      B          3         C      rB         0       B-rB
5        5         C      B          4         C      rB         0       B-rB
6        6         C      B          5         C      rB         0       B-rB
7        7         C      B          6         C      rB         0       B-rB
n        n         C      B          n-1       C      rB         0       B-rB

The underlying behaviour of the table is like this.
– In period 1, Patient 1 goes onto the waiting list for treatment. In Scenario A they get treated immediately in period 1, but in Scenario B they wait a period, and get treated in period 2.
– In period 2, Patient 2 goes onto the waiting list for treatment. In Scenario A they get treated immediately in period 2, but in Scenario B they wait a period, and get treated in period 3.
– The same thing happens to Patient 3, but one period later. So Patient 3 goes onto the waiting list for treatment in period 3. In Scenario A they get treated immediately in period 3, but in Scenario B they wait a period, and get treated in period 4.
– and so on for Patients 4, 5, 6, 7, ….[2]

The discounted sum of the benefits is [B - rB/(1+d)]/d, or the gain from treating the first patient early divided by the discount rate. I shan't prove this. In Marshall's literary terms, all the equation says is that the benefit from the one operation which shortens the waiting times of a stream of patients is many times the benefit to the first patient. That is obvious – now I see it – because the other waiting patients benefit too.
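The claim can be checked numerically by summing the discounted stream of per-period gains directly and comparing it with the closed form. The values of B, r and d below are purely illustrative, not taken from any actual evaluation:

```python
# Illustrative values: net benefit of prompt treatment, deterioration factor,
# and per-period discount rate.
B, r, d = 100.0, 0.9, 0.10

# Gain from treating one patient a period earlier than otherwise.
per_patient_gain = B - r * B / (1 + d)

# In every period one more patient enjoys the same gain; discount each
# period's gain back to the present (end-of-period convention).
total = sum(per_patient_gain / (1 + d) ** k for k in range(1, 2000))

# Closed form: the first patient's gain divided by the discount rate.
closed_form = per_patient_gain / d
print(round(total, 6), round(closed_form, 6))  # the two agree
```

The truncated sum converges to the closed form because the gains recur every period while the discount factor shrinks them geometrically.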

How many? In New Zealand we frequently use a discount rate of 10 percent p.a., so reducing a waiting time by one year gives a return of ten times the value to each patient. The single operation in effect benefits 10 people, after allowing for the discounting.

In the case of reducing the waiting time by three months the multiplier is slightly more than 41 times.[3] That may seem paradoxical, but in this case four times as many patients are affected. The total waiting times saved are almost identical: 10 years and about 10.4 years.[4] (In any case, the benefit from doing a treatment which is otherwise delayed for 12 months is likely to be substantially larger than for one delayed for 3 months.)
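The footnote arithmetic can be reproduced in a few lines. The only inputs are the 10 percent p.a. discount rate and the compounding identity that four quarters of discounting must equal one year's:

```python
# Per-period discount rates implied by 10 percent p.a. compounding.
annual = 0.10
quarterly = (1 + annual) ** (1 / 4) - 1   # ~0.0241, since 1.0241^4 = 1.10

# The waiting-time multiplier is 1/d for the relevant period length.
print(round(1 / annual, 1))      # 10.0  (one-year reduction)
print(round(1 / quarterly, 1))   # 41.5  (three-month reduction)

# Total discounted waiting time saved, in patient-years:
print(round((1 / quarterly) * 0.25, 1))   # 10.4, against 10.0 for the annual case
```

The quarterly multiplier is larger simply because four times as many patients pass through in a year, while the total patient-years of waiting saved are almost the same.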

I could leave the mathematics at this point, but there is a loose end. We have assumed that the benefits exceed the costs of the treatment, but will the benefits from the earlier treatment be positive when we take the delaying of costs into account? Nothing thus far assures us that B - rB/(1+d) is greater than C. It may not be. However, to cut a long tedious bit of mathematics short, if it is worth doing the treatment it is almost certainly worth reducing the delay in the waiting list. We return to this problem in the second illustration.[5]

Two Illustrations

Treating Breast Cancer

The difficulty with applying the formula is that we rarely have either the Benefit to Cost (B/C) ratio or good estimates of r, the rate of deterioration from waiting. Fortunately there is sufficient information in the case of delayed radiotherapy after surgery for breast cancer to illustrate some features of the argument.

Pooling a large number of studies, a meta-study compellingly shows that the local recurrence rate (LRR) is higher if the patient receives treatment in the 9 to 16 week period than if she receives it in the first 8 weeks. Recurrence has two economic effects: first, women die and suffer; and second, there are additional treatment costs. Both are reduced by eliminating the waiting time backlog. The meta-study estimates that the LRR of those treated in the first 8 weeks is 5.8 percent and of those treated in weeks 9-16 is 9.2 percent.[6]

Suppose every 8 weeks there are 1000 women diagnosed as suitable for radiotherapy for their breast cancer. That is 6500 a year. In Scenario A they are treated within the 8 weeks with the 5.8 percent local recurrence rate, and in Scenario B they are treated in the 9 to 16 week period with the 9.2 percent one. So under Scenario A local cancer recurs in 377 women in an average year, and under Scenario B in 598. In one year, 221 of the women are saved the trauma of recurrence by the 1000 earlier treatments which shorten the waiting time by eight weeks.

If we look at only the first 1000 women treated early, we would think the treatment saved 34 of them from recurrence. However over the whole year, the saving is 6.5 times more, or 221, as subsequent cohorts get treated earlier too. Thus the 1000 treatments are far more valuable than they superficially seem. That is what the mathematics is telling us.

The number is even greater if we look out beyond one year. We discount (in order to avoid the St Petersburg paradox).[8] Taking the standard discount rate of 10 percent p.a., which is equivalent to 1.48 percent for 8 weeks, the effective (i.e. discounted) number of women who avoid a recurrence as a result of the 1000 additional treatments is 34/.0148 = 2302. In economic evaluation terms, more than twice as many women benefit from earlier treatment as the additional treatments actually done.
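The arithmetic of this illustration can be set out explicitly. All the inputs come from the text above: 1000 women per 8-week period, local recurrence rates of 5.8 and 9.2 percent, and 10 percent p.a. discounting.

```python
# Annual recurrence counts under the two scenarios.
cohort = 1000                   # women diagnosed per 8-week period
per_year = cohort * 52 / 8      # 6500 women a year
lrr_early, lrr_late = 0.058, 0.092

recurrences_a = per_year * lrr_early    # Scenario A: 377 a year
recurrences_b = per_year * lrr_late     # Scenario B: 598 a year
print(round(recurrences_b - recurrences_a))   # 221 women spared recurrence each year

# Looking beyond one year, discount at 10% p.a. (about 1.48% per 8 weeks).
d_8wk = 1.10 ** (8 / 52) - 1
saved_per_cohort = cohort * (lrr_late - lrr_early)   # 34 per 1000 earlier treatments
print(round(saved_per_cohort / d_8wk))   # 2302 discounted recurrences avoided
```

The last line is the waiting-time multiplier at work: 34 recurrences avoided in the first cohort become roughly 2300 once every subsequent cohort's earlier treatment is counted and discounted.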

Before discussing the policy implications of this high return, we consider another illustration.

Permanent Discomfort without Deterioration

Consider a health problem which is persistent and leads to discomfort, but the discomfort does not increase over time, so there is no deterioration (r = 1). (An example where there is no deterioration due to waiting might be cosmetic surgery for birth marks.) The effect of the earlier treatment is to remove the discomfort during the waiting period. Is it ‘efficient’ to reduce the waiting time? (It may still be equitable to do so, even if it is not efficient.)

It turns out that mathematically the benefits only have to exceed the costs by a small margin in proportion to the discount rate for the early treatment to be worth doing.[9] Since the discount rate is likely to be very low (recall 1.48 percent for eight weeks), this condition must usually apply. Thus it is still likely to be worth proceeding with reducing waiting times.

Similar calculations to those done previously conclude that a single treatment which reduces the waiting time for one person saves the discounted equivalent of 10-plus years of discomfort, when allowance is made for the shortening of waiting times for subsequent patients.

(However, the gains for permanent discomfort may be smaller than for other treatments. Resource prioritisation is likely to favour cases where the deterioration – measured by r – is greatest.)

Conclusion on Eliminating Waiting Times

The radiotherapy for breast cancer result is startling (if now startlingly obvious). One treatment which reduces the waiting time by eight weeks avoids at least two local recurrences. This is all the more extraordinary because the apparent gain – that from the treatment itself – is to reduce the probability of recurrence for each patient by less than 4 percent. Because many other patients also benefit from the single treatment by experiencing shorter waiting, the small gains multiply up.

The size of hospital waiting lists is an integral part of the political discourse about the effectiveness of the health system and the adequacy of the resources available to it. Waiting lists are prominent because they are an available measure which can be monitored, and because those on them – and their families and friends – grumble. They are misleading because it is waiting times which really matter, and because those who are still waiting to enter the system are not even measured.

What this paper concludes is that, providing it focuses on waiting times rather than waiting lists (and on all those who are waiting, one way or another, for often not all are recorded), the waiting issue is an appropriate one for those concerned with resource allocation.

When a health system is functioning well, the returns from reducing waiting times can be spectacular. This suggests that, within its constrained budget, greater emphasis should be placed on reducing waiting times. The priority given to waiting times by the British National Health Service seems sensible, although we should not overlook that there may be new treatments and prevention measures which also yield spectacular returns.[10]


Notes
[1] It is possible that r > 1, in which case it may be worthwhile to hold over the treatment. The example I am aware of is that this was often the advice for routine tonsillectomies for children (since the problem could clear up of its own accord), even though parents sometimes had the operation done anyway.
[2] This assumes the waiting list is stable. The modelling gets trickier if the waiting list is increasing, because treatment delay is progressively worsening.
[3] 1.0241^4 = 1.10; 1/.0241 = 41.5.
[4] The slightly higher second figure is a consequence of the compounding through time.
[5] We require B[1 - r/(1+d)]/d > C if the accelerated treatment is to go ahead. A limiting case is when B = C, that is, when the benefit from the treatment just equals the cost. In that case the inequality reduces to r < 1 - d^2. Given how small d is likely to be, r will usually meet this requirement. (If d = .1, the threshold of r is .99; that is, the benefits have only to experience less than a 1 percent deterioration in the year.)
[6] J. Huang, L. Barbera, M. Brouwers, G. Browman & W. J. Mackillop, "Does Delay in Starting Treatment Affect the Outcomes of Radiotherapy? A Systematic Review", Journal of Clinical Oncology, Vol. 21, No. 3: 555-563.
[8] The St Petersburg paradox involves a game in which one spins a coin until it shows tails, and pays 2^N if that happens on the Nth spin. The value of the game is infinite. Were there no discounting, the number of women saved from recurrence would also be infinite, but most would be in the far distant future.
[9] Given r = 1, the net benefit for a single patient is B-rB/(1+d) = dB/(1+d). The benefit for all patients is B/(1+d). The efficiency rule would be to go ahead if the benefits exceed the cost or B/(1+d) > C, or as the text uses B > C(1+d).
[10] I am grateful for help from Rob Bowie, Don Gilling, and Alan Gray.


Bums on Seats

In some tertiary education it has been “Never mind the quality: feel the width”.

Listener: 9 April, 2005.

Keywords: Education;

Third party funding occurs when neither producer nor consumer pays for the economic transaction, but some outsider does. This opens up the possibility that the two beneficiaries of the funding will thwart the funder's purposes. So funders typically impose additional rules, to ensure their money is spent as intended.

For instance, major health care is financed by the state or by private insurance. Either is vulnerable to the patient and/or the health professional mis-using the funds, say, for treatment that is unnecessary or by overcharging. So the third party sets criteria for that which is to be funded.

We have been much less careful with tertiary education funding. There is a long history of third party funding without the tight controls of the health services. Universities were once given block grants. On the whole, they did not misuse the funding. True, some of the staff were somnambulant, putting little effort into teaching, research or public service, and some courses were – well – low quality. But the universities supplied world standard degrees, if not education (as the Reichel-Tate Royal Commission once characterised it).

The system succeeded for three reasons. First, our universities tried to imitate the best overseas ones, as well as they could (for their funding was never generous). Second, most of the staff aimed for international professional standards. Third, the block grant system, for all its faults and inequities, did not give as many opportunities to cheat, especially as academics fought bitterly over the limited resources within universities, using quality as one of their criteria.

About 15 years ago this arrangement started to break down. Today’s university third party teaching grants are largely related to the student numbers (aka “bums on seats”, the crudity indicating how crude the method is). Other tertiary institutions such as polytechnics and private training establishments (PTEs) are funded the same way. Opportunities for “cheating” on third party funding increased.

Here is one. Suppose the government gives $6000 for a course. Recruit students by offering them a free computer and cellphone – perhaps costing $2000 – which they keep. Use $2000 for teaching, and the remaining $2000 goes to the institution’s overheads. Even if students do not bother to turn up, they have free computers and cellphones. Money for jam as far as the tertiary providers and student consumers are concerned.

What does the funder (taxpayers like you and me) get in return? I have no problem with the principle that we should contribute to the cost of preparing students for life. But too often the course we pay for is of little social value, and its certificate worthless, even if the students complete the course. Sometimes over half don’t. The public money is still paid.

Such rorts do not happen everywhere. They don’t work for long, expensive courses that require considerable student commitment, so they appear to be rare in universities. Many polytechnic courses are also expensive, but there have been reported lapses for some of their cheaper ones. As for PTEs: well, some are very good – but not all of them.

The abuses need not be illegal. Rather, they depend on a poorly administered scheme, whose objects are unclear, and whose quality controls are minimal. Can we really justify some of the courses that have been offered – a certificate in surfing, barista training? Yeah, right!

The reforms have achieved a spectacular increase in the proportion of our young adults who get some tertiary training. We are now near the top of the OECD in terms of the proportion of adults with some tertiary training. However, the comparison does not allow for the poor quality of many of those certificates. Our OECD proportion for degrees is about the middle, which is what you would expect from the current funding arrangement – a bias towards the ephemeral over the solid.

Regrettably, public recriminations have focused on isolated courses and institutions rather than on the system as a whole. Any failures that the various investigations identify will be much smaller than the general waste from slack controls over the third party funding. Far better to spend those wasted funds on degrees and advanced vocational qualifications that meet defined social purposes, reducing the costs of students and increasing the public funding for quality.

Energy Plan: What Will Happen After Oil Production Peaks?

Listener: 26 March, 2005

Keywords: Environment & Resources;

One day the world’s total oil production will peak, and decline thereafter. Some experts think that it will be this year or next, others predict that the peak is decades off. In 1979, British Petroleum experts predicted that it would happen in 1985. Forecasting the date involves a complicated assessment of future oil demands, production possibilities and costs of depleting fields, the discovery of new fields, and the extent to which alternative fuels will substitute.

But one day – probably in our lifetime – the world’s total oil production is going to peak.

The world as we know it will not end the following day. The peak will be signalled by rising oil prices, which will encourage further production from old fields. After the peak, the price of transport fuels will continue to rise. Substitutes – from biofuels, coal, electricity and gas, possibly hydrogen – will become economic. Users will become more fuel-efficient.

Our ways will steadily – but probably slowly – change. Suppose motorists have to pay $3 a litre for their petrol, with a 50-litre tankful costing $150. They would drive less – in smaller, more fuel-efficient cars – and walk, bike and use public transport more. Houses on the outskirts of town would become less attractive, and their relative price would fall. Inner-city accommodation may boom.

Because some of the alternative fuels have non-transport uses (and elsewhere in the world, fuel oil is used for home heating and industrial production), the price of other energy forms would also rise. Heating your home would be more expensive; you would reduce air-conditioning; industry would look for more efficient production methods; we would cut back our consumption of energy-intensive products like aluminium. Some products currently made offshore would be made locally to save on transport costs, although information industries would remain footloose.

But the world as we know it would not end. It would change steadily – as it usually does.

Some events may confuse us. Oil prices fluctuate, so they may sometimes go to dizzy heights before production peaks. Suppose a tanker got blocked in the Straits of Hormuz at the entrance to the Persian (or Arabian) Gulf, perhaps because of navigational error or terrorism. Countries to the east, which include New Zealand, would have their oil supplies reduced. In the nature of things, big countries like China and Japan would get more than their fair share of the available supply. There could be much public hysteria, but we plan to have 90-day stocks of oil for such emergencies, although prudence would also add some rationing, and prices would be higher. Being a member of the International Energy Agency may help, too.

A longer term threat is that some country – or some interests in some country – will think that it can secure its energy supplies by invading a net energy producer. In principle, this involves only a change in ownership of the resource, not its long-term usage. In practice, war can be very disruptive.

That is what we learnt from World War II, which in part was a struggle for control of resources. Fortunately, Germany now knows that Lebensraum can be better pursued by commercial market transactions: Japan learnt the same about its Co-prosperity Sphere. Some of the American Right, with “might equals right” under its rhetoric, have yet to learn that markets are more efficient than war. But many Americans have. Let’s hope their good sense prevails.

So, what should we do? At the personal level, think about $3-a-litre petrol and electricity costing twice as much. What energy-efficient life would you choose if you knew that was the situation in 2010? The prudent will be planning their housing and transport for 2010 now, knowing that capital items take time to adjust. Businesses need to be planned, too.

As for the public sector, it seems likely that our transport infrastructure has got so far behind that much of what we are putting in today will still be viable in a decade, even if fuel prices are much higher. But we would need more public transport. The thoughtful local authority may wish to consider what its region might look like when energy prices are substantially higher, and what steps it should be taking to prepare for that eventuality.

Marsden Project Pvt 301: New Zealand in a Globalising World (March 2005 Report)

The Royal Society of New Zealand, which made a grant to pursue this work, requires an annual report of work in progress. The report was submitted in March 2005. This is an edited version.

Keywords: Globalisation & Trade;

Summary

The Marsden Fund of the Royal Society of New Zealand made a grant to fund three years of research (at 60 percent of the researcher's time) to write a book on globalisation as it affects New Zealand. The grant began in November 2003. This report was written towards the end of March 2005.

In the end research has to be evaluated by its outputs. The Appendix to this report lists a large number of publications in various forms, and some presentations to various audiences. However the key output is the promised book.

In the period since the grant was made the researcher has read widely, including mastering the underlying theory, collected a considerable amount of material from reading and visiting nine states of the United States of America (under a Fulbright New Zealand Grant) and Samoa, written and presented a number of papers on globalisation, and structured and begun writing the book.

The details are set down below.

Shaping the Book

Much of the early part of the reported period was concerned with shaping the book. Six crucial decisions were made.

First, while the book will be informed by current developments in economic theory, particularly that which underpins The Spatial Economy: Cities, Regions and International Trade by Masahisa Fujita, Paul Krugman and Anthony J. Venables (MIT Press, 1999), it will be written for a wider audience than the specialists. The target audience ranges from general economists to the public who, it seems to me, are very badly served by the popular writing on globalisation, which is often poorly underpinned by rigorous analysis and frequently dominated by moral indignation or complacency. The intention is that the Marsden book will better inform readers, who can then apply their political preferences in their policy judgements.

The theory is quite difficult. Much of the mathematics seems intractable (that is, there is no analytic solution), so results often rely on simulations, which may not give general conclusions. The underlying mathematics abandons one of the key assumptions that has informed much economic analysis – that (plant and industry) economies of scale are not important. Their introduction changes the shape of the mathematical spaces, and undermines the intuitions that go with them, without – as yet – offering replacement intuitions. Much of this paragraph will be opaque to the target audience, and yet any presentation to them needs to be true to the underlying theory.

The developments are very exciting. While economics has some history (going back to J. H. von Thünen in the nineteenth century) of a spatial dimension in the economy, it is very limited, as illustrated by the rarity of maps in most economic texts, or by the fact that standard international trade theory is not greatly concerned whether two countries have a common border or are on opposite sides of the world. Distinct economies existed because countries had immovable resources, such as land (it was frequently assumed labour, capital and technology were also immovable).

Once the costs of distance – 'trade costs', which are more than just transport costs – are introduced, location becomes a vital element of economic behaviour. The interaction between such costs and economies of scale generates outcomes which seem to be quite different from the standard theory with no costs of distance and only diminishing returns. This is frontier-of-economics stuff which, if economists can get the analytics and intuitions right, represents a major shift in the paradigm.

Thus the writing challenge is to convey to the general audience some of the notions – and the excitements – of these developments.

Second, the book will be deeply based in history and geography, which is where globalisation actually occurs. Much of the popular debate is about the now of globalisation, with little recognition that the phenomenon is around two hundred years old. History and geography open up a welter of examples. For the reader this embeds the analysis in a practical reality, which, if more demanding of the presenter, should enable the reader to connect better.

This realisation led to the third consideration. Originally the book was to focus on New Zealand in a globalising world. But that traps the framework into the standard economics space of two columns in a table or two axes on a graph, rather than allowing for some sort of geographical proximity, or lack of it.

Eventually I concluded that, in order to break out of the framework, the book should be written for an international audience as well as a New Zealand one. This does not mean New Zealand will be neglected. Leaving aside that there will be at least one chapter devoted to New Zealand (and possibly more), many chapters draw on New Zealand material to supplement the themes. For instance, the opening chapter contrasts the colonial experiences of Hawaii and Samoa, but the experiences of the New Zealand Maori are also used. One can think of the analysis as remaining New Zealand-centred, but checked against the rest of the world.

Fourth, on learning that I am working on globalisation, potential readers commonly ask 'are you for it or against it?' I explain that my methodological stance is not to answer this question until I have completed the analysis. That remains my overall position, although the analysis so far leads one toward the view that globalisation is one of the great forces of history, largely unstoppable but to some extent governable. Asking whether it is a good thing or not is not particularly helpful. First one must define it and analyse it, and then ask what the harnessing options are. The book will aim to give readers better understandings in order to enable them to make their own decisions, rather than imposing my political views and expecting the reader to agree.

Indeed, I am as intrigued as those who pose the question as to what the conclusion will say. I am continually equivocating over whether there is (or rather what kind of) convergence in culture and policy as a result of globalisation. I shan't be surprised if that equivocation remains in the conclusion.

The fifth decision was that the book will not consider the implications of globalisation for military activities, or vice versa. It is written by an economist with a limited span of expertise. Having said that, there is a story to be written about how the military, like much of the economy, has been shaped by effective distance. It took the US almost three years to get its troops onto the European continent during the Second World War; it took just over three months to invade the much trickier logistical proposition of Iraq sixty years later.

The sixth decision was to leave the process of economic growth as largely exogenous. The book would otherwise be too large. In any case, the theories I am using largely make the same assumption. Of course that cannot be quite right, since the diminishing cost of distance is part of the growth process even in a non-spatial model, since it releases resources and creates new products. Oh dear. This is the assumption most likely to be abandoned as the work progresses.

Themes

The central themes of the research can be summarised as follows:

1. Globalisation is the economic integration of economies – regional and national economies – and the social and political consequences of that integration.

2. Globalisation began in the early nineteenth century, so the phenomenon is almost two centuries old. Since globalisation is an historical phenomenon, focusing on just the last few decades throws away a rich source of insights.

3. Globalisation is caused by the falling cost of distance – not just transport costs, but also the costs of storage, security, and information. This gives a driver for the globalisation process. Costs of distance are a trade cost, like tariffs but larger, so one can use the economic theory of tariffs to model globalisation.

4. Globalisation is not solely an economic phenomenon. It has political and social consequences.

5. The policy issue is not being for or against globalisation, but how it can be channelled to give desirable outcomes.

The Structure of the Book

Aside from the Introduction (which the previous section drew upon) and the Conclusion, the book is in three parts: Part I presents the underlying economic theory; Part II looks at the political and social consequences of the economic transformation; Part III looks at different ways of accommodating to a globalised world.

Progress

Of the 28 chapters, only six have been written or nearly written. Considerable material has been collected for many other chapters. This reflects the usual time it takes to set up a research project. It is currently anticipated that the project should be finished close to schedule, although it will be much tighter than originally proposed because the funding is for three days a week rather than four. Some sacrifices have had to be made on lower priority parts of the project.

Ironically, a major disruption to writing is overseas travel, and yet it has been vital for informing the book. Last year Fulbright New Zealand enabled me to spend four months in the United States (at the Centre for Australian and New Zealand Studies, Georgetown University, Washington DC, and at the Economics Department, Harvard University, Massachusetts, as well as visiting some other states and institutions). I also visited Samoa. This year the government of the Federal Republic of Germany will enable me to spend two weeks in Germany, and I shall also visit Canada, partly from funding provided by a conference on the evaluation of drug abuse, and partly using Marsden funding.

The website for the New Zealand Centre for Globalisation Studies has been set up, but the supervising structure has proved more difficult to establish than was expected.

Least progress has been made on writing up the theoretical models. I think that may be the cost of the funding limitations.

A major sacrifice has been the abandoning of the writing of Transforming New Zealand, another book mentioned in the application. It is two thirds complete. The completed chapters are on the website.

Other Work.

The Marsden funding was for three days a week, not for full time research. The remaining time was to be available for consultancy and the like, and unpaid public good research; some of the latter included additional time on the globalisation project. Ideally the consultancy work should support the Marsden Project intellectually as well as financially. This has not always been possible, although so pervasive is globalisation that rarely is there no intellectual interaction at all.

What was unanticipated is that because I have been researching for forty-odd years on an extraordinarily wide range of topics, I am still being consulted about them, in a way which is normal for an academic – often without recompense (e.g. academic refereeing, talking to journalists or officials, or presenting to voluntary organisations). While I have not charged this work to the Marsden Project, it has cut into the public good time available for it.

The following are among the major topics worked on outside the project over the last year, which have not been funded by the project. They are listed in order of their contribution to the Project.

1. Growth and Innovation Advisory Board

I am a member. The work program includes global connectedness, infrastructure, and growth culture, which has been influenced by my globalisation work, and which in turn influences it.

(Various direct contacts with officials have also already benefited from the globalisation research. I remain on a couple of officials’ committees – macroeconomic forecasting and statistics – which don’t have a lot to do with globalisation.)

2. Measurement of GDP

The website reports a number of papers in this area. One is a major challenge to the conventional wisdom, showing that the conventional measure of PPP-adjusted GDP is invalid. It is closer to PPP-adjusted GDI, and the two are not equal. The research was presented at a conference at the University of California, Davis, and has been taken up by other academics (notably Rob Feenstra). It is likely to affect the next round of OECD estimates. (In passing, this means much of the New Zealand research on comparative growth is invalid.)

3. New Zealand’s Economic Performance

I wrote three major papers last year (two with some external funding):
‘The Development of the New Zealand Economy’ (February 2004)
‘Paradigms of New Zealand Economic Growth: A Memoir’ (August 2004)
‘The Economy’ for Te Ara (The On-line Encyclopaedia) (Launched February 2005)

This work is implicitly important for the Project, out of which it developed, and it continues to enable me to explore some aspects of global connectedness and the New Zealand economy.

4. Licit Drugs

I am both a national and international expert on the evaluation of the social costs of alcohol and tobacco use, and on their taxation and regulation. There has been some consultancy work over the year, and the usual amount of public good provision of free advice to officials, lobbyists, journalists and the general public.

Happily there have been two globalisation spinoffs. The chapter on policy convergence is using the EU difficulties over the regulation of alcohol to illustrate the problems of cross-border interfacing of public health policies and free trade. And attendance at an international seminar in Ottawa in June will enable me also to do the field work for the Canada chapter on cultural convergence.

5. Miscellaneous Work

5.1 Consultancy for a pharmaceutical company. It is an absolutely fascinating area, but does not particularly have a spatial dimension, although it is very instructive about how hi-technology growth occurs in a non-spatial economy (which is outside the scope of the book). Apparently my grasp of the industry is sufficient for one industry leader to suggest I write a book on it (most of the existing books are not much better than the popular books on globalisation). Alas, I shall not have the time. Regrettably the material is confidential to the client, and not available on the website.

5.2 Advising SHORE (Massey University) on the economics of a gambling project. This was a (not large) commitment made before the Marsden Project was announced. There is a minor globalisation dimension with offshore electronic gambling. However the underlying problem (the interfacing of policies) will be illustrated in the globalisation book by alcohol policy. Papers on the economics of gambling are on the website.

5.3 Medical Misadventure. This public good activity maintains an interest I have in ACC and also health policy. It has led to a paper on the evaluation of waiting times, which will be submitted to a top-of-the-line British medical journal after presentation to a Wellington seminar in April.

5.4 Family Policy. This has been a long-term area of expertise, and I continue to be consulted by officials, lobbyists, journalists and the general public, especially in 2004 when the government introduced a major fiscal package for families.

5.5 Nationbuilding. New work has explored nationalism and globalisation issues.

Omitted are details of publications, presentations and the like.


The Globalisation Of Nations: Distance Looks Our Way (Plan of a Book)

Contents

Keywords: Globalisation & Trade;

Preface: Globalisation Is Not Just Jargon.

Part I: Globalisation
Chapter 1: The Analytics of Globalisation
Chapter 2: The Significance of Location:Polynesia
Chapter 3: When Distance Changed: Refrigeration
Chapter 4: Regional Integration and Plant Economies of Scale: United States
Chapter 5: Cities and Industry Economies of Scale: New York
Chapter 6: The Indeterminacy of Location: Finland’s Nokia
Chapter 7: Intra-Industry Trade: The Motor Vehicle Industry
Chapter 8: When Services Become Tradeables: Bangalore
Chapter 9: Labour Mobility: Mexico and US
Chapter 10: Resources: Oil
Chapter 11: Technology Transfer: Japan
Chapter 12: The Rich Club: Argentina
Chapter 13: The Poor Club: Africa
Chapter 14: Why There is No Significant Middle Club: The Bifurcation Model
Chapter 15: The Pattern of World Development: China

Part II: Nations
Chapter 16: Nationalism: Germany
Chapter 17: Is Cultural Convergence Inevitable?: Canada–United States
Chapter 18: Diasporas: Australia
Chapter 19: Migration: The World’s Population
Chapter 20: The Meaning of Sovereignty: The Globalisation of Time
Chapter 21: Is Policy Convergence Inevitable?: Healthcare
Chapter 22: The Social Market Economy: France and Holland
Chapter 23: Foreign Investment: McDonalds?
Chapter 24: The WTO: Saudi Arabia or Russia?
Chapter 25: Kinds of Nations: New Zealand

Conclusion: Distance Looks Our Way

(This research is funded by the Marsden Fund of the Royal Society of New Zealand.)

 

The Globalisation Of Time

Paper for the Symposium “Institutions and Economic Development”, University of Otago, 18-19 March (Also draft of chapter for “Distance Looks Our Way”.)

Keywords: Globalisation & Trade; Political Economy & History;

Introduction

The Royal Society of New Zealand has awarded me a Marsden Grant to study globalisation. The ultimate output will be a book. This paper presents a draft of one of its chapters. Because it is a conference paper, it is necessary to say something about the context in which the chapter takes place. The study is based on five primary principles.

1. Globalisation is the economic integration of economies – regional and national economies.

2. Globalisation began in the early nineteenth century, so the phenomenon is almost two centuries old. Since globalisation is an historical phenomenon, focusing on just the last few decades throws away a rich source of insights.

3. Globalisation is caused by the falling cost of distance: transport costs, plus the costs of storage, security, information, and intimacy. This gives a driver for the globalisation process. Costs of distance are a trade cost, like tariffs but larger. So one can use the economic theory of tariffs to model globalisation.

4. Globalisation is not solely an economic phenomenon. It has political and social consequences.

5. The policy issue is not being for or against globalisation, but how it can be channelled to give desirable outcomes.

Underpinning this analysis are some major developments in international (and regional) trade theory in recent decades, particularly with the addition of economies of scale. Currently the best exposition is The Spatial Economy: Cities, Regions and International Trade by Masahisa Fujita, Paul Krugman and Anthony J. Venables. It is a very exciting development because the new theory gives a geographical dimension to economics which has thus far been largely missing. We can now start thinking systematically about economic activity in space.

Unfortunately, the modelling becomes very complicated because of the mathematics which underpins economies of scale. It is frequently analytically intractable, and one has to depend on plausible simulations rather than pure mathematical derivation. The book I want to write is for a much wider audience than mathematically trained international trade theorists. How to get around the complexity?

Its structure is as follows. Part I develops the Fujita, Krugman and Venables model, using a minimum of mathematics. Once the economic underpinnings have been established, Part II of the book explores political and social consequences such as nationalism, sovereignty, policy convergence, cultural convergence, and diasporas. Part III looks at various options for nations in a globalised world. To engage the reader, each chapter is based upon a particular historical experience, typically focusing on a single country.

The book reflects the structure of the first course in economic history which I took. Each seminar appeared to be based upon a particular country or topic, but in fact each studied article was also chosen to raise a theoretical issue, giving the course a balance of economic theory, economic history, and geographical diversity.

The course was run by Barry Supple, then professor of economic and social history at the University of Sussex, and now one of the overseas guests at this Symposium. There are few greater tributes to a teacher than that a course taught almost forty years ago still resonates with one of its students. I present today’s paper in honour of that teacher. Those of you familiar with the University of Sussex of my, and Barry’s, time will recognise this is a very Sussex paper, with a strong theoretical disciplinary underpinning, a broad range of interdisciplinary interests and, I hope, a little wit.

Here beginneth the first chapter of Part II: ‘The Globalisation of Time: The Problem of the Meaning of Sovereignty’.

Local Time and International Time

Five minutes after nine o’clock, Great Tom, the bell above the quad of the Oxford college of Christ Church, strikes some 101 times, a signal to the 101 men of the original foundation that curfew begins. That time is set by the sun, rather than British standard time (a.k.a. railway time), for the sun arrives five minutes later at Oxford than at Greenwich, 100 kilometres to the east.

Two hundred years ago every locality had its own time, and such aberrations – if that is the right way to describe them – went unnoticed. Watches were not accurate enough, so on arriving in a new town the traveller would simply recalibrate against the town clock, just as we do today when we cross a boundary between time zones. Accuracy was not only infeasible but hardly necessary. Would it matter if a London visitor arrived five minutes early to a meeting in Oxford?

It would matter if it were the wrong day, so calendars developed thousands of years earlier. Ours began in Roman times – hence July and August, after a couple of Caesars. It was taken over by the Roman Catholic Church, with its present form settled by Pope Gregory in 1582. The Gregorian calendar was immediately adopted by Catholic countries, but others were more reluctant. Britain, afraid of a papist heresy, adopted it in 1752, Russia in 1918 after the Revolution.

A nice example of the confusion is that Shakespeare died on St George’s Day, 23 April 1616. But that is by the earlier, Julian, calendar; his Gregorian calendar date was 3 May. Not that it matters: the day is a convenient one to celebrate the man. (And who knows what day St George died, assuming he ever lived.)

While God may be a mathematician, he or she was not particularly interested in integers – not to mention elegant non-primes with many useful factors – when the orbit of the Earth was determined. If the annual cycle is to reflect the movements of the sun and the seasons (climate change aside), the 365-and-a-bit-day year involves some calendar tinkering, of which the leap year is the best known, while the moon’s non-integer cycle is recorded but ignored. With divisors of only 73 and 5, the 365-day calendar divides awkwardly too. Computers use a daily count, which they convert into the Gregorian calendar for mere mortals.
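As an aside for the computationally inclined, the tinkering can be stated precisely. The following Python fragment is purely illustrative and forms no part of the book’s argument; it sketches the Gregorian leap-year rule and the daily count that computers keep:

```python
from datetime import date

def is_leap(year: int) -> bool:
    # Gregorian tinkering: every fourth year is a leap year,
    # except centuries, unless the century is divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Computers keep a simple daily count, converting it into the
# Gregorian calendar only for display to mere mortals.
day_count = date(2005, 6, 29).toordinal()   # days since 1 January of year 1
assert date.fromordinal(day_count) == date(2005, 6, 29)
```

So 1900 was not a leap year but 2000 was, a distinction the Julian calendar did not make.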

In 1792, in the heady days after the French Revolution, the Republicans replaced the Gregorian calendar with one of 12 equal 30-day months, plus five or six extra days a year. Weeks were of ten days, which meant that rest days were less common (apparently increasing work intensity after the people’s revolution was not confined to the Soviets). Days were divided into 10 hours, each of 100 minutes and 10,000 seconds. Napoleon returned France to the Gregorian calendar in 1806. He was not having any of his troops turn up on the wrong day for battle.

He, and other commanders, used clocks to coordinate troop movements. They did not need an international standard, but navigation did. By one of those curious twists of science – preceding Einstein’s notion of space-time by centuries – time became location, and accurate location required accurate time.

Any point on a globe requires two coordinates to uniquely identify it. The distance in degrees from the equator – the latitude – is relatively easily measured by using the height of the sun and a sextant. The second coordinate proved much harder to measure.

The natural second coordinate is the longitude, the great circle from pole to pole around which the earth spins. Various methods were used to determine which longitude a ship was on. Dead reckoning on the rough rude sea does not work, because the ship’s speed could not be measured accurately and there was drag from current and wind. Astronomical sightings were not accurate enough either. One can but marvel at the navigational achievements of the Polynesian and Viking seafarers.

Accuracy was important. It was not just a matter of calling into the wrong harbour on a coast, wrecking though that could be. The location of the sub-Antarctic Auckland Islands, 460 km to the south of New Zealand (latitude 50° 16′ to 51° 19′ south; longitude 165° 32′ to 166° 39′ east), was at first plotted some 56 km out of position on maritime maps, despite the availability of chronometers. Ships sailing through the ‘furious fifties’ (the latitudes of 50 to 59 degrees south), confident of their navigational skills but hindered by poor visibility and bad weather, ran aground against the sheer basaltic cliffs of the western coastline, sinking the ships and drowning their travellers.

The eventual resolution of the longitude problem was an accurate clock, set to the time of a particular longitude and transportable without error through heaving seas (or on a rolling mule’s back). By comparing its time with the local noon, the local longitude could be calculated. In Oxford a clock calibrated to Greenwich would show 12.05 at noon. Since the earth spins 360 degrees in 24 hours, or a quarter of a degree a minute, Oxford had to be 1.25 degrees (1° 15′) to the west of Greenwich.
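The navigator’s arithmetic is simple enough to sketch. An illustrative fragment in Python (not from the book; the function name is my own):

```python
def degrees_west_of_greenwich(minutes_local_noon_lags: float) -> float:
    # The earth turns 360 degrees in 24 hours (1,440 minutes),
    # i.e. a quarter of a degree per minute of time.
    return minutes_local_noon_lags * (360 / (24 * 60))

# Oxford's noon lags Greenwich's by five minutes,
# so Oxford lies 1.25 degrees west of Greenwich.
print(degrees_west_of_greenwich(5))  # 1.25
```

The same sum, run in reverse against an accurate chronometer, fixed a ship’s longitude at sea.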

Clock accuracy was a big challenge. Eventually the Englishman John Harrison constructed a sufficiently precise chronometer, and James Cook’s expeditions confirmed its accuracy. Chronometers became standard on all ocean-going shipping, reducing the costs of distance by providing more secure navigation and, usually, more accurate maps. Telegraph, radio and satellites were to improve the accuracy even further.

This did not solve the problem of every town having its own time. A few minutes matter a lot when running a railroad. In 1853 fourteen people died when two trains on America’s Providence and Worcester line slammed into each other on a blind curve because, it was said, a conductor had a slow watch. Railways required a consistent standard time, although late nineteenth-century French passengers faced three: station courtyards and departure lounges set to Paris time, platform clocks set to give the traveller a margin of error, and the local time outside in the town. It took many countries almost to the end of the nineteenth century to get a nationwide system of time.

Moreover, countries insisted on their own standard times, lacking international coherence. In 1897 France was 9 minutes and 21 seconds ahead of Germany, despite being to its west. As telegraph cables ringed the earth the chaos could only increase. Yet because the cables transmit information almost instantaneously, once a convention had been agreed they made its implementation simple.

As usual, international agreement did not come easily. International conventions setting scientific standards began in the mid-nineteenth century; the Convention of the Metre of 1875 was the first great one, the international standard being housed in France. In Washington in 1884 the World Time Conference set the prime meridian, from which longitude would be measured, at the Greenwich Observatory. In effect its sun time became world time, with each country able to set a local time (or times, in the case of countries with a long east–west span, like Canada, Russia or the United States).

While Greenwich time may seem natural today, the location of the prime meridian was bitterly fought over. Some of the proposals were nutty or nostalgic, but the scientific requirements reduced the choice to Berlin, London (Greenwich), Paris and Washington. The Germans, preoccupied with unifying their domestic time, dropped out, and the Americans sided with London, thus releasing Observatory Circle for their Vice-President’s dwelling. The French argued vigorously and ingeniously for Paris (and also for the adoption of decimal time). But eventually 21 of the 24 countries at the convention favoured Greenwich, with San Domingo dissenting and Brazil and France abstaining. Factors such as 70 percent of the world’s shipping already using Greenwich, and its antimeridian passing through the Bering Strait without crossing land, may have been persuasive (although it cleft in two the Pacific islands which would become Kiribati).

Today, time and date are so routine, with international commerce, science and travel depending upon them, that we rarely give a thought to an earlier world in which there was little coherence of time and date. The same is equally true of standards of weights and measures. But within this routine there is a question of national sovereignty.

International Time and National Sovereignty

The international establishment of the notion of sovereignty is often attributed to the 1648 Treaty of Westphalia. The core of its international governance system was the principles of the state and sovereignty. The world was divided into territorial parcels each to be ruled by a separate government. The state was ‘sovereign’, exercising comprehensive, supreme, unqualified, and exclusive control over its territory. ‘Comprehensive’ meant that the state had jurisdiction over all the affairs in the country; ‘Supreme’ meant that it recognised no superior authority; ‘Unqualified’ meant that its right to total authority over its territory was treated as sacrosanct by other states; ‘Exclusive’ ruled out joint sovereignty.

Of course the Westphalian order is a historical phenomenon, and it is not hard to see how these principles, including the implicit notion that the territories are eternal, have often been breached. However globalisation means it may not be practical to exercise sovereignty in the way that was envisaged 350 years ago, in a world in which there was little international economic intercourse between sovereign states. We get a sense of the difficulty by considering the amount of freedom a country has to choose its standards of time, weights and measures.

Typically countries legally determine their standards, and have the de jure power to change them. A sovereign country could pass legislation enacting a different calendar. Some have their own calendars, although typically they are only used for ceremonial purposes. For practical purposes the Gregorian calendar is used. In any case, the two main exceptions – the Jewish and Moslem calendars – also have a seven day week cycle, so the practical translation is not difficult. There is a Chinese calendar, yin-yang li, which continues to have ceremonial significance – including the Chinese New Year being a day for the Chinese to celebrate their identity in Western countries. China adopted the Gregorian calendar in 1912.

In principle, any sovereign country could divide its day any way it wishes, but even the French have not adopted decimal time, because the conversion to international practice would be too complicated – for the human mind (although computers find it a cinch).
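Indeed the conversion is a cinch for a computer. An illustrative sketch in Python, mapping conventional clock time onto the Revolutionary day of 10 hours of 100 minutes of 100 seconds (the fragment is mine, not from any standard):

```python
def to_decimal_time(hours: int, minutes: int, seconds: int):
    # Express the elapsed fraction of the day in decimal seconds:
    # the Revolutionary day had 10 x 100 x 100 = 100,000 of them,
    # against the conventional day's 86,400.
    elapsed = hours * 3600 + minutes * 60 + seconds
    decimal_seconds = round(elapsed * 100_000 / 86_400)
    return (decimal_seconds // 10_000,       # decimal hours
            (decimal_seconds // 100) % 100,  # decimal minutes
            decimal_seconds % 100)           # decimal seconds

print(to_decimal_time(12, 0, 0))  # (5, 0, 0): conventional midday is decimal five o'clock
```

The difficulty was never the arithmetic but that humans would have to carry both systems in their heads.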

The international convention allows time zones different from Greenwich time, but calibrated to it. In practice the difference is in exact hours (or sometimes half-hours), thus creating a set of time zones around the world. Again, every country has the de jure power to use a different time standard from Greenwich Mean Time, just as Oxford did when it used sun time. In practice they do not. They may change the time within the time zone – as when the British switch to summer time – but the new time is always anchored to GMT.

(In 1995 Kiribati, fed up with its scattered islands being divided into two days, decreed that the International Date Line would henceforth run along the many-cornered eastern boundary of the republic, giving the date line a very noticeable eastward protrusion from the 180-degree meridian, the antimeridian to Greenwich. There is no international convention for the date line; the de facto line is determined by the unilateral choices of time zone by the countries next to it.)

So whatever de jure powers a country may have, its de facto ability to set time, and weights and measures, is circumscribed by international practice. San Domingo may have voted against Greenwich Mean Time, but you may be sure that its time practices conform to it. Only a country completely isolated from the rest of the world could do otherwise.

An interesting exception is that a large country like the United States may have its own system of weights and measures. The US is one of the signatories of the Convention of the Metre, so it acknowledges the Paris-based system. But domestically it uses the American standard foot, and its own distinctive volumes and weights, albeit calibrated with the metric system.

Some countries waited until the twentieth century before adopting the International System of Units, based on the metre, kilogram, second, ampere, kelvin, mole, and candela. New Zealand did so between 1970 and 1976, the seven-year timetable aiming to reduce the cost of the transformation. The change, which caused considerable hardship to older people and expense to business, might have been justified by arguing that the metric system is intrinsically simpler than the imperial system. But, instructively, the justification for the decision to change rested almost entirely on the necessity to keep in step with overseas trading partners. While New Zealand has the de jure power to return to the imperial system or something else, its de facto powers over weights and measures are more limited as long as it wishes to participate in international intercourse.

Because it is larger, the US has more de facto sovereignty. Not only is the economy over a fifth of the world’s total production, but it exports only 10 percent of it, half the international average, so it is a much more self-sufficient economy. Even so, as one American industrialist drawled, ‘I export in metric: I import in American’.

Running two separate measurement systems can have its problems. The Mars Climate Orbiter spacecraft burnt up in the Martian atmosphere in October 1999 because the acceleration data for controlling its thrusters had been provided in pounds of force (the US standard unit) but entered into the spacecraft’s computer as newtons (the SI unit). Little information was obtained from the trip, so most of its $US240 million cost was wasted.
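The scale of such a misreading is easily illustrated. A hypothetical Python fragment (the conversion factor is the standard one; the figures are illustrative, not the Orbiter’s actual data):

```python
LBF_IN_NEWTONS = 4.44822  # one pound-force expressed in newtons

def lbf_to_newtons(pounds_force: float) -> float:
    # Convert a US-unit force figure into its SI equivalent.
    return pounds_force * LBF_IN_NEWTONS

# A thrust figure of 100 supplied in pounds-force is about 445 newtons;
# software reading that 100 as newtons understates the force 4.45-fold.
print(round(lbf_to_newtons(100.0), 3))  # 444.822
```

An error of that magnitude, accumulated over months of trajectory corrections, was enough to drop the spacecraft fatally low into the Martian atmosphere.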

Some might argue that conventions on time, weights and measures have a scientific underpinning, whereas most conventions do not. But the scientific underpinning goes only so far: the Gregorian calendar has a host of cultural assumptions overlaying any science, which is why the French revolutionaries wanted to abandon it.

But ‘non-scientific’ conventions are also necessary. Consider the aborted Multilateral Agreement on Investment (MAI). Any country which is accepting foreign investment, even reluctantly, needs a framework so that foreign investors know exactly what is expected of them.

Currently this is largely carried out on a country by country basis. Why not have a common set of rules? In 1995, the OECD Council tried to reach an agreement which would provide a broad multilateral framework for international investment with ‘high standards for the liberalization of investment regimes and investment protection and with effective dispute settlement procedures’. It was to be a free-standing international treaty open to all OECD Members, who are some of the main net investors, and to non-OECD Member countries who are typically the main net debtors.

Of course any country had the de jure power not to agree to such an investment convention, but had it been adopted, those concerned with attracting foreign investment would have been unwise to opt out, since potential investors would be deterred by the uncertainties that a different institutional arrangement generates. In practice a country might accede to the convention with particular reservations, or offer a more generous deal to investors. But once adopted by sufficient countries, the MAI would be the framework for all, even implicitly for those which stayed outside.

As it happens, the MAI was not adopted. The debtor countries dissented, and the OECD found there was not quite the internal consensus it had assumed. It is said citizen protest killed the treaty; this is probably an exaggeration. The likelihood is that a proposal for a broad multilateral framework for international investment will arise again, and if it is managed more sensitively than on the last occasion, it will be adopted by sufficient countries to force the remainder to accept the inevitable and accede to it too, with possibly some dissenters.

Typically the smaller economies face the invidious position that an international convention does not meet all their needs. Individually and collectively they work to make it a better – less lopsided – arrangement. But in the end each has to judge whether the benefits of being in a bad agreement outweigh the detriments of not being in it. While this appears to be an exercise in de jure sovereignty, in fact there is much less choice.

That some big economies – characterised by being both large and high income – may have more de facto sovereignty than small economies should be no surprise. But even they do not have full de facto autonomy. Economic intercourse has some analogies to marriage. The sovereign individual takes on a relationship which reduces her or his sovereignty, and does so from the calculation of being better off despite the loss of full sovereignty.

But entering into economic intercourse is not a one-off affair. Each sovereign country is continually entering into arrangements which limit its de facto sovereignty. Of course it has the de jure sovereignty to withdraw even where there is no explicit provision to do so. In practice such withdrawals are rare because it is better to be inside the tent than out.

But the arrangements may not be as fair to small countries as to the primary larger negotiators, even where there is one country one vote (as in the World Time Conference), or every country has a unilateral veto (as applies to most international trade negotiation rounds). It is the big countries which determine the agenda. Once the deal is agreed, the individual signatory has only the option of deciding whether it is better off in or out, not whether it is as well off as other signatories. If it judges there is a net benefit, it is likely to agree, signing away some more de facto sovereignty.

Conclusion

The globalisation of time shows that there can be practical reasons for a country adopting an international convention. Those reasons can be so strong that, while there is a fig leaf of de jure sovereignty, the de facto reality is that the country may have little option but to follow international conventions over which it has little influence.

Where will it end? Does globalisation mean that ultimately a country abandons all its sovereignty? This cannot be entirely true, for as the globalisation of time shows, there remains a rump over which the locals have some influence. How big is that rump, and how significant is it? That question is explored in future chapters of the book.


Yankee Dollar Blues: How Will the US Correct Its External Deficit?

Listener: 12 March, 2005.

Keywords: Macroeconomics & Money;

Underneath much of economics is the notion of “homeostasis”, the tendency to respond to an external shock by adjusting internally to maintain equilibrium. So a surge in demand for a product results in its price rising, reducing the amount demanded and increasing the amount supplied.

Indeed, the argument among economists about the Depression of the 1930s amounts to whether there was homeostasis so that unemployment was a transitional phenomenon, or whether there was no significant self-correcting mechanism so that an external adjustment, such as increased government spending, had to be imposed.

Today the worry is the falling US dollar. Is there an automatic mechanism to correct the situation or, perish the thought, is the situation more pathological?

The external shock that has caused the falling dollar was the Bush administration’s cutting of taxes, switching the US Government from being a saver running a budget surplus, to a borrower running a budget deficit. The US economy became a major net borrower from the rest of the world, spending more than it produces. This gap between expenditure and production is covered by additional importing. The resulting deficit in the current external account is covered by foreign borrowing.
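The chain described above runs through the national accounts identity: when the government swings from surplus to deficit and nothing else adjusts, the gap shows up as a current account deficit financed by foreign borrowing. A minimal sketch with hypothetical figures (none are from the column; the numbers only illustrate the direction of the mechanism):

```python
# National accounts identity: (S_private - I) + (T - G) = CA,
# where CA is the current account balance. A government deficit (T < G),
# unmatched by higher private saving or lower investment, must emerge
# as a current account deficit covered by foreign borrowing.

def current_account(private_saving, investment, taxes, govt_spending):
    """Current account balance implied by the saving-investment identity."""
    return (private_saving - investment) + (taxes - govt_spending)

# Before the tax cuts: the government runs a budget surplus.
before = current_account(private_saving=100, investment=105,
                         taxes=120, govt_spending=110)
# After the tax cuts: a budget deficit, with private behaviour unchanged.
after = current_account(private_saving=100, investment=105,
                        taxes=95, govt_spending=110)

print(before)  # 5: a modest external surplus
print(after)   # -20: a deficit that must be financed from abroad
```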

In most economies, foreign borrowing pushes up the exchange rate, as we saw in the 80s when the Rogernomic policies involved substantial budget deficits. However, the US dollar is the international currency, so when the US borrows offshore, the increased competition for the available funds pushes up the exchange rates of other economies, and the US dollar falls relative to them.

Homeostasis, if it is to happen, requires a reduction in the US savings deficit. There are some additional savings as the US economy expands, but apparently not enough. Any increase in private savings has been overwhelmed by greater private investment, as firms increase capacity to meet the expansion. The increase in tax revenue as the economy expands has been insufficient to offset the tax cuts.

Are there any other self-equilibrating mechanisms? Under Alan Greenspan, the US Federal Reserve has been lifting its interest rates. If it lifts them far enough – ouch! – that might encourage people to save rather than spend, although the effect is likely to be small. The larger effect is that higher interest rates discourage investment, so there is a need for less savings. (This is the channel the Reserve Bank of New Zealand relies upon to restrain demand.)

US interest rate hikes could be a remedy, but that would be painful to the whole world. There are likely to be complicated effects as other countries raise their rates to compete for the world’s savings, so no one knows how high the interest rates may have to go. Rates may be being lifted too slowly, and the world will wallow in disequilibrium for some time, making the eventual adjustment very painful.

Some journalists say that the lower US dollar will encourage exporting and reduce importing, while the opposite occurs elsewhere in the world, including here. But that does not resolve the US savings shortage. One possibility is that the falling dollar will raise US domestic prices as export and import prices rise. Proportionally, though, the US external sector is only about a third the size of that in many other countries, so the inflationary pressures (and hence the inflationary self-correcting mechanism) from exchange rate depreciation are weaker. When they do come into full force, the outcome could be internationally explosive.
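The point about the small US external sector can be made with back-of-envelope arithmetic: the inflation impulse from a depreciation is roughly the depreciation times the pass-through to import prices times the tradables share of spending. All figures below are illustrative assumptions, not numbers from the column:

```python
# Rough pass-through arithmetic:
# inflation impulse ~ depreciation x pass-through x tradables share.
# Hypothetical parameters chosen only to compare a relatively closed
# economy (like the US) with a more open one.

def inflation_impulse(depreciation, pass_through, tradables_share):
    """Approximate first-round price-level effect of a depreciation."""
    return depreciation * pass_through * tradables_share

# Same 15% depreciation and 50% pass-through; only openness differs.
us_like = inflation_impulse(depreciation=0.15, pass_through=0.5,
                            tradables_share=0.10)
open_economy = inflation_impulse(depreciation=0.15, pass_through=0.5,
                                 tradables_share=0.30)

print(f"{us_like:.3%}")       # 0.750% price-level impulse
print(f"{open_economy:.3%}")  # 2.250% in the more open economy
```

With a tradables share three times larger, the same depreciation generates three times the inflationary (and hence self-correcting) pressure, which is the column's point about why the mechanism works slowly in the US.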

There is at least one further major complication. Thus far I have treated the US as if it were a lone economy. Currently, some Asian economies – especially China – have maintained a fixed exchange rate with the US dollar, and invested their savings in US bonds. In effect, the US tax cuts are being covered by loans from Asians, but only temporarily, because the lending will have to be redeemed. We don’t know how long this financing arrangement will go on for, and we know even less about what will happen when the Asians change their strategy. An inevitable outcome must be that the European currency, the euro, will become more prominent in international finance. As the Chinese say, “May you live in interesting times.”