Wednesday, 24 May 2017

Resource endowments and agricultural commercialization in colonial Africa: Did labour seasonality and food security drive Uganda’s cotton revolution?

Michiel de Haas is a PhD student at Wageningen University
Agricultural commercialization was a key driver of African economic change during the colonial era. Why did some African smallholders adopt cash crops on a considerable scale, while most others were hesitant to do so? 
In recent years, debates on the determinants of African development have been dominated by institutional explanations (Acemoglu and Robinson, 2010). However, there is also a current of literature pointing to the importance of ecological factors and resource endowments (Austin, 2008; Tosh, 1980). In this study, we investigate and test the plausible claim that factor endowments crucially shaped the degree to which export crops were adopted by smallholders in colonial tropical Africa.
Kostadis Papaioannou is a PhD student at the London School of Economics

In his seminal contribution, Tosh (1980) made a strong case for a resource endowments perspective on African agricultural commercialization during the colonial era. Tosh argued that the different responses of African farmers to export crop cultivation can be explained by the distinction between ‘forest’ and ‘savanna’ areas, in which farmers faced distinct resource endowments. Forest areas had fertile soils and well-distributed rainfall, and were suitable for crops that yield high caloric returns per unit of labour (such as yam and banana) as well as lucrative cash crops (such as coffee and cocoa). Savanna areas, by contrast, were characterized by a brief growing season during which farmers struggled to secure their subsistence requirements. Not only were labour demands unevenly distributed under such drier savanna conditions, but farmers also had to rely on drought-resistant yet labour-intensive grain cultivation, and on less lucrative cash crops such as cotton and groundnuts. Consequently, insufficient labour was available to cultivate cash crops without facing food insecurity.

Uganda’s cotton revolution
We conduct an in-depth case study of the ‘cotton revolution’ in colonial Uganda to put Tosh’s argument to the test. Ugandan smallholders, unlike their counterparts elsewhere in Sub-Saharan Africa, adopted cotton on a substantial scale. Cotton was first exported in the early 1900s, and by the 1920s Uganda had become the world’s fourth-largest cotton exporter in per capita terms (and 11th in terms of total production). Two explanations for the exceptional cotton uptake by Ugandan farmers are common in the literature. Firstly, scholars writing from a resource endowments perspective have attributed the success of cotton in Uganda to the cultivation of the perennial banana by Uganda’s cotton growers. Bananas yielded high caloric returns to labour and left farmers with sufficient labour to cultivate an inedible and labour-intensive export crop (Elliot, 1969; Tosh, 1980). However, as Figure 1 illustrates, bananas were grown only in some parts of Uganda, while smallholders in grain-growing regions were equally invested in the cultivation of cotton. In other words, this crop-based version of the resource endowments explanation does not hold up to the historical record. Secondly, scholars taking an institutional perspective have argued that Ugandan cotton adoption was the outcome of particularly effective colonial coercion (Hanson, 2003; Young, 1994). Again, however, the explanation does not hold up to the historical evidence: while the most overtly coercive colonial policies were scaled back during the 1920s, cotton production accelerated in this period.

Figure 1: Average annual cotton cultivation in Uganda’s colonial districts (1925-1960)



Figure 2: Bimodal rainfall and cotton planting in Uganda

Rainfall patterns, labour seasonality and cash crop adoption
We argue that the previous literature has focused too much on crops and coercion, and has overlooked a crucial environmental condition shared by all of Uganda’s cotton-growing regions, namely its equatorial bimodal rainfall pattern (see Figure 2). Bimodal rainfall – i.e. the occurrence of two distinct rainy seasons per year – gave smallholders an important edge over their counterparts farming under unimodal rainfall, and enabled them to cultivate cash crops while retaining food security. The benefits were two-pronged.


Firstly, the occurrence of two rainy seasons meant that agricultural labour demands were more evenly distributed throughout the year, effectively enabling farmers to devote more labour to growing crops. Farmers used the first rainy season to grow food crops, and relegated cotton to the second. The benefit of spreading farming operations over two rainy seasons is best illustrated by a comparison between the Teso region of Uganda and northern Côte d’Ivoire (Figure 3). If we assume that the month with the greatest labour input in Figure 3 represents the maximum monthly household labour capacity, then, thanks to a more favourable seasonal distribution, farmers in the three Ugandan cases were able to use between 69 and 71 per cent of their annual labour capacity in agriculture, while farmers in Côte d’Ivoire could effectively exploit only 49 per cent of theirs. This gap of just over 20 percentage points may well account for the difference between cotton adoption and rejection.

Figure 3. Intra-annual distribution of labour inputs (left axis) and rainfall (inches, right axis)
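The arithmetic behind these utilization figures is simple. The sketch below uses hypothetical monthly labour profiles (not the actual Figure 3 data) to show how a smoother bimodal profile translates into a higher share of annual labour capacity actually used:

```python
# Hypothetical monthly labour inputs (person-days per household); these are
# stylized illustrations, NOT the actual series plotted in Figure 3.
bimodal  = [18, 24, 30, 26, 16, 10, 20, 28, 30, 24, 16, 10]  # two rainy seasons
unimodal = [6, 6, 12, 26, 40, 45, 42, 32, 20, 14, 12, 10]    # one rainy season

def utilization(monthly):
    """Share of annual labour capacity used, with capacity defined as
    twelve times the input of the peak month."""
    return sum(monthly) / (12 * max(monthly))

print(f"bimodal:  {utilization(bimodal):.0%}")   # ~70%: smooth profile, little idle labour
print(f"unimodal: {utilization(unimodal):.0%}")  # ~49%: sharp peak leaves labour idle off-season
```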

Secondly, having two chances per year to secure subsistence mitigated the impact of a single harvest failure, and allowed farmers to take greater risks, such as devoting one of the rainy seasons to the cultivation of an inedible cash crop. We show that farmers relegated cotton to the second rainy season, and hypothesize that the allocation of labour to cotton depended on the degree of food security achieved in the first.



Food security and cotton cultivation: an empirical analysis

To test this hypothesis, we perform a panel analysis on a newly constructed, strongly balanced, district-level dataset of cotton acres per capita over a 36-year period (1925-1960). Food crop yield data are impossible to obtain for a smallholder economy with very limited administrative capacity. To proxy for food crop harvests in the first rainy season, we therefore use rainfall deviation. An extensive body of previous research has shown that rainfall deviation is a reliable predictor of harvest outcomes (see Papaioannou, 2017; Papaioannou and De Haas, 2017 for further discussion). Studies usually look at rainfall on an annual basis, but given our bimodal context we take the rainfall deviation of the first six months of the year – that is, before the planting of cotton. Our panel analysis indicates that rainfall deviation in the first season had a negative impact on subsequent cotton cultivation, suggesting that food security trumped cotton cultivation: farmers shifted resources from cotton to food during the second season to compensate for a disappointing food crop harvest. The effect is stable and highly significant (at the 1% level), and holds up to numerous robustness checks. Interestingly, we do not find heterogeneity between the grain and banana districts, indicating that rainfall patterns rather than farming systems were decisive for smallholders’ willingness and ability to adopt cotton.
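A minimal sketch of what such a panel specification could look like (illustrative file and variable names, not the authors’ actual code), using the linearmodels package:

```python
# District-level panel: cotton acreage per capita regressed on first-season
# rainfall deviation, with district and year fixed effects.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("uganda_districts.csv")      # hypothetical file
df = df.set_index(["district", "year"])       # entity-time MultiIndex

# rain_dev_s1: rainfall deviation in the first rainy season (Jan-Jun),
# i.e. before cotton planting.
mod = PanelOLS.from_formula(
    "cotton_acres_pc ~ 1 + rain_dev_s1 + EntityEffects + TimeEffects", data=df
)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)  # a negative coefficient on rain_dev_s1 matches the paper's finding
```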

Conclusion

Our study highlights food security and labour seasonality as important determinants of agricultural commercialization in colonial tropical Africa. We propose that, in a colonial context, bimodality was a close-to-necessary condition for a ‘cash crop revolution’ to occur. At the same time, we are careful not to argue that cash crop adoption can be understood and explained solely by looking at labour seasonality, or even at resource endowments more broadly. In this study, we have treated thin markets for credit and food, and the limited adoption of agricultural technologies, as exogenously given. In reality, of course, such limitations were an outcome of the policies of colonial governments, which operated on a shoestring and were unwilling to invest to any large extent in the agricultural development of their colonies. That resource endowments mattered so much testifies to the poor institutional context in which farmers operated.


The working paper can be downloaded here: http://www.ehes.org/EHES_111.pdf


References
Acemoglu, D., and J.A. Robinson. 2010. “Why is Africa poor?” Economic History of Developing Regions 25 (1):21-50.

Austin, G. 2008. “Resources, techniques, and strategies south of the Sahara: revising the factor endowments perspective on African economic development, 1500-2000.” The Economic History Review 61 (3):587-624.

Elliot, C.M. 1969. “Agriculture and economic development in Africa: theory and experience 1880-1914.” In Agrarian change and economic development, edited by E.L. Jones and S.J. Woolf. London: Methuen.

Frankema, Ewout, Jeffrey Williamson, and Pieter Woltjer. 2015. An economic rationale for the African scramble: the commercial transition and the commodity price boom of 1845-1885. National Bureau of Economic Research.

Papaioannou, Kostadis J. 2017. “‘Hunger makes a thief of any man’: Poverty and crime in British colonial Asia.” European Review of Economic History 21 (1):1-28.

Papaioannou, Kostadis J., and Michiel de Haas. 2017. “Weather Shocks and Agricultural Commercialization in Colonial Tropical Africa: Did Cash Crops Alleviate Social Distress?” World Development 94:346-365.

Tosh, John. 1980. “The cash-crop revolution in tropical Africa: an agricultural reappraisal.” African Affairs 79 (314):79-94.

Young, Crawford. 1971. “Agricultural policy in Uganda: capability and choice.” In The state of the nations: constraints on development in independent Africa, edited by M.F. Lofchie, 141-164.

Monday, 15 May 2017

How Extractive Was Colonial Trade?

Federico Tadei is Visiting Professor at the Universitat de Barcelona
Extractive colonial institutions have been considered one of the main causes of current African underdevelopment (Acemoglu, Johnson, and Robinson, 2001; Nunn, 2007). Yet, since colonial extraction is hard to quantify and its precise mechanisms are not well understood, little research has examined exactly how successful the colonizers were in extracting wealth from Africans.

In a new paper, I tackle this issue by focusing on colonial trade in French Africa. The French colonizers, in fact, made great use of trade monopsonies and compulsory harvest quotas to obtain agricultural commodities from African producers at very low prices and resell them in Europe for large profits (Coquery-Vidrovitch, 1972; Suret-Canale, 1971). Given this specific feature of French trade, I argue that it is possible to measure colonial extraction by looking at the gap between the prices that the African producers received and the prices that they should have obtained if colonial trade had been competitive. 

I examine this hypothesis as follows:
1) First, by using a variety of colonial publications, I reconstruct yearly estimates of prices at the French port, African producer prices, and trading costs (including shipping, insurance, inland transportation, port charges, and export taxes) for the main exported commodities between 1900 and 1960.
2) Then, I compute what producer prices should have been in a competitive market as the difference between prices at the French port and trading costs.
3) Finally, I compare actual and competitive producer prices to measure the level of colonial extraction related to export trade.
The figure below summarizes the main result of the paper by showing the average gap between actual and competitive producer prices over time: on average, prices paid to African producers were less than two-thirds of what they would have been in a competitive market.

The figure shows the trend of average colonial extraction, defined as one minus the ratio between actual and competitive producer prices.
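In arithmetic terms, the extraction measure is straightforward. A minimal sketch with hypothetical numbers (per unit of a given commodity in a given year):

```python
# Illustrative computation of the extraction measure; the numbers below are
# hypothetical, NOT taken from the paper.
port_price   = 100.0  # price at the French port
trading_cost = 25.0   # shipping, insurance, inland transport, port charges, taxes
actual_price = 45.0   # price actually received by the African producer

competitive_price = port_price - trading_cost      # what a competitive market would pay
extraction = 1 - actual_price / competitive_price  # share captured via the monopsony

print(f"competitive producer price: {competitive_price}")  # 75.0
print(f"extraction rate: {extraction:.0%}")                # 40%
```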


In addition, I employ a two-fold approach to check the robustness of these results. First, I verify that price differentials in French Africa were much larger than those observed in markets not subject to colonial extraction, such as trade between the United States and the United Kingdom and trade in commodities produced in Africa by European settlers. Second, I use a regression analysis to take into account unobservable trading costs, such as risk compensation and productivity differences, and to demonstrate that an increase in the world price for a commodity did not generate a proportional increase in the African producer price.
Together, the evidence suggests that colonial trade was characterized by a considerable amount of extraction. Future research is warranted to examine whether this had long-lasting consequences for current economic development.

This blog post was written by Federico Tadei, visiting professor at the Universitat de Barcelona.
The full paper is available at http://www.ehes.org/EHES_109.pdf.

References

D. Acemoglu, S. Johnson, and J. Robinson. The colonial origins of comparative development: An empirical investigation. American Economic Review, 91:1369-1401, 2001.

C. Coquery-Vidrovitch. Le Congo au temps des grandes compagnies concessionnaires, 1898-1930. Mouton De Gruyter, 1972.

N. Nunn. Historical legacies: A model linking Africa's past to its current underdevelopment. Journal of Development Economics, 83:157-175, 2007.

J. Suret-Canale. French colonialism in tropical Africa, 1900-1945. Pica Press, 1971.

Thursday, 13 April 2017

Alleged Currency Manipulations and Retaliatory Tariffs. Some lessons from the 1930s

Thilo Albers is a PhD student in Economic History at the London School of Economics (LSE)

How forceful can retaliation against alleged currency manipulation be? What are the effects on trade? The following research seeks answers to these questions in the interwar period.

The evidence that China still deliberately undervalues her currency is weak at best (see Cheung et al. 2016). Yet, with the new US president in office, import surcharges against China and other countries for alleged currency manipulation have become more likely. Indeed, even before he took office, important public figures across the political spectrum had called for an import surcharge (e.g. Krugman 2010). At the heart of such debates is the argument that a country undervaluing her currency significantly gains at the expense of others: a lower real exchange rate stimulates exports, which in turn creates current account problems abroad (Goldstein and Lardy, 2006). It is frequently argued that a retaliatory tariff could be used to force the alleged currency manipulator to re-align her currency. According to the standard narrative (e.g. Krugman 2010), this worked smoothly towards the end of Bretton Woods, when the United States used an import surcharge to force other countries to re-align their currencies. However, this was a very particular case in a very particular setting, and the final realignment might well have been reached without the surcharge (Irwin 2013). Nor does this case answer the most important question: what are the potential political and economic costs of retaliatory tariff policies?

The 1930s provide a blueprint for assessing such costs. Some countries had left the gold standard and floated their currencies. Other countries accused them of deliberately undervaluing their currencies and imposed retaliatory tariffs. In a new study focusing on French commercial policy (Albers 2017), I show that moving towards discretionary tariff policies can carry high political and economic costs. The study is a first attempt to quantify the relative importance of retaliatory as opposed to general tariff increases in this commercial policy episode. The retaliatory motive for French protectionism turns out to have been at least as important as the factors driving the general tariff level. The effects of retaliation on trade were comparable to those of modern trade treaties – just with the opposite sign. An analysis of historical newspapers demonstrates that leniency vanished from the public discourse and nationalist agitation took over.

Alleged currency manipulation back then

When Britain unilaterally left the gold standard in the autumn of 1931 and other countries soon followed suit, policymakers in these countries did not intend to manipulate their currencies. The imminent threat of further deflation and the drain of gold reserves had effectively pushed these countries, especially Great Britain, off the gold standard (Accominotti 2012). However, many policymakers abroad perceived the devaluations as currency manipulation. At the forefront, the French government retaliated by raising tariffs and introducing quotas specifically aimed at the countries that had left the gold standard.

From the villain to the victim of exchange rate policies

It is not without irony that French commercial policymakers perceived their country (and the other countries on the gold standard) as the victim of currency depreciations abroad. When France stabilised her currency at 20 per cent of its pre-war value in 1928, while many countries such as Britain returned to their pre-war parities, the result was a massive gold influx into France. Some have argued that this played a part in causing the Great Depression, because it led to further deflation abroad (Johnson 1997, Irwin 2010). The paper shows that contemporary commentators abroad likewise argued that the Franc was undervalued. In this sense, France was the villain of exchange rate policies in the late 1920s.

After the first wave of currency depreciations hit in the autumn of 1931, the tables turned. The real value of the Franc doubled against the pound over the following two years. French policymakers now felt victimised by exchange rate policies abroad. A qualitative analysis of contemporary newspapers focusing on the Anglo-French commercial policy relationship suggests that the rhetoric shifted from leniency before the devaluations to agitation afterwards. The numbers mirror this debate, as Figure 1 shows. It plots the number of articles per year in the Guardian containing keywords that identify protectionism and tariffs in general, and those containing additional references to tariff wars or retaliation. Retaliatory sentiment first peaked in 1930, when the discussion about the Smoot-Hawley tariff in the United States became heated. This local peak was far exceeded by the discussions about the devaluations two years later. These numbers, and the discussion in the articles behind them, lead to the conclusion that the political costs of the devaluations and the subsequent retaliation were indeed high.


Figure 1: The Rhetoric of Retaliation

Identifying the retaliatory motive in commercial policy

Tariffs had been increasing across all countries during this episode, most of all in countries adhering to the gold standard (Eichengreen and Irwin 2010). The new retaliatory protectionism, however, had a different quality and severe political economy implications. Retaliation was directed at specific trading partners, and was thus different from the previous general increases in tariffs designed to balance trade and budgets or to protect home industries. Irwin (1993) labelled this bilateralism “pernicious,” but so far we know little about its magnitude relative to the general increase in protectionism and about its effects on trade.
While most studies of protectionism make use of aggregate tariff data, this study employs a novel dataset of bilateral tariff rates of France against her trading partners. This so-far widely neglected dimension of tariff data allows me to separate general tariff increases from those with a retaliatory motive by using a difference-in-differences setup. Figure 2 shows that the “tariff treatment” for those leaving the gold standard was indeed very large.
Figure 2: The "tariff treatment" for leaving the gold standard
The most conservative estimate suggests that, while the general increase (against all trading partners) amounted to 5 per cent, the retaliatory component of the increase in French protectionism amounted to about 7.5 per cent. This is very close to the average tariff reduction achieved by NAFTA (Burfisher et al. 2001). Hence, retaliation was important for the increase in French protectionism, but did it matter for trade, too? A back-of-the-envelope calculation and an econometric estimate suggest that the reduction in trade implied by these tariff increases was about 20 per cent. This magnitude, albeit somewhat smaller, is comparable to the trade-creating effects of Regional Trade Agreements (see the median estimate in Head and Mayer 2014). In sum, the economic costs of retaliation were large.
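A minimal sketch of such a difference-in-differences specification (hypothetical file and variable names, not the paper’s actual code): partner and year fixed effects absorb the general tariff level, and the interaction picks up the extra “tariff treatment” for countries that left gold:

```python
# Difference-in-differences on bilateral tariff rates (illustrative sketch).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("french_bilateral_tariffs.csv")  # hypothetical panel: partner x year
# off_gold = 1 for partners that left the gold standard; post = 1 from autumn 1931 on.
# Partner and year fixed effects absorb the main effects of off_gold and post.
mod = smf.ols("tariff_rate ~ off_gold:post + C(partner) + C(year)", data=df)
res = mod.fit(cov_type="cluster", cov_kwds={"groups": df["partner"]})
print(res.params["off_gold:post"])  # the retaliatory 'tariff treatment'
```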

What do we learn?

It is almost needless to say that French policymakers did not change minds abroad with their actions, especially as the abandonment of the gold standard was clearly a prerequisite for recovery (Eichengreen 2013). The chaotic manner and lack of coordination of the devaluations, however, led to more protectionism in the countries that decided to stay on the gold standard. The quality of this protectionism was markedly different, as it targeted specific trading partners. Such discretion could thus lead to the tit-for-tat tariff escalations for which the interwar period has become so infamous. The political and economic costs of retaliatory tariffs were large by modern standards.

We should be skeptical when commentators refer to the successful case of 1971, in which the United States employed an import surcharge to force countries to re-align their currencies. There is no guarantee that retaliatory tariffs will solve currency disputes. On the contrary, the attempt to use them as a bargaining chip might fail and instead provoke ever more protectionism. After all, economic policy cooperation appears to be the best recipe for avoiding disaster.

This blog post was written by Thilo Albers, PhD candidate at the Department of Economic History at LSE.

The working paper can be downloaded here: http://www.ehes.org/EHES_110.pdf

References

Accominotti, Olivier (2012): “London Merchant Banks, the Central European Panic and the Sterling Crisis of 1931,” The Journal of Economic History, Vol. 72, pp. 1–43. 

Albers, Thilo (2017): “Currency Valuations, Retaliation and Trade Conflicts: Evidence from Interwar France,” LSE Economic History Working Paper, No. 258/2017.

Burfisher, Mary E., Sherman Robinson, and Karen Thierfelder (2001): “The Impact of NAFTA on the United States,” The Journal of Economic Perspectives, Vol. 15, pp. 125–144.


Cheung, Yin-Wong, Chinn, Menzie and Xin Nong (2016): “Estimating currency misalignment using the Penn effect: It’s not as simple as it looks.” NBER Working Paper, No. 22539

Eichengreen, Barry and Douglas A. Irwin (2010): “The Slide to Protectionism in the Great Depression: Who Succumbed and Why?” Journal of Economic History, Vol. 70, pp. 871–897.

Eichengreen, Barry (2013): “Currency War or International Policy Coordination?” Journal of Policy Modeling, Vol. 35, pp. 425 – 433.


Johnson, H. Clark (1997): Gold, France, and the Great Depression, 1919–1932. New Haven: Yale University Press.


Goldstein, Morris and Nicholas Lardy (2006): “China’s Exchange Rate Policy Dilemma,” The American Economic Review, Vol. 96, pp. 422–426. 

Head, Keith and Thierry Mayer (2014): “Gravity Equations: Workhorse, Toolkit, and Cookbook,” in Gita Gopinath, Elhanan Helpman and Kenneth Rogoff eds. Handbook of International Economics, Vol. 4, Chap. 3, pp. 131–195.

Irwin, Douglas A (1993): “Multilateral and Bilateral Trade Policies in the World Trading System: An Historical Perspective,” in Jaime De Melo and Arvind Panagariya eds. New Dimensions in Regional Integration, Vol. 5: Centre for Economic Policy Research, Cambridge University Press, pp. 90–119. 

Irwin, Douglas A. (2010): “Did France Cause the Great Depression?” NBER Working Paper, No. 16350.

Irwin, Douglas A. (2013): “The Nixon Shock after Forty Years: The Import Surcharge Revisited,” World Trade Review, Vol. 12, pp. 29–56.

Krugman, Paul (2010): “Taking on China,” New York Times, March 14, 2010

Mankiw, Gregory N. (2009): “It’s no Time for Protectionism”, New York Times, February 7, 2009

Tuesday, 14 February 2017

Between war and peace: The Ottoman economy and foreign exchange trading at the Istanbul bourse

Were events during the First World War reflected in foreign exchange rates? A new EHES working paper by Avni Önder Hanedar, Hatice Gaye Gencer, Sercan Demiralay, and İsmail Altay, from different universities in Turkey, provides evidence on foreign exchange trading at the Istanbul bourse of the Ottoman Empire to shed light on this question.

They examine the influence of political risks on foreign exchange rates at the Istanbul bourse during the First World War. Their empirical strategy is to identify abrupt changes in the value of the Lira against the currencies of the neutral countries – the Dutch Guilder, the Swedish Krona and the Swiss Franc. They exploit unique data on daily foreign exchange rates announced at the Istanbul bourse from May 1918 to June 1919, manually collected from the Ottoman Empire’s official newspaper, Takvim-i Vekayi.
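As an illustration of break-point detection on such a series (the authors’ exact method may differ; the file name and break count below are hypothetical), one could use the ruptures library:

```python
# Detecting abrupt changes in a daily exchange-rate series (illustrative sketch).
import numpy as np
import ruptures as rpt

rate = np.loadtxt("lira_franc_daily.txt")  # hypothetical daily Lira/Franc series

algo = rpt.Binseg(model="l2").fit(rate)    # binary segmentation on shifts in the mean
breakpoints = algo.predict(n_bkps=3)       # e.g. the three armistices marked in the figure
print(breakpoints)                         # indices of the detected break dates
```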

A column of Takvim-i Vekayi showing the value of the Turkish Lira against several foreign currencies on 27 August 1918 (Takvim-i Vekayi, 28 August 1918, Kambiyo: 6).

The study helps fill a gap in the historical literature on the Ottoman economy in the period that ended with the First World War, for which empirical research is scarce (see Hanedar, Hanedar, & Torun, 2016, 2017). Furthermore, the literature on the impact of the First World War on foreign exchange rates is limited (see Hall, 2004; Kanago & McCormick, 2013).

The findings pinpoint sudden changes in the value of the Lira against the currencies of the neutral countries at the Istanbul bourse around important war-related events signalling that the end of WWI was approaching. The war and the Allied occupation damaged the economy of the Ottoman Empire, with inflation surging alongside huge budget deficits. These circumstances were reflected in the foreign exchange rates, and the Lira depreciated significantly against the currencies of the neutral countries by the end of the war.

 
The value of one Lira against the Swiss Franc, Dutch Guilder, and Swedish Krona, 1918–1919. The three vertical lines in the graph represent the armistices signed by Bulgaria, the Ottoman Empire, and Germany, respectively.
The research uncovers the effect of war-related events on foreign exchange rates using data from the First World War and confirms the significance of these events at the beginning of the 20th century. It suggests that even under wartime conditions, the Ottoman foreign exchange market displayed a degree of efficiency in the period marking the end of WWI.


This blog post was written by Avni Önder Hanedar, researcher in economics and econometrics at Dokuz Eylül University and Sakarya University.



The working paper can be downloaded here: http://www.ehes.org/EHES_108.pdf




References

Hall, G. J. (2004). Exchange rates and casualties during the First World War. Journal of Monetary Economics, 51(8): 1711–1742.

Hanedar, A. Ö., Hanedar, E. Y., and Torun, E. (2016). The end of the Ottoman Empire as reflected in the İstanbul bourse. Historical Methods, 49(3):145–156.

Hanedar, A. Ö., Hanedar, E. Y., Torun, E., and Ertuğrul, M. (2017). Perceptions on the Dissolution of an Empire: Insight from the İstanbul Bourse and the Ottoman War Bond. Defence and Peace Economics, (Forthcoming).

Kanago, B. and McCormick, K. (2013). The Dollar-Pound exchange rate during the first nine months of World War II. Atlantic Economic Journal, 41(4): 385–404.

Takvim-i Vekayi. 30 May 1918–11 June 1919.

Wednesday, 8 February 2017

Why did Argentina become a super-exporter of agricultural and food products during the Belle Époque (1880-1929)?

In the first wave of globalization, the populations of some extra-European countries were also able to earn high incomes despite low levels of industrialisation. These countries had recently been colonised by Europeans (Canada, Argentina, Uruguay, Australia and New Zealand), and their economic growth was based on the rapid expansion of their exports of primary products and on the linkage effects of these exports with other economic activities.

This was the case of Argentina during these years. According to recent estimates of world trade published by Federico and Tena-Junguito (2016), Argentine exports, which represented around 0.8% of world trade during the early 1850s, reached almost 4% in the 1920s.

Figure 1. Ratio of Argentine exports over world exports (% at current prices)
Source: Federico and Tena-Junguito (2016)

There are very few studies that adopt a cliometric perspective to identify the determinants of such accelerated export growth, which is a necessary condition for the export-led model to work. The objective of this work is to provide a cliometric contribution to this field by constructing a gravity model to explain the determinants of the growth of Argentina’s exports between 1880 and 1929.
To this end, the bilateral export data we need have been drawn from a meticulous review of the Argentine foreign trade statistics. In contrast to the vast majority of quantitative analyses of this subject, we study the annual path of the principal export products; that is, the destinations of each individual product. Figure 2 summarises Argentine exports in current and constant values (calculated at 1913 prices).
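A gravity equation of this kind is typically estimated on bilateral flows. A minimal sketch (hypothetical file and variable names; Poisson pseudo-maximum likelihood is a common estimator for gravity models, though not necessarily the authors’ choice):

```python
# Gravity model of Argentine bilateral exports (illustrative sketch).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("argentina_bilateral_exports.csv")  # hypothetical: product x destination x year
mod = smf.glm(
    "exports ~ log_gdp_argentina + log_gdp_destination + log_distance + tariff",
    data=df,
    family=sm.families.Poisson(),  # PPML handles zero trade flows and heteroskedasticity
)
print(mod.fit(cov_type="HC1").summary())
```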

Figure 2. Argentine exports, in current and constant values (1913 prices),
in millions of pounds, 1875-1929
Source: own elaboration based on official Argentine statistics (1875-1929) and Cortés Conde et al. (1965).

As we can see, Argentina’s integration into international markets succeeded after the 1870s. But according to Cortés Conde (1985), it was not until the last decade of the nineteenth century that exports contributed to paying for debt service and to financing imports, which was necessary not only to transform the productive structure but also to cover the consumption needs of the domestic market.

To analyse export growth, we separate the products into three groups: 1) traditional livestock exports, which include wool, salted and dried cattle hides, raw sheep skins, bovines, jerked meat and tallow; 2) crop exports, which comprise wheat, corn and linseed; and 3) processed agrifood exports, composed of chilled and frozen beef, frozen mutton, wheat flour, quebracho logs and quebracho extract. As Figure 3 shows, although the first group also grew, from a long-term perspective that looks beyond the fluctuations, the second and third groups grew more and at a faster pace.

Figure 3. Breakdown of Argentine exports at constant 1913 prices (thousands of pounds). Source: own elaboration based on Argentine official statistics.

Our econometric results reveal that the increase in Argentina’s GDP was important in explaining export growth. On the one hand, new lands were successfully incorporated into the productive system. On the other hand, labour and capital, traditionally scarce factors, were supplied from abroad.

However, without solvent demand for the type of goods in which the country successively specialised, the export business would obviously not have developed sufficiently. The demand for food and raw materials, particularly from the most developed European countries, was therefore essential.

The fall in transport costs was also a contributing factor. However, during the period analysed, the increases or reductions in tariffs did not have a significant effect on the country’s exports as a whole.
These overall results are better understood when analysed by type of product. This also constitutes an original contribution, since the literature has generally not differentiated between export goods. Significant peculiarities may be observed. The development of the Argentine economy constituted an obstacle to the growth of its exports of (unprocessed) livestock products, as agriculture competed for the land on which this activity took place. Furthermore, the emergence of a meat-processing industry gave rise to a preference for exporting frozen and chilled meats rather than live animals. The opposite was the case for raw and processed agricultural products, whose exports improved as a result of the country’s economic growth. Tariff protection only had a significant effect on agricultural products, particularly wheat, which, from the end of the nineteenth century, faced increasing obstacles in some continental countries.



The blog post was written by Vicente Pinilla (Universidad de Zaragoza) and Agustina Rayes (Universidad Nacional del Centro de la Provincia de Buenos Aires).


The working paper can be downloaded here: http://www.ehes.org/EHES_107.pdf




References

Cortés Conde, R. (1985): “The Export Economy of Argentina, 1880-1920”, in R. Cortés Conde and S.J. Hunt (eds.), The Latin American economies: growth and the export sector 1880-1930, New York: Holmes & Meier.

Federico, G. and Tena-Junguito, A. (2016): “World trade, 1800-1938: a new data-set”, European Historical Economics Society, Working Paper 93.




Monday, 6 February 2017

Plague and long-term development

The lasting effects of the 1629-30 epidemic on the Italian cities


Guido Alfani is an associate professor at Bocconi University
After many years of relative neglect, plague has recently started to recover its long-lost popularity among economic historians. In particular, the Black Death pandemic of the fourteenth century has been singled out as a possible factor favouring Europe over the main Asian economies, particularly India and China (for example, Clark 2007; Voigtländer and Voth 2013). Indeed, there is evidence of a long-lasting improvement in European and Mediterranean real wages immediately after the Black Death (Pamuk 2007; Campbell 2010). However, there is also evidence that in less densely populated areas of Europe, like Ireland or Spain, the long-term consequences of plague were negative, not positive, as “[Plague] destroyed the equilibrium between scarce population and abundant resources” (Álvarez Nogal and Prados de la Escosura 2013, p. 3). More generally, it can be argued that, among plagues and other lethal epidemics, the Black Death may be the exception in having had (mostly) positive long-run consequences (Alfani and Murphy 2017).

Indeed, in a recent article I suggested that during the seventeenth century the epidemiology of plague differed between the North and the South of Europe (Alfani 2013a). The South, and Italy in particular, was affected much more severely than the North. In 1629-31, plague killed about one-third of the population of northern Italy. A second epidemic, in 1656-57, ravaged central and southern Italy; in the Kingdom of Naples, overall population losses were in the 30-43 per cent range (Fusco 2009). The economic consequences of these plagues were negative, and I argued that their differential impact helps to explain the origin of the relative decline of the most advanced areas of Italy compared to northern Europe (Alfani 2010; 2013a; 2013b).

In a new EHES working paper, co-authored with Marco Percoco, we introduce the largest existing database of urban mortality rates in plague years. This allows us, first, to demonstrate the particularly high severity of the last Italian plagues (in the two seventeenth-century waves, mean mortality rates in cities were on the order of 400 per thousand) and, second, to analyze their economic impact.

Using the methods of economic geography, we study the ability of a mortality crisis to alter the growth path followed by a city (in particular, we follow the approach introduced by Davis and Weinstein 2002). We find evidence that the 1629-30 plague affecting northern Italy displaced some of the most dynamic and economically advanced Italian cities, like Milan and Venice, moving them to a lower growth path. We also estimate the huge losses the epidemic caused in urban populations (Figure 1), and show that it had a lasting effect on urbanization rates throughout the affected areas (note that changes in urbanization rates and in city size are often used as indicators of economic growth or decline over the long run: see for example Bosker et al. 2008; Percoco 2013).
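In the Davis and Weinstein (2002) framework, persistence is tested by regressing post-shock growth on the shock itself: a coefficient near -1 implies cities bounce back to their old path, while a coefficient near 0 implies the shock permanently displaces them. A minimal sketch (hypothetical file and variable names):

```python
# Davis-Weinstein style recovery test for plague-hit cities (illustrative sketch).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("italian_cities.csv")  # hypothetical city-level data
# shock: log population change across the plague window (e.g. 1620s to 1630s)
# growth_post: log population change afterwards (e.g. 1630s to 1700)
res = smf.ols("growth_post ~ shock", data=df).fit(cov_type="HC1")
print(res.params["shock"])  # ~ -1: full recovery; ~ 0: permanent displacement
```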


Figure 1. Size of the urban population in Piedmont, Lombardy, and Veneto (1620-1700)

Our argument is further strengthened by the fact that, while there is clear evidence of the negative consequences of the 1630 plague, there is very little to suggest a positive effect. As we argue, the potential positive consequences of the plague were entirely eroded by a negative productivity shock. Our regression analysis provides indirect evidence of this, but there is also direct evidence: for key cities like Florence, Genoa and Milan we have time series of masons’ real wages covering the entire seventeenth century (Figure 2). This sample includes one city heavily affected by the 1630 plague (Milan: mortality rate of 462 per thousand), one relatively less affected (Florence: 137 per thousand) and one entirely spared (Genoa). Interestingly, of the three, the only one showing signs of an increase in real wages after 1630 is Genoa.


Figure 2. Real wages of masons in cities of northern Italy and overall urban and rural real wages in central-northern Italy, 1600-1700 (index based on the average of 1620-30). 

By demonstrating that the plague had a permanent negative effect on many key Italian urban economies, we provide support for the hypothesis that the origins of the relative economic decline of the northern part of the Peninsula are to be found in particularly unfavorable epidemiological conditions. More generally, our paper provides a useful new perspective on Italian long-term economic trends, including the falling behind of northern Italy relative to its main European competitors and the consequences of the progressive “ruralization” of the Italian economies during the seventeenth century.

The working paper can be downloaded here: http://www.ehes.org/EHES_106.pdf

References:

Alfani, G. 2010. ‘Pestilenze e «crisi di sistema» in Italia tra XVI e XVII secolo. Perturbazioni di breve periodo o cause di declino economico?’, in S. Cavaciocchi (ed.), Le interazioni fra economia e ambiente biologico. Florence: Florence University Press: 223-247.

Alfani, G. 2013a. ‘Plague in seventeenth century Europe and the decline of Italy: an epidemiological hypothesis’, European Review of Economic History, 17 (4): 408-430.

Alfani, G. 2013b. Calamities and the Economy in Renaissance Italy. The Grand Tour of the Horsemen of the Apocalypse. Basingstoke: Palgrave.

Alfani, G. and T. Murphy. 2017. ‘Plague and Lethal Epidemics in the Pre-Industrial World’, Journal of Economic History, 77(1): 314-343.

Álvarez Nogal, C. and L. Prados de la Escosura. (2013). ‘The Rise and Fall of Spain (1270-1850)’, Economic History Review, 66(1): 1–37.

Bosker, M., S. Brakman, H. Garretsen, H. de Jong, and M. Schramm. 2008. ‘Ports, Plagues and Politics: Explaining Italian City Growth 1300-1861’, European Review of Economic History, 12: 97-131.

Campbell, B. M. S. 2010. “Nature as historical protagonist: environment and society in pre-industrial England”, Economic History Review 63: 281-314.

Clark, G. 2007. A Farewell to Alms: A Brief Economic History of the World. Princeton: Princeton University Press.

Davis, D.R. and D.E. Weinstein. 2002. ‘Bones, Bombs, and Break Points: The Geography of Economic Activity’, American Economic Review, 92(5): 1269-1289.

Fusco, I. 2009. ‘La peste del 1656-58 nel Regno di Napoli: diffusione e mortalità’, in G. Alfani, G. Dalla Zuanna and A. Rosina (eds.), La popolazione all’alba dell’era moderna, special number of Popolazione e Storia, 2/2009: 115-138.

Malanima, P. 2013. ‘When did England overtake Italy? Medieval and early modern divergence in prices and wages’, European Review of Economic History, 17: 45-70.

Pamuk, S. 2007. ‘The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600’, European Review of Economic History, 11: 289-317.

Percoco, M. 2013. ‘Geography, Institutions and Urban Development: Evidence from Italian Cities’, Annals of Regional Science, 50: 135–152.

Voigtländer, N. and H.J. Voth 2013. “The Three Horsemen of Riches: Plague, War, and Urbanization in Early Modern Europe.” Review of Economic Studies 80 (2): 774–811.

Thursday, 12 January 2017

Accounting for the ‘Little Divergence’

This blog post was written by Alexandra M. de Pleijt, postdoc at Utrecht University

What drove economic growth in pre-industrial Europe, 1300-1800? 


The Industrial Revolution is arguably the most important break in global economic history, separating a world of at best very modest improvements in real incomes from the period of ‘modern economic growth’. Thanks to the pioneering work of van Zanden and van Leeuwen (2012) and Broadberry et al. (2015), this phenomenon has recently been linked to the study of long-term trends in per capita GDP. One question is to what extent growth before 1750 helps to explain the break that occurs after that date; the idea of a ‘Little Divergence’ within Europe has been suggested as part of the explanation of why the Industrial Revolution occurred in this part of the world.

This ‘Little Divergence’ is the process whereby the North Sea Area (the UK and the Low Countries) developed into the most prosperous and dynamic part of the Continent. The new series on per capita GDP demonstrate that the Low Countries and England witnessed almost continuous growth between the 14th and the 18th century, whereas in other parts of the continent real incomes declined in the long run (Italy) or stagnated at best (Portugal, Spain, Germany, Sweden and Poland) (see Figure 1). As a consequence, at the dawn of the Industrial Revolution in the 1750s, GDP per capita in Holland and England had increased to 2355 and 1666 (international 1990) dollars respectively, compared with 876 and 919 dollars in 1347 (just before the arrival of the Black Death) and 1454 and 1134 in 1500 (Bolt and van Zanden 2014).

Figure 1. Gross Domestic Product per capita, 1300-1800.
Notes and sources: see Bolt and van Zanden (2014)


Although the ‘Little Divergence’ between the North Sea area and the rest of the continent is well established, very little is known about the causes of this phase of pre-industrial growth. Why were the Low Countries and England able to break through Malthusian constraints and generate a process of almost continuous economic growth long before 1800? Various hypotheses have been suggested. One explanation focuses on institutional change: the rise of socio-political institutions (in particular active parliaments) and demographic institutions (notably the European Marriage Pattern) favoured growth in the Low Countries and England (de Moor and van Zanden 2010, van Zanden et al 2012). Other scholars have stressed the importance of the growth of overseas trade (e.g. Acemoglu et al 2005), a hypothesis supported by Allen’s (2003) study explaining differences in real wages in Europe between 1300 and 1800. Finally, others have pointed to the importance of increases in agricultural productivity (Overton 1996) and human capital formation (Baten and van Zanden 2008).

In a new EHES working paper, we test these hypotheses about pre-industrial growth in early modern Europe using new data on per capita GDP, political institutions (active parliaments), human capital formation (per capita book consumption), productivity in agriculture (yield ratios), and international trade (per capita size of the merchant fleet). Our empirical findings show that GDP growth before the Industrial Revolution was mainly driven by human capital formation. We moreover show that institutional change (the rise of active parliaments) was closely related to pre-industrial growth.
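A stylized sketch of such a test (hypothetical file and variable names; the paper’s actual specification may differ), regressing per capita GDP on the four candidate drivers with country and period fixed effects:

```python
# Accounting for the 'Little Divergence' (illustrative sketch).
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("little_divergence_panel.csv")  # hypothetical country x period panel
df = df.set_index(["country", "period"])

mod = PanelOLS.from_formula(
    "ln_gdp_pc ~ 1 + books_pc + active_parliament + yield_ratio + fleet_pc"
    " + EntityEffects + TimeEffects",
    data=df,
)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)  # human capital (books_pc) would carry most weight per the paper
```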


The working paper can be downloaded here: http://www.ehes.org/EHES_104.pdf


References:

Acemoglu, Daron, Simon Johnson, and James A. Robinson. “The Rise of Europe: Atlantic Trade, Institutional Change and Growth.” American Economic Review 95, no. 3 (2005): 546-79.

Allen, Robert C. “Progress and Poverty in Early Modern Europe.” Economic History Review LVI, no. 3 (2003): 403-43.

Baten, Joerg, and Jan Luiten van Zanden. “Book Production and the Onset of Modern Economic Growth.” Journal of Economic Growth 13, no. 3 (2008): 217-35.

Bolt, Jutta, and Jan Luiten van Zanden. “The Maddison Project: collaborative research on
historical national accounts.” Economic History Review 67, no. 3 (2014): 627-51.

Broadberry, Stephen N., Bruce Campbell, Alex Klein, Mark Overton, and Bas van Leeuwen. British Economic Growth, 1270-1870. Cambridge: Cambridge University Press, 2015

De Moor, Tine, and Jan Luiten van Zanden. “Girl Power: The European Marriage Pattern and Labour Markets in the North Sea Region in the Late Medieval and Early Modern Period.” Economic History Review 63, no. 1 (2010): 1-33.

Overton, Mark. Agricultural Revolution in England: The Transformation of the Agrarian Economy 1500-1850. Cambridge: Cambridge University Press, 1996.

Van Zanden, Jan Luiten, and Bas van Leeuwen. “Persistent but not Consistent: The Growth of National Income in Holland, 1347-1807.” Explorations in Economic History 49, no. 2 (2012): 119-30.

Van Zanden, Jan Luiten, Eltjo Buringh, and Maarten Bosker. “The Rise and Decline of European Parliaments, 1188-1789.” Economic History Review 65, no. 3 (2012): 835-61.