With the macroeconomic model there will also be a much stronger interest in the behaviour of the model as a function of time.

The big new assumption in this model is that labour costs vary with employment and unemployment. It is assumed that labour costs vary as a convex function of employment, ie labour costs will increase as the employment ratio increases, and will increase at an increasing rate.

Figure 4.2.1 here

In this model I have used a simple square-law function, shown in figure 4.2.1 above. This is not a particularly realistic function; more realistically it should be asymptotic to the vertical on the right-hand side, as there is a realistic maximum somewhere around 6000 hours per year. However this basic function is sufficient for the needs of the model.

It is also worth noting that this is not an inflation Phillips curve. This curve is a simple supply-price Phillips curve for labour in real terms. In this model, prices of goods and labour both go up and down, just as they did in the commodities model, but they move around stable long-term values. The analogy is with the cyclical price changes seen in a Victorian economy with a gold standard. There is no long-term monetary inflation. For a pithy study of the misinterpretation of the Phillips curve see Hussman [Hussman 2011].

Again, an element of marginality has been introduced. Over short to medium terms, the supply of labour is fixed, while demand can change. Because of this, labour prices can change significantly through business cycles.

In these models, it is assumed that individuals always spend 40% of their wealth at all times, Ω = 0.4.

It is possible that the consumption spending will exactly balance the amount of production capacity available in the companies; however this will not always be the case. It is also possible that there will be too much or too little capital available to match the consumption demand.

Looking firstly at the case of too little demand: if the 40% spending provides insufficient demand, then excess capital will be available and some of that capital will be unused. As a consequence of this there will also be a reduction in labour employed. Also, following exactly the same logic as the companies models above, if companies create insufficient wealth to meet the payout targets set by their market capitalisation, then they will be obliged to convert some of their capital to wealth for payout.

Clearly in this model such a conversion of capital to returns is less realistic than in the companies model. In the companies model capital was swapped for cash between the successful and unsuccessful companies. In this macroeconomic model, all companies are shrinking in size at the same time. This would mean that first stocks of goods and then fixed capital would need to be converted into payouts. This would normally mean substantial losses on the value of the capital, especially the fixed
capital. In this simple model, this problem is ignored, and capital is assumed to be converted into payments at par. This assumption is returned to in the discussion in section 4.4.

It is also possible that there may be insufficient capital available. In these circumstances it is assumed that consumption is still maintained at the full 40% of current wealth, even though insufficient capital is available, and so insufficient goods are produced. In this case the consumption funds available for purchasing are simply divided amongst the goods that are available to be purchased, so increasing the nominal market price of the goods above their long-term natural prices. Consequently this results in short-term consumer price inflation. It is implicitly assumed that consumers judge value by price and continue to spend a fixed proportion of their wealth, even though they actually receive less real value for that wealth.

When this happens super-profits are then earned by the corporate sector. If employment, and so wage levels, are low, then the income retained by the companies is converted into new capital to allow the production of more commodities. In this manner, super-profits are converted into new capital and new production until supply rises to meet the new demand, and the prices of consumer goods then drop back to their 'natural' values based on input costs. This is closely analogous to the commodities model.

It is important to note that, in the company models, the total amount of capital was fixed; however in this macroeconomic model, the amounts of capital and labour employed can vary, though labour is still needed in a fixed proportion to the capital used. Capital and labour are still used in a fixed ratio to give a given output. The amount of capital can vary freely, in line with the demand for goods from consumers. The total supply of labour is fixed however, with the amount of the labour pool employed varying in fixed proportion to the amount of capital. Labour costs vary non-linearly with the amount of labour employed, which means that labour costs vary non-linearly with the amount of capital employed. So returns to labour and capital can vary.

It is still assumed that the proportion of labour required to capital is fixed over the whole period of time being modelled. This means that there is no technological progress, and also that it is not possible to substitute capital for labour.

Each iteration of the model operates as follows:

The expected returns are defined as 10% of the current market capitalisation.

The consumption, and so the payments made for consumer goods, are defined as 40% of total wealth.

If these payments are less than 20% of the available capital, then the amount of goods produced is equal to the value of the consumer payments. If the payments for consumer goods are greater than 20% of the available capital, then the goods produced are equal to 20% of the total capital; ie, the maximum production possible is 0.2 times the capital K that is in existence.

The income accruing to labour is calculated according to the amount of capital used, and so the proportion of labour employed, according to the square law.
The surplus revenue that the company generates is then the value of the consumer payments received, less the earnings income paid out.

The new value of the total real capital is then the old capital, plus the payments received for goods, less the labour earnings paid out, less the actual returns paid out.

Finally, the consumers receive their dividends from the companies and revalue the market capitalisation according to the actual returns paid out. At this point, the cycle starts again.

As in the companies model, the actual returns paid to the owners (shareholders), that is the payout ratios, can depend on whether the surplus revenue generated is greater than the expected returns or less than the expected returns. For example in model 1D the actual returns paid out are always 70% of the revenue generated. However in models 4A to 4C the actual returns paid out are equal to the real returns produced. It is noted that these payout factors are different to the ones in the companies model above; clearly these models are preliminary and in need of future calibration to real economies.

As with the commodities model, it is also possible to put a variable lag in to model the time it takes to install capital.

A further important ingredient in this model is the existence of a 'cash balance' for the householders. This is needed in their role as owners of capital and spenders of money. This cash balance can arise as an imbalance of spending outgoings against income received, as a consequence of these being dynamic models. If the cash balance is positive then this represents spare cash in the bank: the householders have received more in wages and dividends than they have spent in consumption. If the cash balance is negative, then this represents a debt to the bank, due to the consumers spending more than they earn.

In the notes following, the cash balance is referred to as H to differentiate it from the capital owned, which is now labelled Q. The consumers are assumed to be sensible, so they carry out their consumption based on their total wealth W, which is the sum of Q and H, so:

C = Ω.W    (4.2a)

or:

C = Ω.(Q + H)    (4.2b)

So, for example, if H is negative because the consumers have net debt, then consumption is reduced below that judged by the size of Q only.
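A minimal Python sketch of the iteration just described may help; the parameter values, the square-law wage scale and, in particular, the interpolating payout rule are illustrative assumptions, not a transcription of the Excel model in appendix 14.9.

    # One iteration of the macroeconomic model, under stated assumptions.
    # The payout rule is one plausible reading of the 'payout ratios': a
    # ratio of 1 pays out exactly the market's expected returns (as stated
    # in section 4.5); a ratio of 0 would pay out the surplus actually
    # generated. The original Excel model may use a different rule.

    EXPECTED_RATE = 0.10   # expected returns: 10% of market capitalisation
    OMEGA = 0.40           # consumption: 40% of total wealth W = Q + H
    OUTPUT_RATE = 0.20     # maximum production per period: 0.2 x real capital K
    K_FULL = 100.0         # capital stock at nominal full employment (assumed)
    WAGE_SCALE = 25.0      # scale of the square-law labour cost curve (assumed)

    def step(K, Q, H, ratio_up=1.0, ratio_down=1.0):
        """One period: K real capital, Q market capitalisation, H cash balance."""
        expected = EXPECTED_RATE * Q
        consumption = OMEGA * (Q + H)                # C = Omega.(Q + H), eq (4.2b)
        goods = min(consumption, OUTPUT_RATE * K)    # capacity-limited production
        employment = (goods / OUTPUT_RATE) / K_FULL  # labour in fixed ratio to capital used
        wages = WAGE_SCALE * employment ** 2         # square-law labour costs (figure 4.2.1)
        surplus = consumption - wages                # all consumer payments are revenue
        ratio = ratio_up if surplus >= expected else ratio_down
        payout = surplus + ratio * (expected - surplus)  # assumed payout rule
        K += surplus - payout    # retained revenue becomes capital; shortfalls
                                 # convert capital back into payouts at par
        H += wages + payout - consumption            # household cash account
        Q = payout / EXPECTED_RATE                   # recapitalise actual returns at 10%
        return K, Q, H

    state = (100.0, 100.0, 0.0)    # (K, Q, H) starting values, as in model 4A
    for _ in range(200):
        state = step(*state)

Note that with both payout ratios at 1 the payout equals the expected returns, so Q reproduces itself each period; this matches the observation in section 4.3 that the capitalisation is constant in model 4C.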
This model was carried out in Excel; those who wish to go through the maths in detail can paste the model into Excel from appendix 14.9.

4.3 Macroeconomic Models - Results

As expected this model can show different sorts of behaviour; some examples are given below.

Model 4A is the base model, with all the numbers designed to be nice and round. This model has payout ratios of 1 for both the upside and downside. It also allows capital to be added instantly, without any lags. It can be seen from figure 4.3.1 that the output is very stable, and so very dull.

Figure 4.3.1 here

Model 4B, shown in figure 4.3.2, has exactly the same parameters as model 4A; the only difference is that the initial values were different.

Figure 4.3.2 here

This shows just how stable this model is, with the model quickly settling down to equilibrium values. Though even in this stable model it is notable that model 4B needs to go through a number of fluctuations before it arrives at stability (cf figure 1.2.1.4).

But there is a more important difference to note between models 4A and 4B. The parameters of the models are exactly the same, but the equilibrium points are very different. Model 4A started with real capital of 100 units, and settled to an equilibrium at 100 units. Model 4B started with real capital of 400 units, and settled to an equilibrium at about 184 units. As a consequence, total capital employed at equilibrium in model 4B is much higher than that in model 4A, and more importantly, total employment is higher in model 4B than in model 4A. Also the ratio of returns to labour to returns to capital is significantly higher in model 4B (cf figure 4.3.7 below).

This is Keynes writ large. Unlike static equilibria, dynamic equilibria can have multiple points of stability. The point of equilibrium that is reached depends on the parameters of the model, but also on the initial conditions. Different initial conditions can give different equilibria even with the same parameters. Once it has reached its equilibrium, the model can stay at that point indefinitely. To change the equilibrium an exogenous force is needed. The model will not rebalance itself to a particular point; a point such as full employment for example. Mass unemployment can continue indefinitely without positive external action.
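The 4A/4B contrast can be reproduced with the sketch above; the starting values for Q and H here are assumptions for illustration, and the particular equilibrium values will differ from the Excel model's.

    # Identical parameters, different initial capital: two runs of step()
    # (as sketched in section 4.2) settle at different equilibria.
    state_a = (100.0, 100.0, 0.0)   # starts with real capital of 100 units
    state_b = (400.0, 100.0, 0.0)   # starts with real capital of 400 units
    for _ in range(1000):
        state_a = step(*state_a)
        state_b = step(*state_b)
    print(state_a[0], state_b[0])   # two different equilibrium capital stocks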
Model 4C is the most interesting, and most realistic, model. In this model a time lag has been introduced between capital being purchased and being brought into use. This is identical to the way capital is installed in the commodities models in section 3. Note that the payout ratios are still at unity. Figure 4.3.3 shows the long-term behaviour of the model.

Figure 4.3.3 here

As can be seen, the model shows regular cycles of capital being created and destroyed. Again it is important to note that this is a chaotic model, not a stochastic one. There is no stochasticity in this model. All fluctuations in the model are created endogenously from the Lotka-Volterra-like differential equations in the model. Figure 4.3.4 shows the detail of a couple of cycles.

Figure 4.3.4 here

These are real live Minskian / Austrian business cycles, but with one big exception. It can be seen that real capital K builds up in advance of the total wealth (in this simple model the paper wealth, the capitalisation, is constant); this build-up of capital is unsustainable, and so leads to a fall in real capital. Interestingly, although debt (negative cash wealth) is present, this is a lagging variable. In this model debt creation is fuelled by capital growth, not the other way round. The chaotic, bubbly behaviour is not caused by excess credit; it is caused by the basic pricing system of capitalism.

Model 4D, shown in figure 4.3.5 below, has no lag in the installation of capital. Instead this model has payout ratios of 0.7 on both the upside and the downside.

Figure 4.3.5 here

It is believed that this is a less realistic model; however it does demonstrate how highly chaotic behaviour can be generated in even a very simple model.
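The installation lag of model 4C can be sketched by queueing each period's net investment before it reaches the productive capital stock; the lag length of 4 periods, and the queueing of disinvestment in the same pipeline, are assumptions.

    # Model-4C-style capital installation lag, sketched with a queue.
    from collections import deque

    def run_with_lag(periods=400, lag=4):
        K, Q, H = 100.0, 100.0, 0.0
        pipeline = deque([0.0] * lag)    # capital paid for but not yet in use
        path = []
        for _ in range(periods):
            new_K, Q, H = step(K, Q, H)  # step() as sketched in section 4.2
            pipeline.append(new_K - K)   # this period's net investment joins the queue
            K += pipeline.popleft()      # orders from `lag` periods ago arrive
            path.append(K)
        return path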
Finally, model 4E is shown in figure 4.3.6 below. This has just a small lag of 1 unit for the installation of capital, and payout ratios of 0.8.

Figure 4.3.6 here

Interestingly, it seems that similar results can be achieved without a lag: if both interest rates and payout factors are reduced, an explosive result is also seen.

As can be seen, these minor changes in the model are sufficient to create explosive behaviour. This is a true bubble, similar to that of Japan in the 1980s, or the US in the 1920s or in the last decade. Again the cash wealth (debt) is a lagging indicator. It is possible to create explosive bubbles just from the basic pricing system of capitalism.

There is finally one important thing worth noting about the models. The values of the Bowley ratio, β, for the first four models were as follows:

Figure 4.3.7

    Model 4A    0.75 (exactly)
    Model 4B    0.92
    Model 4C    0.78
    Model 4D    0.85

The Bowley ratio is the ratio of returns to labour to the total returns. The values for models 4C and 4D are averages; the Bowley ratio varies wildly over the course of a cycle in these models. The numbers above are close to the 'stylised facts' for the Bowley ratio, and are of considerable importance. This is returned to at length in section 4.5 onwards.
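The ratios in figure 4.3.7 can be measured from a model run in the following way; a small helper, assuming the wage and payout series have been recorded period by period.

    def bowley_ratio(wages_series, payouts_series):
        """Time-averaged ratio of wage income to total (wage + capital) income."""
        labour = sum(wages_series)
        capital = sum(payouts_series)
        return labour / (labour + capital)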
4.4 Macroeconomic Models - Discussion

As with the previous models, the results above show that a simple combination of classical economics and a dynamic analysis gives interesting results that mirror real economies. The author expected that such a model would be easily capable of producing boom and bust business cycles, and this is discussed in some detail in this section. The production of a suitable Bowley ratio was a surprise, though a pleasant and very important one. This is discussed further in sections 4.5 to 4.7.

Leaving aside the Bowley ratio, the most interesting result of this model is that the booms and busts are generated internally via an endogenous spiral of creation of wealth. In the model real capital is installed, which generates more paper wealth, which generates more consumption, so feeding into another cycle of wealth creation. The upswing is finally constrained by rising wages making the capital unproductive. This then generates a downswing of declining wealth, consumption and wages. This is the normal cycle of capitalism as described by Minsky and the Austrians. Booms and busts are endogenous. Free markets are not inherently stable.

Again, as with the income and company models, it is noticeable that there are many things that are standard elements of neo-classical or Keynesian economic theory which are simply not needed to produce this macroeconomic model; these include:

• Economic growth
• Population changes
• Technology changes
• Productivity growth
• Investment
• Saving
• Accelerators
• Multipliers
• Shocks (exogenous or endogenous)
• Stochasticity (in any form)
• Different initial endowments (of capital or wealth)
• Utility functions
• Production functions

It has been noted that marginality has worked its way into the modelling in the form of the pricing curve for labour. This is a reasonable argument, as labour is a commodity that is truly unchangeable in its supply. Although marginality might be a mathematically useful way to address this, the history of entropy and information suggests there may be better ways to do so. More importantly, the results of the model show that the detailed form of the curve is completely irrelevant to the model. The curve simply needs to be convex, to ensure that labour costs eventually choke the growth. Within reason, any convex curve will do this. So the actual details of the calculations of marginality are irrelevant and do not have any influence on the long-term equilibrium, the cycle frequency or the distributions of wealth and income. This is discussed further in section 4.7 below.

It is also worth considering the 'efficiency' of the economy in this model. This model again creates chaotic behaviour endogenously. There is no stochastic noise in this model. It is politely suggested by the author that a system that endogenously creates booms and busts, with short-term creation of excess capital and, far worse, short-term destruction of the very same capital, may not, in fact, be allocating capital in a particularly efficient manner.
Investment and saving have been deliberately ignored in this model, as they have been in all previous models. This is because, as the data given from Miles and Scott in section 1.3 show, saving and investment are a minor part of the economic cycle. The core driver of business investment is the availability of cash streams. When firms have more money coming in from revenue than they need to pay out as dividends they use it for investment. When they don't have spare money they don't invest. The mechanics of saving and investment are a side-show and a diversion from the base model of macroeconomics.

Similarly the general public is assumed to simply consume a fixed proportion of their wealth. In the real world it seems much more reasonable to assume that people who gain more wealth will divert a greater portion of this to saving, particularly in an environment, as here, in which companies appear to be showing increasing profits on their capital. I believe this is a simplification rather than a flaw. The point of the model is that endogenous business cycles arise at the heart of the system of pricing financial assets. Allowing transfers of excess savings in booms to investment rather than consumption would clearly exacerbate these booms. Indeed it is possible that the effects of saving and investment multipliers might be significant, but that is not the issue; the issue is that saving and investment are a multiplier rather than the root cause of the instability.

In identical fashion to the companies models above, expectations and behaviouralism do enter into the model in two different ways: firstly with regard to the pricing of stocks, and secondly with regard to the retention of capital within companies. Again these are obvious forms of behaviour and are supported by economic research as discussed in section 2.1 above.

It can be seen from the model results that economies can behave very differently according to relatively small changes in input parameters. This is because a system like this can show different regions of behaviour, a general property of Lotka-Volterra and other similar non-linear differential equation models. Depending on the settings of the variables in the model, there can be three different cases for the outputs.

Firstly, the outputs can be completely stable, quickly going to constant values; this was seen in models 4A and 4B.

Secondly, the outputs can be locally unstable, with values constantly varying but hunting round within a prescribed range of values; this is similar to the lynx and hares Lotka-Volterra model discussed back in section 1.2. This appears to be the way that most normal economies behave. This effect can be caused by the behaviour of capital, either by deliberate hoarding of capital by company managers, or by the time it takes for capital to be installed. The cyclical rise and fall of capital in business cycles is analogous to the cyclical rise and fall of biomass in a biological Lotka-Volterra system. Just as the hares and lynx respond rationally to the available grass, so business investors and speculators react rationally to the opportunities in the economy.

Finally, the outputs can be explosive, moving quickly off to ± infinity.
In models 4A and 4B these values were 'fixed' to ensure a stable model; in 4C and 4D the parameters were fixed to give a quasi-stable cyclical model; in model 4E they were changed to give explosive models. In the real world it appears that economies operate largely in zones 4C/D, with occasional excursions into zone 4E.

Model 4E suggests that if both interest rates and payout rates are too low then the company sector is too profitable, and capital expands exponentially before finally wrecking the whole economy in a glut of capital; see figure 4.3.6 above. It seems plausible to argue that this reflects what actually happened in the US during the late 1920s, and Japan in the late 1980s. Following each of these bubbles the respective economies failed to return to a self-regulating pattern of booms and busts, but appear to have been moved to new equilibria with much less productive economic patterns. So the economies moved very quickly from a 4E to a poorly performing 4A/B. It is the belief of the author that keeping interest rates and payout ratios too low allows a second common form of macroeconomic suicide. (The first form of economic suicide is introduced in section 4.6. Both forms of suicide are discussed in more detail in section 4.10.)

A very important point to emphasise in the models above is the absolute lack of stochasticity. While there is certainly a significant element of stochasticity in real markets, the macroeconomic model above contains no stochasticity. The model is not stochastic, it is merely chaotic. Chaotic models like this are common in physics, astronomy, biology, engineering, and in fact all of the sciences other than economics, where determinism has hunkered down for a very effective last stand. The failure of these models to penetrate into mainstream economics, given the obvious turbulence of stock, commodity, housing and other financial markets, is puzzling.

This endogeneity of chaos in business cycles is of profound importance. Standard economic theory, whether Keynesian lack of demand or the impacts of technology in 'Real Business Cycle theory', never mind neoclassical economics, seems incapable of believing that chaotic short-term behaviour can be anything but externally driven. Exogenous drivers are simply not needed for quasi-cyclical, or explosive, chaotic behaviour; all that is needed is the use of the correct modern mathematics, where 'modern' means post-1890. This mathematics, and chaotic systems in general, is discussed in section 6 below.

As discussed above, Lotka-Volterra models have been used in Marxian analysis by Goodwin and others, though the models can be somewhat complex. The models presented above seem more efficacious than the Goodwin-type Lotka-Volterra models, as they don't need:

• population change
• growth in labour force
• technology change
• productivity growth
• inflation (long-term)
• accelerators
all of which are used as standard in the Goodwin and descendant models.

A central problem in the thinking of Goodwin and the researchers that followed him is in the idea of growth. It appears to have been assumed that to model short-term cycles of growth and decline it was necessary to include long-term economic growth rates. So these models include growth in the labour force, productivity, money supply, etc. This is a bit like trying to model waves on the ocean's surface by including things that cause changes in sea level, such as the tidal effects of the sun and moon, evaporation, precipitation, glacier melt rates, etc. This brings a lot of irrelevancies into the basic model, and makes it very hard to build the basic model. Even without any of the things listed above, natural cycles can occur that build up too much capital. That is not to argue against the secondary importance of any of the above factors, especially in long-term economic cycles.

Going back to the evidence of Harvie [Harvie 2000] and Barbosa-Filho & Taylor [Barbosa-Filho & Taylor 2006], the cycles for the mainland European countries appear to be long-term, on a decadal scale, which would suggest a strong role for technology change and productivity growth (though very little for population change). However the cycles for the US and UK appear to show much faster oscillations, of only two to three years. Intuitively it is difficult to see how technology change could operate significantly on such short timescales, and this is more suggestive of the operation of the normal business cycle modelled above. Indeed the simple model proposed above may be more appropriate for modelling the regular short-period cycles of booms and crashes seen in Victorian times.

The important thing to note is that the basic instability in financial markets is much deeper than that proposed by Goodwin. Goodwin-style feedbacks may exaggerate this basic cycle, or add longer super-cycles; however in this regard it appears that the basic insight of Minsky and the Austrians with regard to the essential instability of capitalism was correct.

However, although I believe this basic Minskian/Austrian insight is valuable, it is also notable that to build models 4A to 4E, and create dramatic business cycles, you don't actually need any of the following:

• governments
• fiat money
• fractional reserve banking
• speculators
• Ponzi finance
• debt deflation

or other common elements of the Austrian school or the work of Fisher and Minsky.
Debt, in the form of a negative cash balance, certainly does appear in the cyclical and explosive models. But models 4C and 4D show that the debt follows the cyclical instability of capital rather than the other way round. I would not wish to understate the importance of debt in exacerbating business cycles; indeed the role of debt appears to be very interesting and important, and is discussed further in section 4.6 below. However debt itself is not the prime cause of the business cycles. Again, it is not suggested that any of the factors listed above are unimportant; however it appears that all the other factors are just potential magnifiers of an underlying inherent instability.

The instability is very basic, and, in the short term at least, perfectly rational. The instability arises, as Minsky noted, from the fundamental fact that paper prices of assets are based on projected future cash flows, not on costs of production. This is Minsky's crucial insight, of much greater importance than his analysis of the debt cycle. This is the same assumption originally proposed in the companies model in section 2.2 above.

This instability naturally produces a growing cycle of apparent wealth, which is turned into excess capital as predicted by Hayek [Hayek 1931] in Austrian business cycles. But contrary to the Austrians, and in line with research data [Kydland & Prescott 1990], the liquidity or excess paper wealth is initially generated within the valuation system of capitalism, not by lax government policy. Creation of liquidity and monetary growth are endogenous to the basic pricing mechanisms of the finance system. Endogenous creation of financial wealth then feeds back into the creation of more real capital, so creating more financial wealth. This endogenous creation of financial wealth then gives apparently secure paper assets against which debt can be secured, and of course this debt allows yet more capital creation.

Clearly, if the underlying system is unstable, with endogenous liquidity production a la Minsky, then other factors such as excessive debt, speculation, fractional reserve banking and inappropriate central bank intervention policies will all magnify the size and damage of the underlying cycles. But it is not excessive debt, speculation, fractional reserve banking or poor central bank policy that causes the boom and bust cycles. The cycles are caused by the basic pricing system of capitalism. Governments may of course fail to calm the markets by extracting liquidity in a timely manner, but it is scarcely the fault of governments that most investors are momentum chasers rather than fundamental analysts. Just as central banks are expected to control changes in the money supply caused by fractional reserve banking, it seems appropriate that they also need to control liquidity growth caused by Minskian asset pricing. This is discussed in more depth in section 8.2.1 on liquidity below.

As noted previously, Minsky, although a follower of Fisher and Keynes, shared the Austrians' disdain for mathematics. It is the author's belief that bringing a dynamic mathematical approach, on the lines of Lotka-Volterra modelling, to Minskian and Austrian ideas might not only give more weight to both these approaches, but also show them to be very comfortable bedfellows.
Essentially the company, commodity and macroeconomic models are all simple composites of ideas from Minsky and the Austrian school, though my producing them in this way happened more by accident than design. The models have Minsky's basic split between 'normal' assets such as goods and services, which are priced on a mark-up basis, and financial assets, which are priced on the basis of expected future cash flow. Following Minsky, and ultimately Keynes, the expectations of future flows are simplistic projections of present flows [Keen 1995].

Unlike Minsky, the models use simple known behaviour of capital to explain the source of instability. In the companies model this was company managers hoarding incoming spare cash, and using it to build more capital. In the commodity model the instability was caused by the time actually taken to build and install new capital. In the macroeconomic models, either or both of these factors could cause instability. In this sense the models follow Austrian ideas. This has the advantage over the Minsky models that you don't need a complex financial system, speculators, Ponzi finance, etc, to form the instability. You can get the instability in pretty much any system where financial assets can be overvalued; this can be Industrial Victorian Britain with its savage business cycles, or even the Roman Empire (see section 4.10 below).

The critical insight of Minsky, in contrast to the Austrians, and seen in these models, is that liquidity and new credit are generated endogenously in even the most basic of financial systems. You don't need governments to create excess credit, though certainly they can make things worse. In fact, faced with endogenous credit creation, you do need governments to actively remove credit and liquidity when financial assets become overpriced.

In defining this macroeconomic model, a number of assumptions were made. I would like to briefly review these here.

Note that the assumption of conversion of capital to payouts at par in a downturn does not undermine the arguments. The losses incurred in a fire sale of assets to meet investor demands would simply exaggerate the viciousness of the cycles downwards.

It was assumed that the ratio of capital to labour is fixed over the time of the business cycle, and that it is not possible to substitute capital for labour. There are two parts to discuss with this assumption. Firstly, in the short term, going into a boom, replacing labour with capital would simply allow further excess capital to be installed before wage inflation would kick in, so making the booms even larger. The resultant larger overhang of capital would then make the following slump more severe. So relaxing this assumption would simply make the business cycles worse. More importantly, the model shows that, in the long term, at the level of the economy as a whole, there is in fact a fixed ratio of capital to labour at any given set of market conditions. So it is not actually possible to substitute one for the other. Much more on this in sections 4.6 to 4.8 below.

Note that allowing the market interest rate to float, say by making it the moving average of real returns over the previous few periods, would also have a large magnifying effect, as sketched below. As more capital was employed, overall interest rates would go down, making previously unprofitable capital investment profitable. Again this would encourage further excess capital creation in the booms.
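That magnifier could be sketched as a small change to the earlier step() function; the averaging window is an assumption, since the text does not specify one.

    # A floating market interest rate, as a moving average of recent real
    # returns (window length assumed); this would replace the fixed
    # EXPECTED_RATE in step() and, as argued above, magnify the cycles.
    def floating_rate(recent_returns, window=8):
        tail = recent_returns[-window:]
        return sum(tail) / len(tail)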
Finally I would like to return to a major assumption of the companies model in section 2.2. In this model capital was deliberately, and artificially, renormalised in each of the model iterations
to keep a constant value of K. As I hope is now clear, this was a necessary fix in the company model to prevent the introduction of severe cycling in the model output.

Directly comparing my own macroeconomic models with those of Wright is not straightforward. My own models include a financial sector, which is clearly more realistic, as Wright acknowledges in the Social Architecture of Capitalism [Wright 2005], where he notes that conflation of capital concentration with firm ownership may distort modelling results. So clearly Wright's models can not show cycles of debt build-up and draw-down. Despite this, Wright's models do show recurrent booms and recessions, with much more complex behaviour than my own. Although Wright's business cycles are debt-free, he models individual companies/owners, where my own model treats the business sector as a whole. As a result recessions in Wright's models are of differing length and are quasi-periodic; this is clearly superior to my own models. Wright's models are also superior to my own in that they include unemployment. My models just measure total over-employment and under-employment against a nominal full employment. Despite these substantial differences, both Wright's and my own models produce cyclical endogenous business cycles from simple models based on statistical mechanics and classical economics.

4.5 A Present for Philip Mirowski? — A Bowley-Polonius Macroeconomic Model

"I mean the stability of the proportion of national dividend accruing to labour, irrespective apparently of the level of output as a whole and of the phase of the trade cycle. This is one of the most surprising, yet best-established, facts in the whole range of economic statistics... Indeed... the result remains a bit of a miracle." [Keynes 1939]

"...no hypothesis as regards the forces determining distributive shares could be intellectually satisfying unless it succeeds in accounting for the relative stability of these shares in the advanced capitalist economies over the last 100 years or so, despite the phenomenal changes in the techniques of production, in the accumulation of capital relative to labour and in real income per head." [Kaldor 1956]

"FUTURE ISSUES - Theory 1. Is there a deep explanation for the coefficient of 1/3 capital share in the aggregate capital stock? This constancy is one of the most remarkable regularities in economics. A fully satisfactory explanation should not only generate the constant capital share, but some reason why the exponent should be 1/3 (see Jones 2005 for an interesting paper that generates a Cobb-Douglas production function, but does not predict the 1/3 exponent). With such an answer, we might understand more deeply what causes technological progress and the foundations of economic growth." [Gabaix 2009]

Whenever economists hit a bad patch, it is inevitable that outsiders will begin to sneer how it is not a science and proceed to prognosticate how "real science" would make short work of the crisis. This is such a tired Western obsession that it is astounding that it has not occurred to critics that such proleptic emotions must have occurred before, and are thus themselves a part of a chronic debility in our understanding of economic history. As I have shown elsewhere in
detail, neoclassical economics was born of a crude attempt to directly imitate physics in the 1870s, and American orthodoxy was the product of further waves of physicists cascading over into economics in the Great Depression and WWII...

...Actually, it is understood among the cognoscenti that physicists have again been tumbling head over heels into economics since the 1980s, as their own field experienced severe contraction at the cessation of the Cold War. And where did most of them end up? Why, in the banks, of course, inventing all those ultra-complex models for estimating and parceling out risk. Some troubled to attain some formal degree in economics, while others felt it superfluous to their career paths. In any event, the exodus of natural scientists into economics was one of the (minor) determinants of the crisis itself—without "rocket scientists" and "quants," it would have been a lot harder for banks and hedge funds to bamboozle all those gullible investors. So much for the bracing regimen of a background in the natural sciences.

If anything, responses to critics that tended to pontificate upon the nature of "science" were even more baffling than the original calls for deliverance through natural science in the first place. Economists were poorly placed to lecture others on the scientific method; although they trafficked in mathematical models, statistics, and even "experimentation," their practices and standards barely resembled those found in physics or biology or astronomy. Fundamental constants or structural invariants were notable by their absence. Indeed, one would be hard pressed to find an experimental refutation of any orthodox neoclassical proposition in the last four decades, so appeals to Popper were more ceremonial than substantial. Of course, sometimes the natural sciences encountered something commensurable to a crisis in their own fields of endeavor—think of dark matter and dark energy, or the quantum breakdown of causality in the 1920s—but they didn't respond by evasive manoeuvres and suppressing its consideration, as did the economists. In retrospect, science will be seen to have been a bit of a red herring in coming to terms with the current crisis. In the heat of battle, economists purported to be defending "science," when in fact, they were only defending themselves and their minions. [Mirowski 2010]

As a physicist myself, I am somewhat embarrassed to admit that physicists as a class stand guilty as charged when accused of unnecessarily increasing the complexity and opacity of finance. This is the more embarrassing as the behaviour is so far from the norm in physics, where careful investigation and gaining of understanding is the general aim, and true kudos is gained by discovering neat and beautiful solutions to seemingly complex and insoluble problems. The entry of quants into finance seems not only to have been marked by a joy in the deliberately complex, but also a wilful desire to avoid any understanding of what is really happening in an economic or financial system. As previously noted, physicists seem very comfortable in using wealth and income interchangeably; some even conflate these two concepts with money. From my own conversations, I am led to doubt whether a majority of physicists working in finance could successfully define the difference between a real and a financial asset.

As a penitence, on behalf of a profession behaving badly, I had hoped in this section to present to Philip Mirowski the explanation of a basic 'constant' in economics.
Sadly for me, the constant turns out not to be constant at all but merely a humble ratio; an indicator of an underlying equilibrium. Unfortunately it cannot be described as either 'fundamental' or 'invariant'. On the bright side this at least allows for changing of the 'constant', and indeed it is one of the aims of later sections to change this 'constant' to the benefit of the population in general. Even more worryingly this constant may simply be seen by many as a trivial accounting identity, a red herring at best.
I do not believe this is the case and, however humble this ratio may be, I believe it is the first 'constant' to be explained in economics, and as such is worthy of note.

The constant in question is the ratio of earnings received by labour to those received by labour and capital, the Bowley ratio β that was first introduced in section 1.3 above. Before looking at the derivation of the Bowley ratio, it is worth considering this 'constant' in more detail.

For most mature economies the constant varies between about two-thirds and three-quarters and can be very stable, as discussed in section 1.3 above. Young gives a good discussion of the national income shares in the US, while Gollin gives a very thorough survey of income shares in more than forty countries [Young 2010, Gollin 2002]. In emerging economies β can be much lower, as low as 0.5. Currently, and exceptionally, in China it may be as low as 42% [Bai et al 2006, Subramanian 2008]. Arthur Lewis [Lewis 1954] has explained this as being due to wages being artificially depressed by the reserve of subsistence workers, simultaneously with the wealthy being able to save more due to the low living costs caused by low wage rates. Once economies absorb this spare rural labour, and pass their 'Lewisian turning point', then the ratio of returns to labour to total income stabilises and moves only slightly. In the UK, the first country in the world to absorb its rural labour force, the ratio has been fairly stable for a century and a half.

The thing about this stability is that the more you consider it, the more bizarre it seems. In the last 150 years Britain has changed from a nation of factories powered by steam engines to a modern service economy. The amount of capital currently installed in the UK is many times greater than that of 150 years ago; labour-intensive industry has all but disappeared. Wealth levels have changed incredibly. In the 1850s gdp in the UK was comparable to current gdp in Indonesia or the Philippines; however life expectancy in the UK in the 1850s was roughly half that of Indonesia or the Philippines today [gapminder]. It is quite extraordinary that the Bowley ratio has remained roughly constant throughout this period.

In fact it is counter-intuitive. For somebody in Victorian Britain, as in modern-day Indonesia, the majority of income would have been spent on food and basic housing, with little left over for anything else; most money is paid to other people carrying out labouring duties. As incomes rise it would naturally be expected that more money would be spent on manufactures and property, and that more spare cash would be available for investing in capital of one form or another, so increasing the returns to capital. Also, as wages rise it would seem sensible for capital to substitute for labour, and again for returns to capital to increase at the expense of labour. In the long term, total factor productivity should increase, reducing the returns to labour and increasing those to capital.

Indeed futurologists have been predicting for most of a century that as capital gets more efficient and productive the need for labour should slowly decline to nothing. To date these predictions have been conspicuously wrong. Working weeks have barely declined in the last forty years, huge numbers of women have entered the labour markets, and people continue to complain of the problems of the work/life balance.
Indeed, at the time of writing this section, France is paralysed by strikes trying to prevent an increase in retirement ages.

In the long run it seems logical that mechanisation and the increasing use of capital would result in the Bowley ratio slowly moving towards zero.
In fact, if you analyse the data on a sectoral basis, this is exactly what is happening. Young [Young 2010] shows clearly that for agriculture and manufacturing, returns to labour have declined significantly while returns to capital have increased. In the US returns to labour in agriculture have dropped from nearly 0.8 of total income in 1958 to less than 0.6 by 1996. In manufacturing, the change has been from 0.75 to two-thirds. This has happened because labour has been slowly displaced by machines in these industries. The fascinating thing is that despite the changes in the Bowley ratios for these two (very large) sectors, the national value of the Bowley ratio has stayed near constant, between 0.69 and 0.66, using the same measures. The reason for this is that the labour-intensive service sector has grown dramatically in size through the same period, and this has kept the national balance of returns to labour and capital very nearly constant. In the discussions that follow it is hoped that these puzzles will be explained.

As shown in section 4.3 above, the fairly randomly chosen model 4A produced an output with a Bowley ratio, of waged earnings to total earnings, of exactly 0.75 with zero debt. (It is to be noted that Wright found similar results, with β equal to 0.6 and 0.55 in his two papers.) This was the subject of further modelling.

A first problem with the models used in section 4.3 above is that they have too many degrees of freedom. Depending on the parameters and the starting values of a model run, different zones of stability can be encountered, and even if the model is restricted to options that end in stable, stationary outputs, different end points can be reached with the same parameters but different starting positions.

A second problem is the role of the 'cash balance', H, which can either be a positive surplus or a negative debt. In many of the models the stable output can have very large positive or negative cash balances, of the same order of size as the capital wealth Q. As is often the way with debt, an item that was used as a minor temporary convenience ends up taking on a major unlooked-for negative role. Having been introduced as a simple method of ensuring that the sums add up, the role of this cash balance is not clear, and it is not obvious that it is a meaningful item. There are problems as to exactly who or what this money is borrowed from / lent to, and also why interest is not charged on the lending or borrowing.

Firstly, to remove these problems, the models were rerun in Excel, deliberately choosing parameters that stabilised into stationary outputs.

A second condition used was that the payout ratios, both positive and negative, were set to 1.0. This makes for an immediate simplification of the model, as company payouts are simply equal to the market expectations and make no reference to the profits produced by the companies. In this model payout ratios are not necessary because, although the total capital can increase and decrease, other mathematical limitations prevent the capital from shrinking to zero, at least in the stationary and periodic zones.

Thirdly, using 'solver', the range of stationary outputs was then restrained to the single solution that satisfied the requirement that there be no net borrowing or lending; ie the cash balance was always constrained to zero. This gives the single 'Bowley-Polonius' equilibrium point. With net borrowing and lending fixed to zero, the philosophical problem of what exactly the cash balance is becomes irrelevant.
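In place of the Excel solver, the same selection can be sketched in Python. Which quantity the original solver varied is not stated, so varying the starting capitalisation here is purely an assumed illustration.

    # Of the stationary outputs, keep the one with no net borrowing or
    # lending (H = 0): a crude stand-in for the Excel 'solver' step.
    def stationary_H(Q0, periods=2000):
        K, Q, H = 100.0, Q0, 0.0
        for _ in range(periods):
            K, Q, H = step(K, Q, H)   # step() as sketched in section 4.2
        return H

    Q_bp = min(range(20, 400), key=lambda q: abs(stationary_H(q)))

In the toy parameterisation of the earlier sketch this scan settles near Q = 83, where wages are about 25 and payouts about 8.3, giving a Bowley ratio of 0.75; this is exactly 1 - (r/Ω) = 1 - (0.1/0.4), anticipating equation (4.5a) below.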
By changing the parameters of the model systematically some very interesting results arose.

The first interesting thing was the role of the pricing of labour. As discussed in section 4.2 above, this model assumes that labour can be in scarce supply, and that the price of labour depends on the amount required. As such the concept of marginality has worked its way into the modelling in the form of a pricing curve for labour; this is a reasonable approach, as labour is a commodity that is truly unchangeable in its supply.

However, investigating the model shows that the actual form of the curve is not relevant to the model. If you change the parameters of the labour curve, then the model values change, with an offsetting increase or decrease in the cash balance. But if you reoptimise the model and force the cash balance back to zero, then the model returns to an equilibrium point with exactly the same value for the Bowley ratio. This is looked at again in section 4.7. Within reason, the parameters of the labour supply curve are simply not relevant to the ratio of wages to profits. The curve simply needs to be convex, to ensure that labour costs eventually choke the growth of the economy with higher costs. Any reasonable convex curve will do this. So the actual detailed calculations of marginality are utterly irrelevant and do not have any influence on the long-term equilibrium. ('Within reason' means that there are some labour curves that prevent the model coming to an appropriate equilibrium; that is, they don't allow an equilibrium at zero cash balance. But as long as the curve allows an equilibrium, the parameters of the curve do not affect the location of the equilibrium.)

The second interesting thing is that, at the B-P equilibrium, the Bowley ratio is influenced by only two things: the consumption rate and the profit rate. Moreover, the ratio is given by the very simple form:

β = Bowley ratio = waged income / total income = (Ω - r)/Ω = 1 - (r/Ω)    (4.5a)

It is straightforward to check equation (4.5a) against reality. A suitable long-term profit rate could be anywhere between long-term interest rates and long-term real stock-market returns. Long-term real interest rates are generally in the region of 2% to 5% [Homer & Sylla 1996, Measuring Worth], see also figure 4.5.1 below.
Long-term stock-market returns appear to be in the region of 7% to 8% [Campbell 2003, Ward 2008], see also figure 4.5.2 below. Consumption is typically about 60% of gdp [Miles & Scott 2002, section 2.2, fig 2.3], while non-residential capital stock is typically 2.5 to 3 times gdp [Miles & Scott 2002, sections 5.1 & 14.1]. Taken together this would give Ω, the consumption rate as a proportion of capital, a range of about 0.2 to 0.25.

Substituting into equation (4.5a) this then gives a possible range of values for the Bowley ratio of between 0.60 and 0.92. Clearly this range is a little on the high side when compared with the 'stylised facts' of observed Bowley ratios in the real world, varying between the values of 0.5 and 0.75. We are however in the right ballpark. (The figures also confirm the common-sense notion that stock-market returns are more appropriate than interest rates for 'r'.)
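As a worked check of the end points of that range, using the figures just quoted:

    \beta = 1 - \frac{r}{\Omega}, \qquad
    1 - \frac{0.08}{0.20} = 0.60, \qquad
    1 - \frac{0.02}{0.25} = 0.92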
As discussed above, intuitively it is not obvious why Bowley's law holds and the ratios of returns to capital are not much higher than the returns to labour. Using the basic ideas of classical economics we would expect the returns to capital to have increased significantly as machines have got steadily more productive over the last two hundred years. Neoclassical ideas of utility and marginality have no theory to explain this. What equation (4.5a) says clearly is that the Bowley ratio will always be less than one, and, given that rates of return are generally much lower than consumption rates, the value will be closer to one than zero. This agrees in general with the stylised facts, if not in detail. In section 4.6 below possible reasons for the mismatch between the values produced in the model and the real-world values are discussed. These reasons are speculative, so before moving on to this I would first like to discuss equation (4.5a) and its consequences in a little more detail.

Firstly it should be noted that this equation was discovered by experimenting with the parameters of the model. The simulations give results that match the formula above to multiple decimal places. With a little playing it turns out that it is in fact quite straightforward to derive formula (4.5a) from first principles.

When the model is at equilibrium, all values of flows and stocks are constant (in this part of the modelling, only models giving stable time outputs were used; the models suggest that the periodic models move around this point on average, as would be expected in a Lotka-Volterra model). At this equilibrium point, if the total capital Q is to be constant, then the total income must equal the total outgoings, so the algebra works as follows (note that for simplicity the summations have been dropped; all variables are assumed to be summed over the whole economy).

Consumption = Income, so, with e the waged income and π the profits:

C = Y = e + π    (4.5b)

Here, at the Bowley-Polonius equilibrium, H = 0 and W = Q. Also, the consumption ratio Ω is defined by:

Ω = C/Q    (4.5c)

Trivially, the profit rate is defined by:

r = π/Q    (4.5d)

If we multiply equation (4.5b) by equation (4.5d), then we get:

π.C/Q = r.Y    (4.5e)

Substituting from (4.5c) into the left-hand side gives:

π.Ω = r.Y    (4.5f)

Rearranging gives:

π = (r/Ω).Y    (4.5g)

Substituting from (1.3u) gives the profit ratio:

ρ = π/Y = r/Ω    (4.5h)

Subtracting both sides from unity gives:
1 - ρ = 1 - (r/Ω)    (4.5j)

or, substituting from (1.3v):

β = Bowley ratio = 1 - (r/Ω)    (4.5k)

The base equation here is (4.5h), which gives the ratio of returns from capital to total returns. This equation looks suspiciously like an equation of state, discussion of which will be postponed to section 4.7. Whether equations (4.5h) and (4.5k) are sufficiently 'fundamental' to satisfy Philip Mirowski remains to be seen; I would ask judgement to be reserved until the end of section 4.7. Multiplying consumption by interest rates isn't an 'obvious' thing to do, and clearly I discovered this derivation by reverse engineering my model output.

At this point, more observant readers may have noticed something familiar about equation (4.5k). Equation (4.5k) gives:

β = 1 - (r/Ω)    (4.5k)

while back in section 1.3 equations (1.3v) and (1.3w) defined the Bowley ratio as:

β = 1 - (r/Γ)    (4.5l)

This is made simpler by looking at the profit ratio ρ; then (4.5h) and (1.3w) give:

r/Ω = r/Γ    (4.5m)

which clearly means:

Ω = Γ    (4.5n)

From the definitions of Ω and Γ it then follows that:
C/Q = Y/Q    (4.5o)

where C is the consumption and Y is the total income from wage earnings and profits/dividends, etc. From which trivially we arrive at:

C = Y    (4.5p)

which we have seen a very long time ago as (1.3b). This is of course a basic assumption of all traditional macroeconomics, and so is something of an anticlimax; like setting out across the Atlantic to find the Indies, and instead discovering Rockall.

It is however firstly worth noting that while this identity is an assumed equality in traditional economics, it is a self-balancing outcome of the GLV and L-V models used in this paper. Consumption is not defined as equal to income or vice versa: consumption of individuals rises and falls with wealth, wealth changes with income and consumption, and income depends on consumption. In the models in this paper the dependencies go round in circles, hence the Lotka-Volterra outputs; the equality of total income and consumption naturally falls out at the equilibrium of the model.

This leads to a much simpler derivation of the Bowley ratio:

β = 1 - ρ and ρ = π/Y, so β = 1 - π/Y, by definition;

also:

Ω = C/Q and r = π/Q, by definition;

but C = Y, so:

ρ = π/Y = π/C = (π/Q) / (C/Q) = r/Ω

and so:

β = 1 - (r/Ω)

QED.

Of course the derivation above does not require a single line of my modelling, theorising or pontificating. And for most economists it will appear to be a trivial and unimportant accounting identity.
But it isn't. It is all a question of directionality. Of cause and effect.

For most people it is 'obvious' that consumption follows income, ie that people earn then spend, or that:

C = Y

Actually it is the other way round:

Y = C

or, more accurately:

Γ = Ω

It is the consumption rate Ω that defines Γ, the ratio of total income to capital. Trivially this is the case in my models, where r and Ω are fixed and Γ is allowed to float. But of course this is not sufficient justification.

The problem with the economic literature with regard to the Bowley ratio is that economists have first defined the profit ratio and Bowley ratio as:

ρ = r/Γ

β = 1 - (r/Γ)

They have then spent the last hundred years or so trying to explain the two ratios above by attempting to look at the microeconomic structure of industry that could affect r and Γ. This has almost entirely revolved around the analysis of 'production functions', the supposed microeconomic relations between capital and labour. The Cobb-Douglas production function has become a particular focus of attention, as its form gives rise to constant shares of returns to labour and capital.

(I am somewhat reluctant to criticise Gabaix, as he is one of the few economists who has recognised the importance of power laws and other 'anomalous' invariants in economics. However his quote at the start of this section shows how deeply ingrained within economics this approach has become. Gabaix defines the solution to the problem of the Bowley ratio as the finding of a theory that not only produces the Cobb-Douglas production function, but also gives certain fixed exponents for the Cobb-Douglas function.)
There are however very major problems with this approach.

Firstly, real analysis of companies suggests that any meaningful production function needs to be based on high fixed costs and increasing returns, and is far away from the Cobb-Douglas or other standard production functions used in neoclassical economics.

Secondly, as the data from Young [Young 2010] shows, the relative shares accruing to labour and capital can change quite significantly within individual sectors such as agriculture and manufacturing. This shows that production functions are not giving the required output on a sector-by-sector basis. (Casual inspection of company accounts shows that returns to labour and capital can vary dramatically from company to company.)

The third and most important reason is the problem of following through the logical steps. Firstly, traditional economics states that production functions define the relationship between r, the rate of return to capital, and Γ, the rate of total income to capital. Secondly, traditional economics states that total income is equal to total consumption, so, logically, Ω = Γ. Putting these two statements together logically means that production functions, the microeconomic structure of the commercial sector, define the consumption rate Ω, and with it the saving rate. (This leaves aside r for the moment; we will return to r shortly.)

This is very difficult to swallow. Squirrels save. As do beavers. And also some woodpeckers and magpies. Laplanders build up their reindeer herds as a form of saving, just as Arab pastoralists build up their herds of camels and goats, and the Masai and BaKgalakgadi build up their cattle herds. Almost all agricultural societies store grains and other foods to tide them from one harvest to the next. And whether you live in the tropics with alternating wet and dry seasons, or a temperate climate with warm and cold seasons, saving is a biological necessity, genetically selected in human beings for its beneficial outcomes. From a behavioural point of view saving is a deeply ingrained human behaviour that borders on the compulsive. Most people put money away for a rainy day. While Bill Gates and Warren Buffett have shown extraordinary benevolence, they both continue to hoard wealth far beyond their possible needs.

Leaving biology aside, traditional economics has well-established logical theories for saving. Lifetime cycles make it logical for young, and especially middle-aged, people to save to ensure support in their old age.

Whether you look at biology or economics, savings rates are largely exogenous to the economic system. They are defined by people's assessment of, and fear of, an unknown future.

Clearly my use of Ω as a consumption function is simplistic. Ω uses only total wealth as a definer of consumption. In reality consumption and saving decisions are going to depend on current income and projected earnings in a complex manner. In particular, individual consumption and spending decisions will vary significantly with age and family circumstances.
Indeed an interesting paper by Lettau and Ludvigson [Lettau & Ludvigson 2001] suggests that there is a constant rebalancing of asset wealth to ensure long-term consumption, and that this feeds back predictably into asset prices. In reality, as people are born and die at roughly the same rates, the total pattern is relatively fixed, and over the long term national consumption rates are relatively steady.

Clearly consumption and savings rates are affected by economic fundamentals. Savings rates go down, and consumption goes up, in booms, when returns look good and fear of unemployment is low. In recessions savings rates go up, and consumption goes down, as returns go down and fear of unemployment is high. But these reasons simply reinforce the hypothesis of exogenous drivers of biology and economic lifetime planning for consumption and saving. Despite the changes with economic cycles, over the long term savings rates show consistent trends linked to the relative wealth of a society, as originally described by Lewis [Lewis 1954].

The point here is that Ω can be explained by long-term societal trends such as age, sex, family size, amounts of spare labour in a society and the state of a country's social-security system. Short-term trends can be explained by return rates of investments, unemployment rates, etc. While Ω is not an absolutely fixed exogenous variable, it is a slow-changing variable that can be calculated from mostly long-term variables.

It stretches credulity to breaking point to believe that saving and consumption behaviour is ultimately defined by the microeconomic production functions of commercial companies. The causality works the other way: the systems of capitalism are set up in such a manner that the consumption rate Ω defines Γ, the rate of total income to capital.

When viewed in this way the data of Young makes sense [Young 2010]. In the period Young analysed, consumption rates stayed approximately constant, as did rates of return. During the same period, both agriculture and manufacturing increased their returns to capital and reduced returns to labour. Given fixed Ω, to keep things balanced, the economy as a whole was obliged to create new, labour-intensive, industries to ensure that returns to labour were maintained as a whole. All those cappuccino bars and hairdressers were created by the economy; by entropy, to ensure that the Bowley ratio remained equal to 1 − (r/Ω).

In fact the consumption rate Ω, the Bowley ratio β, and the profit rate p are not very interesting pieces of economics at all. Ω is already well defined by lifetime planning and/or behaviouralism. The Bowley ratio and profit ratio are trivial outcomes from Ω and r. I find it difficult to believe that I am the first researcher to propose that the Bowley ratio should be defined by:
β = 1 − r/Ω

rather than:

β = 1 − r/Γ

However, I have not been able to find any other proposal of this relationship, and the recent writings of Gabaix, Young and others suggest that this is the case. If I am the first to do so I am happy to take the credit. If not I would be happy to update this manuscript appropriately.

The interesting economics is in r, the rate of returns. To date I have generally been vague about the meaning of r, and have included dividends and interest payments as well as rents in r. In fact there are three near economic constants which all show very stable long-term behaviour. In all three cases the behaviour is counter-intuitive, and I believe the three are likely to be related. The three variables are long-term real interest rates, long-term stock returns and long-term gdp growth rates.

Figure 4.5.1 below shows the long-term cumulative returns due to real interest rates for the UK and the US. For the UK this starts with a value of 1.0 in 1729; for the US the start is at a value of 1.0 in 1798. The returns are calculated by multiplying the value from each successive year by one plus the interest rate less the inflation rate. Data for these graphs, and also for the gdp graphs below, were taken from the website 'Measuring Worth'; for a very full discussion of historic interest rates see Homer and Sylla [Homer & Sylla 1996, Measuring Worth].

Figure 4.5.1

As can be seen, although there is significant variation around the trend, there is a very clear long-term trend, which is slightly over 2% for the UK and slightly over 4% for the US.

Figure 4.5.2 below shows long-term stock-market returns for the USA, from 1800 to 2008.

Figure 4.5.2 [Ward 2008]

Again, although there are significant short-term variations, the long-term trend of 7% is clear.
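As a sketch of the cumulative-returns calculation just described (the annual rates below are invented purely for illustration; the real series run over two or three centuries):

    # Hypothetical annual nominal interest rates and inflation rates.
    nominal   = [0.05, 0.04, 0.06, 0.05, 0.03]
    inflation = [0.02, 0.03, 0.02, 0.01, 0.02]

    index = 1.0                          # starting value, as in figure 4.5.1
    for i, infl in zip(nominal, inflation):
        index *= 1.0 + (i - infl)        # compound the year's real return

    trend = index ** (1.0 / len(nominal)) - 1.0   # average annual real return
    print(round(index, 4), round(trend, 4))

The long-term trends quoted in the text are the equivalent of this average annual rate, taken over the full historical data sets.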
Finally figure 4.5.3 below shows real GDP in 2005 dollars for the United States from 1790, and in 2005 pounds for the United Kingdom from 1830. The same long-term trend can be seen. This time the trend is slightly below 2% for the UK and slightly below 4% for the US. The match of long-term gdp growth trends to long-term interest rates is striking.

Figure 4.5.3

In the discussions above, I have chosen r as an exogenously given constant. I have been vague about whether r should be the 2-4% of interest rates or the 7% of stock-market returns, or somewhere in between. This is, of course, because I don't know. I suspect it is somewhere between the two. I do think the assumption of exogeneity, at least for the level of discussions in this paper, is reasonable. Like the Bowley ratio, both interest rates and stock-market returns show long-term constancy. The Bowley ratio is the dull one, as it is simply a result of the regularity of returns r and consumption propensity Ω.

(As an aside, a quick note on the changes of the Bowley ratio in recessions. It is well known that returns to labour increase in recessions, and so that the value of β increases. It is also well known that saving increases and consumption decreases in recessions. If consumption decreases, then equation (4.5k) would suggest that β should decrease, which appears to be a contradiction. However in recessions both interest rates and stock-market returns also decrease, and the proportional decrease in interest rates and stock-market returns is usually much larger than the decrease in consumption. For example, with purely illustrative numbers, if r halves from 3% to 1.5% while Ω only slips from 0.40 to 0.36, then r/Ω falls from 0.075 to roughly 0.042, and β = 1 − r/Ω rises. So, overall, β does increase in recessions despite falling consumption.)

The interesting thing is where the constancy of interest rates, stock-market returns and gdp growth all come from. Traditional economics has tended to look at technology change and microeconomic factors as the drivers; again this seems difficult to justify.

Firstly, technology tends to come in bursts; steam power, electrification, motorised transport, electronics, the internet, etc. This would suggest that both gdp growth and stock-market returns would come in bursts, and not necessarily bursts with the same rate of growth.

Secondly, the rate of change of technology, from casual observation, appears to be accelerating, with the bursts of new technology becoming more frequent and wide-ranging.

Thirdly, the growth of economies appears to be back to front. For the UK, growth started with the industrial revolution somewhere around 1800 and has continued at a regular rate of 2-2.5% for the last two centuries. Almost all the other rich countries have followed a different path. In the first phase of the catch-up they generally had high rates of growth, typically between 5% and 10%, until they caught up with or slightly overtook the UK. From that point on they then slowed down to a similar 2-4% rate as the UK. For a very good visualisation of the process go to gapminder [gapminder].
This is counter-intuitive, as common sense says that as countries get wealthier they should be able to devote more and more capital to investment, and so they should be able to grow more rapidly, not less. The constancy of the values of interest rates, returns and gdp growth suggests a much deeper equilibrium is present; a simple mathematical equilibrium. An equilibrium that is actually restraining growth significantly below that made possible by technology.

It is the source of these three constants, and the relations of the three to each other, that is the most pressing mystery of economics. A possible, though highly speculative, proposal for the source of this equilibrium is suggested in section 7.4.

Before moving on, I would like to discuss the parallels with Wright's models. In The Social Architecture of Capitalism Wright's model produces a value of β of 0.55, while in Implicit Microfoundations for Economics β is 0.6. Wright's models are not formally mathematical, so it is not fully clear how these values are generated. In both these papers the expenditure is drawn randomly from a uniform distribution of an agent's wealth, which I believe makes Ω equal to 0.5 in both models. The way that excess wealth is generated in Wright's models is much more complex, and possibly recursive, and it is not clear (at least to me) how the equivalent of the interest rate in these models would be calculated. If equation (4.5a) proves to be correct, then Wright appears to have defined the interest rates for the two papers above at 22.5% and 20% respectively.

Finally, it should be noted that equation (1.6d) for the exponent of the wealth distribution power-law tail should now read as:

α = 1.36(1 − (r/Ω)) / (1.15v)    (4.5q)
Part A.II - Speculative Building

At this point in the discussion of the modelling, I believe it is appropriate to give a clear and unambiguous health warning. Up to this point in the paper, although both the economics and the mathematical approaches of the modelling have been heterodox, I believe that the models built accord with basic common sense, most notably with the various variables and constants matching, at least approximately, measurable quantities in real-life economics.

In the remainder of the first section of this paper, this no longer remains the case. For one reason or another the models and policy proposals in the rest of this section are speculative. The models have been included because they give results which may be interesting or plausible, and that may allow the building of alternate, more realistic, models in the future. The conclusions produced from these models must also therefore be presumed to be highly speculative. I fully expect that some or all of the models and conclusions below will prove to be wrong. It is my hope that they will however prove informative for further work.

4.6 Unconstrained Bowley Macroeconomic Models

In section 4.5 above, we looked at Bowley models that deliberately constrained the net cash/debt balance to zero. In this section these models are explored further by changing the net value of the cash balance so it is positive or negative and seeing what happens.

As previously discussed, I have a profound philosophical problem with this approach. It is not clear to me who is holding this balance or debt, where it is held, etc. Because of this no interest is paid on the balance, or interest charged on the debt, for the simple reason that I do not know where in the model I should debit the interest from, or pay the interest to. Despite this I am presenting the results because, firstly, they are mathematically interesting, and secondly, the outcomes are beguilingly plausible. I find this worrying, as it characterises some of the attitudes I have found most frustrating in my reading of much mainstream economics; the triumph of interesting equations and common sense over meaningful models related to underlying data.

The first model run was simply to put in typical parameters from real economies of:

Returns rate r: 0.03
Consumption rate Ω: 0.2
Bowley ratio β: 0.7

along with a capital wealth, Q, of 100, and let the model reach an equilibrium. The resulting cash balance is:

Cash wealth H: -50
There are two things to note here. Firstly, allowing a negative cash balance, that is allowing the use of debt, allows the Bowley ratio to drop. This means that the returns to labour are reduced and the returns to capital are increased. So, in short, allowing the use of debt allows more returns to capital. It should be noted however that using a returns rate of 0.07, based on stock market returns, gives a positive cash balance of +17.

To investigate this further, the parameters of the cash/debt balance were changed systematically, along with changes to other variables, to investigate the results on the model. As with the Bowley-Polonius model, the model was surprisingly easy to parameterise, and gives an equation as follows:

β = (1 + (H/Q) − (r/Ω)) / (1 + (H/Q))    (4.6a)

where H is the cash balance (wealth held in the form of cash, or negative debt) and Q is the wealth held as capital. Again, this equation has been derived 'experimentally' by investigating the model, but the equation fits the modelling exactly.

As in the previous section, it is fairly trivial to derive equation (4.6a) from first principles. As before, when the model is at equilibrium, all values of flows, stocks and debts are constant. At this point, if the values of capital Q and cash H are to be constant, then the total income must equal the total outgoings, so, as before:

C = Y = e + π    (4.6b)

However this time, in the original model, in equation (4.2b), we defined the consumption ratio Ω as:
Ω = C / (Q + H)    (4.6c)

so:

Ω(Q + H) = C

or, substituting from (4.6b):

Ω(Q + H) = Y    (4.6d)

Again, the profit rate is defined by:

π = rQ    (4.6e)

If we multiply equation (4.6d) by equation (4.6e), then we get:

πΩ(Q + H) = rQY    (4.6f)

Rearranging gives:

π/Y = rQ / (Ω(Q + H))

or:

p = rQ / (Ω(Q + H))    (4.6g)

Subtracting both sides from unity gives:

1 − p = 1 − rQ / (Ω(Q + H))    (4.6h)

or, from (1.3v):

β = (Ω(Q + H) − rQ) / (Ω(Q + H)) = (ΩQ + ΩH − rQ) / (Ω(Q + H))

or, dividing through by Ω and Q:

β = (1 + (H/Q) − (r/Ω)) / (1 + (H/Q))    (4.6a)

Once again the base equation here is (4.6g), which is the ratio of returns from capital to total returns. In the next section I would like to discuss the overall meaning of equation (4.6g) in more detail, but before that I would like to look at some consequences of varying the debt value H.
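First, equation (4.6a) can be checked against the first model run above. A small sketch, simply rearranging (4.6a) for H (the numbers are those quoted in the text):

    # Invert beta = 1 - r*Q/(Omega*(Q+H)), i.e. equation (4.6a), to find the
    # cash balance H implied by a target Bowley ratio.
    def cash_balance(r, omega, q, beta):
        return q * (r / (omega * (1.0 - beta)) - 1.0)

    print(cash_balance(r=0.03, omega=0.2, q=100.0, beta=0.7))  # -> -50, as in the model run
    print(cash_balance(r=0.07, omega=0.2, q=100.0, beta=0.7))  # -> about +17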
It can be seen from equation (4.6a) that the Bowley ratio can be manipulated by changing the value of the cash balance H. If the cash balance is positive and increasing, the Bowley ratio just heads closer and closer to unity; good for workers, bad for capitalists.

More interestingly, if H is negative, a debt, and the size of the debt is increased, then both the numerator and denominator reduce; however the numerator reduces more rapidly than the denominator, and the Bowley ratio slowly decreases. At least at first. If debt is allowed to continue increasing, then a rather dull function suddenly becomes more interesting. Firstly the Bowley ratio drops rapidly to zero, and then shortly afterwards heads off to negative infinity. In the model itself it isn't possible to reach these points; as the Bowley ratio heads to zero the model becomes unstable and explosive, and the economy blows up in an entertaining bubble of excess real capital and even more excess debt. This may sound familiar.

This brings us to the first, more traditional, form of macroeconomic suicide: allowing too much debt in an economy. Again this is discussed in more detail later, in the international model in section 4.10 below. Unfortunately the model gives no indication of the policies to be followed post explosion, though it does suggest that sensible limits on total debt (or debt ratios) in a well run economy might be a good idea.

There is a further consequence of this model that is intriguing. In this model the level of debt feeds directly into the Bowley ratio. As was found in section 1.6 above, the Bowley ratio in turn feeds directly into the parameters of the GLV income distribution. So, if the above models hold, there is a direct link from levels of debt in the economy to the levels of inequality. Specifically, increased levels of debt lead to increased levels of inequality.

Intuitively this seems plausible. Looking back over the last century, especially at the US, the first part of the century was associated with high levels of inequality, and high levels of leverage, which ultimately resulted in the Wall Street crash and the depression. In reaction to this, from the 40's to the 70's, leverage was strictly controlled, and income distribution was much more equitable. From the 70's to the end of the 20th century, increased financial deregulation, and increased leverage, went hand in hand with increased inequality.

Given the mathematical simplicity of equations (4.6g) and (4.6a), it should be straightforward to check these relationships both historically for individual countries as well as across different countries. It seems highly likely that the complexity of economics means that there are other factors that need to be included in equation (4.6g); for example, all the above has been carried out with payout factors fixed at one. However, with luck the errors might be systematic and relationships may appear.
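For reference, the behaviour described earlier in this section, the slow fall, the zero crossing and the dive to negative infinity, can be traced with a few lines of code (the values of r and Ω are illustrative only):

    r, omega = 0.03, 0.2

    def bowley(G):
        # equation (4.6a) written in terms of the gearing G = H/Q
        return (1.0 + G - r / omega) / (1.0 + G)

    for G in [0.5, 0.0, -0.5, -0.8, -0.85, -0.9, -0.99]:
        print(G, round(bowley(G), 3))

The Bowley ratio crosses zero at G = r/Ω − 1 (here −0.85) and heads off to negative infinity as G approaches −1, where the debt entirely cancels the capital wealth.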
As a minimum it should be noted that a more realistic version of (4.6g) would include net returns based on returns from investments, of say 7% [Ward 2008], less returns on debt at 3%, representing long-term interest rates. I would guess that this would give something like:

p = (r_k − r_f)Q / (Ω(Q + H))    (4.6i)

where r_k is the typical return on investments in companies and r_f is a long-term risk-free interest rate. I emphasise that equation (4.6i) is merely a supposition and has neither been derived nor modelled.

If actual economic data give support for the relationship in (4.6g) above, then this would give some support to the idea that the debt in equation (4.6g) is in fact a meaningful value. If economic data does support equation (4.6g), or a variant of it, then this raises interesting discussions on the role of debt in a national economy. The history of the last forty years has been one in which neoclassical economists have argued forcefully for the liberalisation of financial markets, under the assumption that deregulation would allow deeper and cheaper financial markets and that self-regulation would ensure a natural balancing at an equilibrium. Equation (4.6g) begs to differ. Equation (4.6g) dictates that persuading governments to allow greater leverage merely gives benefits to the owners of capital, while simultaneously moving towards a more unstable equilibrium that coincidentally increases overall wealth inequalities.

In fact this is the second form of rent-seeking we have seen exposed. If they were true to the core values of their religion, neoclassical economists would condemn this rent-seeking for what it is, and support strict controls on leverage. In practice neoclassical economists have consistently supported the 'freeing' of credit markets in the mistaken belief that greater access to funding will reduce prices and increase overall 'welfare'. In the real world any practical cost benefits are negligible compared to the disadvantages. The disadvantages are a substantial shift of funds from the productive sector of the economy to rent-seeking financiers, and a large transfer of 'welfare' from the poor to the rich.

Equation (4.6g) suggests that control of the national level of leverage can provide three separate economic benefits. Firstly, for the working of the economy there will be an optimum level of debt that allows liquidity and provides capital for genuine economically productive investment. Secondly, by preventing extreme levels of debt, financial instability can be prevented. Thirdly, the level of debt may be reduced to achieve reduced levels of inequality.

If the third item above is tackled successfully then the second becomes irrelevant, so the debate regarding the appropriate level of debt becomes a trade-off between the first and third items. While the income distribution requirements suggest an elimination of debt, this is clearly not practical for a well-functioning economic system. While much investment is funded directly from cashflow, if the economy is to grow successfully, non-financial firms clearly need access to debt financing for major capital investments.
Similarly, while it is always fashionable to attack 'speculation', a significant proportion of speculation is clearly useful. Neither farmers nor bakers are experts at predicting weather patterns. Both use derivatives on grain production to hedge their prices. It is the entrance of speculators into the grain futures markets, speculators who are able to look at weather patterns across the different grain-producing countries of the world, that keeps these markets working effectively, so benefiting both farmers and bakers. The same is true of speculators in any derivative market when they are functioning correctly. However there are clearly points where derivative markets fail to be efficient finders of future prices and start to be used by uninformed momentum chasers as apparent sources of financial growth in their own right.

Although the work of Minsky is not quantitative in nature, his characterisation of the phases of debt build-up is clear and easy to relate to real economic cycles. If equation (4.6g) above is found to be applicable, it should be possible to look through past economic cycles and note where debt moved from a useful point, of providing funds for investment and price-finding speculation, to turning into a self-sustaining provider of bubble finance. This would then provide central banks with a guide to controlling financial markets for the benefit of the economy as a whole.

I would now like to look at the character of equations (4.5h) and (4.6g) in more detail.

4.7 A State of Grace

It has been previously stated that equation (4.5h):

p = r/Ω    (4.5h)

for non-debt economies, and equation (4.6g):

p = rQ / (Ω(Q + H))    (4.6g)

for economies with debt, look suspiciously akin to what physicists call 'equations of state'. This is a very brave statement and time will tell if this proposition is accepted. However it is clear that the equations work in ways similar to equations of state, and this is important for understanding what these equations signify, especially with regard to economic equilibrium. Firstly I would like to give a little background on other equations of state in physics.

Historically, the study of thermodynamics; things such as the expansion of gases, heat engines, heat production from chemical reactions, etc, was problematic because there were large numbers of macroscopic and microscopic variables. Changing one of the variables generally resulted in simultaneous changes in many other variables, and it was very difficult to work out what was
actually happening. In this regard, classical thermodynamics was similar to present-day economics.

In the study of gases a series of pioneering scientists carried out various carefully controlled experiments that resulted in various relationships being established. So Boyle's law states that, at constant temperature, the volume of a gas varies inversely with the pressure. Charles's law states that, at constant pressure, volume is proportional to temperature, and so on. Finally it was found that all the different laws could be put together to give the 'ideal gas law' in the form of an equation:

PV = nRT    (4.7a)

where P is the pressure, V is the volume, T is the temperature, n is the amount of substance in moles, and R is a fundamental constant of the sort wished for by Mirowski. In fact the 'fundamental' nature of R is an accident of history. The concepts and measurement units of pressure, volume and temperature were generated independently, with idiosyncratic units. Here R is just a method of adjusting the different measurement systems so that the units fit together. Later microscopic theory showed that the equation could be changed to a more fundamental form:

PV = NkT    (4.7b)

where N is the number of molecules, and k is another, much more fundamental, constant (Boltzmann's constant) that once again mops up all the different unit systems. If physicists were allowed to start from scratch they would change all the units so that the constants were all dimensionless '1's, which would make things easier for physicists but harder for butchers, bakers and shoppers.

The point about equation (4.7a) is that for an ideal gas (and the 'ideal' is very important) equation (4.7a) defines all possible equilibrium points for the volume of gas you are looking at. With the three variables P, V and T there are an infinite number of points of equilibrium on a two-dimensional sheet in a three-dimensional space that can be occupied. However, any equilibrium must be on this sheet. So if you double the pressure of the gas, you will either halve the volume or double the temperature, or simultaneously change both volume and temperature so that equation (4.7a) balances. Other thermodynamic systems are characterised by similar equations.
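As a quick concrete illustration of (4.7b), here is roughly one mole of gas at room temperature (the constants are standard physics values):

    k = 1.380649e-23          # Boltzmann's constant, J/K
    N = 6.022e23              # number of molecules, roughly one mole

    def volume(P, T):
        return N * k * T / P  # V pinned down by PV = NkT

    print(volume(101325.0, 300.0))      # about 0.0246 m^3 at atmospheric pressure
    print(volume(2 * 101325.0, 300.0))  # half that volume at double the pressure

Fixing any two of P, V and T pins down the third; the equation of state is the sheet of allowed equilibria.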
They are interesting for a number of reasons.

Firstly, despite the complexity of the underlying system, equations of state are often surprisingly simple.

Secondly, the way the variables fit together can be non-obvious or even counterintuitive. Familiarity with equation (4.7a) means that people are used to it, but for the pioneers in the field there was no obvious reason why these three variables should fit together in this way, and in fact it wasn't until many years later that the equation was independently explained at an atomic level by Maxwell and Boltzmann.

Thirdly, the equations do not refer to underlying microscopic mechanisms or variables. In equation (4.7a) there are no references to elasticities of collision, the masses of the gas molecules, etc; in fact the equation should be the same for any perfect gas.

Fourthly, it is common to find that many of the variables in an equation of state are intensive; that is, the properties do not depend on the amount of material present. So in equation (4.7a) pressure and temperature are both intensive parameters; you can measure pressure and temperature locally at different points throughout the system as long as it is at equilibrium. Volume on the other hand is an extensive parameter that depends on the amount of stuff present.

Finally, by reducing a complex system to a simple equation, equations of state are extraordinarily useful for defining and analysing systems.

Going back to equation (4.6g):

p = rQ / (Ω(Q + H))    (4.6g)

this equation appears to fit all the above characteristics fully. Firstly it can be noted that both p (returns over total returns) and Q/(Q+H) can be seen as macroeconomic ratios. Then equation (4.6g) becomes a formula incorporating just four intensive variables, and could be expressed as:

pΩ(1 + G) = r    (4.7c)

where p is the profit ratio and G is a cash-debt gearing ratio H/Q, and none of Ω, p, G or r depend on the size of an economy. This meets conditions one and four. Condition three is certainly met; there are none of the microscopic foundations beloved of economists in equation (4.6g). Condition two would appear to be the case, given that this equation has followed Bowley's original discovery by over a century. The fifth condition remains to be proved.
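Treated as an equation of state, (4.7c) can be rearranged for any one of its four intensive variables given the other three. A brief sketch (the values are illustrative):

    def solve_r(p, omega, G):       # returns rate from the other three
        return p * omega * (1.0 + G)

    def solve_G(r, p, omega):       # gearing from the other three
        return r / (p * omega) - 1.0

    p, omega = 0.25, 0.4            # a no-debt economy, as in (4.5h)
    r = solve_r(p, omega, 0.0)      # -> 0.10
    print(r, solve_G(r, p, omega))  # recovers G = 0.0, a point on the sheet of equilibria

As with the gas law, changing one variable forces compensating changes in the others if the system is to stay on the equilibrium sheet.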
Just as an aside, an accident of history means that I am unable to present Philip Mirowski with his fundamental constant, something similar to the R of (4.7a) or the k of (4.7b). Luckily for economists, almost all variables in economics have been defined in terms of money, people, or money per person. As a result the equations of state fit together automatically and the balancing constant is simply unity. Unfortunately for naming conventions, persuading people that the dimensionless number 'one' is a fundamental constant rather than a lucky accident is a little tricky.

Why equation (4.6g) (or (4.7c)) is important is that it says that you can't change the Bowley ratio without changing the savings ratio, the gearing ratio or long-term returns. Or vice versa for any of the savings ratio, gearing ratio or long-term returns. Which means that you can't change the Bowley ratio by changing things like the tax system, the education system, trade union bargaining rights, monopolistic behaviour, reducing friction in capital markets, affirmative action, inheritance laws, or a thousand and one other things that people believe will make incomes better for ordinary folks. None of the above will have any effect on the Bowley ratio unless they change one of the other factors in equation (4.6g).

In extremis, as the Russians discovered and the Chinese are discovering, you can't even get more money into the pockets of the workers by introducing state ownership and a workers' paradise. Ultimately, if your economy becomes technologically advanced, the factories become informally 'owned' by a nomenklatura or similar business class linked to the elite, and Bowley's law and the appropriate matching unequal GLV distribution reassert themselves. Sadly for Marx, his perceptive insights prove so powerful that they work their wonders even in 'Marxist' economies. It is for these reasons that my own proposals for solving poverty look at redistributing wealth rather than redistributing earnings.

Going back to equation (4.6g), it is worth focusing again on the underlying model in section 4.6. There are very important economic factors in the model that do not appear in equation (4.6g). This includes the amount of physical capital K, and the proportion of this capital that is used. It includes the productivity of this capital. It also includes the function for the compensation of the workers, and so, in a real economy, the level of employment and unemployment. All of these things have no relevance to the overall, macroeconomic balance of the model. All these things have secondary functions in the model.

The overall model has an infinite number of equilibrium points that balance to equation (4.6g), even when the solutions are stationary. This is the prime equilibrium that is being sustained; the equilibrium that the system automatically and inevitably returns to. When the model moves into unstable zones, the system hunts around an equilibrium, with the parameters in (4.6g) changing cyclically. There is an infinite number of points the cycles can pass through, but within a constrained zone, much like the foxes and rabbits of the original Lotka-Volterra model. Within each of these infinite solutions the values of capital, capital productivity and waged earnings all adjust to give a solution that satisfies equation (4.6g).
To take a trivial example, suppose that the amount of labour needed to service the real capital K is exactly halved for all values of K. This can be modelled in model 4A, or the other models, in appendix 14.9, by changing the parameter 'labour_required' from 1 to 0.5. If you simply change the value of labour_required from 1 to 0.5, then all the various parameters in equation (4.6g) will change to new values. Most notably, the value of the cash/debt balance will change. If the model is then returned to its original overall parameters, by using Solver to return the debt to its original value by adjusting K, then a new equilibrium is achieved, with a higher value of K. A comparison is shown below: column A is the first equilibrium, column B shows the result of changing the value of labour_required, and column C shows the result of returning the cash balance to zero.

Figure 4.7.1

                              A         B         C
interest rate              0.10      0.10      0.10
production_rate            0.20      0.20      0.20
consumption rate (Ω)       0.40      0.40      0.40
labour_required            1.00      0.50      0.50
goods_payments            40.00     32.39     40.00
earnings_income           30.00     22.39     30.00
actual_returns            10.00     10.00     10.00
capital (K)              100.00    119.03    135.61
capital_wealth (Q)       100.00    100.00    100.00
cash_wealth (H)            0.00    -19.03      0.00
total_wealth (W)         100.00     80.97    100.00
total_returns             40.00     32.39     40.00
Bowley ratio (β)           0.75      0.69      0.75

(A to B: labour_required halved. B to C: cash balance forced back to zero. A to C: the Bowley ratio reverts to 0.75.)

In this case an increase in labour productivity has been balanced by decreasing employment. A new equilibrium has been achieved, and at this point there is no need for any further adjustment in the model. In the case of the change of labour_required from 1 to 0.5, the new equilibrium at zero cash balance is 136 units of capital. The requirement for labour per unit of capital has halved, but the amount of capital has increased by only a third. The actual labour required to be employed has reduced by nearly a third. The new equilibrium has rebalanced by sacking workers.

The marginality of labour is not relevant to the model; the model simply moves to ensure that equation (4.6g) is balanced, and it does this without any reference to the underlying labour supply curve. Model 4A, and all the other models, can create mass unemployment as a consequence of improved technology, and can then sustain that mass unemployment indefinitely. Indeed one of the main conclusions of the models of section 4 and equation (4.6g) is that labour and capital, because of their different forms of ownership, are not substitutable at a macroeconomic level. This is discussed at length in section 4.8 below.
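Each column of figure 4.7.1 can be checked against equation (4.6a); a small sketch of the verification, with the Q and H values read straight from the table:

    r, omega = 0.10, 0.40
    columns = {"A": (100.0, 0.0), "B": (100.0, -19.03), "C": (100.0, 0.0)}   # (Q, H)

    for name, (Q, H) in columns.items():
        G = H / Q
        beta = (1.0 + G - r / omega) / (1.0 + G)   # equation (4.6a)
        print(name, round(beta, 2))                # A: 0.75, B: 0.69, C: 0.75

All three equilibria sit on the sheet defined by (4.6a), even though K, employment and earnings differ between them.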
There are many different ways that the model can be rebalanced, and many different ways that the equilibrium can be achieved. The key for the model and equation (4.6g) is that the total earnings, wages plus dividends, must balance the total consumption, which must be Ω times the wealth. Which equilibrium point will be achieved will depend on other factors, but the model won't naturally rebalance to full employment of its own volition. To get a clearer understanding, I urge readers to load the model in Excel from appendix 14.9 and experiment for themselves.

This demonstrates that Keynes' fundamental insight was correct: that such a system could be stable even though it was not at the level of full employment, and that deliberate demand management would be needed to move it back to full employment. Unfortunately, Keynes avoided detailed mathematics in his main works; also, his theories have been developed almost exclusively using the concepts of saving and investment as drivers, even when, as discussed in section 1.3 above, it has become clear that the IS paradigm is a secondary part of the economic cycle.

Returning to the discussion of an equation of state, it is worth noting that equation (4.6g) does not mean that other relationships cannot affect the variables in equation (4.6g), just that if one factor of (4.6g) is changed, then the others must vary to compensate. Similarly it is possible that other relationships could cause one variable in (4.6g) to affect another variable.

It is also worth noting that the original gas model, shown in equation (4.7a), was that for an 'ideal' gas. While some gases, such as the noble gases, are close to ideal, most gases diverge from the behaviour of (4.7a) under certain circumstances, most notably as temperatures drop. Water vapour, for example, obeys (4.7a) fairly closely at atmospheric pressure above 100°C. However if water vapour is cooled to 100°C at atmospheric pressure, the volume of the gas drops dramatically as the gas condenses into a liquid. To cope with such problems, instead of using equations of state, scientists and engineers use phase diagrams that show the relations between the state variables (P, V, T, etc) as the substance under observation changes between different states.

Sometimes changes in state can be large and instantaneous. For example, superheated liquid can suddenly boil off explosively, or supercooled water can freeze instantaneously. Both these changes can be precipitated by, for example, a minor contaminant or a small movement. Casual observation suggests that similar phase changes may be encountered with national economies. Looking at the bubble behaviour in Japan in 1989 or the US in 1929 or 2008, in all three cases it looks like a superheated, apparently stable, system suddenly made a dramatic shift to another, very distant, equilibrium point, accompanied by dramatic changes in debt level, consumption level and the ratios of nominal capital (Q) to real capital (K). The example of Argentina between 2000 and 2005 suggests that income distributions can also change dramatically in the short term during major economic shocks [Ferrero 2010]. Such system changes also typically involve hysteresis, so it is not possible to simply reverse conditions and return to the start point. Such phase-change behaviour can be modelled within non-linear dynamics and chaotic systems; see Strogatz for example [Strogatz 2000].
It remains the case that the claim that equations (4.5h) and (4.6g) are equations of state, rather than simple accounting conventions, could merely be an act of pretension. It is of course possible that the modelling, and so the equations, are simply wrong. However the models and equations remain the only effective attempt so far to model theoretically the stylised facts that
Bowley observed a century ago, and the values produced are uncannily close to the observed data. Even if this particular approach is in fact wrong, it does suggest that a similar approach may be the one that finally clarifies this mystery of economics.

4.8 Nirvana Postponed

In the previous section it was explained how a Bowley-type model could produce an equilibrium that resulted in persistent long-term unemployment. This in itself gives severe poverty problems for the least able in society, as well as a significant tax burden for those in employment, who have to provide the welfare.

A second problem for a Bowley-type model is that, with interest rates, consumption rates and debt ratios generally stable over the long term, equation (4.6g) (shown again below) gives a fixed value for the Bowley ratio, and so, as we saw in section 1.5, a fixed value for alpha in the GLV distribution. The fixed value of alpha then gives a fixed ratio of inequality and means that a significant minority of the population receives substantially below the average income.

Taken together these two elements mean that the bottom third or so of society in a modern economy can get a very raw deal; moving between long-term unemployment and intermittent low-wage employment.

There are however deeper and much more important reasons why all individuals, including the rich, suffer from poor life quality in a Bowley-type economy. Going back to equation (4.6g):

p = rQ / (Ω(Q + H))    (4.6g)

Again, given that the profit rate, consumption rate and debt gearing are all fairly constant in a mature economy, the Bowley ratio tends to be close to constant, and the stylised facts show that the returns to labour are typically two-thirds to three-quarters, while the returns to capital are one-third to a quarter.

To all intents and purposes, at the level of the economy as a whole, this means that the ratio of returns to capital and labour is pretty much close to invariant. At a macro level at least, the basic neoclassical, Walrasian assumption of substitutability of labour and capital is simply wrong. In this respect, the Austrian school is fundamentally correct; there is a 'natural balance' between capital and labour. And, in the absence of severe epidemics or genocide, the quantity of labour cannot easily be changed.
While it is possible to build up capital in the short term, this is not sustainable, and a boom in capital above the long-term trend is followed by a bust, with at best stagnation in capital growth. If too much capital has built up, then there is the danger of capital destruction. Interestingly, in the models in section 4, the amount of financial capital Q can increase dramatically for small increases in actual capital K, especially when debt is allowed to increase.

In these circumstances, the Austrian remedies for bubbles seem very sensible. As well as reducing debt back to sensible levels, the nominal value of capital, Q, needs to be reduced quickly via bankruptcies, wiping out the value of share and bond holders, etc. If this is done quickly then the economy can rebalance financial flows easily, so that employment can be maintained and the fullest use of the real capital can be achieved. This was the approach used successfully in the 1990's by Sweden and other Nordic countries.

In recent crises in Japan and the US, fear of hurting owners of financial assets, ultimately mostly politically important holders of pension funds, has resulted in deliberate government policies of attempting to maintain the value of financial assets in 'zombie' institutions, or to bail out asset holders altogether by nationalising debts. While this may seem sensible in the short term, the effect of delaying a return to the natural equilibrium of equation (4.6g) above may result in unexpected consequences of deflation or inflation, and the long-term destruction of real (as against financial) capital. Clearly a much better plan is simply to prevent excess debt, and so inappropriate capital, building up in the first place.

One thing that should be clear from a fixed ratio of returns to capital and labour is that attempting to 'rebalance' the economy by cutting wages and 'pricing workers back into jobs' is a course of great foolishness, and would guarantee a spiral of reducing returns to both labour and capital, so reducing employment and utilisation of capital. This was one of Keynes's central insights.

In one sense this one-third to two-thirds split of returns between capital and labour can be seen as a good thing. It is caused by the shortage of surplus labour past a Lewisian turning point, and prevents Marx's prediction of ever increasing returns to capitalists and ever further impoverishment of workers.

However, in a deeper sense this is also a very negative thing. As has been discussed above in section 4.7, when the productivity of machines increases, one way the system can reach equilibrium is simply by using less human input. As capital becomes more productive, to get the same returns you just use less of it.

What equation (4.6g) means, in fact what any formulation of Bowley's law means, is that because the balance of returns to labour and capital is fixed, to get any progress, to get any growth in gdp, to get more wealth, you must get more returns to labour. Historically this has generally been achieved by increasing the output from labour. If the returns ratio of labour to capital is fixed at 2:1, then it is the amount and efficiency of labour that has to be improved to get gdp growth. Progress is constrained by the amount and productivity of labour, not capital. Increasing the amount and efficiency of capital is relatively easy. But doing this alone has no useful effect.
Although Western economies are now highly mechanised, the workings of the financial system dictate that two-thirds of the earnings that are produced by capitalism are paid directly to people in the form of wages. Also, as discussed in section 1 of the paper, for 80% of people payment for labour forms almost all their income. This necessarily demands the full-time presence of people at work. We have been enslaved by the machines.

In the second half of the 20th century, for most Western countries, increasing the amount of production provided by labour was very easy. It was achieved very simply by moving women out of the home and into the workforce. This one change in itself was probably the most important source of economic growth through the fifties to the seventies. Once this step has been completed, increasing the size of human capital becomes much more problematic.

So the next stage is to increase the efficiency of human capital; however this is also problematic. Human capital is primarily restricted to the skills and abilities that human beings have, and carry around with them in their brains. There are a few obvious skills, such as driving, using basic word-processing software, or other basic computer skills, that can be easily learnt by almost all people. But beyond that things get difficult.

Information technology is a good example. Computers are generally owned by companies, so the returns on the wealth they generate are taken by the companies. As we have seen above, if this improves returns to companies, it just results in less capital being needed overall. By replacing many basic clerking and administrative duties, computers have actually taken skills that used to be in the hands of human beings and moved them to the owners of capital.

Some people of course have made a great deal of money out of their personal capital in the IT revolution. Computer programmers and mathematical modellers are two examples. But to get the returns to the humans, the human capital needed is knowledge of VBA, C++, Excel, etc, as well as advanced mathematics. This is human capital that is only available to a minority of people with the requisite logical and mathematical abilities. Another way to benefit from IT is to be a good and effective manager. However most would agree that this is also a minority skill.

This may explain some of the apparent problems of the modern world. Firstly it might account for the non-visibility of IT in productivity despite the amount spent on it. It might also account for the imbalance in work requirements between different skill groups. Unskilled labour is now of marginal assistance in serving machines, and has been largely replaced by the machines themselves. This is as true for clerking and administrative work as it is for manual labour. Spreadsheets and stock control systems have replaced the clerks. Forklift trucks and containers have replaced the labourers. In contrast, skilled professionals, from plumbers and
technicians to programmers and managers, people who have the abilities to serve the machines, find themselves under continuous pressure to increase their working hours.

Taken all together this might account for the fairly acrid taste that is seen in political debate in most Western societies. On one side there is a large population of the unskilled who find it difficult to find and hold decent work of any sort. These people face unemployment, poor wages, no opportunities for advancement and semi-permanent dependence on welfare. They often have stretches of involuntary inactivity. Despite their subsidies and enforced leisure, for these people hard work is not rewarded and life lacks hope of betterment. On the other side there are skilled tradespeople, professionals and managers, who work longer hours and pay higher taxes than their parents, primarily, as they see it, to support the idle poor. This is not a happy recipe.

Futurologists have been predicting for decades that once basic needs have been satisfied, human beings would be able to relax into a life of leisure. To date, futurologists have been wrong. And it is not for the want of suitable capital; the progress of automated technology continues at an extraordinary rate. In section 9.3 examples such as fruit-picking machines, automated hospitals and personal rapid transport systems are discussed. All of these examples share the common features of being able to replace large amounts of unskilled labour, and of being technologies that are being brought into use. Despite this, in real life, almost the opposite is happening: working weeks have been steady, and in some cases increasing. In Europe and the US retirement ages are being revised upwards rather than downwards. In the West we have achieved enormous personal wealth, but through an accident of mathematics we have been required to sacrifice our time to the mechanism of wealth production. Nirvana has been postponed.

As an amateur futurologist, it is possible to conceive of a world where the main inputs of human labour could be reduced to direct care for the young, the sick and the elderly, and the provision of entertainment and spiritual needs. Which is what, biologically, human beings are designed to do. Other animals that dance, sing and make art works, such as birds of paradise for example, are generally animals that do not face significant predation and that have more than enough resources available, and so time on their hands. In the absence of predators to compete with, or resources to fight over, they turn to competition in the arts. Almost certainly, prior to the agricultural revolution, human beings fell into this class of animal. Human beings were simply not designed to work forty hours a week. Both hunter-gatherer and most agricultural societies are characterised by underemployment. Historically this was true in the West until recently.
The second half of the twentieth century is almost unique in being one in which the well-off are characterised by having full-time employment. In the past the rich were notable for not working; they lived off their capital and looked down on paid work.

This labour/capital split of the Bowley ratio might also explain the bizarre behaviour of growth. As has been discussed in section 4.5 above, when they start growing, economies typically follow a path of rapid expansion to use up surplus subsistence labour. Casual observation suggests that this can be associated with growth rates of up to 10%. The 10% restraint appears to be due to the difficulties of building infrastructure fast enough. China has been following this path for the last two decades, the Asian tigers did so before this, and now India appears to be following the same route. Once the surplus labour has been used up, growth generally drops to a slow continuous rate of about 2-4%. The UK has been expanding like this for over 200 years, the US for over 150 years; see figure 4.5.3 above, or gapminder for some very pretty graphics [gapminder].

In theory this is very odd: once economies are mature, why do they not just continue increasing the capital stock at 10% per annum to provide for all people's needs and eliminate the need for labour? This should be easy; as countries, and people, get richer, more of their basic needs should be provided for, so diverting revenue (in the most general sense, not just public taxation) for the provision of capital should become easier to do.

If however growth is restrained by the productivity of labour, then a growth rate of 2-4% seems more sensible. Once reserves of subsistence labour have been exhausted, human capital cannot quickly be increased in the way that physical capital can be. I suspect that this might be only part of the explanation. As discussed previously, I find the growth rate of 2-4% suspiciously regular. It also goes hand in hand with suspiciously constant real interest rates at 3% or so, and suspiciously regular stock market returns at 7%; see figures 4.5.1 to 4.5.3 in section 4.5. The 'stylised facts' of these three growth rates are very suggestive of a deeper underlying process equilibrium.

The presence of a fixed ratio of returns to capital and labour also creates a very big problem: there is a general shortage of 'real' assets. As we have seen in section 1.8 above, there simply aren't enough real assets available to provide even for everybody's retirement needs. This in itself could be a source of the drive in the finance industry to create new and exotic assets that appear to solve this problem. Unfortunately, Bowley's law dictates that the underlying 'real' economy is fixed, so the total real returns are fixed. Trying to create new assets out of old is no more possible than other more traditional forms of alchemy. You can't create real new revenue streams simply by repackaging assets. Similarly, this may explain the hunger for government bonds in the financial markets, especially given their apparent safety. But ultimately, government bonds are dependent, via taxation, on revenue earned in the private sector.

The most obvious example of the shortfall of capital is housing. Other public goods, such as health, education and pensions, have obvious market-failure reasons for not being provided fully.
Housing should be simple to provide in a wealthy society: simply build enough of it for everyone, then all you need to do is maintain it. In practice many societies have attempted to do this, from mass council (public) housing in the UK to the recent disaster of state-subsidised mortgages in the USA. The problem of course has always been that the poor have rarely been able to afford the maintenance of the housing, never mind the capital payments.

So the key question here is whether this system can be changed so that more capital can be accumulated to carry out more work on behalf of labour. Interestingly, history suggests that the system can be changed significantly, especially as a result of the scarcity of labour. The trick is not to change the efficiency of labour, but to fully remove the surplus labour and turn it into an increasingly scarce resource that is over-compensated for its efforts.

Back in section 1.3 I made the assumption that labour was 'fairly' paid for its inputs to the production process. I kept this assumption through all the income models, though it was then discreetly abandoned in the macroeconomic modelling. Actually, because labour is a uniquely non-adjustable factor input, it is the only truly scarce, non-substitutable resource. Also, because of Bowley's law, labour is very rarely paid its true worth. It is usually significantly under- or overpaid.

Following the theories of W.A. Lewis [Lewis 1954], or for that matter Marx, in a society with excess subsistence labour, capital can 'under-pay' labour employed in the commercial sector, as pay rates are held down at subsistence level by the presence of under-utilised rural labour. This has been the normal state for most countries for most of history, and has provided the main critique of capitalism until at least the end of the Second World War. In such an economy, with surplus labour, the economy doesn't reach a true equilibrium for the Lotka-Volterra / GLV approaches described above. The subsistence farmers are outside the equilibrium, and they also hold down the wages of those employed. In such a society the rich are over-compensated for their ownership of capital, and also have low living costs due to the low labour costs. In these societies the Bowley ratio can be as low as 0.5; this can be seen in China today, even as it approaches its Lewisian turning point.

Things are much more interesting in a 'normal' industrialised country; one that has passed its Lewisian turning point and has absorbed the majority of its cheap labour. In such an economy labour is generally over-rewarded; returns to labour are in excess of the value actually provided by labour. This was actually the case in the macroeconomic model in section 4, where labour generally gained through the economic cycle, being 'overpaid' in exactly the same way that suppliers of commodities were overpaid in the commodity cycle in section 3. In this case the employees are successfully extracting 'rents' from the capitalists. And a good thing too.

I believe that, in the second half of the 20th century, parts of the world moved, for a period, fully into the zone described in this model.
Following the Second World War, all the communist countries, most of the de-colonised countries, and most of Latin America voluntarily withdrew from the world trade system. The communists followed their own socialist paths; almost all of the rest followed a route of import substitution behind high tariff barriers. Following rapid post-war growth, most of Western Europe and North America went through a period in the fifties and sixties with full employment and ongoing labour shortages. Meanwhile the few poorer countries that remained in the world trading system, countries such as Japan, Italy, South Korea, Taiwan, Hong Kong, Singapore and Malaysia, saw breakneck growth, moving from subsistence agriculture to industrialisation in a generation.

In the West full employment artificially increased returns to labour. Through the Bowley ratio this then forced investment in capital to increase returns to capital. Over the longer term, expensive labour forced investment in labour-saving production, so increasing the efficiency of capital. This period resulted in a virtuous circle, with high wages and full employment forcing rapid growth. Returns to both labour and capital kept increasing in lockstep. It is worth remembering that labour was so scarce in this period that large-scale immigration was allowed into the UK, and guest workers were invited to Germany, to do the menial work that Britons and Germans were unwilling to do.

From the nineteen-seventies onwards many poorer countries, most notably China, re-entered the world economic system, providing alternative supplies of cheap labour, and competition for labour in industrialised countries. The portion of the world's economy that is integrated into the trade system moved back to a pre-Lewisian state, with excess subsistence labour in Asia, Africa and South America competing with Western labour. It is the belief of the author that, at the time of writing, the richer, industrialised countries are simultaneously in a complex pre- and post-Lewisian state: pre-Lewisian for unskilled labour, and post-Lewisian for skilled labour. This is due to an accident of history caused by the third world's absence from, and then re-entry to, the global economy.

These conclusions appear to have some support from data. As well as showing smaller cycles, many of the country graphs in Harvie [Harvie 2000] show a much longer-term cycle of change in the compensation to labour, starting with lows in 1956, going to high points in the 1970s, then returning to lower points by 1994 (the last points in the data sets, all of which were for industrial economies).

It will be interesting to see what happens in the near future. China appears to be passing through its Lewisian turning point. Already China's low-cost manufacturing base is relocating to poorer countries such as Vietnam and Bangladesh. That is the manufacturing base that supplies cheap toys, shoes and clothes to richer countries. This in itself will spread wealth, and labour shortages, to these countries as they start exporting to the West. Simultaneously China will also need to start importing cheap manufactures from poorer countries to supply its own population. Given that India is already close to peak expansion rates, primarily through providing information services to the West, the worldwide supply of surplus cheap labour could dwindle very quickly.
It is possible that we are close to seeing a repeat of the full-employment boom of the 50s and 60s, but this time on a worldwide scale. Even without waiting for this process to happen naturally, it is possible that the '40 acres' compulsory saving process proposed in section 1.8 above might be able to produce the same effect artificially in single countries.

Although some people are natural workaholics, most would choose to 'downsize' and have more leisure time if they could. But they can't. It is common in neoclassical economics to see discussions of individuals choosing between spending and leisure. Because of the workings of the GLV, most individuals have no such choice. To seriously consider reducing working hours, a family needs to own their own house, have a good pension plan in place, have enough money coming in to cover day-to-day expenses, and be sure of access to a decent health service and a good education system for their children. Even in the richest of Western countries few people have all, or even most, of these things; primarily because they have insufficient capital.

A 'forty acres' style system would give more returns from capital to all members of society and would reduce reliance on earned income. It could slowly start a virtuous circle like that seen in the 50s and 60s. By ensuring that all individuals move up to the point that they have sufficient wealth and income to meet their day-to-day needs, compulsory saving would allow people to move into voluntary saving and allow faster investment in decent housing and sufficient pensions. This would then allow a much more genuine choice between work and leisure. As individuals begin to withdraw from the labour market, this would then start a virtuous circle of rising labour costs and full employment. In the longer term this would then also encourage a drive to more labour-saving capital.

Probably it would start with middle-class families choosing to keep a partner at home when children are young. But even such a small withdrawal would tighten the labour market in the skills removed and so push up wages. As people withdraw from the labour market, this will force wages up, and will also increase the share of returns to those still in the labour market. With labour tight and wages rising, this will also encourage adoption of more efficient, labour-saving technology. With the Bowley ratio holding, returns to both labour and capital will go up, while more and more of the actual work is done by machines.

The aim would be to create mass underemployment, or even unemployment, but not, as presently happens, by accidentally creating unemployment at the bottom of society. Instead, the aim would be to create voluntary underemployment at the top of society, as people choose to live more on their investment income and less on their wages. As this then forces wages up, the process will then work its way down to poorer people.
The aim is to create underemployment at the top end of society, so creating full employment throughout society; so increasing wages for all, so increasing returns to labour, and so, via the Bowley ratio, forcing up returns to capital. The aim would be to build up the v40 so that it would consist of shares in companies owning machines carrying out fruit-picking, hospital-cleaning and personal rapid transport. Meanwhile the people who used to be agricultural labourers, cleaners and taxi drivers would get more rewarding and better-paid jobs with shorter hours. They would be helped by the income from their own v40s.

A good aim would be to get the v40 sufficiently large for everybody that dividend payments pay the equivalent of two working days per week of total living costs, while people still work three days a week for their remaining income. On retirement, the additional drawdown of capital would provide for five working days per week of income.

A three-day working week seems a sensible aim. There will always be a need for human beings to provide education, caring and entertainment. Three days a week would be sufficient to give structure and integration in society, but would leave ample time for family, friendship and leisure. To many the above will seem ridiculously naïve, but the example of Norway given previously shows that the numbers can add up, and a three-day week is feasible, as long as enough capital is available.

Futurologists' predictions have gone wrong because of the workings of the Bowley ratio. Understanding how the Bowley ratio works may allow the future to be changed.

4.9 Bowley Squared

Going back again to the base model shown in figure 1.3.5, this shows financial wealth W being held by households in the form of stocks and shares as claims on the real wealth K in the productive companies.

Figure 1.3.5 here

In one important way, this is very unrealistic. I personally don't own any shares. In reality very few people own shares directly. In fact, aside from housing, most people do not own any capital directly. Most people's wealth is in the form of bank deposits, pension funds, insurance policies, mutual funds, etc. All of these investments form financial claims on companies within the financial sector.
The companies in the financial sector then own the claims on the real assets of the non-financial sector. When it works correctly this is just a sensible way of dividing labour. Most people who have money to invest do not want to spend their spare time investigating possible investments. Also they would prefer to spread their investments across different companies to diversify their risk. It makes a lot of sense to lend their money to professional experts who can save costs by analysing investments on behalf of many different investors at the same time. This then results in a model of the form shown in figure 4.9.1:

Figure 4.9.1 here

While it might seem very sensible to set up a specialist finance sector in this manner, from a control-systems point of view this is something of a nightmare. This repeats the feedback loop of the simple macroeconomic model a second time. Instead of one simple feedback loop capable of creating endogenous cyclical behaviour, you now have two feedback loops, both capable of creating endogenous cyclical behaviour and, more importantly, capable of interacting with each other to give even bigger, more complicated endogenous cycles.

The original macroeconomic model can be considered to be a very simple unstable model on the lines of the Soay sheep model discussed briefly in section 1.2.1. In this model the companies grow too rapidly for the base level of labour that can support them, in the same way that Soay sheep breed too quickly for the grass to support them. Introducing a financial sector installs a second population on top of the first. It is similar to adding wolves to predate on the sheep of the first model.

I have not attempted to construct this model mathematically. The models discussed in section 4 above already have sufficient loose parameters and dynamic complexity to produce confusing patterns of behaviour. They really need pinning down with real data before being expanded to the model in figure 4.9.1. But even without modelling, some of the behaviour is easy to predict. In fact we have returned to something very similar to the original foxes-and-rabbits Lotka-Volterra model discussed in section 1.2. In this case the rabbits are the non-financial sector and the foxes are the financial sector. Typically a boom would start with a small financial sector and a growing productive sector. As the productive sector grows, the financial sector grows more and more rapidly, taking up an increasing proportion of the economy. Then the productive sector will start to decline slowly. A short but significant time after that, the financial sector will show a sudden and much more rapid decline. The operation of the two business sectors is analogous to the fluctuations of biomass in a Lotka-Volterra model. First biomass builds up in the rabbits, then in the foxes; then it declines in the rabbits, and then in the foxes, as in the sketch below.
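To make the predator-prey dynamic concrete, the short sketch below integrates a plain two-species Lotka-Volterra system, with the productive sector as the prey and the financial sector as the predator. It is a minimal illustration only, not the calibrated model of figure 4.9.1; all coefficients and starting values are assumptions chosen for clarity.

```python
# Minimal predator-prey sketch of the 'Bowley squared' story. The 'rabbits'
# x stand for capital in the productive sector, the 'foxes' y for the claims
# of the financial sector. All coefficients are illustrative assumptions.
import numpy as np

def lotka_volterra(x0=10.0, y0=2.0, a=1.0, b=0.2, c=1.5, d=0.1,
                   dt=0.001, steps=100_000):
    """Euler integration of dx/dt = ax - bxy, dy/dt = -cy + dxy."""
    xs, ys = np.empty(steps), np.empty(steps)
    x, y = x0, y0
    for i in range(steps):
        x, y = x + dt * (a * x - b * x * y), y + dt * (-c * y + d * x * y)
        xs[i], ys[i] = x, y
    return xs, ys

def first_peak(series):
    """Index of the first local maximum of a smooth series."""
    for i in range(1, len(series) - 1):
        if series[i - 1] < series[i] >= series[i + 1]:
            return i
    return None

x, y = lotka_volterra()
# The financial sector peaks a short but significant time after the
# productive sector, then declines much more rapidly.
print("first production peak at step", first_peak(x))
print("first finance peak at step   ", first_peak(y))
```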
Similarly, capital should build up first in the productive sector and then in the financial sector, followed by declines, in turn, for each sector. So a prediction of this model is that over the next five to ten years, the proportional size of the financial sector in countries such as the USA and UK should decline back significantly towards the proportional sizes seen in, say, the 1980s or early 90s.

One other outcome of this model is that the two sectors can follow their own paths to a significant extent. In such a model the secondary feedback loop, that of the finance system, can vary much more dramatically than the underlying population; see figure 1.2.1.1, showing the original Hudson Bay lynx and hare populations. This makes control of such a dual-speed economy very difficult when you are only using the single weapon of inflation targeting and interest rates. While the underlying economy may respond reasonably to interest rates, the liquidity generated in the productive economy can produce much larger changes in liquidity in the finance sector, which are harder to control. Also the fluctuations in the financial sector will not be in phase with the main economy.

To take an analogy, this model can be likened to an air-conditioning system. The main economy can be imagined as a large office block somewhere in the temperate northern hemisphere. Depending on the time of year or time of day, this main block will need a certain amount of heating or cooling. The financial sector can be seen as similar to a large atrium on the south aspect of the building, full of hothouse flowers. The two buildings will be connected together, and will be roughly aligned through the seasons and days, but will vary greatly in the amount of cooling and heating needed. The atrium will need more heating in winter and more cooling in summer. This will depend on the amount and direction of sun and the external air temperature. On some spring and autumn days, the atrium might need cooling when the building needs heating, or vice versa.

The Bowley squared model is a complex system and needs full understanding to control effectively. The topic of financial-sector liquidity and how to control it is revisited in some depth in section 8.2.1 below. Despite the complexity of the model in figure 4.9.1, it remains the case that control of such a system should be straightforward using standard control-systems feedback theory.

4.10 Siamese Bowley - Mutual Suicide Pacts

In the previous section one Bowley model was placed on top of another, in a way that was multiplicative.
An alternative model would be to put two Bowley models side by side and allow individuals in one half of the model to own capital in the other half of the model. This is illustrated in figure 4.10.1 below.

Figure 4.10.1 here

This gives an international model, with international trade. The discussion that follows borrows heavily from the work of Michael Pettis [Pettis 2001], whose writing I have found highly illuminating, in contrast to much standard economic work on international economics and finance. Pettis's work takes a financial framework for analysis, and concentrates heavily on flows and stocks of capital and debt. As such it fits well with the analytical models described in this paper. Pettis's work also fits closely with the known facts of repeated booms and busts triggered in poorer nations by investment booms and financial crises initiated by capital investment, typically from London or New York; a process documented beautifully by Reinhart and Rogoff in 'This Time is Different' [Reinhart & Rogoff 2009].

One aside is needed with regard to the use of the word 'capital', which in international economics is used in a markedly different way to that in normal macroeconomics, or in the preceding sections of this paper. In this paper capital can refer to K, the stock of physical assets that produce real wealth in the form of goods and services. It can also mean W (or Q), the stocks of paper financial assets that are held as claims on those productive physical assets, such as stocks, shares and company bonds. In international finance a 'capital flow' refers to a flow of money in return for a stream of paper financial assets; sometimes the financial assets of companies, but these can also be assets such as government bonds. So a capital inflow from Britain to Brazil would indicate the purchase of Brazilian financial assets by institutions in Britain. The ownership of these financial assets would then give the British owners the right to receive a stream of financial income based on the wealth produced by the underlying real physical capital.

In theory such a capital inflow should be used to invest in physical capital goods in the recipient country, so allowing the country to become more productive and pay the interest on the loans. Unfortunately it is all too common for the 'capital flow' to be used as payments for imports into the country receiving the 'capital flow'; eg, Brazil paying for imports from the UK. When this is the case, the original meaning of the word 'capital' is lost altogether, and the 'capital inflow' is simply a way of describing lending money as a form of debt, often effectively unsecured. And as can be seen from the analysis of Pettis or the research of Reinhart and Rogoff, it is this quick and natural split of countries into creditors and debtors that is symptomatic of financial trade.
International finance can be very confusing, with a large number of variables, especially when currency flows and exchange rates are taken into account. Much analysis of international finance concentrates on the role of currency, along with control of interest rates and the role of inflation. Actually, history suggests that different currencies are in fact something of a red herring. To get the basic model for analysis you don't need currencies. Throughout history there are many examples of international trade, and gross trade imbalances, occurring when countries shared a common currency. Pettis gives the first such well-documented example as that of different parts of the Roman Empire in a speculative property boom in 33 AD. In this case the metropolis of Rome was the debtor, while the grain-producing provinces were the creditors.

History is replete with currency unions or fixed exchange-rate pegs coming to grief through trade imbalances. Many of the imbalances of the Depression, when the US was a creditor and most of the rest of the world were debtors, were exacerbated by the fixed exchange rates of the gold standard. Most of the countries involved in the Asian financial crises of 1997 were on fixed pegs to the US dollar. Mexico was forced off its fixed exchange rate during the tequila crisis of 1994, and Argentina suffered severe economic problems until it abandoned its currency board in 2002. At the time of writing Greece, Ireland, Portugal and Spain are suffering major structural problems while Germany and its near neighbours simultaneously enjoy good growth. The common currency of the euro is currently magnifying trade problems, not reducing them.

Another factor that can be ignored in a base model is relative wealth. Although it is most common for the rich nation to be the creditor nation and the poorer nation to be the debtor nation, it is sometimes the other way round. Ancient Rome provides one example, where the rich metropolis was in hock to the poor provinces. A much better example is the current one of the rich USA being a very substantial debtor, balanced by a much poorer China as a very substantial creditor.

In fact, when looking at trade imbalances, it is my belief that it is debt, or more particularly savings rates, that are key. In Europe rich Germany has a high savings rate while Ireland and the Mediterranean countries have lower savings rates and higher debt. On a bigger scale, poorer China has one of the highest savings rates ever seen, and America has moved, in less than a century, from the world's creditor to the world's debtor. It is unfortunate that this is often seen in moralistic terms, especially by creditor nations. In fact, though cultural reasons are clearly important, savings rates are often driven by deeper fundamentals. As Lewis [Lewis 1954] pointed out lucidly, newly industrialising countries tend to have high savings rates as the newly rich elite have access to cheap land and cheap labour, and have little else to do with their money but save it.
The US complains bitterly about China's 'currency manipulation' causing an imbalance of trade, but the US made the same complaints about France and Germany in the 50s and 60s, about Japan in the 70s and 80s, and about the Asian tigers in the 90s. The common denominator here is the US; the exceptionalism of the US in this case is its ability to issue the world's reserve currency. As issuer of the reserve currency, the US is able to borrow at cheaper rates than other countries, so it is hardly surprising that it has become the world's biggest debtor. An identical process happened in the UK in the 19th century. In fact there appears to be a cycle in reserve countries over the last half a millennium. Reserve currency status has been held in turn by Portugal, Spain, Holland, France, the UK and now the US, with each country holding the status for roughly a century. In each case it appears that a country starts with a solid productive base that puts it at the heart of trade. This trade and creditor role then allows its currency to become dominant in trade. Reserve currency status then allows cheap borrowing and increased debt. The increasing debt, allied with 'imperial over-reach' in defending trade routes, then causes a crisis and the loss of reserve status to the next upstart.

So going back to figure 4.10.1:

Figure 4.10.1 here

We have two countries: Chermany, with a high savings rate, and Medimerica, with a lower savings rate. The two countries could start with the same population and the same amounts of capital K and wealth W per head, but the situation is naturally unstable. Chermany, with its higher savings rate, will consume less than Medimerica and will accumulate more capital. After the first iteration, Medimerica will have a little less capital, but will still have a thirst to consume rather than save.

In the short term the flows can be balanced by an unholy trade-off. Chermany can supply funds, a 'capital outflow', to Medimerica in return for financial assets belonging to Medimerica. Medimerica can then use this cash to buy imports from Chermany, mopping up the extra production that Chermany's high savers don't need. Unfortunately, although this balances the flows in the short term, it results in a grave problem with stocks. Chermany keeps on building up capital that it doesn't need. Meanwhile Medimerica increases its financial debt to Chermany while simultaneously running down its badly needed capital to pay for imports from Chermany.

This system is inherently unstable and can only end in tears. Eventually there will come a point where Medimerica simply cannot pay the interest on its debt. It no longer has sufficient real capital to generate the real income to do so. At this point Medimerica has to default one way or another. This can be by straight repudiation of debt, or by devaluation and inflation to reduce the value of the debts. For Chermany this then gives two problems: firstly, the loss of value of the foreign assets owned; secondly, and more importantly, the loss of markets for the exported goods produced by the excess capital that has been built up.
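Although I have not built the full model of figure 4.10.1, the basic stock-flow instability is easy to demonstrate. The toy bookkeeping sketch below follows the loop just described until Medimerica's capital income no longer covers its interest bill; the savings rates, capital yield and interest rate are purely illustrative assumptions, not estimates.

```python
# Toy bookkeeping sketch of the Chermany/Medimerica loop described above.
# Each year Chermany lends its surplus to Medimerica (a 'capital outflow'),
# Medimerica spends the loan on Chermany's exports, and the stocks diverge.
# All rates are illustrative assumptions. Medimerica's savings rate is zero:
# it consumes all of its income (a fuller version would also run down its
# capital stock, accelerating the crisis).

YIELD = 0.10        # real income produced per unit of capital per year
RATE = 0.05         # interest rate Medimerica pays on its debt
S_CHERMANY = 0.30   # Chermany's savings rate

k_chermany, k_medimerica, debt = 100.0, 100.0, 0.0

for year in range(1, 51):
    income_me = YIELD * k_medimerica - RATE * debt  # interest flows out
    surplus = S_CHERMANY * YIELD * k_chermany       # Chermany's excess saving
    k_chermany += surplus   # Chermany keeps building capital it doesn't need
    debt += surplus         # ...while Medimerica only builds up debt
    if income_me < 0:
        print(f"year {year}: Medimerica's capital income no longer covers "
              f"its interest bill -> default, one way or another")
        break
```

Under these assumed numbers the flows balance every year, yet the default arrives after a few decades; changing the rates changes the date but not the destination.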
This was most dramatically demonstrated in the run-up to the depression of the 30s, when almost the whole world used the gold standard. During the 20s, as the world's creditors, the USA (and also France) slowly built up their proportion of the world's gold reserves until Germany, the UK and other nations ran low on gold and were forced off the gold standard. They were also forced to partially default on their debts to the USA. The US was left with a large productive capacity and no buyers for its goods, and also sank into depression. The US cried foul, but with a large portion of the world's gold in the US it was not clear what the Europeans were supposed to use to buy American goods.

This bilateral instability goes back to the two forms of economic suicide introduced previously. One form of economic suicide is to run up too much debt, as discussed in section 4.6, which eventually becomes unsustainable. Running up debt can be very appealing, as it allows consumption to run ahead of real growth, and also inflates the values of financial assets. Until the party ends and the hangover kicks in, this feels good for public and politicians alike. The second form of economic suicide is to allow capital to build up too quickly, as discussed in section 4.4 above. Again, in the short term this feels good because the rapidly expanding capital base increases employment and wages. (It can also have the unfortunate side effect of increasing pride in supposed national industriousness and thrift.)

While it is possible to carry out each form of suicide independently, this is not so easy. In a single isolated economy the results of too much debt or too much manufacturing capacity are difficult to ignore. It is difficult to keep increasing debt in a home market beyond a certain point, and it is also difficult to build up capital and carry out a mercantilist export policy without people to export to. It is much easier to carry this out as a form of mutual suicide pact where one country takes on the role of debtor and the other of creditor, as described in the model above. The debtor country is able to borrow more and more at easy rates; the creditor country is able to sell more and more of its exports. Unfortunately neither of these processes can go on forever.

In the thirties it was the debtor countries that first collapsed, one by one. In the Plaza Accord of 1985 the debtor countries laid down the law with over-exporting Germany and Japan. Germany took heed and rebalanced its economy (at least until the launch of the euro). Japan continued to push export-led growth and imploded in 1989; to date it has not recovered. From 2006 onwards the American economy started to sputter, stalled by too much debt. In 2008 the American economy imploded in the credit crunch, taking other debtor countries such as the UK and Spain with it. At the time of writing, the creditors, primarily China and Germany, have rebounded, but with a world full of excess industrial capacity it isn't clear who they are going to keep exporting to. In Europe the need for rebalancing is obvious; Ireland and the Mediterranean members of the EU are moving into outright depression and are likely to default. In the world as a whole it remains to be seen whether China can rebalance in time to prevent a Japan-style bust. The big problem for China is that easing back on its export machine will result in mass unemployment and serious political unrest.
A possible solution is to move capital into the hands of the workers, as discussed in section 1.8 above, so that workers would have more to spend,
and would not be reliant on wages alone. All in all it would make sense for the Chinese and Germans to consume more of the goods that they make.

As with the Bowley squared model in the previous section, I have not attempted to create a mathematical model on the lines of figure 4.10.1 above. Again there are a lot of different variables, and the base models first need to be benchmarked against real data. Conceptually, however, the models should be straightforward to build.

Again, this sort of system is common in control systems engineering, and should be familiar to most office dwellers. To take the example of air-conditioning systems again, an analogous system is one where two large air-conditioning units are installed on an open office floor, each with its own independent control loop, set to control at exactly the same temperature. Common sense suggests that two identical systems like this should move up and down together in tandem. However, in this case common sense is wrong. Unfortunately, although the two units may be wired separately, the flows of air from one part of the building to another mean that the two units are actually influencing each other in what is called a 'coupled system'. Such a system can very easily become unbalanced; for example, if the units' settings are slightly different, or if part of the office is in shade while the other part is receiving sunlight.

In the second example the a/c unit in the shady part will provide a little cooling, while the a/c unit in the sunny part will provide a lot of cooling. Unfortunately, the cold air can then flow from the sunny part of the office to the shady part, while the warmer air from the shady part can flow to the sunny part. In fact convection will make this inevitable. When this happens the a/c unit in the shady part reduces its cooling, while the a/c unit in the sunny part ramps up its supply of cold air, and the two units end up in an ever-increasing battle to control the temperature. Ultimately, the a/c unit in the shady part may even switch to heating mode. This results in stratified air, bad draughts, general discomfort and very expensive utility bills. In this case the two a/c units are coupled but end up working in anti-phase; working in opposite directions. This is a common outcome in this type of control system.

The same can happen with national economies, though it doesn't have to be the case. For example, where a large country has good economic links with a smaller country, the smaller tends to move into phase with the larger. This is true, for example, with Canada and the United States. Although Canada can be influenced by external events such as commodity prices, its economy usually moves closely with that of the US. The same is true of the many smaller countries around Germany; not only does this include euro users such as Austria, Finland, the Netherlands and Belgium, it also includes others such as the Czech Republic, Denmark, Sweden and the Baltic states. Together, these countries form a linked bloc, with all countries moving closely in phase with Germany.
In contrast, due to their size and different economic fundamentals, Italy, Spain, Portugal, Ireland and Greece have moved into anti-phase, with the problems discussed in the model above. France remains uneasily stuck between the two conditions.

The model described in this section is analogous to a competitive Lotka-Volterra model (in contrast to the predator-prey Lotka-Volterra models we have discussed previously). A competitive L-V model consists of, for example, sheep and rabbits living side by side eating grass on the same island. Depending on their different growth and breeding rates, animals in these situations can come to different equilibria. If the animals are similar, say sheep and horses, an equilibrium can be reached with fixed proportions of the two groups of animals. If the animals are different, the equilibrium is unstable and moves to one extreme or the other. So with, say, sheep and rabbits, depending on the start point, one or other group will dominate and drive the other group to extinction. One group of animals will take over all the biomass, just as in international trade it is possible for one country to take over all the real capital; see the sketch at the end of this section.

Clearly the above model could be adapted in many ways, most obviously by introducing different currencies. Empirical data from the history of failed monetary unions and fixed currencies suggests that independent currencies have a significant effect, largely beneficial. If managed correctly, devaluation generally allows beneficial adjustment. Obviously, to introduce currency into international trade models it first needs to be introduced in domestic economies; this is discussed in brief in section 4.11 below.
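The sketch promised above runs a symmetric competitive Lotka-Volterra system in which cross-competition is stronger than self-competition; the coefficients and starting points are illustrative assumptions. Whichever side starts with even a tiny advantage ends up absorbing all of the biomass, as the creditor country absorbs the real capital.

```python
# Competitive Lotka-Volterra sketch: two populations competing for one
# resource. With cross-competition b > 1 the mixed equilibrium is unstable
# and one population drives the other to extinction. Coefficients and
# starting points are illustrative assumptions.

def compete(x0, y0, b=1.5, dt=0.01, steps=200_000):
    """Euler integration of dx/dt = x(1 - x - b*y), dy/dt = y(1 - y - b*x)."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = x + dt * x * (1 - x - b * y), y + dt * y * (1 - y - b * x)
    return round(x, 3), round(y, 3)

print(compete(0.50, 0.51))  # slight head start for y -> (0.0, 1.0)
print(compete(0.51, 0.50))  # slight head start for x -> (1.0, 0.0)
```

With b below 1 the same code instead converges to a stable mixed equilibrium, the 'sheep and horses' case.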
4.11 Where Angels Fear to Tread - Governments & Money

I move into a discussion of the theory of money, and the role of governments, with some trepidation. Of all the areas of economics, this seems to be the one in which a religious belief in theory unfounded on empirical fact is most widespread, and discussions in this sphere seem to take on the character of arguments between religious zealots. Exceptionally, Perry Mehrling writes on this field with great clarity and insight [Mehrling 2000].

It is my belief that an understanding based on flows and stocks, as followed in the rest of this paper, could be productive. It would be possible to start by looking at commodity money as an actual commodity, in line with section 3 above. Using a commodity such as gold in the real world is problematic because, as Robert Triffin noted, the supply of gold is insufficient to allow expansion of the money supply to keep pace with the size of the economy. To get around this problem all modern economies have moved to systems of fiat money, generally with inflation targeting or some other control system.

While I have many grave reservations regarding 'Modern Money' theory (see for example [Wray 1998]), I find its central insight, of treating money as an artificially created commodity flow, appealing. Figure 4.11.1 below shows a typical treatment.

Figure 4.11.1 here

The big problem with the modern money theorists is their almost religious belief that governments can expand public debt without limit when the economy is below full output capacity. A brief review of [Bernholz 2003], [Reinhart & Rogoff 2009] or [Pettis 2001] shows that the empirical data demonstrates that this is emphatically not true. As Perry Mehrling [Mehrling 2000] points out very lucidly, the problem with the approach of Wray and others is that the state's ability to pay coupons on government bonds ultimately depends on the state's ability to raise taxes, and also on the good use that the state puts those taxes to. In the simplistic examples of Modern Money, a colonial governor in an undeveloped rural economy raises hut taxes to pay for new roads and schools, and this clearly results in substantial economic improvements. That this can be translated into a modern Western economy is not obvious. In fact, in industrialised countries, much money raised, whether by taxation or borrowing from private markets, is not invested in infrastructure but instead passed straight through to consumption as transfers. In this light the relationship between government and the private economy would appear to resemble the relationship between a debtor nation and a creditor nation in the Siamese-Bowley models above. The modern money theorists are surely correct in their belief that a significant amount of government debt is good for the economy, as it provides a secure asset that gives needed liquidity for effective private markets. But to believe that this debt can be expanded indefinitely is to undermine the most important value this debt has: that of security.

In a similar vein, I find much of Milton Friedman's monetary theory terrifyingly naive. However, I have found the blogging of 'kitchen-sink' monetarists such as Simon Ward [Ward] and John Hussman [Hussman] enormously insightful and surprisingly able in their predictive power. Friedman's theories, though simplistic, were also of course based on flows, and assumed delays in action. So although his formulation was not dynamic, his underlying model, and the data it was based on, were. I am insufficiently skilled to be able to judge whether either or both of the modern money and the monetarist approaches can be synthesised effectively into the modelling framework described in this paper. But I believe it may be an approach worth pursuing.

Another problem with monetary theory is that 'money' can be artificially created by at least two dynamic feedback mechanisms. The first is the loop of fractional reserve banking, which can allow a large multiple of debt to be created for each sum of reserves pushed into the economy by the reserve bank. A second multiplier is the endogenous creation of liquidity within the finance system; this was seen in the models in section 4, and is discussed at length in section 8.2.1 of this paper.
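As a sketch of the first of these two loops, under an assumed 10% reserve ratio, the fractional reserve multiplier is just a geometric series:

```python
# Sketch of the fractional reserve loop: each deposit is re-lent except for
# the fraction held back as reserves, a geometric series converging to
# base / reserve_ratio. The 10% reserve ratio is an assumption for clarity.

def broad_money(base, reserve_ratio, rounds=200):
    """Total deposits created from 'base' of central bank reserves."""
    total, loanable = 0.0, base
    for _ in range(rounds):
        total += loanable                  # new deposits in the banking system
        loanable *= 1.0 - reserve_ratio    # amount re-lent after reserves
    return total

print(broad_money(100.0, 0.10))   # ~1000.0: a tenfold multiplier
print(100.0 / 0.10)               # closed form: base / reserve_ratio
```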
Taking all the above together, this then ends up with a basic model of the financial system that works something like the diagram below.

Figure 4.11.2 here

This simple model includes at least two amplification loops and two feedback loops with positive feedback. If housing were included in the diagram, with the leverage of mortgages, there would be more feedback and amplification. With my control engineer's hat on, the only thing I can say about this as a control system is that if I were trying to design an effective control system, it definitely wouldn't look like the diagram above. It is about as sensible as trying to control a steam engine with a system made out of cheap rusty shower mixer valves and some lengths of garden hose. In democratic countries, central bankers are expected to control the whole of the country effectively by controlling the variables on the left-hand side. Whatever they are paid, it is not enough.

4.12 Why Money Trickles Up

Before finishing this section on modelling, and moving on to a discussion of background theory, I would first like to revisit the premise of this paper. At this point I am forced to confess to having committed a major offence that I have accused others of. I used the phrase 'Why Money Trickles Up' as the title for this paper to give an emotional impact; the title should really have read 'Why Wealth Trickles Up' or perhaps 'Why Income Trickles Up'. I have only discussed monetary theory as a passing aside.

I believe, however, that I have given an authoritative explanation of both how and why wealth trickles up from the poor to the rich, as well as a detailed description of the mechanisms. In brief: macroeconomic factors, including interest rates, saving/consumption rates and debt, define the Bowley ratio, the proportions of wealth returned as wages and profits. The Bowley ratio then defines the parameters of the General Lotka-Volterra distribution that describes the distribution of wealth between individuals. This distribution of wealth then defines the majority of the shape of the distribution of income.
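As one final illustration of this chain, the sketch below runs a bare-bones Generalised Lotka-Volterra wealth process: multiplicative random returns on wealth, plus a small additive coupling to average wealth standing in for wages and transfers. The coupling and volatility values are illustrative assumptions; the point is only that the stationary distribution acquires a power-law tail, and that the tail fattens as the coupling, the labour share, shrinks.

```python
# Bare-bones GLV wealth process: multiplicative random returns plus a small
# additive coupling to mean wealth (standing in for wages/transfers).
# Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, STEPS = 10_000, 5_000
COUPLING = 0.005            # additive share of mean wealth received each step
SIGMA = 0.10                # volatility of the multiplicative returns

w = np.ones(N)
for _ in range(STEPS):
    shocks = rng.normal(1.0, SIGMA, N)    # random multiplicative returns
    w = shocks * w + COUPLING * w.mean()
    w *= N / w.sum()                      # track relative wealth, mean = 1

top_1pc = np.sort(w)[-N // 100:]
print(f"share of wealth held by the top 1%: {top_1pc.sum() / w.sum():.1%}")
```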
That is why money trickles up.
Part B - Some Theory

5. Theory Introduction

Part A introduced a range of possible models to look at some of the basic interactions of economics. Though they may have had inspiration from other sources, the models are my own work. In many ways the models are naive and simplistic. Time will tell whether they prove useful or not. If the models survive unchanged I will be pleased, but also surprised. If the models are trashed and replaced I will be disappointed, but not particularly surprised.

The accuracy of the models is beside the point. The point of the models is that, by using a set of tools selected from other areas of science in combination with ideas primarily from classical economics and finance, it is possible to create simple, effective models that address basic, fundamental regularities in economics. If the approaches of the models above are taken further, but the models themselves are superseded, then I will have achieved the main aim of this paper.

The scientific tools come primarily from physics, biology and pure mathematics. For almost all economists these tools, ideas such as chaotic mathematics, statistical physics and entropy, will be unfamiliar to the point of being quite alien. Even for most physicists, ideas such as the GLV and maximum entropy production will be unfamiliar, and I believe these will be of interest to many working in the field of complex systems, whether this includes economics or not.

As to the economics, of course almost all scientists will be ignorant of the basics of economics. Sadly, with very rare exceptions, even most physicists, mathematicians and modellers researching in economics seem to take a perverse delight in not knowing anything at all about basic economics. This attitude seems to be something along the lines of: "we know all about steel plate, diesel engines, turbo-chargers, power steering, inertial guidance systems, etc; why on earth should we spend our time learning about sailing boats?" However, although sailors take a lot of time and effort tacking backwards and forwards without getting anywhere particularly fast, some of their knowledge is quite useful; for example, where the shoals and reefs are, how to use a compass and sextant, why you should carry a fog-horn, not to mention lifeboats and life-jackets. And why it is a good idea to know how to swim.

In fact many of the economic ideas in this paper will be unfamiliar to many economists. The economic ideas come largely from finance, economic historians, and classical and other heterodox economics; including, somewhat to my own surprise, Marxian economics. All of these ideas are outside the canon of mainstream neoclassical economics and so are not just ignored but are politely rubbished, in the case of economic history and finance, or very impolitely rubbished, in the case of heterodox economics. None of these ideas are included in undergraduate economics courses other than at the most maverick of universities. As this paper is largely based on non-standard economics, I have gone to some efforts, not just to explain this background, but also to justify it to sceptical economists steeped in marginality and utility theory.
This is firstly to explain unfamiliar ideas to both economists and non-economic scientists. Also, for the economists, it is to explain how many other things, such as liquidity and dynamic scarcity, explain large apparent deviations from the idea of intrinsic value, which is inherent in classical economics but absent in neoclassical economics. Once these deviations are understood and correctly modelled, classical economics becomes a much more powerful theoretical method than neoclassicism.

Economic historians such as Reinhart & Rogoff, Shiller, Smithers, Harrison, Napier and Bernholz have the advantage of the long sweep of history to prevent them from accepting high-faluting theory that disagrees with reality. This research shows clear patterns in economics, such as strong cyclical and mean-reverting behaviour, that support Austrian, Minskian and similar views. It also supports the theory of intrinsic value, and discredits orthodox economics.

Similarly, the inclusion of ideas from finance is not particularly surprising; people working in finance do not have the option of embracing intellectually beautiful ideas that don't describe reality, at least not if they wish to remain working in finance. They are obliged to adopt rules of thumb that work. Some of the more thoughtful financiers, people such as Pettis, Shiller, Smithers, Cooper, and Pepper & Oliver, have then made insightful attempts to explain why these rules of thumb work in practice. In the field of market-microstructure in particular, these approaches have been researched systematically and are close both to regularisation and to melding with the work of the more insightful financial econophysicists, despite the fact that the econophysicists have approached these problems from a completely different direction. Like econophysics, market-microstructure is highly mathematised, and very difficult to comprehend on a first reading. Perhaps because of this combination of complex mathematics and inscrutability, most curiously, market-microstructure appears to have been accepted as mainstream economics. This suggests most mainstream economists have never read any market-microstructure, as its rejection of marginality is, though very discreet, absolute.

Which brings us to heterodox economics. Firstly, the parallels between market-microstructure and post-Keynesian pricing seem, to this author, both obvious and of considerable practical importance. Though I stand to be corrected, this parallel does not appear to have been noted previously, presumably because post-Keynesians don't read market-microstructure papers and vice versa.

The main reason for adopting classical economics was almost accidental. I had previously rejected the dabblings of both Foley and Wright into Marxian economics as misguided foolishness. I was wrong; they were right. My first reason for rejecting Marxian economics was that the labour theory of value is so obviously wrong-headed; the second was that I had believed Marxian economics had been systematically disproved by neoclassical economics. More reading of economics quickly proved the second assumption to be false: Sraffa was the victor of the Cambridge capital controversies. The labour theory of value is indeed nonsense. However, the concept of absolute value is not nonsense; it is in fact very powerful.
The concept of 'negentropy' as value, as articulated by Ayres & Nair [Ayres & Nair 1984] for example, is not just basic common sense; it works as a theoretical approach, as evidenced by the models in part A. Once the labour theory of value is replaced by a 'negentropy theory of value', not only does classical economics make perfect sense, it also allows economics to become a self-consistent theory that is an obvious subset of the natural sciences. A very large, very interesting and very important subset; but a subset nonetheless.
In contrast, the fundamental innovation of neoclassical economics, that value is not inherent but is set in the collective subconscious of buyers and sellers, has proved to be a spectacular non-achiever. This assumption also has the worrying theoretical feel that one somehow has to believe in fairies: that the value of a brick or a ham sandwich can dramatically change overnight just because a lot of people believe its value should change.

That is not to say that I have an intrinsic problem with believing in fairies. When studying quantum mechanics or information theory, I find the explanations seem to depend on the worrying existence of an intelligent external observer. Given the assumed existence of quantum mechanics, and of systems described by information, prior to humanity's descent from the trees, I find this troubling. However, I feel obliged to accept both quantum mechanics and information theory because the maths works well, unbelievably well, in describing the characteristics of real-world systems.

In contrast, neoclassical economics, despite 140 years of theoretical effort, has singularly failed to achieve a single macroeconomic model of the slightest usefulness. Neoclassical theory failed spectacularly to predict the credit crunch of 2008, as it failed to predict the crash in Argentina in 2002 before that, the failure of LTCM (despite the Nobels) in 1998, the multiple crashes in Asia in 1997, the crash in Mexico in 1994, the collapse of the European monetary system in 1992, and the collapse of Japan into deflation in the early 1990s. At the time of writing it is clear that the central banks of the USA, the Eurozone, Japan, the UK, Switzerland, Sweden and others are all following their own significantly different policies, based primarily on experience and intuition. This is because they have no meaningful macroeconomic models. The ones they did have in 2008 have been quietly abandoned, and they are now largely flying by the seat of their pants, with a finger in the air to check the weather conditions. Such is the legacy of a century and a half of neoclassical economics. It is the belief of the author that the move to subjective value instigated by neoclassical economics remains the biggest and most damaging wrong turn ever made in the history of the sciences.

The teaching of chaos, statistical mechanics and entropy is famously difficult. The concepts of liquidity and market microstructure are similarly opaque when first encountered. Despite this, once the ideas are grasped they are actually quite simple, and can then become very powerful tools for understanding problems. I have neither the teaching skills nor the space in a paper of this length to do justice in explaining these ideas. What I have attempted to do in Part B is to give a basic feel for the ideas, with very simplistic models and almost no mathematics. I have then also pointed to other authors, authors more skilled than myself, who can give greater depth and clarity than I can. Finally, in section 13 I have included a reading list to point the way forwards into these subjects for mathematicians, economists and other scientists.

In the sections that follow I have included some lengthy quotes from some authors, primarily Duncan Foley, Steve Keen and Ian Wright. This is mainly because they explain some of the points I wish to make very eloquently. In most cases I have then attempted to explain the ideas in alternative ways in my own words.
Some readers may not find the extracts easy to follow on first reading. If this is the case, I suggest that readers skim these extracts and read my own
words, then reread the extracts. It is hoped that the two different descriptions will help illuminate the underlying theories.

It goes without saying that the basic ideas in part B are not my own. The ideas of mathematical chaos, statistical mechanics and basic entropy are centuries old, as are the ideas of classical economics. Other concepts, such as maximum entropy production, market-microstructure, liquidity and post-Keynesian pricing theory, are relatively recent; recent enough to be largely unknown in wider physics and economics circles. My own limited input includes, firstly, occasionally suggesting possible practical examples and uses that emerge from the theory; the ideas are speculative, and whether they actually prove to be useful remains to be seen. The intention of these proposals is to encourage a new way of tackling problems in economics and finance. More importantly, I believe I have pulled together an apparent rag-bag of ideas from seemingly unconnected fields that may allow a systematic approach to be put together; one that gives economics a strong, coherent, mathematically rigorous basis that transcends the petty boundaries of the many current competing economic models.

Part B.I - Mathematics

6. Dynamics

6.1 Drive My Car

Before moving into the ideas of non-linear dynamics and chaotic mathematics, I would like to start briefly with a discussion of the difference between statics and dynamics.

Imagine that you own a car, or better a pick-up truck; a small vehicle with an open space at the back for carrying loads. For the moment we will discuss what happens when the truck is parked; this is the case where the mathematics of statics is relevant. If the truck is unloaded it will be high up on its springs, with a big space between the top of the back wheels and the top of the wheel-arch on the body. This is a particular static equilibrium: the force of gravity and the force of the spring come to a balance at a particular point. If you then put a dozen bags of cement in the back of the pick-up truck, the truck will move down on its springs and the body will move closer to the wheels. This is a new static equilibrium at a different point, where the new greater weight due to gravity balances with a new bigger force from the more compressed spring.

Now the truck will also have dampers (shock-absorbers) fitted. In a normal pick-up truck these dampers will be quite beefy, and will slow down the movement from one static equilibrium point to another. These dampers provide a frictional force, and from the point of view of the static equilibria beloved of economists, they are very inefficient. They physically prevent rapid movement from one static equilibrium to another. From this line of thinking it would be better to reduce the size of the dampers or just remove the dampers altogether. Then, following a point change in the weight, the truck would move to its new equilibrium much faster.
Using this line of thinking, a neoclassical economist could also point out that once you start driving you won't be changing the load anyway, so you don't need to worry about the dampers, as you won't be moving away from whichever static equilibrium you started at.

More thoughtful people will realise that this is not a sensible line of argument. A moving truck is in a dynamic situation. When you set off driving you will need to turn corners and you will sometimes hit bumps in the road; this will set off bouncing in the truck, and you need dampers to slow the up and down movements of the truck. Obviously if you drive down a dirt road, with a lot of bumps, you will need dampers or the truck will bounce about all over the place.

What very few people realise, even very thoughtful people, is that dynamic systems are much more difficult to control than that. If you take the dampers off a car, and then you drive the car very carefully down an absolutely flat, absolutely straight road (an airport runway, say), within a few tens of seconds the car will start bucking like a bronco and will be almost undriveable. It doesn't matter how carefully you drive the car; the car will rapidly move into a strongly vibrating mode. The problem is that as soon as you start driving the car, you introduce extra time-based equations into the system of mathematics that describes the car. This new system of mathematics, the dynamic model, is completely different to the static solution. It is not an extension of the static model; it is not a modification of the static model. It is a different system with different solutions.

For a car without dampers the solution is similar to the Lotka-Volterra model seen in figures 1.2.1.2 and 1.2.1.3 in section 1.2 above. This solution is naturally unstable and rotates around a central point indefinitely. Even if you deliberately start the car off with conditions at the central point (which would be the solution to the static system), the car's movements will quickly spiral out to the circle of dynamic points. That is because this circle is the solution to the dynamic equations. The central point is not a solution to the dynamic system, so the car cannot stay at this point. The car will have a natural 'resonant frequency' and will move into this form of vibration. Like the Lotka-Volterra system, this vibrational mode is the equilibrium solution for this physical model. In this case the equilibrium is dynamic; it has constantly varying parameters.

If you put the dampers back on the car, then the central point is a solution to the dynamic system, and the behaviour of the car then becomes similar to that seen in figures 1.2.1.4 and 1.2.1.5 in section 1.2 above, or to that seen in some of the commodity models of section 3 and the macroeconomic models in section 4. Even if the car hits a bump and starts bouncing, its movements will be damped and will quickly move back to the stable point. That is why cars have dampers: they automatically and very simply change an unstable dynamic equilibrium into a stable dynamic equilibrium. In a static framework dampers are inefficient; they prevent rapid movement to a new equilibrium. In a dynamic framework dampers are essential; they move the system from an ever-changing cyclical dynamic equilibrium to a stable dynamic equilibrium close to the static equilibrium point.
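The difference is easy to see numerically. The sketch below integrates the same mass-spring system with and without a damper after a single bump; all the physical values are illustrative assumptions.

```python
# The same suspension after one bump, with and without a damper. Undamped,
# the dynamic solution is a persistent oscillation; damped, the system
# settles back to the static equilibrium. All values are illustrative.
import numpy as np

def suspension(damping, bump_velocity=1.0, mass=1.0, stiffness=25.0,
               dt=0.001, t_end=10.0):
    """Semi-implicit Euler integration of m*x'' + c*x' + k*x = 0."""
    n = int(t_end / dt)
    xs = np.empty(n)
    x, v = 0.0, bump_velocity     # at equilibrium, kicked by a bump
    for i in range(n):
        v += dt * (-damping * v - stiffness * x) / mass
        x += dt * v
        xs[i] = x
    return xs

undamped = suspension(damping=0.0)
damped = suspension(damping=4.0)
# Largest displacement during the final second of the run:
print("undamped:", np.abs(undamped[-1000:]).max())   # still oscillating
print("damped:  ", np.abs(damped[-1000:]).max())     # effectively at rest
```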
Similar problems are found in many other systems. A famous example is the Tacoma Narrows suspension bridge ('Gallopin' Gertie') in the United States, which was destroyed by the wind (for a little entertainment, do an internet search for videos of 'Tacoma Narrows'). Common sense
suggests that the wind should not be strong enough to destroy a bridge built of steel. But the wind blew around the suspension cables and induced vibrations in the cables at their resonant frequencies. These then induced vibrations in the bridge at its natural frequency, which eventually built up enough to destroy the whole bridge. Nowadays suspension bridges are normally built with dampers installed on the cables to prevent vibrations building up, as well as vanes to prevent alternate vortex shedding (similar vanes can usually be seen on tall steel chimneys).

More recently a similar problem occurred with the Millennium footbridge near St Paul's in London. This time the vibrations were induced in the bridge by pedestrians. In this case the pedestrians started movements in the bridge at its natural frequency. The movements of the bridge then forced the pedestrians to walk at this natural frequency, so a feedback process built up that caused large movements in the bridge. The bridge had to be closed the day it opened, and stayed closed for some months until dampers could be installed.

Another very elegant example of how dynamic systems can behave in unexpected ways is that of traffic flows. A video of a beautiful example of a system moving into a stable but chaotic zone of behaviour is given at [New Scientist 2008]. Here a number of drivers were asked to drive in a circle at a constant 30 km/h. They signally failed to achieve these very simple instructions. An alternative system quickly set itself up, with a clear and stable wave pattern of blocked vehicles moving around the circuit at a steady speed. This system of flows being blocked and forcing rhythmical patterns of fast and slow is exactly analogous to the flows of goods, and changes in prices, in economic systems.

For 140 years economists have treated economics as a static system. A Walrasian auctioneer compares all bids and offers in the market and then closes out all purchases and sales at a market-clearing price. To compare two different economic points economists use 'comparative statics'. They look at one static point, say 'stationary truck unloaded', then look at another point, 'stationary truck loaded', and then calculate the locus of movement from one point to the other. From this view economists conclude that economic systems will quickly and naturally come to an equilibrium; they also conclude that frictional forces are bad and prevent rapid movement to the equilibrium.

In recent years economists have started using what they call 'dynamic' models. With the notable exception of the Goodwin models, these are lots of small stationary comparative-static analyses carried out one after the other. This might be better described as 'high-frequency statics', and is equivalent to loading and unloading the truck rapidly with lots of small bags of cement. Even the Goodwin model is highly confused, attempting to model growth processes, presumably long-term exponentials, via the Lotka-Volterra model, which, although it shows short-term growth and decline, is most certainly a long-term stable model, not a growth model. Certainly none of the 'dynamic' models proposed in recent years have made it into the mainstream textbooks, for the simple reason that the models don't work and don't effectively model anything.

To take the two mainstream economic texts cited in this paper: Mankiw [Mankiw 2004] has a dozen or so time-based graphs, but all show actual data, not theoretical modelling.
There are lots of theoretical graphs in Mankiw, but all are static or comparative static, almost all of them variations of price versus quantity. Similarly Miles & Scott [Miles & Scott 2002], a much better book, has many dozens of time-based data graphs but only one theoretical time-based graph, their figure 7.2. There is no discussion of dynamic equilibrium in Miles & Scott; all theory is discussed in a comparative-static framework.
A century and a half of neoclassicism has prevented economists getting in the car, turning on the ignition and releasing the handbrake. Economics is a dynamic system. Whether it is a trader selling shares on a stock market or a shopper buying groceries in a supermarket, traditional auctions are notable by their absence. Prices are never formally closed; they are settled dynamically in real time. They are set by price setters: market-makers or order books in stock markets, suppliers in retail markets. These prices are set by people who look at the prices of competitors, the rate of purchase of goods, the inventory of goods in the shops, the prices of raw materials, etc. The values of all these items are historic; they are functions of past time. For a shop, the competitors' prices may have been collected the previous day. For a stock trader the competitors' prices may only be seconds old. But with high-frequency trading, seconds old is definitely pre-historic. So the most important variable in the functions used for setting prices is time. Price setting is a dynamic process, with many more equations than a static process.

These dynamic systems give feedback loops, and often give unstable equilibrium solutions, just as with biological Lotka-Volterra systems and car suspension systems. This is painfully obvious in the cyclical behaviour of stock-markets, house prices, commodity prices, currency fluctuations, etc. These fluctuations are inherent in economics, because economies are dynamic systems. The fluctuations of stock-markets, house prices and commodity prices are the result of natural dynamic equilibria. Neoclassical economics states that the fluctuations shouldn't exist, and that if they do it is a result of frictional inefficiencies. As a result the policy recommendations of neoclassical economists make the fluctuations in dynamic economies worse. If neoclassical economists genuinely believe that comparative statics is a sensible way to analyse and manage dynamic systems like economies, they should prove it by taking the shock-absorbers off their cars.

6.2 Counting the Bodies - Mathematics and Equilibrium

In his book Debunking Economics [Keen 2004], Steve Keen puts his finger on the problem at the heart of economics. Economists are using the wrong sort of mathematics when they attempt to solve their problems:

Economics remains perhaps the only area of applied mathematics that still believes in Laplace's dictum that, with an accurate enough model of the universe and accurate enough measurement today, the future course of the universe could be predicted. For mathematicians, that dictum was dashed in 1899 by Poincaré's proof of the existence of chaos. Poincaré showed that not only was it impossible to derive a formula which could predict
the future course of a dynamical model with three or more elements to it, but even any numerical approximation of this system would rapidly lose accuracy.... The more appropriate starting point for mathematical models of the economy are dynamic equations, in which the relationships between variables cannot be reduced to straight lines. These are known as nonlinear differential equations. The vast majority of these cannot be solved, and once three or more such equations interact, they are impossible to solve. Table 1 summarises the situation. Economic theory attempts to analyse the economy using techniques appropriate to the upper left-hand part of Table 1, when in fact the appropriate methods are those in the lower right-hand part.

Table 1: The solvability of mathematical models (adapted from Costanza 1993)

                        Linear                                               Non-linear
Equations               One         Several       Many                       One             Several         Many
Algebraic               Trivial     Easy          Possible                   Very difficult  Very difficult  Impossible
Ordinary differential   Easy        Difficult     Essentially impossible     Very difficult  Impossible      Impossible
Partial differential    Difficult   Essentially   Impossible                 Impossible      Impossible      Impossible
                                    impossible

Or alternatively, as Wright puts it:

The state-space of a system is the set of all possible configurations of the DOF [Degrees of Freedom]. A particular configuration is a 'point' in state space. In general we find that many neat systems, if they enter equilibrium, tend toward a point or trajectory in state-space. A canonical example is a set of weighing scales. Place some weights on each arm and the scales will tend toward an equilibrium point in which the internal forces balance and the system is at rest. This is a simple kind of deterministic equilibrium, in which the equilibrium configuration is a subset of state-space. The classical mechanics concept of equilibrium was a founding metaphor of the 19th Century marginal revolution in economics (e.g., see Mirowski (1989)). And it appears in a more developed form in 20th Century neoclassical general equilibrium models (e.g., Debreu (1959)). But most messy systems, if they enter equilibrium, do not tend toward a subset of state-space. [Wright 2009]

And, of course, economics is not a neat system; economics is a messy system, economics is a multibody system. Foley gives this background in more detail:

The concept of equilibrium states has played a decisive role in the development of quantitative sciences. The study of mechanical equilibrium, conceived as a balancing of forces in a static system, clarified the fundamental notions of force and mass in the course of the 17th century development of Newtonian physics. The 19th century saw the emergence of characteristically statistical descriptions and theories of mass phenomena (see Stephen Stigler, 1986; Theodore Porter, 1986) which migrated from the social sciences to physics, where they blossomed into the
marvelously successful and equally marvelously puzzling methods of statistical mechanics (see Lawrence Sklar, 1993). These statistical theories eschew the goal of describing in detail the situation of all the subsystems that constitute a large system with many degrees of freedom in favor of drawing strong conclusions about the observable macro behavior of the system based on statistical considerations. As Edwin T. Jaynes (1978), following the approach of J. Willard Gibbs, realized, statistical equilibrium in all its various applications occurs when the appropriately defined entropy of the system is maximized subject to the appropriate constraints. The entropy is a strictly concave function of the probability distributions describing the system, and the constraints are typically linear or convex functions, so that this maximization implicitly calculates shadow prices (Lagrange multipliers) for each of the constraints, which are uniform over the subsystems and characterize its important properties in equilibrium.

One might have expected that these statistical methods would be a natural basis for the attempt to put social theory, and particularly economic theory, on firm mathematical and quantitative foundations. It is a commonplace of social and economic methodology to point out that human behavior, no matter how idiosyncratic and unpredictable it is in individual human beings, is subject to statistical regularity and predictability in the aggregate. The Maxwell-Boltzmann-Gibbs methods of statistical mechanics, furthermore, are based on the calculation of dual variables that have the dimension of prices, and effectively view the establishment of physical equilibrium as a kind of economizing process. Thus it would not have been surprising had economic theory developed a statistical concept of equilibrium. By a curious turn of the history of thought, however, economic theory, despite an almost obsessive fixation on physical models and analogies (see Philip Mirowski, 1989), gave birth to an idiosyncratic conception of equilibrium fashioned more on the mechanical analogy, in the work of Leon Walras, Vilfredo Pareto, Irving Fisher, and Francis Y. Edgeworth (to name a few of the more important figures). In Walras' equilibrium each subsystem (firm or household) deterministically maximizes profit or utility facing uniform prices 'cried out' by an 'auctioneer'. The auctioneer experiments until she has determined an equilibrium price system at which the offers to sell and buy each good in each market are exactly balanced. Because this theory assumes as an axiom that no transactions take place until the equilibrium prices are determined, households with the same preferences and endowment will always receive the same bundle of consumption goods in the equilibrium: horizontal equity (or equal treatment) is guaranteed by this a priori assumption.

The Walrasian conception of equilibrium is in sharp contrast to the statistical thermodynamic conception, in which the equilibrium energy distribution of subsystems (say, molecules) is achieved by their exchange of energy as they interact during the transient approach to equilibrium. In a thermodynamic context we would be astonished to find that two molecules that started in the same energy state generally end up in the same energy state.
Apparently physicists tried to alert Walras to the peculiar nature of the conception of equilibrium he was proposing, but without success, either because Walras did not understand the statistical point of view very well, or because he considered it and rejected it on other grounds. J. W. Gibbs served as Irving Fisher's thesis adviser at Yale apparently without raising questions about the non-statistical conception of the equilibrium systems Fisher was studying. Francis Edgeworth distrusted Walras' conception of the auctioneer enough to propose an abstract combinatorial model of exchange, based on the idea of recontracting among coalitions of traders (which has developed into the modern theory of the core). The recontracting feature of Edgeworth's theory, however, implies equal treatment of agents with the same preferences and endowments, thus reproducing the key elements of Walras' system.

One aim of Walras' and Edgeworth's theories was to explain the emergence of coherent market price systems from the decentralized interaction of atomistic traders. Unfortunately, both Walras and Edgeworth resort to strong and unrealistic assumptions to address this issue: Walras
invented a fictional information-centralizing auctioneer, and Edgeworth posited costless recontracting among agents. The statistical approach offers an elegant alternative in this respect: market prices can be regarded as the shadow prices or Lagrange multipliers arising inherently from entropy maximization. In this view the system constraints (market clearing conditions) give rise to global prices just as the constraints of volume and energy in a physical system give rise to the emergent properties of pressure and temperature in a confined gas. The atomistic agents in a market 'feel' the effects of these global constraints combinatorially as the relative difficulty of changing their holdings of goods, just as individual molecules 'feel' the global constraints on energy and volume in terms of the likelihood of reaching any given energy state. [Foley 1996b]

Few physicists read economics books. Even the physicists who are profoundly interested in economics, and produce papers on economics, rarely read economics books. The main reason, for the scientifically trained, is the extraordinarily unscientific approach that economics books take. Statements such as 'assume a demand curve' or 'assume a budget line' simply inculcate an overriding feeling of 'why?'. Where on earth do these assumptions come from, and why should they be assumed? For the more intrepid physicists who persevere, it comes as something of a shock to discover that utility theory was directly copied from the field theory of physics in the 1870s, and copied with gross errors. More extraordinarily, having absorbed field theory and adopted it as the core of economics, economics has studiously ignored the majority of the mathematics developed since the 1870s (game theory being a notable exception), even though this later mathematics would be much more appropriate for the analysis of economics. In this regard economics resembles a tenacious terrier, unable to eat the plates of meat set down in front of it, due to its inability to let go of the very well chewed bone it has firmly gripped in its teeth. The full horror of this calamity is recounted at length, and in very entertaining detail, in Mirowski's book 'More Heat than Light'; a book that, contrary to its title, many economists might find enlightening reading [Mirowski 1989].

The central point of Mirowski's book is that utility was copied from field theory, but in doing so economists threw away the basic conservation principles that give field theory any meaning. If fields are not conservative, then there is little point in drawing curves and lines to visualise them. Without conservation laws, two different paths between the same two points will give different values, and so the curves and lines do not have values that can be meaningfully represented, neither graphically nor mathematically.

The second problem with field theory as a basis for economics is that it is simply, and absolutely, not appropriate for multibody systems. In their different ways, gravity, electromagnetism, relativity and quantum mechanics are all varieties of field theory. But in the application of their mathematics, interactions are limited to two bodies; so, for example, an electric current can be seen as a unified flow of the separate electrons, moving at the same speed in the same direction. Newton's theory of gravity was the first and the classic description of field theory, and with two bodies, the sun and a single planet for example, Newton's theories work perfectly.
But even with a very simple multibody planetary system, Newton's theories break down, and fail to explain behaviour exactly. The errors are small, but the errors are there. As soon as you get to three bodies, for example the sun, earth and moon, it becomes impossible to find exact solutions for the motions of the bodies. Even in a three-body system the motions of the bodies become chaotic and unpredictable at a detailed level. In 1890 Poincaré demonstrated that it is actually impossible to solve the equations for a three-body system in a simple field system, so even a system as simple as the sun, moon and earth is chaotic, and cannot be accurately predicted over the long term. This, and a full history of analysing the motions of the planets, is written up in the very enjoyable book by Peterson [Peterson 1993]; Poincaré's work is discussed in chapter seven.

It is important to note that this chaotic motion is noticeable in objects as large as planets. This is not simply the chaos of quantum effects or the stochasticity found in Black-Scholes. This is 'deterministic chaos', or usually just 'chaos theory'. The chaos is present even in problems that can be described in exact mathematics and are completely free from random exogenous or microscopic behaviour. The original Lotka-Volterra model is just such a mathematical system. In practice the meeting of foxes and rabbits will have a stochastic element, but the system at a macroscopic scale is described very well by deterministic equations. In deterministic chaos, the behaviour of the system can change dramatically according to very small changes in initial conditions, as described in the analogy of butterflies causing tornadoes a continent away.

However it is of course obvious that, although the positions of the earth, sun and moon cannot be predicted exactly, they can be predicted to a very high degree of accuracy, and their paths follow strongly constrained bands. This is a different type of equilibrium, a constrained chaotic equilibrium, which never stabilises at a fixed point, and so never becomes a static equilibrium. The Lotka-Volterra equilibria (though not those of the General Lotka-Volterra) fall into this class of equilibrium. So in a simple eco-system, the numbers of rabbits and foxes can vary significantly, but a peak in the population of either will be followed by a trough, and the long-term average values of both populations will be very stable. In economics, Minskian, Austrian and Goodwin-type systems fall into these categories, and the commodity and macroeconomic models discussed in sections 3 and 4 above attempt to model such systems. Such systems can show different behaviour depending on their underlying characteristics: they can be very stable, staying close to the long-term averages; they can oscillate strongly; or they can grow explosively to infinite positive or negative values.

And of course, real economies clearly follow the same patterns empirically. Business cycles have been evident and documented for at least two centuries. The periodicity may have changed as economies have changed, but the fluctuations remain.
These can be the short-term cycles of building up and drawing down inventories; they can be the 15-20 year land cycles documented by Harrison [Harrison 2005]; they can be the decadal mean-reversions of stock prices documented by Smithers and Shiller [Smithers 2009]; they can also be the once-in-a-lifetime financial crises, such as the Great Crash or the credit crunch, caused by the retirement of all the people who remember why strict controls were imposed on the financial system after the last such crisis [Napier 2007]. And in the great crashes the system moves out of periodicity into explosive behaviour.
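To make the constrained cyclical equilibrium described above concrete, the following is a minimal sketch (not taken from the models in this paper) of the deterministic Lotka-Volterra system referred to in section 1.2. The parameter values and the crude Euler integration are illustrative assumptions of mine; a real implementation would use a proper integrator such as Runge-Kutta.

```python
# A minimal sketch of the deterministic Lotka-Volterra system: the
# equations contain no randomness at all, yet the populations cycle
# endlessly around stable long-term averages rather than settling at
# a static equilibrium. All parameter values are illustrative.

def lv_step(prey, pred, a=1.0, b=0.1, c=1.5, d=0.075, dt=0.001):
    """One (crude) Euler step: prey grow at rate a and are eaten at rate
    b*prey*pred; predators die at rate c and breed at rate d*prey*pred."""
    new_prey = prey + dt * (a * prey - b * prey * pred)
    new_pred = pred + dt * (d * prey * pred - c * pred)
    return new_prey, new_pred

prey, pred = 10.0, 5.0
history = []
for _ in range(200_000):            # 200 time units at dt = 0.001
    prey, pred = lv_step(prey, pred)
    history.append((prey, pred))

# Peaks in either population are followed by troughs, but the long-run
# averages stay pinned near the fixed point (c/d, a/b) = (20, 10):
avg_prey = sum(p for p, _ in history) / len(history)
avg_pred = sum(q for _, q in history) / len(history)
print(f"average prey: {avg_prey:.1f}  average predators: {avg_pred:.1f}")
```

Run for any length of time, the peaks and troughs alternate indefinitely; this is the constrained cyclical equilibrium referred to above, not a convergence to a fixed point.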
Fortunately, in the last half-century or so there has been a great deal of progress in analysing such systems in the field known as 'nonlinear dynamics', and there are many standard ways of solving such problems. In fact the Lotka-Volterra system is one of the simplest such systems, and strictly speaking is not necessarily non-linear, though in my models a little non-linearity has been introduced.

There are two big reasons, and one small one, why economics needs to use the mathematics of non-linear dynamics.

The first reason is the inclusion of time as a variable. In comparative statics prices change with supply, and prices change with demand. Equilibrium is reached when the prices match each other and supply equals demand. The mathematical derivatives for the equilibrium relate the prices and the quantities. In the real world prices cannot change instantaneously; the main derivatives of prices are with respect to time. The economy is constantly moving with a continuous series of trades; the economy rarely formally 'clears' prices. This is true even for goods such as cheap manufactures that show strong price stability; such stability is equivalent to a car moving smoothly down a motorway at constant velocity, not to a parked car. If you put a brick under the wheel of a parked car, a new equilibrium point will be reached in a couple of seconds. If you drive over a brick while doing 70mph, it might take a little longer for a new equilibrium to be reached. In real economies the most important derivatives are the time derivatives, and the mathematical framework for economics must be cast in these derivatives. Adding in the time derivatives allows extra degrees of freedom and complexity, and normally moves the real equilibrium away from the static equilibrium; it also allows oscillating solutions, which have no short-term equilibrium, and explosive solutions, which have no equilibrium at all.

The analogy between stock-market crashes and normal (eg car) crashes is a mathematically exact one. Comparative statics states that a temporary liquidity crisis should not bring an economy to its knees, in the same way that putting a brick under the wheel of a parked car should not destroy the car. However if the car is doing 70mph, it is quite likely that the car will end up wrapped around a lamp-post. Similarly a liquidity crisis in a debt-laden economy can turn into a general solvency crisis.

The most obvious way that time is important to the economy is in the delay of installation of capital in capital-intensive sectors, and also in housing and office building. But time delays can be much shorter and still have strong effects; the research of Milton Friedman showed that monetary effects had delays of six months or more. Inventory stocking cycles operate on similar timescales. In financial markets time delays allow momentum effects on the scale of seconds.

The second big reason that economics needs non-linear dynamics is that the variables in economics have two-way effects (and as discussed above, the effects are fed back with time delays). These mutual feedback loops are legion. For example:

Increasing prices of company shares creates new apparent wealth - new apparent wealth allows people to invest in companies, so pushing up share prices.

Increasing wealth in the productive sector allows more consumption — more consumption allows increased investment in the productive sector.
Increasing debt allows more liquidity and rising asset prices - rising asset prices give more apparent capital against which more debt can be secured.
A decrease in saving propensity gives a boost to consumption and the productive sector — more earnings from the productive sector allow a decrease in saving propensity.

In all these cases, and many, many more, economics has mutually reinforcing feedback loops. And in all these cases the feedback can reverse and work in the opposite direction. In all these conditions you have coupled systems with feedback, where:

dx/dt = f(x,y) and also dy/dt = g(x,y)

In these systems y gives feedback to x, and x gives feedback to y. Even with linear systems this can give periodic and explosive behaviour. All of these are analogous to the lynx and hares in the original model discussed in section 1.2; the populations of both can expand or contract over long periods before an external limit changes the direction of growth.

The imposition of limits brings us to the third reason for using non-linear dynamics. Some functions in economics are non-linear. The most obvious cases arise when you have genuine scarcity, such as a fixed supply of labour or of urban land suitable for house-building. Minerals such as gold, copper, platinum or oil also have scarcity, at least in the short term, as installing capital is expensive and takes time. In finance, access to credit and other financing can be limited beyond a certain point, and this can lead to highly non-linear functions.

A very good text explaining these approaches, with lots of practical examples, is 'Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering' by Strogatz [Strogatz 2000]; a good alternative is Hirsch, Smale & Devaney [Hirsch et al 2003]. Before either of these books, chapter eight of Keen gives a very good brief introduction to chaotic systems; Ruhla also gives an excellent introduction with a little more maths [Keen 2004, Ruhla 1992]. Although the approach may seem very new to most economists, the techniques are actually extensions of techniques familiar from basic economics. Most non-linear systems are not directly solvable, so mathematicians often resort to graphical representation in 'phase space' to resolve the problems. This ends up with intersecting lines and curves not dissimilar to (and a bit more fun than) the diagrams found in comparative statics. Jacobian matrices, for example, appear a third of the way through Strogatz.

Although dynamic systems can be very complex and are often mathematically insoluble, there are standard approaches to analysing these systems, and it is usually possible to produce important mathematical conclusions from such analysis. It is usually possible to identify the controlling variables and the different zones of stability and instability. Indeed one of the interesting things about complex systems is that while they can be very difficult to analyse and describe, they are usually very easy to control. Usually it is just a question of installing suitable damping or time delays in the system. In engineering, such systems are commonly encountered in control systems, where problems of feedback can be highly deleterious.
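As a minimal illustration of such coupled feedback, and of why damping matters, here is a sketch, with purely illustrative numbers of my own, of the simplest possible linear version of dx/dt = f(x,y), dy/dt = g(x,y). A single damping parameter decides whether the cycles die away, persist, or explode:

```python
import math

# The simplest linear coupled-feedback system:
#   dx/dt = damping * x - k * y    (y feeds back negatively on x)
#   dy/dt = k * x + damping * y    (x feeds back positively on y)
# Crude Euler integration; all parameter values are illustrative.

def amplitude_after(damping, k=1.0, x=1.0, y=0.0, dt=0.001, t_end=20.0):
    for _ in range(int(t_end / dt)):
        dx = damping * x - k * y
        dy = k * x + damping * y
        x, y = x + dt * dx, y + dt * dy
    return math.hypot(x, y)        # distance from the static equilibrium

for damping in (-0.2, 0.0, +0.2):
    print(f"damping {damping:+.1f} -> amplitude after 20 time units: "
          f"{amplitude_after(damping):10.3f}")
# damping -0.2: cycles die away toward the static equilibrium
# damping  0.0: cycles persist indefinitely (constrained equilibrium)
# damping +0.2: cycles grow explosively
```

Even this linear toy shows the three regimes discussed above, stable, oscillating and explosive, with the boundary between them set by a single damping term.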
On the plus side, control-system engineering and systems dynamics have investigated the problems of such systems in detail, and when the underlying characteristics of a system are understood, relatively minor changes to the system can result in dramatic changes to its stability. See for example Control Systems Engineering [Nise 2000].

In the following two sections, and also in section 9.2, I take a qualitative look at house prices and share trading, and at ideas of how the natural cycles in these markets could be damped out. Section 9.2 is somewhat out of order in the paper because it is necessary to introduce some ideas of market microstructure first. The ideas in these sections are pretty much common sense on the issue of housing; the ideas regarding share trading are much more speculative and contentious. The main point of the discussion is to make it clear that, counter-intuitively, just as with shock-absorbers in cars, introducing damping can create a better system.

6.3 Chaos in Practice — Housing in the UK

It is a common aphorism of economics that it is a difficult science to progress, as it is not possible to carry out suitable experiments. This is tosh. Experiments are regularly carried out in economics, though usually by accident. The problem is that economists ignore the results, even when the damage to the public is substantial. The example of housing provides one of the clearest and most important experiments ever carried out in economics in the UK.

Figure 6.3.1 here

Figure 6.3.1 above shows the prices of housing in the UK from 1953 to 2010, divided by the average wage, prepared using data from the Nationwide Building Society and the UK Office for National Statistics. The high house prices immediately following the Second World War were a consequence of the substantial loss of housing during the war and the suspension of house construction for the six-year duration of the war. During the 1950s and 60s access to mortgages in the UK was tightly regulated and controlled by government micro-management of financial institutions, with direct lending ceilings imposed on banks and building societies, resulting in strict rules on eligibility, deposit sizes, etc. During this period house prices showed remarkable stability, at a cost of roughly 3.0 to 3.5 times average salary.

It is very important to note that, despite the strong state controls on access to housing finance, the '50s and '60s were a time of substantial private house-building in the UK, as the post-war generation, including large sections of the working class, fled their city terraces for suburban semis. Despite the restrictions imposed by the state, even at these regulated 'low' prices, demand created lots of supply. As can be seen in figure 6.3.2 below, UK private house-building reached a prolonged peak in the mid-1960s.
Figure 6.3.2 here [ONS 2004]

Access to mortgages was liberalised in 1971 under the policy of 'Competition and Credit Control' which, despite its title, pretty much abandoned credit control, in line with neoclassical theory. The result, starkly clear in figure 6.3.1, was the 'Barber boom', stimulated by the rise in liquidity, and the first of many UK house price bubbles. From the 1970s onwards, the UK housing market has been characterised by vicious cyclic booms and busts, with a very clear reversion to the pre-Barber long-term trend of 3 to 3.5 at the bottoms of the cycles.

These cycles are identical in form to the ones discussed in the commodity models in section 3 and the macroeconomic models in section 4. Compare figure 6.3.1 (or 6.3.3 below for the US) with the outputs in figures 3.3.2 and 4.3.3 in previous sections. These are exactly the outputs you would expect from a non-linear differential system that is showing quasi-periodic cyclical stability. In fact, if you look at the pre-1971 section it is possible to see the same cyclical fluctuations; it is just that the amplitude of the cycles is very much smaller.

It is important to note that at the bottoms of the cycles, in both the actual housing data and the commodities models, prices reach their 'real', 'fundamental', Sraffian values. At these prices the value of housing represents the cost of the inputs. The same can be seen even more clearly in data from the United States (this time deflated for cpi); see figure 6.3.3 below.

Figure 6.3.3 here [Shiller 2010]

Supply is capable of balancing demand at these Sraffian prices. Any increase above these prices is pure speculation and rent-taking. Indeed the persistence of these cycles runs deep in the economy of the UK. In his book 'Boom Bust: House Prices, Banking and the Depression of 2010' [Harrison 2005] (first published in 2005), Fred Harrison not only confirms how trivially easy economic forecasting is if you are willing to believe in fundamentals and cyclical behaviour, but also shows that the cycles in the UK go back to at least the middle of the eighteenth century.

As an experiment, you could scarcely ask for clearer data output. The basic system dynamics are substantially and dramatically changed following a point change in policy. Not only that, but this experiment has controls: Germany and Switzerland, for example, have retained strict controls on mortgages for house purchases, and don't suffer from strong cyclical booms and busts in house prices.
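The diagnostic behind figure 6.3.1 is easy to reproduce. The sketch below, with invented placeholder numbers rather than the actual Nationwide/ONS series, simply divides average house prices by average wages and compares the ratio with the stable pre-1971 band of roughly 3.0 to 3.5 times salary:

```python
# A sketch of the price-to-wage diagnostic behind figure 6.3.1. The input
# numbers below are invented placeholders; real inputs would be the
# Nationwide house price and ONS average wage series.

BAND_LOW, BAND_HIGH = 3.0, 3.5     # the stable pre-1971 band noted above

def classify(avg_house_price, avg_annual_wage):
    ratio = avg_house_price / avg_annual_wage
    if ratio > BAND_HIGH:
        verdict = "above band: speculative premium over input-cost value"
    elif ratio < BAND_LOW:
        verdict = "below band: unusually cheap relative to wages"
    else:
        verdict = "within band: near 'fundamental' Sraffian value"
    return ratio, verdict

# (year, average price, average wage) - placeholder values only:
for year, price, wage in [(1965, 3_300, 1_000), (2007, 180_000, 24_000)]:
    ratio, verdict = classify(price, wage)
    print(f"{year}: ratio {ratio:.1f} - {verdict}")
```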
The consequences of this experiment are of some considerable importance to the welfare of all people living in the UK. Figure 6.3.4 below has the average value of house prices included for the two periods.

Figure 6.3.4 here

On the scales used, average house prices from 1955 to 1970 were 3.3 times average salary. In the years from 1971 to 2009, average house prices in the UK have cost an average of 4.0 times average salary. In the latest boom, prices have gone to even higher levels, though a meaningful average can't be given until the cycle has bottomed. The net result of the liberalisation of credit in 1971 was an increase in the average cost of housing for all Britons of roughly 23%. In the last cycle, from 1996 to 2010, prices were fully 40% higher than the '55-70 baseline rate.

This represents a very significant reduction in welfare for residents of the UK. It also has many secondary negative effects. Many more poor people are unable to afford housing, and are forced to rely on social housing and subsidies paid from taxation. This then helps to create ghettos of poorer people, which exacerbate employment and crime problems, which again require more social spending and higher taxation. Even the well-off who can still afford to buy houses must, on average, spend more money on housing, reducing the amount available for saving, pensions, or simply enjoying life. The beneficiaries here are the financial companies that issue the mortgages, or rather the investors and savers with these companies. Once again, exactly like the commodity cycles in section 3, we have a case of unjustified rent-taking on a massive scale. Given that private sector rents are substantially set by house prices, some of the rent-taking is literal. Taken as a whole, this represents a large transfer of wealth from poor and middle-income individuals to the rich.

Housing suffers from the same problem as capital-intensive commodities, as modelled in section 3 above. Construction of housing takes a finite time, and so house prices can go up significantly before market mechanisms have time to work. Unfortunately, housing also has the same problems of endogenous liquidity creation that are seen in the macroeconomic model. As house prices go up, people feel richer; also, as with shares, 'momentum' kicks in, and house prices, and the economy as a whole, keep rising until finally house prices become unaffordable for new entrants to the market, and the bubble bursts. As a capital-intensive industry, housing is naturally cyclical.

Although this conclusion is based on casual observation, housing seems to be much more dangerous to the overall economy than other asset classes. Booms in commodities and shares seem to be survivable when they turn into busts. Normally such collapses are followed by recessions and rebalancing for a couple of years, and then the economy picks up again. Housing crashes often seem to morph into financial crises, threatening the stability
of the whole economy, and recovery from such crises normally takes much longer. It seems likely that this is because housing is the only highly-leveraged asset generally available to the public.

This again shows that the contrast between the comparative statics of neoclassical economics and the real world of dynamic differential equations is stark. With comparative statics it is easy to 'prove' that credit controls and other government interventions 'must' increase the price of goods, and so reduce the welfare of the public. So neoclassical economists always push for the removal of such controls. In the real world, where speculative cycles can be endogenously created within the economic system, credit controls and other 'interferences' in the market work beneficially by 'damping' the cyclical behaviour. It may be counterintuitive, but in the right circumstances, applying controls and apparent 'costs' to the market actually reduces the price of goods. And reduces them substantially. In the area of UK housing, the experimental data shows that the reduction would be over 20% if strict credit controls were reimposed tomorrow as they were in the '50s and '60s.

It is essential to understand that the logic of this argument is supported by the experimental data of figures 6.3.1 and 6.3.3. It also happens to be supported by the mathematical models, if you understand the right maths, but that is a secondary issue. The experimental data is clear: credit controls reduce the cost of houses, by very helpfully damping, and largely removing, the cyclical nature of house price movements. If you reject this experimental data, and hold on to a theory that states, purely on theoretical grounds, that removing credit controls must make house prices cheaper, then you are not following science. You are following a religious dogma. Again neoclassical economists, by failing to understand basic dynamic systems, accidentally support massive rent-taking by insisting on deregulation of markets in search of nebulous market efficiencies.

The 'Barber boom' of the early 1970s ended with a spectacular crash and the 'secondary banking crisis', in which the Bank of England had to launch the 'lifeboat' to rescue thirty or so banks in the UK's very own dry run of the credit crunch. Despite this early warning, deregulation was not rolled back, but instead was systematically pursued in all areas of UK finance and economics. The results can be seen in figure 6.3.1: recurring bubbles in UK housing of increasing size and ferocity.

The strength of this religious dogma is quite profound. Since 1971 the UK has had ten chancellors and eight prime ministers, all advised by what must be many hundreds of the most intelligent economists that work in the UK. Despite this, the 'reforms' of 1971 have never been questioned, never mind reversed. The citizens of the UK are consequently still obliged to spend their lives paying off their expensive mortgages. The worst economic experiment carried out in the UK in modern times continues. The damage that this dogma has done to Britain is writ large in figure 6.3.4. From the early 1970s onwards, the liberalisation of credit has increased house prices in the UK by 23%.

Another, more subtle, problem can be seen in figure 6.3.2. Private sector house-building continued at a roughly constant rate from the 1960s to the present. The liberalisation of finance failed spectacularly in encouraging new house-building, presumably because its main effect was to make houses more expensive.
What did change in the 1970s was the collapse of the provision of social housing. From the mid-1970s onwards the government reduced funding for social housing, primarily because, from the 1970s onwards, the UK has had ongoing severe budget problems. This was due to a dramatic increase in the need for welfare payments compared to the 1950s and '60s; welfare payments were needed to cope with the dramatic rise in the numbers of the unemployed and the poorly paid in the 1970s, a problem that has never gone away. The steep rise in the numbers of the poor in the 1970s has been blamed variously on oil price shocks, de-industrialisation, union power, foreign competition, etc. While all of these factors may have contributed, it is the belief of the author that the main factor was the ongoing deregulation starting in the Barber era. This increased overall debt levels and changed the Bowley ratio, and so the GLV distribution. This not only created the poor, but forced higher taxes on the rich.

It is perhaps time to end this experiment. Unfortunately the political drive for deregulation is powerful. The biggest problem, at least in Anglo-Saxon countries, is that many people believe that housing is a good long-term investment. Going back to figures 6.3.1 and 6.3.3 for the UK and US, it is clear that the 'investment' value of housing is a chimera. Over the long term, growth in the value of houses is derisory and barely keeps up with the growth in earnings. Stock market growth is typically 5% higher than this. Smithers discusses the dual properties of housing, as both a form of consumption and an investment, in 'Wall Street Revalued', pp 107-108 [Smithers 2009]. The fact that housing is fundamentally consumption is demonstrated by the continuous reversion to a fixed proportion of wages. Equally this demonstrates that, for all the apparent growth in the booms, housing is a lousy investment, which over the full business cycle only manages to match the increase in wages.

Figures 6.3.1 and 6.3.3 show clearly that in the long term the price of housing is a fixed proportion of wages, and that housing behaves as consumption. Governments should treat it as such, and actively prevent houses being treated as investments; most certainly they should prevent houses being treated as speculative investments. Despite this, the booms are usually longer than the crashes, and inflation often masks real falls in house prices. Both of these effects may explain the visceral attachment of the public, and worse, of politicians, to housing as an investment.

Historically, politicians have invented many ways of subsidising housing purchase, so assisting bubbles to form, and so, unintentionally and perversely, making housing more unaffordable. In the recent credit crunch the US did this so effectively as to put the financial system of the whole world at risk of collapse. Politicians are a very big part of this problem. They seem profoundly addicted to housing booms. Encouraging home ownership is always popular, though if people don't have the wealth or income to maintain the homes they purchase, home ownership alone doesn't solve any problems. More worryingly, politicians seem to enjoy the public's enjoyment of rising house prices. Very few politicians seem able to comprehend that house prices cannot rise faster than gdp over the long term; neither do they seem to appreciate that long-term rising house prices necessarily produce high, and ultimately unaffordable, house prices. This is puzzling.
Whether you are a dyed-in-the-wool socialist or a radical free marketeer, it should surely be the aim of any politician to ensure decent affordable housing for all. In addition to the problem of the housing cycle causing over-priced houses, there are other very major issues: firstly, the diversion of resources to the housing sector that would be better used
elsewhere; secondly, and more importantly, as Harrison has shown, the cycles in housing appear to be the main driver of the cycles of boom and bust in the economy as a whole. One of the central themes of this paper is that governments should assist in the transfer of capital to poorer people. But housing is not productive capital, and it is the wrong target for such transfers.

Of course, housing can be a very good short-term investment if you get your timing right. Anybody who bought in the UK in 1970, 1978, 1983 or 1996 will almost certainly make a substantial unearned profit when they sell. But this of course is simply speculation, and speculation in its unhealthy form. This represents a transfer of wealth to the well-informed, and usually already wealthy; wealth that is removed from the hands of ordinary people. And this gives another big problem with allowing cyclical behaviour in economic systems. Most people buy without addressing the timing of booms and busts. If you are lucky and buy at the bottom you win; if you are unlucky and buy at the top you lose. As such, allowing this cyclical behaviour in the housing market allows massive inter-generational transfers of wealth on a completely arbitrary basis.

Looking at both the UK data and the US data in figures 6.3.1 and 6.3.3, a very worrying development is that in both countries the size of the booms is steadily rising, though the falls back to normal are the same. From a controls point of view this is very worrying; it suggests that the cycles could be even more dramatic and dangerous in the future — as if the last two years were not traumatic enough.

Faced with a dynamic, cyclical system, standard control-systems knowledge can be used to control the system. There are two ways to remove cycling (what engineers call 'hunting') in a control system. One is to use deliberate counter-cyclical feedback; most central banks try to do this using interest rates to control the economy as a whole. As central bankers are only too aware, this is not an easy way to control anything. A good example of such a feedback loop is a domestic shower system. A combination of a difficult-to-use mixer valve and the delay between making a change at the tap and feeling the change in the water temperature often results in alternating flows of water that are too hot or too cold.

Wherever possible, a much better solution is to use damping of the cycle. When done successfully this can result in a dramatic drop in oscillations with fairly minor adjustments to the system. This is like the example of using shock-absorbers with a car's wheels to prevent the car vibrating wildly on its springs every time it hits a bump. The strict credit controls used in the UK prior to 1971 provided just such an effective damping system. If all else fails, it is imperative that such controls are reintroduced in the UK. However it may be that less draconian measures would be just as effective.

As a rule of thumb, to be effective, damping measures need to have a time span of a similar order to the natural cycle time of the system; as a minimum they should be of a length of half a cycle or so. For the UK, Harrison [Harrison 2005] shows strong evidence for a fifteen to
twenty year cycle for house prices. Sensibly, then, damping measures need to be of the order of ten years or so.

Looking closely at the US data in figure 6.3.3, there is the same flat trend as in the UK at the bottoms of the cycles, showing the same reversion to real, non-speculative, prices. It is also clear that the booms are a relatively new phenomenon. A subtly different experiment has been carried out in the US. The change in behaviour of the housing market appears to be correlated with the rise of non-standard mortgage products. Historically the US has used fixed-rate mortgages, only moving to adjustable-rate mortgages comparatively recently. In the UK adjustable, or short-term fixed, mortgages have been the norm for many years, and it is very difficult to get fixed-rate mortgages of more than five years.

The finance industry does not like fixed-rate mortgages; they leave the issuers holding interest-rate and inflation risk. Moving to adjustable rates gives the appearance of moving the risk to individual mortgage holders. This in itself is a practice to be questioned in a democratic society. Why sophisticated finance companies should be allowed to offload complex financial risk onto individuals with little mathematical, let alone financial, training is not clear. In reality, offloading risk in systemic fashion like this simply creates systemic risk. As has been made abundantly clear in recent years, ultimately the only realistic holder of systemic risk is the taxpayer. Allowing financial companies to issue variable-rate mortgages is to give the financial companies government-subsidised one-way bets. Figure 6.3.5 below gives a comparison of mortgage types issued in various different countries in Europe.

Figure 6.3.5 here [Hess & Holzhausen 2008]

The mainly variable countries are Greece, Spain, Ireland, Luxembourg, Portugal, Finland and the UK. This pretty much speaks for itself. The solution to this is trivially straightforward. All loans secured against domestic property should be limited to terms of a ten-year minimum and a thirty-year maximum. They should also be fixed rate or, as a minimum, a fixed percentage above rpi or cpi, throughout the period of the mortgage. This would move interest-rate risk back onto the shoulders of the finance industry. Where it belongs. Variable-rate mortgages should be strictly illegal in any self-respecting democracy.

There are other sensible mechanisms to reduce the use of houses as investments, especially as speculative investments. The most obvious one is to have a capital gains tax that is more punitive than that for other investments. The tax should be charged on all houses, including first homes, without exception. Sensibly this would be a tapered tax: starting at say 20% for the first year, then dropping by two percentage points per year, so reaching zero after ten years.
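As a minimal sketch, under my own assumption about how part-years would be rounded, the proposed taper works out as follows:

```python
# A sketch of the tapered tax proposed above: 20% in the first year of
# ownership, dropping two percentage points per year, reaching zero after
# ten years. The rounding of part-years to whole years is my assumption.

def taper_rate(years_held):
    """Tax rate (as a fraction of the sale value or gain) after a given
    number of whole years of ownership."""
    return max(0.0, (20 - 2 * int(years_held)) / 100)

for years in (0, 1, 5, 9, 10, 25):
    print(f"held {years:2d} years -> tax rate {taper_rate(years):.0%}")
# held 0 years -> 20%; held 5 years -> 10%; held 10 or more years -> 0%
```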
A much better approach would be to have a sales tax on all houses. This would be applied to the seller of any house, whether it had increased or decreased in value. Again, sensibly, the tax should be tapered over the years. A tapered capital-gains tax or house sales tax, with a ten-year taper, should bring in damping of the sort required to deal with a 15 to 20 year endogenous property cycle. People buying houses to live in would not be punished; speculators would be. In addition, annual property taxes, or land taxes, should be charged on the value of houses or on the value of the underlying land, rather than on the occupants, as many local taxes are.

Another sensible policy would be compulsory mortgage indemnity guarantee (MIG). House purchasers would be obliged to take out insurance to cover full potential losses from negative equity, ie the difference between the mortgage loan value and the likely sale value of the house. Such insurance would be cheap if the purchaser had a large deposit and prices were below the long-term trend. The insurance would be very expensive if the deposit was small and it was the height of a boom. As such, compulsory MIG should act in a strongly counter-cyclical manner. (For an off-topic discussion of a different sort of deposit protection, refer also to endnote 6.3.1 below.)

Many countries enforce minimum deposit requirements [Hess & Holzhausen 2008]. This seems a very sensible policy, as those with small deposits are far more likely to default; see for example figure 6.3.6 below.

Figure 6.3.6 here [FT/M 2010]

It can be seen that arrears rates increase dramatically as deposit sizes reduce. As with variable-rate mortgages, when governments allow financial institutions to offer low-deposit mortgages, that is, highly leveraged asset purchases, they allow the financial institutions to offload their risk onto the state.

There is a more sophisticated and better way of addressing this particular risk problem. Rather than prescribing laws on deposits, a more effective law would define a maximum limit, of say 80% of the sale value of a house, that could be used to pay off debt secured on the property. So if a homeowner was foreclosed on, and their property was sold off, a minimum of 20% of the sale proceeds would go to the homeowner, and the other 80% would be shared by all the creditors who have loans secured on the property. This would have a number of advantages. It would have the same effect as a minimum deposit requirement of 20%; banks would generally be reluctant to supply a mortgage of greater than 80% of the value of the house. It would also make it much more difficult to evade the minimum deposit rules by taking out secondary loans secured on the house. More subtly, it would also act in a counter-cyclical manner. When house prices were at historical lows, banks might be willing to lend 90% mortgages, confident that house prices were likely to
rise. Conversely, when house prices were significantly above their long-term averages, banks would require larger and larger deposits, fearing that house prices might drop in the future. Similarly they would be very reluctant to allow mortgage equity withdrawal.

In addition to the passive management techniques discussed above, there is also a strong case for active counter-cyclical monitoring and management of the economy by central banks and other monetary authorities. Despite protestations to the contrary, housing bubbles are very easy to spot. The first obvious measure is that shown in figures 6.3.1 and 6.3.3 for the US and UK: the ratio of house prices to median wages shows very strong patterns of reversion to the mean. Similar patterns are also seen in ratios of housing costs to rental costs. When house prices are correctly valued, housing costs (mortgage payments, etc) are close to rents on equivalent properties [FT 2010]. If either of these ratios increases significantly above the long-term trend then you are moving into a housing bubble. At this point the central bank should intervene to prick the bubble as early as possible. This could be by increasing the sales tax or capital gains tax on houses, increasing deposit and MIG requirements, or by imposing a tax on mortgage debt. Finally, if none of the above work effectively to damp markets, then the necessary solution is simply to bring back the same credit controls that the UK had prior to 1971. It would also be wise to impose similar controls on commercial property, especially office accommodation, which also seems to be subject to dramatic fluctuations with the business cycle.

Of course, many economists, banks, building societies, estate agents, and most politicians will believe, and argue vociferously, that bringing in control measures such as those above will slow the economy and make home ownership available only to the few. These people are wrong. The economic theories are wrong. Experimental data confirms that these theories are wrong. When listening to these people it is important to bear in mind that it was the very same economists, financiers and real estate professionals that created the recent housing booms, and the consequent crashes, in the US, UK, Ireland and Spain.

Both housing and commercial building are very important candidates for effective damping for two very big reasons. Firstly, as leveraged assets, the busts following the booms can be very financially damaging. Secondly, housing and commercial construction have very big impacts on employment in the construction industry and so have large effects on the economy as a whole.

[6.3.1 An Aside on Deposit Insurance

Talking about deposit insurance, but wandering completely off-topic: it has puzzled me why compulsory default insurance is not instituted for bank deposits.
This would not be intended as a realistic way of insuring the deposits, but as a way of introducing market pricing into the risk of government bank deposit insurance. If done correctly this would also reduce the moral hazard element of public assurance of bank deposits. Realistically, in a democratic capitalist society, a government-run central bank will always need to be the lender of last resort and will need to guarantee the deposits of members of the general public to a basic level. However, such guarantees remove all risk for all but the richest members of the public. They encourage depositors to move their money to the highest interest payers without any need to worry about whether the bank is well run or in danger of collapse. This then encourages all banks, even the well-run, to compete on interest paid while ignoring the risk taken. Indeed the well-run banks are forced to match the foolishness of their badly-run competitors if they wish to stay in business.

A way to resolve this is to insist that all deposit-taking banks apply compulsory deposit insurance to their deposits. The insurance would be strictly in the form of a percentage charged on the deposits, and this would be displayed in parallel with the interest rate paid by the bank. It would be illegal for a particular bank to offer its own insurance on its own accounts, and it would be compulsory for banks to offer insurance on the accounts of all the alternative deposit-taking banks. Bank customers would be able to swap their insurance simply and electronically at any time they wished, from a visible list of alternatives available via the account. All deposit-taking banks would be obliged to offer a price for insurance for all their competitors. They might wish to price their insurance at a high level, but they would be obliged to quote a price, and would be obliged to take on the insurance at the price offered.

In the event of a bank failing, the insuring banks would be obliged to pay the deposits of the insured depositors from their own banks' funds (to avoid spreading systemic risk, reinsurance of this risk would be prohibited; banks would be obliged to carry a portion of funds against these risks on their balance sheets). The central bank would remain the ultimate insurer of the deposits, but would only step in if there was a pattern of systemic risk, and even then only after bank shareholders and all bondholders were wiped out. In the event of a single bank failure due to poor management, the other banks, the insurers, would carry the costs by themselves.

Further rules would apply even in the event of systemic failure. The government deposit guarantee would apply up to a maximum limit (say £100,000), but this maximum guarantee would apply across all deposits for a single person, no matter how many accounts failed at any number of banks. The maximum paid out would be £100,000 even if the person invested £10k in each of 20 different accounts, all of which failed simultaneously. Similarly, the government deposit guarantee would only cover £100,000 maximum over any 10-year rolling period. Individual bank customers would only be able to waive the compulsory bank insurance where they could demonstrate that they already had £100,000 deposited in insured accounts.

Although the above may sound complex, it would be trivial to put in place in a modern electronic retail banking system. The net effect of this would be to create a market in retail bank deposit insurance.
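The guarantee cap itself is simple enough to express directly. A minimal sketch, with function and variable names of my own invention:

```python
# A sketch of the per-person guarantee cap described above: the state
# guarantee pays out at most GBP 100,000 per person across all failed
# accounts, however many banks fail. Names here are my own invention.

GUARANTEE_CAP = 100_000

def guaranteed_payout(failed_balances):
    """failed_balances: balances (in GBP) of one person's accounts at
    failed banks. Returns the state-guaranteed payout, capped per person."""
    return min(sum(failed_balances), GUARANTEE_CAP)

# Twenty accounts of GBP 10k each, all failing simultaneously, still
# pay out only the cap:
print(guaranteed_payout([10_000] * 20))   # -> 100000
```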
While the Bank of England may have been surprised by the collapse of Northern Rock, Bradford & Bingley and HBOS, the author was not. Rumours of all these impending bank failures were circulating on internet forums from early 2007 onwards. Banking insiders knew that the funding models for these banks were unsustainable and dangerous. Forcing banks to insure each other's deposits would force banks to price the risk on badly-run banks like Northern Rock at higher rates than better-run banks such as HSBC and Barclays. By pricing this risk strictly as a percentage rate, the general public would gain direct visibility of the default risk. Under this regime, a well-run bank might still pay lower interest rates, but would be compensated with even lower insurance rates. This should make the net interest rate, interest less insurance, of the low-risk bank better than that of the risky bank. Competition would no longer be on interest rates alone.

With the best will in the world, such a system would not be capable of insuring all deposits in the event of a systemic bubble. But that is not the point. The point is that, by introducing effective market-based pricing of risk, the general public and the banks would be penalised for indulging in the risk-taking that encourages bubbles in the first place.
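A minimal sketch of the net-rate comparison; the bank names and all rates below are purely illustrative assumptions:

```python
# Net interest rate = headline interest minus the market-priced deposit
# insurance premium. All names and numbers below are illustrative.

def net_rate(interest_pct, insurance_pct):
    return interest_pct - insurance_pct

banks = {
    "well-run bank": {"interest": 3.0, "insurance": 0.1},
    "risky bank":    {"interest": 4.5, "insurance": 2.0},
}

for name, r in banks.items():
    print(f"{name}: headline {r['interest']:.1f}% -> "
          f"net {net_rate(r['interest'], r['insurance']):.1f}%")
# The well-run bank's lower headline rate (3.0%) beats the risky bank's
# higher one (4.5%) once depositors pay the risk premium: 2.9% vs 2.5% net.
```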
Additionally, the general rates of insurance should act both as an early warning system for the monetary authorities and even as counter-cyclical assistance in damping bubbles as they form. In normal times, insurance rates for all but the most foolish of banks should be ridiculously low. In the event of the economy moving into bubble conditions, insurance rates would start to creep up on the riskiest banks. This would then start to pass on the infection, via the insurance, to other banks, but at a much earlier stage than normally happens when entering a financial bubble. Faced with the obligation of holding more reserves on their balance sheets to cover the deposit failure of others, all banks would be obliged to cut back on credit in general. All banks would be affected, but with the strongest effects on the worst run and most highly leveraged banks. Monitoring of individual and overall insurance rates would give the central banks live data on the perceived risks of the banks in their charge, as well as the financial system as a whole.]

6.4 Low Frequency / Tobin Trading

The spectacular collapse of so many big financial firms during the crisis of 2008 has provided new evidence for the belief that stockmarket capitalism is dangerously short-termist. Shareholders can no longer with a straight face cite the efficient-market hypothesis as evidence that rising share prices are always evidence of better prospects, rather than of an unsustainable bubble. If the stockmarket can get wildly out of whack in the short run, companies and investors that base their decisions solely on passing movements in share prices should not be surprised if they pay a penalty over the long term. But what can be done to encourage a longer-term perspective? In the early 1980s shares traded on the New York Stock Exchange changed hands every three years on average. Nowadays the average tenure is down to about ten months. That helps to explain the growing concern about short-termism. Last year a task force of doughty American investors (Warren Buffett, Felix Rohatyn and Pete Peterson, among others) convened by the Aspen Institute, a think-tank, published a report called "Overcoming Short-Termism". It advocated various measures to encourage investors to hold shares for longer, including withholding voting rights from new shareholders for a year. [Economist 2010a]

Warren Buffett is, of course, a value investor: the sort of investor who intuitively understands the workings of the companies models in section 2 of this paper, and the sort of investor that the efficient market hypothesis states cannot exist. Value investors also intuitively understand that the short-term liquidity and momentum effects seen in the commodity and macroeconomic models in sections 3 and 4 not only make value investing difficult, but also add no value to the process of creating wealth that capitalism aspires to.

The proposals of the Aspen Institute were pretty much stillborn, for a number of reasons. Firstly, because orthodox economics assumes, erroneously, that any cost imposed on market transactions must increase costs to the consumer. Secondly, because such a tax would destroy a substantial part of the finance industry, which makes the majority of its profits by charging rents on the very volatility it creates in the first place. And thirdly, and more reasonably, because if such a tax were imposed in one country, trading would simply move to an alternative jurisdiction.
To understand just how short-term the finance industry has become, it is worth noting that stock-trading is now dominated by 'high-frequency trading' (HFT). In the major stock markets, supercomputers execute billions of dollars of trades in seconds using automated algorithms. Individual bids and offers may be held open for fractions of a second. High-frequency trading
systems are now being co-located within stock-exchange buildings, as the finite speed of light now means that companies trading from a few blocks away are at a significant disadvantage. To anybody who has actually worked in a real company, the idea that the real market value of a normal company can change from millisecond to millisecond is bizarre; it is palpable nonsense. A full discussion of high-frequency trading is postponed to section 9.2 below.

It is my belief that Buffett, Shiller, Smithers et al are correct, and that unnecessary volatility is induced endogenously in share markets, causing excessive movements away from real value on timescales from seconds to decades. It is my belief that the decadal movements are caused by liquidity at a macroeconomic scale, a problem that will need tackling at a macroeconomic level; this is discussed in detail in section 8.2.1 below. Other timescales are much shorter and give the appearance of being quasi-periodic momentum effects. Although the evidence is controversial, typical time-scales for the periodicity appear to be on the order of fifty and two hundred trading days, with other shorter time scales also present. A system is proposed below that would dampen the fluctuations on these timescales.

The solution proposed is a private-sector approach, independent of government. Following the same logic as housing in the previous section, it is proposed to introduce damping, with losses imposed on early retrading along the lines of those proposed by Buffett et al. This would be done by introducing a new class of shares, or special investment certificates, in the companies. These shares would have different rules as to their trading. The issuing of such shares would be voluntary, at the choice of the companies involved.

In the same way as housing, damping would be introduced via a haircut of, say, 10% imposed on anybody who sold a share within the specified time period. The haircut would be paid back to the company in which the share is held at the time of sale; as such it would effectively be a 'negative dividend' on the share, paid by the owner to the company. The haircut would automatically be deducted from the sale proceeds. In extremis the haircut would be imposed for a period of, say, three years.

However, unlike housing, it is not proposed that the haircut on all shares be imposed for the full term of three years. This would present great problems for the pricing of the shares. If a large purchase was made of a company's shares, this would kill the market in that company's shares for years at a time, which would make price discovery for the company almost impossible.

Instead it is proposed that all shares that have been sold are marked as 'locked', in contrast to all the remaining shares, which would be 'unlocked'. Every trading day a random selection would be made across all the currently 'locked' shares, and 0.5% of the currently locked shares would be unlocked. The owners of these newly unlocked shares would then be able to sell the shares immediately without penalty.

Assuming 250 days of trading per year, this release of 0.5% of the locked shares per trading day would give a half-life for locked shares of roughly six months. This means that if every single share was bought on day one, and no further trading took place, roughly half the shares would be unlocked after six months, more than 70% would be unlocked by the end of the first year, over 90% would be unlocked by the end of the second year, and almost 98% would be unlocked by the end of year three.
At this point, after three years, any remaining locked shares would be automatically unlocked.
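The release arithmetic above can be checked with a few lines of code. The sketch below (the 0.5% daily release, 10% haircut and 250-day trading year are the parameters used in the text) computes the expected unlocked fraction of a day-one block purchase, and the average haircut that would be paid if the whole holding were sold at each anniversary.

DAILY_RELEASE = 0.005   # fraction of still-locked shares unlocked per day
HAIRCUT = 0.10          # penalty on selling a share while it is locked
TRADING_DAYS = 250      # trading days per year

def locked_fraction(days):
    # Expected fraction of a day-one purchase still locked after `days`
    # trading days of random 0.5% releases.
    return (1 - DAILY_RELEASE) ** days

for years in (0.5, 1, 2, 3):
    days = int(years * TRADING_DAYS)
    locked = locked_fraction(days)
    # Selling the whole holding now: only the locked shares pay the haircut.
    average_haircut = HAIRCUT * locked
    print('{:.1f}y: {:5.1%} unlocked, haircut on full sale {:.1%}'
          .format(years, 1 - locked, average_haircut))

Running this reproduces the figures quoted above: roughly 47% unlocked at six months, 71% at one year, 92% at two years and 98% at three years.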
This system would be a compromise between ensuring a haircut on fast resellers, while ensuring that shares were continually made available to the market for further trading. For an individual purchaser who made a block purchase, the haircut on reselling all their shares on day one would be 10%; if they sold all their shares after a year, the average haircut would be slightly below 3%; after two years it would be under 1%; and after three years it would be zero. In these circumstances purchasing shares for value investment would carry very little risk, as the expected holding period would be a minimum of a few years. Speculative investment would be risky, and effectively pointless.

Even better for value investors, it should be noted that the losses taken by the early sellers accrue to the company in which the shares are held, and so ultimately to the other shareholders. The losses of the speculators are transferred directly to the value investors. All of this could be simply organised electronically through the same systems that currently manage dividend payments.

Interestingly, although such a system may seem complex, it may actually be one that would be driven to adoption by the market. For well-managed companies, issuing such shares would give direct benefits to value investors, but much more importantly it would in its own right be a very powerful signalling mechanism to the market. It would be very foolish for a company that is manipulating a short-term rise in its share price to issue such shares, as the subsequent burning of locked-in investors would cause significant reputational loss. On the other hand, for well-run companies with long-term investment horizons, issuing such shares would be a way of signalling the long-term commitment of the management. This would particularly be the case if managers' share options were restricted to these shares. Eventually, failing to issue such shares might become a good indication of a poorly managed company.

Such a shareholding pattern might form a useful compromise between the 'Anglo-Saxon' pattern of free trading of shares and the 'European' model of very long-term share-holding with very low levels of open trading.

6.5 Ending the Chaos

A third example of controlling chaotic financial systems is discussed in section 9.2; this ordering is necessary because it needs to follow the discussion of market microstructure.

In economics there has been a traditional split between the laissez-faire school, which wishes to minimise perceived barriers to trading, and the dirigiste school, which wishes to regulate trade to minimise perceived speculation and profiteering. Both viewpoints are based on a static view of economic activity. The examples above assume a dynamic system, and so introduce time-based restrictions into regulation, designed to eliminate short-term speculation while encouraging long-term value investment.
It is the belief of the author that the controls proposed for housing in section 6.3 are practical. Those in sections 6.4 and 9.2 for share trading are much more speculative. The point, however, is that changing dynamic, chaotic systems to remove endogenous oscillations is of profound importance, and usually very easy if the system is understood. The oscillations result in mispricing and misallocation of capital and are enormously wasteful.

In general, control of such systems is straightforward. One approach is to use external feedback loops; inflation targeting with interest rates is a classic example. This approach is fraught with danger: if the feedback control is not set up correctly, it is very common for such loops to exaggerate cyclical behaviour rather than reduce it. It is nearly always better to introduce a damping mechanism into a naturally oscillating system. If the damping is of the order of the system's natural oscillations, then the system should move to stability very rapidly. My own experience as a commissioning engineer has shown the truth of this. It is eye-opening to see a system that is 'hunting', moving rapidly backwards and forwards erratically, suddenly flatline as the time delay on a feedback loop is gently increased.

Rather than follow the seat-of-the-pants methods of sections 6.3, 6.4 and 9.2, a better method is to analyse the data of asset price changes and then build models using non-linear dynamics and control theory, along similar lines to those in part A of this paper. Then the models and data can be analysed, and the natural frequencies of the systems identified. Finally, the control variables can be identified and modified to allow the system to be moved to a stable equilibrium point. Standard control theory books such as Nise give systematic ways to analyse and control dynamic systems; chapter six of Nise, on stability, is of particular interest [Nise 2000].

With chaos we have looked at dynamic systems with up to a dozen or so parameters, or 'degrees of freedom'. Now I would like to move on to the problems of what happens when you have much larger numbers of degrees of freedom. This leads us into the field of entropy.

7. Entropy

7.1 Many Body Mathematics

At a theoretical level, Poincaré's conclusions have permeated higher economics, though at the cost of some pain. In a piece of tragi-comedy, Poincaré's work appeared shortly after the marginalists had transformed economics by putting their version of field theory into its very foundations. In the 1980s, after much deep intellectual work, theoretical economists 'proved' that the Walrasian system could not produce stable equilibria, so reproducing Poincaré's conclusions some eight decades after the original, without a hint of irony, let alone embarrassment. As Foley describes it:
There is no doubt, however, that the outcome of these investigations have been surprises that raise unexpected and disturbing questions about the general validity of the Walrasian approach. The initial attack on infinite commodity spaces involved the development of specific models examining economic growth, international trade, and public finance problems over time. In these models the equations of supply and demand give rise to difference or differential equations, whose solution paths represent the equilibrium allocations and prices of the model. The simplest behavior of these solutions occurs when they converge asymptotically to a steady-state in which the levels or ratios of the relevant variables remain unchanged forever. This type of stability is called saddle-point stability in mathematical jargon. In infinite horizon models which exhibit saddle-point stability most of the key results of the finite-commodity economy carry over. The equilibrium paths are locally unique, so that comparative statics (which now becomes comparative dynamics, the comparison of equilibrium paths) methodology still works. Furthermore, in models with some infinitely lived agents, the first welfare theorem will hold as well.

The difficulty with this line of work was that the hypothesis of saddle-point stability was not in general a consequence of the basic assumptions of the model together with the Walrasian requirements of market clearing, that is the equality of supplies and demands in each period. Researchers had to add hypotheses to assure saddle-point stability. The careful workers introduced such hypotheses into their models of technology and preferences at the price of reducing the generality and persuasiveness of their conclusions. Less careful workers simply assumed the saddle-point property, at the risk of making erroneous statements, or confined their analysis to saddle-point paths, at the risk of reaching unjustified conclusions within their own models.

A more sophisticated attack by mathematically trained theorists on this problem (see William Baumol and Jess Benhabib (1989)) revealed the surprising fact that the equilibrium paths of even very standard economic models were much richer than the saddle-point literature had suggested. Equilibria might not approach a steady-state, but could end in limit cycles, in which variables endlessly repeated cyclical movements, or even in chaotic paths of a highly irregular kind, confined to a local region of the price allocation space. The assumptions necessary to rule out these complex solutions were very strong. Thus the saddle-point literature has limited general validity, and the problem of generalizing the finite-commodity space Walrasian results remains unresolved. [Foley 1990]

The method of economics remains comparative statics. To study a phenomenon, the economist proposes a model, in which certain variables are taken to be exogenous, or unexplained, and other endogenous variables are taken to be determined by equilibrium conditions. The method of explanation requires that the specification of the exogenous variables determine the endogenous variables in some sense, so that the effect of changes in exogenous variables on the endogenous variables can be traced unambiguously...
...In fact Walras' conception of equilibrium, even in the finite commodity space case, is not very satisfactory in this regard, because, except in the case where all the agents can be regarded as a single consumer (the representative agent case), competitive equilibrium is not unique. There may be several different price systems at which supply and demand are equal. (A related serious problem is that no natural and robust concept of stability of equilibrium can be developed within the Walrasian model, because it lacks a clearly articulated dynamics.) High theory in the '60s and '70s was able (through the work of Gerard Debreu) to show that generically equilibria are locally unique. Thus the comparative static use of the theory rested on the methodological assumption that after a change in exogenous variables the economy would follow the equilibrium state it initially occupied to a new configuration of prices...
...I would like to underline the fundamental significance of this technical problem. If the model is not determinate in some sense, either it must be abandoned, or the comparative statics methodology must be revised. [Foley 1990]

The reason for the surprise at the complex equilibrium paths remains unclear. Having failed to produce a mathematical system for dealing with multibody problems, economics then took an unfortunate route to solve the problem. By going to a single-consumer model with just one 'representative agent', economics made the maths solvable by returning the model to a two-body system. This makes the sums easier, but dramatically decreases the believability of the model.

As the number of bodies, or variables, increases, solution of such systems becomes more and more intractable. The problems become insoluble in detail. Once the number of independent bodies moves into double figures, the maths of field theory becomes useless by itself. A good example is the asteroid belt in the solar system, where trajectories of the asteroids can only be predicted in the short term, and individual asteroids can be ejected from the asteroid belt on an apparently random basis. Indeed, as the number of bodies increases, the description is no longer at the level of an individual body but instead becomes that of a probability distribution. And this is where the beauty and power of statistical mechanics steps in.

Faced with the same problems a century and a half ago, physics borrowed statistical ideas from the social sciences and took a different route that proved much more fruitful. Effectively, physics took large numbers of identical 'representative agents' but abandoned looking at individual interactions and simply looked at probabilities of outcomes. This approach proved very effective, and became known as statistical mechanics. Statistical mechanics is an approximation method for describing systems characterised by deterministic chaos, see for example [Gould & Tobochnik 2010 section 1.7]. Although it is an approximation, it is capable of very accurate predictions of macroscopic properties. Counter-intuitively, with statistical mechanics, the more bodies, the more accurate the predictions.

The contrast between physics and economics here is stark. Alongside Ludwig Boltzmann, the work in this field was pioneered by James Clerk Maxwell. In physics Maxwell was 'Mr field theory'. He started with the same Newtonian field theories that were adopted by the neoclassicals. He expanded them rigorously to cover the whole of optics, electricity and magnetism. This remains the crowning achievement of field theory: the second great unification in physics, second only to the work of Newton. As a sideline he also analysed chaotic control systems and so produced the first effective governor systems for steam engines. Yet when he started looking at the many-body systems of energy in gases he promptly junked his field theory knowledge and built on the infant science of statistical analysis pioneered by Quetelet and Buckle in the social sciences. By bringing a much greater level of mathematical sophistication to bear and inventing statistical mechanics, Maxwell, along with Boltzmann, was able to explain the microscopic behaviour of molecules in a gas, link the microscopic to the macroscopic, and explain the microscopic origins of pressure, entropy and the gas laws. In contrast, economists have been attempting to apply field theory to many-body systems for 140 years without success.
If we go back to the table from Keen/Costanza:

Table 1 The solvability of mathematical models (adapted from Costanza 1993)

                        |              Linear               |             Non-linear
Equations               | One       | Several    | Many     | One       | Several   | Many
Algebraic               | Trivial   | Easy       | Possible | Very difficult | Very difficult | Impossible
Ordinary differential   | Easy      | Difficult  | Essentially impossible | Very difficult | Impossible | Impossible
Partial differential    | Difficult | Essentially impossible | Impossible | Impossible | Impossible | Impossible

The statements that many-equation systems are impossible to solve are strictly correct. However, when you get to a many-body system with thousands or more of independent variables, you can look at the statistics and probabilities of events happening, and things actually become easier again. It then turns out that some outcomes are so probable that they become inevitable. As a consequence, highly predictable system variables arise straight out of pure statistical considerations. In these circumstances, underlying microscopic drivers of behaviour become almost irrelevant; they are drowned out by the statistical effects. Counter-intuitively, in a many-body situation, the statistical properties outweigh the underlying interactions, and often produce unexpected results, results that go against obvious common sense.

The most important thing about this statistical mechanical approach is that a new sort of equilibrium is formed, one that is very stable. In these equilibria individual agents can change their values very significantly, but the overall distributions of values are very stable.

From a mathematical point of view, statistical mechanics also has another big advantage: the maths of statistical mechanics is better behaved than the mathematical agglomeration of utility/field theory:

Statistical equilibrium is much better behaved mathematically than Walrasian equilibrium. Statistical equilibrium exists and is unique for arbitrary finite offer sets without restrictions of concavity. The logarithm of the economy-wide partition function is a concave potential for the statistical demand functions, which as a result have a negative definite Jacobian. From an economic point of view the statistical market equilibrium differs from Walrasian equilibrium in two important respects. First, it does not exhibit horizontal equality, since two agents of the same type will in general end up at different points in their offer sets, representing different final consumption bundles. Thus the statistical market process induces some inequality in the final allocation of the economy that was not present in the original states of the agents. This market induced inequality is a consequence of agents' trading at different, disequilibrium, prices. Second, the statistical equilibrium in general leaves some mutually advantageous trades unconsummated. The market moves the economy toward Pareto-efficiency, but does not fully achieve it. Thus certain pervasive phenomena in real markets, such as unemployment of productive factors like labor and excess productive capacity, which are inconsistent with Walrasian equilibrium, are consistent with statistical equilibrium. [Foley 1996b]
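Before turning to examples, it is worth seeing numerically how quickly 'so probable that it becomes inevitable' takes hold. The sketch below estimates the probability that the fraction of heads in n fair coin tosses lies within one percent of a half; the trial count and the 1% band are arbitrary choices made for the example.

import random

# As the number of coins grows, the heads fraction is ever more certain
# to lie within 1% of one half: the 'inevitable' band of outcomes.

random.seed(3)

def fraction_within(n_coins, trials=1000, band=0.01):
    hits = 0
    for _ in range(trials):
        # getrandbits(n) gives n independent fair coin flips as bits;
        # counting the '1' characters counts the heads.
        heads = bin(random.getrandbits(n_coins)).count('1')
        hits += abs(heads / n_coins - 0.5) <= band
    return hits / trials

for n in (10, 1_000, 100_000):
    print('{:>6} coins: P(within 1% of half) ~ {:.2f}'
          .format(n, fraction_within(n)))

With ten coins the band is hit only about a quarter of the time; with a hundred thousand coins it is hit essentially always.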
To take an example of the power of statistical mechanical drivers, the income data from the UK shown in figure 1.1.1 shows strong equilibrium properties. This data set runs from 1992 to 2002, with the shape of the distribution almost constant throughout this period. The actual UK economy changed through very different phases during this period, including a major recession at the beginning of the 1990s; yet the shape of the distribution is almost constant.

This approach also explains the fascination that statistical physicists have with wealth and income models. Although the mathematical theory of income and wealth distribution is a quiet backwater in economics, this area has attracted physicists, statistical mathematicians and engineers in significant numbers since at least the work of Champernowne. The reason is simple; to a statistical physicist, economics is obviously a multi-body phenomenon. It is messy. There are millions of agents in a typical economy, and their behaviour is not coordinated at a high level. In such a system, as physicists intuitively understand, statistics must take over from microscopic drivers, and entropy raises its head.

Apart from income distribution, the other area of economics in which physicists have taken a large interest is that of finance. The earliest work on random walks was that done by the mathematician Bachelier on stock prices. Bachelier's work predates Einstein's own random walk model of Brownian motion. For half a century Bachelier's work was largely forgotten. The use of random walks in finance was then rekindled, and ultimately led to the option pricing formulae of Merton, Black and Scholes. Unfortunately the random walk process has been removed from its many-body background, and individual prices are treated as moving randomly in isolation. But Black-Scholes is simply the diffusion equation, and things don't diffuse with random jumps in a vacuum. The random movements of dust particles undergoing Brownian motion are caused by interactions with air molecules. Black-Scholes is used in economics without looking at the overall picture of all price movements.

Although it is rarely considered as such, Black-Scholes is a many-body mathematical approach. Necessarily, the random movements in prices effectively assume multiple random interactions: in the real world, random buys and sells by investors. If this were analysed properly, the analysis should be taken across all the different stock prices changing at the same time. In an investment world with no new money supplied, the purchase of one stock must be balanced by the sale of another. In the simplest case, a conservation law would hold if the money supplied to the stock market were constant. This would give an overall distribution of price changes different to a B-S application to a single stock. Without such assumptions, B-S applied to a single stock allows for infinite growth in individual stock prices, an impossibility without a supply of unlimited liquidity. Clearly a more sophisticated multi-stock model would need to take into account increases in money supply, exogenous and endogenous, as well as movements of investment between different asset classes. Michael Stutzer has started some useful research in this direction [Stutzer 2000] using maximum entropy approaches.

Despite its unrealistic use on isolated stocks, Black-Scholes has been enormously successful; possibly the only piece of theoretical economics to be used on a daily basis to successfully calculate the prices of anything.
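The conservation point can be made concrete with a toy simulation. The sketch below is an illustration invented for this argument, not a model from the literature: a fixed pool of investment money is shuffled between stocks, so that every purchase is funded by a sale, and no individual price can grow without limit or fall below zero.

import random

# Toy conservation model: with a fixed pool of investment money, money
# moving into one stock must come out of another. Contrast this with
# applying an independent random walk to each stock in isolation.

random.seed(1)
N_STOCKS, TRADES, TICKET = 50, 100_000, 1.0

prices = [100.0] * N_STOCKS           # stand-in for per-stock market money
for _ in range(TRADES):
    seller, buyer = random.sample(range(N_STOCKS), 2)
    amount = min(TICKET, prices[seller])   # cannot sell more than is there
    prices[seller] -= amount               # money leaves one stock...
    prices[buyer] += amount                # ...and enters another

print('total conserved:', abs(sum(prices) - 100.0 * N_STOCKS) < 1e-6)
print('min / max price:', round(min(prices), 1), round(max(prices), 1))

The ensemble constraint bounds every price from below at zero and ties the whole distribution of price movements together, which is exactly what single-stock applications of B-S ignore.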
Bizarrely, the success of B-S and the apparent randomness of stock market data have been used to support the theories that stock markets are efficient and fully incorporate all knowledge about stocks. That these beliefs continue to be widely held is puzzling. That B-S does not work fully is well known. Mandelbrot first discovered in the early sixties that price movement distributions have fat tails, which clearly disproves the efficient market hypothesis; the EMH needs a log-normal distribution. Smithers gives a wealth of data that debunks the efficient market hypothesis [Smithers 2009]. Given the theoretical origins of Black-Scholes, to simultaneously believe in the validity of Black-Scholes and also believe in the Efficient Market Hypothesis is a bit like accepting that the earth goes round the sun, while still maintaining that it is flat.

Although it remains isolated in finance, and used incorrectly, statistical mechanics, in the form of Black-Scholes, is the most successful piece of theoretical mathematics in economics. In the next section the concepts of statistical mechanics and entropy are discussed briefly, but hopefully in a way that gives a little clarity as to why and how statistical mechanics and entropy can give a more useful approach to the whole of economics.

7.2 Statistical Mechanics and Entropy

A long quote from Wright to begin with:

Farjoun and Machover (1989), in their path-breaking work on political economy, 'Laws of Chaos', make a simple but important methodological point. They observe that an economy is a dynamic system composed of millions of people in which 'the actions of any two firms or consumers are in general almost independent of each other, although each depends to a very considerable extent on the sum total of the actions of all the rest' (Farjoun and Machover (1989), p.39); in other words, a market economy has a huge number of degrees of freedom (DOF) with weak micro-level coordination. They argue that the appropriate equilibrium concept for such a system is a statistical equilibrium in which the macro-level regularities take the form of probability distributions.

Let's explore their thesis for a moment. The economy of the United States has a civilian labor force of approximately 155 million individuals. The kinds of economic activities performed by these individuals spans the whole range of human experience and subsumes a great variety of tasks, skills, situations, enjoyments and motives. An enormous variety of both mundane and novel decision-making contexts are routinely presented to the individuals that constitute the economy. The space of possible configurations of this system is of course astronomically large. Local economic decisions are globally coordinated primarily through the 'invisible hand' of supply and demand dynamics in markets distributed in time and space. The economy gropes this way and that, from one configuration to another, generally in a 'bottom-up' manner, adapting continually to new economic circumstances. The existence of this type of emergent coordination does not significantly reduce the DOF since there is no top-down plan or 'Walrasian auctioneer' to synchronize the local behavior.
Systems that have a huge number of DOF and weak micro-level coordination ('messy' systems) behave very differently to systems with a small number of DOF and strong micro-level coordination ('neat' systems). This is reflected in the different kinds of equilibrium they can exhibit. The state-space of a system is the set of all possible configurations of the DOF. A particular configuration is a 'point' in state-space. In general we find that many neat systems, if they enter equilibrium, tend toward a point or trajectory in state-space. A canonical example is a set of weighing scales. Place some weights on each arm and the scales will tend toward an equilibrium point in which the internal forces balance and the system is at rest. This is a simple kind of deterministic equilibrium, in which the equilibrium configuration is a subset of state-space. The classical mechanics concept of equilibrium was a founding metaphor of the 19th Century marginal revolution in economics (e.g., see Mirowski (1989)). And it appears in a more developed form in 20th Century neoclassical general equilibrium models (e.g., Debreu (1959)).

But most messy systems, if they enter equilibrium, do not tend toward a subset of state-space. So in the physical sciences the tools of statistical, not classical, mechanics are used to study messy systems. A canonical example is an ideal gas in a container. The internal forces never balance. Instead, at the micro-level, there is ceaseless motion and change, a process that effectively samples the whole state-space in a random fashion. Yet at the macro-level a certain kind of regularity does emerge. The probability that a randomly selected gas particle will have a certain energy is constant over time (in this case, the probability distribution is Boltzmann-Gibbs). In this simple kind of statistical equilibrium the equilibrium configuration is not a 'point' or subset of state-space but a probability distribution over an aggregate transform of the state-space (in this case, the number of atoms with a given energy level).

Since an economy is more like a messy than a neat system we should expect any empirical regularities to be better captured by the concept of a statistical, rather than a deterministic, equilibrium. Essentially this is Farjoun and Machover's point. The importance of statistical equilibrium in economics has been emphasized by other authors, notably Steindl (1965), and more recently Aoki (1996, 2002) and Foley (1994). Nonetheless, thinking that the relation between micro and macro in statistical mechanics is related to the analogous problem in economics remains the 'less trodden path'. One reason, perhaps, is that it calls into question the need for explicit microfoundations. A counter-intuitive property of statistical mechanics is that macro-level regularities are in an important sense relatively independent of the precise mechanisms that govern the micro-level interactions. So the adoption of macro-level statistical equilibrium as an explanatory principle has a concomitant implication for micro-foundations. For example, classical statistical mechanics represents the molecules of a gas as idealized, perfectly elastic billiard balls, which is a gross oversimplification of a molecule's structure and how it interacts with other molecules. Yet statistical mechanics can deduce empirically valid macro-phenomena.
Khinchin (1949), who pioneered the development of mathematical foundations for the field, writes:

Those general laws of mechanics which are used in statistical mechanics are necessary for any motions of material particles, no matter what are the forces causing such motions. It is a complete abstraction from the nature of these forces, that gives to statistical mechanics its specific features and contributes to its deductions all the necessary flexibility. ... the specific character of the systems studied in statistical mechanics consists mainly in the enormous number of degrees of freedom which these systems possess. Methodologically this means that the standpoint of statistical mechanics is determined not by the mechanical nature, but by the particle structure of matter. It almost seems as if
the purpose of statistical mechanics is to observe how far reaching are the deductions made on the basis of the atomic structure of matter, irrespective of the nature of these atoms and the laws of their interaction. (Eng. trans. Dover, 1949, pp. 8-9, emphasis added).

So, analogously, the method by which individuals choose (the 'mechanical' nature of individuals) is not as important as the fact that a huge number of individuals are choosing with respect to each other but are weakly coordinated (the 'particle' nature of individuals). The approach of implicit microfoundations adopts this methodological 'rule of thumb'. Given the aim is to determine 'how far reaching are the deductions made on the basis' of the particle nature of individuals while abstracting from the mechanics of individual rationality, it makes sense, at least initially, to 'bend the stick' as far as possible in the direction of implicit microfoundations.

But how do we abstract from the 'mechanics' of individual rationality and represent individuals as 'particles'? Sometimes it is possible to predict choice behavior in controlled experimental settings or in situations where conventions or rules play an important role. But in general the everyday creativity of market participants who aim to satisfy their goals in open-ended and mutually constructed economic situations is unpredictable. For example, Aoki (2002) writes, 'Even if agents inter-temporally maximize their respective objective functions, their environments or constraints all differ and are always subject to idiosyncratic shocks. Our alternative approach emphasizes that an outcome of interactions of a large number of agents facing such incessant idiosyncratic shocks cannot be described by a response of the representative agent and calls for a model of stochastic processes.' The unpredictability of choice behavior suggests representing the choice mechanism as a random process. So the implicit approach represents economic agents not as 'white box' sources of predictable optimizing behavior but instead as 'black box' sources of unpredictable noise; that is, they are particles that choose in a random manner subject to objective constraints (e.g., a budget constraint). The single representative agent with well-defined choice behavior has been replaced by a huge number of heterogeneous agents with random choice behavior. This is the simplest possible starting point for implicit microfoundations and provides a null hypothesis against which claims of the importance of explicit microfoundations can be measured.

For example, as a starting point, randomness can be modeled as selection from a uniform distribution, in accordance with Bernoulli's Principle of Insufficient Reason, which states that in the absence of knowledge to the contrary all outcomes should be assumed equally likely. The aim is 'to explain more by saying less', or at least start by saying less and see how far that takes us (c.f. Farmer et al (2005)). The principle that many market outcomes are determined more by the objective social structure than the particulars of individual rationality is not new. For example, Gode and Sunder (1993) show that the results of an economics experiment are broadly similar when classroom students are replaced with 'zero-intelligence' random agents; Farmer et al.
(2005) show that the assumption of 'zero-intelligence' agents can explain many of the statistical features of double-auction trading data from the London Stock Exchange; and Wright (2008) shows that 'zero-intelligence' agents in a simple commodity economy can instantiate supply and demand dynamics that approach efficient allocation of resources and equilibrium prices (see also Cottrell et al (2009)).

A natural objection at this point is the observation that economic agents do not act according to random rules. They often think very carefully before acting. Surely it is necessary, therefore, to model individual rationality, even when considering macro-level phenomena? But the objection
elides the distinction between epistemology and ontology, between a picture and reality. A 'black box' probabilistic model of individual agency does not imply that choice mechanisms are in fact random, only that, when placed in the range of situations routinely presented by a dynamic, large-scale economy, they are operationally equivalent, at the aggregate level, to an ensemble of random processes. So the precise detail of the choice mechanism is not a decisive factor in the determination of macro-level outcomes. Randomness in a theory can be viewed as an unmodeled residual, like assuming a constant in physical theories (e.g. the constant of gravitation). Residuals should eventually be eliminated and replaced by a more encompassing theory (e.g. a theory that explains the value of the gravitational constant). But the 'rule of thumb' of implicit microfoundations says something different: eliminating randomness won't necessarily yield a better explanatory or predictive theory since the randomness represents an essential property of 'messy' systems. We should expect rapidly diminishing explanatory returns from increasingly explicit microfoundations. [Wright 2009]

And a shorter one from Von Neumann, reported by Claude Shannon:

"My greatest concern was what to call it. I thought of calling it 'information', but the word was overly used, so I decided to call it 'uncertainty'. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.'" Claude Shannon [Tribus & McIrvine 1971]

I find the last quote very reassuring. In my own opinion, though less well known than Einstein, Von Neumann ranked close to Einstein in terms of genius. Like Einstein, he didn't merely bring in a single profound new idea, but seemed to change radically, for the better, any field that he investigated. Despite this, it appears he found entropy as philosophically puzzling as most other people who encounter it do.

Entropy is a famously abstract concept, bordering on the mystical. I believe that the main reason for this is that entropy does not have a straightforward analogue in day-to-day human experience, so it is simply very difficult to relate to. I do not wish to write a book on entropy and statistical mechanics, and the following section is intended only as a brief introduction. Fortunately there are two very well written introductory, non-mathematical books on entropy and statistical mechanics, one by Atkins [Atkins 1994] and the other by Ben-Naim [Ben-Naim 2007], to which the reader can go for more illumination.

There is one key fact about entropy that this section will attempt to illuminate, a fact which goes against the whole practice of economic theory from the days of the physiocrats right up to the present day. The key fact is that statistical equilibrium is more powerful than any local equilibrium, and so local equilibria are a very poor guide to overall equilibria. The statistical equilibrium will normally be in a different place to the local equilibrium, and the system will come to rest at the statistical
equilibrium, not the local equilibrium. In general, low-level information is close to irrelevant as a guide to macroscopic outputs. If economics is to make theoretical progress, the process of extrapolating from the bottom up must be abandoned. More importantly, in many cases, economic 'common sense' must also be abandoned. Einstein famously stated that 'God does not play dice' with regard to quantum mechanics, and was forced to backtrack. In economics God runs a casino.

Going back to the Von Neumann quote, another important thing to note is that entropy in its statistical form was effectively 'discovered' twice. In his groundbreaking work on information theory, Shannon rediscovered the mathematics that Boltzmann had discovered 80 years previously while trying to explain the macroscopic entropy of heat flow. This is not to devalue Shannon's work, which if anything is more generally applicable than that of Boltzmann and Gibbs. The two introductory texts on entropy mentioned above follow these two different approaches to looking at entropy.

The heat entropy approach is explained beautifully in the book 'The Second Law' by Atkins [Atkins 1997]. In this, entropy is explained through the traditional concept of disorder, or more accurately 'dispersion'. The more dispersed something is, the higher its entropy, and the less its value; in particular, it is the statistical concentration, followed by dispersion, of heat in heat engines that gives rise to useful power. 'Entropy Demystified' is another very good book, by Ben-Naim [Ben-Naim 2007]. This follows the information path of counting systems statistically.

There has been some considerable debate as to whether the two approaches of heat entropy and information entropy are isomorphic or merely analogous. This is a debate I do not wish to enter. Certainly, from a human cognition point of view, neither approach is fully satisfactory. The information approach is more obvious mathematically but, like quantum mechanics, somehow seems to imply the necessary presence of an outside observer. Dewar's work, discussed in section 7.3 below, may shed some light on this discussion. Both Atkins's and Ben-Naim's books are short and well written, and I commend them both.

A later book by Ben-Naim points out the basic fact that, through an accident of history, the sign of entropy (like that of the electron) is intuitively wrong. In most of the things that humans count, more is better, while with entropy less is normally better. In later sections I follow Schrödinger in using the concept of 'negentropy' (negative entropy) to get round this problem.

The important point of the energy/information debate is that the same fundamental mathematical models fall out in two fields that appear to be widely different, one explaining how steam engines work, the other explaining how much information can be squeezed down a telegraph line.
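That shared mathematics is compact enough to show directly. The sketch below computes the quantity both fields use, the Gibbs/Shannon entropy -sum(p ln p), for a spread-out and a concentrated distribution; the example distributions are arbitrary.

from math import log

def entropy(probs):
    # Gibbs/Shannon entropy in natural units (nats) of a discrete
    # distribution; terms with zero probability contribute nothing.
    return -sum(p * log(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximally dispersed: highest entropy
peaked = [0.97, 0.01, 0.01, 0.01]    # concentrated: low entropy

print(round(entropy(uniform), 3))    # ln 4, about 1.386
print(round(entropy(peaked), 3))     # about 0.168

Schrödinger's 'negentropy' simply flips the sign of this quantity, so that concentration (order) counts as positive, which is the intuition discussed above.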
What the two approaches have in common is the abandonment of detailed analysis of the system, replacing it with a very careful counting of all the possible states that the system can occupy. It turns out that this simplistic approach is both very powerful and very generally applicable.

Very briefly, the concepts that are explained at length in the two books above are as follows. Entropy is a measure that counts all the possible statistical states that a system can occupy. When this counting is carried out, it is normal to find that a subset of these possibilities is much more probable than all the other possible states. Because of this, this subset of possible states dominates the behaviour of the system, almost absolutely.

To get a feel for how this works, it is worth first looking at a similar concept that is often used, in fact rather over-used, in economics: the central limit theorem. The CLT states that if values are randomly selected from different underlying distributions and added together, the resultant distribution will be a normal distribution. If the underlying values are multiplied together, then you get a log-normal distribution. In both cases the resultant distributions are independent of the underlying distributions. This is a simple result of statistics.

Take, for example, two underlying distributions which are both uniform distributions. Each of the underlying distributions is much more spread out, with more extreme values, than a normal distribution. A naive investigator might assume that the result of adding samples from two uniform distributions would be another uniform distribution. However this is not true, because of the likely sampling. While it is quite possible that you will get a high value when sampling one of the underlying distributions, it is quite unlikely that you will simultaneously get two high values from the two distributions, or two low values when sampling from both distributions. So the resultant distribution, when the underlying samples are added, is bunched towards the centre, and if enough samples are taken, you get a normal distribution. Similar arguments produce a log-normal distribution when the underlying samples are multiplied together.

If a researcher was unaware of the CLT, they might assume that different underlying distributions would produce different resultant output distributions, and that by studying the underlying distributions carefully they would be able to predict the resulting output distribution. Knowledge of statistics in this case saves a lot of unnecessary work. It doesn't matter what the underlying distributions are: if you take enough samples, statistics gives you a normal distribution if you add the samples, and a log-normal distribution if you multiply the samples. In these circumstances, the underlying distributions are irrelevant.

Another trivial example is that of flipping coins. If you take two coins and toss them randomly, the chance of getting all heads is one quarter. If you use three coins, the probability of getting all heads is 1/8; if you use ten coins, the chance of getting all heads is 1/1024.
All sequences are equally likely, but in each case only one out of all possible sequences is all heads. In contrast, the number of sequences that are close to 50:50 heads and tails gets proportionally larger and larger as the number of coins gets larger. More importantly, the average variation from the mean becomes smaller and smaller. This can be seen clearly in figure 7.2.1 below.

Figure 7.2.1 here

So one narrow band of similar outcomes becomes so likely that it is effectively inevitable, while all the others become negligible.

Where this gets much more interesting, and much more powerful, is when external constraints, or boundary conditions, are introduced. We have already seen one example of this. The log-normal distribution can be considered as a normal distribution where there is a boundary condition set at zero. This is why it is used (erroneously) as the assumed base distribution for Black-Scholes theory. The price of shares is assumed to be able to increase infinitely but cannot go below zero. In the absence of more detailed knowledge of the underlying distribution, the log-normal was the sensible choice to use in the earliest models. Following Mandelbrot, the log-normal needs replacing with an alternative distribution. There are of course many other distributions that fit the characteristic of not having negative values, many of which (including the GLV) have the required fat tails that the log-normal lacks.

In statistical physics, perhaps the best-known example of the operation of an external constraint is that of a conservation principle. For example, under the external constraint of conservation of energy, distributions form a standard shape known as the Maxwell-Boltzmann distribution, typically given in the form:

F(x) = √x e^(-x)    (7.3a)

This is a special case of the gamma distribution, and gives a shape that can be closely modelled by a log-normal distribution [Willis 2005].

For example, all the molecules of air in a room have kinetic energy. Ignoring heat losses and gains through the walls of the room, the total kinetic energy of all the molecules is conserved. That is, if one molecule gains a unit of energy, another molecule must lose an equivalent amount. In theory it is possible that all the molecules could have exactly the same amount of energy (a uniform distribution), but there is only one way of creating this distribution, so it is very unlikely that this is in fact how the energy will be shared. This state has only one configuration.
A second possibility is to give all the energy to one molecule, with all other molecules having zero energy. This can happen in N different ways, where N is the number of molecules, so this distribution is much more likely than the previous uniform distribution; in fact it is N times more likely. This state has N configurations. The difference between this and the first option would be enormous: there would be many moles of air in a room, so N would be much greater than 10^23. However, this second distribution is still a very unlikely distribution.

A third option would be to give two-thirds of the energy to one molecule and the other third to a second molecule, with all other molecules having zero energy. This distribution could be formed in N(N-1) ways. So this distribution would be (N-1) times more likely than the second option above, and N(N-1) times more likely than the uniform distribution.

Clearly, as energy is shared out in different ways between all N different molecules, the number of possible distributions becomes enormous, with some distributions being much more likely than others. Fortunately it is relatively easy to show mathematically that the most likely distribution in this case is of the form:

F(x) = e^(-x²)    (7.3b)

(Here x is the molecular velocity; the power of two arises because kinetic energy is proportional to mv².)

It is always possible that the distribution could take a form that doesn't fit the above function; for example, in theory it is possible that a single molecule could have all the energy. However the probability of the distribution being of the form in (7.3b) is so high that you would have to wait for time periods of the order of the age of the universe to observe a noticeable deviation from it.

This result is a maximum entropy equilibrium. Counting of all the possible states indicates that this distribution is the most likely, and so, by definition, has the maximum entropy. Moving away from this equilibrium would require the expenditure of energy (or information), such as the use of 'Maxwell's demon', or for that matter 'Walras's auctioneer'.

This maximum entropy solution relies only on the statistical analysis. It does not depend on the underlying interactions between the atoms or molecules in the gas. For instance, it is possible to compare a bottle of a noble gas such as neon with a bottle of water vapour. Neon is a noble gas with all its electron shells full. As a result it does not form chemical reactions, and when two neon atoms collide their local interaction should be very close to a perfectly elastic collision. The results of this collision can be accurately predicted and are highly likely to be 'unequal', with a high probability of energy being transferred from one atom to another. Water molecules are at the other end of the scale. Two molecules of water can form temporary hydrogen bonds when they collide; they also have many options for temporarily storing energy in rotational and vibrational modes. In general, collisions between two water molecules are likely to be more 'equal', with both molecules of water likely to emerge from a collision with similar amounts of energy.
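This independence from the collision details is easy to reproduce numerically. The sketch below is a standard random-exchange simulation of the kind used in econophysics; the random repartition rule is an arbitrary stand-in for any particular collision law. Pairs of 'molecules' repeatedly repartition their combined energy, total energy is conserved, and the distribution relaxes to the Boltzmann (exponential) form.

import math
import random

# N 'molecules' meet in random pairs and repartition their combined
# energy. Whatever the starting state, the energy distribution relaxes
# towards the Boltzmann form, independent of the exchange details.

random.seed(2)
N, COLLISIONS = 10_000, 500_000
energy = [1.0] * N                    # the 'all equal' starting state

for _ in range(COLLISIONS):
    i, j = random.sample(range(N), 2)
    pool = energy[i] + energy[j]      # a pairwise 'collision'...
    split = random.random()           # ...randomly repartitions the pair
    energy[i] = split * pool
    energy[j] = (1 - split) * pool

# For mean energy 1 the Boltzmann prediction is P(E > x) = exp(-x).
for x in (1, 2, 3):
    observed = sum(e > x for e in energy) / N
    print('P(E>{}) observed {:.3f}, predicted {:.3f}'.format(
        x, observed, math.exp(-x)))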
However, no matter how different the behaviour of the atoms or molecules at a local level, the resulting distribution of velocities will be a Maxwell-Boltzmann distribution for both the neon and the water vapour, as long as the water vapour is above boiling temperature. It doesn't matter how many years you spend studying the interactions of water molecules, and their energies following collisions; you will never be able to extrapolate up to the overall energy distribution.

Consequently a maximum entropy equilibrium can be very different from a market-clearing equilibrium. This is the basic problem with using a marginal approach: the probability of reaching a marginal solution is vanishingly small. Within economics, almost uniquely, Foley has made substantial progress in moving from a Walrasian approach to pricing to a more sensible maximum entropy approach. This is discussed in the following papers [Foley 1996b, 1999, 2002], while a good example of the failure of market clearing is given in 'Statistical equilibrium in a simple labor market' [Foley 1996a].

In the general framework of maximum entropy in economics, supply and demand are still forces driving the system in particular directions, just as electric fields drive charges in particular directions in physics. However, entropy can overpower these forces.

When looking at such models, a subtle point is that you don't need complete randomness to create a maximum entropy output, only an element of randomness. There has been a history in econophysics of creating 'pure' exchange models. In most of these models, hypothetically, a beggar could meet Bill Gates in the street, and walk away a billion dollars richer. Although intellectually pleasing, such models are clearly highly unrealistic. In the models of income and companies discussed above in this paper, only a small amount of randomness was introduced. But even this small amount was sufficient to destabilise the system away from an intuitively logical Pareto-type outcome to one based on maximum entropy.

Where microscopic effects do remain important is in the ranking of individuals in the distribution. Whether looking at people with different basic abilities and savings preferences, or companies with different capital efficiencies, the ranking of the individual or company is given by the ranking of abilities. However the rewards are defined by the shape of the outcome distribution. The output distribution is defined by entropy, not by the underlying input distributions. So the rewards are not 'fair'.

7.3 Maximum Entropy Production

There is another very substantial, and very interesting, difference between the thermodynamic systems discussed in the section on entropy above and the various models discussed in this paper. All the discussion so far on entropy has been about what physicists call equilibrium thermodynamic models. In these models the system has been allowed to evolve until there are no temperature differentials or net energy flows across the system. Everything has stabilised, with uniform macro-level variables. It should be noted that this is very different to the traditional equilibrium mathematics used in economics, which is entirely static.
In the thermodynamic equilibrium models of physics, individual molecules are still swapping energy and changing their places in the distribution of energies; the shape of the distribution, however, is stable. Historically, these equilibrium thermodynamic models are well understood and can be described exactly mathematically, with entropy values directly calculable.

The models in this paper consist of sources of wealth generation in companies and sinks of consumption at households, with a continuous flow from one to the other. In this they resemble models that have continuous flows of heat in and out of the system and different temperatures in different parts of the model. Such models are described by physicists as 'out of equilibrium' thermodynamic systems, or simply non-equilibrium systems; though it is the belief of the author that this nomenclature may need to be revisited. Traditionally, such systems have been very difficult to describe mathematically. However, recent work by Lorenz, Paltridge, Ackland & Gallagher and others in the field of planetary ecology, and also that of Dewar, Levy, Solomon and others in the field of theoretical physics, appears to have changed things substantially.

In the 1970s Garth Paltridge produced papers looking at the absorption of sunlight by the earth and the re-radiation of heat into space. Paltridge's model is strikingly simple. He split the earth into just ten cells by latitude, set up basic energy balance flows between the cells, and attempted to produce a simple system of formulae giving an overall balance. In so doing he 'accidentally' rediscovered the basic formulae for entropy first set out by Carnot a century and a half previously. This is recounted, entertainingly, in chapter three of 'Non-equilibrium Thermodynamics and the Production of Entropy: Life, Earth, and Beyond' [Kleidon & Lorenz 2005]. Despite the very rudimentary nature of the model, it was able to give surprisingly accurate predictions of the temperature and cloud cover at different latitudes of the earth, as can be seen in figure 7.3.1 below.

Figure 7.3.1 here [Ozawa 2003]

This is typical of the power of entropy. All the detail of evaporation rates, wind speeds, precipitation and so on was irrelevant and unnecessary for the production of the model. A simple application of entropy was sufficient. What was new and groundbreaking about the model is that it was a successful analysis of an 'out of equilibrium' thermodynamic system. At one end of the model is the Sun at 5800 K; at the other end is deep space at 3 K, with the earth in the middle. In such a system entropy is not maximised but is produced continuously, as heat flows continuously from hot to cold.
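The mechanics can be made concrete with a minimal two-box sketch in the spirit of Paltridge's approach (a sketch only: his model used ten latitude cells, and the radiation constants and absorbed-solar figures below are illustrative assumptions, not his values). Each box balances absorbed sunlight against linearised outgoing radiation plus a poleward heat flow, and sweeping the flow shows that entropy production peaks at an intermediate value:

```python
# Minimal two-box sketch in the spirit of Paltridge's approach (his
# model used ten latitude cells; all numbers here are illustrative
# assumptions, not his). Each box balances absorbed sunlight against
# linearised outgoing radiation A + B*T (T in deg C) plus a poleward
# heat flow F; the transport produces entropy at the rate
# sigma = F * (1/T_cold - 1/T_hot). Sweeping F locates the peak.

A, B = 204.0, 2.17                   # linearised longwave fit (W/m^2, W/m^2/K)
S_TROPICS, S_POLES = 300.0, 160.0    # absorbed solar, W/m^2 (illustrative)

def temperatures(flow):
    """Steady-state box temperatures in kelvin for a poleward flow (W/m^2)."""
    t_hot = (S_TROPICS - flow - A) / B + 273.15
    t_cold = (S_POLES + flow - A) / B + 273.15
    return t_hot, t_cold

best_flow, best_sigma = 0.0, 0.0
for step in range(71):               # sweep F from 0 to 70 W/m^2
    flow = float(step)
    t_hot, t_cold = temperatures(flow)
    sigma = flow * (1.0 / t_cold - 1.0 / t_hot)
    if sigma > best_sigma:
        best_flow, best_sigma = flow, sigma

t_hot, t_cold = temperatures(best_flow)
print(f"entropy production peaks at F ~ {best_flow:.0f} W/m^2: "
      f"T_hot ~ {t_hot:.0f} K, T_cold ~ {t_cold:.0f} K")
```

Neither zero transport nor transport strong enough to equalise the two temperatures produces entropy at the maximum rate; the MEP hypothesis, discussed next, is that the real system settles near the intermediate peak.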
What Paltridge and Lorenz discovered was that the earth appeared to act in a 'deliberate' manner, adjusting the temperatures across the globe to give the maximum possible rate of entropy production. Although it is early days, this principle of 'Maximum Entropy Production', or 'MEP', appears to be widely applicable, and also appears to make many previously insoluble systems much more tractable. Analysis by other authors suggests that the same maximum entropy production principle holds for the re-radiation of heat from Mars and Titan. It also appears that MEP may be applicable to many other systems, such as convection in the earth's mantle and turbulent systems. Ozawa et al give an excellent review of the history and uses of MEP, while the book edited by Kleidon & Lorenz gives much more detail [Ozawa et al 2003, Kleidon & Lorenz 2005].

In Paltridge's model, the earth becomes what is known as a 'dissipative structure'. Dissipative structures include things such as planets and life forms. They are counter-intuitive from a normal equilibrium thermodynamic point of view: they are highly concentrated and highly organised, and so have very low entropy. From the point of view of ordinary equilibrium thermodynamics, they shouldn't exist. From an MEP point of view, however, dissipative structures do make sense. To take a simple example of a dissipative structure, consider the convection cells (Benard cells) that can appear in a pan of water that is being heated at the bottom.

Figure 7.3.2 here [Georgia Tech 2010]

Figure 7.3.3 here [Eyrian 2007]

The pan has a high temperature at the bottom and a low temperature above it (assume no heat flow through the walls). Conduction is not a particularly effective method of heat transfer in water, so if conduction were the only mechanism for transferring heat through the water, then the heat flow, and so the rate of entropy production, would be constrained. However, heating the water decreases its density, allowing the hot water to rise to the top and release heat to the atmosphere, while colder water sinks from the top to the bottom to replace it. In theory the water could circulate chaotically, or it could form one large loop. In practice, at heating rates low enough not to create bubbles of gas, the water 'self-organises' into hexagonal cells. These cells are low entropy, complex, 'dissipative structures'. Their existence, however, allows a higher rate of entropy production, transferring heat rapidly from hot to cold. This increases the total entropy of the system (the heating source, pan and atmosphere), despite the local drop in entropy associated with the creation of the hexagonal dissipative structures.
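A back-of-envelope calculation makes this bookkeeping explicit (a sketch with invented numbers: the plate temperatures and heat flows below are assumptions for illustration, not measured values). Heat flow Q between two fixed temperatures produces entropy at the rate Q(1/T_cold - 1/T_hot), so whichever mechanism sustains the larger Q produces entropy faster:

```python
# Sketch with invented numbers: entropy is produced at the rate
# sigma = Q * (1/T_cold - 1/T_hot) when heat Q flows between two fixed
# temperatures. Convection sustains a far larger Q than conduction, so
# the ordered Benard cells raise the total rate of entropy production
# even though the cells themselves are locally low-entropy structures.

T_HOT, T_COLD = 373.0, 293.0   # pan bottom and water surface, in kelvin

def entropy_production(heat_flow_w):
    """Entropy production rate (W/K) for a given heat flow (W)."""
    return heat_flow_w * (1.0 / T_COLD - 1.0 / T_HOT)

Q_CONDUCTION = 50.0    # W: sluggish conduction through still water
Q_CONVECTION = 500.0   # W: the same pan once Benard cells have formed

for label, q in [("conduction", Q_CONDUCTION), ("convection", Q_CONVECTION)]:
    print(f"{label}: Q = {q:.0f} W, sigma = {entropy_production(q):.3f} W/K")
```

Because the two temperatures are held fixed, the tenfold larger convective heat flow translates directly into a tenfold larger rate of entropy production; the ordered cells 'pay their way' thermodynamically.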