This is no minor parlour trick. The continents of the earth are pushed around on the global equivalent of Bénard cells, forming mountain ranges and oceans as they do so. Our geography is an accidental high-entropy output caused by the need to move heat generated by radioactive decay in the core of the planet to the earth's surface. As previously discussed, the earth's atmosphere operates as a dissipative structure moving hot equatorial air to the poles. The circulation of the oceans carries out exactly the same function. Interestingly, it also appears that the existence of plants changes the earth's albedo in ways that also maximise entropy production. Animals, then, appear as efficient redistributors and processors of vegetable matter. When looked at in this manner, almost everything on planet earth becomes a dissipative structure. This includes, of course, human society, and indeed human economic systems. This is of considerable importance, and is returned to in section 8.1 below. (As a brief aside, it should be noted that the discussions here relate to maximum entropy production. This is a different theoretical approach to that of Prigogine, who has discussed dissipative structures under a minimum entropy production principle. While Prigogine's ideas appear valid in a certain number of examples with strongly defined constraints, the minimum entropy production approach has failed to find widespread application.) In chapter 9 of 'The Second Law', Atkins gives a brief but very well written review of dissipative structures, using as one example the creation of a simple fox-rabbit ecology and introducing Lotka-Volterra dynamics. This brings us full circle to where we began. In parallel with the above work in the field of ecology, Levy, Solomon and various co-workers have carried out pioneering theoretical work on the dynamics of the Generalised Lotka-Volterra (GLV) distribution and how it works mathematically.
In their mathematical analysis of the GLV, Levy and Solomon show that the entropy of multiple Boltzmann distributions gives the power law tails found in the GLV distribution [Levy & Solomon 1996]. In contrast, the Maxwell-Boltzmann distribution of a normal thermodynamic equilibrium comes from an additive process. Such a process obeys a direct conservation law: additions and subtractions are direct, and total energy is conserved absolutely. This results in a distribution with an exponential tail. The GLV comes from a multiplicative process, and a multiplicative process cannot be directly conservative. The GLV process does, however, remain conservative in total, at least in the long term; the mechanism of this conservation is discussed further below. Because of its multiplicative nature, the output of the GLV includes a power law tail. This can be seen as analogous to the central limit theorem. Under the CLT, an additive process gives a normal distribution, while a multiplicative process gives a log-normal distribution. Under an additive, maximum entropy process, the output is a Maxwell-Boltzmann distribution, with an exponential tail. Under a multiplicative, maximum entropy production process, the product is a GLV distribution, with a power law tail. The mathematics of this is quite robust and
works under many different models as long as they meet some basic requirements; again, more on this below. 'One sees therefore that a power law is as natural and robust for a stochastic multiplicative process as the Boltzmann law is for an equilibrium statistical mechanics system. Far from being an exception and requiring fine tuning or sophisticated self-organising mechanisms, this is the default.' [Levy & Solomon 1996] As such, the GLV distribution might better be considered to be a 'log-Maxwell-Boltzmann' distribution, and the Lotka-Volterra seen as a special, non-equilibrium version of this log-Maxwell-Boltzmann. Within the field of ecology, these ideas have been taken forward in some very interesting work by Ackland & Gallagher [Ackland & Gallagher 2004] on the modelling of ecosystems. This modelling shows that, by using simple GLV models and some very basic assumptions, it is possible to produce full food webs with all the complexity of a real ecosystem. The model allows for constant evolution and transformation of predators and prey within the system. Despite this, the overall parameters of the food web become highly stable in such things as numbers of predators, prey, varieties of species, etc. It is particularly interesting that a large array of different species, different types of dissipative structures, appears so as to maximise the total biomass flow. "We monitored this during our simulations and found a remarkable result—the total flow of resource (and hence total biomass) increases with time reaching a plateau after many thousands of steps—the steady-state linkstrength ensemble distribution appears to be the one which maximizes the use of resource. This type of optimisation is consistent with what has been observed in other ecological models.
If the model is recast in terms of flow and dissipation, the maximization principle is equivalent to maximum entropy production: the mathematical equivalent of "entropy production" is just the total death rate, and hence the flow out." [Ackland & Gallagher 2004] It is the belief of the author that the economies of the world are acting in exactly the same manner. An economy is an MEP dissipative structure, and when it is at equilibrium it is maximising the rate of entropy production. In the natural world an ecosystem develops to a complex but stable equilibrium of different groups of animals and plants, herbivores, carnivores, etc., each adapted to its niche. Similarly, in an economy, a complex ecosystem evolves which splits into extractive industries, manufacturing, services, finance, etc. In both systems the apparently stable whole involves constant microscopic competition, evolution and change. Clearly, this maximum entropy production approach to economics links through to evolutionary economics and theories of the sources of endogenous growth. It should be noted that this is not just an analogy. In entropy production terms, the human economic system is simply a complicated and interesting sub-section of the MEP function of the earth as a whole.
Returning to the mathematics, Dewar [Dewar 2005] has produced a seminal paper that derives maximum entropy production from the first principles of information theory and simple maximum entropy considerations. This derivation of a Maximum Entropy Production (MEP) approach appears to be applicable to non-equilibrium systems in general. Instead of counting all possible statistical states and finding the most probable, Dewar counts all possible paths through a flow system, and finds that these can be counted using the same maximum entropy approach used by Boltzmann, Gibbs, etc. Dewar does this by maximising the path information entropy, following the ideas of Shannon and Jaynes. This follows from Shannon's interpretation of information entropy and Jaynes' generalisation of the maximum entropy approach as a general recipe for statistical inference. In Dewar's words: "Jaynes saw the Gibbs algorithm as a completely general recipe for statistical inference in the face of insufficient information (MAXENT), with useful applications throughout science, not just in statistical mechanics. Viewed as such, it is a recipe of the greatest rationality because it makes the least-biased assignment of probabilities, i.e., the one that incorporates only the available information (imposed constraints). To make any other assignment than the MAXENT distribution would be unwarranted because that would presume extra information one simply does not have, leading to biased conclusions. But if MAXENT is essentially an algorithm of statistical inference (albeit the most honest one), what guarantee is there that it should actually work as a description of Nature? The answer lies in the fact that we are only concerned with describing the reproducible phenomena of Nature. Suppose certain external constraints act on a system.
Examples include the solar radiation input at the top of Earth's atmosphere, the temperature gradient imposed across a Bénard convection cell, the velocity gradient imposed across a sheared fluid layer, or the flux of snow onto a mountain slope. If, every time these constraints are imposed, the same macroscopic behaviour is reproduced (atmospheric circulation, heat flow, shear turbulence, avalanche dynamics), then it must be the case that knowledge of those constraints (together with other relevant information such as conservation laws) is sufficient for theoretical prediction of the macroscopic result. All other information must be irrelevant for that purpose. It cannot be necessary to know the myriad of microscopic details that were not under experimental control and would not be the same under successive repetitions of the experiment (Jaynes 1985b). We can only imagine with horror the length of scientific papers that would be required for others to reproduce our results if this were not the case. MAXENT acknowledges this fact by discarding the irrelevant information at the outset. By maximising the Shannon information entropy (i.e., missing information) with respect to p_i subject only to the imposed constraints, MAXENT ensures that only the information relevant to macroscopic prediction is encoded in the distribution p_i. Therefore, if we have correctly identified all the relevant constraints (and other prior information), then macroscopic predictions calculated as expectation values over the MAXENT distribution will match the experimental results reproduced under those constraints. But of course that last 'if' is crucial. In any given application of MAXENT there is no a priori guarantee that we have incorporated all the relevant constraints. But if we have not done so, then MAXENT will signal the fact a posteriori through a disagreement between predicted and observed behaviours, the nature of the disagreement indicating the nature of the missing
constraints (e.g., new physics). MAXENT's failures are more informative than its successes. This is the logic of science." [Dewar 2005] The bold emphasis is my own. What holds true for 'atmospheric circulation, heat flow, shear turbulence, avalanche dynamics' also holds true for such regularities as wealth and income distributions, distributions of company sizes, and the ratio of returns on labour and capital. Because these regularities are found across multiple different economies, following Jaynes' logic, their causes can be determined using a maximum entropy approach along with appropriate constraints and conservation laws. 'We can only imagine with horror the length of scientific papers that would be required for others to reproduce our results if this were not the case...' As a word, 'horror' accurately captures the emotional reaction when an individual with a passing understanding of the power of entropy becomes acquainted with the amount of time and energy that highly intelligent economic theoreticians have invested in attempting to produce macroeconomic models from observed (and even worse, supposed) microeconomic behaviour. I have not yet seen any theoretical work formally linking the work of Dewar to that of Levy & Solomon; however, I am firmly convinced that they are isomorphic: Levy & Solomon's mathematical derivations of the GLV should also be reproducible by working from Dewar's principles of path entropy. It is my belief that Levy, Solomon and Dewar have produced some very important and very general principles. I believe that the maximum entropy production model and GLV distributions will be found to give general and stable descriptions of many complex systems that have hitherto been seen as insoluble. The systems of Dewar, Levy and Solomon consist of three critical elements: a source, a sink, and some sort of self-limiting behaviour. This model is potentially very powerful, as this simple structure is typical of many complex systems.
The sources and sinks are typically energy, but can also be population, or the wealth created in an economic system, or many other things. The reason such systems are very common is that most other systems are inherently dull, at least in the longer term. Without the source, the system quickly disappears. Without the sink, the system will quickly explode and disappear. Without the self-balancing mechanism, the system will either explode or disappear depending on the direction of the imbalance. The self-balancing mechanism is the key to the long-term preservation of the process, and this reintroduces the conservation principle. In a classical 'static' thermodynamic equilibrium, conservation is absolute. In a Dewar, Levy, Solomon type 'dynamic thermodynamic equilibrium', conservation is approximate and long term. Input and output can differ over the short term, but are brought back into balance automatically in the long term. Indeed, such systems can wander backwards and forwards in a Lotka-Volterra type manner at a macroscopic level, while maintaining GLV type equilibrium at a microscopic level.
In section 7.4 below, I discuss the statistical mechanics of this further, as I believe there may be a shortcut way of unifying and simplifying the approaches of Dewar, Levy and Solomon by recasting the flow model into an equivalent exchange model. In economics the source is production and the sink is consumption. Going back to sections 1.2 and 1.6, there was a discussion of the different ways of producing power laws. These were combinations of two exponential processes, multiplicative processes, and self-organised criticality (SOC). As discussed in section 1.6, it is the belief of the author that the first two processes, double exponentials and multiplicative processes, are in fact different ways of describing the same process. In the GLV this becomes obvious if you look at the difference equation (1.3o), which can be seen as either a way of multiplying the variables (a multiplicative process) or a way of modelling two different growth rates (a double exponential process). However a single GLV can have many different possible equilibria. Dewar ties this together, and shows that dynamic systems tend to a single point of maximum entropy production, a single dynamic equilibrium, at the limit of stability, at the point of self-organised criticality. This appears to be typical of many systems, and may explain the fact that many power law distributions have exponents between two and three even though they arise from substantially different underlying models (see Newman table 1 for example [Newman 2005]). Indeed, Dewar points out that many very chaotic systems, systems close to 'self-organised criticality' such as earthquakes, avalanches, forest fires and the archetypal sandpiles, can be characterised by slow steady underlying growth rates (e.g. tectonic plate movement for earthquakes, tree growth rate for forest fires).
He also explains that such systems can be included in the Maximum Entropy Production modelling approach, even though such systems are traditionally characterised as being very far from equilibrium. Financial markets, especially asset markets, also show many of the characteristics of such SOC systems, with steady growth intermittently interrupted by dramatic crashes. This analogy may shed some light on the role of debt in finance discussed in section 4.6 above. An example, for those that can remember them, is the traditional old-fashioned egg-timer. When well built, these represented a very well behaved sandpile. In a high quality egg-timer the sand is very fine, with equal-sized smooth grains; the sand is dry and friction is very low. In such an egg-timer the sandpile has a near-constant, flattish, inverted conical shape, and close observation shows that the avalanches are small but near-continuous. With a 'normal' sandpile the sand behaves much more erratically. With a little 'stickiness', caused by damp or a wide distribution of grain sizes, the pile can build up into steeper and steeper hills as grains are added at the top. Eventually a dramatic collapse occurs which changes the steep hill into a much shallower one, and then the process restarts. In human-managed forests this lesson has been learned, though at a cost. In the middle of the last century forest managers attempted to fight forest fires by removing undergrowth and ignition sources. This appeared to work in the short run, but eventually it simply led to much larger, more devastating and more dangerous fires. In recent decades foresters have often managed nature reserves by deliberately starting fires on a frequent basis. This results in a steady stream of much smaller fires.
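The sandpile behaviour described above is usually formalised as the Bak-Tang-Wiesenfeld sandpile model. The following minimal sketch (the grid size, the toppling threshold of four grains, and the drop counts are standard but arbitrary choices, not taken from the text) shows the characteristic mix of many tiny avalanches and occasional large ones once the pile has self-organised to the critical state:

```python
import random

random.seed(0)
L = 20
grid = [[0] * L for _ in range(L)]

def drop(i, j):
    """Add one grain at (i, j); any site reaching 4 grains topples, sending
    one grain to each neighbour (grains falling off the edge leave the
    system).  Returns the avalanche size (number of topplings)."""
    grid[i][j] += 1
    size = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        size += 1
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < L and 0 <= ny < L:
                grid[nx][ny] += 1
                unstable.append((nx, ny))
    return size

sizes = [drop(random.randrange(L), random.randrange(L)) for _ in range(20000)]

# After a transient the pile reaches the critical state: most drops cause
# little or nothing, but occasional avalanches span much of the grid.
steady = sizes[5000:]
print(sorted(steady)[len(steady) // 2], max(steady))
```

The median avalanche stays tiny while the largest is far bigger, the scale-free signature that distinguishes the 'sticky' sandpile from the well-behaved egg-timer.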
It is the belief of the author that increasing debt, and liquidity, in an economy above a certain point is actually counter-productive, in that it moves the economy closer to an unstable point, the point of SOC, approaching the scale-free system in which large fluctuations become much more likely.
It also suggests that there will always be strong pressure from financiers and politicians to move towards increasing debt. They are pushed in this direction by the forces of entropy. But any marginal increases in efficiency are outweighed by the increase in the instability of the economy. The comparison with forest management is apposite. Allowing a forest to grow freely, and removing sources of small fires, will, in the short term, and indeed on an ongoing average basis, marginally increase the total amount of wood growing per acre. But this is of precious little reassurance when you find your village surrounded on all sides by fire, or afterwards when you discover there isn't a tree standing for twenty miles in any direction. In this light, Cooper's suggestion that central banks should aim for a pattern of small business cycles is eminently sensible. Simply reducing leverage and excess liquidity may be a better approach; if done correctly, it should move the economy out of a cyclical mode altogether. For these reasons, I believe that the nomenclature of such systems needs to be reviewed. Many complex systems that are currently described as 'out of equilibrium' should instead be described as being in 'dynamic thermodynamic equilibrium' or 'MEP equilibrium'. This form of equilibrium is reached when the system has reached the point of maximum entropy production and continues indefinitely in that state.

7.4 The Statistical Mechanics of Flow Systems

In the following section I would like to briefly bring together some ideas on the statistical mechanics of power laws, from various sources cited in this paper, and also discuss their relevance to dynamic equilibrium both in economics and in flow systems in general. This section is aimed at statistical physicists, mathematicians and theoretical economists, and assumes that readers have read Glazer & Wark, or the equivalent, as a minimum. It is also highly speculative.
It will not be easy to follow for many readers, who may wish to skip to the economics of section 8. In this section I would like to make some suggestions as to possible ways forward for a statistical analysis of the flow systems described by Levy, Solomon and Dewar. I would like to do this by attempting to reduce these models to equivalent exchange models. I have previously been somewhat scathing of exchange models, primarily because they do not realistically capture the processes of real economic systems. For these reasons I have built the models in part A following the flow pattern of the GLV of Levy and Solomon. However, for a core derivation of the statistical mechanics, I believe appropriately designed exchange models may be useful proxies for flow models. Very many exchange models have been produced by econophysicists, with many different underlying mechanisms; see section 1.1 above. In a very perceptive paper, 'The Rich Are Different!: Pareto Law from asymmetric interactions in asset exchange models' [Sinha 2005], Sitabhra Sinha points out that these models share a very basic pattern. When these models have a symmetric pattern of exchange they produce a traditional Maxwell-Boltzmann distribution. When the exchange mechanism is made asymmetric, a power law is produced. Indeed, in one case an asymmetric mechanism was deliberately introduced to assist the poor, but instead produced a power law tail, giving the opposite of the intended result.
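Sinha's basic pattern can be illustrated with a toy pairwise exchange model. The two rules below are standard econophysics devices (a uniform re-split of pooled wealth, and a 'yard-sale' rule that stakes a fraction of the poorer agent's wealth); they are assumed here as an illustration of symmetric versus asymmetric exchange, not taken from Sinha's paper:

```python
import random

random.seed(2)

def simulate(asymmetric, N=500, steps=300000):
    """Repeated pairwise wealth exchanges; total wealth is conserved."""
    w = [1.0] * N
    for _ in range(steps):
        i, j = random.randrange(N), random.randrange(N)
        if i == j:
            continue
        if asymmetric:
            # 'Yard-sale' rule: the stake is a fraction of the poorer
            # agent's wealth, and a fair coin decides who takes it.
            stake = 0.25 * min(w[i], w[j])
            if random.random() < 0.5:
                w[i], w[j] = w[i] + stake, w[j] - stake
            else:
                w[i], w[j] = w[i] - stake, w[j] + stake
        else:
            # Symmetric rule: pool the pair's wealth, re-split uniformly.
            pool = w[i] + w[j]
            r = random.random()
            w[i], w[j] = r * pool, (1 - r) * pool
    return w

def top_share(w, frac=0.01):
    """Share of total wealth held by the richest `frac` of agents."""
    s = sorted(w, reverse=True)
    k = max(1, int(len(s) * frac))
    return sum(s[:k]) / sum(s)

sym = top_share(simulate(False))
asym = top_share(simulate(True))
print(sym, asym)
```

The symmetric rule settles to an exponential (Boltzmann) distribution with a modest top share, while the asymmetric rule concentrates wealth dramatically, tending towards the condensation extreme of the heavy-tailed behaviour Sinha describes.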
I believe it is a similar simple asymmetry that drives the multiplicative flow models of Levy, Solomon and Dewar. If we go back to the base equation (1.3o) for a single agent in the economic models from section 1.3:

w_{i,t+1} = w_{i,t} + e + w_{i,t}·r − w_{i,t}·Ω        (1.3o)

I would firstly like to generalise this to the following:

w_{i,t+1} = w_{i,t} + e_{i,t} − τ + w_{i,t}·r − w_{i,t}·Ω        (7.4a)

The first change above is to allow a distribution of different possible earnings incomes e_{i,t}; this was actually the case in the later income models in section 1.3, where the uniform distribution was replaced with a normal distribution, though both distributions were defined to be exogenous. The second change is to introduce a term τ. This was first discussed in passing in section 1.9.1 and represents what I called compulsory consumption, or what economists normally call non-discretionary spending. This is assumed (in my discussions) to be a constant base value that includes basic housing, as well as minimum requirements for food, clothing, heating, etc. All other spending is assumed to be discretionary, and proportional to wealth, and so is included in Ω. If we now do a summation of equation (7.4a) across all individuals we get:

Σ w_{i,t+1} = Σ w_{i,t} + Σ e_{i,t} − Σ τ + Σ w_{i,t}·r − Σ w_{i,t}·Ω        (7.4b)

Let us then assume that the dynamic flow model is at a dynamic equilibrium, i.e. that it is neither growing nor shrinking through time, though it is still flowing. At this equilibrium the total wealth is constant between time steps, so the term on the left hand side is equal in value to the first term on the right hand side. This gives:

0 = Σ e_{i,t} − Σ τ + Σ w_{i,t}·r − Σ w_{i,t}·Ω        (7.4c)

The obvious way to balance this economic flow system is as an accounting identity, as follows:

Σ e_{i,t} + Σ w_{i,t}·r = Σ τ + Σ w_{i,t}·Ω        (7.4d)

This balances the total incomes on the left and the total consumption on the right. And indeed this would be the natural way to balance any similar physical flow system model, because this is the way to balance the flows in and out of the system.
However, from a point of view of statistical analysis, I believe it would be more fruitful to show a different balance:

Σ e_{i,t} − Σ τ = Σ w_{i,t}·Ω − Σ w_{i,t}·r

or:

Σ (e_{i,t} − τ) = Σ w_{i,t}·(Ω − r)        (7.4e)

This gives additive (but flowing) things on the left hand side of the exchange system and multiplicative (flowing) things on the right hand side. Given that τ, r and Ω are all constants, it also reduces a somewhat complex flow system to an exchange system with only two variables: the earnings, e_{i,t}, on the left hand side and the wealth, w_{i,t}, on the right hand side. This, I believe, is close to the base model that Sinha was describing: an asymmetric exchange model. In equation (7.4e) the left hand side additive flows must balance with the right hand side multiplicative flows. The balance of flows is between net earnings, that is, earnings minus base living costs, and net consumption, which is discretionary consumption less unearned income. In a normal exchange model both sides of equation (7.4e) would be additive, and indeed identical. I believe this model, with only two variables and lots of boundary conditions, may be simple enough to be tractable to a traditional statistical mechanical analysis along the lines of Dewar, or indeed Champernowne. Before moving into further discussion I would first like to follow the maths through a little more. I would like to do two things. Firstly I would like to neglect τ for the moment; we will come back to τ later. Secondly I would like to divide by Ω. That then gives us the following:

(1/Ω)·Σ e_{i,t} = Σ w_{i,t}·(1 − r/Ω)        (7.4f)

This brings us back to some old friends. The term (1 − r/Ω) gave us our definition of α, the exponent of the power tail, included in equation (4.5q). Equation (7.4f) itself is just a restatement of Bowley's law as defined in section 4.5 of this paper. These relations imply that the suggested approach in this section may have promise.
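The balance above can be sanity-checked numerically by iterating the single-agent update rule to its dynamic equilibrium. The parameter values below (r, Ω, τ and the earnings distribution) are purely illustrative assumptions:

```python
import random

random.seed(3)
N = 1000
r, Omega, tau = 0.04, 0.07, 0.2   # returns, discretionary rate, compulsory consumption
e_mean = 0.5                      # mean of the exogenous earnings distribution
w = [1.0] * N

# Iterate the update w_{i,t+1} = w_{i,t} + e_{i,t} - tau + w_{i,t}*(r - Omega)
# until total wealth is stationary (a dynamic, still-flowing equilibrium).
for _ in range(3000):
    w = [wi + random.gauss(e_mean, 0.15) - tau + wi * (r - Omega) for wi in w]

# At the dynamic equilibrium the balance sum(e - tau) = sum(w)*(Omega - r)
# should hold on average.
lhs = N * (e_mean - tau)   # expected total net earnings per time step
rhs = sum(w) * (Omega - r)
print(lhs, rhs)
```

With these assumed values the mean wealth settles near (e_mean − τ)/(Ω − r) = 10, and the two sides of the balance agree to within a few per cent, the residual being sampling noise in the earnings draws.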
A second observation, which may be completely wrong, is that equation (7.4e) has the feel of a simple differential equation, with wealth on one side and earnings, the time derivative of wealth, on the other. Instinctively, the solution of this would be of exponential form. Given that the solution of a symmetric exchange is a Maxwell-Boltzmann with an exponential tail, a solution of (7.4e) could reasonably be expected to be a Maxwell-Boltzmann with an exponential-exponential, or power law, tail, as per Reed and Hughes, or Baek, Bernhardsson and Minnhagen, among others [Reed & Hughes 2002, Baek et al 2011].
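The Reed & Hughes route to a power law, exponential growth observed at exponentially distributed ages, can be demonstrated in a few lines; the growth and killing rates below are arbitrary choices:

```python
import math
import random

random.seed(4)
g, mu = 0.05, 0.1   # growth rate and 'killing' rate (arbitrary choices)

# Each sample grows as exp(g*t) for an exponentially distributed lifetime t.
samples = [math.exp(g * random.expovariate(mu)) for _ in range(200000)]

# Analytically P(W > w) = P(t > ln(w)/g) = w**(-mu/g): an exact power law,
# with exponent mu/g = 2 for these rates.
exceed = sum(1 for s in samples if s > 10.0) / len(samples)
print(exceed)  # close to 10**-2
```

The empirical exceedance fraction at w = 10 matches the analytic value 10⁻² closely, showing how the 'double exponential' combination produces a power tail with no fine tuning.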
An alternative approach is to look at equation (7.4e) from a maximum entropy, statistical mechanical point of view, where you need to maximise the entropy over two different distributions. On the left hand side, you have a traditional additive term that should produce a standard Maxwell-Boltzmann distribution of earnings. On the right hand side you also have a distribution to maximise; however, in this case the distribution is multiplicative, and so the ladder of energy levels is proportionately spaced. The resultant Maxwell-Boltzmann is therefore exponential-exponential, or power law. This seems very close to the original model built by Champernowne, and rediscovered by Levy & Solomon [Simkin & Roychowdhury 2006]. It may be possible to maximise each of these entropies independently; however, it seems likely that the distributions on each side will affect each other. At this point it is worth looking at the left hand side in more detail, as this may answer a quandary discussed back in section 1.9.2, though it raises as many questions as it answers. In that section it was noted that returns from waged employment appear to follow an offset Maxwell-Boltzmann distribution, or an 'additive GLV distribution'. Looking at equation (7.4e), the answer to why earnings are distributed as a Maxwell-Boltzmann becomes, in one sense, trivial. The distribution is a Maxwell-Boltzmann because that is the maximum entropy solution for the distribution of earnings. For a statistical mechanician, that is good enough. Indeed, statistical mechanics would predict a Maxwell-Boltzmann distribution of earnings even when all the individuals had identical skills. However two questions are raised immediately: why is it offset? And what is the actual mechanism for creating the distribution? The first question is one for which the answer is not at first obvious.
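Before turning to those two questions, the claim above, that a Boltzmann weighting over a proportionately spaced ladder of levels yields a power law, can be checked directly; the ladder ratio and inverse 'temperature' below are arbitrary choices:

```python
import math

beta, lam, K = 0.5, 1.1, 200   # inverse 'temperature', ladder ratio, ladder length

# Boltzmann weights over ladder states k = 0..K-1 with 'energy' proportional to k.
p = [math.exp(-beta * k) for k in range(K)]
Z = sum(p)
p = [x / Z for x in p]

# On an additive ladder (state k at wealth w = k) these weights are an
# exponential tail.  On a multiplicative ladder (state k at w = lam**k) the
# same weights become a power law: p ~ w**(-alpha) with alpha = beta/ln(lam).
alpha = beta / math.log(lam)
ratios = [p[k] * (lam ** k) ** alpha for k in (50, 100, 150)]
print(ratios)  # constant (equal to 1/Z): the power law is exact
```

The ratio p(w)·w^α is constant across the ladder, so the same Boltzmann weighting gives an exponential tail on an additive ladder and an exact power law on a multiplicative one.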
Intuitively, the maximum entropy distribution would extend to zero, because, given a fixed total amount of income, this would also allow the maximum values of earnings in the tail to increase, and so give a wider total spread, which would have a higher overall entropy. However, although the model above attempts to reduce the system to an exchange model, it must be remembered that it is a flow system that is being analysed. I believe that Dewar is absolutely correct that these systems must be modelled by maximising the entropy flow, not just by maximising the entropy. So, with two distributions, one on each side of the exchange, the simplistic (traditional) solution would be to maximise the entropy of the two distributions; that is, to multiply the two different partition functions and maximise the single resultant function. However, both distributions are modelling distributions of flow. As well as maximising the entropy embodied in the two distributions, there is a simultaneous need to maximise the entropy embodied in the size of the flows. Hopefully this will be a straightforward trade-off between the three (four?) different entropies being enumerated. Intuitively, given this extra contribution to total entropy from the flow, an offset Boltzmann distribution may achieve extra entropy flow to compensate for its narrower spread and the lower entropy in its distribution. Going back to the concept of dissipative structures and negentropy generators, a narrower Boltzmann distribution could be seen as a dissipative structure with lower entropy, but one which is capable of allowing larger entropy flows through the system. Ultimately, if it allowed very high entropy flows, the earnings distribution might even collapse into a very low entropy uniform
distribution, or, as is often seen in both real world monopolies and many econophysics models, all wealth and income would go to one individual. With a dissipative structure approach, presumably there is a negentropy flow associated with 'maintaining' the dissipative structure in its low entropy form; Maxwell's demon is continually at work narrowing the spread of the distribution. However, if this negentropy flow is smaller than the entropy flow through the system that the dissipative structure enables, then the flow system as a whole, including the dissipative structure, can be stable and long-lived. As long as a factory is making money, it is worth diverting part of the profits to maintain it. If a proposed new factory is predicted to be profitable in the long term, it is worth borrowing money to build it. As a first approximation, it might be possible simply to maximise the product of the entropies of the two distributions multiplied by the flow that results from the macrostates. The second question, that of the mechanism for creating income distributions, is also problematic. For the right hand side of equation (7.4e), the mechanism of wealth condensation producing a feedback loop for increasing wealth via returns on assets, discussed in this paper, seems, to me at least, very plausible. The self-organisation of salaries into a Maxwell-Boltzmann distribution is a harder process to visualise; people do not randomly exchange jobs and salaries with each other. The first problem is letting go of the fundamental economic belief that people are fairly rewarded for their employment. We have already seen in section 4 that this is not normally true at the aggregate level. I do not believe it is true at an individual level either. I have worked as a risk manager in the water and nuclear industries, doing roughly similar jobs at roughly similar salaries (we will come back to this).
However, water is cheap and electricity is expensive, especially when compared to the amount of capital installed, so the value I created in the nuclear job was many orders of magnitude higher than in the water jobs. I was in fact paid a little better in the nuclear job, but nowhere near enough to reflect the extra value created. Similarly, as a risk manager, the wealth I created was many times higher than that created by a security guard or a cleaner, but I was not paid many times their rate, though I was certainly paid more. It is a well-known economic puzzle that some industries, such as the oil industry, pay better than others even for secretaries, cleaners and security guards, where the jobs are identical. An entropic, Maxwell-Boltzmann distribution of wages, varying by the wealth of the industry, might explain this puzzle. Similarly, at the high end, it may explain the persistent high pay and bonuses of executives, and even mid-ranking staff, in financial industries that have very high cash flows but low profits. In fact, when employers take on new employees they do not do a detailed analysis of the individual's probable contribution of wealth to the company. They decide if the employee is needed, they look at the market rates for the skills required, and they pay the going rate. Certainly overall wage levels are checked carefully against total revenues, and deadwood is chopped back wherever possible. But wages are set externally in the market, not internally by potential wealth creation. Note also that, in a stable economy, the total sum Σe of earnings available will be fixed, giving the boundary condition necessary for a Maxwell-Boltzmann distribution to develop. Given that wages are set in the market, a maximum entropy distribution becomes more possible. As long as there is a minimum amount of stochastic churn in the market, with competition and
movement up and down a ladder of earnings levels, then creation of a Maxwell-Boltzmann distribution becomes possible. Moving to a different issue, an element that is missing from this model, and indeed all my models, is that of unemployment. Wright's models are superior in this regard, and may shed light on this dynamic. Equation (7.4e), and a Maxwell-Boltzmann distribution, especially an offset one, would seem to imply that all would have jobs and earnings. I can see two possible causes for persistent mass unemployment. A first explanation is given by reintroducing r, the compulsory consumption or non-discretionary spending. It is possible that when the values of e_i,t at the low end of the distribution become less than the value of r, individuals are removed from the distribution altogether. A second source of persistent unemployment could come from the maximum entropy flow, dissipative structure model combined with differing actual skill levels. With differing skill levels, greater flows of entropy might be achieved by diverting all earnings to highly skilled individuals, with no flows to the low skilled. Although the distribution would have lower entropy, total entropy flows might be higher. At this point I would like to return to the issue of equity, which has been a central theme of this paper. Equation (7.4e) implies that a group of identical individuals will be forced into an unequal distribution of earnings incomes. In practice, with non-identical individuals, the individuals will be ranked into the Maxwell-Boltzmann distribution by their abilities. Following this, the individuals with the highest earnings will then be distributed into the highest income groups of the GLV distributions, as we saw in section 4. Even ignoring unemployment effects, the whole system becomes deeply iniquitous. Finally, and much more speculatively, I would like to consider what might happen when equation (7.4e) does not balance. 
Σ (e_i,t − r) = Σ w_i,t (Ω − r)    (7.4e)

I think that equation (7.4e) will balance in many situations of flow systems; most physical and biological systems will come to a dynamic equilibrium when the flows in and out of the system are equivalent. This will define a pair of distributions and an entropy flow that will have a combined system maximum entropy production. However, for most economic systems the above is not true. Once a market system is installed in a country, the economy starts growing and is characterised by long-term persistent levels of growth. The growth level is so persistent that this can also be characterised as being stable, in that the parameters of the system: GDP growth rate, interest rates, stock-market growth rates, etc., are very stable over decades or even centuries. This was discussed in section 4.5. Newly industrialising economies are characterised by high levels of GDP growth, up to 10% per annum, with associated high interest rates and stock-market rates. Ω is typically low, around 0.5. For mature economies, GDP growth and interest rates are typically 2-4% and Ω is typically 0.7. 
In these cases Ω can be seen as the external variable. Given this external value of Ω, it could then be possible that there is a set level of GDP growth, interest rates and stock-market returns that gives a maximum entropy production output for the sum of the terms represented by equation (7.4e). If this was the case, then the persistence of endogenous growth would have an explanation. Even more speculatively, let us reintroduce r to the discussion. The value of r will be defined somewhere endogenously within the system. It will basically be defined in terms of the proportion of the average wage level required to provide basic housing, food, heating, etc. In a developing society it will probably be defined largely by the subsistence wage level needed to provide basic food and shelter. In an advanced economy it will be defined by basic housing rental costs and ultimately the costs of scarce land. This might explain the very similar rates of growth seen in industrialising economies. It could also explain the higher long-term growth rates in the US, with its plentiful land, compared to the lower rate for the UK, where land has been scarce for centuries. If r can be defined endogenously within the system, then Ω should be definable endogenously in terms of r. People will need to save enough during their working lives to pay for their annual r during their retirement. In theory, then, the whole system becomes an endogenous equilibrium, with the only real exogenous factor being scarce land prices in advanced economies. So, after a lot of background, we have moved back to the economics. 
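The 'stochastic churn' mechanism proposed above for earnings can be illustrated with a toy simulation. This is a minimal sketch in the spirit of the kinetic exchange models of the econophysics literature, not the model of this paper: the agent count, total earnings and number of exchange steps are all invented for illustration. A fixed total of earnings is repeatedly repartitioned between random pairs, and the conserved, additive churn drives the distribution towards the exponential (Boltzmann) form discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: N agents share a fixed total of earnings F_e
# (the boundary condition noted in the text).
N, F_e = 1_000, 10_000.0
e = np.full(N, F_e / N)          # start from perfect equality

for _ in range(300_000):
    i, j = rng.integers(0, N, size=2)
    if i == j:
        continue
    pot = e[i] + e[j]            # each pairwise exchange conserves the pair's total
    cut = rng.random() * pot
    e[i], e[j] = cut, pot - cut

# The total is conserved exactly, and the stationary distribution is
# exponential (Boltzmann-like), for which the ratio std/mean tends to 1.
print(e.sum(), e.std() / e.mean())
```

Starting from perfect equality, the conserved random exchange alone produces the unequal, exponential-tailed spread; no differences in ability are needed, which is the point made about identical individuals above.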
Part B.II — Economic Foundations

8. Value

8.1 The Source of Value

The source of value is humanly useful negative entropy, or simply 'negentropy'. This of course raises the question of what negentropy is. Erwin Schrödinger first introduced the concept of negentropy in his 1944 popular-science book 'What is Life?' [Schrodinger 1944], where it was used in the discussion of living systems. Schrödinger explained his use of this phrase: "if I had been catering for them [physicists] alone I should have let the discussion turn on free energy instead. It is the more familiar notion in this context. But this highly technical term seemed linguistically too near to energy for making the average reader alive to the contrast between the two things..." [Schrodinger 1944] I am going to leave the definition of negentropy deliberately vague for two reasons. The first is that the exact definitions of concepts such as negentropy and free energy are difficult and can vary by situation and definition of systems. The second, more honestly, is that I remain unclear in my own views as to the detailed definition, and consequent measurement, of negentropy within economic systems, when working from a physical, bottom-up point of view. I am however convinced that this lack of clarity is of no great consequence. Over the last two hundred years physicists, chemists and engineers have proposed different quantities such as entropy, enthalpy, free energy, Landau potential, etc to deal with entropic calculations in different systems. This has been done primarily to make the sums add up in a meaningful way subject to different constraints. Maximum entropy production is a very new set of models, a new way of adding up entropy, which has yet to be made systematic in the life sciences where it originated, never mind in economics. But in the short term, this is of academic interest only. We don't need to invent a new entropy concept for economics. 
Human beings intuitively understand this particular negentropy; they call it 'value', and it is measured in non-SI units such as dollars, euros, pounds or yen. It doesn't actually matter that much whether we call it negentropy or free energy; it is what people think of as value, and it costs £ or $ or € to get some of it. It accumulates during the production process and disappears in the consumption process. Most importantly it is objective; while people may have different utility values, a good or service has an intrinsic value. Although they may have disagreed, and indeed been wrong, on the ultimate source of value, the classical economists, from Quesnay to Marx, were correct in believing that value was a real, meaningful, intrinsic quantity. There are of course a number of natural objections to an intrinsic concept of value; these are discussed in section 8.2 below. 
For a feel of what 'negentropy' is, we can go back to the concept of entropy being, in very general terms, a measure of dispersion. In general terms, the more dispersed something is, the less useful it is; the more concentrated it is, the more useful it is. More concentration means more negentropy, means more value. So for example, very rough estimates suggest there are about 15,000 tonnes of gold in the world's oceans, but the concentrations are so low that extracting it would not be economic; it is too dispersed; its entropy is too high, its negentropy is too low. Wealth creation is usually a process of concentration, whether this be the discovery of a concentrated ore in a gold mine or oil in an oil well, the concentration of baked beans into a can and of cans into a supermarket, art works into a gallery, or a concentration of people into a factory, a physical market or a city. The example of a traditional physical market is particularly apposite; the whole point of markets is to concentrate the goods from the surrounding area, so allowing goods to be exchanged. A traditional market is a creator of negentropy, a creator of value, even though the physical goods are unchanged in the process. Classical economics produced an effective method for valuing geographically dispersed negentropy by introducing the concept of marginality. This is very useful for the pricing of genuinely scarce resources such as specific minerals, agricultural land, housing in cities, etc. Unfortunately this mathematical trick has been extended, most unwisely, across the whole of economics. From an information entropy point of view, negentropy is also increased by increasing the uniqueness of an object. Whether you turn a piece of gold into a piece of jewellery, some steel into machine tools, or raw ingredients into a restaurant meal, you are increasing the complexity, the concentration of information, and so the negentropy. 
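The idea that concentration means low entropy can be made concrete with Shannon's formula. The following is a minimal sketch with invented numbers: the 1024 'sites' and the four-site ore body are illustrative assumptions, not data from the text. The same quantity of material, spread thinly or gathered into a few sites, gives very different entropies.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy, in bits, of a discrete probability distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# Hypothetical example: the same quantity of gold distributed over 1024 sites.
dispersed = np.full(1024, 1 / 1024)   # ocean-like: uniform dilution
concentrated = np.zeros(1024)
concentrated[:4] = 1 / 4              # ore-body-like: four rich sites

print(shannon_entropy(dispersed))     # 10.0 bits: maximal dispersion
print(shannon_entropy(concentrated))  # 2.0 bits: low entropy, high 'negentropy'
```

The dispersed, ocean-like case sits at the maximum of log2(1024) = 10 bits; the concentrated ore body sits far below it, which is the sense in which concentration creates negentropy, and so value.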
Another way of creating negentropy is to create artificial scarcity as a way of decreasing dispersion. This is found in almost all luxury goods, whether they are unique works of art, haute couture dresses, first edition books, Penny Blacks, Beanie Babies, etc. In this case the artificial scarcity is maintained by the use of copyrights and patents, which allow the price of the goods to be raised above the value of their inputs. Money of course forms a very special form of a good, one that has its artificial scarcity carefully controlled. The paper currency itself is controlled through criminal legislation against counterfeiting; money creation in more general terms is controlled by banking and other legislation and the monopolistic actions of central bank policy. Although marginality has been very useful in pricing simple economic goods that have scarcity, there are other, and I believe much more effective, ways of measuring value in such systems. Through the entropy of information, the entropy of mixing, etc, science has an extensive mathematical toolbox for dealing with the sort of negentropy found in economics. And the way to deal with it is in a statistical-mechanical way. It is the belief of the author that pricing in this manner will provide much more general ways of pricing, and that marginality will drop out as a special simplified case. Using entropic systems should also remove many of the theoretical problems associated with imaginary Walrasian auctioneers and other such difficulties. Foley has made very significant inroads into this way of carrying out pricing [Foley 1996a, 1996b, 1999, 2002]. 
All of the above form the first main class of humanly useful negentropy. These are economic products; goods and services generated through concentration and specialisation. These are the stores of negentropy, of economic value. The specialisation for the creation of tools, as against pure decoration, leads on to the second main form of negentropy. The creation or extraction of all the above products is mediated through the action of 'negentropic machines' or, in the language of maximum entropy production, 'dissipative structures'. Dissipative structures come in many forms and can be looked at on different levels. They include beasts of burden, trucks, tractors, computers, farms, factories, mines, power stations, markets, supermarkets, stock markets, cities, national economies and the world as a whole. The most important dissipative structures in economics are human beings. Dissipative structures are 'negentropy machines'. They do two things at once: firstly, they produce outputs of high negentropy goods; secondly, they produce a much larger output of high entropy waste products, normally mostly in the form of high entropy, low temperature heat. Basic physics demands that the high entropy waste stream must be larger in quantity than the high value negentropy stream. The value of dissipative structures in economic terms lies in their ability to produce large amounts of products with high negentropy values. The ultimate source of the negentropy can come from a variety of sources; the most important ones for human economies at the moment are the sun, fossil fuels, and human ingenuity, though there are plenty of other minor sources. The sun provides negentropy for the essential input of food for human beings; it also provides the evaporation and wind that provide the rain for the crops. The other main negentropy sources are coal and natural gas for electricity production, and oil products for transport. 
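The claim that the waste stream must dominate can be checked with simple second-law bookkeeping. The following is a minimal sketch for a generic steam power station; the temperatures and the 40% efficiency are illustrative assumptions, not figures from the text.

```python
# Second-law bookkeeping for a 'negentropy machine', with illustrative numbers.
T_hot, T_cold = 800.0, 300.0   # K: boiler temperature and environment temperature
Q_hot = 1000.0                 # MJ of high-grade heat drawn from the fuel

eta_carnot = 1 - T_cold / T_hot          # ideal (Carnot) efficiency: 0.625
W = 0.40 * Q_hot                         # a real plant does worse: assume 40%
Q_waste = Q_hot - W                      # 600 MJ of low-temperature waste heat

S_in = Q_hot / T_hot                     # entropy drawn from the hot source: 1.25 MJ/K
S_out = Q_waste / T_cold                 # entropy dumped to the environment: 2.0 MJ/K

print(S_in, S_out)
assert S_out > S_in                      # the waste stream carries more entropy out
                                         # than came in: total entropy rises even as
                                         # the work output stores usable 'negentropy'
```

The machine exports usable work only by exporting even more entropy in its waste heat; this is the physical sense in which the high entropy waste stream is necessarily the larger of the two outputs.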
Prior to the industrial revolution almost all negentropy came ultimately from the sun, providing food for human beings and draft animals, and wood for heating. As the industrial revolution progressed, energy negentropy from plants and animals was displaced by fossil fuels. That is to say, the physical labour of draft animals and human beings was slowly replaced by the mechanical labour of machines powered by coal, oil and gas. In more recent times, the information revolution means that computers are increasingly providing the information negentropy that used to be provided by the human brain. The interchangeability of negentropy sources means that on one point Marx's theories were fundamentally wrong; labour is not the only source of value. In this, Smith was correct: both draft animals and machines can provide value. The physiocrats were also correct in their belief that land can provide value, in its role in capturing the sun's rays. Where Smith, Marx, Ricardo, Sraffa, etc were correct was in their belief that value is an inherent quantity, embodied in the goods and services in the economy. However, as was found above, Marx was accidentally more than half right, as Bowley's law shows that the negentropy from human beings is roughly twice as important as the negentropy from all other sources put together. 
Although, at a local level, sources of negentropy are interchangeable and substitutable, labour does have one very important property that makes it different from all other sources of negentropy. All non-human forms of negentropy can be, and normally are, owned in the form of capital. In the absence of slavery, humans cannot be owned. Where negentropy sources have useful value, normally they are owned. Ownership of a mine or oil well gives ownership of the oil, coal or uranium as power sources, or ownership of the concentrated ores for raw materials. Ownership of land gives rights to the sun and rain falling on it that allow the growing of high negentropy food. More subtly, the ownership of land in the centre of a city gives the right to use a location that has high negentropy, in that it is a location where it is possible to meet many people and do business with them. In this regard, it should be noted that the adding of negentropy value is not restricted to traditional manufacturing production. Retailers add value by bringing goods to people, by 'concentrating' them in their shops. Travel agents aggregate many different holidays together to allow quick and easy selection of the best value. The financial industry concentrates knowledge of many different investments to find the best ones for their clients. While the above arguments are clear qualitatively, quantitative calculation of the values of negentropy from first principles is problematic. There has been a recent history of attempts to calculate economic entropy in this manner, mostly dating back to the work of Georgescu-Roegen [Georgescu-Roegen 1971]; these attempts are common in the energy economics and environmental economics fields, and are very problematic. I would like to make it clear that the negentropy being discussed in this paper is definitely not 'emergy' (embodied energy), 'exergy' or other similar concepts originating from these sources. 
I believe that these particular concepts are only useful in narrow, well-defined areas of application such as the energy industry. For a general application in economics they fail to take into account three important issues: the role of locational or concentration entropy, the role of information entropy, and the concept of 'humanly useful' in entropy. The role of concentration negentropy, the opposite of dispersive entropy or the entropy of mixing, has already been discussed above. Calculation of this for things such as land prices in city centres, and the existence of markets, is of great importance. The engineering-like approaches of emergy or exergy fail to capture this important source of wealth. Indeed it is the belief of the author that many historic attempts to map thermodynamics to economics have failed because they have concentrated on finding analogies for pressure and temperature, etc. The key parallel is that of chemical potential. Information entropy has always been important to human economies once they moved beyond subsistence to agricultural markets. Writing was invented in Babylonia as a way of recording storage and sales of crops. Numbers and calculation were invented for the same reasons. Since the oil shocks of the 1970s, information negentropy has become one of the main sources of economic growth. For the last thirty years Western Europe has enjoyed substantial growth and significant increases in material wealth despite having an almost constant rate of energy usage. In Europe, information negentropy is the primary source of new wealth. 
In theory, pure information entropy is directly measurable in bits and bytes. But the actual negentropy embodied in the display on a computer screen is very different from the simple information displayed on the screen. Calculating the effects of information entropy is not straightforward. To take a recent example, there has been a revolution in the United States in the extraction of natural gas. Innovative rock fracturing techniques (new information) allow the extraction of substantial amounts of cheap gas from shale and other 'tight gas' sources. The information is easily passed from company to company, and so the addition of a very small amount of information has resulted in a very significant amount of new useful energy negentropy. How this would be calculated is not obvious (though measuring the drop in natural gas prices is easy enough). The third problem is the concept of 'humanly useful' negentropy. It is possible to run cars on petrol (gasoline), or alternatively on compressed natural gas, and store similar amounts of calorific value in similar sized cars. However, as a liquid, petrol is far easier to store and handle, so it is more 'humanly useful', and so has a higher negentropy value to a human being. Due to accidents of genetic history, some things are much more 'humanly useful' than others. I don't believe that an emergy approach can usefully calculate the difference in values between petrol and CNG. I am very confident that an emergy approach will struggle to calculate the intrinsic value of an edition of 'Playboy' from first principles. This may all seem very negative, and suggest that absolute value is not calculable. Actually, whether negentropy is calculable or not is not important. If it is functioning correctly, a very big if, the market automatically calculates these values for us, and prices the negentropy in $, euro, £, etc. 
As Sraffa and von Neumann showed, the long-term prices of goods should reflect the value of the inputs. For most manufactures and services this is easy to observe and prices stabilise quickly; prices are set by the value of the inputs. Given the day-to-day fluctuations of the prices of things such as food, petrol, shares and houses, many readers may disagree with the concept of intrinsic value. I hope to deal with these objections in the next section.

8.2 On the Conservation of Value

Trivially, value is not conserved. If I drop a Ming vase on the floor, crash my car, or my country goes to war, then wealth is arbitrarily destroyed. Similarly, if I bake a cake or build some shelves then wealth is created. However, I normally buy both my cakes and my shelves from somebody who has produced them. If I am sensible, I insure both my car and my vases, so that this possible accidental destruction of wealth is transferred to deliberate consumption; in the form of regular payments of premiums on insurance policies, along with the consumption of the cake, and in the fullness of time the wearing out of the shelves. 
I also vote for people who I believe will avoid wars, and democracies very rarely go to war with each other — and most economists would accept that the normal rules of economics don't apply in wartime. At all times wealth is being continuously destroyed, via eating, wearing out clothes, heating homes, crashing cars, etc. At the same time it is continuously being created on farms, from mines, in factories, offices, etc. Between the very deliberate acts of production and consumption, people do their utmost to conserve whatever wealth they have. That they often fail to achieve these aims with financial investments is related to the inherent instability discussed in the macroeconomic and commodity models above. It is my personal belief that there is a very strong argument for saying that wealth in all its forms is close to being a conserved substance between the acts of production and consumption. This is supported by the fact that the Lotka-Volterra and GLV models in this paper work effectively as models, and that they produce outcomes such as income distributions with power law tails, company size distributions with power law tails, and splits in capital/labour returns that match Bowley's law. These models would not work without the conservation of intrinsic wealth. This in itself strongly suggests that wealth or value is an approximately conserved quantity through the market system. There are many reasons that people might have for not believing that value is intrinsic, and for thinking it can be set arbitrarily. The most obvious is that the prices of things such as petrol (gasoline), houses, artworks, computers, and shares can vary rapidly according to time and place. I believe there are five main reasons for these fluctuations in prices:

1. locational scarcity
2. artificial scarcity
3. technology change
4. dynamic scarcity
5. liquidity

The reason for variety in the price of 'identical' houses is locational scarcity. 
The fact that land in the centre of London is more expensive than land in the mountains of Wales has been dealt with in both classical and neo-classical economics using the concepts of marginality. Artificial scarcity, the reason that diamonds, artworks, vintage cars, Beanie Babies and money have stores of value that are manifestly different to their production costs, is due to the artificial limiting of these items. Both locational and artificial scarcity were discussed briefly above in section 8.1. While marginality is an effective tool for analysing value in these areas, it is the belief of the author that the dispersional and information properties of entropy will enable a better way of explaining and calculating such values. 
Technology change is easily dealt with. All the previous discussions in this paper have been based on economies without technological change, and so values of goods remain constant, ignoring inflation effects, as new capital is created by temporary shortfalls in supply. Clearly, in a real, modern economy, the rapid progress in IT and other high-tech industries can result in rapidly dropping prices. This itself is a consequence of the maximum entropy production principle continually working to improve the efficiency of the dissipative structures. Dynamic scarcity is most obvious with commodities, as modelled in section 3. This scarcity affects things such as oil, metals and agricultural products. The capital intensity and long timescales needed for installation of the capital for commodity production can result in dramatic changes in prices, though generally they take the form of short term spikes in long term stable base prices. The same bubble mechanism is responsible for the dramatic changes in house prices over time. Liquidity is a much more difficult, and interesting, topic. Liquidity is a measure of how easy or difficult it is to buy and sell things. It has already been shown in the macroeconomic model above that liquidity can be artificially generated in a financial system simply by the known short-termism of markets combined with standard financial pricing procedures. Liquidity has been the subject of much interesting research in recent years. This research suggests that liquidity could be of key importance in the apparent failure of markets to price assets correctly, and in the failure of financial markets in general. It does not appear that this research has so far made much impact in the fields of economics, finance or, with rare exceptions, econophysics, which I believe is unfortunate. 
I therefore propose to give a brief review of some recent research on liquidity and discuss aspects which relate to my own models, and also which are of more general importance.

8.2.1 Liquidity

"Liquidity is not a virtue in and of itself unless it produces a benefit to the real economy." Yves Smith [Smith 2010]

"But there is one feature in particular which deserves our attention. It might have been supposed that competition between expert professionals, possessing judgment and knowledge beyond that of the average private investor, would correct the vagaries of the ignorant individual left to himself. It happens, however, that the energies and skill of the professional investor and speculator are mainly occupied otherwise. For most of these persons are, in fact, largely concerned, not with making superior long-term forecasts of the probable yield of an investment over its whole life, but with foreseeing changes in the conventional basis of valuation a short time ahead of the general public. They are concerned, not with what an investment is really worth to a man who buys it "for keeps", but with what the market will value it at, under the influence of mass psychology, three months or a year hence. Moreover, this behaviour is not the outcome of a wrong-headed propensity. It is an inevitable result of an investment market organised along the lines described. For it is not sensible to pay 25 for an investment of which 
you believe the prospective yield to justify a value of 30, if you also believe that the market will value it at 20 three months hence. Thus the professional investor is forced to concern himself with the anticipation of impending changes, in the news or in the atmosphere, of the kind by which experience shows that the mass psychology of the market is most influenced. This is the inevitable result of investment markets organised with a view to so-called "liquidity". Of the maxims of orthodox finance none, surely, is more anti-social than the fetish of liquidity, the doctrine that it is a positive virtue on the part of investment institutions to concentrate their resources upon the holding of "liquid" securities. It forgets that there is no such thing as liquidity of investment for the community as a whole. The social object of skilled investment should be to defeat the dark forces of time and ignorance which envelop our future. The actual, private object of the most skilled investment to-day is "to beat the gun", as the Americans so well express it, to outwit the crowd, and to pass the bad, or depreciating, half-crown to the other fellow. This battle of wits to anticipate the basis of conventional valuation a few months hence, rather than the prospective yield of an investment over a long term of years, does not even require gulls amongst the public to feed the maws of the professional; it can be played by professionals amongst themselves. Nor is it necessary that anyone should keep his simple faith in the conventional basis of valuation having any genuine long-term validity. For it is, so to speak, a game of Snap, of Old Maid, of Musical Chairs - a pastime in which he is victor who says Snap neither too soon nor too late, who passes the Old Maid to his neighbour before the game is over, who secures a chair for himself when the music stops. 
These games can be played with zest and enjoyment, though all the players know that it is the Old Maid which is circulating, or that when the music stops some of the players will find themselves unseated." J.M. Keynes [Keynes 1936]

The following is a brief review of current research and emerging ideas within the field of liquidity. This section is something of a diversion, but research in this area is proceeding rapidly, and important new conclusions have been reached in recent years. It is my belief that these conclusions are important for finance and economics in general, and econophysics in particular, but they don't appear to have become widely known. This section is somewhat technical, and assumes a basic knowledge of finance, for example through reading a standard text such as [Brealey et al 2008]. Where it is important for my own modelling is that my commodity and macroeconomic models predict endogenous creation and destruction of liquidity. If such models are to be successfully built and calibrated, understanding and meaningful measurement of liquidity will be an essential ingredient. The discussion is largely confined to liquidity within stock markets and its effects on the pricing and trading of stocks and shares. More sophisticated readers will also be amused at a discussion that is largely based on the marginalist approaches used in the CAPM and related models; approaches that are otherwise treated with some derision in the rest of the paper. Like many other aspects of economics, I believe that recasting asset pricing models into a dynamic, chaotic framework will give significant advantages. For the moment, almost all the research on liquidity, other than that carried out by econophysicists such as Bouchaud, Potters, Mezard, Wyart, French, Farmer, and others, has been carried out against the traditional models of Debreu, Arrow, et al, and I am obliged to follow this in my review. 
As a concept, liquidity rivals entropy in its opacity. Both the definition and the measurement of liquidity present problems. Historically, stock market liquidity has been defined as the ability to trade large quantities of shares quickly, at low cost and with minimal price impact. Unfortunately this actually describes a range of desirable outcomes rather than an underlying concept or property. Similarly, measurements of liquidity may focus on trading quantity, trading speed, trading cost, volume of trade, etc. Historically it has not been clear whether these different measures were in fact measuring the same thing or not. In the last decade a large number of papers have been produced giving comparisons of measurements of liquidity and illiquidity; see for example [Chordia et al 2000, Porter 2008, Korajczyk & Sadka 2006, Goyenko et al 2009]. Many different variables have been used to measure liquidity, including trading volume, frequency of shares traded, bid-ask spreads and order imbalances, amongst many, many others. The variety of measures used reflects the difficulty of pinning down exactly what liquidity is. As well as individual measures, composite measures have been created in an attempt to capture the multiple dimensions of liquidity. Indeed there seems to be something of a cottage industry in the creation of new measures of liquidity. In the more recent papers, such as those above, it appears that more sophisticated measures of the different dimensions of liquidity do in fact correlate closely. It also appears that annual and monthly, long time scale, data correlates well with daily data [Goyenko et al 2009]. These results appear to hold true both for stock markets as a whole and for individual company shares. The research above suggests that the different measures of liquidity are in fact measuring the same underlying property; however, the exact definition of this underlying property remains elusive. 
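To make one of these measures concrete, Amihud's widely used price-impact proxy is simply the average of the absolute daily return divided by the dollar volume traded: large values flag stocks where a little trading moves the price a lot. The following is a minimal sketch on invented data; the two synthetic stocks, sharing one price path but differing in volume, are illustrative assumptions.

```python
import numpy as np

def amihud_illiquidity(prices, volumes):
    """Amihud's illiquidity proxy: mean of |daily return| / dollar volume.
    Higher values indicate a less liquid stock (more price impact per dollar traded)."""
    prices = np.asarray(prices, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    abs_returns = np.abs(np.diff(prices) / prices[:-1])
    dollar_volume = prices[1:] * volumes[1:]
    return float(np.mean(abs_returns / dollar_volume))

# Hypothetical data: two stocks with the same price path but different trading volume.
prices = [100.0, 101.0, 99.0, 100.0, 102.0]
liquid = amihud_illiquidity(prices, [1e6] * 5)    # heavily traded
illiquid = amihud_illiquidity(prices, [1e3] * 5)  # thinly traded

assert illiquid > liquid   # same price moves on far less volume => more 'illiquid'
```

Because the price path is identical and only volume differs, the thinly traded stock scores exactly 1000 times higher here; with real data the measure mixes genuine price impact with volume, which is one reason so many alternative measures exist.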
It appears that including liquidity risk as a factor may explain a number of prominent 'market failures'. The following are given as examples: Historically, domestic closed end funds have traded at a discount to the underlying shares, while international closed end funds have traded at a premium. These results can be explained by the greater liquidity of the domestic shares vis-à-vis the funds, and the less liquid foreign stock markets compared to the US fund share market [Amihud et al 2005 — 3.4.5]. Similarly, in most countries where companies have two classes of shares for nationals and foreigners, the nationally owned shares trade at a lower price than foreign owned shares. In China the reverse is true. This appears to be a consequence of the high level of liquidity in the Chinese domestic stock market [Chen & Swan 2008], while in most countries the domestic market is less liquid than international markets. Similar arguments can be used to explain the discounts on restricted stocks [Amihud et al 2005], as well as the differences between prices of treasury notes and treasury bills [Amihud et al 2005 — 3.3.1] and also of treasury notes versus corporate bonds, where the price difference cannot be accounted for by default risk alone [Amihud et al 2005 — 3.3.2]. Chordia et al have demonstrated that liquidity problems can explain the post-earnings drift that follows unexpectedly high or low earnings announcements [Chordia et al 2009], while Korajczyk and Sadka show that liquidity can explain up to half the benefits of the momentum strategy anomalies documented by Jegadeesh & Titman [Korajczyk & Sadka 2006]. To date I haven't seen a paper discussing the anomalies of dual listed companies such as Royal Dutch Shell; however, I confidently expect liquidity to explain the long-term divergence of such share prices.
While all the above are interesting, probably the most important result of recent research into liquidity is that liquidity, or more correctly liquidity risk, appears to be a major component of asset pricing. Amihud et al give a full review of these results, which demonstrate that a liquidity-augmented Capital Asset Pricing Model (CAPM) gives much better results than a traditional CAPM [Amihud et al 2005 — 3.2.3]. Other work supporting this view has been carried out by Acharya & Pedersen and Pastor & Stambaugh using single measures of liquidity [Acharya & Pedersen 2005, Pastor & Stambaugh 2003], by Goyenko et al [Goyenko et al 2009], by Korajczyk and Sadka, by Liu [Liu 2006 & 2009] and by Lee [Lee 2005]. Given the poor historical performance of the CAPM, the Fama-French three factor model has often been used as an alternative. This uses firm size and book-to-market ratio in addition to a market index. The book-to-market ratio is fundamentally equivalent to the ratio of K to W in the modelling of part A; where K is the real capital, the book value, and W is the market capitalisation. Results from the research above strongly suggest that a single liquidity measure can replace both firm size and book-to-market ratio and give improved results. This suggests that both firm size and book-to-market ratio may be surrogate measures for liquidity risk. As discussed in section 2.1 above regarding the companies model, Fama and French's own work indicated that, as well as the factors of risk, firm size and book-to-market ratio, a fourth momentum factor needs to be included to fully explain share price movements. If the research in liquidity stands up to further investigation, it suggests that share price movements can be explained by just risk, liquidity and momentum.
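A liquidity-augmented CAPM of the kind reviewed above can be sketched as a simple two-factor regression: the stock's excess return is regressed on the market excess return plus a liquidity factor. The example below uses entirely synthetic data with assumed betas of 1.2 (market) and 0.8 (liquidity); the series, betas and noise levels are invented for illustration and are not drawn from any of the cited studies.

```python
# Hedged sketch of a liquidity-augmented CAPM as a two-factor OLS regression.
# All series are simulated; the 'true' betas of 1.2 and 0.8 are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 500
market_excess = rng.normal(0.0005, 0.01, n)   # simulated market factor
liquidity_factor = rng.normal(0.0, 0.005, n)  # simulated liquidity factor

# A stock exposed to both factors, plus idiosyncratic noise.
stock_excess = (1.2 * market_excess + 0.8 * liquidity_factor
                + rng.normal(0.0, 0.002, n))

# Ordinary least squares for [alpha, beta_market, beta_liquidity].
X = np.column_stack([np.ones(n), market_excess, liquidity_factor])
coefs, *_ = np.linalg.lstsq(X, stock_excess, rcond=None)
print(coefs)  # estimates close to [0, 1.2, 0.8]
```

The point of the exercise is only that, once a liquidity factor series exists, adding it to the CAPM is mechanically trivial; the hard problem, as the text argues, is measuring liquidity in the first place.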
Further to that, and in line with the workings of the macroeconomic models of section 4 above, the work of Korajczyk and Sadka [Korajczyk & Sadka 2005] suggests that provision of liquidity also reinforces momentum strategies. This suggests that short-term momentum pricing is not 'behavioural', or even plain stupidity, but is 'rational' behaviour for participants, until the market finally reaches a position far out of equilibrium and endogenous liquidity creation is stopped. Taken together, it appears that a new 'three factor' asset pricing model involving the market beta, liquidity risk and momentum may be superior to both the CAPM and the Fama-French 'three factor' model. This then becomes much more significant at the level of the whole stock market, especially in the light of the extensive work by Shiller and Smithers regarding the long-term valuation of stock markets. This work is very well summed up in 'Wall Street Revalued' [Smithers 2009]. The central thesis of this work is straightforward. Shiller and Smithers find that stock market prices do not follow random walks, but are in fact mean reverting over decadal timescales. Two measures in particular are able to capture the over or under valuation of the stock market: CAPE and Tobin's q. Tobin's q is of course just the inverse of the book-to-market ratio, the same value used at company level in the research of Fama & French and various other researchers in liquidity; q is the ratio of W to K. At a whole stock market level, both company risk factors and company size are averaged out, leaving only the book-to-market value as a meaningful indicator.
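The two valuation measures just mentioned are simple to state in code. This is a toy sketch: the figures are invented, and a real CAPE calculation would use ten years of inflation-adjusted earnings rather than the short illustrative list here.

```python
# Toy calculations of the two valuation measures discussed above.
# All figures are invented for illustration.

def tobins_q(market_capitalisation, book_value):
    """Tobin's q: market value W over book (replacement) value K."""
    return market_capitalisation / book_value

def cape(current_price, real_earnings_history):
    """Cyclically adjusted P/E: price over long-run average real earnings.
    In practice the history would be ten years of inflation-adjusted earnings."""
    avg_earnings = sum(real_earnings_history) / len(real_earnings_history)
    return current_price / avg_earnings

print(tobins_q(150.0, 100.0))              # q > 1: market value above book value
print(cape(100.0, [4.0, 5.0, 6.0, 5.0]))   # price over average earnings
```

Both measures compare the market's price to a slowly moving fundamental (book value, or smoothed earnings), which is why they track each other closely at the whole-market level.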
It appears that by measuring the value of Tobin's q, researchers such as Shiller and Smithers have simply been measuring the liquidity of the whole stock market, with Tobin's q acting as a close proxy measure for liquidity. The 'CAPE', on the other hand, is the 'cyclically adjusted price to earnings ratio', which is simply the price to earnings ratio adjusted over a long time period, normally ten years. The CAPE also provides a very good measure of over/under valuation, and consequently correlates very closely with Tobin's q. Working backwards, the logical conclusion is that the over- or under-valuation of the stock market, defined by long term earnings and prices, is simply a measure of the overall liquidity in the stock market, and that deviations away from the long term average are almost wholly due to liquidity. The anecdotal evidence that equity prices are linked to liquidity is certainly plausible. The dramatic fall in share prices during the 2008 Credit Crunch, and the subsequent rebound following the introduction of quantitative easing and other fiscal loosening, are strongly suggestive of a direct link between liquidity in the economy as a whole and equity prices. To date there appears to have been relatively little research in this area, which is unfortunate considering its potential importance. Pepper & Oliver [Pepper & Oliver 2006] have produced an extensive study of this issue. Their work is very persuasive, and an excellent discussion of how liquidity works in practice, but the attempts to link share price levels to monetary data, while compelling, are not conclusive. This reflects the problems of finding trustworthy monetary data, a problem that the new approach using liquidity measures may alleviate. More recently, Chordia et al, Jones, and Pastor & Stambaugh have used different measures of market liquidity and have all noted correlations of liquidity to market movements, particularly sharp declines in liquidity associated with declining markets.
[Chordia et al 2001a, Jones 2002, Pastor & Stambaugh 2003]. Liu has carried out a longer and more detailed analysis and concludes that there is evidence for mild changes in liquidity corresponding to market movements, and that this is consistent with the argument that liquidity is a state variable important for asset pricing [Liu 2006, 2009]. Chordia et al have carried out an empirical analysis of the relations between liquidity in the stock, bond and money markets, and suggest important links between liquidity, volatility, and monetary policy [Chordia et al 2005]. Important work in this area has also been carried out by econophysicists such as Farmer, Bouchaud and Wyart; this is discussed further in section 9.1 below. While it is early days, it appears that not only is liquidity of fundamental importance in the pricing of stocks and other financial assets, it may in fact be a fundamental state variable of the stock market, and one that is straightforward to measure on a timely basis. If this is true, then there are some big implications for both finance and economics. Historically, attempts to measure liquidity at a national level have focused on measurements of money supply. Most notably, in the UK in the early 1980s monetary policy was used in an attempt to control the economy. The policy was quickly discredited, primarily due to the difficulties of collecting timely and accurate monetary data, and also due to the ease with which the sources of such data could be manipulated by financial institutions; see Pepper & Oliver for more details.
In marked contrast, some of the liquidity measures used in more recent liquidity research, for example those of [Chordia et al, 2002 & 2005], are easily calculated on a daily basis from stock market information. It would be trivially easy for indices and sub-indices of liquidity to be set up that could be observed and used by both the financial markets and economic actors. The research on liquidity suggests two implications for finance that are both quite profound: the first relates to the pricing models based on Black-Scholes, the second to the pricing of shares under the CAPM. Almost all modern option pricing theory is based on the Black-Scholes model, or other closely related models. Black-Scholes has been one of the most important mathematical contributions to economics or finance, and certainly the only one to have come into widespread day-to-day use within the financial industry. However, one of the core assumptions of B-S is that options on shares, as well as the underlying shares themselves, can be bought and sold easily in highly liquid markets. The recent body of work studying liquidity of financial assets suggests that this assumption is profoundly flawed. It seems likely that prices of both options and underlying assets will be affected significantly by liquidity. It also seems likely that the effects might not be the same for the option and the underlying. Consequently this would suggest that B-S models would, as a minimum, need modifying to take into account the effects of liquidity. That liquidity should be a concern for quantitative finance in general seems obvious; Long Term Capital Management (LTCM) was brought to earth largely through trading in products that became illiquid overnight, and illiquidity was a major factor in the collapse in asset prices that took place during the credit crunch. Clearly the effect of liquidity on asset prices appears to be an area ripe for more quantitative analysis.
The possibility of a relationship between liquidity and volatility seems particularly interesting. Other than the work of Chordia et al [Chordia et al 2005] discussed above, there appears to be little published research in this area. If it is true that liquidity is an easily measurable state variable of shares, and also that there are mathematical relationships between liquidity and volatility (which seems plausible), then measurement of liquidity might be able to give good, timely measures of current volatility that could be used directly in Black-Scholes models, rather than the current practice of inferring it from historical volatility. A second significant area of interest for the application of liquidity in finance is asset pricing models. The research to date suggests that liquidity can replace both the size and book-to-market elements in the Fama-French three factor model, leaving only risk and liquidity, along with momentum, as the determinants of equity prices. Or to put it another way, liquidity risk appears to be the main missing risk element of the various CAPM models. This knowledge gives the intriguing possibility that it should be possible to fully hedge an asset portfolio, and, more questionably, that this might even lead to self-stabilising markets in asset prices. As discussed above, some of the liquidity measures are easily calculated on a daily basis from stock market information. It would be trivially easy to set up a standard 'liquidity index', similar to the VIX index for volatility, and encourage trading of futures in the index and so allow a deep market to form in this liquidity index.
Investors would then be able to go long on shares, or stock market indices, and simultaneously short the liquidity index to protect against a reduction or collapse in liquidity. If the recent work on liquidity is correct, this should give almost full protection on an asset portfolio of investments. Interestingly, this should act in a strongly counter-cyclical manner. Given the mean reversion properties of the market as per Shiller and Smithers, liquidity protection of this type should be cheap at historical liquidity lows, but increasingly expensive as liquidity bubbles formed; if, of course, it were correctly priced. If such hedging functioned correctly, the cost of protecting against excessive liquidity would itself prevent excessive overpricing of assets and would automatically withdraw liquidity from the market as prices became frothy. As well as having an overall liquidity index, there would also be scope for sub-indices tracking individual sectors. Indeed it may make sense to re-sort companies from traditional 'industry' sectors into groupings that share a common pattern of historical liquidity and volatility behaviour. Clearly correct pricing, and the formation of a sufficiently deep market to cover even a portion of the stocks traded, might be problematic. There are also clear possibilities of counter-party default dangers of the sort that afflicted AIG following their substantial underpricing of CDS risk. If liquidity risk is the main missing factor in the CAPM model, and it also proves possible to enumerate and hedge against this risk, then by analysing the resultant data it may also be possible to analyse and quantify the remaining residual risks in the pricing of assets. In an ideal world, under these circumstances, it seems possible that momentum trading would become difficult and short-term speculation might be a difficult and profitless activity.
This could lead to financial investment becoming a predictable and rather dull area of both business and economics. Common sense, and the weight of history, suggests that this is more likely to remain a possibility than become a probability. However, if deep and efficient markets in liquidity futures did form, then speculative interest would allow liquidity index pricing to change in response to external factors such as government policy, oil shocks and other exogenous events. This leads to the possibility that liquidity measures could also be very useful for macroeconomic control. Having liquidity indices of this form could assist governments in targeting liquidity in stock markets, and in the economy in general. This might answer the problem of the poor quality and timeliness of traditional monetary data. Casual observation suggests that there is poor short-term correlation between the supply of liquidity to financial markets and the health of the economy as a whole. In the United States, for example, in 2005 and 2006 the stock market was booming, with very high liquidity, even though the economy as a whole was struggling (as expected in the Bowley squared predator-prey model of section 4.9). In such circumstances, central bankers face acute problems. With the single tool of interest rates, governments are in a cleft stick. This was admitted recently by Kate Barker, an ex-member of the UK monetary policy committee [Guardian 2010]. In 2005 the UK appeared to be in both a housing bubble and a stock market bubble, but the general economy was sluggish, and inflation was historically very low, with the threat of deflation in the wings. Raising interest rates to calm down the housing and financial markets risked initiating a recession, possibly moving into outright deflation.
However, failing to raise interest rates caused an ongoing bubble to continue its expansion, which had very unfortunate consequences including the collapse of Northern Rock, and the bailing out of Bradford & Bingley, Royal Bank of Scotland and other institutions.
As discussed above, in the past attempts have been made to control the economy through the control of the money supply, or as Cooper correctly describes it, by controlling the supply of debt. Historically these attempts have not worked well, partly because the money supply is difficult to quantify and measure reliably. I believe a second problem is that there are two sources of liquidity. The money supply and debt is one of them, but the endogenous creation of liquidity within the pricing system is another, and in my view this is the prime and larger source. So, certainly, increasing the money supply and debt can increase liquidity in the stock market. But increases in stock valuations also create their own liquidity, and also provide apparent extra wealth against which new debt can be secured. These two sources of extra liquidity feed on each other in a most unhealthy way. I believe targeting a liquidity measure in stock markets may be more effective than monetary targeting, as a liquidity measure is measuring the output, the residual, of the liquidity creation process. A certain amount of debt and new money supply is needed in an economy. If insufficient is supplied, the stockmarket declines; if too much is provided, the stockmarket booms. The stockmarket is normally a good weather vane for liquidity in the economy as a whole. An important caveat here is the role of housing, which as discussed above in section 6.3 is more important than even the stockmarket as a driver of booms and busts. Controlling liquidity and money supply for an economy will only be effective if the housing market is stabilised. Absent an effective measure of liquidity in the housing market, other damping measures and long term indicators need to be used, such as historical ratios of house prices to wages and ratios of mortgage payments to rents.
The macroeconomic models in section 4 above suggest that liquidity can be formed endogenously, in exactly the way proposed by Minsky. This suggests that, just as central banks are expected to control changes in the money supply caused by fractional reserve banking, it seems appropriate that they also be obliged to control money supply growth caused by Minskian asset price bubbles. The recent research in liquidity, and the models in this paper, suggest that liquidity needs to be targeted separately, in addition to the inflation targeting of the overall economy. The ease and timeliness with which liquidity can be calculated, and compared to historical liquidity levels, suggests that this would be relatively straightforward to do. For instance, it might be possible to use active management of the bond market, as has recently been done under 'quantitative easing', on a regular basis to increase and decrease the liquidity of financial markets generally. So in 2005-06 it might have been sensible to actively embark on 'quantitative tightening' to restrain the financial markets, while simultaneously lowering interest rates to assist the larger non-financial economy. This takes us back to the building-atrium air-conditioning model discussed in the 'Bowley squared' model in section 4.9 above. The figure is shown again below:
Figure 4.9.1 here
On these lines there was an interesting recent proposal by Martin Weale of the National Institute for Economic and Social Research [Telegraph 2010a] to introduce a specific tax on debt. If this tax were differentiated for housing borrowing, financial borrowing and non-financial borrowing (industry, services and other non-financial borrowing), in theory it might be possible to kill bubbles in the housing or stock markets while maintaining economic growth. Whether this would be practical remains to be seen; it is likely, for example, that there would be significant problems preventing companies from gaming such a system.
8.2.2 On the Price of Shares
In his excellent book 'The Origins of Financial Crises' George Cooper [Cooper 2008] points out that one of the clever sleights of hand of neoclassical theory is to demonstrate how supply and demand works well for simple commodities, and then blithely assume that this pricing system is equally valid for financial 'commodities' such as share prices. As Cooper points out, this is clearly wrong, as the whole point of financial assets is that they have genuine scarcity or 'artificial scarcity' value and, very specifically, supply cannot be ramped up to meet demand. People invest in gold because most of the world's existing gold deposits have been found and are owned. Similarly people invest in shares under the expectation that companies will not arbitrarily keep issuing more shares to other investors. I would like to discuss this idea in practical terms, look at why share prices are so different to commodity prices, and discuss one possible way of looking at what is the source of company value and what is the 'price' of a stock. The prices of most goods and services, especially those that do not depend on scarce mineral inputs, are characterised by long-term stable prices. Though, as has been shown in the commodities model, even the prices of some basic commodities can vary widely through the results of delays in installing capital.
The valuation of a company can take these 'simple' commodities as a starting point. Most companies take in one sort of commodity from their suppliers and produce a more sophisticated commodity, which they then sell on to their customers. Speaking as a humble engineer, the 'value' of a company is how many useful things it produces every month. However even an engineer is forced to admit that, for the owners, the meaningful value is the difference between the output of the manufactured items and the various inputs; raw materials, labour, rent, etc. The market capitalisation is based on the profit stream, as discussed in section 2. Assume that the outputs are more or less homogeneous, with a single price level in the market. Assume also that there is one main input responsible for the majority of costs, normally one of the following: a single raw material input, energy, rent, capital or labour. If the difference between inputs and outputs is 10%, as the preceding models have assumed, then a 5% change in the cost of the main input, or in the price of the main output, will result in the company's value changing by roughly 50%.
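The leverage arithmetic above can be checked directly. A minimal sketch, using the same assumed figures of revenue 100 against input costs of 90:

```python
# Worked check of the operating-leverage arithmetic above: with a 10% margin
# between outputs and inputs, a 5% move in either price roughly halves or
# half-again the profit stream, and hence the company's value.

revenue, costs = 100.0, 90.0
profit = revenue - costs                     # 10.0

# A 5% rise in the main input cost:
profit_after_cost_rise = revenue - costs * 1.05
change_from_costs = (profit_after_cost_rise - profit) / profit
print(change_from_costs)  # -0.45: profit falls by roughly half

# A 5% rise in the output price:
profit_after_price_rise = revenue * 1.05 - costs
change_from_price = (profit_after_price_rise - profit) / profit
print(change_from_price)  # +0.5: profit rises by half
```

The thinner the margin, the larger this multiplier: the same 5% price moves against a 5% margin would roughly double or wipe out the profit stream entirely.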
A moment's thought shows that the value of the company becomes a derivative price based on (at least) two underlying prices. And this derivative has very substantial leverage on the underlying prices of the commodity inputs and outputs. This is further complicated by market interest rates. Let us assume, firstly, that the input and output prices are absolutely stable, and that therefore the business has an absolutely stable dividend stream. Then, to price the company's shares, this stream must be compared to the risk-free market rate. So the price of the company's shares will go up as the risk-free rate goes down, and down as the risk-free rate goes up. As such the company's share price will vary in the manner of a bond, or more accurately, a perpetuity. So the market 'value' of the company is in fact the price of an artificial perpetuity based on a derivative of two or more underlying commodity prices. On top of this variability need to be added the effects of things such as liquidity and momentum, as discussed in the previous sections. Looking at stocks and shares in this way, even such a simplistic model shows that the 'price' of a company is related to the prices of normal commodities in a very complex way. This gives an insight into why share prices are so volatile. It certainly makes it clear that supply and demand cannot operate in a normal manner on company share prices; the price is not simply set externally by the utilities and preferences of buyers and sellers of shares.
9. Supply and Demand
9.1 Pricing
An interesting puzzle in the history of economic thought is why the mathematization of economic theory in the 1940s and 50s took place through the formalization of the static Walrasian model, rather than through the study of infinite horizon production based models that arise from the Classical view.
This puzzle is particularly intriguing because the best mathematician who ever worked on economic problems, John von Neumann, introduced the key mathematical tools in a study of such a Classical, infinite-horizon, production-based model before Arrow and Debreu used the same tools (mostly topological) to formalize Walras. [Foley 1990] The idea that production rather than exchange is the source of value is contentious within mainstream economics, though why this is so is puzzling. Both the theoretical history and empirical data support this central view of production. The theoretical debate goes back at least to the work of Sraffa and the Cambridge Capital Controversies. The conclusions of the debate have been discreetly forgotten; Sraffa's work demonstrated that the production function approach of marginalism was not appropriate, and that pricing of produced commodities through the long period classical approach was the appropriate way forward.
Sraffa and others demonstrated that pricing of capital cannot be carried out using marginality. The original work of von Neumann was also classically based, and also showed that a coherent system of prices can be built using the approaches of classical economics. The work of Sraffa and von Neumann has since been systematically synthesized by Kurz & Salvadori [Kurz & Salvadori 1995] to give a modern classical framework. Meanwhile Arrow, Debreu and others took von Neumann's insights and battered them into a neo-classical framework, back into the realms of field theory. With regard to the clash between classical and neo-classical approaches, the work of Burgstaller [Burgstaller 1994] is particularly intriguing, in that he proposes that both the neo-classical and classical approaches can be presented as subsets of a unified approach. In particular he shows that the neo-classical approach is appropriate when no labour is involved, as for example in a pure exchange process, while the introduction of labour results in the necessity of a classical approach. (It is the opinion of the author that the neo-classical approach is only appropriate when no value is added, whether by labour or machines. This remains, however, only an opinion.) Burgstaller's work suggests that the neo-classical approach is only suitable for processes such as the purchase of raw materials, or, interestingly, the exchange of financial products. In this light, marginalism would at first appear to be very useful in defining the mechanics of the purchase and sale of financial assets. With financial assets, owners have strong preferences for ownership, based on different preferences for risk, liquidity, etc. At a particular point in time, they will also have set initial endowments. Following an exogenous event, such as an unexpected change in dividends, interest rates, etc, market participants will presumably want to rebalance their endowments to bring them into line with their preferences.
Unfortunately, the financial field of market microstructure, with its wealth of data, has long moved on from the simple cartoons of static supply and demand curves. Research in market microstructure has shown that the determinants of prices are stocks (inventories, not shares), information, liquidity, etc, while marginality has been quietly sidelined. This is primarily caused by the problems of matching supply and demand over a time basis. When time is taken into account, marginality is replaced with a focus on inventories of financial assets owned, and the information encoded in order flow. Or, as has been pointed out previously, comparative statics cannot model effectively in a dynamic environment. These conclusions on sources of costs are based on substantial quantitative research, supported by some very interesting theoretical work. This work is well reviewed in papers by Stoll, Madhavan and Biais et al; Stoll is a particularly good introduction [Stoll 2003, Madhavan 2000, Biais et al 2005]. Lyons discusses this with great clarity in 'The Microstructure Approach to Exchange Rates' [Lyons 2001]. In sections 6.3 to 6.5 Lyons captures the difference between the 'Tastes & Technology' approach of traditional economics and the 'Information & Institutions' approach of market microstructure. What Lyons is too polite to point out is that the utility approach of 'tastes & technology' rests on hypothetical foundations invented in the late 19th century, while the 'information & institutions' foundations are based on theoretical models proposed to fit large scale data sets through the end of the 20th century and the start of the 21st. As Lyons notes: "The microstructure approach also includes utility maximization, but as we saw in chapter 4, utility is specified very simply, typically in terms of terminal nominal wealth."
Market microstructure has analysed two main forms of markets: those composed of continuous double auctions and those made by market makers. The second is of particular interest. Market makers buy and sell shares or other financial assets in financial markets. Financial markets involve buying and selling things in a dynamic time frame. There is no guarantee that somebody will want to buy something at exactly the same time that someone else will want to sell something. Market makers keep markets working by 'providing liquidity' and ensuring that there is always somebody who is willing to buy and sell shares at any particular time. Market makers make markets by acting as intermediaries and do not normally hold on to shares on a long-term basis. They make their living by maintaining a small margin between the prices at which they buy and sell; this margin is known as the 'bid-ask spread' or simply the spread. Market makers are normally obliged by market rules to post prices at all times, and are obliged to fulfil purchases and sales at their advertised prices. They normally have to do this while in competition with other market makers. The speed of trading means that markets never formally 'clear', and market makers are often working 'blind', with little information other than the recent trading history of themselves and their competitors, and knowledge of the level of inventory of assets that they currently have on their books. Market microstructure empirical research, experiments and theory have left the models of supply and demand behind, primarily because there is no evidence to suggest that market makers use marginality in pricing, and significant evidence that other factors are used in their pricing strategies. Research suggests that the bid-ask spread is made up of five main components; these are discussed briefly below, for a more detailed review see Stoll, Madhavan or Lyons.
The first type of cost is administrative or 'handling' costs and other overheads. These reflect the costs of renting offices, paying wages, running systems, etc. For modern electronic share-dealing these costs are generally very small, though the arms race of high-frequency and algorithmic trading, which demands both expensive technology and highly numerate employees, may be pushing these costs back up. For non-standardised 'over-the-counter' (OTC) products, these costs can also be higher. Another cost may be caused by non-competitive practices, such as industry standards on tick sizes or standardised bid-ask spreads. A third source of cost is related to the cost of holding unwanted inventory. Market makers are like bookies at horse races. Bookies probably know the horses and jockeys far better than the punters, but they don't make their money by betting on the horses. They make their money by balancing the supply and demand of the various punters, and making sure they take a small margin in the middle. It is dangerous for them to take a lot of bets on one horse, even if they think the horse will probably lose, because if it does win they will be wiped out. If they do get a lot of bets on one horse, even if they think the horse is lame, they will increase the price of that horse (by decreasing its odds) and decrease the price of the other horses (by increasing the odds) until they bring their positions back into line and ensure that they will make a small profit whichever horse wins. In the same way, market makers also generally know their markets much better than their customers. But they do not normally wish to hold large positions in a single stock, because if the price of that stock should collapse unexpectedly then they could go bankrupt overnight. Because of this, managing and hedging inventory can be a significant source of costs.
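The bookie's balancing act described above can be sketched numerically. In this toy example (all figures invented), payout odds are set so that the total payout is identical whichever horse wins, locking in a 5% margin of the betting pool; the same logic is what drives the odds of a heavily backed favourite to shorten.

```python
# Toy sketch of a balanced book: whatever stakes arrive, quote decimal payout
# odds so the payout is the same for any winner, keeping a fixed margin.
# All stakes and the 5% margin are invented for illustration.

def balanced_odds(stakes, margin=0.05):
    """Decimal payout odds per outcome that lock in `margin` of total stakes."""
    pool = sum(stakes)
    payout = pool * (1.0 - margin)
    return [payout / s for s in stakes]

stakes = [600.0, 300.0, 100.0]   # heavy money on the first horse
odds = balanced_odds(stakes)
print(odds)  # the heavily backed horse gets the shortest odds

# Whichever horse wins, the bookie pays out the same and keeps the margin:
for stake, o in zip(stakes, odds):
    assert abs(stake * o - sum(stakes) * 0.95) < 1e-9
```

A market maker skewing quotes against a growing inventory position is doing the same thing in continuous time: moving prices to attract offsetting flow and keep the spread as profit.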
This leads on naturally to the fourth source of cost, the cost of 'adverse information'. However well the market-maker knows his markets, he will never know them as well as 'informed traders', that is, people who are closely linked to, or even working for, the company whose shares are being traded, and so will have knowledge of good or bad news about the company before the market-maker. These 'informed traders' are able to make money out of the market-maker, and for the market-makers to stay in business they must collectively recoup this money from the 'uninformed traders'; they do this by having an appropriate extra margin in their bid-ask spreads. A final source of costs is what is known as the 'free option' cost. In a well-administered market, providers of liquidity are forced to hold their quotes open for a fixed minimum period. Priority rules then ensure that orders are closed out in a fair manner, normally based firstly on price priority, then on time priority where prices are equal. These rules force market-makers to compete with each other and so protect the ordinary share-trading public. One problem with this is that it forces the market-maker to hold his price for a fixed time period; in this time the market price may move, giving an advantage to a well-informed customer who can make money out of this 'free option'. To protect themselves, market makers add a small extra margin into the bid-ask spread. As well as the work of pioneers in finance and economics covered in these papers, this area has also recently been extensively researched by others from the field of econophysics such as Farmer et al, Wyart et al and Bouchaud et al [Farmer et al 2005, Wyart et al 2008, Bouchaud et al 2009]. The convergence of the work of economists and physicists in this area is interesting in its own right, and also, more remarkably, demonstrates that physicists and economists are in fact capable of both reading and citing each other's recent research.
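As a purely illustrative summary, the five spread components just described can be thought of as additive. The figures below are invented for the sketch, not empirical estimates:

```python
# Hypothetical decomposition of a quoted bid-ask spread (per share, in
# arbitrary currency units); the five categories follow the discussion above.
spread_components = {
    "order handling and overheads": 0.010,
    "non-competitive practices":    0.002,
    "inventory holding":            0.008,
    "adverse information":          0.015,
    "free option":                  0.005,
}
total_spread = sum(spread_components.values())  # total quoted spread: 0.040
```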
Taken together, the fields of market microstructure and econophysics seem close to providing full models of financial market functions that combine good theoretical underpinnings with good fits to actual data. There also appear to be strong areas of similarity between the research carried out in market microstructure and that of post-Keynesian pricing theory. To the best of my knowledge these parallels do not appear to have been investigated. Post-Keynesian pricing theory is primarily empirical, and its empirical basis is of a depth and surety rarely found in economics. In 'Post-Keynesian Price Theory' [Lee 1999] Frederic Lee gives an excellent review of how far the marginal approach to the pricing of manufactures is disconnected from reality. Despite the book's title, 80% of the book provides an excellent review of extensive historical research showing how businesses actually carry out pricing policy. The results of the research show that, in the real world of business, marginality is non-existent. In particular, most businesses have their maximum profitability at maximum output. Diminishing returns simply don't appear in real-world manufacturing; this has been clear for decades, see for example [Eiteman & Guthrie 1952]. In almost all production processes costs decrease with production right up to maximum output, and extra capacity, in the form of new factories, can be added easily and speedily. Under these conditions of increasing returns to scale, marginality is irrelevant, as it simply cannot work. In the real world almost all companies carry out their pricing using some variation of an average-cost and 'mark-up' basis, with standard additional costs being added to the prices of the inputs.
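A minimal sketch of the average-cost-plus-mark-up rule described above, with invented numbers, might look like this:

```python
def markup_price(unit_materials, unit_labour, overheads, planned_output,
                 markup=0.20):
    """Full-cost ('mark-up') pricing: average cost at planned output,
    plus a conventional profit margin. All parameters are illustrative."""
    average_cost = unit_materials + unit_labour + overheads / planned_output
    return average_cost * (1 + markup)

# A firm planning 20,000 units, with 60,000 of fixed overheads:
price = markup_price(unit_materials=4.0, unit_labour=3.0,
                     overheads=60000.0, planned_output=20000)
# average cost = 4 + 3 + 3 = 10 per unit; price = 12.0
```

Note that nothing in this rule refers to marginal cost or to a momentary balance of supply and demand; the price changes only when input costs, planned output or the conventional mark-up change.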
It is important to note that, as with market makers, manufacturers and retailers also price their goods in advance of sale when supplying to the public. They also often do so on long-term defined contracts when supplying other companies. It is also notable from the post-Keynesian research that manufacturers and retailers focus strongly on inventory levels and the prices of their competitors in their decisions on prices and production quantities. An interesting piece by Langlois [Langlois 1989] looks at pricing in the automobile industry. Particularly interesting in this research is the prime role manufacturers give to the monitoring of inventories in pricing goods and controlling output. All this is immediately obvious to anybody who has actually worked in a factory environment, including of course pin factories. The existence of mark-up pricing and controls based on inventory levels, along with the absence of diminishing returns, is strongly supportive of the classical economists' point of view. One of the main conclusions of all the research into the real world of business is the absolute irrelevance of marginal pricing outside the areas in which it was originally used by the classical economists, areas such as land or mineral extraction. The parallels between post-Keynesian pricing theory and market microstructure theory are clear. Companies are obliged to behave as market makers. Complex market makers, but market makers nonetheless. For a company, the 'mark-up' is directly analogous to the 'bid-ask spread' of the financial market-maker, though the weightings in the spread are a little different. An easy example to follow is that of a retailer. A shop buys goods from manufacturers and sells the same goods on to the general public. So in this case the main inputs and outputs are identical, in the same manner as for a financial market maker.
While overheads for a financial market-maker are very small, for the retailer they are much larger, and need to pay for the remaining inputs of staff wages, distribution costs, rental of shop space, services, advertising, etc. They also need to include payment of profit on capital and interest on debt. But just as in stock markets, prices never formally 'clear', and pricing is based on information from competitors, rates of sales turnover, and levels of inventories of goods held. Purchases of new goods are strongly influenced by inventories of goods within the supply chain. Prices are raised when turnover is high, at Christmas for example, and are dropped in the January sales to get rid of excess inventory. Manufacturers, or providers of services, follow exactly the same logic, but now the stocks bought and the stocks sold are of different goods, and the 'bid-ask spread' is even larger, as it now includes the costs of the value-adding processes used in production. It appears that the substantial body of post-Keynesian empirical work could benefit strongly from analytical ideas drawn from market-microstructure and econophysics research. Indeed the processes of market-making, and market microstructure approaches in general, appear to be ubiquitous and universally applicable in their role of price formation in economics as well as finance. Perry Mehrling provides a very thoughtful analysis of the US banking system using market microstructure approaches, while Lyons does the same for currency trading [Mehrling 2010, Lyons 2001].
The processes described by market microstructure concentrate on order flow and spread. They arise from markets in which prices are dynamic and never formally settled; prices are ultimately linked to long-term values, but public information on those values is usually incomplete. In this price discovery process information is revealed, and long-term prices are defined at different levels. Long-term prices will ultimately link to fundamental values but, as has been shown above, 'correct prices' will also vary with the point that has been reached in different cycles, with levels of liquidity and debt in the economy, with levels of government activity in the markets, with relative levels of trade and capital flows between different countries, and with levels of inventories of financial assets in the portfolios of different investors. As described in section 8.2.2, in such complex systems the link to 'fundamental' values is weak and time-dependent. Market microstructure describes the mechanisms that allow buyers and sellers to discover these 'correct' values. It is not the balance of buyers and sellers that defines these values. As ever, Foley hits the nail on the head: I believe that the informational view of prices brings modern economics closer to the Classical economists than to Walras. The Classical economists argued that costs of production are the fundamental determinant of prices. Costs of production are the relevant transversality condition for durable and reproducible commodities. Thus forward-looking speculators will price current commodities on the basis of their estimates of long run costs. The new information that disturbs asset prices is, in this way of thinking, primarily information about long-run costs of production. [Foley 1990] ...it makes more sense to interpret the commodity bundles of agents as stocks, such as stocks of consumer durables (the food in the refrigerator, for example).
The availability of well-organized markets permits agents to keep close to their desired stocks at equilibrium prices at all times. Since agents are human beings who get hungry, wear out clothes, and in general deplete stocks, it is necessary for them to make transactions more or less continuously to keep close to their desired stocks (selling their labor-power, paying their rent, buying food, and so forth). These transactions, which generate national income, are not in this way of thinking the result of irreversible movements from far-from-Pareto endowments to a Pareto allocation, but the result of agents' constant effort to maintain their desired stocks given equilibrium prices. Something like Hicks' Sunday night, in which the economy and its agents are suddenly moved to a point far from the Pareto set, occurs only rarely as the result of external shocks to the system. If we regard actual data on economic transactions as arising in this way, conventional specifications of demand functions in which flow transactions are a function of market prices and incomes are inappropriate. The prices at which transactions in a close-to-Pareto allocation economy take place are in fact equilibrium prices, which we can thus observe directly. The quantities transacted, however, depend on the dynamics of consumption and depreciation of stocks, which require specific modeling. The assumption that agents generally remain close to desired stocks, and that the economy can as a result be analyzed with the concept of reversible transformations, is a strong abstraction. For example, an agent who loses her job typically feels that she has been forcibly (irreversibly) moved to a lower utility level. Real economies experience shocks (wars, revolutions, depressions, and technological innovations, for example) that intuitively seem to be best understood as irreversible transformations.
The gradual processes of economic growth and development move agents to higher indifference surfaces,
but on a time scale much longer than that of the establishment of market prices. We would like to emphasize the notion that the method of reversible transformations is best adapted to analyzing ongoing economies operating more or less normally. [Smith & Foley 2008] And, of course, moving to a focus on stocks means moving to a world of dynamic equilibrium, of Lotka-Volterra models, predators and prey, and maximum entropy production. The work of Sraffa, von Neumann, Kurz & Salvadori, Burgstaller, etc. gives a very good starting point for the calculation of long-term prices in such a world. Unfortunately the approach used by these authors remains one based on static processes and single-period analyses. Recasting this work into a dynamic approach should be straightforward. A sensible way forward would seem to be to use the market microstructure, market-maker / post-Keynesian approach to attack the single-commodity, multiple-commodity, joint-production, etc, problems. If a simulation approach were used, rather than an algebraic one, this might also reduce the ratio of headaches to results. Unfortunately the non-existence of diminishing returns, and the work of Sraffa et al, leave a problem as to what exactly does form the limit on the volume of goods produced. An obvious limit is scarcity, the restraints on growth provided by a limited planet. I would like next to explore in detail just how much scarcity there actually is in the world at the start of the 21st century.

9.2 An Aside on Continuous Double Auctions

In previous sections I have been scathing about the fashion for high-frequency trading. In an act of some foolishness I would like to look at this in more detail. I do this with some trepidation, moving into an area where debate is vociferous and my knowledge is limited.
However, despite my inexperience, from my naive viewpoint the structure of financial markets often seems perverse, and appears to be incentivised against easy price discovery and the simple execution of large trades. This discussion is also in the wrong section, and logically fits with liquidity or the control of dynamic systems; however, for reasons of intelligibility it was necessary to leave it until after the discussion of the role of market microstructure. Finally, the debate in this section is somewhat technical, and not core to the paper. It is simply an example of how using a control-systems mindset might allow more efficient markets to be constructed. I suggest that those who are not interested in these issues skip this section. For those who are interested, but are unfamiliar with market microstructure, reading the excellent paper by Stoll [Stoll 2003] should give sufficient background to follow this section. As discussed previously, stock trading is now dominated by 'high-frequency trading'. On the major western stock markets the majority of trading is done by high-frequency algorithmic trading. In these stock markets supercomputers execute billions of dollars of trades in seconds using automated algorithms. Individual bids and offers may be held open for fractions of a second. This is done in the sacred name of 'liquidity', which is assumed to be always a good thing.
The current data suggest that high-frequency traders largely provide their liquidity to well-traded shares in preference to infrequently traded ones. They also prefer doing so at times of low volatility rather than high volatility. By definition this is the opposite of the requirements of effective liquidity supply, and the reverse of a couple of centuries of defining the role of liquidity suppliers. The quote from Keynes at the beginning of section 8.2.1 gives his views of the benefits of liquidity, and it appears reasonable to assume his opinion of high-frequency trading would not have been positive. More recently other experienced financiers have shared similar views [Noser 2010], and at least one commodity trade body has denounced 'parasitic' traders [FT 2011b]. That my concerns are more widely held is supported by the recent decision of Credit Suisse to start a 'light-pool' for institutional investors. This is deliberately aimed at large-volume traders, and 'opportunistic traders' will be specifically denied access to the system [FT 2011a]. My own fundamental problems with high-frequency trading are threefold. Firstly, it is trivially obvious that the value of companies does not change from microsecond to microsecond. In fact research suggests that publicly announced information has negligible effect on trading, see for example [Joulin 2008, Ranaldo 2008, Bouchaud et al 2009]. Information largely comes from large trades by institutional traders, and as Bouchaud et al make clear, the savagery of the market means that such large trades now need to be broken up into small trades and fed into the markets in piecemeal fashion, sometimes over periods as long as months, to prevent adverse price movements. This brings me to my second fundamental problem with high-frequency trading. In a dynamic, chaotic system, reducing the time constant of trades, allowing trades to be made faster and faster, increases the speed and volatility of short-term momentum processes.
To go back to the idea of a traditional market: if I were a customer trying to buy or sell oranges from or to a stall-holder, I would naturally prefer to see all the stall-holders displaying their prices while I get the opportunity to walk around and choose the best price. If each stall-holder just flashed a quote for one second and told me to take it or leave it, things would be much more difficult for me. Finally, and leading on from the above, there is very little evidence that high-frequency trading does in fact provide liquidity. The paper by Bouchaud et al is magisterial in its depth, and its main conclusions are that, although a lot of shares are traded, revealed market liquidity is very low. Like the orange sellers in my example, the short life of quotes makes it very difficult for buyers and sellers to move large volumes without changing the prices. In their role as liquidity providers, high-frequency traders have taken over the role of market-makers as traders who do not buy shares to hold in their own right, but simply buy and sell to others and make a profit on this trading. Unfortunately the traditional duty of market-makers to ensure an orderly market, and not to favour themselves over their clients, seems to have been lost in the cracks somewhere. As Noser points out, there are well-established rules for order book precedence in market-making, and there is no obvious reason why high-frequency traders should be exempted from these rules. As a minimum, high-frequency trading needs reforming, with a return to the rules traditionally imposed on market makers, including a minimum required time for a quote to be offered of, say,
five seconds, along with reinstatement of the normal price and time rules for filling orders. (Traditionally market order books are filled first by precedence of price, and then by time of arrival of the quote.) This would allow competition to revert to that of price and spread, rather than speed. The resultant recreation of meaningful bid-ask spreads, though possibly larger ones, would be much better at providing signalling of liquidity requirements, which is of course the whole point of market-making in the first place. The increase in price transparency should far outweigh the cost of the free options offered. Looking more broadly, speed of trading and narrowness of spread are not the only benefits required from a liquid market. As is seen in Bouchaud et al's paper, high speed does not guarantee the ability to trade a large volume. Similarly, a narrow spread does not mean good value if the upper and lower bands of the spread move against you rapidly as soon as you start trading. In fact, a good liquid market combines three dimensions: the ability to trade large volumes, at good prices, at high speeds. The way markets are structured allows high-frequency trading to prioritise the advantages of speed at the expense of price and volume. Supporters of high-speed trading cite reduced spreads as the main benefit of their technologies, with the implicit assumption that this has clearly reduced costs for all market participants. But the reduced spreads have been accompanied by increased volumes of trading. It is the belief of the author that the increased speed of trading, and the faster reaction of markets to order flow, mean that short-term momentum effects have been increased, so obliging all traders to rebalance their portfolios more frequently. It is trivially obvious that if spreads are halved, but traders are forced to trade three times more frequently, then overall trading costs have been increased by 50%.
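The arithmetic of that last claim is easily checked:

```python
# If the spread cost per trade is halved but traders must trade three
# times as often, total trading cost rises by 50%.
spread_cost = 1.0                        # cost per trade, arbitrary units
trades = 1.0                             # baseline trading frequency

cost_before = spread_cost * trades                    # 1.0
cost_after = (spread_cost / 2) * (trades * 3)         # 1.5
increase = (cost_after - cost_before) / cost_before   # 0.5, i.e. +50%
```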
If the majority of gains are going to the algorithmic traders, then costs to normal traders have been increased even further. And here 'normal traders' ultimately means the general public as savers, and genuine capitalists raising money to invest in productive capacity. One possible way to manage this is to change the trading rules so that they also reward providers of volume and longer quotes, and so good stable pricing. The big advantage of offering larger-volume quotes is clearly that more trading can be done faster, and at lower cost. The existence of over-the-counter 'upstairs' markets suggests that institutional investors often want to sell and buy large quantities at the same time; however, the ad-hoc nature of upstairs markets can make such exchanges slow and expensive. Indeed 'dark-pools' appear to be part of an ongoing process to formalise this upstairs market. Whether 'light-pools' form an extra step in this process remains to be seen. The big disadvantage of trading large volumes is that it gives a large information signal, and can cause large price movements if only one side of such a potential trade advertises its intention. Similarly, if more bidders provided longer quotes this would give more quotes available, more price transparency and greater competition. Unfortunately, as discussed above, a long-life quote gives a 'free option' to traders who can predict the direction in which the market is going to move. This therefore encourages short quotes, which, in a circular reinforcement, encourages rapid movements.
It is possible that Credit Suisse, or other organisers of 'light pools', may be able to increase the effectiveness and liquidity of their trading platforms if they used rules along the following lines for filling orders against the limit order book:

1. All quotes to be submitted with both a size and a 'valid-to' time as well as a price. The quote would stand at least to the valid-to time. The valid-to time could be extended, or be rolling from the present time, but the quote could not be cancelled before the valid-to time, and a rolling quote would only be convertible into a valid-to quote of the same length.
2. Impose a minimum valid-to time of a few seconds.
3. Fill orders firstly according to price.
4. Where offers have the same price, the offer with the furthest 'valid-to' time is selected first.
5. Where offers have the same price and 'valid-to' time, the offer with the largest volume is selected first.

All incoming orders would follow the same rules: any that crossed the existing order book would be settled immediately; any that did not cross would be obliged to remain on the book until at least the end of the minimum 'valid-to' time. This would be a 'no-time-wasters' market. It is possible that all quotes submitted would be for the minimum valid-to time, with small quotes competing on price only. However, it is the belief of the author that such a market would encourage competition first on length of quote and then on volume. The minimum time period would form an initial 'level playing field' and would discourage opportunistic bids. Given an existing price level, a new quote on the market that wanted to ensure a sale could simply quote a better price. Alternatively it could put in a quote at the same price but with a later 'valid-to' time. If the extension of time was relatively short, this second course would probably be cheaper than quoting a better price, especially if the market was stable.
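The price, valid-to and volume priorities amount to a simple lexicographic ordering of the order book. The sketch below is a hypothetical implementation with invented quotes: it ranks sell quotes by best price, then furthest 'valid-to' time, then largest volume, and fills a buy order against them:

```python
from dataclasses import dataclass

@dataclass
class Quote:
    price: float     # offer price per share
    valid_to: float  # seconds the quote must remain open
    volume: int      # shares offered

def fill_buy_order(book, wanted):
    """Fill a buy order using the proposed priority: best price first,
    then furthest valid-to time, then largest volume."""
    ranked = sorted(book, key=lambda q: (q.price, -q.valid_to, -q.volume))
    fills = []
    for q in ranked:
        if wanted == 0:
            break
        take = min(q.volume, wanted)
        fills.append((q.price, take))
        wanted -= take
    return fills

book = [Quote(10.02, 5, 500), Quote(10.01, 5, 200),
        Quote(10.01, 30, 300), Quote(10.01, 30, 1000)]
fills = fill_buy_order(book, wanted=1200)
# The 10.01 quotes fill first; among them the 30-second quotes beat the
# 5-second one, and the 1000-share quote beats the 300-share one.
```

Under this ordering a trader who cannot or will not better the price can still gain priority by quoting for longer, and failing that by quoting more volume, which is exactly the competitive pressure the proposed rules are meant to create.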
So at first the market should get a greater number of quotes extending further into the future. With more bids on each side of the limit book, dealers that had large positions to move would then be able to compete on volume. If they did this alone in the current high-frequency market it would be suicidal, but with more 'revealed liquidity' on each side of the book, the proportion of new information revealed would be smaller. This process should allow more visibility and stability in pricing, and so better price discovery. This could then feed back into more competition on quote duration and volume. Ultimately, if this system did work, it would have more quotes, more volume and more revealed liquidity than other markets, and ultimately smaller spreads. The whole point of the proposed system above is to make traders behave more like fruit-stall holders, or better, shop-keepers; to incentivise them to advertise their prices for longer periods and for greater amounts of goods, so allowing better competition. Counter-intuitively, under such a system much greater liquidity, and better overall price value, may also be achieved by limiting the intervals in prices at which shares can be traded and also by limiting the frequencies at which 'valid-to' times can finish, say every 2 seconds. Infinite granularity would be reserved for volume alone. This would be a reversal of recent history in the management of stock-exchanges. It would prevent price competition at very small fractional levels of price and time, and so encourage more competition on quote time length and volume.
A second area in which the current structure of markets seems sadly lacking is at the opening and closing of sessions. Currently this is commonly done by complex bidding procedures and crossing algorithms to dictate median prices. The suspicion that these procedures don't work, that the median prices are not in fact the market prices, is reinforced by the fact that the majority of trading in equity markets takes place in the first and last hour or so of the trading day. Figure 9.2.1 below gives the price (thick line) and volume (smaller grey shading towards the bottom) for shares in HSBC, a large UK bank. Although the scale is a bit small, it can be seen from the volume that the majority of the trading takes place at the start and end of the trading sessions. This is typical of share trading patterns.

[Figure 9.2.1 here]

The problem here is that as the market opens, liquidity goes from zero to near-infinite instantaneously. Conversely, at the close of the market, liquidity goes from infinite to zero instantaneously. It is well known that increasing liquidity decreases spreads, so conversely, deliberately decreasing liquidity should increase spreads. This suggests an alternative to crossing procedures. Opening a market could be managed by steadily increasing the liquidity over the first half hour. This could be done easily by opening the market with a very large minimum trade size; in the UK market this would be a minimum multiple of the normal market size, 'NMS'. With this large minimum trade size, bids and offers would be a long way apart, and it is very unlikely that any trading would take place. Over the first half hour the minimum trade size would then slowly be moved from a large multiple of NMS to the normal minimum quote size. At some point during this process the bid and offer prices would come close enough for trading to start. This starting point would then be exactly the correct market price.
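The proposed opening procedure can be sketched as a simple schedule. The opening multiple of NMS and the ramp length below are invented for illustration:

```python
def minimum_trade_size(minutes_since_open, nms,
                       opening_multiple=50, ramp_minutes=30):
    """Open the session with a minimum trade size of a large multiple of
    normal market size (NMS), shrinking linearly to 1x NMS over the first
    half hour, so liquidity is phased in rather than switched on."""
    if minutes_since_open >= ramp_minutes:
        return nms
    frac = minutes_since_open / ramp_minutes
    multiple = opening_multiple * (1 - frac) + 1 * frac
    return int(nms * multiple)

# With NMS = 1000 shares: 50,000 at the open, 1,000 after 30 minutes.
```

Trading would begin naturally at whatever point on the ramp the bid and offer first meet, and that point would reveal the opening price without any crossing algorithm.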
A similar process could be used in reverse for closing markets. Following the ideas above, it might be better to use the length of time that a quote is held open as the way of manipulating liquidity. At the opening of the market, the minimum quote length would be in the order of minutes, and would then be steadily shortened. This would have the same effect of bringing the bid and ask prices together slowly, while having the advantage of not discriminating against small traders. In fact, although this process would be very useful for restarting a stopped market, it wouldn't generally be necessary. Some commodities markets have already solved this problem. For example, the oil futures market run by ICE has trading hours between 01:00am and 11:00pm (UK time). Again the figure below gives price (thick line) and volume (smaller grey shading towards the bottom).
[Figure 9.2.2 here]

Although this might raise fears of traders being forced to work anti-social hours, actually the reverse is true. Trading through the night is low; trading and liquidity then rise to a morning peak, followed by a larger afternoon peak, before dropping off again. Clearly this has settled into a standard pattern where people who have large trades wait for the liquidity peak to build before they move in to trade. It would certainly be feasible to do the same for the major stock-exchanges, if only for the larger shares such as those in the FTSE100 index. All the above are the suggestions of an amateur game theorist. Within economics in recent years there has been an explosion of literature on game theory and auction theory, but this seems to have had little practical input to the trading of financial assets in general and market microstructure in particular. The systematic application of game theory to continuous double auction markets would appear to be a very productive potential future field.

9.3 Supply - On the Scarcity of Scarcity, or the Production of Machines by Means of Machines

"Economics is the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses." [Robbins 1932, p. 15]

I write this section with some trepidation, given the beliefs, held by a significant number of intelligent people, that the world is simultaneously on the edge of a dramatic ecological crisis and about to run out of many critical resources, most notably oil. I had considered a third alternative title of 'The Confessions of a Cornucopian'. Because, after twenty years working daily in the environmental industry, and having read widely on engineering, technology and economics, I am philosophically a strongly committed cornucopian.
I personally agree with the binary economists and Amartya Sen that there are more than enough commodities in the world for everybody; it is just that most people do not have the money to buy them. From the point of view of the analysis of economics, it is most unfortunate that heterodox economics appears to have been significantly infected by this Malthusian virus of scarcity. As can be seen from the discussions above, the environmental and ecological scientists have the right mathematical tools for creating a radical and effective new economics, both specifically in the form of the Lotka-Volterra and maximum entropy production models, and more generally in their understanding of complex inter-related evolving systems. Ultimately this maths finds its roots in the work of Malthus and Sismondi; unfortunately the environmental and energy economics movements seem to have also inherited Malthus' pessimism lock, stock and barrel.
A significant source of the problem appears to be the work of Georgescu-Roegen [Georgescu-Roegen 1971], the first person to successfully introduce the concept of entropy into economics. I believe that Georgescu-Roegen's work was of ground-breaking importance, and profoundly insightful. However some of his conclusions, though meaningful at the time of writing, have proved to be wrong in hindsight. This is not his fault. In the middle of the 20th century there was no sign of declining human fertility, green revolutions or cheap photovoltaic cells. Georgescu-Roegen's work can be seen as similar to that of Lord Kelvin. Kelvin's scientific genius is not in question, but few people today believe that the earth is in imminent danger of 'heat death'. In almost all recent work in environmental and energy economics the supposed 'restraints' imposed by entropy, first proposed by Georgescu-Roegen, have been treated as fundamental truths. Unfortunately these precepts are trivially mistaken. The paper of Ayres & Nair [Ayres & Nair 1984] provides a typical example. I have found this paper profoundly useful in guiding my ideas as to how to link the mathematics of economics to the real world of science; the present paper would never have been written without its assistance. But an atmosphere of doom and gloom runs deeply through the paper, from beginning to end, with the clear prediction, on the last page, made in 1984: "What are the prospects of avoiding a resource-depletion catastrophe? It will not be avoided without a major effort, we believe." And much more in the same vein. Despite these predictions, the economies of the West have plodded on at their long-term 2-4% annual growth rates, the developing world has managed at least double this, and China has lifted a billion people out of poverty in the greatest single advancement of welfare that humanity has ever seen.
Oddly, although neoclassical economists are generally very optimistic with regard to the market providing for people's needs, and are often philosophically opposed to the environmental movement, neoclassical economics shares with the greens a bizarre fixation with the concept of scarcity. As typical examples, the second page of my edition of Mankiw [Mankiw 2004] states that 'Economics is the science of how society manages its scarce resources', while on page 4 of 'Macroeconomics', Miles & Scott give the definition of the whole of economics as 'Economics is the study of the allocation of scarce resources'. Robbins is generally credited as being the originator of this meme; he is quoted above at the beginning of this section. Prior to Robbins the study of economics was generally defined as the study of the distribution of wealth, as for example by Ricardo at the very beginning of this paper. The conversion to a definition of economics based on scarcity represented the absolute victory of marginalism over common sense. The definition using scarcity seeks to define the whole of a scientific field in terms of one cheap mathematical trick. It is as if, exactly as if, 100 years ago the field of physics had been defined as the study of conservative fields. In the following sections I would like to briefly discuss these apparent constraints of scarcity.

Population

The world's population is rising, and because of the relative youth of most people in developing countries, it will continue to rise for some time. However the decline in fertility in recent years has been dramatic. China dropped below replacement rate years ago, along with most of the rest of East Asia. India's fertility rate is dropping dramatically and will soon be below replacement rate. High fertility is now confined to a
small number of countries in Africa and the Middle East, and is dropping quickly in most of these places. The pattern of fertility drops is strong, and is clearly linked to women's education levels and general economic wealth, both of which continue to improve at a rapid rate in all countries except for the few that are at war or are failed states. Many examples can be seen at the excellent site 'gapminder' [gapminder]. The current median predictions, from the UN in 2008, for the future world population are nine billion people in 2050, which is expected to be close to the peak [UN 2008]. For those who are horrified by these numbers, a little context might be useful. If the roughly seven billion people alive today all lived in the USA, the population density of the USA would be 712 people/km2, which lies somewhere between the present day population densities of the Island of Jersey at 789 people/km2 and the Palestinian Territories at 667 people/km2 [UN 2004]. Oddly enough, the island of Jersey has a long and successful history of selling itself as a bucolic rural tourist destination and a quiet millionaires' playground. Meanwhile the Israelis appear to have signally failed to realise that the Palestinian Territories are overpopulated, and have installed an extra half a million people there as illegal settlers in the last forty years.

Energy

Very roughly, world energy production can be split as follows: one third oil, one quarter coal, one quarter natural gas, with the rest made up of nuclear, biomass, hydro and other renewables. Nearly all of the oil is used for transport. Most of the coal, gas and other sources are used for producing electricity for industrial and domestic use, other than a minority of the natural gas which is used directly for heating homes in the Northern hemisphere. All of this usage can be replaced easily and rapidly from other sources, at only moderate extra cost.
The amount of solar radiation received on the earth is many orders of magnitude higher than the amount of energy used by human beings; roughly one hour's sunlight hitting the earth would supply humanity's energy needs for a year. The black dots on the map below show how little area would be needed to supply all the world's energy needs. Figure 9.3.1 here [Loster 2006]. For electrical generation, and so also space heating, solar power can be used directly, or with storage for use at night. Current storage options include hydroelectric, already used in Northern Europe with storage in Norway, hot oil in CSP plants, grid scale sodium/sulphur batteries, or just old fashioned domestic electrical storage heaters with bricks in them. The cost of photovoltaic solar energy has already reached grid parity in Italy, where the sun is plentiful and electricity is expensive [NEA 2010]. The recent experience of both Germany and Spain has shown that when there is an economic incentive, solar power can be rapidly installed in industrial quantities.
As solar has reached grid parity in Italy, and will shortly do so in Spain, Australia and the South West of the USA, installation will proceed rapidly unless it is forestalled by something cheaper such as shale gas or an innovative form of nuclear power. Given that solar has already reached grid parity and will continue to get cheaper, while coal power will inevitably get more expensive, CO2 emissions should also peak in the very near future and decline rapidly thereafter. The recent revolution in shale or 'tight' gas illustrates the pitfalls of the Malthusian tradition of underestimating the combined powers of economic incentives and human ingenuity. As recently as five years ago it was believed that natural gas supplies in the USA had peaked, and so very substantial money was invested in natural gas import facilities in the USA. With the new techniques for extracting shale gas, the estimated gas reserves of the US increased by a third from 1998 to 2008 [EIA], and have increased substantially in the last two years as more unconventional gas reserves have been made accessible. Exploration for tight gas supplies in the rest of the world has barely started, and there is good reason to believe that reserves similar to those found in the US will be found in similar rock formations worldwide. The one third of the world's energy supply, mostly oil, that is used for transport cannot be replaced directly by solar, but many other alternatives exist. With the boom in shale gas, compressed natural gas for transport use is the trivial short-term solution. In countries such as Argentina and Turkey, natural gas is already widely used for transport, and if the shale gas revolution continues at its current speed, this short-term changeover appears inevitable worldwide. Ethanol, specifically cellulosic ethanol, is another alternative. Brazil already supplies more than half its needs for car fuel from ethanol.
Brazil alone could supply enough ethanol to replace all current world oil needs, using just a quarter of its land, supplying cellulosic ethanol from sugar cane [Biopact 2007]. In the meantime, battery technologies are improving rapidly, and cars are slowly becoming more and more hybridised. This will increasingly allow grid electricity (from solar if needs be) to supply power for short distance commuting transport, which of course forms the majority of usage. Already half the two-wheeled motorised vehicles produced in China are electrically powered, and electric bicycles there already outnumber petrol-powered motorcycles. Around the world hybrids are replacing straight diesel engines for buses, refuse lorries and parcel delivery vans. This is being driven by economics; in any application that involves rapid stop-start cycles hybrids are already competitive with traditional drivetrains. The other longer-term reason for expecting oil demand to drop is the installation of personal transport systems in urban areas. This is no longer a high tech dream, but a reality with the Ultra system working at London Heathrow [Ultra]. For techie people it is perhaps worth briefly looking at EROEI, the energy returned on energy invested. The EROEI of photovoltaic solar has come close to that of natural gas, which is why solar has reached grid parity in Italy, and it continues to improve at a steady rate. The only reason solar might not expand dramatically is that shale gas has transformed the economics of natural gas almost overnight. Similarly the EROEI of sugar cane ethanol is already equal to that of gasoline, and is steadily improving. It is only tariff barriers in the US and EU that are preventing its widespread adoption. Given excess land for both solar and growing sugar-
producing crops, analysis of EROEI shows that there is no energy crisis. Indeed EROEI is a poor measure of efficiency. If engineers used 'free-energy returned on free-energy invested' or 'negentropy returned on negentropy invested' then their measures would be much better from a scientific and engineering point of view. However if they did this it would of course render EROEI pointless, as it is in fact just an engineer's ham-fisted way of calculating market values.

Food

In recent years, changing appetites in Asia, combined with oil price movements pushing up the price of natural gas, and so fertiliser prices, have created spikes in food prices that have panicked people into believing the world faces food shortages. As discussed above, the world's population is surprisingly small compared to the amount of land available. Until very recently both China and India were self-sufficient in food and fed large populations (nearly half the world) on comparatively small land areas. Compared to China and India; Russia, Ukraine, the USA, Canada, Brazil and most of Africa are empty. All of these places have substantial potential for extra food production. The table below is FAO data, pulled from a good background article by the Economist [Economist 2010b]. Figure 9.3.2 here [Economist 2010b]. To put things in perspective it is worth looking back at the European common market between 1986 and 1989. This is the period after Spain and Portugal joined, but before East Germany joined. In this period, the EEC had a population density of 150 people/km2. This population fed itself, and lived on a high protein, high meat, high dairy, highly unhealthy, westernised diet. Not only did Europe produce enough food to feed its population in this way, it also, under the Common Agricultural Policy, built up large mountains of surplus food and subsidised food exports to other countries, so destroying agricultural economies across the third world.
In comparison, the future population density of the whole world, assuming a population of 10 billion and excluding the land in Antarctica, would be 74 people/km2, or less than half that of the EEC during 1986-1989. But clearly not all the world's land is fertile. So for the sake of argument, we can also exclude Australia, which is almost all desert, and Russia, which is mostly tundra; the fertile European part of Russia can compensate for the Himalayas and the central Asian deserts. Similarly we can exclude Canada, which is largely tundra, with the remainder compensating for the Rockies and the deserts of the USA and Mexico. Brazil can be excluded for its rainforest and to compensate for the Andes. Finally we can exclude half of Africa to compensate for the Sahara, Kalahari and Namib deserts. (Agronomists can be forgiven a wry smile, as we have now excluded most of the world's major bread baskets.) If you exclude all this land area, and again assume a world population of 10 billion, you get a world population density of 130 people/km2, still less than the EEC at the height of its butter mountain madness.
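The land-area arithmetic above is easy to check. The sketch below uses my own rounded land areas (millions of km2, approximate figures, not taken from the original sources) and reproduces the two densities quoted in the text:

```python
# Back-of-envelope check of the population density argument above.
# Land areas in millions of km2; all figures are rough approximations.
WORLD_LAND = 149.0
ANTARCTICA = 14.0
EXCLUDED = {
    "Australia": 7.7,
    "Russia": 17.1,
    "Canada": 10.0,
    "Brazil": 8.5,
    "half of Africa": 15.0,
}

population = 10e9  # assumed 2050 population

habitable = WORLD_LAND - ANTARCTICA              # ~135M km2
density_all = population / (habitable * 1e6)     # people per km2
remaining = habitable - sum(EXCLUDED.values())   # ~76.7M km2
density_excluded = population / (remaining * 1e6)

print(round(density_all))       # 74 people/km2, as in the text
print(round(density_excluded))  # 130 people/km2, as in the text
```

With these rough areas the quoted figures of 74 and 130 people/km2 both check out.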
In the meantime the rapid expansion in the supply of natural gas from shale means that natural gas prices have de-linked from oil prices, and so we are guaranteed cheap fertiliser for the indefinite future.

Other Natural Resources

As discussed above, energy and agricultural land are available in excess. For the construction of engineering equipment, buildings, cars, etc; steel, aluminium, and silica are also all available in excess. As the oil price has increased, plastics are already being manufactured from sugar cane. With a single possible exception, all other raw materials are fungible; if something becomes scarce, its use can easily be replaced with something else. The only nearly convincing case for an essential resource that is being consumed and cannot be replaced is that of phosphorus, an essential part of fertiliser for agriculture. Oddly enough, the last time I designed a wastewater phosphorus removal plant, the waste iron phosphate was sent to landfill for dumping. Phosphorus is still so cheap and plentiful that it has no commercial value for recycling. In these circumstances, worrying about it running out seems a little premature.

Capital

As discussed above, with the possible exception of phosphorus, there are no meaningful restraints on the supply of raw materials, energy and food for the provision of goods and services to human beings; supply is unlimited in meaningful terms for all reasonable human needs. With regard to capital, with raw materials available in excess, the only natural limit is human ingenuity, at least in the short term, and there does not seem to be any great constraint on human ingenuity at the present time. In manufacturing industry, the current level of automation can be extraordinary. In state of the art factories human beings are largely confined to a supervisory and intermittent maintenance role.
In western countries, to a great extent, the production of machines is already carried out by machines, and increasingly this is moving beyond the scope of the factory floor. A first recent example of how machines are able to provide additional humanly useful value is the rise of fruit picking machines. These are fully automatic, capable of travelling over rough ground, identifying fruit, checking their ripeness and removing them from trees or vines without damaging them. The complexity involved in these processes is enormous, which is why this task has remained a labour intensive process, at least until now [Economist 2009]. A second example is an automated hospital, where a fleet of robots will automatically perform duties such as removing clinical waste, delivering food, cleaning operating theatres and dispensing drugs [BBC 2010b]. These two examples, along with personal rapid transit systems, are interesting in that fruit picking, cleaning and taxi driving are three of the last significant redoubts remaining for the employment of unskilled labour. The IT revolution has already taken over swathes of semi-skilled labour; the clerks that used to dominate offices have largely disappeared, sacrificed to spreadsheets. And IT is slowly but surely eating into skilled and managerial work. Casual observation confirms that supplying new capital is trivially easy. Whether it is the supply of new manufacturing capacity in Japan in the nineties, housing in the USA or Spain in the
noughties, or more recently solar panels in Spain and Germany, history has shown that provision of new capital in large quantities has never been a problem. The problem has always been how to ensure good use is made of the capital once it has been provided. The work of Sraffa, von Neumann, Kurz & Salvadori, Burgstaller and others leaves a major quandary: an inverted Malthusian quandary. Whether you use the mathematical approaches of these academics, or the commonsense engineering approach in this section, it is clear that there simply are no external constraints on the wealth that individual human beings can own. In an exact reversal of Malthus's fears, we are left with an (intellectual) problem: the population of the world is stabilising rapidly, and the production of commodities by means of commodities should give exponential growth. The problem is the one economic input that is truly scarce: labour. The reasons for the apparent scarcity are simple. The scarcity is a consequence of the financial system and Bowley's law. Capital, and so wealth, cannot be accumulated without limit, because the amount of capital that can be built up is dependent on the amount of labour available. If there is no shortage of supply, what of demand?

9.4 Demand

On the supply side, things are relatively straightforward. Costs are defined by 'negentropy', or intrinsic value, and, bubbles aside, prices adjust to these costs in the long term. With a few minor exceptions, and the two major exceptions of labour and land in cities, marginality is of little relevance. On the demand side things are a bit more complex, and a lot more fraught. I would like to start this particular section of discussion by stating categorically that, unlike many physicists, I firmly believe in the concept of utility. That ownership of a second car, for example, gives less benefit than the ownership of a first car seems obvious and plausible, and indeed important.
I have worked for many years in the water industry, where measurement of utility (or rather disutility) has become important. Water is expensive to transfer over long distances, so water companies are natural monopolies. This, along with the non-substitutability of water, and the potential for trace contamination of supply, means that water companies are normally subject to strict regulatory control. In practice the base cost of treating water can be very low, but the expense can vary enormously according to required service levels. These service requirements include things like interruptions to supply, (harmless) discolouration or odour, pollution of watercourses, etc. All of
these are externalities and/or low-probability high-risk events that are not easily priced by means of the market. Eliminating all possible service failures would be enormously expensive, and there is no easy way of using supply and demand to fix the levels customers believe appropriate for rare events. To get round these problems, it has become commonplace in the water industry to use surveys of customers based on pairwise comparison exercises. These allow customers to choose which of two outcomes is worse or better, repeatedly over large numbers of different outcomes. By using some comparators that are directly costable, it is possible to measure and build up disutility curves for customers. The results are interesting; the curves can be highly non-linear. For example nearly all customers are highly tolerant of a short break in water supply, say up to eight hours, seeing this as a much lesser problem than, for example, a spillage that kills hundreds of fish. However most customers are highly intolerant of a water supply outage of more than say twelve hours. Under these circumstances sympathy for aquatic life evaporates. Another notable feature of these surveys is that, despite outliers, the disutility curves are very similar for most people. Whether you follow the psychological hierarchy of needs theories of Maslow [Maslow 1954], or the marketing methods of multinationals, this is hardly a startling observation. People are very similar in their basic needs and desires, which for economic goods form a simple hierarchy. A basic list starts with food, ranking through needs for shelter, transport, leisure/entertainment, health care, education and retirement security. Utility curves do change according to sex, parenthood and age, but the basic requirements of food on the table and a roof over the head are fundamental. That the list changes dramatically with wealth is well known.
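As an illustration of how such pairwise-comparison surveys can be turned into a disutility scale, the sketch below uses entirely hypothetical outcomes and vote counts (none of these figures come from a real survey). Each outcome is scored by the fraction of comparisons in which respondents rated it the worse option, and the relative scale is then anchored to one directly costable outcome:

```python
# Hypothetical pairwise 'which is worse?' survey for water service failures.
# votes[(a, b)] = respondents who rated outcome `a` worse than outcome `b`.
outcomes = ["8h supply outage", "24h supply outage", "fish kill", "discoloured water"]
votes = {
    ("24h supply outage", "8h supply outage"): 95, ("8h supply outage", "24h supply outage"): 5,
    ("fish kill", "8h supply outage"): 80, ("8h supply outage", "fish kill"): 20,
    ("24h supply outage", "fish kill"): 70, ("fish kill", "24h supply outage"): 30,
    ("8h supply outage", "discoloured water"): 60, ("discoloured water", "8h supply outage"): 40,
    ("fish kill", "discoloured water"): 85, ("discoloured water", "fish kill"): 15,
    ("24h supply outage", "discoloured water"): 90, ("discoloured water", "24h supply outage"): 10,
}

def disutility_scores(outcomes, votes):
    """Fraction of comparisons in which each outcome was rated the worse one."""
    worse = {o: 0 for o in outcomes}
    total = {o: 0 for o in outcomes}
    for (a, b), n in votes.items():
        worse[a] += n
        total[a] += n
        total[b] += n
    return {o: worse[o] / total[o] for o in outcomes}

scores = disutility_scores(outcomes, votes)

# Anchor the relative scale with one directly costable comparator, e.g. a
# fish kill with a known (hypothetical) restocking cost of 50,000 pounds:
scale = 50_000 / scores["fish kill"]
monetised = {o: round(s * scale) for o, s in scores.items()}
# With these votes the 24h outage ranks worse than the fish kill, which in
# turn ranks worse than the 8h outage: the non-linearity described above.
```

Real survey designs are considerably more sophisticated (randomised pairings, consistency checks, fitted choice models), but the principle of scoring and then anchoring against a costable comparator is the same.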
In developing countries people start buying bicycles en masse at one wealth level, motorbikes at another higher level and cars at another level higher than that. These markets are predictable, opening up at threshold levels of average income. That is why Coca Cola, a very cheap 'luxury', is marketed in almost every country in the world, but Ferrari, at the time of writing, appears to have only two dealerships in the whole of sub-Saharan Africa; unsurprisingly, in Cape Town and Johannesburg. Rich people may appear to have very different utilities to poor people, but that is only because they are rich, not because they are different. And, as has been noted above, it is the GLV distribution that defines the split of people into rich and poor, into 'workers' and 'capitalists'. So utilitarian desires are fixed by wealth, which, as we have seen, is fixed by entropy. Attempting to use utility as a foundation for the whole of economics is to put the cart firmly before the horse. A further problem with utility theory is its absolute approach to relative value. As stated above, I can see the common sense in the fact that for a single person, ownership of a second car clearly has less utility than ownership of a first car. But this is only true if I am obliged to retain ownership of both cars.
If I am allowed to sell either of the cars, then the 'utility' of the second car is simply its market value less the transaction costs (my time and trouble of selling it). Following the discussion of value in section 8.1 above, the value of other cars on the market will be the fundamental value of the long term costs of producing them, so the 'utility' of my second car will simply be its fundamental value (less transaction costs). Personally, I am not much of a fan of Picasso; Picasso paintings have negligible 'utility' for me. But if somebody offered to sell me one for a thousand pounds (and I was sure of provenance and ownership) I would certainly buy it. And quickly resell it. Living in the UK I prefer to receive wealth in the form of pounds and pence, but will happily accept alternative stores of value if I stand to gain from the deal. Although the utility of a Picasso painting to me is low, its value is actually set in the market by the entropic measures discussed in section 8.1 above. In fact human beings' desires for economic goods fit into two categories. With one very big exception almost all goods are satiable. Realistically most people only want one bicycle, one car, one house, one set of furniture to put in the house, etc. Even large houses are a disadvantage in countries that don't have a supply of cheap labour to clean the rooms and maintain the garden. In the UK it is striking that the majority of owners of large houses ('stately homes') have been obliged to let the public visit them and assist with payment of the upkeep. In high-income countries second homes are largely confined to people who are required to live in cities for work reasons. This holds true even for smaller low value items, as Steve Keen has pointed out: "Two bananas per day may well be preferred to one; three per day is probably pushing the envelope for most humans; and you would have to be a monkey to, for example, prefer twenty bananas in a day to nineteen.
Most humans would kill rather than consent to eating a twentieth banana in a day. Thus, when we consider consumption as a function of time, anyone who behaves in a fashion which economists call rational, always preferring more bananas per unit of time to less, is clearly insane" [Keen 2004]. While some people own collections of things, these fall into two main categories. Either they are the low cost collections of hobbyists, and so count as leisure activity. Or they are investments. Investments are of course the exception to the rule of satiability. Unlike things that give actual utility, human beings seem to have a near insatiable desire to accumulate stores of wealth; 'potential utility', or better 'potential negentropy', whether this is in the form of property, shares, artworks, prestige cars or just 'money in the bank'. Taken together, this suggests that there are straightforward ways of using concepts from physics to model utility and the resulting distributions of goods between individual human beings. Statistical physics has standard methods for dealing with localised fields en masse. It gives each agent a 'potential energy well' which can be filled in a quantum mechanical manner. Such potential wells normally have defined levels which can be filled in turn. So for molecules of gas, translational energy levels can be filled at a certain temperature, rotational energy levels at another higher temperature and vibrational levels at a third, higher still. In this way the energy 'needs' of the molecule are filled at different temperatures. Human beings
could be modelled in a similar way, with bicycle needs filled at a certain level of GDP, motorcycle needs filled at another level of GDP, and motor car needs filled at a third level of GDP. Statistical mechanics has well-understood and consistent mathematics for dealing with such problems, and the probability of the levels being filled is defined by different statistical rules according to the type of 'good' that is filling the potential well. So, for example, for modelling investment wealth ('money in the bank') classical statistical mechanics would be appropriate. For most other goods, which are wanted in limited quantities, the correct statistical mechanical approach will be some variation or modification of Bose-Einstein or Fermi-Dirac statistics. Such an approach could be very powerful: instead of using a single representative agent, as current macro-economic models do, it would be possible to use a large number of identical representative agents and calculate the macroeconomic parameters from their statistical mechanical properties. In economics, some interesting work along these lines has already been carried out by Foley [Foley 1996a, 1996b, 1999, 2002]. Foley's work is also important as it gets to the heart of the problems of using utility and marginality alone in a multi-body system. Even where there are genuinely scarce resources such as labour or special minerals, in all but the simplest cases, the market will never clear at the marginal price; statistical mechanics will force its own equilibrium. As Foley has demonstrated, the statistical mechanical approach is more powerful, with functions that behave sensibly and can give meaningful equilibria.

Part B.III — The Logic of Science

10. The Social Architecture of Capitalism

All the above however does still leave the central question of supply and demand unanswered.
Given the reasonable assumption that basic human desires are fundamentally the same, it is clear that in most parts of the world, even the most basic needs for food, clothing, water and shelter are not fulfilled. In most of the rich world demand for good health, education, housing and, most importantly, secure and decent retirement income is not satisfied for the majority of people. If supply is unlimited in the basic physical sense, and demand is far from being satiated even in the rich world, then a basic question needs to be answered. What exactly is it that controls the balance of supply and demand; or more importantly, why are the basic needs of human beings, for decent housing, education, health, pensions and leisure, not provided by the capital available, or the capital that could very easily be made available? The reason for this substantial market failure is the structure of economics and finance, or to again borrow Ian Wright's phrase, it is a consequence of the 'Social Architecture of Capitalism'. In Wright's paper of this name he put together the first coherent, effective, meaningful model of an economic system based on capital, a model that can be applied to feudal land-
owning, Victorian owner managers, or, with minor modifications, modern disintermediated capitalism. Wright's models are much less 'knowing' than my own, with no financial sector and no preordained mathematical basis. With this simplistic approach Wright shows just how powerful a statistical mechanical approach is. The behaviour of a normal economy 'emerges' naturally out of the model without any assistance from the model builder. Although the makeup of Wright's models is very different to that of my own, the interesting point regarding the comparison is not the differences but the similarities. And this is entirely a consequence of statistical mechanics. Both Wright and myself make some very basic and similar assumptions, which are briefly:

• Economies are multibody, chaotic, stochastic, statistical mechanical systems
• Wealth is produced in companies
• Wealth is conserved in exchange
• Wealth is destroyed in consumption
• Returns on capital are proportional to wealth owned

Once these assumptions are made, and they are trivially obvious assumptions to anybody who has a passing knowledge of statistical mechanics and has worked for a living in a factory, then it doesn't really matter how dodgy your models are. The 'Social Architecture of Capitalism' drops out in short order, replete with gross inequalities of wealth, bubbles, crashes, inflation, recessions and persistent unemployment. And as discussed at length in section 4 above, you simply don't need utility, consumption or production functions and the rest of the marginalist paraphernalia to explain all this. Neither Wright's nor my models even need economic growth. The maths, and indeed the gut feel, of statistical mechanics can be initially quite daunting. However what the models of Wright and myself demonstrate is that this approach makes life much easier. Many millions of complex local interactions get washed out in the sweep of entropy.
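A minimal sketch in the spirit of these assumptions is easy to write. The exchange rule below is the random-split rule of Dragulescu & Yakovenko rather than Wright's actual model, and all parameter values are my own; nonetheless wealth is conserved in every exchange, and a grossly unequal distribution emerges from perfect initial equality with no help from the model builder:

```python
import random

def simulate(n_agents=1000, steps=200_000, seed=42):
    """Toy exchange economy: all agents start equal; at each step a random
    pair pools its wealth and splits it at a random point. Wealth is
    conserved in every exchange, as in the assumptions listed above."""
    random.seed(seed)
    wealth = [100.0] * n_agents
    for _ in range(steps):
        i = random.randrange(n_agents)
        j = random.randrange(n_agents)
        if i == j:
            continue
        pool = wealth[i] + wealth[j]
        share = random.random() * pool
        wealth[i] = share
        wealth[j] = pool - share
    return wealth

def gini(w):
    """Gini coefficient: 0 = perfect equality, 1 = total concentration."""
    w = sorted(w)
    n = len(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * sum(w)) - (n + 1) / n

wealth = simulate()
print(round(sum(wealth)), round(gini(wealth), 2))  # total stays 100000; Gini ~ 0.5
```

The stationary distribution of this particular rule is the exponential (Boltzmann) one, with a Gini coefficient near 0.5; Wright's richer models, with production and consumption added, generate the power law tail as well.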
This modelling approach is very powerful, and offers an effective way of building comprehensive economic models along the lines below. The big problem with modelling any multi-body thermodynamic system, and this includes economic systems, is the large number of parameters. Care is needed in identifying and reducing the active variables initially, so that the most general and basic model can be built first, before then being expanded. The role of economic growth is a good example. Almost all macroeconomic models include economic growth as a variable. Yet casual observation of the depression era, or of the last two decades in Japan, demonstrates trivially that capitalism can survive indefinitely with its structures operating intact in a long-term zero growth environment. Growth is clearly not a primary variable, and should not be included in base models. Its inclusion at this level merely causes confusion with too many variables.
The first part of such an approach is to deliberately limit the modelling to the stable macro-economic zone of the economy, from the point of view of a high level Lotka-Volterra system. That is to say, for this level of modelling, the macroeconomic Minskian/Austrian cycles are deliberately ignored. The maths is constrained so that the economy is deliberately 'damped' to a stable dynamic equilibrium. At this point the economy is in a state of maximum entropy production (MEP), and so in a 'stable dynamic thermodynamic equilibrium', and microeconomic conditions such as income distribution, company size, unemployment and debt can be investigated in detail. The effect of traditional underlying economic relations can be investigated. The economy can also be moved through different loci of stable points, such as those defined by the Bowley equation (4.6g). The models would be expanded slowly to include other factors such as housing and other compulsory spending, as well as corporate and household saving and debt. Eventually such models would include government taxation and spending, currency, imports, exports and exchange rates. However they would still be held at stable dynamic equilibria, and at this stage growth would be ignored. Under the above circumstances, total consumption is equal to total production, and both are unchanging. The total demand is set by the balance between labour and capital at the maximised MEP, given by the appropriate version of Bowley's law. Total wealth and income are dictated by the amount of productive capital installed, which will depend on the equilibria above. Wealth and consumption for each individual are set by their place in the GLV, their earning ability, their compulsory spending on housing and other goods, their consumption and saving preferences, etc. This then generates different classes of owners and workers in society and a final equilibrium solution.
This creates a total quantifiable aggregate level of demand. This equilibrium also defines the prices of the different types of capital, primarily companies and housing. In such a system supply balances to equal the above demand, where the supply costs of commodities are calculated by aggregating the costs of inputs on a cost-plus basis, as per Sraffa, (un-reconstituted) von Neumann, market-microstructure and post-Keynesian analyses, but done on a dynamic equilibrium basis. At this level, price distortions due to capital hoarding and the delayed installation of capital would be prevented. In this system supply and demand are both constrained by maximum entropy production and Bowley's law, so issues such as increasing returns on capital are not problematic. This first level of modelling allows many underlying interactions to be quantified, analysed in detail, and correlated with real-world data. It would primarily define microeconomic interactions, though it would also give insight into macroeconomic models. Once such models have been built satisfactorily, they can be relaxed and the damping progressively removed. In a normal economy, inherent instabilities exist due primarily to the basic process by which capital is priced. This creates endogenous cyclical behaviour within economies, moving them into states that may be characterised as 'quasi-periodic, quasi-stable dynamic thermodynamic equilibrium'. Under these conditions Minskian and Austrian theory, variations in capital, debt and liquidity, along with the relevant behavioural economics, can be analysed. 
There are many secondary sources of instability that can destabilise economic systems, and these may either exacerbate the fundamental instability of capital pricing or create their own cyclical patterns. Such destabilisers include capital cycles in commodities, housing and commercial property, as well as corporate behaviour such as 'capital hoarding'; both of these have been modelled in a simplistic manner in part A. It would also be appropriate to look at the effects of savings and investments at this level. Other destabilisers include household, corporate and government debt, fractional reserve banking, and feedback from monetary authorities. Investigation at this level, allowing models to evolve dynamically around their points of stability, would allow detailed analysis of changing macroeconomic variables. Finally, when such models are well understood, longer-term trends can be modelled; trends such as population growth, economic growth, technology change, productivity growth, cultural change, institutional change, etc. This leads into fields such as evolutionary economics, institutional economics and growth theory. 

11. The Logic of Science 

In their abstract to 'Worrying trends in econophysics', Mauro Gallegati, Steve Keen, Thomas Lux and Paul Ormerod wrote: 'Our concerns are fourfold. First, a lack of awareness of work that has been done within economics itself. Second, resistance to more rigorous and robust statistical methodology. Third, the belief that universal empirical regularities can be found in many areas of economic activity. Fourth, the theoretical models which are being used to explain empirical phenomena. The latter point is of particular concern. Essentially, the models are based upon models of statistical physics in which energy is conserved in exchange processes. There are examples in economics where the principle of conservation may be a reasonable approximation to reality, such as primitive hunter-gatherer societies. 
But in the industrialised capitalist economies, income is most definitely not conserved. The process of production and not exchange is responsible for this. Models which focus purely on exchange and not on production cannot by definition offer a realistic description of the generation of income in the capitalist, industrialised economies.' [Gallegati et al 2006] I am slightly embarrassed to admit that, due to both time constraints and limited experience in econometrics, the present paper remains significantly remiss with regard to the second criticism above. But then again, to rephrase Ernest Rutherford: if you need to use statistics to prove your theory, you ought to have thought of a better theory. In the event of some party choosing to award me remuneration for my ongoing research, I would hope to remedy these shortcomings in future papers. However, I believe the present paper has come a long way in answering the other criticisms. In particular, I believe criticisms one and four have been fully addressed in this paper. 
I believe however that the authors' third criticism is fundamentally flawed. It is the nature of science that a field can appear complex and difficult to make any sense of until a significant insight brings sudden clarity. It has taken time for physicists to bring this clarity to economics, but to physicists the multi-body nature of economic and financial systems meant that it was only a matter of time before universal empirical regularities would be explained. It is this insight that drove Champernowne half a century ago. It is this insight that resulted in Wright, Souma & Nirei and myself independently producing similar models near simultaneously. It is a canard among economists that physicists have moved into economics and finance due to the lack of job opportunities in mainstream physics. This may be the case for quants in the city, but it is not the case for econophysics researchers. For the research-oriented physicist the attraction is a mathematical field that has not been effectively analysed, but that clearly has parallels with other fields that have been regularised. Finding wide-open research areas like this in the mainstream sciences or engineering is difficult. Economics offers the low-hanging fruit of major new research findings; that is, if you can truly describe a field full of watermelons as 'low-hanging'. Indeed, the 'universal empirical regularities' pooh-poohed by Gallegati et al were always there. Wealth distributions, company size distributions, and the split of the returns from labour and capital are all long-standing 'anomalies' within economics. Economists such as Schumpeter and Gabaix have noted these regularities [Gabaix 2009]. Why almost all other economists, even heterodox economists such as Gallegati et al, have shown so little interest in investigating these recurrent and profound features of economics has always been puzzling to physicists. 
Economics has systematically treated such persistent 'anomalies' as anomalies, ignoring raw data while retreating into the comforts of intellectual hypothesis, whether this be neoclassical, Keynesian, Marxian, behavioural or other. Even in areas such as post-Keynesian or behavioural economics, where data collection has become something of a fetish, the flavour of the data collection still gives the feel of data being ransacked to prove the previously held opinions of the researcher, rather than the data being looked at and analysed as it is found. It is this behaviour that has kept economics as a branch, to be generous, of political philosophy. It is this behaviour, understood intuitively by the general public, and explicitly by natural scientists, that is responsible for the very low regard that both have for economics as a science. To be brutally honest, given the fixed holding of ideologically motivated positions against the evidence of recurrent arbitrary destruction of wealth in bubbles, widespread poverty and persistent unemployment, economics as currently practised should be considered a branch of religious philosophy, fitting somewhere between fundamentalism and cargo cults. At least all the mainstream religions include compassion and charity as compulsory elements. The main difference between economics and religion is that, in the majority of countries, members of the public are allowed to voluntarily remove themselves from the experiments of the zealots. It is precisely by investigating 'anomalous' but persistent data outputs that the natural sciences have progressed. By definition, if a data output is persistent, it is not 'anomalous'; it is normal. It may be 'anomalous' with regard to current theory. But that simply makes the current theory, by definition, 'anomalous', not the data. 
In these circumstances the theory must be abandoned, not the data. Einstein, for example, is usually characterised as a theoretical physicist. But his biggest single insight (amongst many) was to treat the experimental fact of the constancy of the speed of light as a given. From this he abandoned 'common sense' and simply worked out the mathematical consequences of that fact. Thus was relativity born. Economists seem to prefer the route of Einstein's peers, forever producing more complex theories to substantiate the existence of a hypothetical aether. Science cannot be built simply on common sense, intuition and intellectual rigour. Science must start with the observed facts if it is to make progress. This, at a much deeper level than that intended by Jaynes, is the logic of science. For any multi-body system, entropy has to be the guiding force. It has taken time for physicists and mathematicians to get to the root of this, mainly because the entropy involved is dynamic path entropy rather than static state entropy, but the driving power of entropy in economics is immediately obvious to anybody who has a passing understanding of entropy. Economics is a specialised study of entropy. It is a branch of thermodynamics, a branch of physics. Like information theory, in fact even more so than information theory, economics is a very complex, interesting and important subject in its own right. But it is nonetheless a subset of thermodynamics. It is the application of dynamics and statistical mechanics to political economy. It is econodynamics. 

11.1 Afterword 

It was noted in the introduction that this paper was researched and written in a little over a year, without financial support or academic supervision. Foolishly, I have gone against a basic conclusion of this paper, and spent a significant portion of my own capital in producing it. 
If you have found the paper of interest or value, any donation to defray the costs of writing it, no matter how small, would be gratefully received. Those who wish to make a donation can do so by clicking on the Paypal link below: click here to make donation (Paypal accepts all major credit cards; you do not need to have a Paypal account.) 
Part C - Appendices 

12. History and Acknowledgements 

Between 1980 and 1982 I was taught A-level physics by Malcolm Ruckledge using the innovative Nuffield Foundation Physics course. This was a powerful combination of an outstanding teacher with outstanding material. The section on statistical mechanics was particularly well written and taught, and gave me an early and profound intuitive insight into the power and simplicity of entropy. I suspect this paper would not have been written without this insight. Sometime in my first year studying physics at the University of Manchester, in 1992/3, while looking at a picture of the Maxwell-Boltzmann distribution of molecular velocities on a blackboard, it occurred to me that wealth in a society was shared out in a similar manner: a lot of people with a little wealth and a few with a lot of wealth. It further occurred to me that the underlying systems, involving a lot of freely interacting particles/individuals, were fundamentally similar. At the time I imagined this was a unique and very clever insight; it turned out that a lot of other physicists and mathematicians have had similar insights, some preceding mine by many decades. After this, nothing very much happened for a decade or so, though the idea refused to go away, and being an engineer at heart, I thought a lot about how income and wealth inequality might be tackled, as well as why it exists. In 2003 I had a letter published in the New Scientist; this is reproduced at the end of this section. This encouraged me to take my ideas more seriously, and while working abroad in 1995 I had the opportunity to write down my ideas at that stage into a fairly amateurish paper. On returning to the UK I circulated the paper around various individuals I thought might be interested. The paper was greeted on a spectrum that largely ran from indifference through to derision. 
One exception was Michael Stutter, who suggested I forward it to Duncan Foley, with whom I had a brief but very rewarding correspondence. I remain very thankful to both of these individuals, and especially to Duncan Foley, for encouraging my work even when it was at this very early and amateurish stage. After this, nothing very much happened again for some years, as I lacked the skills, in both economics and mathematics, to take the work forward. I did however read a paper by Ayres & Nair, 'Thermodynamics and economics', which I found very useful in linking the concept of entropy to the economic concept of value. This changed in August 2000 when, via the New Scientist, I discovered the work of Bouchaud & Mezard and other researchers, primarily physicists but also some heterodox economists, working 
in the new field of econophysics. The majority of the work was in the field of asset pricing in finance, but there was also a parallel stream looking at wealth and income distributions. Over the next few years I attended a number of econophysics and related conferences, where I learned a lot more about both the maths and the economics from the other participants. During this period I was given support and guidance from Steve Keen, Thomas Lux and others, but most particularly from Juergen Mimkes, for which I would like to give thanks. Thomas Lux gave me some very useful insight into the real meaning of value and wealth that helped to generate the ideas in this paper. Steve Keen gave interesting discussions on economics and also pointed me in the direction of James Galbraith, who was also very supportive. As stated in the introduction, I met Wataru Souma at the Econophysics of Wealth Distributions conference at Kolkata in 2005. I almost certainly attended his lecture on his paper 'Universal Structure Of The Personal Income Distribution'. I found Souma & Nirei's model complex and difficult to follow, and did not knowingly use it further. Judging from the pile of papers that I rediscovered it in, it appears that I read Ian Wright's 'The Social Architecture of Capitalism' some time shortly after the Kolkata conference. I remember reading this paper quite clearly, as its style was unusual. It is very strongly a modelling paper, with very little formal mathematical content. As a result I found it very difficult to make much sense of, and in fact I didn't understand the paper until some years later. I also, at the time, found the Marxian approach very naïve and off-putting, particularly in the insistence on the use of the labour theory of value. This seemed to me plainly wrong, so at this stage I dumped the paper in the 'irrelevant' pile and forgot about it. That was a big mistake. 
In 2006 it was suggested to me that the generalised Lotka-Volterra distribution might make a good fit to some high quality income data I had acquired from the UK Statistical Office. It turned out that the data did fit the GLV exceptionally well; better than alternative distributions. As a scientist, this dictated that building models along the lines of the GLV would be the most sensible way forward. By this stage my knowledge of economics had expanded a little, and I was somewhat dismayed by the naivety and complexity of the approaches taken to economics by most physicists. It seemed to me that power-law distributions, and gross inequality, had a universality through geography and, more importantly, history (cf the paper regarding inequality in ancient Egypt [Abul-Magd 2002]), and that they appeared to be valid in any society where wealth, including land, was traded. This could be contrasted with, for example, community-owned land systems in Africa, which though associated with general poverty appeared to be characterised by low levels of inequality. In my view any model for wealth distributions should be able to accommodate payments to capital in the broadest sense, whether via dividends, interest rates, or rent on land and property. With this in mind I attempted to fit, in the simplest way possible, basic economic concepts to two different generating equations that I was aware were capable of producing GLV distributions. These two systems were the exchange system of Slanina and the GLV system of Levy & Solomon. I wrote a note and circulated it to a number of academics in early 2006; I have reproduced the note in full in section 12.1 below. 
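For readers unfamiliar with the shape of the GLV distribution: one commonly quoted closed form for the stationary density (after Solomon; the exact expression here is my gloss, not something derived in this paper) is P(x) ∝ exp(−(α−1)/x) / x^(1+α), where x is wealth or income normalised by its mean. For large x the exponential factor tends to one, leaving a pure power-law tail, which is what distinguishes the GLV fit from exponential-tailed alternatives. A quick numerical check:

```python
import math

def glv_pdf(x, alpha):
    # unnormalised GLV density (one common form, after Solomon):
    # exp(-(alpha - 1)/x) / x**(1 + alpha), with x = wealth / mean wealth
    return math.exp(-(alpha - 1.0) / x) / x ** (1.0 + alpha)

alpha = 1.5   # roughly the wealth-distribution value quoted in this paper
# Deep in the tail the exponential factor is ~1, so doubling x should
# scale the density by very nearly 2**-(1 + alpha).
ratio = glv_pdf(200.0, alpha) / glv_pdf(100.0, alpha)
print(round(ratio, 4), round(2.0 ** -(1 + alpha), 4))
```

The two printed numbers agree to about three decimal places, confirming the power-law tail; at small x, by contrast, the exponential factor suppresses the density, giving the characteristic low-income hump.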
Unfortunately, none of the academics proved interested in my proposals. Also unfortunately, I did not send the note to Wright or Souma & Nirei, as it had been some time since I read their papers, and I didn't consciously connect them to this present work. I lacked the mathematical and programming skills to take this forward, so once again nothing much happened for a few years. In 2009, in the middle of the post-credit-crunch recession, I took the opportunity to start an MSc in Finance at Aston University. Due to some very unfortunate circumstances I was unable to complete the course. However, in the two terms I attended I acquired a lot of useful knowledge regarding basic finance and economics. I would like to give thanks to Patricia Chelley-Steely for giving me important insights into the role of market-microstructure in general and liquidity in particular. I was also able to gain invaluable assistance from George Vogiatzis and Maria Chli with regard to producing simulations of the models I proposed in 2006. The exchange model proved difficult to construct. However, in March 2010 Maria and George produced the first Matlab model for me, based on the GLV model in the second part of my 2006 note. Somewhat to my surprise, this produced a perfect GLV distribution on its first run, though no power law. It turned out that, to generate the power law, the profit ratio had to be increased substantially from the 5% initially proposed to somewhere near 50%. A little investigation revealed that the returns to capital were indeed on this scale, and so this was realistic. At this point George and Maria politely, but firmly, suggested that I conquer my technophobia and learn to program in Matlab myself. I followed their advice and discovered that it is a lot easier than other programming languages I had encountered. From the first programme, I produced all the other programmes in this paper in short order, with almost all programming work being done in May 2010. 
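The flavour of these experiments can be reproduced in a few lines. The sketch below (Python rather than Matlab, and a stripped-down Levy-Solomon-style process rather than any model from this paper; the parameter values are illustrative) applies a multiplicative random return to each agent's wealth while a redistributive floor keeps everyone above a fraction q of mean wealth; the stationary distribution then develops a Pareto tail, with an exponent of roughly 1/(1−q) in the large-N limit:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 10_000, 1_000     # agents, time steps
q = 0.3                  # floor: nobody falls below 30% of mean wealth

w = np.ones(N)
for _ in range(T):
    # multiplicative shock: each agent's wealth scaled by a random return
    w *= rng.lognormal(mean=0.0, sigma=0.1, size=N)
    # redistribution: enforce the floor relative to current mean wealth
    w = np.maximum(w, q * w.mean())

# crude Hill estimate of the power-law exponent from the top 1% of agents
tail = np.sort(w)[-N // 100:]
hill = len(tail) / np.sum(np.log(tail / tail[0]))
print(round(hill, 2))    # of the order of 1/(1 - q) ~ 1.4
```

Setting q near zero removes the power law, mirroring the observation above that a sufficiently large redistributive flow (the profit ratio in the 2006 note) is needed before the tail appears.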
I remain deeply indebted to Maria and George for their initial assistance and support with this work. The income model followed naturally from the wealth model. The companies model followed naturally from the wealth and income models. The commodity model followed naturally from the companies model. During the modelling process I was rereading Steve Keen's 'Debunking Economics' and had also read some of the Goodwin modelling work while investigating the ratio of returns to labour and capital. It seemed to me that by combining the wealth, company and commodity models it would be possible to generate a much simpler but effective Goodwin-style macroeconomic model. This proved to be the case, with a resultant simple base model that appeared to produce Minskian/Austrian cycles endogenously. At some point after the modelling was largely complete, while rereading a large volume of papers I had collected over the years, I reread Wright's 'The Social Architecture of Capitalism'. For the second time I found it difficult to follow, and found the labour theory of value difficult to accept. However, something in the paper was nagging at me. I reread the paper a second time, more carefully, and slowly realised that, though coming from a completely different angle, Wright had built a model that was both making the same base assumptions as my own and producing many of the same outputs. Indeed, in many ways Wright's models produced better results than my own. Given the very different ways that Wright and I produced our models, I believe that my approach was not influenced by Wright. My original proposals of 2006 were deliberately 
mathematically based on the GLV, and were also focused on a financial sector with returns paid on capital. Wright's models are significantly different to my own, most notably in not involving a financial sector. Also, unlike the present paper, Wright takes a 'black box' and 'zero intelligence' approach to modelling which eschews formal fitting of the models to mathematical equations. Despite this belief, I am obliged to accept that I may have been influenced subconsciously by Wright's work. Much later in the writing of this paper, close to its completion, I reread the work of Souma & Nirei. Again I found the complexity of their mathematical approach very difficult to follow; I believe this complexity is unnecessary, and that my own approach is more useful as a basis for analysing economics. However, the parallels between their work and my own are significant. Most notably, Souma & Nirei use consumption as a dissipative part of their model in a way that is almost identical to my own models. They also use capital as a main source of new wealth, which is analogous to my own approach, though less strongly so than with consumption. Souma & Nirei use capital growth as the main form of supplying new wealth to their model. They justify this with supporting data from the Japanese economy. While this may have seemed sensible at the time, given the collapse of the Japanese stock-market and property prices over the last two decades, it now looks less sensible. Although I believe that capital growth can form a part of wealth generation, on a long-term cyclical basis this is likely to be very small. I believe that my simple model of returns to capital in the form of interest, dividends and rent is a better basis for future economic modelling. As with Wright, I do not believe I was influenced directly by Souma & Nirei. 
My first model in 2006 was a simple exchange model, quite different to that of Souma & Nirei, while I generated the second model by simply substituting what were to me the most obvious and simple economic terms into Levy & Solomon's generating equation. Indeed, my original model was a little over-complex and significantly different to that of Souma & Nirei. However, even more so than with Wright's work, the parallels between the models of Souma & Nirei and my own are striking, and the possibility that I was subconsciously assisted by their work seems significant. I would like to state in the strongest terms that I believe the work of Wright and of Souma & Nirei is of considerable importance. These three academics have been able to bridge the gap between the physics and the economics in a way that no other academics have. They also all carried out this work prior to my own. Where my own work differs from that of the gentlemen above is that it has a clear mathematical basis, unlike that of Wright, and that the mathematical basis is dramatically simpler than that of Souma & Nirei. It is my hope that Wright, Souma, Nirei and myself can share the credit for finally bringing an effective mathematical and modelling approach to the understanding of economics. 

Figure 12.1 here 

12.1 Proposed Models 2006 
Pair exchange process, after Slanina: 

W_{i,t+1} = W_{i,t} + β_1 − β_2 − p_i + r·W_{i,t} 

β_1 would be a good or service received, β_2 would be money exchanged for the good or service (or vice versa). You could make this more 'economist friendly' by using: β_g for a good or service received, β_m for money exchanged for the good or service. Typically β would be a factor smaller than W_i in size. 

Δβ = β_g − β_m is a small random difference in wealth due to the exchange not being exactly equal; typically Δβ would be a few percent of β_g. (Economists would argue that Δβ would be equal to zero at equilibrium. I believe this is not the case; however it is much easier just to argue that there will be small random differences in the wealth exchange, which is a very plausible assumption.) I see the Δβ's as the main stochastic driver in this model. 

p_i is the profit taken by a third party. If I buy a car directly off you, then p_i equals zero, but if I buy a car off you via e-bay, a small percentage of β, p_b and/or p_s, is taken by e-bay. (In e-bay's case the seller is charged, so p_b = 0.) Ignoring the example of e-bay, I would initially model this by assuming that all p_i's are a fixed small percentage of the exchange. So: 

p_i = β_g · p_rate 

r is the interest rate (factored down to a weekly or daily rate, whatever Δt is). Annual real interest rates (after inflation) are very stable, varying between 0.5 and 4% (annual) over long time periods. I would also initially model this as a small fixed percentage. (To get a working model with equations that balance it may be necessary to have a fixed relationship between p_rate and r; p_rate = const · r.) I do not see any reason to make the r's a distribution set. Most people's investments are stable, poor people's especially so. Rich people will only hold a portion of investments in riskier, more variable funds. I would only really see a need to introduce a distribution set if it was the only way we could generate the necessary curve. 
So in this model the change in wealth comes from a small random element from the exchange, a small element taken in profit, and a small gain of interest which, crucially, is proportional to current wealth. From a max entropy type approach I would then add the following two conditions: 

Σ_i W_{i,t} = Σ_i W_{i,t+1} 

ie, all wealth is conserved (ie. there is no economic growth or recession). 
And: 

Σ_i p_i = Σ_i r·W_{i,t} 

ie, all profit is recycled as interest on people's wealth. In this model the stochastic variability comes from the wealth exchanges, the Δβ's. This, combined with the assumption of conservation of wealth, would provide a Boltzmann-type distribution if profits p_i and interest r were equal to zero. I believe the extra terms of profit and interest will be a circular reinforcing mechanism that should produce the power tail. If you can solve this, or something similar, hopefully you will get a wealth distribution that is a GLV with alpha = 1.5. 

GLV type process: 

W_{i,t+1} = W_{i,t} + Inc_i·Δt − p_inc − Con_i·Δt − p_con + r·W_{i,t} 

Inc_i is waged income; income from employment. Realistically I would expect this to be a stable distribution, very much on the lines of Juergen's arguments (http://arxiv.org/abs/cond-mat/0204234). 

p_inc is the small profit taken by the employing organisation. Modelled as previous model. 

Con_i is consumption, which includes food, clothes, new cars, petrol, rent*, mortgage payments*, holidays, etc. (* not completely sure about these two). Consumption is the big variable, and is where I would expect the stochastic element to come in strongly. 

p_con is the small profit taken by the shop, landlord, building society, etc. 

r as previous model. 

Again, from a max entropy type approach, I would then add the following two conditions: 

Σ_i W_{i,t} = Σ_i W_{i,t+1} 

again, all wealth is conserved. And: 

Σ_i (p_inc + p_con) = Σ_i r·W_{i,t} 

again, all profit is recycled as interest. From this equation you can derive something like: 

Total Income = I_t 
I_t = Σ_i [ Inc_i + ( r·W_{i,t} / Δt ) ] = [ wages + interest, etc ] = Σ_i [ Con_i + ( ( p_inc + p_con ) / Δt ) ] 

If you can solve this or something similar, hopefully you will get an income distribution that is a GLV with alpha = 4 to 5. 

13. Further Reading 

One major aim of this paper has been to introduce existing concepts of mathematics and economics to audiences that may not be familiar with them. Primarily this means introducing the mathematics of chaos and statistical mechanics to economists, and introducing some basic economic and finance theory to mathematicians, scientists and engineers. However, the majority of the economics and finance referred to in this paper is heterodox, and so will also be new to most economists. Figure 13.1 below shows a suggested route through the more central works referred to in this paper. The top section discusses statistical mechanics, the second section chaos, and the bottom half economics and finance. 

Figure 13.1 here 

*/# - alternative / additional texts available 
B, K & M — Brealey, Myers & Allen 
G & W — Glazer & Wark 
K & L — Kleidon & Lorenz 
M & S — Miles & Scott 
P & O — Pepper & Oliver 
R & R — Reinhart & Rogoff 

The diagram above is for assistance and is not intended to be prescriptive. The arrows simply indicate that, for example, the review by Ozawa will be easier to follow if Atkins and Ruhla have been read beforehand. If you have a strong mathematical bent and significant knowledge of finance then by all means start with Bouchaud et al. To get a strong feel for how statistical mechanics works, both Atkins and Ben-Naim are essential reading; both use the minimum of mathematics and superb writing to explain difficult concepts very lucidly. Atkins follows a traditional energy approach, while Ben-Naim follows an information approach. I strongly recommend that anyone new to statistical mechanics read both books [Atkins 1994, Ben-Naim 2007]. 
'The Physics of Chance' by Charles Ruhla [Ruhla 1992] is also a very good book, well written with clear explanations. It builds from the foundations of probability to the basic ideas of both statistical mechanics and chaotic systems, and forms a natural bridge between Atkins/Ben-Naim and more formal textbooks. Following this, Glazer & Wark is a well written basic statistical mechanics textbook with a more mathematical treatment [Glazer & Wark 2001]. Gould & Tobochnik is an alternative, though it also covers standard thermal physics material; for statistical mechanics start at chapter three [Gould & Tobochnik 2010]. Engel & Reid is a similar alternative; start at chapter 12 [Engel & Reid 2006]. Both are less easy to follow than Glazer & Wark. For a discussion of the origin of power-law tails, the paper by Newman is excellent, though I also recommend reading Mitzenmacher and Simkin & Roychowdhury [Newman 2005, Mitzenmacher 2004, Simkin & Roychowdhury 2006]. Unfortunately, the jump from standard statistical mechanics to the Generalised Lotka-Volterra work of Levy & Solomon is significant. The GLV approach is new and I don't know of any good book discussing it; it is for this reason that I have attempted to explain the GLV in some detail in section 1.2 of this paper. I have included Solomon's own review of the GLV in the proposed reading scheme, but it is highly mathematical [Solomon 2000]. Following on from Atkins and entropy in general, the paper by Ozawa et al gives an excellent review of the research and theory of maximum entropy production. This is expanded on with a set of very interesting papers in Kleidon & Lorenz [Ozawa et al 2003, Kleidon & Lorenz 2005]. The paper by Dewar [Dewar 2005] is of particular importance and, in my opinion, links directly to the work of Levy & Solomon. The website of Kumar [Kumar 2006] gives a brief but good introduction to plain Lotka-Volterra systems, and so an introduction to chaotic systems in general. 
Chapter eight of Keen gives a very good brief introduction to chaotic systems; Ruhla also gives an excellent introduction with a little more maths. 'Nonlinear Dynamics and Chaos' by Strogatz is an extraordinarily well written book, giving a full understanding of highly complex systems, including the mathematics, while using lots of clear examples and clear writing to keep things easy to follow. An alternative work by Hirsch, Smale & Devaney is also very good [Strogatz 2000, Hirsch et al 2003]. Following Strogatz or Hirsch, the works by either Britton or van den Berg move into the mathematics of more complex biological systems, where the Lotka-Volterra model is one of the simplest [van den Berg 2010, Britton 2003]. It is my belief that either of these books will prove a treasure-trove for people trying to find suitable models for economic and financial systems. On similar lines, especially for financial regulators, Nise or similar standard control engineering texts show how straightforward it is to analyse and control complex dynamic systems [Nise 2000]. With regard to economics books, the most important thing is what not to read. Almost all standard economics textbooks are pure neoclassicism with a few scraps of Bowdlerised Keynes thrown in. Unfortunately, despite being very wrong, neoclassicism is intellectually coherent and can be interesting to study, in the same way that, for example, ancient Latin or Greek is. It is however still wrong. 
To understand the historical reasons why it is wrong, read Mirowski, which is highly entertaining, but not necessary for learning about real economics [Mirowski 1989]. To understand the theoretical reasons why neoclassical economics is wrong, I suggest reading Cassidy, Cooper and, most importantly, Steve Keen's 'Debunking Economics'. John Cassidy's 'How Markets Fail' [Cassidy 2009] is ostensibly about the recent credit crunch. However, the first two-thirds of the book gives a superb potted history of economic theory and how it measures up to reality. He includes heterodox economists such as Hayek and Minsky, as well as monetarism, behaviourism and game theory, along with neoclassical economics. The result is an outstanding review of economic history without any mathematics. George Cooper [Cooper 2008] follows on from Cassidy with a more detailed look at finance, in an equally well-written, non-mathematical book. For a more mathematical, and very sharp, analysis of the state of economics you need 'Debunking Economics' by Steve Keen [Keen 2004]. If you only read one book out of those listed in this section, make sure that it is Debunking Economics (if you only read two books, make sure they are Keen and Ruhla). Keen explains in detail the main faults of neoclassical economics, and why the theories in the textbooks are simply wrong. He also discusses how economics needs to be changed, most notably by introducing proper dynamic modelling. He also reviews the various alternative strands of heterodox economics. In parallel to the theoretical background of Keen, I would recommend the books by Smithers, Harrison, Reinhart & Rogoff, Bernholz and Lee [Smithers 2009, Harrison 2005, Reinhart & Rogoff 2009, Bernholz 2003, Lee 1999]. These books deal with share prices, house prices, financial crises, inflation and pricing respectively. Each is written with a long historical viewpoint and very full data.
They give a clear feel for how real economies actually work, and the first three in particular make the dynamic, cyclical nature of economics clear. The most important of these books is 'Post Keynesian Price Theory' by Lee, which shows in careful detail how pricing is actually carried out in non-financial markets. Finally, having been forearmed with the theoretical background of Keen and the real data of the six writers above, I would recommend Miles & Scott as a standard macroeconomic text and Bodie, Kane & Marcus as a standard finance text [Miles & Scott 2002, Bodie et al 2009]. Miles & Scott use neoclassical techniques throughout, but are unusually honest in their questioning of the validity of assumptions. Their book is very good at giving underlying data on economics, and is a very good guide to the jargon and thinking of mainstream economics. Bodie, Kane & Marcus is similarly well written and well supported with data; just keep Cassidy, Cooper and Smithers' demolitions of rational markets in mind as you read it. In international economics, Pettis has produced a profoundly insightful work that builds highly plausible theory to explain the history presented by Reinhart & Rogoff [Pettis 2001]. Mehrling [Mehrling 2000] gives a similarly insightful discussion of monetary economics. For domestic financial markets, Pepper & Oliver provide a short and highly readable account of how liquidity and central banks affect markets from a practitioner's point of view. The review by
Amihud et al 2005 gives much more background on the recent mathematical research in liquidity. For the pricing and trading of financial assets in general, the field of market-microstructure is essential. Unfortunately there is not yet a good introductory book covering this emerging and mathematical field. A very good introduction is given in a paper by Stoll, while an alternative discussion is given in Madhavan [Stoll 2003, Madhavan 2000]. The book by Lyons deals with market-microstructure in foreign exchange markets; this is in contrast to most market-microstructure work, which is with equities. Despite this, Lyons is a well-written work which deals very well with the basics of market-microstructure theory. Finally, the work of Bouchaud, Farmer, Wyart and others in the econophysics community is bringing together detailed data analysis with theoretical work from market-microstructure and econophysics [Farmer et al 2005, Wyart et al 2008, Bouchaud et al 2009].

14. Programmes

The programmes used for most of the modelling are included below. The income and company models were modelled in Matlab; the commodity and macroeconomic models were modelled in Excel. If the Matlab models are pasted directly into the Matlab program editor they should run straight away. Minor modifications are needed to some of the programs to model different scenarios; the modifications required are indicated in the commented sections of the programmes. (NB. I am not by nature a programmer. The one thing I have learnt from my brief experience with Matlab is that whatever way you just did something, there was a better way. I ask for forbearance with my amateurish programming.) The Excel files need to be pasted into a text editor such as Notepad, then imported into Excel. They then need further columns of formulae to be copied over, and graphs to be produced from the data. Finally, different data needs to be pasted in for each separate model.
This is explained in full detail for each of the models.

14.1 Model 1A (Matlab)

rand('state',0);
randn('state',0);
profit_rate = 0.5;
number_runs = 10000;
agents = 10000;
halfway_wealth = zeros (1,agents);
consumption = zeros (1,agents);
waged_income = zeros (1,agents);
investment_income = zeros (1,agents);
total_income = zeros (1,agents);
total_waged_income = zeros (1,agents);
total_investment_income = zeros (1,agents);
profit = zeros (1,agents);
average_final_wealth = 1000;
initial_wealth = 1000 * ones (1,agents);
final_wealth = 1000 * ones (1,agents);
production = 200 * (ones(1,agents));
consumption_rate = 0.3 * (ones(1,agents));

for p = 1:number_runs

    profit = zeros (1,agents);
    total_profit = 0;
    total_wealth = 0;

    initial_wealth = final_wealth;

    for j = 1:agents
        consumption_rate(j) = 0.3 * ( 1 + 0.3*randn );
    end %j

    consumption = initial_wealth .* consumption_rate;
    waged_income = (1 - profit_rate) * production;
    initial_wealth = initial_wealth + waged_income - consumption;

    profit = profit_rate * (production); % + consumption);
    total_wealth = sum (initial_wealth);
    total_profit = sum (profit);
    investment_income = (initial_wealth * total_profit) / total_wealth;
    final_wealth = initial_wealth + investment_income;
    average_final_wealth = (sum (final_wealth)) / agents;

    % halfway check results
    if p < (number_runs / 2)
        halfway_wealth = final_wealth;
    end %if p

    %income gathering - last 1000 runs
    if p > (number_runs - 1000)
        total_income = total_income + waged_income + investment_income;
        total_waged_income = total_waged_income + waged_income;
        total_investment_income = total_investment_income + investment_income;
    end %if p

    average_total_income = (sum(total_income))/agents;

end %p
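The core redistribution step of the loop above can be restated compactly. The following Python sketch is my own minimal, illustrative translation of the Matlab listing (it is not the code used for the results in this paper): waged income is a flat share of production, consumption is a noisy fraction of each agent's wealth, and the profit pool is paid out in proportion to each agent's wealth share, which is the multiplicative, GLV-like term.

```python
import random

def model_1a_step(wealth, production=200.0, profit_rate=0.5, rng=random):
    """One timestep of the Model 1A dynamic (illustrative translation).

    Waged income is the same for every agent; investment income is
    distributed in proportion to wealth, the multiplicative term that
    generates the power-law tail."""
    n = len(wealth)
    # noisy consumption: each agent consumes ~30% of wealth per step
    consumption = [w * 0.3 * (1 + 0.3 * rng.gauss(0, 1)) for w in wealth]
    waged = (1 - profit_rate) * production        # flat wage share
    wealth = [w + waged - c for w, c in zip(wealth, consumption)]
    total_profit = profit_rate * production * n   # aggregate profit pool
    total_wealth = sum(wealth)
    # profit paid out in proportion to each agent's wealth share
    return [w * (1 + total_profit / total_wealth) for w in wealth]

rng = random.Random(0)
wealth = [1000.0] * 1000
for _ in range(500):
    wealth = model_1a_step(wealth, rng=rng)
```

After a few hundred steps the distribution is strongly skewed, with a small number of agents holding a large share of total wealth, while aggregate wealth remains finite because consumption grows with wealth.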
% deciles
deciles = ones (1,(agents/10));

% earnings deciles
production = sort (production);
decile_production = zeros ((agents/10),10);
decile_production(:) = production;
production_deciles = deciles * decile_production;
production_decile_ratio = production_deciles(10)/production_deciles(1);

% wealth deciles
final_wealth = sort (final_wealth);
decile_final_wealth = zeros ((agents/10),10);
decile_final_wealth(:) = final_wealth;
final_wealth_deciles = deciles * decile_final_wealth;
wealth_decile_ratio = final_wealth_deciles(10)/final_wealth_deciles(1);

% income deciles
total_income = sort (total_income);
decile_total_income = zeros ((agents/10),10);
decile_total_income(:) = total_income;
total_income_deciles = deciles * decile_total_income;
income_decile_ratio = total_income_deciles(10)/total_income_deciles(1);

% gini coefficients
index = zeros(1,agents);
for i = 1:agents
    index(i)=i;
end %i
gini_earnings = ((2*sum(production .* index))/(agents*sum(production))) - ((agents+1)/agents);
gini_wealth = ((2*sum(final_wealth .* index))/(agents*sum(final_wealth))) - ((agents+1)/agents);
gini_income = ((2*sum(total_income .* index))/(agents*sum(total_income))) - ((agents+1)/agents);

% relative poverty levels
poverty_number_wealth = 0;
poverty_ratio_wealth = 0;
poverty_number_income = 0;
poverty_ratio_income = 0;
for i = 1:agents
    if final_wealth(i) < average_final_wealth /2
        poverty_number_wealth = poverty_number_wealth + 1;
    end
    if total_income(i) < average_total_income /2
        poverty_number_income = poverty_number_income + 1;
    end
end %i
poverty_ratio_wealth = poverty_number_wealth/agents;
poverty_ratio_income = poverty_number_income/agents;

%vertical display data
display_wealth = final_wealth';
display_income = total_income';
display_waged_income = total_waged_income';
display_investment_income = total_investment_income';
display_halfway_wealth = halfway_wealth';
display_consumption_rate = consumption_rate';
display_production = production';

14.2 Model 1B (Matlab)

rand('state',0);
randn('state',0);
profit_rate = 0.5;
number_runs = 10000;
agents = 10000;
halfway_wealth = zeros (1,agents);
consumption = zeros (1,agents);
waged_income = zeros (1,agents);
investment_income = zeros (1,agents);
total_income = zeros (1,agents);
total_waged_income = zeros (1,agents);
total_investment_income = zeros (1,agents);
profit = zeros (1,agents);
average_final_wealth = 1000;
initial_wealth = 1000 * ones (1,agents);
final_wealth = 1000 * ones (1,agents);
production = 200 * (ones(1,agents) + 0.2 * randn (1,agents));
consumption_rate = 0.2 * (ones(1,agents));

for p = 1:number_runs

    profit = zeros (1,agents);
    total_profit = 0;
    total_wealth = 0;

    initial_wealth = final_wealth;

    consumption = initial_wealth .* consumption_rate;
    waged_income = (1 - profit_rate) * production;
    initial_wealth = initial_wealth + waged_income - consumption;

    profit = profit_rate * (production);
    total_wealth = sum (initial_wealth);
    total_profit = sum (profit);
    investment_income = (initial_wealth * total_profit) / total_wealth;
    final_wealth = initial_wealth + investment_income;
    average_final_wealth = (sum (final_wealth)) / agents;

    % halfway check results
    if p < (number_runs / 2)
        halfway_wealth = final_wealth;
    end %if p

    %income gathering - last 1000 runs
    if p > (number_runs - 1000)
        total_income = total_income + waged_income + investment_income;
        total_waged_income = total_waged_income + waged_income;
        total_investment_income = total_investment_income + investment_income;
    end %if p

    average_total_income = (sum(total_income))/agents;

end %p

% deciles
deciles = ones (1,(agents/10));

% earnings deciles
production = sort (production);
decile_production = zeros ((agents/10),10);
decile_production(:) = production;
production_deciles = deciles * decile_production;
production_decile_ratio = production_deciles(10)/production_deciles(1);

% wealth deciles
final_wealth = sort (final_wealth);
decile_final_wealth = zeros ((agents/10),10);
decile_final_wealth(:) = final_wealth;
final_wealth_deciles = deciles * decile_final_wealth;
wealth_decile_ratio = final_wealth_deciles(10)/final_wealth_deciles(1);

% income deciles
total_income = sort (total_income);
decile_total_income = zeros ((agents/10),10);
decile_total_income(:) = total_income;
total_income_deciles = deciles * decile_total_income;
income_decile_ratio = total_income_deciles(10)/total_income_deciles(1);

% gini coefficients
index = zeros(1,agents);
for i = 1:agents
    index(i)=i;
end %i
gini_earnings = ((2*sum(production .* index))/(agents*sum(production))) - ((agents+1)/agents);
gini_wealth = ((2*sum(final_wealth .* index))/(agents*sum(final_wealth))) - ((agents+1)/agents);
gini_income = ((2*sum(total_income .* index))/(agents*sum(total_income))) - ((agents+1)/agents);

% relative poverty levels
poverty_number_wealth = 0;
poverty_ratio_wealth = 0;
poverty_number_income = 0;
poverty_ratio_income = 0;
for i = 1:agents
    if final_wealth(i) < average_final_wealth /2
        poverty_number_wealth = poverty_number_wealth + 1;
    end
    if total_income(i) < average_total_income /2
        poverty_number_income = poverty_number_income + 1;
    end
end %i
poverty_ratio_wealth = poverty_number_wealth/agents;
poverty_ratio_income = poverty_number_income/agents;

%display vertical
display_wealth = final_wealth';
display_income = total_income';
display_waged_income = total_waged_income';
display_investment_income = total_investment_income';
display_halfway_wealth = halfway_wealth';
display_consumption_rate = consumption_rate';
display_production = production';

14.3 Model 1C (Matlab)

rand('state',0);
randn('state',0);
agents = 10000;
gini_vector = zeros (7,19);
wealth_vector = zeros (agents,19);
income_vector = zeros (agents,19);

for m = 1:19

    profit_rate = m * 0.05;
    %profit_rate = 0.5;
    number_runs = 10000;
    cross_check_randn = zeros (1,agents);
    halfway_wealth = zeros (1,agents);
    consumption = zeros (1,agents);
    waged_income = zeros (1,agents);
    investment_income = zeros (1,agents);
    total_income = zeros (1,agents);
    total_waged_income = zeros (1,agents);
    total_investment_income = zeros (1,agents);
    profit = zeros (1,agents);
    average_final_wealth = 1000;
    initial_wealth = 1000 * ones (1,agents);
    final_wealth = 1000 * ones (1,agents);
    rent = 0 * ones (1,agents);
    production = 200 * (ones(1,agents));
    consumption_rate = 0.2 * (ones(1,agents) + 0.1 * randn (1,agents));

    for p = 1:number_runs

        profit = zeros (1,agents);
        total_profit = 0;
        total_wealth = 0;
        initial_wealth = final_wealth;

        consumption = initial_wealth .* consumption_rate;
        waged_income = (1 - profit_rate) * production;
        initial_wealth = initial_wealth + waged_income - consumption;

        profit = profit_rate * (production);
        total_wealth = sum (initial_wealth);
        total_profit = sum (profit);
        investment_income = (initial_wealth * total_profit) / total_wealth;
        final_wealth = initial_wealth + investment_income;
        average_final_wealth = (sum (final_wealth)) / agents;

        % halfway check results
        if p < (number_runs / 2)
            halfway_wealth = final_wealth;
        end %if p

        %income gathering - last 1000 runs
        if p > (number_runs - 1000)
            total_income = total_income + waged_income + investment_income;
            total_waged_income = total_waged_income + waged_income;
            total_investment_income = total_investment_income + investment_income;
        end %if p

        average_total_income = (sum(total_income))/agents;

    end %p

    % deciles
    deciles = ones (1,(agents/10));

    % earnings deciles
    production = sort (production);
    decile_production = zeros ((agents/10),10);
    decile_production(:) = production;
    production_deciles = deciles * decile_production;
    production_decile_ratio = production_deciles(10)/production_deciles(1);

    % wealth deciles
    final_wealth = sort (final_wealth);
    decile_final_wealth = zeros ((agents/10),10);
    decile_final_wealth(:) = final_wealth;
    final_wealth_deciles = deciles * decile_final_wealth;
    wealth_decile_ratio = final_wealth_deciles(10)/final_wealth_deciles(1);

    % income deciles
    total_income = sort (total_income);
    decile_total_income = zeros ((agents/10),10);
    decile_total_income(:) = total_income;
    total_income_deciles = deciles * decile_total_income;
    income_decile_ratio = total_income_deciles(10)/total_income_deciles(1);

    % gini coefficients
    index = zeros(1,agents);
    for i = 1:agents
        index(i)=i;
    end %i
    gini_earnings = ((2*sum(production .* index))/(agents*sum(production))) - ((agents+1)/agents);
    gini_wealth = ((2*sum(final_wealth .* index))/(agents*sum(final_wealth))) - ((agents+1)/agents);
    gini_income = ((2*sum(total_income .* index))/(agents*sum(total_income))) - ((agents+1)/agents);

    % relative poverty levels
    poverty_number_wealth = 0;
    poverty_ratio_wealth = 0;
    poverty_number_income = 0;
    poverty_ratio_income = 0;
    for i = 1:agents
        if final_wealth(i) < average_final_wealth /2
            poverty_number_wealth = poverty_number_wealth + 1;
        end
        if total_income(i) < average_total_income /2
            poverty_number_income = poverty_number_income + 1;
        end
    end %i
    poverty_ratio_wealth = poverty_number_wealth/agents;
    poverty_ratio_income = poverty_number_income/agents;

    %vertical displays
    display_wealth = final_wealth';
    display_income = total_income';
    display_waged_income = total_waged_income';
    display_investment_income = total_investment_income';
    display_halfway_wealth = halfway_wealth';
    display_consumption_rate = consumption_rate';
    display_production = production';

    gini_vector (1,m) = profit_rate;
    gini_vector (2,m) = gini_wealth;
    gini_vector (3,m) = gini_income;
    gini_vector (4,m) = wealth_decile_ratio;
    gini_vector (5,m) = income_decile_ratio;
    gini_vector (6,m) = poverty_ratio_wealth;
    gini_vector (7,m) = poverty_ratio_income;

    for j = 1:agents
        wealth_vector (j,m) = display_wealth (j,1);
        income_vector (j,m) = display_income (j,1);
    end %j

end %m

14.4 Model 1D (Matlab)
% Note, different commented sections below need
% to be uncommented to model maximum wealth,
% compulsory saving and model 1E

rand('state',0);
randn('state',0);
profit_rate = 0.5;
number_runs = 10000;
agents = 10000;
maximum_wealth = 1500;
cross_check_randn = zeros (1,agents);
halfway_wealth = zeros (1,agents);
consumption = zeros (1,agents);
waged_income = zeros (1,agents);
investment_income = zeros (1,agents);
total_income = zeros (1,agents);
total_waged_income = zeros (1,agents);
total_investment_income = zeros (1,agents);
profit = zeros (1,agents);
average_final_wealth = 1000;
initial_wealth = 1000 * ones (1,agents);
final_wealth = 1000 * ones (1,agents);
rent = 0 * ones (1,agents);
production = 200 * (ones(1,agents) + 0.1 * randn (1,agents));
consumption_rate = 0.2 * (ones(1,agents) + 0.1 * randn (1,agents));
% for model 1E change 0.2 to 0.3 in the equation above.

for p = 1:number_runs

    profit = zeros (1,agents);
    total_profit = 0;
    total_wealth = 0;

    initial_wealth = final_wealth;

    consumption = initial_wealth .* consumption_rate;

    % compulsory saving - START
    % uncomment this section for compulsory saving
    % for j = 1:agents
    %     if initial_wealth(j) < (0.9*average_final_wealth)
    %         consumption(j) = 0.8*consumption(j);
    %     end %if
    % end %for j
    % compulsory saving - END

    waged_income = (1 - profit_rate) * production;
    initial_wealth = initial_wealth + waged_income - consumption;

    profit = profit_rate * (production);
    total_wealth = sum (initial_wealth);
    total_profit = sum (profit);
    investment_income = (initial_wealth * total_profit) / total_wealth;
    final_wealth = initial_wealth + investment_income;
    average_final_wealth = (sum (final_wealth)) / agents;

    % halfway check results
    if p < (number_runs / 2)
        halfway_wealth = final_wealth;
    end %if p

    %income gathering - last 1000 runs
    if p > (number_runs - 1000)
        total_income = total_income + waged_income + investment_income;
        total_waged_income = total_waged_income + waged_income;
        total_investment_income = total_investment_income + investment_income;
    end %if p

    average_total_income = (sum(total_income))/agents;

    % maximum wealth barrier - START
    % uncomment this section for maximum wealth barrier
    % also choose whether to enforce with decreased production or
    % increased consumption
    %
    % for j = 1:agents
    %
    %     % uncomment for decreased production (and comment if below)
    %     % if final_wealth(j) > maximum_wealth
    %     %     production(j) = 0.95 * production(j);
    %
    %     % uncomment for increased consumption (and comment if above)
    %     if final_wealth(j) > maximum_wealth
    %         consumption_rate(j) = 1.05 * consumption_rate(j);
    %
    %     end %if
    %
    % end %j
    %
    % maximum wealth barrier - END

end %p

% deciles
deciles = ones (1,(agents/10));

% earnings deciles
production = sort (production);
decile_production = zeros ((agents/10),10);
decile_production(:) = production;
production_deciles = deciles * decile_production;
production_decile_ratio = production_deciles(10)/production_deciles(1);

% wealth deciles
final_wealth = sort (final_wealth);
decile_final_wealth = zeros ((agents/10),10);
decile_final_wealth(:) = final_wealth;
final_wealth_deciles = deciles * decile_final_wealth;
wealth_decile_ratio = final_wealth_deciles(10)/final_wealth_deciles(1);

% income deciles
total_income = sort (total_income);
decile_total_income = zeros ((agents/10),10);
decile_total_income(:) = total_income;
total_income_deciles = deciles * decile_total_income;
income_decile_ratio = total_income_deciles(10)/total_income_deciles(1);

% gini coefficients
index = zeros(1,agents);
for i = 1:agents
    index(i)=i;
end %i
gini_earnings = ((2*sum(production .* index))/(agents*sum(production))) - ((agents+1)/agents);
gini_wealth = ((2*sum(final_wealth .* index))/(agents*sum(final_wealth))) - ((agents+1)/agents);
gini_income = ((2*sum(total_income .* index))/(agents*sum(total_income))) - ((agents+1)/agents);

% relative poverty levels
poverty_number_wealth = 0;
poverty_ratio_wealth = 0;
poverty_number_income = 0;
poverty_ratio_income = 0;
for i = 1:agents
    if final_wealth(i) < average_final_wealth /2
        poverty_number_wealth = poverty_number_wealth + 1;
    end
    if total_income(i) < average_total_income /2
        poverty_number_income = poverty_number_income + 1;
    end
end %i
poverty_ratio_wealth = poverty_number_wealth/agents;
poverty_ratio_income = poverty_number_income/agents;

%vertical displays
display_wealth = final_wealth';
display_income = total_income';
display_waged_income = total_waged_income';
display_investment_income = total_investment_income';
display_halfway_wealth = halfway_wealth';
display_consumption_rate = consumption_rate';
display_production = production';

14.5 Model 2A (Matlab)

rand('state',0);
randn('state',0);
number_runs = 10000;
companies = 10000;
total_capital = 10000000;
minimum_capital = 10;
initial_capital = (total_capital/companies) * ones(1,companies);
final_capital = initial_capital;
initial_market_cap = initial_capital;
upside_payout_factor = 1.0;
downside_payout_factor = 1.0;
production_rate = zeros(1,companies);
production = zeros(1,companies);
expected_returns = zeros(1,companies);
actual_returns = zeros(1,companies);
halfway_capital = zeros (1,companies);

for p = 1:number_runs

    initial_capital = final_capital;

    for k = 1:companies
        production_rate(k) = 0.1 * (1 + 0.2 * randn);
    end %end k

    production = initial_capital .* (production_rate); % production generated

    expected_returns = initial_market_cap * 0.1;

    for k = 1:companies
        if production(k) > expected_returns(k)
            actual_returns(k) = (expected_returns(k) * upside_payout_factor) + (production(k) * (1 - upside_payout_factor));
        else
            actual_returns(k) = (expected_returns(k) * downside_payout_factor) + (production(k) * (1 - downside_payout_factor));
        end %if
    end %end k

    final_capital = initial_capital + production - actual_returns;
    initial_market_cap = actual_returns .* 10;

    total_final_capital = sum(final_capital);
    final_capital = (final_capital * total_capital)/total_final_capital;

    % halfway check results
    if p < (number_runs / 2)
        halfway_capital = final_capital;
    end % end if p

end %p

display_capital = final_capital';
display_halfway_capital = halfway_capital';
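The company dynamic in Model 2A can likewise be summarised. The Python sketch below is my own illustrative restatement, not the code used for the results: each firm earns a noisy return on its capital, pays out returns anchored to market expectations through the payout factors, and aggregate capital is renormalised each step so the total is conserved. The Matlab listing declares a minimum_capital of 10; I apply it here as a simple floor, which is an assumption on my part about how it is intended to be used.

```python
import random

def model_2a_step(capital, market_cap, total_capital,
                  upside=1.0, downside=1.0, rng=random):
    """One timestep of the Model 2A company dynamic (illustrative).

    Each firm earns a noisy ~10% return on capital; payouts are
    anchored to what the market expected; total capital is then
    renormalised so the aggregate is conserved."""
    new_capital, new_market_cap = [], []
    for k, mc in zip(capital, market_cap):
        production = k * 0.1 * (1 + 0.2 * rng.gauss(0, 1))
        expected = mc * 0.1                  # market expects a 10% return
        factor = upside if production > expected else downside
        actual = expected * factor + production * (1 - factor)
        # minimum_capital = 10 from the listing, applied as a floor (assumed)
        new_capital.append(max(k + production - actual, 10.0))
        new_market_cap.append(actual * 10)   # market cap = 10x actual returns
    scale = total_capital / sum(new_capital) # conserve aggregate capital
    return [k * scale for k in new_capital], new_market_cap

rng = random.Random(1)
total, n = 10_000_000.0, 1000
capital = [total / n] * n
market_cap = capital[:]
for _ in range(200):
    capital, market_cap = model_2a_step(capital, market_cap, total, rng=rng)
```

The renormalisation is the conservation constraint discussed in the text: individual firms follow a multiplicative random walk, but the total is held fixed, which is what produces the power-law concentration of capital.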
14.6 Model 2B (Matlab)

rand('state',0);
randn('state',0);
number_runs = 10000; %100000
companies = 10000;
total_capital = 10000000;
initial_capital = (total_capital/companies) * ones(1,companies);
final_capital = initial_capital;
initial_market_cap = initial_capital;
upside_payout_factor = 0.9;
downside_payout_factor = 0.9;
production_rate = zeros(1,companies);
production = zeros(1,companies);
expected_returns = zeros(1,companies);
actual_returns = zeros(1,companies);
halfway_capital = zeros (1,companies);

for p = 1:number_runs

    initial_capital = final_capital;

    for k = 1:companies
        production_rate(k) = 0.1 * (1 + 0.2 * randn);
    end %end k

    production = initial_capital .* (production_rate); % production generated

    expected_returns = initial_market_cap * 0.1;

    for k = 1:companies
        if production(k) > expected_returns(k)
            actual_returns(k) = (expected_returns(k) * upside_payout_factor) + (production(k) * (1 - upside_payout_factor));
        else
            actual_returns(k) = (expected_returns(k) * downside_payout_factor) + (production(k) * (1 - downside_payout_factor));
        end %if
    end %end k

    final_capital = initial_capital + production - actual_returns;
    initial_market_cap = actual_returns .* 10;
    total_final_capital = sum(final_capital);
    final_capital = (final_capital * total_capital)/total_final_capital;

    % halfway check results
    if p < (number_runs / 2)
        halfway_capital = final_capital;
    end % end if p

end %p

display_capital = final_capital';
display_halfway_capital = halfway_capital';

14.7 Model 2C (Matlab)

rand('state',0);
randn('state',0);
number_runs = 10000;
companies = 10000;
total_capital = 10000000;
initial_capital = (total_capital/companies) * ones(1,companies);
final_capital = initial_capital;
initial_market_cap = initial_capital;
upside_payout_factor = 0.9;
downside_payout_factor = 0.5;
production_rate = zeros(1,companies);
production = zeros(1,companies);
expected_returns = zeros(1,companies);
actual_returns = zeros(1,companies);
halfway_capital = zeros (1,companies);

for k = 1:companies
    production_rate(k) = 0.1 * (1 + 0.1 * randn);
end %end k

production_rate = sort (production_rate, 'descend');

for p = 1:number_runs

    initial_capital = final_capital;

    production = initial_capital .* (production_rate); % production generated

    expected_returns = initial_market_cap * 0.1;
    for k = 1:companies
        if production(k) > expected_returns(k)
            actual_returns(k) = (expected_returns(k) * upside_payout_factor) + (production(k) * (1 - upside_payout_factor));
        else
            actual_returns(k) = (expected_returns(k) * downside_payout_factor) + (production(k) * (1 - downside_payout_factor));
        end %if
    end %end k

    final_capital = initial_capital + production - actual_returns;
    initial_market_cap = actual_returns .* 10;

    total_final_capital = sum(final_capital);
    final_capital = (final_capital * total_capital)/total_final_capital;

    % halfway check results
    if p < (number_runs / 2)
        halfway_capital = final_capital;
    end % end if p

end %p

display_capital = final_capital';
display_halfway_capital = halfway_capital';
display_initial_market_cap = initial_market_cap';

14.8 Model 3 - Commodity (Excel)

Instructions

Open a text editor programme - in Windows you can go to 'All Programs' / 'Accessories' and open 'Notepad'. From the text below, under 'Program', select and copy all the text between the two rows of asterisks - but do not select the asterisks themselves. Go to the text editor and paste all the text into the text editor. The first line in the text editor should read: this writing should be in cell A1. If you have pasted the asterisks into the text editor, delete them. If there is a space above the first line, delete it. Save the data as a plain text file in a location you will be able to find easily. Open Excel, open a new worksheet.
Go to 'Data' / 'Get External Data' / 'Import Text File'. Use the explorer window to find and open the text file you saved above. Select 'Delimited' and then 'Next'. Select 'Comma', also unselect 'Tab', select 'Next'. Select 'Finish'. Put the data in the existing worksheet, in cell $A$1. The phrase: 'this writing should be in cell A1' should be in cell A1. If it isn't, select all the text and move it en masse so that the phrase is actually in cell A1. Check that all the formulae are working as formulae. The process above should work; however, sometimes the formulae still keep the apostrophe (') in front of the equals signs (=) from the CSV input. If there are any apostrophes in front of any equals signs, delete them before going on to the next step. (Note that if a formula is showing "#DIV/0!" it is working correctly as a formula; the next step below will provide the missing data to prevent the division by zero errors.)

Select all the data from cell K16 to K34 inclusive. Copy this data over into cells L16 to HB34; the easiest way to do this is by moving the cursor over the small black square at the bottom right of the selection, right-clicking on it and dragging across to column HB. If this is done correctly, row 16 should automatically increment from 1 to 200 timesteps.

To create a graph, select the whole area from I16 to HB34, and then press the chart wizard button. Set up the graph using x-y scatter, with data points connected by smoothed lines. Once you have your graph you can format it, make a copy of it, and delete unwanted data series as required.

Now you can run different parameters in the model to see what happens. Enter the parameters in column J, between J3 and J8. The parameters for the models in this paper are given in cells D1 to F11.
Program
*****************************************************************************************
this writing should be in cell A1,,,values for different models below required values entered in column J below,
,,,Model 3A,Model 3B,Model 3C,
,,,0.1,0.1,0.1,,,interest_rate,0.1,
,,,0.2,0.2,0.2,,,production_rate,0.2,
,,,0.4,0.4,0.4,,,consumption_rate,0.4,
,,,1,1,0.9,,,upside_payout_ratio,0.9,
,,,1,1,0.9,,,downside_payout_ratio,0.9,
,,,0,2,0,,,lag (max 10),0,
,,,1000,1000,1000,,,c,1000,
initial,values,
timesteps,0,1,
expected_returns,,=J34*$J$3,
commodity payments variable component,,=$J$10*J22+$J$11,
,average,average
commodity payments minimum component,100,=$J$19,
,timesteps,timesteps
commodity payments - actual,,=MAX(K18:K19),
,1 to 200,21 to 200*
production_rate * capital,,=$J$4*J33,
,=MAX(K22:HB22),=AVERAGE(AE22:HB22)
actual production - smaller of 2 above,100,=MIN(K20:K21),
,=AVERAGE(K23:HB23),=AVERAGE(AE23:HB23)
prices,,=K20/K22,
,* allows equilibrium to form
capital_employed,,=K22/$J$4,
production_revenue,,=K20-K22,
downside returns,,= (K17 * $J$7) + (K25 * (1 - $J$7)),
upside returns,,= (K17 * $J$6) + (K25 * (1 - $J$6)),
returns_selector,,"=IF(K25<K17,1,0)",
,=AVERAGE(K29:HB29),=AVERAGE(AE29:HB29)
actual_returns,,=(K26*K28+K27*(1-K28)),
,,=K20-K22-K29,
capital procured in line above - do not enter values in the line above,
capital_added,,"=OFFSET(K32,-2,($J$8*-1),1,1)",
,=AVERAGE(K33:HB33),=AVERAGE(AE33:HB33)
capital_available,500,=J33+K32,
,=AVERAGE(K34:HB34),=AVERAGE(AE34:HB34)
capital_wealth,500,=K29/$J$3,
*****************************************************************************************
14.9 Model 4 - Macroeconomy (Excel)

Instructions

Open a text editor programme - in Windows you can go to 'All Programs' / 'Accessories' and open 'Notepad'. From the text below, under 'Program', select and copy all the text between the two rows of asterisks - but do not select the asterisks themselves. Go to the text editor and paste all the text into the text editor. The first line in the text editor should read: this writing should be in cell A1. If you have pasted the asterisks into the text editor, delete them. If there is a space above the first line, delete it. Save the data as a plain text file in a location you will be able to find easily. Open Excel, open a new worksheet.

Go to 'Data' / 'Get External Data' / 'Import Text File'. Use the explorer window to find and open the text file you saved above. Select 'Delimited' and then 'Next'. Select 'Comma', also unselect 'Tab', select 'Next'. Select 'Finish'. Put the data in the existing worksheet, in cell $A$1. The phrase: 'this writing should be in cell A1' should be in cell A1. If it isn't, select all the text and move it en masse so that the phrase is actually in cell A1. Check that all the formulae are working as formulae. The process above should work; however, sometimes the formulae still keep the apostrophe (') in front of the equals signs (=) from the CSV input. If there are any apostrophes in front of any equals signs, delete them before going on to the next step. (Note that if a formula is showing "#DIV/0!" it is working correctly as a formula; the next step below will provide the missing data to prevent the division by zero errors.)

Select all the data from cell N17 to O37 inclusive. Copy this data over into cells P17 to HE37; the easiest way to do this is by moving the cursor over the small black square at the bottom right of the selection, right-clicking on it and dragging across to column HE. If this is done correctly, row 17 should automatically increment from 1 to 200 timesteps.
To create a graph, select the whole area from L17 to HE37, and then press the chart wizard button. Set up the graph using x-y scatter, with data points connected by smoothed lines. Once you have your graph you can format it, make a copy of it, and delete unwanted data series as required.

Now you can run different parameters in the model to see what happens. Enter the parameters in column M, between M3 and M12; values for capital should be changed in M32 and M33. The parameters for the models in this paper are given in cells F3 to I14.

To experiment with Bowley ratios and cash balances you will need to use Solver. You may need to install this if it isn't already installed. To check, make sure a cell (any cell) is selected on the spreadsheet. Go to the 'Tools' menu and look for 'Solver'. If Solver is on the list, open it. If Solver is not available, go to 'Add-Ins', tick the box for 'Solver' and click OK. You will then be able to install Solver; you may need your original software discs.

Once you have Solver open you can target particular levels of cash wealth or Bowley ratio (earnings / total_returns).

To target a cash wealth, insert the value of your required cash wealth in cell G34, open Solver, and set the target cell as H34 (cell H34 is a formula - do not enter any values in cell H34). Select 'Min' on 'Equal To:'. Under 'By Changing Cells:' select cell M32 - the Capital (K). Then select 'Solve'.

To target a Bowley ratio, insert the value of your required Bowley ratio in cell G37, open Solver, and set the target cell as H37 (cell H37 is a formula - do not enter any values in cell H37). Select 'Min' on 'Equal To:'. Under 'By Changing Cells:' select cell M32 - the Capital (K). Then select 'Solve'.
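What Solver is doing here is nothing more exotic than varying one input cell (Capital) until the absolute-difference cell reaches its minimum. A sketch of that search using bisection; the cash_wealth function below is a purely hypothetical stand-in for the spreadsheet's response to K, not the real model:

```python
def solve_min_abs(f, lo, hi, tol=1e-9):
    """Bisection on f to drive it to zero - the same job Solver does
    when minimising the absolute-difference cell by changing Capital (K)."""
    flo, fhi = f(lo), f(hi)
    assert flo * fhi <= 0, "the target must be bracketed by lo and hi"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if flo * fmid <= 0:   # sign change is in the lower half
            hi = mid
        else:                  # sign change is in the upper half
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

def cash_wealth(K):
    # Placeholder linear response; in the spreadsheet this value
    # emerges from the full model recursion.
    return 500.0 - 0.5 * K

target = 250.0
K_star = solve_min_abs(lambda K: cash_wealth(K) - target, 0.0, 1000.0)
```

Solver itself uses a more general gradient search, but for a single changing cell the effect is the same.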
Program
***********************************************************************
this writing should be in cell A1,,,,values for different models below required values entered in column M below,
,,,,Model A,Model B,Model C,Model D,Model E
,,,interest_rate,0.1,0.1,0.04,0.04,0.04,,,interest_rate,0.1,
,,,production_rate,0.2,0.2,0.2,0.2,0.4,,,production_rate,0.2,
,,,omega,0.4,0.4,0.4,0.5,0.5,,consumption rate,omega,0.4,
,,,upside_payout_ratio,1,1,1,0.7,0.8,,,upside_payout_ratio,1,
,,,downside_payout_ratio,1,1,1,0.7,0.8,,,downside_payout_ratio,1,
,,,lag,0,0,3,0,1,,,lag,0,(max 12)
,,,labour_required,1,1,1,1,1,,labour_required,1,
,,,A,1,1,1,1,1,,,A,1,
,,,B,4,4,4,4,4,,,B,4,
,,,C,100,100,100,100,100,,,C,100,
,,,capital (K),100,400,100,100,100,
,,,capital_wealth (W),100,100,100,300,100,
,,,,average,average,
,,,,timesteps,timesteps,timesteps,0,1,2,
,,,,1 to 200,21 to 200*
expected_returns,0,=M33*$M$3,=N33*$M$3,
,,,,=AVERAGE(N19:HE19),=AVERAGE(AH19:HE19),,,,wealth * omega,,goods_payments,0,=$M$5*M35,=$M$5*N35,
,,,,* allows equilibrium to form,production_rate * capital,,potential production,0,=$M$4*M32,=$M$4*N32,
smaller of two above,,production,0,=MIN(N19:N20),=MIN(O19:O20),
capital_employed,0,=N21/$M$4,=O21/$M$4,
,,,,=AVERAGE(N23:HE23),=AVERAGE(AH23:HE23),,,,quadratic,,earnings_income,0,=(($M$11*$M$12-$M$10*N22)*N22)/1000,=(($M$11*$M$12-$M$10*O22)*O22)/1000,
production_revenue,0,=N19-N23,=O19-O23,
downside returns,0,= (N18 *$M$7) + (N24 * (1 - $M$7)),= (O18 *$M$7) + (O24 * (1 - $M$7)),
upside returns,0,= (N18 *$M$6) + (N24 * (1 - $M$6)),= (O18 *$M$6) + (O24 * (1 - $M$6)),
returns_selector,0,"=IF(N24<N18,1,0)","=IF(O24<O18,1,0)",
,,,,=AVERAGE(N28:HE28),=AVERAGE(AH28:HE28),actual_returns,1,=(N25*N27+N26*(1-N27)),=(O25*O27+O26*(1-O27)),
0,0,0,0,0,0,0,0,0,0,0,0,0,=N19-N23-N28,=O19-O23-O28,
capital procured in line above - do not enter values in the line above,
capital_added,0,"=OFFSET(N31,-2,($M$8*-1),1,1)","=OFFSET(O31,-2,($M$8*-1),1,1)",
,,,,=AVERAGE(N32:HE32),=AVERAGE(AH32:HE32),capital (K),100,=M32+N31,=N32+O31,
,,,,=AVERAGE(N33:HE33),=AVERAGE(AH33:HE33),capital_wealth (W),100,=N28/$M$3,=O28/$M$3,
,,,,=AVERAGE(N34:HE34),=AVERAGE(AH34:HE34),0,=ABS(F34-G34),,,,cash_wealth,0,=M34-N19+N23+N28,=N34-O19+O23+O28,
,,,,=AVERAGE(N35:HE35),=AVERAGE(AH35:HE35),total_wealth,100,=N33+N34,=O33+O34,
,,,,=AVERAGE(N36:HE36),=AVERAGE(AH36:HE36),total_returns,0,=N23+N28,=O23+O28,
,,,,=AVERAGE(N37:HE37),=AVERAGE(AH37:HE37),0.7,=ABS(F37-G37),,,,earnings/total_returns,0,=N23/N36,=O23/O36,
set targets,minimise,
in cells,values above,
column G,in column H,
above,using solver.,
do not enter,values in,column H,
***********************************************************************
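The recursion the spreadsheet formulas above implement can be sketched in ordinary code. This is a hypothetical reconstruction from the garbled listing: the quadratic earnings term and the capital_wealth = actual_returns / interest_rate step are assumed readings of the damaged formulas, and all names are illustrative rather than the author's:

```python
def run_model(r=0.1, prod=0.2, omega=0.4, up=1.0, down=1.0, lag=0,
              A=1.0, B=4.0, C=100.0, K0=100.0, W0=100.0, cash0=0.0,
              steps=200):
    """One run of the reconstructed Model 4 recursion (Model A defaults)."""
    K, W, cash = K0, W0, cash0
    total_wealth = W0 + cash0
    procured_hist = [0.0] * lag        # queue implementing the OFFSET lag
    earnings = returns = 0.0
    for _ in range(steps):
        expected = W * r                        # expected_returns
        goods = omega * total_wealth            # goods_payments
        potential = prod * K                    # potential production
        production = min(goods, potential)      # smaller of the two
        k_emp = production / prod               # capital_employed
        earnings = ((B * C - A * k_emp) * k_emp) / 1000.0  # quadratic earnings
        revenue = goods - earnings              # production_revenue
        dn = expected * down + revenue * (1.0 - down)      # downside returns
        upr = expected * up + revenue * (1.0 - up)         # upside returns
        returns = dn if revenue < expected else upr        # returns_selector
        procured = goods - earnings - returns   # capital procured
        procured_hist.append(procured)
        K += procured_hist[-1 - lag]            # capital_added, lagged
        W = returns / r                         # capital_wealth
        cash += earnings + returns - goods      # cash_wealth
        total_wealth = W + cash
    return {"capital": K, "total_wealth": total_wealth,
            "bowley": earnings / (earnings + returns)}
```

Under this reading, the Model A parameters sit at a stationary state: capital stays at 100 and the Bowley ratio (earnings / total_returns) at 0.75, which matches the equilibrium behaviour the instructions describe.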
15. References

Abul-Magd AY, 2002. Wealth distribution in an ancient Egyptian society. Phys. Rev. E 66, 057104.
Acharya V, Pedersen L, 2005. Asset pricing with liquidity risk. Journal of Financial Economics 77, 375-410.
Ackland GJ, Gallagher ID, 2004. Stabilization of large generalized Lotka-Volterra foodwebs by evolutionary feedback. Phys. Rev. Lett. 93, doi:10.1103/PhysRevLett.93.158701.
Amihud Y, Mendelson H, Pederson H, 2005. Liquidity and Asset Prices. Foundations and Trends in Finance 1:4.
Atkins PW, 1994. The Second Law: Energy, Chaos, and Form. New York, Scientific American Library.
Atkinson AB, Bourguignon F (Eds.), 2000. The handbook of income distribution, V.1. Amsterdam and New York, Elsevier.
Ayres RU, Nair I, 1984. Thermodynamics and economics. Physics Today, November: 62-71.
Baek SK, Bernhardsson S, Minnhagen P, 2011. Zipf's law unzipped. New Journal of Physics 13, 043004.
Bai CB, Chang-Tai H, Qingyi Q, 2006. Returns to Capital in China. Brookings Papers on Economic Activity 2006(2).
Bandourian R, McDonald JB, Turley RS, 2002. A Comparison of Parametric Models of Income Distribution across Countries and Over Time. Working Paper No. 305. Luxembourg: Luxembourg Income Study.
Baquero G, Verbeek M, 2009. Investing: Evidence from Hedge Fund Investors.
Presented at the 2009 Conference of the European Financial Management Association. http://www.efmaefm.org/0EFMAMEETINGS/EFMA%20ANNUAL%20MEETINGS/2009-milan/EFMA2009_0646_fullpaper.pdf
Barbosa-Filho NH, Taylor L, 2006. Distributive and demand cycles in the US economy - a structuralist Goodwin model. Metroeconomica 57: 389-411.
BBC. GCSE Bitesize Science, Compete or Die, Predators and Prey. http://www.bbc.co.uk/schools/gcsebitesize/science/ocr_gateway/environment/2_compete_or_die2.shtml
BBC, 16 May 2010. Household wealth grows five-fold in past 50 years. http://www.bbc.co.uk/news/10118346
BBC News, 18 June 2010. Forth Valley Royal Hospital to use robot 'workers'. http://www.bbc.co.uk/news/10344849
Ben-Naim A, 2007. Entropy Demystified: The second law of thermodynamics reduced to plain common sense. World Scientific: Singapore.
Ben-Rephael A, Kadan O, Wohl A, 2008. The diminishing liquidity premium. Working paper presented to the NBER workshop on financial microstructures.
Bernholz P, 2003. Monetary Regimes and Inflation: History, economic and political relationships. Cheltenham: Edward Elgar.
Biais B, Glosten L, Spatt C, 2005. Market microstructure: A survey of microfoundations, empirical results, and policy implications. Journal of Financial Markets 8: 217-264.
Biopact, 2007. Brazilian biofuels can meet world's total gasoline needs.
Bodie Z, Kane A, Marcus AJ, 2009. Investments. McGraw Hill, Boston.
Borges EP, 2002. Empirical nonextensive laws for the geographical distribution of wealth. cond-mat/0205520.
Bouchaud JP, Mezard M, 2000. Wealth Condensation in a Simple Model of Economy. Physica A 282: 536-545, cond-mat/0002374.
Bouchaud JP, Farmer JD, Lillo F, 2009. How markets slowly digest changes in supply and demand. In Handbook of Financial Markets: Dynamics and Evolution, ed. T Hens, KR Schenk-Hoppe, pp. 57-130. Amsterdam: Elsevier.
Brav A, Graham J, Harvey C, Michaely R, 2005. Payout policy in the 21st century. Journal of Financial Economics 77, 483-527.
van den Berg H, 2010.
Mathematical Models of Biological Systems. OUP, Oxford.
Brealey RA, Myers SC, Allen F, 2008. Principles of corporate finance. New York: McGraw-Hill.
Britton NF, 2003. Essential Mathematical Biology. Springer Verlag.
Burgstaller A, 1994. Property and Prices. Cambridge: Cambridge University Press.
Campbell JY, 2003. Consumption-Based Asset Pricing. In G. Constantinides, M. Harris, and R. Stulz (eds), Handbook of the Economics of Finance, North-Holland, Amsterdam, 803-887.
Cassidy J, 2009. How Markets Fail. Allen Lane.
Champernowne DG, Cowell F, 1998. Economic Inequality and Income Distribution. Cambridge, UK: Cambridge University Press.
Chatterjee A, Sinha S, Chakrabarti B, 2007. Economic inequality: Is it natural? Current Science 92(10), 1383.
Chatterjee A, Chakrabarti BK, 2007. Kinetic exchange models for income and wealth distributions. The European Physical Journal B 60, 135-149.
Chen Z, Swan P, 2008. Liquidity Asset Pricing Model in a Segmented Equity Market. In Stock Market Liquidity, ed. Lhabitant & Gregoriou, Chapter 23.
Chordia T, Roll R, Subrahmanyam A, 2000. Commonality in liquidity. Journal of Financial Economics 56, 3-28.
Chordia T, Roll R, Subrahmanyam A, 2001. Market liquidity and trading activity. Journal of Finance 56, 501-530.
Chordia T, Roll R, Subrahmanyam A, 2002. Order imbalance, liquidity, and market returns. Journal of Financial Economics 65, 111-130.
Chordia T, Sarkar A, Subrahmanyam A, 2005. An empirical analysis of stock and bond market liquidity. Review of Financial Studies 18, 85-129.
Chordia T, Goyal A, Sadka G, Sadka R, Shivakumar L, 2009. Liquidity and the Post-Earnings-Announcement Drift. Financial Analysts Journal 65(4), 18-32.
Clementi F, Gallegati M, 2005. Income Inequality Dynamics: Evidence from a Pool of Major Industrialized Countries. Talk at the International Workshop on the Econophysics of Wealth Distributions, Kolkata.
http://www.saha.ac.in/cmp/econophysics/abstracts.html
Clementi F, Gallegati M, 2005. Power law tails in the Italian personal income distribution. Physica A 350, 427-438.
Cooper G, 2008. The origin of financial crises: Central banks, credit bubbles and the efficient market fallacy. London: Harriman House.
Corbett J, Jenkinson T, 1997. How is investment financed? A study of Germany, Japan, UK and US. Manchester School 65, supplement, 69-93.
Dewar RC, 2005. Maximum entropy production and nonequilibrium statistical mechanics. In Non-Equilibrium Thermodynamics and the Production of Entropy: Life, Earth, and Beyond, edited by A. Kleidon and R. D. Lorenz, Springer Verlag, Heidelberg, Germany.
Dial J, Murphy K, 1995. Incentives, downsizing, and value creation at General Dynamics. Journal of Financial Economics 37, 261-314.
Dragulescu A, Yakovenko VM, 2001. Evidence for the exponential distribution of income in the USA. Eur. Phys. J. B 20: 585-89.
Dynan KE, Skinner J, Zeldes S, 2004. Do the Rich Save More? Journal of Political Economy 112, 397-444.
Economist, 14 May 2005. Sugar-coating the piggy bank.
Economist, 10 December 2009. Fields of automation.
Economist, 20 February 2010. A different class.
Economist, 28 August 2010. The miracle of the cerrado.
Economist, 2010. The World in 2011. Economist publications.
US Energy Information Administration. U.S. Dry Natural Gas Proved Reserves. http://www.eia.doe.gov/dnav/ng/hist/rngr11nus_1a.htm
Eiteman WJ, Guthrie GE, 1952. The shape of the average cost curve. American Economic Review 42: 832-838.
Engel T, Reid P, 2006. Thermodynamics, Statistical Thermodynamics, and Kinetics. San Francisco, CA: Pearson-Benjamin-Cummings.
EWCO, 2010. Working poor in Europe - Norway. http://www.eurofound.europa.eu/ewco/studies/M0910026s/no0910029q.htm
Eyrian, 2007. Convection Cells, Wikipedia. http://commons.wikimedia.org/wiki/File:ConvectionCells.svg
Fama EF, French KR, 1992. The cross-section of expected stock returns. Journal of Finance 47, 427-465.
Farmer D, Patelli P, Zovko I, 2005. The predictive power of zero intelligence in financial markets. Proceedings of the National Academy of Sciences of the United States of America 102(6): 2254-2259.
Ferrero JC, 2010. The individual income distribution in Argentina in the period 2000-2009: A unique source of non stationary data. arXiv preprint: http://arxiv.org/ftp/arxiv/papers/1006/1006.2057.pdf
Foley DK, 1990. Recent developments in economic theory. Social Research 57(3), 666-87.
Foley DK, 1996. Statistical equilibrium in a simple labor market. Metroeconomica 47(2), 125-147.
Foley DK, 1996. Statistical Equilibrium Models in Economics. Presented at Summer Meetings of the Econometric Society. Available at: http://cedar.newschool.edu/~foleyd
Foley DK, 1999. Statistical Equilibrium in Economics: Method, Interpretation, and an Example. Paper presented at the XII Workshop on "General Equilibrium: Problems, Prospects and Alternatives", Certosa di Pontignano, Siena, Italy. http://homepage.newschool.edu/~foleyd/stateqnotes.pdf
Foley DK, 2002. Maximum Entropy Exchange Equilibrium. New School for Social Research, New York. http://homepage.newschool.edu/~foleyd/maxentexeq.pdf
Friedman M, 1962. Capitalism and Freedom. Chicago: University of Chicago Press.
Financial Times, May 21, 2010. Can't spot an asset bubble? Blogs, Money Supply.
Financial Times, Jan 12, 2011. Let There Be Light...Pools. Blogs, Alphaville.
Financial Times, Feb 8, 2011. Sugar Body Denounces 'Parasitic' Traders.
Financial Times, Nov 4, 2010. Mortgage arrears linked to low deposits.
Gabaix X, 2009. Power Laws in Economics and Finance. Annual Review of Economics 1, 255-93.
Gaffeo E, Gallegati M, Palestrini A, 2003. On the size distribution of firms: Additional evidence from G7 countries. Physica A 324: 117-123.
Gallegati M, Keen S, Lux T, Ormerod P, 2006. Worrying trends in econophysics. Physica A 370, 1-6.
Rosling H. Gapminder.
http://www.gapminder.org/
Georgescu-Roegen N, 1971. The Entropy Law and the Economic Process. Cambridge, MA: Harvard University Press.
Georgia Tech, 2010. ME6601: Introduction to Fluid Mechanics. http://www.catea.gatech.edu/grade/mecheng/mod8/mod8.html
Glazer M, Wark J, 2001. Statistical Mechanics: A Survival Guide. Oxford University Press.
Gollin D, 2002. Getting Income Shares Right. Journal of Political Economy 110(2): 458-474.
Gould H, Tobochnik J, 2010. Statistical and Thermal Physics: With Computer Applications. Princeton University Press.
Goyenko R, Holden C, Trzcinka C, 2009. Do liquidity measures measure liquidity? Journal of Financial Economics 92(2), 153-181.
Griffiths TL, Tenenbaum JB, 2006. Optimal predictions in everyday cognition. Psychological Science 17, 767-773.
Guardian Business, 24 May 2010. Top policymaker says Britain risks copying Japan's lost decade. http://www.guardian.co.uk/business/2010/may/24/kate-barker-bank-complacent-financial-crisis
Harrison F, 2005. Boom, Bust: House Prices, Banking and the Depression of 2010. Shepheard Walwyn, London.
Harvie D, 2000. Testing Goodwin: growth cycles in ten OECD countries. Cambridge Journal of Economics 24, 349-376.
Hayek F, 1931. Prices and Production. London: Routledge & Kegan Paul.
Hess A, Holzhausen A, 2008. The structure of European mortgage markets. Special Focus: Economy & Markets 01/2008. resources/en/images/wp_europaeische_hypothekenmaerkte_eng.pdf
Hirsch MW, Smale S, Devaney R, 2003. Differential Equations, Dynamical Systems, and an Introduction to Chaos. Academic Press, 2nd edition.
Homer S, Sylla R, 1996. A history of interest rates. 3rd ed., rev. New Brunswick, NJ: Rutgers University Press.
Hussman JP. Weekly Market Comment.
Hussman JP, 4 April 2011. Will the Real Phillips Curve Please Stand Up?
Jones C, 2002. A Century of Stock Market Liquidity and Trading Costs. Working Paper, Columbia University, New York, NY.
Joulin A, Lefevre A, Grunberg D, Bouchaud JP, 2008.
Stock price jumps: news and volume
play a minor role. Technical report. http://arxiv.org/PS_cache/arxiv/pdf/0803/0803.1769v1.pdf
Kaldor N, 1956. Alternative theories of distribution. Review of Economic Studies 23: 94-100.
Keen S, 1995. Finance and Economic Breakdown: Modeling Minsky. Journal of Post Keynesian Economics 17, 607-635.
Keen S, 2004. Debunking Economics: The Naked Emperor of the Social Sciences. Zed Books.
Keynes JM, 1936. The General Theory of Employment, Interest and Money. London: Macmillan.
Keynes JM, 1939. Relative Movements of Real Wages and Output. Economic Journal 49, 34-51.
Kleiber C, Kotz S, 2003. Statistical Size Distributions in Economics and Actuarial Sciences. New York: Wiley.
Kleidon A, Lorenz RD (Eds.), 2005. Non-equilibrium Thermodynamics and the Production of Entropy: Life, Earth, and Beyond. (Understanding Complex Systems) Springer, ISBN-10: 3540224955.
Korajczyk R, Sadka R, 2006. Pricing the Commonality Across Alternative Measures of Liquidity. Working paper, Northwestern University. http://www.efmaefm.org/0EFMAMEETINGS/EFMA%20ANNUAL%20MEETINGS/2007-Vienna/Papers/0034.pdf
Korajczyk R, Sadka R, 2005. Are momentum profits robust to trading costs? Journal of Finance 59, 1039-1082.
Kumar A, 2006. Lotka Volterra Model. http://www.personal.psu.edu/auk183/LotkaVolterra/LotkaVolterra1.html
Kurz HD, Salvadori N, 1995. Theory of Production: A Long-Period Analysis. Cambridge University Press, Cambridge.
Kydland FE, Prescott EC, 1990. Business Cycles: Real Facts and a Monetary Myth.
Federal Reserve Bank of Minneapolis Quarterly Review 14 (Spring 1990), 3-18.
Langlois C, 1989. Markup pricing versus marginalism: a controversy revisited. Journal of Post Keynesian Economics 12: 127-151.
Lee FS, 1999. Post Keynesian Price Theory. Cambridge: Cambridge University Press.
Lee KH, 2005. The World Price of Liquidity Risk. Ohio State University working paper. http://www.cob.ohio-state.edu/fin/dice/seminars/LCAPM_lee.pdf
Lettau M, Ludvigson S, 2001. Consumption, aggregate wealth, and expected stock returns. Journal of Finance 56, 815-849.
Levy M, Solomon S, 1996. Power Laws are Logarithmic Boltzmann Laws. International Journal of Modern Physics C 7(4), 595-600.
Lewis WA, 1954. Economic Development with Unlimited Supplies of Labor. The Manchester School of Economic and Social Studies 22(2): 139-191.
Loster M, 2006. Total Primary Energy Supply - From Sunlight. http://www.ez2c.de/ml/solar_land_area/
Lotka AJ, 1925. Elements of physical biology. Baltimore: Williams & Wilkins Co.
Liu W, 2006. A liquidity-augmented capital asset pricing model. Journal of Financial Economics 82(3), 631-671.
Liu W, 2009. Liquidity and Asset Pricing: Evidence from Daily Data over 1926 to 2005. Nottingham University Business School Research Paper No. 2009.03.
Lyons RK, 2001. The Microstructure Approach to Exchange Rates. MIT Press, Cambridge, Massachusetts.
Madhavan A, 2000. Market microstructure: A survey. Journal of Financial Markets 3, 205-258.
Mankiw G, 2004. Principles of Economics (3rd edition). International student edition, Thomson South Western.
Maslow AH, 1954. Motivation and Personality. Harper and Row, New York.
Williamson SH, Lawrence H. Measuring Worth.
Mehrling P, 2000. What is Monetary Economics About? Barnard College, Columbia University mimeo. http://www.econ.barnard.columbia.edu/faculty/mehrling/papers/What%20is%20Monetary%20Economics%20About.pdf
Mehrling P, 2010.
Monetary Policy Implementation: A Microstructure Approach. In David Laidler's Contributions to Macroeconomics, ed. Robert Leeson, pp. 212-232. Palgrave Macmillan, 2010.
Miles D, Scott A, 2002. Macroeconomics: Understanding the Wealth of Nations. John Wiley and Sons.
Mirowski P, 1989. More Heat Than Light: Economics as Social Physics. New York, Cambridge University Press.
Mirowski P, 2010. The Great Mortification: Economists' Responses to the Crisis of 2007-(and counting). http://www.iasc-culture.org/publications_article_2010_Summer_mirowski.php
Mitzenmacher M, 2004. A Brief History of Generative Models for Power Law and Lognormal Distributions. Internet Mathematics 1(2): 226-251.
Napier R, 2007. Anatomy of the Bear. Harriman House Ltd.
Nikkei Electronics Asia, April 2010. Photovoltaic Cells on the Verge of Explosive Growth. http://techon.nikkeibp.co.jp/article/HONSHI/20100326/181377/
Newman MEJ, 2005. Power laws, Pareto distributions and Zipf's law. Contemporary Physics 46: 323-351.
New Scientist, 04 March 2008. Shockwave traffic jam recreated for first time.
Nirei M, Souma W, 2007. A two factor model of income distribution dynamics. Review of Income and Wealth 53: 440-59.
Nise NS, 2000. Control Systems Engineering. 3rd ed. London: John Wiley.
Norton MI, Ariely D, 2010. Building a Better America - One Wealth Quintile at a Time. Forthcoming in Perspectives on Psychological Science. http://ncsjp.org/news/wp-content/uploads/2010/09/Building-a-Better-America.pdf
Noser G, 2010. Whose Stock Market is this Anyway? Institutional Investor.
Office for National Statistics, 2003. New Earnings Survey. Stationery Office Books. http://www.statistics.gov.uk/statbase/Product.asp?vlnk=5749&More=Y
Office for National Statistics, 2004. Housebuilding completions: by sector: Social Trends 34.
http://www.statistics.gov.uk/StatBase/ssdataset.asp?vlnk=7317&More=Y
Ozawa H, Ohmura A, Lorenz RD, Pujol T, 2003. The second law of thermodynamics and the global climate system - A review of the maximum entropy production principle. Rev. Geophys. 41: 1018.
Pareto V, 1896. Cours d'economie politique. Reprinted as a volume of Oeuvres Completes (Droz, Geneva, 1896-1965).
Pastor L, Stambaugh R, 2003. Liquidity risk and expected stock returns. Journal of Political Economy 113, 642-685.
Pepper G, Oliver M, 2006. The Liquidity Theory of Asset Prices. Wiley Finance.
Peterson I, 1993. Newton's Clock. WH Freeman & Co Ltd.
Pettis M, 2001. The Volatility Machine: Emerging Economies and the Threat of Financial Collapse. Oxford University Press.
Porter R, 2008. The Multiple Dimensions of Market-Wide Liquidity: Implications for Asset Pricing. In Stock Market Liquidity, ed. Lhabitant & Gregoriou, Chapter 21.
Ranaldo A, 2008. Intraday Market Dynamics Around Public Information Arrivals. In Stock Market Liquidity, ed. Lhabitant & Gregoriou, Chapter 11.
Reed WJ, Hughes BD, 2002. From gene families and genera to incomes and internet file sizes: Why power laws are so common in nature. Phys. Rev. E 66, 067103.
Reinhart CM, Rogoff K, 2009. This Time is Different: Eight Centuries of Financial Folly. Princeton: Princeton University Press.
Ricardo D, 1817 (1951). On the Principles of Political Economy and Taxation. The Works and Correspondence of David Ricardo, vol. 1, ed. P. Sraffa, Cambridge: Cambridge University Press.
Robbins LC, 1932. An Essay on the Nature and Significance of Economic Science. London: Macmillan.
Ruhla C, 1992. The physics of chance: From Blaise Pascal to Niels Bohr. Oxford: Oxford University Press.
Schrodinger E, 1944. What is Life? Cambridge Univ. Press, Cambridge.
Shiller R, 2010. Robert Shiller Online Data. http://www.econ.yale.edu/~shiller/data.htm
Simkin MV, Roychowdhury VP, 2006. Re-inventing Willis. Preprint: http://arxiv.org/abs/physics/0601192
Sinha S, 2005.
The Rich Are Different!: Pareto Law from asymmetric interactions in asset exchange models. In Econophysics of Wealth Distributions, 177-184, eds. A. Chatterjee, B. K. Chakrabarti and S. Yarlagadda, Springer Verlag, New York, 2005.
Slanina F, 2004. Inelastically scattering particles and wealth distribution in an open economy. Phys. Rev. E 69(4), 046102.
Smith Y, 4 Oct 2010. Is Liquidity All That It's Cracked Up to Be?
Smith E, Foley DK, 2008. Classical thermodynamics and economic general equilibrium theory. Journal of Economic Dynamics and Control 32, 7-65.
Smithers A, 2009. Wall Street revalued: imperfect markets and inept central bankers. John Wiley & Sons.
Solomon S, 2000. Generalized Lotka Volterra (GLV) models [of stock markets]. In G. Ballot and G. Weisbuch, editors, Applications of Simulation to Social Sciences, pages 301-322. Hermes Science, Paris. http://arxiv.org/PS_cache/cond-mat/pdf/9901/9901250v1.pdf
Souma W, 2001. Universal structure of the personal income distribution. Fractals 9, 463-470.
Souma W, Nirei M, 2005. Empirical study and model of personal income. In Econophysics of Wealth Distributions, eds. Chatterjee, Yarlagadda, and Chakrabarti, Springer-Verlag, pp. 34-42.
Pakko M, 2004. Labor's Share. St Louis Fed, National Economic Trends, Aug 2004. http://research.stlouisfed.org/publications/net/20040801/cover.pdf
Stoll HR, 2003. Market microstructure. In Constantinides G, Harris M, Stulz R (Eds.), Handbook of the Economics of Finance. North-Holland, Amsterdam. http://202.194.27.133/jrtzx/uploadfile/pdf/books/handbook/15.pdf
Strogatz SH, 2000. Nonlinear dynamics and chaos: with applications to physics, biology, chemistry and engineering. Cambridge, MA: Westview/Perseus Books Group.
Stutzer M, 2000. Simple entropic derivation of a generalised Black-Scholes option pricing model. Entropy 2, pp. 70-77.
Subramanian A, 2008. What Is China Doing to Its Workers? Peterson Institute for International Economics.
Conway E, 01 May 2010. Radical tax on debt put to parties. http://www.telegraph.co.uk/finance/economics/7663978/Radical-tax-on-debt-put-to-parties.html
Spencer R, 2010.
Welfare reform: What George Osborne and IDS can learn from socialist paradises. http://blogs.telegraph.co.uk/news/richardspencer/100050874/welfare-reform-what-george-osborne-and-ids-can-learn-from-socialist-paradises/
Tribus M, McIrvine EC, 1971. Energy and information. Scientific American 224(3), 179-88.
United Nations - World Population Prospects - The 2008 Revision, via wikipedia.
United Nations - World Population Prospects - The 2008 Revision. http://esa.un.org/unpd/wpp2008/pdf/WPP2008_Highlights.pdf
Ultra PRT, 2010. ULTra at London Heathrow Airport.
Volterra V, 1926. Variazioni e fluttuazioni del numero d'individui in specie animali conviventi. Mem. R. Accad. Naz. dei Lincei, Ser. VI, vol. 2.
Ward S. Money Moves Markets.
Ward S, 2008. A (very) long term look at US equities. Money Moves Markets.
Wilkinson R, Pickett K, 2009. The Spirit Level: Why More Equal Societies Almost Always Do Better. Allen Lane.
Willis G, Mimkes J, 2005. Evidence for the Independence of Waged and Unwaged Income. Presented at the 'Econophysics of Wealth Distributions' International Workshop, Kolkata, India, March 2005. http://arxiv.org/abs/cond-mat/0406694
Willis G, 2005. Relieving Poverty by Modifying Income and Wealth Distributions. In Econophysics of Wealth Distributions, Springer Verlag, ISBN 8847003296, Sept 2005.
Woo B, Rademacher I, Meier J, 2010. Upside Down - The $400 Billion Federal Asset-Building Budget. The Annie E. Casey Foundation & CFED. http://www.aecf.org/~/media/Pubs/Initiatives/Family%20Economic%20Success/U/UpsideDownThe400BillionFederalAssetBuildingBudget/033%2010_UpsideDown_final.pdf
Wray LR, 1998. Understanding Modern Money: The Key to Full Employment and Price Stability. Cheltenham: Edward Elgar.
Wright I, 2005. The social architecture of capitalism. Physica A 346, 589-620.
Wright I, 2009.
Implicit microfoundations for macroeconomics. Economics (e-journal) 3, 2009-19.
Wyart M, Bouchaud JP, Kockelkoren J, Potters M, Vettorazzo M, 2008. Relation between bid-ask spread, impact and volatility in double auction markets. Technical report, 2006. http://arxiv.org/PS_cache/physics/pdf/0603/0603084v3.pdf
Young AT, 2010. One of the things we know that ain't so: is US labor's share relatively stable?
Journal of Macroeconomics 32(1), March 2010, 90-102.
16. Figures

Figure 1.1.1 - UK Income Data 2002 (number of individuals against income band, with log-normal fit)
Figure 1.1.2 - UK Income Data 2002 (logarithmic scale, with log-normal fit)
Figure 1.1.3 - UK Income data 2002 (cdf)
Figure 1.1.4 - US Services Weekly Income (1992, with log-normal fit)
Figure 1.1.5 - US Services Weekly Income (logarithmic scale, with log-normal fit)
Figure 1.1.6 - UK Income Data 2002 (with fits)
Figure 1.1.7 - UK Income Data 2002 (logarithmic scale, with GLV fit)
Figure 1.2.1.1 - Snowshoe hare and Canadian lynx populations, 1845-1925
Figure 1.2.1.2 - Prey population and predator population against iterations
Figure 1.2.1.3 - Phase space (rabbits against foxes)
Figure 1.2.1.4 - Prey population and predator population against iterations
Figure 1.2.1.5 - Phase space (rabbits against foxes)
Figure 1.3.1 - Circular flow: firms (produce and sell goods and services; hire and use factors of production) and households (buy and consume goods and services; own and sell factors of production), linked by the markets for goods and services (firms sell, households buy; revenue against spending) and the markets for factors of production (households sell, firms buy; labour, land, and capital against wages, rent, and profit; income)
Figure 1.3.2 - Flows between firms and households: income (Y) as wages, rent, interest, dividends, profit; investment (I); factor services (labour, capital, land, etc); goods & services (G); consumption (C); money paid for goods & services
[Figure 1.3.3 - Flow diagram between Firms (Capital = K) and Individuals (Wealth = W): e = earnings (wages); n = returns (profit, rent, interest, dividends, etc) on capital, land, etc; L = labour; y = goods and services; My = money paid for goods and services; consumption (C)]

Figure 1.3.4 - Table 14.1, The Financing of Investment: flow-of-funds estimates (%), 1970-1994

            Internal   Bank      Bond      New       Other
            finance    finance   finance   equity
Germany     78.4       12.0      -1.0      -0.02     10.6
Japan       69.9       30.1       3.4       3.4      -6.8
UK          95.6       15.0       3.8      -5.3      -9.1
USA         94.0       12.8      15.3      -6.1     -16.0

Note: Internal finance comprises retained earnings and depreciation. The "other" category includes trade credit and capital transfers. The figures represent weighted averages, where the weights for each country are the level of real fixed investment in each year in that country.
Source: Corbett and Jenkinson, "How Is Investment Financed?", The Manchester School (1996), vol. LXV, pp. 69-94.
[Figure 1.3.5 - The firm as a dissipative structure: x = inputs (raw materials, power, intermediate goods and services, etc) with Mx = money paid for inputs; e = earnings (wages); n = returns (profit, rent, interest, dividends, etc); Firms (Capital = K) add value (negentropy) and emit wastes and heat (increase in entropy); y = outputs = goods and services, with My = money paid for goods and services; capital and labour (L) act as negentropy sources; consumption (C) is an increase in entropy]
[Figure 1.3.6 - The production chain as a cascade of dissipative structures: Mineral Extraction → Intermediate Goods Manufacturers → Consumer Goods Manufacturers → Retailers. Each stage takes inputs (x1, x2, x3) against money paid for inputs (Mx1, Mx2, Mx3), employs capital and labour, pays out earnings and returns, and sheds waste; the final stage delivers y = goods and services against My = money paid for goods and services, ending in consumption]