Online Book "Physical Economy of National Development"

The Physical Economy of National Development

By Jonathan Tennenbaum

[Version 27.9.2015]

Table of Contents
 

Message to the reader - in lieu of a preface

Introduction

 

Part I: Essentials of Physical Economy
 

Chapter 1    The center of physical economy is the human being
 

1.1  Economy and human happiness

1.2  The economics of optimism

1.3  Economic policy
 

Chapter 2    Growth and development without limits
 

2.1  History refutes Malthusianism   

2.2  The relativity of resources

2.3  Overcoming limits: examples

2.4  The human mind – an unlimited resource

2.5  Economics and the self-development of human creativity
 

Chapter 3    An economy as a living organism: structure and metabolism
 

3.1  An economy as a special type of living organism

3.2  The metabolism of a biological organism

3.3  The metabolism of an economy – a first approximation

3.4  Infrastructure

3.5  Machine tools and the investment goods sector

3.6  Studying the structure of the economic metabolism using the input-output method

3.7  Surplus production – the physical “profit” of an economy

3.8  Growth or contraction?
 

Chapter 4   Nonlinear development
 

4.1  The limits to growth in the linear mode

4.2  Strong nonlinearity: changing the relationship of Man to the Universe

4.3  Economic development as a morphogenetic process

4.4  “Between the jumps”: the assimilation of scientific and technological breakthroughs

4.5  The hierarchical ordering of scientific and technological progress

4.6  Modes of physical-economical development

4.7  Lack of strongly nonlinear development in the recent period

4.8  Perpendicular axes of optimization of the economic process
 

Chapter 5   A universal metric of physical-economic development

 

5.1  Creative discovery -- overcoming the relative boundedness of a given mode of thought

5.2  Fountains of discovery – higher hypotheses and the concept of “hypothesis of the higher hypothesis”

5.3  The problem of defining a universal metric of economic development

5.4  Relative potential population density
 

Chapter 6    Special topics in physical economy
 

6.1  Characteristics of physical-economic development

6.2  Energy and physical-economic development

6.3  The principle of increasing power density of technology

6.4  Examples of “quantum jump” increases in the power density of technology

6.5  Other forms of power density: urbanization and infrastructure

6.6  Shift in labor force structure toward increasingly science-intensive activities

6.7  A new concept of living standards
 

Supplements to Part I
 

How do we know that scientific and technological progress can continue indefinitely?

What about Nature? How a rational environmentalism can contribute to economic development  

 

Part II  Physical-Economic Development: Pathways to the Future
 


Chapter 7    Background: the 1970s branching point 
 

7.1  Introduction

7.2  The fateful turn in U.S. economic policy

7.3  A road to disaster

7.4  International impact

7.5  Cultural effects
 

Chapter 8  Relaunching full-scale development   
 

8.1  Introduction

8.2  Where will the jobs come from?

8.3  Where will the money come from?

8.4  Education and training

8.5  Informing the public

8.6  Cultural paradigm shift

8.7  Examples of the multiplier effect: the Apollo program and the invention of the transistor
 

Chapter 9   The Knowledge Generator Economy
 

9.1  The future of industrial society

9.2  Beyond industrial society: the thirst for knowledge as motor of economic growth

9.3  Characteristics of a Knowledge Generator Economy

9.4  Preparing the transition to a Knowledge Generator Economy

9.5  Visionary projects

9.6  Project I: A new era of space exploration and mass participation in scientific research

     50 million astronomers?

9.7  Project II: Infrastructural revolution – the end of fossil fuels

1.  A gigantic undertaking    

2.  Bringing fission out of the stone age

3.  Fusion

4.  Nuclear batteries

5.  100% electricity-based road transport

6.   Electric aircraft

7.  Nuclear-powered shipping
 

Chapter 10   The problem of alienation
 

10.1  Objections to our proposals

10.2  Alienation and the concept of work

10.3  The medieval craftsman as an exception   

10.4  Where does the extreme alienation of present-day society come from?

10.5  Taylorism

10.6  The mental price of “scientific management”

10.7  Automation and robotization -- the end of Taylorism?

10.8  Alienation in science

10.9  Alienation and the fallacies of “artificial intelligence”

10.10  Creativity as randomness?

10.11  Alienation in modern biology

10.12  “Emergence” – a miracle?

10.13  Alienation at the foundations of physics
 

Chapter 11    Renaissance humanism and the pathway to a Knowledge Generator Economy
 

11.1  The spirit of the Renaissance

11.2  The individual nature of scientific discovery

11.3  Overcoming alienation – lessons for today

11.4  The educational model of Wilhelm von Humboldt

11.5  Alexander von Humboldt’s “Cosmos”

11.6  Excerpts from “Cosmos”

1. Nature and the development of human knowledge

2. The infinite horizon of scientific discovery

3. Science, beauty and economic development

4. The joy of thinking

5. The enjoyment of Nature

11.7  Recipe for success
 

Chapter 12    Science for mass participation
 

12.1  Introduction

12.2  The phenomenological approach to the study of Nature

12.3  Work for everyone: biology, medicine, geosciences, astronomy and materials research

12.4  Citizen Science

12.5  Organizing scientific research in a Knowledge Generator Economy

12.6  Leibniz’s strategy

12.7  Modern lessons
 

Supplements to Chapter 12
 

Creativity and the phenomenological method

Kepler versus Newton

Limits of the analytical/reductionist method

The complementarity of phenomenological and analytical/reductionist methods

Conclusion

 

Part III: National Development from the Standpoint of Physical Economy

 

Chapter 13: National Economy
 

13.1 The essential role of the nation-state

13.2  Economic instruments of the state

13.3  Building the nation: the function of Great Projects

Infrastructure projects

High-technology projects  

13.4  National banking and the national financial system

13.5  Private enterprise or state ownership?  Protectionism or free trade? Economic planning or market mechanisms?

Supplements to Chapter 13

Globalization vs. national economy – a political note

The role of the state in the U.S. economy today


Chapter 14  Economic policy and the battle of systems
 

14.1 The “American System” versus the “British System”

14.2 Inventions and the spirit of the American System

Benjamin Franklin

Abraham Lincoln

14.3  The spread of the American System: examples of Germany, Russia and Japan

Friedrich List and the American System in Germany

Sergei Witte and the American System in Russia

The American System in Japan
 

Chapter 15  Examples of physical-economic development in the postwar period
 

15.1  The United States

   Prelude: the New Deal

   The war mobilization

   The Manhattan Project

   The postwar boom

   Vannevar Bush and the launching of “Big Science”

   Nuclear power

   Medical use of isotopes

   Lasers

   Microelectronics

   Computers

   The Sputnik Shock and the Apollo Project

   Internet



15.2 The postwar “economic miracles” in Germany, France and Japan   

   The Bretton Woods system and the Marshall Plan

   Technology transfer – key to the postwar “economic miracles” in Western Europe

15.3  West Germany

15.4  France: the “Thirty Glorious Years”

15.5  The post-war “Japanese miracle”

15.6  The rise of the Soviet Union

   Heavy industry

   Science and technology

   Nuclear energy

   The Sputnik and Yuri Gagarin

   Defeat in the race to the Moon

   
   The “Era of Stagnation”

 

Chapter 16  The rise of China

 

16.1 From Sun Yatsen to Mao Zedong

16.2  Soviet-style industrialization

16.3  Reform and the two-track policy

16.4  “Catching the mice”: modernization with the world’s best technologies

16.5  Pillar industries and the systematic use of directed credit

16.6  Mass production of scientists and engineers

16.7  The greatest infrastructure boom in history

16.8  Urbanization – hundreds of new cities and towns

16.9  The Future of China
 


Conclusion: The future of physical economy

 

 

 
Message to the reader - in lieu of a preface

Physical economy is concerned with principles of economic development. Putting aside conventional monetary measures of economic performance, it focuses on economic growth and development as defined in real physical terms. An economy is analyzed as a special kind of living organism evolving under the influence of scientific and technological progress, which is the chief source of increases in the productive power of society. The inseparable connection between physical-economic development and human creativity defines a deeper level of physical economy, focused on the creative process itself.

The following text is taken from the draft of a forthcoming book entitled “The Physical Economy of National Development” which is intended for publication first in Brazil (in Portuguese translation), and later hopefully in other languages. Meanwhile, in view of the urgent practical importance of the topic, and in the desire to stimulate discussion and to obtain ideas and constructive criticism for future versions of my book, I have decided to make a large section of the English draft available to the public in advance. References and footnotes will be supplied when the final text is published.

It is important to distinguish the meaning of the term “physical economy” employed in this book from others encountered in the literature. Nowadays “physical economy” is most often used essentially as a synonym for “energy economics” or “ecological economics”. In those approaches the object of study is not economic development per se and how to achieve it, but rather the resource dependence and environmental impact of economies.

Wassily Leontief comes much closer to the subject of this book with his early efforts to apply the input-output method to the problem of economic development. There is a very big difference, however, which becomes clearest when we consider the general case of development driven by the progress of science and technology. Leontief’s input-output tables provide a useful, albeit linear representation of what we call the metabolism or metabolic state of an economy as it exists at a given moment in time. Taking into account the fact that this metabolic state changes with time, we are led to a preliminary picture of a developing economy as an ordered sequence of input-output tables whose coefficients and dimensionalities change in a nonlinear fashion as new technologies are introduced. The fundamental problem posed by this representation lies in the fact that the process by which scientific and technological progress is actually generated, and which involves the irreducible element of human creativity, is located entirely outside the domain of analysis, as a kind of exogenous factor. And yet it is exactly that element which drives real economic development forward.
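The input-output representation just described can be sketched concretely. The following toy model (in Python, with invented two-sector coefficients standing in for no real economy) shows how gross output follows from an input-output table and a final demand, and how a reduction of the table’s coefficients – a crude stand-in for technological improvement – enlarges the physical surplus. It is a minimal illustration of the linear Leontief scheme, not a model of any actual economy.

```python
import numpy as np

# Toy two-sector input-output table (illustrative coefficients only).
# A[i, j] = units of good i consumed to produce one unit of good j.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])

d = np.array([10.0, 20.0])  # final demand: the net physical surplus

# Gross output x must satisfy x = A @ x + d, i.e. (I - A) x = d.
x = np.linalg.solve(np.eye(2) - A, d)

# A new technology that lowers the input coefficients changes the table:
# the same gross output now supports a larger final demand in each sector.
A_new = A * 0.8
d_new = x - A_new @ x
```

With these toy numbers the gross output comes out to roughly (25, 33.3), and after the coefficient reduction the surplus `d_new` exceeds `d` in both sectors. What the linear scheme cannot represent, as argued above, is the process that generates the coefficient changes themselves.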

One encounters the same difficulty in the remarkable attempts by the late Pobisk Kuznetsov and others in the former Soviet Union to unite physics, biology and economics into a single science. The Kuznetsov approach to physical economy traces its origins (among other things) to the 1881 treatise by the Ukrainian socialist Sergei A. Podolinsky on “Human Work and Its Relation to the Distribution of Energy”, which pioneered a thermodynamic approach to economic processes. The Soviet approaches, however, seem not to have addressed the fundamental paradox inherent in any attempt to derive universal laws of economics from the laws of physics as presently known. That paradox lies in the fact that science is constantly developing; in future stages of development, economies will surely make use of scientific discoveries that have not yet been made today, and their behavior as physical processes will not "fit in the box" of present-day physics. Truly universal principles of economics must embrace this essential evolutionary characteristic of human society.    

In short it appears that in the varieties of “physical economy” identified above, human creativity, as embodied in the generative process of science and technology (and of human knowledge more generally), remains essentially outside the scope of the chosen analytical methods. Whereas for physical economy in my sense, the generation of human knowledge must be the center of attention -- the center around which everything in economics revolves.

The notion of physical economy elaborated in this book thus corresponds roughly to that of Lyndon LaRouche, with whom the author was connected for many years. I have also benefitted from extensive work by his associates on a wide range of topics relevant to physical economy, alongside other contemporary and historical sources to which my research has led me. Unfortunately, for reasons connected with the biases and fallacies of presently dominant economic thinking, as well as LaRouche’s polemical excesses and often rather turgid style of writing, his conception of physical economy as a kind of universal science has not received the attention it merits.

My object has been to rethink the field of physical economy, to promote its study and practical application, and to elaborate new ideas and approaches that may prove useful for its future development. I have also permitted myself to put forward a number of visionary proposals, centered on what I call a “Knowledge Generator Economy” and the challenge of overcoming the deep alienation of present-day society. Included are projects for involving large masses of the population directly in scientific research.

Needless to say, everything concerned with economics raises political issues, and the world situation today poses very serious ones indeed. The purpose of my book, however, is not to deal with political issues per se. It is left to the reader to draw his or her own conclusions. For similar reasons polemics have been avoided as much as possible in favor of a cool-headed, reasoned approach.

I trust that my book, whatever its shortcomings, will suffice to demonstrate the extraordinary importance and fruitfulness of physical economy, and inspire others to pursue its application and further development in the interest of mankind’s future.

Dr. Jonathan Tennenbaum

Berlin, September 27, 2015

www.physicaleconomy.com

 

 

Introduction: What is physical economy?

Physical economy is concerned with principles for the economic development of nations. In contrast to conventional economic theories, it focuses exclusively on the real economy, putting aside financial and monetary measures of economic performance. More precisely, physical economy studies economies as physical processes – as living organisms of a special sort. Analyzing an economy as a whole as an interconnected process of physical production and consumption, we arrive at a notion of physical-economic productivity (or productive power) which is independent of financial measures. We identify scientific and technological progress as the main driver of increases in real productivity, and we study how economies evolve through an unending series of stages of development and structural transformation, under the impact of scientific and technological progress and related advances in human knowledge. From the general characteristics or “laws” of this evolutionary process we derive principles for the long-term development of nations, which are of vital importance for economic policy-making.

The intimate relation between physical-economic development and human creativity – in the strict sense exemplified by the process of scientific discovery – leads to a further, profound field of investigation. On the one hand, human creativity is shown to be an essentially unlimited resource, providing the means for progressively overcoming all conceivable limits to the future expansion of human activity on the Earth and beyond. On the other hand, to the extent the economy of a nation is organized to foster maximum rates of scientific and technological progress, the economy becomes an instrument for realizing the creative potentials of the population, through their involvement in the generation of new knowledge and its assimilation into the economic process as a whole. In this context we discuss the problem of alienation, which is the main barrier standing in the way of involving the masses of the population in scientific activity, and propose a future direction of economic development leading to the emergence of what we call a “Knowledge Generator Economy”. The essential goal of physical economy is human happiness: not only freedom from material want, but the creation of conditions in which the creative potential of each individual is realized to the maximum extent possible, in an environment of progress and a general expansion of knowledge.

The present book elaborates the principles of physical economy and illustrates their practical application with examples from successful periods of economic development of various nations. These include the rapid industrial development of the United States and Germany in the 19th century under the so-called "American System" or "National System" of political economy (Chapter 14) and the "economic miracles" which took place from the end of World War II until the 1970s in the U.S., France, West Germany, Japan and the Soviet Union (Chapter 15). These examples show how successful physical-economic development can be realized under very different institutional arrangements and even in different economic systems altogether. In Chapter 16 we examine the more recent example of China in the period following its economic reforms.

Although financial issues are not the focus of this book, we give some basic indications and historical examples of the method of productive credit-generation, and how it has been used – together with a suitably structured and directed banking system – to finance physical-economic development. We argue that any trajectory of economic development that is feasible in physical, technological and human-resource terms can in principle be financed. This requires, however, that the financial system and financial policies be strictly subordinated to the requirements of the physical economy. Full (or nearly full) employment is also attainable on a sustained basis, but only through discarding the presently dominant neo-liberal models.

The importance of physical economy today

The approach of physical economy in our sense differs radically from that of academic economics today. Physical economy does not deal with the sociological and political aspects of economics. It is not a study of economic behavior. It is not about supply and demand, markets and prices, equilibrium, or conjunctural cycles. All of these things can be important in a tactical sense, but on the level of long-term strategy it is science and technology, infrastructure, education and culture that determine the economic fate of nations.

The principles of physical economy are not only urgently needed for decision-makers in governments and institutions, but should belong to the common knowledge of the citizenry.

The striving for economic development has led to very different outcomes in different nations and historical periods. Some nations have risen to become prosperous, while others remained trapped in backwardness. Often periods of apparent prosperity are interrupted by devastating crises, placing the future of entire nations in jeopardy. Too often, development is aborted or one-sided, leading to an unstable mixture of impoverished regions or population groups living alongside areas of great economic vigor and wealth – a condition we find today in the world as a whole.

Success or failure in economic development depends on many factors. Of decisive importance in the long run are the basic conceptions of economics and of development which shape attitudes and decisions at all levels of society. This is demonstrated again and again in the current and past history of nations. The successful development of a nation requires much improvisation. But like navigating a ship in a stormy ocean, we must always know the direction we are aiming for. That is the subject of physical economy.

Flawed ways of thinking about economics are rampant today – in governments, in the population generally and not least of all in the ranks of professional economists. The most damaging flaws lie not in technical details, but on the most fundamental level, in the concept of economics itself. It is here that physical economy provides an urgently needed corrective. To make our point clear, we choose an example that will be familiar to all.

GDP growth – the False God of the Economists

A characteristic manifestation of flawed economic thinking is the widespread tendency to confuse real economic development with the artificial notion of “economic growth” defined in terms of the so-called Gross Domestic Product (GDP).

“GDP growth” could indeed be called the False God of the Economists – worshipped on a daily basis by governments and leading institutions around the world as a central focus of economic policy-making. Many governments and central banks routinely use economic models to choose between alternative policies, on the basis of their projected effect on the GDP. The GDP is seductive – but also often extremely misleading -- because it is a single number. It can be a useful indicator in the analysis of conjunctural cycles and some other economic phenomena. It is useful for investors in stocks and bonds. But the practice of utilizing GDP growth as the standard for evaluating economic development policies is a fundamental error whose consequences can be disastrous. 

The worship of GDP reflects flawed economic thinking on a number of levels. Identifying them briefly provides a good way to show why the principles of physical economy are urgently needed today. 

The most obvious, banal source of error is the tendency to see economics primarily through the prism of money and monetary measures of economic value. People tend to forget that money is just paper or, in the era of computerized transactions, just bytes of information. We cannot eat money, and we cannot build houses, factories or airplanes from money. All these things must be produced by real physical processes, involving living human beings and transformations of matter and energy which as such belong to the domain of physical science.

Experience shows that monetary valuations of commodities or services – i.e. their market prices – often diverge greatly from the degree of their actual importance to the physical process upon which the existence of a nation, its population and its economic activity depend. Since GDP is calculated on the basis of monetary valuations of the goods and services produced and consumed, it inherits this tendency to diverge from reality. Thus GDP growth can go hand-in-hand with gigantic speculative bubbles and unsustainable investment booms; with the failure of governments to maintain and renew essential infrastructure; with widespread technological obsolescence and waste; and with a falling educational and cultural level of the general population.
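The divergence can be made concrete with a deliberately simple numerical toy. All quantities, prices and sector names below are invented for illustration: nominal GDP registers robust “growth” even while the physical output of a basic good declines, because prices in an inflating sector rise fast enough to more than compensate.

```python
# Hypothetical two-sector economy: (quantity, unit price) in each year.
# The numbers are invented purely to illustrate the point in the text.
year1 = {"steel": (100, 10.0), "financial_services": (50, 20.0)}
year2 = {"steel": (80, 10.0), "financial_services": (55, 40.0)}

def nominal_gdp(economy):
    """Sum of quantity times price over all goods and services."""
    return sum(q * p for q, p in economy.values())

gdp1 = nominal_gdp(year1)  # 2000.0
gdp2 = nominal_gdp(year2)  # 3000.0: a 50% "growth" in nominal GDP...
# ...yet physical steel output fell from 100 to 80 units.
```

The toy captures only one of the mechanisms discussed above (relative price inflation in a bubble sector); the physical-economic method developed in this book replaces the single monetary aggregate with direct accounting of the underlying quantities.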

In this respect important lessons can be learned from the history of the gigantic bubble on world markets, whose collapse in 2007-2008 triggered a global financial crisis of unprecedented proportions. In the period leading up to the financial collapse, the economies of the U.S.A. and other major industrial nations were in excellent condition as measured in terms of GDP growth; hence the collapse came as a great shock to most – although not all – leading economists. Under the impact of the shock, it became fashionable to speak of the “real economy” in contrast to the artificial world of the financial markets. Not accidentally, this was accompanied by a flurry of articles and studies criticizing the use of the GDP as a criterion for the success or failure of economic policy.

The crash of 2007-2008 and its sequel demonstrated that the monetary system, the financial markets and financial institutions cannot be allowed to exist in a world of their own. They must not be permitted to dictate economic policy. On the contrary: the banking system and financial markets must be tightly regulated, and financial policies must be judged rigorously according to the criteria of the real economy. The financial system must serve the real economy, not the other way around.

But what is the “real economy”, really? If development of the “real economy” – rather than GDP growth – is to be a criterion for economic decisions, then we first of all need a clear conception of what the term “real economy” should mean. The answer is provided by physical economy.

There is a second, more subtle flaw in prevailing thinking. This deeper flaw expresses itself in the tendency to confuse economic growth with economic development.

Growth and development are completely different conceptions, and they lead to very different consequences when taken as the basis for economic thinking and practice. Development always involves a certain type of growth, but it is much more than simply growth.

Consider, as an analogy, the life of a young child. The child is growing in weight and stature. But at the same time, the child is developing: it learns to walk, learns to run, learns to speak, explores the world around it, begins to build things, constantly evolves new potentialities and abilities. The development of a child exemplifies, in its early stages, the unlimited potential for development of cognitive powers – the ability to grasp and to generate new ideas, new conceptions and new knowledge – which is unique to the human species.

In analogy, economic development involves a constant process of qualitative transformations of the totality of economic activity under the impact of scientific and technological progress, and of advances in human knowledge generally. Through such progress Mankind develops new physical powers, as exemplified in prehistoric times by the mastery of fire and the production of metal tools, or in modern times by the advent of the steam engine, electricity, airplanes, nuclear energy and manned space travel. Each of these achievements belongs to a specific stage or era of economic development. Most important is the cognitive character of development, expressed in an unending series of fundamental discoveries and conceptual breakthroughs which depend on the unique creative capability of the human mind. Not only the initial breakthrough, but also its assimilation into the general practice of society depends on these creative capabilities.

By contrast, economic growth in the physical sense (as opposed to nominal growth in GDP) signifies a mere expansion in the scale of economic activity, as reflected (in a first, very rough approximation) in increases in the quantitative output figures for the most important forms of energy, foodstuffs and manufactured goods, in the numbers of passengers and quantities of goods transported, in real parameters reflecting the performance of essential services such as medical care and education, and so forth.

Just as no sane person would propose to judge the healthy development of a child by its weight alone, so in the same sense it is an error to take growth of physical production alone – not to speak of GDP -- as the criterion for the development of a nation.

Needless to say, real economic development cannot be characterized by a single number or even a set of numbers; to characterize it adequately calls for a different methodology than that commonly used in economic analysis today – the methodology we shall elaborate in this book. Here there is room for a long overdue cognitive development of the world’s economists!

This brings us to the most widespread and most fundamental flaw in contemporary economic thinking. What is the purpose of economic development? What is the purpose of growth, however that is to be measured? In the declarations of governments, central banks and others, “economic growth” is commonly treated as if it were something self-evidently good, in and of itself. But we have only to look at what is happening in the United States, for example, to see that GDP growth can go hand in hand with growing income disparities, with the sinking of entire sections of the population into chronic unemployment, poverty and hopelessness, with extreme forms of cultural decadence and an epidemic of “burnouts” among the so-called successful strata of society. In many developing countries with growing GDP the situation is in some respects even worse. Reacting to these realities, the United Nations and a number of other organizations have proposed various alternatives to the GDP, which would include parameters such as the so-called Human Development Index (HDI), subjective and objective measurements of wellbeing, the feeling of happiness, environmental sustainability etc. But in our opinion, the alternatives proposed so far all miss the most essential point: the relationship between economic activity and the realization of the creative powers of discovery and generation of knowledge which are unique to human beings.

Overview

Our book consists of three parts:

Part I is theoretical in character. It sets out the basic principles of physical economy. For reasons that will become clear, we include an extensive defense of what we call economic optimism against the pessimistic viewpoint associated with Thomas Malthus. We demonstrate the purely relative nature of resources and identify the fundamental fallacy involved in neo-Malthusian claims of supposed absolute “limits to growth”. Having thus cleared the pathway forward, we take up the analysis of the economy as a physical process, investigating its “metabolism” in analogy with living organisms. We give a preliminary characterization of growth in terms of physical input-output relations, briefly presenting the original approach of Leontief in this context. Next we focus on the development process by which new technologies are injected into the economy, transforming the structure of the economic organism, including its input-output relations. We distinguish between essentially linear forms of growth on the one side, and nonlinear and “strongly nonlinear” modes of economic development on the other – the latter deriving from fundamental scientific discoveries. Of crucial importance to physical economy is the “multiplier effect” by which the real costs of scientific and engineering R&D are repaid (often many times over) via the resulting increases in the physical productivity of the economy. We illustrate this multiplier effect with the examples of the U.S. program for the first manned landing on the Moon (the Apollo program) and the developments flowing from the creation of the first transistor.

Next we turn to examining the microcosm of physical economic development: The process of creative discovery in science which -- via the transformation of scientific discoveries into new forms of technology and other innovations -- is the chief locomotive of physical-economic development. To what extent are creative scientific discoveries isolated “accidents”? Or can we speak of creative discovery as an ongoing, unending process? Addressing this question sets the stage for probably the most difficult theoretical problem in physical economy: how to define a universal measure of real physical-economic development. In an extensive step-by-step approach we present the solution in terms of the concept of relative potential population density and its rates of increase along alternative development trajectories. Here we base ourselves on the pioneering work of Lyndon LaRouche on physical economy as a kind of universal science.

Part I continues with discussion of some specific aspects of physical-economic development: (1) the principle of increasing power density of technology; (2) the principle of densification in other domains, including computer technology, transport infrastructure and city-building; (3) the gradual shift in the structure of the labor force toward increasingly science-intensive activities; (4) changes in the quality of living, including the use of leisure time. Part I concludes with two self-contained supplemental sections: “What about Nature? How a rational environmentalism can contribute to economic development” and “How do we know that scientific and technological progress can continue indefinitely?”

Part II addresses the question: What would the future trajectories of development look like, under the assumption that the world’s nations – or at least some of them – were to adopt the principles of physical economy as the basis for their economic policies? We preface this discussion by looking backward to a concrete historical case: the post-WWII economic trajectory of the United States. We contrast the successful economic development trajectory of the United States in the 1945-1970 period with that of the later decades, when the United States shifted more and more toward a neoliberal, “postindustrial” trajectory directly counter to the principles of physical economy. Among other things this change of policy resulted in a far-reaching change in employment structure, away from the productive sector into a massively overinflated service sector. Similar transformations followed in other industrial nations, and had serious effects also on developing nations. It is only more recently, with the spectacular rise of China, that “postindustrial society” policies are increasingly seen in the U.S. and elsewhere as a strategic disaster.

Looking forward, we show how healthy industrial development can be launched in nations around the world, via large-scale projects in the area of infrastructure, science and technology. We show how, in this context, virtually full employment can be attained and describe feasible methods for financing these measures. We answer the questions: “Where will the jobs come from?” and “Where will the money come from?”.

The perspective of a resumption of healthy industrial development poses a new question: What is the future of the industrial society? Should we just keep producing more and more? We argue that the trajectory of physical economic development leads inevitably to the emergence of a new type of economy, the Knowledge Generator Economy. Our conception differs fundamentally from that of the so-called “knowledge economy”, which has become fashionable in recent times. In the Knowledge Generator Economy, the maximum realization of the creative intellectual powers of the population in the expansion of human knowledge becomes the explicit goal of economic development. The thirst for knowledge becomes the main driver of employment and demand. Among other things a large part of the total workforce – growing to 20% and beyond – is to be employed directly in scientific research, and the population as a whole will be drawn into this process in their leisure time and through employment in activities which are connected with the development and production of scientific equipment and the assimilation of new scientific knowledge into economic activity generally. We propose two visionary projects which can serve as takeoff points for launching Knowledge Generator Economies: One is the large-scale exploration of space and the involvement of masses of the population in the process of “digesting” the enormous volume of information provided by manned missions and space-based astronomical instruments. This first project is designed to capture the imagination of the population, especially of the younger generation. The second project is apparently more pragmatic, but requires an effort of science and technology comparable, for example, to creating the first manned colony on Mars. 
The second project is to transform the entire infrastructural base of the world economy – including all forms of transportation – by replacing the combustion of fossil fuels nearly entirely by the use of electric energy combined with new forms of nuclear power and other revolutionary technologies whose power densities are orders of magnitude larger than those utilized today.

Leaving aside the particulars of such visionary projects, the task of involving large masses of the population in scientific research faces a huge problem: the problem of alienation. This forces us to examine the origins of the extreme forms of alienation prevailing not only in society at large today, but also in contemporary science itself. We propose a radical solution: To engage the entire population in what we call “phenomenological studies”, including particularly the phenomenological study of so-called collective effects in the Universe. This is an unlimited domain of research which does not require abstract mathematical methods, but only powers of direct insight and conceptualization shared by all human beings. To produce solid results this effort requires close cooperation between persons engaged in such phenomenological studies and specialists trained in the entire range of analytical methods. It goes without saying that even a partial realization of this proposal would have profound effects on the overall cultural climate of society. A precursor can already be seen in the popularity of the “Citizen Science” movement in the USA.

 Part II closes with considerations about how such an unprecedented scale of scientific effort could be organized. The best single conceptual reference-point, we find, is the strategic conception (Grand Design) of Gottfried Wilhelm Leibniz for the establishment of a network of scientific academies stretching from Europe and Russia to China, combining European theoretical methods with the Chinese culture of “Observations”; and his approach to developing a universal language for science. As more recent examples we point to the vast scientific establishment which arose in the United States in the post-WWII period (described in some detail in Part III), and the powerful “science machine” built up in the Soviet Union in the same period. 

 Part III deals with the means and instruments by which the principles of physical economy can be implemented in practice for the development of nations. We argue that the first and most essential instrument for realizing sustained physical-economic development is a sovereign nation-state. We note that like all instruments, the institutions of the sovereign nation-state can be used in good ways or in bad ways. That being said, until now only sovereign nations have been able to develop powerful industrial economies. We must thus speak of “national economies”. In every case, state regulation and intervention into the economic process played an essential role -- albeit in ways which differ substantially from case to case. We identify the chief means by which the state can foster and guide the overall physical economic development of the nation, with special attention to the role of credit generation by a suitable national banking system (or central banking system under the essential control of the state) and the channeling of newly-generated credit flows into prioritized projects and economic sectors. We briefly address the perennial debates about the role of the state versus private enterprise, which typically fail to address the key issue of the quality of state institutions, and apply the wrong criteria for success or failure. In a political note we warn of the dangers arising from growing attacks on the principle of the sovereign nation state, which in economic and financial terms take the form of neo-liberal policies of globalization, radical deregulation and privatization. We contend that sustained physical economic development is impossible under such policies.

The following sections provide historical background for the question, how the principles of physical economy can be realized in the practice of sovereign nations. Historically, the most successful embodiment of the principles of physical economy in the practice of nations so far is what the famous German economist Friedrich List called the “American System of National Economy”. Throughout the 19th century, and continuing in various forms until today, economic policy has been the battleground of an epic struggle between the “American System” associated with Alexander Hamilton, Henry Carey and Friedrich List, and the “British System” associated with Adam Smith, David Ricardo, Thomas Malthus and the liberalist doctrine of the “invisible hand”. Unfortunately, beginning around 1970 the U.S. itself moved more and more in the direction of the British System in a modernized form, finally striving to establish what might be called a global neoliberal empire.

We first sketch the main features of the traditional American and British systems, emphasizing their opposing economic philosophies and practices. We characterize the optimistic spirit of the American system and its focus on the key role of what the early American thinkers called “improvements”: inventions, new ways of doing things, development of territories by infrastructure etc. As examples we cite President Abraham Lincoln’s famous lectures on “Discoveries and Inventions”, the construction of the transcontinental railroad in which Lincoln played a crucial role, and the earlier founding of the Corps of Engineers and the “Roads and Canals Act”. We briefly discuss the U.S. Civil War -- a conflict flowing directly from the struggle between the British and American systems. Finally we recount how the American System was taken up by other nations in the 19th century, including especially Germany, Russia and Japan.

In the following sections we move forward in time to the most recent period in which leading nations adopted policies in line with the principles of physical economy. We begin by following the rise of the United States to the leading economic, scientific and technological superpower in the period stretching from the giant infrastructure projects of the “New Deal” and the economic mobilization for World War II, into the three postwar decades. Among other things we show how the “Science Machine” set up in the U.S. after the war is the origin of many of the technologies that have shaped our present-day world, including nuclear power reactors, lasers, microelectronics, computers and the internet. We discuss the Apollo Moon landing program with its close “networking” of government and private enterprise, and compare it with the parallel effort in the Soviet Union. We quote from the 1976 study by Chase Econometrics of the multiplier effect of the Apollo program on the U.S. economy.

After examining the case of post-WWII development of the United States we turn to the economic successes of other nations in the period up to 1970, including France’s “30 glorious years” and the “economic miracles” in West Germany and Japan. These examples are particularly instructive because they demonstrate how the principles of physical economy can be realized in different ways, under different institutional arrangements and in different national contexts.

Part III closes with a brief look at the rise of China, which is the prime example of physical-economic development in our present-day world.

We wish the reader enjoyable reading, and look forward to comments and suggestions.

Part I: Essentials of Physical Economy
 

Chapter 1    The center of physical economy is the human being


1.1  Economy and human happiness

This section summarizes the philosophical standpoint which serves as the point of departure for our elaboration of physical economy.

The center of physical economy is the human being.

Economic activity can contribute to human happiness in two main ways:

Firstly, by supplying the material necessities of life and lessening its physical burdens, and by providing an improving quality of health, education and leisure.

 Secondly, by supplying the means and context for human beings to realize their creative mental powers, in collaboration and exchange with others, through suitable forms of employment and other activities that contribute to the further development of society. 

Economic thinking until now has focused mainly on the first category of benefits, while giving little attention to the second one. Physical economy combines the two into a unified conception, while emphasizing the much-overlooked cognitive (noetic) function of economic activity. 

In doing so, we acknowledge the fact that throughout history the vast majority of human beings have lived in a condition of constant struggle for the means of physical existence. The notion that scarcity is a permanent condition of Mankind is still embedded as an “axiom” in most economics today. We shall argue, however, that science and technology provide the means, in principle, for eliminating material want in the ordinary sense altogether, while successively overcoming all conceivable limits to the expansion of human activity on the Earth and beyond. In this perspective scarcity and poverty are in essence purely a political problem; the technical means to overcome them on a global scale are at hand or within reach.

But a trajectory of economic development which would actually meet all the material needs of humanity on a sustained basis would require an incomparably higher rate of scientific and technological advance, and an incomparably greater participation of the population as a whole in the process of expanding human knowledge, than have ever been realized before. The time has come when the cognitive function of economic activity must be brought into the center of attention.

The creative powers of human reason distinguish Man from all other forms of life on this planet. Unlike other forms of life, which display relatively fixed ranges of behavior, human beings have the unique capacity, through conscious creative insight and discovery, to gain increasing knowledge of the world and to assimilate new knowledge in the form of improvements in human activity – both on the level of the individual, and on the level of society.

Although human creativity takes many forms in addition to scientific activity, such as art, music, literature, drama, it is the scientific aspect which is the most directly and intimately linked with economic activity, and thus the aspect we shall mostly focus upon in this work. It is not an accident, however, that great scientists have nearly always had a deep love for classical culture, which – as Friedrich Schiller said – expresses human creativity in its purest and freest form. 

Creative acts of an individual human being – useful inventions, scientific discoveries or other contributions to knowledge, great works of art, great moral teachings -- live on after the individual dies, embodied in the thinking and activity of society. Even those individuals who seem not to have contributed directly in that way, participate indirectly in such contributions. Even those of the past who lived as oppressed slaves: their children or children’s children might become great thinkers and leaders.

Besides its spiritual quality, human existence is inseparable from physical activity: it is based on processes of transformation of matter and energy, the domain of what we call natural science. We are physical beings; the physical world is not only the substrate of our existence, but also the subject of human creative discovery. In Christian theology this is expressed by the idea that God placed the soul in a body for the benefit of the soul, in order that the soul might gain knowledge. In a generalized sense the physical economy of a nation constitutes the “body” for the intellectual and spiritual life of its population.

It belongs to the nature of Man to think, to explore the Universe, to build, to experiment playfully, to grasp and communicate ideas, to love and work together with others.

The task of how to realize human potential to the maximum possible extent in the actual lives of nations and their populations, defines the basic mission of physical economy as a scientific discipline – a mission that is essentially moral in character.

We thus understand the physical economy as an instrument not only for sustaining human life, but also for human beings to interact ever more intensively with each other and with Nature, for the purpose of expanding human knowledge. For, physical-economic activity not only sustains an increasing population of human beings as potential thinkers and explorers, but also produces the means and instruments for exploring the world in an ever-enlarging scope, in the directions of the infinitesimally small and the infinitely large. In a sense, physical-economic development is an ongoing scientific experiment.

This has deep implications for the notion of work in economic life.

From the standpoint of physical economy work has a two-fold function:

On the one side, human beings must work to sustain their own physical existence and that of society as a whole, in ever-improving conditions of life.

On the other hand, work provides the context for exercising and developing human creative and moral capabilities. In the context of healthy physical-economic development, employment provides people with dignity and a positive identity: a sense of their own value in contributing to society.

The function of work as a means of self-development is today – as throughout history – rarely fulfilled in practice: for the vast majority of people, work is a sacrifice of time and energy, necessary in order to obtain the means of physical sustenance, and certainly not a means for creative self-realization.

Despite this pervasive phenomenon of alienation – which will be the subject of a later section – we can nevertheless discern a gradual increase in the cognitive (noetic) quality of predominant forms of employment in the course of human history: in the progression from primitive hunter-gatherer society, to agrarian society, to modern industrial society with its growing emphasis on science and technology. The increasing cognitive component of work is reflected in increasing requirements for education and training of the workforce.

To the extent the principles of physical economy are adopted in the practice of nations in the coming period, the growth in the noetic content of work will accelerate sharply and the concept of work itself will undergo a profound transformation. The expansion of human knowledge per se will more and more become the central focus of economic activity. Scientific and technological progress will become the main generator of employment, investment and demand.

We consider this process to be inevitable.

1.2  The economics of optimism

Physical economy, as we understand it, is rooted in a fundamentally positive view of Man. While accepting the reality that every human being has imperfections, and that human beings are capable of terrible destructive acts, we nevertheless insist on the essential perfectibility of human beings and of human society in the long run. In this view, evil is fundamentally a manifestation of privation – of insufficient development.

We thus disagree with the basically pessimistic view of human nature that lies at the basis of the so-called classical school of economics associated with Adam Smith, Thomas Malthus and David Ricardo. The pessimistic view sees Man as predominantly greedy and egoistic, driven by self-interest and the pursuit of wealth. Accordingly, economic life is characterized by a Hobbesian-Darwinistic struggle of “each against all” for maximization of individual gains. Economic growth is seen as a product of competition between rivaling individual interests. In this way, according to Adam Smith, individual greed can work to the benefit of society.

In the classical school of economics, the pessimistic view of Man is coupled with what could be called the “axiom of scarcity”. The “axiom of scarcity” is historically associated with Thomas Malthus, one of the fathers of the classical school. It basically states that the tendency for growth of the population and material consumption of the human race constantly collides with the limits of natural resources. As a result, the laws of economics reflect the fact that human society exists in a situation of perpetual scarcity.

The idea that scarcity is fundamental to economic behavior is widespread today even among the schools of economic thought which do not explicitly embrace the neo-Malthusian doctrine of “limits to growth”. In his famous textbook, the Nobel Prize-winning economist Paul Samuelson states: “Economics is a study of how people and society end up choosing with or without the use of money, to employ scarce productive resources that could have alternate uses”.

This definition of economics completely ignores the phenomenon of scientific discovery, which has nothing to do with a choice between alternative uses of resources. Creative thinking in general results in the emergence of new possibilities which were previously not recognized to exist. Scientific research can sometimes be spurred by problems connected with the insufficiency of some important resource for human life. But the characteristic of creative scientific discovery is that it contributes, directly or indirectly, toward overcoming apparent limits to human development. In effect, creative discovery “creates” new resources! Later in this book we shall examine the process of overcoming limits and creating resources in detail.

More generally, Samuelson’s definition of economics ignores the main resource for human development: the human mind. Is this resource scarce? Not at all! We presently have 7.2 billion people living on the Earth. How well are we using this productive resource? Miserably! And how many people, even in the so-called developed countries, are using their own minds to the maximum of their capabilities? Almost no one!

We believe that the miserably inadequate use of the human mind is the ultimate reason why resources so often appear to be scarce. We believe that scientific and technological progress provides the means for overcoming any material scarcity which could stand in the way of Man’s further development.

The main reason why the “axiom of scarcity” has become fixed in the minds of economic decision-makers and the population today has nothing to do with a true lack of sufficient productive resources. One of the main causes, no doubt, is a fundamentally flawed financial system, which prevents existing resources from being properly used. Constantly having to deal with lack of funds has conditioned governments and populations into believing in the axiom of scarcity, while at the same time we are everywhere surrounded by unused resources and potentialities.

Below, and in more detail in Parts II and III of this book, we shall discuss the method of productive credit generation, which can eliminate the chronic lack of sufficient long-term investment that cripples nearly every economy in the world today. The potential for large-scale productive credit generation has been utilized by a number of nations in their most successful periods of development and is being used in China today. But this possibility is rejected by the leading financial institutions and central banks in most nations, which operate in many respects like a “second government” in the economic sphere. Thus the problem of insufficient funding of productive investment, chronic unemployment and apparent scarcity of resources is mainly a political problem, which can be solved by political means. We shall say more about this issue later.

The above considerations lead us to a view of economics that is in many ways opposite to that of the “classical school”.

From our optimistic viewpoint, each child, each additional human being on the Earth, is inherently a benefit to humanity as a whole. Each individual is born with the creative potential to contribute to human society in such a way that in the future an increasing population can support itself at an increasing quality of activity. An obvious example is a scientist whose work leads to a breakthrough in the capability to produce food, to treat diseases or to produce energy and other goods efficiently using new types of resources. But even in their normal work, ordinary people exercise creativity in solving all kinds of problems, putting new ideas and techniques into practice and adding all kinds of improvements in daily activity, whose cumulative effect increases the potential to sustain a growing scale and quality of human activity. The result is the perspective of an unending self-expansion of human creative activity.

1.3  Economic policy

How could such a high-flying conception of Man’s future be turned into concrete economic policies? The answer will become clear in the course of this book. We analyze how an economy functions as a physical process – as a living organism in a certain sense – with particular attention to the structure of the “metabolism” of the economy as an organized process of transformation of matter and energy. We study how physical economies grow and develop under the impact of human cognitive activity, manifested above all in scientific and technological progress, and mediated by physical investments. We examine the different modes of scientific and technological progress and the process by which scientific advances are assimilated into an economy, transforming the structure of the economic organism. We identify general principles of technology and the role of different forms and qualities of energy. We examine the mental processes underlying the development of science and technology. We identify the feedback between economic development and the progress of science and of human knowledge generally. We put forward a universal metric for physical-economic development, which is independent of monetary valuations.

This investigation shows key points at which economic policies act to steer the trajectory of the real economy, and how the standpoint of physical economy provides criteria and standards by which to judge the effectiveness of those policies.

In particular, by concentrating on the physical side of economic processes we free our thinking from the biases and distortions commonly associated with money and finance. For us the monetary and financial system is nothing but an instrument whose purpose is to mediate economic activity. The financial system must be strictly subservient to the needs of the physical economy, and financial decisions must be made accountable for their effects on the physical and mental wellbeing of the population.

Furthermore, there is no alternative to a strong economic role of the state in fostering a nation’s development. It is intrinsically impossible, even under the most favorable conditions, to realize sustained physical-economic development of a nation on the basis of “market forces” acting alone, without some sort of overall guidance. The principle that “the whole is more than the sum of its parts” applies more to economics than to any other field. There must be an agency that is responsible for the economy as a whole. That can only be the legitimate government of a sovereign state.

In Part III of this book we shall examine the instruments by which states can realize physical economic development in practice, and illustrate their use with examples drawn from some of the most successful periods in the economic history of the United States, Germany, France and Japan from the standpoint of physical economy. The rapid development of China today is also relevant. These examples demonstrate cases in which the principles of physical economy were realized in a limited, but nonetheless significant way, through deliberate economic policies. They demonstrate that it is not necessary to invent something completely new. Much can be learned from the fact that the implementation of physical economic principles took place under widely differing conditions, in economic and political systems which differed substantially from one another.

Chapter 2    Growth and development without limits

2.1  History refutes Malthusianism   

In the introduction we emphasized the essential distinction between growth and development, illustrating the conceptual difference with the example of a child growing up. In physical economy, growth signifies an increase in the physical scale and intensity of human activity, as reflected in the flows of matter and energy per capita and per square kilometer of inhabited land, and in a gradual increase in the total population. Development, on the other hand, signifies a process of qualitative transformations of the structure of the economy, associated with advances in human knowledge and the assimilation of those advances in the form of improvements in the organization of economic activity. The most familiar case is that of scientific discoveries and their realization in technological revolutions. But development can also derive from advances in the understanding and cultivation of human reason itself -- advances belonging to the realm of philosophy, for example -- which are transmitted to the whole society through improvements in education and the overall level of culture.

As we shall see, growth and development are inseparably connected. There can be no sustained growth without development, nor is it possible to maintain a process of development in the long term without physical growth.

It is extremely important to emphasize this point, especially today. Recent decades have witnessed the spread of the so-called neo-Malthusian ideology adopted – unfortunately -- by most of the environmentalist movement. According to this ideology there are absolute limits to growth. According to its proponents, the world economy must now make a transition to “zero growth”, otherwise all resources will be used up, the environment will be destroyed and there will be a huge catastrophe. This line of argumentation often takes the form of asserting that the Earth has a finite “carrying capacity” for the human population.

In fact, the phenomena of large-scale pollution, wasteful use of resources and senseless destruction of Nature – phenomena which undeniably exist, and are a legitimate cause for concern – are typical symptoms of a lack of real development: failure to implement technological advances, cultural stagnation and the absence of fundamental scientific progress.

A good example is the present, massive over-reliance on fossil fuels for energy production and transportation, connected with the failure to fully utilize the potential of nuclear energy and with lack of sufficient progress in developing alternatives to the internal combustion engine – a technology more than a century old. Unfortunately, the environmentalist movement has done everything possible to block development of nuclear energy, and is generally hostile to large-scale scientific and technological progress. Ironically, the environmentalist movement has thereby contributed to the pollution problem rather than solving it!

In an economy, just as in a living organism, growth can take on harmful, pathological forms when there is a lack of development or -- in the case of cancerous “bubble” growth -- negative development through reversion to a more primitive mode of organization. In contrast, healthy physical-economic growth goes hand-in-hand with a constant process of transition to higher, improved forms of organization, through scientific-technological and other forms of progress. Conversely, maintaining development in the form of a constant process of reorganizing the economy on ever higher levels of technology requires an increasing amount of “work” reflected in growth in the total consumption of energy. As an economy develops, the number and variety of different areas and types of activity increases. In the traditional language of industrial economics there is an ever greater division of labor. Ultimately the population must also grow, because the increasing complexity of the development process requires an increasing number of creative “problem-solvers” to accomplish the transition to the next higher level.

But what about the supposed “limits to growth”? Can the increase in population, in the consumption of energy and so on, simply be allowed to continue indefinitely? Won’t this use up all the resources? If so, then Mankind will have to adapt to a steady-state, “zero growth” economy, in which development will slow down and essentially stop, or otherwise face disaster. Thus, the proponents of radical environmentalism demand that human beings should learn to live like other species of animals in a state of eternal equilibrium with the biosphere, which they also assume has reached its ultimate limits and exists in a state of “zero growth”.  

In the following we shall show why the idea of absolute limits to growth is wrong. Limits are always relative – they depend on the existing state of human knowledge. “Limits to growth” apply only to the case of a linear expansion of an economy – expansion without development. Real physical-economic development occurs through a process of constantly overcoming relative “limits” through human creativity, in the form of breakthroughs in science and technology.

Human creativity is not a kind of magic that makes problems disappear without effort. Overcoming limits requires an economy that can sustain the costs of maintaining a high rate of scientific and technological progress, and that is organized in such a way that technological innovations are rapidly disseminated throughout the length and breadth of productive activity. It requires an economy with a high rate of real investment, and a population with high levels of education and culture. All of these factors involve large costs. But if an economy is organized according to the principles of physical economy, then the increases in the overall productivity of the economy, resulting from rapid scientific and technological progress, will compensate for those costs many times over.

Focusing on the fundamental fallacy of “limits to growth” provides a good means to clarify the essential nature of physical-economic development. Exposing this fallacy is indispensable. Not only is the thesis of “limits to growth” wrong, but to accept it would make a true science of physical economy impossible.

In the following we shall look at the background of the “limits to growth” thesis, going back to Thomas Malthus, and compare the constant Malthusian predictions of disaster with the actual physical-economic development of human society.

Obtaining clarity on these issues is especially important because traditionally, the subject of economics has been based on the premise of “scarcity” as the central issue. This orientation was above all inspired by the work of Thomas Malthus. As we shall show, taking “scarcity” and the administration of scarce resources as the central issue in economics is not only wrong, but can lead to disastrous results. 

In his famous 1798 “Essay on the Principle of Population” the British pastor Thomas Malthus claimed that the human population was bound to grow faster than the capability to produce food, leading inevitably to mass death:

"The power of population is so superior to the power of the earth to produce subsistence for man, that premature death must in some shape or other visit the human race.”

Malthus believed that a “war of extermination” would be necessary to limit the population – a “war” either by human beings killing each other, or by “natural” means of starvation and disease:

“The vices of mankind are active and able causes of depopulation … and often finish the dreadful work themselves. But should they fail in this war of extermination, sickly seasons, epidemics, pestilence, and plague advance in terrific array… Should success be still incomplete, gigantic inevitable famine stalks in the rear, and with one mighty blow levels the population with the food of the world".

When Malthus wrote these words, the world population was approximately 1 billion people.  Today more than 7 times as many people are alive, and the average food consumption per capita has increased substantially. World agriculture produces 17 percent more calories per person today than it did 30 years ago, despite a 70 percent population increase. This is enough to provide everyone in the world with at least 2,720 kilocalories (kcal) per person per day.  Evidently, food production has grown faster than the population! While famines have occurred, these have been localized events, most often connected with social-political causes and not with an overall lack of means to produce food. The same is true of hunger and malnutrition in the world today.
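The arithmetic behind these figures is worth making explicit. If per-capita calorie output rose by 17 percent while the population rose by 70 percent, then total food production must have grown by the product of the two factors – a rough consistency check using only the percentages quoted above:

```python
# Rough consistency check of the figures quoted above (illustrative only).
per_capita_growth = 1.17   # calories per person: +17% over ~30 years
population_growth = 1.70   # population: +70% over the same period

# Total calorie production grew by the product of the two factors.
total_food_growth = per_capita_growth * population_growth
print(f"Total calorie production grew by a factor of {total_food_growth:.2f}")
# i.e. world food output roughly doubled while population grew by 70%.
```

In other words, total food output nearly doubled over the period in question – growth well ahead of population growth, exactly contrary to Malthus.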

The late 1960s witnessed a rebirth of Malthus’ doctrine, with the promotion of a “neo-Malthusian” ideology which shaped the rise of the “environmentalist” movement. One of the famous early works launching the neo-Malthusian movement was “The Population Bomb” by Paul Ehrlich, published in 1968. This book began with the prediction:

“The battle to feed all of humanity is over. In the 1970's and 1980's hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.”

In the meantime the world population has approximately doubled, but hunger has decreased overall. Once again, food production has grown faster than the population – about 1.5 times faster according to the United Nations Food and Agriculture Organization (FAO)! Unfortunately many people did die of hunger in impoverished areas of the world in the 1970s and 1980s. But the apocalyptic disaster predicted by Ehrlich did not occur. The case of India is particularly impressive: its population has roughly tripled since Ehrlich’s prediction, but the percentage of the population suffering from hunger and malnutrition has greatly decreased.

In 1972 a new phase in the neo-Malthusian movement was launched with the famous book “Limits to Growth”, published by the so-called Club of Rome. This book was in some ways more sophisticated than Ehrlich’s, basing itself on a computerized “world model” with 5 numerical variables representing population, industrialization, pollution, food production and resource depletion. The basic conclusion was remarkably similar to the apocalyptic prediction made by Malthus almost two centuries earlier:

“If the present growth trends in world population, industrialization, pollution, food production, and resource depletion continue unchanged, the limits to growth on this planet will be reached sometime within the next one hundred years. The most probable result will be a rather sudden and uncontrollable decline in both population and industrial capacity.”

This threat of a coming catastrophe became the basis for the environmentalist movement to demand a stop to population growth and industrial development. But the expression “the limits to growth” embodies a fundamental assumption, which was taken to be self-evident, namely that the resources for growth are intrinsically finite. If we accept that there exist absolute limits to growth, then the only question is when these limits will be reached.

The “limited resources” cited in the Club of Rome report include the amount of land that can be used for food production, the total amounts of oil, coal and various metals and minerals that can be extracted from the Earth, as well as the limited capacity of the biosphere to absorb industrial pollution.

The spread of the neo-Malthusian concept of “limits to growth” in the general population was greatly helped by the “oil crises” of 1973 and 1979, with sudden increases in the price of oil. Both crises were caused by political events in the Middle East, but in the public consciousness they were associated with the idea that oil was running out. The concept of “peak oil” became popular – the idea that oil production was nearing its maximum and would begin to decrease, because the economically usable reserves of oil were being used up.

Now, more than 40 years after the Club of Rome report, the world population has nearly doubled. World energy consumption has grown even faster. Since 1970 the annual world production of crude oil has doubled.  But at the same time the so-called “proven reserves” of crude oil – the amount of technically and economically recoverable oil which has been determined to exist at a given time – have also grown, and are today more than twice as large as in 1970. Gigantic amounts of oil have been used up, and yet the amount known to be available for extraction keeps growing! How is this possible? 

Meanwhile the rapid industrial development of China and India, where more people are living today than in the whole world in 1950, is consuming gigantic amounts of mineral resources. Yet the Chinese and Indians don’t seem to be worried that the supplies will run out. What has happened to the supposed limits? 

Since the 1980s especially, there has been less and less public discussion from the side of the environmentalist movement about a supposedly impending exhaustion of resources. Instead the focus has shifted to the assertion that continued economic growth occurs at the expense of destroying Nature.

Here an important distinction must be made. It is clear that the growth of human population and its economic activity cause changes in the environment. Are those changes only negative? And if so, according to what criteria? Some people regard every change in the environment which can be attributed to human activity automatically as “destruction”, practically by definition. But is it automatically “destruction”, for example, for human beings to bring water to a desert and to plant trees there, given that the desert ecosystem would thereby be changed? More or less dramatic changes in local and even global ecosystems have occurred and will continue to occur without any human intervention. Are all such changes “good”, whereas the ones where human beings are involved are automatically “bad”?  

More reasonable is to concentrate on cases where the negative effects of human intervention are clearly established: cases such as massive pollution, large-scale release of harmful or poisonous substances into the air and water, “slash and burn” agriculture over large areas etc. Examining such cases, we find that severe environmental damage of this sort is by no means a necessary consequence of population growth and economic development per se; instead, such damage is most often the result of unbridled profit-seeking, technological obsolescence, poverty and backwardness (as in the case of “slash and burn” agriculture) and, more generally, of a purely extensive mode of growth rather than real development as we have defined it.

A good example is the alleged threat of a worldwide catastrophe due to “global warming” caused by man-made releases of carbon dioxide into the atmosphere. Leaving aside scientific issues connected with long-term climate predictions, it is crucial to recognize that the enormous scale of carbon dioxide emissions from combustion of fossil fuels today is a symptom of chronic technological stagnation at the base of the world economy. In Part II of this book we shall outline a program for completely eliminating dependence on fossil fuels, on the basis of technologies which will greatly boost the productivity of the world economy at the same time. In the section “What about Nature? How a rational environmentalism can contribute to economic development” below we indicate how a transition from extensive to intensive modes of economic growth can drastically reduce the problematic effects of human activities on the Earth’s natural environment, leaving most of the Earth as a “garden” for recreation and scientific study. 

2.2  The relativity of resources

Now we return to the question: are there absolute limits to growth? Are resources really finite? Are they really limited?

At first glance the finiteness of natural resources – of cultivable land, of oil, coal and gas, of mineral ores etc. on the Earth – seems totally obvious. But this impression overlooks a crucial fact: Scientific and technological development expands the range of resources that an economy can utilize.

Firstly, the use of existing resources becomes more efficient, and improvements in the technology of extraction and processing increase the amounts of existing types of resources that can be exploited economically. This is exemplified by the case of oil, where the economically usable reserves have increased more rapidly than the consumption.

Secondly, as an economy develops under the impact of breakthroughs in science and technology, new types of resources become available.  The concept of “resource” is purely relative. When we call something a “resource”, this is always relative to the existing state of scientific knowledge. With new scientific discoveries, and breakthroughs in technology, the notion of what constitutes a “resource” changes, and new potentialities are opened up for the further growth and development of the physical economy.

Thus, the claim of “absolute limits to growth” is nonsense. Limits exist only relative to a fixed level of science and technology, or more generally a given level of human knowledge. While each particular stage of development of human knowledge implies certain limits to the possible growth of an economy based on that level of knowledge, the development of human knowledge through creative discoveries successively overcomes such limits, opening up new potentialities for continued growth. Thus the appearance of “limits to growth” is invariably a symptom of technological stagnation.

History shows that this process of overcoming the relative limits to growth has occurred again and again, and is a characteristic feature of physical-economic development. Conversely, a society which attempts to base its existence upon a fixed level of knowledge and technology is threatened with collapse as soon as one or more of the resources required for its existence in that mode is exhausted. Thus, to the extent there are “limits to growth” at all, these limits are symptoms of the failure to maintain an adequate rate of scientific and technological progress.

2.3  Overcoming limits: examples

History is full of examples of the creation of new resources for growth and development through scientific and technological revolutions.

One is the resources for feeding the human population. Already in pre-historical times, the transition from the so-called hunter society to the agricultural society transformed land into a resource, replacing the stock of wild animals as the main resource for feeding the population. The use of irrigation and other hydraulic technology transformed land that was previously not cultivable, and therefore not a resource for food production, into a new resource.

Much later, with the advent of modern cultivation methods, the yields per unit of land have multiplied many times over, making it possible to supply a vastly greater world population. In the present, more and more food is grown in artificial environments, from greenhouses to hydroponics, which no longer require ordinary soil as their basic resource. The perspective is opening up for urban centers to supply a large part of their nutritional requirements directly, utilizing multistory “food towers”, without requiring large land areas. In the future, an increasing role will be played by bioreactors producing proteins and other nutrients utilizing microorganisms and cell cultures. For such production, land area plays essentially no role.

Another case is the burning of wood for heating and cooking, and later for the production of mechanical energy (the steam engine). This resource is limited by the size and rate of growth of forests. In developed countries, however, the use of wood as an energy source was nearly entirely replaced by hydrocarbon fuels – coal, oil and gas. The large-scale use of these hydrocarbons was in turn made possible by development of the technology of extraction, transport and refining. And as we noted above, the relative limits of availability of oil and gas -- the total recoverable reserves, or so-called proven reserves – are continually being overcome as new and better techniques are discovered for extraction of hydrocarbons. The development of the technique of hydraulic fracturing, for example, has dramatically increased the amounts of economically exploitable oil and gas. At the same time, the fuel efficiency of automobiles has increased substantially.

Similarly, with advances in the technology of mining and extraction, economically exploitable reserves of practically all economically important elements, including strategic metals, have constantly grown. Even the idea of obtaining valuable rare elements by “mining” asteroids is no longer regarded as pure science fiction, but could become a real possibility decades from now. At the same time, thanks to constant improvements in the technology of recycling, waste materials are becoming a more and more important resource for every modern economy.

The recent period has also witnessed a revolution in the creation of new types of materials -- materials with novel properties which are increasingly replacing traditional materials such as steel in many areas of manufacturing, and are based on different resources. More examples of the emergence of new types of resources could be given from practically every area of industry.

Coming back to the present, perhaps the most instructive case of the creation of new resources in history so far is the revolution that began with the discovery of nuclear fission and fusion in the first half of the 20th century. The discovery of nuclear fission reactions and development of nuclear reactors suddenly transformed uranium oxide – which up to then was used only to make yellow-colored glass and ceramics for decorative use – into a gigantic energy resource, which in principle could replace coal, gas and oil. Using present-day nuclear technology, the energy which can be produced from a single kilogram of uranium oxide is equivalent to that released by burning 42,000 kg of coal! Test reactors have already been operated that utilize thorium, which is several times more plentiful than uranium. One kilogram of thorium would provide as much energy as 200 kilograms of uranium or over 300,000 kg of coal. On that basis, presently extractable thorium reserves would provide the energy equivalent of more than 4,000 times all known reserves of coal, oil and natural gas!

Fusion reactors, which are now in the process of being developed, will suddenly transform ordinary sea water into an energy resource. Using the deuterium contained in a single liter of sea water, fusion will be able to produce the energy equivalent of burning 300 liters of gasoline. The deuterium extractable from sea water is sufficient, in principle, to supply the entire present energy consumption of the world for billions of years!
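The gasoline-equivalence figure can be estimated from first principles. The following back-of-envelope sketch uses commonly cited approximate values – deuterium makes up about 0.0156% of hydrogen atoms, the complete deuterium fusion chain releases roughly 345 terajoules per kilogram of deuterium, and gasoline yields about 34 megajoules per liter – so it is an illustrative estimate, not a precise calculation:

```python
# Back-of-envelope check of the "300 liters of gasoline" equivalence.
# All constants below are commonly cited approximate values (assumptions).
mol_H2O_per_liter = 1000 / 18.02          # ~55.5 mol of water per liter
D_fraction = 1.56e-4                      # deuterium ≈ 0.0156% of hydrogen atoms
moles_D = mol_H2O_per_liter * 2 * D_fraction
grams_D = moles_D * 2.014                 # ~0.035 g of deuterium per liter

# Complete deuterium fusion chain: roughly 345 TJ per kg of deuterium
energy_J = (grams_D / 1000) * 345e12
gasoline_equiv_liters = energy_J / 34e6   # gasoline ≈ 34 MJ per liter

print(f"{grams_D:.3f} g deuterium per liter ≈ {gasoline_equiv_liters:.0f} L gasoline")
```

The result comes out at roughly 350 liters of gasoline per liter of water – the same order of magnitude as the figure cited above.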

As we shall see later, the higher power density of nuclear power, compared to other existing energy sources, makes nuclear power intrinsically more productive in physical-economic terms. Thus, when properly developed, nuclear fission (and later fusion) technology promises to greatly reduce the real cost of energy production. Meanwhile, the completely unexpected discovery of so-called “Low Energy Nuclear Reactions (LENR)” (otherwise known as “cold fusion”) occurring in metallic lattices loaded to high density with deuterium (and in some experiments even ordinary hydrogen), points to a scientific revolution whose implications cannot be predicted in full today, but will most probably include new ways to produce energy. A most extraordinary result, obtained by the Advanced Technology Center of Mitsubishi Heavy Industries in Japan, is the transmutation of elements, raising the perspective of some day being able to create rare elements out of plentiful ones.

“Creating” new energy resources in the above-mentioned ways changes the situation with respect to practically all material resources. This is because the potential for extracting, processing and recycling materials depends to a great extent on the cost and availability of energy. As we noted above, large-scale recycling of materials, which is an energy-intensive process, effectively transforms “waste” into a resource.  

The availability of large amounts of energy also makes it possible, in principle, to replace the use of oil and natural gas, which must be extracted from the Earth, by hydrogen and hydrogen-based synthetic fuels produced using ordinary water as the “raw material”. In the case of hydrogen, the product of combustion is again water, which is thereby automatically “recycled”. This possibility is already a focus of intensive research and development, aimed at the long-term transition of the present, largely fossil fuel-based economy, to a future “hydrogen economy”.

In the foreseeable future, the mastery of nuclear processes will make it possible to artificially generate rare metals and other essential elements, overcoming the limits of extraction from the Earth. Present-day reactors already produce macroscopic amounts of a variety of elements by the fission of uranium. Nuclear waste contains small percentages of ruthenium, rhodium, palladium and silver, for example. The applicability of this method is limited, however, by the cost of separation from highly radioactive waste and by the fact that most of the fission products are radioactive isotopes. Certain types of fission reactors can be used to produce large fluxes of “slow” neutrons which can induce transmutation processes. The most well-known case is the generation of macroscopic amounts of plutonium – an element otherwise existing only in minute quantities in Nature. Transmutation of elements can also be accomplished by particle accelerators, although this has not yet been done on a large scale. Another potential source, already mentioned above, is so-called “low energy nuclear reactions” (or “cold fusion”), where transmutation has been observed. Thus, although not economically practical today, there is no doubt that nuclear energy generation plus the artificial synthesis of elements by transmutation has the potential some day to free mankind from dependence on extracting rare mineral resources from the Earth.

2.4  The human mind – an unlimited resource

These examples demonstrate how the creative powers of the human mind, realized in the form of scientific-technological revolutions and analogous breakthroughs in human knowledge, embody a potential for successively overcoming all relative limits of resources and providing for unlimited, never-ending physical-economic growth and development into the future.

In this sense the human mind is an unlimited resource. The human mind is an inexhaustible generator of knowledge. It has the potential to transcend the limits or “boundedness” of any given state of knowledge, by discovering and demonstrating a principle of reality, such as a new physical principle, which lies outside the domain of existing knowledge. The same capability is manifested in the ability to grasp and assimilate new discoveries, once they have been made, and in a less spectacular way in the solution of the countless problems of everyday life. 

Like every resource, however, human creativity must be “mined” -- fostered, stimulated, cultivated, educated -- and it requires advantageous conditions for it to be utilized in an effective way. Unfortunately, societies up to now have at best tapped only a tiny fraction of the creative potential of their populations. Worse, in nearly all cases the predominant forms of education and employment have tended to blunt or even suppress creative powers which are present, albeit often in dormant form, in every human individual.

2.5  Economics and the self-development of human creativity

Now is the moment to close the circle:

We have argued that human creativity is an unlimited resource for physical-economic development. At the same time, the purpose of physical-economic development lies in supplying the means and context for human beings to exist and to realize their creative mental powers. Putting these together, we see that physical-economic development is nothing but the physical form which the process of self-development of human creativity takes in our Universe.  

It is from this standpoint that we now turn to economics itself. What is the necessary structure which an economy must have, as a physical process, in order to serve as the substrate for the self-development of human creativity? Evidently an economy must satisfy the following conditions:

1. The economy must provide for the physical existence of the human population -- including food, water, housing, medical care and other necessities of life – as well as the material requirements and other preconditions (including education) for individuals in society to exercise and realize their creative potentials. That includes large-scale scientific activities such as the exploration of space. In the future “knowledge generator economy”, as we shall see, the development of scientific knowledge will become the main driver of demand and employment.  

2. The economy must be able to sustain itself as a physical entity. It must supply itself with the energy and material resources needed to maintain its entire activity, constantly replacing and renewing its productive capacities (its farms, factories, infrastructure etc.) using the newest technology.

3. The economy must have the capability to sustain high rates of scientific and technological progress. To realize scientific discoveries and technological innovations requires a gigantic apparatus of scientific laboratories and research and development facilities. The economy must have the capability and structure to be able to transform scientific discoveries efficiently into new generations of technologies; to produce those technologies on a large scale; to spread and integrate new technologies into production and other economic activities; to continuously renew the economy through successive waves of technological transformation etc. All of this requires a high-technology industrial base, highly skilled labor force, social and institutional infrastructure etc.

4. In order to supply the enormous resources required for maintaining high rates of scientific and technological progress, the economy must generate a large real physical surplus. Physical surplus signifies, to a first approximation, the portion of physical output and labor resources over and above what is required merely to maintain the economy in a hypothetical “steady state”.

These considerations bring us to the core of the study of physical economy: the analysis of an economy as a physical process. We shall begin very generally, and then focus more and more on how an economy can fulfill the requirements listed above.

Chapter 3    An economy as a living organism: structure and metabolism

3.1  An economy as a special type of living organism

An economy is a holistic process: no single activity can occur without involving the totality of activities directly or indirectly. Each individual activity depends on the entire society and its interconnected network of economic activities reaching back into history.

Take a family household, for example. Look at the food on the dinner table. That food had to have been produced somewhere and transported to the household location. The transport required fuel, which had to be produced from oil extracted at some distant location, refined and distributed. The tractor and tools used by the farmer (for example) and the truck which transported the food, had to have been produced in factories. Those factories required materials, such as steel, which had to be produced at steel plants from iron ore extracted in some mine, and transported to the steel plant by rail or ship. In addition to steel, the factories producing the tractor, tools and truck required equipment, buildings, skilled labor, transport, electricity, communications etc. The equipment had to be designed by engineers and produced by workers in other factories, the buildings constructed using previously produced construction materials and construction machinery; the workers had to be born and raised in households, educated in schools, cared for by doctors in medical facilities. Each step of the production process is based in turn on other production processes, labor and services. Each step also required knowledge that ultimately derived from the research of scientists. Each step required workers that needed food for their dinner table, and so on. 

In this way, starting from the food on a dinner table and following the process by which that food was produced backwards in time and space, we obtain a “tree” whose branches extend outward to nearly the entire world economy with its entire past history!
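The growth of this “tree” can be made concrete with a toy sketch. The dependency table below is invented purely for illustration, but it shows how tracing the inputs of a single good backwards quickly pulls in a large part of the economy:

```python
# A toy sketch of the production "tree" described above: each good depends on
# other goods, and tracing inputs backwards touches ever more of the economy.
# The dependency table is invented for illustration only.
inputs = {
    "food":    ["fuel", "tractor"],
    "fuel":    ["oil", "refinery"],
    "tractor": ["steel", "factory"],
    "steel":   ["iron ore", "coal"],
    "factory": ["steel", "construction"],
}

def trace(good, seen=None):
    """Collect every good that directly or indirectly enters into 'good'."""
    if seen is None:
        seen = set()
    for inp in inputs.get(good, []):
        if inp not in seen:
            seen.add(inp)
            trace(inp, seen)
    return seen

print(sorted(trace("food")))
# Even this five-entry table already pulls in mining, refining,
# steel-making and construction behind a single plate of food.
```

A real economy extends this recursion through millions of goods and services – which is why the input-output method introduced in Chapter 3 treats the economy as a single interconnected whole.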

Consequently, when looking at an economy we are looking at a totally interconnected system, an “organism”, which requires a special sort of analysis.

For the purpose of study and analysis it is instructive to compare the physical production and investment cycles of an economy with the metabolic cycles of a living organism. Among other things, this approach can help us to free ourselves from the common habit of thinking about economics in terms of money.

From a physical standpoint, the metabolism of an economy consists of processes of transformation of matter and energy, taking the form of production, transportation and consumption of goods. Like a living organism, an economy must “feed” itself through its interaction with the natural environment -- by extracting various types of resources, including energy resources (coal, oil, gas, solar energy, uranium in the case of nuclear energy etc.), raw materials (ores), air and water from the environment. As we saw above, the “diet” of an economy – the amounts and types of resources it uses – depends on its overall level of technology and can change as the economy develops.

In our view, the comparison of an economy with a living organism is more than a simple analogy: we think an economy deserves to be classified as a living organism – but one of a very special kind, distinguished by the unique role of the human population and by the phenomenon of an unending succession of developmental stages deriving from advances in human knowledge. Leaving aside physical labor – which is becoming less and less important in modern economies – the population can be considered as constituting the “brain” of the economic organism.

Before examining that crucial difference from ordinary biological organisms, we shall first concentrate on the aspects which are similar. This will later help us to identify the points at which the analogy breaks down, and which turn out to be crucial to a competent understanding of physical economy. 

3.2  The metabolism of a biological organism

(In the following we assume that the reader has an elementary knowledge of biology.)

The body of an animal consists of a huge number of individual productive units – cells – organized in a hierarchy of structures and interconnected by a dense circulatory network (“transport infrastructure”) extending down to the microscopic level. Each cell carries out one or more tasks in the organism, such as the absorption and synthesis of various organic substances, generation of mechanical work (muscle cells), the maintenance of structure and transmission of mechanical energy (bone and tendon cells), the generation of electrical impulses (nerve cells) and so forth. The circulatory system, including blood vessels and capillaries, the lymphatic system and microscopic structures in the intracellular medium, supplies the cells with the materials and energy required for their functioning, and takes up the chemical substances produced by the cells, which are in turn absorbed by other cells or eliminated from the organism.

Each individual cell is itself a living entity – a microorganism in its own right -- and exercises a certain degree of spontaneity within the whole organism. To put it differently, the organism does not function as a rigid mechanistic system, but rather like a social entity within which the participating cells maintain a certain amount of freedom within a highly organized, regulated and directed whole. Each single cell has an internal metabolism involving a huge number of interrelated chemical reactions taking place at different locations in the cell – for example in so-called “protein factories” -- and the products are transported to other locations for further “processing”. Here the necessary “transport infrastructure” includes a fine network of microtubules which (among other things) function as “railways” for moving substances and entire microstructures (organelles) through the cytoplasm.

Needless to say, all of this activity in the animal and its cells occurs without any exchanges of money!

It is of great interest, also, to compare an economy with the biosphere in its entirety, which functions in many ways as a giant “macro-organism”. As in the case of a normal biological organism, the biosphere consists of a huge number of interconnected and hierarchically organized processes of transformation of matter and energy (biogeochemical cycles) in which the metabolic activity of living organisms plays an essential part. The biosphere “feeds” itself from solar energy and inorganic matter from the Earth, processing this inorganic matter into various forms of organic matter which is mostly “recycled” again into inorganic matter.

The most interesting property of the biosphere, from the standpoint of physical economy, is its evolution over time. Parallel with the evolution of new species of living organisms, the magnitude and structure of the flows of matter and energy in the biosphere have changed over geological time. A well-known example is the transformation of the chemical composition of the Earth’s atmosphere as a result of the emergence of photosynthetic organisms, which generate free oxygen. The array of species of living organisms existing in the biosphere at a given point in time can be seen as analogous to the array of technologies utilized by an economy. The evolution of the biosphere is thus analogous to the development of an economy through technological progress. In this context the emergence of photosynthesis constituted a great “technological revolution” in the history of the biosphere.

Looking more closely at the flows of matter and energy in a normal animal, we can distinguish the following features:

Firstly, the animal’s internal metabolism: an array of cycles of transformation of matter and energy, by which the organism maintains and renews itself, while generating a surplus required for it to survive and grow. This includes processing food and oxygen taken from the outside into internally useful forms (nutrient substances and oxygen bound to hemoglobin) which are circulated to the various tissues and cells of the body, where they undergo further physical-chemical transformations, such as the synthesis of proteins and other biochemical substances, the generation of mechanical work by muscle cells and of electrical impulses by nerve cells. Finally, cells have limited lifetimes and must constantly be replaced by new cells through cell division mainly of so-called stem cells – a process which also requires inputs of matter and energy.

Secondly, to maintain its internal metabolism the organism requires inputs of matter and energy from the outside (food, oxygen and water) obtained through an active interaction with the environment which requires its sense organs and musculature. Obtaining those necessary inputs (for example by searching or hunting for food) involves a physical “cost” to the organism in the form of an expenditure of energy as well as certain metabolites (such as ATP) which must be supplied by the internal metabolism. The organism must also eliminate the waste products of its metabolism.

Thirdly, all of this activity must be steered and regulated – a task which involves the nervous system, the hormonal system and a large variety of feedback mechanisms. Particularly for higher animals, this consumes a substantial portion of the organism’s internal resources, constituting a kind of “overhead cost” for the metabolism. In the human organism, for example, the brain consumes about 25% of the body’s total oxygen supply. 

In order to grow, an animal must have a positive metabolic balance in terms of energy and materials. Growth can only occur when the animal’s metabolism produces more per unit time than what it would require in order to merely maintain itself in a steady state, without growth. The growth of the animal can be conceived of as a kind of investment cycle: in each cycle a metabolic surplus is produced, which is “reinvested” in the growth process, above all in an increase in the population of cells forming the tissue of the animal. If the animal can find sufficient food, the next cycle produces a proportionally greater surplus, which is consumed in the further growth of the organism.

Considering energy first, this signifies the following: Let A = the energy which the metabolism consumes per unit time in maintaining and renewing the tissues and organs of the body in their healthy functional capacity (taking into account heat losses to the environment); B = the energy which the organism must expend in order to secure its necessary inputs from the environment (i.e. searching or hunting for food); C = the total energy liberated in the metabolic process. Then C – (A + B) is the “free energy” which is available per unit time for the organism to grow, and to carry on various spontaneous activities in its environment in addition to those necessary for its immediate existence, but which can be essential to the maintenance of the animal species. This naturally includes sexual reproduction.

Similarly, the animal’s metabolism must have a positive material balance in terms of the inputs and outputs of its component cells. This means the following: The total amount Cs of each substance s that is produced per unit of time by the organism’s cells (including from the processing of ingested food) and made available to tissues via the circulatory system, minus the amount Bs excreted to the outside, must be greater than or equal to the amount As of that same substance, which would hypothetically be required per unit time for each of the body’s cells to carry out its functions (including replacement via cell division) in the absence of growth. Cs – (As + Bs) is the “surplus” amount of the substance s generated per unit time, available for growth and spontaneous activity of the organism.
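The two balances can be restated in a small computational sketch. The following simply encodes the formulas C – (A + B) and Cs – (As + Bs); all numerical values are arbitrary illustrations, not biological measurements.

```python
# Illustrative restatement of the two metabolic balances; all numbers
# are arbitrary, not biological measurements.

def free_energy(C, A, B):
    """Energy surplus per unit time: total liberated energy C, minus
    maintenance cost A and the cost B of securing inputs."""
    return C - (A + B)

def material_surplus(Cs, As, Bs):
    """Surplus of substance s per unit time: total produced Cs, minus
    the steady-state requirement As and the excreted amount Bs."""
    return Cs - (As + Bs)

# Hypothetical values (arbitrary units per unit time):
surplus_E = free_energy(C=100.0, A=70.0, B=20.0)        # 10.0 units free for growth
surplus_s = material_surplus(Cs=50.0, As=40.0, Bs=5.0)  # 5.0 units of substance s
```

A positive result in both balances is the precondition for growth; a negative result means the organism is running down its own substance.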

Under the condition that the surpluses of energy and material products of the metabolism are consumed (“reinvested”) in an expansion of the metabolism – for example via an increase in the number of cells – the result is a growth of the organism. This process of growth can be pictured as an expanding spiral. In the simplest hypothetical case, we imagine an animal simply growing larger in a linear fashion, without any change in proportions.
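The expanding spiral can be illustrated with a minimal numerical sketch, in which the whole surplus of each cycle is reinvested in enlarging the metabolism; all values are illustrative.

```python
# A minimal sketch of the expanding spiral: the surplus produced in each
# cycle (a fixed fraction of the current scale of the metabolism) is fully
# reinvested in enlarging it. All values are illustrative.

def grow(initial_scale, surplus_rate, cycles):
    """Scale of the metabolism after each cycle, when the surplus
    (surplus_rate times the current scale) is reinvested every cycle."""
    scale = initial_scale
    history = [scale]
    for _ in range(cycles):
        scale += surplus_rate * scale  # reinvest the whole surplus
        history.append(scale)
    return history

# With a 10% surplus per cycle the scale grows geometrically:
trajectory = grow(initial_scale=100.0, surplus_rate=0.10, cycles=3)
# trajectory ≈ [100.0, 110.0, 121.0, 133.1]
```

Because each cycle produces a proportionally greater surplus, the result is geometric rather than merely additive growth – the spiral widens with every turn.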

In reality, as demonstrated by the pioneering studies of Bertalanffy, the growth of an animal is very far from being simply linear. The very fact that the surface area of the body increases more slowly than its mass requires that the parameters of an animal’s metabolism change in a nonlinear way as the animal grows. More interesting are the processes of metamorphosis accompanying the stages of development by which an animal develops from a fertilized egg. These provide an analog to the development of an economy, with the difference that the series of developmental stages of an economy continues indefinitely. In this sense a healthy economy never reaches a final, “adult” form. We shall return to this point later below.

3.3  The metabolism of an economy – a first approximation

Let us now try to transfer this model of analysis from the metabolism of an animal to the case of a national economy. Assume for simplicity that the economy is essentially self-subsistent, i.e. that imports and exports play no major role.

In a national economy the role of cells is played by nodes of human activity, constituted essentially by individual households and workplaces (plants and factories, offices and shops, laboratories, schools and hospitals etc.). These nodes are connected together and supplied by a vast network of energy, water, transport and communications infrastructure. As we have emphasized above, economic activities are interrelated by endless chains of causality: If we start with any arbitrary node, we find that its activity depends on inputs from a variety of other nodes. Each of those other nodes in turn requires inputs from further nodes. Following this process backward, as we noted above, we discover that the entire economy is involved, directly or indirectly, in creating the preconditions for any specific activity.

We should remind ourselves once more that money per se is extraneous to the physical processes which constitute the real substance of an economy. From the standpoint of physical economy, money has a purely “informational” role, mediating the agreements between human beings who steer the physical processes of the economy on various levels. In particular, there is a fundamental difference between the real physical costs, in terms of human mental and physical work, energy, materials, equipment etc., and costs in money terms. The same holds for net physical surplus vs. profit in financial terms. The real wealth of a national economy – as opposed to financial wealth – lies in its population with its knowledge and skills, together with its physical capital, which can be defined as the totality of its buildings, machinery and equipment of all kinds, and physical infrastructure (roads, railways, power lines, communication lines etc.).

In this context it is essential to bear in mind that every element of an economy’s physical capital has a limited lifetime. As in a living biological organism, nothing in a physical economy is eternal. Everything “ages” or wears out under use. Consequently all parts of the physical capital require periodic repair and maintenance, and must eventually be replaced. Similarly, each member of the population ages and dies, and must be renewed by the birth, nurturing and education of new generations.

In a healthy economy a large portion of the total production is consumed in maintaining and renewing its population and physical capital. We therefore speak of a process of physical self-reproduction of the economy. Intuitively we can conceptualize this as a cycle of production and consumption, in which the physical output of the economy, generated in the first half-phase of the cycle, is consumed in the second half-phase. In case there is a surplus left over after meeting all the physical and manpower requirements for maintaining and renewing the economy, this physical surplus (or “physical profit”) can be “reinvested” in expanding the economy, i.e. in real physical-economic growth.

At first glance there is no difficulty in defining what a positive metabolic balance should mean for an economy. Here, in analogy with the case of a living organism, real growth can occur only when the economy produces more, in physical terms, than it would hypothetically need to simply maintain itself (in the medium-term, at least) in a steady-state “zero-growth” mode. To a first approximation, this surplus, taken relative to some unit of time, constitutes the net physical profit generated by the economy over the given time period. To the extent that this surplus is consumed (“reinvested”) in expanding the scale of physical activity – for example by an increase in the population and/or increases in the level of per capita material consumption of the population, by building new mines, factories, infrastructure, etc. – the result is physical-economic growth. Physical-economic growth is generally accompanied by an increase in the number and density of nodes of activity – analogous to the increase in the number of cells in a growing animal.

This picture is greatly over-simplified, however. In fact, the maintenance and renewal of the population and physical capital involves a large number of different cycles having different time-scales, analogous to the great variety of metabolic cycles in a living organism. Some equipment and tools must be replaced each year, while roads and buildings can last many decades. The reality is even more complicated, as we shall see below. 

Implications of the difference between short and long cycles of physical investment can be seen in the widespread decay of infrastructure which we find in many parts of the United States and other countries today. This decay points to the fact that a significant portion – if not all – of the nominal growth measured by GDP is fictitious. The appearance of a positive economic balance is created by running down the infrastructure and other categories of physical capital and ignoring this physical depreciation in the estimation of “growth”. Likewise we witness a decay in the average level of education, skills and cultural level of the masses of the population in the USA and many other countries. This phenomenon of “self-cannibalization” takes the form of either neglecting the long-term metabolic cycles entirely, or shifting resources from long-term cycles into short-term cycles. If continued too long, this practice ultimately leads to economic collapse.

Let us now take a closer look at the metabolism of an economy, in analogy with that of a biological organism.  

The metabolism of an economy consists of:

 (i). The human population (including its reproduction from generation to generation), which provides the labor force for the whole organism and requires housing, food and other consumer goods, energy and various other products and services for its maintenance. The population is divided into households as a basic economic unit.

(ii). The processes of production, distribution and consumption of the entire array of physical goods and forms of energy utilized in the economy, and the physical activities directly required to support that process. The products can be divided into consumer goods (food, clothing etc.) and industrial goods (i.e. so-called capital goods or producers’ goods) such as basic materials (steel and other materials, industrial chemicals etc.) and intermediate goods, machinery etc., useful forms of energy (electricity, chemical fuels etc.), the outputs of the construction sector (buildings, infrastructure facilities etc.). Mining, drilling and other extraction activities provide the raw materials which are needed to “feed” the economy. Besides the production of goods, we have transport, warehousing and distribution centers (including stores and other points of sale). Finally, the metabolism includes physical support activities such as maintenance and replacement of plant and equipment, maintenance and operation of transport, energy, water and communications infrastructure, waste disposal etc. 

 (iii). Non-productive service activities such as education, health, scientific research and development, administration and government (including public institutions of various sorts, courts, police and military). We can also include leisure activities of the population (hobbies, entertainment, vacationing etc.) and the corresponding services (hotels, theaters etc.). All of these services require manpower, facilities, infrastructure and physical goods and constitute a kind of physical overhead cost for the economy. 

Included in the “nonproductive”, administrative category are the activities of steering and regulating the economy. How this should occur is a subject of considerable controversy. We shall discuss this in Part III of the book, in the context of the historical struggle between the so-called “American System of National Economy” and the “British System”. We shall merely note here that the practice of developed industrial nations in their most successful periods has always combined a strong economic role of the state with a large stratum of independent private entrepreneurs, a key role being played by small and medium-size high-technology industries. The economic role of the state is primarily to set priorities for economic development, to manage the financial system (including control of money generation), to nurture domestic industry by a variety of means (including trade protection, where needed), to regulate markets and carry out strategic investments in infrastructure, science and technology, utilizing for this purpose methods of productive credit-generation and channeling of newly-created credit in prioritized directions. In addition, the state is responsible for providing, directly or indirectly, for essential services such as education and health care, pension and unemployment insurance, which are a precondition for a modern economy. So-called “market forces” work on the “micro-level”, whereas the “macro-level” of the economy is shaped by an overall strategic orientation which often tames and overrules the “market forces”. We shall return to these issues in Part III of the book.

There are two specific sectors of the economy whose role in the economic metabolism is so important that we must identify them briefly here: infrastructure and the sector of production equipment.

3.4  Infrastructure

Basic physical infrastructure – water, transport, energy and communications infrastructure – plays a role in the economic organism analogous to that of the blood vessels, capillaries, lymphatic and nervous systems in the human body. Since it is a precondition for every economic activity, infrastructure occupies a central place in physical economy.

Since the quality of each economic activity depends on the availability and quality of infrastructure, infrastructural improvements constitute the single most effective means to boost the real productivity of the economy. This applies especially to improvements which are connected with the introduction of new, more efficient infrastructural technologies and especially technologies which introduce “new dimensions” of infrastructural services. Classical examples are the development of the railroad in connection with the steam engine, the introduction of electric power – including its generation, distribution via suitable networks -- and the advent of new types of products such as electric lighting and electric motors, which revolutionized the entire economy.  Further examples are the telephone, the advent of the internal combustion engine in connection with a great extension of road networks and increases in the quality of roads. Most recently we have high-speed digital communication and the internet. Naturally, each of these basic innovations led to a long, open-ended process of more or less incremental improvements, which in some cases continues until today.

Before turning to the next topic we want to emphasize a type of infrastructure which is the foundation for everything, but is often overlooked or taken for granted: water infrastructure.

Water is an absolute necessity for human life. It is the basis for hygiene, and it is indispensable as a coolant medium and thereby essential to large-scale industrial processes. It is indispensable for protection against fires in cities. In the form of canals, dykes, irrigation and drainage systems etc., water infrastructure is essential to inland transport, agriculture and protection against natural disasters.

All of this is often overlooked by the modern consumer in developed countries, who believes that pure water comes magically from the faucet -- completely oblivious to the gigantic apparatus of water distribution networks, sewage systems, water purification and storage facilities, pumping stations and original water sources in Nature, which are all necessary to deliver the pure water coming out of the faucet.

More critical, of course is the situation in many developing countries, including those where natural water sources are scarce, and where the lack of adequate water processing leads to the massive spread of communicable diseases.

3.5  Machine tools and the investment goods sector

Production in an economy, particularly industrial production, requires an enormous variety of different sorts of machinery, equipment and tools. All of these things, which make up the physical investment required to set up a factory or other production process, belong to the category of investment goods.

Evidently, the sector of investment goods production is of the utmost economic and strategic importance to every nation. This is not only because investment goods are the basis for every sort of industrial production, but also because the productivity of an industrial process and the quality of its products depend on the equipment used in production. Equally important: the capacity to produce new and improved types of products depends on the ability of the investment goods sector to supply the necessary production machinery. The investment goods sector is also the main channel through which new technologies are “injected” into the economic process.

The heart of the investment goods sector is the sector of production of machine tools – the machines required to cut, form, and shape metal or other rigid materials, and which produce the precision parts needed to construct every other sort of machine. One can say that, besides a skilled workforce, the machine tool sector is the single most important strategic asset of an industrial nation.

In World War II, for example, it was the enormous scale and strength of the machine tool sector in the United States – built up especially around automobile production -- which permitted the U.S. to double its industrial output within five years. By 1943, the U.S. production of munitions, tanks, bombers, naval ships and other military hardware had reached such gigantic volumes that a German victory in the war was completely ruled out. In this sense the war was over, but unfortunately the killing continued for another two years.

Experience has shown that nations can rebuild after wars and severe economic depressions, as long as the machine tool sector and related key areas of investment goods production remain intact, and sufficient supplies of steel and other essential materials are available.

The weakness of the machine tool and investment goods sectors in many developing countries, and their high degree of dependency on the import of production equipment, is one of the chief sources of economic failure. It is true that large-scale import of modern production equipment has played a big role in the rise of many nations. Exemplary are the cases of Japan and South Korea in the postwar period and China in recent decades. If we look closely, however, this import policy has had two main strategic goals: (1) to gain access to the most advanced technologies, embodied in the imported production equipment; (2) to build up, in parallel, the domestic sector of production of investment goods in a comprehensive way as the basis for overcoming strategic dependencies and gaining a productive edge on the world markets.   

We consider that in the future, the notion of investment good will be expanded from the industrial sector per se to include a rapidly growing demand for scientific instruments and other equipment (such as space vehicles) needed for the production of knowledge.

3.6  Studying the structure of the economic metabolism in terms of input-output relationships

A first approximation to how the metabolism of an economy works, over short periods of time at least, is provided by the method of “input-output matrices”. This method goes back to the Tableau Économique published by Francois Quesnay in 1758, but was first elaborated for detailed analyses of entire economies by Wassily Leontief in a classical 1936 paper entitled “Quantitative Input and Output Relations in the Economic System of the United States”. In the meantime it has spawned a vast field of economic research, and is utilized routinely by governments and corporations for analysis and forecasting purposes. 

Later we shall identify a fundamental limitation of Leontief’s method, located in its inherently linear character. But the simplicity and clarity of his basic method provides us with an excellent starting-point for focusing attention, by means of contrast, on the fundamentally nonlinear character of physical-economic development. At the same time, looking at input-output relationships can help us to conceptualize more clearly the notion of an economy as a totality, as an organism. It provides an approximation to the metabolism of the economy in a similar sense to the way the tangent line to a curve approximates the curve in the vicinity of the point of tangency. In mathematical language, the developmental trajectory of a real economy always “curves away” from such linear approximations – that “curvature” being due above all to scientific and technological progress, which constantly reshapes the internal structure of the economy. We will return to the issue of curvature later below.

For our purposes here it is sufficient to give just a rough schematic idea of the input-output approach, applied in a physical-economic context. Its practical application involves many complex issues which are not essential to our discussion here. We shall also not be concerned with numerical computations. Readers who are interested can consult Leontief’s original work or any one of the countless modern texts on the subject. 

To carry out an input-output analysis of an economy we first divide the types of economic activity (i.e. the nodes described above) into categories or branches according to their type of output. In this context, “output” signifies: (a) the production of physical goods (in the case of farms, mines and industries); (b) the work performed by the various types of physical infrastructure, especially the production and distribution of energy in various forms, water supplies and transportation; (c) the “output” of the households of the population, which for the purpose of input-output analysis is usually taken as the labor force which is an “input” to each economic activity; (d) the “output” of services which are essential to production and the maintenance of the population, such as administration, education and health etc.

In practice the number of categories or sectors utilized depends on the degree of detail which is desired, but is often limited by the detail and quality of the data base which is available. Leontief’s book “The Structure of the American Economy, 1919-1939” presents input-output analyses in which economic activities are divided into 42 sectors. 

(In his studies Leontief did not work with physical parameters for the inputs and outputs of the various sectors, but with expenditures and income expressed in money terms, which is the form in which statistics are most generally available. He notes that if prices are known one can in principle convert back to physical units. For very detailed studies this can be problematic because generally many different sizes and types of products are included in a single sector. For our present discussion, however, we need not deal with such complexities. In this book, all parameters are assumed to represent real physical quantities, unless otherwise indicated.)

Now, the basic principle of input-output analysis is already suggested by the name: Every economic activity produces some result (“the output”) on the basis of certain necessary preconditions (“the inputs”) which are used up or consumed by the activity. The output of a given economic activity in turn furnishes necessary inputs to one or more other economic activities. 

A typical case is the production of an industrial product such as a truck. Examining a typical truck-producing plant in the economy we wish to study, we find that fabrication of each truck requires certain numbers of kilograms of steel and of rubber, certain numbers of semi-finished components of various types, a certain number of kilowatt-hours of electricity, a certain number of man-hours of labor and hours of operation of various sorts of production equipment – the latter constituting some definite portion of the total service-life of a given machine. Within the interconnected network of the economy, described above, the output of trucks is “consumed” in various proportions as inputs to other sectors, for example to farming, the construction industry, the transportation sector etc. At the same time, the materials, electricity, equipment and other material inputs to truck production are provided as quantifiable portions of the output of corresponding production activities, e.g. the steel industry, electricity generation plants, equipment manufacturing etc.
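The fixed quantitative “recipe” of inputs per truck described here can be sketched as a simple table; every figure is hypothetical, chosen only to illustrate the bookkeeping.

```python
# Hypothetical per-unit input recipe for a truck-producing plant.
# All figures are invented for illustration, not drawn from real data.
truck_recipe = {
    "steel_kg": 2000,
    "rubber_kg": 150,
    "components": 300,       # semi-finished components of various types
    "electricity_kwh": 500,
    "labor_man_hours": 40,
    "machine_hours": 25,     # a portion of the service-life of the equipment
}

def inputs_for(n_units, recipe):
    """Total inputs required to produce n_units, assuming the fixed recipe."""
    return {name: amount * n_units for name, amount in recipe.items()}

# Producing a batch of 10 trucks requires ten times each input:
batch = inputs_for(10, truck_recipe)
```

Each entry in such a recipe corresponds, in the full analysis, to a quantifiable portion of the output of some other sector of the economy.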

Generalizing from such cases, we see that in each specific economic activity the inputs are connected with the output in a more or less rigid quantitative way which can be approximated by a mathematical function. Assuming a relatively fixed scheme of interconnections between the inputs and outputs of all the nodes of economic activity in a given economy, it should therefore be possible in principle to set up a system of mathematical equations which would permit us to calculate the output of each specific category of economic activity that would be required in order to produce a specified net output (i.e. surplus) for the economy as a whole.

In a real economy, of course, the relationship between inputs and outputs is subject to many sorts of fluctuations depending on market conditions, natural conditions, on political factors, on managerial decisions etc. Plants often operate at less than their full capacities, inputs and outputs may be stored for some time before being utilized etc. The natural physical flows of the economic metabolism can also be distorted or even disrupted by financial crises, or more generally by the failure of the financial system to support the healthy operation of the physical economy.

For our present purposes we will ignore such fluctuations and aberrations, assuming that relatively fixed relations exist between the average values of inputs and outputs, and that these correspond roughly to the physically necessary relations, i.e. the normal relations of production.

Most important in this context are the “long cycles” mentioned above. In a tractor plant, for example, we find facilities and equipment which need to be replaced only once in 10 or 20 years, or even longer; things such as plant buildings and storage facilities, concrete floors, underground pipes etc. Certain machines (or components of machines) may have to be replaced more often. Thus, the orders placed to the sectors that supply the needed replacement equipment, buildings, machinery etc. are not constant in time, but vary from year to year depending on when the replacement is needed, or when the managers of the plant decide to make the replacement. In order to take account of the long-term physical costs for an economy to maintain itself, the easiest procedure is to “spread out” such long-term inputs uniformly in time. In other words, if a machine has to be replaced every 10 years, we will count as an input 1/10 of the machine every year. Since normally each sector comprises a very large number of individual nodes of activity (e.g. factories) this averaging is acceptable for most purposes. 
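The “spreading out” of long-term replacement costs can be sketched in a few lines; the plant, quantities and lifetimes below are hypothetical.

```python
# Sketch of "spreading out" long-cycle replacement inputs uniformly in time:
# if an item must be replaced every `lifetime_years`, we count 1/lifetime
# of it as an input each year. The plant and its figures are hypothetical.

def annualized_replacement(quantity, lifetime_years):
    """Average annual input needed to replace `quantity` units of physical
    capital which wear out every `lifetime_years` years."""
    return quantity / lifetime_years

annual_replacement_inputs = {
    "plant_buildings": annualized_replacement(1, 50),   # 0.02 building per year
    "machines":        annualized_replacement(20, 10),  # 2.0 machines per year
}
```

Summed over the very large number of plants in a sector, these averaged figures approximate the actual year-by-year flow of replacement orders.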

In his approach Leontief assumed, for simplicity, that the relationship between quantities of inputs and outputs for each specific category of economic activity is approximately linear. For the example of tractor production this would mean that producing 10% more tractors would require 10% more steel, 10% more electricity, 10% more labor, 10% more production machinery etc.

Leontief knew, of course, that strictly linear relationships never hold in real economic practice.  Even without taking account of technological progress – which we shall focus on later – it is well-known that the proportionality between inputs and output of an economic activity changes under the influence of effects such as the so-called “economies of scale” and the “Law of Diminishing Returns”. Economies of scale are common for example in electricity production, where larger plants generally require smaller amounts of labor and material components per kilowatt-hour than small ones. The so-called Law of Diminishing Returns refers to a reverse type of effect: increasing the inputs of an economic activity beyond a certain point no longer produces a proportional increase in the output. An obvious example is agriculture, where the limitation of land area of a country imposes a limit on the amount of increase in production which can be achieved by merely increasing the number of farm laborers and other inputs.
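The contrast between strict proportionality and diminishing returns can be illustrated with a conventional textbook stand-in (a square-root relation, chosen here for illustration and not taken from the text).

```python
import math

# Illustrative contrast between a strictly linear input-output relation and
# one exhibiting diminishing returns. The square-root form is a conventional
# textbook stand-in, not a relation taken from the text.

def linear_output(labor, productivity=10.0):
    """Output strictly proportional to labor input."""
    return productivity * labor

def diminishing_output(labor, scale=10.0):
    """Output rises with labor, but each added unit contributes less."""
    return scale * math.sqrt(labor)

# Doubling labor from 100 to 200 units:
#   linear:      output doubles
#   diminishing: output rises only about 41%
```

In the agricultural example, the fixed land area plays the role of the hidden constraint that bends the input-output relation away from the straight line.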

Despite these and other effects, the assumption of linear relationships can be justified as a useful approximation for relatively short periods, where the percentage changes in inputs and outputs are not too large and major technological changes are excluded. A great advantage of this assumption is that it leads to systems of linear equations whose solution is straightforward in mathematical terms – although it was extremely laborious in the period before the introduction of high-speed computers.

Leontief showed how the interconnections of the sectors and corresponding equations could be represented in a convenient and transparent way by means of input-output matrices. In the context of our present discussion the basic approach can be sketched as follows. 

(Note to readers: for those who have difficulty with the little bit of algebraic notation and equations presented below, don't worry! They will only be used in this and the next two sections, and are not essential to the content of the book. What matters is the basic point addressed by the comparison between "Case 1" and "Case 2" in Section 3.8 below on "Growth or Contraction?". For that, the mathematical discussion is not absolutely necessary, but it can help make the main point clearer for those who can follow it.)

We list the chosen categories or sectors of economic activity E1, E2, … En and designate the corresponding outputs of the sectors by x1,  x2, … xn. Thus, xi signifies the total amount of output of activity Ei produced in a chosen unit of time (e.g. a year).

It is important to bear in mind that the labor force and the labor force requirements of the various sectors are included in this analysis. Thus, we can define as sector E1 the totality of households of the population. In the input-output framework the output of the households is defined as the labor force which is available for employment in the various sectors, and the inputs are the amounts of food and other consumer goods, housing and services, needed to maintain the households per unit of labor force at a given living standard.

Within the economic metabolism the output xi is divided up into portions, each going to a different sector, and providing a necessary input to that sector’s production. It should be noted that some sectors, such as machine-building, provide inputs to themselves: machine-building requires machines.

To obtain a quantitative overview of these relationships we construct a matrix of numerical coefficients, in which the entry in the ith row and jth column, Aij , is defined as the amount of the output of sector Ei that is required as an input to sector Ej, per unit of output of Ej. Thus, in order to produce its output xj the sector Ej requires an input Aij xj from the sector Ei .

In his book Leontief speaks of “the technical relation between the physical output of an industry and the input of all the different cost elements absorbed in production. In short, it describes the production function.”

We shall assume that the numerical values of the coefficients Aij are known either from an analysis of the physical production processes themselves, or by empirical statistics of the relation of inputs to output for each of the sectors. Taken as a whole, the input-output matrix (Aij) contains essential information about the structure and state of the economic metabolism. In the following we shall show how some of this information can be extracted.
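The scheme can be made concrete with a toy three-sector example. All coefficients and outputs below are invented solely for illustration:

```python
# Hypothetical 3-sector economy (all coefficients invented for illustration).
# E1 = households (output: labor), E2 = agriculture, E3 = machine-building.
# A[i][j] = amount of sector Ei's output required per unit output of sector Ej.
A = [
    [0.0, 0.3, 0.2],   # labor absorbed per unit output of E1, E2, E3
    [0.4, 0.1, 0.1],   # agricultural goods absorbed per unit output
    [0.1, 0.2, 0.3],   # machinery absorbed per unit output
]

x = [100.0, 80.0, 50.0]   # total sector outputs x1, x2, x3 per year

def input_flow(A, x, i, j):
    """Amount of sector Ei's output consumed as input by sector Ej: Aij * xj."""
    return A[i][j] * x[j]

# For example, the labor (E1) absorbed by agriculture (E2):
print(input_flow(A, x, 0, 1))   # 0.3 * 80 = 24.0 units of labor
```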

We should note that exports and imports can easily be included in the input-output scheme. For the purposes of the present discussion, however, it is sufficient to focus on the case where the economic metabolism is essentially self-contained, i.e. there are no imports and no exports.

3.7  Surplus production – the physical “profit” of an economy

In order to determine the amount of the output of the activity Ei which is consumed in the total production process of the economy, we simply have to sum up the portions of Ei’s total output xi consumed as inputs by each of the sectors, i.e. we must sum up Aij xj over all values of j. Whatever is left over of Ei’s total output xi, when we have subtracted off all the amounts consumed as inputs by the other sectors, constitutes a net surplus production of that sector, remaining after all the consumption requirements of the metabolic process have been met.

(Note: We have deliberately avoided the term “final demand” which commonly appears in conventional applications of input-output analysis, in place of what we call “surplus production”. This is because we are not concerned with market relations of supply and demand of goods. What concerns us here, are the magnitudes of production and consumption which are physically necessary in order for the economic metabolism to sustain itself. The question of how and how well the necessary flows are actually realized via a combination of markets, internal flows within industries, long-term contracts, state investments and allocations etc. is a separate issue. Here, as in other cases, we insist that physical economic analysis should serve as a basic criterion and standard by which to judge the functioning of markets and other economic instruments.)  

In algebraic form the surplus production in sector Ei is expressed by the formula:

Si  = xi  -  [ Ai1 x1 + Ai2 x2 + … +  Ain xn ]

This surplus can be regarded as a kind of “physical profit” for the corresponding sector. It provides a potential margin for real growth in the economy. We should keep in mind, of course, that “physical profits” should not be confused with financial profits, e.g. the financial profits of companies working in the given sector. The latter depend on many factors that are not related in any simple way to the physical economy – factors such as price and market conditions, interest and debt payments etc. In particular, the surplus production may appear on the level of the individual producer as overproduction – goods that could not be sold because of insufficient demand. The mechanisms by which physical surpluses can be consumed in a process of growth and development require a separate discussion, in which the essentially static analysis presented here must be replaced by a dynamic one. We will come back to this point below.
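The surplus formula can be computed directly for a small hypothetical case (all coefficients and outputs invented for illustration):

```python
# Hypothetical 3-sector economy (invented coefficients):
# E1 = households, E2 = agriculture, E3 = machine-building.
A = [
    [0.0, 0.3, 0.2],
    [0.4, 0.1, 0.1],
    [0.1, 0.2, 0.3],
]
x = [100.0, 80.0, 50.0]   # total sector outputs x1, x2, x3

def surplus(A, x):
    """Si = xi - (Ai1*x1 + Ai2*x2 + ... + Ain*xn), the physical
    surplus of each sector after all metabolic consumption is met."""
    n = len(x)
    return [x[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

print(surplus(A, x))   # ≈ [66.0, 27.0, 9.0] : all sectors show a surplus
```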

The simple scheme outlined above can be used in a variety of ways to study quantitative relationships for an economy as a whole, and to carry out various “thought experiments” for how the economy would react to changes in the values of some of the parameters.

One possibility is to assign numerical values to the total outputs x1, x2, … xn – either from data on the sector outputs, or as hypothetical values – and calculate the corresponding values for the surplus production – i.e. the “physical profit” – for each sector. Surpluses can be utilized either to expand the scale of production (i.e. as physical investments into economic growth), or can be exported. In case one or more of the surplus values are negative, the economy will need to import the corresponding amount in order to produce the specified outputs.

Alternatively, we can set the values of the surpluses S1, S2, … Sn and calculate the values of total output x1,  x2, …, xn which would be necessary in order to generate the given surpluses. This amounts to solving a large set of linear equations. The mathematical tools for this calculation are provided by matrix algebra.
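The matrix-algebra step amounts to solving the linear system (I − A)x = S. A minimal sketch, with invented coefficients and a small Gaussian elimination in place of a matrix-algebra library:

```python
def solve_outputs(A, S):
    """Solve (I - A) x = S for the total outputs x, given the desired
    surpluses S, by Gaussian elimination with partial pivoting."""
    n = len(S)
    # augmented matrix [I - A | S]
    M = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)] + [S[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Invented 3-sector coefficients and target surpluses:
A = [[0.0, 0.3, 0.2],
     [0.4, 0.1, 0.1],
     [0.1, 0.2, 0.3]]
S = [66.0, 27.0, 9.0]
print(solve_outputs(A, S))   # ≈ [100.0, 80.0, 50.0]
```

In practice one would use a linear-algebra library rather than hand-coded elimination, but the structure of the calculation is the same.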

In the cited book Leontief showed how this kind of analysis can be used to answer questions about the behavior of an economy which are important for economic policy decisions. A concrete example from his studies is the situation of the United States economy in 1944. At that time it was clear that World War II was going to end soon, and the U.S. government had already begun the process of planning for the transition from the war economy to a coming postwar economy. Among other things this example shows that economic thinking in the United States was totally different, then, from the extreme form of “market liberalism” predominating today. In a 1944 article Leontief discusses various possible applications of his approach to the concrete problems of the U.S. economy:

“How will the cessation of war purchases of planes, guns, tanks and ships – if not compensated by increased demand for other types of commodities – affect the national level of employment? How many jobs will be created by the consumers’ demand for an additional one million passenger cars, how many of these jobs can be expected to be located in the automobile industry itself, and how many in other industries such as the steel and chemicals, coal and petroleum industries? ...

“These are the kind of questions which arise with any practical discussion of the immediate as well as the long run prospects of our post-war economy. In an attempt to indicate a possible approach to a factual statistical answer to these questions this article describes a method of estimating the quantitative relationship which exists between the primary demand for the products of all the various branches of the national economy, on the one hand, and the total output and employment in each of them, on the other…

“Given the annual bill of goods which is to be made available for consumption (and new investment), the total outputs of various industries requisite for its actual production depends primarily upon the technical structure of all the many branches of agriculture, mining, manufacture, transportation and service which directly or indirectly contribute to the output of the various commodities included in this final bill of goods…

“With the thorough-going division of labor characteristic of the modern industrial economy, the secondary effect of any change in the demand for consumer and investment goods upon industries contributing to their production mainly in an indirect way is no less significant than the direct effect upon those branches of the national economy which are principally engaged in the production of commodities and services which directly enter the final bill of goods. The tracing of these repercussions… requires, however, application of a rather elaborate analytical technique…

“(Alternatively) instead of attempting to compute employment figures from a given list of consumers’ purchases (and net investments), it is possible, of course, to reverse the analytical procedure and, having assumed a given level of total employment, compute the bill of goods which would have been produced if available labor forces were apportioned among the various branches of production with a view to supplying the households with finished goods in certain anticipated proportions, while providing at the same time for net investment of a certain fraction of the resulting net national income.”

3.8  Growth or contraction?

Under what conditions can the population of a nation sustain itself at a certain specified level of per-capita consumption, based on an economic metabolism having input-output relations (Aij)? To answer this question we examine the hypothetical case in which the entire output of the economic sectors, with the exception of the households of the population, is used up in the metabolic process, i.e. there is no material surplus for reinvestment in expansion. Accordingly we set S2 = 0, S3 = 0, … Sn = 0. In this situation we can think of the economy as being devoted entirely to supplying the households of the labor force, i.e. the population, at the specified standard of per capita consumption. That level of per capita consumption, divided up according to its components from the various sectors (i.e. household labor, food, various goods etc.), is expressed by the coefficients A11, A21, … An1; the corresponding consumption for the whole population would require the amounts A11x1 from sector E1, A21x1 from sector E2 etc.

How much labor would be required to provide this level of consumption? This can be estimated, in a very rough first approximation, from the input-output scheme. For our purposes it is not necessary to actually carry out this calculation; it will suffice to indicate how it would be organized.

We set x1 to be equal to the whole employable labor force, which is a certain proportion of the total population. From the assumption S2 = 0,  S3 = 0, …  Sn = 0 we obtain a set of equations for the unknowns x2, …, xn denoting the outputs of the economic sectors E2 … En .  Solving those equations gives us the outputs of the sectors corresponding to the assumptions stated above. From the outputs, in turn, we can determine the amount of labor which is required by each specific sector. Summing these up, we obtain the total labor requirement of the economy. Now we compare that value with the total labor force, i.e. the maximum amount of labor which is available to the economy.
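For a hypothetical three-sector economy the calculation just outlined reduces to a 2×2 linear system, which can be sketched directly. All coefficients are invented for illustration:

```python
# Hypothetical 3-sector economy (invented coefficients):
# E1 = households, E2 = agriculture, E3 = machine-building.
A = [[0.0, 0.3, 0.2],
     [0.4, 0.1, 0.1],
     [0.1, 0.2, 0.3]]

labor_force = 100.0   # x1 = total employable labor force (invented)

# S2 = 0 and S3 = 0 give two linear equations in the unknowns x2, x3:
#   (1 - A22) x2 -      A23  x3 = A21 x1
#      -A32   x2 + (1 - A33) x3 = A31 x1
a, b, e = 1 - A[1][1], -A[1][2], A[1][0] * labor_force
c, d, f = -A[2][1], 1 - A[2][2], A[2][0] * labor_force
det = a * d - b * c          # Cramer's rule for the 2x2 system
x2 = (e * d - b * f) / det
x3 = (a * f - e * c) / det

# Total labor requirement: sum of the labor coefficients A1j times outputs xj
labor_required = A[0][0] * labor_force + A[0][1] * x2 + A[0][2] * x3
print(labor_required < labor_force)
# True here: the required labor is less than the available labor force,
# so this invented economy falls under "Case 2" below.
```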

Depending on the outcome of this calculation, we have different alternatives for the future of the economy.

(In this context we should not forget: Although the indicated calculation is based on hypothetical values e.g. S2 = 0,  S3 = 0, …  Sn = 0, its outcome reveals invariant properties of the economic metabolism which result from the totality of input-output relationships (Aij), and which are valid so long as the matrix (Aij) remains unchanged.)

Case 1: The estimated minimum labor force required to sustain the population at the prevailing level of consumption turns out to be larger than the actual usefully employed labor force.

In this case, the economy is headed for physical contraction. Either useful employment must be expanded – assuming there exists a sufficient pool of unemployed or poorly employed labor with the necessary skills etc. – or additional inputs must be imported from the outside, or living standards must fall, or attempting to maintain prevailing living standards will cause the surplus of one or more sectors to become negative, leading over successive cycles eventually to a chain-reaction-like collapse of the economy.

In some cases the process of contraction can be halted by readjusting and optimizing the array of outputs x2, …, xn, for example by shutting down unnecessary or redundant activities and reorganizing production in such a way as to reduce the amounts of inputs required relative to the outputs. New technologies can be introduced which increase the overall productivity of the economy’s physical metabolism. All of these measures amount to changing the values of the input-output matrix (Aij).

If such efforts do not succeed in overcoming the negative surplus expressed in terms of the labor force, then the attempt to maintain the level of consumption (e.g. according to the policy of the “consumer society”) leads to a progressive weakening, and finally destruction, of the productive base of the economy. In many cases this process begins by economizing on “long-cycle” inputs such as maintenance and replacement of buildings, infrastructure etc. in an effort to maintain existing levels of production. Eventually the resulting decay and obsolescence leads to decline or even collapse of production in the affected sectors. The result is gaps in the supply of necessary inputs to other sectors, and so on.

Naturally the implications of a basic physical-economic reality – such as the inability of a given economic metabolism to sustain the population at a given level of consumption – can manifest themselves in many different ways on the “surface” of the economy. Ironically, for example, an objective shortage of labor in physical-economic terms can appear as mass unemployment! Unemployment means a de facto reduction in the average level of consumption of the population. Another typical “solution” is to “outsource” production to outside areas (e.g. other countries) where salaries and living standards are lower. The decline of production in various sectors can also appear in the guise of voluntary decisions by owners and investors to “downsize” those sectors in order to reduce costs and increase profits – decisions made long before the point of physical unsustainability has been reached.

In practice economic crises typically involve a self-amplifying negative symbiosis between dysfunctions of the financial system, subjective factors (e.g. market panics) and dysfunctions of the physical economy. This symbiosis can easily be seen in the case of the U.S., where the ideology of the so-called “post-industrial, consumer society” led in the 1980s and 1990s to an accelerating shift of investment and employment out of industry and into a “bubble” of nonproductive activity. The structural weakness of the U.S. economy was masked, however, by a huge trade deficit and a financial bubble which burst in 2007-2008. For a truly adequate diagnosis and treatment of economic crises it would be necessary to separate out the financial side from the physical side and above all to analyze the unfolding situation in real physical-economical terms. Unfortunately this has rarely, if ever, been done in any comprehensive way.  

Case 2: The estimated minimum labor force required to sustain the population at the prevailing level of consumption turns out to be smaller than the actual usefully employed labor force.

The happier case is when the structure of the economic metabolism -- as reflected in its input-output relations – is such that the population can maintain itself, by its economic activity, at the desired standard of physical consumption. Better – and actually necessary in the long term, as we shall see – is a sustained physical surplus which can be utilized to expand and develop the economy.

Assuming for the moment that the species of nodes of activity in the economy remain fixed – i.e. no essentially new types of activities are introduced – economic expansion can occur through an increase in the number of nodes of the various species, or an increase in the output of existing nodes, or both. The first type of expansion could be expressed, for example, in an increase in the number of households (population growth), the construction of new factories, the expansion of the transportation network etc.

In practice the process by which surpluses are in effect “reinvested” for physical-economic expansion is complicated, and cannot be adequately described by input-output analysis alone. The complexity of the physical investment process lies in the fact that the relationships between inputs and outputs are never strictly linear. The assumption of approximate linearity is applicable at best only to average values. Indeed, if there were no flexibility in the relationship between inputs and outputs it would be practically impossible for an economy to expand: an increase in output of any sector would instantly necessitate proportional increases in all other sectors, without any delay.  

In the short term, however, a producer can nearly always boost output significantly without requiring more inputs in terms of machinery, by increasing the capacity utilization of existing equipment. In such cases the demand for new equipment from supplier sectors follows only with a time delay, as the service life of equipment is exhausted more rapidly. The increase in other inputs, such as energy, materials, labor etc. is essentially “covered” by the physical surplus, taking into account the flexibility provided by accumulated stocks and inventories, unemployed or underemployed labor etc. Due to these factors, plus the action of market supply and demand on investment decisions etc., the process of investment and increased production in a physical economy takes the form of waves propagating through the network of production. The study of these waves is a special, complex topic which we shall not pursue further in this book.

Once more we must emphasize that the process of physical investment, on the one side, and the flows of financial investment on the other, are different in nature and have no simple relationship to each other. In “ideal” socialist economies, physical investment would be accomplished by direct allocation, without the use of money at all. In present-day capitalist economies, reinvestment is mediated to a large extent by credit. An exemplary case is the granting of credit by a bank to an industrial producer, permitting the producer to purchase new machinery leading to an expansion of output and/or increase in the efficiency of production, generating an additional margin of profit which allows the entrepreneur to pay back the loan. The operation of the new machinery changes the flow of inputs and outputs in the network of production in a specific manner, while the financial “payback” to the bank has no immediate effect on the physical economy. The difference is even larger in the case where the investment involves the introduction of new technology (see below).

Correct policies for economic development must start by determining which investments are feasible in terms of available physical resources, prioritizing between them, and then ensuring an adequate financing of those investments, for example via measures that encourage the flow of banking credit and private investment into the desired directions. Where sufficient capital is lacking, it can be generated via productive credit generation by a national bank or state-controlled central bank. In some cases direct state investments can be the most suitable vehicle.

For our present purposes we shall put aside these complexities in order to concentrate now on the key point, which is the necessity of technological progress.

 

Chapter 4   Nonlinear development

4.1  The limits to growth in the linear mode

Throughout our discussion of input-output analysis we treated the economic sectors as being fixed and assumed that input-output relations among these sectors were governed – on the average – by linear equations defined in terms of constant values of the coefficients Aij .

Under these assumptions, the simplest hypothetical form of physical-economic growth would be a constant rate of increase in the outputs of all sectors, let us say by 2% per year. In a highly simplified visualization of that process, we imagine that at the end of each year, each sector Ei has produced a surplus Si equal to 2% of its output xi. That surplus is then simply distributed as increased inputs to the sectors supplied by Ei, in the proportions defined by the input-output coefficients. That includes inputs of machinery and other forms of physical capital which are required to expand production. The result is an additional 2% increase in the output of all sectors, and in their surpluses, in the next yearly cycle. Population would ultimately have to increase at the same rate in order to supply the required labor power. The result over time would be an exponential growth of the physical output of the economy and – of necessity – a corresponding exponential growth of the consumption of the natural resources upon which the economic metabolism depends.
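The compounding arithmetic can be checked in a few lines (starting output invented for illustration):

```python
# Growth of 2% per year compounds exponentially: output roughly doubles
# in about 35 years (the familiar "rule of 70": 70 / 2 percent ≈ 35 years).
output = 100.0            # invented starting output of some sector
for year in range(35):
    output *= 1.02        # each year's output is 2% above the previous year's
print(round(output, 1))   # ≈ 200.0 : output has doubled after 35 years
```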

It is evident that exponential expansion in this purely linear mode cannot continue indefinitely. If linear growth were the only mode of growth available, then the Malthusians would be correct. Evidently, a continuation of such a growth process would rapidly exhaust the resources such as land and raw materials needed to feed the economy’s metabolism: the economic organism would finally “starve to death”! 

In actuality, of course, resources never come to an end abruptly. Continued utilization of a fixed array of resources first leads to a gradual increase in the physical cost of supplying them to the economy, i.e. in the amounts of manpower, equipment, energy and other inputs required per unit output in mining, extraction and processing sectors.  The corresponding input-output coefficients will thus increase with time. Up to a certain point the increased physical cost of providing raw material inputs to the economy can be compensated by rationalization: reducing redundancy and waste and increasing the efficiency of economic sectors through better organization and logistics. In the long term simple rationalization also has limits. As improvements in productivity via rationalization reach a plateau, the continuing growth in costs erodes the overall productivity of the economy.

In addition, agriculture and many branches of industry are subject to the so-called “law of diminishing returns”: increases in the inputs no longer produce a proportional increase in the outputs. “Diminishing returns” are already imposed by the limited territory available to the economy. Beyond a certain point, for example, increasing the number of tractors, amount of fertilizer, the number of farm laborers etc. per unit land no longer yields a proportional increase in the output of food. Since the total agriculturally usable land is limited, the average productivity of agriculture will eventually decrease as inputs continue to grow. Factories and physical infrastructure also take up land area, and there are obvious limits to the possibility to concentrate more equipment and people into a given space.

All these effects cause the corresponding input-output coefficients Aij to increase, producing a slowdown in economic growth. Eventually the surpluses turn negative: the economy goes into a process of self-accelerating contraction.
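A toy simulation can illustrate this effect. All numbers are invented; we hold outputs fixed and let every input-output coefficient drift upward by 1% per year:

```python
# Invented 3-sector input-output coefficients and fixed sector outputs.
A = [[0.0, 0.3, 0.2],
     [0.4, 0.1, 0.1],
     [0.1, 0.2, 0.3]]
x = [100.0, 80.0, 50.0]

def surplus(A, x, scale):
    """Sector surpluses when every coefficient Aij has grown by `scale`."""
    n = len(x)
    return [x[i] - scale * sum(A[i][j] * x[j] for j in range(n))
            for i in range(n)]

# Let the coefficients grow 1% per year and find the first year in which
# some sector's surplus turns negative.
year = 0
while min(surplus(A, x, 1.01 ** year)) >= 0:
    year += 1
print(year)   # → 20 : after two decades of creeping costs, a surplus goes negative
```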

The whole logic of our discussion was based on the premise that the level of technology is fixed, i.e. there are no technological improvements in production. The conclusion can be summarized as follows:

Without the introduction of new technologies, any economy is ultimately doomed to collapse. 

Ironically, this holds even for the case of “zero growth”, demanded by environmentalists as a supposed solution to the problem of exhaustion of resources. Why?

In history it seldom, if ever, occurs that societies or civilizations collapse simply because of an objective lack of resources. Much more often, the problem has subjective dimensions, expressed for example in irrational political decisions. If maintained too long, linear growth, i.e. growth without scientific and technological progress, leads to cultural decadence, to the loss of an orientation toward truth and creative reason. This naturally includes the case of zero growth. By failing to provide the social context for exercising and developing the creative mental powers of the population, the linear form of economic activity encourages an atmosphere of indolence and idiocy. Like a muscle, the creative powers of the mind must constantly be exercised in order to be preserved.

4.2  Strong nonlinearity: changing the relationship of Man to the Universe

Fortunately human society has again and again proven its ability to break out of the “prison of linearity”.  We documented this in our refutation of Malthusianism. Science and technology have again and again permitted Man to overcome the resource problem inherent in the linear form of growth. The essential reason, as we pointed out, is that the notion of resource is purely relative: the range of objects in Nature that an economy can utilize for its growth and development is determined by the level of development of science and technology.  As science and technology progress, the resource base of the economy is constantly redefined. Something that is practically useless today can become a valuable resource tomorrow. In this sense one could say that scientific revolutions invariably “create” new resources, while at the same time introducing new ways to utilize existing types of resources. We gave examples of this above, in the section above on “Overcoming Limits”.  The emergence of new resources is a byproduct of the fact that scientific revolutions transform Man’s relationship to the Universe.  Not only the resource base, but the entire structure of the economic organism which concretely embodies Man’s relationship to the Universe, undergoes profound changes.

But are scientific revolutions merely a way to escape from limits? By no means! Since the beginning of the industrial era, science and technology have moved more and more into the center of the real economy. Scientific and technological progress has become the main locomotive of economic growth. Growth in the linear mode – i.e. mere expansion of the scale of production without technological change – is becoming more and more an exception, an expression of backwardness and stagnation.

To put it another way: development through scientific and technological progress is the primary reality of economics, and growth is a consequence of development.

4.3  Economic development as a morphogenetic process

This brings us to the crucial difference between a healthy modern economy, seen as a physical process, and ordinary living organisms.

After an animal has gone through the series of metamorphoses that run from the fertilized egg through the embryonic phases to the adult stage, the structure of its metabolism and the types and functions of its cells stop changing. This structure, including the array of biochemical inputs and outputs of the cells, is essentially encoded and “preprogrammed” in the animal’s genes. Within a normal range of parameters, the metabolism of an adult organism can be described approximately by one of a finite array of fixed input-output relations, each one corresponding to a certain metabolic state, i.e. the resting state, state of arousal etc.  

A healthy physical economy, by contrast, never reaches a final, “adult” stage, but goes through a seemingly unending series of developmental stages, under the impact of scientific and technological revolutions and other advances in human knowledge. Not only does history document a long series of such stages, but there are intrinsic reasons for concluding that the process of fundamental scientific discovery can never reach a final conclusion. The reality of our Universe transcends the limits of any given system of scientific knowledge in such a way, that the potential for new scientific revolutions is inexhaustible. Although there can never be a “formula” for generating the next such revolution, it is possible to identify a universal creative principle underlying the process of fundamental scientific discovery. To the extent Man obeys that principle, progress never comes to an end.

How can we describe this process in more precise conceptual terms?

As a first approximation we can represent it schematically as a succession of input-output tables, one for each developmental stage. For this purpose we can visualize the tables as located on parallel horizontal planes in historical order, with the later ones located above the earlier ones. The upward-pointing vertical axis passing through the input-output tables would represent the “arrow of development”. Within any given stage of development, economic growth would be governed by the corresponding input-output matrix, and be approximately linear.

In this idealized representation scientific and technological revolutions take the form of discontinuous “jumps” from one stage of relatively linear growth to the next one. Since scientific and technological revolutions invariably lead to the emergence of new economic sectors, the number of rows and columns of the input-output matrix increases from one stage to the next. The “dimensionality” of the metabolic process increases, as each additional sector constitutes a new degree of freedom for the economic organism. Most importantly, each “jump” overcomes one or more limits inherent in the preceding mode of growth.
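The growth in “dimensionality” can be pictured schematically: a new sector adds one row and one column to the input-output matrix. A minimal sketch, with all coefficients invented:

```python
def add_sector(A, new_row, new_col, self_coeff):
    """Return a new input-output matrix with one additional sector:
    new_row[j] = input of the new sector's output per unit output of old Ej,
    new_col[i] = input of old Ei's output per unit output of the new sector,
    self_coeff = the new sector's input of its own output per unit output."""
    n = len(A)
    B = [A[i][:] + [new_col[i]] for i in range(n)]   # append a column
    B.append(new_row[:] + [self_coeff])              # append a row
    return B

# Invented example: a 2-sector economy acquires a third sector.
A = [[0.1, 0.2],
     [0.3, 0.1]]
B = add_sector(A, new_row=[0.05, 0.05], new_col=[0.2, 0.1], self_coeff=0.15)
print(len(B), len(B[0]))   # → 3 3 : one more degree of freedom for the organism
```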

Examples of this can be drawn from throughout human history. In the dawn of civilization, agriculture emerged as a new type of organized activity. As agriculture developed, additional sectors were added, such as the construction and operation of irrigation systems, the construction of means of conveyance based on the wheel etc. Other new sectors of economic activity emerged with the advent of large-scale metallurgy and related extraction activities in the so-called Bronze Age, and then the Iron Age. Jumping ahead to the industrial era, we can see how multiple new sectors emerged in connection with the steam engine, electricity and the internal combustion engine. A recent example is the emergence of the computer industry.

Each new stage is “born” from the previous stage in a manner analogous to morphogenesis in embryology. New technologies and economic sectors emerge in a process of differentiation, analogous to the emergence of new organs and other structures in biological morphogenesis.

It is essential to realize that scientific and technological progress lies entirely outside the domain of input-output analysis per se. Relative to the latter it appears as a “deus ex machina” -- an “outside” agent which intervenes to transform the whole complex of input-output relations. For this reason physical-economic development is not only nonlinear in the technical mathematical sense, but is “ontologically nonlinear” in the sense that it proceeds through the introduction of new concepts and principles.  The deus ex machina responsible for this process is the creative powers of the human mind.

Breakthroughs in science and technology are not the only source of jumps: another is large-scale infrastructure projects, especially in the areas of transport and energy. Since every form of economic activity depends on transport and energy, large-scale infrastructure improvements boost the productivity of every sector, directly and indirectly. We shall discuss this further below and in Part III of this book. Every case of successful national-economic development has been linked with large-scale investments in basic infrastructure, including “Great Projects” transforming entire regions of the national territory.

Most powerful is the combination of infrastructure development with technological revolutions. Classical examples include the advent of the steam engine and railways, and the economic revolution connected with electricity and great projects of electrification, which are continuing even today. A recent example is the advent of high-speed data processing and data transmission technology and the internet, which have impacted every sector of the economy.

4.4  “Between the jumps”: the assimilation of scientific and technological breakthroughs

The idealized representation presented above – linear phases of economic expansion separated by revolutionary jumps – is misleading in one respect: a nonlinear element of progress is everywhere present in healthy economic processes, and not only at the moment of a technological revolution. Integrating a revolutionary breakthrough into the full breadth of an economy is a complex, highly extended process involving creative problem-solving activity on the part of huge numbers of people. It requires development of prototypes, experimental applications and trials, optimization, and elaboration in various forms for different applications. The large-scale dissemination of new technology ultimately reaches into the lives of the whole population. It invariably involves changes in thinking and ways of doing things – a process which often means overcoming resistance. There is also resistance from economic interests that may be threatened by the new technologies. All of these processes lie completely outside the domain of input-output analysis, involve a high degree of unpredictability and are hardly susceptible to simple mathematical representation.

Even apart from the processes directly involved in the development and integration of new technologies, the impact of human creativity on the physical economy also takes the form of countless small improvements made every day by people at their workplaces, solving problems and finding better ways to do things. These include improvements in the design of machinery and other equipment, which increase the efficiency of the production process: producing more and better outputs with less input of energy, labor and material resources. In their cumulative effect such “micro” innovations improve the input-output relations for various sectors and thereby contribute to the productivity of the metabolism as a whole. Moreover, the constant striving for improvements within an existing level of technology prepares the ground for introducing new technology, in part because it helps to define the limits of existing technologies and creates an awareness of those limits among a broad section of the workforce.
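The cumulative effect of such “micro” innovations can be sketched in the same input-output terms used earlier. The following is a minimal illustration under invented assumptions: a hypothetical three-sector coefficient matrix, and a uniform one-percent-per-year reduction in every input coefficient, standing in for the accumulation of countless small improvements.

```python
import numpy as np

# Hypothetical three-sector coefficient matrix (illustrative numbers only).
A = np.array([
    [0.10, 0.05, 0.02],
    [0.20, 0.30, 0.25],
    [0.05, 0.15, 0.10],
])
d = np.array([10.0, 20.0, 5.0])  # fixed final demand

def gross_output(A, d):
    """Leontief relation x = A x + d, solved as x = (I - A)^(-1) d."""
    return np.linalg.solve(np.eye(len(d)) - A, d)

before = gross_output(A, d).sum()

# Twenty years of 1%-per-year "micro" improvements in every coefficient:
# each unit of output now requires slightly less input from every sector.
after = gross_output(A * 0.99 ** 20, d).sum()

# The same final demand is delivered with less total gross output,
# i.e. less intermediate consumption; the freed capacity is surplus.
assert after < before
```

The point of the sketch is only this: even without any change in the list of sectors, steady small improvements in the coefficients raise the productivity of the metabolism as a whole.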

In fact, one of the most effective policies for promoting physical-economic development is to launch projects which deliberately drive existing technologies to their limits, creating challenges that accelerate the innovation process. An example is the manned colonization of space, which is effectively beyond the limits of present-day technology. Launching a long-term program for the colonization of Mars, with suitable measures to support innovation, would be one of the best ways to promote scientific and technological progress today.

4.5  The hierarchical ordering of scientific and technological progress

Examining the history of physical-economic development, we find that scientific and technological progress occurs on a hierarchy of levels:

(1) Fundamental advances involving the discovery of new physical principles or “axioms” of physical science. Here “physical principle” is taken in the broad sense which includes principles of living organisms (i.e. biology), whose existence is an essential property of the Universe. Fundamental discoveries are accompanied by the emergence of new scientific conceptions and (implicitly) change the relationship of Man to the Universe.

(2) The process of revision, elaboration and expansion of scientific knowledge which ensues as a consequence of a fundamental advance -- including theoretical investigations, empirical investigations using new types of experimental apparatus, and the systematization and codification of results, often with the emergence of new branches of science.

(3) The ensuing process of applied research and development, resulting in the creation of new technologies in prototype form. Prototypes often evolve from experimental equipment created at Level 2.

(4) Optimization of prototypes and development of finished products embodying the new technologies, together with means for their production.

(5) Integration of the new technologies into the full breadth of the economic metabolism.

The number of persons involved in these five levels increases from a relative handful of scientists, or even a single scientific thinker, at Level 1 of fundamental discovery, to a sizeable section of the workforce, or even the entire workforce, at Level 5. Each of these levels depends on somewhat different types of creative thinking on the part of those involved.

Although development flows naturally from the upper to the lower levels, in practice there is constant feedback from the lower to the higher levels. Thus, the step from a prototype to a production-ready product (Level 4) often requires additional research and development efforts, and the construction of further prototypes (Level 3). Similarly, applied research and the work of creating prototypes often encounter problems and anomalies that call for further elaboration of the basic principles (Level 2).

The emergence of microelectronics gives us an excellent example of the five levels described above.

Here the fundamental advance (Level 1) was the emergence, in the first two decades of the 20th century, of a new type of physics – quantum physics – whose principles differ radically from those of the “classical physics” which prevailed up to that time. The quantum revolution was initiated by a relative handful of physicists including Max Planck (discovery of the quantum of action), Einstein (hypothesis of the photon), Erwin Schrödinger, Niels Bohr and Werner Heisenberg (first elaboration of the fundamental laws of quantum mechanics). The result was a totally new conception of physical processes on the microscopic level, with profound consequences for practically every area of natural science, making it necessary (Stage 2) to revise and rework vast areas of existing knowledge.

Stage 2 encompassed the elaboration of quantum mechanics – and later quantum electrodynamics, integrating Einstein’s relativity principle -- into a precise, completed theory with its own mathematical tools, and a vast number of experimental investigations into phenomena that were predicted by, or could be explained by the new theory. One of these areas was the quantum theory of solids and a revolution in the understanding of electrical conduction in various solid materials. An enormous amount of experimental work laid the basis for deliberate efforts to develop solid-state systems to replace the vacuum tubes which had previously been the basis for all electronic devices such as radio transmitters and receivers, as well as for the first digital computers. (More details on this story can be found in Part III of this book in the section on the postwar economic development of the United States.)

The first prototype transistor was constructed in 1947 (Stage 3). In the subsequent period intense work was directed at making better transistors and developing methods for producing them in large numbers. In 1953-1954 the first prototype of a transistor-based digital computer was constructed.  In 1958 the first prototype integrated circuit was built by Jack Kilby, who described it as “a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated.”

Meanwhile the transition from prototype solid-state electronic devices into finished products progressed rapidly (Stage 4), leading to the emergence of an entire new sector of the economy. The first commercial transistor radio came onto the market in 1954. Transistor-based digital computers were first utilized as finished products in the 1950s, as components of U.S. weapons systems. The first commercial minicomputers came onto the market in 1965. Integrated circuits performed the onboard guidance computations for the 1969 manned landing on the Moon.

Since then solid-state electronic devices, including high-speed computers and the internet networks based on them, have profoundly transformed the economies of the world’s industrial nations (Stage 5). This process is still going on.

Microelectronics is only one branch of the gigantic “tree” of scientific and technological developments based on quantum physics. No less important is the revolution which quantum physics unleashed in the field of chemistry, providing a completely new understanding of molecular structure, the chemical bond and the interaction of atoms and molecules with electromagnetic radiation. The laser is a further example.  

The example of quantum physics illustrates an essential aspect of the hierarchical relationship between the five levels sketched above: the development processes on any given level all take place entirely within the domain of possibilities opened up by the higher levels. They do not go beyond that domain. The higher level defines the foundation and framework within which the development processes on the lower levels take place. This relationship can be expressed in a more precise way by saying that each level in the hierarchy bounds the lower levels from the outside. The notion of “bounding from the outside”, which we intend here, is drawn from the work of the mathematicians Bernhard Riemann and Georg Cantor on the theory of manifolds.

Thus, the elaboration of quantum physics (including quantum electrodynamics) into a formal mathematical theory was based on the new physical principles discovered by Planck, Einstein, Schrödinger, Bohr and Heisenberg, but did not add any other fundamental principles. Similarly, the research and development work leading to the first prototype transistor utilized the elaborated form of quantum theory, including the methods of calculation of quantum-physical problems, but did not change the basic mathematical form of quantum theory. The optimization of transistors and development of finished products based on them flowed from the breakthroughs embodied in the first functioning prototype transistor (and other original prototypes such as the first integrated circuit); but the optimization and commercialization process did not add any comparable breakthrough.

4.6  Modes of physical-economical development

Identifying the five different levels of scientific and technological progress and their hierarchical relationship allows us to distinguish between fundamentally different qualities or modes of physical economic development.

Linear growth (zero development): no significant technological advance occurs, only an expansion of the scale of economic activity.

Weakly nonlinear development: new technologies are being generated and introduced into the economic process, but on the basis of a preexisting state of basic scientific knowledge, i.e. without new fundamental discoveries being made.

Strongly nonlinear development: development “powered” by an ongoing succession of fundamental discoveries and ensuing technological revolutions.

All three modes can be found in the economic history of various nations and regions of the Earth.

It appears that much of the so-called Early Medieval period in Western Europe witnessed technological stagnation or even regression – zero or negative development – while technological development flourished elsewhere, for example in China and the Islamic world.

The period from the late 18th century until the middle of the 20th century exemplifies the strongly nonlinear mode of development, characterized by a rapid succession of fundamental discoveries in practically all domains of natural science, with ensuing technological revolutions.

The last 50 years exemplify the intermediate case, weakly nonlinear development: flourishing in terms of technological advances, but with no fundamental scientific revolutions.

Indeed, the last 50 years have witnessed the emergence of new technologies at an unprecedented rate, laying the basis for an enormous expansion of the world economy and for sustaining a human population of over 7 billion. There have been countless discoveries of new phenomena in all areas of natural science and an enormous expansion of detailed knowledge in physics, chemistry, biology, astronomy etc.  But from the standpoint of fundamental physical principles there has been little decisive progress, in our opinion at least. The last fundamental scientific revolution was the advent of quantum physics, whose essential features (including quantum electrodynamics and quantum field theory) were established by the 1950s. Subsequent developments in physics have all occurred within the established framework of quantum physics, albeit with many new areas of application and advances in sophisticated mathematical methods as exemplified by the so-called “Grand Unification” and developments in the field of so-called elementary particles.

The discovery of DNA and the “genetic code”, made possible by technologies such as X-ray crystallography, certainly meant a revolution in many areas of biology, but cannot be said to embody a new fundamental physical principle. The advent of molecular biology does not seem to have brought us closer to answering the question posed by Erwin Schrödinger in his famous 1944 book, “What is Life?”

The history of microelectronics, sketched above, typifies the mode of physical-economic development since the 1950s: a continuous, ongoing process of emergence of new technologies based on fundamental physical principles that had already been established 50 years ago. The last half-century has been a period of weakly nonlinear development.

4.7  Lack of strongly nonlinear development in the recent period

This limitation of the technological development of the last 50 years is reflected in the failure to find good solutions to tasks which are essential to Man’s future development. Here are some examples:

Failure to develop inexpensive, efficient and safe means to transport substantial numbers of human beings into space. As a result of this failure, projects such as the establishment of permanent colonies on Mars remain beyond our reach.

Failure to overcome the many deficiencies and drawbacks of existing forms of nuclear power. Present-day nuclear fission reactors cannot make full use of the enormous energy density of nuclear reactions, and they produce large amounts of long-lived radioactive products (“nuclear waste”) which must be removed and stored for long periods. For these and other reasons, nuclear power today is at best only marginally more economical than generation of electricity by the combustion of fossil fuels.

Failure to develop effective means to accelerate and control radioactive decay processes.

Failure to develop economic methods for transforming nuclear energy directly into electrical energy on a large scale.

Failure to realize controlled nuclear fusion in a form suitable for efficient large-scale power production. Even if so-called “breakeven” is finally achieved, the prevailing design approaches to fusion reactors today -- magnetic and inertial confinement systems -- are not suitable for making full use of the enormous intrinsic power-density of fusion reactions, and may not be competitive with optimized fission-based power sources.

Failure to develop room-temperature superconductors that could be utilized economically on a large scale.

Failure to achieve decisive breakthroughs in slowing or reversing the process of aging and the accompanying degenerative diseases.

It is conceivable that one or more of these challenges might be solved without a fundamental scientific revolution. But even in that case, the long delay can be seen as a symptom of stagnation on the fundamental level. Another symptom is the explosive growth in the complexity of systems needed to solve problems which ought to have much simpler solutions. A good example is the launching of human beings into space, which is still based on the principle of the liquid-fueled rocket engine invented a century ago, and remains extremely expensive.

In the long term, fundamental scientific revolutions are necessary. When continued beyond a certain point, the weakly nonlinear mode of development encounters a problem of diminishing returns: unless new physical principles are introduced, the net physical-economic benefits of generating new technologies (on Levels 2, 3 and 4) gradually decrease, and the process of overcoming the relative limits of resources eventually comes to a halt. We can see an example of this in the world today: much of the increase in physical productivity over the last 20 years has been due to the advent of high-speed data processing systems based on microchips. By this means, the efficiency and productivity of already established technologies and production methods in the various economic sectors have been greatly improved. But the complexity of computer-based systems is now tending to grow faster than the benefits in physical productivity, and progress in computer systems has not led to overcoming the failures listed above.

4.8  Perpendicular axes of optimization of the economic process

Much attention has been given to the problem of how to optimize productivity and economic growth. A similar problem is posed on the level of individual enterprises. We can distinguish three main approaches, corresponding to the linear, weakly nonlinear and strongly nonlinear modes of economic development discussed above.

The first approach is to reform and restructure the economy without introducing any essentially new elements, and without taking into account the requirements for future development. The economy is treated as a machine which must be overhauled and readjusted in order to maximize its efficiency. A geometrical analogy would be to look for the shortest pathway between any two points, replacing curved lines wherever possible by straight-line pathways. Typical measures include eliminating all activities judged to be nonessential, holding new investments to a minimum while making maximum use of already existing equipment and resources, and minimizing labor costs and social expenditures by a variety of means. A further step is to relocate production to locations where costs are lowest. The latter approach goes back to Ricardo’s theory of “comparative advantage” and is particularly common today in the context of free-trade globalization.

On the level of specific industrial sectors the first approach is associated with the concept of “rationalization” and more recently “lean production”. In Part II we shall discuss a famous historical case, the application of so-called “scientific management” or Taylorism to the mass-production of automobiles.

The rationalization approach is strongly favored by neoliberal economic policies and by large financial investors, because it can generate increases in nominal productivity and increased profits with relatively little investment, and fits into the fiscal austerity regimes now being imposed on many nations. But rationalization has limits. Attempting to further rationalize a process which has already been rationalized seldom yields a comparable productivity gain. Worse, activities judged to be “nonessential” from the standpoint of short-term efficiency often include activities which are vital to the medium- and long-term survival of the economy. Rationalization can thus lead to economic collapse further down the line.

The second approach aims to maximize what we called weakly nonlinear development. In contrast to the first approach it involves a significant amount of innovation. This approach is based on extracting the maximum benefits from existing basic scientific knowledge, without going substantially beyond it. Existing technologies are constantly refined, improved and combined in new ways, boosting their performance and sophistication.

A classical example is the automobile sector. Although the basics of the automobile have remained unchanged for a very long time, today’s automobiles embody a huge number of improvements and innovations. A similar process can be seen in the enormous improvements made in recent decades in the performance and safety of passenger aircraft, in household appliances, in electrical systems of all sorts, in automation and process control systems in industry, in medical diagnostics and pharmaceuticals, in computing systems and their applications throughout the economy etc. etc. Meanwhile, existing basic scientific principles are elaborated into more and more complex theories and mathematical models, and gigantic amounts of data are gathered.

This second mode of optimization is limited, however, by an effect of “saturation”: from a certain point on it is difficult to realize really significant additional improvements without revolutionary breakthroughs, and the efficiency of the development process decreases. An example from the technological side is once again the automobile sector, where there are many signs that the process of increasing energy efficiency, increasing comfort and building in all kinds of electronic devices is approaching a saturation point. Here marketing and expansion into new markets play a much bigger role in boosting sales than substantive improvements in the basic product.

In the area of science, theories and models become more and more complicated and unwieldy, while the rate of advance in understanding and mastering natural phenomena slows down. 

On the level of the world economy as a whole, the limits of the second form of optimization are reflected, for example, in a continued one-sided dependence on the combination of fossil energy sources and the internal combustion engine, despite enormous improvements and refinements made over the last 100 years.

The third approach is to optimize the rate of scientific and technological breakthroughs and related fundamental innovations. This approach appears to be very expensive and difficult from the standpoint of the first and second approaches to optimization. It requires enormous investments in basic scientific research, in education and training, in research and development capabilities, etc. It can only be sustained if the gains in productivity generated by the elaboration of fundamental breakthroughs into finished technologies are sufficient to compensate for the huge expense of sustaining the discovery and innovation process. We shall discuss this necessary “multiplier effect” in Part II and Part III. On a deeper level the third approach requires a culture of scientific progress, an atmosphere of inspiration and intellectual excitement, and the deliberate fostering of scientific genius over generations.

The above three approaches to optimization can be thought of as three perpendicular axes in space. They are not symmetrical in terms of their economic effects, however. When applied alone for a significant period – as commonly occurs in the context of neoliberal austerity policies -- the first approach destroys the entire basis for an economy to realize scientific and technological progress. The second approach is certainly preferable, and dominates the economic practice of successful nations today. In the case of China, as in the case of Japan in the post-WWII period, an enhanced effect of the second approach is being achieved through a large-scale import and transfer of advanced technology from the outside. As we pointed out, however, exclusive reliance on the second approach eventually converges on “saturation”, at which point the economy is threatened with long-term stagnation and eventual decline.

It is only the third approach which can ensure an unlimited trajectory of economic development into the future. In practice, of course, it must be combined with certain elements of the first two approaches. Among other things, the mere existence of a revolutionary new technology does not mean that it automatically contributes to the overall productivity of the economy. Typically, an enormous effort of research and development is needed before usable products emerge. This process takes the form of refinement and elaboration of the original new technology, as well as a “rationalization” process to maximize the efficiency of the finished product. Finally, an economy burdened with gigantic waste and inefficiency can never achieve the necessary scale of investment and the other “take off” conditions for major technological innovations. Here, also, a certain degree of rationalization is required, but one which frees up the creative process rather than eliminating it.

It is worth noting that both the second and third forms of optimization require substantial direct or indirect state support for the innovative private sector and a close “networking” of that sector with state institutions of scientific research and R&D. A much discussed reason is the so-called “Valley of Death”: it often takes 10 years or longer to transform a scientific/technological breakthrough into marketable commercial products. In the intervening period the private company has costs but no profits; it “dies” before reaching the goal. The close relationship between government and private enterprise is often best realized in the context of large high-technology projects in which specific tasks are assigned to private companies, while the state shoulders the main risk and a large part of the investment. This system reached a great degree of perfection in the United States in the period of the successful Apollo moon landing program of the late 1960s and early 1970s, as well as in countless technological breakthroughs of the post-WWII period.

Ironically, the close networking of state institutions and private enterprise has continued to play a big role in the U.S., especially in defense-related sectors, despite the fact that the U.S. has since become the leading “missionary” for radical neoliberal ideology, demanding that nations around the world carry out policies of radical privatization and “less government”.  In the United States itself, the symbiosis of key strategic industries with government agencies continues, typically under the banner of national security and the need to maintain U.S. competitiveness and technological leadership.

 

Chapter 5    A universal metric of physical-economic development
 

5.1  Creative discovery -- overcoming the relative boundedness of a given mode of thought

Our discussion of the five levels of scientific and technological progress brings us to epistemological issues which – although they may be difficult for many readers to grasp – cannot be omitted here, because they lie at the heart of physical economy. The following discussion is based on our own understanding of the approach of Lyndon LaRouche, the American economist who has pioneered the subject of physical economy.
 

From the standpoint of the thought processes involved, each of the five levels of the scientific and technological progress bounds the next lower level from the outside. The notion of “bounding from the outside”, which we intend here, is drawn from the work of the mathematicians Bernhard Riemann and Georg Cantor on the theory of manifolds. To a first approximation, “bounded from the outside” signifies “circumscribed by” or “operating within the framework determined by” an agency or factor that is not internal to the given process. The problem of the bounding of human thought processes is a mirror image, in the subjective domain, of the limits of a given level of technology and the limits of resources relative to that technology, in the domain of physical economy. In both cases the bounding is relative, not absolute.  

We can illustrate the concept of “bounding from the outside” with the example of deductive systems of mathematics. Each such system starts with a set of axioms about some type of mathematical objects – for example the whole numbers 1, 2, 3, …. A “theorem” in the system is any statement which can be derived (“proven”) from the axioms by a chain of logical deductions. But the truth of the axioms – and thereby also of the theorems derived from them – cannot be established by deductions inside the system. Indeed, such reasoning would be circular, because it would presuppose the truth of what it claimed to prove. In classical number theory, for example, the axioms (the so-called Peano axioms) consist of statements which mathematicians regard as self-evident, i.e. as pertaining to the essence and meaning of the concept of a whole number. Such judgments are a product of thought processes that are completely different from logical deduction. In this sense the logical-deductive system of classical number theory is bounded by the higher-level acts of insight from which the axioms were derived.
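For concreteness, the Peano axioms referred to here can be stated informally as follows; every theorem of classical number theory is a statement deducible from these five assertions, and from nothing else:

```latex
% The Peano axioms for the whole numbers 1, 2, 3, ...,
% stated informally; S(n) denotes the successor of n ("n + 1").
\begin{enumerate}
  \item $1$ is a whole number.
  \item Every whole number $n$ has a successor $S(n)$.
  \item $1$ is not the successor of any whole number.
  \item If $S(n) = S(m)$, then $n = m$.
  \item (Induction) If a property holds for $1$, and holds for $S(n)$
        whenever it holds for $n$, then it holds for every whole number.
\end{enumerate}
```

Nothing inside this list can establish why these five statements, rather than some others, correctly capture the concept of a whole number; that judgment lies outside the deductive system.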

Something analogous occurs with the “laws of physics”. A great deal of the ordinary work of physicists is to try to explain and predict various natural phenomena through mathematical calculations based on the known physical laws – the laws of classical and quantum mechanics, electrodynamics, thermodynamics etc. -- as these have been codified and set forth in the textbooks of physics. Engineers and technicians also base their work on the laws of physics, although often indirectly, by using so-called semi-empirical formulas (and nowadays computer software) that are based on some combination of physical theory and the data of experience. All of this activity is thus “bounded from the outside” by the process of fundamental discovery, from which existing knowledge of the “laws of physics” is ultimately derived.

These cases exemplify a general characteristic of the way the human mind deals with the world. At any given moment our judgments and perceptions of reality are shaped by certain habits of thinking, concepts and assumptions about how the world is organized – habits and assumptions which people typically acquire through their upbringing, education and social interactions. This set of habits, concepts and assumptions is in some ways analogous to the definitions and axioms of formal mathematics, or the software which determines the behavior of a computer. While in mathematics the axioms are explicitly stated, most people most of the time are not aware of the basic assumptions underlying their thinking, nor of where those assumptions came from. In this sense their thinking processes are “bounded from the outside”. They are living in a kind of mental prison but don’t realize it, because the walls of the prison are invisible to them. In some ways the effect of “invisible” assumptions can also be compared to the way the behavior of animals is shaped by instincts which are the product of a long process of evolution, and passed on genetically.

Hidden assumptions and habits of thinking play a key role when human beings try to solve problems – whether in scientific research, in industry, or in everyday life. If a problem is solvable, then a solution exists; the problem is only to find it. Often we fail to find a solution because we are looking in the wrong direction. In such commonplace experiences the interesting question is: why do people look in the wrong direction when they are seeking solutions to a problem? And why do people in many cases reject or rule out solutions which later turn out to be perfectly valid? 

Fortunately, human beings can change their basic habits of thinking, concepts and assumptions. This can occur for example under the impact of profound shocks and crises, through the spread of great cultural and religious ideas, and also under the influence of changes in the practice of society that may occur as a result of scientific and technological revolutions. In such cases the change is induced from the “outside”. But we human beings can also deliberately change our assumptions and habits of thinking by inner reflection, by becoming conscious of the specific “bounding” of our own thought processes, and transcending it. This capability is unique to human beings. It is the characteristic of all true creativity, and is manifested most clearly in the process of fundamental scientific discovery. Although fundamental discoveries are typically provoked “from the outside” by a paradox or experimental anomaly, the real discovery is entirely internal to the mind of the discoverer, and lies in a deliberate change in the mode of thinking.   

This brings us to a paradox which has crucial implications for physical economy: Each fundamental scientific revolution (or conceptual revolution more generally), through which the mind frees itself from the “prison” of a certain set of assumptions, concepts and habits of thought, at the same time creates a new “prison”!

We can see this most clearly in the case of formal mathematics and mathematical physics: We add a new axiom in mathematics, for example, or even replace the whole set of existing axioms by new ones. The result is a new system of mathematics, where the whole process of generating theorems is bounded – “imprisoned” -- by the new set of axioms. The same applies to revising and/or expanding the “laws of physics”. The resulting new physics still has an essentially deductive form: the newly-adapted “laws of physics” determine what we think can happen and cannot happen in the Universe, just as much as the older set of laws did in the past. In this sense we have built a new “prison” for ourselves. It is larger and allows more possibilities for development than the preceding one, but it nevertheless constitutes a bounded domain of thinking.

Some might object to the idea that scientific progress means going through an endless succession of conceptual prisons. Why should it not be possible to liberate the mind completely, eliminating all habits and assumptions?

Our answer is that such a state would be inconsistent with the nature of Man’s relationship to the physical Universe – a relationship which necessarily develops through hypotheses. Suppose, for example, you believed that your thinking was unbounded and free from all assumptions about the world around us. How could you claim to know this? How could you know that there are no hidden assumptions in your thinking? As soon as you take some deliberate action in the physical world, you have made a decision, and that decision must be based on something, on some idea or expectation concerning the consequences of the action. But we can never claim to have complete, 100% accurate knowledge of the world around us. Even if we had a scientific theory that could predict the results of measurements to a million decimal places, we cannot exclude the possibility that a more precise measurement will reveal a discrepancy at the next decimal place. In fact, many scientific discoveries have been made because of small discrepancies between measurements and the predicted values.

What about Isaac Newton’s famous statement “hypotheses non fingo” – “I don’t make hypotheses”? As Newton himself explained, this was intended to signify that his famous law of gravitation was nothing but a logical deduction from experimental observations, and that he was not proposing any theory or hypothesis about the nature and origin of gravitation itself.

In fact, however, Newton’s procedure is not at all free from assumptions. He assumed as self-evident the notion of a rigid physical body having a definite position and velocity in space; he applied theorems of Euclidean geometry to the observational data in order to deduce his law of gravity, assuming that real physical space is the same as the abstract mathematical space of Euclidean geometry, and also assumed an absolute universal time. Two centuries later Bernhard Riemann demonstrated the arbitrary nature of these assumptions, developing a general notion of the curvature of manifolds which later served as the basis of Albert Einstein’s work on the curved space-time of general relativity. 

The cases of formal mathematics and mathematical physics provide a conceptual model for the type of “boundedness” which is characteristic of human thinking generally. But the historical evidence of scientific progress, as well as evidence that can be gained from rigorous introspection, demonstrates that the human mind is capable of repeatedly transcending the relative bounds of any given set of assumptions and habits of thinking, in the direction of a growing mastery of the principles underlying our Universe. Physical economy provides proof of this growing mastery in terms of the ability of Man to sustain human life on the Earth and in the future beyond the Earth.

5.2  Fountains of discovery – higher hypotheses and the concept of “hypothesis of the higher hypothesis”

A precise conceptual definition of “power to sustain human life” is given by LaRouche’s concept of “relative potential population density”, which we shall examine in the next section. But first we must focus on a profound question raised by what we have just said:

Is the act of discovery a mere isolated event, an accident? The history of science and the development of human knowledge generally, from the earliest periods of human culture until today, points to the opposite conclusion: namely that each individual discovery is part of an ongoing process unfolding in a lawful way. In other words: human creativity must have its own “laws”! Indeed, if you believe that the Universe as a whole is organized in a lawful way, then human creativity, scientific discovery and physical-economic development – which are all part of the Universe – must also be lawful processes.

Examining the internal evidence of the development of human knowledge, we find that discoveries typically come in series or chains, in which a handful of seminal ideas function as “fountains of discovery”. Abstracting from the many complexities involved, suppose that A, B, C, … is a sequence of successive states of scientific knowledge. Each of these states of knowledge has a “bounding” which can be expressed as a set of hypotheses (assumptions) concerning the organization of Nature. Each of the steps A→B, B→C, C→D etc. is the result of a creative discovery, in which the previous “bounding” of knowledge is overturned and a new one emerges.

Now we ask: what is the “bounding” of such a chain of discoveries? Evidently it is connected with the kinds of ideas we mentioned above – ideas which are not hypotheses of the usual sort, but function on a higher level, as “fountains of discovery”. LaRouche has used the term “higher hypothesis”: a hypothesis governing a succession of hypotheses (or sets of hypotheses), each of which is the product of a creative discovery.

We can take our investigation a step further: examining the history of knowledge once more, we find that the character and intensity of the process of scientific discovery varies from period to period. Imagine a series of chains of discovery spanning successive eras in the history of knowledge, each governed by its own higher hypothesis:

A1 → B1 → C1 → …   governed by higher hypothesis U1

A2 → B2 → C2 → …   governed by higher hypothesis U2

A3 → B3 → C3 → …   governed by higher hypothesis U3

etc.

Now consider the sequence of higher hypotheses U1, U2, U3, … . The process U1 → U2 → U3 → … must have some “bounding”, some generating principle. LaRouche calls this a “hypothesis of the higher hypothesis”. He pointed to a close relationship with the notion of the transfinite discovered by the mathematician Georg Cantor.

We can make the seemingly abstract conceptions just presented more concrete by examining their relationship to rates of scientific and technological progress and comparing the rates of scientific and technological progress in various epochs of human history. (Here we use the notion of “rate” in a broad conceptual way, not in the strict mathematical/physical sense of the derivative of a scalar-valued function. We shall discuss the issue of how to measure scientific progress and economic development below.)

As a “fountain of discoveries,” each higher hypothesis generates a certain rate of scientific progress. A more powerful higher hypothesis generates a higher rate of discovery. A successful “hypothesis of the higher hypothesis” would then correspond to an acceleration of the rate of scientific progress. A close study of the history of knowledge shows that successful hypotheses of the higher hypothesis are closely connected with certain qualities of ideas concerning the creative nature of Man and his relationship to the Universe. An example is the explosion of scientific progress in European civilization, which began with the 15th century Renaissance and continued through to the end of the 19th century. This extraordinary acceleration in the pace of scientific advance was inseparably connected with profound works of philosophy, theology, painting, music and architecture. An expression of the corresponding “hypothesis of the higher hypothesis” can be found in works such as De Docta Ignorantia by Nicolaus of Cusa, and later in the work of Gottfried Wilhelm Leibniz. Such profound, seminal ideas lay the foundation for entire eras of scientific and technological progress.
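The distinction between a rate and an acceleration of discovery can be sketched numerically. All of the figures below are hypothetical, chosen only to contrast the two cases:

```python
# Toy contrast between the two levels discussed above: a higher
# hypothesis corresponds to a rate of discovery; a "hypothesis of the
# higher hypothesis" corresponds to an acceleration of that rate.
# All numbers are hypothetical illustrations.

years = list(range(0, 50, 10))

# Era governed by one fixed higher hypothesis: a constant rate of
# discovery, so accumulated knowledge grows linearly.
rate = 2.0                              # discoveries per year
knowledge_a = [rate * t for t in years]

# Era in which successively stronger higher hypotheses emerge under a
# hypothesis of the higher hypothesis: the rate itself grows, so
# accumulated knowledge grows quadratically.
accel = 0.1                             # growth of the rate per year
knowledge_b = [rate * t + 0.5 * accel * t**2 for t in years]

for t, a, b in zip(years, knowledge_a, knowledge_b):
    print(t, a, b)
```

After 40 years the second era has accumulated twice as much as the first under these numbers; the gap widens without limit as time goes on.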

5.3  Defining a universal metric of economic development

From the conceptual side of scientific and technological progress we now turn to the physical side.

In the introduction we criticized the common practice of using the Gross Domestic Product (GDP) to measure economic performance, noting that GDP growth can often go hand-in-hand with a decay of the real productive base of the economy and with a ballooning of wasteful consumption and speculative bubbles. Apart from the distortions introduced by monetary measures in the calculation of GDP, the whole conceptual framework surrounding the use of GDP ignores the crucial difference between growth and development, and fails to identify the strongly nonlinear form of development which is essential to maintaining human existence in the long term.

Evidently an alternative is needed. What we are looking for is not some simple numerical parameter, but rather a conceptual standard or yardstick for physical-economic development which can serve to orient economic policy and practice in the right direction.

Lyndon LaRouche has shown how to solve the problem of defining a universal metric of economic development, using his notion of “relative potential population density”. Although his solution is profoundly theoretical in nature and has not yet been elaborated as a tool for the practical analysis of economies, it is indispensable for an adequate conceptual grasp of physical economy. It provides the clearest conceptual foundation for framing long-term economic policy.

In the following we shall present LaRouche’s solution, as we understand it, in a series of steps.

5.4  Relative potential population density

We begin with the central focus of physical economy: the human being. We remind ourselves that the fundamental criterion for measurement of an economy is not its output of goods, but rather its ability to support human existence. That means, more precisely: the ability of a growing population to maintain itself, through its economic activity, at an increasing level of existence in material and cognitive terms. Looking at the economy as an interconnected system of farms and mines, housing and infrastructure, factories and laboratories etc., we can regard it in its totality as a “life support system” – a giant physical apparatus or “tool” by means of which a given population maintains its existence in interaction with its physical environment. The physical environment is defined by a given geographical territory with its natural resources. Put another way, the economy mediates the relationship between Man and Nature.

From that standpoint we approach the problem of measurement by asking the question: What is the maximum population of human beings that could potentially support itself on a given territory, on the basis of the prevailing level of knowledge, technology and skills embodied in the practice of the given economy – i.e. its level of development – using only the resources located on that territory? Dividing this maximum number of persons by the number of square kilometers of territory, we obtain the potential population density of the economy, relative to the given territory.

This, to a first approximation, is the proposed universal parameter for measurement of an economy as a whole. Real economic development would thus be defined as increase in the potential population density.

Note that this provisional definition already distinguishes between development and growth. A mere expansion of the scale of economic activity, without progress in science and technology, has no effect on the potential population density, or may even reduce it.

Three points must be added for clarification:

Firstly, we should not forget that the notion of maximum potential population employed here has nothing to do with the actual size of the population. For a healthy economy, the potential population will always be much larger than the actual one.  

Secondly, in practice the potential population density can only be very roughly estimated. What is important here is not the numerical value, but rather the way it changes as a function of time.

Thirdly, we emphasize that the density referred to here is only an average density for the whole territory. A modern city, for example, can have a population density of over 10 000 persons per square kilometer. But to supply those 10 000 persons with food at the present nutritional standard and agricultural productivity of the USA, for example, requires a land area on the order of 40 square kilometers. Thus, under these conditions a city requires 40 times its own area, in the form of surrounding farmland, to feed its population. Taking into account the much lower population density of typical agricultural regions, we obtain, as a very rough estimate, a potential population density on the order of 200 persons per square kilometer. The actual population density in the USA is about 34 persons per square kilometer.

Note, however, that this estimate of potential population density only takes account of the farming area needed to feed the population. But the high yields per unit land of a typical American farm depend on supplies of water, electricity and fuels, fertilizers and other chemical products, machinery, etc. The population also needs many kinds of products in addition to food. To estimate the potential population of the U.S. we must take into account the land use required by each sector of the economy to provide the outputs needed, directly and indirectly, to maintain the population. The non-agricultural activities take up a very significant amount of land area. They include land used for highways and other infrastructure, factories, warehouses and other storage areas, reservoirs, mines and quarries, forestry, residential areas, shopping areas, office buildings etc. We must also take into account the fact that the U.S. economy imports significant amounts of raw materials and other products from outside. To estimate the potential population density, we must therefore estimate the additional land area requirements which would arise from replacing all imported inputs by production based only on the territory of the U.S.

Taking all of this into account, the estimated potential population density of the U.S. will be much lower than 200 persons per square kilometer. We see that this figure is sensitive to technological developments not only in agriculture per se, but in practically every domain of economic activity.
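The rough arithmetic above can be reproduced in a few lines. The first two figures are the illustrative round numbers from the text; the allowance for non-agricultural land is purely hypothetical, included only to show how the estimate shrinks:

```python
# Back-of-envelope version of the land-budget estimate in the text.
# The city density and farmland ratio are the text's rough figures;
# the extra-land factor is a purely hypothetical illustration.

city_density = 10_000       # persons per km^2 of urban area
farm_km2_per_city_km2 = 40  # farmland needed per km^2 of city (US yields)

# Average density over the city plus its supporting farmland only:
avg_density = city_density / (1 + farm_km2_per_city_km2)
print(round(avg_density))   # roughly 244, i.e. "on the order of 200"

# Adding land for infrastructure, industry, mining, forestry etc.
# (here a hypothetical 50% extra area) pushes the figure lower still:
extra_land_factor = 1.5
print(round(city_density / ((1 + farm_km2_per_city_km2) * extra_land_factor)))
```

Any refinement that adds land requirements lowers the average, which is why the realistic estimate falls well below 200 persons per square kilometer.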

Turning back to the general case, we should note that the concept of potential population density is per se not yet adequate to compare economies of different regions with each other. This is because the area needed to support a given population clearly depends on the quality of the land, on its climate and resources.  Thus, for a given level of technology, a territory with favorable climate, fertile land and easily available resources can support a much larger population than a barren, arid area. To provide a measure that is independent of the particular territory we should take some fixed quality of land, climate and resources as the basis for our estimates. In this way we obtain the concept of “relative potential population density” – i.e. the maximum population density relative to the specific geographical and other natural conditions under which the economy must operate.

The notion of relative potential population density applies not only to territories on the Earth, but by implication also to the much more difficult environments which will confront Man in the future, in establishing permanent colonies on the Moon, Mars and beyond.

At first glance relative potential population density, so defined, would seem to provide a proper measure of what we should call productive power in real physical-economic terms. Economic development would then be measured in terms of rates of increase of the productive power of the economy.

This definition contains a paradox, however. When we speak of the maximum population which could theoretically support itself on any given territory (under given natural conditions), on the basis of a given level of science and given technology, we did not say for how long, i.e. for what length of time. Without scientific and technological progress, the economically usable resources will be progressively used up; the real cost of exploiting those resources will constantly increase, and the economy will ultimately collapse. Consequently the maximum size of the population which could maintain itself at the given fixed level of technology decreases as a function of time.

How do we take the effect of increasing costs for obtaining resources into account? One way would be to define productive power as the maximum density of population which could support itself for some specific length of time – 25 years, for example -- at the level of knowledge, technology and skills embodied in the existing practice of the economy – on a given territory.

There is a weakness in this definition, however. It makes productive power a measure of the prevailing technological and other practices of an economy, but it says nothing about the ability of an economy to progress – to reach a higher technological level in the future.

Developing and implementing major technological advances involves very large real costs which are not present in an economy oriented to linear forms of growth. We can compare an economy embarking on a technological revolution with an airplane on the runway. To be able to take off, the airplane must reach a certain critical speed, which requires a large expenditure of energy before the plane leaves the ground. Similarly, an economy must generate a physical surplus sufficient to cover the enormous investments of resources needed to implement technological advances on a large scale. Otherwise the economy will “stay on the ground” or crash before reaching the next highest technological level.

Generating a large physical surplus is not sufficient, however. To be able to fly, it is not enough for a vehicle to have powerful engines. Similarly, the ability to achieve “technological take off” requires that the economy be designed and organized in an appropriate way. It must have the necessary institutional framework, including a well-functioning credit system; it must have suitable research and development capabilities, leadership capabilities, a suitable industrial base, a labor force with the necessary skills and know-how, a certain cultural outlook and educational standard of the population etc.

In Part III of this book we shall examine cases of national economies – including the U.S., Germany, France and Japan from the end of World War II into the 1970s – which have realized high rates of technological advance over sustained periods. In each case this was achieved by deliberate policies and measures aimed at creating an “economic airplane” able to achieve “technological takeoff” on a continuous basis. On the other hand, the economic history of the last 40 years reveals how, under the influence of neo-liberal ideology, nations have dismantled much of their in-depth capability for sustaining scientific and technological progress over the long term. To investors looking for quick profits or politicians obsessed with budget-cutting, maintaining high rates of scientific and technological progress appears “too expensive”. The resulting policies undermine the viability of the economy. If continued, they ultimately lead to total collapse.

We shall call an economy “viable” in the physical-economic sense if it possesses the capacity to realize scientific and technological progress at a rate sufficient to compensate for the gradual depletion of the resources available at any fixed level of development. In other words: an economy is viable if there exists a potential trajectory of uninterrupted physical growth and development, starting from the economy in its given state and continuing into the indefinite future.

Naturally, the mere existence of such a trajectory – or “world line” in the vocabulary of Albert Einstein – does not mean that it will be realized in a given concrete case. At any moment wrong political decisions can cause a society to diverge from a trajectory of upward development and even plunge into collapse. In many cases an economy must first go through a process of recovery and reorganization before growth and development can resume.

Now we put all the above considerations together to define a universal metric of economic development, as follows:

1. Let L signify the level of development of the knowledge and capabilities of the population, embodied in the activity of the given economy E at a given moment in time t.  Here the notion of “human knowledge and capabilities” is to be understood in the most general sense to include knowledge of science and technology, skills, know-how (including information and data relevant to the construction and operation of equipment of all sorts), creativity and the general quality of labor and leadership, as well as relevant knowledge and capabilities associated with the arts and so-called humanities. We think of L as a function of time, L = L(t), which advances as the economy develops.

Defined in this way, L says nothing directly about how the economy E is organized and how large its population is, nothing about its output and input-output relations etc. Theoretically, many different economies might embody the same level L of human knowledge and capabilities at a given moment.   

2. Next, we choose a sufficiently large territory T of some chosen quality of land, climate and resources as our standard of reference.

3. Consider all viable economies embodying the given momentary level L of knowledge and capabilities of the population, which could hypothetically be built up within the confines of the territory T and which utilize only the resources of that territory. The size of the population, sustained by a viable economy on the territory T, will depend on that economy’s specific structure and parameters.

4. Among all viable economies on the territory T, having the momentary knowledge and capability level L, choose the one which has the maximum population. Denote that population by P.

5. The relative potential population density (RPPD) corresponding to the level L of knowledge and capabilities, is equal to P divided by the surface area of the standard territory T. Thus defined, the relative potential population density is a function of L: RPPD = RPPD(L).

Put as briefly as possible, the RPPD corresponding to the level L is the maximum population per unit area of a standard territory T, which could maintain itself and its capacity to realize scientific and technological progress at a rate sufficient to compensate for the exhaustion of economically utilizable resources, on the basis of an optimally organized economy whose trajectory starts from level L of knowledge and capabilities.

Now we have what we need to define a universal metric for economic development.

Starting from the economy E we wish to measure, let L(t) signify the level of knowledge and capabilities attained by E at time t. The measure of physical-economic development of E is defined as the rate of increase of RPPD(L(t)). In other words: the rate of growth of the relative potential population density resulting from the increasing level of knowledge and capabilities in the economy.
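The definition just given can be sketched in code. Since the text gives no empirical form for either L(t) or RPPD(L), both functions below are hypothetical stand-ins, chosen only to make the metric computable:

```python
# Sketch of the universal metric: the rate of increase of RPPD(L(t)).
# Both L(t) and RPPD(L) below are hypothetical stand-in functions;
# the text gives no empirical form for either.

def L(t):
    """Level of knowledge and capabilities at time t (illustrative: linear)."""
    return 1.0 + 0.02 * t

def RPPD(level):
    """Relative potential population density as a function of L (illustrative)."""
    return 100.0 * level ** 2   # persons per km^2 of standard territory T

def development_rate(t, dt=1e-6):
    """Numerical derivative d/dt of RPPD(L(t)) -- the metric of development."""
    return (RPPD(L(t + dt)) - RPPD(L(t - dt))) / (2 * dt)

# Development is measured by this rate of increase, not by momentary output:
print(development_rate(0.0))
```

Note that the metric depends on the whole composition RPPD(L(t)): a rising level of knowledge counts as development only insofar as it raises the potential population density.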

This definition expresses the fundamental standpoint of physical economy: the notion of an economy as an instrument for human development. We could express this by saying that the work done by an economy in its development is defined by the increase in the knowledge and realized creative capabilities of the population. The latter is measured in terms of the potential for an ever larger human population to sustain itself and its potential for further development, on a given territory. Our universal measure thereby combines the physical-objective side (production, energy, infrastructure etc.) with the cognitive-subjective side of economic life -- the knowledge and creative intellectual powers of the population for scientific discovery and problem-solving etc.

In this context we should emphasize that a high level L of knowledge, skills and abilities in the population is by no means sufficient in and of itself to guarantee a successful trajectory of physical-economic development. Viability, in the sense of an adequate “takeoff” of the economy to a higher level of development, can only be realized when the existing intellectual/cultural capabilities of the population are combined with a highly-developed physical economy having a suitable productive base, energy supply and infrastructure, facilities for research and development, etc. If the cognitive capabilities of a population are not supported by a rapidly developing physical economy, then those capabilities will be lost. Many lessons can be learned, in this regard, from the devastating effects on the cognitive levels of the population of the so-called “post-industrial society” policy implemented in the United States and many other nations over recent decades.

The universal metric of physical-economic development, which we have just defined, is evidently closely linked to LaRouche’s notion of higher hypothesis and hypothesis of the higher hypothesis. The minimum precondition for a trajectory of unlimited development is the ability to generate an unending succession of scientific discoveries. For that, some suitable “fountain of discoveries” must be embedded in the knowledge and practice of society. Thus, the concept of a viable economy is inseparably linked to that of a higher hypothesis as presented above.

In practice a trajectory of development may flow from more than one higher hypothesis, and new higher hypotheses can emerge in the course of time through acts of creative reflection on the boundedness of a prevailing mode of development. This suggests the following consideration:

In an economy developing according to a given set of higher hypotheses, the level L of knowledge and creative capabilities of the population is constantly advancing. As a result, the relative potential population density of the economy is increasing at a certain rate R which is a function of time. When a new higher hypothesis emerges, which overcomes the relative boundedness of the given set of higher hypotheses, the result is to accelerate the growth of knowledge and creative capabilities of the population, and thereby to increase the rate of growth of the RPPD. To the extent the introduction of improved higher hypotheses unfolds according to a higher principle – a hypothesis of the higher hypothesis in the sense of LaRouche – that higher principle corresponds to a certain rate of increase of the rate of increase of the relative potential population density of the economy.

These considerations may appear very abstract at first glance, but they can provide profound insights into the past and future of the human race. The clearest example of a hypothesis of the higher hypothesis in action is the new conception of mathematical physics and of the natural sciences generally which emerged in Europe in the course of the 15th, 16th and 17th centuries and is most clearly associated with the names of Nicolaus of Cusa, Johannes Kepler and Gottfried Wilhelm Leibniz. Although explicit notions of science and technology had existed since antiquity, progress occurred in a very different mode, according to different species of higher hypotheses. The explosion of scientific and technological progress that ensued from the new “hypothesis of the higher hypothesis” made possible a drastic acceleration in the rates of growth of both the potential and the actual population density of the world, lasting into the recent era.

The study of higher hypothesis and hypothesis of the higher hypothesis in the history of science and technology, and their correlation with physical economic development, is an extremely fruitful area of research which leads beyond the bounds of this book. The same holds for the important problem of estimating the relative potential population densities of different regions of the world in the course of time. The concept of relative potential population density provides an extremely useful vantage-point from which to identify developmental principles that have remained valid from the earliest periods of human history up to today.

The science of physical economy is still in its infancy. Even though basic principles of physical economy have been embedded in the successful practices of nations throughout history, it is only in the recent period – thanks largely to the contributions of Lyndon LaRouche – that these principles have begun to be made explicit and elaborated in a systematic way.

Chapter 6    Special topics in physical economy

6.1  Characteristics of physical-economic development

Apart from an increasing rate of scientific and technological progress, physical-economic development is inseparably connected with the following processes:

1. Increase in the flow of energy utilized per square kilometer and per capita of the population.

2. Increase in the power density of technology (as defined below) and increase in the values of analogous density parameters for the performance of basic physical infrastructure.  

3. Expansion of the domain of physical-economic activity in the direction of ever smaller and ever larger scale-lengths (i.e. “towards the infinitely small and the infinitely large”).

4. Increase in the cognitive content of the work process, with corresponding shifts in the composition of the work-force.

5. Progressive improvements in the educational level, material living standards, quality of leisure activity, health and longevity of the population.

In the following sections we shall examine some of these points more closely.

6.2  Energy and physical-economic development

It is well-known that the emergence and development of modern industrial economies has gone hand-in-hand with a continual increase in the amount of energy utilized in the economy, per square kilometer of area and per capita of the population. In fact the relationship between increased energy use and physical economic development – increase in the relative potential population density – goes back to the dawn of civilization. The use of fire and animal power are early examples.

There is no mystery in this relationship. To sustain human existence Man must act upon Nature and act upon the products of his action – tilling the land, extracting raw materials and transforming them into useful products, constructing tools and structures, transporting people and materials etc. That action takes the form of physical work, and work requires energy. The larger the density of population, the larger the amount of energy that must be expended per unit area of inhabited land. To sustain a growing population density the productivity of labor must increase, which requires supplementing human muscle power more and more by energy from the outside. As living standards increase, the energy consumption per capita increases further. With the increasing use of resources, more energy must be consumed in their extraction, concentration and processing into useful forms, further increasing the consumption of energy per capita and per unit land. The recycling of resources also requires energy.

The growth of energy production and consumption is a nonlinear process. Physical-economic development requires energy of increasing quality as well as quantity. This applies not only to so-called primary energy (the original energy sources), but above all to the quality of the energy employed in its final end-use. The most important example to date of increasing the quality of energy on a large scale is the revolution connected with the introduction of electricity and the process of substituting electricity for other forms of energy in end-use – a process which is continuing today, for example in the efforts to replace the internal combustion engine more and more by electricity-based transport systems such as electric cars. Examples in the domain of primary energy are the shifts from combustion of wood to coal, oil and gas, and later to uranium. Utilizing thermal energy at higher temperatures, as exemplified by the tendency toward increasing the temperature of combustion in engines and power plants, is another case of the shift to higher qualities of energy. Coherent forms of electromagnetic radiation, especially short-wavelength laser radiation, constitute a still higher quality of energy.

Generally speaking the shift toward higher qualities permits a more precise and efficient use of energy. As energy transformation processes become more efficient, the amount of primary or source energy will generally increase less rapidly than the application of higher qualities of energy. This tendency, already seen in the improvement of thermal efficiency of many processes, may be greatly accelerated in the future, e.g. by the development of methods for converting nuclear energy directly into electricity.

(Note: It cannot be ruled out that sometime in the future the apparent consumption of energy per capita – in the sense that “energy consumption” is understood and measured today – might actually decrease, as energy is more and more “recycled” in reversible processes, as opposed to being dissipated in heat. Already today, some modern mass transport systems utilize electromagnetic brakes that return a portion of the electrical energy, used to accelerate the vehicle, to energy storage devices or back into the power grid. In principle ultra high-speed magnetic levitation vehicles travelling through evacuated tubes will be able to “recycle” nearly 100% of the vehicle’s kinetic energy. The hypothetical possibility of a tendency toward a decrease in nominal energy consumption at some future time might appear to contradict the arguments presented above. The contradiction is only apparent, however. Even if the entire amount of energy utilized in the economy were retained and recycled, and none at all “consumed”, the energy flux – the amount and intensity of energy flows in the economy -- will still continue to grow per capita and per km² as a function of physical-economic development. Thus, the passenger in the hypersonic magnetic levitation train will utilize a huge flux of energy, while consuming almost none. The energy used to accelerate the vehicle will be recovered in the act of slowing it down again, via electromagnetic induction.)
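The distinction drawn in this note -- between the energy flux a vehicle utilizes and the energy it actually consumes -- can be sketched in a few lines of code. All figures below are hypothetical assumptions chosen purely for illustration:

```python
# Illustrative sketch (all figures are hypothetical assumptions):
# a vehicle that recovers most of its braking energy "utilizes" a large
# energy flux per stop cycle while consuming only the unrecovered losses.

def kinetic_energy_j(mass_kg, speed_m_per_s):
    """Kinetic energy of the moving vehicle, in joules."""
    return 0.5 * mass_kg * speed_m_per_s ** 2

train_mass = 200_000.0        # assumed 200-tonne trainset
speed = 300 / 3.6             # 300 km/h expressed in m/s
recovery_fraction = 0.6       # assumed share returned via regenerative braking

flux = kinetic_energy_j(train_mass, speed)    # energy cycled per stop
consumed = flux * (1 - recovery_fraction)     # net energy dissipated

print(f"energy flux per stop cycle: {flux / 3.6e6:.0f} kWh")
print(f"net consumption:            {consumed / 3.6e6:.0f} kWh")
```

As the recovery fraction approaches 1 -- the limiting case of the evacuated-tube maglev described above -- the net consumption tends to zero while the flux per cycle is unchanged.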

6.3  The principle of increasing power density of technology

Apart from the overall increase in energy use per capita and per square kilometer, the growth in potential population density correlates strongly with increases in the concentration or intensity of the energy flows utilized in the processes of production.

In the language of physics, power density (or energy flux density) is defined as the amount of energy flowing through a given area per unit time. The role of power density in technology is exemplified most clearly in processes which involve the application of energy to some material ("a work piece"), or the transformation of energy from one form to another. In the simplest cases, we can identify a primary working surface at which the energy is applied to the work process, or across which the transformation of energy occurs. The power density of the process is then defined as the amount of energy flowing across that working surface per unit time and per unit area, measured for example in watts per square centimeter ( W/cm² = J/(s × cm²) ). Illustrative examples, where this definition can most simply be applied, include: welding, cutting and surface treatment processes; pistons and turbine blades; heat exchangers; chemical reactors with well-defined reaction surfaces; various sources of radiation, etc. In some other cases it is more useful and meaningful to define power density not in terms of an interior working surface of a technological process, but rather in terms of the exterior dimensions, volume or weight of the entire apparatus in which the generation, application or transformation of energy occurs.
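As a minimal numerical illustration of this definition (the welding figures are hypothetical, chosen only to show the units at work):

```python
# Power density = power crossing the working surface, per unit area.
def power_density_w_per_cm2(power_w, area_cm2):
    return power_w / area_cm2

# An assumed arc-welding process: 5 kW delivered through a 0.1 cm^2 spot.
print(power_density_w_per_cm2(5000.0, 0.1))   # -> 50000.0 (W/cm^2)
```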

The principle of increasing the power density in the application or transformation of energy has been exploited by Man from the very earliest times with the invention of the first stone tools. A cutting tool concentrates the mechanical action generated by the muscles of the arm and hand onto a much smaller area – the sharp edge of the tool – giving primitive man the ability to cut through materials which would otherwise resist his efforts.  The same principle applies in a different form to the hammer: here the energy of motion, accumulated in the process of swinging the hammer, is concentrated into a short instant of time, when the hammer impacts the target object.

Combining the hammer and chisel produces a concentration of action in both space and time. The result is to generate nonlinear effects in the material being worked upon, making it possible (for example) for human beings to break up hard rock – something that was impossible on the basis of direct muscle-power alone.

We can say that technology in the modern sense was born when primitive Man began to utilize such principles in a deliberate, conscious way, changing Mankind’s relationship to Nature. Man became a tool-maker, and tool-making emerged as a new sector in the physical economy of early human societies. The growing use of tools based on the concentration of energy increased the population potential of the human race. 

The intimate relationship between increasing power density and increasing relative potential population density (RPPD) is demonstrated by the whole history of technology until the present day.

At first glance there is nothing mysterious about such a general correlation. Intuitively, increasing the power density of productive processes should mean being able to do more work in less time and in a smaller area (or space), which is evidently a precondition for being able to support a larger population on a given territory.

The matter is by no means so simple, however. The argument just sketched fails to take account of the costs connected with the development, production and operation of technologies operating at higher power densities. Such technologies can be significantly more complex than their predecessors, for example, requiring much more effort in their fabrication, and special training of the workforce operating them. How can we be sure that the economic gains obtained from use of the new power-dense technologies will be sufficient to compensate for their costs? This question arose already for stone-age Man, who had to devote significant time and effort to making and using his tools.

The answer cannot be found in the abstract domain. It has to do with the way the real physical Universe is organized. In the case of our stone-age man breaking up a rock with a hammer and chisel, the concentrated pulse of energy creates a shock effect in the crystal structure of the stone, which is qualitatively different from the effect of applying the same amount of muscle power without the use of tools, i.e. by pressing on the stone. Materials respond differently at different power densities. Increasing the power density of action beyond certain threshold values opens up new domains of physical phenomena and adds new dimensions – new potential degrees of freedom – to the physical economy as a whole. Naturally this does not mean that every new device invented to utilize the higher power density will automatically turn out to be profitable in real physical-economic terms. Achieving a net benefit has often required an extensive period of experimentation, selection and optimization. Human history demonstrates, however, that Man has always learned to exploit new physical phenomena in ways that increase the population potential of the human species despite the expenditure of resources involved in this process.

On the deepest level this continued success provides experimental evidence for the conclusion that the principle of unlimited growth in the potential population density of the human species is somehow embedded in the organization of the physical Universe itself.

We now turn to more examples of the role of power density in technology. 

The "stone-age" principle of increasing power-density is beautifully illustrated today by the use of lasers in machine-tools and other equipment. By focusing a laser beam onto a tiny area and/or releasing laser energy in ultra-short pulses (as in today’s femtosecond laser systems), we can produce gigantic power densities. Today there are compact ("table top") short-wavelength pulsed laser devices that can achieve power densities comparable to those at the center of a nuclear explosion!

The relationship between wavelength, spectral density and the potential power-density which can be delivered by a laser system, exemplifies another general principle: Increase in the power-density of technology correlates with an increase in the precision of manufacturing processes. Lasers and other modern "directed energy systems" employ not only the focusing of energy, but also a precise tuning and "shaping" of pulses to achieve highly specific, precise effects on a given target material.

A somewhat different case, of very great economic consequence in the course of history, is the process of increase in the power density of power-producing machines -- beginning with the use of wind and water power to replace or supplement human and animal muscle power, and continuing through the development of the steam engine, electric and internal combustion engines, and finally nuclear power systems. This progression exemplifies a deeper side of the correlation between power density and real economic productivity: the increase in the power density of technology is not a simple linear process, but occurs in "jumps" as a result of scientific-technological revolutions. In attempting to achieve higher power densities Man encounters various sorts of limits, which can be overcome only by the discovery of new physical principles and which call for qualitatively new types of technology. It is no accident that the "quantum jump" increases in power density of energy technologies, observed in history, have involved the discovery and mastery of different scales of organization of the physical Universe (for example macroscopic, molecular, electronic, nuclear scales). Conversely, the effort to increase power density confronts Man with the behavior of matter under more and more extreme conditions, uncovering new phenomena whose investigation leads to scientific discoveries.

Thereby, the principle of increasing power density becomes, in effect, a higher hypothesis for the generation of knowledge. 

The close relationship between increases in power density and the progress of scientific knowledge explains why these increases in power density tend to occur in parallel, in the same historical period, in different sectors of an economy. The power density which can be achieved by a specific machine or process reflects a global characteristic of the economy, a specific stage of development of its overall scientific and technological potential.

6.4  Examples of "quantum jump" increases in the power density of technology

A central attraction of the American Centennial Exhibition of 1876 was the famous Corliss steam engine, a high point in the development of steam power. The engine produced about 1400 horsepower (1.04 MW), at that time a gigantic power output. It was a very big machine: 20 meters high, more than 15 meters across, and weighing 56 tons. Nevertheless, it occupied a much smaller space than would be required by 1400 living horses!

The advent of internal combustion engines made it possible to produce comparable power outputs with much smaller engines. Today, marine diesel engines producing 1400 horsepower have typical weights of around 2 tons, and lengths and heights of the order of 1.5 meters. Thus, the power-to-weight ratio of modern diesel engines is roughly 30 times higher than that of the Corliss engine.

To compare internal power densities, it is reasonable to choose the surface of the pistons as the main working surface. The total surface area of the two pistons of the Corliss engine was about 2 m², giving a power density of about 52 W/cm². By contrast, the 12 cylinders of a typical marine diesel engine, taken together, have a total surface area of about 0.15 m². That gives a power density of about 700 W/cm² -- more than 13 times larger than that of the Corliss engine.

It is no accident that the first viable powered airplanes emerged only after the internal combustion engine had been developed. Taken together with their fuel supply, water supply and boiler system, steam engines were too bulky and too heavy to permit an aircraft to take off under its own power. Modern aircraft flight became a reality only thanks to the drastic improvement in the power-to-weight ratio of suitably designed internal combustion engines compared to steam engines. The “quantum jump” increase in power-to-weight ratios was made possible by the order-of-magnitude larger power density of the internal combustion engine and the development of energy-dense refined hydrocarbon fuels.

On the other hand, the power density of the modern piston-based engine is minuscule compared with advanced turbine technologies and other aerospace propulsion systems. Take for example the self-powered liquid hydrogen turbopump which supplied fuel to the U.S. Space Shuttle main engine. The Space Shuttle turbopump weighed 320 kg and was about half the size of the marine diesel engine mentioned above, but it generated 50 times as much power (52.2 MW, or 70000 hp). Its power-to-weight ratio was about 300 times that of the diesel engine, and nearly 9000 times that of the Corliss engine. Taking the sum of the surfaces of the turbopump's turbine blades as the working surface, we find a power density of about 250 kW/cm², which is more than 350 times that of an ordinary diesel engine, and nearly 5000 times that of the Corliss engine.
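The piston-surface comparisons above can be checked from the figures given in the text alone; the following sketch redoes the arithmetic (taking 1 hp ≈ 745.7 W):

```python
# Recomputing the working-surface power densities quoted in the text.
HP_W = 745.7  # watts per mechanical horsepower

def power_density_w_per_cm2(power_w, area_m2):
    return power_w / (area_m2 * 1e4)   # 1 m^2 = 10^4 cm^2

corliss = power_density_w_per_cm2(1400 * HP_W, 2.0)    # two pistons, ~2 m^2
diesel  = power_density_w_per_cm2(1400 * HP_W, 0.15)   # twelve cylinders, ~0.15 m^2

print(f"Corliss: {corliss:.0f} W/cm^2")    # about 52
print(f"diesel:  {diesel:.0f} W/cm^2")     # just under 700
print(f"ratio:   {diesel / corliss:.1f}")  # more than 13
```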

The power density of a technological process, defined in terms of working surfaces alone, is of course not the only factor determining productivity. In discussing the Corliss steam engine, for example, we have so far considered only the steam piston itself. Looking at the entire system, we note that the Corliss engine was fed by an outside source of high-pressure steam: a coal-fired water boiler installed in an adjacent building. If we take into account the volume and working surfaces of this steam generating system, then the net power density of the Corliss engine comes out much lower still than that of a diesel or gasoline engine, where fuel combustion occurs directly in the piston cavity.

More generally, the economic productivity of an engine or other power source depends on the entire context in which the engine operates, including the various inputs, infrastructure and other preconditions upon which its operation depends. In particular we must consider the real costs of supplying necessary fuels and other materials to the power generating system -- including extraction, processing and transport costs. Each of these processes, in turn, has power density parameters of its own.

In this context there is a broad correlation between increases in the energy density of fuels (in various stages of extraction, processing and transport), and increases in the economic productivity of the total system formed by the engine (or other power source) and the network of activities needed to supply fuel to the engine. The most obvious source of economic advantages of nuclear energy, for example, compared to all other presently-available primary power sources, lies in the extremely high energy density of nuclear fuels. The energy densities of nuclear fission fuels, as defined by the amounts of extractable energy per unit mass or volume of fuel, are hundreds of thousands of times larger than those of fossil and biomass fuels.
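The "hundreds of thousands of times" figure can be made plausible with a rough order-of-magnitude estimate; the heating value and fuel burnup used below are approximate assumptions, not precise data:

```python
# Rough, hedged estimate of the fuel energy-density ratio.
coal_j_per_kg = 2.9e7           # ~29 MJ/kg, typical hard coal (assumed)
burnup_gwd_per_tonne = 45.0     # assumed light-water-reactor discharge burnup

# Convert gigawatt-days per tonne into joules per kilogram of fuel:
lwr_j_per_kg = burnup_gwd_per_tonne * 1e9 * 86400 / 1000.0

ratio = lwr_j_per_kg / coal_j_per_kg
print(f"fission/coal energy-density ratio: ~{ratio:.0f}")   # order of 10^5
```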

We should bear in mind that in this whole discussion we have utilized the concept of energy as it is understood in present-day physics. There is good reason to believe that this notion will undergo drastic revisions in the future. The principle of increasing power density will remain intact, in essence, but will be expressed in a different form.

6.5  Other forms of power density: urbanization and infrastructure

For the purposes of physical economy it is useful to generalize the concept of power density to include not only the application of energy per se, but any useful form of action or activity. As in the case of energy, densification is a nonlinear process: The process of reaching higher densities encounters limits which can only be overcome through new technology; and at the same time densification gives rise to qualitatively new phenomena.

A most instructive and profound example, reaching back thousands of years, is the emergence of cities and of a qualitatively new type of human culture: urban culture.

Modern cities with high-quality employment and efficient infrastructure are the prime locomotives of physical-economic development of nations. Factors such as the reduction of travel distances; much more direct access to water, food and other goods; much more effective use of infrastructure and services (including health and education) etc. drastically reduce the per capita costs of maintaining a population at a given standard of living compared to those of the same population spread out over a large area, while greatly increasing its productivity. Here the physical advantages are complemented by cognitive ones: well-functioning cities provide for the most intensive contacts between people, for the lively interchange of ideas, for immediate access to institutions of higher education and scientific research etc. We can regard urbanization itself as a kind of technology. A precondition, of course, is advances in the labor productivity of agriculture which greatly reduce the percentage of the population needed in the countryside to supply food to urban areas.

Historically, the development of cities with high population densities has involved many challenges. Not least of these was the vulnerability to the rapid spread of dangerous diseases due to the close proximity of people to each other. The 14th century Black Death is an example. Crucial, up into the modern period, has been the development of water and sanitation systems, including the improvement of water quality and suppression of water-borne diseases, and the creation of underground sewage systems.  

To realize the advantages of population concentration requires a high “power density” of infrastructure. Lack of adequate infrastructure is a major cause of the monstrous economic inefficiency of “urban sprawls” in many countries. Besides the water, sanitation, power and communication grids which are today standard in modern cities, the most essential element is efficient urban public transport. The power density of urban mass transport systems can be measured in parameters such as passenger-kilometers per day per unit city area and reduction in average point-to-point transport times.
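The transport metric just mentioned can be illustrated with a toy calculation; all figures are hypothetical:

```python
# Transport "power density": passenger-km per day per km^2 of city area.
daily_passengers = 2_000_000    # assumed daily ridership of the network
avg_trip_km = 8.0               # assumed mean trip length
city_area_km2 = 600.0           # assumed built-up area served

passenger_km_per_day = daily_passengers * avg_trip_km
density = passenger_km_per_day / city_area_km2
print(f"{density:.0f} passenger-km per day per km^2")
```

Tracking how such a figure changes over time, together with average point-to-point transport times, gives a simple quantitative handle on the quality of a city's transport infrastructure.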

A major problem for many existing cities is the enormous difficulty and expense of installing new urban transport lines in already built-up areas. It is much more economical to create new cities with a dense network of above-ground and underground mass transport already “built in”. It is also much more economical to build new medium-sized cities connected by a high-speed transport grid, than to allow existing cities to grow without limit. 

One of the highest priorities for physical economical development world-wide is to launch a new era of city-building based on the most advanced infrastructure technologies. We shall discuss this further in Part II.

6.6   Shift in the structure of the labor force toward increasingly science-intensive activities

Economic development has always been associated with changes in the structure of employment in the direction of activities with more intellectual content, requiring a higher quality of education and an increasing use of the individual’s creative powers. The process of change in the structure of employment goes back to the earliest known historical times, beginning with the shift from predominantly hunting and gathering to more and more organized forms of agriculture. The rise of the great cities of antiquity went hand-in-hand with a growth of non-agricultural forms of employment, including in the areas of construction and tool-making, handicrafts, transport and commerce, administration and other services. This process continued up to the beginnings of the industrial era, with its dramatic shift toward large-scale manufacturing. (We shall discuss a problematic feature of the latter process -- the growth of alienation -- in Part II. One can argue, in fact, that the employment of large sections of the labor force in industrial mass production constituted a step backward in terms of the cognitive content of employment. This situation has fortunately been changing with the advent of automated production techniques, eliminating the mind-deadening effects of what became known as “Taylorism”.)

Today, sustained physical-economic development requires accelerating the rate of scientific-technological progress, increasing the proportion of real investment and employment in science, research and development, and in science-intensive sectors of the economy such as the development and production of equipment based on new physical principles.

Healthy economic development also requires an increasing share of employment in cultural and educational activities that are oriented to the development of the human mind.

We shall discuss these points in more depth in Part II, where we shall inquire into the future of industrial society and present our vision of a “knowledge generator economy”.

6.7  A new concept of living standards

In this context it is important to rethink the concept of “standard of living”. The commonplace conception defines living standards mainly in terms of income as measured by so-called purchasing power, and in terms of concrete items such as nutrition, quality of health care and housing, etc. The relationship between living standard and quality of employment is mainly seen as defined by wages and salaries: higher-quality jobs give more income. The principle of higher pay for higher skilled labor is not merely a consequence of competition on the labor market, however; it is also understood that work requiring a high degree of skill and responsibility can only be carried out by workers whose living standard is adequate. No passenger would like to travel in an airplane whose pilot is suffering from malnutrition!

Our emphasis on the cognitive, creative content of human activity implies a much closer relationship between living standard and employment, as well as a different concept of “living standard” itself. From our standpoint the criteria for higher living standards must encompass the following aspects:

1. Increasing longevity, not only in the sense of total life-span, but most importantly in terms of the span of time over which the individual is physically healthy and mentally fully capable. Extending this time span further and further is an essential aspect of physical-economic development. This is firstly because it reduces the cost of educating the individual relative to that individual’s contribution to the economy. Since education is an expense to the economy, increasing the economically active lifespan leads to a gain in overall productivity. More importantly, a longer active lifetime means more time for intellectual development and accumulation of life experience, and for the individual to pass on his or her wealth of knowledge and experience to the younger generation.

2. Reduction in weekly working hours and increase in the amount of free time which the individual can devote to relaxing and regenerating his or her mental powers, to physical exercise, to pursuing intellectual interests and cultural activities. In speaking of culture we have in mind the classical notion of culture as an instrument for elevating the mind and soul and nurturing the same kinds of creative mental powers that are the basis for scientific discoveries. Among other things, a high standard of living should include the mastery of a classical musical instrument, as well as singing, up to at least a semi-professional level.

3. Another important free time activity is the authoring of articles and books, participation in conferences and seminars etc. In Part II we shall present a perspective for the participation of virtually the entire population in scientific research, where a key role will be played by “hobby scientists”, i.e. people who engage in serious scientific work in their leisure time.

4. Increasing physical consumption connected with scientific and cultural activities, in the form of personal hobbies pursued outside of normal employment, but which contribute to the overall development of society. As mentioned above, this includes scientific research activities requiring various sorts of equipment (e.g. for a home laboratory). The demand for such equipment will become an increasingly important driver of economic growth.  

5. Increasing consumption for travel connected with gaining knowledge and enriching intellectual and cultural exchanges, in a manner that goes beyond mere “tourism”. Much travel will be devoted to participation in scientific activity including missions of exploration.

6. Improved quality of physical environment: cleaner, quieter cities with beautiful park areas, improved access to large areas of Nature for study and enjoyment.


Supplements to Part I

How do we know that scientific and technological progress can continue indefinitely?

This is a profound issue that would require an entire book to discuss in an adequate way. Here we merely sketch several arguments for the thesis that there are no limits to scientific discovery; that in a certain sense science is always in the “stone age” relative to what it will become in the future. The same will evidently also apply to technology, which is developed on the basis of scientific knowledge.

1.  The simplest argument is to point out that the opposite assertion -- that scientific and technological progress must come to an end – is impossible to prove and runs counter to the entire experience of mankind. Human knowledge has been growing throughout the course of history. Periodically this process takes the form of scientific revolutions, in which radically new insights and conceptions are gained which redefine a great part or even the entirety of existing scientific knowledge. In the ensuing phase the consequences are elaborated, including in the form of new technology. The historical documentation of this nonlinear mode of development invalidates the apparent limits to progress which arise from a “linear” extrapolation from the existing state of scientific knowledge.

2. At any given time, all empirical, experimental measurements that we can carry out are limited in accuracy on the smallest and largest scales of space and time. The idea that physicists already have a “theory of everything”, or will ever have one, cannot be proven empirically and is merely an arbitrary assertion.  A “theory of everything” assumes we know what “everything” is. More honest would be to call it a “theory of everything that we know about” -- which is practically circular because our technology for investigating and observing the Universe, as well as the way we interpret what we observe, depend on our momentary state of knowledge. As technology and productive power increase, the scope of our capability to gain empirical knowledge also grows. We not only can discover more of the Universe, but we discover more of its possibilities, more of its states and processes. Those discoveries provide the basis for technological developments which eventually provide the means for extending human observation even further in the direction of the “infinitely small” and “infinitely large”. The characteristic of our Universe is that “the closer you look, the more you see”.

3. Present-day theoretical physics is only a mathematical model, and one should never confuse a model with reality, even if it appears to explain many things and to be able to predict the outcome of many experiments with great accuracy. The Universe is not obliged to obey our mathematical formulas. No mathematical model can represent the reality of the Universe completely. No system of laws of physics can hold true for the real Universe exactly and completely, but always leaves out “nearly everything”.

4. Thanks to the work of Kurt Gödel we actually possess a proof of analogous statements in pure mathematics, for example in the domain of the positive whole numbers (1, 2, 3, …). Here the problem would be to deduce all properties of the whole numbers from a fixed set of axioms specifying their basic properties (in analogy with the laws of physics). Examples of such properties would be that every whole number can be expressed as a product of prime numbers, or that a given so-called Diophantine equation (such as x⁵ + y⁵ = z⁵) has or does not have a solution in whole numbers. The analog of a physical theory that could explain all observed phenomena would be a complete formal theory of the properties of the whole numbers. That would mean a system of axioms and an algorithm for generating, one by one, a list of all statements logically deducible from the axioms, such that every true statement about the whole numbers appears somewhere in the list. Gödel demonstrated that for every non-contradictory finite set of axioms about the whole numbers, one can find a true property of the whole numbers which cannot be deduced from the given axioms.
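One of the arithmetic facts cited above -- that every whole number greater than 1 can be expressed as a product of primes -- can be demonstrated concretely. The following is a minimal trial-division sketch, not an efficient algorithm:

```python
from math import prod

def prime_factors(n):
    """Return the prime factorization of n > 1 as a list (trial division)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is itself prime
    return factors

fs = prime_factors(360)
print(fs)                  # -> [2, 2, 2, 3, 3, 5]
assert prod(fs) == 360     # the factors multiply back to the original number
```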

We can take the natural numbers as representative of the objects of the physical Universe. Evidently there are aspects of the nature of whole numbers, and their interrelations, which cannot be fully captured by any fixed set of basic mathematical “laws”. Does this mean that the human mind faces an impassable barrier in trying to gain knowledge about the interrelations between whole numbers? Gödel pointed to how Edmund Husserl’s phenomenology could provide a method for generating ever more powerful sets of axioms. Husserl’s method is to progressively develop deeper insights into the essence, the intentional meaning we “read into” the whole numbers when we conceptualize them in our minds. These insights would lead to the formulation of new axioms, not logically deducible from the given axiom set, whose adjunction to that axiom set would yield a more complete set of axioms, and so on. In that way one could progressively account for all true statements about the whole numbers. The development of a new axiom, not deducible from the previous axiom set, corresponds to a scientific revolution -- to the discovery of a new physical principle increasing Man’s power in the Universe, as part of a developmental process which never ends. In fact, number theory can be regarded as a part of physics in the sense that the properties and interrelationships of the whole numbers can be expressed in terms of the long-term behavior of certain machines, the so-called Turing machines, which could in principle be built and operated as real physical entities.

5. In the section above on boundedness we discussed the capability of the human mind to conceptualize the bounded nature of any given way of thinking, and on this basis to break free of that specific boundedness via new insights which were not accessible before. We think this capability is closely related to the existence of free will. Assume I could know, on the basis of conceptualizing the relative boundedness of my thinking at a given point, what I was going to do on the next day. Then I could deliberately decide to do something different. My behavior on the next day would be unexplainable from “inside” the original bounded way of thinking. It would constitute a physical anomaly which could only be explained from the standpoint of the higher mode of thinking I have arrived at by breaking free of the previous one.

6. All of the above considerations embody insights made long ago by Gottfried Wilhelm Leibniz and Nicolaus of Cusa before him, the latter in works such as “Idiota de Mente” and “De Docta Ignorantia”. We cannot go more into this subject here, but merely quote a famous excerpt from Leibniz’s essay, “On the Ultimate Origination of Things” (1697):

“… We must also recognize a certain constant and unbounded progress of the Universe as a whole, so that it always proceeds to greater cultivation, just as a large part of our Earth is now cultivated, and more and more of it will become so. Certain things regress to their original wild state and others are destroyed and buried, but we should understand this in the same way as the afflictions that I discussed a little earlier: this destruction and burying leads to the achievement of something better, so that we make a profit from the loss, in a sense. You may object: ‘If this were so, the world should have become Paradise long ago!’ I have a quick answer to that.

“Many substances have already reached great perfection; but because of the infinite divisibility of the continuum, there are always parts asleep in the depths of things, yet to be roused and advanced to a greater and better condition, advanced to greater cultivation, in short. Thus, progress never comes to an end.”

The Universe itself is a process of development, and the physical-economic development of Man is a manifestation of the same universal tendency.
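The number-theoretic properties mentioned in point 4 can be illustrated by direct computation. The following sketch is our own illustration (not part of the mathematical argument above): it verifies the prime-factorization property for small numbers and searches, in vain, for small whole-number solutions of x⁵ + y⁵ = z⁵ — by Fermat's Last Theorem there are none at all.

```python
def prime_factors(n):
    """Return the prime factorization of n > 1 as a list of primes."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Every whole number > 1 is a product of primes:
for n in range(2, 200):
    product = 1
    for p in prime_factors(n):
        product *= p
    assert product == n

# The Diophantine equation x^5 + y^5 = z^5 has no solutions
# among small whole numbers:
solutions = [(x, y, z)
             for x in range(1, 30)
             for y in range(1, 30)
             for z in range(1, 30)
             if x**5 + y**5 == z**5]
print(solutions)  # → []
```

Of course, no finite computation can establish such properties for *all* whole numbers — which is precisely the gap between empirical verification and the axiomatic deduction Gödel's theorem addresses.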


What about Nature? How a rational environmentalism can contribute to economic development  

As we mentioned above, recent decades have seen the rise of environmentalist movements demanding an end to economic growth. Following the arguments of Thomas Malthus and the neo-Malthusian “Club of Rome” in favor of so-called “limits to growth”, these movements are generally hostile to scientific and technological progress; they see the increase of Man’s power over Nature through scientific and technological progress as a monstrous aberration. Man should return to a state of equilibrium with the environment, they argue, in which human beings would be only a particular species of animals among many others – talking animals.  These views are often accompanied by a quasi-religious worship of “untouched” Nature and even a hatred of human beings, as articulated by Club of Rome President Aurelio Peccei’s motto “the Earth has cancer, and the cancer is Man”.

From our standpoint, the susceptibility of a large section of the population to environmentalist ideology – including especially young people -- is a consequence of the profound sense of alienation in modern society. We shall discuss the problem of alienation and propose a solution in Part II. Unfortunately, rather than overcoming alienation, environmentalism aggravates it further by negating Man’s essential nature.

As we demonstrated above in detail, the thesis of “limits to growth” is fundamentally flawed. There are no absolute limits to growth, but only relative limits associated with any given state of scientific knowledge. But science and technology are constantly developing – as long as misguided environmentalists do not prevent it.

Apart from the horrific consequences of abandoning scientific and technological progress in a world of more than 7 billion people, the environmentalist ideology is full of misconceptions and internal inconsistencies.

A fundamental error is to portray human development as somehow “unnatural”, and to impose the notion of equilibrium upon a Nature that is constantly moving further away from equilibrium. Scientific evidence shows us that the biosphere has developed at an accelerating rate over time, and that Man – with all his capabilities and his increasing power in the Universe -- emerged as a product of that developmental process. If so, then nothing is more natural than human development! Why do the environmentalists propose to suddenly stop that process now? Would they be happier if evolution had stopped at the point when the biosphere was nothing but a sea of bacteria? If human beings bring water to a barren desert, transforming it into a flourishing green biotope, does that constitute a destruction of Nature?

Most curious also is the aesthetic attitude of environmentalists who, while claiming to love untouched Nature, support the destruction of the natural landscape by hundreds of thousands of gigantic wind turbines.

More thoughtful environmentalists are gradually coming to the realization that only nuclear energy can provide a realistic alternative to the combustion of gigantic amounts of coal, oil and gas in the world economy. This is encouraging. Whatever truth there may be in the alleged threat of a coming catastrophe due to “global warming”, the world’s continuing dependency on hydrocarbon fuels is a symptom of technological stagnation and a source of massive inefficiency and waste. In Part II we present the outlines of a “crash program” to virtually eliminate the combustion of hydrocarbon fuels in the world economy, using nuclear energy and other advanced technologies having orders of magnitude larger power densities. In contrast to wind, solar and other so-called alternative energy sources, this program would drastically increase the real physical productivity of the world economy.

We are convinced that a large part of the economic activities that can justifiably be regarded as harmful to the environment are the product of an economic system that is oriented to short-term financial profits rather than real physical economic development. Wholesale destruction of large regions of Nature is typically a symptom of an unhealthy, linear form of growth – an expansion of the mere scale of economic activity in the absence of real scientific and technological progress.  

On the other hand we think it is wrong to regard every intervention of Man in Nature as destructive per se, in the way suggested by environmentalist ideologies. We have reason to believe, for example, that farming  – where it is not subjected to extreme pressure to reduce costs and increase short-term profitability in the context of neoliberal economic policy -- is beneficial to the biosphere as a total system. In particular, it is likely that the increase in overall flows of matter and energy, brought about by agriculture, has a stabilizing effect on the biosphere.

We cannot pursue this issue further here, but merely point to the work of the great Russian naturalist Vladimir Vernadsky on Man’s positive role in the development of the biosphere. Vernadsky conceived of the emergence of Man as a new stage in the natural evolution of the biosphere as a whole. He called this new stage the Noosphere: the further development of the biosphere through the emergence of human reason.

In contrast to environmentalist forms of alienation, we believe that physical economic development is fully coherent with a profound love of Nature. In fact, the two are inseparably linked to each other.

Nature everywhere embodies the same principle of creativity that is expressed in human creative activity -- but with the fundamental difference, that the creativity manifested in Nature is unbounded, while human thought is always relatively bounded in the sense we discussed earlier.

It is only in the process of successively overcoming the relative boundedness of any particular state of human knowledge that Man gets a glimpse of the principle of “unbounded” creativity of which he himself is an expression. But for that, the contemplation of Nature provides Man with an indispensable inspiration and a guide. Man is one with Nature only when Man is genuinely creative.

In the experience of the beauty of Nature, we experience the creative principle inside us resonating with the unbounded principle of creativity embodied in the natural world. Man approaches this kind of beauty otherwise only in the greatest works of art and music, above all those rooted in a deep religiosity.

The tiniest living creature is a miracle embracing an inexhaustible wealth of inner structures and processes. The closer we look, the more we find.  By contrast, the technological creations of Man all bear the imprint of the boundedness of the knowledge by which they were created. But at the same time it is the development of science and technology that provides Man with the instruments with which to come to know and admire Nature ever more intimately, from the tiniest particles to entire galaxies. Witness the incredibly beautiful images of the Earth taken from orbit, or the astonishing videos of living cells, made with new generations of microscopes. By developing and exercising Man’s powers in all directions, physical economy brings Man into ever more intimate interaction with Nature.

Not accidentally, the development of science in previous centuries has centered to a large extent on the work of great naturalists such as Alexander von Humboldt. While the methods of modern physics tend to break up phenomena into small pieces, the work of the great naturalists reveals the entire symphony of the laws of the Universe all acting in concert. Hence the title of Alexander von Humboldt’s great work: “Cosmos”. The biosphere is a great laboratory of the creative process. It offers mankind an inexhaustible potential for discoveries, and refreshment for the mind. This applies especially to the study of collective “social” processes in the living and nonliving domains, which we think will stand in the center of science in the future. It is the key to our proposal, presented in Part II, for a “knowledge generator economy”.

For all these reasons Man has the strongest interest in preserving the richness and beauty of Nature.

How do we reconcile that with unlimited growth and development of the physical economy? The needless destruction of Nature is ugly, but – environmentalists should not forget – human poverty, ignorance and stagnation are also ugly.

We shall not attempt to give a complete answer here, but only identify some essential elements of a trajectory of physical economic development that would reserve the greatest part of the Earth as a beautiful garden, while at the same time permitting Man the full exercise of his creative powers.

The most essential thing, in physical economic terms, is a transition from extensive to intensive forms of growth and development. This means increasing the density of economic activity, as opposed to merely expanding its scale. As in the case of power density in technology, increasing the density of economic activity correlates with higher productivity in physical-economic terms. If the possibilities of science and technology are fully utilized, higher productivity will go hand in hand with improvement of the natural environment. Here are examples: 

1. New urbanization: In the future the largest part of the world population could live in beautiful new medium-sized cities, where a dense, highly-efficient automated urban transport network and other infrastructure are built in from the start. The new Korean “Eco-city” Songdo is exemplary in many respects, although mass transit does not yet completely replace automobiles. Of Songdo’s total area – 6 km2 – only about half is built up, the rest being taken up by parks and green areas. With a population of about 67 000, the average population density over the total area is a little more than 11 000 persons/km2. Assuming hypothetically that the entire world population were housed in such medium-sized new cities, the total area required would be less than 630 000 square kilometers – the equivalent of less than 0.5% of the land area of the Earth! The rest would be a “Garden of Eden”.

2.  There is no doubt that in the future, such cities could in principle become virtually self-sufficient in food, using controlled-environment “food towers” and various forms of bioreactors built up within the city area. A pilot program in Berlin has created “urban container farms” based on a symbiosis of hydroponic cultivation of vegetables together with aquaponic production of fish, and a high degree of recycling of water. (We also note that scientists succeeded in 2013 in growing meat in cell cultures! According to reports, the first “artificial hamburger” did not taste very good. But it can no doubt be improved in the future). Naturally, we are not proposing the elimination of traditional agriculture. We merely want to show that intensive forms of development will make it possible to greatly reduce the land area required to feed the population.

3.  Applied to transportation, the principle of intensive development demands a radical shift away from road transport (including personal automobiles and trucks) toward automated track-based or guided forms of transportation based exclusively on electricity. This is a great technological challenge, which we shall discuss further in Part II. It is well known that rail transport of freight and passengers is far more efficient than road transport, both in energy consumption and in terms of the land area required. Rail transport is also free of traffic jams. Nevertheless, up to now rail is economical only on lines with heavy traffic, and cannot provide direct access to every point in a territory in the same way as the road network. The use of individual vehicles – cars and trucks – able to reach their destinations independently permits far greater flexibility than rail. Hence, rail is today mainly used in combination with road transport, requiring transfer of passengers or reloading of freight. Up to now, eliminating the use of roads altogether would be feasible only for new cities with built-in high-density rail-based public transport networks. This would apply only to passenger transport, however; the city would still need roads for delivery of goods. Fortunately new options are now appearing, thanks to developments such as so-called automatic guided vehicles (AGVs), robotic vehicle systems including driverless cars, etc. Most promising is the “Intelligent Multimodal Transportation System” (IMTS) developed by Toyota, which has already transported 2 million passengers at a world exposition. The IMTS utilizes individual vehicles which can move independently to any location; but over heavily trafficked routes they organize themselves into “platoons”, forming a chain of vehicles moving at a uniform speed, like cars on a railroad train.
This permits high transport densities and speeds on these main routes, as in the case of urban rail transport, but with the advantage that vehicles can leave the “platoon” at any time, going over into a flexible independent mode of operation to reach any destination. The platoons are guided by magnetic markers embedded in the main-route roadway. With further development, the width of the roadways could be reduced to be comparable to that of a railway, while maintaining the flexibility of road transport. While the IMTS has so far been limited to passengers, there is no reason why it could not be adapted for freight also. In the future this kind of system could conceivably fulfill the entire transportation requirements of new cities and even entire regions, if it were built in from the start.

4. As we discussed at length above, nuclear energy has a power density orders of magnitude larger than any other existing power source. In contrast to the intensive form of development exemplified by nuclear energy, the so-called “alternative” energy technologies that have become popular in the recent period constitute a step in the opposite direction – requiring much larger areas and much larger amounts of material. Let us compare nuclear power with large wind turbine parks, which are the most economical of the alternative sources. A typical nuclear power plant generating 1800 megawatts of electrical power requires less than 5 km2 of area, counting auxiliary buildings, storage etc. To produce the same output requires 720 large wind turbines occupying an area of 438 km2! One could argue that the area between the wind turbines can still be used for agriculture, which is true; but houses and other human dwellings are excluded out to a “safety distance” of between 500 and 1500 meters from the nearest wind turbine (data from Germany). On the other hand, for the same nominal energy output as the nuclear power plant, the wind turbines require over 7 times the amount of steel and concrete. Here one must also take into account the fact that nuclear power plants generate power over 90% of the time (capacity factor > 90%), while wind turbines typically have capacity factors of about 33%, since the wind is not blowing all the time. It is not surprising that rational environmentalists are turning more and more to nuclear energy.
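The arithmetic behind examples 1 and 4 can be checked in a few lines. The following is our own back-of-envelope sketch using the figures quoted above; the world population of 7 billion and the Earth's land area of roughly 149 million km2 are round assumptions, not figures from the text:

```python
# Example 1: new urbanization on the Songdo model
songdo_area_km2 = 6.0
songdo_population = 67_000
density = songdo_population / songdo_area_km2        # ≈ 11,167 persons/km²

world_population = 7.0e9                             # assumption: ~7 billion
required_area_km2 = world_population / density       # ≈ 627,000 km²
earth_land_area_km2 = 149e6                          # assumption: round value
fraction = required_area_km2 / earth_land_area_km2   # ≈ 0.42%, under 0.5%

# Example 4: land-area intensity of nuclear vs. wind power
nuclear_mw, nuclear_km2, nuclear_cf = 1800, 5.0, 0.90
wind_km2, wind_cf = 438.0, 0.33      # 720 turbines, same 1800 MW nominal

nuclear_land_per_avg_mw = nuclear_km2 / (nuclear_mw * nuclear_cf)
wind_land_per_avg_mw = wind_km2 / (nuclear_mw * wind_cf)
land_ratio = wind_land_per_avg_mw / nuclear_land_per_avg_mw
# ≈ 240: per megawatt of *average* delivered power, the wind park
# occupies on the order of two hundred times more land area.
```

Note that comparing average rather than nominal output, via the capacity factors, makes the land-area contrast even sharper than the raw 5 km2 versus 438 km2 figures suggest.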


Part II  Physical-Economic Development: Pathways to the Future


Chapter 7    Background: the 1970s branching point   

7.1  Introduction

In history up to now, nations have followed trajectories of real physical-economic development only for limited periods, abandoning them later in a back-and-forth battle between opposing economic policies.

A case of great importance for today is that of the United States in the period from the end of World War II until today. This is not only because of the enormous power and influence of the United States, but above all because the “branching point” which occurred in the physical-economic trajectory of the United States around the beginning of the 1970s continues in many ways to characterize the alternatives for the economic policies of the world’s nations today.

At the 1970s branching-point the United States abandoned its previous, relatively successful pathway of physical-economic development, and went off in the opposite direction. This wrong turn had devastating effects not only for the United States but also for very many other nations which came under the grip of the aggressive neoliberal policies promoted by the U.S. in the ensuing period. We shall address some of these effects, including deep cultural effects, in the following section. 

The 1945-1970s trajectory of the U.S. had been very far from perfect -- among other things it remained nearly entirely in the “weakly nonlinear” mode we defined in Part I – but it nevertheless remains the best reference-point in recent history for sustained physical-economic development of a technologically advanced nation. This is especially the case if we include, in addition to the U.S., the “economic miracles” which occurred in West Germany, France, Japan and a number of other nations in the same period. Despite significant technological advances over the last 40 years, and the recent rise of China, one could correctly say that the world has still not recovered from the wrong turn taken by the U.S. from the early 1970s on. It is especially useful to consider what would have happened, and how the world would look today, if the U.S. had instead continued on its pre-1970s path of physical-economic development, and permitted other nations to do so. In many ways launching healthy physical-economic development around the world today can be thought of as resuming the upward trajectory that was abandoned at the 1970s “branching point”.

7.2  The fateful turn in U.S. economic policy

During World War II under the leadership of President Roosevelt the U.S. government adopted principles of physical economy to achieve a gigantic expansion of industrial production and rapid technological development (radar, atomic bomb, electronic computers etc.).  In the immediate postwar period, physical-economic principles guided the conversion to civilian production, large-scale investments in infrastructure and education and the establishment of a vast apparatus of government-supported scientific research and development. The advent of the Cold War catalyzed a further expansion of U.S. scientific and technological capabilities, with a huge buildup of high-technology industries, including the aerospace, nuclear and electronics sectors.

The 1960s boom in scientific education and research in the United States occurred on the background of the famous “Sputnik shock”: the 1957 launching by the Soviet Union of the first artificial satellite into orbit around the Earth. This was followed in 1961 by Man’s first journey into space, the orbital mission of Soviet cosmonaut Yuri Gagarin. The ensuing “space race” culminated in the successful landing of U.S. astronauts on the Moon in 1969.

The magnitude of the U.S. government-supported boom in science education in the 1950s and 1960s is reflected in the following excerpt from a 1989 report of the U.S. Office of Technology Assessment (OTA):

“Graduate enrollments and doctorate production in science and engineering rose rapidly during the 1960s  … Graduate enrollments in all fields more than doubled between 1958 and 1968 and Ph.D. awards tripled … This growth in education and research was launched by the Sputnik, fueled by the Apollo (Moon landing) program … During this ‘golden era’ of academic research and graduate spending, Federal R&D spending doubled (in constant dollars) … Until the Apollo program was scaled down … Federal support of academic R&D increased by about 29 percent a year (in constant dollars) … Ph.D. job market booms created by Federal and other research and education funding drove science and engineering graduate study…”

Quite apart from the competition with the Soviet Union, the first steps on the Moon embodied the spirit of optimism and inventiveness which has characterized the United States in its most successful periods of development. This optimism goes back to the very beginnings of the U.S. republic and the emergence of what became known as the “American System of National Economy” of Alexander Hamilton, Henry Carey and the German economist Friedrich List. The optimistic “American System” stood in direct opposition to the “British System” of Smith, Malthus and Ricardo with its pessimistic view of Man. In Part III we shall describe the original “American System” -- which is radically different from the neoliberal economic policies that are promoted worldwide by the United States today – and the influence of the “American System” on the 19th century industrial development of Germany as well as Russia, India, Japan, Brazil, Mexico and other countries.

It is important to emphasize that the original “American System of National Economy” is something very different from what became known in the 20th century as “American management methods”. The latter, otherwise known as “Scientific Management” or Taylorism, refers to methods of organization of mass production which were pioneered in the U.S. automobile industry around the beginning of the 20th  century. Although these methods led to an enormous increase in labor productivity in many branches of industry and to a certain kind of material affluence, they did so at the price of an extreme form of alienation of the workforce (and indirectly of society generally) which contradicts the cognitive purpose of physical economy. We shall return to this point later.

Coming back to the three decades following 1945, it is important to stress that this was also the period of the famous “economic miracles” in West Germany, France, Japan, South Korea and a number of developing countries, including Brazil. Although the political and institutional frameworks differed from case to case, the governments involved all shared a strong orientation to physical-economic principles.

The same applies to the Soviet Union. Operating on the basis of a completely different economic and political system from that of the Western countries, the USSR succeeded in maintaining high rates of physical-economic growth and development in the three decades following WWII. The average living standards, health and educational level of the population improved greatly. The Soviet Union built up a gigantic scientific-industrial capability. At the end of that period, despite the fact that those capabilities were not used very effectively, the Soviet Union could rival or even surpass the United States in many areas of science and technology, especially in military-related areas. Decisive for this success was a strong emphasis on scientific and technological progress, education and a systematic buildup of the productive base of the economy.

There is no doubt that the so-called Cold War played a decisive role in propelling the physical-economic development both of the U.S. and its rival, the Soviet Union. Unfortunately it has often been the striving for military power, national defense and preparations for war, rather than the benefits of physical-economic development per se, which have impelled governments of nations to invest large resources into science and technology. Although the striving for scientific and technological progress has characterized the United States from its very beginning as a nation, the history of the U.S. has been one of a constant struggle against internal and external forces oriented to “British system” policies. Initially the competition with the Soviet Union helped to hold those forces in check. Unfortunately that was soon to change.

7.3  A road to disaster

From the late 1960s and early 1970s on, the U.S. began to abandon its orientation to physical-economic development and to embrace the so-called neoliberal model: deregulation of the financial system, privatization and progressive elimination of the role of the state in steering the overall direction of the economy. More and more the economic future of the United States was blindly entrusted to the so-called “free markets” and the decisions of powerful financial groups. A turning-point was President Nixon’s unilateral termination of the Bretton Woods system of stable currency exchange rates in 1971, and of the linking of the value of the dollar to gold. The stability provided by the Bretton Woods system and a high degree of regulation of financial markets had been essential to the postwar economic successes not only of the United States, but of many nations around the world.

The growing “financialization” of the U.S. economy in the 1980s and 1990s went hand-in-hand with a paradigm shift away from an industrial orientation toward what became known as the “consumer society”, “postindustrial society” and “service economy”. The percentage of the workforce employed in manufacturing was cut in half, dropping from about 20% to about 10%. Today twice as many people are employed in finance, insurance and real estate as in the production of goods.

The U.S. radically cut back on its space effort and halted its program of manned exploration of the Moon. A significant part of the organizational and industrial infrastructure which had made the lunar landing possible was dismantled. The last manned mission to the Moon was more than 40 years ago.

The OTA report cited above goes on to describe the effect of the policy change on education:

“Social and political priorities shifted away from Cold-War-inspired science ... From 1968 to 1974, the number of Federal Government fellowships and traineeships (in all fields) plummeted 85 percent … science development programs were eliminated … Engineering and physical science were the most affected … National Aeronautics and Space Administration (NASA), Department of Defense (DoD) and Atomic Energy Commission (AEC) research funds dropped 45 percent in real terms.”

In the 30 years from 1970 to 2000 the process of pumping up the service sector and a giant bubble of consumer spending led to the creation of over 60 million jobs, including over 45 million in the areas of sales and marketing, finance and management, leisure and entertainment, and government bureaucracy. This process was financed, ultimately, by monetary expansion (i.e. creation of new money) by the U.S. Federal Reserve Bank, amplified by the private banking system. GDP growth – which we referred to in our Introduction as the “False God of the Economists” -- became little more than the result of feeding more and more newly-created money into consumption and superfluous service sector employment. Indeed, in the indicated 30-year period the share of manufacturing in total U.S. GDP dropped by 50% while the share of finance, insurance, real estate and leasing increased by 200%. From the standpoint of what is today called “economic growth” this policy was a great success: from 1970 to 2000 the GDP grew by more than 2½ times.

All of this reminds us of the famous proposal, commonly attributed to John Maynard Keynes, that the government (or the central bank) could create unlimited numbers of new jobs by simply printing money and paying people to dig holes and fill them up again. (In fact what Keynes actually said was somewhat different, but that is not important for the present discussion). It sounds like a joke, but in essence this is exactly how the “postindustrial economy” was created in the United States and other countries. Naturally, for this to work it is not enough to print money; one must also have the material resources to supply the personal consumption of the millions of people engaged in more or less superfluous forms of employment. In the case of the U.S. the resources of the more and more “downsized” productive sector were supplemented by a flood of imported goods, running up a gigantic trade deficit and a mountain of public and private debt.

Parallel with the shift in U.S. economic policy, the postwar “economic miracles” in Germany, France and Japan came to an end. Physical-economic development slowed and in many respects reversed. The world economy as a whole entered a more turbulent period, with frequent crises and ups and downs in the various nations.

7.4  International impact

Although the U.S. GDP continued to grow, the real physical economy of the United States weakened more and more, as reflected for example in a deteriorating and increasingly obsolete infrastructure, a growing dependency on imports and a dramatic growth in income disparities. Two events of the 2007-2008 period underlined the decline of the United States’ real economy: one was the collapse of an 8-lane highway bridge in the city of Minneapolis during rush hour on August 1, 2007, and the other was the bankruptcy of Lehman Brothers a year later, triggering a global financial crisis whose effects continue today.

In the meantime the power of the United States and U.S.-dominated international institutions, especially the IMF, was utilized to impose radical neo-liberal policies on countries around the world. Among the first victims were the developing nations of Iberoamerica and Africa, many of which had achieved significant economic progress in the quarter century after World War II. In the period beginning with the mid-1970s the economies of these nations were ruined by a huge accumulation of debt and by savage austerity policies imposed by the U.S. and the IMF to enforce debt repayment to Western banks.

Explained very briefly, Nixon’s decoupling of the dollar from gold and the end of the fixed currency rates system signaled the beginning of a spectacular fall in the value of the dollar relative to Japanese and major European currencies. This, together with political events in the Near East, induced oil-producing nations to drastically increase the price of oil, which was traded exclusively in dollars. The dollar price of oil grew by nearly 10 times in the course of the 1970s. Huge amounts of “petrodollars” that could not be immediately spent found their way into Western commercial banks, which in turn “recycled” these dollars in the form of lavish and indiscriminate lending to developing countries. Instead of going into long-term productive investment, much of this money was used by the debtor countries to compensate for the higher oil prices and for other short-term uses, with little or no positive effect on their own development. In addition, most of these dollar loans had short maturities and variable interest rates. Due to the flood of petrodollar liquidity the interest rates were initially low, but this suddenly changed.

In 1981 U.S. Federal Reserve Chairman Paul Volcker raised U.S. interest rates to over 20%, compared with 11% in 1979. This action had a devastating effect not only on the debtor nations, but also on the physical economy of the United States. The “Volcker shock” unleashed a savage process of “deindustrialization” in the U.S. itself, especially affecting the heavy industry, construction and automobile sectors, and causing mass unemployment. Formerly thriving industrial regions collapsed, turning into what has since become known as the “rust belt”. The high interest rates led to a shift of credit flows away from productive sectors into an artificially inflated finance and service sector. The major winners were the Wall Street-centered financial interests, whose political influence became ever more dominant as traditional industrial interests were increasingly marginalized.

The highly indebted developing countries now found themselves confronted with a huge increase in interest rates and a simultaneous decline in prices of the primary commodities that made up a large portion of their exports. The debt became unpayable. Already in 1982 Mexico announced that it could no longer service its debt, unleashing the first of a long series of Latin American debt crises. Instead of helping Mexico and other developing nations to build up the productive base of their economies, as the United States had done in the postwar Marshall Plan reconstruction of Europe, U.S. policy was now focused exclusively on enforcing repayment of debt to the Wall Street banks. The economies of Mexico and other debtor countries were placed under the de facto financial dictatorship of the IMF and subjected to crushing austerity measures, from which they have not fully recovered even today.

This is not the place to examine the ensuing, unending series of crises which have plagued the nations of both the so-called developing sector and the so-called advanced sector during the last 30 years. We should only note that although the recent process of globalization of the world economy has provided increased income to many exporting nations, this has come at the price of a growing loss of economic sovereignty – a weakening of the ability of nations to manage their own economies. As a result, the fate of nations is more and more determined by gigantic capital flows on the deregulated global financial markets, making them virtual colonies of a U.S.-centered “neoliberal empire”.

Lessons can be learned from the fact that China, which has insisted on retaining a large degree of economic sovereignty, was one of the nations that suffered the least amount of damage from the financial collapse of 2008-2009. The spectacular physical-economic development of China over the last 25 years has changed the global situation, creating new prospects for overcoming the nightmares of the post-1970 period.  

It would be beyond the scope of this book to fully analyze the reasons why the United States and other leading industrial nations abandoned policies for physical-economic development in favor of the neoliberal model. Certainly, in order to continue real physical-economic development it would have been necessary to correct serious flaws in the prevailing mode of industrial growth, particularly beginning in the late 1950s. This mode of growth was far too one-sidedly centered on the expansion of the automobile sector – a linear mode of growth in the sense we described in Part I. Not accidentally, the growth of the U.S. and other major industrial economies became dependent on a single commodity – oil – setting the stage for the famous oil crises of the 1970s. But instead of correcting this flaw and mobilizing the technological revolutions embodied in the space program, the development of nuclear energy etc., policymakers chose a trajectory of transition to a “post-industrial society” or “service economy” combined with deregulation of the financial system.   

7.5  Cultural effects

In light of what we said above about the cognitive significance of economic activity, it is important to point out that the United States’ abandonment of physical-economic development went together with a profound cultural change.

The characteristic optimism of the postwar period was replaced more and more by cultural pessimism in various forms. One was a return of Thomas Malthus in the guise of radical environmentalism. Malthusian ideas were injected into a social environment of alienation and revolt of the younger generation against the “automobile society” and its culture of mass consumption. Exemplary is the publicity campaign surrounding the 1968 book “The Population Bomb” by Paul Ehrlich and the famous 1972 “Limits to Growth” report by the Club of Rome, which we already discussed in Part I. The image of Man as a creative being, of the human being as a producer, inventor and discoverer, was replaced by the image of Man as a destroyer of Nature.

Another expression of the cultural pessimism connected with the advent of neoliberal policy was the rapid spread of the hedonistic so-called “youth counterculture”. Its roots went back to the 1960s rise of the Hippie movement, the sexual liberation movement, the widespread use of marijuana and LSD as “recreational drugs” and related forms of music. The traditional political left, which had sought its base among the industrial workforce and was generally favorable to science and technology, was supplanted by a “New Left” whose constituency was mainly college students. While emerging originally from protest movements against the Vietnam War and social injustices, the New Left became more and more inseparable from the counterculture and the radical environmentalist movement with its hostility to industrial society and scientific rationalism. Ironically, New Left demands for “liberation” from allegedly authoritarian state institutions fit very well with the neoliberal culture of radical privatization and deregulation of financial markets.

All of this added up to a profound change in the self-conception – the sense of identity – of people in society. While at first confined to a relative minority, this change came to affect Western society as a whole, and has been closely related to the shift in the dominant form of economic activity. Formerly, most working people were engaged either directly in productive activity or in services necessary to an industrial economy.  Today, the lives of working-age people are increasingly dominated by forms of employment which are more or less devoid of meaning -- such as sales and marketing jobs derived from the consumer-goods bubble -- and by “hidden unemployment” in various forms. In such situations people lose the anchoring in reality which is connected with productive employment. Those still engaged in productive activity are often faced with extreme pressure as a result of cost-cutting practices of firms striving to maximize shareholder profits and to survive in a brutal regime of global competition. A growing percentage of the younger generation suffers from economic insecurity and a lack of orientation concerning their future.

The fact that the bulk of the population of the United States and other nations has remained largely passive in the face of such conditions is no doubt due to the influence of the electronic media, spectator sports and other forms of mass entertainment, the drug and disco “junk culture”, video games etc., all of which function in effect as “opium for the masses”.

On the other hand, the powerful base of applied scientific research and high-technology industry that had been built up in the U.S. and other industrial nations did not completely cease to exist. Although much weaker than it would have been without the 1970s neoliberal turn in economic policy, it continued to generate significant developments in fields such as biotechnology, microchip technology, high-speed computing and the internet, automation, sensors, new materials etc. These technologies have helped maintain a certain growth of real productivity in the world economy over the last 30 years. In practically every case, however, they are outgrowths of breakthroughs made in the pre-1980 period.

Meanwhile we witness a growing split in the workforce. On the one side we have the high-tech sector with an elite minority of well-paid, highly trained specialists working in modern laboratories and production facilities, while on the other an increasingly lumpenized majority of the population lives and works in an environment of rawness, stagnation and intellectual backwardness, without much perspective for the future.

Unfortunately, the trend toward ever more extreme forms of cultural decadence, infantilism, irrationality and ugliness has spread to a considerable part of the world’s population, especially the youth. For millions of youths in regions and social strata with high unemployment rates, the combination of boredom and rage, a weak sense of identity and self-worth, longing for excitement and camaraderie together with the breakdown of moral restraints, has made participation in violent gangs and militias increasingly attractive.

In our view this situation cannot last for long. We can already see many signs that the pendulum is swinging in another direction. Today it is clear that the “neoliberal empire” is weakening and is greatly overextended in financial and strategic terms. Belief in the neoliberal paradise is disappearing everywhere. We believe that the global financial crisis together with the spectacular rise of China, with its strong orientation to physical economy, its gigantic industrial and infrastructural buildup and rapidly growing technological capabilities, marks the end of the post-1970s era and the beginning of a new era.

It is quite possible that the rising economic, technological and military power of China, India, Brazil and a number of other nations, the new role of Russia etc. will induce the United States and other Western countries to abandon their radical neoliberal policies. It is also possible that the world will first go through a terrible period of conflict and chaos before sanity returns. Sooner or later, we are convinced, the principles of physical economy will become the dominant paradigm.

Chapter 8   Relaunching full-scale development   

8.1  Introduction

There are many signs today of a popular revolt against neoliberal policies and a longing for real development in nations around the world, including the U.S. itself. Lacking, however, is a clear notion of what the goals and criteria of economic policy should be. Defining the necessary goals and criteria is exactly the purpose of this book.  

What would the pathways of development look like, if nations today were to adopt economic policies in accord with the principles presented in Part I? In the following sketch we address the situation of nations which have been strongly affected by the general crisis described above, i.e. that of most nations in the world today. We focus here first on the priority targets of an adequate economic policy. Among other things we shall describe the use of productive credit generation, large-scale state investment and the priority channeling of credit into selected sectors, as indispensable instruments for realizing those targets. We shall examine historical examples in Part III.

The essential precondition for success is that economic policy be based on an analysis of the economy as a physical organism developing under the impact of scientific and technological progress. The monetary and financial process must be strictly subordinated to the real, physical economy, and not the other way around. 

The first priority of economic policy is to mobilize the greatest resource of any nation – its population -- by rapidly expanding employment in useful, technologically-progressive forms of economic activity. We insist that it is both possible and necessary to raise employment levels rapidly to levels approaching full employment, and thereby to alleviate the sense of economic insecurity that afflicts a large part of the population today. The longer-term priority is to put the economy on a pathway to a mode of economic development which we shall call the Knowledge Generator Economy.

The indicated full-employment goal will immediately raise two big questions in the minds of decision-makers as well as the general population today: (1) where will the jobs come from? and (2) where will the money come from? We must evidently answer those questions, at least in essentials, before proceeding further. 

8.2  Where will the jobs come from?

A useful starting point for answering this question is to imagine what would have happened if the United States had continued its pre-1970 trajectory of physical-economic development, rather than abandoning it.  We shall briefly indicate what it would mean in terms of the creation of new jobs, for the United States and other so-called advanced nations to return to the pathway of physical-economic development that was renounced by the U.S. in the early 1970s. Afterwards we shall turn to the situation of developing nations today.

The positive alternative to the 1970s “wrong turn” in U.S. economic policy would have been to continue and improve upon the pre-1970 upward trajectory of physical-economic development, creating tens of millions of new, highly-qualified jobs by (i) rapidly expanding investment and employment in scientific research, advanced R&D and engineering, and (ii) upgrading the nation’s entire industrial and infrastructural base using the most advanced technologies available. This process would have been driven forward by large-scale state-financed high-technology projects, including a vastly expanded space program and rapid development of nuclear energy in various innovative forms. A precondition would have been to mobilize all available educational resources to upgrade the training of the workforce and prepare a new generation of young people for the indicated types of activities (see below).

These goals are as valid today as they were in the 1970s. They indicate the general direction for job creation under the assumption that the United States and other advanced sector nations return to the pathway of physical-economic development which was abandoned at that time. One must naturally take into account the technological advances which have occurred since the 1970s. We shall elaborate detailed scenarios below in the sections on visionary projects and discuss how they lay the groundwork for a new stage of physical-economic development, which we call the Knowledge Generator Economy, in the decades ahead. 

So far we have focused on the perspective for advanced sector nations. What about nations that still lack a highly-developed productive base?

In practically every historical case, the creation of large numbers of new jobs and the economic “payback” necessary to sustain them have been driven by large-scale state investment into physical infrastructure: water systems, energy generation and distribution, transportation and communications networks, using the most advanced technologies available. As we stressed in Part I, infrastructure is the basis for all economic activity, and the combination of technological improvements together with large-scale infrastructure development is the single most effective way to increase the physical productivity of an economy.

Large-scale investment into physical infrastructure development creates a corresponding demand for equipment, components, machinery and other investment goods, providing the conditions for expanding a nation’s industrial base and generating more jobs. In this context top priority should be given to expanding and upgrading the machine-tool sector, which is the heart of a nation’s industrial capability. Establishing a network of infrastructure corridors combining transport, energy and communications provides optimal conditions for the emergence of “pearl chains” of urban-industrial centers whose efficiency is enhanced by the high density of productive activity. Continuously injecting new technologies into the investment cycle accelerates the growth of real physical productivity, paying back the investment in infrastructure many times over.

It should be recalled, in this context, that state investment into large-scale infrastructure development was the centerpiece of the “New Deal” policy that created an estimated 15 million jobs in the United States between 1933 and 1939. Although the economy initially remained sluggish, the New Deal program of rural electrification and its 34,000 projects for construction of roads, bridges, water and sewage systems and dams helped lay the basis for the ensuing, explosive growth of U.S. industrial output during World War II and – after conversion to civilian production – in the postwar period.

To obtain the multiplier effect of physical infrastructure investment on employment and economic growth it is necessary to ensure that the largest portion of the contracts go to domestic producers, even if this means higher costs in monetary terms; and that the bulk of imports consists of products which cannot be supplied in an adequate fashion from domestic sources. Naturally all of this requires a large degree of state intervention into the economy, as well as appropriate tariff policies, both of which were commonplace all over the world up into the 1970s. A certain degree of liberalization can occur once a solid all-round productive base has been established, but only to the extent this has been shown to be beneficial in real physical-economic terms.

8.3  Where will the money come from?

Leaders in most countries today would respond to what we just outlined by saying: “That all sounds good. But where will the money come from?” The budgets of most governments today are strained to the utmost, with high deficits and piles of state debt.

Beyond the special case of financing large projects, we have the general question: How is it possible to finance physical-economic development? To do so requires extending large amounts of long-term, low interest credit to private enterprises in the productive sectors of the economy, permitting them to innovate, expand and invest in new technology. 

The basic answer for both cases is simple: by creating new credit and directing the flow of newly-created credit into the relevant sectors of the economy. There is more than one way to do this in practice. It would go beyond the scope of this book to deal with the details of financial and monetary policy, so we shall merely indicate the simplest approaches as a way to illustrate the basic principle involved. In practice nations often employ a mixture of various methods.

The foundation of the solutions is the power of every sovereign nation – if the government has the political will to use it! – to take control of its central bank and the issuance of currency.

The simplest option is for the state to simply “print” money and use it directly to fund priority projects. Here “printing” need not mean issuing physical currency, but can be accomplished in effect by a suitably authorized national bank or government-controlled central bank, by simply adding sums to the accounts of agencies in charge of the project. The project agencies then disburse the funds, as required, to relevant contractors and companies.

This method has the advantage that the state incurs no debt and no deficits. It also has an obvious limitation, however: except for the case where there is an oversupply of physical and human resources for the projects, the use of newly-created money in this way tends to cause inflationary price increases.

It is possible, however, to avoid inflationary effects or keep them at an acceptable level. The method is essentially to expand production in the economy in a coordinated way, taking into account its input-output relations, in such a way that no major inflation-generating shortages occur. This assumes that the state (and other relevant institutions) has sufficient influence over the overall direction of investment flows in the economy.
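The input-output method mentioned here (introduced in Part I) can be given a simple numerical illustration. The sketch below uses a purely hypothetical three-sector economy with invented coefficients and capacity figures; it shows only the mechanism by which planners can check whether a planned level of final demand implies output requirements exceeding existing capacities, i.e. where inflation-generating shortages would arise.

```python
import numpy as np

# Hypothetical 3-sector economy (agriculture, industry, services).
# A[i][j] = units of sector i's output consumed per unit of sector j's output.
A = np.array([
    [0.10, 0.20, 0.05],
    [0.30, 0.25, 0.10],
    [0.10, 0.15, 0.05],
])

# Planned final demand (consumption plus investment), in arbitrary units.
final_demand = np.array([100.0, 200.0, 150.0])

# Leontief relation: gross output x must satisfy x = A x + d,
# hence x = (I - A)^(-1) d.
x = np.linalg.solve(np.eye(3) - A, final_demand)

# Compare required outputs with assumed current capacity limits;
# any shortfall flags a bottleneck to be removed before demand is expanded.
capacity = np.array([260.0, 330.0, 230.0])
shortfall = np.maximum(x - capacity, 0.0)
for name, out, gap in zip(["agriculture", "industry", "services"], x, shortfall):
    print(f"{name}: required output {out:.1f}, shortfall {gap:.1f}")
```

In this toy setting, coordinated expansion means raising capacities along with demand so that the shortfall vector stays at zero.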

The classical example of the method of currency emission is U.S. President Lincoln’s issuance of the famous “Greenbacks” in 1861-65 as a supplementary currency to finance the war effort of the Union states in the U.S. Civil War. The government paid soldiers and suppliers to the war effort with newly-printed dollar notes. Simultaneously with the issue of Greenbacks to finance the war, the Lincoln administration used a variety of tools to promote development of the steel industry, the transcontinental railroad system, the farm machinery sector and other areas of the physical economy. The enormous burden of war production did lead to inflation, but it remained within manageable limits.

What is the alternative to direct emission? By far the most powerful approach is to exploit the money- and credit-generating functions of the banking system.

It should be well known that nearly all money in a modern economy is created by banks when they make loans. This occurs via the “credit multiplier effect”: money lent out by a bank to a firm or private customer shows up as deposits in the same bank or other banks, providing the basis in turn for issuing new loans, and so on. The total number and volume of loans generated from the original loan is limited only by the reserve requirements set by the central bank, by the demand for credit and the willingness of banks to lend at a given interest rate. Today practically the whole process is carried out electronically, without the use of currency, by adding and subtracting numbers in computerized accounts. (In modern economies only a tiny percentage of financial transactions is carried out in physical currency.)
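The credit multiplier effect just described can be made concrete with a small calculation. The sketch below models only the textbook reserve-requirement mechanism, with illustrative figures (an initial deposit of 1,000 units and a 10% reserve ratio); it ignores the demand-side and interest-rate limits mentioned above.

```python
# Minimal sketch of deposit creation under a reserve requirement.
# All figures are illustrative, not empirical.

def credit_expansion(initial_deposit: float, reserve_ratio: float, rounds: int) -> float:
    """Total deposits created after a number of lending rounds:
    each bank keeps reserve_ratio of a deposit as reserves and lends
    out the rest, which returns to the banking system as a new deposit."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1.0 - reserve_ratio)  # lendable portion becomes the next deposit
    return total

initial = 1_000.0   # newly created central-bank money entering as a deposit
ratio = 0.10        # 10% reserve requirement

# After many rounds the total approaches the geometric-series limit
# initial / reserve_ratio -- here, a tenfold expansion of the original sum.
print(credit_expansion(initial, ratio, rounds=1000))
print(initial / ratio)  # theoretical maximum
```

The geometric series shows why a small change in the reserve requirement set by the central bank has a large effect on the total volume of credit the banking system can generate.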

Central banks have a variety of instruments for regulating the credit-generating activity of the banking system and for increasing its volume as required. The latter can be accomplished, for example, by purchasing bonds or other financial assets with newly-created money. Again, this occurs without currency, by simply changing the numbers in the relevant accounts.

So far there is nothing new in what we have said – the process just described is going on every day around the world, even though it seems like magic to most people. There is nothing easier than to create money and credit in arbitrary amounts! The question is: Where does the money go? Who receives the loans?

In the U.S. and other OECD countries this is to a large extent determined by the “invisible hand” of financial markets and the lending policies of large banks. Today the vast majority of loans go to the financial, real estate and service sectors, and only a very small portion to industry and other productive uses. Since the 1970s the huge growth in money supply and nonproductive credit in the U.S. and many other countries has led again and again to gigantic speculative bubbles and subsequent crashes. The global financial crisis which began in 2007-2008 was caused in exactly this way.    

Exploiting the money- and credit-generating functions of the banking system in order to finance real development requires powerful means for channeling the flow of credit preferentially into areas chosen according to the criteria of physical economy.  The preferential channeling of credit in accordance with government policies is generally known as “directed credit” or “guided credit”.

The most extreme case would be to nationalize the whole banking system and to allocate credit directly, e.g. by ministerial decree. This method leads to poor results, however, for the same reasons that command economies generally do: over-bureaucratization, excessive rigidity, favoritism, stifling of individual initiative and independent decision-making, lack of the self-regulating function of markets on the micro level, etc.

Spectacularly successful, in contrast, is the so-called “window guidance” system utilized in Japan in the postwar decades. Instead of allocating credit directly, the Bank of Japan (the central bank) set quotas and target areas for the lending of the private banks. The window guidance process had a somewhat informal character, operating via a close relationship between the BOJ and the banks. As the case of Japan demonstrated, it is feasible on this basis to build up the physical economy rapidly and achieve nearly 100% employment of the workforce on a sustained basis without generating high inflation. We shall discuss further aspects of the Japanese experience in Part III. China studied the Japanese example carefully.

Another important means of utilizing directed credit is to establish large state development banks which support physical-economic development by financing large-scale projects and building up key sectors of the economy, influencing the lending of private banks in indirect ways, e.g. by generating loan demand from industrial suppliers to projects.

Exemplary of this method is the key role of the Kreditanstalt für Wiederaufbau (KfW, Reconstruction Credit Institute) in West Germany’s postwar “economic miracle”. The KfW was established in 1948 as a state-owned bank to finance the reconstruction of the German economy. The KfW focused initially on infrastructure, industry and housing construction, and later on the promotion of exports and the development of the sector of small and medium-sized manufacturing enterprises, which form the main core of Germany’s economy.

An important recent example is the Chinese government’s 1994 creation of three state-owned “policy banks”: the China Development Bank (CDB), the Agricultural Development Bank of China and the Export-Import Bank of China. These banks operate under the direct control of the State Council, the highest executive organ of the Chinese government, as instruments of economic policy. The largest of these, the China Development Bank, has played a decisive role in the spectacular growth of China’s physical economy over the last 20 years. The CDB is currently the largest development bank in the world. While it is most famous for financing “mega projects” such as the Three Gorges Dam (the world’s largest single generator of electric power) and China’s high speed railway network, the lending activity of the CDB is much broader, embodying a strategy of systematically building up the whole productive base of China’s economy. CDB loans have gone mainly into the areas of hydropower and nuclear power, road and railway construction, mining, ports, telecommunications, urban infrastructure, so-called “pillar industries” (automobile, construction, mechanical, electrical and petrochemical) and new high-tech industries of strategic importance.

As in the case of the KfW, bonds of the China Development Bank are guaranteed by the state. The CDB can thus raise money at very low cost on the capital markets, benefitting from the growth of the money supply. 

A noteworthy common feature of the postwar “economic miracles” in Germany, France and Japan, as well as the recent development of China, is that equity financing (i.e. financing via the sale of company shares) played a relatively minor role compared to bank credit. Although it has a role to play, equity financing has the disadvantage that the interests of shareholders -- especially for large firms and portfolio stock investors -- lie mainly in extracting profit from a company rather than reinvesting it in the company’s further development.

8.4  Education and training

As part of the policy for full employment sketched above, it is essential to provide the education and training (including retraining) needed to prepare the workforce for science- and technology-intensive forms of employment.  

A useful historical example is the “G.I. Bill” enacted in the United States a year before the end of World War II. Among other things the G.I. Bill provided for the U.S. government to pay tuition and a living allowance for millions of young veterans, returning from the war, to study at universities and colleges. The G.I. Bill unleashed a revolution in U.S. education, creating a large corps of highly-qualified scientists, engineers, technicians and other professionals that has been key to U.S. preeminence in science and technology in the postwar period. Under the G.I. Bill and other government measures, the numbers of students and teachers at U.S. colleges and universities expanded at an explosive rate. Hundreds of new colleges and universities were created. The massive U.S. government support for scientific education led to a 400% increase in the number of PhDs in physical science between 1950 and 1970. Similar explosive growth occurred in the area of training of technical manpower.

In addition to the education and training for new entrants to the workforce, ample opportunities were provided for re-training, permitting millions of already-employed persons to move into higher-qualified jobs. 

This and other examples demonstrate that it is possible, through suitable policies, to very rapidly expand the scale and quality of preparation of the workforce for science- and technology-intensive forms of employment. Education and training resources are required, but -- as exemplified by the rapid development of the U.S. in the postwar period – the expenditures for education are repaid many times over by the huge gain in productivity resulting from the increase in qualification and skill of the workforce.    

8.5  Informing the public

For political and other reasons it is essential to explain the most important features of the policies outlined above to the general public, including most importantly:

  • the difference between nonproductive investment and investment which increases the real productivity of the economy, and why it is possible to utilize newly created credit for productive investment without generating high inflation. Although the heyday of neoliberal policies has been accompanied by a gigantic monetary expansion in support of the financial sector, propagandists for neoliberal policies constantly play on the popular notion that “printing money creates inflation”. 
  • the multiplier effect of investment into well-designed projects for modern infrastructure, science and technology. Creating a broad public understanding of the multiplier effect is essential in order to overcome the common idea that projects such as a manned mission to Mars are “too expensive” and would place additional burdens on the average citizen. We shall discuss the example of the U.S. moon landing program below, including estimates of the multiplier effect from that project.
  • more generally, the public should become familiar with the basic principles of physical economy. This is eminently feasible, as the topic is interesting, relevant to everybody and not difficult to master.

8.6  Cultural paradigm shift

As mentioned above, the 1970s “wrong turn” in economic policy is connected with very negative cultural effects, especially for the younger generations. These negative effects in the direction of “cultural pessimism” aggravated the already existing problem of alienation in modern society. We shall discuss alienation and strategies for overcoming it in chapters 11 and 12 below.  Part of the answer is to capture the imagination of the population with visionary projects that generate large numbers of attractive new jobs. For the younger generation, the object is to create the maximum possible contrast between the joy of discovery, of adventure in exploring the Universe, of learning new things, on the one side, and the “junk culture” of rap music and discos, video games, drugs etc., on the other. 

8.7  Examples of the multiplier effect: the Apollo program and the invention of the transistor  

The Apollo program cost over $100 billion (2010 dollars). The net result of this gigantic investment was to have a total of 12 people walk and drive around on the Moon and bring back 380 kg of rocks and soil from the Moon’s surface for the purpose of scientific investigation. The astronauts also took many beautiful pictures. None of these things had any direct tangible benefit to the economy. To many people, including some economists, the Apollo program was “throwing money at the Moon”—a colossal waste of resources which at best could only be justified by the prestige gained by the United States in the Cold War competition with the Soviet Union. This was refuted, however, by a 1976 study by Chase Econometrics Associates on “The Economic Impact of NASA R&D Spending”. This study concluded that for each $1 spent on the Apollo moon landing program, approximately $7 were gained by the economy through increases in industrial productivity, generated from Apollo program-linked R&D activities and the technological upgrading of industrial suppliers to the program. In this sense the Apollo program not only paid for itself, but generated a very sizeable profit in macroeconomic terms. The precondition for such a multiplier effect – fulfilled to a large extent by the United States at the time of the Apollo program -- was a high rate of investment in the high-technology industrial sectors of the economy.
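The arithmetic behind this payback claim can be sketched in a few lines, using only the figures cited above (the roughly $100 billion program cost and the $7-per-$1 ratio from the Chase study); this is an illustration, not a formal model:

```python
# Sketch of the macroeconomic "payback" arithmetic for the Apollo program,
# using the figures cited in the text (not a formal economic model).

APOLLO_COST = 100e9   # total program cost in 2010 dollars (figure from the text)
MULTIPLIER = 7.0      # ~$7 gained per $1 of NASA R&D spending (Chase study figure)

gross_gain = APOLLO_COST * MULTIPLIER   # productivity gains to the economy
net_gain = gross_gain - APOLLO_COST     # gain remaining after the investment itself

print(f"Gross gain: ${gross_gain / 1e9:.0f} billion")           # $700 billion
print(f"Net macroeconomic gain: ${net_gain / 1e9:.0f} billion") # $600 billion
```

On these figures the program more than repays its own cost, which is the sense in which it generated a “sizeable profit” in macroeconomic terms.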

The Chase Econometrics study was based on a 185-sector inter-industry input-output model. Although the study focuses on supply and demand relationships and is not strictly physical-economic in methodology, the main line of argument reflects a strong physical-economic outlook. Here are some key quotes: 

 “We have already shown … the magnitude of increase which will occur in the productive capacity of the economy for an increase in NASA R&D spending. However, there is no automatic increase in demand which will occur just because the total supply is now higher, and until this newly created capacity is utilized through higher demand no social benefits are realized.

“There is an economic mechanism through which additional supply does create its own demand. Greater R&D spending leads to an increase in productivity, primarily in the manufacturing sector. As a result of this increase, less labor is needed per unit of output. This in turn lowers unit labor costs, which leads to lower prices. Yet this increase is not immediately transferred into higher output and employment. As prices are lowered (or grow at a less rapid rate), real disposable income of consumers increases at a faster rate. Consumers can then purchase a larger market basket of goods and services, which in turn are now available because the production possibility frontier has moved forward …

“As greater productivity is translated into higher demand, we find that the economy can produce more goods and services with the same amount of labor. This has two beneficial effects. First, the unit labor costs decline, hence lowering prices. Second, lower prices enable consumers to purchase more goods and services with their income, hence leading to further increases in output and employment …

“NASA R&D spending increases the rate of technological change and reduces the rate of inflation for two reasons. First, in the short run, it redistributes demand in the direction of the high-technology industries, thus improving aggregate productivity in the economy. … 

“Second, in the long run, it expands the production possibility frontier of the economy by increasing the rate of technological progress. This improves labor productivity further, which results in lower unit labor costs and hence lower prices. A slower rate of inflation leads in turn to a more rapid rise in real disposable income permitting consumers to purchase additional goods and services being produced and generating greater employment.”
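The input-output framework behind such a study can be illustrated in miniature. In the Leontief model, the gross outputs x of the economy must satisfy x = Ax + d, where A is the matrix of inter-industry input coefficients and d is final demand. The following two-sector sketch uses invented coefficients purely for illustration (the actual Chase model had 185 sectors):

```python
# Two-sector Leontief input-output calculation (illustrative numbers only,
# not taken from the 185-sector Chase Econometrics model).
# Gross outputs x must satisfy x = A x + d, i.e. (I - A) x = d.

# A[i][j] = input from sector i required per unit of output of sector j
A = [[0.2, 0.3],
     [0.1, 0.4]]
d = [100.0, 50.0]  # final demand on each sector's output

# Solve (I - A) x = d directly for the 2x2 case
a, b = 1 - A[0][0], -A[0][1]
c, e = -A[1][0], 1 - A[1][1]
det = a * e - b * c
x1 = (e * d[0] - b * d[1]) / det
x2 = (a * d[1] - c * d[0]) / det

# Each sector must produce more than final demand alone, because part of its
# output is consumed as inputs by the other sectors.
print(f"Gross outputs: x1 = {x1:.1f}, x2 = {x2:.1f}")  # x1 = 166.7, x2 = 111.1
```

A model of this kind lets one trace how an injection of demand into one sector -- for example NASA procurement from high-technology industries -- propagates through that sector’s suppliers, which is the mechanism behind multiplier estimates of the sort quoted above.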

Critics of the Apollo program commonly respond by claiming that a much larger economic gain could have been achieved if the $100 billion had been invested in other ways. This might be true in terms of GDP. But from our standpoint maximizing GDP growth is not the purpose of economic activity, nor should it be the criterion for economic decisions. From the standpoint of physical economy, the manned exploration of the Moon is valuable in and of itself. It is sufficient that the Apollo program had a positive multiplier or “payback” through boosting the overall physical productivity of the economy, thus guaranteeing the sustainability of the required investment.

The multiplier effect of the Apollo program was limited by the fact that it was focused on technology only and did not go beyond established scientific principles. The payback from fundamental scientific discoveries and the technological revolutions flowing from such discoveries is orders of magnitude larger.

One of the best examples of the payback of fundamental research from recent history is the process which began with the discovery of the quantum of action in 1899 and the emergence of quantum mechanics as a new fundamental physical theory in the early decades of the 20th century. This led to a new understanding of semiconductivity and related phenomena, and via the creation of the first transistors in the late 1940s, followed by integrated circuits and microprocessors, to the computer and internet revolution of recent decades. The resulting impact on the real physical productivity of the world’s industrial economies includes enormous savings in labor and resources in industrial production through automation, increases in the speed, precision and reliability of manufacturing processes, a vast improvement in logistical capabilities, in communications and in the collection and handling of the huge amounts of technical data required for the operation of a modern industrial economy.

Although quantitative estimates are not available for this whole process, there is no doubt that the cited gains in physical productivity have “paid back” the real investment in fundamental and applied science and R&D -- measured in terms of manpower and equipment -- tens or (more probably) hundreds of times over.

The Apollo Moon program and the technological revolutions flowing from quantum theory illustrate a general principle: Any nation with a sufficiently developed industrial base can exploit the productivity gains generated by advances in science and technology, to employ an ever larger percentage of the population in scientific research and other activities that contribute to the growth of human knowledge. Since there is an infinite potential for development of scientific knowledge, there is ultimately no limit to the number of jobs which can be generated in this way!  

 

Chapter 9   The Knowledge Generator Economy

9.1  The future of industrial society

What is the future of industrial society? What can be the long-term motor of economic growth and development?

What happens in the ideal case, when presently existing deficiencies have been overcome, and we have a world consisting of modern industrial nations? What comes next? Could we continue indefinitely with the model of industry-centered growth, as it was practiced by the United States, Japan, Germany and other nations before the imposition of neoliberal policies? Is the explosive growth of industrial production in China today a model for the long-term future of Mankind? Is it the aim of an economy to just produce more and more and consume more and more?

Several arguments have been given as to why a continuation of industrially-oriented economic growth in the traditional sense is impossible in the long run:

1. Unsustainability – The present mode of industrial growth is unsustainable because it is using up limited nonrenewable natural resources, exhausting the so-called carrying capacity of the Earth and destroying the Earth’s ecosystems (e.g. global warming).

2. Overproduction – Once a nation has attained a high living standard and population growth has slowed to a halt (as in most of the rich countries today), the growth of consumption will ultimately slow down and approach a point of saturation. From that point on there will no longer be sufficient additional demand to sustain further industrial growth. Either that growth stops or the result will be chronic overproduction. Modern mass production methods can generate floods of goods at low cost. Already today, enormous pressure is applied to consumers by sophisticated marketing methods to create new needs and get them to buy more and more. Otherwise mountains of goods will remain unsold. But a day has only 24 hours and there are limits to what a single person can consume.

3. Unemployment – The automation of mass production and the development of robots able to replace human beings in many sorts of tasks mean that most classical forms of industrial jobs will disappear. Industrial growth will no longer function as a driver of employment. What will happen with the countless millions of people whose jobs will thereby become superfluous? What will happen to the masses of young people who will enter the workforce in the coming period? What will happen when a resumption of policies for physical-economic development -- or before then a giant financial crash -- turns off the sources of money which currently prop up a “bubble” of higher-paid, but artificially-created jobs in financial services, real estate, advertising and marketing, the “dot-com” sector etc.? How can mass unemployment be avoided?

9.2  Beyond industrial society: the thirst for knowledge as motor of economic growth

From the standpoint of physical economy we can envisage a trajectory of development of the world economy that would answer all the objections mentioned above and provide the basis for unlimited growth into the future. The characteristics of the required trajectory will be presented step-by-step in the following chapters.

We have already discussed how the development of science and technology provides the means for overcoming all apparent limits to growth, while practically eliminating the environmental problems created by present extensive forms of economic growth via a transition to intensive growth and development.

But to realize the potential for unlimited growth into the future will require much higher rates of investment in scientific and technological progress than have ever been realized before -- at least one or two orders of magnitude higher than those achieved by the United States and other nations until now. Investment in scientific research and related activities will become the main generator of employment and demand. All of this is impossible on the basis of a simple continuation of earlier modes of industrial growth. It means a new paradigm.

What is the economic significance of science? Until now, nearly all economists -- and ordinary people as well -- would answer that the main economic function of scientific research is to increase the productivity of an economy. The pathway for this effect is well known: scientific advances lead to the emergence of new technologies and improvements in existing technologies, which in turn lead to higher productivity in industry and other sectors of the economy. Pure and applied science, and the R&D activity that transforms new scientific knowledge into useful technologies, are thus seen as a necessary investment for any modern economy.

Thus the growth of human knowledge is not seen as a goal of economic activity, but rather as something which must be paid for in order to obtain economic benefits. 

The notion of “Knowledge Economy” or “Knowledge-Based Economy”, which has recently become popular in policy discussions, is based on exactly this understanding of the economic role of science. The only new aspect of the “Knowledge Economy” is that the growth of knowledge -- including especially scientific and technical knowledge -- is now recognized as the main source of increased productivity, rather than only one of many sources as it has been regarded in the past. We can read, for example:

“The ability to create, distribute and exploit knowledge and information seems ever more important and is often regarded as the single most important factor underlying economic growth and improvements in the quality of life … The role of knowledge (as compared with natural resources, physical capital and low skill labor) has taken on greater importance. Although the pace may differ, all OECD economies are moving towards a knowledge based economy” (OECD 1996)

“Ideas, innovation and knowledge are the key drivers of modern economies … Knowledge and skills are not only directly productive in making the most of both natural and (particularly) human resources, but are also the drivers of ideas that allow productivity to grow.” (British Academy)

It is fortunate that human knowledge has come to be valued so highly among economists and policy-making institutions. But we should not fail to notice something rather bizarre about these statements. Does the value of human knowledge lie merely in its usefulness in generating economic growth? Is human knowledge merely a tool? Are human beings just tools?

From the outset of our presentation of physical economy we rightly placed human beings and the creative powers of the human mind at the center of economics. For us an economy is a tool of human society -- a tool for promoting happiness by sustaining human life and creating the context and possibility for human beings to realize their creative potentials.

In order to present our correction to the concept of the “Knowledge-Based Economy” it is important to clarify what should be meant by “creativity” and “knowledge”.

The term “creative” is intended here not in the rather arbitrary sense often used today, but as the ability to expand human knowledge about the Universe – including about the human mind and society – as well as the type of creativity embodied in great classical works of art. It can be shown that these two expressions of creativity are inseparable from each other. Creativity in our sense is also manifested in the daily lives of people when they assimilate new knowledge, develop new solutions to problems and exercise their powers of observation and insight to learn something new about the world around them.

Here we use the term “knowledge” in a broad sense which includes not only theoretical and empirical knowledge in the ordinary scientific sense, but also observations, experience and insights gained in any domain of human activity. For example, we would regard as “knowledge” the observations reported by the astronauts who landed on the Moon, concerning what they saw and how they reacted subjectively to their experiences. This applies to exploration in general. The impulse to explore is inherent in human nature.

The essential criterion for “knowledge”, as opposed to arbitrary fantasy, is (1) that it be communicable (in principle, at least) to every other human being and (2) that it contribute to developing a coherent and self-consistent conception of reality which encompasses the entirety of existing human experience, in the context of a continual expansion of Man’s activity in the Universe. Given that human knowledge can never be perfect, this implies a certain kind of accountability: contradictions and incoherencies, whenever they appear, must be resolved and explained through the progression to a more perfect conception of reality. For example: if we make observations that appear to contradict the laws of physics as we know them, then we must either find an error in the observations, or revise our physics in such a way that it can account for the new observations, while at the same time remaining coherent with what we otherwise know about the Universe. 

An analogous sort of process occurs in the rigorous contrapuntal method of classical music: dissonances are generated and resolved in the course of development of a single unified conception. Grasping that “higher harmony” which encompasses the dissonances as necessary features, and thereby grasping the intention of the composer, can legitimately be regarded as a form of knowledge. Most important, classical art of this sort exercises the same creative powers of human reason that are required for scientific discovery. It is thus not an accident that classical music played an important role in the lives of many great scientists of the past.

The thirst for knowledge is expressed not only in scientific research and in great art, but also in daily life in many situations where we ask: “what happened?” and “why did it happen?” Implicit in all such questions is the notion of a single reality and constant striving to integrate the entirety of experience into a coherent unity.    

In light of these considerations we propose to turn the notion of the “knowledge economy” upside down, as follows:

The future belongs to economies that are optimized to achieve the most rapid all-round development of human knowledge, mobilizing the creative powers of the population to the maximum possible degree.  Such economies grow first and foremost in the domain of knowledge, and secondarily – in order to achieve that cognitive goal – in the domain of physical activity.

For lack of a better term, we shall call this the Knowledge Generator Economy.

9.3  Characteristics of a Knowledge Generator Economy

What would such an economy look like? How could such an economy be built up? We are not talking about some sort of paradise which can be achieved all at once, but about a trajectory of development which leads step-by-step from industrial society as presently understood, to the proposed Knowledge Generator Economy.

Concretely, we consider it entirely feasible for the world’s industrial economies to transform themselves over the next approximately 30 years into economies having the following characteristics:

•             The process of expanding human knowledge will be the main driver of demand for goods and services, as well as (directly and indirectly) the main generator of employment.

•             Thanks to computerization, automation and robotization, only a small percentage of the workforce will be employed in the large-volume (mass) production and distribution of consumer and capital goods and in the operation of infrastructure. Throughout the economy, routine activities will be handled by computer systems and robots, with minimum requirements for human intervention.

•             20% or more of the workforce will be employed directly in pure and applied scientific research, working in state institutions and in a large number of private laboratories, research companies etc. Nearly the entire population will engage in some form of science-related activity for at least part of their time. More and more contributions to science and scientific R&D will come from “hobby scientists” who carry out experiments and other scientific work in growing amounts of leisure time (see Chapter 12). An increasing percentage of homes and communities will have their own laboratories, workshops and facilities for carrying out investigations of various sorts (mostly via data links to observatories and other major scientific installations serving large numbers of users).  The population will become large-scale consumers of scientific equipment.

•             Supplying the hardware and infrastructure for all types of scientific activity – from laboratory apparatus to equipment for space exploration, undersea exploration etc. – will account for a significant and increasing portion of the total demand for manufactured goods.  The other main component of demand will arise from the constant renewal and upgrading of the productive base of the economy, including physical infrastructure, on the basis of new technologies derived from scientific advances.

•             Employment in manufacturing will probably remain in the range of 30-40% of the work force, but will be concentrated more and more on the development, engineering and production of specialized, “one-of-a-kind” machinery and equipment, instruments and software, which have a relatively low production volume but very high value. This includes especially “one of a kind” products required for scientific experiments and the creation of prototypes for new technologies in the productive sectors of the economy. Such work is a labor-intensive and science-intensive activity demanding large amounts of creative problem-solving and innovation. Most such work will be carried out in small and medium sized enterprises. In this domain of unique products, price competition plays little or no role.  This sector of equipment and prototype production mediates the process whereby the economy assimilates scientific advances and transforms them into new technologies. Its relationship to the scientific community will be much closer than today, because of the higher rate of scientific progress and a need for a much more intensive cooperation among scientists, engineers and the skilled labor force generally. Among other things, anomalies encountered in the development of prototype systems serve as starting-points for new scientific discoveries.

•             The majority of the adult population will have the equivalent of at least a university-level education. Lifelong learning will be the norm; the traditional classroom form of education will be largely replaced by a combination of self-study, seminar work (group study) and on-job training.

•             Service sector employment will be dominated by education and health care. The percentage of the workforce employed in administration, sales & marketing and financial services will be greatly reduced compared to today. An exception, most likely, will be highly-qualified employment in institutions concerned with state support for science and technology.

•             Capital will be plentiful, and investments will be limited primarily by projected demand and the availability of labor and physical inputs.  

•             With the economy structured in the indicated way, the growing investment of manpower and resources into scientific activity will be compensated by the increases in physical productivity which will be constantly generated as a byproduct of the growth of scientific knowledge. This “multiplier effect” is the basis for the economic sustainability of the Knowledge Generator Economy.

We have framed this description in the context of capitalist or so-called mixed economies, but it could in principle be realized in economies of a more socialist type.

For developing countries the transition can take somewhat longer because of the need to build up the necessary productive base and to achieve the overall standards of living and education of the population that permit a “take-off” into the Knowledge Generator Economy.

Needless to say, realizing such a trajectory of development requires radical changes in economic policy. It requires a reorganization of the financial system to provide for large-scale credit generation for investment in science-intensive activities, a drastic upgrading of education (including adult education), a shift in prevailing cultural tendencies and much more. Achieving high rates of scientific and technological progress – necessary to put a nation on a trajectory of sustained physical-economic development – requires building up a complex network of advanced industries and R&D capabilities.  It requires a new generation of brilliant scientists and engineers. Above all it requires a high quality of leadership at various levels of society. There must be a clear commitment to this policy course, because a change is required in the thinking of society. Such a paradigm shift will not be completed overnight – the trajectory we propose encompasses one generation – but it can be initiated quickly by suitable decisions and projects that set a new economic course.

9.4  Preparing the transition to a Knowledge Generator Economy

The most essential step in preparing the ground for the Knowledge Generator Economy is to awaken in the population – especially the younger population – a desire to participate actively in the exploration of the Universe and in the progress of knowledge, plus confidence that such activity can provide a secure economic future. Thereby a favorable social climate and political base will be created for the required changes in policy.   

Since curiosity, idealism and yearning for adventure are naturally present in young people, this task can be accomplished in large measure by the combination of two concrete policy steps: 

1. Increasing the economic attractiveness and social status of careers in science and technology, providing support and incentives for large numbers of young people to qualify themselves in advanced areas of science and technology. In this context it is indispensable to give young people a credible perspective of obtaining a secure, well-paid and interesting job after completing their education.

2. Launching an array of high-profile science- and technology-intensive projects which capture the imagination of youth while at the same time rapidly expanding employment and production in the science and technology-intensive sectors of the economy.

The effectiveness of such policies is exemplified, in the history of the United States, by the successes gained by the combination of the G.I. Bill and related education measures, together with the space program and other high-technology programs.

9.5  Visionary projects

In the history of the world’s nations, the best periods of economic development have nearly always been associated with some sort of Great Projects. To launch a Knowledge Generator Economy requires projects on an unprecedented scale in terms of the investments of manpower and resources – projects that have a very large multiplier effect in promoting physical-economic development and mobilize the creative potentials of the population. For various reasons two such mega-projects stand out as feasible and optimal for this purpose. We call them “visionary” because they have the character of transforming the whole future of society. In their full scope they exceed the capabilities of any single nation. Nevertheless, they provide the overall directions within which national projects can be designed. In the best case, these would fit into a worldwide cooperation and division of labor for the realization of the indicated visionary goals.

9.6  Project I: A new era of space exploration and mass participation in scientific research

We propose a vast expansion of human activity in the solar system, including manned missions, robotic probes and new generations of space-based observatories, as a driver of physical-economic development on the Earth. The focus must be on systematically building up the infrastructure and technological capabilities for: 

(i) Systematic manned and unmanned exploration of the Moon and Mars for scientific purposes and with the goal of establishing permanent human colonies in the longer term. A priority is prospecting for subsurface water/ice and mapping out the locations of various types of mineral deposits. 

(ii) On-site investigation of the potential for utilizing microorganisms to gradually transform the atmosphere and soil of Mars in such a way as to support higher forms of life (terraforming). 

(iii) An intensive program of unmanned space missions to remote destinations in the solar system, as exemplified by the recent, sensational Rosetta mission to the comet 67P.

(iv) Establishment of a large network of astronomical observatories in various regions of the solar system.

(v) Development of new means of access to space, eliminating present dependence on chemical-propulsion rocket engines and reducing the real cost and risks of space missions by as much as two orders of magnitude. Travel into Earth orbit and beyond would become routine.

Among the scientifically validated possibilities the most promising appears to be the “space elevator” originally proposed by the Russian space science pioneer Konstantin Tsiolkovsky back in 1895. The technological difficulties involved are so immense, however, that actually realizing any of these possibilities has long seemed like a mere dream. Building a “space elevator” requires materials with strengths vastly beyond anything previously known or even considered possible. Recently, however, unexpected breakthroughs in materials science – specifically the development of carbon nanotube fibers – have suddenly transformed the space elevator, in the variant known as the “space tether”, into a project that could conceivably be realized within one or two decades. A number of government laboratories and private companies are already engaged in relevant R&D efforts, and there is an international network of organizations devoted to the space elevator. Here the necessary technologies include not only super high-strength materials, but also means of transmitting large amounts of energy to the elevator vehicle.

In addition to the space elevator there are a number of other alternative approaches which can be expected to become technologically and economically feasible in the foreseeable future. 

The effort necessary to realize the listed goals encompasses nearly the entire “orchestra” of all branches of science and technology and is thus ideally suited as a locomotive for physical-economic development. 

The multiplier effect will be amplified by new knowledge concerning “exotic” physical processes occurring in planets, stars, the centers of galaxies etc. which can lead to scientific and technological revolutions on the Earth. Essential in this context is the relationship between increases in the power density of technology and increases in the real productivity of the economy, discussed in Part I. “Exotic” astrophysical processes typically involve the behavior of matter at very large power densities. A historical example is the original investigations of Eddington, Bethe and others on the energy sources of the Sun and stars, which have led to the ongoing development of nuclear fusion energy as a future energy source for mankind.

In this context we should note that although the number of persons actually travelling into space will initially remain only a very small percentage of the world population, the activity generated by the new era of space exploration will involve a large part of the population directly and indirectly. On the one side we have the huge effort of designing, manufacturing, testing and operating the myriad systems and components required for hundreds and later thousands of space missions. On the other side we have the process of applying the flow of technological innovations, generated in the context of the space effort, to increase the productivity of the economy here on Earth – a task that will involve much of the work force. Finally, we have mass participation in scientific activities connected with the exploration of space.

50 million astronomers?

More profound and far-reaching, for the transition to the Knowledge Generator Economy, is the perspective of engaging tens or even hundreds of millions of people in the process of analyzing the flood of observations and data which will be generated by space activities.

The following example shows that this idea is by no means so fantastic as it might appear at first glance. The Hubble Space Telescope, which has been orbiting the Earth since 1990, has produced images of a tiny sector of the sky (11.5 arc-minutes in diameter) in which approximately 10 000 galaxies are visible. Assuming a similar density of galaxies for the whole sky, we can estimate that a total of more than 12 billion galaxies are already now within the visible range of this instrument. Thus, there are more than enough galaxies for each living human being on Earth to “adopt” one of them for study. (They could also be dedicated to a child at birth, or as a birthday present!) The same applies to the approximately 100 billion stars within our own galaxy, the Milky Way. We know from direct detection that some of these stars have systems of orbiting planets, and it is estimated that at least 100 billion planets exist in our galaxy. With the establishment of a large network of astronomical instruments utilizing the most advanced technologies and detector systems, it will become possible not only to investigate each of these objects in detail, but also to carry out observations and measurements on millions of objects in parallel, just as parallel processing is used in computing today. In this way millions of people on Earth can individually study their choice of objects via data links to space observatories.
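The extrapolation behind the “more than 12 billion” figure is simple solid-angle arithmetic. The following minimal sketch uses the field size and galaxy count given above, together with the standard whole-sky area of about 41,253 square degrees:

```python
import math

# Hubble deep-field patch: a circle about 11.5 arc-minutes in diameter
field_diameter_deg = 11.5 / 60                      # arc-minutes -> degrees
field_area_sq_deg = math.pi * (field_diameter_deg / 2) ** 2

# The whole celestial sphere covers about 41,253 square degrees
whole_sky_sq_deg = 41_253
n_fields = whole_sky_sq_deg / field_area_sq_deg     # ~1.4 million such patches

galaxies_per_field = 10_000
total_galaxies = n_fields * galaxies_per_field

print(f"{total_galaxies:.1e} galaxies")             # prints 1.4e+10 galaxies
```

The result, roughly 14 billion, is consistent with the “more than 12 billion” cited above; the exact figure depends on how uniformly galaxies are distributed across the sky.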

Although astronomers classify stars and galaxies into various categories and types, in reality no two of them are alike. Each galaxy is a unique world unto itself, and although the time-scale of development of galaxies is extremely long, a typical galaxy (including our own) exhibits constant activity – variable stars, novas, flares etc. – occurring on time-scales accessible to direct human observation. In addition, instruments carried by manned and unmanned missions to the Moon, Mars and other objects in the solar system will generate enormous quantities of images and data for analysis. That includes the study of our own star, the Sun, whose spectacular ongoing activity can already be observed in extraordinary detail from orbital observatories such as NASA’s Solar Dynamics Observatory. Although the existence of a strong link between solar activity and climatic and other conditions on the Earth has long been known, the full extent and structure of this link are still far from being understood. Here astronomy intersects with the extraordinarily rich area of the Earth sciences.

The notion that tens or even hundreds of millions of people might participate in astronomical investigations is not so far-fetched as it might seem at first glance. In spite of a rather low level of scientific culture in today’s society, hundreds of thousands of enthusiastic amateurs are already carrying out astronomical observations, using more and more sophisticated instruments.  With the advent of CCD technology, small amateur telescopes can now carry out observations that were only possible on large professional instruments a decade ago. Many of these amateurs are already “networked” with each other and with scientific institutions, and supply important information to professional astronomy. There are so many objects in the sky and so many events occurring in space, that it would be impossible to analyze more than a tiny fraction of them without the participation of a very large number of observers.

Space research is a special case of a much broader perspective for mass participation in scientific research, which we shall discuss in depth in Chapter 12.

9.7 Project II: Infrastructural revolution: the end of fossil fuels

In order to move from extensive to intensive modes of economic growth it is necessary to transform the infrastructural base of the world economy in the direction of much higher power densities. The priority goal is to eliminate the present dependency of the world economy on the combustion of oil, coal, gas and other chemical fuels, by rebuilding the energy and transport system on the basis of technologies whose physical productivity is an order of magnitude higher than those now in use.

Currently, power production and transportation in the world economy are almost entirely based on the combustion of approximately 20 billion tons of coal, oil and gas every year. This situation is a typical product of an extensive (or linear) mode of economic growth: a huge expansion of the scale of production and consumption without fundamental technological revolutions. The extraction, processing, transportation, distribution and combustion of these 20 billion tons of material impose enormous costs on the world economy. The much-discussed environmental effects go hand-in-hand with a gigantic waste of resources, chronic inefficiency and technological stagnation at the base of the economy.

The task of virtually eliminating the large-scale use of fossil fuels can only be accomplished by exploiting the full potential of nuclear-physics-based technologies in combination with improved electricity-powered transport systems. The transition from chemical to nuclear energy as the primary energy base of the world economy follows the entire logic of economic development, both in terms of the progression to ever higher power densities, and in terms of the extension of human knowledge in the complementary directions of the extremely small (nuclear) and extremely large (astronomical) scales of the Universe.

As we discussed in Part I, the physical productivity of an economy correlates closely with the power density (energy-flux density) of the technologies employed in its physical infrastructure. The potential for nuclear-based technologies to achieve very high power densities is a consequence of the fact that the binding energies of atomic nuclei are typically 6 orders of magnitude larger than the chemical binding energies of molecules. Nuclear reactions release millions of times more energy per unit mass than chemical reactions. The transition to nuclear power can thus liberate the economy from the gigantic burden imposed by the extraction and transport of billions of tons of fossil fuels each year for electric power production.
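The “millions of times” factor can be checked with a rough back-of-the-envelope calculation. The figures below are standard order-of-magnitude values, not taken from the text: roughly 30 MJ per kilogram for the combustion of good coal, versus roughly 80 TJ per kilogram for the complete fission of uranium-235:

```python
# Chemical bonds store a few eV per bond; nuclear binding energies are a few
# MeV per nucleon -- a factor of about a million. The same ratio appears in
# the energy released per kilogram of fuel:

coal_MJ_per_kg = 30                # heat of combustion of good coal (~30 MJ/kg)
u235_MJ_per_kg = 80_000_000        # complete fission of U-235 (~80 TJ/kg)

ratio = u235_MJ_per_kg / coal_MJ_per_kg
print(f"fission releases about {ratio:,.0f} times more energy per kg")
```

The ratio comes out to roughly 2.7 million – consistent with the “6 orders of magnitude” cited above.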

The potential of nuclear-physics-based technologies includes not only inherently safe and low-waste (or even waste-free) nuclear fission and fusion reactors as sources of electricity and heat, but also the option of using nuclear batteries to power electric vehicles. 

Meanwhile, progress in mastering the physics of self-organizing collective processes in materials, including high-temperature superconductors, can greatly increase the power density of electric motors, power storage systems and transmission systems.

Continued research in these areas holds enormous promise for the future. Among other things, many signs point to a coming scientific and technological revolution at the intersection of nuclear physics and the physics of collective processes in solid materials. One of them is the totally unexpected phenomenon of so-called low energy nuclear reactions (LENR) in solid materials loaded to high densities with isotopes of hydrogen. Besides the release of heat, evidence has been found for a variety of nuclear transmutation events taking place in LENR systems. At present the existence of such processes can no longer be doubted – the accumulated mass of experimental evidence is overwhelming – but (in our view) there is still no satisfactory explanation. It is impossible to predict what role LENR might play in power generation and other applications, but any large-scale application would be revolutionary.

1. A gigantic undertaking    

Given that the world economy presently produces 80% of its electric power and 98% of its transport energy from fossil fuels, the goal of eliminating fossil fuel dependency is a gigantic undertaking, both in terms of the volume of industrial production, engineering and construction work required, and in terms of the scale of scientific research and applied R&D efforts required to develop the necessary technologies. In order to succeed, this effort will have to be extremely broad-based. An enormous range of possibilities must be investigated and followed up, including ideas of a relatively speculative sort. Countless prototypes must be constructed and tested.

Apart from the direct benefits to the economy from increases in the efficiency and power density of transport and energy infrastructure, the payback from investment in this project is guaranteed by the extraordinarily broad spectrum of technologies whose development will be driven forward and the close link of many of them to fundamental areas of research. Indeed, the task of unlocking the full potential of nuclear energy brings us to the forefront of scientific knowledge, necessitating breakthroughs in some of the most advanced areas of physics.

Alongside space exploration, this is probably the single most effective project for stimulating the science- and technology-intensive sectors of the economy.

A byproduct of the proposed undertaking will be to eliminate the main source of environmental pollution and of human emissions of carbon dioxide into the atmosphere. But unlike the present ill-advised policies of many governments, which promote so-called renewable, low-power-density energy sources such as wind and solar energy, the development and application of advanced forms of nuclear energy will greatly increase the real productivity of the economy.

At present the main obstacle to using nuclear energy technologies is the hysterical opposition from sections of society to nuclear energy in any form. But thoughtful environmentalists are increasingly coming to support the nuclear option, realizing that there is no realistic way for so-called renewable energy sources (wind, solar, geothermal, hydroelectric, biomass) to replace fossil fuels as the main energy sources for the world economy. This is evident from simple cost considerations alone, quite apart from the issue of real physical-economic productivity. With the constant improvement in extraction technology, the estimated recoverable reserves of oil and gas continue to grow faster than consumption, while prices remain stable (or even fall). Coal also remains plentiful. As long as oil, gas and coal remain plentiful and cheap, there is no way that the much more expensive “renewable” sources can be competitive on a large scale without massive government subsidies. Such subsidies are hardly to be expected in developing countries, where energy consumption will grow at the fastest rate. It is not an accident that China today has the world’s largest and most ambitious nuclear energy program. In Germany, where the government has poured gigantic sums into wind and solar energy, citizens are gradually waking up to the fact that electricity is going to become very expensive. Unlike nuclear energy, the development of wind and solar energy brings very little technological benefit to the rest of the economy.

2.  Bringing fission out of the stone age

At present nuclear fission is still in the “stone age”, as typified by the present, nearly exclusive use of giant light water fission reactors (LWRs) for power generation. These “stone-age” nuclear power plants have numerous disadvantages: they generate a mixture of many hazardous radioactive isotopes requiring special handling and storage; their safe operation depends on complex, multiply-redundant safety systems and huge multi-barrier containment structures; they require uranium enrichment, which is a complicated and expensive process; they make poor use of the energy content of the fuel and of the high power-density of nuclear reactions. These disadvantages combine to make present-day nuclear power generation at best only marginally more economical than conventional fossil-fuel power plants. The failure so far to make better use of the intrinsic advantages of nuclear energy is to a large extent a result of the abandonment of most of the research programs for alternative nuclear technologies in the U.S. and Europe, following the 1970s paradigm shift in economic policies. As a result, the basic technology of today’s nuclear power plants remains in essence the same as that of the reactors originally developed for submarine propulsion in the 1950s.

Fortunately, a variety of alternative nuclear technologies which avoid all or most of the stated deficiencies exist today, in various stages of development. These alternatives include gas-cooled high temperature reactors (HTR or HTGR) and molten salt reactors, whose safety and stability are assured by the physics of the process itself and which require no active safety systems (“inherent” or “passive” safety). These reactors operate at much higher temperatures than LWRs, and can thereby achieve much higher efficiencies in the generation of electricity, as well as serving as heat sources for industrial processes. Many of the alternative systems envisage the use of thorium fuel, which is much more plentiful than uranium and does not require costly isotope separation with the associated danger of military proliferation. China is presently leading the world in the effort to develop thorium-fueled HTGR and molten salt reactors with inherent safety, as well as a variety of other designs. Another direction of development is fast neutron reactors, which can “burn up” radioactive waste products. A promising example is the PRISM fast neutron reactor designed by GE-Hitachi. There are also designs for reactors in which the energy of fission reactions is converted directly into electricity at high efficiency.

3.  Fusion

Breakthroughs in fusion energy will open up a whole universe of additional options. In fusion reactors energy is generated by the fusion of light atomic nuclei in plasmas at temperatures of the order of 100 million degrees. There is practically no radioactive waste, and virtually unlimited amounts of fuel are available in the form of hydrogen isotopes contained in sea water. Fusion has a number of intrinsic safety advantages compared to fission. The fusion process can be turned on and off instantaneously, and there is no danger of an uncontrolled, “run-away” reaction. Unlike fission reactors, there is no large inventory of radioactive materials which continue to produce heat after the reactor has been shut down. Fusion is also more flexible than fission in many ways: there is a very wide variety of types and geometries for fusion reactors – tokamaks, stellarators, so-called mirror machines, plasma focus and Z-pinch devices, laser fusion, particle-beam fusion etc. Unfortunately, starting in the late 1970s the United States, which had the world’s largest and broadest fusion program, greatly reduced the size and scope of the program and dropped many promising directions of research. This is a main reason why the achievement of so-called “ignition” conditions for a fusion reactor has dragged on for so long.

It is urgently necessary, in the context of our overall plan, to drastically expand the scale and scope of international fusion research. The required crash program to realize fusion energy in various forms must be carried out on the broadest possible front, with the goal of achieving breakthroughs in the shortest possible time. The multiplier effect of such an effort will be immense, for many reasons: (1) the intimate connection between fusion energy and astrophysics; (2) the ubiquitous role of the plasma state (“the fourth state of matter”) in the natural world; (3) the growing range of applications of plasmas in technology; (4) the close link between fusion research and the development of advanced technologies such as high-power lasers and high-field superconducting magnets; and (5) the enormous scope of engineering challenges posed by the design and construction of fusion reactors.

All in all, there is little doubt that an intensive international effort directed at achieving breakthroughs in the development of both fission and fusion has the potential to reduce the cost of nuclear-generated electricity to a small fraction of the present generating cost of fossil-fueled plants.

The first phase of a transition away from fossil fuels for electric power generation will no doubt be based mainly on a mixture of various types of nuclear fission reactors optimized for specific purposes: electricity generation; process heat for industry; isotope production (including isotopes for medical use and for “nuclear batteries”); reactors for “burning up” nuclear waste; reactors for the propulsion of ships, etc. Later, fusion will come on line.

4.  Nuclear batteries

Energy storage systems based on nuclear processes have the potential, in principle, of storing millions of times more energy per unit weight and volume than present-day chemical batteries. An electric vehicle could run for decades without the need for recharging or battery replacement. Today we are still very far from being able to fully realize the potential of nuclear-based energy storage devices. Nevertheless, nuclear power sources based on the decay of isotopes are commonly used in space probes. For example, the Mars rover Curiosity, which has been exploring the surface of Mars since 2012, is powered by a multi-mission radioisotope thermoelectric generator (MMRTG).

At present there are a number of different types of nuclear batteries. In the MMRTG, heat released by the decay of isotopes is converted into electricity by thermocouples. Several different isotopes can be used for such systems. In addition, betavoltaic batteries have been developed which generate electricity directly, utilizing isotopes whose decay produces high-energy electrons (beta radiation). A promising area of research is the possible utilization of nuclear isomers – metastable excited states of nuclei – as power sources. It is conceivable that energy storage systems might be developed which utilize the energy of inner-shell electrons – rather than merely the valence electrons employed in chemical processes – thereby opening up a range of power densities intermediate between chemical and nuclear energy.

5.  100% electricity-based road transport

Turning to the issue of transport, we note that over 90% of all passenger transport and over 80% of freight transport in the world economy today is based directly or indirectly on the combustion of fossil fuels. Nearly 100% of all cars, trucks, ships and aircraft are powered by piston or turbine engines fueled by petroleum products; rail transport is split between diesel-powered and electric locomotives, with about 2/3 of the electricity coming from combustion of fossil fuels. First-generation battery-powered automobiles have come onto the market, but are still only a tiny percentage of the total. This indicates the magnitude of the transformation required.

The biggest task lies in the conversion of road transportation, which accounts for about 73% of the energy consumption of the transport sector. Thanks to enormous development efforts, all-electric cars, vans and trucks have now come onto the market. These have major advantages in terms of operating costs, due mainly to the more efficient use of energy and to low maintenance requirements resulting from the very small number of moving parts compared to internal combustion engines. The principal weakness lies in the limitations of presently available chemical batteries. A full “electrification” of road transport using battery-powered vehicles would clearly not be feasible without a huge improvement in the power and energy density of battery systems. Given the size of the market for cars, an enormous amount of research is being concentrated on this problem today. It is conceivable, but by no means guaranteed, that breakthroughs in chemical batteries could eventually make it possible to reach energy and power densities comparable to those of gasoline and diesel fuels. But the real revolution would come through the use of nuclear-based energy sources (see below).

At the same time, novel “supercapacitors” utilizing nanotechnology are being developed, which could possibly reach energy densities comparable to chemical batteries while offering substantial advantages. Supercapacitors are already routinely used in many hybrid vehicles, including passenger buses in China. Electrical energy is generated and stored in the supercapacitor when the vehicle brakes, and is then used to supply extra power for acceleration when the vehicle starts moving again.

Another direction of development with enormous potential is the wireless charging of electric vehicles while in motion. The Korean city of Gumi is already operating a fleet of electric-powered passenger buses which are recharged without mechanical contact, via electromagnetic induction, while driving over charging strips installed under selected portions of the roadway. In this way the buses can run “forever” without needing to stop for recharging. For such “Online Electric Vehicles” (OLEVs) the demands on battery storage capacity are significantly reduced, and installing the charging strips adds only a small fraction to the normal cost of roadway construction per kilometer. OLEVs and roadway wireless charging systems are currently under development in a number of countries and can in principle be applied to cars and trucks. Advances in the technology of wireless energy transfer now permit energy efficiencies of over 90% to be realized.

This technology constitutes a big step towards so-called “smart highways”, which would enable driverless operation of all-electric vehicles as well as wireless recharging. Cars, buses and trucks would be guided automatically, and the overall traffic flow managed and optimized by data transfer via the roadway.

Realizing driverless road transportation, at least on the most heavily utilized routes, would generate enormous savings for the economy in terms of time, the strain of driving, and the costs of accidents. Over 1.3 million people die in accidents on the world’s roadways every year, and an estimated 20-50 million more are injured or disabled. The average American spends about 15 hours per week driving, de facto increasing the average work week by over 30%. With automatic driving a major source of stress would be eliminated – the former driver could relax, read, or work on a computer, just like a passenger on a train.
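The “over 30%” figure follows directly from the numbers in the paragraph, assuming a standard 40-hour work week as the baseline (our assumption; the text does not state one):

```python
driving_hours_per_week = 15      # average weekly driving time cited in the text
work_hours_per_week = 40         # assumed standard full-time work week

extra_fraction = driving_hours_per_week / work_hours_per_week
print(f"driving adds {extra_fraction:.0%} to the work week")   # prints 38%
```

Fifteen hours against a 40-hour baseline is 37.5%, comfortably “over 30%”.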

In summary, the results already obtained from a variety of technological development lines make feasible a transition to all-electric road transportation within the coming 20-25 years. This process will go hand-in-hand with a period of intensive city-building, particularly in developing countries, which provides the ideal opportunity to install the most modern public transportation systems together with “smart” infrastructure for individual vehicles.

6.  Electric aircraft

The electrification of commercial air transport would bring enormous advantages in terms of reduced energy consumption, increased safety, elimination of pollution, greatly reduced noise and lower maintenance costs. Battery-powered passenger aircraft have now come onto the market, but up to now they are small (1-2 passenger) vehicles operating at low speeds over short ranges. At the same time, work is ongoing to develop hybrid aircraft as a first step toward all-electric passenger air transport. In such hybrid aircraft, turbofans are driven by electric motors supplied with electricity from a kerosene-powered generator. Also under development is the application of high-temperature superconductors to greatly decrease the size and weight of the motor system.

Against this background, creating all-electric aircraft that can replace hydrocarbon-fueled aircraft in commercial passenger and freight service depends almost entirely on increasing the power and energy densities of batteries or other electricity storage systems, including their energy-to-weight ratios, by 1-2 orders of magnitude relative to existing technology. In particular, energy storage densities must be achieved which are comparable to or exceed those of present-day jet aircraft fuels. Here again, nuclear-physics-based systems offer the greatest potential.

7.  Nuclear-powered shipping

Shipping accounts for about 10% of the energy consumed by the world’s transportation sector. Here nuclear power is the only viable alternative to the present, nearly 100% dependence on fossil fuels.

There is already a large amount of experience with the use of nuclear energy to power ships of various kinds, demonstrating the advantages of being able to operate for years without refueling. The safety record of nuclear-powered ships has been excellent.

Today there are over 140 nuclear-powered ships in operation around the world, and new ones are under construction in a number of countries. Nearly all are military – submarines and large aircraft carriers – but Russia also operates a highly successful fleet of nuclear-powered icebreakers. Four prototype nuclear-powered freight ships were built, by Russia, the U.S., Germany and Japan, to study the feasibility of nuclear power for commercial shipping. Although they could not compete with conventional diesel freighters – mainly owing to higher capital costs – the experience with these prototypes demonstrated significant potential advantages of future nuclear freighters: lower fuel cost, the ability to run for many years without refueling, long service life and shorter “turn-around” times at ports.

In the recent period there has been increasing interest in the use of nuclear reactors to power the huge container ships which now carry a large part of world trade. Apart from the popular anxiety about nuclear energy, the main obstacle to a large-scale commercial use of nuclear-powered ships has been the high cost of the nuclear propulsion system, especially the reactor itself, relative to conventional propulsion. The cost can be greatly reduced, however, by utilizing small, standardized modular reactors with inherent safety, which can be produced in large numbers for a variety of applications. With the expected breakthroughs in the technology of nuclear power production itself (see above) nuclear power is practically certain to become the common form of power for large cargo ships.

 

Chapter 10   The problem of alienation

10.1  Objections to our proposals

Realizing the principles of physical economy and the perspective of the Knowledge Generator Economy raises many political and social issues. Most of these are beyond the scope of this book, but one issue is so important that we must address it here: the problem of alienation.

This problem is perhaps best expressed by the following harsh criticisms which might be directed against our entire conception:

“The conception of physical economy as an instrument for the masses of the population to exercise and develop their creative mental powers, is absurdly idealistic and has nothing to do with the reality of today’s world.  This goes especially for the central role you have given to science and technology. Scientific and technological progress has in fact had the opposite effect. Modern industrial society, managed by a small technocratic elite, has reduced the individual to a mere piece of a mass-production machine.  Present-day science and technology have become so incredibly complex, that they are beyond the comprehension of the masses of the population, who are thereby excluded from having any real sense of participation in scientific and technological progress. Modern civilization has estranged Man from Nature and from inner spiritual values.”

Another line of criticism:

“Most people do not like to think. That is simply a fact. It has always been that way and always will be that way. It is therefore absurdly unrealistic to propose a scenario in which more than a small part of the population would be engaged in science and other intellectually challenging work. Science today is divided into extremely specialized areas, often requiring a complex mathematical apparatus which is difficult even for highly motivated students to master. Scientific work requires special talents which can be found only in a small percentage of the population.

“It is crazy to propose that a person without scientific training, whose only job experience is cooking hamburgers in a restaurant, for example, could be employed as a researcher in a scientific laboratory. Who can expect that people whose background is advertising, financial services or the entertainment industry, will be able to do serious work in theoretical physics?”

10.2  Alienation and the concept of work

On the surface there appears to be much truth in these arguments, especially insofar as they apply to the situation today. In our view they reflect a pervasive state of alienation of people in present-day society. Here we are using the term “alienation” in a somewhat different sense from Karl Marx and others who have written on this topic. The essence of the alienation we have in mind lies in the fact that the predominant modes of economic activity – and the forms of education and culture associated with them – not only fail to realize the creative mental powers of individuals in society, but actually suppress them. In this sense people become alienated from the creative potentials of their own minds, from the potential for genius which every human being is born with. The result is a society of mental cripples, deprived of most or all of the natural curiosity and thirst for knowledge about the world around them that is typical of every healthy child.

Naturally the phenomenon of alienation in our sense is not something completely new. Its origins go back to the birth of industrial society and even – in the opinion of some – to the beginning of organized civilization with its division of labor and specialization of individual activity. As we noted in the first chapter of this book, the “elementary” form of alienation is embodied in the traditional notion of work as a necessary sacrifice in order to gain the means of subsistence. Accordingly, the process of preparing children and youth to become adults in society has tended to suppress their inborn curiosity and thirst for knowledge, since these had little place in the realities of working life. A life of creativity was possible only for a small minority of the population: scientists, philosophers, artists etc. For the great mass of people the exercise of creativity, when it took place, was limited to aspects of personal, cultural and religious life lying outside the sphere of daily work.

10.3  The medieval craftsman as an exception   

A very important exception is the profession of the craftsman or artisan which emerged in Europe during the Middle Ages. The most remarkable features for us are the long and systematic educational process which an apprentice had to go through in order to become a master of his craft, and the growing sophistication of the medieval workshops in which master craftsmen, working freely under their own direction, produced a variety of manufactured goods. This educational process had pronounced cognitive components connected with the transfer of knowledge and experience from master to apprentice, and the requirement that aspiring craftsmen, after having completed their apprenticeship and proven their skill and knowledge in examinations, had to pass the stage of journeymen, spending several years travelling from town to town studying the different working methods of local master craftsmen. The activity of the craftsmen in their workshops involved much problem-solving, and there was a close relationship between craftsmanship and art, as exemplified by the creation of the Gothic cathedrals and other jewels of medieval architecture. The craftsmen were an influential social force in the rise of urban culture in Europe, and played a key role in the development of science and technology in the Renaissance.

The great economic and social importance of craftsmen continued up to the industrial revolution, when the growing mechanization of production led to a profound transformation in the entire economic structure of society. The workshops of the craftsmen were mostly replaced by factories employing unskilled or minimally-skilled labor and producing large quantities of low-cost goods. The great mass of craftsmen and artisans lost their special status in society and were absorbed into the industrial proletariat. This process was accompanied by phenomena such as the Luddites and so-called machine breakers who rebelled against the mechanization of production. At the same time, industrialization gave rise to new sectors of economic activity in which the tradition of the craftsman continued in a different form -- in the form of highly skilled workers and engineers employed in the design and construction of machinery for production, in the rapidly growing machine tool sector and other areas at the focus of technological development. In these areas, then as today, work has a strong cognitive component. With the development of industrial economies, the importance of this section of the work force grew more and more. The tradition of the craftsman also continues in the small and medium-sized high-technology enterprises which are the foundation for the economic strength of Germany and many industrial countries today.

10.4  Where does the extreme alienation of present-day society come from?

It can be argued that the present hostility to scientific and technological progress, as an expression of alienation in our sense, goes back to the period of industrialization when the introduction of machinery was accompanied by many negative social effects. But the extreme form of alienation found today in the relatively affluent nations of the so-called developed world has a different quality and goes much deeper, in our view. Thanks to scientific and technological progress living standards have greatly improved. The physical drudgery and misery suffered by the masses of workers in the factories of the 19th century have been largely eliminated – at least in the technologically advanced nations. The association of work with a struggle for mere physical existence appears completely outdated in this context. So what is the problem?

To understand the extreme alienation and the wide-spread hostility to scientific and technological progress in the modern world we must take into account at least three developments of the 20th century:

One pervasive factor is the alienating effects of linear optimization of economic processes. Linear optimization, as we defined it in Part I, is characterized by the attempt to increase productivity (or profits, on the level of individual companies) in the absence of significant technological progress. A classical case of linear optimization, which has had a deep and continuing effect on industrial society up to today, is the application of Taylorism (otherwise known as “scientific management”) to the mass production of goods.

In recent decades this process has been extended from the manufacturing sector to agriculture, exemplified by intensive animal farming in factory-like facilities, eliminating the traditional empathic relationship of farmers to their animals.

Neoliberal policies have greatly increased the pressure to implement linear forms of optimization at all levels of the economy. Brutal price competition, focus on extracting short-term profits at the expense of long-term viability, austerity policies leading to insufficient financing of social services, lack of investment in maintaining and renewing infrastructure etc. – all of these add up to ever-increasing pressure to squeeze the maximum performance out of existing labor and physical capital without changing the basic modes of activity. Such forced linear optimization renders work even more stressful and mind-deadening, further aggravating the effects of alienation. Although not the whole workforce is affected directly, these effects radiate into society, shaping the entire cultural climate.  

A second factor -- whose effect on society is more subtle but no less profound -- is the lack of fundamental revolutions in physical science since the early decades of the 20th century and the virtual hegemony of radical reductionism and positivism in the thinking of present-day scientists. As a consequence, economic development since then has occurred nearly entirely in the weakly nonlinear mode, as we defined it in Part I. The enormous progress in science and technology since the 1920s has been mainly based on drawing out and elaborating the consequences of fundamental scientific discoveries that were made in the preceding period, without going substantially beyond them. At the same time, science has broken with its traditional, close relationship to classical philosophy and humanism and has more and more taken on a purely technical character. On the deepest level this defect was identified by Edmund Husserl as a “Sinnentleerung” of science, an “emptying out of meaning”.

The third factor is the impact of the policies of the “postindustrial society” and “consumer society”, and the shift of employment from productive activity toward more and more superfluous, empty forms of service-sector activity. 

In the following we shall examine the first two factors. The third one has already been touched upon above in our discussion of the 1970s branching point.

10.5  Taylorism

A major cause of the alienation in industrial societies today can be traced back to the impact of so-called “scientific management” techniques which were first introduced on a large scale in the American automobile industry 100 years ago. These have shaped the development of modern society and its culture up to today.

In his famous paper “The Principles of Scientific Management” (1911) Frederick Taylor wrote:

“The fundamental principles of scientific management are applicable to all kinds of human activities, from our simplest individual acts to the work of our great corporations ... The greatest permanent prosperity for the workman, coupled with the greatest prosperity for the employer, can be brought about only when the work of the establishment is done with the smallest combined expenditure of human effort, plus nature's resources, plus the cost for the use of capital in the shape of machines, buildings, etc … The body of this paper will make it clear that, to work according to scientific laws, the management must take over and perform much of the work which is now left to the men; almost every act of the workman should be preceded by one or more preparatory acts of the management which enable him to do his work better and quicker than he otherwise could. … The first of the four elements which constitute the essence of scientific management is the development (by the management, not the workman) of the science of the given work task, with rigid rules for each motion of every man, and the perfection and standardization of all implements and working conditions.” 

It is essential to recognize that Taylor uses the terms “science” and “scientific” in a sense that has nothing to do with creative scientific discovery. Taylor himself was not a scientist but a practical man with experience in industry, who first worked as a machinist, then engineer and industrial consultant. For him “scientific” was essentially synonymous with “objective” and “systematic”, with making decisions based on observations, measurements and logical reasoning. 

It is crucial to point out that Taylor’s approach to optimizing an industrial process has nothing to do with introducing new technology. It is directed instead to rearranging the already existing basic elements of the production process in order to maximize its efficiency. The approach thus corresponds exactly to the linear mode of optimization which we identified in Part I.

“Scientific management” became most influential with the advent of assembly-line mass production, pioneered by Henry Ford in his revolution in automobile production.

The production process was divided into a large number of rigidly fixed steps, each of which was assigned to a particular factory worker or group of workers who had to repeat the same step constantly without change. This was in great contrast to the activity of traditional craftsmen and even of factory workers in the earlier phases of industrialization, which had a much more all-round character and involved a significant degree of problem-solving, improvisation and spontaneity.

 “Scientific management” aimed at “separating work for the head from work for the hands”. Activities that involved thinking, such as production planning and organization, were carried out exclusively by managers and management specialists. Workers were not supposed to think, but rather to carry out tasks which were precisely defined for them. In the context of “scientific management” techniques, often even the individual body movements of the workers on the production line were planned and optimized in terms of time and efficiency. Since work on the assembly line no longer required special training, unqualified and cheaper labor could be employed. Machinery was also used more efficiently. The production process itself became a giant machine where the workers operated as parts of the machine. The same overall approach, in less extreme form, came to be applied to industrial activity in general, to large-scale agricultural enterprises and even in some areas of the service sector. 

“Scientific management” took hold first in the United States but gradually became the norm for all industrial nations. There was resistance in Europe -- where the tradition of the all-round craftsman had persisted in industry to some extent – but this resistance was largely overcome in the context of the postwar Marshall Plan, when “American methods” were systematically exported to Europe and Japan as part of so-called “productivity drives”.

“Scientific management” also became a central feature of the Soviet model of socialist economy. While he condemned the use of Taylorism as an instrument of exploitation under capitalism, Vladimir Ilyich Lenin declared it to be indispensable for building socialism in the Soviet Union. In his 1918 article “The Immediate Tasks of the Soviet Government”, Lenin wrote:

“The famous Taylor system, which is so widespread in America, is famous precisely because it is the last word in reckless capitalist exploitation … At the same time, we must not for a moment forget that the Taylor system represents the tremendous progress of science, which systematically analyses the process of production and points the way towards an immense increase in the efficiency of human labour. The scientific researches which the introduction of the Taylor system started in America, notably that of motion study, as the Americans call it, yielded important data allowing the working population to be trained in incomparably higher methods of labour in general and of work organisation in particular … The Socialist Soviet Republic is faced with a task which can be briefly formulated thus: we must introduce the Taylor system and scientific American efficiency of labour throughout Russia by combining this system with a reduction in working time, with the application of new methods of production and work organisation undetrimental to the labour power of the working population. On the contrary, the Taylor system, properly controlled and intelligently applied by the working people themselves, will serve as a reliable means of further greatly reducing the obligatory working day for the entire working population, will serve as an effective means of dealing, in a fairly short space of time, with a task that could roughly be expressed as follows: six hours of physical work daily for every adult citizen and four hours of work in running the state.”

Taylorism had a profound influence on the entire subsequent development of the Soviet Union, from management philosophy to its culture and social structure. This is a vast topic, which cannot be pursued further here.

10.6  The mental price of “scientific management”

In its effects on society, both in the East and West, Taylorism has been a two-edged sword.

On the one side, the application of Taylor’s approach to the mass production of manufactured goods resulted in a tremendous increase in productivity, as measured in man-hours per unit of output. It reduced the cost of manufactured goods to the point where they became available to large sections of the population, thereby creating a huge market and permitting further productivity gains via economies of scale. A famous case was Henry Ford’s policy of setting wage levels high enough that his workers could afford to purchase their own automobiles. In combination with technological innovation, mass production made possible a level of material affluence which would have been unimaginable in earlier times. In this sense “scientific management” fulfilled Taylor’s promise that his methods would provide prosperity not only to the employer, but also to the workers. In the United States, for example, thanks both to the increased productivity and the influence of organized labor, a broad section of the workforce ascended into the middle class.

On the negative side, the application of “scientific management” meant virtually eliminating what remained of the cognitive content of economic activity for a large section of the labor force. It marked the virtual end of the culture of craftsmanship which had been passed on from the Middle Ages. Before the advent of the assembly line, the production of such a complex machine as an automobile required skilled, craftsman-like labor. The automobiles were produced individually. The activity of the individual worker involved many different tasks; it demanded considerable deliberation and an overview of the whole process. Now everything changed. Thinking on the part of workers on the assembly lines threatened to interfere with the optimization of the work process, and was accordingly discouraged in various ways. The result was – and remains today -- an extreme form of alienation.

Professor Sharon Beder describes this profoundly alienating effect of “scientific management” on commonplace forms of employment up to the present day:

“The influence of Taylor and Ford is still evident in many factories today—particularly in mass production in the US and Britain. It is manifest in the job fragmentation, minimal skill and training requirements, maximum repetition, separation of thinking and doing, and the lack of variety in workers’ jobs.

“Observers at a Texas Levi Strauss jean factory noted in 1995: ‘In our tour of the plant, we were struck by the minute segmentation of operations. Six different sewing machine operators, each doing one simple task, were needed to sew a pocket onto the pants. And most operators did the same task, hour after hour, day after day, year after year.’ 

“George Ritzer in his book on The McDonaldization of Society, argued that modern fast food restaurants operate in this way with work being ‘highly rationalized, geared to discover the most efficient way to grill a hamburger, fry chicken, or serve a meal.’ Workers repeat the same simple tasks over and over like robots, have no scope to show initiative or innovation, and utilize a small portion of their capabilities … “

Beder’s account includes the important observation that Taylorism is by no means restricted to industrial employment, but extends to a large portion of the jobs created by “postindustrial” and “consumer society” policies of the post-1970 period:

“Today … Taylorist methods have been extended to the service sector in the form of standardized, routinized, regimented, repetitive tasks, often requiring of employees ... scripted value-added speech (‘Did you find everything you were looking for?’ or ‘would you like a drink with that order?’ and the like) that is typical of work routines in retail sales, telemarketing, automobile service, clerical jobs, and other low wage, nonprofessional occupations, often regimented by Pavlovian automated signaling systems.”

10.7  Automation and robotization -- the end of Taylorism?

Fortunately the era of classical Taylorism is coming to an end. The uses of human labor to which “scientific management” has classically been applied are exactly those which are most easily replaced by robots and automated production systems today. Thanks to the growing automation of industrial production, the kind of purely repetitive physical work exemplified by that of assembly-line workers in Henry Ford’s automobile plants has already been eliminated to a large extent -- at least in the advanced-sector nations. Masses of sweating workers can no longer be found on the factory floors of modern high-tech industrial plants. Instead we find relative handfuls of technicians operating and maintaining sophisticated computer-controlled equipment and machinery.

 (Unfortunately old-style production persists in many developing countries with very low wages where, in the context of globalization, a considerable amount of production has been “outsourced”. Most recently, however, a reverse trend can be observed: automated production systems have become so efficient that they can out-compete even very cheap labor in many areas of production.)

At the same time we are witnessing the early stages of a new revolution: the emergence of autonomous, self-steering robots that can orient themselves in space, recognize objects and acoustical information and carry out complicated manipulations with sensor-equipped robotic arms and hands. Robots and advanced computer systems promise to replace human labor in practically every type of work which does not require unique human capabilities, both in the industrial and service sectors. Human beings can be freed once and for all from mind-deadening, routine forms of work. 

This means a great opportunity, but obviously also the threat of mass unemployment. To eliminate that threat it will be necessary to create a large number of new jobs of a type that cannot be replaced by computers and robotic systems – jobs that involve the use of creative mental powers unique to human beings. Exactly this will be accomplished by the transition to a Knowledge Generator Economy.

Naturally, the alienating effects of Taylorism in various forms have become deeply embedded in the culture of society, and cannot be overcome in a short time. Overcoming alienation will also require much more than merely an improvement in the cognitive quality of employment. Above all the cultural life of society must recover from the extreme decadence into which it has fallen in recent decades. This means a shift away from the present-day predominance of hedonistic “junk culture” toward forms of cultural activity which promote the development of the mind. In Chapter 11 we propose the European Renaissance as a model, but updated to include the mass participation of the population in scientific activities, making scientific research an integral part of popular culture.    

10.8  Alienation in science

There is a great difficulty with the proposed focus on science-related activities, however. The deep alienation associated with Taylorism and the linear mode of optimization of economies in general, has its reflection within science itself. The problem of alienation inside science – which we shall examine in some exemplary cases below -- creates a major barrier to the realization of a Knowledge Generator Economy. It makes scientific disciplines very difficult to master, and can cripple the creative capabilities of those who succeed. Its effects radiate into society as a whole, blocking any real sense of participation in the development of knowledge and reinforcing the irrationalism and hedonism of present-day popular culture.

For these reasons it is essential to clarify the nature of the problem inside science, before going on to propose a course of remedies to alienation in society generally.  

Among the causes as well as symptoms of this alienation in science are the tendency toward extremely narrow specialization and a one-sided dependence on complex mathematical models at the expense of creative insight. Both of these greatly increase the psychological distance between the researcher and the physical reality which is the object of the research. The problem goes much deeper, however. It affects the entire conception of the Universe and of Man, including the scientist’s own self-conception.

The powerful positivistic and reductionist bias of modern science has led to a situation where the phenomenon of human creativity, as we have characterized it, is effectively excluded from the scientist’s view of the Universe! This creates a vicious circle: to the extent scientists restrict themselves to mechanistic models of natural processes, they tend to lose the capability to imagine anything that could be beyond the reach of such models. Under such conditions their own creative processes are suppressed or take on a distorted, “other-worldly” form, as reflected in the popular parody of the “mad scientist” or autistic mathematical genius. 

10.9  Alienation and the fallacies of “artificial intelligence”

A most revealing case of alienation in science can be found in the field known as “cognitive neuroscience”, where the human mind and brain constitute the subject of study. Here alienation is clearly expressed in the widespread tendency to apply concepts and models from the area of digital computing systems to the study of human mental processes. This tendency is reflected, among other things, in the frequent use of terms such as “information processing”, “coding”, “computation”, “wiring” etc. in reference to the human brain. It is true that analogies between processes of the nervous system and man-made electronic circuits can sometimes be useful when applied in a limited context. Progress in the understanding, diagnosis and therapy of a variety of diseases of the nervous system has been achieved with the help of such analogies. The problem we are addressing arises when they are extended beyond their legitimate limits, ignoring the fundamental distinction between the human mind and any type of machine.

The most extreme case is the field of “artificial intelligence”, where leading researchers such as Marvin Minsky hold the view that all the functions of the human mind, including creative thinking, can in principle be duplicated by machines. According to Minsky what we call intelligence is a product of the interconnection of a very large number of simple computing elements (such as the so-called “flip-flop” circuits which are the elements of digital computers). He proposes that the study of complex machines provides a basis for understanding how the brain works. In doing so, he and others fail to take account of two fundamental facts:

Firstly: the entities regarded as the elements of the human brain – neurons – are living cells whose behavior is totally different from that of the digital switching elements which make up a computer. Among other things, neurons display spontaneous and unpredictable behavior typical of single-celled microorganisms. Each neuron is an individual, and no two neurons are exactly alike. Since scientists do not know how to predict the spontaneous firing of a neuron, it is often characterized as random. However, the use of the term “random” expresses little more than the ignorance of the researcher and the inability to account for individual events in the life of a neuron. The electrical signals generated by a single neuron are generally not a mathematical function of the impulses received from the neurons connected with it; the relationship between “input” and “output” is constantly changing. Furthermore, as is now well known, neurons in the brain constantly add new interconnections and modify existing ones (“neural plasticity”), and do this also with a great deal of spontaneity. The attempt to apply the reductionist method to a system such as the brain is doomed to fail. As paradoxical as it might seem, there is no empirical evidence to prove that the supposed elements of the brain – the neurons – are really simpler than the brain itself! On the contrary, to the extent the cognitive functions of the brain are governed by relationships of ideas, the “holistic” behavior of the brain is very much simpler and more intelligible than the apparently chaotic behavior of its component cells.

The penchant for trying to impose simplistic, mechanistic schemata onto living processes, in the face of these and related facts, is characteristic of an alienated mind-set. Indeed, it would be nothing less than a miracle if living neurons were to follow the mechanical, “Taylorist” rules of behavior demanded from the parts of a machine!

Secondly: the most distinguishing feature of the human mind is the capability to make fundamental scientific discoveries. As we discussed in Part I, such discoveries require conceptualizing and then overcoming the relative boundedness or limits of a given mode of thinking, and discovering an experimentally demonstrable principle of Nature which lies outside the scope of that mode of thinking. Knowledge resulting from a fundamental scientific discovery cannot be deduced in a logical way from previously existing knowledge. This makes the behavior of the human mind at the point of discovery fundamentally indeterminate and unpredictable from the standpoint of pre-existing knowledge. By contrast, the operation of a computer can be analyzed into a series of individual steps, in which the state of the device at each moment is determined – and is predictable (in principle) -- from its state at the preceding step, according to a fixed set of rules. This characteristic belongs to the essential nature of a computer. Besides this, every machine is permanently bounded in its behavior by the design according to which it was constructed, and indirectly by the boundedness of the thinking of its designers. Although investigations of the possibility of “artificial intelligence” envisage computers able to change their own programs, such changes would also have to be determined by some kind of rules and would thereby have to proceed from one step to the next in an intrinsically predictable manner. Otherwise it would be meaningless to speak of a “machine”.
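
The step-by-step determinism just described can be made concrete in a few lines of code. The following sketch (the states, input symbols and transition table are all invented for the illustration) shows a machine whose next state is a fixed function of its current state and input; replaying the same inputs necessarily yields the same trajectory, and nothing in its behavior can fall outside the rule table its designer wrote down.

```python
# Illustrative sketch: a machine's operation as a fixed transition rule.
# Given the current state and input symbol, the next state is completely
# determined; identical inputs always produce identical behavior.

def run_machine(transitions, start, inputs):
    """Step a deterministic machine through a sequence of input symbols."""
    state = start
    trajectory = [state]
    for symbol in inputs:
        state = transitions[(state, symbol)]  # fixed rule: no spontaneity
        trajectory.append(state)
    return trajectory

# A toy two-state machine: 'a' toggles the state, 'b' leaves it unchanged.
rules = {
    ("S0", "a"): "S1", ("S0", "b"): "S0",
    ("S1", "a"): "S0", ("S1", "b"): "S1",
}

run1 = run_machine(rules, "S0", "abba")
run2 = run_machine(rules, "S0", "abba")
assert run1 == run2  # identical inputs, identical trajectory, every time
print(run1)  # -> ['S0', 'S1', 'S1', 'S1', 'S0']
```

A living neuron, by contrast, provides no such fixed table relating "input" to "output", which is exactly the distinction drawn above.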

In view of these considerations, researchers in cognitive neuroscience and “artificial intelligence” face the following alternative: Either they must accept that the functions of the human mind are intrinsically incapable of being replicated by a computer (or machine), or they must deny the existence of human creativity in the sense we have defined it. Minsky and many others in the field have effectively chosen the second alternative, thereby falling into the syndrome of alienation that pervades present-day science.

Two quotes from a 1982 article by Marvin Minsky, “Why people think computers can’t think” (The AI Magazine, Fall 1982) serve to illustrate these points:

“I don't believe that there's much difference between ordinary thought and highly creative thought … Do outstanding minds differ from ordinary minds in any special way? I don't believe that there is anything basically different in a genius, except for having an unusual combination of abilities, none very special by itself.”

Minsky evidently has no understanding of fundamental scientific discovery. “Ordinary thought” remains within the bounds of an existing set of conceptions and modes of thinking, whereas fundamental scientific creativity breaks through those bounds to discover a valid principle of the Universe which lies outside the domain of existing ways of thinking. This distinction is fundamental; without recognizing it there is no cognitive science worth the name.

“To me there is a special irony when people say machines cannot have minds, because I feel we're only now beginning to see how minds possibly could work -- using insights that came directly from attempts to see what complicated machines can do.”

Ironically, the most fruitful developments have come from the opposite direction, namely by applying concepts drawn from the study of the human (and animal) brain to the design of machines. An example is the creation of “artificial neural networks”, in which Minsky himself has played a leading role. These are computing devices organized on the model of networks of neurons in the human brain. Such systems “learn” by changing or modifying the interconnections between subsystems in response to input signals, in analogy to the behavior of living neurons in the brain. Artificial neural networks are being used, among other things, in robotic systems whose applications are growing in scope and economic importance every day. Indeed, nearly the whole ongoing development of robotics has occurred through trying to imitate the behavior and biological organization of humans and animals.
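
The learning principle just described -- modifying interconnections in response to input signals -- can be illustrated with a minimal sketch. A single artificial “neuron” adjusts its connection weights whenever its output disagrees with a desired response; here it learns the logical AND function by the classic perceptron rule. The task, the number of training passes and the use of integer weights are all choices made for the illustration, not features of any particular historical system.

```python
# Minimal sketch of "learning by modifying interconnections": a single
# artificial neuron with three weighted connections (one for a constant
# bias input, two for the data inputs) is trained on the AND function.

def step(x):
    # Threshold "firing" rule of the artificial neuron.
    return 1 if x > 0 else 0

# Training signals: (bias, input_a, input_b) -> desired output for AND.
samples = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]

weights = [0, 0, 0]  # the modifiable "interconnections"

for _ in range(20):  # repeated exposure to the input signals
    for inputs, target in samples:
        output = step(sum(w * x for w, x in zip(weights, inputs)))
        error = target - output
        # Perceptron rule: strengthen or weaken each connection in
        # proportion to its input and to the disagreement with the target.
        weights = [w + error * x for w, x in zip(weights, inputs)]

def predict(a, b):
    return step(sum(w * x for w, x in zip(weights, (1, a, b))))

print([predict(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 0, 0, 1]: the network has "learned" AND purely by weight changes
```

Note that even here the "learning" proceeds by a fixed rule, so the sketch also illustrates the earlier point: the machine's adaptability never exceeds the boundedness built in by its designer.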

10.10  Creativity as randomness?

It is hardly surprising that the predominance of extreme mechanistic and reductionist conceptions of the human mind and brain would provoke a counter-reaction. The same is true in a much broader context for the cultural impact of Taylorism. Parallels can be seen in the 19th century romantic counter-reaction to so-called Enlightenment rationalism and to the rise of industry with its growing mechanization. But although there have been echoes of the classical romanticist movement in the modern, post-WWII period -- expressed for example in the “Back to Nature” movement of the 1960s and 1970s -- the predominant form of reaction has been different and in some respects more radical. Creativity becomes more and more associated with the notion of violating any kind of coherence. Certain trends in modern art and modern music express this very clearly, as do strong tendencies in popular mass culture in the direction of what might be characterized as a cult of the arbitrary and meaningless. Rather than overcome alienation, this cultural reaction merely reproduces it in a different, aggravated form.

The appeal of such reactions is not hard to understand, in the context of a cultural environment where people are largely deprived of the experience of scientific discovery and the rigorous kind of artistic creativity exemplified by classical musical composition. If my mind is nothing but a computer, if my thoughts and emotions are equivalent to strings of 0’s and 1’s, then who am I? How can I be free? The only way to be creative in a world of mechanical determinism is to assert randomness – mental behavior which is completely incoherent and disconnected from reality.

The flight into irrationality or even psychosis in modern popular culture finds its expression also in science itself. The most explicit case is the attempt to create “artificial creativity” by integrating random number generators into computers designed to simulate intelligence. This idea was discussed very early by the pioneers of what is today called artificial intelligence and remains a major focus of research. Exemplary is the 1955 “Proposal for the Dartmouth Summer Research Project on Artificial Intelligence” authored by Marvin Minsky and three other famous scientists, including Claude Shannon, father of the statistical theory of information. In their proposal we find a section on “Randomness and Creativity”:

“A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. … The educated guess or the hunch include controlled randomness in otherwise orderly thinking.”

In fact, the notion of randomness or random variation as the key to creativity goes far back in the history of science and in a certain sense lies at the heart of the Darwinian theory of evolution. Of course the subject of that theory is not creativity in human beings, nor does Darwin apply the term “creative” to the process of evolution. But it is practically impossible to contemplate the emergence of new species in the course of biological evolution, without feeling an analogy with the emergence of new ideas and inventions in human history. It is thus a small step to think of applying Darwin’s model of evolution via selection among randomly generated variants, in order to try to explain human creativity and eventually to replicate it by artificial means.
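The Darwinian scheme referred to above – selection among randomly generated variants – can be stated concretely in a few lines of code. The following sketch is our adaptation of a well-known textbook illustration (Dawkins’ “weasel” program; the target string and mutation rate are arbitrary choices, not taken from the text). It evolves a random string toward a fixed target; note that the selection step creates nothing, but merely chooses between variants that random mutation has already produced.

```python
import random

# Illustrative "random variation + selection" loop (a Dawkins-style
# "weasel" program). The fixed target string plays the role of the
# environment; mutation proposes variants, selection only filters them.

def evolve(target="METHINKS IT IS LIKE A WEASEL", mut_rate=0.05, seed=1):
    rng = random.Random(seed)
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
    parent = [rng.choice(alphabet) for _ in target]
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    generations = 0
    while fitness(parent) < len(target):
        # random variation: each character may mutate with small probability
        child = [rng.choice(alphabet) if rng.random() < mut_rate else c
                 for c in parent]
        # selection: keep the child only if it is at least as fit --
        # it generates nothing, it only chooses among existing variants
        if fitness(child) >= fitness(parent):
            parent = child
        generations += 1
    return "".join(parent), generations

result, gens = evolve()
print(result, gens)
```

Whether such a scheme can account for genuine creativity is precisely what is at issue in this chapter: the loop can only rearrange what the given alphabet of possibilities already contains.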

10.11  Alienation in modern biology

The case of biology brings up some subtle points, which are worth discussing at more length.

The advent of molecular biology, including the unraveling of the so-called genetic code and the mechanisms of protein synthesis in cells, has brought an explosive growth of detailed knowledge concerning the organization of living processes on the microscopic level. A technological revolution has been unleashed with far-reaching consequences for medicine, agriculture and other areas. Unfortunately these developments have occurred against the background of a singular lack of progress – or even a regress! – in understanding the fundamental nature of life itself. The result is a state of alienation in biology, analogous in many ways to that prevailing in so-called cognitive science.

Symptomatic is the tendency to see the living cell as a kind of “molecular machine”, similarly to the way the digital computer is used as a model for the functioning of the brain. More generally, present-day molecular biology has a strong tendency to try to reduce the entire phenomenon of life to an array of mechanisms. Since molecular biology is now generally regarded as the scientific foundation for the understanding of living processes, this tendency has come to affect biology as a whole.

A most extreme expression of alienation, in this context, is the notion that higher capabilities of organisms, such as learning in animals and creativity in Man, can be explained as products of natural selection in the struggle for survival. The elementary absurdity of this notion lies in the fact that natural selection, in and of itself, creates nothing and cannot possibly be the origin of any abilities of living organisms. As the name implies, natural selection can only select among possibilities which must already have been generated by the living process. Trying to attribute various capabilities of living organisms to the struggle for survival is like claiming that a problem automatically generates its solution. It is clear that intelligence can contribute to the fitness to survive and to the proliferation of certain species, but this does not explain where the intelligence ultimately comes from. The same applies to any other capability of living organisms, such as the singing of birds or the ability of an amoeba to capture bacteria.

In classical Darwinism the “productive” role in evolution is attributed to random mutations in the genetic material. In recent times increasing attention is being given to other mechanisms of genetic change, such as gene transfer, as well as the role of epigenetic factors. But attempting to account for the origin of the various capabilities of living organisms in this way involves a further absurdity:  genetic material, in and of itself, produces nothing. DNA cannot generate a living organism and cannot produce any of the powers and abilities of living organisms. The genetic material fulfills its functions only in the context of a living process, i.e. when it is embedded in a living organism. The genes cannot cause an organism to do anything or to have any capability which is not already within the scope of potentialities of living processes. Genes only affect the manner in which those potentialities are expressed in a particular organism.

By analogy, the architect’s drawing does not produce the building, but only selects between the alternative courses of action of the construction workers, determining the ultimate shape of the building. The architect’s design can only be realized if the construction workers have the mental skills and capabilities, the materials and machines to build it. Whether a fertilized egg cell develops into a dog, monkey or human being depends on its genetic material; but none of these possibilities could be realized unless the living process, embodied in that cell, already had the potential to produce each of those outcomes.

These points are elementary, or even trivial, but the alienation embedded in modern-day scientific education often leads to them being completely overlooked, with devastating consequences for the progress of science. Attention is diverted away from the real subject of biology -- living processes themselves – and focused instead nearly exclusively upon instruments employed by living processes, whose significance lies only in that context. Indeed, there is a strong tendency in molecular biology to treat the biochemical reactions in living organisms as if they took place in isolation, in the chemist’s test tube, rather than in the very special context, environment or “geometry” defined by the living process. This is in a sense natural, because molecular biology has so far failed to develop the conceptual tools needed to characterize that special environment. The situation is closely analogous to that of physics prior to Einstein’s demonstration – following Riemann -- that real physical space (as opposed to the abstract space of Euclidean geometry) has a distinct curvature which shapes the course of all processes taking place in it.

The need to situate everything within its context is perhaps more essential in biology than in any other branch of natural science. In particular we must not forget that the struggle for existence and the process of natural selection – which are supposed to be the motor of evolution -- do not occur in the context of a fixed environment, but rather in one which is being changed and transformed, in large part by the activity of the living organisms themselves. The biogenic changes in the environment occur on all scales, from the microscopic to the global scale, including the well-known transformation of the chemical composition of the Earth’s atmosphere by photosynthetic organisms. As the great biogeochemist Vladimir Vernadsky emphasized, the evolution of the species can only be properly understood as an included feature of the development of the biosphere as a whole, in which the physical-transformational power of living matter (the totality of living organisms) has constantly increased over geological time. This open-ended growth and development of the biosphere is in many ways analogous to, and coherent with, that of a human physical economy as we have described it in Part I. Here the analog to Man’s technological progress lies not only in the evolutionary emergence of new species, but also in “innovative” changes in behavior and in the interactions of populations of living organisms among each other and with the environment. From this standpoint Vernadsky considered the appearance of Man and his cognitive abilities as neither accidental nor miraculous, but rather as a coherent expression of a tendency inherent in the living process itself.

The existence of certain universal, intrinsic characteristics and properties of living processes – such as the tendency to transform their environment -- is not a metaphysical postulate, but rather what Vernadsky called an empirical generalization:  a conclusion drawn from a vast accumulation of evidence. It is a “given” of experience; it has nothing to do with trying to find explanations or theories, and thus nothing to do with the endless debates between Darwinists and Creationists, for example. The reluctance or inability to acknowledge something which surrounds us everywhere in Nature typifies the alienation of modern biology.

A tragicomic example can be found in attempts to explain the origin of playfulness and fun in terms of natural selection. The January 2015 special feature of the journal “Current Biology” contains examples, as well as indications that some biologists are struggling to overcome this way of thinking. Here are two exemplary quotes:

“So, concerning play and fun, we can ask: Why did they evolve? How do they promote survival value and reproductive fitness and allow individuals to come to terms with the social situation in which they find themselves? What causes play and fun? ...” (Marc Bekoff, Professor of Ecology and Evolutionary Biology at the University of Colorado, Boulder)

“Play may have arisen in vertebrate lineages as a by-product of traits associated with the complex behaviors and cognitive abilities, in turn associated with increased brain size. Although we know that invertebrates are far from the mindless machines they were once considered to be, it might be that the neural architecture available to add new levels of control required for play is lacking, or the local solutions employed by invertebrates don’t benefit from the adaptive advantages conveyed by play. Or perhaps it is simply that we are overlooking countless examples of play in invertebrates.” (Sarah Zylinski, Lecturer in Animal Biology, University of Leeds; my italics).

Whether intended or not, Zylinski’s remarks point to a radically different way to approach the problem. Rather than seeking an origin in evolution let us assume, instead, that something akin to playfulness and the joy of living belongs to the innermost nature of the living process itself, and can be found in some form in every living organism. What changes in evolution, and from one species and individual to another, is only the specific way in which such universal attributes or potentialities are expressed.

It is relevant to note, in this context, that single-celled organisms are now known to display many behavioral characteristics – including a certain sort of learning -- which were once believed to exist only in higher, multi-cellular organisms.

The same would hold for other attributes which we have reason to believe are inherent in the living process and are present in one form or another in every living organism, including for example:

(1) directedness

(2) plasticity

(3) the presence of “free”, spontaneous behavior

(4) a quality expressed as willfulness or intentionality in Man and animals

(5) the tendency to explore, react and adapt to, as well as act upon its environment

(6) individuation and holism: living processes form individual wholes (from organisms and superorganisms to the biosphere as a unified system), no two of which are exactly alike.

A chief starting point for any scientific approach to living processes must be the evident adequacy of living processes to serve as the substrate for human cognition. In other words we must look at evolution in the reverse, as well as the forward direction. We must seek, in every living organism, expressions of the potentialities which are first fully realized in the human brain with its creative cognitive functions. We have no choice but to admit the existence of such potentialities, for otherwise we would be stuck in a similar dilemma to that of cognitive neuroscience: we would either have to deny the existence of human cognition, or postulate a miraculous emergence of cognition “from out of nowhere.”

10.12  “Emergence” – a miracle?

More generally it is necessary to correct a now-popular use (or misuse) of the concept of “emergence”, which assumes that qualitatively new, higher-order properties can somehow arise (emerge) by the mere increase in complexity of systems. Ironically, although the concept of emergence was intended by some scientists to escape from the straitjacket of reductionism, it nowadays often serves to justify an extreme reductionist/mechanistic approach to living processes and to cognition. Apart from a hidden tendency toward mysticism, the now-popular notion of emergence overlooks the fact that there are no simple entities in Nature, and that in the real world – as paradoxical as it may seem – the part is not necessarily simpler than the whole. Even the so-called “elementary particles” are extraordinarily complex entities, as reflected by their wave functions and their spontaneous, apparently statistically unpredictable behavior. The same is true of molecules, whose actual physical character is totally different from the components of a machine. For the same reason it is nonsensical to speak of living cells as “molecular machines”.

Every time something apparently new emerges in Nature, the potential or possibility must necessarily have existed already beforehand; and we have good reason to expect to find precursors in previous states of the system, as well as suitable predispositions in the interacting entities. As in human society, interactions among any sort of physical objects provide the context for evoking and bringing to fruition potentialities which lie “sleeping” in the individual objects.

Doubtless the most powerful antidote to alienation in modern biology would be to revive the spirit of the great 19th century naturalist Alexander von Humboldt, embodied in his masterwork “Cosmos”. We shall discuss Humboldt’s conception of the Science of Nature in relationship to human development in the next chapter, quoting some exemplary passages from “Cosmos”. Rethinking and integrating the accomplishments of modern biology, including molecular biology and genetics, into the framework of what is sometimes referred to as “Humboldtian science” would have enormous benefits for a Knowledge Generator Economy, and help open the way to mass participation of the population in biological research.

10.13  Alienation at the foundation of physics

Physics is understood to be the ultimate foundation for the natural sciences today. Modern physics requires a complicated apparatus of abstract formal mathematics which is extremely difficult for normal people to master. But the role of mathematics has also changed over the last 100 years. In previous times formal mathematics served merely as a tool for formulating exact physical laws, applying those laws to concrete problems and for engineering calculations. The basic elements of physical reality, as far as they were known, were conceived in a way that could be grasped intuitively without the mediation of formal mathematics.  But with the advent of quantum theory in the early decades of the 20th century this intuitive intelligibility was lost. In this sense alienation is built into the very foundations of natural science.

When asked “what is an electron?” most physicists today will admit (if they are honest) that they don’t really know; that there are two mutually incompatible mental pictures – that of a particle and that of a wave – each of which applies under certain circumstances, but not in others. Evidently neither of them corresponds fully to physical reality, and there is at present no generally accepted alternative. Some philosophically radical physicists may even assert that the question, “what is an electron?” is meaningless; that there is no “true” reality, but only mathematical calculations which give the right answers when compared with the results of experiments. “Electron” would thus be nothing more than a label for certain mathematical formalisms and experimental procedures.

In most cases, in fact, quantum theory does not actually give “the right answers” in terms of predicting individual events, but only probability distributions that agree statistically with the results of repeated experiments. The obvious conclusion would be that quantum theory is incomplete and will be superseded. Orthodox quantum physicists claim, to the contrary, that it is in principle impossible to obtain more than a probability; that it is in principle impossible to answer the question, why certain specific events (such as the ticking of an instrument called a particle detector) occur at a specific place and time. They claim that the postulated impossibility of a causal understanding of such individual micro-events is a fundamental property of physical reality, rather than a limitation of quantum theory.  

Students who wish to become professional physicists are commonly trained to accept such a bizarre state of affairs as final and to submit to it, sacrificing the mind’s natural impulse to make the world intelligible. The result is very much analogous to what Frederick Taylor recommended for managing factory workers: to separate doing from thinking. The analogy holds at least to the extent that thinking presupposes a reality which the scientist is struggling to understand. In this case the thinking was supposedly finished with the creation of modern quantum theory. Now aspiring scientists are supposed simply to learn to calculate with the mathematical formalisms and not to inquire more into the physical reality behind them.

The resulting alienation is not limited to those engaged in scientific work, but radiates into society as a whole. It is compounded by the misleading way in which fundamental science is presented to the public in popularized form. Often, popularized accounts have the quality of science fiction or commercial entertainment, evoking no real thinking from their audience and failing to communicate the rigorous truth-seeking attitude of real scientists. For example, entities such as “quarks”, “gluons”, “strings”, “black holes” and the “Big Bang” are typically presented in a manner similar to the fairy tales children used to be taught about where babies come from. Such accounts give little or no sense of the hypothetical character of these entities and the fact that the evidence for their existence in reality is indirect and subject to different interpretations. The result is a wildly exaggerated impression of how much scientists really know, with any degree of certainty, about the Universe at extreme scales of space and time.

Today most people, including scientists, do not grasp the fact that we are still only at the beginning of the development of science; and that our present knowledge will appear primitive and simplistic to people in future centuries, just as the science of earlier ages appears to us today. It is inevitable, for example, that present-day quantum physics will someday become obsolete as the result of new fundamental discoveries. New conceptions and theories will emerge which will overcome the paradoxes and unintelligibility of today’s basic physics and unleash a new scientific and technological revolution. There are reasons to expect that this moment is not far away.

In human knowledge we are always at the beginning, because the Universe is inexhaustible and the process of discovery never ends.


Chapter 11   Renaissance humanism and the pathway to a Knowledge Generator Economy

11.1  The spirit of the Renaissance

The best reference-point for overcoming the pervasive alienation of present-day society is doubtless the European Renaissance from the late 14th century into the 16th century. We have reason to believe that launching real physical-economic development under present circumstances would create a favorable climate for a certain kind of spiritual-intellectual renaissance to occur. The abandonment of destructive neoliberal policies and the creation of large numbers of secure, high-quality jobs and new opportunities for state-supported education, together with visionary projects for space exploration and other areas of science and technology, will already be experienced by the population as a certain kind of “rebirth”. But it is only with the transition to a Knowledge Generator Economy that the classical-humanistic spirit of the European Renaissance might – and must! -- find a new home.

What is it about the spirit of the Renaissance that makes that spirit indispensable to overcoming the alienation of modern society?

Put as briefly as we can:

(1) the self-conception of Man as a creative being, of each human individual as an embodiment or microcosm of the creative principle governing the Universe as a whole;

(2) the joyous realization and celebration of that self-conception (among other things) in great works of poetry, music, drama, theater and architecture;

(3) an insatiable thirst for knowledge, a reawakening of curiosity about everything;

(4) the understanding that human knowledge can never be perfect, but can constantly be perfected through an unending process of creative discovery;

(5) the confidence that Man, through this process, can achieve a growing mastery of the physical Universe;

(6) a notion of beauty and harmonic proportion which embraces Man, Nature and the progression of human knowledge -- a notion that applies not only to poetry, music, painting and architecture, but also to natural science and to physical-economic development itself.  

Most importantly, the spirit of the Renaissance was not merely a philosophical idea, but was lived and affirmed by concrete creative activity; in great works of art, in a new era of exploration of the world, a rapid expansion of knowledge in every field, and a flood of new inventions and technological improvements. The exercise of creative reason embraced not only artists, inventors and discoverers, but also all those whose creative mental processes were evoked and developed by studying the writings of great thinkers, viewing masterpieces of painting or architecture, reading beautiful poems or watching a great drama, assimilating a new discovery in practice etc. That meant a significant part of the population in the flourishing urban centers of the Renaissance, such as Florence.

Not accidentally, the spirit of the Renaissance is commonly identified with a host of extraordinary individuals such as Raphael, Michelangelo, Brunelleschi, Dürer, Leonardo (art and architecture); Columbus, Vasco da Gama, Toscanelli (explorers); Pacioli, Stevin, Copernicus, Kepler, Galileo, Gilbert, Paracelsus, Vesalius, Leonardo (mathematics, astronomy, physics, medicine, natural science generally); Ficino, Nicolas of Cusa, Erasmus (philosophy). Below we shall discuss some of the reasons for the indispensable role of individual personalities in the progress of knowledge.

Among the extraordinary personages of the Renaissance, Leonardo da Vinci and Nicolas of Cusa have a very special relevance for the transition to a Knowledge Generator Economy.

The special importance of Leonardo da Vinci for today is not only his insatiable thirst for knowledge and the extraordinary many-sidedness of his interests and accomplishments – for which he is known as the archetype of the “Renaissance Man” -- but lies above all in the scientific method utilized by Leonardo, and which is inseparable from his artistic work. Leonardo’s scientific method provides a model for what we shall call in the following chapter “the phenomenological study of Nature”.  It is key to our strategy for engaging the masses of the population directly in scientific research.

Speaking to us from over 500 years ago, Nicolas of Cusa lights our way to a future in which human activity will revolve around the process of advancement of knowledge. While Cusa can rightly be regarded as one of the fathers of modern science, the development of science up to today is very far from realizing the profound conceptions laid out in Cusa’s De Docta Ignorantia (Of Learned Ignorance), De coniecturis (“On Hypothesis”) and related writings.

In Cusa’s work we find, clearly articulated, the principle of infinite perfectibility of human knowledge of the Universe. This principle finds its expression in an unending process of discovery, based on a creative principle embodied in the human mind. Human knowledge is always imperfect, but in fact it is exactly by conceptualizing the nature of this intrinsic limitation or bounding -- as a “negative” form of knowledge -- that we find a way to constantly progress further in our positive knowledge of the Universe. This gives us a first glimpse at Cusa’s notion of Learned Ignorance. In his De coniecturis, Cusa elaborates this notion as a further development of Plato’s method of hypotheses. Here are some brief quotes:

 “Since perfect knowledge of the truth is unattainable, every positive assertion concerning truth is a hypothesis. For, our ongoing progress in grasping truth never exhausts it. Since our present knowledge stands in no specifiable relationship to the maximum knowledge attainable to Man, therefore our imperfect, uncertain efforts to grasp after the truth, far removed from its purity, transform all our positive propositions into mere hypotheses …”

Cusa proposes to show his students “the pathway to the Art of Hypothesis, and then to pick some flowers of its application, to refresh those who thirst and hunger for the truth.” He continues by discussing “The Origins of Hypotheses”:

“Hypotheses must flow from our mind, as the real world flows from God’s infinite Reason. For, the human mind, the sublime image of God, participates to the greatest possible extent in the fecundity of creative Nature…”

Cusa then speaks of the development of hypotheses as a self-realization:

“(The human mind) creates… in likeness of real things, conceptions… In this way the development of the conceptual world, which flows from the human mind, is there for the benefit of creativity itself. The deeper it looks into the world it has created, the more the mind inspires itself; for, its aim is infinite Reason, which alone is the measure of everything and the living center of our mind. Hence the natural striving for knowledge, through which we perfect ourselves.”

We cannot go deeper into the philosophy of Nicolas of Cusa here, nor discuss his contributions to mathematics, astronomy, medicine and other areas of knowledge. Suffice it to say that his work anticipates everything we presented in Part I of this book on hypothesis, higher hypotheses and the hypothesis of the higher hypothesis, plus much more.

11.2  The individual nature of scientific discovery

It is essential to stress that Renaissance humanism’s conception of the creative human individual has nothing to do with the irrational, anarchistic form of “individualism” that pervades modern liberal society. The latter expresses, in its own way, exactly the alienation we seek to overcome.

Studying Renaissance humanism’s conception of the individual takes us into one of the deepest issues of physical economy. The issue is the role of the individual human personality in the progress of knowledge.

Throughout this book and in our discussion of Renaissance humanism we have spoken of the creative nature of human beings. The question arises: Are human beings interchangeable with respect to their creative function? A routine task such as opening a door does not require any specifically individual quality of the person who carries out the act. With respect to the result of the action, it is irrelevant which particular person carried it out -- it could have been any other person.

In the case of a fundamental scientific discovery the situation is different. The act of discovery and its meaning are inseparable from the personality of the discoverer and the mental process by which the discovery was made. The act of discovery is not a purely collective process, but has an intrinsic individual element. Although scientific discoveries invariably occur in the context of intense social interactions, and although the roots of a discovery can be traced back for generations, the crucial event occurs in an individual human mind. For this reason practically all scientific thinkers who have made significant original contributions in the past, learned science by studying the works of their predecessors, and most often also through direct personal contact with outstanding scientists of their day. They understood science not as a mere collection of objective facts and theories, but as a dialog stretching across centuries, whose participants are concrete historical personalities. They strove always to “get inside the minds” of original thinkers.

It is true that in the process of elaborating scientific discoveries into formal theories of the sort typified by conventional textbooks of physics, discoveries become depersonalized, objectified. All traces of the actual thinking process behind each discovery, and of the historical dialog which constituted the context for the individual act of creativity, are “cleaned away”. The use of mathematical formulas and formalized knowledge is certainly legitimate and often even necessary in scientific and engineering work. But when the teaching of science is restricted to merely imparting “textbook knowledge”, the result is an extreme form of alienation. In many respects such practices are analogous to those of medieval scholasticism which Renaissance thinkers sought to overcome.

11.3  Overcoming alienation – lessons for today

Important lessons for today can be found by studying how the Renaissance overcame the deep alienation embodied in many aspects of life in medieval Europe.

Here we touch upon a highly controversial topic. Especially in recent times, scholars have argued strongly against the traditional characterization of the medieval period as a “Dark Age”, pointing out numerous sources of “light” in this long and exceedingly complex period of European history -- developments which laid the basis for the 15th century Renaissance and anticipated it in many ways. Among other things the creative activity of craftsmen and artisans played a key role in the developmental axis passing from the Middle Ages through the Renaissance and beyond. We must also bear in mind the stratification of medieval society and the great differences in cultural environment between the feudal agrarian sector, the monasteries and towns.

Granting all these points, we nevertheless insist that profound alienation, in the sense we have defined it, was a dominant characteristic of medieval society. The strongest and most unambiguous evidence is the way human beings were portrayed in medieval art -- in paintings, sculpture, frescoes, decoration and manuscript illuminations. The alienation becomes immediately apparent when we place medieval artworks side-by-side with those of Raphael or the sculptures of Michelangelo.

Taken in isolation from their religious connotations, the medieval human figures and their settings tend to appear “flat”, static, relatively expressionless, symbolic or iconic in character. The ability of the human figures in medieval art to evoke a powerful response from the minds of the viewers derived from their symbolic and liturgical function, from their association with biblical events and in some cases the belief in a supernatural connection of the figure itself with God. The medieval art work does not have the meaning or quality of an act of creativity on the part of an individual artistic personality, and the artists remained mostly anonymous.

Renaissance portraits, by contrast, convey to us the sense of living human individuals in their full depth of character, as “seen in the mind’s eye” of an individual creative artist. In religious terms we could say that the medieval figures have the character of images or symbols of the divine, while the figures in the Renaissance portraits have the character of embodiments of the divine. The relationship of the figure to the mental processes of the viewer is quite different. The great Renaissance paintings and sculptures function as instruments for establishing a direct communication (or “resonance”) between creative mental processes of the artist and those of the viewer. They not only evoke the conception of each individual human being as a microcosm of the creative principle governing the Universe as a whole, but provide a kind of living proof of that conception, by calling forth creative processes in the mind of the viewer. This, rather than the mere use of the technique of perspective – which tends to be overemphasized, in our opinion – is the essence of the advance made by Renaissance artists relative to the preceding period. 

Certainly, creativity is not lacking in medieval art and society! The Cathedral of Chartres, for example, belongs among the most awesome products of human creativity. Great medieval artworks celebrate the divine spirit manifested by the Universe as a whole, but they do not focus attention on human creativity per se. The dearth of artworks focusing on the creative nature of the human individual as their immediate subject is both a cause and a symptom of a pervasive alienation in medieval society – an alienation of the human individual from his or her creative nature.

From this standpoint we can understand the powerful effect of Renaissance art on the entire cultural environment of that time, and the strong sense of a spiritual awakening.

Just as the Renaissance was unleashed in large part by rediscovering the great art of classical antiquity, so we may hope that reviving the spirit of Renaissance humanism will help overcome the alienation, the pervasive “flatness” of present-day society.

For this purpose it is essential to have historical models which are closer in time and circumstances to today’s world than the European Renaissance itself. The period of relatively successful physical-economic development in the U.S. and other nations in the decades following WWII can serve as a useful model for some aspects of economic and financial policy, but it can hardly be regarded as a high-point of culture. This period saw an accelerating erosion of the influence of classical culture, the rise of increasingly stupefying forms of commercial mass entertainment (television and spectator sports) and a growing alienation resulting from the spread of Taylorism. The spirit of the Renaissance found its expression not in culture per se, but in the rapid progress of science and technology in that period, including the beginning of the era of space exploration. But the accomplishments of post-WWII science and technology grew out of a scientific culture whose high-point had been reached much earlier, centered in Germany in the middle of the 19th century. At that time Germany was the focus of a kind of “mini-Renaissance” -- a period of rapid physical-economic development combined with a flowering of classical humanism expressed especially in literature, music and philosophy. That period, associated with names such as Humboldt, Beethoven, Goethe and Schiller and with an unprecedented flowering of scientific genius, provides the most recent reference-point for overcoming the alienation and general cultural impoverishment of present-day society.

11.4  The educational model of Wilhelm von Humboldt

A crucial role in the German “mini-Renaissance” was played by the educational system introduced by Wilhelm von Humboldt in Prussia starting in 1808. Humboldt’s educational reform was explicitly directed against the previous “practical” orientation in education, which concentrated only on imparting factual knowledge and discipline to pupils. Humboldt’s educational program aimed instead at educating the whole person, at attaining what he called “the highest, most well-proportioned development of the personality”. Not science per se, but rather the study of classical languages and classical culture – especially that of ancient Greece – formed the core of Humboldt’s educational system, in close relationship with Friedrich Schiller’s concept of the “Aesthetic Education of Man”.

Paradoxically, it was exactly Humboldt’s “impractical” focus on classical culture and on educating the whole personality, which played a crucial role in making Germany the world center of scientific and technological development from the middle of the 19th century up to World War II. At its highpoint, the influence of the Humboldt educational system extended throughout Europe all the way to the United States and Japan.

(It should be noted that although Humboldt’s system is often referred to as “humanistic”, it is very different from the liberal model of humanistic education put forward by the psychologist Carl Rogers and others in the 1960s and 1970s. The difference lies among other things in the Humboldt system’s strong emphasis on classical culture and languages, on intellectual rigor and rigorous mastery of the relevant subject-matter.) 

In the meantime the “practical” orientation in education, against which Humboldt polemicized, has once again become hegemonic nearly everywhere. In Germany, the birthplace of Humboldt’s system, the teaching of ancient Greek language, history and culture is widely regarded as outmoded and useless, and has largely disappeared from the school curricula. But a high price is being paid in terms of the creative potential of German society. 

Among other things Humboldt’s educational system contributed greatly to the development, during the 19th century, of a powerful culture of scientific creativity in Germany. Leading German scientists of the period tended to be philosophically-minded and to love classical music and poetry. They took a broad historical view of science and saw it as an inseparable part of human culture. The ability of these scientists to communicate their ideas in lucid and powerful literary language contributed greatly to raising interest in and understanding of science among the general population.

11.5  Alexander von Humboldt’s “Cosmos”

An example of decisive importance for today is the book “Cosmos” by Wilhelm von Humboldt’s brother Alexander, who was famous as a great naturalist, explorer and researcher on terrestrial magnetism. “Cosmos” was enormously popular in its time and one of the most influential books ever written on science. It was directed both to scientists and to a broad educated audience. A literary masterpiece embodying much of the spirit of Renaissance humanism, Alexander von Humboldt’s “Cosmos” conveys a conception of science, Nature and Man diametrically opposed to the alienated thinking prevailing in popular culture and among most scientists today. His conception provides powerful medicine for overcoming the problems we analyzed in Chapter 10.

We are convinced, in fact, that reviving Humboldt’s conception in the context of a campaign for mass participation in scientific research (see the next chapter) provides an eminently feasible way to overcome the main cultural barriers standing in the way of a Knowledge Generator Economy. In the following we examine the features of Humboldt’s “Cosmos” which are immediately relevant to the subject and purpose of this book, mostly letting Humboldt speak for himself.

Alexander von Humboldt’s “Cosmos – Sketch of a Physical Description of the Universe” is a very extensive work, appearing in five volumes published between the years 1845 and 1862. The book is unique in that it combines a great part of the detailed scientific knowledge of his time in the fields of astronomy, geography, geology, zoology and botany, together with a historical account of the development of human conceptions of Nature. This development is portrayed both in terms of the striving for objective knowledge and in terms of the subjective/aesthetic dimension expressed in poetry, literature and the relationship of culture and history to the natural environment. “Cosmos” is written with the boundless enthusiasm of a great scientific thinker and world explorer striving to awaken the thirst for knowledge and the enjoyment of Nature in the minds of the readers.

Rather than attempt to describe this work further, we quote from a letter Alexander von Humboldt wrote in 1834, in which he describes in rough terms his original concept for what was to become a gigantic project:

 “I have the wild idea to present everything we know today about the entire material world -- from the phenomena of the heavens and the Earth, from stellar nebulae to the mosses on the granite rocks -- in a single work, written in a lively language, which should both stimulate and delight the reader. Every great and important idea which has arisen anywhere should be presented alongside the facts. It should present an epoch in the spiritual development of Mankind, in its knowledge of Nature. The Prolegomena are practically finished ... (and include) an introductory discourse on the Portrait of Nature, the inspiration for the study of Nature, in the spirit of our time, in three respects: 1) descriptive poetry and lively description of scenes of nature in modern travelers’ reports, 2) landscape painting, sensuous presentation, an exotic Nature, at what time it emerged, when it became a necessity and joy, why the passionate ancient peoples could not have had that experience; 3) … the history of the physical descriptions of Nature, as the Idea of our world, the mutual relationship of all phenomena, which has become evident to the various peoples in the course of centuries. These preliminaries are the most important thing; they contain the general considerations and are followed by specialized ones – the details (I am sending you a list): outer space – all of physical astronomy – our Earth, its inner and outer regions, the electromagnetism of the Earth’s interior, volcanism … – the oceans – atmosphere – climate – life -- geography of plants -- geography of animals – human races and languages – the physical organization of languages (articulation of tones) which is governed by human intelligence. 
In the specialized parts all precise numerical data … since these details cannot be presented in the same literary form, as the general relationships of scientific knowledge, the merely factual will be presented with short phrases, in almost tabular form, so that the diligent reader will find on a few pages the results that would otherwise require many years of study.”

Today, a century and a half later, many of the details of Humboldt’s account are outdated and would appear to have merely historical interest. But the aim and conception behind his work are more relevant today than ever before. To appreciate this relevance it is enough to contrast Humboldt’s conception of science and its role in human life with the alienated attitudes we have described above. What is most striking today is the harmony between human passion and human reason, between knowledge, joy and beauty, which pervades Humboldt’s Cosmos. Today Humboldt’s conception is often misleadingly identified with romanticism and the romantic movement of the early 19th century, overlooking the fact that Humboldt was a staunch advocate of rigorous scientific reasoning, precise measurement and the strict empirical testing of scientific theories. Among other things Humboldt was also an advocate of industrial development and technological progress, in contrast to the strong anti-industrial tendencies of the romanticist movement.

Especially important for the transition to a Knowledge Generator Economy is the holistic alternative which Humboldt offers to the extreme, one-sided mechanistic-reductionist bias prevailing in science today.

(To avoid misunderstandings we should emphasize that we do not at all reject the use of reductionist, analytical methods per se; these methods remain an indispensable tool in the development of science and technology. Nature has been shown to be full of “mechanisms” in the sense of processes which, under certain conditions, behave in a manner analogous to Man-built machines. These are processes which are mathematically predictable -- to a greater or lesser degree of approximation -- in terms of chains of causality on the basis of interactions between fixed arrays of “parts”. The problem lies in the one-sidedness of the reductionist model and its false claim to universality.)

Besides pointing the way to an organic understanding of Nature, Humboldt’s approach lays the foundation for organizing the mass participation of the population in scientific research in the coming period. The essential feasibility of recruiting masses of the population for scientific research can be seen from the fact that most of our knowledge of the Universe, including practical knowledge, does not come from reductionist analyses and theories and does not depend on elaborate mathematical techniques. Instead most scientific knowledge comes from observation, conceptualization and insight into the relationships of natural phenomena to each other and to the larger context in which they occur, leading ultimately to the discovery of general principles. These are the sources which Alexander von Humboldt emphasizes, and these will assuredly also remain the main sources of scientific knowledge in the future.

11.6  Excerpts from “Cosmos”

The following original quotes indicate the direction of Humboldt’s thinking concerning the cultural importance of the scientific study of Nature.

(The reader should be warned that Humboldt’s effusive and flowery mode of expression may be difficult to digest or even repugnant to those who are accustomed to the much “cooler” and more straightforward style prevailing today. The same goes for the 19th century English translation we quote from below. The reader’s diligence will be rewarded, however, by direct access to Humboldt’s thinking and a better insight into why the “Cosmos” exerted such a powerful influence at the time it was written. We have italicized some phrases for emphasis.)

1. Nature and the development of human knowledge:

“Nature considered 'rationally', that is to say, submitted to the process of thought, is a unity in diversity of phenomena; a harmony blending together all created things, however dissimilar in form and attributes; one great whole animated by the breath of life. The most important result of a rational inquiry into nature is, therefore, to establish the unity and harmony of this stupendous mass of force and matter …

“The history of the physical contemplation of the universe is the history of the recognition of nature as a whole; it is the recital of the endeavors of man to conceive and comprehend the concurrent action of natural forces on the earth and in the regions of space: it accordingly marks the epochs of progress in the generalization of physical views …

“In considering the study of physical phenomena, not merely in its bearings on the material wants of life, but in its general influence on the intellectual advancement of mankind, we find its noblest and most important result to be a knowledge of the chain of connection, by which all natural forces are linked together …; and it is the perception of these relations that exalts our views and ennobles our enjoyments. … He who can trace, through by-gone times, the stream of our knowledge to its primitive source, will learn from history how, for thousands of years, man has labored, amid the ever-recurring changes of form, to recognize the invariability of natural laws, and has thus, by the force of mind, gradually subdued a great portion of the physical world to his dominion …

“Since we have defined the subject before us as the history of nature as a whole, or of unity in the phenomena and concurrence in the action of the forces of the universe, our method of proceeding must be to select for our notice those subjects by which the idea of the unity of phenomena has been gradually developed. We distinguish in this respect, 1. the efforts of reason to attain the knowledge of natural laws by a thoughtful consideration of natural phenomena; 2. events in the world's history which have suddenly enlarged the horizon of observation; 3. the discovery of new means of perception through the senses (i.e. the telescope, microscope, magnetometer – JT), whereby observations are varied, multiplied, and rendered more accurate, and men are brought into closer communication both with terrestrial objects and with the most distant regions of space …”

2. The infinite horizon of scientific discovery

“The fear of sacrificing the free enjoyment of nature, under the influence of scientific reasoning, is often associated with an apprehension that every mind may not be capable of grasping the truths of the philosophy of nature. It is certainly true that in the midst of the universal fluctuation of phenomena and vital forces — in that inextricable network of organisms by turns developed and destroyed — each step that we make in the more intimate knowledge of nature leads us to the entrance of new labyrinths; but the excitement produced by a presentiment of discovery, the vague intuition of the mysteries to be unfolded, and the multiplicity of the paths before us, all tend to stimulate the exercise of thought in every stage of knowledge. The discovery of each separate law of nature leads to the establishment of some other more general law, or at least indicates to the intelligent observer its existence. Nature, as a celebrated physiologist has defined it, and as the word was interpreted by the Greeks and Romans, is ‘that which is ever growing and ever unfolding itself in new forms.’…”

3. Science, beauty and economic development

“Man cannot act upon nature, or appropriate her forces to his own use, without comprehending their full extent, and having an intimate acquaintance with the laws of the physical world. Bacon has said that, in human societies, knowledge is power. Both must rise and sink together. But the knowledge that results from the free action of thought is at once the delight and the indestructible prerogative of man; and in forming part of the wealth of mankind, it not infrequently serves as a substitute for the natural riches, which are but sparingly scattered over the earth. Those states which take no active part in the general industrial movement, in the choice and preparation of natural substances, or in the application of mechanics and chemistry, and among whom this activity is not appreciated by all classes of society, will infallibly see their prosperity diminish in proportion as neighboring countries become strengthened and invigorated under the genial influence of arts and sciences.

“As in nobler spheres of thought and sentiment, in philosophy, poetry, and the fine arts, the object at which we aim ought to be an inward one — an ennoblement of the intellect — so ought we likewise in our pursuit of science, to strive after a knowledge of the laws and the principles of unity that pervade the vital forces of the universe; and it is by such a course that physical studies may be made subservient to the progress of industry, which is a conquest of mind over matter. By a happy connection of causes and effects, we often see the useful linked to the beautiful and the exalted … The improvement of agriculture in the hands of freemen, and on lands of a moderate size, the flourishing state of the mechanical arts … the increased impetus imparted to commerce by the multiplied means of the intellectual progress of mankind, and of the amelioration of political institutions, in which this progress is reflected. The picture presented by modern history ought to convince those who are tardy in awakening to the truth of the lesson it teaches.

“Nor let it be feared that the marked predilection for the study of nature, and for industrial progress, which is so characteristic of the present age, should necessarily have a tendency to retard the noble exertions of the intellect in the domains of philosophy, classical history, and antiquity, or to deprive the arts by which life is embellished of the vivifying breath of imagination. Where all the germs of civilization are developed beneath the aegis of free institutions and wise legislation, there is no cause for apprehending that any one branch of knowledge should be cultivated to the prejudice of others. All afford the state precious fruits, whether they yield nourishment to man and constitute his physical wealth, or whether, more permanent in their nature, they transmit in the works of mind the glory of nations to remotest posterity. The Spartans, notwithstanding their Doric austerity, prayed the gods to grant them "the beautiful with the good." …

“Increased mental cultivation has given rise, in all classes of society, to an increased desire of embellishing life by augmenting the mass of ideas, and by multiplying means for their generalization; and this sentiment fully refutes the vague accusations advanced against the age in which we live, showing that other interests, besides the material wants of life, occupy the minds of men…”

4. The joy of thinking

“In order to trace to its primitive source the enjoyment derived from the exercise of thought, it is sufficient to cast a rapid glance on the earliest dawnings of the philosophy of nature, or of the ancient doctrine of the 'Cosmos.' We find even among the most savage nations (as my own travels enable me to attest) a certain vague, terror-stricken sense of the all-powerful unity of natural forces, and of the existence of an invisible, spiritual essence manifested in these forces, whether in unfolding the flower and maturing the fruit of the nutrient tree, in upheaving the soil of the forest, or in rending the clouds with the might of the storm. We may here trace the revelation of a bond of union, linking together the visible world and that higher spiritual world which escapes the grasp of the senses. The two become unconsciously blended together, developing in the mind of man, as a simple product of ideal conception and independently of the aid of observation, the first germ of a 'Philosophy of Nature.'…

“An intimate communion with nature, and the vivid and deep emotions thus awakened, are likewise the source from which have sprung the first impulses toward the worship and deification of the destroying and preserving forces of the universe. But by degrees, as man, after having passed through the different gradations of intellectual development, arrives at the free enjoyment of the regulating power of reflection, and learns by gradual progress, as it were, to separate the world of ideas from that of sensations, he no longer rests satisfied merely with a vague presentiment of the harmonious unity of natural forces; thought begins to fulfill its noble mission; and observation, aided by reason, endeavors to trace phenomena to the causes from which they spring …

5. The enjoyment of Nature

“In reflecting upon the different degrees of enjoyment presented to us in the contemplation of nature, we find that the first place must be assigned to a sensation, which is wholly independent of an intimate acquaintance with the physical phenomena presented to our view… In the uniform plain bounded only by a distant horizon, where the lowly heather, the cistus, or waving grasses, deck the soil; on the ocean shore, where the waves, softly rippling over the beach, leave a track, green with the weeds of the sea; everywhere, the mind is penetrated by the same sense of the grandeur and vast expanse of nature, revealing to the soul, by a mysterious inspiration, the existence of laws that regulate the forces of the universe. Mere communion with nature, mere contact with the free air, exercise a soothing yet strengthening influence on the wearied spirit, calm the storm of passion, and soften the heart when shaken by sorrow to its inmost depths.

“Everywhere, in every region of the globe, in every stage of intellectual culture, the same sources of enjoyment are alike vouchsafed to man. The earnest and solemn thoughts awakened by a communion with nature intuitively arise from a presentiment of the order and harmony pervading the whole universe, and from the contrast we draw between the narrow limits of our own existence and the image of infinity revealed on every side, whether we look upward to the starry vault of heaven, scan the far-stretching plain before us, or seek to trace the dim horizon across the vast expanse of ocean …

“The contemplation of the individual characteristics of the landscape, and of the conformation of the land in any definite region of the earth, gives rise to a different source of enjoyment, awakening impressions that are more vivid, better defined, and more congenial to certain phases of the mind, than those of which we have already spoken. At one time the heart is stirred by a sense of the grandeur of the face of nature; … at another time, softer emotions are excited by the contemplation of rich harvests wrested by the hand of man from the wild fertility of nature, or by the sight of human habitations raised beside some wild and foaming torrent.”

11.7  Recipe for success

To round out this chapter, here are the main elements of our solution to overcoming alienation and launching a Knowledge Generator Economy:

1. Economic policies guaranteeing full employment and universal access to university-level education.

2. Use of robotics and advanced automation to free the workforce from routine forms of work, permitting a huge expansion of employment in science-related activities as well as in education and cultural activities directed at the development of the creative powers of the population.

3. Visionary projects, including a new era of space exploration -- in many ways parallel to the era of the great voyages of exploration in the Renaissance.

4. Mass participation of the population in scientific research, in the spirit of Alexander von Humboldt’s “Cosmos” (see Chapter 12 below).

5. Exploiting the full potential of the internet and related technology (i) to provide for the most rapid and efficient communication of ideas; (ii) to give the population access to the entire wealth of past and present knowledge, as well as to experimental observations and data, including information from ongoing space exploration; and (iii) to permit “real time” participation in educational events, scientific seminars and discussions, and experiments at distant locations. The cultural effect should be comparable to the revolution unleashed by book printing in the Renaissance.

6. Utilizing the opportunity provided by the above measures to reawaken the spirit of Renaissance humanism.

7. Revival of Wilhelm von Humboldt’s model of classical education.

 

Chapter 12    Science for mass participation

12.1  Introduction

Today professional scientists make up less than 0.5% of the workforce in the U.S. and other developed countries. Is it realistic to propose engaging large masses of the population in the highly specialized sort of scientific work done by most professional scientists today? If we are speaking of the coming two or three decades, then the answer is almost certainly no. The only near- and medium-term possibility for mass participation in scientific research is to identify areas and methods of research that do not require highly specialized training, but nevertheless contribute in a significant way to the progress of science. These must be forms of activity which overcome, rather than exacerbate, the alienation that is deeply embedded in the culture of society.

Here is our proposed solution: phenomenological studies of collective processes in Nature.

12.2  The phenomenological approach to the study of Nature

To make clear what we have in mind, we first briefly explain what we mean by collective processes and by phenomenological studies.

By a “collective process” (or “collective phenomenon”) we mean, basically, any form of coherent, organized or structured behavior arising in a collection of mutually interacting physical entities. We could call such phenomena “social processes” in a very generalized sense. Collective processes can be found in practically infinite variety everywhere in Nature, from the subatomic to the astronomical scales. Familiar examples include the formation of snowflakes out of the molecules in water vapor, the swarming behavior of insects and birds, the formation of hurricanes in the atmosphere and spiral galaxies in outer space. Collective processes are the basis of the phenomenon of magnetization and the functioning of semiconductors, lasers and superconductors, for example. Life itself is intrinsically a collective process on all its hierarchical levels. All the structures we find in living cells are formed by collective processes among atoms, molecules and ions. Cell division is a collective phenomenon of vast complexity. The physical processes associated with thoughts, emotions and learning involve collective phenomena among neurons in the brain. Everywhere in Nature the more closely we look, the more collective phenomena we find. This goes even for processes which appear disordered and chaotic at first glance. The variety of collective phenomena which are accessible to observation and study is virtually inexhaustible. Science up to now has barely touched the surface of this vast domain.

Needless to say, progress in the mastery of collective processes has enormous economic implications, and guarantees a large multiplier effect for well-organized research in this field. It is this multiplier effect that will make it feasible to sustain an employment structure with 20% or more of the workforce engaged directly in scientific activity. 

The phenomenological method in our sense lies at the heart of “Humboldtian science”. In its pure form it is perhaps most clearly exemplified by the scientific work of Leonardo da Vinci and Johannes Kepler.

The distinction of the phenomenological approach is to study physical phenomena -- their emergence, behavior, development and interrelationships – directly, without trying to “explain” them in a reductionist manner. The researcher carrying out phenomenological studies is not concerned with trying to deduce the properties of a whole from those of its parts. Instead, the phenomenon itself is taken as primary. The researcher is occupied with observation, conceptualization and creative insight rather than inductive-deductive reasoning and mathematical analysis. Phenomenological studies can provide the basis for elaborating mathematical theories, but the latter is a very different type of activity.

In the appended Supplement to this chapter we shall go more deeply into the methodology of phenomenological studies and their decisive role in the progress of science.

The daily work of a phenomenological investigator involves the following:

1. Discovering, observing and classifying collective phenomena, the circumstances under which they are “born”, their modes of behavior etc.; where appropriate, carrying out precise measurements and assembling numerical data. This empirical-exploratory activity takes advantage of the entire range of technical means which physical-economic development can provide for scientific activity, including all possibilities for manned and robotic exploration of the terrestrial and extraterrestrial domains and the use of scientific instruments based on the most advanced technologies. (It is important to note that modern computer technology makes it possible, in many cases, for people to utilize sophisticated scientific equipment without the need for highly specialized training.)    

2. Systematizing and conceptualizing the results of empirical investigations. Inventing new concepts and new means of language and visualization for dealing with the phenomena under study.

3. Developing insights into the inner character of the collective phenomena under study, into the principles governing their behavior, the interrelationships among collective phenomena of different species, and their role in Nature as a whole, with the ultimate goal of discovering universal principles of Nature. 

4. In this context, discovering new ways to look at phenomena, i.e. “seeing the world with different eyes”.

Phenomenological studies can be carried out by persons with widely differing backgrounds and levels of education. In the proposed Knowledge Generator Economy the manpower engaged in phenomenological studies will include a large part of the greatly expanded workforce of professional scientists, plus a much larger number of part-time scientists and masses of “hobby scientists”, the latter being organized in networks under the guidance of professionals.

Creating an army of “hobby scientists”, which should grow to include most of the adult population, is a powerful means to overcome intellectual passivity and transform the entire cultural climate of society. Organized in a suitable way, “hobby science” will unleash enormous creative energies in the population, giving people a sense of being part of a great undertaking. A foretaste of this is the overwhelmingly positive response to “Citizen Science” internet projects launched in recent years (see below).

12.3  Work for everyone: biology, medicine, geosciences, astronomy and materials research

The potential for phenomenological studies in the natural sciences is practically infinite. Particularly suitable are the life sciences, geosciences, astronomy and space science – the latter going hand-in-hand with a new era of manned and unmanned exploration of the solar system.

A major role will be played by the ongoing development of new scientific instruments. Already today the advent of modern imaging techniques, sensing and high-speed data processing and transmission makes it possible to study collective phenomena on all scales of space and time, in ways that could not have been dreamt of before.

Biology and medicine provide a good illustration. Here enormous progress has already occurred, for example, in the field of “noninvasive” methods for studying living organisms. Thus, using video microscopes we can today directly observe processes at the subcellular level in living cells, including the synthesis of membranes and cytoskeleton structures, intracellular transport processes and the miracle of mitosis. Similarly for the early stages of morphogenesis and embryogenesis of higher organisms. Modern technologies provide fantastic opportunities for observing the collective behavior of populations of organisms (“superorganisms”) on all size scales, from microorganisms to animals in their natural habitat.

Of incalculable scientific as well as economic importance are the growing possibilities for human beings to monitor processes in their own bodies on a continuous basis, without bulky instrumentation and without interfering with normal activities. This opens up the prospect of recruiting huge numbers of people to become medical researchers, carrying out phenomenological studies of their own organism!  With a minimum of training and intellectual rigor, even “hobby researchers” could contribute a wealth of important observations to be followed up by professional biologists and medical scientists. Continuous monitoring of body processes, combined with the insights of the subject, opens up extraordinary possibilities for investigating the whole process of emergence of illnesses, as opposed to the mere "snapshot” obtained by clinical diagnostic procedures carried out after symptoms have already appeared.

Studies of coherent phenomena in ecosystems, the oceans and continents, the atmosphere, the biosphere and the planet Earth as a whole provide ample opportunities for masses of people to combine their natural human craving to travel and to explore the world with participation in the generation of new scientific knowledge.  Here the best historical model is no doubt Alexander von Humboldt and his book “Cosmos”, discussed in Chapter 11.

The fact that the abovementioned areas have been objects of scientific investigation for centuries should not lead to the erroneous conclusion that the most important discoveries have already been made. On the contrary! The total number of scientific explorers in the course of history is infinitesimally small in relation to the vast scale and complexity of our planet, and with the help of new technologies we can study structures and processes that could never have been observed in earlier times. The principle “the closer you look, the more you see” applies here as much as in every other domain of our Universe.

Among other things, the great adventure of exploring the ocean floor and studying the forms of life at depths of several kilometers has barely begun. Deep-ocean research is the terrestrial equivalent of space exploration. In many ways we know the surface of the Moon better than the floor of our own oceans. Here also, the progress of technology -- including especially new types of materials and fabrication techniques for withstanding extreme pressures -- is opening up new horizons for manned and robotic deep-ocean exploration on a large scale.

We have already spoken, in the context of Great Projects, of the potential for involving tens or even hundreds of millions of people in astronomical and space research, in “digesting” images and data obtained by Earth-based and space-based astronomical instruments as well as from manned and unmanned missions to various destinations in the solar system. This is an area par excellence for phenomenological studies in the spirit of Johannes Kepler (see the Supplement to this chapter).

An example from a completely different field -- one of enormous economic importance -- is materials science and especially the creation and investigation of new materials. Today we are witnessing the first phases of a major technological revolution connected with so-called nanomaterials. In general the variety of materials that can be synthesized and studied using the sophisticated instruments and imaging techniques available today is virtually unlimited. Although research in this area has up to now been conducted mainly by persons with highly specialized training, there is an enormous scope for non-specialists to participate in experimentation with the creation of new materials and for “digesting” the results of imaging and diagnostic measurements.

Needless to say, these examples could be multiplied ad infinitum.

12.4  Citizen Science

We are already witnessing very promising beginnings of mass participation in scientific research in the form of so-called “Citizen Science” projects organized via the internet. These projects go far beyond the traditional participation of “hobby scientists” in entomology and ornithology.

Perhaps the best example is provided by phenomenological studies organized via the Zooniverse website (www.zooniverse.org), in which already over 1 million people have participated. Here volunteers assist professional scientists in the fields of astronomy, biology and medicine, zoology and climate research, in analyzing large amounts of data, mainly in the form of images. The contributions of the volunteers to these projects are mainly limited to the first of the four tasks of phenomenological studies listed above. What is mainly exploited is the unique human capability to discover, recognize and characterize coherent structures and processes on the basis of visual images. This capability goes qualitatively beyond anything that can be replicated by computer-based so-called “artificial intelligence”.

The most extensive projects so far are in the field of astronomy. They include:

  • the identification and classification of galaxies, star clusters, so-called bubbles and “extended green objects” associated with the formation of stars, from images taken by the Hubble and Spitzer space telescopes;
  • identifying detailed structures of the surface of the Moon and Mars from images taken by orbiting reconnaissance satellites;
  • analysis of sunspot and solar flare data;
  • identification of near-Earth asteroids and planetary systems and planetary disks around other stars.

The second main area of Citizen Science today is biology and medicine. Exemplary activities include characterizing cancerous and precancerous cells in micrographs of tissue samples, identifying the forms of behavior of various lower organisms from video films, classifying species of microorganisms etc.

In the context of these and other “Citizen Science” projects, various regimes of comparison, evaluation and verification of contributions are being developed, which are essential to insuring solid scientific results.

The “Citizen Science” projects are extremely encouraging in terms of their growing number and popularity, but they can at best only be regarded as a small initial step in the right direction. For one thing, the conceptual powers of the participants are used only in a very limited, shallow way, even with regard to the first of the four tasks of phenomenological studies listed above. Participants are not yet challenged to develop new insights, and there is little room for originality. But these limitations can certainly be overcome once the participants begin to see themselves as aspiring scientists rather than merely assistants performing more or less routine tasks.

Inducing masses of the population to see themselves becoming real scientists – even if only in their free time – depends to a large extent on changing the popular conception of science itself. Humboldt’s “Cosmos” is the model for success (see Chapter 11). Although no single person matches the qualities of Alexander von Humboldt today, there are many enthusiastic scientists whose notion of science is close to Humboldt’s and who -- in the climate created by the launching of Great Projects such as a new era of space exploration -- can together convey the essential idea of Humboldtian science to popular audiences. This is already happening to a limited extent in the video media. The main difficulty is the failure of much popularization of science to convey an adequate sense of scientific rigor, and the fact that popularization has not yet been linked to a strategy of recruiting masses of the population to become actual scientists.

Evidently, preparing large sections of the population for serious scientific work calls for a huge expansion and upgrading of science education in schools and universities. This will take substantial time, however, so that in the initial period much of the necessary education will have to take place in parallel with direct participation in scientific work. In addition to self-education, which has become much easier thanks to internet and related means, “learning while doing” requires at least some personal contact with scientific professionals, participation in classes and seminars, opportunities to work in laboratories, critical review and publication of results etc. The masses of new scientific workers must be fully integrated into the scientific community. In the longer term the guidance and education needed for mass participation in real scientific research requires a core force of professional scientists at least an order of magnitude larger than is available today. That is one reason why a drastic expansion of the number of professional scientists is high on the list of priorities for a Knowledge Generator Economy.

Of decisive importance to the quality of science education is for people to grasp science as a historical process with an infinite horizon of future development; to learn what we don’t know as well as what we know; to gain an overview of the present frontiers of science and technology, of the limitations and paradoxes of present-day knowledge and the possible directions of scientific revolutions in the future. In order to be able to “think ahead” to future discoveries it is essential to study the process of discovery which has led to present-day scientific knowledge, and whenever possible to get to know great scientific thinkers through their original writings.  This has become much easier today with the internet and computerized archives.

12.5  Organizing scientific research in a Knowledge Generator Economy

The problem of organizing an economically self-sustaining expansion of scientific activity, encompassing a large portion of the workforce and enlisting the participation of the general population, has never before been posed in history. It requires “networking” between (A) the mass of persons engaged in phenomenological studies; (B) highly-trained professionals providing critical guidance and elaborating the results of phenomenological studies and observational data into quantitative theories; (C) specialists in applied science and engineering, together with creative inventors, working to develop new technologies; (D) enterprises engaged in the production of equipment and instrumentation for science and industry; (E) the overall productive base of the economy. Naturally the division of labor will not be as strict as this; many persons will be involved in more than one of these activities at the same time, including in the role of “hobby scientists”.

The indicated linear progression A->B->C->D->E describes only one aspect of the process. Apart from the countless feedback processes among the steps, the entire process is embedded within the overall cycle of development of the physical economy as an organic totality. Each such cycle transforms the lives of the whole population, as new concepts and discoveries, technologies and modes of work become part of everyday life, laying the foundation for ensuing cycles.  This is exactly the process of nonlinear development described in more detail in Part I.

We have no magic formula for how the unprecedented scale of scientific activity, which we have called for above, should be organized in institutional terms. Workable solutions will evidently require a process of experimentation and successive approximations. Of critical importance will be to insure a balance between, on the one hand, a necessary degree of direction and critical rigor and, on the other, “looseness” and tolerance for original scientific work and creative innovations which do not fit into established paradigms. Ample experience demonstrates the danger of “institutional sclerosis”, in which new ideas and breakthroughs are suppressed by the biases and vested interests of an entrenched establishment. At the same time it is impossible to maintain healthy physical-economic development -- including the organization and allocation of resources for scientific work -- without some degree of institutionalization, including the key role of the state. Much speaks for a complementary role of the state and a great number and variety of independent private institutions (research companies, private funds, research collectives, production firms etc.).

12.6  Leibniz’s strategy

Although the task to be accomplished is unprecedented in many ways, there is much to learn from studying historical parallels.

Our proposal for a Knowledge Generator Economy has deep roots in Gottfried Wilhelm Leibniz’s strategy for establishing Academies of Science in Europe and Russia. In Leibniz’s conception these Academies would not be “academic” at all in the sense given to that term today; rather, they would be locomotives for transforming entire nations, laying the basis for what we today would call science-based industrial economies. In this context basic and applied scientific research in all areas of knowledge was to be organized in a systematic way and on a scale far beyond anything ever known before in history.

In Leibniz’s conception the development of theoretical (or fundamental) science was to be combined with an unprecedented effort to promote education and to gather together the accumulated wealth of what he called “useful observations” -- the fruits of “phenomenological studies” in our sense -- made by all peoples of the world.

Leibniz placed particular emphasis on China as a great source of “useful arts and observations” in all areas, including medicine, botany, hydrology, metallurgy, geography and astronomy. Although the theoretical sciences were underdeveloped in China in comparison with Europe, China had accumulated a vast wealth of empirical knowledge and technologies in the course of its unbroken history as an organized society from ancient times. For many centuries China had led the world in the development of tools, machines and large-scale infrastructure, and in the gathering and recording of knowledge. In our terminology we would say that China’s strength, as emphasized by Leibniz, lay in the domain of phenomenological studies. His strategy of combining China’s phenomenological knowledge with Europe’s strength in theoretical science closely parallels the symbiosis we have proposed above.

Of particular relevance to today is Leibniz’s work on the methodology and epistemology of science, which is inseparable from the way he foresaw the organization of large-scale scientific research and technological development in his proposed Academies of Science. Leibniz’s work on a “characteristica universalis” – a universal language for scientific knowledge – is to be seen in this light.

Leibniz clearly aimed to involve as large a section of the population as possible in the generation of knowledge. The “cultural soil” was to be laid via a huge program of expanding and upgrading education. Leibniz’s strategy is exemplified by his design for the Academy of Sciences in St. Petersburg, which played a decisive role in the subsequent development of science and industry in Russia. We quote from a summary by the Russian historian of science Irina Sokolova:

 “In the period 1697-1716, Leibniz watched closely the events taking place in Russian Empire. He met Peter the Great several times, developed the draft of the structure of the Russian Academy of Sciences in St. Petersburg and a number of directions concerning the establishment and development of universities. Working out these advices and instructions … The philosopher saw the prospect of the development of science, culture and education only in their synthesis and close cooperation. To assure the full-fledged functioning of an Academy of Sciences it is necessary to prepare the cultural soil of the country, that is to rear, through the renewed system of schools and universities, a generation of educated persons, to reexamine the complex of cultural institutions — libraries, botanical gardens, observatories, cabinets of antics etc., to make them generally accessible, arousing interest in science and education …”

Leibniz’s visionary plans for Europe, Russia and China were only partially realized but it can be said that his efforts transformed history, playing a pivotal role in the subsequent spectacular rise of science and industry which has continued up into the 20th century.

Today we face a very different situation than in Leibniz’s time. Thanks to the vast increase in physical-economic productivity, populations in the industrialized nations today enjoy affluence beyond the wildest dreams of people in the 18th century. At that time the vast majority of Europe’s population lived in extreme poverty, with life expectancies of less than 40 years. Under such conditions, and in the absence of a developed industrial base, it would hardly have been possible to involve the masses of the population directly in scientific work. The problem of how to organize a full-scale Knowledge Generator Economy in our sense was not posed. Today, when the economic and technological conditions are ripe for this endeavor, Leibniz’s work provides us with an invaluable historical source for relevant ideas and methodology.

12.7  Modern lessons

In modern times the best example of the organization of scientific activity on a large scale is no doubt the vast scientific establishment which arose in the United States in the course of its history, and was vastly expanded in the post-WW II period. The development of the U.S. scientific establishment is closely bound up with the influence of Leibniz, embodied most directly in Benjamin Franklin’s founding of the American Philosophical Society in 1743, and in a broader way in the entire original conception of the so-called “American System of National Economy” which we shall discuss in Part II of this book. This process accelerated in the mid-19th century above all through the organizing work of two extraordinary scientists, Alexander Dallas Bache and Joseph Henry, leading to the founding of the Smithsonian Institution (1846), the American Association for the Advancement of Science (1848), and the National Academy of Sciences (1863).  Following the end of World War II, organized science in the United States was turned into a veritable “science machine” which laid the basis for many of the most important scientific and technological developments of the subsequent period. We shall document some of these developments in Part III. The rapid conversion of scientific breakthroughs into commercial and military technologies, insuring a large multiplier effect in the economy as a whole, was accomplished via a unique symbiosis of government and private enterprise. The successes of the U.S. space program, including the lunar landing in 1969, embodied the unique organizational capabilities which were built up in and around the American “science machine”. These capabilities have been greatly damaged by the post-1970 turn to neo-liberal economic policies, but are still significant.

Despite all its strengths, the U.S. “science machine” never came near engaging more than a small minority of the population in scientific work. Conditions were in many ways ripe for launching a Knowledge Generator Economy at the time of the first lunar landing, but the chance was missed. Instead, as we noted above, the U.S. led the industrial world into a “consumerist”, “post-industrial” mode with a descent into ever deeper cultural decadence, greatly exacerbating the problem of alienation in the population as a whole.

An extensive critical study of the background and functioning of the American “science machine” in its best period, including its strengths as well as its weaknesses, will be an invaluable resource for launching a Knowledge Generator Economy in the coming period.

Also relevant are the postwar accomplishments of science and technology in the Soviet Union. Here too a gigantic “science machine” was built up, with a completely different character from that of the U.S. The successes of Soviet science would have been impossible without the rich heritage of pre-revolutionary Russian science, particularly the role of the Russian Academy of Sciences, whose origin goes back to Leibniz. Despite the great prestige given to science in the Soviet Union, and a high-quality system of scientific education, only a small percentage of the Soviet population was directly involved in scientific activity. Nevertheless the Soviet case is of interest because of its scale, the enormous accomplishments made under much more difficult conditions, and the completely different form of organization. Particularly interesting, in our view, is the role of the famous “design bureaus” in the conversion of scientific breakthroughs into technology.


Supplement to Chapter 12

Creativity and the phenomenological method

In order to accomplish the task of recruiting masses of the population to scientific work, it is essential to establish clarity concerning the nature of the phenomenological method, why it is essential to the progress of science, and why it is uniquely suited to unleashing the creative energies of the population.

The phenomenological method is rooted in our belief in the fundamental intelligibility of Nature and the coherence between the human mind and the workings of Nature.

The best reference-point for grasping the essence of the phenomenological method is the phenomenon of empathy, which is inseparable from the creative powers of the human mind. Empathy implies an ability to have direct insights into the thinking and emotional life of another person, without the use of logical reasoning and reductionist methods. When I want to get to know a person I am not interested in developing mathematical models of the person. I am not interested in investigating how the person’s responses are caused by the interactions of neurons in his or her brain, for example, or how the activity of the neurons is caused by interactions of atoms, molecules, ions etc. For me thoughts, ideas, personalities are the fundamental realities. I have immediate access to the subject of my study. I get to know the person “from the inside” in virtue of a kind of resonance between that person’s mental processes and processes in my own mind. The person is no longer a mere external phenomenon.

The faculty of empathy is a special case of a general human capability for direct insight into the world around us. Nature and the processes within it must in principle be intelligible, since we are part of one and the same world and we ourselves are products of Nature. We have an intrinsic rapport with the world around us, and the principles of Nature are in some way expressed in our own mental processes. From this standpoint we see that there is nothing mysterious about the faculty of empathy and, more generally, of direct insight into the processes of Nature. In fact, it has served Mankind over thousands of years of development, long before the advent of formal mathematical methods. It continues today, albeit often unconsciously, to be the basis for everything we do. Language serves as an instrument for expressing, refining and communicating these direct insights.

In the purely inductive-deductive method, by contrast, the scientist acts like a stranger to Nature -- a being from a completely different world who has somehow been injected into our Universe. The alien being’s sense organs produce numerical data which have no meaning for him per se; all he can do is to look for statistical correlations or other mathematical relationships. He has no direct insights, but can only make logical deductions and devise formal theories to predict the results (or average results) of his measurements.

Oddly enough, the behavior of this being from another Universe has become a norm for scientific work today! As we noted in our earlier discussion of “scientific management”, the term “scientific” tends today to be identified with the ideas of “objective” and “quantifiable”. It is true that modern science and technology would be unthinkable without precise quantitative measurements and mathematically formulated laws that could be confirmed or refuted by comparison with experimental data. The objectification of science is also seen as having been essential to overcoming the irrationality and superstition of earlier times. During the last century this process extended beyond the domain of physical sciences to embrace sociology, psychology, economics and other social sciences. In many areas a research article will not be taken seriously unless it contains mathematical formulas.

Under these conditions it is hardly surprising to find many economists behaving like alien beings, making predictions and analyses that have little or nothing to do with our real world. Similarly, it is the author’s experience that workers in the area of elementary particle physics and the so-called “Standard Theory” have a high risk of insanity.  By virtually banning conceptual thinking and direct insights into reality as “unscientific”, science has lost an essential alternative and corrective to an “other-worldly” flight into mathematical phantasies. The phenomenological approach, applied in a rigorous way, provides exactly that alternative.

The phenomenological method is indispensable, among other things because of the inability of reductionist and other mathematical-deductive methods to adequately account for most collective phenomena in the Universe (see below). The inadequacy of conventional analytical methods in this area is now widely recognized, and is highlighted by recent discussions of so-called “weak and strong emergence” -- without any clear alternative being put forward.

Kepler versus Newton

The relationship between phenomenological and analytical/reductionist methods can be grasped most clearly by comparing Johannes Kepler’s approach to the organization of the solar system with that associated with Isaac Newton.

It is well-known that Kepler’s book “Astronomia Nova” contains the first two of what are today known as Kepler’s Three Laws of planetary motion (the third was published a decade later in his “Harmonices Mundi”). Starting from the notion that the Sun constitutes the “organizing center” of the solar system, Kepler concluded from the great store of observations made in the course of time that the planets orbit around the Sun in trajectories which have the form of ellipses with the Sun as a common focal point. He found a simple geometrical relationship (the so-called area law) for how the velocity of each planet varies as a function of position on its elliptical orbit; and he discovered a lawful relationship between a planet’s orbital period and its mean distance from the Sun. The method by which Kepler discovered his laws is essentially phenomenological in our sense. Kepler approaches the solar system as a coherent totality, an organism.
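In modern notation (which Kepler himself did not use), the three laws can be stated compactly. Here $r(\theta)$ is the planet's distance from the Sun as a function of orbital angle, $a$ the semi-major axis, $e$ the eccentricity, $A$ the area swept out by the Sun-planet line, and $T$ the orbital period:

```latex
\begin{align*}
\text{1. Ellipse law:}\quad & r(\theta) \;=\; \frac{a\,(1-e^{2})}{1+e\cos\theta}
   && \text{(an ellipse with the Sun at one focus)}\\
\text{2. Area law:}\quad & \frac{dA}{dt} \;=\; \text{constant}
   && \text{(equal areas swept in equal times)}\\
\text{3. Harmonic law:}\quad & \frac{T^{2}}{a^{3}} \;=\; \text{the same constant for all planets.}
\end{align*}
```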

Kepler’s phenomenological approach becomes even clearer in his “Harmonices Mundi”, which addresses the unitary organization and morphology of the solar system. Among other things Kepler establishes the coherence of the entire set of planetary orbits by comparing their spacing with geometrical proportions derived from the Platonic solids, which in turn are characteristic of the geometry of space. He also correlates the motions of the planets with progressions of notes of the musical scale, looking for manifestations of an overall coherence between the organization of the solar system and that of the human mind.

All of this is done without trying to “explain” the solar system in terms of cause and effect as commonly understood. Indeed, Kepler is not trying to develop a theory of the solar system in the sense of conventional mathematical physics. He is striving to conceptualize the coherent, collective phenomena presented by the solar system as a whole. Unfortunately, modern readers often misunderstand Kepler’s intentions, resulting in the impression that his work is “unscientific”.

The approach attributed to Newton provides the chief historical point of reference for the narrow, alienated way the term “scientific” is used in physical science today.

In the Newtonian reductionist approach we ignore the coherent nature of the solar system as a whole. Instead, we “dissect” the solar system into a mere collection of material bodies interacting with one another. The Sun loses its unique status as the “organizer” in Kepler’s conception of the solar system, and is treated merely as the most massive of the interacting bodies. The goal is to account for the motions of the bodies, as they change from each moment to the next, in a cause-and-effect manner. Newton’s solution is well-known: each body is assumed to act on every other by an attractive force whose magnitude is a function of their distance. From Kepler’s three laws Newton deduces the mathematical form of the required function, obtaining his famous inverse-square law. Thanks to the infinitesimal calculus, developed mainly by his contemporary Leibniz, Newton’s theory makes it possible -- in principle -- to calculate the positions and velocities of the planets at any future moment from their present positions and velocities. Corrected for electromagnetic and relativistic effects, Newton’s law of universal gravitation is still the basis for precise astronomical calculations today, including the trajectories of spacecraft.
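The logical link between the two formulations can be sketched in the simplest case of a (nearly) circular orbit of radius $a$ around a Sun of mass $M$. Equating Newton's gravitational force on a planet of mass $m$ to the centripetal force required for circular motion recovers Kepler's harmonic law:

```latex
F \;=\; G\,\frac{M m}{a^{2}} \;=\; m\left(\frac{2\pi}{T}\right)^{2} a
\quad\Longrightarrow\quad
\frac{T^{2}}{a^{3}} \;=\; \frac{4\pi^{2}}{G M}\,,
```

the same constant for all planets, just as Kepler had found; conversely, only an inverse-square force law reproduces this relationship.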

Limits of the analytical/reductionist method

As useful and even indispensable as it can be from a technical standpoint, however, the reductionist method suffers from intrinsic limitations even in the areas where it appears to be most suited.

These limitations are exemplified by the so-called “many-body problem”: applying Newton’s law of gravitation to the motions of any collection of interacting bodies leads to highly nonlinear sets of equations. For more than two bodies the equations generally have no closed-form mathematical solution, and it cannot be determined, for example, which configurations are ultimately stable. A tiny variation in the initial conditions (positions and velocities at some given moment) can lead to huge differences in later behavior. The same problem appears in reductionist analyses generally, including for example the notorious unreliability of weather predictions, even those made with modern supercomputers.
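The sensitivity to initial conditions is easy to demonstrate numerically. The following sketch uses invented masses, positions and velocities, units with G = 1, a simple semi-implicit Euler integrator, and a small softening term to avoid numerical singularities; it integrates the same three-body system twice, with one coordinate perturbed by one part in a million:

```python
# Numerical sketch of sensitive dependence in the gravitational three-body
# problem. All masses, positions and velocities are invented for illustration;
# units are chosen so that G = 1.

def accelerations(pos, masses, soften=1e-3):
    """Pairwise gravitational accelerations (G = 1, softened to avoid blow-up)."""
    acc = [[0.0, 0.0] for _ in pos]
    for i, (xi, yi) in enumerate(pos):
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy + soften) ** 1.5
            acc[i][0] += masses[j] * dx / r3
            acc[i][1] += masses[j] * dy / r3
    return acc

def simulate(pos, vel, masses, dt=0.005, steps=4000):
    """Semi-implicit Euler integration; returns final positions."""
    pos = [list(p) for p in pos]
    vel = [list(v) for v in vel]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

masses = [1.0, 1.0, 1.0]
p0 = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.6]]
v0 = [[0.0, -0.4], [0.0, 0.4], [0.4, 0.0]]

final_a = simulate(p0, v0, masses)

# Perturb a single coordinate by one part in a million and integrate again.
p1 = [list(p) for p in p0]
p1[2][0] += 1e-6
final_b = simulate(p1, v0, masses)

separation = max(abs(a - b) for pa, pb in zip(final_a, final_b)
                 for a, b in zip(pa, pb))
print(separation)  # typically far larger than the 1e-6 perturbation
```

No increase in the precision of the integrator changes the qualitative result: the final separation of the two runs dwarfs the initial perturbation.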

The complementarity of phenomenological and analytical/reductionist methods

Effective scientific work, and the application of science to engineering and technology, requires a combination of both sorts of methods: the phenomenological (observation, insight, conceptualization) and the analytical/reductionist (analysis of interactions in a system, induction and deduction, elaboration of formal mathematical theories and methods of calculation). Each complements and corrects the other, but in fundamentally different ways.

For example, the Newtonian approach provides a correction to Kepler’s phenomenological description with respect to many details of the solar system. Among other things it is possible on the basis of Newton’s analysis to account for the discrepancies between the actual, observed trajectories of each of the planets and the ideal elliptical orbits described by Kepler, arising from the gravitational influences exerted by the other planets. While the phenomenological approach is mainly qualitative, the reductionist approach exemplified by Newton provides precise quantitative calculations, and is thereby indispensable to the development of technology. Most importantly from a scientific standpoint, the scope of Newton’s “law of universal gravitation” extends far beyond the solar system to embrace the entire observable Universe. Integrated into the framework of modern theoretical physics, it lays the foundation for the application of reductionist methods to all coherent objects in astronomy.

At the same time Kepler’s approach corrects Newton’s by characterizing and demonstrating the coherent nature of the solar system as a whole -- expressed in morphological features such as the harmonic spacing of the planetary orbits -- for which Newton’s analysis provides no explanation. Even today, the reductionist methods of modern celestial mechanics cannot account for the overall morphology of the solar system. The same holds for coherent phenomena in practically all known orbital systems, including for example the intricate structure of Saturn’s rings. To the extent certain features of these systems have been accounted for, they mainly involve resonances among orbits, in line with Kepler’s musical conception. In the meantime it has become generally accepted that the system of the Sun and planets has evolved from a single proto-star, confirming Kepler’s insight into the organic unity of the system and his identification of the Sun as the organizing center.

Conclusion

The example of Kepler and Newton is immediately relevant to defining the role of phenomenological studies, carried out with mass participation, in the Knowledge Generator Economy.

Scientific and technological progress -- including the indispensable role of precise measurement and mathematical analysis -- “feeds” on the enormous, growing wealth of observations, insights and conceptualizations generated by people working in the phenomenological mode. Conversely, mathematical analysis and highly specialized experimental work are essential to transforming the results of phenomenological investigations into tools for engineering calculations as well as detailed comparison of hypotheses with empirical measurements. Hence there must be a close collaboration between those engaged in phenomenological studies, including “hobby scientists”, and specialists in the relevant areas.

While acknowledging a certain complementarity between the phenomenological and analytical/reductionist methods, there should be no doubt concerning the primacy of the phenomenological approach.

It is no accident that Newton’s work came after that of Kepler and cannot replace it. Before a phenomenon can become a subject of scientific study, it must first be identified and conceptualized. We must first have established relevant conceptions before we can even talk about a given phenomenon of Nature. In this sense the phenomenological method is intrinsically prior to the analytical/reductionist method and forms the basis for the latter.

Furthermore, as the latter – almost by definition -- excludes the faculty of direct insight into Nature, it inevitably leads to alienation. This alienating effect can only be overcome when analytical/reductionist methods are relegated to the status of tools and strictly subordinated to the processes of direct, creative insight and conceptualization of reality.

Above all, the process of fundamental scientific discovery, which we have characterized in terms of conceptualizing and overcoming the relative boundedness of a given mode of thinking concerning the Universe, pertains to the phenomenological method in our sense. Fundamental scientific discovery depends on direct insights of the discoverer into his or her own thinking processes.

 

Part III: National Development from the Standpoint of Physical Economy


Chapter 13: National Economy

13.1 The essential role of the nation-state

In the first part of this book we presented the concepts of physical-economic growth and development in theoretical terms. Now we turn to the question of how to realize them in practice. We find that the first and most essential precondition for realizing real, sustained economic development is to have a sovereign nation.

Although a gradual progress of human productive power can be followed back to the earliest times – with the development of tools and use of fire through the emergence of agriculture etc. -- the explosion of scientific, technological and social progress over the last 300 years coincides with the emergence of the modern nation-state.

Historically speaking, only sovereign nations have been able to develop powerful industrial economies -- as exemplified most clearly by the historical development of the United States, Germany and France. In spite of “globalization”, the real economic strength of countries around the world is closely correlated with the degree of strength and stability of their governing institutions and their character as sovereign nation-states. The developed industrial economies have all been well-defined individual economic organisms based on the territories and under the jurisdiction of a sovereign state, whose populations have been united by a sense of belonging to a common state. Successful industrial economies have practically all been national economies with their own all-round agricultural and industrial base, participating in international trade not as dependent territories of an empire, but as sovereign economic individuals. More complicated are cases such as Great Britain in the 18th and 19th centuries, where elements of national economy were mixed with those of a colonial empire.

What is the reason for this close relationship between economic development and the sovereign nation-state?

Firstly, like any living organism, an economy requires a stable environment and a suitable framework in which to develop. It must be protected against external or internal dangers, nurtured and guided into a direction that assures a balanced all-round development of its potentials, avoiding excessive dependence on the outside. This requires a strong, stable state equipped with adequate policy instruments and qualified leadership on all levels of the economy. None of this can be accomplished by the so-called “market forces” acting alone. On the contrary: unless regulated and corrected by suitable institutions and policies, the “market forces” – left to themselves – produce chaos. As we emphasized in the first section of this book, healthy economic development requires a combination of factors, none of which can work without the others. A sovereign state is the only agency which has the capability to bring together all the necessary elements. The whole is more than the sum of its parts.

A unique advantage of an economy based on the nation-state -- as opposed to an imperial system or a totally globalized free market world without national economies – lies in its character as an independent, sovereign individual in its interactions with the rest of the world, and in its relationship to the nation’s citizens. If oriented to the principles of physical economy, as presented here -- and with a suitable political system! -- the national economy operates as an instrument for the self-realization of its individual citizens. It is seen by the population as “their” economy, not only as a means for sustaining their lives and those of their children, but also as a vehicle for contributing to and investing in the future. Building a nation provides the citizens with a sense of participation in a process which goes beyond their own lives, supplying a continuity of purpose from generation to generation.

Naturally, the extent to which a nation-state actually functions, in practice, as an instrument for economic development and the self-realization of its population, depends on the quality of its governing institutions and of its leadership. Under circumstances of rampant corruption, dictatorial and fascist regimes, chronic political instability etc. the opposite effect can occur. The same holds when a national economy is enslaved to imperial aims and to the purposes of war. In economics as in every other domain of human activity, instruments per se are neither good nor bad -- everything depends on how they are used.  The nation-state is a necessary, but not a sufficient condition for physical-economic development.

Let us briefly review the main reasons for this necessary role of the state. Most of them are obvious and well-known, but they seem to have been overlooked by many people in the rush to adopt neoliberal policies of deregulation, privatization and globalization.

Only a sovereign nation-state can supply and guarantee the necessary framework for sustained productive activity. This applies, first of all, to the most elementary preconditions for economic activity to occur at all, namely:

  • physical security
  • a functioning, equitable legal system, which (among other things) provides a stable basis for agreements between persons and between institutions
  • a single currency with a stable value in relation to basic physical commodities

The above elements are necessary as a minimum for economic activity of a population, but development requires more:

  • a regulated financial and credit system (discussed further below)
  • a health system guaranteeing certain minimum standards of care
  • a system of universal public education
  • basic physical infrastructure (transport, energy, water, communications)
  • a domestic productive base encompassing the most essential sectors of agriculture, mining and industry, comprising some combination of private and state-controlled enterprises
  • functioning institutions of scientific research and innovation
  • a long-term government economic strategy, together with essential instruments for implementing economic decisions

13.2 Economic instruments of the state

The governing institutions of a sovereign state possess a variety of instruments at their disposal for implementing economic policy. The instruments exploited by the governments of practically all successful national economies up to today include the following:

1)  large-scale direct state investment into the physical economy and its service base, above all in physical infrastructure, science and technology, health and education;

2) direct government procurements (purchases) of industrial products and services, particularly from the high-technology sectors; 

3) other forms of direct support for productive sectors of the economy such as reduced taxes and low-interest credits for specific groups of industries, various types of subsidies (e.g. energy subsidies, research and development subsidies, export subsidies etc.), various forms of tariff and so-called “nontariff” protection for domestic producers, supportive trade and cooperation agreements etc.;

4) indirect support of productive activities by a variety of means such as access to the results of state-supported research and development; laws and regulatory measures which encourage productive activities, prevent the formation or undue influence of trusts and monopolies, and hinder speculation and manipulation of markets;

5) measures to provide an adequate supply of credit for long-term productive investment -- directly, through state-owned financial institutions, and/or indirectly, through the impact of state policy on the credit-generation and lending activity of private banks. In both cases the foundation lies in the sovereign power of the state to issue currency and to expand and regulate the volume of credit via a suitable national bank or central bank whose operations are subservient to the economic policy of the state (see below);

6) a variety of measures, including licensing and certification, which establish norms and standards for various sectors of economic activity;

7) the system of taxation, which not only generates income for the state, but provides a powerful tool for steering and “tuning” economic activity -- shaping the distribution of investment, supporting beneficial activities and discouraging others etc.

Naturally, the mere existence of these instruments does not mean that they all must be used in a major way. Generally speaking the most effective state policy is one that aims for maximum simplicity and transparency, restricting use of most of the above-mentioned instruments to the minimum necessary for ensuring an optimal long-term development of the nation and dealing with specific problems when they arise. We shall discuss the question of “more state” or “less state” in section 13.6 below.

13.3  Building the nation: the function of Great Projects

Among the most powerful instruments of economic policy are large-scale infrastructure projects planned and carried out either by the state directly, or with strong support and guidance by the state. The role of the state is generally indispensable in such projects because of the large scale of investment required and the fact that the “payback” is realized only on the level of the economy as a whole.

Infrastructure projects

The use of large-scale infrastructure projects as instruments of economic development goes back thousands of years, to long before the rise of the nation-state in its modern form. It is most evident in the case of water infrastructure, which can still be regarded as the most basic category of infrastructure in any economy. Early examples are the huge irrigation, drainage and water control projects which were central features of the great ancient empires of China, Mesopotamia, Egypt and others. One can speak of “hydraulic cultures”. Large-scale construction of canals for transport and irrigation has occurred practically throughout the history of China, for example. At the time of the Qin Dynasty in the 3rd century BC these projects included the Zhengguo Canal in Shaanxi Province, irrigating some 27,000 square kilometers, and the Lingqu Canal in Guangxi Province, said to be the first canal to connect two rivers, and which made use of locks. A later and most famous example is the roughly 1,770 km long Grand Canal, completed around 600 AD and still the longest artificial river in the world today. In the West a classical example is the Roman empire’s system of aqueducts, many of which were the result of large-scale projects involving elaborate planning and engineering. The supply of water via aqueducts was essential to the rise of the city of Rome itself, which reached 1 million inhabitants by the 3rd century AD. In each case the creation of essential infrastructure depended on the function of a central state.

Jumping forward in time, the rise of the modern industrial nation-states has practically always been closely tied to infrastructure development.  The United States is a prime example.

The rise of the United States as a leading industrial power is inseparable from the early construction of networks of transport canals, roads and railroads, followed by telegraph and electric power networks. A high point of the 19th century was the completion of the first transcontinental railroad, linking the Atlantic and Pacific coasts. This project was carried out under the Pacific Railroad Act signed into law by President Abraham Lincoln in 1862. Under this law the U.S. government chartered the Union Pacific Railroad and enlisted the existing, California-based Central Pacific Railroad; the two corporations were tasked with constructing and operating the line. The project was financed mainly by the U.S. government through loans of U.S. bonds to the railroad companies, who in turn sold these bonds on the market to raise capital. It was also supported by extensive grants of government-owned land. The transcontinental railroad was a key part of a rapidly developing rail network which not only promoted the settlement of the whole territory and the development of vast interior regions of the United States, but also greatly strengthened the unity of the nation and the efficiency and coherence of the national economy. Crucial also was the installation of telegraph lines along the railroads, which permitted direct communication across the whole territory.

In Germany, the development of a national system of railways, promoted by the “American System” economist Friedrich List, was decisive in overcoming the division of the country into small states (“Kleinstaaterei”), consolidating the national economy and laying the basis for the spectacular industrial development of Germany. 

The crucial role of large-scale government-supported railroad projects in the development of national economies is exemplified also by the 1891-1916 Trans-Siberian Railroad project in Russia and the great railroad-building program for China launched by Sun Yat-sen in 1920. It is not an accident that the authors and promoters of these and other large-scale railroad projects were often at the same time great political leaders of their respective nations. Examples are Abraham Lincoln in the United States, Sergei Witte in Russia and Sun Yat-sen in China.

In many ways even more revolutionary was the advent of electricity and of electricity networks, whose benefits extended directly into the households and workplaces of the population. The magic word "electrification" has been synonymous with socio-economic development in nations around the Earth. This includes state-organized electrification programs, which have historically been one of the most effective means for developing the real economy and uplifting the living standards of the broad population. Famous examples include the Rural Electrification program launched in the United States in the 1930s under President Franklin Roosevelt, and the Soviet GOELRO electrification program which was decisive for the development of the Soviet Union into a powerful industrial economy.

The recent, spectacular development of China into a new economic “superpower” demonstrates once more the role of large-scale infrastructure projects as instruments of economic policy.  Examples are the Three Gorges Dam, the creation of a national expressway network and a national high speed rail network, and the Chinese government’s long-term electrification policy, which has brought electricity to 99% of the population.

High-technology projects  

In addition to infrastructure, great projects in the areas of science and technology have become increasingly important as instruments of economic policy. Exemplary are the Apollo moon landing and the civilian nuclear energy program of the United States in the postwar period. Not surprisingly, the aspiring superpowers China and India both have large space and nuclear energy programs. This is not only for military reasons, but also as means to upgrade the technological level of the whole economy. As we discussed in Part II, the benefits of space exploration go far beyond the immediate economic “multiplier effect” to include a strengthening of a positive cultural outlook of the population, especially the younger population.

Needless to say, scientific and technological endeavors such as the first manned missions into space or the early realization of nuclear power require the initiative and resources of the state. This is because of the enormous investments and the risks involved, and especially because the “profit” or “pay-back” for the investment is realized only on the level of the nation as a whole. In physical-economic terms the payback consists in the fact that science-intensive projects lead to an increase in the overall physical productivity of the economy, via technological improvements, know-how, manpower development etc. The higher productivity in turn causes a growth in the total physical output of the economy. For well-chosen projects the resulting increase in total output is much larger than the portion of output consumed in realizing the project. The difference can be regarded as a “physical profit” obtained from the investment of productive resources into the project. In financial terms the state realizes the “pay-back” mainly via the expansion of its tax base. An excellent example is “multiplier effect” of the U.S. Apollo Program, discussed elsewhere in this book.
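The arithmetic of this “physical profit” can be illustrated with deliberately round numbers; the figures below are invented for the sketch and are not drawn from the Apollo Program or any real project:

```python
# Hypothetical illustration of the "physical profit" of a science-intensive
# project. Units: billions of standard consumption baskets per year.
# All figures are invented for the sketch.

baseline_output   = 100.0   # annual physical output without the project
project_cost      = 3.0     # portion of output consumed in realizing the project
productivity_gain = 0.05    # economy-wide productivity increase attributed to it

extra_output    = baseline_output * productivity_gain   # gain in annual output
physical_profit = extra_output - project_cost           # net gain in physical terms

print(extra_output, physical_profit)  # about 5.0 and 2.0: the output gain
                                      # exceeds what the project consumed
```

The point of the sketch is only the ordering: for a well-chosen project, the output gain attributable to higher productivity exceeds the output diverted into the project, and the state recovers its financial outlay mainly through the expanded tax base.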

The model used very successfully by the United States is for a large part of the R&D and physical production to be carried out by private companies working under government contracts. The profit margin which private companies require is provided by these government contracts, together with the possibility (regulated by the contracts) to apply certain know-how and technologies, developed in the context of government-sponsored work, in their own commercial products.

13.4  National banking and the national financial system

The monetary-financial processes in an economy constitute an essential domain of action of a sovereign state, in which the state has at its disposal -- or can create -- a variety of instruments for realizing policies of physical-economic development. Since financial and monetary issues are not the subject of this book, we shall limit ourselves here to a few basic points.

The first and most fundamental point is that a sovereign state has the power to create its own monetary and financial system or to transform an existing one; to regulate monetary and financial processes including the activities of private financial entities; and to create financial institutions such as central banks, national banks and development banks as instruments of economic policy.

A good example is the United States, which has had a variety of currency and banking systems in the course of its long and often turbulent financial history. Although the present financial system of the United States is based on an independent central bank -- the Federal Reserve System -- one should never forget that this present system was created by the U.S. government itself via the Federal Reserve Act of 1913. Prior to 1913 the U.S. had a series of currency and banking systems which differed widely from each other, and in which the degree of control by the government was sometimes stronger and sometimes weaker. The complex history of the U.S. financial system has been shaped by struggles between different economic and political philosophies, and between the federal government, the state governments, and powerful private financial interests and trusts exemplified by the financial empire of J.P. Morgan. Again and again, however, the U.S. government exercised its sovereign powers by discarding an existing system and replacing it by a new one.

Analogous cases can be found in the histories of nations around the world. As long as a nation maintains its sovereignty, its government can shape the nation’s monetary and financial system and guide its functions in accordance with the requirements of economic policy. Naturally the exercise of these powers is subject to a wide variety of domestic and external pressures and constraints, including foreign trade and currency relations, domestic and foreign debt, domestic savings and budget issues, the conjunctural situation, domestic and foreign politics etc.  As in all other areas, policy-making in the financial domain requires a mixture of strategy and tactics. But apart from the issue of who really controls financial policy at a given moment – private interests or the elected government, for example – the most essential point is to define the fundamental criteria according to which decisions are made. 

Our answer is that the real, physical economy must be the basic criterion and measuring-rod for financial and monetary policy. Unfortunately this is rarely the case today. Most governments use GDP growth – a financial-based measure of economic performance -- as the main basis for their policy-making in the economic and financial spheres. We have already emphasized the potentially disastrous results of this practice, exemplified by the great financial crisis which began in 2007-2008.

In Part I of this book we have set out the basic principles of physical economy and the criteria for assessing the real growth and development of an economy, providing a principled alternative to GDP and other false or inadequate measures.

The principles of physical economy are particularly essential when it comes to banking and credit policy, which are among the most powerful instruments through which a sovereign government can act upon the economy.

It lies outside the scope of this book to deal with the vast subject of banking and credit policy in general. We have already touched upon a central issue, however, in Part II, Section 8.3, entitled “Where will the money come from?” There we described means by which a state can ensure an adequate flow of credit to the productive sectors of the economy, including through various mechanisms of productive credit generation; and how the state can channel that flow of credit, via a combination of state and private credit institutions, into directions that promote optimal physical-economic development. We mentioned there the examples of Germany and Japan in the postwar period, as well as China today. More will be said about these and other cases in Chapter 15. Here a fundamental point should be emphasized:

We insist that it is possible, in principle, to finance anything which lies within the limits of the physical productive capacity of the nation. The correct methodological approach is first to define a feasible trajectory (or range of trajectories) of physical-economic development, and afterwards to identify and/or create the financial instruments and financial policies needed to realize the desired trajectory. There is generally a sizable range of possible solutions to the latter problem, involving various choices of instruments, varying degrees of state dirigisme and different forms of “division of labor” between the state and private banking and investment.  An important criterion for all such solutions is that the monetary expansion and growth in the issuance of credit must be matched by the real growth of the physical economy. This is accomplished primarily by channeling suitable amounts of credit and investment into activities which increase the productivity of the physical economy – above all into scientific and technological development and improvements in infrastructure. At the same time the state must exercise its power to suppress the misuse of financial instruments and to block the growth of speculative bubbles and related financial illnesses. Using the method of productive credit generation, the state can in principle expand economic activity up to the point of full employment and full use of productive capacities.
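The requirement that credit expansion be matched by real physical growth can be given a crude numerical form. The following is a quantity-theory-style sketch with invented growth rates, not a model proposed in this book:

```python
# Crude sketch: the ratio of outstanding credit to physical output, taken as a
# rough proxy for the price level. If credit grows in step with real output,
# the proxy stays flat; if credit outruns output, the proxy inflates.
# All growth rates are hypothetical.

def price_proxy(credit_growth, output_growth, years=10):
    """Compound both quantities and return the credit/output ratio (starts at 1.0)."""
    credit, output = 100.0, 100.0
    for _ in range(years):
        credit *= 1.0 + credit_growth
        output *= 1.0 + output_growth
    return credit / output

matched   = price_proxy(0.04, 0.04)  # credit expansion matched by real growth
unmatched = price_proxy(0.08, 0.02)  # credit outruns the physical economy

print(matched, unmatched)  # matched stays at 1.0; unmatched rises well above it
```

The sketch only restates the criterion in the text: credit issuance channeled into productivity-increasing activity expands output in step with money, while credit growth detached from the physical economy shows up as inflation of the ratio.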

In our view the main difficulty at present is the fact that most of the world’s leading central banks are committed to policies which contradict the basic principles of physical economy. Often these banks pursue their policies independently of the governments, functioning de facto as a kind of second government, a “financial government”, alongside the elected one. This situation is by no means inevitable or permanent, however. As we emphasized, central banks -- even the so-called independent ones -- are creations of governments. They are created by laws, and can be reorganized or replaced at any time by the passage of new laws. 

13.5  Private enterprise or state ownership?  Protectionism or free trade? Economic planning or market mechanisms?

In this book we mainly deal with decisions made at the level of an economy as a whole, which by their very nature are the primary responsibility of governments. Hence in the following sections, where we examine examples of successful physical-economic development in the history of the United States, Germany, France, Japan, the USSR and, more recently, China, we shall focus on the role of the state.

There are endless debates over how large the state role in the economy should be and what instruments the state should or should not use. At one extreme is the concept of a socialist command economy in which all economically relevant activities are centrally planned and directed, and goods are directly allocated rather than being bought and sold on markets. At the other extreme of the spectrum is the concept of the pure laissez faire economy, where the state is prohibited from intervening in any way into the economy, and everything is left up to the “market forces”.  This debate involves a great number of issues, many of which lie outside the scope of this book. We shall limit ourselves to making a few basic points.

Firstly, the various sides of the debate on the proper economic role of the state often make the mistake of focusing on the instruments of economic policy per se, rather than on how the instruments are used, for what purpose and under what circumstances. It is like debating whether a screwdriver is a good thing or a bad thing.

As history shows, the benefits of state intervention into the economy depend on the character of the state and its policies. The same applies to state ownership or control of large corporations and banks. If the institutions of the state are sound, and its policies are competent and oriented to developing the real, physical economy, then the results of strong state intervention can be excellent. But if the state is corrupt and dominated by special interests, the result can be disastrous.

Another defect in the debate -- most pronounced among advocates of radical neoliberal ideology -- is to argue for this or that position on the grounds that it leads to the most rapid economic growth. As we have emphasized repeatedly, the purpose of physical economy is not to optimize growth but rather to promote human happiness. The mere fact that certain policies may be superior in terms of the growth they produce is not sufficient to prove that they are superior in bringing about real physical-economic development as we have defined it. This applies especially to arguments based on the criterion of GDP growth.  Similarly, it is often argued that radical deregulation, privatization and globalization benefit the consumer by lowering the prices of goods. Whatever the truth of that argument, we must not forget that the world consists not only of consumers, but also of producers; and that the goal of life is not to consume as much as possible. 

It is important to adopt a non-dogmatic position and to steer away from ideologies.

There is no doubt, for example, that under suitable conditions free competition among private entrepreneurs can be a powerful force for efficiency, productivity and innovation. But unbridled competition can also lead to a ruthless drive to push down wages, to the extinction of entire classes of small producers and control of markets by a small number of giant corporations.

Ironically, the so-called free market economies of the United States and many other countries require constant interventions by the state precisely in order to ensure “fair competition”, among other things by blocking the formation of cartels and monopolies. The United States in particular has an elaborate system of antitrust laws, evolved over a long period and enforced by agencies and courts at the federal and state levels. The United States also has a minimum wage law and extensive labor laws to protect the rights of employees. State intervention into the markets and into the conditions of employment is even more extensive in the successful, so-called “social market economy” of Germany.

Against this background we can understand why radical neoliberal policies have caused great damage to free entrepreneurship in many instances, by removing the kinds of regulation, protection and stabilizing measures which are necessary to maintain a broad layer of small- and medium-sized enterprises.

It is particularly important to recognize that free entrepreneurship can only flourish under conditions of monetary and financial stability, and where there is an adequate supply of affordable credit. As we have experienced in the devastating financial crisis beginning in 2007-2008, deregulated financial markets have an inherent tendency to take on a life of their own and to separate themselves from economic reality. Ultimately it is only the state which can hold these irrational tendencies in check, counteract the formation of “bubbles” and ensure an adequate flow of credit to activities that contribute to physical-economic development.

Trade barriers (protectionist tariffs) are neither good nor bad in and of themselves. Under certain circumstances trade barriers can be indispensable for creating a viable national economy and for nurturing specific economic sectors. This can be the case, for example, when an industrial sector which is needed to form an all-round domestic base is still underdeveloped. Historically, protectionist policies were essential to the successful industrial development of the United States, Germany and many other nations during the 19th century. But in some situations trade barriers can also be a cause of chronic obsolescence, by removing the inducement and pressure to modernize and to increase efficiency. As with all government instruments of economic policy, there is a constant danger of misuse for political reasons and under the influence of particular interests.

Historically speaking, successful national economies have nearly always been based on a mutually complementary combination of a broad layer of private entrepreneurs and businesses together with a strong economic function of the state, including state ownership of certain key institutions and facilities.

From a purely physical-economic standpoint (leaving aside political considerations) the indispensable function of private ownership and private entrepreneurship lies in the sovereign sphere of action they provide, in which the individual can nurture and develop his or her contribution to the economy, free from outside interference. Such individual contributions range from the countless decisions made by entrepreneurs in developing their businesses, to inventions and scientific discoveries made by individuals. The essence of an original creative contribution – such as a scientific discovery – is that it deviates from prevailing, established ideas and ways of doing things. Creative ideas are born only in individual human minds. Bureaucracies and other collectives intrinsically tend to resist innovations. This is a main reason why “pure” communist or socialist systems lacking a large sphere of independent entrepreneurs have generally not been viable in the long term.

Markets have an important role to play in a successful economy, but we think their importance is exaggerated in classical economics. In physical economy the most important thing is not how goods are bought and sold, but rather how they are created and produced, how they are used and what is the impact of their production and consumption on the development of the economy as a whole. In principle it would be possible to eliminate the use of money and markets entirely by centralized control of production and allocation of all goods and services, as was practiced to a large extent in the Soviet Union before Perestroika. But although the Soviet Union was able to achieve considerable physical-economic development, especially in the pre-1975 period (as described in Chapter 15 below), direct allocation by a centralized state bureaucracy tends to be extremely ineffective and wasteful. It would be so even using the most modern information systems, because top-down allocation procedures cannot replace the decision-making of millions of individual members of society which – to the extent rationality prevails – reflects the actual requirements and usefulness of goods and services on the “micro” level. Strict centralized allocation has a particularly negative effect on the process of innovation in the productive sector, because innovations most often require new equipment, whose procurement would always require approval “from above”. (An analogous problem often afflicts large corporations, which tend to function on the inside like miniature socialist economies.) 

Markets, when they are functioning properly, provide a natural means of quantitative regulation of production and demand, as well as providing the freedom to procure goods without bureaucratic procedures. Markets also provide a direct, more or less unhindered point of entry for new and improved types of goods and services into the economy, and a means for testing the quality and usefulness of such products via the judgment of consumers.

At the same time, it is a matter of daily experience that markets are subject to distortions and dysfunctions which can have highly destructive effects on the real economy. Among the most dangerous cases is when markets become a mechanism for self-feeding spirals of irrationalism. This danger is exemplified by speculative financial bubbles as well as the practice of creating artificial demand by psychological and even cultural manipulation of society. The latter phenomenon has become pervasive in Western society in the post-1970s period with the transition to a “consumer society” and the huge expansion of entertainment “industries” (including mass sports and gambling), which not only feed on growing irrationality in the population, but also actively encourage it. When we speak here of “manipulation” we do not mean some sort of secret conspiracy, but rather the open activity of a gigantic advertising industry which utilizes all available means of psychology and sociology to influence the thinking and behavior of the population. As we noted earlier, the so-called “consumer society” did not arise spontaneously, but was in many ways the result of changes in the economic and financial policies pursued by governments.

Once again we must recognize that markets are instruments which can be used for both positive and negative purposes. The claim that markets are “self-correcting” overlooks the fact that severe financial crises can cause irreversible damage to society and even lead to wars and the destruction of entire nations. No one can forget the “correction” which occurred in 2007-2008, in which the governments of the United States and other countries saw themselves forced to intervene massively in order to prevent an uncontrollable disintegration of the entire financial system.

Macro-economic planning is another case of an instrument which is neither good nor bad in and of itself. History provides examples in which physical-economic development has been achieved over substantial periods on the basis of comprehensive planning with more or less precisely-defined targets, as well as cases where successful development was achieved without any overall plan at all. To the first category belong not only some more or less successful socialist planned economies (such as the pre-1975 Soviet Union), but also “capitalist” countries such as France and Japan in the 30 years following World War II (see Chapter 15). India, an open society with a mixed economy, has practiced centralized planning with a succession of detailed 5-year plans over the entire period from its independence until today.  The United States, on the other hand, realized rapid physical economic development throughout most of its history without any overall economic plan. The major exception was the U.S. war mobilization in World War II. The “economic miracle” in West Germany after the war was also accomplished mostly without centralized planning, although government policies were essential to creating the framework within which the “miracle” occurred.

Evidently the usefulness or necessity of comprehensive planning depends very much on the concrete circumstances of a nation. Generally speaking comprehensive, detailed planning has been most useful and most successful in the case of relatively underdeveloped nations (or nations with large underdeveloped sectors) striving to build an all-round productive base and eventually to “catch up” with the advanced nations. An analogous case is Japan’s rapid recovery and modernization after the destruction of World War II. In such cases the most necessary technologies already exist and can be obtained from the outside, from more advanced nations. The most recent example is China.

When development is based on the integration of already existing technologies into an economy, planning can make use of knowledge concerning the basic parameters of these technologies and their applications. But what happens once an economy has reached the technological level of the most advanced nations, so that further development can no longer be based on the import of already existing technologies? Does planning become superfluous? Is detailed planning even possible?

This question raises a fundamental methodological issue, which emerges most clearly when we consider the “strongly nonlinear” mode of physical-economic development described in Chapter 4 of this book. Strongly nonlinear development is driven by scientific and technological revolutions. These are the result of a combination of creative mental acts of individuals, typically connected with anomalous experimental phenomena or observations.  It belongs to the very nature of such events that their occurrence and consequences cannot be predicted in advance.

Suppose, for example, that a new, cheap and plentiful source of energy were discovered, based on some new physical principle, which could drastically change the parameters and even the structure of the whole economy. To try to maintain too rigid a plan under such circumstances could mean delaying or even blocking the introduction of technology based on the new energy source.

On the other hand, relying exclusively on markets and private investment can also inhibit or even completely block fundamental scientific and technological revolutions. The main reason is that such revolutions typically have a long “gestation period” during which there is little or no return on investment. This applies most of all to fundamental scientific research, where it sometimes takes decades before practical applications emerge. Who will finance this fundamental research? And according to what criteria? The “Darwinian” environment of free market competition favors a value system focused on profits and practical benefits, and not on the acquisition of human knowledge per se. Although the notion of “ivory tower research” has negative connotations today, history shows that the greatest technological revolutions often derive from research pursued with no other goal than the search for truth. What is the use of a fundamental discovery in science? We would answer with Benjamin Franklin’s famous retort: “What is the use of a new-born child?”

Evidently it is the state which must take the main responsibility for supporting fundamental scientific research and long-term scientific projects which have no obvious commercial applications in the short term. The state must also make sure that sufficient private and public investment goes into maintaining the high-technology industrial base, manpower and R&D capabilities needed to realize the technological possibilities opened up by scientific discoveries. At the same time we cannot overlook the risks of over-bureaucratization and lack of accountability which are often connected with large-scale state support of scientific research.

Going to a deeper level, we saw in Chapter 5 that fundamental discoveries, though not individually predictable, can nevertheless be ordered in a coherent way in terms of what Lyndon LaRouche has called “higher hypotheses”. Without going into this matter further here, the essential implication is that “strongly nonlinear” trajectories of development via successive scientific and technological revolutions are in principle capable of being projected into the future in a certain qualitative way; and policies directed toward the long-term evolution of the economy can be designed accordingly. Naturally, this does not imply that the state can “decree” scientific discoveries! The framing of higher hypotheses – conceptions which can function as “fountains of discovery” -- is the task of great scientists and philosophers, who must have freedom to pursue truth in a fully independent way.

From all these considerations we are inclined to conclude that there is no single “best” economic system in the sense of some fixed set of rules or form of organization of economic activity – at least at the present stage of development of human society. When we discuss the struggle between the so-called “American System” and “British System” below, the term “system” signifies philosophies or schools of thought about economics, rather than economic systems in the formal sense. Above all, success in physical-economic development is a function of the economic philosophy that guides decision-making, and of the quality of the institutions, leadership and culture of a nation. This view is strengthened by the study of history, especially the successes in physical-economic development in nations having widely different institutional structures. We shall examine examples of this below.

Supplements to Chapter 13

Globalization vs. national economy – a political note

The concepts of a sovereign nation-state and of a national economy are inseparably linked to each other. A nation-state is first of all defined by a population of citizens and a well-defined territory over which the government of the nation-state has jurisdiction. This jurisdiction extends to economic activities taking place on the territory of the given nation, and automatically implies the distinction between an “internal” economy, belonging to the jurisdiction of the state, and “external” economic activity involving foreign nations or other entities which are outside that jurisdiction. So far this is only a legalistic notion. But if the people of a sovereign nation are united by factors of language, culture, history and participation in interconnected economic activities, and if the government takes responsibility for the economic well-being of the citizens, then it is natural that the economic activities on the nation’s territory will evolve into a unity, a distinct economic organism. That is a national economy. On the level of the global economy, national economies behave as distinct individuals interacting with each other via trade and various forms of cooperation.

Traditionally, even economists of the liberal school have recognized the institution of the sovereign nation-state and the existence of national economies, merely arguing for a minimum of state interference into the economy. But in the recent period a much more radical ideology of neoliberalism has emerged in parallel with the unprecedented liberalization of trade and especially financial flows on a global level. It is claimed that the way to prosperity is for each nation to integrate itself totally into a single world economy, giving up the sovereign power of a government to manage domestic economic and financial affairs. De facto this means transforming nations into mere territories within a global economic “empire” dominated by multinational corporations and financial groups. Such a scenario would make the institution of the sovereign nation-state virtually irrelevant. To the extent national governments give up the ability to manage their economies in the interest of the wellbeing of their citizens, they degrade themselves to the status of local colonial administrations. Democracy then loses a great deal of its meaning: people can still elect their government, but the government has no power to intervene into the economic processes that shape their daily lives.

(This situation has already arisen to a certain extent in the European Union. Here many of the sovereign decision-making powers that formerly belonged to the governments of the individual member-states have been transferred to the centralized EU bureaucracy, but without the EU itself having become a single unified, sovereign and democratic nation like the United States. This has created an understandable reaction among populations who feel that they have been de facto disenfranchised.)

Beginning no later than the 1970s, and greatly intensifying into the 1980s and 1990s, advocates of neoliberal policies launched a barrage of arguments to the effect that national economies had now become obsolete and that their continued existence was an obstacle to growth and progress. The rise of neoliberal economic ideology has gone hand-in-hand with political attacks on the institution of the sovereign nation-state, which has been the norm of international relations since the Peace of Westphalia in the middle of the 17th century. The principle of territorial sovereignty and the principle of non-intervention into the internal affairs of states have increasingly been “relativized” in favor of a new interventionism. The new norm of international relations would legitimize outside interventions (including military interventions) in order to defend local populations against threats to their security and human rights.  Although such interventions can be morally justified as a last resort, e.g. in cases of genocide, the concept of humanitarian intervention can easily also serve as a disguise for imposing a new form of “liberal imperialism”. 

It is not the purpose of this book to deal with such political issues, nor do we wish to suggest that national sovereignty is the solution to all problems -- which is obviously not true. Like all instruments, the institution of the sovereign nation-state can be used for good or for evil. But it is essential to stress what we pointed out at the outset: Throughout the last 200 years, up to today, only sovereign nations have been able to develop and maintain powerful industrial economies. In every case, state intervention into the economic process was essential in promoting physical-economic development and providing a favorable environment for independent private entrepreneurs, businesses and markets to fulfill their essential economic functions.  

One of the most revealing examples of the economic role of the state is the history of the United States of America, which has ironically been a main promoter of neoliberal globalization in the recent period. The rise of the economic power of the U.S. was accomplished on the basis of what was known in the 19th century as the “American System of National Economy”. Historically speaking the original “American System” -- also known as the “National System” -- has been the most successful embodiment of the principles of physical economy in the practice of nations so far. It was adopted by many nations in the course of the 19th century and became the basis for the emergence of powerful industrial economies such as that of Germany. Despite the recent onslaught of neoliberalism, the influence of the American System continues today, and can be seen for example in the economic policies adopted by China, India and a number of other developing nations. We shall discuss the “American System” and its battle with the so-called “British System” in Chapter 14 below.

In recent decades the policies of United States governments have gone against the classical “American System”. This has resulted in enormous damage to American society and its real economy. The U.S. today is hardly a model to be followed by other nations. Worse: recent U.S. policies have contributed to weakening and even destabilization of nations whose development had been oriented to the classical “American System”.

The role of the state in the U.S. economy today

An irony of this situation is that despite the promotion of neoliberal policies, the U.S. itself still retains many essential features of a national economy. The U.S. still maintains an all-round industrial base, although the dependence upon imports and “outsourcing” of production has greatly increased. The U.S. government continues to play an enormous role in the economy, not least of all in supporting advanced capabilities in science and technology which are crucial to maintaining that nation’s status as an economic and military superpower.

Let us briefly review some relevant facts concerning the U.S. government’s economic role, keeping in mind the list of instruments identified above. As an exception we shall include some monetary-based statistics on government spending which reflect the extent of influence of the government on the economic process. (“Government” refers here to both the national and the state governments of the U.S.)

Overall spending: The total of federal, state and local government spending -- $6,269 billion in 2014 -- is larger than the total income of all U.S. households. Total spending of the federal government was $3,650 billion, or about $30,000 per U.S. household. 45% of this was for social programs. The rest includes $677 billion for national defense, $93 billion for transportation and over $135 billion for scientific research and R&D activities. State and local government spending was $2,619 billion. The total of federal, state and local government spending is therefore an enormous economic factor.
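As a quick consistency check on the figures just cited, the arithmetic can be sketched as follows. The household count of roughly 122 million is our own outside assumption, used only to reproduce the per-household figure; it does not come from the text.

```python
# Rough consistency check of the 2014 U.S. government spending figures cited above.
# The ~122 million household count is an assumed figure, not stated in the text.
federal_spending = 3650e9       # total federal spending, USD
state_local_spending = 2619e9   # state and local spending, USD
households = 122e6              # approximate number of U.S. households (assumption)

total = federal_spending + state_local_spending   # = $6,269 billion, as cited
per_household = federal_spending / households     # ≈ $30,000, as cited

print(f"Total government spending: ${total / 1e9:,.0f} billion")
print(f"Federal spending per household: ${per_household:,.0f}")
```

The two printed values match the $6,269 billion total and the approximately $30,000 per household stated above.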

Work force: The U.S. government is by far the largest single employer in the country. Government agencies currently employ 22 million persons directly, or about 15% of the total workforce. In addition, through its large role in health care the government is indirectly responsible for over half of the employment in the health sector, which is about 11 million persons.

Education: 90% of U.S. children are educated in public (i.e. government owned and operated) schools. About 50% of all colleges and universities are publicly owned.

Health care: 22% of all U.S. hospitals are state institutions. Federal, state and local governments of the U.S. finance about 44% of all costs for health care directly, plus another 20% indirectly, through insurance subsidies.

Scientific research and development: Of the total R&D expenditures in the United States, about a third -- $135 billion in 2014 -- is made directly by the U.S. federal government. Of this, about $65 billion per year goes to defense-related R&D, $30 billion to the National Institutes of Health (research grants to 300,000 researchers at 2,500 universities), $12 billion to the Department of Energy, $12 billion to the space agency NASA, $6 billion to the National Science Foundation, and smaller amounts to other agencies.

Support for high-technology industry via defense contracts: The gigantic military budget of the United States -- about $500 billion per year “base line” spending, not counting foreign interventions -- plays a decisive role in supporting high-technology industries and advanced scientific research and development. For example: in addition to the $65 billion invested in defense-related R&D, every year approximately $13 billion in defense contracts go to the aerospace company Lockheed, $10 billion to Boeing, $5.6 billion to Raytheon, $4.8 billion to General Dynamics, $4.2 billion to Northrop Grumman, $3.7 billion to United Technologies, etc.

General support for the private sector: Currently the U.S. federal government has over 2000 different programs for assisting private businesses. This includes about $100 billion each year of subsidies in the form of reduced taxes, loan guarantees, interest-free loans etc.

Infrastructure - Roads: practically the entirety of the U.S. road system is government owned and was built with government funds. That includes 6.4 million kilometers of public roads and highways, and roughly 607,380 bridges.

Canals and waterways: The U.S. has 46,600 km of navigable public waterways (including rivers), of which 30,000 km are maintained by the U.S. Army Corps of Engineers, a government institution.

Water supplies: 85% of the U.S. population is supplied by public (state-owned) water companies via 1.5 million km of water distribution pipelines. The approximately 84,000 dams are nearly all publicly built and owned.

Electric power: about half of the U.S. electric grid -- which consists of approximately 273,000 km of high-voltage electric transmission lines and almost 10 million km of lower-voltage distribution lines -- is owned and operated by public utilities and cooperatives. These cooperatives, mainly in rural areas, were created on the basis of loans from the U.S. government Rural Electrification Administration. At present about 1/3 of the total electricity consumption in the U.S. is provided by state-owned (public) utilities and nonprofit cooperatives, and the remainder by private electricity companies. The system is strictly government-regulated.

Railroads: practically all of the 224,792 km of railroads of the United States were built by private companies, but with considerable direct and indirect support from the government. This is exemplified historically by the construction of the first transcontinental railroad as a national project under President Lincoln’s “Pacific Railroad Acts” of 1863-1866. Today nearly all passenger rail transport between U.S. cities is run by the National Railroad Passenger Corporation, a publicly-funded service entity created by the U.S. Congress in 1970. Freight transport continues to be in private hands. Railroad operations are strongly regulated by the U.S. Surface Transportation Board and the Federal Railroad Administration.

Public transport systems: Nearly all of the urban mass transport systems of the United States are publicly owned and were built on the basis of public investments.

Infrastructure spending overall: The U.S. government presently invests approximately $300 billion per year in transport and water infrastructure, including urban transport. Given the poor state of much basic infrastructure – including an increasing number of collapsing bridges – it is likely that infrastructure spending will grow rapidly in the future.  
 

Chapter 14  Economic policy and the battle of systems

14.1 The “American System” versus the “British System”

The most successful embodiment of the principles of physical economy in the practice of nations so far is the economic philosophy and practice which the famous 19th century German economist Friedrich List called the “American System of Political Economy”, also referred to by List and others as the “National System”. (In the following we shall use the terms “American System” and “National System” synonymously.) The American System was first realized in the United States but went on to play a decisive role in the development of nations around the world. For this reason we shall devote a great deal of attention to it in this and the following sections. We begin with some historical background.

Throughout the 19th century, and continuing in various forms until today, economic policy and practice has been the battleground of a struggle between two opposing schools of thought, traditionally identified as the American (or National) System, just referred to, and the so-called “British System”. The most famous proponents of the British System are Adam Smith, David Ricardo and Thomas Malthus, while the American System in its original sense is associated most closely with Alexander Hamilton, Henry Clay, Henry C. Carey, and the influential German economist Friedrich List.

The essential policy elements of the American System were set forth already in the first years of the U.S. republic in three memoranda by Alexander Hamilton, its first Secretary of the Treasury: “Report on Manufactures”, “Report on the Public Credit” and “Report on a National Bank”. The theme of the struggle between these two systems was most famously articulated by Henry C. Carey in his 1851 book on the American system, entitled “The Harmony of Interests”.

The American System aims at the development of nations as economic unities, i.e. of national economies, through government policies which foster the all-round development of the nation’s productive forces – of its population, agriculture and industry – with emphasis on scientific and technological progress and what the founders of the American System called “internal improvements”. The latter refers mainly to what we today call basic economic infrastructure (transport, energy, water and communications). 

By contrast the British System is oriented primarily to trade, finance and market competition, seeing these as the main sources of wealth and drivers of economic progress, advocating the unhindered movement of money and goods over as much of the world as possible. Historically the British System is inseparable from the practices of the British Empire and the earlier trading empire of Venice, including the dominant role of a financial oligarchy. The British System has traditionally been characterized as “cosmopolitan” in contrast to the nation-state-oriented American System.

Most often the American System has been identified with protectionism, and the British System with free trade. Such characterizations are misleading, however, because the use of trade barriers to protect domestic industry (for example) is only an instrument of economic policy, and by itself does not constitute the essential core of the American System. Henry Carey, for example, saw as an ultimate future goal the establishment of free trade in the form of an exchange of surplus products between industrially developed nations, whose range of production should be limited only by physical circumstances such as the distribution of natural resources. With this conception in mind he wrote: “Of the advantage of perfect free trade there can be no doubt … But free trade can be successfully administered only after an apprenticeship of [tariff] protection. … Interference with trade is excusable only on ground of self-protection.” In fact, in the early period of its industrialization around the end of the 17th century, England itself had an elaborate system of import tariffs to protect its industry, particularly its textile industry, from foreign imports. Great Britain also maintained a certain level of protection throughout the 19th century. The doctrine of pure “free trade”, associated with Adam Smith and David Ricardo, was most often used as a weapon of economic warfare.

The British System is better typified by Adam Smith’s notion of the “invisible hand” and Ricardo’s principles of “comparative advantage” and the so-called “Iron Law of Wages”. According to Smith, markets are self-regulated by the striving for maximum profit among competing interests, and governments should not interfere. The principle of comparative advantage dictates that a product should be produced in the country or region where production is cheapest, and sold where the price is highest. The Iron Law of Wages asserts that real wages always tend, in the long run, toward the minimum necessary to sustain the life of the worker.

By contrast the American System emphasizes both individual entrepreneurship (free enterprise) and a major economic role of the state, for example in regulating currency, trade and markets, fostering domestic industry, promoting infrastructural development and guaranteeing the availability of adequate credit for productive investment. The American System aims at developing all essential branches of agriculture, mining and industry rather than specializing in a few areas that might be advantageous for trade.

In many ways the opposition between the American System and British System goes back long before the founding of the United States, to an internal battle within England itself. What is now called the British system became hegemonic in England after a long and bitter struggle. The opposing tendency in a sense “emigrated” to North America and found its home in the young American republic.

One of the key issues in the battle between the two philosophies, in the early history of the American colonies and of the United States itself, was the desirability of industrialization. Advocates of the British system argued that America, with its gigantic land area and abundance of fertile soil, was destined to remain an agricultural producer. In more modern language, their argument was that America’s “comparative advantage” lay in the area of agriculture, while England, with its already-developed industry but relatively small land area, was destined to export manufactured goods and import agricultural goods. Since manufactured goods were originally more expensive to produce in America than in England, the development of manufacturing could only occur under the protection of import tariffs. Advocates of the American system strove to create a powerful, industrialized national economy able to match or overtake England and the other European nations. This policy was put forward very clearly by Alexander Hamilton in his 1791 “Report on Manufactures”.

Despite the victory over England in the War of Independence, the fundamental conflict between the two systems continued within the U.S. itself, and came to a climax in the American Civil War. The Civil War took the form of a conflict between the industrial Northern states, where the “National System” flourished, and the Southern states, which had inherited from the British colonial period an economy based mainly on plantations, slavery and the export of agricultural products, above all cotton and tobacco. Up to the time of the Civil War the Southern states were the main suppliers of cotton to the British textile industry. Not surprisingly these states strongly favored “free trade”, in opposition to the protectionist policies of the Northern states. Henry Carey openly blamed the British System and British policies for the U.S. Civil War, criticizing the earlier failure to implement the industrially-oriented “American System” in the Southern states:

"To British free trade it is, as I have shown, that we stand indebted for the present Civil War. Had our legislation been of the kind which was needed for giving effect to the Declaration of Independence, that great hill region of the South, one of the richest, if not absolutely the richest in the world, would long since have been filled with furnaces and factories, the laborers in which would have been free men, women, and children, white and black, and the several portions of the Union would have been linked together by hooks of steel that would have set at defiance every effort of the ‘wealthy capitalists’ of England for bringing about a separation.”

The Union victory in the Civil War and the emancipation of the slaves was a victory of the American System and greatly encouraged the spread of American System ideas to other countries (see below).

14.2 Inventions and the spirit of the American System

It is extremely important to understand that the American System is not an economic system in the sense the term is normally used today. It is not a fixed form of organization, but rather a set of principles directed toward the development of national economies as sovereign “economic individuals”. We would even go so far as to say – although the statement may be controversial -- that the American System could in principle be realized in an economy of a socialist type, assuming that sufficient freedom is provided for the independent creative activity of individuals.  

Equally important is the recognition that the American System has from its very beginnings been inseparably connected with a special enthusiasm for invention and discovery, typified by the United States in its best periods.

In fact, the United States has distinguished itself historically not only by many famous inventions, but also by the skill with which new inventions were applied in practical life, often creating entire new branches of industry. Among the American inventors and entrepreneurs who transformed inventions into large-scale practice were: Benjamin Franklin (lightning rod, bifocal glasses), Eli Whitney (the cotton gin), Robert Fulton (first commercial steamboat), Joseph Henry and Samuel Morse (long-distance telegraphy), Alexander Graham Bell (telephone), Thomas Edison (electric light bulb), Nikola Tesla (alternating current), the Wright brothers (first successful powered airplane), Henry Ford (large-scale assembly-line production) and Robert Goddard (first liquid-fuel rockets). This American prowess is all the more remarkable because theoretical science was generally more advanced in Europe.

The special importance given to inventions and the practical application of scientific knowledge in the original American System is well illustrated by the activity of two great American leaders, Benjamin Franklin and Abraham Lincoln. Here we give two examples. 

Benjamin Franklin

Benjamin Franklin was the first physicist in America, and famous all over the world for his pioneering research in electricity. He was also a great organizer of science in America, founding its first important scientific institution, the American Philosophical Society in Philadelphia. Franklin was also a prodigious inventor. Among his many creations were the lightning rod, the glass harmonica (a musical instrument made of glass, not to be confused with the mouth harmonica), the Franklin stove, bifocal glasses and the flexible urinary catheter.

This same Benjamin Franklin was one of the founding fathers of the United States, one of its chief organizers in the political and economic fields. Franklin played a key role in the American Revolution and was a co-author of the Declaration of Independence and the Constitution.

One of Franklin’s crucial contributions to the development of the American colonies and of the later United States was his recommendation for the issuance of paper money and the expansion of credit by the governments of the colonies. Franklin’s “A Modest Enquiry into the Nature and Necessity of a Paper-Currency” (1729) is one of the classics of American System economics, and a document of decisive strategic importance for the entire future of what was to become the United States. In fact, the British imposition of a ban on the emission of paper currency by the colonies is generally regarded as a main cause of the American Revolution. After the establishment of the United States the spirit of Franklin’s monetary policy found expression in Alexander Hamilton’s founding of the first National Bank of the United States. Later, the issue of a new paper currency called the Greenback under President Abraham Lincoln was decisive for financing the victory of the Union in the U.S. Civil War.

In 1743, when North America was still part of the British empire, Benjamin Franklin published “A PROPOSAL for Promoting USEFUL KNOWLEDGE among the British Plantations in America”. Here “Plantations” signifies “colonies”. This document defined the aims of the American Philosophical Society, founded in the same year. Among its early members were the first four Presidents of the United States, George Washington, John Adams, Thomas Jefferson and James Madison, as well as Alexander Hamilton, Thomas Paine and the Marquis de La Fayette. Here is the main part of Franklin’s “Proposal”:

 “The first Drudgery of Settling new Colonies, which confines the Attention of People to mere Necessaries, is now pretty well over; and there are many in every Province [colony] in Circumstances that set them at Ease, and afford Leisure to cultivate the finer Arts and improve the common Stock of Knowledge. To such of these who are Men of Speculation, many Hints must from time to time arise, many Observations occur, which if well-examined, pursued and improved, might produce Discoveries to the Advantage of some or all of the British Plantations, or to the Benefit of Mankind in general.

“THAT these Members meet once a Month, or oftener, at their own Expense, to communicate to each other their Observations, Experiments, &c. [etc.], to receive, read and consider such Letters, Communications, or Queries as shall be sent from distant Members; to direct the Dispersing of Copies of such Communications as are valuable to other distant Members in order to procure their Sentiments thereupon, &c.

“THAT the Subjects of the Correspondence be:

“All new-discovered Plants, Herbs, Trees, Roots, &c. their Virtues, Uses, &c.

“Methods of Propagating them, and making such as are useful but particular to some Plantations [colonies] more general.

“Improvements of vegetable Juices, as Ciders, Wines, &c.

“New Methods of Curing or Preventing Diseases.

“All new-discovered Fossils in different Countries, as Mines, Minerals, Quarries, &c.

“New and useful Improvements in any Branch of Mathematicks.

“New Discoveries in Chemistry, such as Improvements in Distillation, Brewing, Assaying of Ores, &c.

“New Mechanical Inventions for saving Labour; as Mills, Carriages, &c. and for Raising and Conveying of Water, Draining of Meadows, &c.

“All new Arts, Trades, Manufactures, &c. that may be proposed or thought of.

“Surveys, Maps and Charts of particular Parts of the Sea-coasts, or Inland Countries; Course and Junction of Rivers and great Roads, Situation of Lakes and Mountains, Nature of the Soil and Productions, &c.

“New Methods of Improving the Breed of useful Animals; Introducing other Sorts from foreign Countries.

“New Improvements in Planting, Gardening, Clearing Land, &c.

“And all philosophical Experiments that let Light into the Nature of Things, tend to increase the Power of Man over Matter, and multiply the Conveniencies or Pleasures of Life… 

“at the End of every Year, Collections be made and printed of such Experiments, Discoveries, Improvements, &c. as may be thought of publick Advantage.”

Abraham Lincoln

Abraham Lincoln was one of the greatest figures in American history. As President he led the Union (essentially the Northern states) to victory in the Civil War, thereby saving the United States from being dismantled; and he abolished slavery with his 1862 “Emancipation Proclamation”. Less well known is the fact that Lincoln was a great supporter of the infrastructure development of the United States, especially of the railroad system. One of Lincoln’s foremost goals was the completion of the first Transcontinental Railroad. In 1862 he signed the Pacific Railway Act, which provided for extensive Federal financing and land grants for the construction of the transcontinental line.

In 1858 – 1859, two years before becoming President of the United States, Lincoln gave a series of popular lectures on "Discoveries and Inventions". The following short excerpts show how Lincoln thought about this subject and its fundamental importance for the development of mankind:

“All creation is a mine, and every man, a miner.

“The whole earth, and all within it, upon it, and round about it, including himself, in his physical, moral, and intellectual nature, and his susceptabilities, are the infinitely varied "leads" from which, man, from the first, was to dig out his destiny

“Man is not the only animal who labors; but he is the only one who improves his workmanship. This improvement, he effects by Discoveries, and Inventions. … The discovery of the properties of iron, and the making of iron tools, must have been among the earliest of important discoveries and inventions. We can scarcely conceive the possibility of making much of anything else, without the use of iron tools …

“The use of the wheel & axle, has been so long known, that it is difficult, without reflection, to estimate it at it's true value. The oldest recorded allusion to the wheel and axle is the mention of a "chariot" Genesis: 41-43. … The boat is indispensable to navigation. It is not probable that the philosophical principle upon which the use of the boat primarily depends – to wit, the principle, that any thing will float, which can not sink without displacing more than it's own weight of water -- was known, or even thought of, before the first boats were made…

“The plow, of very early origin; and reaping, and threshing, machines, of modern invention are, at this day, the principle improvements in agriculture. And even the oldest of these, the plow, could not have been conceived of, until a precedent conception had been caught, and put into practice -- I mean the conception, or idea, of substituting other forces in nature, for man's own muscular power. These other forces, as now used, are principally, the strength of animals, and the power of the wind, of running streams, and of steam….

“In speaking of running streams, as a motive power, I mean its application to mills and other machinery by means of the "water wheel" -- a thing now well known, and extensively used; but, of which, no mention is made in the Bible ... The advantageous use of Steam-power is, unquestionably, a modern discovery. And yet, as much as two thousand years ago the power of steam was not only observed, but an ingenious toy was actually made and put in motion by it, at Alexandria in Egypt …

In another part of his speech Lincoln describes how the invention of printing greatly accelerated the process of invention itself, and emancipated mankind:

“In the world's history, certain inventions and discoveries occurred, of peculiar value, on account of their great efficiency in facilitating all other inventions and discoveries. Of these were the arts of writing and of printing … When man was possessed of speech alone, the chances of invention, discovery, and improvement, were very limited; but by the introduction of each of these, they were greatly multiplied. When writing was invented, any important observation, likely to lead to a discovery, had at least a chance of being written down, and consequently, a better chance of never been forgotten; and of being seen, and reflected upon, by a much greater number of persons…  By this means the observation of a single individual might lead to an important invention, years, and even centuries after he was dead… The seeds of invention were more permanently preserved, and more widely sown. And yet, for the three thousand years during which printing remained undiscovered after writing was in use, it was only a small portion of the people who could write, or read writing; and consequently the field of invention, though much extended, still continued very limited.

“At length printing came. It gave ten thousand copies of any written matter, quite as cheaply as they were given before; and consequently a thousand minds were brought into the field where there was but one before. … I will venture to consider it, the true termination of that period called "the dark ages." Discoveries, inventions, and improvements followed rapidly, and have been increasing their rapidity ever since. ..

“It is very probable -- almost certain -- that the great mass of men, at that time, were utterly unconscious [i.e. unaware]  that their conditions, or their minds were capable of improvement. They not only looked upon the educated few as superior beings; but they supposed themselves to be naturally incapable of rising to equality. To emancipate the mind from this false and underestimate of itself, is the great task which printing came into the world to perform. It is difficult for us, now and here, to conceive how strong this slavery of the mind was; and how long it did, of necessity, take, to break it's shackles, and to get a habit of freedom of thought, established.

“It is, in this connection, a curious fact that a new country is most favorable -- almost necessary -- to the emancipation of thought, and the consequent advancement of civilization and the arts. ---  It is in this view that I have mentioned the discovery of America as an event greatly favoring and facilitating useful discoveries and inventions.”

Lincoln concludes:

“As Plato had for the immortality of the soul, so Young America … has a great passion -- a perfect rage -- for the ‘new’ … The great difference between Young America and Old Fogy [i.e. Britain], is the result of Discoveries, Inventions, and Improvements. ..”

14.3  The spread of the American System: examples of Germany, Russia and Japan

The story of how the American (or National) System came to play a decisive role in the development of nations around the world, is long and complicated. We can only mention a few highlights here, focusing on the cases of Germany, Russia and Japan. 

Friedrich List and the American System in Germany

The introduction of the American System into Germany is inseparably connected with the life and work of Friedrich List. Born in Southern Germany in 1789, List became a prominent writer and activist for the republican cause and political reforms in the state of Württemberg. Already in 1819 he proposed the elimination of tariff barriers between the various states in Germany and the establishment of a common tariff to protect the fledgling German industry against foreign goods. This proposal, further elaborated, became the famous “Zollverein” – the German tariff union – which provided a crucial basis for the spectacular development of German industry in the 19th century. In the meantime List was arrested for his political activity and forced into exile. While in Switzerland he got to know one of the heroes of the American Revolution, the Marquis de Lafayette, who advised him to emigrate with his family to the United States. There, with Lafayette’s help, List became acquainted with leading economists of the American school of economics, as well as political leaders including the Presidents John Quincy Adams and James Madison. List became involved with the U.S. mining industry and afterwards became the manager of a small railroad. Later he became more and more active as a publicist. In 1827 he published his “Outlines of American Political Economy”, which made him famous throughout the United States and remains a classic work of American System economics.

In 1830 List returned to Germany as an American citizen and became deeply involved with building up the first railroads in Germany. In 1833 he published a detailed plan for a German national railroad network. At the same time he pressed for the establishment of a German tariff union, the Zollverein. List was tireless in his publishing and organizing activities in favor of the adoption of American System principles in Germany. As an economist he made theoretical contributions focused on the notion of the productive forces of a nation, the economic significance of transport infrastructure and steam power, and other themes. In 1841 he published his famous works “The National System of Political Economy” (Das Nationale System der politischen Oekonomie) and “The German Railroad System” (Das Deutsche Eisenbahnsystem). Both were directed at developing a national economy in Germany and laying the basis for rapid industrial development. The railroad network was essential to this goal, not only as the backbone of an industrial economy but also as the means to bind together the diverse German states – which differed greatly from each other in terms of their history, political traditions and economic structures – into a single unified state.

List’s efforts bore fruit: the Zollverein was established in 1834, providing a unified domestic market and tariff protection to domestic industry. The German railroad system grew rapidly, and Germany was on the way to becoming one of the most powerful industrial economies in the world.

To give a flavor of List’s writing we quote two passages from his “National System of Political Economy”. The first refutes the liberal (today neoliberal) claim that state intervention is harmful to private enterprise:

“The State is not merely justified in imposing, but bound to impose, certain regulations and restrictions on commerce (which is in itself harmless) for the best interests of the nation. By prohibitions and protective duties it does not give directions to individuals how to employ their productive powers and capital (as the popular school sophistically alleges); it does not tell the one, 'You must invest your money in the building of a ship, or in the erection of a manufactory;' or the other, 'You must be a naval captain or a civil engineer;' it leaves it to the judgment of every individual how and where to invest his capital, or to what vocation he will devote himself. It merely says, 'It is to the advantage of our nation that we manufacture these or the other goods ourselves; but as by free competition with foreign countries we can never obtain possession of this advantage, we have imposed restrictions on that competition, so far as in our opinion is necessary, to give those among us who invest their capital in these new branches of industry, and those who devote their bodily and mental powers to them, the requisite guarantees that they shall not lose their capital and shall not miss their vocation in life; and further to stimulate foreigners to come over to our side with their productive powers. In this manner, it does not in the least degree restrain private industry; on the contrary, it secures to the personal, natural, and moneyed powers of the nation a greater and wider field of activity…

As a second example we quote from List’s refutation of Malthus’ “monstrous” theory of population, which is a philosophical cornerstone of the British System. In this context List points to the process of development by which Man successively overcomes the apparent “limits to growth”.

“Only by ignoring the cosmopolitical tendency of the productive powers [the tendency for the increase in productivity to spread from individual nations to the whole world - JT] could Malthus be led into the error of desiring to restrict the increase of population, or Chalmers and Torrens maintain more recently the strange idea that augmentation of capital and unrestricted production are evils the restriction of which the welfare of the community imperatively demands, or Sismondi declare that manufactures are things injurious to the community. [Ideas similar to those of the modern environmentalist movement – JT] Their theory in this case resembles Saturn, who devours his own children -- … It merely regards the present conditions of individual nations, and does not take into consideration the conditions of the whole globe and the future progress of mankind.

“It is not true that population increases in a larger proportion than production of the means of subsistence; it is at least foolish to assume such disproportion, or to attempt to prove it by artificial calculations or sophistical arguments … It is mere narrow-mindedness to consider the present extent of the productive forces as the test of how many persons could be supported on a given area of land. The savage, the hunter, and the fisherman, according to his own calculation, would not find room enough for one million persons, the shepherd not for ten millions, the raw agriculturist not for one hundred millions on the whole globe; and yet two hundred millions are living at present in Europe alone. The culture of the potato and of food-yielding plants, and the more recent improvements made in agriculture generally, have increased tenfold the productive powers of the human race for the creation of the means of subsistence. In the Middle Ages the yield of wheat of an acre of land in England was fourfold, to-day it is ten to twenty fold, and in addition to that five times more land is cultivated … Who will venture to set further limits to the discoveries, inventions, and improvements of the human race? Agricultural chemistry is still in its infancy; who can tell that to-morrow, by means of a new invention or discovery, the produce of the soil may not be increased five or ten fold? We already possess, in the artesian well, the means of converting unfertile wastes into rich corn fields; and what unknown forces may not yet be hidden in the interior of the earth? Let us merely suppose that through a new discovery we were enabled to produce heat everywhere very cheaply and without the aid of the fuels at present known: what spaces of land could thus be utilized for cultivation, and in what an incalculable degree would the yield of a given area of land be increased? If Malthus' doctrine … is simply horrible. 
It seeks to destroy a desire which nature uses as the most active means for inciting men to exert body and mind, and to awaken and support their nobler feelings -- a desire to which humanity for the greater part owes its progress. It would elevate the most heartless egotism to the position of a law; it requires us to close our hearts against the starving man, because if we hand him food and drink, another might starve in his place in thirty years' time. It substitutes cold calculation for sympathy. This doctrine tends to convert the hearts of men into stones. But what could be finally expected of a nation whose citizens should carry stones instead of hearts in their bosoms? What else than the total destruction of all morality, and with it of all productive forces, and therefore of all the wealth, civilisation, and power of the nation?”

Sergei Witte and the American System in Russia

The American System had a profound impact on the development of Russia, most prominently via the influence of Count Sergei Witte, who in many ways can be seen as the “Friedrich List of Russia”.

Born in 1849 to a noble family, Witte was originally trained in mathematics and physics, but ended up pursuing a career in the Ukrainian and Russian railroads. After working for 20 years in various positions, he was appointed in 1889 as Director of Railway Affairs in the Russian Finance Ministry. From 1892 to 1903 he served as Minister of Finance of Russia, and from 1903 to 1906 as head of the Council of Ministers. Witte was thus one of the most powerful and influential figures in Russia.

As a railroad administrator, and later as Minister of Railroads and Minister of Finance, Witte presided over a gigantic program of railroad construction aimed at laying the basis for rapid industrial development. Among other things he played a key role in the realization of the Trans-Siberian Railroad project. The length of Russia’s railroad network grew from 22,860 km in 1880, to 30,600 km in 1890 and 50,900 km in 1905.
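The railroad figures just cited can be translated into an average rate of expansion with a short calculation. The following sketch is purely illustrative (it is not part of the historical record), showing the compound annual growth rate implied by the three data points:

```python
# Illustrative only: compound annual growth rate (CAGR) implied by the
# railroad-network lengths cited above: 22,860 km (1880), 30,600 km (1890),
# 50,900 km (1905).

def cagr(start_km: float, end_km: float, years: int) -> float:
    """Average yearly growth rate over the given number of years."""
    return (end_km / start_km) ** (1.0 / years) - 1.0

growth_1880_1890 = cagr(22860, 30600, 10)   # about 3.0% per year
growth_1890_1905 = cagr(30600, 50900, 15)   # about 3.5% per year

print(f"1880-1890: {growth_1880_1890:.1%} per year")
print(f"1890-1905: {growth_1890_1905:.1%} per year")
```

In other words, the network grew at a sustained rate of roughly 3 percent or more per year over a quarter of a century, the pace even accelerating slightly in the Witte years.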

In his economic policies Witte not only followed Friedrich List’s National System, but worked to make List’s ideas known throughout Russia. In 1889, the same year he became Director of Railway Affairs, Witte published an article entitled “National Economy and Friedrich List”, introducing Russian readers to the life and work of Friedrich List. Besides a short biography of List himself, the article consists mostly of excerpts from List’s book “The National System of Political Economy”, with extensive comments by Witte. Later, in 1912, Witte published a voluminous series of lectures, accessible to the educated public, systematically laying out all the areas of knowledge necessary to develop and run a powerful national economy. Much attention is given to both the physical and the mental side of productive activity, to the economic function of human creativity, to the significance of infrastructure and education, and to the role of the state in economic development. In fact, Witte’s lectures belong to the classics of physical economy. Here we merely quote two paragraphs which reflect Witte’s thinking on the economic significance of human knowledge:

“In the same way as civilized Man liberated himself from the influence of Nature in the tropical zones, Man has liberated himself everywhere where he has developed or is developing civilized societies. Progress is nothing else than the liberation of Man from the domination of Nature. Man reduces his dependence on Nature by means of a whole series of tools and devices, which make Nature serve Man’s aims better and better …

“Knowledge is one of the most important forms of capital. The whole history of the productive process provides irrefutable evidence of the outstanding role of this form of capital. It is impossible to imagine even a single form of capital – basic tools, instruments, machines, production equipment – whose creation did not involve the study of some natural phenomenon, which in turn provided the first idea for an invention.  We can say without exaggeration that every machine, every industrial chemical process, is nothing more than the material realization of some sort of scientific knowledge. The skill of a worker, the talent of a managing engineer or an entrepreneur all are the result of the work of the mind, which is the fruit of a capital widely distributed throughout the population – knowledge.”

It should not be surprising that one of Witte’s advisors was the great Russian scientist Dmitri Mendeleyev. Although best known for his discovery of the periodic table, Mendeleyev was also one of Russia’s leading economic thinkers and a strong supporter of Witte’s policies for the promotion and protection of Russian industry.

The American System in Japan

From the beginning of the Meiji Restoration in 1868 onward, Japan’s industrial development was profoundly influenced by the ideas of the American System.

The work of Henry Carey, American System proponent and chief economic advisor to Lincoln, was introduced in Japan no later than 1871 by a member of the Society for the Japanese Economy, Yoshihazu Wakayama, in a famous book on protective tariffs. In 1884 another member of the society, Tsuyoshi Inukai, published the first Japanese translation of Carey’s “Principles of Social Science”.

More importantly, the ideas of the American System were transmitted directly to the Japanese government via a personal friend of Henry Carey, E. Peshine Smith, who was the first foreigner to hold an official Japanese government position. As an advisor on international law to the Japanese Ministry of Foreign Affairs from 1871 to 1876, Smith exerted a powerful influence not only on Japanese foreign policy, but especially also on Japan’s policies for industrialization. On his advice a Ministry of Home Affairs was established, which included an Industrial Promotion Board.

Peshine Smith was a devoted follower of Carey, well versed in economics, and an avowed enemy of the “British System”. His “Manual of Political Economy”, published in the United States in 1853, features a strong refutation of Malthus’ so-called Law of Population. Smith particularly emphasized the notion of progress which is essential to the American System:

“[The author] owes whatever his own study of the subject may have effected, to his having been put upon the path and furnished with the clue, in the writings of Mr. Carey …. The object of preparing this Manual was, to present to his countrymen in a compact form, the principles of what he thinks may justly be called the American System of Political Economy …

“The theory of Mr. Carey reconciles all the facts, and explains them all. It is possible for food to increase more rapidly than population, when men begin with the inferior soils, and, as their numbers grow, pass to those of superior fertility. An increasing proportion of each community is thus released from direct employment in the raising of food, and enabled to apply its energies to the preparation of machinery and the improvement of processes. These give the ability to the husbandman to reap a larger return from his old soil, and to overcome more readily and effectually the difficulties which attend his subduing the new and richer lands. The result is necessarily a larger yield, in recompense of the same amount of labour, a further increase in the surplus of food, and the setting free of more labourers from the farm, to recruit the workshops and to undertake fresh branches of industry.

“Upon this theory we can comprehend the progress of civilization; it is the foreseen and certain result of a permanent law… Russia, France, the States of the Zollverein, and the other countries who adhered to the policy which promotes domestic production, and secures what alone can justly be called free trade -- that which accords with human nature and human inclinations -- are rapidly advancing in wealth and power. Turkey and Portugal, the nations which, possessing nominal independence, have been most submissive to the British policy, and Ireland, which has been coerced, are the most backward nations of Europe, and have now less power to resist than they had a generation ago.”

Another influential promoter of the American System was “Japan’s Friedrich List”, the economist Sadamasu Ohshima, whose Japanese translation of List’s “National System of Political Economy” was published in 1889 and again in 1895.
 

Chapter 15  Examples of physical-economic development in the postwar period

In this chapter we briefly examine five cases of physical-economic development which are relatively close in time to the present period. These give a preliminary glimpse of the “National System” in action. It is useful to compare these examples because they show very different pathways in institutional terms – with respect to the role of government – but are similar in terms of physical economy. They illustrate different pathways of realization of the same principles, under very different circumstances. A further benefit is that these examples provide a useful contrast to the economic policies adopted by most governments in recent decades.

Here the reader should not expect to find comprehensive analyses of the developmental trajectories involved. Unfortunately the method of analysis of a nonlinearly evolving economic organism, described in Part I of this book, has so far never been applied in a scientifically rigorous way to any concrete historical case. That is a task for the future. Nevertheless, examining available accounts and data from the standpoint of physical economy provides important insights into how successful development occurs, and how sovereign nations can achieve success by a proper use of the instruments described in Chapter 13.

15.1  The United States

Since the Second World War the United States has been the world’s leading economy in terms of size, productive power, and influence on the world economy as a whole. Only in the recent period has this begun to change, with the rise of China and the long-term effects of anti-industrial economic policies in the U.S. itself. The roots of the economic power of the U.S. go back to the very beginnings of the American System and particularly the rapid industrial development which followed the Civil War. But it was the World War II mobilization which catapulted the United States to the status of the world’s economic superpower.  

Prelude: the New Deal

An important foundation for the wartime economic mobilization and the subsequent postwar boom in the United States had already been laid by the gigantic public works projects carried out under the pre-war “New Deal” (1933-1939) – the government-financed program, launched under U.S. President Roosevelt, to overcome the economic depression and mass unemployment that followed the stock market collapse of 1929. Under the New Deal a whole set of government bodies was set up to finance and organize public works projects. It is worth mentioning some of the main New Deal programs here, because the physical-economic impacts as well as the political effects of the New Deal extended beyond the 1930s into the entire succeeding period, in some respects even until today.

As part of the New Deal, the Civilian Conservation Corps (CCC) organized road construction, reforestation and development of national parks, employing a total of over 1 million persons. The Civil Works Administration (CWA) created temporary jobs for 4 million people, improving or constructing buildings and bridges, installing 12 million feet of sewer pipe, and building or modernizing 255,000 miles of roads, 40,000 schools, and nearly 1,000 airports. The Public Works Administration (PWA) funded and administered the construction of more than 34,000 projects, including 11,428 street and highway projects, airports, bridges and tunnels, large electricity-generating dams, electrification of railroads, and construction of new hospitals and 7,488 new school buildings.

The Tennessee Valley Authority (TVA) was charged with planning and carrying out the integrated development of the whole Tennessee River basin, an area of 106,200 sq km. On the infrastructure side the TVA effort focused on water control, the construction of 16 hydroelectric dams, and the electrification of the region. (The gigantic Hoover Dam on the Colorado River, one of the greatest single engineering projects in American history, was a separate federal project of the same era.) The TVA effort also included disease elimination (30% of the population of the region was infected with malaria in 1933), the introduction of fertilizers and modern agricultural methods, the replanting of forests, etc.

The Rural Electrification Administration (REA) provided support for the electrification of the rural areas of the U.S., raising the proportion of rural households with electricity from 11% in 1934 to nearly 50% in 1942. To implement these goals the REA made long-term loans to state and local governments, to farmers’ cooperatives and to nonprofit organizations for electrification and the extension of telephone service to rural areas.

A large part of the financing of New Deal projects was provided by the Reconstruction Finance Corporation (RFC), a U.S. government corporation created in 1932 which gave financial support to state and local governments and made loans to banks, railroads, mortgage associations and other businesses. An important additional factor in the New Deal was legislation to stabilize the financial system and to prevent a repetition of the catastrophic collapse of 1929. One of these measures was the 1933 Glass-Steagall Act, which imposed strict regulation on banking activity, including the separation of investment and commercial banking, and established the Federal Deposit Insurance Corporation (FDIC) to insure bank deposits.

(Unfortunately, in 1999, after a long assault by promoters of neoliberal policies, essential provisions of the Glass-Steagall Act were repealed, permitting the growth of a gigantic bubble on the deregulated financial markets and leading finally to the great collapse of 2007-2008.)

The war mobilization

Despite the New Deal the U.S. economy remained sluggish up to the time of the World War II mobilization. This was due in part to the lack of a clear perspective, the depressed mood of the population and the related low expectations of private investors. All this changed abruptly with the all-out mobilization in World War II. The turning point came on December 7, 1941, when Japanese carrier aircraft attacked the U.S. fleet at Pearl Harbor. Rather than attempt to describe the whole process of the economic mobilization -- which would go beyond the limits of this book -- we quote here from some published accounts which provide useful insights into the physical-economic nature of the mobilization and how it was organized:

“Pearl Harbor shocked the nation into a realization of the size and speed of the needs of an all-out war, of the overriding necessity of what was in effect central control of production, consumption, scarce raw materials and scarce skills. From December 1941 on, there was no further disagreement. Though there was some grumbling about rationing of civilian goods, there was no longer any serious pretense that such action could be avoided; nor was there, for example, any debate over the extent of the demands that would be made on the pool of American industrial talent that was concentrated in and around the automobile industry. By that decision to go all out the whole of American industry with its skilled and adaptable labor force, its enormous capital equipment, its technology and its managerial ‘know-how’ was made available for the production of the sinews of war. The greatest integration of industrial effort the world has ever known resulted from the American resolution to put its industrial strength wholly into the war effort. It was possible to construct a single organization … that could allocate the major part of the raw materials, the labor, the industrial and agricultural capacity, and the transport …”
(Winfield Riefler, “Our Economic Contribution to Victory”, Foreign Affairs, October 1947)

“’Powerful enemies must be out-fought and out-produced,’ President Franklin Roosevelt told Congress and his countrymen less than a month after Pearl Harbor… ‘We must out-produce them overwhelmingly, so that there can be no question of our ability to provide a crushing superiority of equipment in any theatre of the world war.’

“Two years earlier, America’s military preparedness was not that of a nation expecting to go to war. In 1939, the United States Army ranked thirty-ninth in the world … In the wake of Pearl Harbor, the President set staggering goals for the nation’s factories: 60,000 aircraft in 1942 and 125,000 in 1943; 120,000 tanks in the same time period and 55,000 antiaircraft guns. In an attempt to coordinate government war agencies Roosevelt created the War Production Board in 1942 and later in 1943 the Office of War Mobilization …  

“War production profoundly changed American industry. Companies already engaged in defense work expanded. Others, like the automobile industry, were transformed completely. In 1941, more than three million cars were manufactured in the United States. Only 139 more were made during the entire war. Instead, Chrysler made fuselages. General Motors made airplane engines, guns, trucks and tanks. Packard made Rolls-Royce engines for the British air force. And at its vast Willow Run plant in Ypsilanti, Michigan, the Ford Motor Company performed something like a miracle 24 hours a day. The average Ford car had some 15,000 parts. The B-24 Liberator long-range bomber had 1,550,000. One came off the line every 63 minutes. … In 1944 alone, the United States built more planes than the Japanese did from 1939 to 1945. By the end of the war, more than half of all industrial production in the world would take place in the United States.

“While 16 million men and women marched to war, 24 million more moved in search of defense jobs, often for more pay than they previously had ever earned. Eight million women stepped into the work force and ethnic groups such as African Americans and Latinos found job opportunities as never before. … With the economy booming, Americans felt their lives improving.”
(From an account by the U.S. Public Broadcasting System)

The success of the economic mobilization depended to a significant extent on techniques of mass production and “scientific management” which had already been developed in and around the U.S. automobile industry, and which had led to a much higher labor productivity than in the European countries. The powerful mass-production capability of U.S. industry involved much more than assembly lines per se; it also rested on the standardization of parts and machinery and on corresponding approaches to designing the final products. It depended on a huge, pre-existing base of heavy industries and a complex logistical apparatus to supply the production. The rapid conversion of this apparatus from civilian to war production demonstrated remarkable abilities for improvisation and organizational talent on the part of U.S. industry and government. The wartime U.S. economy became a single, gigantic production machine.

In this context it is important to keep in mind that nearly 100% of the civilian and war production in the United States was done by private companies. Apart from government ownership of part of the U.S. infrastructure, state-owned industries have rarely existed in the United States. The success of the war mobilization rested on a unique symbiosis of the role of government and of private enterprise which became a “secret” of the U.S. economy – especially of its high-tech sectors -- in the postwar period. 

“The period between 1940 and 1944 saw the greatest expansion of industrial production that the U.S. had ever experienced. The volume of output was increasing at an annual rate of over 15 percent between these years … From 1940 to 1944 total output of manufactured goods increased by 300 percent. Investments by the U.S. government, and to a lesser extent private enterprise, increased America's productive capacity by 50 percent. The average productivity of labor in industry was increased by about 25 percent between 1939 and 1944. America had twice the productive output per person than Germany and five times that of Japan. All these gains in the productivity of labor were due to new capital equipment, an increase in the hours of the workweek, pooling of industrial information and the common cause of winning the war, which was deeply important to nearly every American.”

In contrast to nearly all other historical cases, the standard of living of the U.S. population actually increased during the war mobilization:

“Although war production was the main driving force of the U.S. economy during 1941-1944, consumers at home also benefited from the increased productivity. Despite shortages of particular items and a general decline in the quality of some goods, consumer expenditures increased in virtually all non-war industries with the exception of printing, publishing and clothing. Expansion of war and non-war output together led consumer purchases of goods and services to increase by 12 percent between 1939 and 1944... The increased productivity of the war years not only ended the Great Depression but also set the economic and cultural foundations for America's peacetime prosperity, which began immediately after the end of the war in 1945.”
(Alan S. Milward, “War, Economy and Society 1939-1945”, University of California Press, 1977)

Building up and managing a gigantic war production machine which had to constantly adapt its production to the changing requirements of the war – including the development of new types of weapons – posed unprecedented challenges in terms of organization and administration. As we shall discuss below, the new techniques for organization and management, developed in the U.S. for the war mobilization, were later to play a major role in the postwar economic boom, not only in the U.S., but also in Western Europe and Japan.

“The record of the actual administration of the war economy… tells a story of utterly new administrative problems for which there were no guiding precedents. It tells of the struggle to define the problems that waited decision, to devise effective administrative organizations to make the decisions and, above all, of the difficult task of coordination of these agencies so that their separate decisions could reinforce each other …  It is difficult to conceive of a situation in which the American economy, starting with as little experience in actual war production as prevailed in September 1939, and lacking prototypes of many of the weapons to be produced, could have been converted to full war output in much under three years.”
(Winfield Riefler)

A revealing example of the problems and solutions involved was the “Controlled Materials Plan”:

“On November 2, 1942, Donald Nelson, former vice-president of Sears Roebuck and chairman of the U.S. War Production Board (WPB), announced the introduction of the Controlled Materials Plan (CMP), a new system for controlling the distribution of critical materials to war production programs. CMP offered a sophisticated institutional framework for decision-making on materials allocations as well as a new mechanism for materials control. The plan would balance demand and supply for three critical materials -- steel, aluminum and copper -- and it would ensure their availability in the quantity, forms, and time required by approved programs and production schedules. ..

“CMP represents an unprecedented experiment in national administrative control. As a system of decision-making it linked action in four areas: national production goals; specific military programs; supply-demand balance for critical materials; and plant production schedules. As a managerial system it combined coordinated central policy making with decentralized operations. As an operational system it combined central materials accounting control with comparatively flexible use of allocated materials budgets among claimants. CMP gave organizational content to the idea of the wartime industrial economy as a multidivisional firm.

“As an experiment in central planning, CMP operated through vertically integrated firms and not through associations of producers… It was an attempt in wartime to combine central policy making at the apex with decentralized responsibility through claimant agencies and their prime contractors …

“The irony may well be that by late 1943, as a result of CMP, more central industrial control had become available to governmental administrators in the U.S. economy than to the governments of its major enemies.”
(“Organizational Capabilities and U.S. War Production: The Controlled Materials Plan of World War II” by Robert Cuff, York University)

The Manhattan Project

World War II also began the era of large-scale involvement of the U.S. government in scientific research. By far the most significant event in this regard was the secret Manhattan Project – a scientific and engineering project unprecedented in history in its scale and complexity. Within seven years, a fundamental discovery in a new branch of science – nuclear physics – was transformed into the most terrifying weapons mankind had ever known.

The first self-sustaining nuclear fission chain reaction – an entirely new physical phenomenon -- was realized in the world’s first nuclear reactor, at the University of Chicago on December 2, 1942. The U.S. Army Corps of Engineers built the so-called Clinton Engineering Works (now the Oak Ridge National Laboratory), with a secret new town to house some 50,000 employees. There a second nuclear reactor was built, demonstrating the generation of macroscopic amounts of the artificial element plutonium, and the first large-scale isotope enrichment and separation was carried out using the newly-developed gas diffusion and electromagnetic separation processes. At the so-called Hanford Works in the State of Washington a third reactor was built, producing large amounts of plutonium for use in nuclear weapons. At the newly constructed laboratory and secret town of Los Alamos, thousands of scientists developed and built the first nuclear bombs. At the peak of the project, in 1944, some 125,000 of the nation’s top scientists, engineers and construction workers were engaged in the Manhattan Project.

The postwar boom

By the summer of 1943 it was clear that the Nazis had no chance to win the war. The series of failures on the Eastern front, including the failure to capture Moscow in 1941 and culminating in the crushing defeat of the German 6th Army at Stalingrad in February 1943, plus the launching of the U.S. war mobilization and the resulting overwhelming Allied advantage in sheer numbers of tanks, ships and planes – these factors alone made the defeat of Germany inevitable. By 1943 the U.S. government began to make plans for the conversion back to civilian production after the war, and to formulate policies for a postwar world order in which the United States would play the leading role.

Following the end of the war the U.S. economy developed along a qualitatively different trajectory than in the prewar decades. The difference lay not simply in the much larger industrial capacity created through the war mobilization, but above all in the much greater direct role of basic science and advanced R&D in the economy. This new orientation grew in part out of the unprecedentedly technology-intensive nature of war production in the Second World War, particularly in the aviation industry, as well as a new policy of massive government support for basic and applied scientific research. The new policy had already taken root in the Manhattan Project and other wartime projects, most notably in the area of radar technology.

The development of the atomic bomb was recognized then as a turning-point in history in military, political, economic, and even cultural terms. Leaving aside all the military and political issues, the process leading from the discovery of fission to the civilian applications of nuclear energy provides a concentrated example of what we have called the “strongly nonlinear” mode of physical-economic development. In the space of a mere seven years, between 1938 and 1945, the phenomenon of nuclear fission, which had been discovered in the context of fundamental scientific research, was turned into the most frightful weapon ever created by Man. A single bomb could destroy an entire city! At the same time the wartime development of fission reactors opened up a practically unlimited energy source for economic development, one whose fuel is roughly a million times more concentrated than coal and gasoline. All of this could not fail to make a profound impression on the decision-makers and populations of the United States and other nations. It was realized that the economic prosperity and even the entire destiny of nations would depend on their capabilities in the most advanced areas of science and technology. The era of government-sponsored “Big Science” began.
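The “million times” figure can be checked by a rough order-of-magnitude estimate (a back-of-the-envelope sketch, not a precise calculation): the fission of a single uranium-235 nucleus releases about 200 MeV, whereas chemical combustion releases only a few electron-volts per atom. Per kilogram of fuel:

```latex
E_{\mathrm{fission}} \;\approx\; \frac{200\,\mathrm{MeV}\times 1.6\times 10^{-13}\,\mathrm{J/MeV}}
{235\times 1.66\times 10^{-27}\,\mathrm{kg}} \;\approx\; 8\times 10^{13}\,\mathrm{J/kg},
\qquad
E_{\mathrm{coal}} \;\approx\; 3\times 10^{7}\,\mathrm{J/kg},
\qquad
\frac{E_{\mathrm{fission}}}{E_{\mathrm{coal}}} \;\approx\; 3\times 10^{6}.
```

Fission fuel is thus indeed on the order of a million times more energy-dense than coal or gasoline.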

Apart from the special role of science it was recognized that the success of the postwar economy would depend to a large extent on achieving a higher level of qualification of the workforce. The G.I. Bill, signed into law in 1944, before the end of the war, provided federal government financial support for returning U.S. soldiers to attend colleges, universities and job training centers. The U.S. government paid the tuition for veterans to attend any institution that admitted them, and also provided money to support their spouses and children. More than half of the returning veterans from World War II took advantage of this support. Some 2.2 million attended college or graduate school using GI Bill assistance, and 5.6 million more prepared for jobs as auto mechanics, electricians, construction workers etc. On this basis the U.S. gained approximately 450,000 engineers, 240,000 accountants, 238,000 teachers, 91,000 scientists, 67,000 doctors and 22,000 dentists. The push toward more education and skills was greatly aided by the high level of motivation of the returning soldiers and the rest of the population involved in the war mobilization.

The GI Bill also provided low-interest mortgages and loan guarantees for homes, farms and business. Nearly 30% of the veterans used these low-interest mortgages to buy homes, farms or businesses. In 1955 this accounted for nearly 1/3 of all new housing starts.

It has been estimated that the increase in productivity, resulting from the provisions of the GI Bill, generated $7 of additional wealth in the U.S. economy for every $1 invested by the U.S. government.

These developments were accompanied by the famous Baby Boom – a dramatic increase in the birth rate from the late 1940s into the early 1960s. The Baby Boom in turn supported a housing boom, a consumption boom and a rapid increase in the labor force. The middle class grew and the majority of America's labor force eventually became employed in so-called white-collar jobs. The much higher wages paid by industrial employers compared to prewar levels dramatically increased the demand for automobiles, supporting the spectacular rise of the automobile industry following its reconversion to civilian production.

Infrastructure was another crucial aspect of U.S. post-war development.

The large-scale infrastructure development launched during the New Deal continued during, and especially after, the war. Electricity production, for example, doubled in the ten years from 1950 to 1960, and doubled again from 1960 to 1970. A new factor was the spectacular growth of the automobile industry and the private use of automobiles, which went hand-in-hand with the construction of modern highways, as well as pipeline construction and the rise of the petrochemical and rubber industries.
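As a simple check on what such doubling implies, a doubling of output every decade corresponds to a compound annual growth rate r satisfying:

```latex
(1+r)^{10} = 2 \;\Longrightarrow\; r = 2^{1/10}-1 \approx 0.072,
```

that is, roughly 7 percent per year sustained over two decades – consistent with the familiar “rule of 70”, by which the doubling time in years is approximately 70 divided by the growth rate in percent.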

A milestone of postwar infrastructure development was the Federal-Aid Highway Act of 1956, which led to the construction of 66,000 km of highways, creating an entire interstate highway system. This was the largest public works project in American history up to that time, bigger even than the New Deal. The money for the Interstate and Defense Highways was concentrated in a Highway Trust Fund that paid for 90 percent of highway construction costs; the federal portion was financed by taxes on gasoline and diesel fuel.

Within just a few years, almost two-thirds of American families achieved middle-class status.  By 1960, most American families had a car, a TV, and a refrigerator and owned their own home—an amazing achievement given that more than half of Americans had none of these “luxuries” just thirty years earlier.

Vannevar Bush and the launching of “Big Science”

The most decisive element of the post-war rise of the United States, however, was the promotion of science and technology, which has guaranteed the technological leadership of the U.S. up into the present period.  

In July 1945 a report entitled “Science -- the Endless Frontier” was published. The author, Vannevar Bush, was the head of the U.S. Office of Scientific Research and Development (OSRD), which administered almost all military R&D in the U.S. during World War II. This also included the early phases of the Manhattan Project. Written at the request of U.S. President Roosevelt, “Science -- the Endless Frontier” signaled the launching of a new policy for the postwar period in which the promotion of basic science became a top priority for the U.S. government.

“Science Is a Proper Concern of Government … It has been basic United States policy that Government should foster the opening of new frontiers. It opened the seas to clipper ships and furnished land for pioneers. Although these frontiers have more or less disappeared, the frontier of science remains. It is in keeping with the American tradition —one which has made the United States great—that new frontiers shall be made accessible for development by all American citizens. Moreover, since health, well-being, and security are proper concerns of Government, scientific progress is, and must be, of vital interest to Government. Without scientific progress the national health would deteriorate; without scientific progress we could not hope for improvement in our standard of living or for an increased number of jobs for our citizens; and without scientific progress we could not have maintained our liberties against tyranny.”

The report also emphasized the complementarity between the role of the state and the role of private enterprise: "The simplest and most effective way in which the Government can strengthen industrial research is to support basic research and to develop scientific talent." 

Beginning immediately after the war, a vast apparatus of government-supported scientific research and development was created. Naturally this process was greatly strengthened and accelerated by the advent of the Cold War, particularly following the Soviet blockade of Berlin in 1948.

In 1946 the Office of Naval Research (ONR) and the Atomic Energy Commission (AEC) were established. In the same year the RAND Corporation was founded, a think tank created at the initiative of Air Force General Henry Arnold, which was to play a major role in the U.S. space program, defense strategy, and the later development of the internet. In 1950 the National Science Foundation was created, and in 1958 the National Aeronautics and Space Administration (NASA) and the Defense Department’s Advanced Research Projects Agency (ARPA, later DARPA). A powerful complex of National Laboratories grew up, mostly going back to the wartime Manhattan Project and focusing especially on nuclear physics and the civilian and military applications of nuclear energy. These included the Los Alamos, Lawrence Livermore, Sandia, Brookhaven and Oak Ridge National Laboratories. The biomedical research capabilities of the already-existing National Institutes of Health (NIH) were upgraded. These and other government agencies worked closely with private industry, disbursing large numbers of contracts and grants, establishing information networks and a system of coordination between private and government research and development. A “science and technology machine” was put into place.

The following examples illustrate the enormous impact which this “machine” has had for the physical-economic development not only of the U.S., but of the world as a whole.

Nuclear power
The first generation of electricity by a nuclear reactor was achieved at the Argonne National Laboratory in 1951. The Pressurized Water Reactor (PWR), the main nuclear reactor type used today all over the world for the generation of electricity, was first developed by the US Navy program to create nuclear submarines, led by the famous Admiral Rickover. Nuclear fusion – the energy source of the Sun, and a practically unlimited source of energy for mankind in the future – was realized for the first time on a macroscopic scale in the hydrogen bomb, developed at Los Alamos. The Lawrence Livermore National Laboratory (LLNL), created by the U.S. Atomic Energy Commission in 1952, became a second center of thermonuclear research, and in recent decades has played a leading role in the quest to develop nuclear fusion as a virtually unlimited source of energy for the world economy.

Medical use of isotopes
The first medical radionuclide was produced at the Oak Ridge National Laboratory in 1946, inaugurating the vast field of nuclear medicine, which has revolutionized many areas of diagnostics and treatment of cancer and other diseases.

Lasers
In 1948, the U.S. Defense Department provided Columbia University with the funds to build a new team of physicists to work on microwave radar technology. One of the members of the team was Charles Townes, who went on to realize the first MASER (Microwave Amplification by Stimulated Emission of Radiation) in 1954. The MASER demonstrated the basic principle later exploited in the laser, only at much longer wavelengths. Later Townes and one of his students, Gordon Gould, received funds from the Defense Advanced Research Projects Agency (DARPA) to set up four teams to work on prototypes of an optical version of the MASER. In 1959 Gould published a paper in which he described the characteristics of such a device, calling it the LASER -- Light Amplification by Stimulated Emission of Radiation. On the basis of Townes’ and Gould’s work, the first functioning laser was demonstrated in 1960 by Theodore Maiman at Hughes Research Laboratories (a private military contractor), with funds from the U.S. Army Signal Corps.

Microelectronics
The first transistor was built in 1947 by John Bardeen and Walter Brattain at Bell Laboratories, the private research laboratory whose origins go back to laboratories established in the 1880s by the inventor of the telephone, Alexander Graham Bell, and which was formally organized as Bell Telephone Laboratories in 1925. Bell Laboratories was in turn part of the American Telephone and Telegraph Company (AT&T), which had functioned since the 1920s as a legally sanctioned, regulated monopoly under agreement with the U.S. government. Bell and AT&T maintained a special relationship with the U.S. government, which was crucial for the early development of microelectronics and many other technologies. Bell Labs became known as an “Idea Factory” and a model of American inventiveness.

In 1949, just two years after the creation of the first transistor, Bell Laboratories signed a contract with the Joint Services (U.S. Army, Navy and Air Force) for R&D into transistor technology. This contract, which was extended throughout the 1950s, required Bell Labs to disseminate its know-how to other government contractors, and gave the U.S. government the right to distribute information produced under the contract. In 1951 a famous symposium was held at which Bell Labs presented the results of its work on transistors and their possible applications to an audience of researchers and officials from government agencies, military services, universities and leading industrial firms. In an article entitled “Government Support of the Semiconductor Industry”, historian Daniel Holbrook noted: “This event marked one of the first postwar government efforts to disseminate technical and scientific information about semiconductor devices and physics widely to industry and academia … The various means by which the government assured the distribution of Bell Labs’ information were crucial early events in the history of this technology and industry.”

Another historical account highlights the role of the government in assuming part of the risk inherent in realizing every significant innovation. This applies especially to the initial phases -- nowadays sometimes called the “Valley of Death” or the “Darwinian Sea” -- where the risks of failure are so high that private investors are often unwilling or unable to accept them:

“Government money poured into transistor R&D at Bell Labs and elsewhere. Through its willingness to pay for expensive components, the military provided a much needed revenue stream to transistor companies. Digital technology was the ultimate expression of the increasing complexity of post WW II electronic technology. Government helped both the computer and semiconductors industries get through the early high-risk part of the technology curve. The U.S. private sector then took over and turned digital electronics into a global, mass-market revolution.”
(A Brief History of the U.S. Federal Government and Innovation, IEEE, 2011)

Computers

The world’s first general-purpose programmable electronic digital computer, the ENIAC (Electronic Numerical Integrator and Computer), was funded as a secret project by the U.S. Army’s Ballistic Research Laboratory and built at the University of Pennsylvania. ENIAC was a gigantic apparatus containing 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints; it filled a large room (167 m2). The project started in 1943, and ENIAC began full operation in 1946. One of its earliest tasks was calculations for the design of the hydrogen bomb. The realization of the transistor in 1947 opened the way to much more compact and powerful digital computers. Again, the U.S. government played an essential role. The report “Funding a Revolution: Government Support for Computing Research” (U.S. National Academy of Sciences) gives an overview:

“The 1950s saw the development of key technical underpinnings for widespread computing: cheap and reliable transistors available in large quantities, rotating magnetic drum and disk storage, magnetic core memory, and beginning work in semiconductor packaging and miniaturization, particularly for missiles. In computing, the technical cutting edge, however, was usually pushed forward in government facilities, at government-funded research centers, or at private contractors doing government work.

Government funding accounted for roughly three-quarters of the total computer field…. Federal agencies, particularly the military services, provided strong financial support for every major U.S. computer development between 1945 and 1955. Second, federal space and defense programs influenced the computer and semiconductor industries by generating huge markets for such products. Space and defense demand constituted a major factor in the growth of the U.S. semiconductor industry, as learning economies proved essential… As both the computer and semiconductor industries matured, space and defense demand promoted competition among existing firms and aided the entry of new firms. New semiconductor companies could enter the market easily given the receptivity of the military agencies and NASA (the U.S. space agency) to the products.”

Incidentally: the development of microelectronics has clearly followed the principle of increasing power density (see section 6.3) – in this case in the form of density of transistors per unit area, and density-rate of digital operations per unit area.

The Sputnik Shock and the Apollo Project

The Cold War was to a significant extent a competition for dominance in strategic areas of science and technology, with both sides engaged in scientific research and development on a scale unprecedented in history.

In 1957 the Soviet Union launched Man’s first artificial satellite, the Sputnik. This was an enormous shock for the U.S. population and political circles, who had not been fully aware of the capabilities which the Soviet Union’s own “science and technology machine” had already developed, particularly in military-related areas.

The “Sputnik shock” was followed in 1961 by the Soviets’ launching of the first man into space -- Cosmonaut Yuri Gagarin. The U.S. succeeded later that year in launching an astronaut into space, but the American mission was merely a short flight on a ballistic trajectory, whereas Gagarin circled the Earth in orbit. The Soviet Union was clearly ahead of the U.S. in this extremely complex and challenging technological area.

Only weeks later, on May 25, 1961, U.S. President Kennedy answered with a speech to the U.S. Congress, proposing what was to become the Apollo Program: “This nation should commit itself to achieving the goal, before the decade is out, of landing a man on the moon and returning him safely to the earth.” Winning the “Race to Space” became a top priority of the U.S. government.

It was not an accident that space exploration became a key arena for competition between the two superpowers. Three reasons should be mentioned here:

First, the obvious close relationship between the technology of space exploration and the technology of nuclear-armed intercontinental ballistic missiles. 

Second, the technological principle we discussed in Part I of this book: the manned conquest of space has the effect of pushing a broad range of existing technologies to their limits, focusing effort on making the innovations needed to exceed those limits. Kennedy addressed this principle in a famous speech at Rice University in September 1962, when he stated: “We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win.”

Third, manned space travel drew the attention of the whole world, captured the imagination of young people, and offered the winner of the “space race” an enormous gain in prestige. The cultural impact of manned space travel on the younger generation in both countries was profound.

The economic benefits of the Apollo program can be seen both from a “microeconomic” and a “macroeconomic” perspective – the latter being by far the most important.

The first, “microeconomic” category consists of direct benefits in the form of new or improved commercial products derived from innovations made in the construction of the space vehicles, life support systems, spacesuits, scientific equipment etc. for the Moon landing. These range from new types of coatings, thermal insulation materials, ultra-light construction materials, detectors and flame-resistant textiles, to a variety of biomedical devices deriving from the space effort, including improved water purification systems and kidney dialysis machines.

Second and more important is the overall growth in industrial productivity resulting from the research, development and production activities generated by the Apollo program (see section 8.7 above). The process of solving the extraordinary challenges of the Moon landing gave birth to a mass of new technological capabilities, skills and know-how which could be applied in a broad range of industrial areas.  This in turn generated further new capabilities. NASA actively promoted such “spinoffs” of the space program by creating, starting in 1962, a network of “Technology Utilization Offices” and “Industrial Applications Centers (IACs)”.

Nearly all equipment and components for the Apollo program were produced by private companies: 70-80% of the procurements for the NASA program were made from private firms, which could then apply the knowledge, skills, technology and much of the production equipment – such as machine tools – to other areas of production. As a result, the Apollo Moon program brought about a significant increase in the real productivity of the private sector.

 

We shall make some observations about the Soviet side of the race to the Moon later in this chapter.

Internet

Research on computer networking began in the 1960s, funded by the U.S. Department of Defense. During the early 1960s several researchers at the Massachusetts Institute of Technology and the RAND Corporation began to develop the theoretical basis for “packet switching”, which divides the data to be transmitted into blocks (“packets”) that can travel along different routes through the network and are reassembled at the destination. An essential motivation for this technique was to create a communications system which could continue to work under conditions of nuclear attack, in which parts of the network might be destroyed.
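The principle can be illustrated with a small sketch (hypothetical code for illustration only, not drawn from the source): a message is divided into numbered packets, which may arrive out of order after traveling different routes, and the receiver restores the original data by sorting on the sequence numbers.

```python
# Illustrative sketch of the packet-switching idea: a message is divided
# into numbered packets, which may arrive out of order and are
# reassembled at the destination by sequence number.
import random

def split_into_packets(message, size):
    """Divide a message into (sequence_number, payload) packets."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Restore the original message regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

message = "Packets may take different routes through the network."
packets = split_into_packets(message, size=8)

# Simulate packets arriving out of order, as if routed differently:
random.shuffle(packets)

assert reassemble(packets) == message
```

Real protocols add headers, error checking and retransmission on top of this basic scheme, but the sequence-number-and-reassembly idea is the core that makes routing over a partially destroyed network possible.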

In the late 1960s the Defense Advanced Research Projects Agency (DARPA, then known as ARPA) awarded the first contracts to build a functional packet switch and a prototype network of computers. This first network, the ARPANET, was in many ways the forerunner of the modern internet. Further key elements, such as the TCP/IP protocol, were developed with U.S. government funding, and a large pool of computer scientists and engineers was trained in their use. The first wide area networks were set up by NASA, the National Science Foundation and the Department of Defense. By the 1980s research centers at U.S. universities were interlinked by computer networks. The groundwork for the internet had been created. Up to 1995 the network operated under the control of government agencies, and no commercial traffic was permitted. At that point the National Science Foundation transferred responsibility for the network to a group of private companies, and the commercialization of the internet began.

15.2 The postwar “economic miracles” in Germany, France and Japan   

In addition to the United States a number of other important countries achieved extraordinarily rapid growth and development in the period 1945-1971, fueled in large measure by high rates of investment in scientific and technological progress.  Below we shall discuss the so-called “economic miracles” in West Germany, France and Japan in the postwar period. Following this we shall examine the rapid development of the Soviet Union, which occurred under a completely different economic and political system.

The story of each of these “miracles” is complex, the subject of a huge literature, and also of much controversy among economists. What tends to be missed, however -- or not sufficiently emphasized -- are the common physical-economic principles underlying the otherwise very different pathways followed by the U.S., Germany, France, Japan, the Soviet Union and a number of smaller nations in their successful postwar development.

Before turning to the specific cases of Germany, France and Japan, we must say a few words about the new international monetary system set up in 1944 by the so-called Bretton Woods agreement. Although monetary and financial issues are not the focus of this book, the Bretton Woods system played such an essential role in establishing a stable framework for postwar trade and economic development that we must briefly discuss it here. The topic is all the more important because the 1971 decision of the U.S. government under Nixon to unilaterally terminate the Bretton Woods system marked the end of the postwar period of rapid physical-economic development.

The Bretton Woods system and the Marshall Plan

The period leading up to WWII had been characterized by a mountain of unpayable debt obligations among the countries involved in World War I, and crushing payment demands placed on Germany by the Versailles Treaty of 1919. Private banks in the various countries held massive debt-based assets on their books. The “Black Tuesday” stock market crash of 1929 was followed in 1931 by the collapse of the Vienna Kreditanstalt and a chain reaction of banking defaults in Europe and the United States. Great Britain suspended the gold backing of the pound sterling, which until then had functioned as the world reserve currency.

Under the pre-war British gold standard system much of the settling of international accounts (including trade deficits) was done by the direct transfer of physical gold. This made the system extremely inflexible and made it impossible to finance long-term physical-economic development of all the nations at the same time. At the end of World War II it was clear that there could be no hope for rebuilding the economies of the war-devastated countries and ensuring a lasting peace, without a new, stable international monetary and financial system which would avoid the disastrous flaws of the pre-war system.

With the British Empire greatly weakened by the war, the U.S. under President Roosevelt was able to dictate the most essential features of a new world financial system. In July 1944 a conference was held in Bretton Woods, New Hampshire bringing together representatives of all the Allied nations, to discuss the issue of the monetary and financial world order after the war. In a message to the conference Roosevelt declared that the purpose was to create the basis by which the world’s nations “will be able to exchange with one another the natural riches of the earth and the products of their own industry and ingenuity. Commerce is the life blood of a free society. We must see to it that the arteries which carry that blood stream are not clogged again, as they have been in the past, by artificial barriers created through senseless economic rivalries.” Roosevelt warned: “Economic diseases are highly communicable. It follows, therefore, that the economic health of every country is a proper matter of concern to all its neighbors, near and distant. Only through a dynamic and a soundly expanding world economy can the living standards of individual nations be advanced to levels which will permit a full realization of our hopes for the future.”

The U.S. dollar was the foundation of the Bretton Woods system. Its value was fixed in relation to gold (at $35 per ounce) and backed up by the commitment of the U.S. government to exchange dollars for gold at this fixed rate. But the amount of money available for productive investment was not limited by the amount of gold in the possession of the central banks, as it had been in the previous “gold standard” system. America’s status as the leader of the postwar order, the power of the U.S. economy in real physical-economic terms and its ability to export much-needed investment goods and technology, made the dollar effectively “better than gold”. The dollar became the world’s reserve currency, and accounts between nations were settled in dollars rather than gold. In the subsequent period, a huge monetary and credit expansion was carried out in the United States (and in other nations experiencing postwar “economic miracles”), but the U.S. economy grew even faster in real physical terms. This was because the monetary expansion was closely linked to investment in the productive sectors of the economy.

The Bretton Woods agreement provided for stable, relatively fixed exchange rates between the currencies of the participating nations, tying them to the gold-backed dollar. The U.S. rejected the proposal by the British side to impose a single world currency (the “Bancor”). United States policy was to promote the development of sovereign nation-states, in line with the traditional American orientation to national economy. While preserving the power of individual governments to manage their own economic affairs, the Bretton Woods system provided a stable monetary foundation for a gigantic expansion of world trade in the post-war period.

In his message to the U.S. Congress on February 12, 1945, requesting Congressional approval of the Bretton Woods agreements, Roosevelt identified what he saw as the key features of the proposed system. He stated, in part:

“It is time for the United States to take the lead in establishing the principle of economic cooperation as the foundation for expanded world trade. We propose to do this, not by setting up a super-government, but by international negotiation and agreement, directed to the improvement of the monetary institutions of the world and of the laws that govern trade… (The) monetary conference at Bretton Woods has taken a long step forward on a matter of great practical importance to us all. The conference submitted a plan to create an international monetary fund which will put an end to monetary chaos. The fund is a financial institution to preserve stability and order in the exchange rates between different moneys. It does not create a single money for the world; neither we nor anyone else is ready to do that. There will still be a different money in each country, but with the fund in operation the value of each currency in international trade will remain comparatively stable.

“Furthermore, and equally important, the fund agreement establishes a code of agreed principles for the conduct of exchange and currency affairs. In a nutshell, the fund agreement spells the difference between a world caught again in the maelstrom of panic and economic warfare culminating in war -- as in the Nineteen Thirties -- or a world in which the members strive for a better life through mutual trust, cooperation and assistance.”

The United States had its own strong interest in promoting a prosperous European and world economy, as it would provide a large market for U.S. industrial exports. To this was added the political motivation of countering the growth of communist movements and the growing influence of the Soviet Union.

In this context the U.S. launched in 1948 the European Recovery Program (ERP), popularly known as the Marshall Plan. Over the course of four years, the U.S. government provided the equivalent of $160 billion (in 2014 dollars) to rebuild the economies of the Western European nations. This included food, raw materials and other products as well as financial assistance. Initially Germany was excluded, and the immediate postwar policy of deindustrializing Germany – the Morgenthau Plan – was continued. However, in light of the increasingly desperate situation of the German population, the growing social unrest, and especially the increasing East-West tension, West Germany was soon fully included in the Marshall Plan, and the revival and modernization of German industry became a major focus.

The Bretton Woods system and the Marshall Plan have been the subject of endless debates among economists up to today, with violently opposed schools of thought on issues such as the role of gold, the advantages and disadvantages of its exchange rate provisions, the role of state intervention etc. There are also different positions on the question of what role Bretton Woods and the Marshall Plan really played in the “economic miracles” of the postwar period. Certainly, the domestic economic policies of the various nations were no less important, as we shall see below in the cases of Germany, France and (in a somewhat different context) Japan. There are also debates about the political intentions and effects, with some seeing a victory for freedom and others the creation of a new “American Empire”. Such political issues are not the subject of this book. But there is no doubt that the economies of the United States and Western Europe (most notably including Germany and France) did in fact experience very rapid physical-economic growth and development in the postwar period up until the 1970s.

Technology transfer – key to the postwar “economic miracles” in Western Europe

During the period of the Marshall Plan, the overall agricultural and industrial output of the Western European countries grew at the fastest rates ever experienced in European history, reaching a level much higher than the pre-war period. Material and financial assistance from the United States was certainly important in helping to restore essential economic functions in those countries after the war, but for the longer term a much more important factor was the much less well-known “Technical Assistance Program”.  Although this program consumed only a small fraction of the Marshall Plan funds, it contributed greatly to the dramatic increase in industrial productivity which characterized the West European “economic miracles”.

According to a 1992 account by James Silberman: “The technical assistance program of the Marshall Plan was the largest and most comprehensive program of assistance to civilian industry ever undertaken. In a few years, and at low cost, those programs reached almost every plant in every industry, marketing agency, and agricultural entity in the war-devastated countries of Western Europe, introducing them to a technology more than a generation in advance of what they were doing. These programs accelerated the postwar economic recovery, raising the annual rate of increase in labor productivity of Western European industry from its historic level of about 1 percent per year to 4 percent or more. Within individual enterprises, productivity commonly increased by 25 to 50 percent within a year with little or no investment.”
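The jump from roughly 1 percent to 4 percent annual productivity growth compounds dramatically over time. A minimal compound-growth calculation (our illustrative sketch, not a computation from the source) makes the difference concrete:

```python
# Compound growth of labor productivity over a decade at the historic
# ~1% annual rate versus the ~4% rate cited for the postwar program.
def growth_factor(annual_rate, years):
    """Total growth multiple after compounding annual_rate for years."""
    return (1.0 + annual_rate) ** years

historic = growth_factor(0.01, 10)   # about 1.10x after ten years
postwar  = growth_factor(0.04, 10)   # about 1.48x after ten years
print(f"1% per year for 10 years: {historic:.2f}x")
print(f"4% per year for 10 years: {postwar:.2f}x")
```

In other words, at the accelerated rate a decade delivers roughly a 48 percent gain in output per worker, versus about 10 percent at the pre-war trend, before counting the one-time plant-level gains of 25 to 50 percent that Silberman describes.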

Labor productivity was a central focus of the Technical Assistance Program. During the World War II mobilization the U.S. had developed extraordinarily efficient production methods, partly by applying the most advanced technologies available, but also through systematic efforts to optimize the work process in each industry. A critical role in this optimization had been played by the U.S. Bureau of Labor Statistics (BLS), which conducted extensive studies of labor productivity in American industries. In the context of the Technical Assistance Program the BLS applied its know-how and methods to European industry. An account by Solidelle F. Wasser and Michael L. Dolfman (Monthly Labor Review, June 2005) states in part:

“A unique contribution of BLS to the Technical Assistance Program was the preparation and issuance of Factory Performance Reports. … These field-based reports of actual productivity contributed substantially to European recovery. The reports were designed to present operational profiles of U.S. plants. Businessmen in other countries could then use these profiles to evaluate their own operations, isolate their areas of good or poor performance, and improve those areas that needed improvement. The case studies covered factories of similar size and products generally comparable with those in foreign companies.”

An essential feature of the Technical Assistance Program was the direct exchange of information and experience, through visits of American industry experts to Europe and visits by European experts to U.S. plants. Wasser and Dolfman report:

“Reliable data indicate that through March 1957, nearly 19,000 European technicians, specialists and leaders of industry, labor, and government had visited the United States. Nearly 15,000 U.S. specialists had served abroad in the direct implementation of the national programs. Extensive technical services were provided, including over 35,000 technical and scientific books, periodicals, and other literature; over 2,500 replies by mail to technical inquiries; over 3,000 digests of articles from U.S. technical and trade magazines.”

This story is amazing from the standpoint of present-day business thinking, where competition tends to be the first thought. How could the United States make such an effort to boost the productivity and technological level of foreign industries, which could then compete more effectively against American industry? A big part of the answer lies, we think, in the physical-economic thinking which at that time was still very strong in the United States. The Technical Assistance Program reflected a high degree of confidence in the ability of the American economy to generate new technologies at a high rate and to incorporate them into new products -- as opposed to competing for the lowest price in producing already existing goods. Moreover, the “economic miracles” in Europe (and in Japan) generated a huge demand for industrial equipment from the U.S., and the U.S. economy also benefited from consumer goods imported from those countries at favorable prices. As we already mentioned, the growing Cold War with the Soviet Union obviously provided a powerful motivation for supporting economic development in important nations allied with the United States.

15.3  West Germany

At the end of World War II, defeated Germany was occupied by the armed forces of the U.S., Great Britain, France and the Soviet Union, each with its own occupation zone. In the following period the zones occupied by the Western allies were consolidated into what became the Federal Republic of Germany (“West Germany”), while the Soviet-occupied zone became the German Democratic Republic (“East Germany”). Initially Allied policy went in two directions: on the one hand, emergency assistance was given in the form of food and other goods to support the hungry population, while at the same time German industries began to be dismantled under the so-called Morgenthau Plan, which called for the systematic de-industrialization of Germany and its conversion into a “pastoral state”. With the beginning of the Cold War, however, and the realization that the de-industrialization of Germany was making economic recovery in the rest of Western Europe impossible, the Morgenthau Plan was effectively reversed and the U.S. began to support a rapid buildup and modernization of West German industry.

Apart from emergency aid to the population, the crucial step toward reviving the economy was the establishment of an entirely new monetary system for West Germany, based on a new currency: the Deutsche Mark. The Deutsche Mark was created directly by the United States. The first Deutsche Mark notes – totaling about 10 billion marks – were printed in New York and Washington, transported in secrecy to Germany and released into circulation on June 20, 1948, the famous “Day X”. On that day the previous currency, the Reichsmark, was exchanged for Deutsche Marks at a set exchange rate. A new national banking institution (Bank deutscher Länder) was set up, which later became the central bank of the Federal Republic of Germany (Bundesbank). The establishment of the Deutsche Mark system restored much of the confidence necessary to resume normal economic activity.

Meanwhile steps were taken to launch housing construction and to restore the function of the transportation system (especially rail), whose destruction by allied bombing during the war had been an essential cause of the collapse of the Nazi war machine. In November 1948 a new state-owned bank was established to rebuild the economy: the Kreditanstalt für Wiederaufbau (KfW) (Reconstruction Credit Institute). The KfW channeled funds from the U.S. Marshall Plan into the West German economy in the form of low-interest loans, mainly for housing construction, infrastructure and industry, and later for the promotion of exports. An essential feature of the KfW was that its operations were not burdened by the war reparations and international debt obligations, which the West German government paid out of tax income. The KfW could therefore “recycle” all of the funds obtained from the repayment of its loans into new loans.

Given the large number of people made homeless by the wartime bombing, the initial priorities were housing construction, overcoming infrastructure bottlenecks, and restoring energy supplies, mining, steel and other basic industries. This was soon followed by a highly directed channeling of large amounts of credit into rebuilding and modernizing West German industry in depth. Fortunately for Germany, a large part of its industry had survived the war and the immediate post-war dismantling policy, permitting the industrial economy to be restarted in a relatively short time.

In the ensuing “economic miracle” a big role was played by the so-called “Mittelstand” – the huge sector of small- and medium-sized manufacturing companies which remains a distinctive feature of the German economy today and a foundation for Germany’s economic strength. The legendary high productivity and innovative capabilities of the German Mittelstand are rooted in a cultural tradition going back to the craftsmen of the Middle Ages.  

As the West German economy picked up steam, the KfW shifted its priority to supporting the Mittelstand with emphasis on suppliers of precision machine tools and other modern production equipment, including the launching of many new Mittelstand firms. 

Meanwhile West German exports grew at explosive rates: exports of finished goods grew 2.6 times between 1950 and 1960, and then nearly 3 times between 1960 and 1970. A celebrated export success was the Volkswagen “Beetle” automobile, which became the world’s best-selling car by the mid-1960s; one million “Beetles” were exported to the United States alone.
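The decade multipliers for export growth can be translated into average annual rates by solving the compound-growth relation (1 + r)^n = multiple. The short calculation below (our illustration, not a computation from the source) recovers the implied rates:

```python
# Average annual growth rate implied by a total growth multiple over n
# years: solve (1 + r)^n = multiple  =>  r = multiple**(1/n) - 1
def implied_annual_rate(multiple, years):
    """Average annual compound growth rate for a given total multiple."""
    return multiple ** (1.0 / years) - 1.0

# West German finished-goods exports: 2.6x in 1950-60, ~3x in 1960-70
r_1950s = implied_annual_rate(2.6, 10)   # roughly 10% per year
r_1960s = implied_annual_rate(3.0, 10)   # roughly 11.6% per year
print(f"1950-60: {r_1950s:.1%} per year")
print(f"1960-70: {r_1960s:.1%} per year")
```

Sustained export growth on the order of 10 percent per year for two decades gives a quantitative sense of how explosive the expansion was.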

All of these developments took place within the framework of the “Social Market Economy” – the special economic and political system set up in West Germany after the war, which differs significantly both from the U.S. model and from those of France, England and most other European and Asian nations. The design of the social market economy was motivated to a large degree by painful lessons drawn from the rise of the Nazi regime, and by the desire to develop a free, harmonious society with a high degree of social and political stability, as far removed as possible from the command economy and police-state regime of the Nazis.

According to the philosophy of the social market economy there should be a clear division between the responsibilities of the state and those of the private sector. Free markets are considered to provide the most effective mechanism for the allocation of goods, services and investment, while the state is responsible for creating the overall framework within which private enterprise can produce the best results. That framework includes above all (1) an extensive social system guaranteeing the economic security of the population and including unemployment insurance, universal health care and pension insurance; (2) regulation and macroeconomic management of the economy; (3) fostering education, scientific research and the diffusion of new technologies into private industry. Thus the overwhelming majority of children and young adults are educated in public institutions – public schools, colleges and universities.

The extensive system of social services provided by the West German state in the context of the social market economy was essential to creating an environment of social stability which in turn encouraged a high rate of private investment and thereby also high rates of employment in the postwar period into the 1980s.

It is important to note that the “free market” has a very different meaning in the German social market economy than in the classical “laissez-faire” sense of Adam Smith, where the state plays no role. On the contrary, in the social market economy the government is responsible for keeping the markets “free”: it intervenes to prevent the formation of cartels and monopolies, to suppress manipulation of prices and “unfair” competition, and in the early period it even broke up large companies into smaller ones.

The latter policy embodies a crucial difference between the German and French policies in the postwar period. In contrast to Germany, the French government consistently favored the formation of large companies by forced mergers of smaller ones. And in contrast to France, West Germany had no overall economic “plan”. In the postwar period the KfW and other state institutions promoted the development of a largely decentralized industrial order, with an essential role played by small and medium-sized high-technology industries alongside famous industrial giants such as Siemens, Volkswagen, Daimler, Bayer, BASF and Bosch. Germany has retained this structure up to today.

From a physical-economic standpoint the success of that system rested to a large extent on a highly-developed industrial culture, a highly-skilled workforce, great strength in the sector of machine tools and precision manufacturing, excellent infrastructure and a strong system of scientific research and development. Germany had a strong tradition of state support for scientific research, going back to the founding of the Prussian Academy of Sciences in 1700 and expanded especially in the 19th and 20th centuries with institutions such as the German National Metrology Institute (Physikalisch-Technische Reichsanstalt), founded in 1887, and the Kaiser Wilhelm Society for the Advancement of Science (Kaiser-Wilhelm-Gesellschaft zur Förderung der Wissenschaften), founded in 1911. Support for applied research was strengthened after the war by the creation of the Fraunhofer Society, which acted as a bridge between basic research in public laboratories and applied research in industry. After a slow start it grew into a very powerful organization, closely linked to universities but mainly carrying out applied research on behalf of clients in industry and government. It was part of West Germany’s diffusion-oriented technology policy, aimed at encouraging widespread access to technical expertise and reducing the costs which Mittelstand firms faced in developing and assimilating new technologies.

Traditional industrial export products, especially automobiles, were the core of West Germany’s success in becoming a leading export nation. At the same time German industry moved quickly to become a leader and a competitor with the United States in many areas of advanced technology.

A prime example is nuclear energy, which in the 1950s was regarded around the world as the single most advanced area of science and technology. Top priority was given to creating a domestic nuclear industry capable of supplying the entire range of technologies and equipment required for the civilian use of nuclear energy. From early on, the intention was not only to use nuclear power to supply domestic needs, but also to make the export of nuclear power plants and nuclear technology into a pillar of West Germany’s export industry.

To achieve these goals it was necessary to create an entire sector of industry starting from zero. For this purpose the West German government took a dirigistic approach. A new government department – the Ministry for Atomic Questions (Bundesministerium für Atomfragen) – was created to support development work in nuclear energy. Its responsibilities were later extended to encompass space research. Large nuclear research laboratories were set up in Karlsruhe and Jülich. In 1957 the first test reactor was completed, and four years later the first German nuclear power plant began generating electricity. The foundation for Germany’s nuclear industry was laid.

Subsequently nuclear power expanded rapidly, reaching 27% of total electric power production in reunified Germany during 1990-2000. Up to today the main German producer of nuclear reactors, Siemens, has supplied a total of 31 large reactors, of which 21 were used in nuclear power plants in Germany, and the rest – 10 power reactors – were exported to Spain, Austria, Switzerland, Argentina and Brazil.

Back in the late 1970s West Germany appeared destined to become a leading world exporter of nuclear technology, including to developing countries.

However, beginning with the German-Brazilian nuclear agreement of 1975, German nuclear cooperation with developing countries came under strong attack from the United States. Nuclear power became the target of an increasingly virulent anti-nuclear movement in Germany itself. In the year 2000, at the end of a long process we shall not describe here, the German government finally decided to phase out nuclear energy entirely, to virtually shut down all research and development activities in the nuclear power area and to allow the infrastructure for production of nuclear reactors in Germany to collapse.

This self-destructive act goes hand-in-hand with a post-1970 culture shift of the German population in the direction of increasing irrationality and hostility to scientific and technological progress. In adopting the mentality of the “post-industrial society” and “consumer society” Germany seems to be copying the United States model, but with an important difference: the U.S. continues to utilize and to export nuclear power plants! Given the present world-wide revival of nuclear power, the U.S. nuclear industry is in a position to profit greatly from the elimination of a powerful competitor.  

15.4  France: the “Thirty Glorious Years“

The economic boom in France in the period 1945-1975 is particularly instructive because it followed a very different model from both the U.S. and West Germany.

The physical-economic development of France in the postwar period was planned and organized by a central agency, the Commissariat général du plan, in a succession of five-year plans spanning the period 1946-1975. The Commissariat was set up in 1946 under the direction of Jean Monnet, France’s chief negotiator in Washington, with the long-term goal of transforming France into a highly developed and technologically advanced economy that could rival even the United States.

In contrast to the Soviet “command economy” with its rigid control of production and allocation, the French plan had an “indicative” character: the central government utilized a variety of incentives and inducements to guide the activity of public and private enterprises in accordance with the overall objectives of the plan. In addition, the planning process was connected with a large-scale sharing of economic information between government and private institutions, providing a coherent basis for decision-making on all levels.  

The principle or philosophy of guiding economic activity in France is generally described as “dirigisme”.  The most powerful tools of French postwar dirigisme involved the channeling of credit and investment into selected sectors of the economy. This was achieved not only by large-scale state investments (amounting to about 30% of all investments in the country), but also by a policy called “the nationalization of credit”.

From the outset the French state organized a network of public and private institutions and supervisory agencies that would guarantee the flow of sufficient credit to finance the national economic and social priorities set by the plan. The priority was long-term credits, otherwise known as “investment credit”, as opposed to short-term credit. As part of this policy, the central bank (Banque de France) and the four major commercial banks were nationalized in December 1945. The majority of banks remained private, but operated in an environment that was strongly shaped by the policies of the Banque de France and other government agencies. In addition state-owned enterprises played a major role in strategic sectors of the economy. It is important to note that throughout the “Thirty Glorious Years” of France’s postwar economic boom, the financial markets – including equity markets – remained quite small, and played only a minor role. In that period, in fact, the growth of financial markets was actively discouraged by the government.

The essence and spirit of French government dirigisme was beautifully summarized by the French leader Charles de Gaulle in his 1970 book “Mémoires d’espoir”:

“The task of the state was not to force the nation under a yoke, but to guide its progress. However, though freedom remained an essential lever in economic action, this action was nonetheless collective, since it directly controlled the nation’s destiny, and it continually involved social relations. It thus required an impetus, a harmonizing influence, a set of rules, which could only emanate from the state…. In practical terms, what it primarily amounted to was drawing up a national plan, in other words deciding on the goals, the priorities, the rates of growth and the conditions that had to be observed by the national economy, and determining the fields of development in which the state must intervene, along with laws and budgets. It is within this framework that the state increases or reduces taxation, eases or restricts credit, regulates customs duties; that it develops the national infrastructure — roads, railways, waterways, harbors, airports, communications, new cities, housing, etc.; harnesses the sources of energy — electricity, gas, coal, oil, atomic power; initiates research in the public sector and fosters it in the private; that it encourages the rational distribution of economic activity over the whole country; and by means of social security, education, and vocational training, facilitates the changes of employment forced upon many Frenchmen by modernization. In order that our country’s structures should be remolded and its appearance rejuvenated, my government, fortified by the newfound stability of the state, was to engage in manifold and vigorous interventions.”

The success of dirigisme in France in the postwar period was made possible by the existence of an elite of civil servants, administrators, engineers and scientists educated in the famous “Grandes Écoles”, including the École Polytechnique, the École Nationale des Ponts et Chaussées and the École des Mines – all founded in the 18th century – plus the École Nationale d'Administration created by Charles de Gaulle in 1945.

A good overview of France’s first five-year plans is given by Charles Kindleberger in the book “National Economic Planning”, published by the U.S. National Bureau of Economic Research in 1967. Kindleberger writes:

 “The First Plan had a slogan, ‘modernization or decadence’ and chose to concentrate expansion on six ‘basic’ sectors: coal, electricity, steel, cement, agricultural machinery, and transportation. At the time of the extension of the initial four-year period, two further industries were added: fuels and fertilizers. Coal, electricity, and railroad transport were nationalized and could be expanded from within. The others were fairly well concentrated and implicitly threatened with nationalization. In steel, capital for expansion was provided from counterpart funds and other government sources (as in other industries) but on condition of mergers. Government intervention was ad hoc in design and in implementation; the emphasis on expansion, modernization, efficiency, and modern management which characterized this intervention, however, was systematic.

“The Second Plan, organized with a gap of one year, covered 1954–57, and rested on a more systematic basis in national accounting. The emphasis was still on expansion, but this was now extended from the eight sectors to the economy as a whole. The ‘basic sectors’ of the First Plan were followed by the ‘basic actions’ of the Second: research, improved productivity, marketing reform, assistance to equipment, and training, that is, programs to produce more, but under competitive conditions of quality and price. The threat of socialization had ended, and the Planning Commissariat was transformed into an agency for forecasting and economizing. Goals were laid down overall and by sectors, including housing. Most of these were over-fulfilled ….

“The Third Plan ran from 1958 to 1961 and was addressed to growth and the correction of the balance of payments. The need to reduce costs was underlined by the prospective entry into force of the Common Market. The pressure for expansion was maintained, with an increase of 20 per cent projected for the four years (manufacturing, 33 per cent; exports, 70 per cent) …

“In the Fourth Plan, over the years 1962–65, the rate of expansion was again set at 5.5 per cent a year, raised from the original experts’ target of 5 per cent. Whereas earlier plans had been called plans of modernization and equipment, this was one for economic and social development. The economic development involved the same prescription as before: expansion, full employment … In addition, problems of regional balance were explicitly addressed in the plan, to push particularly those regions like Brittany and the Central Massif where industrialization has lagged.”

Kindleberger also noted that “the public corporations, especially in railroads, aviation, and electricity, have been among the leaders in increasing efficiency and improving technology. Unlike public corporations in many less developed countries, which have a weakness for wasteful investment programs, they have pioneered in the calculation of efficiency conditions for pricing and investment. To a certain extent, their calculations have become those of the plans. But it is a mistake to regard French planning as using nationalized industries to carry out its designs. Here … it must persuade as much as command.”

In addition to the measures summarized above, the French government launched a series of “Great Projects” for infrastructure, science and technology. These included:

1. A gigantic program of highway construction. In 1955 France had only 25 km of modern highways. In 1958 the Commissariat du Plan proclaimed the long-term goal of constructing a nationwide highway network with a total length of 2000 km. By 1969, 1000 km had been completed, and in 1974 the goal of 2000 km was reached. Other large-scale infrastructure projects included the enlargement of the harbor in Marseille, which became the largest port on the Mediterranean.

2. Development of the SE 210 Caravelle, the first commercially successful short-to-medium-range jet airliner. This state-funded project was utilized as a means for rebuilding and modernizing the French civilian aircraft industry. The original specifications for the new medium-range aircraft were drawn up by the government “Comité du matériel civil” in 1951 and transmitted to industry by the “Direction Technique et Industrielle”, leading to the submission of design proposals, a selection process, and the launching of the project. The “Caravelle” became a worldwide model for the following generations of passenger aircraft.

3. Development and production of the supersonic passenger aircraft, the Concorde, in cooperation with Great Britain. Although the Concorde suffered commercial losses and was retired from service after 27 years, it was a technological marvel which catalyzed many new developments in the aircraft industry.

4. High-speed rail transportation. Already in 1955 the state-owned railroad company SNCF began studies on the development of high-speed trains, leading via a series of prototypes to the 250 km/hour gas-turbine-powered Turbotrain, which began commercial operation in 1971. Further developments led – following the “oil price shock” of 1973 – to the electric-powered TGV: the TGV Sud-Est entered commercial service at 260 km/hour, and later TGVs reached speeds of over 300 km/hour. Within France the TGV is generally faster and more convenient than air travel, and has been a major commercial success.

5. A program for the large-scale development of nuclear power. One of the first decisions made under the provisional government of General de Gaulle was the creation of the Commissariat à l’énergie atomique (CEA), which systematically developed France’s nuclear industry. France’s first nuclear power plant was opened in 1963, followed ten years later by the decision of the French government to make nuclear power generation the main basis of France’s electricity supply. Today France derives over 75% of its electricity from nuclear energy, with 59 reactors in service, all of them produced by the French nuclear industry. The 85% state-owned company Electricité de France is the largest single producer of electricity in the world and the world's largest net exporter of electricity, thanks to its very low cost of generation. AREVA, 95% owned by the French government, is the world’s largest builder of nuclear plants.
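The highway milestones in point 1 above imply a sharply accelerating pace of construction. A small illustrative sketch in Python (the kilometer figures are from the text; the linear averaging between milestones is our simplification):

```python
# Milestones of the French highway program (km of modern highway, from the text)
milestones = {1958: 25, 1969: 1000, 1974: 2000}

years = sorted(milestones)
for start, end in zip(years, years[1:]):
    built = milestones[end] - milestones[start]
    pace = built / (end - start)
    print(f"{start}-{end}: {built} km built, about {pace:.0f} km per year")
```

This shows the pace more than doubling, from roughly 89 km per year in 1958-1969 to 200 km per year in 1969-1974.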

15.5 The post-war “Japanese miracle”

The spectacular rise of Japan’s economy from the ashes of World War II to a leading world industrial power by the 1960s was accomplished in a very different way than the postwar developments of the U.S., France and Germany. In his 1982 book “MITI and the Japanese Miracle”, the well-known Japan expert Chalmers Johnson writes:

“Japan’s postwar economic triumph – that is, the unprecedented economic growth that has made Japan the second most productive open economy that has ever existed – is the best example of a state-guided market system currently available…  As a particular pattern of late development, the Japanese case differs from the Western market economies, the communist dictatorships of development, or the new states of the postwar world. The most significant difference is that in Japan the state’s role in the economy is shared with the private sector, and both the public and private sectors have perfected means to make the market work for developmental goals. The pattern has proved to be the most successful strategy of intentional development among the historical cases.”

Within the Japanese state apparatus, the famous Ministry of International Trade and Industry (MITI) is commonly identified as having played an essential guiding role in Japan’s “economic miracle”. In his book “The Making of Modern Japan” (D.C. Heath, 1996) Kenneth Pyle gives a useful summary of the background and functions of MITI:

“MITI was formed in 1949 from the union of the Trade Agency and the Ministry of Commerce in an effort to curb post-war inflation and provide government leadership and assistance for the restoration of industrial productivity and employment… MITI has served as an architect of industrial policy, an arbiter on industrial problems and disputes and a regulator. A major objective of the ministry has been to strengthen the country’s industrial base.

“It has not only managed Japanese trade and industry along the lines of a centrally planned economy, but it has provided industries with administrative guidance and other direction, both formal and informal, on modernization, technology, investments in new plants and equipment, and domestic and foreign competition. The close relationship between MITI and Japanese industry has led to a foreign trade policy that often complements the ministry’s efforts to strengthen domestic manufacturing interests. MITI facilitates the early development of nearly all major industries by providing protection from import competition, technological intelligence, help in licensing foreign technology, access to foreign exchange, and assistance in mergers…”

Pyle goes on to describe the “matrix structure” of the bureaus working under the Cabinet Minister and Vice-Ministers heading the MITI at the top:

“The various industrial bureaus focus on detailed plans and strategies for various sectors. Detailed information and statistics pertaining to each sector’s output, competitiveness and potential relative to international competitors are prepared, which forms the basis of Japan’s industrial planning. These bureaus monitor Japan’s performance sector by sector… The Industrial Structure division deals with the projected structure of industrial demand and output with policies aimed towards a particular targeted year, with provision for a rolling plan which is revised and updated… The output is an integrated plan which acts as a common thread, pulling the diverse aspects together. This matrix form of organization of MITI results in cross-fertilization of ideas and strategies… Another notable function performed by MITI is continuous consultation with the private sector companies and economic organizations. Thus we find the organization of MITI as a matrix structure and detailed planning done by each of the divisions with the objective of diagnosing industrial environment on a continuous basis provides one of the cornerstones of Japanese industrial success.”

As we have emphasized, efficient state guidance of an economy per se does not determine whether the outcome will be good or bad; decisive is the content and orientation of state intervention in terms of physical economy. In this respect a valuable overview is provided by Kozo Yamamura in a 1976 article entitled “A Retrospect and Prospect on the Postwar Japanese Economy” (in “Explorations in Economic Research”, Vol. 3, No. 2, published by the U.S. National Bureau of Economic Research).

Yamamura identifies technology as the single most important source of Japan’s spectacular growth and development in the postwar period. He asks: “Why was technological change ... so rapid in postwar Japan? The answer consists of four parts:
(a) War devastation, which made possible adoption of new technology on a large scale;
(b) Availability of a large backlog of Western technology at relatively low cost;
(c) Active and numerous government policies that promoted and encouraged the adoption of new technology;
(d) Rapid diffusion of new technology, which was encouraged by the rapidity of growth itself and by the ability to make improvements on imported technology.”

Yamamura elaborates these points, noting (among other things) that “Japan had lost more than a quarter of her industrial capacity to bombardments, and what she had left in 1945 was an over-depreciated capital stock that was technologically backward … As the recovery began and postwar growth was initiated, Japan was in a position to adopt new technology without waiting for assets to be fully depreciated.”

Yamamura points out that “the abundant supply of new technology at a relatively low cost was available to satisfy the voracious appetite of Japanese industry”. The main source of this technology was the United States. Yamamura quotes from George C. Allen’s book, “Japan’s Economic Expansion”:

“Between 1950 and 1962, 1,998 contracts for technical cooperation had been signed with foreign firms, nearly two-thirds of them American. Most of the contracts related to projects in industries that have grown especially fast, notably iron and steel, petrochemicals, chemical engineering, electronics and motor manufacturing. The result was that by the early 1960s the technical gap [with the U.S. – JT]  had been virtually closed in most branches of industry and Japanese firms were themselves beginning to devise important innovation.”

Yamamura adds: “The number of contracts signed, however, is only a partial indicator of the Japanese efforts to innovate.” He notes that in 1963, for example, “about 210 000 abstracts of foreign scientific papers were made by the Japan Information Center for Science and Technology. Japanese businessmen and government officials are constantly visiting foreign countries to pick up new ideas.”

The transfer of knowledge and technology, above all from the United States, was closely connected with the special economic relationship between the two countries in the postwar period. Especially following the outbreak of the Korean War in 1950, Japan – which remained under U.S. occupation until 1952 – became an important procurement and supply base for the United States. With U.S. support the Japanese government instituted a protectionist regime with heavy trade barriers, which shielded Japan’s developing industries from competition from the U.S. and elsewhere. In turn Japan flooded the U.S. with inexpensive industrial goods, building up a huge trade surplus. Japan used this export income to procure technology.

Favorable state policies were crucial to Japan’s technological mobilization. Yamamura writes:

“The process of rapid technological change was vigorously aided by the government, which adopted a wide variety of policies to assist, directly and indirectly, in the adoption of new technologies by industries… (a) Beginning in 1948 and throughout the 1950s, the government made low-cost capital loans available to electric, iron and steel, coal, shipping and other industries in order to aid rapid ‘rationalization’ of their technology…  (b) Throughout the postwar period, the basic monetary policy was to maintain ‘cheap money’. The long-term objective of monetary policy was to supply growing industries with low-cost funds created by the Bank of Japan…”

In the 1950-1970 period two thirds of investments by Japanese companies were financed by loans and trade credits. The huge flow of credit provided to industry by Japanese banks was made possible by banks borrowing directly from the Bank of Japan, and relending the money preferentially to sectors that had been identified as priority sectors by the Japanese planners, according to the principle of “window guidance”. Interest rates were rigidly controlled. Meanwhile, the rapidly growing tax base allowed the government to finance most of its investments internally, without incurring external debt.

Beyond the rapid pace of technological improvements, Japan’s physical-economic development was inseparably linked to large-scale infrastructure construction, especially connected with rapid urbanization and the development of the famous “Pacific Belt” – known today as the “Megalopolis of Tokaido” – running parallel to Japan’s Pacific coast. With a length of about 1200 kilometers, the Tokaido Megalopolis now has a population of 83 million people, or about two-thirds of Japan’s population, and is home to nearly the entirety of the nation’s manufacturing industry. The “Pacific Belt” first emerged as a concept in a Japanese Economic Commission Subcommittee Report in 1960, as part of the plan to double the national income. Already then, Japan was in the middle of an extremely rapid urbanization process: the percentage of the population living in cities grew from 37% in 1950 to over 70% in 1970, with a massive shift of the work force from agricultural to industrial activity.

The Japanese scholar Miyamoto Kenichi remarked that this rapid urbanization of Japan compressed a process which had taken 100 years in the United States into a mere 25 years!

The emergence of the “Megalopolis of Tokaido” was made possible by an extraordinarily rapid development of transport, energy and communications infrastructure. As we have noted in the first part of this book: the densification of economic activity through urbanization, supported by modern infrastructure, boosts the real physical-economic productivity of society in a nonlinear fashion. This occurs first and most obviously by reducing the distances and time required for the transportation of people and goods, but also by enhancing the intensity of cognitive interchanges and thereby also (under the appropriate cultural framework) the rate of generation of new ideas. 

The development of Japan’s Pacific “Megalopolis” as probably the single most productive industrial region on the Earth, was further strengthened and accelerated by a pioneer high-technology project: the Shinkansen high-speed train, otherwise known as the “bullet train”. The first Shinkansen trains began running between Tokyo and Osaka in 1964, operating at speeds up to 210 kilometers per hour. Subsequently the high-speed lines were extended to cover nearly the entire length of the Pacific corridor, and speeds were further increased. The Shinkansen was an immediate success, reaching 100 million passengers by 1967 and one billion passengers in 1976. It had a major positive impact on Japanese business and even on the style of living in the Megalopolis.

A further source of Japan’s postwar economic success was the high degree of economic security provided to the population. This was achieved above all by the successful state policy of maintaining essentially full employment of the work force throughout the 1950s, 1960s and 1970s, made possible by an extremely high rate of investment into the real economy, plus a mandatory universal health insurance system and a national pension scheme. In addition, the postwar Japanese economy was characterized by a kind of “social contract” between the state, the corporate sector and labor, known as “the 1955 regime”, under which workers were strongly tied to their corporate employers. In return for loyalty and hard work, the employers provided a high degree of job security – including in some cases the assurance of lifelong employment – and a variety of social benefits such as transportation allowances, support for workers’ families, low-interest loans to purchase housing, retirement benefits, etc. Between 1960 and 1973 real wages rose 7 percent each year.
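Compounded over the whole period, that annual rate implies a dramatic rise in living standards. A quick illustrative calculation in Python (the 7 percent figure and the 1960-1973 interval are from the text):

```python
# Real wage growth in Japan (annual rate and period from the text)
annual_rate = 0.07
years = 1973 - 1960  # 13 years
total_growth = (1 + annual_rate) ** years
print(f"Real wages after {years} years: {total_growth:.2f}x their 1960 level")
```

That is, real wages roughly 2.4 times their 1960 level by 1973.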

High employment in Japan was also supported by the development of a large number of small and medium-sized manufacturing companies (SMEs), which are typically closely linked to larger companies as “dedicated suppliers”. In 1970 over 75% of the total non-agricultural labor force worked for SMEs. 

The crucial role of small and medium-sized manufacturing companies in Japan’s economic success is similar to the case of Germany, and contrasts greatly with that of France or the United States. As in so many other areas, this success is connected with deliberate government policy. In 1963 Japan enacted what is sometimes referred to as the “SME Constitution” – a collection of laws providing financing, credit guarantees, mutual support and crisis assistance for SMEs, and an overall favorable legal environment.

15.6  The rise of the Soviet Union

We do not agree with the often-repeated assertion that the Soviet Union was a total economic failure. It is often forgotten that Russia at the time of the Bolshevik Revolution was a predominantly backward, underdeveloped nation with mostly illiterate peasants making up 80% of the population, living in brutal oppression under the Tsarist police state. Over long periods the Soviet Union had to develop mostly on its own, and along the way the economy and population suffered devastating losses in World War II, as well as the effects of a repressive political system.  Furthermore the Soviet Union did not benefit from the large-scale transfer of technology and other forms of assistance which were provided by the United States to the West European nations and Japan after the war. Despite all these disadvantages, by 1970 the Soviet Union had managed to achieve the status of an urbanized, industrially developed nation with an educated population and a predominantly middle-class standard of living. Considerable progress was made in building up infrastructure, urban centers with modern public services, and productive facilities in underdeveloped areas of the country, including in Siberia, the Russian Far East and Central Asia.  The Soviet Union became a superpower able to challenge the most advanced nation in the world, the United States, in many areas of science and technology.

How was this possible? Until Gorbachev’s Perestroika, and with the exception of the brief period of Lenin’s “New Economic Policy” (1921-1928), the Soviet Union operated as a rather strict “command” economy in which industry was essentially 100% state-owned, where industrial goods were allocated by the state rather than bought and sold in a market, and all important production and investment decisions were made by state agencies in accordance with detailed plans. There were no private banks, no private industry and no provision for private investment – all things which are regarded in the West as indispensable to the growth or even the survival of an economy.

Examining the economic accomplishments of the Soviet Union helps once more to underline the primacy of the principles of physical economy, rather than the choice of economic system, as the main determinant of the outcome of economic policy. In the case of the Soviet Union up to the 1970s, economic planning was generally oriented toward the principles of physical economy, despite more or less violent fluctuations connected with internal and external political causes. The main problem, in our view, was the tendency toward a relatively linear, extensive mode of growth, whose negative effects became more and more severe as soon as a certain level of development had been reached.

The economy of the Soviet Union did not start from zero. In fact a very significant industrial and infrastructural base, including the core of an industrial workforce, had already been established in the pre-revolutionary period. The industrial development of Russia in the late 19th century is exemplified by the role of “American System” advocate Sergei Witte and the construction of the Trans-Siberian Railroad. In addition the Soviet Union inherited from the previous period a highly-developed class of intellectuals, including many brilliant scientists associated with the Academy of Sciences.  All of this existed alongside a largely peasant population, a primitive agrarian sector and a vast expanse of underdeveloped territory.

Heavy industry

As is well known, the main thrust of Soviet economic policy in the late 1920s was the rapid buildup of heavy industry. The pace was accelerated in the 1930s, as the prospect of war with a highly industrialized Germany appeared more and more inevitable. The prewar buildup was crucial to the rapidity and gigantic scale of the Soviet Union’s subsequent wartime industrial mobilization and to the Soviet victory over the Nazi war machine. But even after the war, economic development remained largely dominated by heavy industry. The huge reserves of raw materials and of labor drawn from the rural population made it possible to create a self-feeding expansion of the Soviet economy on this basis.

Nowadays the “Soviet model of development” is often dismissed as a costly failure, made at the expense of the population whose living standards suffered from the low priority given to consumer goods production. Such characterizations tend to miss an essential point, namely that heavy industry supplies the most essential materials and machinery needed for the construction of housing, schools and hospitals for the population, water and sanitation systems, roads and railroads, mines and power plants, transport vehicles and the basic inputs to every other branch of industry. Agriculture and heavy industry are the foundation of any industrial economy. Evidently Soviet heavy industry fulfilled this essential function, in spite of its legendary waste and inefficiency.

In this context a decisive step in the development of the Soviet Union was the electrification campaign carried out under the State Electrification Commission (GOELRO). The top priority given to this campaign by the Soviet leadership is summed up in Lenin’s famous slogan “Communism is Soviet power plus the electrification of the whole country.” In fact, electrification was essential for overcoming the tremendous backwardness of the rural population, spread out over a vast area, and which lived so-to-speak in a different century from the urban population. Electrification was crucial to creating a unified national economy. In Lenin’s words it was part of an overall policy to provide for “the organization of industry on the basis of modern, advanced technology, on electrification which will provide a link between town and country, will put an end to the division between town and country, will make it possible to raise the level of culture in the countryside and to overcome, even in the most remote corners of land, backwardness, ignorance, poverty, disease, and barbarism." 

Parallel with the electrification process, the Soviet leadership launched the “Likbez” campaign (“liquidation of illiteracy”) to reduce and finally eliminate illiteracy in the population. At the time of the revolution only about 13% of Russian women could read and write, and in most rural areas 90% or more of the whole population were illiterate. Twenty years later literacy rates in the Soviet Union had reached 80-90%, and by the end of the 1950s practically 100% of the population was literate.

In its role in helping to overcome the backwardness of the rural areas of the Soviet Union, the GOELRO plan had many similarities to the rural electrification campaigns launched in the United States under Franklin Roosevelt, notably through the Rural Electrification Act and the Tennessee Valley Authority (TVA). Roosevelt was attacked for using “Soviet methods”, but these programs were quite successful in accomplishing their goals.

Ironically, Soviet leaders were from the very beginning fascinated with the economic successes of the United States. In his book “American Genesis: A Century of Invention and Technological Enthusiasm, 1870-1970”, the late U.S. historian Thomas Hughes remarks: “To Vladimir Ilyich Lenin, Leon Trotsky, and the other Russian Bolsheviks who seized power during the November Revolution of 1917, hydroelectricity generation at Niagara Falls, steelmaking in Gary, Indiana, and, above all, Ford automobile manufacture in Detroit were the essence of modern American technology. They believed that, if such technology were developed in a Soviet context, American means of production could lead the way to the socialist future.” In “Foundations of Leninism” (first published in 1924), Josef Stalin wrote:  “The combination of Russian revolutionary sweep with American efficiency is the essence of Leninism in Party and state work… American efficiency is that indomitable force which neither knows nor recognizes obstacles… ”

In fact, Frederick Taylor’s concept of “scientific management” and the question of its compatibility with Marxist goals were a big subject of controversy among the Bolshevik leaders even before the revolution. Vladimir Lenin was apparently skeptical at first, but in April 1918 at a session of the Supreme Council of the National Economy (VSNKh), he declared:  “We must definitely speak of the introduction of the Taylor System, in other words, of using all scientific methods of labor which this system advances. Without this, it will be impossible to raise productivity, and without that we will not usher in socialism”.

Indeed, Taylorist production methods were essential to the rapid industrial growth in the Soviet Union in the whole period from the beginning of the heavy industry drive up into the 1960s. Scientific management greatly increased the productivity of labor, as in every other country, but with the alienating effects we have discussed in Chapter 10.

To the extent that the image of socialist construction became identified with giant industrial plants and mass production methods – including collectivized agriculture – the Soviet economy suffered more and more from the one-sidedness which was inherent in the original drive for heavy industry. Heavy industry is a necessary, but not a sufficient, condition for sustained physical-economic development. As time went on, the Soviet economy was increasingly hampered by the predominantly linear character of its mode of growth.

Science and technology

A very important exception to this linearity was the spectacular success of the Soviet Union in developing certain high-technology industrial sectors with military-strategic significance, especially in the areas of nuclear energy, aviation and space technology. Parallel with this the Soviet Union built up a “science machine” rivaling that of the United States and contributing significantly to the progress of science and technology in the postwar period.

In fact, the scientific-technological-industrial base built up in the Soviet Union had its own distinct features. The USSR placed enormous emphasis on scientific education and created strong incentives for the most talented young people to go into science and technology. Scientists, engineers and educators enjoyed significant status and privileges in Soviet society. The USSR had 13 Nobel Prize-winners in physics and chemistry.

Nuclear energy

As in the United States, the development of nuclear energy in the Soviet Union began with the effort to build an atomic bomb. The Soviet Union started around 1942, later than the U.S., and profited significantly from information obtained by Soviet spies in the U.S. and British nuclear programs. In 1946 Soviet scientists under the leadership of Igor Kurchatov realized a self-sustaining chain reaction, and already in 1948 a 100-megawatt reactor began operating to supply plutonium. The first Soviet atomic bomb was detonated on August 29, 1949 at Semipalatinsk, followed by the explosion of a hydrogen bomb in 1953, only a year after the United States.

While developing its arsenal of nuclear weapons, the Soviet Union advanced rapidly in civilian and military applications of nuclear reactors. In 1954 the world’s first commercial nuclear power plant, the AM-1, began operation in Obninsk, continuing to generate electricity all the way to the year 2002. In the late 1950s the Soviet Union launched the world’s first nuclear-powered icebreaker ship, the “Lenin”. In 1964 the first prototype of a standardized reactor for large-scale power production, the WWER (sometimes designated VVER), began operation in Novo-Voronezh. The WWER reactor type became one of the world’s most successful and reliable sources of electric power, with 67 units built and operated to date in the former Soviet Union, the Eastern European countries, Finland, China, India and Iran. Going through three successive generations of development, the WWER is one of the most significant high-technology products of Russia today. The Soviet Union also developed fast breeder reactors and the entire nuclear fuel cycle.

The Soviet Union directed large efforts, in parallel with the United States, to developing nuclear fusion as a civilian energy source – an area of research which requires capabilities in many of the most advanced areas of science and technology. The “tokamak”, a prime candidate for the design of a working fusion reactor, was originally invented in the Soviet Union, and Soviet scientists made pioneering contributions in practically every aspect of fusion research.

The Sputnik and Yuri Gagarin

Among the most successful sectors in the Soviet economy was the aerospace industry, thanks in large degree to an extraordinary collection of brilliant scientists and engineers, including space pioneer Konstantin Tsiolkovsky, Sergei Ilyushin,  Sergei Korolyov, Valentyn Glushko, Alexei Tupolev, Igor Sikorsky, Artem Mikoyan and others. The legendary “design bureaus” working under their leadership accomplished astonishing feats of engineering.

More famous are the Soviet accomplishments in manned and unmanned space travel, highlighted by the world’s first artificial satellite – the Sputnik, launched on October 4, 1957 -- and by the mission of Yuri Gagarin, who became the first human being to orbit the Earth, on April 12, 1961. The Soviet Union led the world in sheer numbers of satellite launches, far ahead of the United States. The Soviet technology for launching manned and unmanned payloads into space achieved a very high degree of dependability and relatively low cost, laying the basis for the competitiveness of the Russian space industry today. In fact, after the decision to end its Space Shuttle program, the U.S. for the time being lost its own capability to transport astronauts into space, and became dependent on Russian launch systems and Soyuz capsules. Moreover, most of the rockets used by the U.S. to launch satellites into orbit presently utilize rocket engines designed and fabricated in Russia.

Defeat in the race to the Moon

Having successfully launched the first satellite into space and the first man into orbit, the Soviet Union would seem to have had a big advantage in the race to put a man on the Moon. Why did the Americans arrive there first?

There are specific reasons for the failure of the Soviet effort to land cosmonauts on the Moon, which have since been documented in detail. They included a series of catastrophic launch failures of the N1 rocket which the USSR had developed for the mission, rivalry and lack of coordination between the different design bureaus involved in the project, resistance from the Soviet military establishment over priorities, and a lack of decisive backing from the top Soviet leadership. On a deeper level, however, the fact that the United States reached the Moon first reflects an essential weakness of the entire Soviet system from the standpoint of physical economy.

Although the USSR had brilliant scientists and engineers and a gigantic industrial base, the Soviet system was rigid and compartmentalized, and not organized in such a way that innovations could be rapidly disseminated throughout the economy. Above all it lacked the broad base of private enterprises and the complementary relationship between private companies and the state, which was key to the tremendous dynamism and productivity of the U.S. economy. Probably the most serious weakness of the Soviet system in the area of advanced technologies was the nearly total separation between the military and civilian sectors of the economy. Innovations made in the high-technology military-industrial sector could not propagate into the civilian sector, which remained relatively backward. As a result, those innovations had little effect on the overall productivity of the Soviet economy. The vast investment into the military sector was thus a net burden on the Soviet economy. In the U.S., by contrast, the military and civilian sectors were closely interwoven, with private companies often working in both sectors at the same time. Technological breakthroughs could much more easily pass from one area to the other. Moreover, as we noted above, the “science and technology machine” of the U.S. emphasized the active role of the government in disseminating new knowledge and innovations throughout industry.

These differences were clearly manifested in the space programs of the two nations. The Soviet space program was carried out under the highest secrecy, essentially within the military sector. The U.S. Apollo program, by contrast, was organized as a civilian program and was almost completely open (unclassified). Indirectly it paid for itself, rather than being a burden to the economy as in the case of the Soviet program. The same applies in large measure to the research and development activities of military programs.

The “Era of Stagnation”

In the late 1960s and early 1970s the symptoms of weakness in the Soviet Union’s economic trajectory became more and more apparent. Among these was the chronic undersupply of consumer goods to the Soviet population, and their inferior quality and limited variety compared to consumer goods made in the West. Associated with this was the rapid growth of the “informal”, “shadow” or “second economy” comprised of economic activities which were illegal according to state laws. The “shadow economy” was strongest in the consumer sector of the Soviet economy (including agriculture), but with time its influence extended more and more into core industrial sectors, resulting in increasingly serious discrepancies between the economic trajectory intended by Soviet planners and the actual trajectory.

It would go beyond the scope of this book to try to answer the question, to what extent the collapse of the Soviet Union was due to economic causes, or rather was primarily the result of political decisions. Curiously, the period of relatively successful growth and development of the Soviet economy came to an end roughly at the same time as the 1970s “branching point” in the economic trajectories and policies of the U.S. and other Western countries.


Chapter 16  The rise of China

The modern economic history of China is an extraordinarily complex subject. A vast literature exists, but as far as we know, there has been no comprehensive study of China’s modern development using the approach of physical economy. In the following we confine ourselves to some essential points.

16.1 From Sun Yatsen to Mao Zedong

Sun Yatsen, leader of the revolutionary movement which overthrew the last dynasty in China in 1912, dreamed of transforming China into a modern industrial nation with a republican form of government. In 1921 Sun Yatsen published a detailed plan for the economic development of China, including the creation of a national railroad network with over 160,000 kilometers of new railways, the construction of one and a half million kilometers of roads, a vast network of canals and ports, hydroelectric dams and new industrial centers. Sun also specified the priorities for development of agriculture and the various branches of industry. The plan, published as a 250-page book entitled “The International Development of China”, foresaw foreign investment as the key locomotive for China’s economic transformation. Sun argued that China’s economy would provide a huge and growing market for the industrialized nations which had emerged from World War I with enormous industrial over-capacities. Sun wrote:  “The world has greatly benefited by the development of America as an industrial and a commercial nation. So a developed China with her 400 million population [now 1400 million – JT] will be another New World in the economic sense. The nations which will take part in this development will reap immense advantages.”

In many ways Sun Yatsen’s dream anticipated the unprecedented growth of China’s economy over the last 30 years. In fact, industrial development had already begun in the early decades of the 20th century, particularly in Shanghai and other port cities, and especially during the so-called “Golden Decade” under the Nationalist government from 1927 to 1937. In that period China established the beginnings of a national base of science and technology. Universities were created, and the Chinese Academy of Sciences, generally known as the Academia Sinica, was set up on the model of the Soviet Academy of Sciences.

Unfortunately, following the death of Sun Yatsen in 1925, a bloody civil war began between the Nationalist forces of Chiang Kaishek and the Communist Party. In the midst of that civil war came first the Japanese occupation of Manchuria in 1931 and then an all-out invasion of the country by Japanese forces in 1937. This led to the occupation of large parts of the country, unspeakable atrocities against the Chinese population and a bitter resistance war lasting all the way to the end of World War II. After the defeat of Japan the civil war in China resumed in full force up to the victory of the Communist forces under Mao Zedong in 1949.

16.2  Soviet-style industrialization

Needless to say, China’s economy was in a devastated state when the Communist Party came to power. After restoring the most essential economic activities in the country, including especially food production, the Communist Party adopted a strategy of rapid buildup of heavy industry and infrastructure on the Soviet model, and with massive inputs from the Soviet Union itself. Large amounts of machinery and even entire plants were purchased from the USSR, and Soviet engineers and planners played a major role in the buildup of industry. In the rural areas, where in 1949 nearly 90% of the population still lived, the Communist Party pursued a policy of land reform and the formation of cooperatives and collective farms.

In terms of industrial production the first Five-Year Plan (1953-1957) was a spectacular success. Relatively speaking, the agricultural sector lagged behind – in part due to lack of investment and the burden of exports to pay for Soviet aid – but the supply of food to the population improved overall. The urban population grew rapidly. These problems could have been corrected by rational decisions; unfortunately, as in the Soviet Union but in a much more radical and irrational form under Mao, subsequent economic policy was characterized by abrupt changes from one direction to another. Economic development was disrupted and set back by Mao’s imposition of abrupt ideological shifts, as typified by the succession of the “Hundred Flowers” campaign, violent “anti-rightist” campaigns, the “Great Leap Forward” (1958-1959) with the ensuing “three bitter years” of hunger and famine, and finally the “Great Proletarian Cultural Revolution” (1966-1976), which threw the country into chaos. The Cultural Revolution had also been preceded by Mao’s “Third Line” policy (1963-1965) to move large parts of Chinese industry from the coastal areas into the interior of the country. A large number of factories were literally dismantled, their components transported and reassembled in remote locations. This policy, motivated by Mao’s fear of an American invasion by sea, was accompanied by significant infrastructure construction in underdeveloped regions of China, but it placed an enormous burden on the economy, including much waste of resources on account of the extreme haste with which the “Third Line” was implemented.

Against this background it is remarkable that over the long run, apart from all the political disruptions and abrupt changes in economic policy, the per capita output of agriculture and industry continued to grow at a significant pace all the way up to the reforms initiated under Deng Xiaoping in 1978. In particular it should be noted that China succeeded in accomplishing the “miracle” of feeding a population which grew from 542 million in 1949 to over 956 million in 1978. China had also exploded its first atomic bomb (1964) and hydrogen bomb (1967), and launched its first satellite into orbit (1970) – the results of programs that had been initiated with Soviet assistance, but were continued by China alone after 1962-1963.

16.3  Reform and the two-track policy

Despite these and other accomplishments of the pre-reform period, it is indisputable that the reform course initiated under Deng Xiaoping in 1978-79 marked the turning point in the process of China’s rise to the status of an “economic superpower” in the world today. In the West, China’s reform course is often placed in the same category with the neoliberal policies of deregulation, privatization and globalization adopted more or less in parallel by the Western nations. The reality is different. Although the Chinese leadership has learned to exploit the neoliberal policies of other nations for its own goals, the economic policies pursued inside China have a very different character. On the one side they included the introduction of markets and more or less free competition in large domains of the economy, thereby eliminating much of the chronic inefficiency of the “pure” socialist system. On the other side, however, they featured an overwhelmingly powerful role of the state in planning and steering the economy in desired directions.

From the beginning of the reforms the Chinese government adopted a two-track policy. One side is well-known, and the other less so. The first track was to massively promote the export of products of labor-intensive industries on the world market, exploiting the vast pool of cheap labor existing in the country. In this context the globalization of the world markets created a gigantic opportunity for China. Chinese state-owned and private companies became large-scale exporters of low-cost consumer goods.   At the same time strong incentives were created to induce large Western companies to “outsource” labor-intensive steps of their manufacturing flows to China – for example the final assembly of computers. This led to a strong “export processing” or “triangular” structure in China’s trade: inputs from one country (or several countries) are processed in China into final goods and exported to a third country. The buildup of such “processing” trade was accompanied by huge amounts of foreign direct investment for setting up production facilities in China. The income from exports could be applied to obtaining the machinery, equipment, high-tech components, as well as raw materials (including oil) which China requires to maintain its production and internal physical investment. China developed a large trade surplus. This trajectory is a particularly successful version of the one commonly recommended to developing countries by leading international institutions and is entirely compatible with liberal economic policies.

The second policy track, which is much less well known, consisted of a systematic “dirigiste” buildup of key industrial sectors selected on the basis of their strategic importance – the so-called “strategic” and “pillar” industries – using all the instruments available to a sovereign state. This second track runs totally contrary to the dictates of the neoliberal model and is designed to propel China to the status of the world’s Number One economic power, eventually overtaking the United States and other Western countries in virtually every domain of production.

To understand the sophisticated combination of strategy and tactics being pursued by the Chinese leadership, one must focus on the single most significant result of the reforms in physical economic terms: the drastic upgrading of the technological level of the Chinese economy. This has been achieved mainly through the absorption and efficient application of technology obtained from foreign nations. Achieving access to that technology and integrating it into the Chinese economy required a radical institutional change both in China internally and in its economic relations with the external world. One should bear in mind that from the time of the Sino-Soviet split at the beginning of the 1960s, until 1978, China had been essentially closed off from the external world. Moreover, the technology obtained from the Soviet Union, although adequate for the original rapid development of heavy industry in China, was generally far behind the standard of the advanced Western countries. In the face of these and many other difficulties Deng Xiaoping rejected the ideological fanaticism of the Maoist period and adopted a pragmatic approach to the development of the country, as expressed in his famous statement: "It doesn't matter whether a cat is white or black, as long as it catches mice."

16.4  “Catching the mice”: modernization with the world’s best technologies

The “mice” which Deng Xiaoping wanted to catch included above all the technologies and know-how of the most developed nations. China would first catch up with those nations, and then – in the ambitions of China’s leaders – move beyond them to become the world leader in science and technology.

The well-known Chinese economist Justin Yifu Lin describes the logic involved as follows, in a paper entitled “The Chinese Miracle Demystified”:

“A continuous stream of technological innovations is the basis for sustained growth in any economy. … In advanced high-income countries, technological innovation and industrial upgrading require costly and risky investments in research and development, because their technologies and industries are located on the global frontier. Moreover the institutional innovation which is required for realizing the potential of new technology and industry often proceeds in a costly trial-and-error, path-dependent, evolutionary process. By contrast, a latecomer country in the catching-up process can borrow technology, industry and institutions from the advanced countries at low risk and costs. So if a developing country knows how to tap the advantage of backwardness in technology, it can grow at an annual rate several times that of high-income countries for decades before closing its income gap with those countries. … After the transition was initiated by Deng Xiaoping in 1979, China adopted the opening-up strategy and started to tap the potential of importing what the rest of the world knows and exporting what the world wants.”
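The compound-growth arithmetic behind Lin’s claim is easy to check. The sketch below uses purely hypothetical figures (an 8% annual growth rate for the latecomer, 2% for the advanced economy, and an initial 10:1 per-capita income gap – none of these numbers are from Lin’s paper):

```python
import math

# Hypothetical illustration of catch-up ("advantage of backwardness") arithmetic.
# None of these figures come from Lin's paper.
gap = 10.0           # initial ratio of advanced-economy to latecomer per-capita income
g_latecomer = 0.08   # latecomer's annual per-capita growth rate
g_advanced = 0.02    # advanced economy's annual per-capita growth rate

# The income gap shrinks by a factor (1 + g_latecomer)/(1 + g_advanced) each year;
# it closes when gap / ratio**t = 1, i.e. t = ln(gap) / ln(ratio).
ratio = (1 + g_latecomer) / (1 + g_advanced)
years_to_close = math.log(gap) / math.log(ratio)
print(round(years_to_close, 1))  # roughly 40 years
```

On these assumptions the gap closes in roughly four decades – consistent with Lin’s statement that a latecomer can grow several times faster than high-income countries “for decades before closing its income gap”.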

However, in order for the strategy identified by Lin to work in the long run, certain problems must be solved:
(1) China must have the means to get access to “what the world knows”, i.e. the most up-to-date technology and know-how of the advanced nations.
(2) Policies must be in place to nurture and support Chinese companies in the process of assimilating the new technologies. These policies range from easy access to credit, direct state investment and subsidies, and guaranteed markets, to various forms of direct and indirect protection from foreign and domestic competitors, plus a variety of other incentives.
(3) China must have the scientific and technical manpower to master and apply those technologies.

Let us briefly see how China has responded to these three challenges.

How does China obtain foreign technologies? A 2010 article by Thomas Hout and Pankaj Ghemawat in the Harvard Business Review, “China vs. the World: Whose Technology Is It?” gives part of the answer:

“Our studies show that since 2006 the Chinese government has been implementing new policies that seek to appropriate [i.e. take over] technology from foreign multinationals in several technology-based industries … These rules limit investment by foreign companies as well as their access to China’s markets, stipulate a high degree of local content in equipment produced in the country, and force the transfer of proprietary technologies from foreign companies to their joint ventures with China’s state-owned enterprises. The new regulations are complex and ever changing. They reverse decades of granting foreign companies increasing access to Chinese markets and put CEOs in a terrible bind: They can either comply with the rules and share their technologies with Chinese competitors—or refuse and miss out on the world’s fastest-growing market… The Chinese government has deployed several strategies to help local companies acquire state-of-the-art technologies and break into the global market. Some work in a top-down fashion, others from the bottom up. Beijing drives the process nationally in most capital-intensive sectors.”

The article gives an example:

“Consider high-speed railway systems, now an estimated $30 billion a year market in China. In the early 2000s the superior equipment of multinational corporations such as Alstom, which built France’s TGV train system; Kawasaki, which helped develop Japan’s bullet trains; and Siemens, the German engineering conglomerate, gave foreign companies control of about two-thirds of the Chinese market. The multinationals subcontracted the manufacture of simple components to state-owned companies and delivered end-to-end systems to China’s railway operators. In early 2009 the government began requiring foreign companies wanting to bid on high-speed railway projects to form joint ventures with the state-owned equipment producers CSR and CNR. Multinational companies could hold only a 49% equity stake in the new companies, they had to offer their latest designs, and 70% of each system had to be made locally. Most companies had no choice but to go along with these diktats, even though they realized that their joint-venture partners would soon become their rivals outside China.

“The multinationals are still importing the most-sophisticated components, such as traction motors and traffic-signaling systems, but today they account for only 15% to 20% of the market. CSR and CNR have acquired many of the core technologies, applied them surprisingly quickly, and now dominate the local market… The combination of low manufacturing costs and modern technologies is helping them make inroads in developed markets, too …”

Another tactic which China has been using quite effectively is the direct purchase, by Chinese companies and investors, of small- and medium-sized high-technology enterprises in the U.S. and Western Europe. An important example is suppliers of components to the automobile industry. The Chinese government has identified this as a key area for obtaining technology, in order to make Chinese-built automobiles competitive in the future with Western manufacturers on the world market. When the owners of small- and medium-sized companies wish to sell -- for example because of financial difficulties, or retirement in the case of a family owner -- Chinese investors offer the best price. After purchasing such a supplier company (and thereby also the rights to its technologies), they set up a factory in China using those technologies to supply Chinese automobile producers.

Obtaining the most up-to-date technologies in these and other ways permits China to replace more and more of the high-tech equipment and components it formerly had to import from abroad with products of its own growing high-tech industry. The total so-called “value added” of China’s production is thereby increased. The process of import substitution has greatly accelerated over the last 15 years and is becoming a more and more important factor in China’s economy.

The dual-track policy is spelled out in more detail in an article by Daniel Poon entitled “China’s Development Trajectory: A Strategic Opening for Industrial Policy in the South” (UNCTAD, Paper No. 218, December 2014):

“China’s so-called ‘dual track’ economic reform strategy … blended ongoing support for import-substitution in selected sectors with an evolving array of export processing activities. China’s export processing prowess is quite widely documented, but the strategic policy regime of import-substitution in selected strategic sectors is often less appreciated despite its close connection to the heart of China’s bank-centric investment-led growth model. … By the early 1990s, structural changes in China’s export basket from lower value-added manufactures such as apparel and clothing accessories, towards higher value-added manufactures such as electrical, computers, and telecommunications equipment, had led some to argue that China’s export bundle resembled the sophistication of a country with an income-per-capita level three times higher. Critics point out, however, that China’s apparent export sophistication is misleading given elevated degrees of imported high-value parts and components that are assembled in factories based in China and subsequently re-exported. This is known as China’s ‘triangular’ trade or ‘processing’ trade whereby China acts merely as the preferred low-cost assembly platform, the last stage in GVCs [global value chains – JT] whose design and architecture are ultimately orchestrated by multinational corporations (MNCs) based in advanced economies … An examination of the production chain for an Apple iPod, for example, revealed that only about US$5 out of the total value of US$180 can be attributed to assembly and testing activities of producers (mostly from the Taiwan Province of China) located in China, while most of the value accrues to lead firms based in the United States, Japan and the Republic of Korea. …

“The difficulty in assessing China’s development trajectory stems from the ‘dual-track’ nature of its economic reforms. In economic development and trade policy, in particular, the reform package combined ongoing support for import-substitution in selected sectors, while simultaneously conducting export processing activities considered as ‘new’ for the domestic economy. … The defining feature of this dual-track approach was to effectively cordon-off strategic parts of the domestic economy from the processing trade regime’s outputs and imported inputs. This is the essential difference in policy regime between incoming FDI to China that is ‘market-seeking’ (to gain access to the domestic market), and FDI that is ‘efficiency-seeking’ (to utilize China as a low-cost assembly platform)…

“The concepts of ‘strategic’, ‘key’, ‘backbone’ and ‘pillar’ sectors have a long history in China, but it was only in 2006, and after the establishment of the State-owned Assets Supervision and Administration Commission (SASAC) in 2003, that the Chinese Government more clearly delineated the role of the State in these categories of industries. … For instance defence, electrical power and grid, petroleum and petrochemical, telecommunications, coal, civil aviation, and shipping are categorized as ‘strategic’ sectors where the state will maintain sole ownership or absolute control. Other sectors, such as equipment manufacturing (machinery), automobiles, information technology, construction, iron and steel, non-ferrous metals, chemicals, land surveying, and R&D and design, are categorized as ‘pillar’ industries where the State will maintain strong control and influence. For these reasons, it is commonly observed that Chinese State firms still retain control over the ‘commanding heights’ of the economy …”

16.5 Pillar industries and the systematic use of directed credit

Daniel Poon also identifies the main methods by which China has financed the systematic buildup of its endogenous high-technology industry:

“For the most part, it is the large State-owned firms that are the principal beneficiaries of China’s bank-centric financial system which drive the high investment, rapid expansion of infrastructure inside the Chinese economy. The core of the State sector, namely the oil, metallurgy, electricity, telecommunications and military industry sectors, accounting for three-quarters of the capital of SASAC-owned firms, and producing less than four per cent of China’s total exports …

“With the continued State control and ownership of the Chinese banking system and the practice by China’s central bank, the People’s Bank of China (PBoC), of setting bank lending and deposit rates while also limiting other investment channels for depositors, Chinese policymakers have mobilized resources mainly by engaging in so-called ‘financial repression’, making low-cost pools of savings/capital available to the banking system. This was a conscious policy decision to rely on domestic bank credit rather than turning to and tapping into international capital markets, with the benefits/risks such an option entails. Although the role of bank credit has been reduced through reform measures that have led to the development of other capital sources (bond and stock markets), China’s financial system remains predominantly bank-centric. … Fiscal policies have prioritized development spending, particularly investment in infrastructure and education, along with subsidies to export industries. Second, monetary policy was integrated with banking/financial sector and industrial policies, including directed credit and favorable interest rates…”

In many ways China’s use of “directed credit” or “channeled credit” resembles the “window guidance” policy which Japan used to finance its economic miracle in the postwar period. Naturally, Chinese economists have studied the Japanese case very carefully.

16.6  Mass production of scientists and engineers

As we mentioned, scientific and technical manpower is a precondition for being able to assimilate advanced technologies into the Chinese economy. Here the Chinese government has expended enormous efforts, especially over the last decade, and these efforts have achieved major successes.

“Since 1998 the total number of national institutions of higher education (universities and general colleges) has grown 2½ times, from 1022 in 1998 to 2553 in 2015. About 25 million students are presently enrolled in Chinese colleges. The number of graduates entering the job market is now about 7.5 million per year, and rapidly growing. China leads the world in the number of bachelor’s degrees in the fields of science and engineering (S/E), turning out 4 times more than the United States. This reflects not only China’s dramatic expansion in higher education since the late 1990s but also a much higher percentage of students majoring in S/E in China, around 44% in 2010, compared with 16% in the United States. … China’s massive production of scientists and engineers is not limited to the bachelor’s level. Its growth in S/E doctoral degrees has been comparably dramatic. In 1993, China’s number of degrees was only 10% of that for the United States, but by 2010 China exceeded the United States by 18%.” (From an article by Xie, Zhang and Lai, Proceedings of the U.S. National Academy of Sciences, July 2014)

16.7  The greatest infrastructure boom in history

Parallel with the rapid expansion and technological upgrading of China’s industrial base, the Chinese leadership launched a buildup of physical infrastructure on a scale never seen before in world history. China’s use of productive credit generation and directed credit has been decisive for this process.

China’s ongoing infrastructure boom is most famous for its spectacular “megaprojects” such as the Three Gorges Dam (the most powerful hydroelectric power station in the world), the 165 km-long Danyang-Kunshan Grand Bridge (the world’s longest bridge) or the epochal South-to-North Water Diversion Scheme (the world’s largest water transfer project), whose 1,432-kilometer-long central line was opened in December 2014. But alongside these megaprojects China has been expanding transportation, energy, water and communication networks throughout the whole country.

Since 2000 the length of China’s expressway network, already the second-largest in the world, has been increasing at an average rate of about 20% per year, growing from a mere 1,600 kilometers in 1995 to 34,000 km in 2005 and 112,000 km in 2015.
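Growth figures of this kind are easy to verify for oneself. A minimal sketch (using only the kilometer values cited above) computes the compound annual growth rate implied by each pair of endpoints:

```python
# Compound annual growth rate (CAGR) implied by two endpoint values.
def cagr(start_value, end_value, years):
    """Return the constant annual rate linking start_value to end_value."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Expressway network lengths cited in the text (kilometers)
growth_1995_2015 = cagr(1600, 112000, 20)   # full 1995-2015 period
growth_2005_2015 = cagr(34000, 112000, 10)  # most recent decade

print(f"1995-2015: {growth_1995_2015:.1%} per year")  # roughly 24% per year
print(f"2005-2015: {growth_2005_2015:.1%} per year")  # roughly 13% per year
```

As one would expect for a maturing network, the implied rate is higher in the early period and lower in the recent decade; the 20% figure cited above is an average over the whole interval.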

Railway construction is also proceeding at a rapid and accelerating pace, particularly in the relatively underdeveloped western and central regions of the country. Total railroad length has grown from 69,000 km in the year 2000 to 112,000 km in 2014, with an additional 8,000 km projected to be completed by the end of 2015. To get an idea of the scale of construction, bear in mind that 8,000 kilometers is more than twice the distance between the Atlantic and Pacific coasts of the United States.

Meanwhile China has been rapidly developing a national system of high-speed railways (HSR) with top speeds between 200 and 350 km per hour. The Chinese HSR network reached a length of over 16,000 kilometers at the end of 2014, longer than the total length of all other high-speed rail networks in the world combined. When completed, the full grid is intended to connect all Chinese cities with more than 500,000 inhabitants. In this way China can slow the growth of domestic air traffic and achieve enormous gains in the physical efficiency of its economy.

16.8  Urbanization – hundreds of new cities and towns

The development of China into a fully industrialized nation goes hand-in-hand with an urbanization process of unprecedented proportions. The urban share of the total population has grown from 17% in 1975 to 23% in 1985, 31% in 1995, 43% in 2005 and 56% in 2015. In the last 15 years alone, the urban population of China has grown by 320 million persons – more than the total population of the United States. One can easily imagine the scale of urban construction this process has involved: housing, schools, office buildings, roads, public transport etc. The rapid urbanization process is by no means spontaneous, but has been deliberately encouraged by policies adopted by the Chinese government.

In this context a big debate has arisen, both inside and outside China, about overcapacities, excessive construction and real estate speculation connected with the urbanization boom. Particular attention is focused on the phenomenon of so-called “ghost cities” arising all over the country: entire new cities and urban districts which are supposedly empty of inhabitants. An article by Wade Shepard in China Daily (3/7/2015) clarifies the issue of the so-called “ghost cities” and provides some background to China’s present urbanization process:

“What China has is the opposite of ghost towns. It has new cities in the process of coming to life. 

“Modern China's urbanization story begins in the early 1980s, when large swaths of rural land across the country began being rezoned as urban en masse, and larger cities started being granted authority over prefectures. At that time, only 180 million Chinese lived in cities … But by the early 2000s, China's urbanization ambitions were kicked into overdrive, as nearly every city in the country began rapidly expanding outward and completely new cities and towns started sprouting up in the expanses between them…

“In a 2013 survey, China's National Development and Reform Commission set out to discover just how many new urban areas were actually being built. Their findings: In just 144 of China's cities, more than 200 new towns were either being constructed or were in the planning stages.

“China is now home to one fourth of the world's 100 largest cities. It has 171 cities with over a million people … There are now 730 million urban Chinese, and the nation's ‘National Urbanization Plan’ aims to keep this number rising. Over 250 million more Chinese are expected to be living in cities by 2030…

“With all of this urban activity and migration to cities, why do so many of China's new developments appear so empty?”

Shepard’s answer points to one of the typical “capitalist” tools the Chinese government uses to steer development in a desired direction:

“China's urbanization program has been forced into motion by a fiscal policy that all but demands local cities to expand to remain economically solvent.  According to the World Bank, China's cities must fend for 80 percent of their expenses while only receiving 40 percent of the country's tax revenue, so land sales are often used to make up the difference. Land is bought by cities at the low rural rate, rezoned as urban, and then sold to developers at the high urban construction land rate… The profits are huge… Corruption and errant spending aside, this money is often essential for sustaining urban infrastructure, funding public institutions and facilities and various other social programs. Cities expanding beyond their current needs are all too often a built-in inevitability. When developers purchase these fresh plots of urban construction land, they are required to build something almost immediately… 

“Although the sure-shot economic stimulus of masses of ready homebuyers kept the wheels of China's new city movement spinning, it often led to huge amounts of the housing stock being purchased by people with no intention of living in it, which often drove the prices higher than what those who actually wanted to move in could afford. This has had the effect of making many of China's new cities take on the appearance of ghost towns. This is something that in large part has been corrected by new government policies to restrict real estate speculation, curb the laundering of illicitly received funds in housing, and to open up alternative avenues for investment, which has lowered or leveled off the cost of housing in much of the country.

“Although there are many built-in factors that inhibit the vitalization of many of China’s new urban areas, the country has the means to break this inertia. If there is one thing that China has a lot of, it’s people, and the country maintains ways of moving these people around the country to live in new areas that are in the process of vitalization.

“One of the main ways that large scale new areas are stimulated is through the building of university towns. These sprawling networks of multiple college campuses are able to attract hundreds of thousands of students and staff members… Another strategy in bigger cities is to build a central business district and then force its occupation by movement of the headquarters and offices of state-owned banks and other businesses … A third strategy for sparking a population in a new area is to move in government offices and offer housing subsidies and incentives to the officials and workers to get them to follow their job. ...

“Eventually, as people trickle in through free will or fiat, China's new cities begin developing a social and economic base. Although there are almost always bumps in the road, political and economic mishaps, and delays, essential infrastructure generally gets built, public institutions are created, shopping and entertainment centers open, and nearly everything that's needed or wanted to sustain life in modern China gradually becomes available.

“Some of China's most notorious ghost cities have been attracting considerable numbers of residents, according to a report by Standard Chartered. … When we consider that many of these new cities are packed full of high density housing, large vacancy rates can be sustained while still having a lively civic pulse and an adequate amount of economic activity…  China's new cities are generally not being built for today or even for tomorrow, but for decades from now, when the country's urban population is predicted to top 1 billion and a network of megacities stretches across the land ... China's new cities are generally built on 20 year timelines  … To put it simply, China's new cities are new, and the ghost city critique usually amounts to little more than an analysis of a temporary phase of development as China builds new cities for the deluge of urbanization that was intentionally set in motion.”

16.9  The Future of China

China’s development has been a gigantic experiment in physical economy. There is no doubt that this experiment has been a brilliant success, up to now, both in comparison with other developing countries and in light of the enormous difficulties the country has faced along the way. Today, too, China faces enormous challenges – not least of all in connection with the social dynamics which have been set into motion by its rapid economic transformation. Assuming that the country remains socially and politically stable, the problems which are most often mentioned today – widespread corruption, accumulations of bad debts, large overcapacities in certain sectors, pollution and waste etc. – can most likely be managed. Generally speaking the threat posed by these problems to the future of China tends to be overestimated by Western commentators, because of their tendency to project the experience and criteria of Western liberal economies onto a system which operates in a completely different way. One very important difference is the fact that – so far, at least – China’s financial system and financial policies have been subordinated to the needs of the physical economy.

More interesting for us is the question of what will happen when (1) China has “caught up” technologically with the advanced industrial nations, and can no longer tap the large pool of existing foreign technologies as a source for its growth and development; and (2) China exhausts the large reserves of labor power which it still has today, especially in the rural areas. How well will China succeed in developing its indigenous capacities for technological innovation? Can China become the source of new scientific and technological revolutions? Will China be able to develop a Knowledge Generator Economy? These questions point to the greatest challenge for China: to realize the full creative potential of its 1.4 billion citizens.
 

Conclusion: The future of physical economy

The late 1960s and early 1970s marked a turning-point at which the United States and other leading industrial nations began to abandon the postwar focus on real physical-economic development in favor of “postindustrial” or “consumer society” models. The accompanying changes in economic and financial policies have proven disastrous in their effects on many nations -- especially developing nations, which accumulated mountains of debt as a result of international credit policies oriented to financial gain rather than to development of the real economy. The neoliberal economic policies of deregulation, privatization and financial globalization, adopted by the Western nations in recent decades, led finally to the catastrophic financial crisis unleashed in 2007-2008, whose effects have by no means been overcome.

In the meantime a tectonic change is taking place in the structure of the world economy. While New York and London remain -- for the time being! – leaders in the “virtual world” of international banking and finance, the center of gravity of the world’s real economy has shifted more and more to Asia.

As we detailed above, China is presently engaged in physical-economic development on a scale never seen before in human history. Despite many weaknesses and difficulties, China’s development —together with that of India and some other countries -- has created a new situation, which is causing nations around the world to reorient to the principles of physical economy.  This tendency is being strengthened by China’s support for large-scale infrastructure projects in Asia, Africa and South America -- via the China Development Bank for example -- and by the creation of new financial institutions oriented to the needs of physical-economic development: the Asian Infrastructure Investment Bank and the BRICS Development Bank. At the same time, new currency arrangements are being made which can replace the dollar as a medium for payment among cooperating nations. It is entirely conceivable that an alternative “multipolar” international monetary and financial system may emerge, alongside the present dollar-based system, and possibly even replacing it in the longer term. This would spell the end of the dominance of Anglo-American financial institutions in the world economy.

It is quite possible that the rising economic, technological and military power of China, India and a number of other nations, the new role of Russia etc. will induce the United States and other Western countries to drop “postindustrial” and neoliberal policies, and revive their previous orientation to physical economy. It is also possible that the world will go through a frightful period of conflict and chaos before sanity returns. But sooner or later the principles of physical economy will become the dominant paradigm.

Physical economy as a science is in many ways still in its infancy. Fundamental principles have been established, but much remains to be done to elaborate them further and to develop suitable techniques for the analysis and projection of actual economies. There is enough work to occupy entire institutes. We hope, in fact, that such institutes will be created in the future.

In the past, nations have developed successfully on the basis of a mere conceptual orientation to the principles of physical economy, without the need for elaborate analytical tools. The main exception in the modern period has been the use of input-output analysis as an aid to some decision-making. But as we noted above, the input-output method in its present form is intrinsically linear, and cannot adequately describe the dynamics of economies undergoing rapid scientific and technological progress. And yet it is exactly such nonlinear development which is the main subject of physical economy. As scientific and technological advance becomes increasingly the determining factor in the structure and dynamics of national economies, it will become necessary to go beyond the bare concept of physical economy, to develop precise methods for determining the actual trajectory of a given economy and evaluating the consequences of alternative policies.
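The linearity of the input-output method can be made concrete in a few lines. In the open Leontief model (discussed in Chapter 3.6), a fixed matrix A of technical coefficients and a final-demand vector d determine the gross output x through x = Ax + d, i.e. x = (I − A)⁻¹d. The sketch below, with purely illustrative two-sector coefficients, shows that doubling final demand exactly doubles output; technological progress, by contrast, changes the matrix A itself, and this is precisely what the linear model cannot capture:

```python
import numpy as np

# Open Leontief input-output model: gross output x satisfies x = A x + d,
# hence x = (I - A)^(-1) d.  The technical coefficients A are held fixed,
# which is exactly the linearity discussed in the text.

# Illustrative 2-sector coefficient matrix (hypothetical numbers):
# A[i, j] = input from sector i needed per unit of output of sector j.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])

d = np.array([100.0, 200.0])  # final demand per sector (hypothetical)

# Gross output required to meet final demand d
x = np.linalg.solve(np.eye(2) - A, d)

# Doubling final demand exactly doubles output: the model has no room
# for the changes in A that technological revolutions bring about.
x2 = np.linalg.solve(np.eye(2) - A, 2 * d)
assert np.allclose(x2, 2 * x)

print(x)
```

Capturing nonlinear development would require the coefficients of A to evolve as functions of investment and technological change, rather than remaining constants of the model.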

The greatest challenge lies in dealing with what we have called the “strongly nonlinear” mode of economic growth characterized by an unfolding series of technological revolutions derived from fundamental discoveries in science. It is exactly in this domain that physical economy can claim the greatest methodological superiority over all other approaches. But here concrete applications to macroeconomic analysis and projection lag far behind theory. The idea of Lyndon LaRouche, to develop a mathematical approach based on the work of Riemann and Cantor, has so far not been realized.

A related direction of work, of especially keen interest to the author, concerns the way physical economy and the implied approach to “strongly nonlinear” processes (in our sense) can shed new light on fundamental problems of physical science. I intend to pursue this subject in future publications.

In the meantime it is to be hoped that this book, despite its shortcomings, will suffice to demonstrate the extraordinary importance of physical economy, and inspire others to pursue its development and application in the interest of mankind’s future.

 

END