Breakthrough areas of science


This document identifies and discusses 18 areas of scientific research in which revolutionary breakthroughs can be expected. It was written in non-technical language for a special occasion in 2007, but should still be of interest. I hope to revise and update it later this year - JT

1. How is the mind related to the brain?

2. Water and the physical basis of "holistic" organization of living processes

3. Control of gene expression and the paradox of "supercomplexity" of living systems

4. Beyond genetic engineering -- from Darwinian to Vernadskian evolution

5. What is a chemical bond?

6. Supramolecular chemistry and the "sociology of matter"

7. The puzzle of high-temperature superconductivity - What is an electric current?

8. Magnetic isotope effects - a secret of living organisms?

9. Low energy nuclear processes

10. Changing the "constants" of radioactive decay.

11. Where do raw materials come from?  Self-organization and the generation of oil, gas and mineral deposits in the Earth

12. Coupling of Earth, biosphere and solar activity, and electromagnetic meteorology

13. The coherent organization of the solar system, the Crab Nebula, galaxies and other astronomical objects -- can these objects be regarded as living organisms?

14. Beyond quantum physics -- What is the origin of quantization of the micro- and macroworld?

15. Does Time have a structure?

16. Are there general laws of technological progress? Development of a science of physical economy

17. Revolutionary new types of aircraft and interactive aerodynamics

18. Nuclear space propulsion and principles of space exploration in the 21st century


1. How is the mind related to the brain? 

A. It is well known that the mental activity of any human being depends on complex electrical and chemical processes occurring in the human brain -- an organ composed of living cells. But what is the exact relationship between the ideas, memories, emotions and thinking processes which we experience subjectively in our own mental activity, and the corresponding objective physical processes in the brain? For example, do there exist specific features of the physiological processes in the brain that correspond to a specific type of mental activity, such as "discovering a creative solution to a problem"? Breakthroughs in understanding the brain-mind relationship could revolutionize both medicine and psychology, improve the treatment of mental illnesses, lead to the development of powerful new types of computers, and perhaps even provide ways to improve human mental performance.

B. Although there have been enormous technological advances in methods for observing the physical activity of the human brain -- including three-dimensional imaging of metabolic activity -- present-day science is very far from being able to account for the brain-mind relationship on the basis of existing concepts and models. The problem is very fundamental, because until now the languages and conceptual systems of psychology on the one side, and physiology, biology and medicine on the other, have been completely different and even incompatible with each other. The psychologist deals with ideas and emotions, the physiologist deals with electrical currents, molecules, cell membranes etc. -- two categories of objects that seem to have nothing in common. Since the 1950s, the main tendency of research has been to try to unify the description of the brain and mind on the basis of analogies with digital computers or other information-processing devices. But today it is generally acknowledged that the principles of organization of the human brain must be fundamentally different from those of digital computers or other known electronic devices. Needless to say, digital computers do not show any sign of the creative activity that is characteristic of the human brain. New ideas are required. One of the promising directions of brain research today is the investigation of the interaction between oscillatory processes at various levels of the brain's organization -- from microscopic chemical and electrical oscillations taking place in individual neurons, to collective oscillations occurring on the level of entire regions of the brain, which contribute to the well-known "brain waves". From the standpoint of these investigations, the brain appears as a hierarchically organized, evolving system of nonlinearly interacting oscillators.
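The last idea can be made concrete with a toy numerical sketch -- not a model of any actual brain circuit -- using the well-known Kuramoto equations for nonlinearly coupled oscillators. The population size, natural frequencies and coupling strength below are arbitrary choices for demonstration; the point is only that, above a critical coupling, many independently oscillating units spontaneously fall into a collective rhythm:

```python
import math
import random

def simulate_kuramoto(n=30, coupling=1.5, dt=0.01, steps=1500, seed=42):
    """Toy sketch of n nonlinearly coupled oscillators (Kuramoto model).

    Each oscillator has its own natural frequency; the sine coupling
    pulls phases together. Returns the phase coherence r (0 = fully
    incoherent, 1 = perfectly synchronized) before and after the run.
    """
    rng = random.Random(seed)
    freqs = [rng.gauss(1.0, 0.1) for _ in range(n)]          # natural frequencies
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]  # random start

    def coherence(ph):
        re = sum(math.cos(p) for p in ph) / len(ph)
        im = sum(math.sin(p) for p in ph) / len(ph)
        return math.hypot(re, im)

    r_start = coherence(phases)
    for _ in range(steps):
        # synchronous Euler update of all phases
        new_phases = []
        for i in range(n):
            pull = sum(math.sin(pj - phases[i]) for pj in phases) / n
            new_phases.append(phases[i] + dt * (freqs[i] + coupling * pull))
        phases = new_phases
    return r_start, coherence(phases)

r_initial, r_final = simulate_kuramoto()
print(f"coherence before: {r_initial:.2f}, after: {r_final:.2f}")
```

With the coupling chosen well above the synchronization threshold, the final coherence comes out close to 1, illustrating how a collective oscillation can emerge from many individually "noisy" oscillators.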

(The most revolutionary direction of work, in this area, would be to invert the problem by regarding the relationships of ideas in the mind de facto as principles of organization of the corresponding physical processes in the brain. The beginnings of such an approach can be found in the late 19th century work of Gustav Fechner, Ewald Hering and Josef Breuer. More on this profound topic later! – JT, June 2015)   

2. Water and the physical basis of "holistic" organization of living processes

A. In recent decades molecular biologists have accumulated an enormous wealth of detailed knowledge concerning the structure of biological molecules and chemical reactions in living cells. But the astonishing complexity of the activity underlying the existence of even the simplest living organism poses a big paradox: how is all this activity coordinated? In a single, metabolically active living cell, 100 000 or more different chemical reactions take place every second. Many of these reactions are part of complex metabolic cycles and processes of synthesis of proteins and other large molecules, which involve a large number of interconnected reaction steps. For each reaction step to occur, the participating molecules must arrive at the right place at the right time, and afterwards the products of the reaction must be transported to the necessary locations for the next reaction steps. The movement of molecules between reaction centers located in different places within the cell must be synchronized in some way. How is this huge complex of chemical processes and movements of molecules, taking place simultaneously in different locations in the cell, organized into a coherent whole?  Without some very precise and effective form of organization and control, the life of a cell would break down into total chaos. Understanding the "holistic" basis of organization of living processes would revolutionize biology and medicine, and provide a new basis for dealing with systemic problems of the human organism, including aging, degenerative diseases and cancer.

B. Present-day molecular biology has no satisfactory answer to this question. It is clear that the "holistic" organization of living cells is based on fundamentally different principles than the way Man uses computers and information systems to control technical processes. The living cell seems to organize itself spontaneously and in an extremely flexible, adaptive, non-mechanistic manner. There is now little doubt that a key role in the organization of the living cell is played by water -- a substance which might seem simple from a chemical point of view, but which is capable of an enormous number of distinct, organized physical states. The study of the organization of water and of coherent states in living cells is one of the most promising directions of research today. In this revolutionary area of biophysics, ideas of the great 20th century biologist Alexander Gurwitsch, concerning the existence of a so-called "biological field", are being revived in connection with modern developments in quantum field theory and biophotonics.


3. Control of gene expression and the paradox of "supercomplexity" of living systems

A. Practically all living processes are controlled by the activity of enzymes -- specialized protein molecules. It is well known that the basic plan for the synthesis of each protein used by a cell is encoded in the DNA molecules which constitute its genetic material -- its genes. The primary structure of a given protein is encoded in the form of specific sequences of structural elements in the DNA, called base pairs, according to a “genetic code” discovered in the 1960s. Since that time, the basic features of the entire process of synthesis of a protein, starting from a coded sequence in the DNA, have been intensively investigated. The "Human Genome Project", completed in 2003 in the U.S., identified each of the approximately 20 000 genes in the human genome and determined the exact base pair sequences of 99% of them, making it possible to construct a nearly complete “map” of the human genome.
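The logic of the genetic code described above can be sketched in a few lines: a lookup table maps each three-letter DNA codon to an amino acid, and a coding sequence is read codon by codon until a stop signal. Only a handful of entries from the standard codon table are shown here, and the sample sequence is invented purely for illustration:

```python
# A small excerpt of the standard genetic code (DNA coding-strand codons).
CODON_TABLE = {
    "ATG": "Met",  # methionine, also the usual "start" signal
    "GCT": "Ala", "GCC": "Ala",   # alanine
    "TGG": "Trp",                  # tryptophan
    "AAA": "Lys",                  # lysine
    "GGC": "Gly",                  # glycine
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read a DNA sequence three letters at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGGCTTGGAAATAA"))  # → ['Met', 'Ala', 'Trp', 'Lys']
```

The real cellular machinery (transcription to RNA, ribosomal translation, folding) is vastly more involved; this only illustrates the table-lookup character of the code itself.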

          Superficially, someone might conclude that we now know basically everything about the functioning of human cells, at least as far as the synthesis of proteins is concerned. But this is not true at all!  In a given cell at any given time only a very small portion of the genes are actually activated and "expressed" in the production of the corresponding proteins. To understand the behavior of living cells, it is essential to understand how and by what means specific sets of genes are "turned on" or "turned off", for example in the process of development of a human embryo. This is one of the most important problems in present-day biology and medicine. Solving it could one day give Man the capability to re-grow damaged organs within the body and perhaps even to overcome the aging process. 

B. Attempts to solve the problem of control of gene expression, using the methods and concepts of molecular biology, have led to a paradoxical situation. A number of mechanisms have been uncovered -- involving the binding of specific molecules at specific locations -- which either trigger or block specific steps in the process leading to the synthesis of a protein. But for each such mechanism a new question arises: what turns that mechanism on and off? The discovery of each mechanism thus leads to new mechanisms, and so on, without coming to an end! Meanwhile, it has become increasingly clear that the activation and expression of genes is a global process, which generally involves a large number of other genes and depends upon the entire spatial configuration and environment of the DNA molecule. This is especially the case for the eukaryotic cells of higher organisms. Thus, new methods and ideas are required which go outside the domain of molecular biology and may even require a new type of geometrical language to be invented.


4. Beyond genetic engineering -- from Darwinian to Vernadskian evolution

A. There is no doubt that the present biosphere -- with the species of plants, animals and other organisms we find in it now -- is the product of a very long process of development, going back many hundreds of millions of years; that at different geological times different sets of species of living organisms inhabited the Earth, and that later species tend to be related to earlier ones by certain lines of development. These general conclusions are supported by physical evidence, above all the so-called "fossil record" studied by paleontologists.

          Justifiable doubts remain, however, concerning the validity of the commonly accepted theory of evolution as (essentially) the cumulative result of random genetic mutations combined with "natural selection" of the most viable variants through the competitive struggle for survival.

          Among other things, the fossil record contains a multitude of "gaps" or "jumps" in the series of species, where a new species seems to have appeared suddenly, without any traces of intermediate stages. In some cases the emergence of a new species is connected with a new type of organ, which only functions when all its parts exist together, and therefore could not easily be created by a gradual process. Furthermore, although laboratory experiments with bacteria and other rapidly-multiplying microorganisms demonstrate the existence of a phenomenon of genetic adaptation or "micro-evolution", the emergence of a truly new species has not been clearly demonstrated in laboratory experiments.

          For these and many other reasons, many scientists doubt Darwin's theory in its original form. A number of revised "neo-Darwinian" theories have been proposed, but it is most likely that the truth lies in a very different direction.

          The question of how new species actually can arise has important practical consequences for Mankind today. One is the danger of the sudden appearance of new infectious diseases. Another is the possibility of finding more natural alternatives to the present methods of "genetic engineering" of organisms, which have great limitations (and, perhaps, dangers) due to inadequate understanding of the dynamic coupling between changes in the physical state of the genetic material (the chromosomes), and changes in the environment in which organisms exist.

B.  Darwin's concepts of "natural selection" and of the "struggle for survival among the species" were strongly influenced by political ideas in England at his time that did not come from scientific work per se. Since Vladimir Vernadsky's founding of the scientific study of the biosphere, it has become clear, for example, that the dominant relationship between living organisms in the biosphere is cooperative, not competitive. The species of organisms are interdependent; each belongs to a complex ecological system involving many other types of organisms, without which it could not exist. As a result, the emergence of a new species cannot be an isolated phenomenon, but most likely corresponds to a phase change in the functioning of the ecological system as a whole. Furthermore, Darwin did not take into account the crucial fact, emphasized by Vernadsky, that living matter actively transforms its environment, and that the environment of the biosphere has constantly changed in the course of evolution. These considerations suggest a very different approach to the problem of evolution, starting from the biosphere as a whole, rather than individual species. A fundamental problem, in this context, is to discover the language of communication between the different species and the biosphere. That language would be a dynamic one, and very different from the static "genetic code" of molecular biology; it relates to the biophysical (including electromagnetic) regulation of gene expression among species in an ecosystem. These and other ideas, being discussed at present, could lead to a breakthrough in understanding not only evolution, but also how our present biosphere is organized.


5. What is a chemical bond?

A. The problem of the nature of chemical bonding between atoms has occupied many generations of chemists and physicists, going back many centuries, and is still not resolved today. It is the central problem of physical chemistry and one of the most important questions in natural science. For a long time, chemists developed their own theories of binding and valence based on experience and intuition, without a formalized basis in mathematical physics. With the emergence of quantum mechanics in the first decades of the 20th century, many scientists expected that questions about the behavior of atoms and molecules, including chemical bonds, would sooner or later become solvable by mathematical methods based on Schrödinger's wave function. Many believed that chemistry would become essentially a branch of physics, and cease to be an independent discipline. In the meantime, however, strong doubts have arisen about the ability of existing quantum mechanics to fully elucidate the fundamental nature of chemical bonds, chemical reactions and phenomena such as catalysis. Here the discovery of new fundamental ideas and principles could lead to a revolution in chemistry as well as physics itself.

B. From a conventional physical standpoint, nearly all the problems in chemistry constitute "many-body problems" involving the interaction of many physical entities (electrons, nuclei). As is well known, even in the simple case of three bodies interacting according to the laws of Newtonian mechanics -- the famous "three-body problem" -- the mathematical equations cannot be solved in closed form. The corresponding quantum mechanical equations are even more complicated. In the cases of interest for chemistry, where many electrons can be involved, a strict mathematical solution is out of the question, and numerical solutions are generally beyond the reach of even the most powerful computers. The full equations are thus never used in practice. Instead, various approximations, simplifications and heuristic models have been introduced, which depend on assumptions and parameters derived from experience. As a result, it is difficult to rigorously compare the predictions of quantum theory with the real behavior of chemical compounds. Meanwhile, most of the teaching and practice of chemistry today is based on two competing approximate models -- the so-called molecular orbital (MO) model and the valence bond (VB) model -- which are logically incompatible and in many cases do not give the same predictions. Today there is a growing number of scientists who believe that present-day quantum theory is in principle incapable of accounting for the reality of the chemical bond, and even leads to incorrect conclusions. There is reason to expect that new breakthroughs in understanding the nature of the chemical bond will come from chemists and biologists developing new ideas, independently of mathematical physics. In this context, chemistry may recover its earlier role as an independent scientific discipline. On the other hand, understanding the amazingly "intelligent" behavior of biological molecules will require ideas going beyond the framework of chemistry per se.


6. Supramolecular chemistry and the "sociology of matter"

A. Classical chemistry is mainly concerned with chemical reactions and the compounds which are formed as a result of such reactions. However, atoms and molecules can interact with each other without a reaction taking place. In fact, the dynamic properties of chemical reactions, the function of catalysts and many other important questions concerning chemical reactions, depend on the behavior of atoms and molecules when they are separated by a significant distance, before an actual reaction takes place. The properties of water and many other materials also depend on such non-binding interactions. In living cells an essential role is played by associations of molecules which are not rigidly bound together, but nevertheless behave in a coordinated, collective manner. The significance of “non-equilibrium molecular constellations” in living systems was pointed out by the great Soviet biologist Alexander Gurwitsch already in the 1920s. The distant, non-binding interactions between molecular groups determine the geometrical configurations of large biological molecules and play an essential role in the process by which biological molecules “recognize” each other. All of these questions have great practical importance today, including in the rapidly-growing new field of nanotechnology. 

B. Modern chemistry has a highly developed language and analytical-conceptual apparatus for describing chemical compounds and reactions, but lacks general principles that apply to more general interactions between atoms and molecules. It is no longer possible to treat interactions of molecules in an isolated manner, as classical chemistry does. In the cases of interest, such as in biology for example, the behavior of molecules depends on the entire environment in which the molecules are located; and conversely, the presence of each molecule changes the environment for other molecules. Thus, scientists speak of “supramolecular chemistry” which must consider a next-higher level of organization of material processes beyond the molecular level per se. Supramolecular chemistry also concerns the behavior of not just two or three, but aggregates of many atoms and molecules. It is thus part of a future “sociology of matter” which should discover general laws of “social behavior” of material systems. 


7. The puzzle of high-temperature superconductivity – how are electric currents organized?

A. A century ago, in 1911, the Dutch physicist Heike Kamerlingh Onnes discovered that when certain materials are cooled down to temperatures near so-called absolute zero (0 K = -273.15 °C), their resistance to electrical current disappears. An electric current set up in a ring of superconducting material continues to flow by itself, essentially indefinitely, with no loss of strength. Superconductors are used in a variety of technical devices today, such as the powerful magnets used in nuclear magnetic resonance (NMR) imaging devices in modern hospitals. Superconducting magnets provide a possible basis for extremely efficient magnetic levitation (maglev) transportation systems in the future. Also, superconducting cables might one day provide the means for transporting electrical energy without losses over long distances.

          A key question, both from the standpoint of understanding superconductivity and for its practical applications, concerns the temperatures at which this phenomenon occurs. Following the development of the theory of superconductivity by Ginzburg and Landau (1950), and Bardeen, Cooper and Schrieffer (1957), it was widely believed that the phenomenon of superconductivity would be limited to extremely low temperatures. Indeed, the theory seemed to predict that superconductivity could not exist at temperatures above about 30 K. However, in 1986 the physicists Bednorz and Müller, in the context of investigating the conductivity properties of ceramics, discovered a material (a lanthanum-based cuprate perovskite) that remained superconducting up to a temperature of 35 K. In the following period, there was a race among scientific groups around the world to obtain even higher "critical temperatures", using ceramics of similar structure to the Bednorz-Müller material. Already in 1987 a material was found that remained superconducting up to 92 K, which is well above the boiling point of nitrogen (77 K). This created an enormous sensation around the world, because it meant the possibility of using liquid nitrogen -- which is technically simple to produce and widely used commercially -- as a coolant for superconductors.

The race for higher temperatures continued, however, and some scientists are convinced that superconductivity is also possible at room temperature. The record up to now (October 2007) is still much lower: 138 K (-135 °C), in a ceramic material synthesized from thallium, mercury, copper, barium, calcium, strontium and oxygen.
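The temperatures quoted in this section are easy to check against one another with the standard conversion T(°C) = T(K) - 273.15. A small sketch (the labels simply tag the values mentioned above):

```python
def kelvin_to_celsius(kelvin: float) -> float:
    """Convert an absolute temperature in kelvin to degrees Celsius."""
    return kelvin - 273.15

milestones = [
    ("absolute zero", 0.0),
    ("theoretical limit once expected for superconductivity", 30.0),
    ("Bednorz-Mueller cuprate (1986)", 35.0),
    ("boiling point of liquid nitrogen", 77.0),
    ("cuprate superconductor of 1987", 92.0),
    ("record as of October 2007", 138.0),
]

for label, k in milestones:
    print(f"{label}: {k:g} K = {kelvin_to_celsius(k):.2f} °C")
```

Running this confirms, for instance, that 138 K corresponds to about -135 °C, as stated in the text, and that 92 K indeed lies above nitrogen's 77 K boiling point.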

          Unfortunately, the presently-existing high-temperature superconductors have a number of serious disadvantages, which have so-far prevented them from being widely used: limitations on the maximum currents these materials can carry without the superconductivity breaking down, poor mechanical properties, very high cost and difficulty of manufacture etc.

          On the more fundamental level, the essential problem is that there is no satisfactory theory and explanation for the phenomena observed in the new "high-temperature superconductors". The search for better materials has to rely to a large extent on intuition and tedious "trial-and-error" methods. Despite enormous efforts and a huge flood of research papers, progress has tended to slow down, and many researchers have become frustrated and moved into other areas. Clearly, some fundamentally new ideas are needed. A real breakthrough in this area would lead to a major scientific and technological revolution.

B. The present difficulties in the area of high-temperature superconductors, and in a number of other areas, are clearly connected with the fact that science still does not have an adequate answer to the question: "What is an electric current?" The old picture of point-like electrons moving through a crystal lattice has long since been replaced by a mathematical theory based on electron waves (Schrödinger functions). However, it is clear that high-temperature superconductivity involves a collective, self-organizing process within the material, which cannot be fully accounted for by the present conceptions of solid-state physics. Perhaps the most fruitful analogies will be found in certain physico-chemical processes in living cells, which take place at very high efficiencies and appear to be "decoupled" from the (supposedly) random thermal motion of molecules. It has even been proposed that future high-temperature superconductors might be developed on the basis of biological materials.


8. Magnetic isotope effects - a secret of living organisms?

A. Most of the chemical elements in living organisms occur as mixtures of two or more stable or long-lived isotopes, together with small amounts of radioactive isotopes. Naturally-occurring carbon, for example, consists nearly entirely of two stable isotopes, C-12 and C-13; magnesium of three: Mg-24, Mg-25 and Mg-26; and so on. Although the chemical properties of these isotopes are very nearly identical, some scientists have suspected that they might play different roles in living processes. For a long time, experiments designed to discover the biological properties of stable, non-radioactive isotopes produced disappointing results. But today there is evidence that the magnetic characteristics of specific atomic nuclei -- the so-called magnetic moments, which differ strongly among different isotopes of one and the same element -- may play an essential role in living processes. The magnetic interactions between nuclei and their surroundings are exploited on a routine basis in nuclear magnetic resonance (NMR) imaging, used in modern hospitals, as well as in an important chemical analytical method known as nuclear magnetic resonance spectroscopy; but their significance for the phenomenon of life itself is only beginning to be grasped.

          In recent years, decisive evidence has been found showing that isotope-specific nuclear magnetic interactions play an essential role in living processes. In 2005, for example, a research group led by Prof. Anatoly Buchachenko at the N.N. Semenov Institute for Chemical Physics of the Russian Academy of Sciences, demonstrated the existence of a powerful “magnetic isotope effect” in living systems. Buchachenko and his collaborators studied certain enzymes that are involved in the synthesis of the important biological molecule ATP in living cells, and which contain an ion of magnesium in a critical position. They found that the reaction rates depend strongly on which of the isotopes of magnesium, Mg-25 or Mg-26, is present in the enzyme. In the case of Mg-25, which has a non-zero magnetic moment, the reaction occurs three times faster than for the isotope Mg-26, whose magnetic moment is zero. Evidently, the magnetic field of the magnesium nucleus contributes to the catalytic activity of the enzyme.

B. Mankind stands on the threshold of revolutionary developments in biology and medicine, connected with understanding how the distinction between living and nonliving processes expresses itself on the subatomic, nuclear level. While we cannot predict the exact form this revolution will take, it is nearly certain that it will be connected with a better understanding of the specific role of isotopes in living organisms. This question is related to another famous problem: the extraordinary sensitivity of living processes to constant and varying magnetic fields, which has been known for a long time and forms an entire field of research called “magnetobiology” or “biomagnetism.” But despite many investigations, the nature of these effects, and their possible relation to nuclear magnetism, has not been fully clarified. Part of the reason is the seemingly “infinitesimal” magnitude of the “nuclear component” of the magnetic fields in living and nonliving material. The magnetic moments of nuclei are 1,000 or more times weaker than those associated with the electrons and their orbital configurations in molecules. But as science over the centuries has demonstrated again and again, it is often the weakest effects -- the ones that tend to be ignored -- that actually control the largest ones.
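The "1,000 or more times weaker" figure can be checked directly: the natural unit of nuclear magnetism (the nuclear magneton) differs from that of electron magnetism (the Bohr magneton) by exactly the proton-to-electron mass ratio. A quick numerical sketch using published CODATA mass values:

```python
# CODATA 2018 recommended values, in kilograms.
ELECTRON_MASS = 9.1093837015e-31
PROTON_MASS = 1.67262192369e-27

# Bohr magneton mu_B = e*hbar/(2*m_e); nuclear magneton mu_N = e*hbar/(2*m_p).
# Their ratio therefore reduces to the mass ratio m_p/m_e.
ratio = PROTON_MASS / ELECTRON_MASS
print(f"Bohr magneton / nuclear magneton = m_p/m_e ≈ {ratio:.1f}")
```

The result, roughly 1836, shows that nuclear magnetic moments are indeed three orders of magnitude smaller than electronic ones, consistent with the estimate in the text.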

          A breakthrough in this area could lead to a qualitative transformation in the uses of isotopes, not only in biology and medicine, but also in agriculture and the management of the biosphere as a whole. It is quite conceivable that by altering and controlling the isotopic composition of plant, animal, and human nutrition in certain ways, mankind could obtain a variety of beneficial effects.


10. Changing the "constants" of radioactive decay

A. From the beginning of research into the phenomenon of radioactivity, over 100 years ago, one of the most surprising features was the apparent constancy of the rate of decay of radioactive substances -- commonly expressed in terms of their so-called "half-life". Many experiments, carried out during the 20th century, appeared to demonstrate that the half-life of a radioactive isotope did not change significantly when the substance was heated to high temperatures, subjected to electric currents, magnetic fields, mechanical forces and chemical influences. As a result, the half-life of a given isotope came to be regarded as a kind of constant of Nature, and is commonly used as a “signature” to identify isotopes, for geological dating and other purposes. The prejudice remains, even among professionals today, that radioactive decay processes are practically beyond human control, except by bombarding nuclei with particles from high-energy accelerators or nuclear reactors.

          The apparent constancy of radioactive decay has many practical consequences. For example, what to do with the radioactive waste from nuclear reactors, which contains isotopes whose half-lives are thousands of years or more? Until recently, the only known way to change the radioactivity of an isotope was to transmute it into another isotope by means of neutrons or high-energy particles from a reactor (particularly a so-called fast neutron reactor) or particle accelerator. Under certain conditions this process leads to short-lived isotopes which quickly decay into stable nuclei. Fully eliminating radioactive waste by such methods, although feasible in principle, would be extremely difficult and expensive in practice.

          However, recent experiments have shown that the half-life of isotopes can be changed by much "softer" means, by changing the physical environment surrounding the nucleus. For example, the normal half-life of the rhenium isotope Re-187 is over 40 billion years, but when atoms of Re-187 are fully ionized (all electrons removed), the half-life of their nuclei is reduced more than a billion times, to less than 33 years! Smaller, but still easily measurable decreases in radioactive half-lives have been obtained by even “softer” means than ionization: for example, by embedding beryllium-7 atoms in so-called fullerenes (cage-shaped complexes of carbon atoms), and just recently, by embedding sodium-22 in palladium metal, afterwards cooled to a temperature of 12 K. These results imply that radioactivity -- commonly regarded as an autarchical property of the atomic nucleus -- in fact depends strongly on the environment in which the nucleus is located.
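The quantitative meaning of these half-lives follows from the standard exponential decay law: the fraction of a sample remaining after time t is (1/2)^(t/T), where T is the half-life. A small sketch comparing normal and fully ionized Re-187 (the value of 42 billion years below is an illustrative figure consistent with the "over 40 billion years" stated above, not a precise measurement):

```python
def fraction_remaining(t_years: float, half_life_years: float) -> float:
    """Fraction of a radioactive sample left after t, by the decay law (1/2)**(t/T)."""
    return 0.5 ** (t_years / half_life_years)

NORMAL_HALF_LIFE = 42e9   # years, illustrative value for neutral Re-187
IONIZED_HALF_LIFE = 33.0  # years, fully ionized Re-187 (per the experiments above)

# Over a century, neutral Re-187 barely decays at all...
print(f"neutral Re-187 after 100 years: {fraction_remaining(100, NORMAL_HALF_LIFE):.10f}")

# ...while fully ionized Re-187 is mostly gone.
print(f"ionized Re-187 after 100 years: {fraction_remaining(100, IONIZED_HALF_LIFE):.3f}")

# The speed-up factor quoted in the text ("more than a billion times"):
print(f"half-life reduction factor: {NORMAL_HALF_LIFE / IONIZED_HALF_LIFE:.2e}")
```

The computed reduction factor, on the order of 10^9, matches the "more than a billion times" figure in the text.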

B. The teaching and practice of nuclear physics often transmit prejudices that were introduced very early into this field. Among these is the idea that the processes “inside” the atomic nucleus constitute a categorically separate world, governed by short-range “strong forces” whose intensity is so enormous, that the nucleus is essentially closed off to the outside environment except for “high-energy” events. Recent experiments and theoretical investigations suggest a very different -- in a certain sense exactly opposite -- conclusion: The atomic nucleus is actually very strongly coupled to its environment. The nucleus "feels" and reacts to everything occurring around it, even when that reaction is not immediately detectable in the form of high-energy radiation. Experimental success in changing the rate of radioactive decay by manipulating the environment of atomic nuclei is a first step toward mastering the exquisitely "fine tuning" of nuclear processes. But new fundamental ideas are needed concerning the nature of that "tuning" and the organization of the atomic nucleus in general. There is reason to expect that radioactive decay is not, as presently believed, a random event, but rather a strictly organized process, which Man in the future will learn to influence and control by technological means.   


11. Where do raw materials come from?  Self-organization and the generation of oil, gas and mineral deposits in the Earth

A. In order to support the expanding consumption of energy and raw materials in the world economy, it is necessary to constantly improve the methods for discovering and utilizing mineral deposits in the Earth's crust. The ability to detect and locate new deposits depends in turn on scientific knowledge concerning the process by which such deposits were generated in the course of the Earth's geological evolution, up to the present time. In recent years some important new scientific results and ideas have emerged in the area of resource-genesis, which call into question traditional theories about the origin of important minerals and about geological processes in general. For example, traditional theory asserts that oil deposits originate in biological material -- biomass, mainly plant life -- which was trapped and accumulated in upper layers of the Earth and chemically transformed over very long periods. But now there is evidence for a different hypothesis: that at least part of the existing reserves of oil did not originate in the transformation of biological material; instead, oil is continually being generated within the Earth by abiotic processes. A further example is recent studies concerning the origin and means of detection of so-called super-rich mineral deposits around the world, as exemplified by the famous "iron mountain" in Brazil, whose existence is often difficult to explain. A fundamental problem of geology, which remains unresolved today, concerns the origin and form of the energy required for the formation of mineral deposits.

B. The main direct sources of energy for the formation of mineral deposits in the Earth, which have been considered in geological theories until now, are: gravitational energy, heat, mechanical pressure, and to a limited extent the energy of chemical reactions. The possible role of vibrational energy -- for example the so-called microseismic oscillations -- as well as the energy of electric and magnetic fields, has hardly been considered. There is evidence that in many cases, mineral deposits are created by self-organizing processes including "mass migrations" of atoms and ions that concentrate themselves in certain critical zones.


14. Beyond quantum physics -- What is the origin of quantization of the micro- and macroworld?

A. The last real revolution in the foundations of physics began a century ago, with the discovery by Max Planck, that the exchange of energy between atoms and the surrounding electromagnetic field does not occur continuously, but rather in discrete "packets" or quanta of energy. In the decades following Planck's discovery, beginning with the work of Bohr, Sommerfeld, Heisenberg, Schrödinger, Dirac and others, an entirely new mathematical framework was developed for physics, which became known as quantum mechanics and quantum field theory. Despite an enormous development of quantum physics and its technological applications, until today nobody has really explained why the exchange of energy occurs in a discontinuous way, and why microscopic objects such as atoms appear to exist only in a discrete ("quantized") array of possible energy states. Discovering a real explanation would revolutionize physics and transform practically all of natural science.

B. In present quantum physical theories the existence of Planck's quantum of action is simply introduced as a postulate, without explanation, and little or nothing is said about its origin and physical nature. In the 1920s and 30s Niels Bohr and some others claimed that quantum mechanics was in some sense a "final" theory. But a growing number of physicists today are convinced that there exists a more fundamental level of physical reality, which will make it possible to understand the phenomenon of quantization and other paradoxical features of microphysical objects.

          One reason for this expectation is the fact that quantized behavior can be observed not only on the microscopic scale of atoms, but also, in a certain way, in many macroscopic systems: in the arrangement of planetary orbits in our solar system (Titius-Bode law), in the arrangement of moons around the large planets, in other astronomical objects and in living organisms generally. In these cases, quantization is a natural result of the interactions in the system, rather than a rigid "rule" of the sort assumed by present quantum theory.
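The Titius-Bode law mentioned above is a simple numerical rule: the mean orbital radius of the n-th planet is approximately a = 0.4 + 0.3 · 2ⁿ astronomical units. A short sketch comparing the rule with the observed orbits (rounded values in AU):

```python
# Titius-Bode rule: a_n = 0.4 + 0.3 * 2**n astronomical units,
# where Mercury is conventionally assigned the limiting value 0.4
# (n -> -infinity) and Venus, Earth, ... take n = 0, 1, 2, ...
def titius_bode(n):
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

# Observed mean orbital radii in AU (rounded), for comparison.
observed = [
    ("Mercury", None, 0.39),
    ("Venus",   0,    0.72),
    ("Earth",   1,    1.00),
    ("Mars",    2,    1.52),
    ("Ceres",   3,    2.77),   # the asteroid belt fills the "missing" slot
    ("Jupiter", 4,    5.20),
    ("Saturn",  5,    9.54),
    ("Uranus",  6,    19.2),
]

for name, n, a_obs in observed:
    print(f"{name:8s} predicted {titius_bode(n):6.2f} AU   observed {a_obs:6.2f} AU")
```

Neptune (observed at about 30.1 AU versus a predicted 38.8 AU) notably breaks the pattern, which is one reason the status of the rule -- coincidence or consequence of the system's dynamics -- remains a subject of investigation.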

          For example, in the late 1960s two physics students at Moscow University, Daniel and Yakov Doubochinski, demonstrated how nonlinear interactions between two or more oscillating systems can give rise to discrete, quantized series of stable modes of oscillation in the coupled system -- a phenomenon they called the "Macroscopic Quantization Effect", and which can be demonstrated in some very simple electromechanical devices such as a pendulum interacting with an inhomogeneous oscillating magnetic field. The behavior of Doubochinski's pendulum with its quantized amplitudes appears very similar to that of atoms and their quantized energy states. The possible relationship between these phenomena is a subject for investigation today.

          Another, related approach to discovering the physical origin of quantization goes back to the great German physical chemist Walter Nernst. Nernst suggested that what physicists call a "vacuum" is not really empty at all, but is filled with electromagnetic radiation with a huge range of wavelengths and frequencies; and that the quantum properties of atoms are caused by their interactions with this "vacuum field". Today the study of the physical structure of the vacuum has grown into a major branch of modern quantum physics, called Quantum Field Theory. So far, however, Nernst's original program for investigating the physical cause of the quantum itself has not been carried out.

          Another interesting direction of investigation goes back to Louis de Broglie, one of the founders of modern quantum physics, and the physicist David Bohm. De Broglie and Bohm proposed to combine the notions of wave and particle into a single, unified picture of particles and other microphysical objects.

          In recent years there has been increasing interest in these and other approaches to developing a new fundamental physical theory going beyond present-day quantum mechanics and quantum field theory. A common feature of nearly all of these approaches is that they require a new conception of interaction between physical objects, which is fundamentally different from the Newtonian idea of "force". Thus, we can speak of the emergence of a new "Physics of Interaction".

          At the same time experimental technology has progressed greatly, making it possible to conduct direct experiments on single atoms and their interaction with radiation in a manner which was completely impossible at the time when quantum mechanics was first developed. Thus, the conditions are ripe for breakthroughs. 

15. Does time have a structure?

A.      Many of the most important problems in contemporary science involve either extremely short or extremely long intervals of time.

On the one side, the development of femtosecond lasers, able to generate light pulses shorter than 10^-12 seconds, has made it possible to study so-called "ultrafast" processes such as changes in the shape of individual molecules, which play a crucial role in chemistry, molecular biology, biophysics and other areas. It is also found that femtosecond light pulses interact with matter in a completely different way than the much longer pulses generated by conventional light sources. This leads to interesting technological applications. For example, femtosecond lasers can be used to cut holes into high-explosive materials without danger of triggering an explosion, because the time needed to generate heat is much longer than the laser pulse. (In a sense, one might even say that heat no longer exists on the scale of a femtosecond or less!)
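An order-of-magnitude sketch of why heat "does not exist" on the femtosecond scale: heat spreads diffusively, covering a distance of roughly L = √(D·t) in time t, where D is the thermal diffusivity. The value D ≈ 10⁻⁴ m²/s used below is an illustrative figure typical of metals, not a property of any specific explosive material:

```python
import math

# Diffusive spreading of heat: L = sqrt(D * t).
D = 1e-4   # m^2/s, assumed thermal diffusivity (typical of a metal; illustrative)

for t, label in [(100e-15, "100 fs pulse"), (1e-9, "1 ns pulse"), (1e-6, "1 us pulse")]:
    L = math.sqrt(D * t)
    print(f"{label:12s}: heat spreads ~{L * 1e9:10.1f} nm")
```

During a 100-femtosecond pulse, heat spreads only a few nanometers -- far less than the size of the heated spot -- so the energy is deposited and the material removed before any thermal wave can reach the surrounding explosive.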

At the opposite extreme of time-scales, the methods of modern astrophysics, cosmology, geology and evolution research have made it possible to attain scientific knowledge concerning processes that unfold over hundreds of thousands, millions and even billions of years. These geological, evolutionary and cosmological processes are qualitatively different from processes occurring on the time-scale of everyday life.

Investigations in many areas have suggested the idea that each time-scale has its own specific characteristics, and that processes occurring on different scales of space and time have a qualitatively different character and cannot be simply compared with each other. The possibility must also be considered that different types of physical systems -- for example, living organisms, nonliving matter, geological processes, large systems such as galaxies, etc. -- might each have its own specific form of "time". The question arises whether time itself might have a complex structure, and if so, how that structure could be scientifically studied and determined by experiment and other means.

B.      The historical development of physical science until today has been strongly influenced by the conception of "absolute time" put forward by Isaac Newton at the end of the 17th century. According to that conception, time exists independently of matter and physical processes, and different moments and intervals of time, considered by themselves, are essentially indistinguishable from each other. This view is expressed in mathematical form by the idea of a time axis as a straight line without any distinguishing features, running infinitely backwards and forwards from any point.

In his special relativity theory of 1905 Albert Einstein criticized Newton’s conceptions of absolute space and time. Einstein argued that the characteristics of the propagation of light and other electrodynamic processes imply that physical space-time must have a kind of structure, in which (for example) the results of measurement of time depend upon the relative state of motion of observers.

The great naturalist Vladimir Vernadsky went much further than Einstein, in a sense. Vernadsky proposed that the real space-time of physical processes must have a complex structure, and in particular that it is necessary to distinguish between different species of time, such as geological time and biological time. Vernadsky was unfortunately not able to fully elaborate his ideas before his death in 1945, but since then they have attracted the attention of a number of leading scientists and inspired interesting developments which are continuing today.

Determining the structure of time could lead to a fundamental revolution in science and technology.


16. Are there general laws of technological progress? Development of a science of physical economy

A.      Today there is much discussion about the fact that a modern economy needs to have a high rate of innovation – especially technological innovations – in order to remain competitive. Actually, innovation is necessary for the future of human society generally. Scientific and technological innovations are needed in order to open up new sources of energy and raw materials, to increase the efficiency of production of food and manufactured products, to provide medical defenses against disease and to solve many other problems connected with human existence on this planet.

          But the necessity of innovation poses a fundamental paradox: Innovations – for example, scientific discoveries – are creative events, and creative events by their nature can never be exactly predicted or planned. Some people even think that creativity is a kind of random, accidental or chaotic process. But if innovations cannot be planned or predicted, then how can we be sure that they will occur? How can we decide, for example, how and where to invest limited resources and money, in order to obtain the best results? 

          This is a serious problem for governments and for private entrepreneurs, because research and development of new technologies often require large investments that have to be made many years in advance without being able to predict what the results will be and when they will be achieved. Even after a new technology has been invented and developed the problems remain, how to predict the economic benefits of introducing the new technology on a large scale, how to choose between alternative technologies etc. Developing truly scientific approaches to making such decisions is especially important in the case of large-scale infrastructure investments such as investments into the national energy and transport systems, where the introduction of new technologies requires enormous resources and has long-term effects on the entire economic process.

B.       The history of great national technological projects such as the atomic weapon programs and space programs of the United States and the Soviet Union, as well as some programs in the private sector, shows that under certain conditions it is possible to realize a large number of technological innovations in a short time. The experience of organizing and leading such projects, as well as the experience of great periods of scientific and technological progress in human history, suggests that human creativity is not so chaotic and unpredictable as it might appear at first sight. 

In his book "Scientific Thought as a Planetary Phenomenon" (Научная мысль как планетное явление) the great naturalist Vladimir Vernadsky argued that human creative reason, acting through the process of scientific and technological progress and the human economic activity connected with it, constitutes a geological force transforming the biosphere into a new stage of development -- the Noosphere. In this sense human creativity and its effects on the Earth must be regarded as a natural phenomenon reflecting the general laws of development of the Cosmos. Although Vernadsky did not elaborate this concept into an economic theory, his conception of the Noosphere implies the possibility of making scientific forecasts concerning technological progress and its economic and environmental effects.

The famous scientist Pobisk Kuznetsov, one of the organizers of complex technological programs in the Soviet Union, proposed a method for analyzing economic processes by means of physically measurable parameters. In collaboration with the famous Soviet aircraft designer and scientist Bartini, Kuznetsov developed the notion that science and technology evolve according to a "Periodic Table of Physical Magnitudes", progressing step-by-step from simple magnitudes such as length and time, toward more compound magnitudes used in electrodynamics, quantum physics and nuclear science. Kuznetsov applied the new system of units to the evaluation of transport systems.

The American economist Lyndon LaRouche put forward the conceptual basis for a new "Science of Physical Economy", in which scientific and technological progress is measured in terms of the increase in the maximum size of the human population which can be sustained per unit area of the Earth's surface. LaRouche affirms that human creativity is governed by scientific principles, and that these principles can be mastered and utilized for the design of long-term investment policies. He showed how the evolution of technology and its effect on economic productivity can be measured in terms of increases in the energy flux density in production and infrastructure processes.

Elaborating these approaches into a fully-developed scientific discipline of Physical Economy, making it possible to forecast the process of technological innovation and to make reliable decisions concerning long-term investments into infrastructure and other areas, is a major challenge for the first decades of the 21st century.


17. Revolutionary new types of aircraft, plasma and interactive aerodynamics

A. The development of air transport in the 21st century will require new types of aircraft, including aircraft designed on the basis of new physical principles. The requirements include the following: 1) the need to drastically reduce the travel times for rapidly-growing long-range passenger travel between the world's continents demands the development of new generations of supersonic and even hypersonic aircraft, able to operate with much lower costs and lower noise levels than the first generation of supersonic passenger transport aircraft built in the 1970s and 1980s (Concorde and Tu-144). Hypersonic aircraft technology is also needed for future orbital transport systems that can access Earth orbit starting from a normal airport; 2) the need to provide air transport to very densely-populated areas in Asia and elsewhere, where there is no space for the construction of very large airports, requires new types of large-sized aircraft that are able to land safely on short runways and have very low noise levels -- possibly even commercial aircraft that can take off and land vertically.
Drastically reducing the speeds at which aircraft can take off and land is also desirable for safety reasons, since 80% of accidents occur during or near the landing and takeoff phases; 3) the rapidly growing demand for long-range air freight transport requires development of specialized aircraft able to transport very heavy cargo loads efficiently over land and sea; 4) the need to drastically improve the efficiency of aircraft transport in general, reducing drag and other energy losses, as well as the noise levels generated; 5) new aircraft propulsion systems of various types must be developed, including engines using hydrogen and other non-kerosene fuels as well as possibly various types of ramjets, plasma-based (MHD) propulsion systems, electric-powered engines, and even nuclear-based engines; 6) possible use of air-cushion systems and even magnetic levitation in the place of wheels for aircraft landing and takeoff; 7) anti-gravity systems (today a kind of dream, but some day a reality?).


B. Despite enormous technical improvements, the design of present-day commercial aircraft is based on the same fundamental principle as the first airplanes a century ago: generating aerodynamic lift by the motion of an essentially fixed-profile body through the air. This principle belongs to classical aerodynamics, in which the properties of the air medium can be regarded as relatively constant; in which there is no active transformation of the air flow, and long-range electromagnetic forces can essentially be neglected. But the revolutionary aircraft of the 21st century will almost certainly be based on a new way of thinking about the interaction between the vehicle and the medium within which it travels. More and more, these vehicles will actively organize and transform the medium surrounding them. Developments in this direction have already begun. For example:

i.) The experimental aircraft EKIP, designed by the late Prof. L.N. Schukin, actively manipulates the air flow around the body of the vehicle using a system of suction ducts, thereby making it possible to greatly increase the lift of the aircraft and give it other desirable characteristics. Also, the EKIP uses an air cushion instead of conventional landing gear. It can be expected that the same basic principle will become widely used in many future aircraft designs.

ii.) Russian scientists and designers have demonstrated that the aerodynamic drag on an aircraft can be drastically reduced if the air flowing around it is transformed into the plasma state (partially ionized) by technical means. This is just one example of "plasma aerodynamics", which will become more and more important for aviation in the future.

iii) The fact that plasma flows can be manipulated by electric and magnetic fields opens up wide possibilities for new types of flying devices and propulsion systems based on electromagnetic interactions. This applies especially to future hypersonic aircraft which operate naturally in a partially ionized environment.

iv) In the more distant future, when Man has finally discovered the actual physical nature of gravitation, it may become possible to construct air and space vehicles based on anti-gravity technologies.


18. Nuclear space propulsion and principles of space exploration in the 21st century

A. Until now space travel has been nearly entirely based on propulsion systems (rocket engines) that use chemical fuels. Chemical propulsion systems, however, require large amounts of fuel, which greatly limits the range for manned space travel. The shortest travel time between Earth orbit and Mars which can be achieved using known chemical propulsion technologies is of the order of 5-10 months. This long travel time exposes the passengers to health dangers and prevents rapid rescue in case of emergencies. Also, to carry large amounts of fuel on space journeys is extremely expensive. In order to safely and economically sustain permanent manned colonies on Mars, it will be necessary to reduce the travel time between Earth orbit and Mars orbit to a couple of weeks at most, instead of months, and to greatly reduce the fuel requirements. Thus, the long-term future of manned space exploration will depend on developing new propulsion systems which are not based on chemical reactions.

B. The fundamental limitation of chemical propulsion systems lies in the limited amount of energy released per unit mass of the fuel. In the case of oxygen-hydrogen fuel, which is the most powerful combination known today, this value is about 13 MJ per kilogram of fuel. That limitation is connected with the fundamental nature of chemical reactions, which involve changes in the outer electron orbits of atoms and molecules, and therefore have relatively low energies on the order of a few electron volts or less. By contrast, nuclear reactions can generate more than a million times more energy per unit mass than chemical reactions! The fusion of two deuterium nuclei releases 4 million electron volts of energy, and the fission of a uranium nucleus about 200 million electron volts. For this reason, nuclear energy holds the key to manned space travel to Mars and beyond.
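The "million times" comparison is easy to check on the back of an envelope, using the figures quoted above (200 MeV per fissioned U-235 nucleus of mass about 235 u; 4 MeV per pair of deuterons, each about 2.014 u):

```python
# Energy released per kilogram of fuel: chemical vs. nuclear.
EV = 1.602176634e-19    # joules per electron volt
U  = 1.66053907e-27     # atomic mass unit, kg
c  = 2.998e8            # speed of light, m/s

chemical = 13e6                               # J/kg, oxygen-hydrogen (from the text)
fission  = 200e6 * EV / (235 * U)             # ~200 MeV per U-235 nucleus
fusion   = 4e6 * EV / (2 * 2.014 * U)         # ~4 MeV per pair of deuterons

print(f"chemical: {chemical:.1e} J/kg")
print(f"fission : {fission:.1e} J/kg  ({fission / chemical:.0e} x chemical)")
print(f"fusion  : {fusion:.1e} J/kg  ({fusion / chemical:.0e} x chemical)")

# Fraction of the fuel's rest mass converted to energy (E = m c^2):
print(f"fission converts about {fission / c**2 * 100:.2f}% of the fuel mass")
```

Fission yields roughly 8 × 10¹³ J/kg, several million times the chemical figure; yet this still corresponds to converting less than a tenth of a percent of the fuel's rest mass into energy, which is the point taken up again below in connection with antimatter.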

          The Soviet Union achieved considerable experience in the use of small nuclear fission reactors as power sources for satellites. Already in the 1960s the United States began working on a reactor-based propulsion system for a future Mars mission, and there was similar work in the Soviet Union. Today, Russian scientists are working on a number of possible designs for nuclear fission space propulsion, which could play an important role in manned space exploration in the coming decades.

          Fission reactors already exist as a highly-developed technology, and fission propulsion systems could in principle be built today. Fusion is still in an experimental stage, but once fully realized, may have a number of advantages compared to fission systems.

          The ideal solution would be a spacecraft with a constant acceleration equal to the acceleration of gravity, which would accelerate during the first half of the trip and decelerate during the second half. This would reduce the time for a trip from Earth to Mars to only a few days. However, even with a moderate payload (for example, 200 tons), the required propulsion system would have to generate more power than the total of all the world’s power stations today!
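The "few days" figure follows from elementary kinematics: accelerating at g over the first half of a distance d and decelerating over the second half, each half takes √(d/g), so the total trip time is t = 2√(d/g). The Earth-Mars distance varies widely; the 0.52 AU used below is an illustrative close-approach value:

```python
import math

g = 9.81          # m/s^2, constant 1-g acceleration
AU = 1.496e11     # meters per astronomical unit

def trip_days(distance_m, accel=g):
    """Accelerate over the first half, decelerate over the second half,
    starting and ending at rest: t = 2 * sqrt(d / a), converted to days."""
    return 2 * math.sqrt(distance_m / accel) / 86400

print(f"Mars at 0.52 AU (close approach): {trip_days(0.52 * AU):.1f} days")
print(f"Mars at 2.5 AU (near conjunction): {trip_days(2.5 * AU):.1f} days")
```

Even at the largest Earth-Mars separation, the one-way trip stays under a week, compared with 5-10 months for chemical propulsion.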

          As manned space exploration expands beyond Mars to the region of Jupiter and Saturn and – eventually – beyond the solar system, entirely new solutions to the propulsion problem will become necessary. Applying Einstein’s formula for the relationship between mass and energy, one can easily see that fission and fusion convert less than one percent of the mass of the fuel into energy. The obvious alternative would be to use matter-antimatter reactions, in which 100% of the mass is converted to energy. So far, however, scientists have only succeeded in producing infinitesimal quantities of antimatter in the laboratory.

          But even matter-antimatter reactions may become inadequate. In the future Man may have to find ways to utilize the enormous energy potential of the so-called vacuum fluctuations, or the curvature of space itself!