Understanding Begins with Acceptance of Reality as We Find It

In considering the existential questions that perplex humanity, we can do neither more nor less than take reality exactly as we find it. We must take account of the universe and the laws of physics that govern it, as expressed in the universal language of mathematics. All that we can know about reality with empirical validity derives from observation, hypothesis and experiment.

There is broad scientific consensus that approximately 13.7 billion years ago the universe, consisting of space-time together with all the matter and energy that exists, sprang into being in a cosmogenesis event known as the Big Bang. At that moment space and time, and all matter and energy, acquired the reality of existence in an infinitely condensed form or state, which has been evolving according to physical law ever since. Space-time is the four-dimensional reality, three dimensions of space and one of time, in which the universe evolves and in which everything we can directly observe exists. The notion of space-time plays a major role in Einstein’s theories of Special and General Relativity, but the idea of four-dimensional space-time was known and developed by notable mathematicians and physicists before him.

The first manifestation of the Big Bang is referred to as a singularity, which is characterized by the Theory of General Relativity as a locus of matter and energy approximating infinite density in an infinitely small volume. In fact, however, Einstein’s equations of General Relativity break down and do not adequately describe the state of the universe in its first moment. For this, a more comprehensive theory that is consistent with both General Relativity and Quantum Mechanics is required. According to the current understanding of Quantum Mechanics, however, the initial volume of the universe cannot have had a diameter smaller than the Planck length (1.61619926 × 10⁻³⁵ meters). How such a massively dense and energetic entity first appears is still unknown.
Black Holes, formed by the gravitational collapse of stars of sufficient mass, also harbor singularities and are subject to the same uncertainty regarding their actual size and density. In any case, it is clear that space has been expanding since the Big Bang. A special aspect of this expansion, called cosmic inflation, will be examined in more detail in Chap. 4.

The laws of physics, which comprise all the rules by which the universe unfolds, were operative at the first instant of the Big Bang, and perhaps even before, if our universe sprang into existence from one that already existed, as postulated by the theory of the multiverse explained in Chap. 4. The consensus view of modern physics and cosmology is that these laws supervene everywhere for all time to govern, prescribe and describe reality. Neither the existence of the laws of physics, nor the effectiveness of mathematics as a language that accurately expresses those laws, has been explained. The ability of mathematics to discover new physical truths from symbolic operations on mathematical objects is likewise without explanation. So lacking in explanation, and so remarkable, is the effectiveness of mathematics in this regard that Nobel Prize-winning physicist Eugene Wigner famously called it unreasonable, as explained in Chap. 2.

Does the existence of a coherent structure of physical law that operates on a universal scale for all time, together with the existence of a coherent and logical mathematical language that expresses those laws and allows humanity to discover new ones, suggest an underlying unity or plan for the structure and workings of the universe? This is essentially the question of design and, rather than grapple with it now, it is more productive at this point to side-step it and ask a related question.Footnote 1 Does the pervasive operation of physical law throughout space and time, together with the universal relevance and coherence of mathematics, suggest that everything is connected in some way in the deep structure of physical reality? That is, does reality have a unitary nature? Are all things connected, as all truths about those things are likewise connected? This is an important question: it asks whether reality is as connected and coherent as the consilient truth that can be known about it. Many physicists believe that the question can be answered, and indeed has been answered substantially, in the affirmative on the basis of results obtained using the methods of rational empirical science. Quantum physics demonstrates that there is a deep fundamental connectedness among the constituents of reality. This connectedness is the property referred to in quantum theory as quantum phase entanglement, which is explored below. To understand its key points, it is necessary to review some basic aspects of classical physics and, in particular, quantum physics. This subject matter has been discussed at great length in a burgeoning literature written by physicists themselves for the lay public over the last 150 years. One notable early example in this time-frame is “The Theory of Heat” by James Clerk Maxwell, which was written with non-physicists in mind (Maxwell JC 1871).
The full breadth of physics is obviously beyond the scope of this book, but the brief and selective overview presented here will facilitate understanding of what follows.

Classical Physics

What we now call classical physics advanced tremendously with the work of Galileo Galilei, who lived during the latter part of the Sixteenth and early Seventeenth Centuries. Among other achievements, such as his pioneering improvements to the telescope, Galileo overturned Aristotle’s false idea that motion requires the constant application of a force. Sir Isaac Newton, who was born the year after the death of Galileo, built upon the work of his illustrious predecessor to advance a mathematical formulation of the laws of motion and gravity in “Philosophiae Naturalis Principia Mathematica”,Footnote 2 often referred to simply as “The Principia”. So valid and enduring, in fact, is Newtonian mechanics, which culminated in his Theory of Universal Gravitation, that it was used to put men on the moon almost 300 years after it was formulated. While his achievements in science and mathematics were extraordinary by any measure, Newton himself was said to have remarked, “If I have seen further than others, it is by standing upon the shoulders of giants.”Footnote 3 In addition, Newton, along with Gottfried Leibniz, was a co-discoverer of the calculus, a branch of mathematics that has been essential for all subsequent work in physics, as well as many other branches of science.

Reversibility of Newtonian Mechanics and the Irreversibility of Time

Newton developed equations of motion that describe how objects move in response to the forces that act upon them. An example from the game of pool illustrates the general idea. When the cue ball is struck by a player’s pool stick, the force of the impact sets it in motion toward the racked pool balls, at the angle and with the force the player intends. The impact sets the racked pool balls in motion, and each moves around the pool table until friction with the table and the air brings it to rest. During this process, some of the pool balls will strike the edge of the pool table and bounce off at an angle determined by the angle of impact. Similarly, some of the pool balls may strike others, and each will move away from the collision with a velocity (speed and direction) determined by the velocities of the balls at the moment of impact.

An interesting property of Newton’s laws of motion is that they describe the motion in a video recording of a cue ball striking a single pool ball equally well whether we play the video forward or in reverse. Suppose we begin viewing after the white cue ball is already in motion toward a single stationary pool ball. We observe that the cue ball moves toward and strikes the pool ball, after which both move off in directions determined by the direction of the cue ball at the moment of impact. In reverse motion, the pool ball and the cue ball converge to a point of collision, after which the pool ball stops moving and the cue ball continues to moveFootnote 4 until the video ends. An observer of either scenario could not tell whether the recording was played in reverse or forward motion. This reflects the fact that Newtonian mechanics applies to and describes the motion of the cue ball and pool ball equally well in either case; there is no obvious indicator of the direction of time. On the other hand, if a recording were made starting when the cue ball approached and then struck a full set of racked pool balls, an observer of forward and reverse versions of the recording could easily identify the forward and reverse directions of time. This follows from the fact that we are accustomed to seeing objects disperse and become more disordered over time, but not to seeing them converge into a more ordered spatial structure. The pool balls have an extremely low probability of moving toward each other from various positions on the table, speeding up during the approach, and then stopping abruptly at the exact moment the racked pattern is achieved as their collective momentum is transferred to the cue ball, which then accelerates away from the point of contact!
Newton’s laws of motion describe either scenario equally well, but we know that there is a universal tendency for a system such as the pool balls to become more disordered in the forward direction of time. A reverse viewing of the initial break of the racked pool balls by the cue ball, while adequately described by Newton’s laws of motion, would be instantly recognized as a reversal of the actual events, for reasons explained below.

Another example illustrates the irreversibility of time, and the tendency of a system toward increasing disorder in time, more dramatically. It also reveals something vital about the idea of information and its relationship to the direction of time. Consider the fall of the nursery rhyme character Humpty Dumpty. The legendary nursery rhyme describes the irreversible existential fate of the unfortunate egg. There are two instructive aspects of this nursery rhyme: the catastrophic consequences that the fall has on the egg, and the inability of anyone to reassemble the egg. Watching a video recording of an egg falling from a table, played from beginning to end in forward motion, makes perfect sense to the viewer. The reverse play of the recording, however, is obviously nonsensical, because broken eggs that have fallen to the ground do not spontaneously reassemble themselves and then rise up, in defiance of gravity, to their former height on the table top. Every child understands this. Indeed, it is observations of events such as this that lead to the development of a child’s notion of causality in the interactions of objects, and people, in the world as time advances. More important for our purposes than the fall of the egg and its result, however, is the apparent impossibility of reassembling the egg. That is, while we appreciate that eggs that have fallen do not spontaneously reassemble themselves, something else prevents their deliberate reassembly by any agent. That “something else” is the information needed to complete the desired reconstitution. Considering all of the molecules in the dispersed yolk, albumen, membranes and shell of the egg, an impossibly large amount of information would be required to reconstitute the egg by exactly reversing the molecular motions that occurred during its destruction.
This example illustrates a truism: spontaneous events in any system are associated with an increase in disorder, and with an increase in the amount of information that would be required to describe the condition of the system as it progresses to increasingly disordered states. This is the same amount of information that would be required to reverse the disordering and reestablish the original state of the system. It is highly unlikely that a broken egg could ever be reconstituted to its original state, because the amount of information required to do so is extremely large and virtually impossible to ascertain and use.

These examples from the game of pool and the breaking of eggs provide good intuitive descriptions of the relationship between disorder, also known as entropy, and information, and their intrinsic connection to the forward direction of time. This forward direction has been called the “arrow of time”, a term first used by Sir Arthur Eddington (Eddington AS 1928). Eddington expresses the concept of the arrow of time using the terms “random” and “randomness” to convey the sense of “disorder”:

Let us draw an arrow arbitrarily. If as we follow the arrow we find more and more of the random element in the state of the world, then the arrow is pointing towards the future; if the random element decreases the arrow points towards the past. That is the only distinction known to physics. This follows at once if our fundamental contention is admitted that the introduction of randomness is the only thing which cannot be undone. I shall use the phrase ‘time’s arrow’ to express this one-way property of time which has no analogue in space.

We can summarize the foregoing by saying that when any system evolves over time, the disorder, or entropy, of that system increases, and the amount of information that would be required to describe the state of the system increases accordingly. This is the same amount of information that would be required to return the system to its original state.

We can see how the ideas concerning entropy, disorder and information emerged gradually and were formalized in the Nineteenth Century from the study of thermodynamics, which, together with Maxwell’s theory of electromagnetism, was the crowning achievement of physics to that point in time. These ideas are presented in more formal detail in the next two sections. Readers who wish to avoid the mathematics can skip ahead to the treatment of quantum mechanics, which begins with the section headed “The Dual Nature of Light and Matter”.

The First and Second Laws of Thermodynamics

The Eighteenth and Nineteenth Centuries brought an acceleration of progress in physics. The new field of thermodynamics provided an understanding of temperature and heat flow on a macroscopic, non-atomic scale. The First and Second Laws of Thermodynamics laid the groundwork for many of the technological innovations of the Industrial Revolution, such as the steam engine, which converts heat energy into mechanical energy. The First Law of Thermodynamics states that energy can neither be created nor destroyed, but only converted from one form to another. This can be illustrated by considering the change in the internal energy content, U, of a steam engine as it burns fuel and does work. The burning of fuel increases the energy content of a locomotive’s boiler in the form of heat, Q. The pressure of the steam heated by the boiler then moves the locomotive’s wheels, extracting energy from the system to perform work, W. The First Law of Thermodynamics states that the changeFootnote 5 in the internal energy, ΔU, of a system such as a steam engine can be determined at any time by subtracting the change in the amount of work done by the system, ΔW, from the change in the amount of heat added to the system, ΔQ. For the example of a locomotive, ΔQ is the amount of heat added to the system by the burning of fuel, and ΔW is the amount of work done by the steam in moving the wheels. The First Law of Thermodynamics is captured in the following equation:

$$ \Delta U=\Delta Q-\Delta W $$
(3.1)

When the First Law of Thermodynamics was formulated, it was known that work is equal to force times the distance over which the force is applied:

$$ W=F\times D $$
(3.2)
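As a concrete illustration, Eqs. 3.1 and 3.2 can be evaluated for invented numbers; the heat input, force and distance below are arbitrary choices for the sketch, not data from any real engine:

```python
def delta_u(delta_q, delta_w):
    """Eq. 3.1: change in internal energy = heat added - work done."""
    return delta_q - delta_w

def work(force, distance):
    """Eq. 3.2: work (joules) = force (newtons) x distance (meters)."""
    return force * distance

# Suppose burning fuel adds 5000 J of heat while the steam pushes the
# wheels with an effective force of 400 N over 3 m of track.
dW = work(400.0, 3.0)     # 1200 J of work done by the engine
dU = delta_u(5000.0, dW)
print(dU)                 # 3800 J remain as internal energy
```

Energy is conserved throughout: every joule of heat added either remains in the engine or leaves as work.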

Unlike work, however, the nature of heat was poorly understood until the latter half of the Nineteenth Century. The dominant theory had been that heat is a fluid called caloric. An early indication of the alternative, and correct, idea that heat is related to motion comes from the work of Count Rumford (born Benjamin Thompson in 1753). While supervising the boring of cannon, Rumford was led to question the caloric theory by his observation that a seemingly inexhaustible amount of heat could be produced by the continuous friction generated when boring the cannon barrel. This led him to conclude (Rumford B 1798):

It is hardly necessary to add, that anything which any insulated body, or system of bodies, can continue to furnish without limitation, cannot possibly be a material substance; and it appears to me to be extremely difficult, if not quite impossible, to form any distinct idea of anything capable of being excited and communicated in the manner the Heat was excited and communicated in these experiments, except it be motion.

Remarkably, almost 200 years earlier, in his treatise on inductive reasoning, “Novum Organum”, Sir Francis Bacon had reached the same conclusion by applying his inductive method to the question of heat (Bacon F 1620):

…the nature whose limit is heat appears to be motion. This is chiefly exhibited in flame, which is in constant motion, and in warm or boiling liquids, which are likewise in constant motion…. What we have said with regard to motion must be thus understood, when taken as the genus of heat: it must not be thought that heat generates motion, or motion heat (though in some respects this be true), but that the very essence of heat, or the substantial self of heat, is motion and nothing else.

At first, Bacon correctly observes that flame and boiling water are in constant motion, supporting his claim that “the nature whose limit is heat appears to be motion”. He then seems to retreat from this conclusion when he says, “it must not be thought that heat generates motion, or motion heat (though in some respects this be true)”, but goes on to state that “the very essence of heat, or the substantial self of heat, is motion and nothing else.” Bacon’s understanding that the nature of heat is intimately connected to motion, while correct, was somewhat tentative. His inductive method clearly led him to the right inference about the relationship between motion and heat, but he was unable to bring the concept to complete fruition in the absence of an atomic theory of matter. “Novum Organum” is discussed further in Chap. 7.

The fog of ambiguity concerning the precise nature of heat finally lifted when an atomic-scale mechanistic theory was formulated to explain what heat is and how it is transferred between bodies at different temperatures. This theory was advanced in 1867, when James Clerk Maxwell postulated that the molecules of a gas have a velocity-dependent energy of motion, called kinetic energy,Footnote 6 just as a macroscopic object like a falling apple does (Maxwell JC 1867). The kinetic theory of heat is explained clearly for the lay person in his famous book, “Theory of Heat” (Maxwell JC 1871). The vast number of atoms in even a very small volume of gas makes it impossible to specify the individual velocity and kinetic energy of every atom. Maxwell therefore developed a statistical approach to describe the distribution of molecular velocities for a system of gas molecules in thermal equilibrium,Footnote 7 as shown for three different temperatures in Fig. 3.1.

Fig. 3.1

The graph shows the distribution of molecular velocities for an enclosed system of gas molecules in thermal equilibrium at three different temperatures. Note that more molecules have higher velocities when the gas is hotter. Graph obtained from: “Quantum Physics, Thermodynamics, and Information.” Image downloaded from Information Philosopher at http://www.informationphilosopher.com/quantum/physics/ and reprinted here under terms of the Creative Commons License found at: https://creativecommons.org/licenses/by/3.0/legalcode
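The shape of the curves in Fig. 3.1 can be reproduced from the standard Maxwell-Boltzmann speed distribution. The sketch below uses nitrogen and three arbitrary temperatures as illustrative assumptions (they are not taken from the figure) and shows that the peak of the distribution moves to higher speeds as the gas gets hotter:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K
M = 4.65e-26        # approximate mass of one N2 molecule, kg (assumed gas)

def mb_density(v, temp):
    """Maxwell-Boltzmann probability density for speed v (m/s) at temp (K)."""
    a = M / (2.0 * K_B * temp)
    return 4.0 * math.pi * (a / math.pi) ** 1.5 * v * v * math.exp(-a * v * v)

def most_probable_speed(temp):
    """Peak of the curve: v_p = sqrt(2 k_B T / m)."""
    return math.sqrt(2.0 * K_B * temp / M)

# The peak sits at a higher speed when the gas is hotter, as in Fig. 3.1.
for t in (100.0, 300.0, 600.0):
    print(t, round(most_probable_speed(t)))
```

Evaluating `mb_density` over a range of speeds at these three temperatures would trace out curves of the same family as those in the figure.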

From this distribution of molecular velocities, Maxwell could calculate the average velocity of the molecules in a closed system of gas at uniform temperature. Knowing this, he could determine the average kinetic energy of those same molecules, since kinetic energy is:

$$ K=\frac{1}{2}m{\left({v}_{\mathrm{avg}}\right)}^2 $$
(3.3)

where K is kinetic energy, m is the mass of a molecule of the gas and v_avg is the average molecular velocity. From this approach, Maxwell developed an understanding of the macroscopic phenomena of thermodynamics in terms of a statistical treatment of the kinetic energy inherent in molecular motion. Maxwell explained that heat is nothing more than the collective mechanical effect of the kinetic energies of all the molecules in the gas, liquid, or solid being observed.Footnote 8 This new theory of heat came to be known as statistical mechanics or statistical thermodynamics. When a hot body is brought into contact with a cooler one, kinetic energy is exchanged as their respective molecules collide. Heat flows from the hotter body to the cooler one because, during these collisions, the net transfer of kinetic energy is from the hotter body to the cooler one.
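Equation 3.3 is simple enough to evaluate directly. The molecular mass and average speed below are rough illustrative values for nitrogen near room temperature, assumed for the sketch rather than quoted from Maxwell:

```python
# Rough illustrative values (assumed): one N2 molecule near room temperature.
m = 4.65e-26    # mass of the molecule, kg
v_avg = 475.0   # average molecular speed, m/s

# Eq. 3.3: kinetic energy of a molecule moving at the average speed.
K = 0.5 * m * v_avg ** 2
print(K)  # roughly 5e-21 joules per molecule
```

The number is tiny, which is why heat only becomes appreciable as the collective effect of an enormous number of molecules.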

The Second Law of Thermodynamics states that any natural or spontaneous process moves in the direction that causes the entropy of the system plus its environment to increase. Entropy was originally understood as a thermodynamic variable that describes an entire system of molecules; variables such as the temperature and entropy of a body are therefore called state variables of the system. In classical thermodynamics, entropy is defined in terms of heat and temperature. For each amount of heat (ΔQ) added to a system such as a volume of water, the entropy change (ΔS) is calculated according to:

$$ \Delta S=\Delta Q/T $$
(3.4)

where T is the temperature in kelvins at which the heat is added.Footnote 9 This is the macroscopic thermodynamic understanding of entropy, but in the hands of Ludwig Boltzmann, Maxwell’s statistical mechanics would provide another, far-reaching understanding of entropy (Boltzmann L 1877). Maxwell had derived the mathematical expression for the distribution of molecular velocities in a gas at thermal equilibrium, as shown in Fig. 3.1. Boltzmann showed that in a volume of gas that is removed from thermal equilibrium,Footnote 10 the distribution of molecular velocities gradually approaches Maxwell’s distribution as molecular collisions bring the gas back into thermal equilibrium. In doing so, Boltzmann’s theory provided a new understanding of entropy as molecular disorder, which reaches its maximum value at thermal equilibrium. Boltzmann defined entropy as:

$$ S={\mathrm{K}}_{\mathrm{B}}{\log}_{\mathrm{b}}(W), $$
(3.5)

where W is a measure of molecular disorder, expressed as the number of equally probable microstatesFootnote 11 the atoms in a system can assume, and KB is a number known as Boltzmann’s constant.Footnote 12, Footnote 13 Implicit in this formulation of entropy is the understanding that more microstates are available to a system at thermal equilibrium, when temperature is uniform throughout, than to the same system removed from equilibrium. When W is large, as at thermal equilibrium, the logarithm of W is correspondingly large.Footnote 14 Applying this reasoning to Eq. 3.5, the entropy, S, of a system of gas molecules is maximal when the molecular disorder, W, is maximal, at thermal equilibrium. From this, Boltzmann demonstrated that entropy and molecular disorder are intimately related.
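Equation 3.5 can be evaluated for any assumed count of microstates. The sketch below uses the natural logarithm (the choice of base only rescales the result) and two arbitrary values of W to show that entropy grows with the number of available microstates:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(w):
    """Eq. 3.5 with the natural logarithm: S = k_B * ln(W)."""
    return K_B * math.log(w)

# Arbitrary microstate counts: few for an ordered system far from
# equilibrium, vastly more for the same system at equilibrium.
s_ordered = boltzmann_entropy(1e6)
s_equilibrium = boltzmann_entropy(1e24)

print(s_equilibrium > s_ordered)  # True: entropy is largest at equilibrium
```

Note that a system with only one available microstate (W = 1) has zero entropy: it is perfectly ordered.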

The mechanical state of a system of gas molecules in a container can be specified by a detailed description of the positions and momenta of all the particles in the system. Typically, the probabilities of the different microstates thus defined will not be equal, and the measure of molecular disorder, W, must be replaced by an expression that takes account of the differing probabilities of occurrence of the microstates, in which case the entropy of the system is defined as:

$$ S=-{\mathrm{K}}_{\mathrm{B}}\Sigma {\mathrm{p}}_{\mathrm{i}}\;{\log}_{\mathrm{b}}\left({\mathrm{p}}_{\mathrm{i}}\right), $$
(3.6)

where p_i is the probability of a particular microstate of the system, and the Greek letter sigma, Σ, indicates that the expression p_i log_b(p_i) must be summed over all of the possible microstates of the system. The entropy expression in Eq. 3.6 is known as the Gibbs entropy, after Josiah Willard Gibbs, the mathematician and theoretical physicist who formulated it.
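A short sketch makes the content of Eq. 3.6 concrete. The two probability distributions below are invented: one uniform, for which the Gibbs expression reduces to Boltzmann’s k_B ln(W), and one dominated by a single microstate, representing a more ordered system:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def gibbs_entropy(probs):
    """Eq. 3.6 with the natural logarithm: S = -k_B * sum(p_i * ln(p_i))."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

# Four equally probable microstates: Gibbs reduces to Boltzmann's k_B*ln(4).
uniform = [0.25, 0.25, 0.25, 0.25]

# Four microstates with one overwhelmingly likely: a more ordered system.
skewed = [0.97, 0.01, 0.01, 0.01]

print(gibbs_entropy(uniform))                         # equals k_B * ln(4)
print(gibbs_entropy(uniform) > gibbs_entropy(skewed)) # True
```

The uniform case gives the larger entropy, consistent with the idea that equal likelihood of all microstates corresponds to maximum disorder.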

Comparing the expressions for entropy in Eqs. 3.4 and 3.6, we see that entropy can be defined not only in terms of heat and temperature, but also in terms of molecular disorder. If we equate the expressions for entropy in Eqs. 3.4 and 3.6 we can write:

$$ \Delta S=\Delta Q/T=-{\mathrm{K}}_{\mathrm{B}}\;\Sigma {\mathrm{p}}_{\mathrm{i}}\;{\log}_{\mathrm{b}}\left({\mathrm{p}}_{\mathrm{i}}\right) $$
(3.7)

Equation 3.7 states that the macroscopic thermodynamic variable known as entropy, the amount of heat added to a system divided by the temperature at which it was added, can also be understood in terms of the statistical mechanical measure of molecular disorder.Footnote 15 When heat is added to a system, disorder increases and entropy increases likewise. We can isolate the two rightmost terms of Eq. 3.7 as follows:

$$ \Delta Q/T=-{\mathrm{K}}_{\mathrm{B}}\ \Sigma {\mathrm{p}}_{\mathrm{i}}\;{\log}_{\mathrm{b}}\left({\mathrm{p}}_{\mathrm{i}}\right) $$
(3.8)

Multiplication of both sides of Eq. 3.8 by temperature, T, then gives:

$$ \Delta Q=(T)\ \left[-{\mathrm{K}}_{\mathrm{B}}\;\Sigma {\mathrm{p}}_{\mathrm{i}}\;{\log}_{\mathrm{b}}\left({\mathrm{p}}_{\mathrm{i}}\right)\right] $$
(3.9)

Both sides of Eq. 3.9 are in units of joules.Footnote 16 This simple algebraic manipulation offers another perspective on Boltzmann’s insight that the macroscopic thermodynamic variable, heat added to a system, and the microscopic statistical mechanical variable, molecular disorder, are intimately related properties of that system. The heat added, ΔQ, is equal to the temperature at which it is added, T, multiplied by [-KB Σ p_i log_b(p_i)], a number that quantifies the molecular disorder of the system.
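The arithmetic of Eq. 3.9 can be checked with an assumed temperature and assumed microstate probabilities; both sides then come out in joules:

```python
import math

K_B = 1.380649e-23                 # Boltzmann's constant, J/K
T = 300.0                          # assumed temperature, kelvins
probs = [0.5, 0.25, 0.125, 0.125]  # assumed microstate probabilities

# The bracketed disorder term of Eq. 3.9, in joules per kelvin:
disorder = -K_B * sum(p * math.log(p) for p in probs)

# Eq. 3.9: heat = temperature x disorder term, in joules.
delta_q = T * disorder
print(delta_q)  # on the order of 5e-21 J for these toy numbers
```

The minute result reflects the fact that the probabilities here describe a single particle's worth of microstates; for a macroscopic body the sum runs over an astronomically larger state space.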

Information

One of the great surprises to emerge from the independent development of statistical mechanics in the late Nineteenth Century and of information theory more than 70 years later was the realization that entropy and information are intimately related. To understand the role that entropy and information play in cosmology, and in the emergence of life and mind, it is important that we examine this relationship.

The expression for Gibbs entropy in Eq. 3.6 above bears a striking resemblance in form to the mathematical expression for uncertainty in information theory, which was derived independently of any consideration of statistical mechanics or thermodynamics by Claude Shannon and Warren Weaver after World War II (Shannon C and Weaver W 1949). It can be written as:

$$ H=-\Sigma {\mathrm{p}}_{\mathrm{i}}\;{\log}_{\mathrm{b}}\left({\mathrm{p}}_{\mathrm{i}}\right) $$
(3.10)

where H is the summed uncertainty, or surprise value, associated with a string of words in a message, or a sequence of events, each of which has a unique probability of occurrence designated by p_i. Uncertainty may also be thought of as the information conveyed by the message or, alternatively, the information one would need to specify the state of a system. If it can be shown that Eqs. 3.6 and 3.10 are related, then entropy, which depends on a measure of atomic disorder, can be understood in terms of information. It will be helpful first to gain an intuitive understanding of what Eq. 3.10 is saying. In this equation, H also represents the information, I, inherent in the occurrence of a sequence of events or words, and p_i is the probability of each word or event. Equation 3.10 therefore states that the information content of a message depends on the probability of occurrence of its individual components. If we consider a simple message that consists of only one word or event, the occurrence of an unlikely or rare event conveys a high level of information: the high uncertainty of its occurrence endows its actual occurrence with high information content. On the other hand, the occurrence of a highly probable event conveys less information, because it is expected. Consider the issue from the perspective of surprise value. If tomorrow morning the sun rises in the East, you will naturally regard this as unsurprising and therefore quite uninformative. If sunrise did not occur at the appointed time, however, you would attach a great deal of significance to this. You would recognize such a surprising event as very significant, or informative, and you would also want an explanation. Likewise, if you received a message from a friend stating that death and taxes are the only two things certain in life, you would not be very surprised. Your friend would be telling you something you already know; the message would contain virtually no information. If, on the other hand, you received a message from that same friend stating that he had won a multimillion-dollar lottery (an event with a very low probability), you would consider this surprisingly informative. You might even be encouraged to go out and buy a lottery ticket of your own.
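These examples can be restated numerically in the language of Eq. 3.10. The sketch below uses base-2 logarithms, so information is measured in bits; the probabilities assigned to the "certain" and "lottery" messages are invented for illustration:

```python
import math

def surprisal(p):
    """Information conveyed by a single event of probability p, in bits."""
    return -math.log2(p)

def shannon_h(probs):
    """Eq. 3.10 with base-2 logs: average uncertainty of a set of events."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(surprisal(0.999999))    # "death and taxes are certain": ~0 bits
print(surprisal(1e-7))        # "I won the lottery": ~23 bits
print(shannon_h([0.5, 0.5]))  # a fair coin toss carries exactly 1.0 bit
```

The near-certain message carries almost no information, while the wildly improbable one carries a great deal, exactly as the intuition about surprise value suggests.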

The relationship between the statistical mechanical description of the atomic disorder of a system of gas molecules in a container in Eq. 3.6, and the description of that system provided by information theory in Eq. 3.10, can be seen from the following example.Footnote 17 We cannot possibly know and describe the state of a system of gas molecules in a container. On the other hand, we can simplify the impossibly complex analysis that would require specification of momentum and position, for each molecule in a container of gas by revisiting the example introduced earlier for the game of pool. In doing this, we reduce the number of elements of the system and also lower the dimensions of the system from three-dimensional space for a gas to the two-dimensional surface of the pool table. To further simplify the analysis, we ignore the momentum of the pool balls and consider how the transition from the initial simple state to subsequent more complicated ones affects only how the positions of the pool balls change over time. Positional complexity can be assessed by measuring how the spatial distribution of pool balls changes over time after they are hit by the cue ball. If the surface is divided into a very large number of squares, with each one just large enough to accommodate just one pool ball, initially the racked pool balls are all confined to a small region of adjacent squares. The system is highly ordered because the position of each pool ball contains all the information needed to find all the others. But as the positions of individual pool balls begins to vary after they are struck by the cue ball they spread out over a larger total area on the table surface and are less likely to be found in adjacent squares. The system becomes more disordered. The position of each pool ball no longer contains all the information needed to find all the others. 
The system thus evolves from a highly ordered initial state, wherein the pool balls are clustered in a small group of adjacent squares and the position of any one of them conveys all information needed to find all the others, to a more disordered state in which, eventually, the position of each pool ball conveys only the information that specifies its own position.
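The growth of positional disorder described above can be made quantitative with a small sketch. The numbers of balls and squares below are illustrative assumptions of ours; the entropy formula is the Shannon measure of Eq. 3.10:

```python
import math

def shannon_entropy_bits(probs):
    """H = -sum(p * log2(p)) over outcomes with nonzero probability."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical illustration: 15 pool balls on a table divided into 900
# ball-sized squares.  Model one ball's position as a uniform choice
# among the squares it could occupy.
racked = [1 / 15] * 15       # racked: confined to 15 adjacent squares
dispersed = [1 / 900] * 900  # after play: anywhere on the table

print(round(shannon_entropy_bits(racked), 2))     # 3.91 bits
print(round(shannon_entropy_bits(dispersed), 2))  # 9.81 bits
```

As the squares a ball might occupy multiply, more bits are needed to specify its position, matching the account of growing disorder above.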

Some final examples will help to illustrate the relationship between information, entropy and the Second Law of Thermodynamics. Consider the diffusion of molecules that convey the scent of roses in a room. One would not expect the scent of roses that had already filled a room to spontaneously concentrate in a small region immediately surrounding the roses, leaving the rest of the room devoid of their beautiful scent. Spontaneous processes in the universe always proceed from an initial state that is characterized by a low likelihood to a succeeding state that is more likely. Correspondingly, the initial state of a system that undergoes a spontaneous change has a lower level of disorder than the final state to which it evolves. Your socks do not spontaneously gather in your sock drawer. Rather, they disperse in apparently random and maddening fashion. You need more information to find your socks when they are spread around the bedroom and laundry room than when they are neatly gathered in your sock drawer.

On the atomic scale, we have seen that entropy is a function of atomic disorder, so a spontaneous change in a system always involves an increase in entropy or disorder, and hence in the amount of information you would need to specify the state of the system, while the information any single element of the system conveys about the rest declines. The universe is, therefore, characterized by an inexorable increase in entropy or disorder, and a corresponding increase in the amount of information required to specify its state, as space-time evolves. This statement is characterized as the Second Law of Thermodynamics. Spontaneous processes always proceed with an increase in entropy, or the growth of disorder. A question that commonly follows this assertion is why, then, life and mind emerge in the natural history of the universe, since these clearly require more ordering of matter and energy for their existence. The answer usually given is that local order may increase in the planetary breeding grounds where life and mind emerge, but the entropy and disorder of the universe increase nevertheless, because there is a greater increase in entropy outside the region where order grows.

We can now examine the relationship between Shannon-Weaver entropy (H) and Gibbs entropy (S) as follows. Although the form of Eq. 3.6 looks similar to that of Eq. 3.10, except for the presence of Boltzmann’s constant (kB) in Eq. 3.6, the Gibbs entropy and the Shannon-Weaver uncertainty are not equal to each other. Rather, the Shannon-Weaver uncertainty (H), which is expressed in units of bits in Eq. 3.10, can be viewed as the amount of information that would be needed to specify the state of a system of particles that has Gibbs entropy (S), expressed in units of J/K in Eq. 3.6. This means that, as a system moves toward equilibrium in a spontaneous process, the entropy of that system increases and the amount of information needed to describe the physical state of the system increases accordingly.
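As a rough numerical sketch of this correspondence, each bit of Shannon-Weaver uncertainty can be associated with kB ln 2 joules per kelvin of Gibbs entropy, the conversion implied by comparing Eq. 3.6 with Eq. 3.10; the example values are ours, not the text’s:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def gibbs_entropy_from_bits(h_bits):
    """Thermodynamic entropy corresponding to h_bits of missing
    information: S = k_B * ln(2) * H."""
    return K_B * math.log(2) * h_bits

# One bit of missing information corresponds to a minuscule entropy,
# which is why everyday information never registers thermodynamically:
print(gibbs_entropy_from_bits(1))  # ≈ 9.57e-24 J/K
```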

The Dual Nature of Light and Matter

Newton’s experiments on the nature of light revealed that white light is composed of a spectrum of monochromatic colors, or light frequencies, more commonly known as the rainbow. Newton is also responsible for advancing the particle theory of light. The idea that light consists of a stream of minuscule particles was challenged by Robert Hooke and Christiaan Huygens, who proposed the alternative theory that light is a wave that varies transversely to the direction of propagation. Subsequently, Thomas Young demonstrated that light exhibits the wave property of interference. The debate over whether light consists of particles or is a wave provided an early portent of the amazing findings of quantum physics, which showed that not only light, but also matter, simultaneously possesses the potential to manifest as either a particle or a wave.

In addition to thermodynamics, another area of rapid advancement in physics during the Nineteenth Century was the field of electromagnetic theory. Fundamental discoveries were made concerning the forces of electricity and magnetism, and these findings were synthesized by the mathematical physicist James Clerk Maxwell into the electromagnetic theory of the propagation of light. By unifying two of nature’s forces, electricity and magnetism, into a more fundamental force, electromagnetism, Maxwell was able to explain that light is an electromagnetic wave that propagates through space. Maxwell’s equations for light also implied that the speed of light propagation is constant irrespective of the motion of an observer relative to the motion of the light. This fact gave Einstein a vital clue in the early Twentieth Century that led him to his Special Theory of Relativity, which showed that space and time are not constant but vary with the speed of an observer. Maxwell’s electromagnetic theory of light was also consistent with the earlier ideas of Thomas Young.

Newton’s idea that light consists of a stream of minuscule particles was challenged by the experiments of Thomas Young, as mentioned above. In what has come to be called the double-slit experiment, Young was able to show that if a beam of light was passed through two adjacent slits in an opaque object, the two resulting beams of light that emerged from the slits would exhibit the wave property of constructive and destructive interference. To understand this, consider how two ocean waves that converge on the shoreline from two different angles interact with each other when they meet. When the waves converge, the size of the resulting wave depends on which part of each wave meets the other. Where the peak of one wave meets the other’s trough, the waves cancel each other. Where the peak of one wave intercepts the peak of the other wave, they add to produce a peak that is more intense than either wave alone. Finally, where the trough of one wave meets the trough of the other, the waves add to produce a trough that is lower than either of the original troughs. The example of colliding wave fronts illustrates the destructive and constructive interference that Young observed for intersecting light beams when they reached a screen, where they produced an image called a diffraction pattern. The image shows regions of more intense light and regions of minimal light, as seen in Fig. 3.2.
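The fringe pattern Young observed can be sketched quantitatively. In the small-angle approximation, the two-slit intensity at screen position x is proportional to cos²(πdx/λL), and bright fringes are separated by λL/d. Only the 0.7 mm slit separation comes from the Fig. 3.2 caption; the wavelength and screen distance below are illustrative assumptions:

```python
import math

wavelength = 650e-9  # red light, m (assumed value)
d = 0.7e-3           # slit separation, m (from the Fig. 3.2 caption)
L = 1.0              # slit-to-screen distance, m (assumed value)

def double_slit_intensity(x):
    """Relative two-slit intensity at screen position x, ignoring the
    single-slit diffraction envelope (small-angle approximation)."""
    return math.cos(math.pi * d * x / (wavelength * L)) ** 2

fringe_spacing = wavelength * L / d
print(f"bright-fringe spacing ≈ {fringe_spacing * 1e3:.2f} mm")  # ≈ 0.93 mm
```

Halfway between bright fringes the two waves arrive half a wavelength apart and cancel, producing the dark bands of the diffraction pattern.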

Fig. 3.2
figure 2

Right. Images of red light that emerges from a single slit (top), or double slits (bottom), in an opaque object. The top image with one slit closed. The bottom image with both slits open. The separation between the two slits is 0.7 mm. The bottom image shows the signature pattern of interfering waves. The top image shows no interference pattern because only one wave emerges from the single slit. Left. View of Young’s double-slit apparatus with both slits open. The light source on the left shines light through a single slit in the first opaque screen, after which it passes through two open slits in the second opaque screen. The bright spots that form on the photographic plate are the result of wave interference as shown. Image at right is provided by: Wikipedia at https://en.wikipedia.org/wiki/Double-slit_experiment. License: CC BY-SA: Attribution-ShareAlike. https://creativecommons.org/licenses/by-sa/3.0/deed.en Image at left was obtained from an open source at: https://theobservereffect.wordpress.com/the-most-beautiful-experiment/

Young’s definitive experiment proved that light is a wave, but this settled things only temporarily. The debate in the time of Newton and Young, concerning whether light consists of particles or is a wave, provided an early hint of the amazing findings that would be observed when physicists began to probe matter at the atomic level. The wave theory of light was about to be challenged!

In the photoelectric effect, first discovered by Heinrich Hertz in 1887, light that falls on a metal surface causes negatively charged electrons to be ejected from the metal. This is illustrated in Fig. 3.3 below.

Fig. 3.3
figure 3

Incident light ejecting electrons from a metal surface and the electrical circuit designed to study ejected electrons. A battery or variable voltage source applies a voltage between the metal surface and the detector. Light hits the metal surface and ejects negatively charged electrons, which are captured by the positively charged detector. The ammeter measures the current produced by the photoelectrons. The image was produced by Utkarsh Agarwal, and was obtained at: https://www.quora.com/How-can-I-understand-the-photoelectric-effect-easily. This material is reproduced here under terms of a license found at: https://www.quora.com/about/tos

Capture of these electrons into a closed circuit allowed the number of ejected electrons to be measured by the strength of the electrical current in the circuit. The strength of the photocurrent, I, is determined by the number of electrons flowing through the circuit, which depends on the intensity of the light that falls on the metal surface and on the voltage applied between the metal surface and the detector, as shown in Fig. 3.4 for two different intensities of incident light.

Fig. 3.4
figure 4

The strength of the photocurrent produced by high- and low-intensity incident light varies with the strength of the voltage (potential difference) applied between the metal surface and the detector. The photocurrent declines as the voltage approaches zero at the graph origin and then becomes negative. The detector then gradually becomes negatively charged and repels negatively charged electrons, making it harder for the electrons to enter the detector. When the voltage reaches a limiting value of −ΔVS, the electrons ejected from the metal surface by the light no longer reach the detector and the photocurrent stops. Image was downloaded from: https://phys.libretexts.org/Bookshelves/University_Physics/Book%3A_University_Physics_(OpenStax)/Map%3A_University_Physics_III_-_Optics_and_Modern_Physics_(OpenStax)/6%3A_Photons_and_Matter_Waves/6.2%3A_Photoelectric_Effect. License is at: https://creativecommons.org/licenses/by-nc-sa/3.0/us/

This is in line with expectations based on the wave theory of light. According to the wave theory, however, the kinetic energy of the ejected electrons should also depend on the intensity of the light falling on the metal surface. Surprisingly, it was found that the kinetic energy of the ejected electrons depended not on the intensity of the incident light but on its color or frequency. A higher frequency of light imparted more kinetic energy to ejected electrons than a lower frequency, as seen in Fig. 3.5b. This finding was explained by Einstein in a 1905 paper for which he later won the Nobel Prize. Einstein knew that Max Planck had proposed that light energy is absorbed and emitted by matter in discrete units according to the equation E = hf, where E is the electromagnetic energy that is absorbed, h is Planck’s constant and f is the frequency of the electromagnetic wave. Einstein’s insight was to apply Planck’s concept to the photoelectric effect. He proposed that Planck’s equation be used to determine the energy of each unit of electromagnetic radiation absorbed by the metal plate in an experimental arrangement such as that shown in Fig. 3.3. In this formulation of the collision between one photon and one electron, electromagnetic energy can be transferred to the electron only in units equal to whole-number multiples of hf. It is for this reason that the kinetic energy (KE) of ejected photoelectrons depends on the frequency of the incident light and not on its intensity or brightness. The intensity of the incident light determines the number of ejected electrons, and hence the strength of the photocurrent as shown in Fig. 3.4, while the frequency of the incident light determines the kinetic energy of the photoelectrons as shown in Fig. 3.5.

Fig. 3.5
figure 5

(a) Illustration of the photoelectric effect. Light interacts with electrons in the metal in discrete units called photons. One photon transfers its energy to one electron, which escapes the metal surface if that energy exceeds ϕ. (b) Kinetic energy (KE) of photoelectrons ejected from the metal plate as a function of the frequency (f) of light that hits the plate. Measurements showed that KE varied with the frequency of the light, not its intensity. The relationship between the kinetic energy of ejected electrons and the frequency of incident light can be expressed as KE = hf − ϕ, where the work function ϕ, given by the intercept of the line with the KE axis, is the amount of energy the electron expends escaping the metal. The image in A was obtained at: https://www.siyavula.com/science/grade-12/12-optical-phenomena-and-properties-of-matter/12-optical-phenomena-and-properties-of-matter-02.cnxmlplus under license terms found at: https://creativecommons.org/licenses/by/3.0/. Image in B obtained at: http://dev.physicslab.org/Document.aspx?doctype=3&filename=AtomicNuclear_PhotoelectricEffect.xml and used with permission of Catherine H. Colwell, the creator and copyright owner
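The relation KE = hf − ϕ lends itself to a small calculation. The light frequencies and work function below are illustrative assumptions, not values taken from the figure:

```python
h = 6.62607015e-34    # Planck's constant, J·s
EV = 1.602176634e-19  # joules per electron-volt

def photoelectron_ke_ev(frequency_hz, work_function_ev):
    """Einstein's relation KE = h*f - phi, in eV; returns None when the
    photon energy is below the work function (no electron is ejected)."""
    ke = h * frequency_hz / EV - work_function_ev
    return ke if ke > 0 else None

# Green light (~5.5e14 Hz) on a metal with an assumed 2.0 eV work function:
print(photoelectron_ke_ev(5.5e14, 2.0))  # ≈ 0.27 eV
# Red light (~4.0e14 Hz) is below threshold for the same metal:
print(photoelectron_ke_ev(4.0e14, 2.0))  # None
```

Doubling the intensity of either light source would double the number of photons, and hence the photocurrent, without changing either kinetic energy.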

Einstein’s explanation of the photoelectric effect provided support for Newton’s particle theory of light. On the other hand, this particle nature of light conflicts with the wave property of interference that Young had demonstrated in the double-slit experiment. The nature of this paradox is described as the wave-particle duality of light, which was the first great paradox to be revealed in the quantum theory of light and matter. The next paradox encountered was even more startling, however.

In 1924, a young French Ph.D. student named Louis de Broglie reasoned that, since nature is symmetrical in so many ways, and since there is a wave-particle duality of light, perhaps there is also a wave-particle duality for atomic particles such as the electron. Using the same formula that described the wavelength of light, de Broglie predicted the wavelength of the electron, and this prediction was subsequently borne out by experiments that studied the scattering of a beam of electrons aimed into a crystalline solid. Such scattering would be expected if the electron were a wave. Perhaps even more surprising were the results obtained when a beam of electrons was used instead of light in the double-slit experiment. A clear interference pattern, such as the one shown for light on the right side of Fig. 3.2, was observed on the screen. This could only be explained if the electrons were moving through space as waves and not as particles. Experiments such as these established the wave-particle duality of matter. Light could behave as either a particle or a wave, and so could the subatomic and even atomic constituents of matter! If you find this baffling, you are in good company. In his Messenger Lecture in 1964, physicist Richard Feynman said (Feynman R 1967):

I think I can safely say that no one understands quantum mechanics… I am going to tell you what nature behaves like. If you will simply admit that maybe she does behave like this, you will find her a delightful, entrancing thing. Do not keep saying to yourself, if you can possibly avoid it, ‘but how can it be like that?’ because you will get ‘down the drain,’ into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that.

Yet the mystery was to deepen even further.
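De Broglie’s relation for matter waves, λ = h/p, is easy to evaluate directly; the electron speed chosen below is an illustrative assumption:

```python
h = 6.62607015e-34      # Planck's constant, J·s
M_E = 9.1093837015e-31  # electron rest mass, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """lambda = h / (m * v), valid for non-relativistic speeds."""
    return h / (mass_kg * speed_m_s)

# An electron at 1% of light speed (assumed) has a wavelength comparable
# to atomic spacings, which is why crystals diffract electron beams:
lam = de_broglie_wavelength(M_E, 0.01 * 2.998e8)
print(f"{lam * 1e9:.2f} nm")  # ≈ 0.24 nm
```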

Schrödinger’s Wave Mechanics and the Meaning of the Quantum Wave

What does it mean to say that an elementary particle such as an electron has the properties of a wave, behaves as a wave, or even that it is a wave as is sometimes stated? Erwin Schrödinger developed a mathematical formulation of quantum physics called wave mechanics. Building on de Broglie’s wave theory of matter, Schrödinger showed that particles such as the electron can be characterized by the same type of equations that describe macroscopic waves such as those that occur in water or as sound waves in the air. Schrödinger’s theory was useful in helping to explain how a beam of electrons could produce a wave interference pattern in the double-slit experiment, but it left the physical meaning of the electron wave unexplained. Further information was provided by new discoveries as described below.

It was clear that the wave interference pattern could be obtained with a beam of electrons in the double-slit experiment just as it had been for light. When single electrons were fired in succession toward two open slits, however, a baffling observation was made. As expected, each electron could be observed to strike the screen as a particle, but after many electrons had been fired at the double slits the familiar wave interference pattern emerged. That is, the same interference pattern was observed after many electrons had been fired one at a time in succession as when a continuous beam composed of many electrons was fired all at once! The surprise derives from the presumption that, when fired one at a time, each electron would have to travel through only one of the two open slits. In that case, no interference pattern should occur: an interference pattern would require the interaction of electron waves that passed through both of the open slits at the same time. The fact that the sequential firing experiment nevertheless produced an interference pattern led to the conclusion that each electron travels through space as a wave that simultaneously passes through both slits and interferes with itself. The only alternative would be for the wave of a single electron passing through one slit to interfere with the wave of another electron that passed through the other slit at a different time. Observations such as this begin to shake our understanding of the fabric of reality. Production of an interference pattern when single electrons are fired in sequence toward double slits would be impossible if each electron behaved as a particle in the classical sense of being located exclusively in a well-defined region of space.
Instead the interpretation of the results in the sequential single electron double-slit experiment suggests that an electron not only has the properties of a particle that are directly observed through an experimental measurement that occurs at a specific place, but also has a wave nature that transcends the confines of location, and time as well. We will return to this issue in the next chapter.
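The statistical reading of these single-electron arrivals can be sketched numerically: the squared magnitude of the wave amplitude at each detector position gives the probability of detection there, the rule Born proposed, as discussed below. The complex amplitudes here are invented for illustration:

```python
import math

# Invented complex wave amplitudes over four detector regions:
amplitudes = [0.1 + 0.2j, 0.5 - 0.1j, -0.3 + 0.4j, 0.2 + 0.0j]

# Normalize so the squared magnitudes sum to one, then apply |psi|^2:
norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
probabilities = [abs(a / norm) ** 2 for a in amplitudes]

for i, p in enumerate(probabilities):
    print(f"region {i}: detection probability {p:.3f}")
print(sum(probabilities))  # the probabilities sum to 1
```

A single electron lands in exactly one region; only over many repetitions does the distribution, and with it the interference pattern, emerge.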

Clearly, we are dealing with a paradox in the wave-particle duality of matter and light. An attempt to resolve the paradox was offered by Niels Bohr, Werner Heisenberg and others in what came to be known as the Copenhagen Interpretation. According to this idea, a measurement “collapses” the wave aspect of a quantum entity, by virtue of which it loses the transcendent properties of non-locality. Thereafter the quantum entity manifests as a particle at the time and place of a measurement, by virtue of the measurement! It was in this context that Max Born proposed that the physical meaning of a quantum entity’s wave is that it provides information concerning the possibilities of the particle’s location in different regions of space. Specifically, Born proposed that the square of the quantum wave amplitude at any point in space represents the probability of finding the particle there. Any observation or measurement of its wave state would cause the particle-wave to manifest exclusively as a particle localized, at the time of the measurement, in a defined region of space. The interference of matter waves helped to establish the wave nature of matter, but the idea that a quantum entity transitions from its wave state to a particle state on the basis of a measurement that causes the quantum entity’s wave to “collapse” led to questions concerning the nature of the quantum wave described by Schrödinger. Does Schrödinger’s wave equation merely describe a mathematical function, or does it describe a wave that has physical meaning? Born’s interpretation of Schrödinger’s wave equation raised the obvious question: if quantum matter waves are described by mathematical functions that can be used to calculate the spatial distribution of a particle’s probability of being localized in different regions of space, what is the physical reality of such waves that allows them to behave in the double-slit experiment the same way electromagnetic light waves do?
It is important to remember in this context that this is not the first time such a question has been asked. The physical meaning of the force fields of classical physics, such as the electric and magnetic fields, is defined by mathematical expressions that describe how these fields affect various elementary particles at different locations within the field. For example, two electrons will repel each other because elementary particles that both carry a similar charge exert a repulsive force against each other. This force is inversely proportional to the square of the distance between them. The nature of electric and magnetic fields puzzled physicists when they were first discovered, however. How could a non-material entity exert a force on matter? What was the physical nature of electric and magnetic force fields? This is the same question we are now asking about quantum fields. In time, physicists came to accept the force exerted by these fields as evidence of their tangible physical reality. It was certainly clear that moving orthogonal magnetic and electric fields produced electromagnetic waves such as visible light, X-rays, and radio waves. Although not material, light waves clearly have a tangible physical reality. The quantum matter wave is not a force field, but we can think of it as a quantum field.Footnote 18 Its mathematical representation describes the spatial distribution of possible locations of a particle throughout space if a measurement is made. So, the quantum field of the electron may be thought of as determining the probabilities for the particle manifestation of an electron in various regions of space.
The phenomenon of quantum tunneling, in which a particle literally vanishes from one side of an obstacle and reappears instantaneously on the other side without traversing the distance between the two locations, demonstrates this quantum field of an elementary particle very dramatically.Footnote 19 We might argue, therefore, that the quantum field has a real physical meaning and is as real or tangible as a force field.Footnote 20 This line of reasoning does not explain how the quantum field of an electron can manifest the property of wave interference, however. How does the quantum field of an electron interact with two slits in an opaque object? The conundrum concerning the interpretation or meaning of the quantum wave was captured by Matthew Pusey and his colleagues as follows (Pusey MF et al. 2012):

At the heart of much debate concerning quantum theory lies the quantum state. Does the wave function correspond directly to some kind of physical wave? If so, it is an odd kind of wave, since it is defined on an abstract configuration space, rather than the three-dimensional space we live in. Nonetheless, quantum interference, as exhibited in the famous two-slit experiment, appears most readily understood by the idea that it is a real wave that is interfering. Many physicists and chemists concerned with pragmatic applications of quantum theory successfully treat the quantum state this way.

Many others have suggested that the quantum state is something less than real (References. Omitted). In particular, it is often argued that the quantum state does not correspond directly to reality, but represents an experimenter’s knowledge or information about some aspect of reality. This view is motivated by, amongst other things, the collapse of the quantum state on measurement. If the quantum state is a real physical state, then collapse is a mysterious physical process…

In their important paper, Pusey and his colleagues go on to show that strict information-based models cannot reproduce the predictions of quantum theory. In their own words:

In conclusion, we have presented a no-go theorem, which – modulo assumptions – shows that models in which the quantum state is interpreted as mere information about an objective physical state of a system cannot reproduce the predictions of quantum theory. The result is in the same spirit as Bell’s Theorem (Ref. omitted), which states that no local theory can reproduce the predictions of quantum theory.

If the quantum state is a real wave with physical meaning, what happens when a measurement is made that leads to the so-called collapse of this wave? An interesting theory, which would have cosmological significance if it were true, proposes that the collapse of the quantum wave generates a gravitational field that surrounds the quantum particle that manifests as a result of the collapse (Tilloy A 2018). This is a fascinating idea because, if it is verified, it would demonstrate at least one tangible physical feature that lends ontological validity to the quantum wave. Tilloy’s theory could provide a link between space-time, gravity and quantum mechanics. Namely, upon a measurement of the space-time-permeating quantum field described by Schrödinger’s equation, “collapse” is what generates a gravitational field, as well as a quantum particle at the center of that gravitational field. As Tilloy himself cautions, this theory awaits empirical verification:

…how much should we believe in the model introduced here? As theoretical physics is currently drowned in wild speculations delusionally elevated to the status of truth, a bit of soberness and distance is required. The present model most likely does not describe gravity, even in the Newtonian approximation. It is but a toy model, a proof of principle rather than a proposal that should be taken too seriously. Nonetheless some lessons survive its ad hoc character:

  1. There is no obstacle in principle to construct consistent fundamentally semi-classical theories of gravity.

  2. Collapse models can be empirically constrained by a natural coupling with gravity.

  3. A primitive ontology can have a central dynamical role and need not be only passive.

If semi-classical theories of the type presented here can be extended to general relativity in a convincing way and if robust criteria can be found to make them less ad hoc (ref. omitted), then further hope will be warranted.

Despite Tilloy’s warning about how “theoretical physics is currently drowned in wild speculations delusionally elevated to the status of truth”, what he modestly calls his “toy model” implies an equivalence between the properties of a quantum field and the corresponding properties of a quantum particle plus the gravitational field around it. We must leave it to the physicists to address this and to decide the ontological question regarding the nature or meaning of the quantum wave or field. Anyone who maintains that Schrödinger’s wave-mechanical description of elementary quanta is merely a computational device, however, must explain how such an abstract entity, devoid of its own ontological validity, can produce the very real wave interference patterns observed in electron double-slit experiments. This question has been discussed and debated for over 90 years. Yet there is hope that progress in recent theoretical and experimental physics may resolve this issue. We can be sure of one thing, even now: resolution of this issue will be the harbinger of a far-reaching and revolutionary new understanding of reality. For a first-rate assessment of the various interpretations of quantum theory, see Adam Becker’s recent book, “What is Real” (Becker A 2018). Becker has given us a thorough account of the development of quantum theory that provides a sense of the adventure, biographical information on the cast of characters, and penetrating insight into the science as well.

Quantum Phase Entanglement and the Non-local Nature of Reality

Although he had played a major role in helping to establish quantum theory, Einstein was disturbed by two of its key features. He was troubled by the role of probability in explaining quantum phenomena.Footnote 21 The other aspect of quantum theory that Einstein found objectionable was the non-continuous nature of quantum phenomena, despite his own role in demonstrating the non-continuous quantum nature of the interaction of photons and electrons in the photoelectric effect. In general, the absorption of light by matter occurs in discrete, non-continuous units, because electrons orbiting atomic nuclei must do so in specific orbits characterized by discrete energy levels. When the energy of a photon is absorbed by an electron in a low-energy orbit, the electron jumps to a higher-energy orbit. There are no intermediate energy orbitals. These so-called quantum jumps violated Einstein’s belief that physical phenomena like energy absorption should occur in a continuous process rather than in discrete steps. The debates between Einstein and Niels Bohr were famous, and Einstein, who was well known for conceiving thought experiments to illustrate paradoxes and conflicts in the data, was highly motivated to describe a paradox that would support his belief that quantum theory was fundamentally flawed, or at least incomplete. To this end, in 1935 Einstein and his colleagues Boris Podolsky and Nathan Rosen published a paper in which they portrayed what they believed was an internal contradiction that disproved quantum theory, or at a minimum showed that it represents an incomplete description of reality (Einstein A, Podolsky B and Rosen N 1935).

The paradox that the Einstein-Podolsky-Rosen (EPR) thought experiment proposes would arise from certain measurements on electrons moving apart from each other. It has also been explained more recently in terms of paired photons and other quantum particles. One compelling and clear explanation derives from consideration of an electron and its anti-matter counterpart, the positron. The electron and positron are emitted from a single event at the same location, after which they move apart at high speed. Quantum theory requires that they have opposite spin,Footnote 22 owing to the fact that they are a matter/anti-matter pair. Since they were created together, they have a conjoint wave function that is described as a superposition of states in which the electron and the positron are each in the spin 1 and spin −1 states simultaneously before a measurement of spin is made. Moreover, the electron and positron do not each have a separate wave function; they share a composite one that describes all present and future possibilities of properties that they could manifest at a time and place of measurement. The measurement of either the electron’s or the positron’s spin forces the superposition-of-states wave function for that particle to “collapse,” after which only one spin direction exists for that particle. It is impossible to know in advance of the measurement which spin direction will result from the collapse of the wave function: there is a 50/50 chance of measuring either spin 1 or spin −1. The EPR paradox arises from the fact that whenever the spin of one of the particles is measured, the spin of the paired particle is absolutely determined. It must be the opposite of the spin of the measured particle. The second particle no longer has an indeterminate spin with an equal chance of manifesting either upon measurement. When measured, the second particle will always have a direction of spin that is opposite to that measured for the first particle.
The paradox arises from the necessity of instantaneous transmission of the information of the first particle's spin state to the second particle, even over great distances. The instantaneous transmission of information would violate Einstein's theory of special relativity, which requires that nothing, not even information, can travel through space faster than the speed of light.
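The single-axis measurement statistics just described can be sketched in a few lines of code. The toy model below is an invented illustration, not anything from the EPR paper: it reproduces only two facts, namely that each particle alone yields a 50/50 outcome, while the two outcomes in a pair are always opposite. Note that this same-axis anticorrelation alone could equally be produced by a classical hidden-variable scheme; the distinctly quantum character emerges only when measurements along several different axes are compared.

```python
import random

def measure_entangled_pair():
    """Toy model of the statistics described above: each particle
    alone looks like a fair coin flip, but the pair is always
    perfectly anticorrelated."""
    electron_spin = random.choice([+1, -1])  # 50/50 upon measurement
    positron_spin = -electron_spin           # always the opposite
    return electron_spin, positron_spin

pairs = [measure_entangled_pair() for _ in range(10_000)]

# Each particle individually is ~50/50 ...
plus_fraction = sum(1 for e, _ in pairs if e == +1) / len(pairs)
# ... yet the two outcomes in a pair are never the same.
assert all(e == -p for e, p in pairs)
print(f"electron spin +1 fraction: {plus_fraction:.2f}")  # close to 0.50
```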

This paradox was explained by EPR more generally as follows:

There exists a connection between the particles such that the fact of an observation of particle A is relayed to the distant particle B, in such a manner that the communication does not diminish with distance, cannot be shielded,Footnote 23 and travels faster than light.

The fact of the two particles once being together is sufficient to mingle the particles' phases. The mingling of phases, known as quantum phase entanglement, derives from the conjoint nature of the wave function that describes the possible quantum states of both particles that could manifest upon measurement. The requirement of quantum physics for the instantaneous communication of information about the quantum state of one particle to its conjoint particle, irrespective of the distance between them, describes a non-local causality, or simply "non-local", aspect of reality. Ordinary light-speed-limited phenomena, on the other hand, are referred to as "local". Einstein had hoped by means of this imaginary experiment to disprove quantum theory, but subsequent experiments demonstrated that quantum phase entanglement is a true description of reality, and the theoretical work of the Irish physicist John Stewart Bell showed that "all conceivable models of reality must incorporate this instant connection." Bell's Theorem is a mathematical proof that reality is non-local. This result is fundamentally important. It shows that local reality cannot be isolated from the rest of the universe, which is equivalent to saying that reality has a unitary nature. In regard to quantum entanglement and the non-local nature of reality, David Bohm and Basil Hiley wrote:

We bring out the fact that the essential new quality implied by the quantum theory is non-locality i.e. that a system cannot be analyzed into parts whose basic properties do not depend on the state of the whole system. We do this in terms of the causal interpretation of the quantum theory, proposed by one of us (D.B.) in 1952, involving the introduction of the ‘quantum potential’, to explain the quantum properties of matter.

We show that this approach implies a new universal type of description, in which the standard or canonical form is always supersystem-system-subsystem. In quantum theory, the relationships of the subsystems depend crucially on the system and supersystem in which they take part. This leads to the radical new notion of unbroken wholeness of the entire universe. (Bohm D and Hiley B 1975)

This notion of Bohm and Hiley regarding "the unbroken wholeness of the entire universe" is consistent with recent theoretical work showing that the universe can be modeled accurately as the expansion of a single quantum wave function (Ernest A D 2012). The implication of this work is that all quanta in the universe are particles entangled in the matrix of reality that Bohm and Hiley call the "unbroken wholeness of the entire universe".
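Bell's theorem, cited above, can be made numerically concrete in its later CHSH form (due to Clauser, Horne, Shimony and Holt). For a spin singlet pair, quantum mechanics predicts a correlation E(a, b) = −cos(a − b) between measurements along axes at angles a and b, while any local hidden-variable model must satisfy |S| ≤ 2 for the CHSH combination S below. The sketch, a standard textbook calculation rather than anything from Bell's paper itself, evaluates the quantum prediction, which exceeds that classical bound:

```python
import math

def E(a, b):
    """Quantum-mechanical prediction for the correlation of spin
    measurements on a singlet pair along axes at angles a and b."""
    return -math.cos(a - b)

# Standard CHSH angle choices (radians).
a1, a2 = 0.0, math.pi / 2           # Alice's two settings
b1, b2 = math.pi / 4, -math.pi / 4  # Bob's two settings

# CHSH combination: every local hidden-variable theory gives |S| <= 2.
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)

print(f"|S| = {abs(S):.3f}")  # 2.828, i.e. 2*sqrt(2) > 2
assert abs(S) > 2  # the quantum prediction violates the classical bound
```

Experiments since the 1970s have confirmed the quantum value, which is why Bell's result is read as a proof that reality is non-local.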

Every Picture Tells a Story

More recently, the phenomenon of quantum entanglement was demonstrated in a striking manner. As explained in Nature News (27 August 2014):

Physicists have devised a way to take pictures using light that has not interacted with the object being photographed. This form of imaging uses pairs of photons, twins that are 'entangled' in such a way that the quantum state of one is inextricably linked to the other. While one photon has the potential to travel through the subject of a photo and then be lost, the other goes to a detector but nonetheless 'knows' about its twin's life and can be used to build up an image.

The following more detailed description is based on an article in New Scientist (Sarchet P 2014). The images shown in Fig. 3.6 are of a cat stencil, and were made using entangled photons. The photons used to generate the image at the camera never interacted with the stencil. Rather, the information used to create the image was obtained from photons that illuminated the stencil but were never seen by the camera. When photons are entangled they share a single quantum state, as explained above. Measuring the state of one of the entangled photons causes a correlated change in the state of the other. The dramatic imaging study discussed here used quantum phase entanglement of photons of different wavelengths to make the images shown below without directly photographing the stencil. Yellow and red pairs of entangled photons were generated; the red photons were fired at the cat stencil, while the yellow photons were sent to the camera. Thanks to their entanglement with the red photons, which had interacted with the stencil, the yellow photons had "access" to the image "information" and could form the image of the cat stencil without ever having interacted with it. The silicon stencil was transparent to red light but not yellow light, and the camera was sensitive to yellow light but not red light. It is therefore impossible for the image to have been formed by red photons that had interacted with the stencil. This remarkable experiment demonstrates that quantum phase entanglement is real and can be used to image objects that are "invisible" to the photons that create the image.

Fig. 3.6

Images of a cardboard cut-out of a cat produced by photons that never interacted with the cut-out itself, but were entangled with photons that did. Figure from (Gibney E 2014) based on the work of (Lemos GB et al. 2014). Permission to use under license number 4198331050720 obtained from Nature Publishing Group and Copyright Clearance Center
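The gist of image formation from correlated photon pairs can be illustrated with a deliberately simplified toy model. The sketch below implements coincidence-style ghost imaging, which shares the key feature that the image-forming photons never touch the object; note, however, that the actual Lemos et al. experiment went further, using induced coherence so that the red photons were never detected at all. The grid size and the stencil pattern here are invented purely for illustration.

```python
import random

# Hypothetical 8x8 stencil: 1 = open, 0 = opaque (a crude "T" shape,
# standing in for the cat cut-out).
STENCIL = [[1 if (r == 1 or c == 4) else 0 for c in range(8)]
           for r in range(8)]

def ghost_image(n_pairs=50_000):
    """Toy coincidence-based ghost imaging. Each photon pair shares a
    random transverse position. The 'red' photon meets the stencil;
    the 'yellow' photon goes straight to the camera. Counting yellow
    hits only when the red twin was transmitted reconstructs the
    stencil from photons that never touched it."""
    image = [[0] * 8 for _ in range(8)]
    for _ in range(n_pairs):
        r, c = random.randrange(8), random.randrange(8)  # shared position
        red_transmitted = STENCil[r][c] == 1 if False else STENCIL[r][c] == 1
        if red_transmitted:       # gate on the twin's fate
            image[r][c] += 1      # yellow photon counted at the camera
    return image

img = ghost_image()
# Open stencil cells accumulate counts; opaque cells stay at zero.
for row in img:
    print("".join("#" if n > 0 else "." for n in row))
```

The printed pattern matches the stencil even though every counted photon bypassed it; only its twin's fate carries the shape.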

In general, the contradiction that paradox reveals arises from incompleteness of understanding. A corollary of this idea is that a more fundamental explanation of the phenomena under investigation would eliminate the paradox. The EPR thought experiment exposes the conflict between quantum mechanics' requirement for instantaneous transmission of information between two particles separated by a vast distance, and special relativity's prohibition against anything traveling through space faster than light. The non-local nature of reality implied by quantum phase entanglement, which appears to contradict the reality of the distance between the particles, has been demonstrated experimentally many times since the EPR paper was published. Perhaps a more fundamental explanation that would resolve the EPR paradox is provided by the idea that the entanglement of all quantum particles in the universe is mediated by their "connection" to another "point" that exists outside of space-time. Such a transcendental point would be in simultaneous, instantaneous relation to all the quantum particles that exist in space-time. In this scenario, the requirement for information to travel through space at supra-luminal velocities is eliminated, but it becomes necessary to invoke a connectedness of everything in the universe to a "point" or reference frame that exists "outside" of it. In fact, such a point has been hypothesized by some physicists and philosophers in an attempt to resolve the EPR paradox. John Bell, for example, hypothesized that "something" might be coming from "outside space and time" to instantaneously correlate measurements of widely separated entangled particles.
Toward the same end, Huw Price proposed the existence of what he called an Archimedean perspective, or point of view, which he defined as being neutral to the time asymmetry that we observe from our time-bound point of view (Price H 1996). Simultaneity of action for two entangled quantum particles at a distance would then be "mediated" through the Archimedean "domain". Such a domain is necessarily a higher-dimensional and transcendental one that is bound by neither time nor space; Price even refers to the Archimedean perspective as that of God. On page 4 of his book, Price says:

I want to show that if we want to understand the asymmetry of time then we need to be able to understand, and quarantine, the various ways in which our patterns of thought reflect the peculiarities of our own temporal perspective. We need to acquaint ourselves with what might aptly be called the view from nowhen.

To provide a perspective on “the view from nowhen” Price states on page 145:

Consider, for example, the perspective available to God, as She ponders possible histories for the universe.

Price speculates that a version of quantum physics that incorporated the transcendental Archimedean perspective, which is necessarily symmetric with respect to time, could provide a more complete description of reality than the present version of quantum mechanics and could therefore resolve the paradox that EPR identified. A mathematical formalism that transforms the equations for the four-dimensional space-time representation of quantum mechanics into equations that account for the Archimedean, or supra-dimensional, perspective is needed to bring this idea to fruition. The supra-dimensional perspective of our four-dimensional space-time is discussed in more detail in the next chapter. We will return to Huw Price's ideas then to evaluate in more detail their importance for developing a better understanding of the phenomena of quantum physics.

Some of the Main Conclusions of Quantum Physics

We have in quantum physics the most fundamental and accurate, yet amazing and mystifying, description of reality that science has produced. Clearly, when scientists probed nature at the atomic and sub-atomic scale, some very strange and perplexing phenomena were encountered. The interpretation of these findings is still fiercely debated among philosophers and physicists alike in an effort to understand the nature of reality at its deepest level.

It is useful to summarize some of the main observations of quantum physics, which offer a description of the universe fraught with paradox and rich in new insight:

  1. All quantum particles, whether massless photons or particles with mass such as the electron and other subatomic particles, exist before measurement as a waveform that extends throughout space.

  2. This waveform describes all the possible states that the particle could manifest if measured at a particular time and place.

  3. The square of the waveform's amplitude at each location describes the probability of finding the particle there upon measurement.

  4. When measured, the waveform "collapses" and a particle manifests at the time and place of the measurement.

  5. Both light and matter possess this wave-particle duality.

  6. All the matter and energy of the universe may be part of an "unbroken wholeness", or unity of being, in which all of the constituents are in mutual instantaneous connection, irrespective of the distance between them.

  7. In short, there is a unitary nature of reality that supersedes, or is at least as real as, the apparent separateness of the components of reality.

  8. The instantaneous correlation of states that occurs between entangled quantum particles was portrayed as paradoxical by EPR because not even information can travel at supra-luminal speeds. Yet quantum entanglement has been proven to be an accurate description of reality.

  9. Some notable philosophers and physicists have attempted to eliminate the EPR paradox by suggesting that all quantum particles are connected to a "point" that has its existence outside of space-time, and which provides instantaneous connectedness among all quantum particles in the universe.
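The first four points above can be illustrated numerically. The sketch below, a minimal invented example rather than anything from the text, discretizes a one-dimensional Gaussian wave packet, applies the Born rule of point 3 (probability proportional to the squared amplitude), and simulates the "collapse" of point 4 as a single random draw:

```python
import math
import random

# A toy one-dimensional wave packet sampled on a grid of positions.
xs = [i * 0.1 for i in range(-50, 51)]
amplitudes = [math.exp(-x * x) for x in xs]  # Gaussian amplitude

# Born rule (point 3): probability ~ |amplitude|^2, normalized to 1.
weights = [a * a for a in amplitudes]
total = sum(weights)
probs = [w / total for w in weights]
assert abs(sum(probs) - 1.0) < 1e-9

# "Measurement" (point 4): the waveform collapses and the particle
# manifests at one position, drawn with Born-rule probabilities.
measured_x = random.choices(xs, weights=probs, k=1)[0]
print(f"particle found at x = {measured_x:.1f}")
```

Before the draw, the particle is described only by the spread-out `probs` distribution (points 1 and 2); after it, a single definite position exists.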

It is worth reminding ourselves that these are the findings of rational empirical science. Despite the presumption of validity conferred by repeated experimental replication of results, quantum physics defines a reality that is far removed from ordinary experience, a view of reality so paradoxical that it approaches the mystical. Yet the possibility of an even deeper level of reality has been suggested. In String Theory, the idea of vibrating strings with a length near the Planck length, or 1.616252 × 10−35 m, has been advanced as a form of ultimate sub-atomic particle.Footnote 24 That is, all the other sub-atomic particles consist of these strings, whose vibrational frequencies determine the type of particle that exists. According to string theory, the different properties that two different quantum particles exhibit are determined by the vibrational frequencies of their strings, in much the same manner that the difference between two pure musical tones is determined by the frequency of vibration of the air. If the tenets of string theory are correct, and they have yet to be experimentally verified, then we must imagine that as a result of the Big Bang space-time was created and filled with quantum strings vibrating at various frequencies to produce the elementary quanta of the nascent universe.