
1 Introduction

Most attempts at formulating hidden variable theories in the existing literature consist of some sort of modification of real quantum theory, together with a replacement of ordinary classical physics by some sort of stochastic formalism. The idea behind this is always that quantum mechanics seems to be so different from the classical regime that some deep modification of standard procedures seems to be necessary.

In the approach advocated here, what we call deterministic quantum mechanics is claimed to be much closer to standard procedures than usually thought to be possible.

Deterministic quantum mechanics is neither a modification of standard quantum mechanics, nor a modification of classical theory. It is a cross section of the two.

This cross section is much larger and more promising than usually thought. We can phrase the theory in two ways: starting from conventional quantum mechanics, or starting from a completely classical setting. We have seen already in previous parts of this work what this means; here we recapitulate.

Starting from conventional quantum mechanics, deterministic quantum mechanics is a small subset of all quantum theories: we postulate the existence of a very special basis in Hilbert space: the ontological basis. An ontological basis is a basis in terms of which the Schrödinger equation sends basis elements into other basis elements at sufficiently dense moments in time.
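
As a minimal illustration of this definition (the finite toy setting here is assumed purely for illustration, not taken from the text): consider a system with a finite number of states \(N\) and let time advance in discrete steps \(\delta t\). An ontological basis \(\{|n\rangle _{\mathrm{ont}}\}\) is then one in which the evolution operator acts as a pure permutation,

$$\begin{aligned} U(\delta t)\,|n\rangle _{\mathrm{ont}} = |P(n)\rangle _{\mathrm{ont}},\qquad n=1,\dots ,N, \end{aligned}$$

where \(P\) is some permutation of the labels. No superpositions are generated at these moments in time, so any operator that is diagonal in this basis keeps sharp eigenvalues.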

Very likely, there will be many different choices for an ontological basis (often related by symmetry transformations). Any of these choices will serve our purpose, but presumably only one of them is the ‘true’ ontological basis, and it will be essentially impossible for us to decide which one that is.

This is not so much our concern. Finding quantum theories that have an ontological basis will be an important and difficult exercise. Our hope is that this exercise might lead to new theories that could help elementary particle physics and quantum gravity theories to further develop. Also it may help us find special theories of cosmology.

Our definition of an ontological basis is deliberately a bit vague. We do not specify how dense the moments in time have to be, nor do we specify exactly how time is defined; in special relativity, we can choose different frames of reference in which time means something different, and in general relativity one has to specify Cauchy surfaces that define time slices. What really matters is that an ontological basis allows a meaningful subset of observables to be defined as operators that are diagonal in this basis. We postulate that the basis elements evolve into one another, which implies that the eigenvalues of these observables remain sharply defined as time proceeds. Precise definitions of an ontological basis will be needed only when we have specific theories in mind; in the first simple examples that we shall discuss, it will always be clear what this basis is. In some cases, time is allowed to flow continuously; in others, notably when we have discrete operators, time is limited to a discrete subset of a continuous time line.

Once such a basis has been identified, we may have in our hands a set of observables in terms of which the time evolution equations appear to be classical equations. This then links the system to a classical theory. But it was a quantum theory from where we started, and this quantum theory allows us without much ado to transform to any other basis we like. Fock space for the elementary particles is constructed from such a basis, and it still allows us to choose any orthonormal sets of wave functions we like for each particle type. In general, a basis in Fock space will not be an ontological basis. We might also wish to consider the basis spanned by the field operators \(\phi (\vec{x},t),A_{\mu}(\vec{x},t),\) and so on. This will also not be an ontological basis, in general.

Clearly, an ontological basis for the Standard Model has not yet been found, and it is very dubious whether anything resembling an ontological basis for the Standard Model exists. More likely, the model will first have to be extended to encompass gravity in some way. This means that, quite probably, our models require the description of variables at the Planck scale. In the meantime, it might be a useful exercise to isolate operators in the Standard Model that stay diagonal longer than others, so that they may be closer to the ontological variables of the theory than other operators. In general, this means that we have to investigate commutators of operators. Can we identify operators that, against all odds, accidentally commute? We shall see a simple example of such a class of operators when we study our “neutrino” models (Sect. 15.2 in Part II); “neutrinos” in quotation marks, because these will be idealized neutrino-like fermions. We shall also see that anything resembling a harmonic oscillator can be rephrased in an ontological basis to describe classical variables that evolve periodically, the period being that of the original oscillator (Sect. 13 in Part II).

We shall also see in Part II that some of the mappings we shall find are not at all fool-proof. Most of our examples cease to be linked to a classical system if interactions are turned on, unless one accepts that negative-energy states emerge (Sect. 9.2). Furthermore, we have the so-called edge states. These are states that form a subset of Hilbert space with measure zero, but their contributions may still spoil the exact correspondence.

Rather than searching for an ontological basis in an existing quantum system, we can also imagine defining a theory for deterministic quantum mechanics by starting with some completely classical theory. Particles, fields, and what not, move around following classical laws. These classical laws could resemble the classical theories we are already familiar with: classical mechanics, classical field theories such as the Navier–Stokes equations, and so on. The most interesting class of models, however, are the cellular automata. A cellular automaton is a system with localized, classical, discrete degrees of freedom,Footnote 1 typically arranged in a lattice, which obey evolution equations. The evolution equations take the shape of a computer program, and can be investigated exactly on a computer by running these programs in a model. A typical feature of a cellular automaton is that the evolution law for the data in every cell only depends on the data in the adjacent cells, and not on what happens at larger distances. This is a desirable form of locality, which indeed ensures that information cannot spread faster than some limiting speed, usually assumed to be the local speed of light.Footnote 2
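
As a minimal sketch of this locality property (the update rule and lattice size are assumed purely for illustration and are not a model from the text), consider a one-dimensional automaton in which each cell is updated using only its two adjacent cells; a disturbance then spreads by at most one cell per time step, the automaton's analogue of a light cone.

```python
# A minimal sketch (illustrative rule, not a model from the text): a 1-D
# cellular automaton whose update rule depends only on the adjacent cells.
import numpy as np

def step(cells):
    """One time step: each cell becomes the XOR of its two neighbours
    (periodic boundary), so information travels at most one cell per step."""
    return np.roll(cells, 1) ^ np.roll(cells, -1)

state = np.zeros(17, dtype=int)
state[8] = 1                       # a single disturbed cell in the middle
for t in range(8):
    print(''.join('#' if c else '.' for c in state))
    state = step(state)            # the disturbance stays inside a light cone
```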

In principle, these classical theories may be chosen to be much more general than the classical models most often used in physics. As we need a stabilization mechanism, our classical model will usually be required to obey a Hamiltonian principle, which, however, for discrete theories takes a shape that differs substantially from the usual Hamiltonian system; see Part II, Chap. 19. A very important limitation would then be the demand of time reversibility. If a classical model is not time reversible, it seems at first sight that our procedures will fail. For instance, we wish our evolution operator to be unitary, so that the quantum Hamiltonian will turn out to be a Hermitian operator. But, as we shall see, it may be possible to relax even this condition. The Navier–Stokes equations, for instance, although time reversible at short time scales, do seem to dissipate information away. When a Navier–Stokes liquid comes to rest, due to the viscosity terms, it cannot be followed back in time anymore. Nevertheless, systems that are not time reversible may well be of interest for physical theories anyway, as will be discussed in Sect. 7.

Starting from any classical system, we consider its bookkeeping procedure, and identify a basis element for every state the system can be in. These basis elements are declared to be orthonormal. In this artificial Hilbert space, the states evolve, and it appears to be a standard exercise first to construct the evolution operator that describes the postulated evolution, and then to identify a quantum Hamiltonian that generates this evolution operator by exponentiation.
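
A small sketch of this construction, for an assumed toy system (not a model from the text) of three classical states that hop cyclically, 0 → 1 → 2 → 0, once per time step: the bookkeeping yields a permutation matrix as evolution operator, and taking its matrix logarithm gives a Hermitian Hamiltonian that generates it.

```python
# A sketch for an assumed toy system: three classical states evolving
# deterministically as 0 -> 1 -> 2 -> 0 in each time step dt.
import numpy as np
from scipy.linalg import expm, logm

dt = 1.0
U = np.zeros((3, 3))               # evolution operator in the ontological basis
for n in range(3):
    U[(n + 1) % 3, n] = 1.0        # |n>  ->  |n+1 mod 3>: a pure permutation

H = 1j / dt * logm(U)              # choose H such that U = exp(-i H dt)

print(np.allclose(U.conj().T @ U, np.eye(3)))   # True: U is unitary
print(np.allclose(H, H.conj().T))               # True: H is Hermitian
print(np.allclose(expm(-1j * H * dt), U))       # True: H generates the evolution
```

With the principal branch of the logarithm, the eigenvalues of this \(H\) are \(0\) and \(\pm 2\pi /(3\,\delta t)\); other branches differ by multiples of \(2\pi /\delta t\).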

As soon as we have our Hilbert space, we are free to perform any basis transformation we like. Then, in a basis where quantum calculations can be done to cover long distances in space and time, we find that the states we originally called “ontological” now indeed are quantum superpositions of the new basis elements, and as such, they can generate interference phenomena. The central idea behind deterministic quantum mechanics is that, at this stage, our transformations tend to become so complex that the original ontological states can no longer be distinguished from any other superposition of states, and this is why, in conventional quantum mechanics, we treat them all without distinction. We have lost our ability to identify the ontological states in today’s ‘effective’ quantum theories.

2 The Classical Limit Revisited

Now there are a number of interesting issues to be discussed. One is the act of measurement, and the resulting ‘collapse of the wave function’. What is a measurement [93]?

The answer to this question may be extremely interesting. A measurement allows a single bit of information at the quantum level to evolve into something that can be recognized and identified at large scales. Information becomes classical if it can be magnified to arbitrary strength. Think of space ships that react to the commands of a computer, which in turn may originate in just a few electrons in its memory chips. A single cosmic ray might affect these data. The space ship in turn might affect the course of large systems, eventually forcing planets to alter their orbits, first in tiny ways, but then these modifications might get magnified.

Now we presented this picture for a reason: we define measurement as a process that turns a single bit of information into states where countless bits and bytes react to it. Imagine a planet changing its course. Would this be observable in terms of the original, ontological variables, the beables? It would be very hard to imagine that it would not be. The interior of a planet may have its ontological observables arranged in a way that differs ever so slightly from what happens in the vacuum state. Whatever these minute changes are, the planet itself is so large that the tiny differences can be added together statistically, so that the classical orbit parameters of a planet will be recognizable in terms of the original ontological degrees of freedom.

In equations, consider a tiny fraction \(\delta V\) of the volume \(V\) of a planet. Consider the ontological variables inside \(\delta V\) and compare these with the ontological variables describing a similar volume \(\delta V\) in empty space. Because of the ‘quantum’ fluctuations, there may be some chance that these variables coincide, but it is hard to imagine that they will coincide completely. So let the probability \(P(\delta V)\) that these coincide be somewhat less than 1, say:

$$\begin{aligned} P(\delta V)=1-\varepsilon , \end{aligned}$$
(4.1)

with a small value for \(\varepsilon >0\). Then the odds that the planet as a whole is indistinguishable from the vacuum will be

$$\begin{aligned} P_{\mathrm{tot}}=(1-\varepsilon )^{V/\delta V}\approx e^{-\varepsilon V/\delta V} \rightarrow 0, \end{aligned}$$
(4.2)

if the volume \(V\) of the planet is sufficiently large. This means that large planets must be well distinguishable from the vacuum state.
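
To get a feeling for the numbers (the values of \(\varepsilon \) and \(V/\delta V\) here are assumed purely for illustration): even a minute difference per cell, say \(\varepsilon =10^{-20}\), is overwhelmed by the enormous number of cells in a planet, say \(V/\delta V=10^{50}\), giving

$$\begin{aligned} P_{\mathrm{tot}}\approx e^{-10^{-20}\times 10^{50}}=e^{-10^{30}}, \end{aligned}$$

a number that is zero for all practical purposes.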

This is a very important point, because it means that, at a large scale, all other classical observables of our world must also be diagonal in terms of the ontological basis: large scale observables, such as the orbits of planets, and then of course also the classical data shown in a detector, are beables. They commute with our microscopic beable operators. See also Fig. 4.1.

Fig. 4.1

Classical and quantum states. a The sub-microscopic states are the “hidden variables”. Atoms, molecules and fields are templates, defined as quantum superpositions of the sub-microscopic states, and used at the conventional microscopic scale. The usual “classical” objects, such as planets and people, are macroscopic, and again superpositions of the micro-templates. The lines here indicate quantum matrix elements. b The classical, macroscopic states are probabilistic distributions of the sub-microscopic states; here, the lines therefore indicate probabilities. All of these states are astronomical in number, but the microscopic templates are more numerous than the classical states, while the sub-microscopic states are even more numerous

Let us now again address the nature of the wave functions, or states \(|\psi \rangle \), that represent real observed phenomena. In terms of the basis that we would normally use in quantum mechanics, these states will be complicated quantum superpositions. In terms of the original, ontological basis, the beables will just describe the elementary basis elements. And what we just argued is that these will also be elementary eigenstates of the classical observables at large scales! What this means is that the states \(|\psi \rangle \) that we actually produce in our laboratories will automatically collapse into states that are distinguishable classically. There will be no need to modify Schrödinger’s equation to realize the collapse of the wave function; it will happen automatically.Footnote 3

This does away with Schrödinger’s cat problem. The cat will definitely emerge either dead or alive, but never in a superposition. This is because all states \(|\psi \rangle \) that we can ever produce inside the cat-killing machine, are ontological. When we write them as superpositions, it is because the exact state, in terms of ontological basis elements, is not precisely known.

In Schrödinger’s Gedanken experiment, the state actually started from was an ontological state, and for that reason it could only evolve into either a dead cat or a live cat. If we had tried to put the superimposed state \(\alpha |\mathrm{dead}\rangle +\beta |\mathrm{alive}\rangle \) in our box, we would not have had an ontological state but just a template state. We cannot produce such a state! What we can do is repeat the experiment; in our simplified description of it, using our effective but not ontological basis, we might have thought we had a superimposed state as our initial state, but that of course never happens: all states we ever realize in the lab are ontological ones, which will later collapse into states where classical observables take definite values, even if we cannot always predict those values.

In the author’s mind this resolution to the collapse problem, the measurement problem and the Schrödinger cat problem is actually one of the strongest arguments in favour of the Cellular Automaton Interpretation.

3 Born’s Probability Rule

3.1 The Use of Templates

For the approach advocated in this book, the notion of templates was coined (see Sect. 2.1). We argue that conventional quantum mechanics is arrived at if we perform some quite complicated basis transformation on the ontological basis states. The new basis elements so obtained will all be quite complex quantum superpositions of the ontological states. It is these states that we call “template states”; they are the recognizable states we normally use in quantum mechanics. It is not excluded that the transformation may involve non-locality to some extent.

Upon inverting this transformation, one finds that, in turn, the ontological states will be complicated superpositions of the template states. The superpositions are complicated because they will involve many modes that are hardly visible to us. For instance, the vacuum state, our most elementary template state, will be a superposition of very many ontic (short for ontological) states. Why this is so is immediately evident, if we realize that the vacuum is the lowest eigenstate of the Hamiltonian, while the Hamiltonian is not a beable but a ‘changeable’ (see Sect. 2.1.1). Of course, if this holds for the vacuum (Sect. 5.7.5), it will surely also hold for all other template states normally used. We know that some ontic states will transform into entangled combinations of templates, since entangled states can be created in the laboratory.

The macroscopic states, which are the classical states describing people and planets, but also the pointers of a measuring device, and of course live and dead cats, are in general again superpositions of the template states. They are usually not infinitely precisely defined, since we do not observe every atom inside these objects. Each macroscopic state is actually a composition of very many quantum states, but the macroscopic states are well distinguishable from one another.

In Fig. 4.1, the fundamental ontological states are the sub-microscopic ones, then we see the microscopic states, which are the quantum states we usually consider, that is, the template states, and finally the macroscopic or classical states. The matrix elements relating these various states are indicated by lines of variable thickness.

What was argued in the previous section was that the classical, or macroscopic, states are diagonal in terms of the sub-microscopic states, so these are all ontic states. It is a curious fact of Nature that the states that are most appropriate for us to describe atoms, molecules and sub-atomic particles are the template states, requiring superpositions. So, when we observe a classical object, we are also looking at ontological things, which is why a template state that we use to describe what we expect to see “collapses” into delta-peaked probability distributions in terms of the classical states.

In discussions with colleagues, the author noticed how surprised they were by the above statements about classical states being ontological. The reasoning above is, however, almost impossible to ignore, and indeed, our simple observation explains a lot about what we sometimes perceive as genuine ‘quantum mysteries’. So, it became an essential ingredient of our theory.

3.2 Probabilities

At first sight, it may seem that the notion of probability is lost in our treatment of quantum mechanics. Our theory is ontic, it describes certainties, not probabilities.

However, probabilities emerge naturally also in many classical systems. Think of how a 19th century scientist would look at probabilities. In a particle collision experiment, two beams of particles cross in an interaction area. How will the particles scatter? Of course, the particles will be too small to aim them so precisely that we would know in advance exactly how they meet one another, so we apply the laws of statistics. Without using quantum mechanics, the 19th century scientist would certainly know how to compute the angular distribution of the scattered particles, assuming some classical interaction potential. The origin of the statistical nature of the outcome of his calculations is simply traced to the uncertainty about the initial state.

In conventional quantum mechanics, the initial state may seem to be precisely known: we have two beams consisting of perfectly planar wave functions; the statistical distribution comes about because the wave functions of the final state have a certain shape, and only there does the quantum physicist begin to compute amplitudes and deduce the scattering probabilities from them. So this looks very different. We are now going to explain, however, that the origin of the statistics in both cases is identical after all.

In our theory, the transition from the classical notation to the quantum notation takes place when we decide to use a template state \(|\psi (t)\rangle \) to describe the state of the system. At \(t=0\), the coefficients \(|\lambda _{A}|^{2}\), where (see the remarks following Eq. (2.2))

$$\begin{aligned} \bigl\langle \mathrm{ont}(0)_{A}\big|\psi (0)\bigr\rangle = \lambda _{A}, \end{aligned}$$
(4.3)

determine the probability that we are starting with ontological state #\(A\). We then use our Schrödinger equation to determine \(|\psi (t)\rangle \). When, at some time \(t_{1}\), the asymptotic out-state is reached, we compute \(\langle \mathrm{ont}(t_{1})_{A}|\psi (t_{1})\rangle \), where now the ontological state represents the outcome of a particular measurement, say, the particles hitting a detector at some given angle. According to quantum mechanics, using the Born rule, the absolute square of this amplitude is the probability of this outcome. But, according to our theory, the initial ontological states \(|\mathrm{ont}(0)\rangle _{A}\) evolved into final ontological states \(|\mathrm{ont}(t_{1})\rangle _{A}\), so we have to use the same coefficients \(\lambda _{A}\). And now, these determine the probability of the given outcome. So indeed, we conclude that these probabilities coincide with the probabilities that we started with the given ontological in-states.

The final ontological states are the ontological states that lead to a given outcome of the experiment. Note that we used superpositions in calculating the transition amplitudes, but the final answers just correspond to the probability that we started with a given ontological in-state that, with certainty, evolved into a given final, classical out-state.

Our template states form a very tiny subset of all ontological states, so that every time we repeat an experiment, the actual ontic state is a different one. The initial template state now does represent the probabilities of the initial ontic states, and because these are projected onto the classical final states, the classical final states obey the Born rule if the initial states do. Therefore, we can prove that our theory obeys the Born rule provided we know that the initial state obeys it with respect to the ontic modes. If we now postulate that the template states used always reflect the relative probabilities of the ontic states of the theory, then the Born rule appears to be an inevitable consequence [113].
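
A small numerical sketch of this argument (the dimension and the permutation are assumed purely for illustration): when the ontological evolution is a permutation of basis states, the weights \(|\lambda _{A}|^{2}\) of Eq. (4.3) assigned to the initial ontic states are carried unchanged to the final ontic states, so the Born probabilities of the outcomes equal the probabilities of the corresponding in-states.

```python
# A sketch with assumed toy dynamics: a permutation evolution in the
# ontological basis carries the weights |lambda_A|^2 along unchanged.
import numpy as np

rng = np.random.default_rng(0)
N = 5
perm = rng.permutation(N)                      # deterministic law: A -> perm[A]
U = np.zeros((N, N))
U[perm, np.arange(N)] = 1.0                    # U |A> = |perm[A]>

psi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
psi0 /= np.linalg.norm(psi0)                   # a template state at t = 0
psi1 = U @ psi0                                # the same template state at t1

born_in = np.abs(psi0) ** 2                    # prob. of starting in ontic state A
born_out = np.abs(psi1) ** 2                   # prob. of ending in ontic state perm[A]
print(np.allclose(born_out[perm], born_in))    # True: the same coefficients lambda_A
```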

Most importantly, there is absolutely no reason to attempt to incorporate deviations from Born’s probability rule of the Copenhagen interpretation into our theory. Born’s rule will be exactly obeyed; there cannot be systematic, reproducible deviations. Thus, we argue, Born’s rule follows from our requirement that the basis of template states that we use is related to the basis of ontological states by an orthonormal, or unitary, transformation.

Thus, we derived that, as long as we use orthonormal transformations to go from one basis to another, Born’s rule, including the use of absolute squares to represent probabilities, is the only correct expression for these probabilities.