Orthodox Quantum Mechanics

  • Edward Mackinnon
Part of the Boston Studies in the Philosophy of Science book series (BSPS, volume 289)


This chapter develops the measurement interpretation of quantum mechanics. This is an austere systematized version of the orthodox Copenhagen interpretation. Dirac used an analysis of the distinctive features of quantum measurements as a basis for developing the mathematical formalism of quantum mechanics. Schwinger put this on a more systematic basis and extended the formalism to include quantum electrodynamics and basic quantum field theory. This measurement formulation does not accommodate further advances in quantum field theory or quantum cosmology, where there is no outside observer.



Any consideration of the role of language in interpreting quantum mechanics (QM) must consider Bohr, who expressed his distinctive perspective with the claim: “We are suspended in language” (see Petersen 1968). In spite of his leading role in forming the Copenhagen, or orthodox,1 interpretation of QM, Bohr’s position is widely misinterpreted. I will present a redevelopment of his position as a minimal interpretation of QM. To situate this I will indicate how Bohr’s position developed and came to be misinterpreted. The reason for the redevelopment is to appraise the limits of valid applicability of orthodox QM. This, in turn, supplies a basis for evaluating attempts to go beyond a minimal basis.

In the mid 1920s the development of a coherent functional interpretation of QM was an urgent concern. Routine reporting of experimental results generated contradictions. The new theoretical breakthroughs were couched in different formulations: the matrix formulation, which limited interpretation to observables; de Broglie’s wave–particle and later pilot-wave interpretation; Schrödinger’s wave mechanics; and Dirac’s transformation theory. On a functional level the basic interpretative problem was one of relating theoretical terms, like ψ or matrix components, to aspects of actual and thought experiments. Even after the equivalence of wave and matrix mechanics was established there was still a conflict between Born’s interpretation of \(\int\psi^{\dagger}\psi\, dx\) as a probability and Schrödinger’s interpretation of it as a charge density. Bohr’s underdeveloped and loosely assimilated ideas on complementarity helped experimenters avoid contradictions in reporting and extrapolating results.
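The two competing readings of the same quantity can be displayed side by side (a standard textbook reconstruction, not notation from the original debate):

```latex
% Born: the integral over a region \Omega gives the probability of
% finding the electron in \Omega (with the normalization \int\psi^{\dagger}\psi\,dx = 1):
P(\Omega) = \int_{\Omega} \psi^{\dagger}\psi \, dx
% Schr\"odinger: the same density, scaled by the electron charge e,
% is read as a continuous charge distribution:
\rho(x) = e\, \psi^{\dagger}\psi
```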

Theoreticians assimilated the new QM by learning how to solve problems using properly formulated data. Much of the initial work involved treating problems whose solutions were already known through the Bohr–Sommerfeld (B-S) program and its various modifications. A more challenging test came from the problems the B-S program did not resolve and from previously unanticipated consequences of the new formalisms. Matrix mechanics had difficulty adapting the method of action–angle variables. Pauli found a way to calculate hydrogen energy levels and the Stark effect for hydrogen. The problem of the rotator was independently treated by Lucie Mensing in Göttingen; Gregor Wentzel in Hamburg; Otto Halpern in Vienna; Igor Tamm and Lev Landau in Russia; and David Dennison, an American visiting Copenhagen.2 After the development of wave mechanics the problems treated were: the hydrogen atom (Schrödinger, Dirac); the Stark effect (Schrödinger, Wentzel); the anomalous Zeeman effect (Heisenberg, Jordan); motion of a free particle (Ehrenfest); the Compton effect (Wentzel, Beck); the fine structure of hydrogen (Dirac); and the Kramers-Heisenberg radiation formula (Klein, Dirac).3 These were old problems done in a new way.

There were also some new developments that went beyond the B-S program, notably: collision theory (Born); the helium atom (Heisenberg, Hylleraas, Bethe); Fermi-Dirac statistics; treatment of electrons in metals as a degenerate Fermi-Dirac gas (Sommerfeld); an explanation of the extreme density of white dwarf stars (Fowler); spectra of complex atoms (von Neumann, Wigner, Slater); penetration of a potential barrier (Gamow, Condon and Gurney); an account of ferromagnetism (Heisenberg); paramagnetism (Pauli); the inclusion of spin (Pauli); the existence of exchange forces (Heitler and London); exchange interaction in scattering (Oppenheimer, Mott); molecules (Born and Oppenheimer); details of chemical bonding (Pauling); and various approximation techniques (Born, Fock, Hartree, Fermi, Thomas, Wentzel, Kramers, Brillouin). The Raman effect had been predicted by a heuristic argument in the old quantum theory, but was only properly accounted for by the new theory.

These solutions articulated the way quantum mechanics (from now on used as a general term including matrix and wave mechanics) related to and went beyond classical physics. Classical terms, like ‘mass’, ‘energy’, ‘momentum’, and ‘angular momentum’, entered in the same basic formulas, such as \(\mathbf{p} = m\mathbf{v}\), \(\mathbf{L} = \mathbf{r} \times \mathbf{p}\), and the conservation laws. Following the correspondence principle (CP) tradition, classical physics served as a starting point and guide for setting up quantum mechanics. The standard way of doing this was to analyze a problem in classical terms, set up the classical Hamiltonian, replace dynamical variables by quantum operators, and then attempt to solve the resulting differential equation. Most physicists, even those concerned with foundational issues, apparently felt that the practice of physics should not depend on settling issues about the meanings of concepts, the role of observability, or whether ultimate reality is continuous or discontinuous. Bohr’s conceptual subtleties and Dirac’s c-number/q-number distinction were largely ignored. Dirac’s transformation theory was rarely used. However, its development was widely regarded as proof that wave and matrix mechanics were special cases of a more general system, quantum mechanics.
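The standard procedure just described can be sketched for the simplest case, a single particle in a potential (a textbook reconstruction, not an example from the chapter):

```latex
% 1. Analyze the problem classically and set up the Hamiltonian:
H = \frac{\mathbf{p}^{2}}{2m} + V(\mathbf{r})
% 2. Replace dynamical variables by operators (position representation):
\mathbf{p} \;\to\; -i\hbar\nabla, \qquad E \;\to\; i\hbar\frac{\partial}{\partial t}
% 3. Attempt to solve the resulting differential equation
%    (the Schr\"odinger equation):
i\hbar\frac{\partial \psi}{\partial t}
  = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})\right]\psi
```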

By 1929 non-relativistic quantum mechanics (NRQM) was no longer seen as a problematic field. Though much remained to be done, the foundations seemed secure. With this brief background we may list the basic features of the Copenhagen interpretation that became orthodox quantum mechanics: Heisenberg’s uncertainty principle; the idea that photons, electrons, and other particles exhibit both wave and particle properties; the probabilistic interpretation of the wave function; the correspondence between eigenvalues derived from the mathematical formalism and values of quantities obtained from measurements; the idea that wave and matrix mechanics are special representations of a more general formalism; and some sort of complementary relation between classical and quantum physics. In 1927 most of these features seemed novel and more than a bit bizarre. By 1929, they were generally accepted as part of the normal practice of quantum physics. Bohr’s underdeveloped doctrines that classical concepts stemming from ordinary language play a definitive role in measurement, and that quantum physics is a rational generalization of classical physics were widely regarded as speculative philosophical issues. After the development of quantum field theory (QFT), relativistic quantum mechanics (RQM), and especially after the discovery of the neutron, the leading European physicists concentrated on these new fields and on nuclear physics.

5.1 The Development of Bohr’s Position

For most physicists the functional interpretation of QM no longer seemed problematic. There was one strong dissent. Bohr saw the developments just cited as a challenge to his way of handling problems in QM. His way of resolving these difficulties reflects and clarifies the unique aspects of his conceptual analyses. In developing his wave equation, Dirac expressed the hope that he could avoid the negative energy states allowed by the Klein-Gordon equation (Dirac 1928). He originally did this by simply ignoring the negative energy states. After Klein demonstrated the possibility of transitions to negative energy states, these could not be ignored. Bohr’s evaluation of the situation was expressed in a letter to Dirac:

In the difficulties of your old theory I still feel inclined to see a limit of the fundamental concepts on which atomic theory hitherto rests rather than a problem of interpreting the experimental evidence in a proper way by means of those concepts. Indeed according to my view the fatal transitions from positive to negative energy should not be regarded as an indication of what may happen under certain conditions, but rather as a limitation in the applicability of the energy concept. (Sources: Bohr Scientific Correspondence, sect. 4, letter of 5 December, 1929)

Bohr’s previous analyses had used classical concepts to interpret experimental information. RQM seemed to show that this method could not be extended to relativistic phenomena. Other considerations seemed to show that it could not be extended to nuclear physics or quantum field theory either. Before 1932 nuclear physics had two outstanding and apparently related problems, electron confinement and nuclear statistics. Electrons, it was agreed, must be in the nucleus since they are emitted in β decay. Yet electrons confined in such a small volume should have very high kinetic energies. These energies should not only allow the electrons to escape; they should also require RQM. Whether a nucleus obeyed Bose-Einstein or Fermi-Dirac statistics should be determined by counting the number of protons and electrons in the nucleus. This gave the wrong results for nitrogen. Bohr had enthusiastically accepted Dirac’s QFT as the only reasonable account of photons. Yet, as Oppenheimer showed, this theory encountered divergence difficulties when the interaction of an electron with a radiation field is treated in terms of the emission and absorption of virtual particles. For most physicists, these were separate problems on the frontiers of physics. For Bohr, the common feature these difficulties shared was the problematic extension of the classical concepts needed to give a descriptive account.

Bohr’s resolution of these problems focused on the use and limitation of the concept ‘particle’ and the informal inferences it supports. It provides a foundation for the application of other concepts such as ‘space-time location’, ‘momentum’, ‘energy’, and ‘trajectory’. These quantitative concepts supply the correspondence principle basis for the introduction of the mathematical formalism of quantum mechanics. Nevertheless, ‘particle’ remains a classical concept whether used as the basis for a description of a particle’s trajectory or as a peg for the CP. Bohr was concerned with showing how the concept ‘particle’ can be extended to quantum applications. We will summarize how he did this in nuclear physics and in scattering theory, two topics that are rarely considered in philosophical accounts of Bohr’s position.

The applicability of ‘particle’ as applied to electrons broke down somewhere between the Compton wavelength of the electron, \(\lambda = h/mc = 2.4 \times 10^{-10}\) cm, and the classical radius of the electron, \(e^2/mc^2 = 2.8 \times 10^{-13}\) cm. If the concept of an electron as a localized particle is inapplicable within the nucleus, then so too is energy conservation for these electrons, though their charge is conserved. The statistical problem is dissolved, since electrons within a nucleus cannot be counted as particles. The Klein argument is moot. It requires extremely strong electrical fields over very small distances. Such fields must ultimately be due to the presence of charged particles. Yet, by Bohr’s new argument, it is impossible to localize enough particles in a small enough region to produce such a strong field. This argument, in turn, set limits to the applicability of quantum mechanics, since it had to be suspended from pegs of classical ideas.4
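The confinement argument can be made quantitative with a standard uncertainty-principle estimate (my reconstruction; the numerical values are illustrative and not from the text):

```latex
% An electron confined to a nuclear radius \Delta x \sim 5\,\mathrm{fm}
% acquires a momentum spread
\Delta p \gtrsim \frac{\hbar}{\Delta x}
% Using \hbar c \approx 197\ \mathrm{MeV\,fm}, the kinetic energy is roughly
E \approx \Delta p\, c \approx \frac{197\ \mathrm{MeV\,fm}}{5\ \mathrm{fm}}
  \approx 40\ \mathrm{MeV}
% This is ultra-relativistic (E \gg m_{e}c^{2} = 0.511\ \mathrm{MeV}),
% so the confined electron would both demand RQM and far exceed
% observed \beta-decay energies.
```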

Bohr’s provisional solution was undercut by new developments. The discovery of the neutron and the Fermi theory of beta decay eliminated the problem of electrons within the nucleus. These advances obviated some of the difficulties Bohr had in extending the concept ‘particle’. Since neutrons and protons have much greater masses than electrons, they could be treated as particles confined within the nucleus and having kinetic energies in the non-relativistic range. On this basis, Bohr went on to develop the two models of the nucleus that dominated research in the field. The compound nucleus model, stimulated by Fermi’s experiments with slow neutron capture, assumed that an incoming slow neutron is absorbed by the nucleus, leading to a compound state that can decay through any one of a number of competing processes. Later Bohr introduced the liquid drop model and, after reports of fission, used this model both to explain fission and to conclude that the fission of uranium was due to the relatively rare isotope, U-235 (Bohr, Works, Vol. 9, 365–389). Both models shared two assumptions: that visualizable models are useful in the limits within which one can use classical concepts to give descriptive accounts; and, an implication of his earlier conceptual analysis, that one cannot model individual particles within the nucleus. Here again Bohr insisted on the limits of applicability of basic concepts. One could speak of protons and neutrons as particles and as confined within the nucleus. However, there was no meaningful basis for ascribing positions or trajectories to any particle within the nucleus. So both models relied on continuous potentials, rather than discrete particles.

The second assumption was sharply challenged by the success of the individual particle (or shell) model of the nucleus developed independently by M. G. Mayer, Haxel, Jensen, and Suess. Bohr eventually found a way to interpret collective and individual particle models as complementary, rather than contradictory. Since protons and neutrons are both Fermi-Dirac particles, the Pauli exclusion principle, applied to nuclear particles, effectively gives each of them an infinite mean free path. Since it was meaningful to speak of particle trajectories, it was also meaningful to speak of individual particles having these trajectories. His suggestion led to the collective model developed by his son, Aage, and Ben Mottelson, and rewarded with a Nobel Prize.

Bohr’s lifelong concern with scattering theory illustrates the way he related the particle concept to mathematical formulations treated as computational tools (see Bohr, Works, Vol. 8, and MacKinnon 1994). To see the complications we can begin with the perspective that characterized Bohr’s earliest work on the scattering of electrons from atoms. At low energies classical approximations are valid. When the energy of the incident electron is high enough to induce orbital transitions, then quantum effects must be included. In the 1940s Bohr was concerned with particles incident upon nuclei and effectively reversed his earlier standards. At very high energies the incident particle can be thought of as striking an individual nucleon. At low energies it is absorbed by the nucleus and quantum levels must be considered. In 1940 he promised a general paper on collision theory, but did not complete it until 1948 (Bohr, Works, Vol. 8, 423–568). Here the quantum/classical division was determined by a parameter, ζ, the ratio of the collision diameter to the screening factor. When \(\zeta \ll 1\) one has a purely classical picture. When \(\zeta \gg 1\) one has pure wave diffraction. Models are required for the intermediate cases. These are not models of the mathematical formalism. They are models of the reality treated that are introduced when the formalism seems inapplicable. The appropriate model depends on the problem. In a relatively low energy collision between an electron and an atom, one may use the orbital model of the atom. When the energy is high enough that the incident electron effectively interacts with all the bound electrons, the Fermi-Thomas model is appropriate. Similar considerations determine when it is appropriate to model the incident electron as a particle or as a wave packet (Born approximation).
The intermediate cases must include many special effects: exchange phenomena in the collision of two identical particles; the Ramsauer effect for slow electrons interacting with noble gases; and the capture and loss of electrons by fission fragments. The problematic feature in these cases was the development of a consistent descriptive account adequate to the phenomena treated. Different contexts required different accounts and an analysis of the valid applicability of the concepts used. When that was accomplished, the mathematical formulation was routine. When Bohr’s epistemological comments are cited out of context, they may seem pontifical and arbitrary. However, they are best understood as emerging from his abiding concern with making the practice of physics conceptually consistent.

QFT seemed to fail a Bohrian analysis. Landau and Peierls (1931) argued that quantum mechanics could not be applied in the range of relativistic energies. They interpreted the difficulties in RQM (negative energy states) and QFT (divergences) as indicating the failure of these two theories and sought to explain the reason for the failure along Bohrian lines. A necessary condition for the applicability of quantum mechanics is the existence of predictable measurements. By adapting the time-energy indeterminacy principle to measurements of electrons and photons they concluded that measurements precise enough to support predictions can be made only for systems that vary little in the time required to achieve this precision. On this basis, they inferred that quantum mechanics does not apply at all to photons and only to non-relativistic electrons. They visited Bohr’s institute and were amazed at the strength of his rejection. (See Peierls’s Introduction to BCW, Vol. 9 (1985)).

Bohr’s response manifested a way of doing physics that was uniquely his. He concluded that the Landau-Peierls position was wrong on conceptual grounds, and then began to learn the mathematics of quantum field theory. Two years of intense work with Rosenfeld yielded a paper (Bohr and Rosenfeld 1933), which, as the authors noted, was more respected than read. This paper convinced Bohr that his manner of interpreting quantum physics was correct. Yet, it is rarely treated in any discussions of Bohr’s position. I will try to bring out the point of the paper and refer to Darrigol (1991) for a more complete account. The paper is not concerned with quantum field theory as a theory, or with the fundamental difficulties concerning divergences. It is exclusively concerned with a consistency problem. The definition of the quantities that quantum field theory uses is set by the CP and the uncertainty principle. Testing means measuring field components individually, or in combinations. A necessary condition for the extension of quantum mechanics to the electromagnetic field is that definitions of field quantities must be used in a way that is consistent with the possibility of measurement. It was here that Landau and Peierls argued that the theory was inconsistent.

The argument given in the Bohr-Rosenfeld paper is essentially a peculiar form of double-entry bookkeeping. The credits come from the application of the correspondence principle and the uncertainty principle to the electromagnetic field. The debits come from measurement of field quantities. The details present a double problem. First, they are technical and difficult. It took Bohr and Rosenfeld two years of intensive work to get all the details straight. Secondly, the proposed measurements are so grossly unrealistic that it is difficult to see what the authors are getting at. We begin with the credits. The CP extends basic concepts of mechanics and electrodynamics to quantum physics. Mechanical quantities presuppose the concept ‘particle’. Electrodynamics concepts presuppose ‘field’. This paper is concerned with the application of the CP to the field concept. The classical concept of a field is a continuous distribution, such that the components have a value at every point. This, the authors insist, is an idealization. Electromagnetic quantities are quantized by the same procedure used for mechanical concepts. Set up Poisson brackets for components in Cartesian coordinates and then replace these brackets by commutators. These commutation relations lead to detailed conclusions concerning which field components can be simultaneously measured and to what degree of accuracy. The mathematical form of the results made an essential use of the Dirac delta function. The authors justified this by the physical significance they accorded it. The value of a field at a point is an extension of the classical idealization beyond the limits of its validity. Physical significance attaches only to space-time integrals of field components. The delta function is a tool for integration that effectively uses values defined over space-time intervals. 
Using the delta function, they computed average values of field components over different space-time regions and used this as a basis for predictions concerning measurability.

The averages of all field components over the same space-time region commute and, accordingly, should be independently measurable. The averages of two components of the same kind, such as \(E_x\) or \(H_y\), over two spatially separate regions commute if the time intervals are identical. The averages of two components of different kinds over two arbitrary time intervals commute when the corresponding spatial regions coincide. However, average values of the same component, e.g. \(E_x\), over different spatio-temporal regions (I and II) do not commute. Nor do the average values of one component, such as \(E_x\) in I, and a perpendicular component, such as \(H_y\) in II. Pauli, whose critical evaluation was regularly solicited, pointed out that vacuum fluctuations were not included. Rather than include them, the authors gave an epistemological justification for their omission.

To balance the debits with credits they considered, not actual measurements, but the most perfect measurements that could be conceived without contradiction. Again, the details are confusing, but the overall purpose is quite clear. Even idealized measurements of different field components can be broken down into two parts. The first is the actual measurement of a field quantity. The second is readjusting the ‘machinery’ so that it can perform another measurement. The analysis should include the actual measurement and any changes to other field values brought about by the measuring process, but exclude, or compensate for, the process of readjusting the machinery. The measurement of electromagnetic field components depends on the transfer of momentum to suitable electrical or magnetic test bodies placed in the field. Since measurements are of averages over space-time volumes, a suitable test body for measuring an electrical component must have a uniform charge distribution over a suitable volume. This is a classical charged particle, one whose atomic composition is ignored. Any direct measurement of the momentum this body acquires by using something like a radar gun, or by using the Doppler effect, would change the frequency. So for ideal measurements a more complicated device is needed.

Consider a collection of macroscopic uniformly charged rigid test particles, each with a fixed place in a rigid framework. To measure E x in region I the test particle in I is disconnected from the framework. Then it is displaced by the value of the E x field over the surface of the particle. Next, this displacement is compensated, e.g. by having the test particle attached to an oppositely charged body by magnetizable flexible threads. Then the test body is reattached to its original position. Since measurements are needed of different components in different regions and at different times the rigid framework must have a distributed series of detachable test particles each with its own compensating mechanism. The resulting apparatus is much more like a Rube Goldberg contraption than a feasible experimental arrangement. This is not significant. The postulation of the most perfect measurements compatible with the physical principles indispensable to the measurement in question supplies a clear basis for determining the overall consistency of measurements in QFT.

When Bohr and Rosenfeld developed these idealized measurements they were not trying to prove that the Copenhagen position was correct. Bohr always felt that learning where a system breaks down is the best way to appraise its validity. What they found was that for the measurement of one or more components in different regions the results of idealized measurements did not coincide with the results obtained from the commutation relations. However, the analysis was not yet complete. The displacement of one test body changes the field at another test body. This change can be compensated by means of a third body hooked to the second by flexible springs. When this compensation is included, the results are exactly the same as those obtained from the commutation relations. The debits and credits balance in precise detail. Hence the CP supplies a consistent basis for applying quantum mechanics to electromagnetic fields. This analysis does not establish, or even test, the consistency of quantum field theory. It simply shows that two different usages of classical field components, one using the CP to set up quantum analogs of classical components and the other in measuring fields, are consistent. This minimal consistency is a necessary but far from sufficient condition for any quantum field theory employing these concepts.

In their second paper (Bohr and Rosenfeld 1950), written when quantum electrodynamics (QED) dominated physics, they extended their previous considerations from the measurement of fields to the measurement of charge currents. The goal was to show that second quantization is consistent in the treatment of ‘matter waves’ as well as photons. Again, they propose a highly idealized experiment, measuring current within a region by surrounding the region with a shell containing test bodies that absorb momentum, are moved, and then have the movement compensated. The first, or pre-QED, approximation presented no problems. In a second approximation they had to consider virtual pair production induced by displacement of test bodies. They gave a very non-technical argument to indicate that polarization of the vacuum would not influence their idealized measurements, though manipulations of a test body in one region would have a polarizing effect on other regions. When proper compensations are included, the results are in accord with the commutation relations. The net result was a qualitative non-technical proof of what everyone else assumed, that the way quantities are represented in QED is legitimate.

Bohr was finally convinced that his way of interpreting QM was consistent. One could use either the ‘particle’ or the ‘field’ cluster of concepts to interpret actual or ideal experiments and have a mathematical formulation that was consistent with the informal inferential structure used to report and extend experimental results. The final trial came from the challenge issued by Einstein, Podolsky, and Rosen. Since this has been exhaustively treated in the literature I will merely point out a divergence in the contrasting interpretative frameworks. The EPR paper argued that Copenhagen QM is incomplete on ontological grounds. There are elements of reality not included in the theory. Bohr defended QM as complete on epistemological grounds. It accommodates all the experimental information that can be used without introducing inconsistencies.

In his later analyses, Bohr gradually shifted from concepts used in individual experimental situations, to the supporting network of concepts, and finally to the language that made concepts possible. From about 1937 on Bohr advocated using ‘phenomenon’ as a general term covering the whole experimental situation, including the apparatus. Bohr was never concerned with the interpretation of quantum mechanics as a theory. He considered the mathematical formalism an inferential tool, not a theory. “Its physical content is exhausted by its power to formulate statistical laws governing observations obtained under conditions specified in plain language” (Bohr 1963, p. 12). With this background we may summarize the Bohr Consistency Conditions, the necessary conditions for the unambiguous communication of experimental information.
  1. The meaning of classical concepts is rooted in ordinary language usage and its historical extension in the language of physics.

  2. The doctrine of complementarity sets the limits to which classical concepts may be consistently extended.

  3. Any use of classical concepts beyond these allowed limits may generate inconsistencies. Idealized thought experiments supply a vehicle for analyzing limits and exposing inconsistencies.

  4. When concepts are used within their limits, then they support the normal inferences of experimental physics. Thus, predicting, or retrodicting, paths is valid in contexts where the classical particle concept is applicable.

  5. The mathematical formulation based on the usual operator substitutions must be consistent with these conditions. This is a consistency relation between two inference supporting systems, a linguistic formulation and a mathematical formulation.


In introducing the dual inference model we used the simple example of how a dual inference system functions in the game of bridge. The informal ordinary-language inference system contains the physical content while an inferential system, like the Goren point-count system, functions as an inferential tool. In Bohr’s position the extended ordinary language contains all the physical content, while the mathematical formalism is an inferential tool. The consistency conditions allow the dual-inference system to function without generating contradictions. The justification for imposing this is pragmatic. It works in atomic physics, nuclear physics, and quantum electrodynamics.

To see the significance of the Bohr Consistency Conditions we note that they disallow the standard formulation of Bell’s theorem. Bell’s original formulation specified the problem: “Consider a pair of spin one-half particles formed somehow in the singlet state and moving freely in opposite directions.” This statement of the situation explicitly presupposes both the classical term ‘particle’ and a quantum specification of the state of the two-particle system. The fact that ‘particle’ is used in the classical sense of a localized body traveling in a trajectory is basic to every formulation of the problem. This problematic mixture of classical descriptive accounts, used to support inferences, and quantum state specifications carries over even to accounts given in purely quantum terms. Thus Redhead (1987, p. 73) says: “Consider a QM system consisting of two spin one-half particles, in the singlet state of the total spin, and widely separated, so that there is no significant overlap of the spatial wave functions of the two systems.” In Bohrian semantics the term ‘particle’ serves as an apt designation and a basis for inference only in an experimental context set up to test for mechanical properties. Prior to such a measurement we are dealing with an entangled quantum mechanical system represented by one wave function, not with two separated particles having separate wave functions. The Bohrian position supports the conclusion that QM correlations will always trump Bell limits. However, it does not explain, or even address, the distant correlations that Einstein labeled ‘ghostly’. Bohr would argue that such questions are not properly formulated.
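For reference, the ‘Bell limits’ mentioned here are usually stated in CHSH form (a standard result, added for the reader; it is not part of the chapter’s own exposition):

```latex
% For local hidden-variable theories, the correlation functions for
% detector settings a, a', b, b' satisfy
S = |E(a,b) - E(a,b')| + |E(a',b) + E(a',b')| \le 2
% For the singlet state quantum mechanics predicts
E(a,b) = -\,\mathbf{a}\cdot\mathbf{b}
% and an optimal choice of settings yields the Tsirelson bound:
S_{\mathrm{QM}} = 2\sqrt{2} \approx 2.83 > 2
```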

5.2 A Strict Measurement Interpretation of Quantum Mechanics

When quantum mechanics is developed on the basis of the Bohr consistency conditions it is not a theory. It uses the mathematical formalism of QM as a tool for extending classical concepts. Heisenberg5 and Pauli 6 also interpreted quantum mechanics as a rational generalization of classical physics. Bohr repeatedly insisted that the complementarity interpretation is the only possible one.7 In the view of many philosophers the Copenhagen patriarchs, like their Chalcedonian predecessors, were imposing orthodoxy by decreeing that no other position should be taught or held. What significance should be accorded Copenhagen orthodoxy?

Before answering that question we should consider the chief source of misunderstanding. David Bohm’s presentation of a hidden variable interpretation of QM effectively changed the status quaestionis. Quantum mechanics was presented as a mathematical formalism that admitted of different interpretations. Heisenberg entered the fray arguing that the Copenhagen interpretation is the only viable interpretation (see Heisenberg 1958, chaps. 3 and 8; Howard 2004). His defense effectively transformed the perception of the Copenhagen interpretation into an interpretation of quantum mechanics as a theory.8 When Bohr’s scattered comments were taken as the interpretation of QM as a theory they seemed amateurish, outdated, and even perverse. Bohr never interpreted quantum mechanics as a theory. As the last footnote indicates, what he regarded as necessary was the complementarity description. It is the only way to systematize experimental results without introducing inconsistencies. The mathematical formalism had to be used, and should be interpreted, in accord with these restrictions. Is this an adequate basis for an interpretation of QM? The three main objections can be labeled ‘the Einstein objection’, ‘the Bohr objection’, and ‘the formalist objection’. Einstein thought that Copenhagen QM is not what a fundamental theory should be. He realized that the only effective way to implement this criticism is to develop a better quantum theory. His 30 years of struggling to achieve this goal led only to frustration. Bohr thought that his way of interpreting QM should be rejected if it proved inadequate to advances in physics. In 1930 he thought it might not be adequate to advances in relativistic quantum mechanics, quantum field theory, and nuclear physics. The efforts previously summarized convinced him that these advances did not go beyond the limits his method allowed. The formalist objection is that a physical theory should be regarded as a mathematical formalism requiring a physical interpretation.
This leaves no role for the dual-inference account that I summarized and which Bohr exemplifies.

We will focus on the Bohrian objection. Is the Copenhagen interpretation adequate to advances in physics since Bohr’s death? This question cannot be answered by simply considering advances in physics. Creative physicists often rely on an ‘Anything goes’ methodology and deliberately go beyond accepted limits. The question can be rephrased. Does a systematic account of accepted advances go beyond the limits of the Bohr Consistency Conditions? Here we can take some guidance from the formalists. Explicit rules for theory interpretation have been developed for formal systems, such as symbolic logic. In an axiomatically formulated system there is: a basis, the axioms; a method of extension, the allowed rules of inference; and a cutoff. Only conclusions derived from the axioms by following the rules count as part of the system. Then a theory is a sharply delineated object of interpretation with clearly specified limits. A rigorous reformulation of QM could put it in this interpretative framework.

John von Neumann (1955 [1932]), who coined the term ‘Hilbert space’, extended Hilbert’s axiomatic approach to quantum mechanics. George Mackey (1963) gave a new axiomatic formulation of QM as a non-classical probability theory. Piron and the Geneva school developed axiomatic systems centered on the lattices of closed subspaces of a generalized Hilbert space.9 Recent works generally rely on the semantic conception of theories rather than axiomatic models. We may schematize these formulations of QM in terms of a general structural form: \(\mathcal{T} = \langle\mathcal{L},\mathcal{A},\mathcal{D},\mathcal{K}\rangle\), where \(\mathcal{T}\) is a theory, \(\mathcal{L}\) is a formal language, \(\mathcal{A}\) is a set of axioms expressed in \(\mathcal{L}\), \(\mathcal{D}\) is a set of inference rules, and \(\mathcal{K}\) is a class of models of \(\mathcal{A}\), or structures in which the axioms are true. In the semantic conception one dispenses with axioms and treats a Hilbert space as an abstract structure to be given an interpretation in terms of models.10

Bohr’s methodology effectively reverses these methods of interpretation. Formal methods take a mathematical formalism as a foundation and then impose a physical interpretation on this foundation. Bohr takes a descriptive account of actual and possible measurements as foundational and then fits the mathematics to this foundation. This can be done in two ways. A loose measurement interpretation begins with the restrictions on the reporting of experimental data and then adapts the mathematical formalism to fit this basis. I am familiar with only five textbooks that take a basic consistency between the language used in experimental results and mathematical formulations as a basis for developing and interpreting quantum mechanics: Heisenberg 1930, Pauli 1947 [1930], Kramers 1957, Landau and Lifshitz [1956], and Gottfried 1966. There are undoubtedly more. This does not supply a basis for determining the limits of applicability of the method. A strict measurement interpretation relies on an analysis of quantum measurements to generate the mathematics of QM. In this case one can imitate the formal methodology and speak of the interpretation of QM in terms of a basis, the measurement analysis; a method of extension, the mathematical formulation, and a cutoff. Any conclusions incompatible with this methodology are not accepted. This methodology presents both theoretical and practical difficulties. As in the interpretation of classical physics, the theoretical difficulty is a reliance on sloppy mathematics. Since the physics is taken as foundational, physical considerations often replace existence theorems and consistency considerations in justifying mathematical formulations. The practical difficulty is that this is an awkward, and often confusing, way of developing QM. Nevertheless, it seems to be the only method available for testing the limits of valid applicability of the Bohrian approach. 
I have presented the technical details elsewhere (MacKinnon 2008) and will present an informal summary here. As a preliminary point we should make a sharp distinction between the measurement problem and the measurement interpretation. The standard formulation of the measurement problem assumes the universal validity of QM: the theory should treat the apparatus as well as the system being analyzed. Consider an experimental situation where the state function, \(|\psi \rangle\), representing the object plus the measuring apparatus is a superposition. In the linear dynamics of the Schrödinger equation a superposition of states evolves only into further superpositions. Measurement results require a mixture of states, which may be assigned different probabilities. How does a superposition become a mixture? In the von Neumann (or Wigner11) account one distinguishes two types of processes: the unitary evolution based on Schrödinger dynamics, and a non-unitary collapse proper to measurement situations. This has occasioned repeated criticism as an ad hoc postulate. When this postulate is rejected, there are two interrelated problems. The first is the reduction problem, explaining how the superposition becomes a mixture. The second is the selection problem, explaining how the measurement selects one value from the mixture that has many values with differing probabilities.12

In a strict measurement interpretation the measurement problem does not arise. Instead of asking how the formalism yields measurement results one begins with measurements and asks how they can be represented mathematically. In a loose measurement interpretation one has the standard mathematical formalism and a form of the problem is treated in a reverse order. Thus, Landau and Lifshitz (pp. 21–24) claim that the measuring apparatus is represented by ‘a quasi-classical wave function’. This means that one relies on a classical description of the apparatus and presupposes that there is a state function, or a large equivalence class of state functions, corresponding to this description. Gottfried (p. 186) insists that an experimental arrangement counts as a measurement device if and only if quasi-classical states are macroscopically distinguishable. This means that pure states and mixtures are indistinguishable in a measurement situation. He focuses on the conditions under which it is reasonable to replace a superposition of states by a mixture. This is not a consequence of the formalism of quantum mechanics; it is a necessary condition for a real measurement. The formalism of quantum mechanics does not yield real measurements.

In his Principles Dirac generates the basic formalism of QM by analyzing idealized experiments. Messiah’s (1964) well-known textbook helped make the Dirac formalism an established part of normal physics by presenting it with no reliance on Dirac’s own development. As a result, Dirac’s method of development is never considered. I will present his reasoning in its starkest form. Dirac justifies the representation of states by vectors through an analysis of measurements. A simplified recasting of his argument highlights the problematic features. Consider a beam of light consisting of a single photon plane-polarized at an oblique angle relative to the optic axis of a tourmaline crystal. Either the whole photon passes, in which case it is observed to be polarized perpendicular to the optic axis, or nothing goes through. The initial oblique polarization, accordingly, must be considered a superposition of states of parallel and perpendicular polarization. Again, consider another single-photon beam passed through an interferometer so that it gets split into two components that subsequently interfere. Prior to the interference, the photon must be considered to be in a translational state, which is a superposition of the translational states associated with the two components (Dirac 1958, pp. 4–14). Since particle states obey a superposition principle, they should be represented by mathematical quantities that also obey a superposition principle, vectors. The physics generates the mathematics.
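Dirac’s polarization argument admits a minimal numerical illustration. The sketch below is a toy of my own, not Dirac’s calculation: it represents an obliquely polarized photon as a superposition of parallel and perpendicular polarization states, with the squared amplitude of the perpendicular component giving the probability that the whole photon passes the crystal.

```python
import numpy as np

# Photon plane-polarized at an oblique angle theta to the optic axis,
# written as a superposition of parallel and perpendicular states.
theta = np.pi / 6  # illustrative angle, not from the text
state = np.array([np.cos(theta), np.sin(theta)])  # [parallel, perpendicular]

# The state is normalized, so the two probabilities exhaust the alternatives.
assert abs(state @ state - 1.0) < 1e-12

# Probability that the whole photon passes (emerging polarized
# perpendicular to the optic axis): the squared perpendicular amplitude.
p_pass = state[1] ** 2
print(p_pass)  # sin^2(pi/6) = 0.25
```

The point of the sketch is structural: because the physical states superpose, the representing objects must be vectors, exactly the inference Dirac draws.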

This was a methodology that Dirac regularly relied on. In the second edition of Principles he introduced vectors “… in a suitable vector space with a sufficiently large number of dimensions” (Dirac 1935, p. 14). In the third edition he introduced his bra-ket notation and simply postulated a conjugate imaginary space with the needed properties. He assumed that the vector space he postulated must be more general than a Hilbert space, because it includes continuous vectors that cannot be normalized (Dirac 1958, p. 40, 48). He only spoke of the Hilbert-space formulation of quantum mechanics when he became convinced that it should be abandoned (Dirac 1964). Messiah developed a statistical interpretation of QM and did not apply the superposition principle to individual systems. In this context the Dirac argument from physical superposition of states of an individual system to a mathematical representation that also obeys a superposition principle has no foundation. Physicists generally learned the Dirac formulation through Messiah’s elegant mathematical presentation and then failed to realize that Dirac’s presentation represented better physics. The application of the superposition principle to individual states proved indispensable in particle physics. Schwinger described his early student years as “unknown to him, a student of Dirac’s” (Schweber 1994, p. 278). Before beginning his freshman year at C.C.N.Y. he had studied Dirac’s Principles and, at age 16, wrote his first paper, never published, “On the Interaction of Several Electrons”, generalizing the Dirac-Fock-Podolsky many-time formulation of quantum electrodynamics. Schwinger explicitly puts QM on an epistemological basis: “Quantum mechanics is a symbolic expression of the laws of microscopic measurement” (Schwinger 1970b, p. 1). Accordingly, he begins with the distinctive features capturing these measurements. This, for Schwinger, is the fact that successive measurements can yield incompatible results. 
Since state preparations also capture this feature Schwinger actually uses state preparations, rather than complete measurements, as his starting point. He begins by symbolizing a measurement, M, of a quantity, A, as an operation that sorts an ensemble into sub-ensembles characterized by their A values, \(M(a_i)\). The paradigm case is a Stern-Gerlach filter sorting a beam of atoms into two or more beams. This is a type one measurement: an immediate repetition would yield the same results. Though Schwinger did not use quantum information theory, his point of departure in developing his measurement interpretation is a consideration of idealized measurements that yield Yes/No answers. There is no reduction of the wave packet or recording of numerical results. An idealization of successive measurements is used to characterize the distinguishing feature of these microscopic measurements. Symbolically,
$$M(a') M(a'') = \delta (a', a'') M(a').$$
This can be expanded into a complete measurement, \(M(a') = \prod_{i=1}^k M(a_i')\), where \(a_i'\) stands for a complete set of compatible physical quantities. Using A, B, C and D for complete sets of compatible quantities, a more general compound measurement is one in which systems are accepted only in the state \(B = b_i\) and emerge in the state \(A = a_i\), e.g., an S-G filter that only accepts atoms with \(\sigma_z = +1\) and only emits atoms with \(\sigma_x = +1\). This is symbolized \(M(a_i, b_i)\). If this is followed by another compound measurement \(M(c_i,d_i)\), the net result is equivalent to an overall measurement that only accepts systems in state \(d_i\) and emits systems in state \(a_i\). Symbolically,
$$M(a_i, b_i) M(c_i, d_i) = <b_i|c_i>M(a_i, d_i).$$

For this to be interpreted as a measurement \(<b_i|c_i>\) must be a number characterizing systems with \(C = c_i\) that are accepted as having \(B=b_i\). The totality of such numbers, \(<a'|b'>\), is called the transformation function, relating a description of a system in terms of the complete set of compatible physical quantities, B, to a description in terms of the complete compatible set, A. In the edition of Dirac’s Principles that Schwinger studied, the transformation function was basic. A little manipulation reveals that N, the total number of states in a complete measurement, is independent of the particular choice of complete physical quantities. For N states the measurement symbols form an algebra of dimensionality \(N^2\). These measurement operators form a set that is linear, associative, and non-commutative under multiplication.
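The multiplication laws quoted above can be modeled, for illustration, by realizing the measurement symbols as outer products \(M(a,b) = |a\rangle\langle b|\) on a finite-dimensional complex space, where the laws follow from ordinary matrix algebra. A hedged numpy sketch (my construction, not Schwinger’s own development):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "complete descriptions" of an N-state system: the standard basis A
# and a second basis B obtained from a random unitary (QR decomposition).
N = 3
A = np.eye(N, dtype=complex)  # basis vectors a_i as columns
B, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

def M(u, v):
    # Measurement symbol M(u, v) = |u><v|: accept state v, emit state u
    return np.outer(u, v.conj())

a, b, c, d = A[:, 0], B[:, 1], B[:, 2], A[:, 1]

# Compound-measurement law: M(a,b) M(c,d) = <b|c> M(a,d)
lhs = M(a, b) @ M(c, d)
rhs = (b.conj() @ c) * M(a, d)
print(np.allclose(lhs, rhs))  # True

# Repetition law: M(a') M(a'') = delta(a', a'') M(a')
print(np.allclose(M(a, a) @ M(a, a), M(a, a)))        # True (same state)
print(np.allclose(M(a, a) @ M(A[:, 1], A[:, 1]), 0))  # True (orthogonal)
```

The operators so constructed are manifestly linear, associative, and non-commutative under multiplication, matching the algebra described in the text.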

To get a physical interpretation of this algebra consider the sequence of selective measurements \(M(b')M(a')M(b')\). This differs from a simple or repeated measurement \(M(b')\) in virtue of the disturbance produced by the intermediate \(M(a')\) measurement. This suggests \(M(b')M(a')M(b')= p(a',b')M(b')\), where \(p(a',b')= <a'|b'> <b'|a'>\). Since this is invariant under the transformation \( <a'|b'>\rightarrow \lambda (a') <a'|b'>\lambda(b')^{-1}\), where \(\lambda(a'), \lambda(b')\) are arbitrary numbers, Schwinger argues that only the product \(p(a',b')\) should be accorded physical significance. Using \(\sum_{a'} p(a',b')=1\), Schwinger interprets this as a probability and imposes the restriction,
$$ <b'|a'>= <a'|b'>^{*}\!\!.$$
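That \(p(a',b')\) behaves as a probability can be checked numerically: whenever the transformation function \(\langle a_i|b_j\rangle\) forms a unitary matrix, the numbers \(|\langle a'|b'\rangle|^2\) are non-negative and sum to one over a complete set. A small sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random unitary relating two complete descriptions A and B of an
# N-state system; entry U[i, j] plays the role of <a_i|b_j>.
N = 4
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

# p(a_i, b_j) = <a_i|b_j><b_j|a_i> = |<a_i|b_j>|^2
p = np.abs(U) ** 2

# Summing over a complete set of outcomes gives 1, which supports
# reading p(a', b') as a probability.
print(np.allclose(p.sum(axis=0), 1.0))  # True: each column sums to one
print(np.allclose(p.sum(axis=1), 1.0))  # True: each row sums to one
```

Note also that \(p\) is invariant under the phase rescalings \(\langle a'|b'\rangle \rightarrow \lambda(a')\langle a'|b'\rangle\lambda(b')^{-1}\) discussed above, since the moduli are unchanged.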

The use of complex numbers in the measurement algebra implies the existence of a dual algebra in which all numbers are replaced by complex conjugate numbers. This algebra of measurement operators can be expanded into a geometry of states. Introduce the fictional null (or vacuum) state, 0, and then expand \(M(a',b')\) as a product, \(M(a',0)M(0,b')\). Let \(M(0,b') = \varPhi (b')\), the annihilation of a system in state \(b'\), and \(M(a',0) = \varPsi (a')\), the creation of a system in state \(a'\). These play the role of the state vectors, \(\varPhi (b') = <b'|\) and \(\varPsi(a') = |a'>\). With the convenient fiction that every Hermitian operator symbolizes a property and every unit vector a state one can calculate standard expectation values. Like Dirac, Schwinger relies on the complex space developed from his measurement algebra and never refers to Hilbert space. Accardi (1995) has shown that Schwinger’s construction is equivalent to standard Hilbert space.

Schwinger extended this methodology to QED, where he was quite successful, and to QFT, where he was less successful. Standard QFT develops dynamics by introducing a classical Hamiltonian and substituting operators for dynamical variables. Schwinger relied on his methodology, rather than the correspondence principle. He characterized his method as “… a phenomenological theory—a coherent account that is anabatic (from anabasis: going up)” (Schwinger 1983, p. 23, Flato et al. 1979). This anabatic methodology introduced two new steps. The first was a new dynamic principle (Schwinger 1959, p. xiv). The new dynamics is based on a unitary action principle whose justification hinges on the foundational role assigned measurement. A measurement apparatus effectively defines a spatio-temporal coordinate system with respect to which physical properties are specified. A transformation function, \( <a't_1|a''t_2>\), relates two arbitrary complete descriptions. Physical properties and their spectra of values should not depend on which of equivalent descriptions are chosen. Hence, there must be a continuous unitary transformation leading from any given descriptive basis to equivalent bases. The continuous specification of a system in time gives the dynamics of the system (see Gottfried 1966, pp. 233–256). From this Schwinger infers that the properties of specific systems must be completely contained in a dynamical principle that characterizes the general transformation function.

Any infinitesimal alteration of the transformation function can be expressed as
$$\delta <a'_1 t_1 | a''_2 t_2>= i <a'_1 t_1 | \mathbf{\delta W_{12}} | a''_2 t_2>\!\!.$$

This suggests the fundamental dynamical postulate: There exists a special class of infinitesimal alterations for which the associated operators \(\mathbf{\delta W_{12}}\) are obtained by appropriate variation of a single operator, the action operator \(\mathbf{W_{12}}\), or \(\mathbf{\delta W_{12}} = \delta[\mathbf{W_{12}}]\). Thus, quantum dynamics can be developed simply as an extension of the algebra of measurements without attaching any further ontological significance to state functions. The second advance was the introduction of operator fields. These dynamic variables, or operator fields, supply the theoretical concepts that replace the phenomenological concept ‘particle’. This is the basic conceptual advance that Schwinger makes beyond Bohr’s methodology. For Bohr all descriptions must be expressed exclusively in classical terms. Schwinger assumes that it is possible to use dynamical field variables to give a sub-microscopic descriptive account within the framework of his methodology. The spatial and temporal coordinates that function as parameters for operator fields are idealized extensions of the spatio-temporal framework of the measuring apparatus. “It is the introduction of operator variations that cuts the umbilical cord of the correspondence principle and brings quantum mechanics to full maturity” (Schwinger 1983, p. 343, Flato et al. 1979). In 1964 Gell-Mann and Zweig independently introduced the quark hypothesis, which Schwinger rejected. Schwinger’s rejection had strong roots in his ideas of the proper relation between a phenomenological and a depth level. ‘Particle’ functions on the phenomenological level.
Speaking of a particle assumption he claimed: “But the essential point is embodied in the view that the observed physical world is the outcome of the dynamic play among underlying primary fields, and the relationship between these fundamental fields and the phenomenological particles can be comparatively remote, in contrast to the immediate connection that is commonly assumed” (Schwinger 1964, p. 189). The quark hypothesis entered on the wrong level, as part of an underlying theory rather than the phenomenology, and entered through a phenomenological classification, rather than through a depth theory. The standard model will be considered in the next chapter. Here we will simply indicate where it departs from Schwinger’s anabatic methodology. For Schwinger the space-time framework of the measuring apparatus anchors all assignments of spatial and temporal values to fields. This supported universal gauge transformations, but not the local gauge transformations basic to the standard model. Finally, the standard model did not meet the requirement that Schwinger considered basic for a new theory. It should supply a theoretical basis for the coupling constants.

It may seem arbitrary to take the limits of Schwinger’s anabatic advance as the limits of the measurement interpretation of QM. Yet, Schwinger’s combination of awesome computational skill, profound knowledge of physics, and systematic development of a methodology supplies a better guide than any alternative I might attempt. Furthermore, the consistent histories interpretation, which will also be treated in the next chapter, can easily be regarded as a replacement for the measurement interpretation. This grounds my evaluation. Orthodox quantum mechanics has had a success that is unprecedented in scope and precision. Yet, the usual formulations of orthodoxy, and the systematic misinterpretations, supply no clear basis for determining the limits of valid applicability. The measurement interpretation systematizes the orthodox interpretation. It supplies a basis, the distinctive features of quantum measurements and the algebra these generate; a method of extension, Schwinger’s anabatic methodology; and a cutoff. This cutoff excludes the standard model of particle physics. In this respect Schwinger’s position is similar to algebraic quantum field theory. This too relies on an epistemological foundation and a systematic method of advancement that also excludes the standard model of particle physics.13

We will rephrase the two basic objections to orthodoxy. A fundamental theory should be about reality at a fundamental level. Bohrian QM is grounded in classical physics and treats the mathematical formalism as a tool rather than a fundamental theory. Schwinger consciously went beyond this by using operator fields to give the equivalent of a sub-microscopic descriptive account. This did not go far enough. Orthodox QM does not answer the Einstein objection. Nor does it answer the Bohr objection. It is not empirically adequate to advances in quantum field theory. This evaluation does not suggest a change in the practice of fundamental physics. Creative theoreticians do not feel bound by, and rarely advert to, the restrictions of a methodology. However, this evaluation does show where and why a revised interpretation of QM is needed.

I have appended schematic outlines of two different perspectives on the interrelation of classical and quantum physics and, by an oversimplification, attached historical names to each. In the Einstein perspective, shared by many philosophers, theories are the basic units to be interpreted. In the Bohr perspective, our suspension in language plays a presuppositional role in the interrelation and interpretation of theories. A development of this unfamiliar interpretative perspective requires a clarification of the status of classical physics.


1.

    The term ‘orthodox’ stems from the Council of Chalcedon, which set the standards of orthodoxy accepted by the Eastern Orthodox, Roman Catholic, and mainstream Protestant Churches. By a curious turn some theologians are now using Bohr’s doctrine of complementarity to explain the Chalcedonian decrees. See Richardson and Wildman (1996), pp. 253–298.

2.

    Surveys of the problems treated by matrix mechanics may be found in Mehra-Rechenberg (1982, Vol. 4, Part 2); and in Max Born’s 1926 lectures (Born 1962, p. 68–129).

3.

    For more details see: Mehra-Rechenberg (1982, Vol. 5, Part 2, pp. 838–854); Hund (1974), chaps. 12–14; Jammer (1966), 362–365; Pauli (1947 [1932]), 161–214; Bethe (1999) and Kuhn et al. (1962).

4.

    This is a summary of ideas Bohr presented in October, 1931. A more detailed analysis is given in MacKinnon (1982a,  chap. 8) and MacKinnon (1985).

5.

    “… the Copenhagen interpretation regards things and processes which are describable in terms of classical concepts, i.e., the actual, as the foundation of any physical interpretation” (Heisenberg 1958, p. 145).

6.

    Pauli, Bohr’s closest ally on interpretative issues, contrasted Reichenbach’s attempt to formulate quantum mechanics as an axiomatic theory with his own position: “Quantum mechanics is a much less radical procedure. It can be considered the minimum generalization of the classical theory which is necessary to reach a self-consistent description of micro phenomena, in which the finiteness of the quantum of action is essential” (Pauli 1947, p. 1404).

7.

    In an interview with Thomas Kuhn and others the day before his death Bohr claimed “There are all kinds of people, but I think it would be reasonable to say that no man who is called a philosopher really understands what one means by the complementary description. … They did not see that it was an objective description, and that it is the only possible objective description” (Bohr, AHQP, Interview 3, 5).

8.

    See Gomatam (2007) for a clarification of the difference between Bohr’s position and the standard Copenhagen interpretation.

9.

    Coecke et al. (2001) provides a good historical summary of the axiomatic approach.

10.

    Healey (1989), Hughes (1989), and Van Fraassen (1991) have developed interpretations of QM using the semantic method of interpretation.

11.

    The account of measurement was developed in von Neumann (1955 [1932],  chap. 6). In a conversation with Abner Shimony, Eugene Wigner claimed “I have learned much about quantum theory from Johnny, but the material in his Chapter Six Johnny learned all from me.” (citation from Aczel 2001, p. 102)

12.

    Bub (1997),  chap. 7 gives a technical treatment that examines Bohr’s position, the ‘von Neuman-Dirac orthodoxy’ and Bub’s own development based on the Bub-Clifton theorem.

13.

    Arguments supporting this evaluation are given in my 2007 and 2008 papers.


  1. Accardi, L. 1995. Can Mathematics Help Solving the Interpretational Problems of Quantum Mechanics? Il Nuovo Cimento, 110B, 685–721.
  2. Aczel, Amir D. 2001. Entanglement: The Greatest Mystery in Physics. New York, NY: Four Walls Eight Windows.
  3. Bethe, Hans. 1999. Quantum Theory. Reviews of Modern Physics, 71, S1–S8.
  4. Bohr, Niels. 1963. Essays 1958–1962 on Atomic Physics and Human Knowledge. New York, NY: Wiley.
  5. Bohr, Niels, and Léon Rosenfeld. 1933. On the Question of the Measurability of Electromagnetic Field Quantities. In J. Wheeler, and W. Zurek (eds.), Quantum Theory and Measurement (pp. 478–522). Princeton, NJ: Princeton University Press.
  6. Bohr, Niels, and Léon Rosenfeld. 1950. Field and Charge Measurements in Quantum Electrodynamics. Physical Review, 78, 794–798.
  7. Born, Max. 1962. Atomic Physics. New York, NY: Hafner.
  8. Bub, Jeffrey. 1997. Interpreting the Quantum World. Cambridge: Cambridge University Press.
  9. Coecke, Bob, David Moore, and Alexander Wilce. 2001. Operational Quantum Logic: An Overview.
  10. Darrigol, Olivier. 1991. Coherence et complétude de la mécanique quantique: l’exemple de Bohr. Revue d’Histoire des Sciences, 44, 137–179.
  11. Dirac, Paul. 1928. The Quantum Theory of the Electron. Proceedings of the Royal Society of London, A117, 610–625.
  12. Dirac, Paul. 1935. The Principles of Quantum Mechanics (2nd edn). Cambridge: Cambridge University Press.
  13. Dirac, Paul. 1958. The Principles of Quantum Mechanics (4th edn). Oxford: Clarendon Press.
  14. Dirac, Paul. 1964. Foundations of Quantum Theory. Lecture at Yeshiva University. New York.
  15. Flato, M., C. Fronsdal, and K. A. Milton. 1979. Selected Papers (1937–1976) of Julian Schwinger. Dordrecht, Holland: D. Reidel Publishing Company.
  16. Gomatam, Ravi. 2007. Bohr’s Interpretation and the Copenhagen Interpretation – Are The Two Incompatible? In Cristina Bicchieri, and Jason Alexander (eds.), PSA06: Part I (pp. 736–748). Chicago, IL: The University of Chicago Press.
  17. Gottfried, Kurt. 1966. Quantum Mechanics. Volume I: Fundamentals. New York, NY: W. A. Benjamin.
  18. Healey, Richard A. 1989. The Philosophy of Quantum Mechanics: An Interactive Interpretation. Cambridge: Cambridge University Press.
  19. Heisenberg, Werner. 1930. The Physical Principles of the Quantum Theory. New York, NY: Dover.
  20. Heisenberg, Werner. 1958. Physics and Philosophy: The Revolution in Modern Science. New York, NY: Harper’s.
  21. Howard, Don. 2004. Who Invented the “Copenhagen Interpretation”? A Study in Mythology. In Sandra D. Mitchell (ed.), Philosophy of Science: Proceedings of the 2002 Biennial Meeting (pp. 669–682). East Lansing, MI: Philosophy of Science Association.
  22. Hughes, R. I. G. 1989. The Structure and Interpretation of Quantum Mechanics. Cambridge, MA: Harvard University Press.
  23. Hund, Friedrich. 1974. The History of Quantum Theory. New York, NY: Harper & Row.
  24. Jammer, Max. 1966. The Conceptual Development of Quantum Mechanics. New York, NY: McGraw-Hill.
  25. Kramers, H. A. 1957. Quantum Mechanics. Amsterdam: North Holland.
  26. Kuhn, Thomas, John Heilbron, Paul Forman (eds.) 1962. AHQP: Archives for the History of Quantum Physics. Berkeley, CA, and Copenhagen.
  27. Landau, L. D., and E. M. Lifshitz. 1965. Quantum Mechanics: Non-Relativistic Theory (2nd rev. edn). Reading, MA: Addison-Wesley.
  28. Landau, Lev Davidovich, and Rudolf Peierls. 1931. Extension of the Uncertainty Principle to Relativistic Quantum Theory. In J. Wheeler, and W. Zurek (eds.), Quantum Theory and Measurement (pp. 465–476). Princeton, NJ: Princeton University Press.
  29. Mackey, George W. 1963. The Mathematical Foundations of Quantum Mechanics. New York, NY: Benjamin.
  30. MacKinnon, Edward. 1982. Scientific Explanation and Atomic Physics. Chicago, IL: University of Chicago Press.
  31. MacKinnon, Edward. 1985. Bohr on the Foundations of Quantum Theory. In A. P. French, and P. J. Kennedy (eds.), Niels Bohr: A Centenary Volume (pp. 101–120). Cambridge, MA: Harvard University Press.
  32. MacKinnon, Edward. 1994. Bohr and the Realism Debates. In J. Faye, and H. Folse (eds.), Niels Bohr and Contemporary Physics (pp. 279–302). Dordrecht: Kluwer.
  33. MacKinnon, Edward. 2008. The Standard Model as a Philosophical Challenge. Philosophy of Science, 75(4), 447–457.
  34. Mehra, Jagdish, and Helmut Rechenberg. 1982. The Historical Development of Quantum Mechanics. New York, NY: Springer.
  35. Messiah, Albert. 1964. Quantum Mechanics: Vol. I. Amsterdam: North Holland.
  36. Pauli, Wolfgang. 1947. Die Allgemeinen Prinzipien der Wellenmechanik. Ann Arbor, MI: J. W. Edwards.
  37. Petersen, Aage. 1968. Quantum Physics and the Philosophical Tradition. Cambridge, MA: MIT Press.
  38. Redhead, Michael. 1987. Incompleteness, Nonlocality, and Realism. Oxford: Clarendon Press.
  39. Richardson, W. Mark, and Wesley J. Wildman. 1996. Religion and Science: History, Method, Dialogue. New York, NY: Routledge.
  40. Rosenfeld, L. et al. 1972. Niels Bohr: Collected Works. Amsterdam: North Holland.
  41. Schweber, Silvan S. 1994. QED and the Men Who Made It. Princeton, NJ: Princeton University Press.
  42. Schwinger, Julian. 1970a. Particles, Sources, and Fields. Reading, MA: Addison-Wesley.
  43. Schwinger, Julian. 1970b. Quantum Kinematics and Dynamics. New York, NY: W. A. Benjamin, Inc.
  44. Schwinger, Julian. 1970c. Selected Papers on Quantum Electrodynamics. New York, NY: Dover.
  45. Schwinger, Julian. 1959. The Algebra of Microscopic Measurement. Proceedings of the National Academy of Sciences of the United States of America, 45, 1542.
  46. Schwinger, Julian. 1964. Field Theory of Matter. Physical Review, 135, B816–B830.
  47. Schwinger, Julian. 1983. Renormalization Theory of Quantum Electrodynamics. In Laurie Brown, and Lillian Hoddeson (eds.), The Birth of Particle Physics (pp. 329–353). Cambridge: Cambridge University Press.
  48. Van Fraassen, Bas. 1991. Quantum Mechanics: An Empiricist View. Oxford: Clarendon Press.
  49. von Neumann, John. (1955 [1933]). Mathematical Foundations of Quantum Mechanics, trans. Robert T. Beyer. Princeton, NJ: Princeton University Press.Google Scholar

Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  1. California State University East Bay, Oakland, USA