2.1 The New Biology/Biomedicine

2.1.1 Life vs. The Second Law of Thermodynamics

In the mid-twentieth century Erwin Schrödinger saw that, from the point of view of physics, living systems can be conceptualised as local areas in which, contrary to the general direction in the universe, entropy decreases and order increases [1]. Living systems create and maintain energy differences: they extract energy from the environment and use it to maintain their own organisation and to function, for example to obtain more energy. Schrödinger’s idea was taken up by von Bertalanffy in his General System Theory [2], and is now mainstream life science, at the cutting edge of understanding how biology relates to physics and chemistry, both conceptually and in explaining the appearance of life on Earth. Here is the biophysicist Nick Lane in his recent popular book The Vital Question [3] (pp. 21–22):

Ironically, the modern era of molecular biology, and all the extraordinary DNA technology that it entails, arguably began with a physicist, specifically with the publication of Erwin Schrödinger’s book What is Life? in 1944. Schrödinger made two key points: first, that life somehow resists the universal tendency to decay, the increasing entropy (disorder) that is stipulated by the second law of thermodynamics;…

Schrödinger’s second point was that the key to how life does this lies in genes and genetic information. We pick this up later, but first, more on the physics.

2.1.2 Energy Production and Control in Cells

Rolling entropy back—locally and definitely temporarily—is bound to involve a great deal of physics. Lane vividly explains energy production processes deep inside the cell; here are some selections [3] (pp. 69–71):

You are at the thermodynamic epicentre of the cell, the site of cellular respiration, deep within the mitochondria. Hydrogen is being stripped from the molecular remains of your food, and passed into the first and largest of [the] giant respiratory complexes… Electrons are separated from protons and fed into this vast complex, sucked in at one end and spat out of the other, all the way over there, deep in the membrane itself… The electrical current animates everything here… Your 40 trillion cells contain at least a quadrillion mitochondria, with a combined convoluted surface area of about 14,000 square metres; about 4 football fields. Their job is to pump protons, and together they pump more than 10²¹ of them… every second.

The key points for our present purpose are, first, that biological organisms exploit physics to extract energy for functioning, and second, picked up in the next section, that their doing this depends essentially not only on the physics but also on massive organisational and regulatory mechanisms.

2.1.3 Regulatory Control by Genetic Information

Questions about how living processes accomplish the feat of resisting entropy, at least temporarily until they return to dust, and how they persist nevertheless by making replicas of themselves—all turn out to involve regulation and control, information and coding. Here is Nick Lane on Schrödinger’s second key point, continuing from the quote above [3] (p. 22):

And second, that the trick to life’s local evasion of entropy lies in the genes. He proposed that the genetic material is an aperiodic crystal, which does not have a strictly repeating structure, hence could act as a code-script – reputedly the first use of the term in the biological literature… Within a frenzied decade, Crick and Watson had inferred the crystal structure of DNA itself. In their second Nature paper of 1953, they wrote: ‘it therefore seems likely that the precise sequence of the bases is the code which carries the genetical information’. That sentence is the basis of modern biology. Today biology is information, genomic sequences are laid out in silico, and life is defined in terms of information transfer.

Biological organisms use information transfer to control energy transfer. Physical and chemical processes involve energy transfers covered by mathematical energy equations, but in biological organisms the physical and chemical processes not only happen; they happen in the right place, at the right time and to the right degree only if there are mechanisms that control and regulate them in ways appropriate to bringing about a particular function. These mechanisms also conform to the physico-chemical energy equations (they never violate them), but they are not fully explained by them; rather, their full explanation has to invoke concepts of information-based regulatory control, and typically involves form, or structure. Control systems assemble, organise, and up- and down-regulate physico-chemical energetic processes. A control system has to be sensitive to physico-chemical processes and to other control systems, depending on their state, if the whole is to tend towards its end states. This reactive and interactive sensitivity to external states implies the flow and exchange of information. Information is, however, not like energy, which is covered by the energy equations of physics and the corresponding enthalpy equations of chemistry. Rather, information is more like a switch, turning processes off and on, hence being representable typically by 0s and 1s; or like a gate that has continuous positions between open and shut.
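
The switch/gate contrast can be made concrete in a few lines of code. The following is a minimal sketch in Python, purely illustrative: all names and numbers are hypothetical, not drawn from any biological model. The energy side supplies a magnitude; the information side gates it, without altering the energy bookkeeping itself.

```python
# Illustrative sketch only: an information-side 'switch' and 'gate' modulating
# an energy-side process. All names and numbers here are hypothetical.

def proton_flux(gradient: float) -> float:
    """Energy-side quantity: flux proportional to the proton gradient (physics)."""
    return 10.0 * gradient

def gate_opening(signal: bool, position: float) -> float:
    """Information-side quantity: a binary switch (0/1) combined with a gate
    that has continuous positions between shut (0.0) and open (1.0). The signal
    carries information about the cell's state; it is not itself an energy
    quantity."""
    return position if signal else 0.0

def regulated_flux(gradient: float, signal: bool, position: float) -> float:
    """The regulated process: physics supplies the magnitude, information gates it."""
    return proton_flux(gradient) * gate_opening(signal, position)

print(regulated_flux(0.15, True, 0.5))   # 0.75: gate half open
print(regulated_flux(0.15, False, 0.5))  # 0.0: same physics, different information
```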

The new concept of information was constructed inside and outside biology. Critical advances were made by logicians, mathematicians and electrical engineers in the 1940s as part of war efforts to break codes and to secure codes. Here is Andrew Hodges on this point, referring to Turing’s work in Bletchley and Shannon’s in Bell Labs in the early 1940s [4] (p. 317):

Rapidly developing, and not only in Bletchley and Washington, was a new kind of machinery, a new kind of science, in which it was not the physics and chemistry that mattered, but the logical structure of information, communication, and control.

This sentence summarises the way that current science has taken leave of the old reductionist assumption that only the physics and chemistry of matter mattered. Appropriately, the new kind of science works across all the sciences, forging links and creating a unity among previously unconnected problem areas. That is to say, it works across all the sciences except physics and chemistry themselves, which deal with energy transfer; but even then, it has comprehensible, theorised and technological connections with the physics and chemistry of the processes involved.

The logical rules of information flow—such as ‘if A then B’—can take, as a first approximation, energy values as initial state variables, for example the electrical potential difference across the mitochondrial membrane, but the consequents are regulatory variables—such as open or close, or open or close more or less. This points to the need to correct the first approximation of the initial state variables: they are not energy values, but information about them. It is the information that triggers the regulatory response. In interacting control mechanisms, the initial states are also regulatory variables—another gate being open or closed, and so on. Implementation of such rules requires suitable materials in a suitable state—it might be difficult to make a switch out of a cup of water, for example—but apart from this entirely crucial qualification, material composition is unimportant. Another way of making this point is that the energy transfer involved in information transfer is irrelevant to the information transfer. The flow of information depends on regularities, but these regularities are not determined by the energy equations of physics and chemistry; rather, they must rely on other properties of materiality. The concept required at this point is expressed by such terms as structure, form, shape or syntax (to borrow from logic): it is form that codes information. The concept of code, reliant on form or shape, signifies how biology breaks away from physics. Code is fallible, liable to error, and it has an arbitrary quality: the same information can be carried by different forms. Code is a kind of mechanism: it makes things happen in the receiving system, and what it makes happen depends on the state of the emitting system.
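
The arbitrariness and fallibility of code can be sketched in the same spirit (again purely illustrative, with invented encodings): the same regulatory message can be carried by different forms, and any form can be corrupted in transmission.

```python
import random

# Hypothetical sketch: the same message carried by two arbitrary encodings,
# and a noisy copying step showing that code, unlike an energy equation, can err.

ENCODING_A = {"close_gate": "0", "open_gate": "1"}    # one arbitrary convention
ENCODING_B = {"close_gate": "10", "open_gate": "01"}  # a different, equally good one

def transmit(symbol: str, error_rate: float = 0.01) -> str:
    """Copy each bit, with a small chance of flipping it: a fallible channel."""
    flip = {"0": "1", "1": "0"}
    return "".join(flip[b] if random.random() < error_rate else b for b in symbol)

# The same information ('close_gate') in two different forms; either can be
# corrupted in transit, and the receiving system acts on whatever arrives.
print(transmit(ENCODING_A["close_gate"]))
print(transmit(ENCODING_B["close_gate"]))
```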

In short, for life to arise and persist requires much organising and control of the physics and chemistry, and this organising and control relies on information. Here is the oncologist Siddhartha Mukherjee in his book The Gene: An Intimate History making similar points [5] (p. 409):

The universe seeks equilibriums; it prefers to disperse energy, disrupt organisation, and maximise chaos. Life is designed to combat these forces. We slow down reactions, concentrate matter, and organise chemicals into compartments…

Mukherjee goes on to emphasise the importance of the circular flow of biological information: Genes encode RNAs, to build Proteins, to form/regulate Organisms, that sense Environments, that influence Proteins, RNA (and DNA), that regulate Genes…, commenting that it is ‘perhaps one of the few organising principles in biology, the closest thing that we might have to biological law’ [5] (p. 410).

To sum up, current biological models include both biochemistry, subject to physico-chemical energy equations, and models of information-based regulatory control mechanisms. Here is an illustration of such mechanisms, from a paper titled ‘Signaling in Control of Cell Growth and Metabolism’ [6].

In multicellular organisms, cell growth and proliferation are normally not cell autonomous. Receptor-mediated signal transduction, initiated by extracellular growth factors, promotes entry into the cell cycle and reprograms cellular metabolism to fulfil the biosynthetic needs of cell growth and division […] However, despite having become highly dependent on instruction from extracellular growth factors, mammalian cells have retained the ability to sense their internal metabolic reserves and adjust their growth and biosynthetic activities accordingly. Much of this feedback control occurs at the level of posttranslational modifications of signal transduction proteins by key cellular metabolites. Moreover, intracellular metabolites can also regulate chromatin accessibility to control gene expression…

This quotation illustrates, as would so many others, the fundamental and dominant importance of regulatory control processes in current biological/biomedical science.

As already implied, the appearance of regulatory control processes in biology, in addition to the energy-related equations of physics and chemistry, has major implications for the unity of science, paving the way for interacting linkages between the biological, the psychological and the social. This is because elaborations of these processes are found throughout these domains. As illustration, consider this passage from Lane, proposing a reason why mitochondria retain their own local genes [3] (p. 187):

The mitochondrial genes must be right there on site, next to the bioenergetic membranes they serve. I’m told that the political term is ‘bronze control’… In a war, gold control is the central government, which shapes long-term strategy; silver control is the army command, who planned the distribution of manpower or weaponry used; but a war is won or lost on the ground, under the command of bronze control, the brave men or women who actually engage the enemy, take the tactical decisions, who inspire their troops, and who are remembered in history as great soldiers. Mitochondrial genes are bronze control, decision-makers on the ground…

This illustrates how new explanatory concepts, now fundamental to biology, apply also to psychological and social processes. Or it can be put the other way round: when biophysicists want to explain their theoretical models, they help themselves to processes and principles familiar in psychosocial phenomena. The idea of theory reduction to basic science has disappeared.

2.1.4 Error Is Fundamental to Biology

Control of energetic, metabolic processes is what holds back entropy increase, keeping biological organisms alive and functioning, rather than returning to dust. The next point to emphasise is that control processes, dependent on information transfer, can go wrong, unlike energy transfers, which cannot. Here is Lane explaining further the need for local genetic control, ‘decision-making’, in energy production in the mitochondria [3] (p. 187):

Why are such decisions necessary? […] We discussed the sheer power of the proton-motive force. The mitochondrial membrane has an electrical potential of about 150–200 millivolts. As the membrane is just 5 nanometres thick, […] this translates into a field strength of 30 million volts per metre, equal to a bolt of lightning. Woe betide you if you lose control over such an electrical charge!
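
Lane’s arithmetic is easily checked, taking the mid-range figure of 150 millivolts:

```python
# Checking Lane's figure: ~150 mV across a membrane ~5 nm thick.
potential = 0.15              # volts (150 millivolts)
thickness = 5e-9              # metres (5 nanometres)
print(potential / thickness)  # 3e7 V/m: 30 million volts per metre, as quoted
```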

Loss of control leads to poor outcomes [3] (pp. 187–188):

The penalty is not simply a loss of ATP synthesis, although that alone may well be serious. Failure to transfer electrons properly down the respiratory chains to oxygen (or other electron receptors) can result in a kind of electrical short-circuiting, in which electrons escape to react directly with oxygen or nitrogen, to form reactive ‘free radicals’. The combination of falling ATP levels, depolarisation of the bioenergetic membranes and release of free radicals is the classic trigger for ‘programmed cell death’… In essence, mitochondrial genes can respond to local changes in conditions, modulating the membrane potential within modest bounds before changes become catastrophic.

The general conceptual point at issue here is that regulation and control mechanisms keep things going right rather than wrong. Such normativity is not present in the energy equations of physics and chemistry, which always apply and never fail. It arises in biology for the first time, marking a fundamental departure of biology from physical and chemical processes alone. The normativity is implied in all of the key systems theoretic concepts such as regulation, control and information. It derives from the point that biological systems function towards ends, and function well or badly according as they do or do not attain them. In the present illustration the point is that if the electrical charge across the cell membrane is not properly regulated, the cell dies.

Normativity applies at the basic level of genetic replication, as for example in ‘transcription error’ in molecular genetics—or ‘mutation’ as used in evolutionary biology. The concept of mutation is critical in evolutionary biology, crucial to explaining how diversity arises—the condition for natural selection processes to operate. Genes are the vehicles of information passed from one generation to the next, including the required building instructions; they normally run true, creating like for like, but to explain diversity they have to be able to mutate, to make a mistake in the replication. It was a hard question what kind of thing could have this combination of functional qualities—the answer turning out to be the double helix. Watson and Crick’s [7] double-helix structure could replicate itself (by a several-stage process), securing continuity, and it could also mutate, delivering a copy with a changed order of bases. This variation leads to the production of different proteins that may or may not affect the phenotype interacting with the environment, a difference which may or may not in turn affect survival and propagation. But this variation at the phenotypic level is possible because variation is possible at the molecular level, because various nucleotide sequences are possible, all consistent with complex molecular thermodynamic equilibrium. The emergence of biological diversity depends on the kind of error that genes are capable of.
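
This dual capacity, to run true and to err, can be sketched abstractly (a toy model, not of real DNA biochemistry; the error rate is invented for illustration):

```python
import random

# Toy sketch of replication: normally faithful, occasionally erring.
BASES = "ACGT"

def replicate(sequence: str, error_rate: float = 0.001) -> str:
    """Copy a base sequence; each base has a small chance of mutating."""
    copy = []
    for base in sequence:
        if random.random() < error_rate:
            copy.append(random.choice([b for b in BASES if b != base]))  # error
        else:
            copy.append(base)  # faithful copy: like for like
    return "".join(copy)

genome = "ACGTACGTACGT"
offspring = replicate(genome)
print(offspring, offspring == genome)  # usually identical; occasionally a variant
```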

As implicit above, normativity also applies at the level of the whole organism in interaction with the environment: interaction is adaptive insofar as it promotes continuity and functioning, and is otherwise maladaptive. Evolution depends on these two kinds of normativity—genetic mutation and adaptation. These kinds of normativity are biologically fundamental, based on scope for error. Cell respiration is disrupted if sufficient oxygen fails to be delivered; defence mechanisms in a cell can mistake a virus for a metabolite or other signalling molecule; or elements detected in viral particles can cause the human immune system to attack a tissue or cell that would normally be treated as self and not subject to immune attack, with resultant inflammatory response and immune-inflicted damage, up to and including cell death. Error arises in many ways, one of which, just referred to, is that the competition can deceive by mimicking, from viruses on upwards. Life and diversity are closely linked, one upshot being that the same or diverse life forms typically end up in competition for finite energy resources. The competition exploits the possibility of error in information transfer that is fundamental to life forms. All these goings-on do not matter at all to the energy equations of physics and chemistry—everything conforms to them—but some do matter to the biology, hence there is pervasive use for normative contrasts: ‘functions well’/‘functions badly’, ‘right’/‘wrong’, ‘same’/‘error’, life/death, health/disease.

2.1.5 Life Forms: Diversity Amidst the Physics

Living systems exploit slack—they find options—within physical laws. At the basic level of genes there are diverse complex molecules, all thermodynamically stable, consistent with physical, quantum-mechanical energy equations, but interestingly different, because they may have very different consequences for the organism, positive or negative. This much transforms the explanatory framework, but also the ontology, which includes not only physical material but shapes or forms such as double helixes, with their novel causal properties of regulatory control, programming and replication.

The possibility of proliferation of forms and causal potentials within the constraints of physical, quantum-mechanical energy equations is well illustrated in the genetic code and genetic replication, but it has wide application. It can be seen already in chemistry, in the diversity of the elements, in their diverse structures, resulting in variety in physical properties (such as melting and boiling points) and in chemical combinatorial properties, all of which are consistent with energy equations. The chemical elements, and the great diversity of their combinations, including the complex molecules in biological systems, all conform to the equations—but the critical point is that the equations permit chemical diversity and complexity, including those found in biological processes.

Diversity arises from increasing complexity, successions of combinations of parts into greater wholes. The parts essentially interact with one another—otherwise they would not make a whole thing, but would remain isolated, separate things. The wholes become parts of other wholes—and so on. This can be seen in physics, where subatomic particles interactively form into atoms, and in chemistry, where atoms compound into molecules. In biology, all the physics and chemistry continue to apply, but new phenomena appear: regulation of physico-chemical processes by coded information—and with that, especially, the possibility of error. Concepts of error gain traction in relation to wider systems and the functional ends of those systems—ends ultimately responsible for natural biological systems being able, in local areas and temporarily, to avoid the general increase of entropy, to increase energy differences, to make more order out of less order.

The increasingly larger and more complex shapes, structures or forms have distinctive new causal properties. Form here is dynamical, a matter of what the molecule, cell or membrane can do and does. For example, the fusion effects of intense gravity in collapsing stars make new things from hydrogen: metals such as iron, new structures with new physical and chemical properties—and among them elements necessary for life. Once life gets going, diversity takes on a whole new meaning: countless new structures, forms, complexities, capacities and operating principles. Biological processes exploit the physics and chemistry from the start, for example the physics of the proton gradient across a cell membrane, or the energy released according to chemical enthalpy equations in the Krebs cycle. At the complex molecular level, shape (structure) is critical to distinguishing molecules and determining their interactive properties. As one moves to complex organic and biochemical molecules, shape is increasingly exploited. In cellular biology, for example, the function of enzyme catalysts turns on their shape and fit to the relevant biochemical agents—as in ‘lock and key’ models. Biological forms not only conform to physical and chemical energy equations, they manage the energetic processes, with new principles of regulation and information flow. These biological principles operate in the very large spaces permitted by those energy equations, producing new forms on top of the physical elements and chemical combinations that those laws permit. And with the new forms come new operating principles, though what remains at their core are the original components: the need for energy, for preservation, and the critical importance of regulation and information flow.

The above issues are linked to the concept of ‘emergence’, which has a long history in the philosophy of biology and psychology, and in systems theory generally. For a review of the topic, see e.g. [8]. There are detailed treatments in recent philosophy of biology (e.g. [9, 10]).

The new biology, employing causal principles that turn on shape or form in relation to systemic ends, marks a radical departure from the physicalism that has its roots in the seventeenth-century mechanisation of the world picture. These developments also, as is well known, point backwards to the science and philosophy that preceded the development of seventeenth-century science, specifically to Aristotle. Aristotle had a broad vision of causation, comprising four kinds—material, efficient, formal and final—arguing that formal and final causes were likely to be especially relevant to biological processes (see, e.g., Andrea Falcon’s critical review [11]). The new seventeenth-century mechanics, however, required—in these terms—only the first two, while the latter two dropped out of the science as redundant. When the biological sciences developed in the nineteenth century, research programmes emulated the natural sciences, discovering the chemistry within biological processes. However, reducing the phenomena to chemistry was not such a clear option for other parts of biology, especially the study of the formation of whole organisms and whole species: embryology and evolutionary biology. These have always seemed to require concepts different from those in the natural sciences, more akin to Aristotle’s formal and final causes.

It was always final or teleological explanation that was the most problematic for natural science. It seems to imply that the ends must in some way be already present at the start, and it has been assumed—notwithstanding Aristotle’s original disavowal [11]—that this could only be so if the ends are in some way ‘preconceived’ by some purposive intelligence or designer. It is probably true that teleological explanation of a change supposes that the end-conditions must somehow be present at the beginning, and it is also true that genes do not in any way ‘have in mind’ the proteins they produce. It is, however, exactly at this point that the information-processing paradigm does its conceptual work, because the genes encode (code for) the proteins they produce. In this sense—the sense of encoding—the ends are already present at the start, and in this sense the information-processing model envisages something like teleological explanation. A typical explanation in the information-processing paradigm is that particular genes code for particular proteins. Needless to say, much hangs on what ‘code for’ means. But what it does not mean is that some protein-like shape already exists in the genes, still less that the genetic material has a mental image of the proteins to be produced. Rather, ‘code for’ means: in normal circumstances, in the normal cellular environment, in a complex series of interlocking steps, such-and-such DNA sequence produces such-and-such protein. The coding concept secures the idea that the ‘ends’ are already present—in some sense—and are instrumental in production (under normal circumstances) of the end result. This dynamic, productive sense of ‘encoded information’ is more explicitly captured by terms like ‘programme’ or ‘instructions’, with their clearer implication of direction towards an end, and connotes more clearly that the production process follows rules (if… then…) that are not inviolable physico-chemical laws but violable metabolic regularities.
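
A sketch can make this concrete. The mapping below is a small genuine subset of the standard genetic code, written in DNA coding-strand form; the surrounding machinery is drastically simplified and merely illustrative.

```python
# 'Code for' as a rule-governed production process: the protein is present at
# the start only as encoded instructions. A small genuine subset of the
# standard codon table (DNA coding-strand form); everything else simplified.

CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp", "TTT": "Phe",
    "AAA": "Lys", "GGC": "Gly", "TAA": "STOP",
}

def translate(dna: str) -> list:
    """Read three bases at a time; stop at a stop codon. 'In normal
    circumstances' this sequence produces this product; the rule is a
    violable regularity, not an inviolable physical law."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTAAATAA"))  # ['Met', 'Phe', 'Lys']
```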

In summary, the information-processing paradigm in biology secures the fundamental point that the functional end of a system—the result it tends in normal circumstances to produce—is in a defined sense already present in the system prior to production, as instructions and a mechanism for the production. These kinds of principles of causal explanation involving forms and ends were anticipated by Aristotle, as was the insight that they are likely to apply particularly in biology.

The concept of genetic coding recreates a refined, scientific version of the idea that the ends are—as programming instructions—present at the start. No such idea, however, is implied by Darwin’s theory: on the contrary, evolution as envisaged by Darwin does not admit of teleological explanation in any sense, but rather provides a quite different alternative in terms of random genetic mutation, adaptation and natural selection. Once natural (as opposed to human-made) functional systems come into being, they admit of teleological explanation, expressed in the idea that states of biological systems encode—instructions for—production processes. Genes coding for embryonic development is a fundamental example. But no analogue of the information-processing paradigm applies to evolution as a whole; the teleology applies only to systems with design—forms suited to securing particular ends—that result from the evolutionary process, not to the evolutionary process itself.

2.2 The Limitations of Physicalism

2.2.1 Preamble and the Argument in Brief Lay Terms

This is the most explicitly philosophical section of the book because it addresses positions in the contemporary analytic philosophy literature, in which physicalism holds an important place. The section may be less accessible, and of less interest, to readers without a background in philosophy, but we include it here because physicalism is of fundamental importance to the conceptualisation of the sciences and how they relate to one another, and therefore to understanding the conceptual foundations of the biopsychosocial model. The importance of physicalism in Engel’s original formulation of the biomedical model, some historical expressions of physicalism, and the recognition by current commentators of the need for distinctive biopsychosocial causal interactions were reviewed as context for the general biopsychosocial model at the beginning of Sect. 1.3.

Physicalism in its clearest, strong version holds that everything that there is, and all causation, is physical; alternatively expressed: everything is physical, covered by physical laws. This doctrine exerts massive downwards reductionist pressure on all other sciences: their ontology and their causal principles ultimately have to be physical, or else illusory. Chemistry passes under the bar, much of biology is physics and chemistry, psychology is problematic, and social science even more so. All of this is bad news for any biopsychosocial model. Or, the other way round, a viable biopsychosocial model is bad news for physicalism.

The key step in the shift away from physicalism and physicalist reductionism occurs in current biology and has been examined in the previous section. In brief, current biology since the mid-twentieth century envisages not only physical and chemical energetic processes but also regulatory control of those processes. Crucially, regulatory control mechanisms never contravene the energy equations of physics and chemistry (because nothing ever does), but they are nonetheless a distinct type of causation. Regulatory control mechanisms are typically dynamical forms, the causal properties of which turn on shape as opposed to material constituent parts. This is clear in the cosmic prize-winning case of the complex molecule DNA, the double helix, and evident in the supporting cast of, for example, enzymes working like keys in locks. From here, once dynamical life forms with regulatory control functions take off from the physics and chemistry of the matter, from compliance with energy equations alone, they become ever more complex and diverse in evolution, eventually to include psychological and social phenomena. There are certainly reasons to distinguish regulatory control by genes and enzymes from regulatory control by nervous systems, and from regulatory control by social rules and regulations, but the key thing from a philosophical point of view is that they can all be conceptualised under this very general heading, they can causally interact, and, especially, they are not tied down by, though always compliant with, the energy equations of physics and chemistry. In short, the ontology and causal theory of current biology can envisage psychological and social processes, making the biopsychosocial model viable.

This, in brief, is the argument by which we propose to work around physicalism. The rest of the section is more philosophically technical and detailed.

2.2.2 Physicalism

Physicalism and related reductionism have been extensively discussed in contemporary analytic philosophy over the past few decades. It would be fair to say that they are mainstream views, though also challenged, defended and modified. While the challenges are substantial, it would be fair to say, nevertheless, that physicalism has no serious competitors, no viable large-scale alternatives. Such alternatives as are envisaged in this mainstream literature, the philosophies to which physicalism is opposed—dualism and vitalism—are historical and long discredited in the science. We suggest that contemporary alternatives are to be found in current biological theory, key features of which, it will be argued in subsequent chapters, carry over into psychology and behavioural science.

In broad terms, physicalism is the view that everything is physical and there is nothing else besides. This ontology most obviously would comprise a view as to causation and causal laws, namely that all causation and all causal laws are physical, or, to put it another way: ‘physics explains everything’. This would seem to follow clearly enough: since there are only physical events, there are only physical events to explain, so the only explanations are physical. Or again: physical things have physical causal powers, and therefore, since there are only physical things, there are only physical causal powers. The matters of ontology and causation should probably be tied together in a tight knot. If there seemed to be only physical things, and if we had only physical causal explanations, then the physicalist metaphysics would stay as simple as this. The broad problem for physicalism is just that these two conditions have never held. It has never seemed that there were only physical things, and never that we explained everything by physics. The ontological problem was and remains easily enough disposed of by saying: it may seem that there are many kinds of non-physical things—animation, perceptions—but these are really physical things, or else mere appearances, not real after all. Similar moves can be made about apparent causes and effects, especially mind over body: the causation is illusory, or really physical.

The one place at which this imperious dogmatising falters is where apparently non-physical entities and causal processes are invoked by empirical sciences, finding associations and following methods for the determination of causes articulated by Mill. Just as the real backing for the mechanisation of the world picture and the beginnings of physicalism was the success of the science, mechanics, it can only be undermined by more of the same, that is, by more, but different, successful science. These new sciences were established in the nineteenth century, with advances through the twentieth: chemistry, biology, psychology, the social sciences—with all their large and small sub-fields.

As these new sciences developed, physicalism became entangled with reductionism: the assumption that, and the project of trying to show that, these new sciences can be reduced to physics; meaning that their ontology and causal principles can or could ultimately be eliminated in favour of the physical. Such strong reduction—in an ideal physicalist world, elimination—known as, for example, semantic or theory reduction, has not however fared well. It does well in chemistry and in parts of physiology, struggles seriously in psychology, and is hopeless in social science. As noted in Sect. 1.3, under the heading “Theorising Biopsychosocial Interactions—Not Parallel Worlds”, by around the 1970s something of a halt was called, with acknowledgement that the sciences apart from physics–chemistry, over and above them, what Fodor called the ‘Special Sciences’, could not be reduced or eliminated, and that there were, after all, causal concepts and principles over and above those of physics [12–14].

That might have spelt the end of physicalism, except for the option, unattractive but needs must, of disconnecting ontology from causation and causal explanation. Physicalism could be retained as a view of what stuff there is—only physical—while acknowledging that, where theory or semantic reductionism fails, there are constructs of non-physical entities, processes and causes in the sciences above physics–chemistry. This depleted version of physicalism, an ontological doctrine only—not about causes—has a corresponding weaker reductionist doctrine, called ontological or metaphysical reduction, without commitment to epistemological or explanatory reduction, and the combination is sometimes called ‘non-reductive physicalism’ [see, e.g., 15]. Insofar as this weaker form of physicalism is an ontological claim only, involving no claims about causal explanations, it has probably given up on being much or anything to do with the sciences, and becomes a purely ‘metaphysical’ doctrine.

As suggested above, however, the move of separating off ontology from causation is very awkward, requiring as it does a conception of things (entities, properties or processes) somehow independent of what they do, independent of their causal powers and interactions. The awkwardness shows up in various related ways. Consider mental states, the traditional anomaly for physicalism: if—in the ontological version of physicalism—they are allowed to be causal, connected by psychological principles rather than physical laws, what account can be given of their ontological status, given the assumption that the only ontology is physical?

The basic problem is not ontological, however—we can say what we like about what there is, if this makes no commitment to causal properties—rather, the basic problem involves theorising causation. While non-reductive physicalism seeks to acknowledge non-physical causes, it still retains physical causal laws, implicitly including a massive theory of causation; but since these physical laws cover all physical processes, and since these are the only events that there are, the difficult question arises: where is there any room for causation by anything else, by mental events for example (whatever their curious ontological status may be)? The ontological issues in contemporary physicalism are often theorised in terms of ‘supervenience’, and the conundrum in the theory of causation as to how there can be mental causes as well as physical causes is sometimes called the ‘dual causation’ or ‘causal overdetermination’ problem (see, e.g., [16, 17]).

The many types of physicalist ‘reductionism’ that have had to be invoked in this philosophical literature, outlined above, indicate just how much physicalism has struggled to survive in the current scientific climate. The depleted version left at the end is ontological only, seeking to subtract commitments on causality, although actually retaining the assumption that physical causation covered by physical laws is the only kind. It is this assumed ‘completeness of physics’ that delivers the core, best argument for physicalism, and we consider it next.

2.2.3 Regulatory Mechanisms Do Not Affect Energy Equations

There is a core argument for physicalism based on the so-called causal completeness of the physical. This argument is for a strong form of physicalism in the sense that it would prohibit the idea of any non-physical cause making a difference to energy and energy exchanges of physical material. Here is the philosopher David Papineau presenting the argument, in several stages corresponding to historical developments in the science [18] (p. 9):

In the middle of the nineteenth century the conservation of kinetic plus potential energy came to be accepted as a basic principle of physics… In itself this did not rule out fundamental mental or vital forces… but … does imply that any such special forces must be governed by strict deterministic laws to ensure they never led to energy increases.

During the course of the twentieth century received scientific opinion became even more restrictive about possible causes of physical effects, and came to reject sui generis mental or vital causes, even of a law governed and predictable kind. Detailed physiological research, especially into nerve cells, gave no indication of any physical effects that cannot be explained in terms of basic physical forces that also occur outside living bodies. By the middle of the twentieth century, belief in sui generis mental or vital forces had become a minority view. This led to the widespread acceptance of the doctrine now known as the “causal closure” or the “causal completeness of the physical”, according to which all physical effects have fully physical causes.

This is a powerful argument in favour of physicalism. Tracking the science, it successfully excludes non-physical forces capable of making energy differences. Physicalism wins if the opposing team is ‘spooky’ energy-exchanging forces, as in dualism and vitalism.

Current biology and biomedicine, however, go off at a tangent to this problematic. As outlined in the preceding section, the new life sciences envisage distinctive forms, structures and information-based regulatory control mechanisms—in addition to energy exchanges and conservation covered by the equations of physics. However, and of course, this departure from physics respects the physical energy equations. In short, there are distinctive biological structures and causes—regulatory mechanisms—but they don’t interfere with the physics; they exploit the physics, rely on it, manage it—but they don’t change it.

Consider the analogy of a chemical industrial plant running, for example, the Haber process for production of ammonia from hydrogen and nitrogen. The model of the process certainly includes the core chemical reactions and the associated enthalpy (energy) equations. However, for the chemical reaction to run at all, to run forwards and not too much backwards, the hydrogen and nitrogen have to be present in quantities in an appropriate range, at temperatures and pressures high enough, though not too high for the containers, aided by the presence of catalysts. The model of all this includes regulatory control mechanisms for delivery and removal of materials, temperature and pressure control, and so on. Several points can be noted:

  • First, the regulatory control mechanisms never affect energy exchange equations and never flout the principle of conservation of energy. They obviously don’t, because nothing does—but in any case they don’t.

  • Second, the chemical reactions can occur outside the factory. Equally, the basic energy exchange physico-chemical reactions in, for example, biological cells could occur outside of cells.

  • Third, as a qualification, in both cases they only occur—inside or outside the factory—if the necessary reactants come together in a particular sequence, in particular amounts, at particular temperatures, etc. Bringing this about—in the chemical industrial factory, as in the biological cell—requires substantial organisational and control mechanisms.

The physical/chemical energy equations cover some aspects of the Haber process: how much energy is absorbed or produced, and so on. The principles of regulatory control model other aspects, answering questions such as: ‘How is the rate of reaction kept within a range, so as not to run too fast or too hot?’, ‘Why does this gate shut at this time, cutting off the supply of hydrogen?’ There will be a physical process that shuts the gate, but, if the gate shutting is part of a regulatory mechanism (is indeed a ‘gate shutting’), it will involve a physical process that can ‘go wrong’. For example, the gate has a particular shape, and the process that shuts it may be the arrival of an object which fits it like a key; the key turning in the lock is a physical process, and it does not violate any physical equations, because nothing does, but the process is also a regulatory one, signified by the fact that it can go wrong, because, for example, the key has a fault in it, or because there are competitor saboteurs at work with fake keys. Models of regulatory control mechanisms are distinct from physico-chemical equations covering energy exchanges; they are a different kind of causal-explanatory framework, suited to different processes, answering different sorts of question.

As a corollary, there is no problem of ‘dual causation’ or ‘causal overdetermination’. In the present context the problem would be: how can a regulatory mechanism cause anything when all the causing is already accomplished by physical events covered by physical laws? But the problem doesn’t arise, because regulatory mechanisms do not concern energy exchanges covered by physical laws. Nowhere in these models (in chemical engineering or biology) is the same process being causally explained twice.

The benefits of models of causation by regulatory control extend to promoting research questions, supporting predictions, enabling control, diagnosing dysfunction, and fixing things. For example, if the model of regulatory control of a chemical factory includes that a particular gate opens or closes depending on the rate of reaction relative to parameters of temperature and pressure, the model can be used to predict when the gate will open or close. The model also prompts a research programme to investigate the mechanisms by which the gate is sensitive, within certain ranges, to the rate of reaction, temperature and pressure. If the plant blows up, we want to know why. Generally, the model guides understanding of dysfunction or breakdown. If, for example, the reaction is running too hot, becoming inefficient or raising the risk of meltdown, the model tells us that one cause might be a malfunctioning gate: for example, the hinges are rusted, or the thermostatic devices regulating its function are faulty. We can also use the model to intervene: in the case of dysfunction one might fix the rust or the regulatory feedback mechanism. Use of such models is obvious enough in chemical engineering, and the analogues pervade physiology and biomedicine.
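
The kind of model in play can be given as a minimal feedback-control sketch, with hypothetical parameters throughout (this is not real Haber-process engineering). It illustrates prediction (when the gate will close) and diagnosis (a rusted hinge as a failure of the regulatory function, not of the physics).

```python
# Minimal feedback-control sketch; all parameters are hypothetical.

def gate_setting(temperature: float, setpoint: float = 700.0) -> str:
    """Regulatory rule: close the hydrogen gate if the reaction runs too hot."""
    return "closed" if temperature > setpoint else "open"

def step(temperature: float, gate: str) -> float:
    """Crude dynamics: an open gate feeds the reaction (heating); a closed gate
    lets it cool. The energy bookkeeping itself is untouched by the rule."""
    return temperature + (15.0 if gate == "open" else -10.0)

temperature = 680.0
for _ in range(6):
    gate = gate_setting(temperature)       # information about the temperature
    temperature = step(temperature, gate)  # the regulated energetic process
    print(f"gate {gate}, T = {temperature:.0f} K")

# A 'rusted hinge' would be modelled as gate_setting returning 'open' regardless
# of temperature: the physics still holds, but the regulatory function fails,
# and the model predicts the reaction then runs away.
```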

It may be objected: ‘but factories have designs that promote functional ends because they are human-built, not natural systems’. But this is a pre-Darwinian thought. Natural systems, biological ones, have this kind of design—regulatory control mechanisms—resulting from random mutation and natural selection.

Causation by regulatory control has distinctive properties, among the most curious of which is causation by events that don’t happen! This phenomenon has been theorised in current philosophy of causation and is taken up below. First we give reasons why the weakest form of physicalism, really limited to an ontological claim only, without any presumption about causation, is unattractive.

2.2.4 Weaker—Ontological Only—Physicalism Is Problematic

In their Stanford Encyclopedia entry on Supervenience, in the section titled ‘Coincident Entities and the “Grounding Problem”’, Brian McLaughlin and Karen Bennett consider the classic example of a lump of clay (Lumpl) fashioned into a statue (Goliath). The two have different modal properties—such as that the one survives being squashed into a ball while the other does not—which seems to entail that they are different things [19] (p. 51). They continue:

The main objection to the view that Goliath and Lumpl are distinct is what can be called ‘the grounding problem’. How can Lumpl and Goliath differ in their modal properties, given that they are alike in every other way? What grounds their difference in persistence conditions? In virtue of what do they have the persistence conditions they do?

The obvious way in which the lump of clay and the statue are the same is that they are made of the same material. The obvious way they differ is in shape or form. According to the view we have argued for in this chapter, shape or form, over and above material composition, can be of critical importance in determining causal properties. This is less evident in the classic lump of clay/statue example, because statues do not have standout causal powers over and above those due to their material composition. The position changes, however, if we shift the example to the DNA double helix, a dynamical form which, in its normal operating environment, has amazing causal properties such as replication and coding for protein production. These properties could reasonably be called ‘emergent’, in the sense that they are not evident in the formless, unorganised, higher-entropy sum of the molecule’s elements.

This line of thought implies that the very weak form of physicalism as an ontological claim only, about ‘metaphysical grounding’, is bound to be deficient. Shamik Dasgupta writes [20] (p. 557):

It has been suggested that many philosophical theses—physicalism, nominalism, normative naturalism, and so on—should be understood in terms of ground… What is physicalism? Not just physicalism about the mind, but physicalism period. What kind of a thesis is it? We know what the rough picture is: at some basic level the world is constituted wholly out of physical stuff, and everything else—football matches, string quartets, consciousness, values, numbers—somehow ‘arises out of’ that physical stuff. Or, to use other locutions, everything else is “fixed by” or ‘determined by’ or ‘is nothing over and above’ that physical stuff. Or, as the metaphor goes, all God had to do when making the world was make the physical stuff, and then her job was done.

The last sentence seems to imply that nothing interesting, or nothing at all, has happened since the first fraction of a second after the Big Bang—or, staying with its metaphor, the sentence neglects what God made on all the other days. The formation and phases of stars, the formation of elements, metals, complex molecules, conditions for life on at least one planet, the whole evolutionary process of organisms, of mammals and primates—what status do these have according to this metaphysical grounding thesis? Presumably the grounding thesis allows that such things exist, now or in the past, but limits itself to a claim about what these things are constituted out of, and this in a highly reductive sense, which recognises only what is common between hydrogen and iron, for example, and not their differences, including their different combinatorial and causal properties; or again, which admits only what is common between metallic iron and biological tissue, not their differences, including their different causal properties, such as that metallic iron contains no regulatory mechanisms but biological tissue does. However, Dasgupta supposes that this minimalist ontology can have explanatory value, indeed—curiously—‘full’ explanatory value [20] (p. 558):

To say that some facts ground another is just to say that the former explain the latter, in a particular sense of ‘explain’. When I say that some facts ground another, I mean that the former fully explain the latter.

‘Fully explain’ is too strong, however, if we wish to explain not only the material similarity between hydrogen, iron and biological cells but also the differences in their causal properties. It can be said that specifying what material something is made of explains it to some extent—though probably only because its causal properties are being assumed, for example the mechanical properties of physical matter; but there are so many other things and causal properties to explain, such as chemical combinatorial possibilities and properties that turn on structure, or systemic functioning that turns on achieving or maintaining end states. In short, there is need for a principled variety of kinds of explanation, of which Aristotle’s typology of causes, briefly reviewed at the end of the preceding section, is the original. In those terms, physicalism can be regarded as a weak ontological claim only, specifying the material cause of everything as physical. There are, however, in addition, efficient causes, an approximate example being the operation of mechanical forces, and formal and final causes (dynamical forms that tend to an end-state), which have a particular explanatory role to play in modelling biological systems.

2.2.5 Causation by Events That Don’t Happen

Recent novel philosophical analyses of causation have drawn attention to the curious fact that some causal pathways involve events that do not happen! This is a very clear sign of causation that does not involve energy transfer.

Jonathan Schaffer begins his paper titled ‘Causation by Disconnection’ like this [21] (p. 285):

It is widely believed that causation requires a connection from cause to effect, such as an energy flow. But there are many ways to wire a causal mechanism. One way is to have the cause connect to the effect, but another is to have the cause disconnect what was blocking the effect.

Using the example of a bomb detonation mechanism, Schaffer points out that it can be wired in various ways, including: pressing the button generates an electrical current which connects to the bomb and makes it explode, or pressing the button disconnects an electrical current that was inhibiting an independent source from triggering the explosion. Schaffer notes the similarities between this latter case of causation by disconnection and other recent approaches to causation, such as Ned Hall’s on causation by ‘double prevention’, involving absence of events or ‘negative’ causation [22, 23]. In short, this recent philosophical work identifies a kind of causal connection—variously identified as ‘disconnection’, ‘negative’, ‘double prevention’—that is not a matter of energy flow.

The important point for our present purpose is that the examples of this other kind of causal connection all involve functional mechanisms, whether artefacts or natural, biological systems. Schaffer uses detonator wiring diagrams, but the footnote explaining the diagram conventions refers to neuronal firing or not-firing, stimulatory and inhibitory connections [21] (p. 286n). In other words, we are dealing here with biological causation. James Woodward in his Making Things Happen notes that there are many scientific examples of causation by double prevention, particularly in biology, giving as illustration Jacob and Monod’s lac operon model for Escherichia coli, and noting that biologists describe this as a case of ‘negative control’ [24] (pp. 225–226). This clearly illustrates that causation by double prevention is situated within the explanatory paradigm of regulatory control so far outlined.
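
The bare logic of double prevention can be written out in a few lines. The sketch below follows the lac operon pattern in drastically simplified form (the real system involves allolactose as the effective inducer, catabolite repression, and more): lactose prevents the repressor from preventing transcription.

```python
# Double prevention ('negative control'), on the simplified lac operon pattern.

def repressor_active(lactose_present: bool) -> bool:
    """Lactose disables the repressor (the first prevention prevented)."""
    return not lactose_present

def transcription(lactose_present: bool) -> bool:
    """The repressor, when active, blocks transcription (the second prevention)."""
    return not repressor_active(lactose_present)

print(transcription(lactose_present=True))   # True: block on the block, genes expressed
print(transcription(lactose_present=False))  # False: repressor in place, genes silenced
```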

The idea of causal pathways that involve absences of events is probably tied inextricably to the systems theoretic concepts of functioning towards ends and of contributions of part functioning to whole functioning. In this context, whole functioning will depend on whether or not inputs are received from another part, and both cases are of interest. So, for example, the closing of a gate, and the consequent cessation of delivery of a chemical into a reaction container, is as interesting as the gate being open—otherwise there would be no point in using the term ‘gate’. Distinctions like open/closed and happens/doesn’t happen are integral to the normativity of regulatory control, and they have no analogue in the physico-chemical laws/equations covering energy exchanges.

The critical point is that all these curious kinds of explanation posited in recent philosophical work on causation—‘disconnection’, ‘negative’ causation, ‘double prevention’—are to be distinguished from causal connections that rely on energy transformation and conservation. The standard philosophical view about causation has been to emphasise this latter kind of causal connection. Schaffer [21] (p. 286) attributes this standard view widely, to Wesley Salmon, Phil Dowe, Peter Menzies and David Armstrong. In a strong form, the proposal is to limit causal processes to those that transmit conserved quantities—the clearest example of which is energy in physics. The new work in philosophy of causation is consistent with the approach we have taken in this chapter, which distinguishes regulatory control from the energy transformations and conservation covered by physical equations.

2.2.6 Philosophy of Biology Notes

Recent philosophy of biology has focussed on systems theoretic concepts and principles, such as (dynamical) systems/mechanisms, part/whole relationships and complexity. Books include William Bechtel and Robert C. Richardson’s Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research [10]; Sandra D. Mitchell’s Biological Complexity and Integrative Pluralism [25]; and William C. Wimsatt’s Re-engineering Philosophy for Limited Beings [9]. This is a very rich literature dealing with many topics in philosophy of biology, including those few covered here, in much more detail and with many examples.

The different focus here is on the philosophy of biology as it defines key conceptual features of the first component of the biopsychosocial model, especially to bring out that these features open up the way to a coherent view of biopsychosocial ontology and causation appropriate for the biopsychosocial model of health and disease. For this purpose we have emphasised the relation of biology to physics–chemistry, and especially the fundamental role of information as well as energy: the theory started by Erwin Schrödinger, picked up by Ludwig von Bertalanffy, and used in contemporary biophysics and genetics by, for example, Nick Lane and Siddhartha Mukherjee. Generally this line of thought has not been the focus in philosophy of biology. Moreover, there are some signs of antipathy towards it in the mainstream philosophical literature, directed against the core notion of information, as considered next.

2.2.7 Biological Information Is Semantic (Capable of Error)

We noted at some length in Sect. 2.1, under the heading “Error Is Fundamental to Biology”, that normativity, including the possibility of error, is fundamental to biological regulatory control mechanisms. Normativity is, however, entirely anomalous for physicalism. Physicalism envisages only the few physical qualities, related to mass, momentum and energy—and it especially does not envisage any of the family that includes (semantic) information or intentionality, characterised by aboutness or directedness, and the possibility of error. Here for example is Jerry Fodor [26] (p. 97):

The deepest motivation for intentional irrealism derives… from a certain ontological intuition: that there is no place for intentional categories in a physicalist view of the world; that the intentional can’t be naturalised.

This ontological intuition is correct: biological information, bound up with regulation and the possibility of error, has no place in the physicalist view of the world, assuming this envisages only energy exchanges and the physical laws/equations that govern them. Hence there is enormous pressure from physicalism to disqualify or downgrade the information-processing paradigm in biology, specifically to deny the possibility of error. This is actually quite difficult to do: subtract information-processing concepts, which always involve normativity, from contemporary biology textbooks, and there is practically nothing left.

The disqualification move is especially unattractive in the case of genes and genetic information. This suggests a compromise of limiting error-prone information to genes. Here for example is Paul Griffiths [27] (p. 295):

There is a genetic code by which the sequence of DNA bases in the coding regions of a gene corresponds to the sequence of amino acids in the primary structure of one or more proteins… The rest of ‘information talk’ in biology is no more than a picturesque way to talk about correlation and causation.

Such a concession is philosophically pointless, however; it takes only a single exception—though in this case, as it happens, a massive one (genetics/life)—to disprove the metaphysical claim that there is no error-prone information in nature.
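To make the semantic point concrete, here is a minimal sketch (in Python, purely illustrative) of the genetic code as an error-prone code: a partial codon table, a translation function, and a single-base substitution of the kind that produces, for example, the sickle-cell variant of beta-globin (glutamate replaced by valine). Only the codon assignments are the standard ones; the sequences and names are toy assumptions for illustration.

```python
# Illustrative sketch only: the genetic code as a lookup table, and a point
# mutation as an 'error' in information transfer. The codon assignments are
# standard; the sequences are toy examples, not real gene sequences.

CODON_TABLE = {  # DNA coding-strand codons -> amino acids (small subset)
    "ATG": "Met", "GAG": "Glu", "GTG": "Val", "AAA": "Lys", "TAA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the sequence three bases at a time; stop at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE[dna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

normal = "ATGGAGAAATAA"  # contains GAG -> Glu
mutant = "ATGGTGAAATAA"  # one base changed: GAG -> GTG, i.e. Glu -> Val

print(translate(normal))  # ['Met', 'Glu', 'Lys']
print(translate(mutant))  # ['Met', 'Val', 'Lys']
```

The physics of the mutant molecule is impeccable; the ‘error’ exists only relative to the code and the function it serves, which is just the normativity at issue.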

Another possibility is to envisage semantic information processing in the mind–brain but not elsewhere in biology, except perhaps, again, in genes. William Bechtel [28], for example, highlights the concept of information in the stronger semantic sense in modelling the mind/brain. In a section entitled Mental Mechanisms: Mechanisms That Process Information, Bechtel argues that biological phenomena such as cellular respiration ‘can be adequately characterised as involving physical transformations of material substances’ [28] (p. 22), while ‘mental mechanisms are ones that can be investigated taking a physical stance (examining neural structures and their operations) but also, distinctively and crucially, taking an information processing stance’ [28] (p. 23). In this discussion Bechtel qualifies the proposal that sub-mental/neuronal biology has no information processing, making an exception, like Griffiths as quoted above, of genetics [28] (p. 22n).

However, information-processing and, with it, the possibility of error are not exceptional cases confined to genes and the brain; they are rather the rule in biology. The same applies all over the body—for example, to the endocrine system’s management of many internal functions, described, for example, here [29]:

The endocrine system is a network of glands that secrete chemicals called hormones to help your body function properly. Hormones are chemical signals that coordinate a range of bodily functions. The endocrine system works to regulate certain internal processes… and systems [such as] growth and development, homeostasis (the internal balance of body systems), metabolism (body energy levels), reproduction, response to stimuli (stress and/or injury).

And—evident in the endocrine disorders—it can all go wrong.
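The logic of such regulation, and of its going wrong, can be sketched in a few lines. The following is a toy negative-feedback loop, purely illustrative and not a model of any actual endocrine pathway; the set point, gain and ‘sensor bias’ are invented parameters. A fault on the sensing side of the loop, an error in the signal rather than in the energetics, leaves the controlled variable settled far from its set point.

```python
# Toy negative-feedback regulation (illustrative only; not a model of any
# actual hormone system). A controller nudges a level back towards a set
# point; a biased sensor, i.e. an error in the signal, causes dysregulation.

SET_POINT = 100.0
GAIN = 0.5  # strength of the corrective response

def step(level: float, disturbance: float, sensor_bias: float = 0.0) -> float:
    perceived = level + sensor_bias               # what the regulator 'senses'
    correction = GAIN * (SET_POINT - perceived)   # negative feedback
    return level + correction + disturbance

healthy, faulty = 120.0, 120.0
for _ in range(20):
    healthy = step(healthy, disturbance=2.0)
    faulty = step(faulty, disturbance=2.0, sensor_bias=-30.0)

print(round(healthy))  # ~104: held near the set point despite disturbance
print(round(faulty))   # ~134: the loop still runs, but regulates to the wrong value
```

Every step obeys the arithmetic; nothing in the physics has failed. The failure is normative: the system no longer tracks the value it is supposed to maintain.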

2.3 Current Biomedicine Is Conducive to the Biopsychosocial Model

Consider again Engel’s characterisation of the Biomedical Model [30] (p. 130):

The biomedical model embraces both reductionism, the philosophic view that complex phenomena are ultimately derived from a single primary principle, and mind-body dualism, the doctrine that separates the mental from the somatic. Hence the reductionist primary principle is physicalistic; that is, it assumes that the language of chemistry and physics will ultimately suffice to explain biological phenomena.

Engel uses the term ‘reductionism’ in this passage in two senses: the first is commitment to a single primary principle explaining complex phenomena, specifically a biological principle; the second has to do with the reduction of biology to physics and chemistry. The line of thought in this chapter counts against the complete reduction of biology to physics and chemistry, though it retains partial reduction. Much biology relies on the energy exchanges determined by quantum mechanical and chemical combinatorial enthalpy equations. However, these energy exchanges have to be controlled, as do all other biological processes, by regulatory mechanisms involving information transfer. Biology and biomedicine in the last half-century have developed as an exquisite combination of these two kinds of science.

Interestingly, Engel recognised the fundamental role of the new information science in medicine in his ‘Foreword’ [31] to the book on the subject by Foss and Rothenberg [32]; he acknowledged the shortcomings of the term ‘biopsychosocial’, which emphasises structural boundaries rather than integration, and welcomed the authors’ term ‘infomedical’. However, while the thinking behind these considerations was sound, the new terminology has not caught on, at least not as a replacement for ‘biopsychosocial’.

From the systems theory point of view there is no reason at all to quarrel with the partial reduction of biology to physics and chemistry, evidenced in scientific research programmes to determine the biophysics and biochemistry of, for example, cell metabolism or blood oxygen transport. A connected point: nor is there any reason to regard biomedicine as anything other than a scientific medical research programme with a remarkably successful track record. The general direction of biomedical research programmes from the mid-nineteenth century was towards the study of internal organs and systems, penetrating beneath, literally inside, the complex presentations of signs and symptoms of disease, and beyond that, deeper inside the bodily organs and systems, to the structure and functioning of cells and the underlying chemistry of molecular processes. Research strategies shifted away from traditional naturalistic observational methods towards laboratory-based experimentation, requiring elucidation of experimental methods to determine causation, famously developed in the late nineteenth century by Robert Koch in his postulates for use in the new microbiology. Resounding successes in the control of infectious diseases and the development of penicillin were followed by many further developments from the mid-twentieth century, in new sciences such as clinical genetics and neuroscience, and new treatment technologies (e.g. [33]).

Biomedical research from the middle of the nineteenth century led the way in understanding the basic physics and chemistry of biological processes, but to this it can be added that since the mid-twentieth century it has also been at the cutting edge of that whole new aspect of biology involving information-based regulatory control mechanisms, the fallibility of which is fundamental to the understanding of disease. In short, and of course, nothing does ‘the biological’ better than biomedicine. So if the biopsychosocial model wants to include the best concerning the first in its triumvirate, it had better aim to include biomedicine.

On the other hand, there are also all the other aspects of health, disease and health care that have come to light or to prominence over the same period of the last few decades, outlined in Sect. 1.1, which require more than biomedical science: the epidemiology of social determinants of health; the increasing relative prevalence of non-communicable diseases compared with infectious diseases, raising issues of adjustment and quality of life with chronic health conditions; and broader social changes which have put patient rights and autonomy at the forefront of practice.

This raises the issue of the second type of reductionism that Engel attributed to the biomedical model in the above quotation: reducing complex phenomena to the biological alone. While biology was supposed to reduce to physics and chemistry, and while this supposition had a priori support from physicalism, it would follow fast, without much thought, that the explanation of diseases, like everything else, would ultimately be in terms of biology = physics and chemistry. Within the confines of physicalism, the possibility of distinctive psychological or social explanatory principles can hardly arise. Conversely, it does arise in a post-physicalist thought space that can envisage psychological and social factors, as well as biological ones, as candidate explanations. In this context, the biomedical assumption that there is a primary biological cause becomes an empirical bet, without a priori metaphysical/ideological support. The bet is that illnesses have a biological cause—explaining ‘most’ of the outcome variance. Whether this is true of any given type of illness is a matter for research, and we already know enough to say that it is not true of all illnesses—and not at every stage. This refers to the emerging evidence implicating psychosocial factors reviewed briefly at the beginning of Sect. 1.2.

What is required to comprehend psychosocial reality and causation is a post-physicalist framework that can accommodate more than physics and chemistry. But this is exactly what is opened up by the recent paradigm shift in biology and biomedicine that we have been considering. The main point is that fundamental biological phenomena—form or structure, functioning towards ends, regulatory control and inter-systemic information transfer—complexify and diversify into what we call the psychological and the social. As noted at the end of the first chapter, under the heading “Developing the General Model”, the evolution of life forms ends up with human psychological and social phenomena, but ‘ends up with’, as currently understood in the science, is not a matter of logic or scientific law, but is entirely contingent—accidental. The original biological function is to maintain biological life, and this preoccupation carries through to the psychological and the social. However, psychological life has conditions in addition to biological life—agency and recognition—and all these matters are managed in forms of social organisation and control. Further, all kinds of biopsychosocial functioning, once we leave the physics and chemistry, are liable to error, vulnerable, illness-prone. This expansion into the biopsychosocial conditions of health and disease is the business of the remaining chapters.

A caveat before closing this section: it is clear that there is in the mix a fourth ingredient as well as the biological, psychological and social, namely, ‘the environment’. Having our environment identified solely as ‘social’ is no use at all, not in general and not in any of the life and health sciences. Conceptually, from basic genetics and cell biology upwards, it makes no sense to model living processes except in relation to interactions with the environment. This is also the clear context of Schrödinger’s linkage between life and the second law of thermodynamics. Certainly it has been clear to public health physicians that for good health we need food, water and accommodation. The ‘biological’ and biomedicine imply conditions and interactions with the non-social, physico-chemical environment. However, at the current time ‘the environment’ demands explicit acknowledgement in any proposed general model of health and disease because of the many urgent environmental challenges we face: threats to global temperature stability, and to energy, water and food security, with their impacts on health and their interactions with social policy. This reflects the increasing importance of geography and the environmental sciences, filling the gap created by the historical three-way division between the biological/physiological, psychological and social sciences.