
Analogy-Based Inference Patterns in Pharmacological Research

Part of the book series: Boston Studies in the Philosophy and History of Science (BSPS, volume 338)

Abstract

Analogical arguments are ubiquitous vehicles of knowledge transfer in science and medicine. This paper outlines a Bayesian evidence-amalgamation framework for formally exploring different analogy-based inference patterns with respect to their justification in pharmacological risk assessment. By relating formal explications of similarity, analogy, and analog simulation, three sources of confirmatory support for a causal hypothesis are distinguished in reconstruction: relevant studies, established causal knowledge, and computational models.

This work is supported by the European Research Council (grant 639276) and the Munich Center for Mathematical Philosophy (MCMP).

Notes

  1.

    In this passage, Hill refers to (i) severe disabilities (even death) among babies linked to the over-the-counter drug thalidomide, marketed in Germany as Contergan from the late 1950s until its withdrawal in 1961 and prescribed to alleviate morning sickness in pregnant women, and (ii) miscarriage or children born with congenital rubella syndrome (CRS) due to infection with the rubella virus during pregnancy.

  2.

    As implied by this way of distinguishing reliability and relevance, the relevance weight (attached to a given evidential report) is really meant here to capture the degree of external validity. Of course, there are other ways in which a study can be relevant to the hypothesis – for example, if it is conducted by an acknowledged authority. In the framework of Landes et al. (2017), this way of being relevant to the hypothesis would be encoded in the reliability weight, which collects all sources of bias.

  3.

    The graphical d-separation criterion (with d for directional) distinguishes conditionally dependent (sets of) variables from conditionally independent ones by drawing on structural information, i.e., on how arrows are directed along the paths between the (sets of) variables under consideration; see, e.g., Geiger et al. (1990).
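    The criterion can be made concrete in code. Below is a minimal sketch of the standard reachability ("Bayes ball") formulation of d-separation (an illustrative implementation, not code from the chapter); the collider network at the end is a hypothetical example.

```python
from collections import defaultdict

def d_separated(parents, x, y, given):
    """True iff x and y are d-separated by the set `given` in the DAG
    encoded by `parents` (a dict mapping each node to the set of its parents)."""
    children = defaultdict(set)
    for node, ps in parents.items():
        for p in ps:
            children[p].add(node)

    # Phase 1: collect `given` together with all of its ancestors.
    anc, stack = set(given), list(given)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in anc:
                anc.add(p)
                stack.append(p)

    # Phase 2: search over (node, direction) pairs; 'up' means the trail
    # currently runs against the arrows, 'down' means it runs along them.
    visited, reachable = set(), set()
    frontier = [(x, 'up')]
    while frontier:
        node, direction = frontier.pop()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node not in given:
            reachable.add(node)
        if direction == 'up' and node not in given:
            frontier += [(p, 'up') for p in parents.get(node, ())]
            frontier += [(c, 'down') for c in children[node]]
        elif direction == 'down':
            if node not in given:
                frontier += [(c, 'down') for c in children[node]]
            if node in anc:  # an observed collider (or ancestor of one) opens the path
                frontier += [(p, 'up') for p in parents.get(node, ())]
    return y not in reachable

# Hypothetical collider structure Ind -> Rep <- Alpha:
dag = {'Rep': {'Ind', 'Alpha'}}
print(d_separated(dag, 'Ind', 'Alpha', set()))    # True: path blocked at the collider
print(d_separated(dag, 'Ind', 'Alpha', {'Rep'}))  # False: conditioning opens the path
```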

  4.

    In this case, independence between the weight variables and the hypothesis “may or may not be a realistic assumption”, as Bovens and Hartmann concede, and they extend their discussion to cases where such weight nodes (reliability of evidence reports) and the hypothesis are made dependent through auxiliary theories (see Bovens and Hartmann 2003, pp. 107ff.). For the purpose of this paper, though, the standard for assigning values to weight variables is assumed to be fixed prior to hypothesis testing.

  5.

    The Bayes net structure in Fig. 5.1, for example, illustrates that Ind_1 is influenced by (since d-connected to) α_1 once we know the value of the "collider variable" Rep_1.
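    The explaining-away effect referenced here can also be checked numerically. The following toy simulation (binary variables and an AND-combination are stand-in assumptions, not the chapter's model) shows that the two parents are marginally uncorrelated but become dependent once we condition on the collider:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 200_000

ind = rng.random(n) < 0.5    # stand-in for Ind_1
alpha = rng.random(n) < 0.5  # stand-in for alpha_1, drawn independently of Ind_1
rep = ind & alpha            # stand-in for Rep_1: a collider of Ind_1 and alpha_1

# Marginally, the two parents are (close to) uncorrelated.
print(np.corrcoef(ind, alpha)[0, 1])              # ~ 0.0

# Conditional on the collider's value, a dependence appears ("explaining away").
mask = ~rep
print(np.corrcoef(ind[mask], alpha[mask])[0, 1])  # ~ -0.5
```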

  6.

    Landes et al. (2017) contains a non-exclusive list of six causal indicators derived from Hill’s guidelines in Hill (1965). See also Poellinger (forthcoming) for a discussion of the conceptual relationships of these causal indicators and the ramifications of theory choice in causal assessment.

  7.

    LaFollette and Shanks (1995) argue, e.g., that animal studies are limited to hypothesis generation.

  8.

    In particular, the question must be answered whether the partial result can be combined in an additive fashion with information about further sub-mechanisms, or whether complex inter-dependencies forbid partitioning the full mechanism into stand-alone modules.

  9.

    Note that I am distinguishing evidential relevance (as a property of evidence relevant in causal assessment) from causal relevance (as a property of a variable causally relevant to a second variable in a causal model), cf. Footnote 17. Also see Footnote 2 above for a remark on other interpretations of relevance.

  10.

    Note that for such an interpretation the prior of the network is required to be set up in such a way that the α-variables render the Rep-variables independent of (i.e., irrelevant to) the hypothesis Hyp in the extreme case. Formally: P(Hyp | Rep_k = true, α_k = irrelevant) = P(Hyp | α_k = irrelevant) = P(Hyp). See also my discussion of the prior in Sect. 5.1.1 above.
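    A quick numerical sketch confirms that such a prior renders the report confirmationally inert; all numbers below are hypothetical, and the only structural requirement is that P(Rep_k | Hyp, α_k = irrelevant) does not vary with Hyp:

```python
import numpy as np

p_hyp = np.array([0.3, 0.7])  # hypothetical prior over Hyp = (true, false)
p_alpha_irr = 0.4             # hypothetical prior P(alpha_k = irrelevant)

# P(Rep_k = true | Hyp, alpha_k = irrelevant): flat in Hyp -- the constraint.
p_rep_given_irr = np.array([0.5, 0.5])

# Bayes' theorem: P(Hyp | Rep_k = true, alpha_k = irrelevant)
joint = p_hyp * p_alpha_irr * p_rep_given_irr
posterior = joint / joint.sum()
print(posterior)  # [0.3 0.7] -- identical to the prior P(Hyp)
```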

  11.

    This suggests that the distinction between changeable and unchanged aspects of the study population will be a static one, fixed prior to modeling. Nevertheless, Paul and Healy discuss cases in which the modeler is forced to revisit her model because a clinical trial impacts relevant characteristics of the population in a sort of feedback loop (see Paul and Healy 2016 on transformative treatments). For the present purpose it is unproblematic to assume that the initial model can be refined at later stages to accommodate previously exogenous assumptions as endogenous parameters/relations.

  12.

    If the causal hypothesis is thought of as a causal graph, D, E, and U are meant to represent designated (sets of) variables with token values in a causally interpreted structure M (possibly encoding the specifics of direct causal relations and assumptions about causal in/dependencies on type level). Note that, more generally, M and M_k can be thought of as sets of structural constraints, i.e., as classes of causal graphs.

  13.

    These accounts of similarity share the intuition that comparing two things means (i) comparing certain aspects of those things and (ii) aggregating one's evaluation of those aspects in a certain manner. Lewis' idea of comparative similarity is tightly connected to his concept of causation, where a cause–effect relation is evaluated in terms of the corresponding counterfactuals. The cause reveals its power in the effect event while the rest of the world remains unperturbed, i.e., as similar as possible to the state of the world prior to the cause event. Lewis suggests a priority ordering for the assessment of similarity, where local changes in physical facts are understood as a lesser deviation from actuality than far-reaching global changes in natural laws; see Lewis (1973b). The geometric account locates an object's properties (deemed relevant for comparison) in a multi-dimensional space by assigning a specific value to each of those properties. Similarity is then spelled out in terms of vector distance from a reference object. The question of how to assign such values is circumvented in the contrast approach, which deals well with similarity as partial identity: degrees of similarity are assessed by assigning weights to co-instantiated identical properties (which might make the approach more suitable for comparing different states of one and the same object than for comparing different objects).
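    A minimal sketch of the geometric account in code (the feature values, relevance weights, and the distance-to-similarity transform are illustrative choices, not fixed by the account itself):

```python
import numpy as np

def similarity(a, b, weights):
    """Locate two objects in feature space and turn their weighted
    Euclidean distance into a similarity score in (0, 1]."""
    distance = np.sqrt(np.sum(weights * (a - b) ** 2))
    return 1.0 / (1.0 + distance)

study  = np.array([0.8, 0.5, 0.2])  # hypothetical aspect values of the study object
target = np.array([0.7, 0.5, 0.9])  # hypothetical aspect values of the target
w      = np.array([2.0, 1.0, 0.5])  # hypothetical relevance weights per aspect

print(similarity(study, target, w))  # ~ 0.66
```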

  14.

    In the formal notation used here, ∼ denotes some (reflexive, transitive, and symmetric) equivalence relation (equivalence w.r.t. a given property) such that for a domain A, some object a ∈ A, and an equivalence relation ∼ on A: [a] := {x ∈ A | x ∼ a}. The expressions [D], [M], and [U] are to be understood as each encoding a specific equivalence relation, since – to be precise – each category comes with its own standards for how equivalence classes are to be generated. If standards are set high, e.g., D_k might be in the class [D] only if it is identical with D, while comparing U and U_k will naturally demand flexibility for possibly very different populations. (I will not add a further index, though, to avoid notational clutter.)
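    In code, generating the class [a] under a chosen standard of equivalence is a one-liner; the domain and the sample relation below are hypothetical:

```python
def equivalence_class(a, domain, equivalent):
    """[a] := {x in domain | x ~ a} for an equivalence relation `equivalent`."""
    return {x for x in domain if equivalent(x, a)}

# Hypothetical low-bar standard: two study populations count as equivalent
# whenever they share the same species label.
populations = {("U", "rat"), ("U_1", "rat"), ("U_2", "human")}
same_species = lambda x, y: x[1] == y[1]

print(equivalence_class(("U", "rat"), populations, same_species))
# {('U', 'rat'), ('U_1', 'rat')}
```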

  15.

    Componentwise multiplication of two vectors (also referred to as the "Hadamard product") multiplies vectors A and B (both of length n) element by element and returns a vector C (also of length n). Example: ⟨a, a, a⟩ ∘ ⟨0, a, b⟩ = ⟨0, a², ab⟩.
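    In NumPy, for instance, this componentwise product is simply the * operator on arrays:

```python
import numpy as np

a = np.array([2.0, 2.0, 2.0])
b = np.array([0.0, 1.0, 3.0])
print(a * b)  # [0. 2. 6.] -- the Hadamard product, computed element by element
```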

  16.

    In this example, study and target are compared merely in terms of population characteristics \(\overrightarrow{u_{(k)}}\). In general, though, differences between the substances and between the causal structures will also play a role in assessing the weight of a report, as explicated in Eq. 5.8. For the sake of illustration it may be assumed here that substances and causal structures have been found equivalent w.r.t. the present purpose.

  17.

    See Pearl (2000, Sect. 7.3.3) for a discussion of causal relevance. Note that I am distinguishing causal relevance as part of the causal knowledge (encoded as report nodes) from epistemic or inferential relevance (encoded as attributional Rlv weight nodes).

  18.

    The way it is presented here, one's assessment of such similarity between study and target is obviously relative to the set of aspects included in one's considerations. There is an argument to be made that possible further differences not considered should lower one's confidence in the similarity assessment. In principle, in the Bayesian framework employed, it is possible to add an unspecified counter-weight (much like an error term) in order to encode one's uncertainty about potentially neglected, though relevant, differences between study and target. Yet, assigning a number to this weight is again a subjective task. Indeed, I would like to argue that such analogy-based arguments are inherently perspectival: they rest on a specific choice of relevant aspects (reasonably motivated) and a specific way of relating those aspects (non-arbitrarily). Thus, making the ingredients of such arguments explicit helps refine or potentially also refute them.

  19.

    In the following, I deviate from Dardashti et al. (2017) in notational details.

  20.

    As an illustration, consider the following: In many cases, evidence for similarity of the drug's causal effects comes from mechanistic knowledge, maybe in relating the molecular structure of the substances to known classes of biochemical processes. So, if D is known to be harmful because of its capacity to block some specific mechanism, and if this capacity is judged to be relevant in comparing D and D′, then such blocking behavior should be part of Hyp's testable consequences Ind_k. Owing to differences in the investigated substances, the testable consequences of Hyp and Hyp′ are in general not identical, but they can be related non-arbitrarily in motivating a specific theoretical mapping, i.e., some isomorphism at a suitably chosen level of description.

  21.

    This topic is a subject of current discussion in the philosophy of science: some authors regard computational models simply as implemented variants of scientific models as such (e.g., Frigg and Reiss 2009), while others emphasize, as a special feature of computer simulations, the possibility of experimenting with such models as virtual test objects (e.g., Parker 2009 and Morrison 2015).

  22.

    In the context of modeling with Bayesian networks, this demand is captured in the requirement that all variables represent distinct events.

  23.

    This strategy introduces a secondary set of model-external, empirically grounded arguments into the picture, first motivated by the logical cross-link between the two frames and later guided by anchoring considerations. See Osimani and Poellinger (forthcoming) for a detailed reconstruction of model creation, verification, and validation for computer simulation in systems biology.

  24.

    For a discussion of surprise in computer simulation see Parke (2014).

References

  • Bartha, P. (2010). By parallel reasoning: The construction and evaluation of analogical arguments. Oxford: Oxford University Press.

  • Bartha, P. (2013). Analogy and analogical reasoning. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2013 ed.).

  • Beebe, C., & Poellinger, R. (forthcoming). Bayesian confirmation from analog models.

  • Bovens, L., & Hartmann, S. (2003). Bayesian epistemology. Oxford: Oxford University Press.

  • Britton, O. J., Bueno-Orovio, A., Van Ammel, K., Lu, H., Towart, R., Gallacher, D., & Rodriguez, B. (2013). Experimentally calibrated population of models predicts and explains intersubject variability in cardiac cellular electrophysiology. Proceedings of the National Academy of Sciences of the United States of America, 110, E2098–E2105.

  • Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376.

  • Cartwright, N. (2011). Predicting what will happen when we act. What counts for warrant? Preventive Medicine, 53(4), 221–224. Special Section: Epidemiology, Risk, and Causation.

  • Cartwright, N., & Stegenga, J. (2011). A theory of evidence for evidence-based policy. In P. Dawid, W. Twining, & M. Vasilaki (Eds.), Evidence, inference and enquiry (Chapter 11, pp. 291–322). Oxford: Oxford University Press.

  • Carusi, A., Burrage, K., & Rodriguez, B. (2012). Bridging experiments, models and simulations: An integrative approach to validation in computational cardiac electrophysiology. American Journal of Physiology – Heart and Circulatory Physiology, 303(2), H144–H155.

  • Casini, L., & Manzo, G. (2016). Agent-based models and causality: A methodological appraisal. The IAS Working Paper Series (Linköping University Electronic Press), 7, 1–80.

  • Chan, A.-W., & Altman, D. G. (2005). Epidemiology and reporting of randomised trials published in PubMed journals. The Lancet, 365(9465), 1159–1162.

  • Dardashti, R., Hartmann, S., Thébault, K., & Winsberg, E. (forthcoming). Confirmation via analogue simulation: A Bayesian analysis.

  • Dardashti, R., Thébault, K., & Winsberg, E. (2017). Confirmation via analogue simulation: What dumb holes could tell us about gravity. The British Journal for the Philosophy of Science, 68(1), 55–89.

  • Diez Roux, A. V. (2015). The virtual epidemiologist – Promise and peril. American Journal of Epidemiology, 181(2), 100–102.

  • Doll, R., & Peto, R. (1980). Randomised controlled trials and retrospective controls. British Medical Journal, 280, 44.

  • Frigg, R., & Reiss, J. (2009). The philosophy of simulation: Hot new issues or same old stew? Synthese, 169(3), 593–613.

  • Geiger, D., Verma, T., & Pearl, J. (1990). Identifying independence in Bayesian networks. Networks, 20(5), 507–534.

  • Guala, F. (2010). Extrapolation, analogy, and comparative process tracing. Philosophy of Science, 77(5), 1070–1082.

  • Hesse, M. B. (1952). Operational definition and analogy in physical theories. British Journal for the Philosophy of Science, 2(8), 281–294.

  • Hill, A. B. (1965). The environment and disease: Association or causation? Proceedings of the Royal Society of Medicine, 58(5), 295–300.

  • LaFollette, H., & Shanks, N. (1995). Two models of models in biomedical research. Philosophical Quarterly, 45(179), 141–160.

  • Landes, J., Osimani, B., & Poellinger, R. (2017). Epistemology of causal inference in pharmacology: Towards an epistemological framework for the assessment of harms. European Journal for Philosophy of Science, 8(1), 3–49.

  • Lewis, D. (1973a). Causation. The Journal of Philosophy, 70(17), 556–567.

  • Lewis, D. (1973b). Counterfactuals (2nd ed.). Hoboken: Wiley-Blackwell.

  • Luján, J. L., Todt, O., & Bengoetxea, J. B. (2016). Mechanistic information as evidence in decision-oriented science. Journal for General Philosophy of Science, 47(2), 293–306.

  • Morrison, M. (2015). Reconstructing reality: Models, mathematics, and simulations. Oxford: Oxford University Press.

  • Osimani, B., & Poellinger, R. (forthcoming). A protocol for model validation and causal inference from computer simulation.

  • Parke, E. C. (2014). Experiments, simulations, and epistemic privilege. Philosophy of Science, 81(4), 516–536.

  • Parker, W. S. (2009). Does matter really matter? Computer simulations, experiments, and materiality. Synthese, 169(3), 483–496.

  • Paul, L. A., & Healy, K. (2016). Transformative treatments. Noûs, 52(2), 320–335.

  • Pearl, J. (2000). Causality: Models, reasoning, and inference (1st ed.). Cambridge: Cambridge University Press.

  • Poellinger, R. (forthcoming). On the ramifications of theory choice in causal assessment: Indicators of causation and their conceptual relationships.

  • Revicki, D. A., & Frank, L. (1999). Pharmacoeconomic evaluation in the real world. PharmacoEconomics, 15(5), 423–434.

  • Shepard, R. N. (1980). Multidimensional scaling, tree-fitting, and clustering. Science, 210(4468), 390–398.

  • Steel, D. (2008). Across the boundaries: Extrapolation in biology and social science. Oxford: Oxford University Press.

  • Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327–352.

  • Unruh, W. G. (2008). Dumb holes: Analogues for black holes. Philosophical Transactions of the Royal Society A, 366, 2905–2913.

  • Upshur, R. (2005). Looking for rules in a world of exceptions: Reflections on evidence-based practice. Perspectives in Biology and Medicine, 48(4), 477–489.

  • Weisberg, M. (2012). Getting serious about similarity. Philosophy of Science, 79(5), 785–794.

  • Weisberg, M. (2013). Simulation and similarity: Using models to understand the world. Oxford: Oxford University Press.

  • Worrall, J. (2007). Evidence in medicine and evidence-based medicine. Philosophy Compass, 2(6), 981–1022.

Acknowledgements

This paper was presented at workshops and conferences in Munich, Sydney, Groningen, Bologna, Bochum, and Exeter. I greatly benefited from the comments and suggestions made by the audiences, and I am particularly thankful for personal discussions with Cameron Beebe, Lorenzo Casini, Radin Dardashti, Stephan Hartmann, Adam LaCaze, Jürgen Landes, Barbara Osimani, Jan-Willem Romeijn, Karim Thébault, Naftali Weinberger, and Michael Wilde, whose valuable comments helped me clarify my aims and shape the final version of this paper.

Author information

Correspondence to Roland Poellinger.


Copyright information

© 2020 Springer Nature Switzerland AG

Cite this chapter

Poellinger, R. (2020). Analogy-Based Inference Patterns in Pharmacological Research. In: LaCaze, A., Osimani, B. (eds) Uncertainty in Pharmacology. Boston Studies in the Philosophy and History of Science, vol 338. Springer, Cham. https://doi.org/10.1007/978-3-030-29179-2_5
