Abstract
Computer simulations and experiments share many important features. One way of explaining the similarities is to say that computer simulations just are experiments. This claim is quite popular in the literature. The aim of this paper is to argue against the claim and to develop an alternative explanation of why computer simulations resemble experiments. To this end, experiment is characterized in terms of an intervention on a system and the observation of its reaction. Thus, if computer simulations are experiments, either the computer hardware or the target system must be intervened on and observed. I argue against the first option using, among others, the non-observation argument. The second option is excluded by, e.g., the over-control argument, which stresses epistemological differences between experiments and simulations. To account for the similarities between experiments and computer simulations, I propose to say that computer simulations can model possible experiments and in fact often do so.
Notes
E.g. Gramelsberger (2010).
E.g. Naumova et al. (2008).
Keller (2003), p. 203, in scare quotes.
Humphreys (1994), p. 103.
Dowling (1999), p. 261, in scare quotes.
Consult Imbert (2017), Sec. 3.5 for a very useful overview of the recent debate.
But see fn. 68 for a short remark on analog simulations.
I can draw on a rich philosophical literature about experiments. See Hacking (1983), Part B, Janich (1995), Morrison (1998), Heidelberger (2005), Radder (2009), Bogen (2010) and Franklin (2010) for introductory pieces or reviews about scientific experiments. Radder (2003) is a recent collection in the philosophy of experiment. See Falkenburg (2007), particularly Ch. 2, for a recent account of experimentation with applications to particle physics. Biological experiments are philosophically analyzed by Weber (2005), experiments in economics by Sugden (2005). For studies about modern experiments see also Knorr-Cetina (1981) and Rheinberger (1997).
Kant himself is eager to stress the conceptual work necessary to ask nature a well-defined question, but this is not important in what follows; our focus concerning the idea that nature is asked a question is rather on the causal interference with the system that is observed.
In the example of the potter, I cannot exclude that the potter runs an experiment, if additional conditions are fulfilled. But even if she does, the respective experiment would not count as scientific. Scientific experiments are embedded in a broader scientific practice. As a consequence, the epistemic difference an experiment is supposed to make is more pronounced.
See e.g. Balzer (1997), p. 139.
A recent monograph about theory-ladenness is Adam (2002).
See Peschard (forthcoming), Sec. 1 for a similar account of experiment.
One may of course argue that natural and thought experiments are not really experiments, but this is not the place to do so.
I’m grateful to an anonymous referee for pointing me to this fact.
Cf. the “identity thesis” mentioned by Winsberg (2009a), p. 840.
So far, the focus of the philosophy of computer simulations has been on knowledge.
To be fair, I should mention that the paper by Morrison also provides indications that she does not fully support CET. For one thing, the wording of her central claims is very cautious; she never says that CSs are experiments, but rather, e.g., that there are no reasons to maintain the distinction (e.g. p. 55). This claim also seems to be restricted to some “contexts” (p. 33). She further admits that computer simulations do not involve the manipulation of the target system (fn. 16 on p. 55). This concession does not seem to matter much for her argument; so, maybe, she does not think that intervention is crucial for experiment.
CE, CE+ and CME try to clarify the relationship between experiments and simulations. But what exactly do they mean by CSs? There are broadly two ways of conceiving CSs depending on whether a CS is supposed to be one run with a simulation program or whether it is what Parker (2009), p. 488 calls a “computer simulation study”, which also includes writing the program, testing it, etc. (e.g. Frigg and Reiss 2009, p. 596; Parker 2009, p. 488). Analogous questions can be raised about experiments too, e.g. is the construction of the detector used in an experiment part of the latter or not? In what follows, I will not rely upon any specific proposal as to what is included in an experiment or a CS. Rather, I will assume that experiments and CSs are identified in a similar way such that the claims under consideration have a chance of being true. For instance, when we discuss CE, it would be too uncharitable to assume that experiments include detector building and similar activities, while a computer simulation is simply one run of a simulation program. What is important though for my argument is that every experiment includes an intervention on the object of the experiment.
Cf. Hughes (1999), p. 137.
Note that we are here not talking about observing in the sense of looking at. Observation on this interpretation does not suffice for experimenting.
See Rechenberg (2000), Chs. 2–3 for a brief description of the hardware of computers; the details do not matter for my purposes.
Barberousse et al. (2009) seem to agree with my claim that the working scientist does not observe the hardware, for they write that
“the running computer is not observed qua physical system, but qua computer, i.e., qua symbolic system” (p. 564).
I’m grateful to an anonymous referee for raising this objection.
See Press et al. (2007), p. 9 for an example of a suitable program.
Such an argument is suggested by Imbert (2017), Sec. 3.5.
This is so if intervention in the second condition is meant to be intentional. If this is not so, then the argument needs the third condition which makes it very likely that the intervention is intentional.
The example of a simulation that is parallelized is also used by Barberousse et al. (2009, p. 565), albeit in a different argument.
My arguments against CEH from this section can easily be generalized to show that computations carried out on computers (and not just simulations) do not include an experiment on the hardware.
Some measurement apparatuses used in experiments function in a similar fashion. We do not observe certain measurement apparatuses, but rather use them to observe something else (cf. fn. 2 on p. 7 above). To do so we have to trust the instruments. There is thus a close parallel between computers as instruments and instruments in experimentation. Cf. Humphreys (2004), Ch. 1 and Parker (2010).
For the purposes of this paper, we need not engage with Morrison’s argument in detail because Giere (2009) has made a convincing case against it.
My first two arguments against CET may be summarized by the claim by Arnold (2013, pp. 58–59) that CSs do not operate on the target itself.
Here, the last inferential step excludes the possibility that CSs include an experiment on the target even though the results of CSs are not experimental. This possibility can indeed be dismissed as being far-fetched and useless. Even if it were realized, we could not directly appeal to experimentation to explain the epistemic power of CSs.
Properly speaking, it is the results of CSs (or experiments) that are (not) over-controlled; but for convenience, I will sometimes say that CSs (experiments) are (not) over-controlled. See fn. 42 for a justification.
To elaborate a bit: The argument starts from what the working scientist wants. The reason is that the third condition on experiment is cast in terms of what the experimenter wants; it does not require that the experimenter be successful. The argument further assumes that the scientist is rational in that she only wants something that she takes to be possible and that she draws immediate consequences of her beliefs. For the objective absence of over-control, it is further assumed that the scientist correctly believes that she can learn about a reaction of the system. We can grant the additional assumptions because proponents of CET will not want to save their claim by claiming that scientists are not rational or that they do not know basic things about the concrete setting. If we don’t find the additional assumptions convincing, we may restrict ourselves to successful experiments in which the goal mentioned in the third condition is in fact fulfilled. Proponents of CET will not want to exclude such experiments. We can then argue that over-control would prevent success.
Note though that the notion of a reaction does a lot of work for the argument since we understand reaction in a way that excludes objective over-control. It may be objected that, in the conditions on experiment, “reaction” may instead be taken to be any consequence of the intervention. The assumption that experiments exclude over-control would then need additional justification. I think that the discussion provided in this section does provide this justification.
I’m grateful to an anonymous referee for raising this objection.
There may be exceptions. For instance, I may use a system that is known to follow certain dynamical equations to identify a solution to these equations. Here I need to know the equations, otherwise I can’t interpret the experiment in the way I wish to. To save my claim that experiments are not over-controlled, I may either deny that we are really talking about an experiment here. Alternatively, I may claim that knowledge of the equations is not needed for the experiment proper, but only for an inference that is based upon it.
See e.g. Beisbart and Norton (2012).
Here, simulations are distinguished from arguments to the effect that the simulations get it right. When arguments of this type are part and parcel of simulations, the latter can of course empirically refute a set of assumptions, as experiments can. But it is trivial that CSs in this extremely thick sense can do this. This trivial point cannot be the rationale for CET. – Note also that purely mathematical theories may be falsified using computer simulations. But mathematical theories are not my concern here.
What then are the conditions under which a CS can replace an experiment? Well, it can do so if the CS is known to reflect the intervention and the reaction of the system experimented on in a sufficiently faithful way. What sufficiently faithful representation means depends on the aspects that the working scientists are interested in and on the desired level of accuracy.
Barberousse et al. (2009), pp. 562 and 565 make a similar point concerning the hardware of a computer. Their target is not so much an alleged experimental status of simulations as the idea that the physicality of the simulations is crucial for their epistemic power.
That CSs model experiments is sometimes assumed in the sciences, too, see e.g. Haasl and Payseur (2011), p. 161. The claim also makes sense of the following remark by Metropolis and Ulam (1949) about Monte Carlo simulations:
“These experiments will of course be performed not with any physical apparatus, but theoretically.” (p. 337).
As I shall show below, my argument also goes through for an alternative account of modeling that does not assume similarity.
See also Beisbart (2014) for the various ways in which computer simulations are related to models.
This analysis is focused on deterministic simulations, but can be generalized to Monte-Carlo simulations. The latter produce many sample trajectories of the computer model. Each sample trajectory arises from initial conditions subject to quasi-intervention and produces outputs that may be quasi-observed.
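The picture sketched in this note can be made concrete with a minimal, purely illustrative example (the random-walk model and all names in it are my own assumptions, not drawn from the paper): each Monte-Carlo run sets an initial condition (the quasi-intervention), evolves the model, and records the end state of the sample trajectory (the quasi-observation).

```python
import random

def sample_trajectory(x0, steps, seed):
    """One Monte-Carlo run: set initial conditions, evolve, record the outcome."""
    rng = random.Random(seed)     # per-run randomness
    x = x0                        # quasi-intervention: the initial condition is set
    for _ in range(steps):
        x += rng.choice([-1, 1])  # assumed toy dynamics: a simple random walk
    return x                      # quasi-observed output of this sample trajectory

# Many sample trajectories, each arising from a quasi-intervened starting point.
results = [sample_trajectory(x0=0, steps=100, seed=s) for s in range(1000)]
mean_end = sum(results) / len(results)
```

The point of the sketch is only structural: the ensemble of runs, not any single deterministic trajectory, is what gets quasi-observed in a Monte-Carlo simulation.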
I have here concentrated on the conditions on experimentation introduced in Section 2 above. These conditions have not been shown to be sufficient. This is not a problem because we are here not interested in the claim that an experiment is indeed run, but only in the proposition that an experiment is modeled. Now a model need not fully reflect its target. Thus, not every condition on experiment needs to be reflected in the model, as long as crucial aspects of experiments are represented. This clearly seems to be the case.
A model in which all initial conditions and parameter values are fixed seems highly artificial, and even in this case, one may vary some of the model assumptions, which would suffice for quasi-intervention.
My claim that a CS can be or even is a modeled experiment is not meant to imply that this CS can be or is an experiment. A modeled experiment is not an experiment, just as fake snow is not snow.
If a CS does not actually model a possible experiment because it is supposed to reflect the way the target system does behave as a matter of fact, we may still say that it models a natural experiment. In such an experiment, no intervention is needed simply because the system that is observed happens to fulfill the conditions that are at the focus of the inquiry.
Some similarities on the list from Section 3 also apply to CSs for which CME2 does not hold true. We can explain such similarities by saying that the simulations model only the behavior of a system as a reaction to certain conditions rather than also an intervention in which the system is subjected to the conditions.
See e.g. Mainzer (1995), p. 467.
It would be too much to claim that my proposal provides an independent explanation of the similarities listed. The reason is that some similarities are built into the proposal. What the proposal does though is to re-organize the similarities in a useful way. Of course, CE (plus, maybe, CE+) potentially reorganizes the similarities too, but CE and CE+ have been rejected on independent grounds.
They may do so by imitating natural experiments, which are not experiments according to our partial explication.
See also Imbert (2017), Sec. 5.2 for a similar strategy.
All this was very helpfully pointed out by an anonymous referee who also noted that the problem is not just restricted to my account, but rather affects any view that embraces the following two claims: i. Experiments are not over-controlled, while CSs are. ii. CSs can replace experiments.
In a similar way, Nagel (1986), p. 93 criticizes Berkeley’s so-called master argument because Berkeley confuses something that is needed to create an image with the content of the image.
Can my main thesis, viz. that CSs can, and often do, model possible experiments, be generalized to what is called analog simulation? I here take it that, in an analog simulation, the target and a physical model of it may be described using the same type of dynamical equations. The dynamics of the model is then investigated to learn about the dynamics of the target (Trenholme 1994). For instance, electric circuits may be used to study a fluid in this way (see Kroes (1989) for this example). Now, my claim that CSs can model possible experiments and often do so seems to apply to such analog simulations, too. For instance, if I set up an electric circuit to model a particular fluid, I’m modeling an experiment on the fluid. Note, however, the following difference between analog and computer simulations: If a possible experiment on the target is modeled in an analog simulation, this is a real experiment on the model system (the analogue). As I have argued above in Section 5, this is not so in computer simulations.
References
Adam, M. (2002). Theoriebeladenheit und Objektivität. Zur Rolle von Beobachtungen in den Naturwissenschaften. Frankfurt am Main und London: Ontos.
Arnold, E. (2013). Experiments and simulations: Do they fuse? In Durán, J.M., & Arnold, E. (Eds.) Computer simulations and the changing face of scientific experimentation (pp. 46–75). Newcastle upon Tyne: Cambridge Scholars Publishing.
Balzer, W. (1997). Die Wissenschaft und ihre Methoden. Freiburg und München: Karl Alber.
Barberousse, A., Franceschelli, S., & Imbert, C. (2009). Computer simulations as experiments. Synthese, 169, 557–574.
Barker-Plummer, D. (2016). Turing machines. In Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy, Winter 2016 edn. Metaphysics Research Lab, Stanford University.
Baumberger, C. (2011). Understanding and its relation to knowledge. In Jäger, C., & Löffler, W. (Eds.) Epistemology: contexts, values, disagreement. Papers of the 34th international Wittgenstein symposium (pp. 16–18). Austrian Ludwig Wittgenstein Society.
Beisbart, C. (2012). How can computer simulations produce new knowledge? European Journal for Philosophy of Science, 2(2012), 395–434.
Beisbart, C. (2014). Are we Sims? How computer simulations represent and what this means for the simulation argument. The Monist, 97/3, 399–417.
Beisbart, C., & Norton, J.D. (2012). Why Monte Carlo simulations are inferences and not experiments. International Studies in the Philosophy of Science, 26, 403–422.
Bertschinger, E. (1998). Simulations of structure formation in the Universe. Annual Review of Astronomy and Astrophysics, 36, 599–654.
Binder, K., & Heermann, D. (2010). Monte Carlo simulation in statistical physics: An introduction, graduate texts in physics. Berlin: Springer Verlag.
Bogen, J. (2010). Theory and observation in science. In Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy, Spring 2010 edn. http://plato.stanford.edu/archives/spr2010/entries/science-theory-observation/.
Brown, J.R., & Fehige, Y. (2017). Thought experiments. In Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy, Summer 2017 edn.
Carnap, R. (1962). Logical foundations of probability, 2nd edn. Chicago: University of Chicago Press.
Casti, J.L. (1997). Would-be worlds. How simulation is changing the frontiers of science. New York: Wiley.
Dolag, K., Borgani, S., Schindler, S., Diaferio, A., & Bykov, A.M. (2008). Simulation techniques for cosmological simulations. Space Science Reviews, 134, 229–268. arXiv:0801.1023v1.
Dowling, D. (1999). Experimenting on theories. Science in Context, 12/2, 261–273.
Duhem, P.M.M. (1954). The aim and structure of physical theory, Princeton science library. Princeton, NJ: Princeton University Press.
Durán, J.M. (2013). The use of the materiality argument in the literature on computer simulations. In Durán, J.M., & Arnold, E. (Eds.), Computer simulations and the changing face of scientific experimentation (pp. 76–98). Newcastle upon Tyne: Cambridge Scholars Publishing.
Efstathiou, G., Davis, M., White, S.D.M., & Frenk, C.S. (1985). Numerical techniques for large cosmological N-body simulations. Astrophysical Journal Supplement Series, 57, 241–260.
Falkenburg, B. (2007). Particle metaphysics. A critical account of subatomic reality. Heidelberg: Springer.
Franklin, A. (2010). Experiment in physics. In Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy, Spring 2010 edn.
Frigg, R.P., & Reiss, J. (2009). The philosophy of simulation: Hot new issues or same old stew? Synthese, 169, 593–613.
Fritzson, P. (2004). Principles of object-oriented modeling and simulation with Modelica 2.1. IEEE Press.
Giere, R.N. (2004). How models are used to represent. Philosophy of Science, 71, 742–752.
Giere, R.N. (2009). Is computer simulation changing the face of experimentation? Philosophical Studies, 143(1), 59–62.
Gillespie, D.T. (1976). A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. Journal of Computational Physics, 22, 403–434.
Goodman, N. (1968). Languages of art: An approach to a theory of symbols. Indianapolis: Bobbs-Merrill.
Gramelsberger, G. (2010). Computerexperimente. Zum Wandel der Wissenschaft im Zeitalter des Computers. Transcript, Bielefeld.
Guala, F. (2002). Models, simulations, and experiments. In Magnani, L., & Nersessian, N. (Eds.), Model-based reasoning: science, technology, values (pp. 59–74). New York: Kluwer.
Guillemot, H. (2010). Connections between simulations and observation in climate computer modeling. Scientist’s practices and bottom-up epistemology lessons. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 41, 242–252. Special Issue: Modelling and simulation in the atmospheric and climate sciences.
Haasl, R.J., & Payseur, B.A. (2011). Multi-locus inference of population structure: a comparison between single nucleotide polymorphisms and microsatellites. Heredity, 106, 158–171.
Hacking, I. (1983). Representing and intervening. Cambridge: Cambridge University Press.
Hasty, J., McMillen, D., Isaacs, F., & Collins, J.J. (2001). Computational studies of gene regulatory networks: In numero molecular biology. Nature Reviews Genetics, 2, 268–279.
Heidelberger, M. (2005). Experimentation and instrumentation. In Borchert, D. (Ed.), Encyclopedia of philosophy. Appendix (pp. 12–20). New York: Macmillan.
Hughes, R.I.G. (1997). Models and representation. Philosophy of Science (Proceedings), 64, S325–S336.
Hughes, R.I.G. (1999). The Ising model, computer simulation, and universal physics. In Morgan, M.S., & Morrison, M. (Eds.), Models as mediators. Perspectives on natural and social sciences (pp. 97–145). Cambridge: Cambridge University Press.
Humphreys, P. (1990). Computer simulations. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1990, 497–506.
Humphreys, P. (1994). Numerical experimentation. In Humphreys, P. (Ed.), Patrick Suppes. Scientific philosopher (Vol. 2, pp. 103–118). Dordrecht: Kluwer.
Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. New York: Oxford University Press.
Humphreys, P.W. (2013). What are data about? In Durán, J.M., & Arnold, E. (Eds.) Computer simulations and the changing face of scientific experimentation (pp. 12–28). Newcastle upon Tyne: Cambridge Scholars Publishing.
Hüttemann, A. (2000). Natur und Labor. Über die Grenzen der Gültigkeit von Naturgesetzen. Philosophia Naturalis, 37, 269–285.
Imbert, C. (2017). Computer simulations and computational models in science. In Magnani, L., & Bertolotti, T. (Eds.), Springer handbook of model-based science (Ch. 34, pp. 733–779). Cham: Springer.
Janich, P. (1995). Experiment. In Mittelstraß, J. (Ed.), Enzyklopädie Philosophie und Wissenschaftstheorie. Band 1, Metzler, Stuttgart (pp. 621–622).
Kant, I. (1998). Critique of pure reason. Cambridge: Cambridge University Press. Translated by P. Guyer and A. W. Wood; Cambridge Edition of the Works of Kant.
Keller, E.F. (2003). Models, simulation, and computer experiments. In Radder, H. (Ed.), The philosophy of scientific experimentation (pp. 198–215). Pittsburgh: University of Pittsburgh Press.
Knorr-Cetina, K. (1981). The manufacture of knowledge: An essay on the constructivist and contextual nature of science. Pergamon international library of science, technology, engineering, and social studies, Pergamon Press.
Kroes, P. (1989). Structural analogies between physical systems. British Journal for the Philosophy of Science, 40, 145–154.
Küppers, G., & Lenhard, J. (2005). Computersimulationen: Modellierungen 2. Ordnung. Journal for General Philosophy of Science, 36(2), 305–329.
Lim, S., McKee, J.L., Woloszyn, L., Amit, Y., Freedman, D.J., Sheinberg, D.L., & Brunel, N. (2015). Inferring learning rules from distributions of firing rates in cortical neurons. Nature Neuroscience, 18, 1804–1810.
Mainzer, K. (1995). Computer – neue Flügel des Geistes? Die Evolution computergestützter Technik, Wissenschaft, Kultur und Philosophie, 2nd edn. Berlin, New York: de Gruyter Verlag.
Metropolis, N., & Ulam, S. (1949). The Monte Carlo method. Journal of the American Statistical Association, 44(247), 335–341.
Michelson, A.A. (1881). The relative motion of the earth and the luminiferous ether. American Journal of Science, 22, 120–129.
Michelson, A.A., & Morley, E.W. (1887). On the relative motion of the earth and the luminiferous ether. American Journal of Science, 34, 333–345.
Morgan, M.S. (2002). Model experiments and models in experiments. In Magnani, L., & Nersessian, N. (Eds.), Model-based reasoning: science, technology, values (pp. 41–58). New York: Kluwer.
Morgan, M.S. (2003). Experimentation without material intervention: Model experiments, virtual experiments, and virtually experiments. In Radder, H. (Ed.), The philosophy of scientific experimentation (pp. 216–235). Pittsburgh: University of Pittsburgh Press.
Morgan, M.S. (2005). Experiments versus models: New phenomena, inference and surprise. Journal of Economic Methodology, 12(2), 317–329.
Morrison, M. (1998). Experiment. In Craig, E. (Ed.) Routledge encyclopedia of philosophy (Vol. III, pp. 514–518). London: Routledge and Kegan.
Morrison, M. (2009). Models, measurement and computer simulation: The changing face of experimentation. Philosophical Studies, 143, 33–57.
Nagel, T. (1986). The view from nowhere. Oxford: Oxford University Press.
Naumova, E.N., Gorski, J., & Naumov, Y.N. (2008). Simulation studies for a multistage dynamic process of immune memory response to influenza: Experiment in silico. Annales Zoologici Fennici, 45, 369–384.
Norton, J.D. (1996). Are thought experiments just what you thought? Canadian Journal of Philosophy, 26, 333–366.
Norton, S.D., & Suppe, F. (2001). Why atmospheric modeling is good science. In Edwards, P., & Miller, C. (Eds.), Changing the atmosphere (pp. 67–106). Cambridge, MA: MIT Press.
Parker, W.S. (2008). Franklin, Holmes, and the epistemology of computer simulation. International Studies in the Philosophy of Science, 22(2), 165–183.
Parker, W.S. (2009). Does matter really matter? Computer simulations, experiments, and materiality. Synthese, 169(3), 483–496.
Parker, W.S. (2010). An instrument for what? Digital computers, simulation and scientific practice. Spontaneous Generations, 4(1), 39–44.
Peschard, I. (forthcoming). Is simulation a substitute for experimentation? In Vaienti, S., & Livet, P. (Eds.) Simulations and networks. Aix-Marseille: Presses Universitaires d’Aix-Marseille. Here quoted after the preprint http://d30056166.purehost.com/Is_simulation_an_epistemic%20_substitute.pdf.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., & Flannery, B.P. (2007). Numerical recipes. The art of scientific computing, 3rd edn. New York: Cambridge University Press.
Radder, H. (2009). The philosophy of scientific experimentation: A review. Automatic Experimentation, 1. Open access: http://www.aejournal.net/content/1/1/2.
Radder, H. (Ed.) (2003). The philosophy of scientific experimentation. Pittsburgh: University of Pittsburgh Press.
Rechenberg, P. (2000). Was ist Informatik? Eine allgemeinverständliche Einführung, 3rd edn. München: Hanser.
Rheinberger, H.J. (1997). Toward a history of epistemic things: Synthesizing proteins in the test tube. Writing Science. Stanford: Stanford University Press.
Rohrlich, F. (1990). Computer simulation in the physical sciences. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1990, 507–518.
Scholz, O. R. (2004). Bild, Darstellung, Zeichen. Philosophische Theorien bildlicher Darstellung, 2nd edn. Frankfurt am Main: Vittorio Klostermann.
Shapere, D. (1982). The concept of observation in science and philosophy. Philosophy of Science, 49(4), 485–525.
Skaf, R.E., & Imbert, C. (2013). Unfolding in the empirical sciences: experiments, thought experiments and computer simulations. Synthese, 190(16), 3451–3474.
Stöckler, M. (2000). On modeling and simulations as instruments for the study of complex systems. In Carrier, M., Massey, G.J., & Ruetsche, L. (Eds.), Science at the century’s end: Philosophical questions on the progress and limits of science (pp. 355–373). Pittsburgh, PA: University of Pittsburgh Press.
Suárez, M. (2003). Scientific representation: Against similarity and isomorphism. International Studies in the Philosophy of Science, 17, 225–244.
Suárez, M. (2004). An inferential conception of scientific representation. Philosophy of Science, 71, 767–779.
Sugden, R. (Ed.) (2005). Experiment, theory, world: A symposium on the role of experiments in economics, Vol. 12/2. London: Routledge. Special issue of Journal of Economic Methodology.
Tiles, J.E. (1993). Experiment as intervention. British Journal for the Philosophy of Science, 44(3), 463–475.
Trenholme, R. (1994). Analog simulation. Philosophy of Science, 61(1), 115–131.
Turing, A. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2–42(1), 230–265.
Weber, M. (2005). Philosophy of experimental biology. Cambridge: Cambridge University Press.
Weisberg, M. (2007). Who is a modeler? British Journal for the Philosophy of Science, 58, 207–233.
Winsberg, E. (1999). Sanctioning models. The epistemology of simulation. Science in Context, 12, 275–292.
Winsberg, E. (2003). Simulated experiments: Methodology for a virtual world. Philosophy of Science, 70, 105–125.
Winsberg, E. (2009a). Computer simulation and the philosophy of science. Philosophy Compass, 4/5, 835–845.
Winsberg, E. (2009b). A tale of two methods. Here quoted from Winsberg (2010), Ch. 4, pp. 49–71.
Winsberg, E. (2010). Science in the age of computer simulations. Chicago: University of Chicago Press.
Zimmerman, D.J. (2003). Peer effects in academic outcomes: Evidence from a natural experiment. The Review of Economics and Statistics, 85(1), 9–23.
Acknowledgments
Thanks to Christoph Baumberger and Trude Hirsch Hadorn for extremely useful comments on an earlier version of this manuscript. I’m also very grateful for detailed and helpful comments and criticisms by two anonymous referees. One of them provided extensive, constructive and extremely helpful comments even about a revised version of this paper – thanks a lot for this!
Beisbart, C. Are computer simulations experiments? And if not, how are they related to each other?. Euro Jnl Phil Sci 8, 171–204 (2018). https://doi.org/10.1007/s13194-017-0181-5