Technological Paradigms

Part of the book series: The Frontiers Collection (FRONTCOLL)

Abstract

In previous chapters, we have discussed how philosophers, scientists, and engineers alike have advanced the idea that computer simulations offer a ‘new epistemology’ for scientific practice. By this they mean that computer simulations introduce new—and perhaps unprecedented—forms of knowing and understanding the surrounding world, forms that were not available before. Whereas scientists and engineers emphasize the scientific novelty of computer simulations, philosophers try to appraise them for their philosophical virtues. The truth of the former claim is beyond question; the latter, however, is more controversial.

Notes

  1. For the most part, I am going to ignore the ordinal position of the paradigms. In this respect, I will leave unanswered the question of whether there is some presupposed hierarchy among them. As we saw in Chap. 3, the positivists took experimentation to play a secondary, confirmatory role with respect to theory, theory being the more important of the two. After the flaws of positivism were exposed, a new experimentalist wave invaded the literature, showing the vast universe of experimentation and its philosophical importance. Should advocates of computer simulations and Big Data claim that there is a new, improved way of doing science and engineering, and that theory and experimentation are reserved for only a minor role, they would be walking down the same dangerous road as the positivists.

  2. Although there are several projects in science and engineering relying on Big Data, its presence is much stronger in areas such as social networking studies, economics, and big government.

  3. The precession of the perihelion of Mercury was explained by general relativity in 1915—with successive and more accurate measurements starting in 1959—although it had been known as an ‘anomalous’ phenomenon since the nineteenth century.

  4. Louis Pasteur showed that the apparent spontaneous generation of microorganisms was actually due to unfiltered air allowing bacterial growth.

  5. Wolfgang Pietsch has suggested a distinction between Big Data and data-intensive science. While the former is defined with respect to the amount of data and the technical challenges it poses, data-intensive science refers to “the techniques with which large amounts of data are being processed. One should further distinguish methods of data acquisition, data storage, and data analysis” (Pietsch 2015). This is a useful distinction for analytical purposes, as it allows philosophers to draw conclusions about data regardless of the methods involved; in other words, it distinguishes Big Data as a product from Big Data as a discipline. To us, however, this distinction is otiose, since we are interested in studying the techniques of data acquisition against a backdrop of technical components (e.g., speed, memory, etc.). A similar remark applies to Jim Gray’s notion of eScience, understood as “where IT meets scientists” (Hey et al. 2009, xviii). In what follows, although I use the notions of Big Data, data-intensive science, and eScience interchangeably, readers must keep in mind that they are different fields.

  6. There are studies that link the growth of a system’s main memory to the amount of data stored. A report to the US Department of Energy shows that, on average, every 1 terabyte of main memory results in about 35 terabytes of new data stored to the archive each year—more than 35 terabytes per terabyte of memory are in fact generated, but on average 20–50% of that data is deleted over the year (Hick et al. 2010).
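
As a rough illustration of how these figures combine, the sketch below computes net yearly archive growth. It is a back-of-the-envelope sketch, not code from the report; the 50 TB generated per terabyte of memory and the 30% deletion midpoint are assumptions chosen so that the net matches the reported ~35 TB.

```python
# Back-of-the-envelope sketch of the archive-growth figures in Hick et al. (2010).
# ASSUMPTIONS: 50 TB generated per TB of main memory per year and a 30% average
# deletion rate; together these yield the reported ~35 TB of net archived data.

def yearly_archive_growth(main_memory_tb, generated_per_memory_tb=50.0,
                          deletion_rate=0.30):
    """Net terabytes archived per year for a given amount of main memory."""
    generated = main_memory_tb * generated_per_memory_tb
    return generated * (1.0 - deletion_rate)

print(yearly_archive_growth(1.0))    # 35.0 TB per 1 TB of main memory
print(yearly_archive_growth(100.0))  # scales linearly: 3500.0 TB
```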

  7. Our treatment of Big Data will be focused on scientific uses. In this respect, we need to keep in mind that, although the nature of the data is always computational, it is also of empirical origin. Allow me here a quick digression to clarify what I mean by computational data of empirical origin. In laboratory experimentation, the data gathered could come directly from manipulating the experiment, that is, by reporting changes, measurements, reactions, etc., as well as by using laboratory instruments. An example of the former is using a Petri dish to observe the behavior of bacteria and plant germination; an example of the latter is the bubble chamber, which detects electrically charged particles moving through superheated liquid hydrogen. Big Data in scientific research obtains much of its data from similar sources. As suggested, the ASKAP obtains large quantities of data from scanning the skies, and in this respect the data has an empirical origin. Even the data collected from social networks, widely used by sociologists and psychologists, could be considered to have an empirical origin. To contrast these ways of gathering data, take the case of computer simulations: there, data is produced by the simulation rather than collected. This is not a whimsical distinction, since the characteristics, epistemological assessment, and quality of the data vary significantly from method to method. Philosophers have been interested in the nature of data and in what makes them different (for instance, Barberousse and Marion 2013; Humphreys 2013a, b).

  8. An interesting case stems from biomedical Big Data, where data is gathered from an incredibly complex variety of sources. As Charles Safran et al. indicate, these sources include “laboratory auto-analyzers, pharmacy systems, and clinical imaging systems [...] augmented by data from systems supporting health administrative functions such as patient demographics, insurance coverage, financial data, etc. ... clinical narrative information, captured electronically as structured data or transcribed ‘free text’ [...] electronic health records” (Safran et al. 2006, 2).

  9. Let us note that the ethical issues raised in each of these books are not necessarily limited to scientific and engineering uses of Big Data; they also extend to business, society, and studies on government and law.

  10. The term ‘hubris’ is usually found in the great Greek tragedies, describing the hero’s character as one of extreme and foolish pride, or of a dangerous overconfidence, often in combination with arrogance. The hero defies the established norms by challenging the gods, with the result of his own downfall.

  11. There is even the claim that the amount of data corresponds to the phenomenon itself. That is, the data is the phenomenon (Mayer-Schönberger and Cukier 2013).

  12. The authors are of course aware of how important causality is in the sciences and engineering. In this respect, they say: “We will still need causal studies and controlled experiments with carefully curated data in certain cases, such as designing a critical airplane part. But for many everyday needs, knowing what not why is enough” (Mayer-Schönberger and Cukier 2013, 191—emphasis in original).

  13. One could make the case that causality is not necessary for explanation, and that, given the right explanatory framework, Big Data could provide genuine explanations. To the best of my knowledge, such an explanatory framework is still missing.

  14. Prominent examples are Salmon (1998), Dowe (2000), and Bunge (1979).

  15. For a complete account of this interpretation of causality, see Pearl (2000).
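
A minimal sketch may help to fix ideas about the interventionist reading: in a structural causal model, an intervention do(X = x) replaces the mechanism that generates X, severing X from its causes. The linear model and its coefficients below are illustrative assumptions, not an example taken from Pearl (2000).

```python
# Interventionist causality on a toy linear structural causal model.
# ASSUMPTION: Z confounds X and Y (Z -> X, Z -> Y) and X causes Y with
# coefficient 1.5; all coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational regime: X listens to its cause Z.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.5 * x + 0.8 * z + rng.normal(size=n)
print("observational slope:", np.polyfit(x, y, 1)[0])   # ~1.89, biased by Z

# Interventional regime do(X = x): the X-equation is replaced, so X no
# longer depends on Z, and the regression recovers the causal effect.
x_do = rng.normal(size=n)
y_do = 1.5 * x_do + 0.8 * z + rng.normal(size=n)
print("interventional slope:", np.polyfit(x_do, y_do, 1)[0])  # ~1.5
```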

  16. It is interesting to note that the National Cancer Institute features four categories of causal relations, depending on the strength of the available evidence. These are ‘level 1: evidence is sufficient to infer a causal relationship’; ‘level 2: evidence is suggestive but not sufficient to infer a causal relationship’; ‘level 3: evidence is inadequate to infer the presence or absence of a causal relationship (which encompasses evidence that is sparse, of poor quality, or conflicting)’; and ‘level 4: evidence is suggestive of no causal relationship’ (U.S. Department of Health and Human Services 2014).

  17. For an up-to-date version of the TETRAD algorithm, visit http://www.phil.cmu.edu/tetrad/current.html.
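
To give a flavor of what constraint-based programs in the TETRAD family do, the sketch below runs Fisher’s z-test of (partial) correlation, a standard conditional-independence test for Gaussian data. The chain X → Z → Y that generates the data is an assumption for illustration; this is not TETRAD’s own implementation.

```python
# Conditional-independence testing of the kind used by constraint-based
# causal search (e.g., the PC algorithm implemented in TETRAD).
# ASSUMPTION: data generated from the chain X -> Z -> Y, for illustration.
import numpy as np
from scipy import stats

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing c out of both."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

def fisher_z_pvalue(r, n, k):
    """Two-sided p-value for H0: (partial) correlation r is zero,
    with k conditioning variables and sample size n."""
    z = np.arctanh(r) * np.sqrt(n - k - 3)
    return 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(1)
n = 5_000
x = rng.normal(size=n)
zv = 0.9 * x + rng.normal(size=n)   # X -> Z
y = 0.9 * zv + rng.normal(size=n)   # Z -> Y

r_xy = np.corrcoef(x, y)[0, 1]
r_xy_given_z = partial_corr(x, y, zv)
print(f"corr(X,Y)   = {r_xy:.3f}, p = {fisher_z_pvalue(r_xy, n, 0):.3g}")
print(f"corr(X,Y|Z) = {r_xy_given_z:.3f}, p = {fisher_z_pvalue(r_xy_given_z, n, 1):.3g}")
# X and Y are correlated marginally but independent given Z; a constraint-based
# search would therefore remove the X-Y edge from the graph's skeleton.
```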

  18. See http://www.phil.cmu.edu/projects/tetrad/.

Author information

Correspondence to Juan Manuel Durán.

Copyright information

© 2018 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Durán, J.M. (2018). Technological Paradigms. In: Computer Simulations in Science and Engineering. The Frontiers Collection. Springer, Cham. https://doi.org/10.1007/978-3-319-90882-3_6
