Apologies for not getting back to you sooner. I'm afraid we are going to have to put [author’s Postdigital Science and Education article] on the back burner. I've been told in no uncertain terms by my line manager not to publish with PDSE until it has an impact factor. It's not about the intellectual value of PDSE; it is about the mind-numbing metrics obsessions of my masters. I feel I should be braver, but I feel there may yet be more important hills to die on...
In short, I can get away with doing book reviews for you; just not articles. Not until you get that first IF.
My apologies again,
[prospective PDSE author]
In the middle of writing this article I received the above email from a prospective author for Postdigital Science and Education (PDSE). The project title has been removed to protect the author’s identity, yet all other words could easily have been written by any contemporary academic. Indeed, this disturbing yet very real email is a perfect illustration of the challenge I presented in my keynote to academics at the Higher Education Institutional Research (HEIR) Conference at the University of Wolverhampton in 2019: What is wrong, epistemically, ontologically, and practically, with dominant approaches to measuring research excellence in our postdigital condition? What, if anything, can we do to improve the current state of affairs?
Some years ago my dear friend Hamish Macleod emailed me the article entitled ‘Peter Higgs: I wouldn’t be productive enough for today’s academic system’ (Aitkenhead 2013). The Nobel Prize-winning physicist, and one of my intellectual heroes, said that he had a hard time working at the University of Edinburgh because he ‘published fewer than 10 papers after his groundbreaking work, which identified the mechanism by which subatomic material acquires mass, [which] was published in 1964’. Higgs said that, after that work and prior to winning the Nobel Prize in 2013, he had been treated like ‘an embarrassment to the department when they did research assessment exercises.’ Speaking of today’s employment criteria, he said: ‘Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.’ Responding to a question about the conditions which contributed to the Nobel Prize-winning discovery, Higgs continued: ‘It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964’ (Aitkenhead 2013). If people like Peter Higgs are indeed an embarrassment to their schools and departments, then who is their pride?
Higgs might have been an embarrassment to his department before he won the Nobel Prize for Physics, yet after he received it, the same department wrote accolades in his honour. How lovely is that? Higgs’ work has been around since 1964, but his supposed ‘peers’ – academics of roughly the same status, working in the same department – managed to recognize its importance only after he received the Nobel Prize. However tempting, I do not want to turn this article into an argumentum ad hominem: this appalling behaviour of Higgs’ colleagues is a mere reflection of the wider pathetic, miserable, Janus-faced nature of the so-called ‘objective’ scientific community. Instead of playing the blame game, therefore, I am merely using the case of Peter Higgs to send a sobering message to those who still believe in the myth of scientific neutrality. Even the ‘hardest’ experimental sciences, such as physics, are games of hunger for the many and prestige for the few.
In a way, Peter Higgs is a lucky man. In 1964 he developed a theory that could not be proven without a complex apparatus which was built in the early twenty-first century. The 49 years between publication of Higgs’ theory and its experimental confirmation is pretty long by human standards, yet Higgs has at least managed to get the world’s most prestigious recognition during his lifetime. However, what happens to the prospective author of Postdigital Science and Education quoted at the beginning of this article, and indeed to all of us working in contemporary academia? How does our work, perhaps less significant than Higgs’ but still somehow important, find its way through the mazes of ‘research excellence’? And how do we find the room to do good work in the first place?
The dominant discourse has it that measuring excellence is beneficial both for knowledge development and for researchers (Hayes 2019). We want to support excellence over mediocrity, and we do not want slackers to take away our precious resources. As for examples such as Peter Higgs (before he won the Nobel Prize), we can only hope that they are just unfortunate aberrations in an otherwise just system of reward and punishment. Unfortunately, they are not. While it is easy to agree with the good intentions behind the dominant discourse of measuring excellence, it is now abundantly clear that the privileging of certain disciplines and methodologies over others, and the acceptance of simplistic notions of impact, result in some deep flaws in the system. In this article I continue the theme of quantum physics, drawing on Karen Barad’s interpretation of the work of Niels Bohr and Erwin Schrödinger, to explore some ontological and epistemic issues which might inform our understanding of measuring research excellence in a postdigital context.
Ontology and Epistemology, Sprinkled with a Bit of Political Economy
The Double-Slit Experiment
In Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning, Barad (2007) interprets two famous quantum physics experiments: the double-slit experiment, which can easily be performed in your kid’s bedroom, and the thought experiment popularly known as Schrödinger’s cat. The first half of the double-slit experiment consists of a coherent light source, such as a simple classroom laser-pointer, which illuminates a plate (this can be as simple as a sheet of paper) with two parallel slits (one above and one below) (see Fig. 1). As the light wave passes through the slits, it splits into two waves, and each of these waves spreads from its respective slit. The two waves interfere with each other: where a peak meets a trough, the waves cancel each other; where a peak meets a peak, the waves reinforce each other. Consequently, the wall behind the plate will show a stripy pattern of multiple wave reinforcements (bright stripes) and cancellations (dark stripes) called the interference pattern. Now for the second half of the double-slit experiment: let us replace the light source with a simple homemade straw-gun with paper bullets and start shooting! (see Fig. 2). Some bullets will pass through the upper slit, and some bullets will pass through the lower slit. After a while, our wall will end up with two heaps of paper bullets, formed as two strips, one behind each slit. Two strips on the wall indicate that our paper bullets are particles; multiple interference strips on the wall indicate that light is a wave. Conducted in physics classrooms all over the world, this simple experiment clearly distinguishes particles from waves.
To enter the world of quantum physics, let us replace our laser beam and straw-gun with a source of electrons. When we turn on the source, the wall will show the multiple interference strips of a wave. So far so good – we can now conclude that electrons are waves. Just out of curiosity, let us now check which electron passes through which slit in the plate. To do so, we will equip each slit with a detector. And, surprise! When we turn on the detectors, the interference pattern characteristic of waves turns into the two-strip pattern characteristic of particles. As soon as we add the detectors, our electrons start to behave like particles. Revealing this curious behaviour, the double-slit experiment shows one of the key mysteries of quantum mechanics. At a quantum level, in some experiments we can observe wave-like properties, and in other experiments we can observe particle-like properties. While we can never observe the wave-like properties and particle-like properties in the same experiment, we do need both models to explain all behaviours. On a quantum scale, our concepts of wave and particle do not correspond to reality. In this way, twentieth-century physicists developed the concept of wave–particle duality.
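The switch from an interference pattern to a two-strip pattern can also be illustrated numerically. The following sketch is my own toy model with invented parameters, not part of Barad’s argument: without detectors, the complex amplitudes from the two slits add before being squared, which produces the interference (cross) term; with detectors, each electron’s path is known, so probabilities add instead of amplitudes and the cross term disappears.

```python
import numpy as np

# Toy far-field double-slit model (all parameters invented for illustration).
wavelength = 1.0          # arbitrary units
slit_separation = 5.0
screen_distance = 100.0

x = np.linspace(-40, 40, 9)  # sample points along the wall

def amplitude(x, slit_offset):
    """Complex amplitude at screen position x from one slit;
    the phase depends on the path length from slit to screen."""
    path = np.sqrt(screen_distance**2 + (x - slit_offset)**2)
    return np.exp(2j * np.pi * path / wavelength)

a_top = amplitude(x, +slit_separation / 2)
a_bottom = amplitude(x, -slit_separation / 2)

# No detectors: amplitudes add first, then we square -> interference pattern.
interference = np.abs(a_top + a_bottom)**2

# Detectors on: probabilities add instead of amplitudes -> no cross term,
# just the flat, two-heap ('particle') distribution.
which_path = np.abs(a_top)**2 + np.abs(a_bottom)**2
```

In the first case the intensity oscillates between bright stripes (up to 4) and dark stripes (down to 0); in the second, the cross term is gone and the distribution is flat. The toy model only dramatizes the point made above: what the apparatus includes determines what appears on the wall.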
According to Barad (2007: 109) ‘the indeterminable nature of measurement interactions is based on [Bohr’s] insight that concepts are defined by the circumstances required for their measurement. That is, theoretical concepts are not ideational in character; they are specific physical arrangements.’ (italics from the original) Based on this interpretation of Bohr’s work, Barad has developed a more general theory called agential realism. As can easily be seen from the double-slit experiment, quantum particles change their behaviour when they are being observed. Therefore, the act of measurement is inseparable from the object of measurement; the human observer cannot be separated from observed non-human phenomena. Barad’s agential realism elaborates Bohr’s ‘insights in a posthumanist direction that decentres the human’ and suggests ‘that Bohr’s notion of a phenomenon be understood ontologically’ (Barad 2007: 333). Thus, writes Barad,
I take the primary ontological unit to be phenomena, rather than independent objects with inherent boundaries and properties. In my agential realist elaboration, phenomena do not merely mark the epistemological inseparability of ‘observer’ and ‘observed’; rather, phenomena are the ontological inseparability of intra-acting ‘agencies.’ That is, phenomena are ontological entanglements. (Barad 2007: 333) (italics from the original)
Barad’s agential realism carries two important implications for measuring research excellence. First, it brings about a non-representationalist understanding that measurement always depends on the act of measuring. The act of measuring research excellence changes the nature of what is being measured, i.e., constitutes research practices which are different from research practices that might develop in a non-observed environment. More generally, the act of measuring research excellence brings the very concept of research excellence into being. Different ways of measuring will bring about different research; principles, practices, and policies of measuring research excellence carry a lot of ontological significance.
Second, Barad claims that ‘[r]eality does not depend on the prior existence of human beings; rather, the point is to understand that “humans” are themselves natural phenomena.’ (Barad 2007: 336) Therefore, she concludes that ‘[i]t doesn’t make sense to hold onto an anthropocentric conception of measurement; on the contrary, a commitment to a thoroughgoing naturalism suggests that we understand measurements as causal intra-actions (some of which involve humans).’ (Barad 2007: 338) Measurement does not only influence practice; it is an inseparable element of the thing or phenomenon being measured. Yet at least since Romanticism, Western science has been based on ‘the dominant, romantic, highly individualistic and irrational account of “personal anarcho-aesthetics”’ exemplified in the notion of the creative individual (Peters and Jandrić 2018: 343). This produces the dominant contemporary discourse where the main task of measuring excellence is to recognize creative, hard-working researchers and weed out ‘slackers’. Yet the two sides of the apparent binary between productive researchers and slackers are dialectically intertwined parts of the larger system of knowledge making and dissemination, which cannot be separated from researched phenomena or from each other.
Schrödinger’s Cat Thought Experiment
In 1935 Erwin Schrödinger offered his contribution to the problem of measurement by the way of his famous Schrödinger’s cat thought experiment (Fig. 3). In his paper ‘The present situation in quantum mechanics’ Schrödinger introduces the experiment as follows:
A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter, there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer that shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The first atomic decay would have poisoned it. The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts. (Schrödinger 1935/1983: 157)
With this vivid image of a cat smeared out in equal parts around the box, which has become an inseparable part of popular culture, Schrödinger makes some important points about the nature of measurement. This smearing (hopefully) does not arise from Schrödinger’s vicious character; rather, it refers to a situation where the cat’s indeterminate state results from ‘a superposition of “alive” and “dead” states of the cat.’ This does not imply ‘that the cat is either alive or dead and that we simply do not know which’, or that the cat is a part-dead, part-alive zombie, or that the cat is a neither-alive-nor-dead, vampire-like creature. ‘Rather, the correct way to understand what the “smearing” stands for is to realize that the cat’s fate is not simply metaphorically entangled with the radioactive source – it is literally in an entangled state.’ (Barad 2007: 278)
For the cat, the act of measuring is the act of transition from this indeterminate, entangled state into one of two determined states: dead or alive. So the key question pertaining to measurement is: what happens in the moment of this transition? Schrödinger does not subscribe to one or another ‘mystical’ explanation; and neither does Barad. In the language of physics, ‘[i]t seems as if the wave function has somehow “collapsed” from a superposition (or entanglement) to one in which all the coefficients except one of them are set to zero’ (Barad 2007: 280). Or, in simpler but less detailed words, the act of measurement kills potentiality by selecting only one of many possible options.
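The picture of collapse just quoted – all coefficients except one set to zero – can be sketched in a few lines of code. This is my own toy illustration of a two-outcome system with equal amplitudes, as in Schrödinger’s equal-probability setup; the state representation and function names are invented, not drawn from Barad or Schrödinger.

```python
import random

# Hypothetical two-outcome system ('alive'/'dead') in superposition,
# with equal amplitudes (probability |amplitude|^2 = 1/2 each).
amplitudes = {"alive": 2**-0.5, "dead": 2**-0.5}

def measure(state, rng=random):
    """Collapse the superposition: pick one outcome with probability
    |amplitude|^2, then set every other coefficient to zero."""
    outcomes = list(state)
    probs = [abs(state[o])**2 for o in outcomes]
    result = rng.choices(outcomes, weights=probs)[0]
    collapsed = {o: (1.0 if o == result else 0.0) for o in outcomes}
    return result, collapsed

result, collapsed = measure(amplitudes)
# After measurement the superposition is gone: exactly one coefficient
# is 1. 'Opening the box' again in a fresh run can yield the other
# outcome - the experiment is not reproducible in Schrödinger's sense.
```

Each run selects one option and zeroes out the rest, which is precisely the ‘murder of potentiality’ discussed below: whatever the other coefficient might have become is gone once the measurement is made.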
For Schrödinger, this murder of potentiality is the beginning of the problem of measurement. The trouble is that the Schrödinger’s cat experiment is not reproducible: each time the box is opened, the cat has a new chance of emerging dead or alive.
Thus for Schrödinger the problem of measurement is resolved as follows: what appears to be a discontinuous change in the wave function is not due to some distinctive law of nature governing measurement interactions that creates a discontinuous change in the wave function; but rather what is actually going on is that the wave function of the ‘object’ becomes entangled with the ‘measuring system’ (…) such that they are no longer separate systems. Only upon observation by a cognizing agent can we speak of a resolution of the entanglement. (Barad: 2007: 284) (italics from the original)
Following Barad’s interpretation of Schrödinger, academic work before measurement can be understood as ‘a catalog of the maximum knowledge of a system that it is possible to obtain in principle’. Unlike Bohr’s ontological view of the double-slit experiment, Schrödinger’s solution to the problem of measurement is decidedly epistemological. The act of measuring research excellence creates an entanglement of our knowledge about academic work with our knowledge of our measuring instruments, such as the UK’s Research Excellence Framework. Measuring excellence resolves this entanglement, but reduces our knowledge of a researcher’s work to a single result (‘good’ or ‘bad’ research). Research excellence exercises are highly competitive, so ‘good’ researchers are typically supported in furthering their work, while ‘bad’ researchers are often directed towards other duties such as teaching or administration. More often than not, these measures kill the potential for further development of their research.
Postdigital Measurement of Excellence in Higher Education
We now live in a postdigital world, ‘where digital technology and media is [no longer] separate, virtual, ‘other’ to a ‘natural’ human and social life’ (Jandrić et al. 2018: 893). Together with the development of postdigital conditions over the past few decades, measuring research excellence has followed trends similar to those in educational research in general. Much has been written about policies such as New Public Management and the neoliberalization of the university (Peters and Jandrić 2018), the domination of one-sided discourses of excellence (Hayes 2019), and the dialectically intertwined notion of student experience (Hayes and Jandrić 2018). These trends are underpinned by a disappearance of human agency; much too often, it is posited that quality improvements arrive magically through the application of this or that technology (Hayes and Jandrić 2014). In this context, Sarah Hayes questions why there is often no explicit reference to the very humans who will enact the tasks stated and, in many cases, no reference to those who authored said policy documents either (Bartholomew and Hayes 2015; Hayes 2019: viii). These problems are exacerbated by data-driven algorithmic systems that process various forms of assessment in seemingly objective but actually very skewed ways. The problem unfolds in two main steps (Jandrić 2019). First, problematic measurements perform problematic constructs into being; second, these problematic constructs are operated on by algorithms and systems that are opaque and biased in important ways.
Speaking of data, Ben Williamson asks the fundamental question ‘who owns educational theory?’ and argues:
The central argument is that as educational data science has migrated from the academic lab to the commercial sector, ownership of the means to produce educational data analyses has become concentrated in the activities of for-profit companies. As a consequence, new theories of learning are being built-in to the tools they provide, in the shape of algorithm-driven technologies of personalization, which can be sold to schools and universities. (Williamson 2017: 105)
In his analysis of student assessment, Cormac O’Keefe examines the quality of collected data. He suggests that standardized tests, which are now routinely used in various contexts from admissions to final exams, produce large datasets about students – and these datasets suffer from various biases. Looking at ‘the role of psychometric practices and educational testing theories’, O’Keefe (2017: 123) shows ‘that large-scale digital assessments as tests such as PIAAC [The Programme for the International Assessment of Adult Competencies] do more than produce data about ability’. More importantly, ‘[t]hey perform the concept of ability into being.’ (O’Keefe 2017: 133) These days, measuring methods and strategies across academia have become almost completely McDonaldized (Ritzer et al. 2018). Therefore, Williamson’s analysis of ownership over academic data and O’Keefe’s analysis of student assessment can easily inform the argument about measuring research excellence.
AIs arrive on top of this commercialised entanglement between data and excellence and bring in an additional level of indeterminacy.
According to Liza Daly, ‘artificial intelligence is the umbrella term for the entire field of programming computers to solve problems. I would distinguish this from software engineering, where we program computers to perform tasks.’ (Daly 2017) This simple definition describes an important paradigm change in the inner workings of the computer. Traditional computers, including the most sophisticated expert systems of yesterday, consisted of long lines of code which determined their behaviour: for every input, such systems would do predetermined calculations and provide an output. In contrast, AI systems are provided with some initial rules of behaviour, and then they are ‘taught’ by large datasets. Then, a computer independently establishes various connections between input data and produces ‘intelligent’ solutions to new problems in non-predetermined ways. This is the essence of machine learning, which is broadly defined as ‘the science of getting computers to act without being explicitly programmed’ (Ng 2018) (Jandrić 2019: 31–32)
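Daly’s distinction can be made concrete with a toy example of my own. The contrast below – a hard-coded rule versus a rule whose threshold is learned from labelled examples – is purely illustrative; the task, the data, and all names are invented and stand in for no real system.

```python
# Toy task: decide whether a submission is 'long-form' from its word count.

# Software engineering: the rule is written in advance by a programmer.
def classify_explicit(word_count):
    return "long-form" if word_count >= 4000 else "short-form"

# Machine learning (minimal sketch): the threshold is *learned* from
# labelled examples instead of being hard-coded. Invented data:
examples = [(800, "short-form"), (1500, "short-form"),
            (5200, "long-form"), (9000, "long-form")]

def fit_threshold(data):
    """Learn a decision threshold: the midpoint between the longest
    'short' example and the shortest 'long' example."""
    shorts = [n for n, label in data if label == "short-form"]
    longs = [n for n, label in data if label == "long-form"]
    return (max(shorts) + min(longs)) / 2

threshold = fit_threshold(examples)   # 3350.0 for the data above

def classify_learned(word_count):
    return "long-form" if word_count >= threshold else "short-form"
```

The point of the sketch is that the learned rule is wholly determined by the training data: feed it a different (or skewed) set of examples and it will draw the line somewhere else, in a non-predetermined way – which is exactly where the bias problems discussed next enter.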
By design, AI systems transfer a lot of agency from human beings to the inner workings of the computer. Data bias inevitably superposes with AI bias: problematic measurements perform problematic constructs into being and are then operated on by algorithms and systems that are opaque and biased in important ways. As can be seen from examples arriving from various walks of life (see Jandrić 2019; Peters and Jandrić 2019), the results of these operations are far from trustworthy. As I write these words there are no adequate technical solutions for data/AI biases – perhaps because the nature of these biases is not only technical but also normative/ethical (Fuller and Jandrić 2019: 190). At present, tech companies resolve data/AI biases ‘manually’, by introducing various practices based on human labour such as algorithm auditing (Hempel 2018; IBM Research 2018; see also Jandrić 2019: 33). However, these practices are far from perfect, and applications of AI-based decision-making systems in areas such as healthcare remain far from fair (Eubanks 2018).
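The two-step problem – a biased measurement performing a construct into being, and an automated system then operating on it – can be shown in miniature. The sketch below is my own invention: the fields, the numbers, and the citation-count proxy are fabricated for illustration and make no empirical claim about any discipline.

```python
papers = [
    # (field, underlying quality, citations) - in this invented dataset,
    # philosophy papers accrue systematically fewer citations than
    # physics papers of the same underlying quality.
    ("physics", 0.9, 180), ("physics", 0.6, 110),
    ("philosophy", 0.9, 40), ("philosophy", 0.6, 25),
]

# Step 1: the measurement defines the construct.
def excellence_score(paper):
    _field, _quality, citations = paper
    return citations  # quality never enters the score

# Step 2: an automated ranking operates on the biased construct.
ranked = sorted(papers, key=excellence_score, reverse=True)
top_half = ranked[:2]

# Both 'excellent' papers are physics papers, even though a philosophy
# paper matches the top physics paper in underlying quality (0.9).
```

Nothing in step 2 is malicious or even wrong on its own terms; the ranking faithfully amplifies whatever the measurement in step 1 built in. That is the sense in which data bias ‘superposes’ with algorithmic bias.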
Measuring research excellence brings a particular concept of research excellence into being. Research is dialectically related to the process of academic publishing, which ‘is a form of “social production” that takes place across the economy, politics and culture, all of which are in turn accommodating both old and new technology in our postdigital age’ (Jandrić and Hayes 2019: 381). For as long as ‘good research’ is defined by journal impact factors, the place of publication and its popularity will be more important than an article’s content. For the prospective PDSE author quoted at the beginning of this article, this means producing publications for ‘better’ journals. Yet how ‘better’ journals are defined is itself open to question in a postdigital context. Beyond academic publishing, the mutually constitutive relationship between measurement and research resulted in the underappreciation of the value of Higgs’ work for 49 years. On this basis alone, it is worth reviewing how measurement and research are currently being linked.
All researchers – whatever their position in the Hunger Games of research excellence – are dialectically intertwined parts of the larger system of knowledge making and dissemination. Higgs became famous only because other scientists did not become (as) famous. There is only so much room at the top, and the notion of excellence is heavily rigged towards those who are already there. For instance, one of the main difficulties of starting a new academic journal is the cruel game illustrated by this article’s leading quotation, where academics need to publish in ‘good’ journals in order to secure their jobs. ‘Our postdigital age is one of cohabitation, blurring borders between social actors and scientific disciplines, mutual dependence, shifting relationships between traditional centres and margins, and inevitable compromise – and this calls for deep reconfiguration of politics and practice of knowledge production and academic publishing as we know it.’ (Jandrić and Hayes 2019: 390) Producing standard science is hard but boring; it takes a lot of courage, expertise, and luck to break the glass ceiling of measured excellence and make a real change. This pushes some academics towards various strategies of gaming the system, which are harmful to science and society at large (see Fuller and Jandrić 2019).
The last problem with measuring research excellence is the vicious murder of potentiality. Who knows what the prospective PDSE author might have written had he not been coerced into playing by the rules? Defined as the ‘dirty little industrial machine’ (Peters in Jandrić 2017: 378), an academic article is always of the same format, the same length, the same methodology – and top-cited journals have mastered this art of the same to perfection. While measuring research excellence often brings about vicious consequences for researchers’ careers, quantum mechanics offers a ray of hope. The Schrödinger’s cat experiment teaches us that the entanglement of our knowledge and measurement systems can collapse differently in each experiment, so every day offers a new chance.
This brief overview of issues pertaining to measuring research excellence in a postdigital context reveals the same issues as our short trip through the mazes of quantum physics. Arguably, these concordances have always been there, yet in the postdigital condition they surface more prominently than ever. So how far can we extend these concordances; what can quantum physics teach us about measuring research excellence? This question can be answered in many different ways, yet its full elaboration reaches far beyond the scope of this article. Following Barad’s advice, I am not interested in developing direct links between the double-slit experiment, the Schrödinger’s cat experiment, and measuring research excellence. Instead, ‘I am interested in understanding the epistemological and ontological issues that quantum physics forces us to confront, such as the conditions for the possibility of objectivity, the nature of measurement, the nature of nature and meaning making, and the relationship between discursive practices and the material world.’ (Barad 2007: 24) The arguments presented in this article, therefore, are not about applying quantum physics to measuring research excellence; they are presented to help us reach a meta-level of epistemology and ontology which can be applied to both systems of thought and their entanglements.
Which messages from quantum physics could possibly be of use for this analysis of research excellence in a postdigital context? According to Barad,
[w]hat is needed is an analysis that enables us to theorize the social and the natural together, to read our best understandings of social and natural phenomena through one another in a way that clarifies the relationship between them. (…) considering them together does not mean forcing them together, collapsing important differences between them, or treating them in the same way, but means allowing any integral aspects to emerge (by not writing them out before we get started). (Barad 2007: 25)
Moving from theory to practice, we are now left with two main approaches. The first approach, which is probably the most reasonable for many of us, including the prospective author quoted at the beginning of this paper, is business as usual – but with more awareness of issues pertaining to postdigital measurement and occasional fixes. The business-as-usual approach is by definition unable to make significant changes, yet awareness of these issues could result in occasional pockets of freedom or the odd personal advance resulting from knowing how to game the system. The second approach is to try to redefine the theory and practice of research measurement according to these philosophical insights. My best bet is on attempts starting from systems theory and cybernetics, but there is plenty of opportunity in other fields. Working in, against, and beyond contemporary academia (Holloway 2016), we need to engage with both approaches simultaneously. While I fully appreciate the importance of solid work done under the business-as-usual scenario, my heart is with those who dare to imagine radically different futures.
Aitkenhead, D. (2013). Peter Higgs: I wouldn't be productive enough for today's academic system. The Guardian, 6 December. https://www.theguardian.com/science/2013/dec/06/peter-higgs-boson-academic-system. Accessed 10 January 2020.
Barad, K. (2007). Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham and London: Duke University Press.
Bartholomew, P., & Hayes, S. (2015). An introduction to policy as it relates to technology enhanced learning. In J. Branch, P. Bartholomew, & C. Nygaard (Eds.), Technology enhanced learning in higher education. London: Libri.
Eubanks, V. (2018). Automating inequality. How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.
Fuller, S., & Jandrić, P. (2019). The Postdigital human: Making the history of the future. Postdigital Science and Education, 1(1), 190–217. https://doi.org/10.1007/s42438-018-0003-x.
Hayes, S. (2019). The labour of words in higher education: Is it time to reoccupy policy? Leiden: Brill Sense.
Hayes, S., & Jandrić, P. (2014). Who is really in charge of contemporary education? People and technologies in, against and beyond the neoliberal university. Open Review of Educational Research, 1(1), 193–210. https://doi.org/10.1080/23265507.2014.989899.
Hayes, S., & Jandrić, P. (2018). Resisting the iron cage of ‘the student experience’. Šolsko polje, 29(1–2), 127–143.
Hempel, J. (2018). Want to prove your business is fair? Audit your algorithm. Wired, 5 September. https://www.wired.com/story/want-to-prove-your-business-is-fair-audit-your-algorithm/. Accessed 10 January 2020.
Holloway, J. (2016). In, against, and beyond capitalism: The San Francisco lectures. Oakland, CA: PM Press/Kairos.
IBM Research (2018). AI and Bias. https://www.research.ibm.com/5-in-5/ai-and-bias/. Accessed 10 January 2020.
Jandrić, P. (2017). Learning in the Age of Digital Reason. Rotterdam: Sense.
Jandrić, P. (2019). The Postdigital challenge of critical media literacy. The International Journal of Critical Media Literacy, 1(1), 26–37. https://doi.org/10.1163/25900110-00101002.
Jandrić, P., & Hayes, S. (2019). The postdigital challenge of redefining education from the margins. Learning, Media and Technology, 44(3), 381–393. https://doi.org/10.1080/17439884.2019.1585874.
Jandrić, P., Knox, J., Besley, T., Ryberg, T., Suoranta, J., & Hayes, S. (2018). Postdigital science and education. Educational Philosophy and Theory, 50(10), 893–899. https://doi.org/10.1080/00131857.2018.1454000.
O’Keefe, C. (2017). Economizing education: Assessment algorithms and calculative agencies. E-Learning and Digital Media, 14(3), 123–137. https://doi.org/10.1177/2042753017732503.
Peters, M. A., & Jandrić, P. (2018). The Digital University: A dialogue and manifesto. New York: Peter Lang.
Peters, M. A., & Jandrić, P. (2019). AI, human evolution, and the speed of learning. In J. Knox, Y. Wang, & M. Gallagher (Eds.), Artificial Intelligence and Inclusive Education: speculative futures and emerging practices (pp. 195–206). Springer Nature. https://doi.org/10.1007/978-981-13-8161-4_12.
Ritzer, G., Jandrić, P., & Hayes, S. (2018). Prosumer capitalism and its machines. Open Review of Educational Research, 5(1), 113–129. https://doi.org/10.1080/23265507.2018.1546124.
Schrödinger, E. (1935/1983). The present situation in quantum mechanics. In J. Wheeler & W. H. Zurek (Eds.), Quantum Theory and Measurement. Princeton, NJ: Princeton University Press.
Wikimedia Foundation (2007a). File:Two-Slit Experiment Light.svg. https://commons.wikimedia.org/wiki/File:Two-Slit_Experiment_Light.svg. Accessed 14 January 2020.
Wikimedia Foundation (2007b). File:Two-Slit Experiment Particles.svg. https://commons.wikimedia.org/wiki/File:Two-Slit_Experiment_Particles.svg. Accessed 14 January 2020.
Wikimedia Foundation (2008). File:Schrodingers cat.svg. https://commons.wikimedia.org/wiki/File:Schrodingers_cat.svg. Accessed 14 January 2020.
Williamson, B. (2017). Who owns educational theory? Big data, algorithms and the expert power of education data science. E-Learning and Digital Media, 14(3), 105–122. https://doi.org/10.1177/2042753017731238.
Jandrić, P. Postdigital Research Measurement. Postdigit Sci Educ 3, 15–26 (2021). https://doi.org/10.1007/s42438-020-00105-8