# The dependence of computability on numerical notations

## Abstract

Which function is computed by a Turing machine will depend on how the symbols it manipulates are interpreted. Further, by invoking bizarre systems of notation (i.e., interpretation schemes mapping symbols to their denotations) it is easy to define Turing machines that compute textbook examples of uncomputable functions, such as the solution to the decision problem for first-order logic. Thus, the distinction between computable and uncomputable functions depends on the system of notation used. This raises the question: which systems of notation are the relevant ones for determining whether a function is computable? These are the acceptable notations. I argue for a use criterion of acceptability: the acceptable notations for a domain of objects $${\mathcal {D}}$$ are the notations that we can use for the usual $${\mathcal {D}}$$-activities. Acceptable notations for natural numbers are ones that we can count with. Acceptable notations for logical formulas are the ones that we can use in inference and logical analysis. And so on. This criterion makes computability a relative notion—whether a function is computable depends on which notations are acceptable, which is relative to our interests and abilities. I argue that this is not a problem, however, since there is independent reason for taking the extension of computable function to be relative to contingent facts. Similarly, the fact that the use criterion makes a mathematical distinction depend on practical concerns is also not a problem because it dovetails with similar phenomena in other areas of computability theory, namely the roles of notation in computation over real numbers and of Extended Church’s Thesis in complexity theory.


1.

Note that this does not require that d be onto. If d is not onto, then f’s being computable relative to d does not mean that we can determine the value of any f(x) for $$x\in {\mathcal {D}}$$, but only that we can determine the value of f(x) for any $$x\in {\mathcal {D}}$$ that we are able to denote in $${\mathcal {L}}_A$$.

2.

The observation that regarding computation as merely string-theoretic is historically revisionary is due to Rescorla (2007).

3.

Note that we cannot dodge the problem by claiming that the genuinely computable functions are those computable relative to every notation. Shapiro (1982) shows that the only functions Turing-computable relative to every notation are those that are either constant on a cofinite set or agree with the identity function on a cofinite set.

4.

See, for instance, Vardi (2012), Dean (2016) and Yanofsky (2011).

5.

In fact, I have just written two different numbers, namely $$1.023056\times 10^{23}$$ and $$1.023056\times 10^{22}$$. It’s much easier to tell them apart in scientific notation.
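To see the point concretely, here is a small Python sketch (purely illustrative, not part of the original footnote) printing both numbers in plain decimal and in scientific notation:

```python
# The two numbers from the footnote, written as exact integers:
a = 1023056 * 10**17   # 1.023056 x 10^23
b = 1023056 * 10**16   # 1.023056 x 10^22

# In plain decimal the strings differ only in length (24 vs. 23 digits):
print(a)   # 102305600000000000000000
print(b)   # 10230560000000000000000

# Scientific notation makes the order-of-magnitude difference explicit:
print(f"{a:e}")   # 1.023056e+23
print(f"{b:e}")   # 1.023056e+22
```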

6.

Thanks to Chris Pincock for suggesting the analogy to constructibility.

7.

This example comes from Giaquinto (2007), p. 92, and is also discussed in Marshall (2017), though neither takes the same perspective on it as I do. Kripke (1992) also discusses a similar puzzle, though he again reaches a different view.

8.

Somewhat more exactly, if $$\eta _{ab}$$ is the metric of Minkowski spacetime, let $$\Omega$$ be a scalar field that is constant outside c and within c diverges to infinity at r, and define the metric for the toy example to be $$\Omega ^2\eta _{ab}$$. See Earman and Norton (1993), p. 28. The diagram given here mimics one from Hogarth (1994).

9.

This raises an important further question about how we idealize our capabilities in giving a mathematical model of computability. In Turing’s standard analysis the tape is infinite and the machine never breaks, though these are obviously both physically unrealistic. By analogy, perhaps in an MH spacetime we can ignore the thermal properties of the machine on $$\gamma _1$$, so that the observer on $$\gamma _2$$ does not have to worry about distinguishing a signal that is sent from the background noise caused by the thermal radiation of the machine. This would eliminate one of Earman and Norton’s worries about the physicality of using MH spacetime to solve unsolvable problems, Earman and Norton (1993), p. 34. See also the discussion in Manchak (2010), Manchak (2018), and Manchak and Roberts (2016).

10.

‘Explication’ is meant here in a loose and imprecise sense, and I do not intend to take a stand on whether Church’s thesis is an explication in the stricter Carnapian sense.

11.

12.

Further, to move a component of mass m a distance of $$D_1$$ in time T, the force will be on the order of $$F\varpropto mD_1T^{-2}$$. If the mass is of uniform density, then $$m\varpropto D_2^3$$, for $$D_2$$ the size of the relevant component. The force will be applied to a surface of area $$D_2^2$$, and hence the pressure on the component will be $$\varpropto D_1D_2T^{-2}$$. Since distances and times are all reduced in proportion from one machine to the next, the pressure applied to the faces of the components does not increase. Hence we need not assume that the material used is of infinite strength.
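The proportionality argument can be checked with a small numerical sketch. The `pressure_proxy` function below is a hypothetical illustration that drops all constant factors (density, component shape); it only tracks how pressure scales when every length and time shrinks in proportion:

```python
def pressure_proxy(d1, d2, t):
    """Pressure on a component, up to constant factors (density etc. dropped)."""
    m = d2**3              # mass of a uniform-density component: m is proportional to D2^3
    f = m * d1 / t**2      # force to move mass m a distance D1 in time T
    return f / d2**2       # pressure = force over the face area D2^2

# Halving every length and time for the next machine in the sequence
# leaves the pressure unchanged:
p_big   = pressure_proxy(d1=1.0, d2=1.0, t=1.0)
p_small = pressure_proxy(d1=0.5, d2=0.5, t=0.5)
```

Since $$D_1D_2T^{-2}$$ is degree two in lengths and degree minus two in time, uniform rescaling cancels exactly, which is what the sketch confirms.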

13.

The possible factor of two on the upper bound comes from the fact that $$M_n$$ has twice as much memory as $$M_{n-1}$$.

14.

See also Sieg (2002a) and Sieg (2002b), as well as Copeland (2017).

15.

Turing’s arguments are somewhat sketchy, and may not deserve the honorific ‘proof’, but see Sieg (2002a).

16.

I do not want to get into the assessment of Turing, Gandy, and Sieg’s arguments here. Since, however, I have explicitly argued that (machine) computability can outstrip Turing computability, I feel I should indicate why Gandy’s argument does not account for MH spacetimes or Davies’ machine. First, he excludes analogue machines from his account, assuming that his machines are made of discrete objects. This rules out Davies’ machines. He also assumes that the evolution of a machine is modeled by the iterated finite application of a function mapping states to subsequent states. In MH spacetime, though, it can be essential to consider also the limit of an infinite sequence of applications of such a function.

17.

See Blum (2004) and Braverman and Cook (2006) for introductions to two other approaches to computation over the reals. Feferman (2013) compares these two approaches and discusses how they fit into more general theories of computation over arbitrary structures.

18.

Weihrauch (2000). While Turing (1936) did not consider computable functions of an arbitrary real number, when the function computed has zero arguments the present definition coincides with his definition of computable real number.

19.

I diverge here from the terminology of Weihrauch (2000), p. 33, who would call these representations rather than notations.

20.

This is because any function computable under Convention One must be continuous. See Weihrauch (2000), Thm. 4.3.1. Rescorla (2015) makes a similar point.

21.

Though Convention One has its own attractions. See Weihrauch (2000), Ch. 3 and Hertling (1999).

22.

This is because there are real numbers computable from a blank tape relative to $$\rho _>$$ that are not computable relative to $$\rho _<$$ and vice versa. If $$c_>$$ and $$c_<$$ are respectively such numbers, then the constant function $$c_>$$ can be computed under Convention Two, but not Convention Three, and the constant function $$c_<$$ can be computed under Convention Three but not Convention Two.

23.

For an introduction to complexity theory, see, for instance, Arora and Barak (2009).

24.

One might object that this claim depends on a particular way of framing what computability theory is. Computability theory is also sometimes called recursive function theory, after all, and we can define the recursive functions directly. There is no need to advert to vague theses about the ‘intuitive’ notion of computability (or, for that matter, to notations) to characterize the study of recursive functions. In a certain sense, this is true. Recursive functions are a perfectly well-defined class of mathematical objects, and there is nothing wrong with studying them as such. But if recursive function theory is also supposed to tell us something about what we or our machines can compute—as it is generally thought to do—we need to appeal to Church’s thesis. (Recall my remarks on this issue from Sect. 1.1.) And historically, the theory of recursive functions was introduced by Gödel to capture what humans can compute. For more on the history, see Sieg (1994).

25.

That is, if a computer takes time t to compute f(n), then there is a Turing machine and a polynomial p such that the Turing machine takes at most time p(t) to compute f(n).

26.

Of course, this is recognized as an idealization, since a problem whose fastest algorithm took time $$n^{10^{100}}$$ would hardly be practically solvable. However, $$\mathsf {P}$$ has the advantage that it includes obviously efficient algorithms, such as those that take time n, and is closed under composition, which models the use of an efficient program as a subroutine that another efficient program calls upon.
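A toy numerical sketch of why closure under composition preserves a polynomial bound. The bound-tracking function below is illustrative only (it is not from the paper), and it assumes the subroutine's output is no larger than its running time:

```python
# If subroutine g runs in time n^a on inputs of size n (so its output has
# size at most n^a), and program f runs in time n^b, then running f on
# g's output costs at most (n^a)^b = n^(a*b) steps -- still polynomial.
def composed_time_bound(n, a, b):
    g_time = n**a            # time (and maximum output size) of g on input size n
    f_time = g_time**b       # time of f when run on g's output
    return g_time + f_time   # total: n^a + n^(a*b)

# A quadratic subroutine called by a cubic program, on an input of size 10:
assert composed_time_bound(10, 2, 3) == 10**2 + 10**6
```

The exponent grows under composition (from a and b to a*b), but it remains a fixed constant, so the composed program is still in $$\mathsf {P}$$.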

27.

Shor (1994); see also Shor (1999). More exactly, Shor gave a $$\mathsf {BQP}$$ algorithm for finding a factor d of an odd number n.

28.

See Rieffel and Polak (2011), pp. 326–327 for a brief discussion of this issue.

29.

Thanks to André Curtis-Trudel, Steve Dalglish, Øystein Linnebo, Chris Pincock, Giorgio Sbardolini, Stewart Shapiro, Damon Stanley, Neil Tennant, two anonymous referees, and audiences at the University of Oslo and the 5th International Conference on the History and Philosophy of Computation for comments and discussion on earlier versions of this work.

## References

1. Aaronson, S. (2013). Quantum computing since Democritus. Cambridge: Cambridge University Press.

2. Arora, S., & Barak, B. (2009). Computational complexity: A modern approach. New York: Cambridge University Press.

3. Blum, L. (2004). Computing over the reals: Where Turing meets Newton. Notices of the AMS, 51(9), 1024–1034.

4. Boolos, G., & Jeffrey, R. (1989). Computability and logic (3rd ed.). New York: Cambridge University Press.

5. Braverman, M., & Cook, S. (2006). Computing over the Reals: Foundations for scientific computing. Notices of the AMS, 53(3), 318–329.

6. Copeland, B. J. (2017). The Church-Turing thesis. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/win2017/entries/church-turing/.

7. Copeland, B. J., & Proudfoot, D. (2010). Deviant encodings and Turing’s analysis of computability. Studies in History and Philosophy of Science, 41(2), 247–252.

8. Copeland, B. J., & Shagrir, O. (2007). Physical computation: How general are Gandy’s principles for mechanisms? Minds and Machines, 17, 217–231.

9. Davies, E. B. (2001). Building infinite machines. British Journal for the Philosophy of Science, 52(4), 671–682.

10. Davis, M. (1958). Computability and unsolvability. New York: McGraw-Hill.

11. Dean, W. (2016). Algorithms and the mathematical foundations of computer science. In L. Horsten & P. Welch (Eds.), Gödel’s disjunction. New York: Oxford University Press.

12. Earman, J., & Norton, J. D. (1993). Forever is a day: Supertasks in Pitowsky and Malament-Hogarth spacetimes. Philosophy of Science, 60(1), 22–42.

13. Feferman, S. (2013). About and around Computing over the Reals. In B. J. Copeland, C. J. Posy, & O. Shagrir (Eds.), Computability: Turing, Gödel, Church, and Beyond (pp. 55–76). Cambridge, MA: MIT Press.

14. Gandy, R. (1980). Church’s thesis and principles for mechanisms. In J. Barwise, H. J. Keisler, & K. Kunen (Eds.), The Kleene symposium (pp. 123–148). Amsterdam: North-Holland.

15. Giaquinto, M. (2007). Visual thinking in mathematics. New York: Oxford University Press.

16. Hertling, P. (1999). A real number structure that is effectively categorical. Mathematical Logic Quarterly, 45(2), 147–182.

17. Hogarth, M. (1994). Non-Turing computers and non-Turing computability. In PSA: Proceedings of the biennial meeting of the Philosophy of Science Association (pp. 126–138). University of Chicago Press.

18. Hogarth, M. (2004). Deciding arithmetic using SAD computers. British Journal for the Philosophy of Science, 55(4), 681–691.

19. Kleene, S. C. (1952). Introduction to metamathematics. Amsterdam: North-Holland.

20. Kripke, S. (1992). Logicism, Wittgenstein, and de re Beliefs about Numbers.

21. Manchak, J. B. (2010). On the possibility of supertasks in general relativity. Foundations of Physics, 40, 276–288.

22. Manchak, J. B. (2018). Malament-Hogarth machines. British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy023.

23. Manchak, J., & Roberts, B. W. (2016). Supertasks. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Stanford: Metaphysics Research Lab, Stanford University.

24. Marshall, O. (2017). The psychology and philosophy of natural numbers. Philosophia Mathematica, 1–27.

25. Rescorla, M. (2007). Church’s thesis and the conceptual analysis of computability. Notre Dame Journal of Formal Logic, 48(2), 253–280.

26. Rescorla, M. (2015). The representational foundations of computation. Philosophia Mathematica, 23(3), 338–366.

27. Rieffel, E., & Polak, W. (2011). Quantum computing: A gentle introduction. Cambridge, MA: MIT Press.

28. Rogers, H. (1967). Theory of recursive functions and effective computability. New York: McGraw-Hill.

29. Shapiro, S. (1982). Acceptable notation. Notre Dame Journal of Formal Logic, 23(1), 14–20.

30. Shapiro, S. (2013). The open texture of computability. In B. J. Copeland, C. J. Posy, & O. Shagrir (Eds.), Computability: Turing, Gödel, Church, and beyond (pp. 152–182). Cambridge, MA: MIT Press.

31. Shapiro, S. (2017). Computing with numbers and other non-syntactic things: De re knowledge of abstract objects. Philosophia Mathematica, 25(2), 268–291.

32. Shoenfield, J. R. (1993). Recursion theory. Lecture notes in logic. Berlin: Springer.

33. Shor, P. (1994). Algorithms for quantum computation: Discrete logarithms and factoring. In Proceedings 35th annual symposium on foundations of computer science. (Santa Fe, NM) (pp. 124–134). IEEE Computer Society Press.

34. Shor, P. (1999). Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Review, 41(2), 303–332.

35. Sieg, W. (1994). Mechanical procedures and mathematical experience. In A. George (Ed.), Mathematics and mind. Oxford: Oxford University Press.

36. Sieg, W. (2002a). Calculations by man and machine: Conceptual analysis. In W. Sieg, R. Sommer, & C. Talcott (Eds.), Reflections on the foundations of mathematics (pp. 390–409). Natick, MA: A. K. Peters.

37. Sieg, W. (2002b). Calculations by man and machine: Mathematical presentation. In P. Gärdenfors, J. Woleński, & K. Kijania-Placek (Eds.), Proceedings of the Cracow international congress for logic, methodology and philosophy of science (pp. 247–262). New York: Kluwer Academic Publishing.

38. Turing, A. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42(1), 230–265.

39. Vardi, M. Y. (2012). What Is an Algorithm? Communications of the ACM, 55(3), 5.

40. Weihrauch, K. (2000). Computable analysis: An introduction. New York: Springer.

41. Welch, P. D. (2008). The extent of computation in Malament-Hogarth spacetimes. British Journal for the Philosophy of Science, 59(4), 659–674.

42. Yanofsky, N. S. (2011). Towards a definition of algorithm. Journal of Logic and Computation, 21(2), 253–286.

## Author information


### Corresponding author

Correspondence to Ethan Brauer.