# Windable Heads and Recognizing NL with Constant Randomness

## Abstract

Every language in NL has a *k*-head two-way nondeterministic finite automaton (2nfa(\(k\))) recognizing it. It is known how to build a constant-space, constant-randomness verifier algorithm from a 2nfa(\(k\)) for the same language, but with an error probability that cannot be reduced further by repetition. We identify the unpleasant characteristic of the heads that causes this high error as the property of being “windable”. With a tweak on the previous verification algorithm, the error is improved to a bound that depends only on \(k_{\text {W}}\le k\), the number of windable heads. Using this new algorithm, the subset of languages in NL that have a 2nfa(\(k\)) recognizer with \(k_{\text {W}}\le 1\) can be verified with arbitrarily reducible error using constant space and randomness.

## Keywords

Interactive proof systems · Multi-head finite automata · Probabilistic finite automata

## 1 Introduction

Probabilistic Turing machines (PTM) are classical Turing machines with randomness as a resource. These machines alone can be recognizers of a language, or be verifiers for the proofs of membership in an interactive proof system (IPS). In either scenario, a noticeable error might be incurred in machines’ decisions due to randomness involved in their execution. This error can usually be reduced via repeated execution in PTM’s control.

The class of languages verifiable by constant-randomness two-way probabilistic finite automata (2pfa) is the same as NL, the class of languages recognizable by nondeterministic sub-linear-space Turing machines. Curiously, however, the error of these verifiers in recognizing languages of this class seems to be irreducible beyond a certain threshold [6].

In this paper, we introduce a characteristic for the languages in NL. Based on this characteristic, we lower the error threshold established in [6] for almost all languages in NL. Finally, we delineate a subset of NL in which each language is verifiable by a constant-randomness 2pfa with arbitrarily low error.

The remainder of the paper is structured as follows: Sects. 2 and 3 provide the necessary background, as well as our terminology in the domain. A key property of multi-head finite automata is identified in Sect. 4. Our verification algorithm, which improves on the algorithm of Say and Yakaryılmaz [6], and a subset of NL on which this algorithm excels are described in Sect. 5.

- \(\mathcal {L}\left( M\right) \) denotes the language recognized by the machine *M*.
- \(\mathcal {L}\left( \textsf {X}\right) = \left\{ \mathcal {L}\left( M\right) \mid M \in \textsf {X}\right\} \) for a class of machines \(\textsf {X}\).
- \(S_{\setminus q}\) denotes the set *S* without its element *q*.
- \(\sigma _i\) denotes the *i*th element of the sequence \(\sigma \).
- \(w^\times \) denotes the substring of *w* without its last character.
- \(\sigma \cdot \tau \) denotes the sequence \(\sigma \) concatenated with the element or sequence \(\tau \).

## 2 Finite Automata with *k* Heads

Finite automata are Turing machines with read-only heads on a single tape. A finite automaton with only one head is equivalent to a DFA (deterministic finite automaton) in terms of language recognition [3], and hence recognizes a regular language. Finite automata with \(k > 1\) heads can recognize more than just the regular languages. Their formal definition may be given as follows:

### Definition 1 (Multi-head nondeterministic finite automata)

A 2nfa(\(k\)) is a machine \(M = (Q, \varSigma , \delta , q_0, q_f)\), where:

- 1. *Q* is the finite set of states,
- 2. \(\varSigma \) is the finite set of input symbols, where
  - (a) \(\triangleright , \triangleleft \) are the left and right end-markers for the input on the tape,
  - (b) \(\varGamma = \varSigma \cup \left\{ \triangleright , \triangleleft \right\} \) is the tape alphabet,
- 3. \(\delta :Q \times \varGamma ^{k} \rightarrow \mathcal {P}(Q_{\setminus q_0} \times \varDelta ^{k})\) is the transition function, where
  - (a) \(\varDelta = \left\{ -1, 0, +1\right\} \) is the set of head movements,
- 4. \(q_0 \in Q\) is the unique initial state,
- 5. \(q_f \in Q\) is the unique accepting state.

Machine *M* is said to execute on a string \(w \in \varSigma ^*\), when \(\triangleright w \triangleleft \) is written onto *M*’s tape, all of its heads are rewound to the cell containing \(\triangleright \), its state is reset to \(q_0\), and it then executes in steps by the rules of \(\delta \). At each step, the inputs to \(\delta \) are the state of *M* and the symbols read by its respective heads.

When the current output of \(\delta \) is a singleton with the only member \((q', (d_1, \dotsc , d_k)) \in Q_{\setminus q_0} \times \varDelta ^k\), the next state of *M* becomes \(q'\), and *M* moves its *i*th head by \(d_i\). Whenever the output has more than one element, the execution branches, and each branch runs in parallel. A branch is said to reject *w* if the output of \(\delta \) is empty, or if all of its sub-branches reject. A branch accepts *w* if its state is \(q_f\), or if any one of its sub-branches accepts. A branch may also do neither, in which case the branch is said to loop.

A string *w* is in \(\mathcal {L}\left( M\right) \), if the root of *M*’s execution on *w* is an accepting branch. Otherwise, \(w \notin \mathcal {L}\left( M\right) \), and the root of *M*’s execution is either a rejecting or a looping branch.

Restricting \(\delta \) to have no transitions inbound to \(q_0\) does not limit the language recognition power of a 2nfa(\(k\)): Any 2nfa(\(k\)) with such transitions can be converted into one without, by adding a new initial state \(q'_0\) and setting \(\delta (q'_0, g_1, \dotsc , g_k) = \delta (q_0, g_1, \dotsc , g_k)\) for every \(g_1, \dotsc , g_k \in \varGamma \).
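A minimal sketch of this conversion, on a hypothetical dictionary encoding of \(\delta \) that maps (state, symbols) pairs to sets of (state, movements) branches:

```python
def free_initial_state(delta, q0):
    """Add a fresh initial state that copies q0's outgoing transitions,
    so that no transition of the new machine enters its initial state."""
    q0_new = (q0, "fresh")  # any identifier not already used as a state
    new_delta = dict(delta)
    for (q, symbols), branches in delta.items():
        if q == q0:
            # q0_new mimics q0 on every combination of read symbols
            new_delta[(q0_new, symbols)] = set(branches)
    return new_delta, q0_new
```

The original \(q_0\) remains a state of the machine; it simply can never be re-entered, since no transition targeted it in the new initial state's copy either.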

### Lemma 1

The containment \(\mathcal {L}\left( \textsf {2nfa}\mathrm{(}k\mathrm{)}\right) \subsetneq \mathcal {L}\left( \textsf {2nfa}\mathrm{(}k+1\mathrm{)}\right) \) is proper [4, 5].

### Lemma 2

Given a 2nfa(\(k\)), one can construct a 2nfa(\(2k\)) recognizing the same language, which is guaranteed to halt.

### Proof

A *k*-headed automaton running on an input *w* of length *n* has on the order of \(n^k\) distinct head configurations. An additional *k* heads can serve as a *k*-digit base-*n* counter, counting up to \(n^k\), and halt the machine with a rejection once this bound is exceeded, since a branch running that long must have repeated a configuration and is therefore looping.
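The counting argument can be sketched as follows; the `step` callback is a hypothetical stand-in for one transition of the simulated branch, and each auxiliary head's cell index acts as one base-*n* digit:

```python
def run_with_step_bound(step, n, k):
    """Run one branch of a 2nfa(k) on an input of length n, but halt
    with rejection once the step count exceeds n**k.

    `step` performs one transition and returns False when the branch
    halts on its own.  The counter is a k-digit base-n odometer; each
    digit corresponds to the position of one of the k auxiliary heads.
    """
    digits = [0] * k          # k auxiliary heads, each on some cell 0..n-1
    while step():
        # increment the base-n odometer by one
        i = 0
        while i < k and digits[i] == n - 1:
            digits[i] = 0     # this head sweeps back to the left end
            i += 1
        if i == k:
            return "reject"   # n**k steps elapsed: the branch is looping
        digits[i] += 1        # move head i one cell to the right
    return "halted"
```

On a never-halting branch with `n = 3`, `k = 2`, the wrapper rejects after exactly \(3^2 = 9\) calls to `step`.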

### Lemma 3

Every 2nfa(\(k\)) can be converted into an equivalent 2nfa(\(k\)) which does not move its heads beyond the end markers.

Conversion in Lemma 3 is done via trivial modifications on the transition function.

### Definition 2 (Multi-head deterministic finite automata)

A 2dfa(\(k\)) is a 2nfa(\(k\)) that is restricted to satisfy \( \left| {\delta } \right| \le 1\), where \(\delta \) is its transition function.

### Lemma 4

Every language in NL is recognized by some 2nfa(\(k\)), and every language recognized by a 2nfa(\(k\)) is in NL [2].

### Definition 3 (Multi-head one-way finite automata)

A 1nfa(\(k\)) is a restricted 2nfa(\(k\)) that does not move its heads backwards on the tape. In its definition, \(\varDelta = \left\{ 0, +1\right\} \). A 1dfa(\(k\)) is similarly a restriction of 2dfa(\(k\)).

### Definition 4 (Multi-head probabilistic finite automata)

A 2pfa(\(k\)) *M* is a PTM defined similarly to a 2nfa(\(k\)), with the following modifications on Definition 1:

- \(Q = Q_D \cup Q_P\), where \(Q_D\) and \(Q_P\) are disjoint.
- Transition function \(\delta \) is overloaded as follows:
  - \(\delta :Q_D \times \varGamma ^{k} \rightarrow \mathcal {P}(Q_{\setminus q_0} \times \varDelta ^{k})\),
  - \(\delta :Q_P \times \varGamma ^{k} \times \{0, 1\} \rightarrow \mathcal {P}(Q_{\setminus q_0} \times \varDelta ^{k})\).

  In either case, the output of \(\delta \) may have at most 1 element.

States in \(Q_D\) are called deterministic, and those in \(Q_P\) probabilistic. When the machine is in a probabilistic state, \(\delta \) receives a third parameter, a 0 or 1 provided by a random bit-stream. We write \(\textsf {2pfa}\) instead of \(\textsf {2pfa}\mathrm{(}1\mathrm{)}\).

A string *w* is in \(\mathcal {L}\left( M\right) \), *iff* *M* accepts *w* with a probability greater than \(\frac{1}{2}\).

Given such a machine *M*, the following three types of error in language recognition are inherent to it. For \(w \in \mathcal {L}\left( M\right) \), *M* can falsely reject *w*. For \(w \notin \mathcal {L}\left( M\right) \), *M* can falsely accept *w*; more generally, *M* can fail to reject *w*, by either accepting it, or going into an infinite loop. The weak and strong errors of *M* are defined as follows [1]:

\(\varepsilon _{\text {weak}} = \max (\varepsilon _{\text {false-reject}}, \varepsilon _{\text {false-accept}})\), \(\varepsilon _{\text {strong}} = \max (\varepsilon _{\text {false-reject}}, \varepsilon _{\text {fail-to-reject}})\).

Consequently, \(\varepsilon _{\text {fail-to-reject}}\ge \varepsilon _{\text {false-accept}}\) and \(\varepsilon _{\text {strong}}\ge \varepsilon _{\text {weak}}\) are always true.

For *k* and \(\varepsilon \ge 0\), let \(\mathcal {L}_{\text {weak}, \varepsilon }\left( \textsf {2pfa}\mathrm{(}k\mathrm{)}\right) \) be the class of languages recognized by a 2pfa(\(k\)) with a weak error of at most \(\varepsilon \). Class \(\mathcal {L}_{\text {strong}, \varepsilon }\left( \textsf {2pfa}\mathrm{(}k\mathrm{)}\right) \) is defined similarly.

## 3 Interactive Proof Systems

An interactive proof system (IPS) models the verification process of proofs. Of the two components in an IPS, the *prover* produces the purported proof of membership for a given input string, while the *verifier* either accepts or rejects the string, alongside its proof. The catch is that the prover is assumed to advocate for the input string’s membership without regard to truth, while the verifier is expected to be accurate in its decision, holding a healthy level of skepticism against the proof.

The verifier is any Turing machine with capabilities to interact with the prover via a shared communication cell. The prover can be seen as an infinite-state transducer that has access to both an original copy of the input string and the communication cell. The prover never halts, and it writes its output to the communication cell.

Our focus will be on the one-way IPS, which restricts the interaction to be a monologue from the prover to the verifier. Since there is no influx of information to the prover, the prover’s output depends on the input string only. Consequently, a one-way IPS can also be modeled as a verifier paired with a certificate function, \(c :\varSigma ^* \rightarrow \varLambda ^\infty \), where \(\varLambda \) is the communication alphabet. A formal definition follows:

### Definition 5 (One-way interactive proof systems)

An IP(\(\textsf {restriction-list}\)) is defined with a tuple of a verifier and a certificate function, \(S = (V, c)\). The verifier *V* is a Turing machine of type specified by the restriction-list. The certificate function *c* outputs the claimed proof of membership \(c(w) \in \varLambda ^\infty \) for a given input string *w*.

The verifier’s access to the certificate is only in the forward direction. The qualifier “one-way”, however, refers to the interaction being a monologue from the prover to the verifier, not to this forward-only access, which holds for every IPS.

The language recognized by *S* can be denoted with \(\mathcal {L}\left( S\right) \), as well as \(\mathcal {L}\left( V\right) \). A string *w* is in \(\mathcal {L}\left( S\right) \) *iff* the interaction results in an acceptance of *w* by *V*.

If the verifier of the IPS is probabilistic, its error becomes the error of the IPS. The notations \(\mathcal {L}_{\text {weak}, \varepsilon }\left( \textsf {IP}\mathrm{(}\textsf {restriction-list}\mathrm{)}\right) \) and \(\mathcal {L}_{\text {strong}, \varepsilon }\left( \textsf {IP}\mathrm{(}\textsf {restriction-list}\mathrm{)}\right) \) are also adopted.

For a language *L*, let *k* be the minimum number of heads among the \(\textsf {2nfa}\mathrm{(}k\mathrm{)}\) recognizing *L* that also halt on every input. Existence of such a 2nfa(\(k\)) is guaranteed by Lemmas 2 and 4.

This work improves on the findings of [6]. For their pertinence, an outline of the algorithms attaining the errors in Eqs. (3) and (4) is provided in the following sections.

### 3.1 Reducing Weak Error Arbitrarily Using Constant-Randomness Verifier

Given a language \(L \in \textsf {NL}\) with a halting 2nfa(\(k\)) recognizer *M*, verifier \(V_1 \in \textsf {2pfa}\) expects a certificate to report (i) the *k* symbols read, and (ii) the nondeterministic branch taken for each transition made by *M* on the course of accepting *w*. Such a report necessarily contains a lie, if \(w \notin \mathcal {L}\left( M\right) = L\).

Verifier \(V_1\) maintains an internal representation of *M*’s control. Then, the algorithm for the verifier is as follows:

- 1. Repeat \(m\) times:
  - (a) Move head left, until \(\triangleright \) is read.
  - (b) Reset *M*’s state in the internal representation, denoted \(q_m\).
  - (c) Randomly choose a head of *M* by flipping \(\left\lceil {\log k} \right\rceil \) coins.
  - (d) Repeat until \(q_m\) becomes the accepting state of *M*:
    - i. Read the *k* symbols and the nondeterministic branch taken by *M* from the certificate.
    - ii. *Reject* if the reading from \(V_1\)’s head disagrees with the corresponding symbol on the certificate.
    - iii. Make the transition in the internal representation if it is valid, and move the chosen head as dictated by the nondeterministic branch. *Reject* otherwise.
- 2. *Accept*.
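One round of \(V_1\) (steps (a)–(d) above) can be sketched as follows. The certificate format and the dictionary encoding of \(\delta \) are hypothetical simplifications of the description above:

```python
def verify_round(tape, delta, q0, qf, chosen_head, certificate):
    """Simulate M along one certificate, checking only `chosen_head`.

    `tape` is the input with end-markers, e.g. ">abba<".
    `delta` maps (state, symbols) to a set of (next_state, moves) branches.
    `certificate` yields (symbols, branch) entries: the k symbols M
    allegedly reads and the nondeterministic branch it takes.  Returns
    True when the simulated state reaches qf, False on a detected lie or
    invalid transition.  A winding certificate makes this loop forever.
    """
    state, pos = q0, 0                 # the head starts on the left end-marker
    cert = iter(certificate)
    while state != qf:
        try:
            symbols, branch = next(cert)
        except StopIteration:
            return False               # certificate ended prematurely
        if symbols[chosen_head] != tape[pos]:
            return False               # lie about the verified head detected
        if branch not in delta.get((state, symbols), set()):
            return False               # not a valid transition of M
        state, moves = branch
        pos += moves[chosen_head]      # only the chosen head is tracked
    return True
```

A lie about any head other than `chosen_head` passes this round unnoticed, which is exactly why the head is chosen at random.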

*m*repetitions. This is possible despite

*M*being a halting machine. The lie in the certificate can present an infinite and even changing input string from the perspective of the head being lied about.

Being wound up counts as a failure to reject, but does not yield a false acceptance. The resulting weak error is \(\varepsilon _{\text {strong}}= k^{-m}\), which can be made arbitrarily small.

### 3.2 Bringing Strong Error Below \(1/2\) Using Constant-Randomness Verifier

The modified verifier proceeds as follows:

- 1. Randomly *reject* with a fixed probability by flipping \(\left\lceil {\log k} \right\rceil + 1\) coins.
- 2. Continue as \(V_1\).

## 4 Windable Heads

This section will introduce a property of the heads of a 2nfa(\(k\)). It leads to a characterization of the 2nfa(\(k\)) by the number of heads with this property. A subset rNL of the class NL will be defined, which will also be a subset of \(\mathcal {L}_{\text {strong}, \varepsilon }\left( \textsf {IP}\mathrm{(}\textsf {2pfa}, \textsf {constant}{\text {-}}\textsf {randomness}\mathrm{)}\right) \) for \(\varepsilon > 0\) approaching zero.

A head of a 2nfa(\(k\)) *M* is said to be *windable*, if these three conditions hold:

- There is a cycle on the graph of *M*’s transition diagram, and a path from \(q_0\) to a node on the cycle.
- The movements of the head in question add up to zero in a full round of that cycle.
- The readings of that head are consistent along the said path and cycle.
The definition of a head being windable completely disregards the readings of the other heads, hence the witness path and the cycle need not be a part of a realistic execution of the machine *M*.

We now define windable heads formally to clarify their distinguishing points. Some preliminary definitions will be needed.

### Definition 6 (Multi-step transition function)

\(\delta ^t :Q \times (\varGamma ^t)^k \rightarrow \mathcal {P}\left( Q_{\setminus q_0} \times (\varDelta ^t)^k\right) \) is the \(t\)-step transition function of *M*. It is defined recursively, composing \(\delta \) with itself \(t\) times.

The set \(\delta ^t(q, g_1, \dotsc , g_k)\) contains a \((k+1)\)-tuple for each nondeterministic computation to be performed by *M*, as it starts from the state *q* and reads \(g_i\) with its *i*th head. These tuples, each referred to as a *computation log*, consist of the state reached, and the movement histories of the *k* heads during that computation.

The constraint of fixed and persistent tape contents, present in any execution of a 2nfa(\(k\)), is blurred in the definition of the multi-step transition function. This closely resembles the verifier’s perspective of the remaining heads that it does not verify in the previous section. There, however, the verifier’s own readings were self-consistent. This discrepancy will be accounted for with the next pair of definitions.
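The relaxation in Definition 6 can be illustrated by enumerating the computation logs of \(\delta ^t\) by brute force. This is a sketch; the dictionary encoding of \(\delta \) and the representation of each \(g_i\) as a per-step symbol sequence are assumptions:

```python
def multi_step(delta, state, readings, t):
    """Enumerate the computation logs of delta^t from `state`.

    `readings[i][s]` is the symbol head i purportedly reads at step s;
    as in Definition 6, these need not be consistent with any actual
    tape.  Returns the set of (state, histories) computation logs,
    where histories[i] is head i's t-step movement history.
    """
    logs = {(state, tuple(() for _ in readings))}
    for s in range(t):
        nxt = set()
        for q, hists in logs:
            symbols = tuple(r[s] for r in readings)  # step-s readings
            for q2, moves in delta.get((q, symbols), set()):
                # extend every head's movement history by this step
                nxt.add((q2, tuple(h + (m,) for h, m in zip(hists, moves))))
        logs = nxt
    return logs
```

Note that the tape never appears here: heads are driven purely by the claimed readings, exactly the verifier's view of the heads it does not check.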

### Definition 7 (Relative head position during *i*th transition)

Let *M* be a 2nfa(\(k\)) that does not attempt to move its heads beyond the end markers on the input tape, and \(\delta \) be its transition function. Let *H* be a head of *M*, and *D* be any \(t\)-step movement history of that head in the output of \(\delta ^t\). The relative position of *H* while making the *i*th transition of *D*, measured from its position before the first movement in that history, is given by the function \(\rho _D :\mathbb {N}_1^{\le t} \rightarrow \left( -t, t \right) \) defined as

\(\rho _D(i) = \sum _{j=1}^{i-1} D_j.\)

By Lemmas 3 and 4, given any language in NL there is a 2nfa(\(k\)) recognizing it, which also does not attempt to move its heads beyond the end markers.

### Definition 8 (1-head consistent \(\delta ^t\))

\(\delta ^t_1 :Q \times (\varGamma ^t)^k \rightarrow \mathcal {P}\left( Q_{\setminus q_0} \times (\varDelta ^t)^k\right) \) is the *first-head consistent* subset of \(\delta ^t\) of a 2nfa(\(k\)) *M*; the *i*th-head consistent subset \(\delta ^t_i\) is defined analogously. It filters out the first-head inconsistent computation logs by checking the purportedly read characters against the movement histories. The formal definition assumes that *M* does not attempt to move its heads beyond the end markers.

For each pair of transitions departing from the same tape cell, it is checked whether the same symbol is read while they are performed. This check needs to be done only for \(p \in \left( -t, t \right) \), since in \(t\) steps a head may travel at most \(t\) cells afar, and the last cell it can read from is then the previous one. This is also consistent with the definition of \(\rho _D\).
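The check can be sketched for a single head's movement history and readings (a hypothetical helper; cell positions are relative, as in \(\rho _D\)):

```python
def head_consistent(history, readings):
    """Check that `readings` could all come from one fixed tape.

    `history[s]` is the movement (-1, 0 or +1) made at step s, and
    `readings[s]` the symbol read while making it.  Two steps departing
    from the same relative cell must read the same symbol there.
    """
    seen = {}                      # relative cell -> symbol read there
    pos = 0                        # rho_D: position before step s
    for move, symbol in zip(history, readings):
        if seen.setdefault(pos, symbol) != symbol:
            return False           # same cell, two different symbols
        pos += move
    return True
```

The filter \(\delta ^t_i\) keeps exactly the computation logs whose *i*th history and readings pass this test; the other \(k-1\) heads are left unchecked.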

This last definition is the exact analogue of the verifier’s perspective in the algorithms proposed in [6]. It can be used directly in our next definition, which will lead us to a characterization of the 2nfa(\(k\)).

### Definition 9 (Windable heads)

The *i*th head of a 2nfa(\(k\)) *M* is *windable* *iff* there exist

- 1. \(g_1, \dotsc , g_k \in \varGamma ^t\) and \(g_1', \dotsc , g_k' \in \varGamma ^l\), for \(t\) and *l* positive,
- 2. \((q, D_1, \dotsc , D_k) \in \delta ^t_i(q_0, g_1, \dotsc , g_k)\),
- 3. \((q, D'_1, \dotsc , D'_k) \in \delta ^l_i(q, g_1', \dotsc , g_k')\), where \({{\,\mathrm{sum}\,}}(D'_i) = 0\).

When these conditions hold, \(g_1, \dotsc , g_k\) can be viewed as the sequences of characters that can be fed to \(\delta \) to bring *M* from \(q_0\) to *q*, crucially without breaking consistency among the *i*th head’s readings. This ensures the reachability of state *q*. Then, the sequences \(g_1', \dotsc , g_k'\) wind the *i*th head into a loop: bringing *M* back to state *q* and the *i*th head back to where it started the loop, all while keeping the *i*th head’s readings consistent. The readings of the other heads are allowed to be inconsistent, and their positions can change with every such loop.

A head is *reliable* *iff* the head is not windable.

It is important to note that a winding is not based on a realistic execution of a 2nfa(\(k\)). A head of a 2nfa(\(k\)) *M* might be windable, even if *M* is guaranteed to halt on every input. This is because the property of being windable allows the other heads to have *unrealistic*, inconsistent readings that may never be realized with any input string.
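Windability per Definition 9 can be tested by brute force over bounded \(t\) and \(l\). The following is an illustrative, exponential-time sketch under the dictionary encoding of \(\delta \) assumed earlier; only head *i*'s readings are kept consistent, via a table of symbols seen per relative cell:

```python
from itertools import product

def is_windable(delta, alphabet, k, i, q0, bound):
    """Brute-force test of Definition 9 for head i, with t, l <= bound."""
    def reachable(start, steps):
        # (state, offset of head i, consistency table) after `steps` steps;
        # the other k-1 heads may read arbitrary, even unrealistic, symbols
        confs = {(start, 0, frozenset())}
        for _ in range(steps):
            nxt = set()
            for q, off, seen in confs:
                table = dict(seen)
                for symbols in product(alphabet, repeat=k):
                    if table.get(off, symbols[i]) != symbols[i]:
                        continue  # head i already read another symbol here
                    for q2, moves in delta.get((q, symbols), set()):
                        t2 = dict(table)
                        t2[off] = symbols[i]
                        nxt.add((q2, off + moves[i], frozenset(t2.items())))
            confs = nxt
        return confs

    for t in range(1, bound + 1):
        for q, _off, _seen in reachable(q0, t):     # condition 2: reach q
            for l in range(1, bound + 1):
                # condition 3: loop back to q with zero net movement of
                # head i; the loop may use fresh readings g'
                if any(q2 == q and off2 == 0
                       for q2, off2, _ in reachable(q, l)):
                    return True
    return False
```

For instance, a head that can idle on a self-loop without moving is windable, while a head of a one-way sweep with no cycles is not.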

## 5 Recognizing Some Languages in NL with Constant-Randomness and Reducible-Error Verifiers

Consider a language \(L \in \textsf {NL}\) with a 2nfa(\(k\)) recognizer *M* that halts on every input. In designing the randomness-restricted 2pfa(\(1\)) verifier \(V_2\), the following three cases will be considered:

*All Heads Are Reliable.* In this case, \(V_1\) suffices by itself to attain reducible error. Without any windable heads in the underlying 2nfa(\(k\)), each round of \(V_1\) will terminate. The certificate can only make \(V_1\) falsely accept, and the chances for that can be reduced arbitrarily by increasing \(m\).

*All Heads Are Windable.* In this case, unless the worst-case assumptions are alleviated, any verification algorithm using a simulation principle similar to \(V_1\)’s will be wound up in its first round. The head with the minimum probability of being chosen will be the weakest link of \(V_2\), and thus the head the certificate will lie about. The failure-to-reject rate equals 1 minus that probability. This rate is lowest when the probabilities are equal, and is then \(\frac{k-1}{k}\).

*It Is a Mix.* Let \(k_{\text {W}}, k_{\text {R}}\) denote the windable and reliable head counts, respectively. Thus \(k_{\text {W}}+ k_{\text {R}}= k\). The new verifier algorithm \(V_2\) is similar to \(V_1\), but instead of choosing a head to simulate with equal probability, it will do a *biased branching*. With biased branching, \(V_2\) favors the reliable heads over the windable heads while choosing a head to verify.

- 1c.\({}'\) Randomly choose a head of *M* by biased branching: Instead of flipping \(\left\lceil {\log k} \right\rceil \) coins, flip \(b+ \left\lceil {\log (\max (k_{\text {W}}, k_{\text {R}}))} \right\rceil \) coins. Let \(z_1, z_2, \dotsc , z_{b}\) be the outcomes of the first \(b\) coins. If \(\sum _{i=1}^{b} 2^{-i}z_i < P_{\text {W}}\), choose one of the windable heads depending on the outcomes of the next \(\left\lceil {\log k_{\text {W}}} \right\rceil \) coins. Otherwise, similarly choose a reliable head via \(\left\lceil {\log k_{\text {R}}} \right\rceil \) coins.
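Step 1c\({}'\) can be sketched as follows. Here `b` controls the precision with which \(P_{\text {W}}\) is approximated, the indexing of windable heads before reliable ones is an assumed convention, and the uniform choice within a group uses rejection sampling, a slight deviation from the fixed flip count in the text:

```python
import math
import random

def choose_head(k_w, k_r, p_w, b, flip=lambda: random.getrandbits(1)):
    """Pick a head index by biased branching using only fair coin flips.

    Heads 0..k_w-1 are the windable ones, the rest reliable.  The first
    b flips approximate a uniform value in [0, 1); it falls below p_w
    with probability within 2**-b of p_w.
    """
    z = sum(flip() * 2 ** -(j + 1) for j in range(b))
    if z < p_w:
        pool, base = k_w, 0            # branch into the windable group
    else:
        pool, base = k_r, k_w          # branch into the reliable group
    bits = math.ceil(math.log2(pool)) if pool > 1 else 0
    while True:                        # uniform choice within the group
        idx = 0
        for _ in range(bits):
            idx = (idx << 1) | flip()
        if idx < pool:
            return base + idx          # retry on out-of-range outcomes
```

The number of flips per call stays constant in the input length, which is the resource bound that matters for the verifier.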

*For an Input String* \(w \in L\)*.* Verifier \(V_2\) remains perfectly accurate. The certificate may provide any route that leads *M* to acceptance; \(V_2\) will accept after \(m\) rounds of validation.

*For an Input String* \(w \notin L\)*.* To keep \(V_2\) from rejecting, the certificate will need to lie about at least one of the heads. Switching the head to lie about between rounds cannot benefit the certificate, since the rounds are identical from both \(V_2\)’s and the certificate’s points of view. Hence, it is reasonable to simplify our analysis by assuming that the certificate repeats itself in each round.

In each round, one of three things happens:

- If \(V_2\) chooses the head being lied about, it detects the lie rather than being deceived.
- Otherwise, if a windable head was chosen, \(V_2\) loops indefinitely.
- Otherwise (i.e. a reliable head was chosen), \(V_2\) runs for another round or accepts *w*.

Only by surviving all \(m\) rounds does the certificate obtain a false acceptance.
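Under the simplifying assumptions above (the certificate lies about a single fixed head, and heads within each group are chosen with equal probability), the three per-round outcomes yield the following failure-to-reject probability over \(m\) rounds; this is a sketch, with \(P_{\text {W}}\) passed as `p_w`:

```python
def failure_to_reject(k_w, k_r, p_w, lied_head_windable, m):
    """Probability that V_2 fails to reject over m rounds, given which
    group the lied-about head belongs to.

    Per round: detect the lie (reject), wind up (loop forever, a failure
    to reject), or survive the round.  Surviving all m rounds is a false
    acceptance, which also fails to reject.
    """
    p_r = 1.0 - p_w
    if lied_head_windable:
        p_detect = p_w / k_w       # pick the lied-about windable head
        p_wind = p_w - p_detect    # pick another windable head
    else:
        p_detect = p_r / k_r       # pick the lied-about reliable head
        p_wind = p_w               # pick any windable head
    p_survive = 1.0 - p_detect - p_wind
    p_accept = p_survive ** m                                 # all rounds pass
    p_wound = sum(p_survive ** t * p_wind for t in range(m))  # wind at round t
    return p_wound + p_accept
```

With all heads windable and chosen uniformly (`k_r = 0`, `p_w = 1`), the expression reduces to \(\frac{k-1}{k}\), matching the worst case discussed above; with no windable heads it decays geometrically in \(m\).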

The probability \(P_{\text {W}}\) is tuned against the worst-case lies about the *k* heads of *M*, while verifying for the language \(\mathcal {L}\left( M\right) \in \textsf {NL}\). Consequently, the optimum error for \(V_2\) is as given in Eq. (5).

### Theorem 1

The minimum error for \(V_2\) depends only on the number of windable heads of the 2nfa(\(k\)) *M* recognizing \(L \in \textsf {NL}\).

### Definition 10 (Reducible strong error subset of NL)

For \(\varepsilon > 0\) approaching zero, the reducible strong error subset of NL is defined as

\(\textsf {rNL} = \textsf {NL} \cap \mathcal {L}_{\text {strong}, \varepsilon }\left( \textsf {IP}\mathrm{(}\textsf {2pfa}, \textsf {constant}{\text {-}}\textsf {randomness}\mathrm{)}\right) .\)

### Theorem 2

*Every language in NL that has a halting 2nfa(\(k\)) recognizer with \(k_{\text {W}}\le 1\) is in \(\textsf {rNL}\).*

Equations (5) and (6), and their consequent Theorems 1 and 2, constitute the main results of this study.

To reduce the strong error, verifier \(V_2'\) modifies \(V_2\) just as \(V_1\) was modified in Sect. 3.2:

- 1. Randomly *reject* with a fixed probability by flipping \(\left\lceil {\log k_{\text {W}}} \right\rceil + 1\) coins.
- 2. Continue as \(V_2\).

The strong error of \(V_2'\) is then given by Eq. (6).

### 5.1 Example Languages from rNL and Potential Outsiders

Let \(w_{\mathtt {a}}\) denote the number of occurrences of the symbol \(\mathtt {a}\) in a string *w*.

## 6 Open Questions

## References

- 1. Condon, A.: The complexity of space bounded interactive proof systems. In: Ambos-Spies, K., et al. (eds.) Complexity Theory: Current Research, pp. 147–189. Cambridge University Press, Cambridge (1992). https://www.cs.ubc.ca/~condon/papers/ips-survey.pdf
- 2. Hartmanis, J.: On non-determinancy in simple computing devices. Acta Informatica **1**(4), 336–344 (1972). https://doi.org/10.1007/BF00289513
- 3. Holzer, M., Kutrib, M., Malcher, A.: Complexity of multi-head finite automata: origins and directions. Theoret. Comput. Sci. **412**(1–2), 83–96 (2011). https://doi.org/10.1016/j.tcs.2010.08.024
- 4. Monien, B.: Transformational methods and their application to complexity problems. Acta Informatica **6**(1), 95–108 (1976). https://doi.org/10.1007/BF00263746
- 5. Monien, B.: Two-way multihead automata over a one-letter alphabet. RAIRO Informatique Théorique **14**(1), 67–82 (1980). https://doi.org/10.1051/ita/1980140100671
- 6. Say, C., Yakaryılmaz, A.: Finite state verifiers with constant randomness. Log. Methods Comput. Sci. **10**(3) (2014). https://doi.org/10.2168/LMCS-10(3:6)2014. arXiv:1102.2719