
Windable Heads and Recognizing NL with Constant Randomness

Mehmet Utkan Gezer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12038)

Abstract

Every language in NL has a k-head two-way nondeterministic finite automaton (2nfa(\(k\))) recognizing it. It is known how to build a constant-space, constant-randomness verifier algorithm from a 2nfa(\(k\)) for the same language, but with an error probability of \(\frac{1}{2} - \frac{1}{2k^2}\) that cannot be reduced further by repetition. We define the unpleasant characteristic of the heads that causes this high error as the property of being “windable”. With a tweak on the previous verification algorithm, the error is improved to \(\frac{1}{2} - \frac{1}{2k_{\text {W}}^2}\), where \(k_{\text {W}}\le k\) is the number of windable heads. Using this new algorithm, the subset of languages in NL that have a 2nfa(\(k\)) recognizer with \(k_{\text {W}}\le 1\) can be verified with arbitrarily reducible error using constant space and randomness.

Keywords

Interactive proof systems · Multi-head finite automata · Probabilistic finite automata

1 Introduction

Probabilistic Turing machines (PTM) are classical Turing machines with randomness as a resource. These machines can by themselves be recognizers of a language, or be verifiers for the proofs of membership in an interactive proof system (IPS). In either scenario, a noticeable error might be incurred in the machines’ decisions due to the randomness involved in their execution. This error can usually be reduced via repeated execution under the PTM’s control.

The class of languages verifiable by constant-randomness two-way probabilistic finite automata (2pfa) is the same as NL, the class of languages recognizable by nondeterministic Turing machines in logarithmic space. Curiously, however, the error of these verifiers in recognizing languages of this class seems to be irreducible beyond a certain threshold [6].

In this paper, we introduce a characteristic for the languages in NL. Based on this characteristic, we lower the error threshold established in [6] for almost all languages in NL. Finally, we delineate a subset of NL in which each language is verifiable by a constant-randomness 2pfa with arbitrarily low error.

The remainder of the paper is structured as follows: Sects. 2 and 3 provide the necessary background as well as our terminology in the domain. A key property of multi-head finite automata is identified in Sect. 4. Our verification algorithm, which improves on the algorithm of Say and Yakaryılmaz [6], and a subset of NL on which this algorithm excels are described in Sect. 5.

The following notation will be common throughout this paper:
  • \(\mathcal {L}\left( M\right) \) denotes the language recognized by the machine M.

  • \(\mathcal {L}\left( \textsf {X}\right) = \{\mathcal {L}\left( M\right) \mid M \in \textsf {X}\}\) for a class of machines \(\textsf {X}\).

  • \(S_{\setminus q}\) denotes the set S without its element q.

  • \(\sigma _i\) denotes the ith element of the sequence \(\sigma \).

  • \(w^\times \) denotes the substring of w without its last character.

  • \(\sigma \cdot \tau \) denotes the sequence \(\sigma \) concatenated with the element or sequence \(\tau \).

2 Finite Automata with k Heads

Finite automata are Turing machines with read-only heads on a single tape. A finite automaton with only one head is equivalent to a DFA (deterministic finite automaton) in terms of language recognition [3], and hence recognizes a regular language. Finite automata with \(k > 1\) heads can recognize more than just regular languages. Their formal definition may be given as follows:

Definition 1 (Multi-head nondeterministic finite automata)

A 2nfa(\(k\)) is a 5-tuple, \(M = (Q, \varSigma , \delta , q_0, q_f)\), where:
  1. Q is the finite set of states,
  2. \(\varSigma \) is the finite set of input symbols,
     (a) \(\triangleright , \triangleleft \) are the left and right end-markers for the input on the tape,
     (b) \(\varGamma = \varSigma \cup \{\triangleright , \triangleleft \}\) is the tape alphabet,
  3. \(\delta :Q \times \varGamma ^{k} \rightarrow \mathcal {P}(Q_{\setminus q_0} \times \varDelta ^{k})\) is the transition function, where
     (a) \(\varDelta = \{-1, 0, +1\}\) is the set of head movements,
  4. \(q_0 \in Q\) is the unique initial state,
  5. \(q_f \in Q\) is the unique accepting state.

Machine M is said to execute on a string \(w \in \varSigma ^*\), when \(\triangleright w \triangleleft \) is written onto M’s tape, all of its heads rewound to the cell with \(\triangleright \), its state is reset to \(q_0\), and then it executes in steps by the rules of \(\delta \). At each step, inputs to \(\delta \) are the state of M and the symbols read by respective heads of M.

When \( \left| {\delta } \right| = 1\) with the only member \((q', (d_1, \dotsc , d_k)) \in Q_{\setminus q_0} \times \varDelta ^k\), the next state of M becomes \(q'\), and M moves its ith head by \(d_i\). Whenever \( \left| {\delta } \right| > 1\), the execution branches, and each branch runs in parallel. A branch is said to reject w if \( \left| {\delta } \right| = 0\), or if all of its sub-branches reject. A branch accepts w if its state is \(q_f\), or if any one of its sub-branches accepts. A branch may also do neither, in which case it is said to loop.

A string w is in \(\mathcal {L}\left( M\right) \), if the root of M’s execution on w is an accepting branch. Otherwise, \(w \notin \mathcal {L}\left( M\right) \), and the root of M’s execution is either a rejecting or a looping branch.
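The execution and acceptance semantics above can be sketched as a breadth-first search over configurations. The following Python sketch is our own illustration (the function names and the example automaton are not from the paper); it visits each configuration at most once, so acceptance amounts to reachability of \(q_f\):

```python
from collections import deque

def run_2nfa(k, delta, q0, qf, w):
    """Decide whether the 2nfa(k) (Q, Sigma, delta, q0, qf) accepts w,
    by breadth-first search over configurations (state, head positions).
    The tape holds >w< and every head starts on the left end-marker."""
    tape = '>' + w + '<'
    start = (q0, (0,) * k)
    seen, frontier = {start}, deque([start])
    while frontier:
        q, heads = frontier.popleft()
        if q == qf:
            return True                      # an accepting branch exists
        reads = tuple(tape[h] for h in heads)
        for q2, moves in delta.get((q, reads), set()):
            heads2 = tuple(h + d for h, d in zip(heads, moves))
            if all(0 <= h < len(tape) for h in heads2):
                cfg = (q2, heads2)
                if cfg not in seen:
                    seen.add(cfg)
                    frontier.append(cfg)
    return False

# A hypothetical 2dfa(2) for {a^n b^n : n >= 0}: head 1 walks to the
# first b, then both heads advance together until head 1 reads < while
# head 2 sits on the first b.
delta = {
    ('q0', ('>', '>')): {('q1', (1, 0))},
    ('q1', ('a', '>')): {('q1', (1, 0))},
    ('q1', ('b', '>')): {('q2', (0, 1))},
    ('q1', ('<', '>')): {('qf', (0, 0))},
    ('q2', ('b', 'a')): {('q2', (1, 1))},
    ('q2', ('<', 'b')): {('qf', (0, 0))},
}
```

With this transition function, `run_2nfa(2, delta, 'q0', 'qf', 'aabb')` accepts while `'aab'` and `'ba'` are rejected, witnessing that two heads recognize a non-regular language.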

Restricting \(\delta \) to have no transitions inbound to \(q_0\) does not detriment the language recognition power of a 2nfa(\(k\)): Any 2nfa(\(k\)) with such transitions can be converted into one without, by adding a new initial state \(q'_0\) and setting \(\delta (q'_0, \gamma _1, \dotsc , \gamma _k) = \delta (q_0, \gamma _1, \dotsc , \gamma _k)\) for all \(\gamma _1, \dotsc , \gamma _k \in \varGamma \).

Lemma 1

The containment \(\mathcal {L}\left( \textsf {2nfa}\mathrm{(}k\mathrm{)}\right) \subsetneq \mathcal {L}\left( \textsf {2nfa}\mathrm{(}k+1\mathrm{)}\right) \) is proper [4, 5].

Lemma 2

Given a 2nfa(\(k\)), one can construct a 2nfa(\(2k\)) recognizing the same language, which is guaranteed to halt.

Proof

A k-headed automaton running on an input w of length n has \(n^k\) distinct configurations. The additional k heads can count up to \(n^k\), i.e. a k-digit number in base n, and halt the machine with a rejection once this bound is exceeded.
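The counting argument can be made concrete: the k additional heads store the digits of a base-n counter, one digit per head position. A small sketch (our illustration):

```python
def increment(digits, n):
    """One step of a k-digit base-n counter stored in k head positions
    (digit d = head resting on cell d). Returns None on overflow,
    which is the signal to halt and reject."""
    digits = list(digits)
    for i in reversed(range(len(digits))):
        if digits[i] < n - 1:
            digits[i] += 1
            return tuple(digits)
        digits[i] = 0                 # carry into the next digit
    return None

# With k = 2 extra heads on an input of length n = 3, exactly
# 3^2 = 9 counter values are visited before overflow.
count, d = 0, (0, 0)
while d is not None:
    count += 1
    d = increment(d, 3)
```

The loop visits \(n^k\) values, matching the configuration count in the proof.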

Lemma 3

Every 2nfa(\(k\)) can be converted into an equivalent 2nfa(\(k\)) which does not move its heads beyond the end markers.

Conversion in Lemma 3 is done via trivial modifications on the transition function.

Definition 2 (Multi-head deterministic finite automata)

A 2dfa(\(k\)) is a 2nfa(\(k\)) that is restricted to satisfy \( \left| {\delta } \right| \le 1\), where \(\delta \) is its transition function.

Lemma 4

The following are shown in [2]:
$$\begin{aligned} \cup _{k=1}^\infty \mathcal {L}\left( \textsf {2nfa}\mathrm{(}k\mathrm{)}\right)&= \textsf {NL}\end{aligned}$$
(1)
$$\begin{aligned} \cup _{k=1}^\infty \mathcal {L}\left( \textsf {2dfa}\mathrm{(}k\mathrm{)}\right)&= \textsf {L} \end{aligned}$$
(2)

Definition 3 (Multi-head one-way finite automata)

A 1nfa(\(k\)) is a restricted 2nfa(\(k\)) that does not move its heads backwards on the tape. In its definition, \(\varDelta = \{0, +1\}\). A 1dfa(\(k\)) is similarly a restriction of 2dfa(\(k\)).

Definition 4 (Multi-head probabilistic finite automata)

A 2pfa(\(k\)) M is a PTM defined similarly to a 2nfa(\(k\)), with the following modifications on Definition 1:

The state set is partitioned into \(Q_D\) and \(Q_P\); states in \(Q_D\) are called deterministic, and those in \(Q_P\) probabilistic. When the machine is in a probabilistic state, \(\delta \) receives a third parameter, a 0 or 1 provided by a random bit-stream. We write \(\textsf {2pfa}\) instead of \(\textsf {2pfa}\mathrm{(}1\mathrm{)}\).

A string w is in \(\mathcal {L}\left( M\right) \) iff M accepts w with a probability greater than \(\frac{1}{2}\).

Due to the probabilistic nature of a given 2pfa(\(k\)) M, the following three types of error in language recognition are inherent to it. For \(w \in \mathcal {L}\left( M\right) \):
$$\begin{aligned}\varepsilon _{\text {fail-to-accept}}(M)&= {\text {Pr}}[{M \text { does not accept } w}]\end{aligned}$$
(Failure to accept)
And for \(w \notin \mathcal {L}\left( M\right) \):
$$\begin{aligned}\varepsilon _{\text {fail-to-reject}}(M)&= {\text {Pr}}[{M \text { does not reject } w}]\end{aligned}$$
(Failure to reject)
$$\begin{aligned}\varepsilon _{\text {false-accept}}(M)&= {\text {Pr}}[{M \text { accepts } w}]\end{aligned}$$
(False acceptance)
The overall weak and strong errors of a probabilistic machine M are defined as follows [1]:
$$\begin{aligned} \varepsilon _{\text {weak}}(M)&= \max (\varepsilon _{\text {fail-to-accept}}(M), \varepsilon _{\text {false-accept}}(M)) \end{aligned}$$
(Weak error)
$$\begin{aligned} \varepsilon _{\text {strong}}(M)&= \max (\varepsilon _{\text {fail-to-accept}}(M), \varepsilon _{\text {fail-to-reject}}(M)) \end{aligned}$$
(Strong error)
Note that a 2pfa(\(k\)) M can fail to reject a string w by either accepting it or going into an infinite loop. Consequently, \(\varepsilon _{\text {fail-to-reject}}\ge \varepsilon _{\text {false-accept}}\) and \(\varepsilon _{\text {strong}}\ge \varepsilon _{\text {weak}}\) always hold.
Given k and \(\varepsilon \in \left[ 0, \frac{1}{2}\right) \), let \(\mathcal {L}_{\text {weak}, \varepsilon }\left( \textsf {2pfa}\mathrm{(}k\mathrm{)}\right) \) be the class of languages recognized by a 2pfa(\(k\)) with a weak error of at most \(\varepsilon \). Class \(\mathcal {L}_{\text {strong}, \varepsilon }\left( \textsf {2pfa}\mathrm{(}k\mathrm{)}\right) \) is defined similarly.

3 Interactive Proof Systems

An interactive proof system (IPS) models the verification process of proofs. Of the two components in an IPS, the prover produces the purported proof of membership for a given input string, while the verifier either accepts or rejects the string, alongside its proof. The catch is that the prover is assumed to advocate for the input string’s membership without regard to truth, and the verifier is expected to be accurate in its decision, holding a healthy level of skepticism against the proof.

The verifier is any Turing machine with the capability to interact with the prover via a shared communication cell. The prover can be seen as an infinite-state transducer that has access to both an original copy of the input string and the communication cell. The prover never halts, and it writes its output to the communication cell.

Our focus will be on one-way IPS, which restrict the interaction to a monologue from the prover to the verifier. Since there is no influx of information to the prover, the prover’s output depends only on the input string. Consequently, a one-way IPS can also be modeled as a verifier paired with a certificate function, \(c :\varSigma ^* \rightarrow \varLambda ^\infty \), where \(\varLambda \) is the communication alphabet. A formal definition follows:

Definition 5 (One-way interactive proof systems)

An IP(\(\textsf {restriction-list}\)) is defined with a tuple of a verifier and a certificate function, \(S = (V, c)\). The verifier V is a Turing machine of type specified by the restriction-list. The certificate function c outputs the claimed proof of membership \(c(w) \in \varLambda ^\infty \) for a given input string w.

The verifier’s access to the certificate is only in the forward direction. Note that the qualifier “one-way” refers to the interaction being a monologue from the prover to the verifier, not to this forward-only access, which holds for all IPS.

The language recognized by S can be denoted with \(\mathcal {L}\left( S\right) \), as well as \(\mathcal {L}\left( V\right) \). A string w is in \(\mathcal {L}\left( S\right) \) iff the interaction results in an acceptance of w by V.

If the verifier of the IPS is probabilistic, its error becomes the error of the IPS. The notations \(\mathcal {L}_{\text {weak}, \varepsilon }\left( \textsf {IP}\mathrm{(}\textsf {restriction-list}\mathrm{)}\right) \) and \(\mathcal {L}_{\text {strong}, \varepsilon }\left( \textsf {IP}\mathrm{(}\textsf {restriction-list}\mathrm{)}\right) \) are also adopted.

Say and Yakaryılmaz proved that [6]:
$$\begin{aligned} \textsf {NL}&\subseteq \mathcal {L}_{\text {weak}, \varepsilon }\left( \textsf {IP}\mathrm{(}\textsf {2pfa}, \textsf {constant}{\text {-}}\textsf {randomness}\mathrm{)}\right)&\text {for } \varepsilon > 0 \text { arbitrarily small,} \end{aligned}$$
(3)
$$\begin{aligned} \textsf {NL}&\subseteq \mathcal {L}_{\text {strong}, \varepsilon }\left( \textsf {IP}\mathrm{(}\textsf {2pfa}, \textsf {constant}{\text {-}}\textsf {randomness}\mathrm{)}\right)&\text {for } \varepsilon = \frac{1}{2} - \frac{1}{2k^2}, k\rightarrow \infty . \end{aligned}$$
(4)
For the latter proposition, it is proven that any language \(L \in \textsf {NL}\) can be recognized by a one-way IPS \(S \in \textsf {IP}\mathrm{(}\textsf {2pfa}, \textsf {constant}{\text {-}}\textsf {randomness}\mathrm{)}\) satisfying \(\varepsilon _{\text {strong}}(S) \le \frac{1}{2} - \frac{1}{2k^2}\), where k is the minimum number of heads among the \(\textsf {2nfa}\mathrm{(}k\mathrm{)}\) recognizing L that also halt on every input. Existence of such a 2nfa(\(k\)) is guaranteed by Lemmas 2 and 4.

This work improves on the findings of [6]. For their pertinence, outlines of the algorithms attaining the errors in Eqs. (3) and (4) are provided in the following subsections.

3.1 Reducing Weak Error Arbitrarily Using Constant-Randomness Verifier

Given a language \(L \in \textsf {NL}\) with a halting 2nfa(\(k\)) recognizer M, verifier \(V_1 \in \textsf {2pfa}\) expects the certificate to report (i) the k symbols read, and (ii) the nondeterministic branch taken, for each transition made by M in the course of accepting w. Such a report necessarily contains a lie if \(w \notin \mathcal {L}\left( M\right) = L\).

Verifier \(V_1\) has an internal representation of M’s control. Then, the algorithm for the verifier is as follows:
  1. Repeat \(m\) times:
     (a) Move head left, until \(\triangleright \) is read.
     (b) Reset M’s state in the internal representation, denoted \(q_m\).
     (c) Randomly choose a head of M by flipping \(\left\lceil {\log k} \right\rceil \) coins.
     (d) Repeat until \(q_m\) becomes the accepting state of M:
         i. Read the k symbols and the nondeterministic branch taken by M from the certificate.
         ii. Reject if the reading from \(V_1\)’s head disagrees with the corresponding symbol on the certificate.
         iii. Make the transition in the internal representation if it is valid, and move the chosen head as dictated by the nondeterministic branch. Reject otherwise.
  2. Accept.
For the worst-case errors, it is assumed that, for each head alone and in any single transition, there is a lie the certificate can tell that causes \(V_1\) to fail to reject a string \(w \notin L\). Similar lies are assumed to exist for false acceptances. The following are then the upper bounds on the errors of \(V_1\):
$$\begin{aligned} \varepsilon _{\text {fail-to-accept}}(V_1)&= 0&\varepsilon _{\text {fail-to-reject}}(V_1)&\le \frac{k-1}{k}&\varepsilon _{\text {false-accept}}(V_1)&\le \frac{1}{k^m} \end{aligned}$$
A discrepancy between \(\varepsilon _{\text {false-accept}}\) and \(\varepsilon _{\text {fail-to-reject}}\) is observed, because an adversarial certificate may wind \(V_1\) up in an infinite loop on the first of its m rounds. This is possible despite M being a halting machine: the lie in the certificate can present an infinite, even changing, input string from the perspective of the head being lied about.

Being wound up counts as a failure to reject, but does not yield a false acceptance. The resulting weak error is \(\varepsilon _{\text {weak}}(V_1) = k^{-m}\), which can be made arbitrarily small.
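The single-round core of \(V_1\) (steps 1a–1d) can be sketched as follows; the certificate encoding, helper names, and the example machine below are our own assumptions, not the paper’s:

```python
def v1_round(delta, q0, qf, tape, certificate, head):
    """Replay one round of V_1: trust the reported symbols of all heads
    except the single tracked one, whose readings are checked against
    the actual tape. Returns 'accept' or 'reject' for this round."""
    q, pos = q0, 0                    # tracked head starts on >
    for reported, branch in certificate:
        if reported[head] != tape[pos]:
            return 'reject'           # certificate lied about our head
        if branch not in delta.get((q, reported), set()):
            return 'reject'           # not a transition M could make
        q2, moves = branch
        q, pos = q2, pos + moves[head]
        if q == qf:
            return 'accept'
    return 'reject'

# Hypothetical 2-head machine for {a^n b^n} and an honest certificate
# for the string ab; an honest certificate passes for any head choice.
delta = {
    ('q0', ('>', '>')): {('q1', (1, 0))},
    ('q1', ('a', '>')): {('q1', (1, 0))},
    ('q1', ('b', '>')): {('q2', (0, 1))},
    ('q2', ('b', 'a')): {('q2', (1, 1))},
    ('q2', ('<', 'b')): {('qf', (0, 0))},
}
certificate = [
    (('>', '>'), ('q1', (1, 0))),
    (('a', '>'), ('q1', (1, 0))),
    (('b', '>'), ('q2', (0, 1))),
    (('b', 'a'), ('q2', (1, 1))),
    (('<', 'b'), ('qf', (0, 0))),
]
```

On the tape `>ba<` the same certificate is caught in a lie as soon as the tracked head’s reading disagrees with the report.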

3.2 Bringing Strong Error Below \(\frac{1}{2}\) Using Constant-Randomness Verifier

Presented first in [6], verifier \(V_1'\) manages to achieve \(\varepsilon _{\text {strong}}(V_1') \le \frac{1}{2} - \frac{1}{2k^2}\). Its algorithm is outlined as follows:
  1. Randomly reject with probability \(\frac{k-1}{2k}\) by flipping \(\left\lceil {\log k} \right\rceil + 1\) coins.
  2. Continue as \(V_1\).
This algorithm then has the following upper bounds for the errors:
$$\begin{aligned} \varepsilon _{\text {fail-to-accept}}(V_1')&= \frac{k-1}{2k}&\varepsilon _{\text {fail-to-reject}}(V_1')&\le \frac{k^2-1}{2k^2}&\varepsilon _{\text {false-accept}}(V_1')&\le \frac{k+1}{2k^{m+1}} \end{aligned}$$
Since \(\varepsilon _{\text {fail-to-reject}}(V_1')\) is potentially greater than \(\varepsilon _{\text {fail-to-accept}}(V_1')\), the strong error is bounded by \(\frac{k^2-1}{2k^2} = \frac{1}{2} - \frac{1}{2k^2}\).
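As a sanity check, the bounds listed above can be combined mechanically in exact arithmetic (our sketch; the function name is hypothetical):

```python
from fractions import Fraction

def v1_prime_bounds(k):
    """Error bounds of V_1' as listed in the text: fail-to-accept
    (k-1)/(2k) from the upfront rejection, and fail-to-reject
    (k^2-1)/(2k^2), the m-independent part of the bound."""
    fail_to_accept = Fraction(k - 1, 2 * k)
    fail_to_reject = Fraction(k * k - 1, 2 * k * k)
    return fail_to_accept, fail_to_reject

# The strong error max(...) equals 1/2 - 1/(2k^2) for every k >= 2, and
# fail-to-reject = (probability of surviving step 1) * (V_1's bound).
for k in range(2, 10):
    fa, fr = v1_prime_bounds(k)
    assert max(fa, fr) == Fraction(1, 2) - Fraction(1, 2 * k * k)
    assert fr == (1 - fa) * Fraction(k - 1, k)
```

The second assertion makes the derivation of the fail-to-reject bound explicit: surviving the coin flips of step 1 has probability \(\frac{k+1}{2k}\), which multiplies \(V_1\)’s bound \(\frac{k-1}{k}\).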

4 Windable Heads

This section will introduce a property of the heads of a 2nfa(\(k\)). It leads to a characterization of the 2nfa(\(k\)) by the number of heads with this property. A subset rNL of the class NL will be defined, which will also be a subset of \(\mathcal {L}_{\text {strong}, \varepsilon }\left( \textsf {IP}\mathrm{(}\textsf {2pfa}, \textsf {constant}{\text {-}}\textsf {randomness}\mathrm{)}\right) \) for \(\varepsilon > 0\) approaching zero.

A head of a 2nfa(\(k\)) M is said to be windable if these three conditions hold:
  • There is a cycle in the graph of M’s transition diagram, and a path from \(q_0\) to a node on the cycle.
  • The movements of the head in question add up to zero in a full round of that cycle.
  • The readings of the head are consistent along the said path and cycle.

The definition of a head being windable completely disregards the readings of the other heads, hence the witness path and the cycle need not be a part of a realistic execution of the machine M.

We now define windable heads formally to clarify their distinguishing points. Some preliminary definitions are needed.

Definition 6 (Multi-step transition function)

$$\begin{aligned} \delta ^t:Q \times (\varGamma ^t)^k \rightarrow \mathcal {P}\left( Q_{\setminus q_0} \times (\varDelta ^t)^k\right) \end{aligned}$$
is the \(t\)-step extension of the transition function \(\delta \) of a 2nfa(\(k\)) M. It is defined recursively: \(\delta ^1 = \delta \), and for \(t > 1\),
$$\begin{aligned} \delta ^t(q, g_1, \dotsc , g_k) = \left\{ \left( q'', \left( D_1 \cdot d_1, \dotsc , D_k \cdot d_k\right) \right) \;\left| \; \begin{array}{l} (q', (D_1, \dotsc , D_k)) \in \delta ^{t-1}(q, g_1^\times , \dotsc , g_k^\times ),\\ (q'', (d_1, \dotsc , d_k)) \in \delta (q', ((g_1)_t, \dotsc , (g_k)_t)) \end{array} \right. \right\} . \end{aligned}$$

The set \(\delta ^t(q, g_1, \dotsc , g_k)\) contains a \((k+1)\)-tuple for each nondeterministic computation to be performed by M, as it starts from the state q and reads \(g_i\) with its ith head. These tuples, each referred to as a computation log, consist of the state reached, and the movement histories of the k heads during that computation.

The constraint of constant and persistent tape contents, present in an execution of a 2nfa(\(k\)), is blurred in the definition of the multi-step transition function. This closely resembles the verifier’s perspective of the heads it does not verify in the previous section. There, however, the verifier’s own readings were self-consistent. This gap will be accounted for with the next pair of definitions.

Definition 7

(Relative head position during the ith transition). Let M be a 2nfa(\(k\)) that does not attempt to move its heads beyond the end markers on the input tape, and \(\delta \) be its transition function. Let H be a head of M, and D be any \(t\)-step movement history of that head in the output of \(\delta ^t\). The relative position of H while making the ith transition of D, measured from its position before the first movement in that history, is given by the function \(\rho _D :\mathbb {N}_1^{\le t} \rightarrow \left( -t, t \right) \) defined as
$$ \rho _D(i) = {{\,\mathrm{sum}\,}}(D_{1 : i-1}). $$
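In code, \(\rho _D\) is a direct transcription (our illustration; the history D is 0-indexed here, while the argument i is 1-indexed as in Definition 7):

```python
def rho(D, i):
    """Relative position of the head just before its ith transition,
    i.e. the sum of the first i-1 movements of its history D."""
    return sum(D[:i - 1])
```

For example, with the history \(D = (1, 0, -1, 1)\), the head makes its third transition from relative position 1.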

By Lemmas 3 and 4, given any language in NL there is a 2nfa(\(k\)) recognizing it, which also does not attempt to move its heads beyond the end markers.

Definition 8

(i-head consistent \(\delta ^t\)). \(\delta ^t_i :Q \times (\varGamma ^t)^k \rightarrow \mathcal {P}\left( Q_{\setminus q_0} \times (\varDelta ^t)^k\right) \) is the ith-head consistent subset of \(\delta ^t\) of a 2nfa(\(k\)) M, assuming that M does not attempt to move its heads beyond the end markers. It filters out the ith-head inconsistent computation logs by examining the movement histories against the purportedly read characters: for each pair of transitions departing from the same tape cell relative to head i’s starting position, it is checked whether the same symbol is read while performing them. This check needs to be done only for relative positions \(p \in \left( -t, t \right) \), since in \(t\) steps a head may travel at most \(t\) cells afar, and the last cell it reads from is then the previous one. This is also consistent with the definition of \(\rho _D\).

This last definition is the exact analogue of the verifier’s perspective in the algorithms proposed in [6]. It is used directly in our next definition, which leads to a characterization of the 2nfa(\(k\)).

Definition 9 (Windable heads)

The ith head of a 2nfa(\(k\)) M is windable iff there exist:
  1. \(g_1, \dotsc , g_k \in \varGamma ^t\) and \(g_1', \dotsc , g_k' \in \varGamma ^l\), for \(t\) and l positive,
  2. \((q, D_1, \dotsc , D_k) \in \delta ^t_i(q_0, g_1, \dotsc , g_k)\),
  3. \((q, D_1', \dotsc , D_k') \in \delta ^l_i(q, g_1', \dotsc , g_k')\), where \({{\,\mathrm{sum}\,}}(D_i') = 0\).

When these conditions hold, \(g_1, \dotsc , g_k\) can be viewed as the sequences of characters that can be fed to \(\delta \) to bring M from \(q_0\) to q, crucially without breaking consistency among the ith head’s readings. This ensures reachability of state q. Then, the sequences \(g_1', \dotsc , g_k'\) wind the ith head into a loop, bringing M back to state q and the ith head back to where it started the loop, all while keeping the ith head’s readings consistent. The readings of the other heads are allowed to be inconsistent, and their positions can change with every such loop.

A head is reliable iff the head is not windable.

It is important to note that a winding is not based on a realistic execution of a 2nfa(\(k\)). A head of a 2nfa(\(k\)) M might be windable even if M is guaranteed to halt on every input. This is because the property of being windable allows the other heads to have unrealistic, inconsistent readings that may never be realized with any input string.
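Definition 9 suggests a brute-force test of windability, sound up to a bound on the computation length. The sketch below is our own construction with hypothetical names: it enumerates i-head consistent computations straight from \(\delta \), constraining only head i’s readings, in the spirit of \(\delta ^t_i\):

```python
def i_consistent_runs(delta, start, i, t_max):
    """All (state, net movement of head i, steps) triples realizable by
    an i-head consistent computation of at most t_max steps from
    `start`. Consistency: head i must read the same symbol whenever it
    revisits the same relative cell; other heads are unconstrained."""
    out = set()
    stack = [(start, 0, {}, 0)]   # state, relative pos, cells read, steps
    while stack:
        q, p, cells, d = stack.pop()
        out.add((q, p, d))
        if d == t_max:
            continue
        for (q1, reads), succs in delta.items():
            if q1 != q or cells.get(p, reads[i]) != reads[i]:
                continue          # wrong state, or inconsistent reading
            cells2 = dict(cells)
            cells2[p] = reads[i]
            for q2, moves in succs:
                stack.append((q2, p + moves[i], cells2, d + 1))
    return out

def is_windable(delta, q0, i, t_max=4):
    """Head i is windable (within the bound) iff an i-consistent
    computation from q0 reaches a state q admitting a nonempty
    i-consistent loop back to q with zero net movement of head i."""
    reachable = {q for q, _, _ in i_consistent_runs(delta, q0, i, t_max)}
    return any(q2 == q and p == 0 and d > 0
               for q in reachable
               for q2, p, d in i_consistent_runs(delta, q, i, t_max))

# One-head examples: a head idling on the same cell closes a zero-sum,
# consistent cycle (windable); a head that only moves right cannot.
winding = {('q0', ('>',)): {('q1', (1,))},
           ('q1', ('a',)): {('q1', (0,))}}
onward = {('q0', ('>',)): {('q1', (1,))},
          ('q1', ('a',)): {('q1', (1,))}}
```

A complete decision procedure would need a bound sufficient for the given machine; the sketch only illustrates the two phases (reach, then wind) of the definition.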

5 Recognizing Some Languages in NL with Constant-Randomness and Reducible-Error Verifiers

Consider a language \(L \in \textsf {NL}\) with a 2nfa(\(k\)) recognizer M that halts on every input. In designing the randomness-restricted 2pfa(\(1\)) verifier \(V_2\), the following three cases will be considered:

All Heads Are Reliable. In this case, \(V_1\) suffices by itself to attain reducible error. Without any windable heads in the underlying 2nfa(\(k\)), each round of \(V_1\) terminates. The certificate can only make \(V_1\) falsely accept, and the chances of that can be reduced arbitrarily by increasing \(m\).

All Heads Are Windable. In this case, unless the worst-case assumptions are alleviated, any verification algorithm using a simulation principle similar to \(V_1\) will be wound up on the first round. The head with the minimum probability of being chosen will be the weakest link of \(V_2\), and thus the head the certificate will lie about. The failure-to-reject rate equals 1 minus that probability. This rate is lowest when the probabilities are equal, and is then \(1 - \frac{1}{k}\).

It Is a Mix. Let \(k_{\text {W}}, k_{\text {R}}\) denote the windable and reliable head counts, respectively. Thus \(k_{\text {W}}+ k_{\text {R}}= k\). The new verifier algorithm \(V_2\) is similar to \(V_1\), but instead of choosing a head to simulate with equal probability, it will do a biased branching. With biased branching, \(V_2\) favors the reliable heads over the windable heads while choosing a head to verify.

Let \(P_{\text {W}}, P_{\text {R}}\) denote the desired probabilities of choosing a windable and a reliable head, respectively. Note that \(P_{\text {W}}+ P_{\text {R}}= 1\). The probabilities of choosing a head within each type (windable or reliable) are kept equal. Denote the probability of choosing a particular windable head as \(p_{\text {W}}= P_{\text {W}}/ k_{\text {W}}\), and similarly \(p_{\text {R}}= P_{\text {R}}/ k_{\text {R}}\). Assume \(P_{\text {W}}, P_{\text {R}}\) are finitely representable in binary, with \(b\) digits after the radix point. Then, the algorithm of \(V_2\) is the same as \(V_1\), with the only difference at step 1c:
  • 1c.\({}'\) Randomly choose a head of M by biased branching:
    • Instead of flipping \(\left\lceil {\log k} \right\rceil \) coins, flip \(b+ \left\lceil {\log (\max (k_{\text {W}}, k_{\text {R}}))} \right\rceil \) coins. Let \(z_1, z_2, \dotsc , z_{b}\) be the outcomes of the first \(b\) coins.

    • If \(\sum _{i=1}^b2^{-i}z_i < P_{\text {W}}\), choose one of the windable heads depending on the outcomes of the next \(\left\lceil {\log k_{\text {W}}} \right\rceil \) coins. Otherwise, similarly choose a reliable head via \(\left\lceil {\log k_{\text {R}}} \right\rceil \) coins.
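Step 1c\({}'\) can be sketched with fair coins as follows (our sketch; as in the text, \(P_{\text {W}}\) is assumed to have a finite b-digit binary expansion, and for simplicity the within-group pick folds overflowing outcomes back with a modulo, which is exactly uniform only for power-of-two group sizes):

```python
import random
from fractions import Fraction

def choose_head(k_w, k_r, P_W, b, flip=lambda: random.getrandbits(1)):
    """Biased branching: the first b coin flips form a binary fraction
    z = sum 2^-i z_i; z < P_W selects the windable group, otherwise the
    reliable group. Remaining flips pick an index within the group."""
    z = sum(Fraction(flip(), 2 ** i) for i in range(1, b + 1))
    group, size = ('windable', k_w) if z < P_W else ('reliable', k_r)
    idx = 0
    for _ in range(max(size - 1, 0).bit_length()):  # ceil(log2 size) flips
        idx = 2 * idx + flip()
    return group, idx % size
```

For instance, with \(k_{\text {W}}= 1\), \(k_{\text {R}}= 2\), and \(P_{\text {W}}= \frac{1}{4}\) (so \(b = 2\)), the flip sequence 0, 1 yields \(z = \frac{1}{4}\), which is not below \(P_{\text {W}}\), so one more flip selects one of the two reliable heads.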

For an Input String \(w \in L\). Verifier \(V_2\) is still perfectly accurate. The certificate may provide any route that leads M to acceptance. Repeating this \(m\) times, \(V_2\) accepts after \(m\) rounds of validation.

For an Input String \(w \notin L\). To keep \(V_2\) from rejecting, the certificate needs to lie about at least one of the heads. Switching the head lied about between rounds cannot benefit the certificate, since the rounds are identical from both \(V_2\)’s and the certificate’s points of view. Hence, it is reasonable to assume that the certificate repeats itself in each round, which simplifies our analysis.

The worst-case assumption is that the certificate can lie about any single (arbitrary) head alone and deceive \(V_2\) in the worst way possible, depending on the head \(V_2\) chooses:
  • If it chooses the head being lied about, \(V_2\) detects the lie rather than being deceived.

  • Otherwise, if a windable head was chosen, \(V_2\) loops indefinitely.

  • Otherwise (i.e. a reliable head was chosen), \(V_2\) runs for another round or accepts w.

The head which the certificate fixes to lie about is either a windable head or a reliable one. Given a \(V_2\) algorithm with its parameter \(P_{\text {W}}\) set, let \(F_{\text {W}}(P_{\text {R}})\) be the probability of \(V_2\) failing to reject against a certificate that lies about one windable head. A failure to reject results either from up to \(m - 1\) rounds of false acceptance followed by getting wound up in an infinite loop, or from m rounds of false acceptance.
$$\begin{aligned} F_{\text {W}}(P_{\text {R}})&= \sum _{i=0}^{m-1} P_{\text {R}}^i (P_{\text {W}}-p_{\text {W}}) + P_{\text {R}}^{m} \\&= (1 - P_{\text {R}}^{m}) \cdot \left( 1 - \frac{1}{k_{\text {W}}}\right) + P_{\text {R}}^{m} \\&= 1 - \frac{1 - P_{\text {R}}^{m}}{k_{\text {W}}} \end{aligned}$$
Let \(F_{\text {R}}(P_{\text {R}})\) similarly be the probability for the reliable counterpart.
$$\begin{aligned} F_{\text {R}}(P_{\text {R}})&= \sum _{i=0}^{m-1} (P_{\text {R}}-p_{\text {R}})^i P_{\text {W}}+ (P_{\text {R}}-p_{\text {R}})^{m} \\&= \frac{1 - (P_{\text {R}}-p_{\text {R}})^{m}}{1 - (P_{\text {R}}-p_{\text {R}})} \cdot P_{\text {W}}+ (P_{\text {R}}-p_{\text {R}})^{m} \\&= \frac{P_{\text {W}}}{P_{\text {W}}+p_{\text {R}}} + \left( 1-\frac{P_{\text {W}}}{P_{\text {W}}+p_{\text {R}}}\right) (P_{\text {R}}-p_{\text {R}})^{m}\\ \end{aligned}$$
The most evil certificate would lie about the head that yields a higher error. Thus, the worst-case failure to reject probability is given by
$$\begin{aligned} F(P_{\text {R}}) = \max (F_{\text {W}}(P_{\text {R}}), F_{\text {R}}(P_{\text {R}})). \end{aligned}$$
The objective is to find the optimum \(P_{\text {R}}\), denoted \(P_{\text {R}}^*\), minimizing the error \(F(P_{\text {R}})\). Note that \(F(1) = 1\); hence, \(P_{\text {R}}^*< 1\).
Constant \(m\) may be chosen arbitrarily large. For \(P_{\text {R}}< 1\) and \(m\) very large, the limiting values of \(F_{\text {W}}\) and \(F_{\text {R}}\) are, respectively, given as
$$\begin{aligned} F_{\text {W}}^*(P_{\text {R}})&= 1 - \frac{1}{k_{\text {W}}}&F_{\text {R}}^*(P_{\text {R}})&= \frac{P_{\text {W}}}{P_{\text {W}}+p_{\text {R}}}. \end{aligned}$$
Error \(F_{\text {W}}^*\) is a constant between 0 and 1. For \(0 \le P_{\text {R}}\le 1\), error \(F_{\text {R}}^*\) decreases strictly monotonically from 1 down to 0, as can be seen by substituting \(p_{\text {R}}= P_{\text {R}}/ k_{\text {R}}\) and \(P_{\text {W}}= 1 - P_{\text {R}}\):
$$\begin{aligned} F_{\text {R}}^*(P_{\text {R}}) = \frac{1 - P_{\text {R}}}{1 - P_{\text {R}}+ P_{\text {R}}/ k_{\text {R}}}. \end{aligned}$$
These indicate that \(F_{\text {W}}^*(P_{\text {R}})\) and \(F_{\text {R}}^*(P_{\text {R}})\) are equal for a unique \(P_{\text {R}}= P_{\text {R}}^*\). The optimality of \(P_{\text {R}}^*\) will be proved shortly. It is easy to verify that
$$\begin{aligned} P_{\text {R}}^*= \frac{k_{\text {R}}}{k-1}. \end{aligned}$$
(5)
Using \(P_{\text {R}}^*\) we can define \(F^*\) as the following partial function:
$$\begin{aligned} F^*(P_{\text {R}}) = {\left\{ \begin{array}{ll} F_{\text {R}}^*(P_{\text {R}}) &{} \text {for } P_{\text {R}}\le P_{\text {R}}^*\\ F_{\text {W}}^*(P_{\text {R}}) &{} \text {for } P_{\text {R}}\ge P_{\text {R}}^*\end{array}\right. } \end{aligned}$$
Since \(F_{\text {R}}^*\) is a decreasing function, \(F(P_{\text {R}}) > F(P_{\text {R}}^*)\) for any \(P_{\text {R}}< P_{\text {R}}^*\). The approximation \(F_{\text {W}}^*\) is a constant function, but \(F_{\text {W}}\) itself is an increasing one, so \(F\) cannot improve beyond \(P_{\text {R}}^*\) either. Therefore, for \(m\) large, \(P_{\text {R}}^*\) approximates the optimal probability with which \(V_2\) should choose a reliable head among the k heads of M, while verifying the language \(\mathcal {L}\left( M\right) \in \textsf {NL}\). Consequently, the optimum error for \(V_2\) is
$$\begin{aligned} F(P_{\text {R}}^*) = 1 - \frac{1}{k_{\text {W}}}. \end{aligned}$$
(6)
This points to some important facts.

Theorem 1

The minimum error for \(V_2\) depends only on the number of windable heads of the 2nfa(\(k\)) M recognizing \(L \in \textsf {NL}\).
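Equations (5) and (6) can be double-checked numerically; the snippet below (our sketch) evaluates the large-\(m\) limits \(F_{\text {W}}^*\) and \(F_{\text {R}}^*\) in exact arithmetic:

```python
from fractions import Fraction

def F_W_star(k_w):
    """Limit of F_W for large m: the constant 1 - 1/k_W."""
    return 1 - Fraction(1, k_w)

def F_R_star(P_R, k_r):
    """Limit of F_R for large m: P_W / (P_W + p_R), p_R = P_R/k_R."""
    P_W = 1 - P_R
    return P_W / (P_W + P_R / Fraction(k_r))

# At P_R* = k_R/(k-1) the two limits meet, and their common value is
# the optimum error 1 - 1/k_W of Eq. (6).
for k_w in range(1, 6):
    for k_r in range(1, 6):
        k = k_w + k_r
        P_R_opt = Fraction(k_r, k - 1)
        assert F_R_star(P_R_opt, k_r) == F_W_star(k_w) == 1 - Fraction(1, k_w)
```

Note that for \(k_{\text {W}}= 1\) the common value is 0, matching the reducible-error case of Theorem 2.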

Definition 10

(Reducible strong error subset of NL). For \(\varepsilon > 0\) approaching zero, the reducible strong error subset of NL is defined as
$$\begin{aligned} \textsf {rNL}{} = \textsf {NL}{} \cap \mathcal {L}_{\text {strong}, \varepsilon }\left( \textsf {IP}\mathrm{(}\textsf {2pfa}, \textsf {constant}{\text {-}}\textsf {randomness}\mathrm{)}\right) . \end{aligned}$$

Theorem 2

For \(k_{\text {W}}\le 1\) and \(k_{\text {R}}\) arbitrary,
$$\begin{aligned} \mathcal {L}\left( \textsf {2nfa}\mathrm{(}k_{\text {W}}+k_{\text {R}}\mathrm{)}\right) \subseteq \textsf {rNL}{}. \end{aligned}$$

Equations (5) and (6), and their consequent Theorems 1 and 2, constitute the main results of this study.

Similar to how \(V_1'\) was obtained, the algorithm for \(V_2'\) is as follows:
  1. Randomly reject with probability \(\frac{k_{\text {W}}-1}{2k_{\text {W}}}\) by flipping \(\left\lceil {\log k_{\text {W}}} \right\rceil + 1\) coins.
  2. Continue as \(V_2\).

The strong error of \(V_2'\) is then bounded by \(\frac{1}{2} - \frac{1}{2k_{\text {W}}^2}\).

5.1 Example Languages from rNL and Potential Outsiders

Let \(w_{\mathtt {a}}\) denote the number of occurrences of the symbol \(\mathtt {a}\) in a string w.

The following two are example languages with 2nfa(\(k_{\text {W}}+k_{\text {R}}\)) recognizers, where \(k_{\text {W}}= 0\): [definitions not recovered]

An example language with a \(k_{\text {W}}\le 1\) recognizer is the following: [definition not recovered]

Lastly, it is an open question whether the following language is inside or outside rNL: [definition not recovered]

6 Open Questions

We are curious whether \(\mathcal {L}\left( \textsf {2nfa}\mathrm{(}k_{\text {W}}+k_{\text {R}}\mathrm{)}\right) \) coincides with any known class of languages for \(k_{\text {W}}= 0\) or 1, or \(k_{\text {W}}\le 1\). The minimum number of windable heads required for a language in NL to be recognized by a halting 2nfa(\(k\)) could establish a complexity class. Conversely, one might be able to discover yet another infinite hierarchy of languages based on the number of windable heads, alongside the hierarchy in Lemma 1. For some \(c > 0\) and \(k_{\text {W}}' = k_{\text {W}}+ c\), this hierarchy might be of the form
$$\begin{aligned} \mathcal {L}\left( \textsf {2nfa}\mathrm{(}k = k_{\text {W}}+k_{\text {R}}\mathrm{)}\right) \subsetneq \mathcal {L}\left( \textsf {2nfa}\mathrm{(}k' = k_{\text {W}}'+k_{\text {R}}'\mathrm{)}\right) \end{aligned}$$
for \(k = k'\), \(k_{\text {R}}= k_{\text {R}}'\), or without any further restriction.

References

  1. Condon, A.: The complexity of space bounded interactive proof systems. In: Ambos-Spies, A., et al. (eds.) Complexity Theory: Current Research, pp. 147–189. Cambridge University Press, Cambridge (1992). https://www.cs.ubc.ca/~condon/papers/ips-survey.pdf
  2. Hartmanis, J.: On non-determinancy in simple computing devices. Acta Informatica 1(4), 336–344 (1972). https://doi.org/10.1007/BF00289513
  3. Holzer, M., Kutrib, M., Malcher, A.: Complexity of multi-head finite automata: origins and directions. Theoret. Comput. Sci. 412(1–2), 83–96 (2011). https://doi.org/10.1016/j.tcs.2010.08.024
  4. Monien, B.: Transformational methods and their application to complexity problems. Acta Informatica 6(1), 95–108 (1976). https://doi.org/10.1007/BF00263746
  5. Monien, B.: Two-way multihead automata over a one-letter alphabet. RAIRO Informatique Théorique 14(1), 67–82 (1980). https://doi.org/10.1051/ita/1980140100671
  6. Say, C., Yakaryılmaz, A.: Finite state verifiers with constant randomness. Log. Methods Comput. Sci. 10(3) (2014). https://doi.org/10.2168/LMCS-10(3:6)2014. arXiv:1102.2719

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

Boğaziçi University, Istanbul, Turkey
