Computing and Software Science, pp. 9–26
Some Estimated Likelihoods for Computational Complexity
Abstract
The editors of this LNCS volume asked me to speculate on open problems: out of the prominent conjectures in computational complexity, which of them might be true, and why?
I hope the reader is entertained.
1 Introduction
Computational complexity is considered to be a notoriously difficult subject. To its practitioners, there is a clear sense of annoying difficulty in complexity. Complexity theorists generally have many intuitions about what is “obviously” true. Everywhere we look, for every new complexity class that turns up, there’s another conjectured lower bound separation, another evidently intractable problem, another apparent hardness with which we must learn to cope. We are surrounded by spectacular consequences of all these obviously true things, a sharp coherent worldview with a wonderfully broad theory of hardness and cryptography available to us, but—gosh, it’s so annoying!—we don’t have a clue about how we might prove any of these obviously true things. But we try anyway.
Much of the present cluelessness can be blamed on well-known “barriers” in complexity theory, such as relativization [BGS75], natural properties [RR97], and algebrization [AW09]. Informally, these are collections of theorems which demonstrate strongly how the popular and intuitive ways that many theorems were proved in the past are fundamentally too weak to prove the lower bounds of the future.

Relativization and algebrization show that proof methods in complexity theory which are “invariant” under certain high-level modifications to the computational model (access to arbitrary oracles, or low-degree extensions thereof) are not “fine-grained enough” to distinguish (even) pairs of classes that seem to be obviously different, such as \(\mathsf{NEXP}\) and \(\mathsf{BPP}\).

Natural properties also show how the generality of many circuit complexity lower bound proofs can limit their scope: if a method of proving circuit complexity lower bounds applies equally well to proving lower bounds against random functions, then it had better not be a highly constructive argument, where one can feasibly discern the circuit complexity of a simple function. Otherwise, our method for proving circuit lower bounds also proves an upper bound, showing that pseudorandom functions cannot be implemented in the circuit class.
In any case, these barriers show that many known proof methods have so much slack in their arguments that, when it comes to questions like \(\mathsf{P}\) versus \(\mathsf{NP}\), the method will simply hang itself. To make further progress on complexity class separations and prove these obviously-true theorems, we need to dig deeper into computation, and find less superficial methods of argument which speak about computation on a finer level.
I think it is highly probable that a decent level of cluelessness is due to simply being wrong about some of these obviously true things. I’m certainly not the first to proclaim such an opinion; Lipton and Regan’s blog and books [Lip10, LR13] have spoken at length about how everyone was wrong about X for all sorts of X, and other “contrarian” opinions about complexity can be found in Gasarch’s polls on P vs NP [Gas02, Gas12]. The idea that complexity theorists can be very wrong is certainly not in doubt. The fact that it happens at a nontrivial frequency is enough that (I think) folks should periodically reconsider the conjectured complexity separations they have pondered over the years, and update their thoughts on them as new information arises. Regardless of one’s opinions about how wrong we may or may not be, I think it is an important exercise to review the major problems in one’s field once a year, and seriously check whether one got any smarter about them over the previous year.
So, when no good upper bound (i.e., algorithm) is attained for a problem, even after a bunch of smart people have thought about it, the inclination is to conclude that the upper bound does not exist (i.e., that there is a lower bound). Hence it is natural that beliefs about lower bounds tend to be refuted more often than those about upper bounds: we rarely assert that interesting upper bounds exist, unless we already know how to attain them. (An interesting exception is that of matrix multiplication over a field; researchers in that area tend to believe that a nearly-optimal running time is possible for the problem.) There seems to be an additional belief in the justification of the above heuristic:

If a bunch of smart people could not figure out how to do it, then it probably cannot be done.
For example, our collective inability to find an efficient SAT algorithm, even over decades of thought about the problem, even though all the other thousands of \(\mathsf{NP}\)-complete problems are really only SAT in disguise, suggests to many that \(\mathsf{P}\ne \mathsf{NP}\). As the bunch of smart people who cannot find a good algorithm increases over time, we get closer to a universal quantifier over all good algorithms.
Unfortunately, I do not believe that the “bunch of smart people” living in the present time covers this sort of quantifier well, and I am not sure how we will raise the next generation of smart people to cover it more thoroughly (other than teaching them to be skeptical, and providing them a vastly thicker literature of algorithms and complexity than what we had). For this reason, I am probably less dogmatic than a “typical” complexity theorist regarding questions such as \(\mathsf{P}\) versus \(\mathsf{NP}\).
1.1 Some Estimated Likelihoods for Some Major Open Problems
Table 1. What you receive when you ask for my opinions on some open problems in complexity theory.
Proposition  RW’s estimated likelihood 

TRUE  100% 
\(\mathsf{EXP}^{\mathsf{NP}} \ne \mathsf{BPP}\)  99% 
\(\mathsf{NEXP}\not \subset \mathsf{P}/\mathsf{poly}\)  97% 
\(\mathsf{L}\ne \mathsf{NP}\)  95% 
\(\mathsf{NP}\not \subset \mathsf{SIZE}(n^k)\)  93% 
\(\mathsf{BPP}\subseteq \mathsf{SUBEXP}\)  90% 
\(\mathsf{P}\ne \mathsf{PSPACE}\)  90% 
\(\mathsf{P}\ne \mathsf{NP}\)  80% 
ETH  70% 
\(\mathsf{NC}^1 \ne \mathsf{TC}^0\)  50% 
\(\mathsf{NEXP}\ne \mathsf{EXP}\)  45% 
SETH  25% 
\(\mathsf{NEXP}\ne \mathsf{coNEXP}\)  20% 
NSETH  15% 
\(\mathsf{L} \ne \mathsf{RL}\)  5% 
FALSE  0% 
The numerical values of my “estimated likelihoods” are (obviously) nothing too rigorous. What is more important is the relative measure between problems. I did want the measures to be “consistent” in some simple senses. For example, if we know that A implies B, then B should not be (much) less likely to be true than A. I will give some explanations for my likelihoods in the next section.
There are many other open problems for which I could have put a likelihood, but I wanted to focus on problems where I thought I had something interesting to say along with my measure of likelihood. I deliberately refrained from putting a measure on conjectures for which I do not feel very knowledgeable about the state of the art, such as the famous Unique Games Conjecture of Khot [Kho02]. For the record, the present state of knowledge suggests to me that Unique Games (as usually stated) is probably intractable, but perhaps not \(\mathsf{NP}\)-hard. But what do I know? A very recent line of work [KMS17, KMS18] claims to settle the 2-to-2 conjecture, a close relative of Unique Games.
2 Thoughts on Various Separations
I will discuss the separations mentioned in Table 1, starting with those that I think are most likely to be true.
2.1 EXP with an NP Oracle Versus BPP
Recall that \(\mathsf{BPP}\) is the class of problems solvable in randomized polynomial time (with two-sided error), and \(\mathsf{EXP}^{\mathsf{NP}}\) is the class of problems solvable in (deterministic) exponential time with access to an oracle for the SAT problem (note that exponentially long SAT instances can be solved in one step in such a model). I put 99% on the likelihood of \(\mathsf{EXP}^{\mathsf{NP}} \ne \mathsf{BPP}\), for several reasons. One reason is that everything we know indicates that randomized computation is far, far weaker than deterministic exponential time, and exponential time with an \(\mathsf{NP}\) oracle should be only more powerful. Another reason is that the open problem becomes trivially closed (separating the two classes) if one makes various small changes in the problem statement. Change the “two-sided error” to “one-sided error” (the class \(\mathsf{RP}\)) and it is easy to separate them. Change the \(\mathsf{EXP}\) to \(\mathsf{ZPEXP}\) (randomized exponential time with “zero error”) and it is again easy to separate them. For a third reason, \(\mathsf{EXP}^{\mathsf{NP}} \ne \mathsf{BPP}\) is implied by very weak circuit lower bounds (such as \(\mathsf{NEXP}\not \subset \mathsf{P}/\mathsf{poly}\)) that I am also very confident are true (they will be discussed later).
It appears to me that \(\mathsf{EXP}^{\mathsf{NP}} \ne \mathsf{BPP}\) is primarily still open because there are oracles making them equal [Hel86], so one will need to use the right sort of nonrelativizing argument to get the job done. I do not view the existence of oracles as a significant barrier, but rather a caution sign that we will need to dig into the guts of Turing machines (or equivalent formalizations) in order to separate the classes. Some potential approaches to (weak) lower bounds against \(\mathsf{BPP}\) are outlined in an earlier article of mine [Wil13b].
2.2 NEXP vs P/poly
Recall that \(\mathsf{NEXP}\) is the class of problems solvable in nondeterministic exponential time: a huge complexity class. The class \(\mathsf{P}/\mathsf{poly}\) is a special kind of class: it consists of those problems over \(\{0,1\}^{\star }\) which can be solved by an infinite family of polynomial-size circuits \(\{C_n\}\). Intuitively, this is a computational model with an infinitely long description (a so-called nonuniform model), but for any particular input length n, the description of the computer solving the problem on all inputs of length n (and the running time of this computer) is a fixed polynomial in n. I put \(97\%\) likelihood on \(\mathsf{NEXP}\not \subset \mathsf{P}/\mathsf{poly}\). Note that this lower bound would imply \(\mathsf{EXP}^{\mathsf{NP}} \ne \mathsf{BPP}\).
I can think of two major reasons why this separation is almost certainly true.
2.2.1 Why Would Nonuniformity Help Here?

I can see no reason why the nonuniform circuit model should let us significantly speed up the solution to every \(\mathsf{NEXP}\) problem (or to every \(\mathsf{EXP}\) problem, for that matter). Having a distinct algorithm for each input length does not intuitively seem to be very helpful: how could it be that every input length n allows for some special hyper-optimization on that particular n, yielding an exponentially faster solution to an \(\mathsf{NEXP}\) problem? And how could this happen to be true for every individual length n? In this light, it feels remarkable to me that this separation problem is still open at all.
Note that there are still oracles relative to which \(\mathsf{NEXP}\) is contained in \(\mathsf{P}/\mathsf{poly}\), even in the algebrization sense [AW09]. It is known that there are undecidable problems in \(\mathsf{P}/\mathsf{poly}\), but this is because we can concoct undecidable problems out of unary (or more generally, sparse) languages.
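The unary-language observation can be made concrete with a toy sketch (the names and advice values below are hypothetical, chosen only for illustration): a nonuniform “circuit family” may simply hard-code one advice bit per input length, so it decides any unary language, even an undecidable one, because nobody ever has to compute the advice.

```python
def make_circuit(n, advice_bit):
    """C_n for a unary language: accept x iff x = 1^n and the hard-coded
    advice bit says that 1^n is in the language. Nothing here computes
    the advice -- nonuniformity means each length gets it for free."""
    def C(x):
        return x == "1" * n and advice_bit == 1
    return C

# Hypothetical advice values (stand-ins for the bits of some arbitrarily
# hard, even uncomputable, unary language).
advice = {0: 1, 1: 0, 2: 1, 3: 1}
family = {n: make_circuit(n, b) for n, b in advice.items()}
```

The family has a separate, trivially small "circuit" for each length; the entire difficulty of the language is hidden in the infinite advice sequence, which is exactly why sparse languages make \(\mathsf{P}/\mathsf{poly}\) contain undecidable problems.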
I am willing to entertain the possibility that there are infinitely many input lengths for which \(\mathsf{NEXP}\) problems are easy: perhaps \(\mathsf{NEXP}\) is contained infinitely often in \(\mathsf{P}/\mathsf{poly}\). For example, the “good” input lengths could have some special form, or something more bizarre. This would be an amazing result, but because we only require infinitely many input lengths to work, perhaps some hyper-optimization of certain strange input lengths is possible in this setting. There are oracles relative to which \(\mathsf{NEXP}\) is contained infinitely often in \(\mathsf{NP}\) [For15], which shows that infinitely-often simulations can be very tricky to rule out. Still, this also seems unlikely.
2.3 LOGSPACE vs NP
I also believe that \(\mathsf{L}\ne \mathsf{NP}\) is extremely likely: that (for example) the Vertex Cover problem on m-edge graphs cannot be solved by any algorithm that uses only \(m^k\) time (for some constant k) and \(O(\log m)\) additional space (beyond the \(O(m \log n)\) bits of input that list the edges of the graph). This is far from a controversial position; \(O(\log m)\) space is not enough to even store a subset of nodes from the graph (a candidate vertex cover), and it is widely believed that this tiny space requirement is a severe restriction on computational power. In particular, it is also widely believed that \(\mathsf{L}\ne \mathsf{P}\) (on which I would put less likelihood, but not much less).
I mainly want to highlight \(\mathsf{L}\ne \mathsf{NP}\) because (unlike the situation of \(\mathsf{P}\ne \mathsf{NP}\), which is murkier) I believe that substantial progress has already been made on the problem. Significant combinatorial approaches to space lower bounds (such as [BJS98, Ajt99, BSSV00, BV02]) have yielded model-independent superlinear time lower bounds on decision problems in \(\mathsf{P}\), when the space usage is \(n^{1-\varepsilon }\) or less. (In fact, these results hold in a nonuniform version of time-space bounded computation.) Approaches based on diagonalization/simulation methods, aimed at proving lower bounds for \(\mathsf{NP}\)-hard decision problems such as SAT, include [For00, FLvMV05, Wil08a, Wil13a, BW12] and show that problems such as SAT, Vertex Cover, and Independent Set require \(n^{2\cos (\pi /7)-o(1)}\) time to be solved when the space usage of the algorithm is \(n^{o(1)}\). Unfortunately, \(2\cos (\pi /7) < 1.81\), and in fact the last reference above shows that current techniques cannot improve this curious exponent. So in a quantitative sense, we have a long way to go before \(\mathsf{L}\ne \mathsf{NP}\).
After studying these methods for years now, I am more or less convinced of \(\mathsf{L}\ne \mathsf{NP}\) and that it will be proved, possibly long before \(\mathsf{P}\) vs \(\mathsf{NP}\) is resolved. In fact I believe that only a few new ideas will be required to yield enough “bootstrapping” to separate \(\mathsf{L}\) and \(\mathsf{NP}\). The catch is that I am afraid the missing ideas will need to be extraordinarily clever, unlike anything seen before in mathematics (at least a Gödel-incompleteness-level of cleverness, relative to the age in which he proved those famous theorems). In the meantime, we do what we can.
2.4 NP Does Not Have Fixed PolynomialSize Circuits
First, I put a bit less likelihood (93%) on \(\mathsf{NP}\not \subset \mathsf{SIZE}(n^k)\) for some constant k, because it implies \(\mathsf{NEXP}\not \subset \mathsf{P}/\mathsf{poly}\) (and they don’t seem to be equivalent), so it is stronger than the other circuit lower bound problems that have been mentioned so far.
There is a considerable history of results in this direction. Kannan [Kan82] proved “fixed polynomial” circuit lower bounds for the class \(\mathsf{NP}^{\mathsf{NP}}\) (a.k.a. \(\varSigma _2 \mathsf{P}\)): for every constant \(k \ge 1\), there is a problem in \(\mathsf{NP}^{\mathsf{NP}}\) that does not have \(n^k\)-size Boolean circuits (over any gate basis that you like). Over time, his fixed-polynomial lower bound has been improved several times, from \(\mathsf{NP}^{\mathsf{NP}}\) to seemingly smaller complexity classes such as \(\mathsf{ZPP}^{\mathsf{NP}}\) [KW98]. It is known that \(\mathsf{MA}/1\) (Merlin-Arthur with one bit of advice) is not in \(\mathsf{SIZE}(n^k)\) for each k [San07], and due to our beliefs about circuit lower bounds [IW97] it is believed that \(\mathsf{MA}= \mathsf{NP}\) (i.e., it is believed that randomness doesn’t help much with non-interactive verification of proofs). This looks like strong evidence in favor of \(\mathsf{NP}\not \subset \mathsf{SIZE}(n^k)\), besides the intuition that \(\mathsf{NP}\) problems that require \(n^{100000k}\) nondeterministic time probably can’t be “compressed” to \(n^k\)-size circuits.
The question of whether classes such as \(\mathsf{P}\), \(\mathsf{NP}\), and \(\mathsf{P}^{\mathsf{NP}}\) have fixed polynomial-size circuits is discussed in [Lip94, FSW09, GM15, Din15], and many absurd-looking consequences have been derived from propositions such as \(\mathsf{NP}\subset \mathsf{SIZE}(n^k)\) (of course, none of these have been proved to be actually contradictory).
For instance, suppose \(\mathsf{NP}\subset \mathsf{SIZE}(n^k)\), and let V(x, y) be a polynomial-time verifier for some \(\mathsf{NP}\) language. Consider the task: given an x and an integer i, print the ith bit of the lexicographically first y such that V(x, y) accepts (or print 0 if no such y exists). This task is in \(\mathsf{P}^{\mathsf{NP}}\), and therefore has \(O(n^k)\)-size circuits under the hypothesis. Thus the witnesses printed by these circuits have low circuit complexity, for every x.
2.5 BPP is in SubExponential Time
Recall \(\mathsf{SUBEXP}= \cap _{k \in {\mathbb N}} \mathsf{TIME}(2^{n^{1/k}})\), i.e., it is the class of problems solvable in \(O(2^{n^{\varepsilon }})\) time, for any \(\varepsilon > 0\) as close to zero as you like.
The main reason for putting a high likelihood on \(\mathsf{BPP}\subseteq \mathsf{SUBEXP}\) is that it is implied by \(\mathsf{EXP}\not \subset \text { io}\mathsf{P}/\mathsf{poly}\) [NW94, BFNW93], which I also believe to be true, although perhaps not quite as strongly as \(\mathsf{NEXP}\not \subset \mathsf{P}/\mathsf{poly}\). (The “io” part stands for “infinitely often”; the statement means that there is a function in \(\mathsf{EXP}\) which, for almost every input length n, fails to have circuits of size \(\mathsf{poly}(n)\).) The intuition for why \(\mathsf{EXP}\not \subset \text { io}\mathsf{P}/\mathsf{poly}\) is similar to the intuition for why \(\mathsf{NEXP}\not \subset \mathsf{P}/\mathsf{poly}\). As a starting point, one can easily prove results like \(\mathsf{EXP}\not \subset \text { io}\mathsf{TIME}[2^{n^k}]\) for every constant k, by diagonalization. It would be very surprising, even magical, if one could take a problem that cannot be solved infinitely often in \(2^{n^k}\) time and solve it infinitely often in polynomial time, simply because one got a separate algorithm and received polynomially long extra advice for each input length. Intuitively, to solve the hardest problems in \(\mathsf{EXP}\), the only advice that could truly help you solve the problem quickly (on all inputs of length n) would be the entire \(2^n\)-bit truth table for length n, and for such problems it is not clear why some input lengths would ever be easier than others. (Aside: it is interesting to note that \(\mathsf{P}= \mathsf{NP}\) implies circuit lower bounds such as \(\mathsf{EXP}\not \subset \text { io}\mathsf{P}/\mathsf{poly}\).)
For what it’s worth, I would put about 87% likelihood on \(\mathsf{P}= \mathsf{BPP}\): a bit lower than the 90% on \(\mathsf{BPP}\subseteq \mathsf{SUBEXP}\) (for good reason), but not significantly less. These two likelihoods aren’t substantially different for me, because I tend to believe that nontrivial derandomizations of \(\mathsf{BPP}\) are likely to imply much more efficient derandomizations, along the lines of Theorems 1.4 and 1.5 in [Wil10].
2.6 P vs PSPACE
Nevertheless, \(\mathsf{P}= \mathsf{PSPACE}\) does look extremely unlikely: the idea that \(\mathsf{PSPACE}\) corresponds to computing winning strategies in two-player games makes it clear that our world would be extremely weird and wonderful if \(\mathsf{P}= \mathsf{PSPACE}\) and we discovered a fast algorithm for solving quantified Boolean formulas.

If \(\mathsf{PSPACE}\) is so large, why can’t we prove an \(n^{1.0001}\) time lower bound against solving QBF? What are the obstacles? Can we articulate some interesting tools that would suffice to prove the lower bound, if we had them?
2.7 P vs NP
I do not have much to say about \(\mathsf{P}\) versus \(\mathsf{NP}\) beyond what has already been said, over many decades, by many researchers (a notable example is Aaronson’s astounding recent survey [Aar16]). But I do only give \(80\%\) likelihood of \(\mathsf{P}\ne \mathsf{NP}\) being true. Why only \(80\%\)? Because the more I think about \(\mathsf{P}\) versus \(\mathsf{NP}\), the less I understand about it, so why should I be so confident in its answer? Because ETH is less likely to be true than \(\mathsf{P}\ne \mathsf{NP}\), and I feel like the truth of ETH is not so far from a coin toss. Because it is not hard, when you squint, to view incredible achievements like the PCP theorem [AS98, ALM+98] as progress towards \(\mathsf{P}= \mathsf{NP}\) (one only has to satisfy \(7/8+\varepsilon \) of the clauses in a MAX3SAT instance! [Hås01]) instead of hardness for approximately solving \(\mathsf{NP}\) problems.
Yes, intuitively and obviously \(\mathsf{P}\ne \mathsf{NP}\)—but only intuitively and obviously. (Incidentally, I put about the same likelihood on the existence of oneway functions; I tend to believe in strong worstcase to averagecase reductions.)
2.8 ETH: The Exponential Time Hypothesis
Although I give it only \(70\%\) likelihood, I do not think it is necessarily presumptuous to base a research program on ETH being true, but I do think researchers should be a little more skeptical of ETH, and periodically think seriously about how it might be refuted. (As Russell Impagliazzo often reminds me: “It’s not a Conjecture, it’s a Hypothesis! We chose that word for a reason.”) More points along these lines will be given when SETH (the Strong ETH) is discussed.
2.9 NC1 versus TC0
Recall that \(\mathsf{NC}^1\) (“Nick’s Class 1”) is the class of problems solvable with \(O(\log n)\)-depth circuits of polynomial size and constant fan-in for each gate. The class \(\mathsf{TC}^0\) contains problems solvable with O(1)-depth circuits of polynomial size with unbounded fan-in MAJORITY gates along with inverters. (The \(\mathsf{T}\) stands for “Threshold”—without loss of generality, the gates could be arbitrary linear threshold functions.) It is well-known that \(\mathsf{TC}^0 \subseteq \mathsf{NC}^1\), and \(\mathsf{NC}^1 \ne \mathsf{TC}^0\) is sometimes used as a working hypothesis.
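To give the flavor of \(\mathsf{TC}^0\) computation, here is a minimal sketch (in Python, simulating the gates directly; the function names are mine) of the classic fact that PARITY has constant-depth threshold circuits: EXACT_k is a difference of two threshold gates, and PARITY is the sum of EXACT_k over odd k.

```python
def threshold_gate(inputs, t):
    """Unbounded fan-in threshold gate: outputs 1 iff at least t inputs are 1."""
    return int(sum(inputs) >= t)

def parity_tc0(x):
    """PARITY from one layer of threshold gates: EXACT_k(x) equals
    THR_k(x) - THR_(k+1)(x), and PARITY(x) is the sum of EXACT_k(x)
    over odd k. (Using a small integer combination at the top instead
    of a single MAJORITY gate is fine up to constant depth and
    polynomial size overhead.)"""
    n = len(x)
    return sum(threshold_gate(x, k) - threshold_gate(x, k + 1)
               for k in range(1, n + 1, 2))
```

PARITY is a useful reference point here: it is in \(\mathsf{TC}^0\) as sketched above, yet famously outside \(\mathsf{AC}^0\), which is one way to see that MAJORITY gates add real power at constant depth.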
Upon reflection, I have possibly put significantly less weight on \(\mathsf{NC}^1 \ne \mathsf{TC}^0\) (only \(50\%\) likelihood) than one might expect. (I know of at least one complexity theorist who has worked for years to prove that \(\mathsf{NC}^1 = \mathsf{TC}^0\). No, it’s not me.) One not-terribly-serious reason for doubting \(\mathsf{NC}^1 \ne \mathsf{TC}^0\) is that \(\mathsf{TC}^0\) is (as far as we know) the class of circuits most closely resembling the human brain, and we are all familiar with how unexpectedly powerful that sort of computational device can be.
A more serious reason for doubting \(\mathsf{NC}^1 \ne \mathsf{TC}^0\) is that \(\mathsf{NC}^1\) has proven to be surprisingly easier than expected, and \(\mathsf{TC}^0\) has been surprisingly powerful (in a formal sense). Barrington’s amazing theorem [Bar89] shows that \(\mathsf{NC}^1\) corresponds to only “O(1)space computation” in a computational model that has random access to the input. One corollary is that the word problem over the group \(S_5\) (given a sequence of group elements, is its product the identity?) is already \(\mathsf{NC}^1\)complete: solving it in \(\mathsf{TC}^0\) would prove \(\mathsf{NC}^1 = \mathsf{TC}^0\).
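The \(\mathsf{NC}^1\)-complete problem in question is strikingly simple. A small sketch (permutations as Python tuples, an encoding chosen here only for illustration) shows the word problem over \(S_5\) and the constant-space scan that solves it:

```python
# Elements of S_5 as tuples: perm[i] is the image of i under the permutation.
IDENTITY = tuple(range(5))

def compose(p, q):
    """Return the permutation obtained by applying q first, then p."""
    return tuple(p[q[i]] for i in range(5))

def word_is_identity(word):
    """Word problem over S_5: is the product of the given sequence of
    group elements the identity? The scan keeps only a single group
    element of state -- the O(1)-space flavor behind Barrington's theorem."""
    acc = IDENTITY
    for g in word:
        acc = compose(acc, g)
    return acc == IDENTITY
```

Barrington's theorem says this trivial-looking scan already captures all of \(\mathsf{NC}^1\): any log-depth circuit can be rewritten as a polynomial-length sequence of \(S_5\) elements whose product is the identity exactly when the circuit rejects (under a suitable encoding).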
The circuit complexity literature shows that many problems first known to be in \(\mathsf{NC}^1\) were later systematically placed in \(\mathsf{TC}^0\); a nice example is integer division [BCH86, RT92], but many other interesting numerical and algebraic tasks also turn out to be in \(\mathsf{TC}^0\) (such as the pseudorandom function constructions of Naor and Reingold [NR04]). For many different types of groups (but not \(S_5\), as far as we know) the word problems are known to be in \(\mathsf{TC}^0\) (see [MVW17] for very recent work, with references). In fact, every natural problem that I know of in \(\mathsf{NC}^1\) is either already \(\mathsf{NC}^1\)-complete under \(\mathsf{TC}^0\) reductions, or is already in \(\mathsf{TC}^0\) (there are no good candidate problems which are neither). So perhaps we are one smart threshold circuit away from making the two classes equal.
An argument leaning towards \(\mathsf{NC}^1 \ne \mathsf{TC}^0\) may be found in Allender and Koucký [AK10], who show that if \(\mathsf{NC}^1 = \mathsf{TC}^0\), then there are \(n^{1+\varepsilon }\)-size, \(O(1/\varepsilon )\)-depth \(\mathsf{TC}^0\) circuits for certain \(\mathsf{NC}^1\)-complete problems, for every \(\varepsilon > 0\). Another point is that, if \(\mathsf{NC}^1 = \mathsf{TC}^0\), then (by a padding argument) the class \(\mathsf{PSPACE}\) lies in the so-called “polynomial-time counting hierarchy”, which intuitively seems smaller. Maybe multiple layers of oracle calls to counting solutions are powerful, and such circuits exist? To me, it’s a coin flip.
2.10 EXP vs NEXP
Many may wonder why I put only 45% likelihood on \(\mathsf{EXP}\ne \mathsf{NEXP}\). (I suspect the others will, instead of wondering, just assume that I’m out of my mind.) Well, for one, we do have that \(\mathsf{EXP}\ne \mathsf{NEXP}\) implies \(\mathsf{P}\ne \mathsf{NP}\), and the other direction (at least from a provability standpoint) does not seem to hold, so it is natural to consider \(\mathsf{EXP}\ne \mathsf{NEXP}\) to be not as likely as \(\mathsf{P}\ne \mathsf{NP}\).
To make the percentage dip below \(50\%\), there are other reasons. For one, if we think we’re ignorant about what is impossible in \(\mathsf{P}\), then we are total idiots about what is impossible in \(\mathsf{EXP}\). The scope of what can be done in exponential time has barely been scratched, I believe, and many improved exponential-time algorithms in the literature are mainly applications of known polynomial-time strategies to exponential-sized instances. That is, we generally don’t know how to construct algorithms that take advantage of the additional structure provided by succinctly-represented inputs—which are the defining feature of many \(\mathsf{NEXP}\)-complete problems [PY86, GW83]—and I am inclined to believe that nontrivial algorithms for solving problems on succinct inputs should exist. Arora et al. [ASW09] study exponentially large graphs whose edge relations are defined by small weak circuits such as \(\mathsf{AC}^0\) circuits, and give several interesting algorithms for solving problems on such graphs. They also show that the usual \(\mathsf{NP}\)-complete problems cannot be solved on graphs defined by \(\mathsf{AC}^0\) circuits unless \(\mathsf{NEXP}= \mathsf{EXP}\).
Consider the MaxCliqueCNF problem: given a CNF F on 2n variables and m clauses, and an integer \(k \in [2^n]\), is there a clique in the graph of F (the graph on \(2^n\) nodes whose edge relation is computed by F) of size at least k? One can define the MaxCliqueDNF problem in an analogous way. In unpublished work, Josh Alman and I noticed that MaxCliqueCNF is solvable in \(2^{O(m+n)}\) time, but MaxCliqueDNF is already \(\mathsf{NEXP}\)-complete, even for constant-width DNFs of n variables and \(\mathsf{poly}(n)\) terms! Das et al. [DST17] make similar observations for the CNF and DNF versions of other \(\mathsf{NP}\)-hard problems.
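To make the succinct-input setting concrete, here is a hedged toy sketch (the predicate F below is a plain Python function standing in for a small CNF or DNF, and all names are mine): a formula on 2n variables defines a graph on \(2^n\) nodes, and the naive clique search on that graph runs in doubly exponential time.

```python
from itertools import combinations, product

def graph_of(F, n):
    """The 'graph of F': nodes are the n-bit strings, and u, v are
    adjacent iff F accepts the 2n-bit input u + v. F is a plain
    predicate here, a stand-in for a small CNF or DNF."""
    nodes = list(product((0, 1), repeat=n))
    return {(u, v) for u, v in combinations(nodes, 2)
            if F(u + v) and F(v + u)}

def max_clique_size(edges, nodes):
    """Naive maximum clique by subset enumeration. Over a succinctly
    given graph with 2^n nodes, this is the naive 2^(2^O(n)) running
    time discussed in the text."""
    for r in range(len(nodes), 0, -1):
        for S in combinations(nodes, r):
            if all((u, v) in edges or (v, u) in edges
                   for u, v in combinations(S, 2)):
                return r
    return 0
```

The question in the text is whether anything substantially smarter than this enumeration is possible when the input is the formula F itself rather than the expanded graph.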
I would be very surprised if the hardest cases of the MaxClique problem (or even the somewhat-hardest cases) can be generated by tiny DNF formulas of constant width. I predict that succinctly-represented versions of problems solvable in \(2^{O(n)}\) time can be solved faster than the naive \(2^{2^{O(n)}}\) deterministic running time; perhaps even in \(2^{O(n^k)}\) time for some large constant k.
2.11 SETH: The Strong Exponential Time Hypothesis
Recall that SETH (the Strong Exponential Time Hypothesis) asserts that for every \(\delta < 1\), there is a k such that k-SAT on n variables cannot be solved in \(2^{\delta n}\) time. The main reason for skepticism is that the area of exponential-time algorithms has produced many results where the naive running time of \(c^n\) for an \(\mathsf{NP}\)-complete problem was reduced to \(O((c-\delta )^n)\) for some \(\delta > 0\) (see the textbook of Fomin and Kratsch for examples [FK10]). I do not see a good reason for believing that k-SAT is immune to such improvements. But people have tried hard to solve the problem, especially over the last 10 years, so there is some reason to believe SETH. Personally, I have benefited from believing it is false: trying to solve SAT faster has led me down several fruit-bearing paths that I would have never explored otherwise. I believe that contentious conjectures/hypotheses like SETH need to exist, to keep a research area vibrant and active.
Orthogonal Vectors: Given n Boolean vectors in \(c\log (n)\) dimensions (for some constant parameter c), are there two which are orthogonal?

The Orthogonal Vectors Conjecture (OVC) is that for every \(\varepsilon > 0\), there is a (potentially large) \(c \ge 1\) such that no algorithm solves Orthogonal Vectors in \(n^{2-\varepsilon }\) time. It is known that SETH implies OVC [Wil04, WY14], and many SETH-hardness results are actually OVC-hardness results. It looks very plausible to me that OVC is true but SETH is false.
It is useful to think of the Orthogonal Vectors problem as an interesting detection version of rectangular matrix multiplication: given a “skinny” Boolean matrix A, does \(A \cdot A^T\) contain a zero entry? Note that detecting whether there is a nonzero entry can be done in randomized linear time, by Freivalds’ checker for matrix multiplication [Fre77]. So the OVC asks whether matrix multiplication checking in the Boolean domain can be extended to checking for a zero entry, without (essentially) multiplying the two matrices.
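A minimal sketch of both sides of this comparison (the brute-force OV search, and a Freivalds-style randomized test of whether \(A \cdot A^T\) is the all-zero matrix; the function names are mine, for illustration):

```python
import random

def has_orthogonal_pair(vectors):
    """Brute-force Orthogonal Vectors: O(n^2 * d) time on n vectors of
    dimension d."""
    n = len(vectors)
    return any(
        all(a * b == 0 for a, b in zip(vectors[i], vectors[j]))
        for i in range(n) for j in range(i + 1, n)
    )

def freivalds_all_zero(A, trials=20):
    """Freivalds-style check that A * A^T is all-zero, without computing
    the product: pick random r in {0,1}^n and test A * (A^T * r) = 0.
    Each trial costs time linear in the size of A; a nonzero product
    matrix escapes all trials with only exponentially small probability."""
    n, d = len(A), len(A[0])
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        s = [sum(A[i][k] * r[i] for i in range(n)) for k in range(d)]  # A^T * r
        t = [sum(A[i][k] * s[k] for k in range(d)) for i in range(n)]  # A * s
        if any(t):
            return False
    return True
```

The contrast is exactly the one in the text: the randomized checker finds a nonzero entry in near-linear time, while nothing comparably fast is known for detecting a single zero entry, which is the OV question.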
2.12 NEXP vs CoNEXP
I put only 20% likelihood on \(\mathsf{NEXP}\ne \mathsf{coNEXP}\); here are a few reasons why I lean towards thinking the two classes are equal.

1. It is true with small advice. First, it is known that \(\mathsf{coNEXP}\) is already contained in \(\mathsf{NEXP}\) with O(n) bits of advice, as reported by Buhrman, Fortnow, and Santhanam [BFS09]. Given a language \(L \in \mathsf{coNEXP}\), for inputs of length n, the advice encodes the number of strings of length n which are in L. Then in nondeterministic exponential time, one can guess the strings of length n that are not in L, guess witnesses for each of them, verify all of this information, and accept the input if and only if it is not among the guessed strings. Thus in order to put \(\mathsf{coNEXP}\) in \(\mathsf{NEXP}\) without advice, it would suffice to be able to count (in \(\mathsf{NEXP}\)) the number of accepted strings of length n for an \(\mathsf{NEXP}\) machine. Note how the power of exponential time is being used: \(\mathsf{coNP}\subset \mathsf{NP}/\mathsf{poly}\) seems very unlikely in comparison. This proof looks to be inspired by the inductive counting technique in the proof of \(\mathsf{NSPACE}[S(n)] = \mathsf{coNSPACE}[S(n)]\), due to Immerman [Imm88] and Szelepcsényi [Sze88].
2. The Spectrum Problem. A stronger result than \(\mathsf{NEXP}= \mathsf{coNEXP}\) would be implied by an expected resolution of the spectrum problem. In finite model theory, the spectrum of a first-order sentence \(\phi \) is the set of all finite cardinalities of models of \(\phi \). For example, if \(\phi \) is a sentence that defines the axioms of a field, then its spectrum is the set of all prime powers. In the 1950s, Asser (see [DJMM12] for a comprehensive survey) asked whether the complement of a spectrum is always a spectrum itself, i.e.:
The Spectrum Problem: Given a firstorder sentence \(\phi \), is there another sentence \(\psi \) whose spectrum is the complement of \(\phi \)’s spectrum?
3. MaxCliqueDNF. From the section on \(\mathsf{EXP}\) vs \(\mathsf{NEXP}\) (Sect. 2.10), the following would imply \(\mathsf{NEXP}= \mathsf{coNEXP}\): given a DNF formula F of constant width, 2n variables, and \(\mathsf{poly}(n)\) terms, there is a nondeterministic algorithm running in \(2^{\mathsf{poly}(n)}\) time which accepts F if and only if the graph of F does not have a clique of a certain desired size. (Recall we said that the corresponding problem for CNFs is solvable exactly in \(2^{\mathsf{poly}(n)}\) time.)
4. Why not? I don’t know of any truly counterintuitive consequences of \(\mathsf{NEXP}= \mathsf{coNEXP}\). Because one can enumerate over all inputs of length n in exponential time, and in nondeterministic exponential time one can even guess witnesses of exponential length for each input of length n, I think these classes behave differently from our intuitions about lower complexity classes.
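The advice trick in reason 1 can be illustrated at toy scale (everything here is a hypothetical stand-in of my own: compositeness plays the role of an arbitrary nondeterministic acceptance relation, and the "advice" is the exact census of yes-instances at a given length):

```python
from itertools import product

def verifier(x, w):
    """A toy acceptance relation V(x, w): accept iff w is a nontrivial
    factor of the integer encoded by the bit-string x."""
    v = int(x, 2)
    return 1 < w < v and v % w == 0

def census(n):
    """The advice: the exact number of length-n yes-instances."""
    return sum(
        any(verifier(x, w) for w in range(2, int(x, 2) + 1))
        for x in ("".join(bits) for bits in product("01", repeat=n))
    )

def certify_nonmembership(x, count, members_with_witnesses):
    """A certificate that x is NOT accepted: `count` distinct strings of
    length |x|, all different from x, each paired with an accepting
    witness. If `count` is the true census, a valid certificate rules
    x out, because the guessed strings exhaust the yes-instances."""
    n = len(x)
    strings = [s for s, _ in members_with_witnesses]
    return (len(strings) == count
            and len(set(strings)) == count
            and all(len(s) == n and s != x for s in strings)
            and all(verifier(s, w) for s, w in members_with_witnesses))
```

The nondeterministic machine in the BFS09 argument guesses the list `members_with_witnesses` at exponential scale; the only information it cannot produce on its own is the census, which is why the simulation needs O(n) bits of advice.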
2.13 NSETH: Nondeterministic SETH
Carmosino et al. [CGI+16] proposed the Nondeterministic Strong Exponential Time Hypothesis (NSETH): for all \(\delta < 1\), there is a k such that k-UNSAT on n variables is not solvable in nondeterministic \(2^{\delta n}\) time. So NSETH proposes that there is no proof system which can refute unsatisfiable \(\omega (1)\)-width CNFs in \(2^{\delta n}\) steps, for any \(\delta < 1\).
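For a concrete sense of what such a proof system looks like, here is a minimal resolution-refutation checker (my own sketch, not from [CGI+16]): a prover supplies a derivation, and the verifier checks it in time polynomial in its length. NSETH asserts that no proof system, resolution or otherwise, admits refutations of all unsatisfiable \(\omega(1)\)-width CNFs of total checkable size \(2^{\delta n}\) for some \(\delta < 1\).

```python
# A minimal resolution-refutation checker.  Clauses are frozensets of
# nonzero ints (x or -x for variable x).  A proof is a list of clauses:
# each must be an input clause or the resolvent of two earlier clauses,
# and the last must be the empty clause.  Checking time is polynomial in
# the proof length; the question is how short proofs can be.

def is_resolvent(c, a, b):
    # c = (a \ {x}) ∪ (b \ {-x}) for some literal x with x ∈ a, -x ∈ b
    return any(-x in b and c == (a - {x}) | (b - {-x}) for x in a)

def check_refutation(cnf, proof):
    derived = []
    for c in proof:
        ok = c in cnf or any(is_resolvent(c, a, b)
                             for a in derived for b in derived)
        if not ok:
            return False
        derived.append(c)
    return bool(proof) and len(proof[-1]) == 0   # must end in empty clause
```

For the contradictory CNF {x}, {¬x}, the three-line proof [{x}, {¬x}, ∅] is accepted, while a proof skipping the derivation of ∅ is rejected.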
I put \(15\%\) likelihood on NSETH being true. The most obvious reason for skepticism is that the mild extension to Merlin-Arthur proof systems is very false: Formula-UNSAT for \(2^{o(n)}\)-size formulas can be proved with a probabilistic verifier in only \(2^{n/2+o(n)}\) time [Wil16].
Since it is generally believed that \(\mathsf{MA}= \mathsf{NP}\), one might think the story is essentially over, and that I should put a much lower likelihood on NSETH. That is not quite the case: while \(\mathsf{MA}\) may well equal \(\mathsf{NP}\), it is not clear how a \(2^{n/2}\)-time \(\mathsf{MA}\) algorithm could be simulated in \(1.999^n\) nondeterministic time. It is not even clear that the (one-round) Arthur-Merlin version of SETH is false, because the known inclusion of \(\mathsf{MA}\) in \(\mathsf{AM}\) incurs quadratic overhead. Refuting the one-round Arthur-Merlin SETH (where Arthur tosses coins, then Merlin sends a message based on the coins, then an accept/reject decision is made, and we want Merlin to prove that a given formula is UNSAT in \(1.999^n\) time) would probably imply that a non-uniform variant of NSETH is false.
2.14 L vs RL
I put 95% likelihood on \(\mathsf{L}= \mathsf{RL}\); that is, the problems solvable in randomized logarithmic space are exactly those solvable in deterministic logspace. At this moment in time, it feels like this problem has “almost” been solved. Intuitively, there are two factors in favor of \(\mathsf{L}= \mathsf{RL}\): (1) we already believe that randomness generally does not help solve problems much more efficiently than deterministic algorithms, and (2) space-bounded computation appears to be fairly robust under modifications to the acceptance conditions of the model (think of \(\mathsf{NL}\subseteq \mathsf{SPACE}[\log ^2 n]\) [Sav70] and \(\mathsf{NL}= \mathsf{coNL}\) [Imm88, Sze88]).
As far as I know, the main problem that was thought to be a potential separator of \(\mathsf{RL}\) and \(\mathsf{L}\) was undirected s-t connectivity [AKL+79]. However, this problem was shown to be in \(\mathsf{L}\) by a remarkable algorithm of Reingold [Rei08]. In follow-up work, Reingold et al. [RTV06] showed how to solve s-t connectivity in Eulerian directed graphs in \(\mathsf{L}\), and showed that a logspace algorithm for a seemingly slight generalization of their problem would imply \(\mathsf{L}= \mathsf{RL}\). My personal interpretation of these results is that \(\mathsf{L}= \mathsf{RL}\) is true, and is only one really good idea away from being resolved.
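For reference, the classical \(\mathsf{RL}\) algorithm for undirected s-t connectivity [AKL+79] is just a polynomial-length random walk from s, which needs only logarithmic space (the current vertex plus a step counter). Reingold's theorem derandomizes exactly this. A toy simulation (my own code):

```python
# Random-walk algorithm for undirected s-t connectivity [AKL+79]:
# walk randomly from s for polynomially many steps and accept on
# reaching t.  The expected cover time of an undirected graph is O(n^3),
# so a walk of that length succeeds with high probability.  Only the
# current vertex and a counter are stored: O(log n) bits.

import random

def random_walk_connected(adj, s, t, steps=None, seed=0):
    rng = random.Random(seed)           # fixed seed for reproducibility
    if steps is None:
        steps = 8 * len(adj) ** 3       # comfortably above the cover time
    v = s
    for _ in range(steps):
        if v == t:
            return True
        v = rng.choice(adj[v])          # move to a uniformly random neighbor
    return v == t

# A path 0-1-2-3 plus a separate component {4, 5}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
```

On this graph the walk finds 3 from 0 (same component) but can never report 4 as reachable from 0; note the one-sided error is only on connected pairs, never on disconnected ones.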
Acknowledgment
I appreciate Gerhard Woeginger’s considerable patience with me during the writing of this article, and Scott Aaronson, Josh Alman, Boaz Barak, Greg Bodwin, Sam Buss, Lance Fortnow, Richard Lipton, Kenneth Regan, Omer Reingold, Rahul Santhanam, and Madhu Sudan for helpful comments on a draft, some of which led me to adjust my likelihoods by a few percentage points.
References
 [Aar16] Aaronson, S.: \(P\mathop{=}\limits^{?}NP\). In: Nash Jr., J.F., Rassias, M.T. (eds.) Open Problems in Mathematics, pp. 1–122. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-32162-2_1
 [ABV15] Abboud, A., Backurs, A., Williams, V.V.: Tight hardness results for LCS and other sequence similarity measures. In: FOCS, pp. 59–78 (2015)
 [ACW16] Alman, J., Chan, T.M., Williams, R.R.: Polynomial representations of threshold functions and algorithmic applications. In: FOCS, pp. 467–476 (2016)
 [Ajt99] Ajtai, M.: A non-linear time lower bound for Boolean branching programs. Theory Comput. 1(1), 149–176 (2005). Preliminary version in FOCS 1999
 [AK10] Allender, E., Koucký, M.: Amplifying lower bounds by means of self-reducibility. J. ACM 57(3), 14 (2010)
 [AKL+79] Aleliunas, R., Karp, R.M., Lipton, R.J., Lovász, L., Rackoff, C.: Random walks, universal traversal sequences, and the complexity of maze problems. In: FOCS, pp. 218–223 (1979)
 [ALM+98] Arora, S., Lund, C., Motwani, R., Sudan, M., Szegedy, M.: Proof verification and the hardness of approximation problems. J. ACM 45(3), 501–555 (1998)
 [AS98] Arora, S., Safra, S.: Probabilistic checking of proofs: a new characterization of NP. J. ACM 45(1), 70–122 (1998)
 [Ash94] Ash, C.J.: A conjecture concerning the spectrum of a sentence. Math. Log. Q. 40, 393–397 (1994)
 [ASW09] Arora, S., Steurer, D., Wigderson, A.: Towards a study of low-complexity graphs. In: Albers, S., Marchetti-Spaccamela, A., Matias, Y., Nikoletseas, S., Thomas, W. (eds.) ICALP 2009, Part I. LNCS, vol. 5555, pp. 119–131. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02927-1_12
 [AVW14] Abboud, A., Williams, V.V., Weimann, O.: Consequences of faster alignment of sequences. In: Esparza, J., Fraigniaud, P., Husfeldt, T., Koutsoupias, E. (eds.) ICALP 2014. LNCS, vol. 8572, pp. 39–51. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-43948-7_4
 [AW09] Aaronson, S., Wigderson, A.: Algebrization: a new barrier in complexity theory. ACM TOCT 1(1), 2 (2009)
 [AW17] Alman, J., Williams, R.: Probabilistic rank and matrix rigidity. In: STOC, pp. 641–652 (2017)
 [Bar89] Barrington, D.A.M.: Bounded-width polynomial-size branching programs recognize exactly those languages in NC\(^1\). J. Comput. Syst. Sci. 38(1), 150–164 (1989)
 [BCH86] Beame, P.W., Cook, S.A., Hoover, H.J.: Log depth circuits for division and related problems. SIAM J. Comput. 15(4), 994–1003 (1986)
 [BFNW93] Babai, L., Fortnow, L., Nisan, N., Wigderson, A.: BPP has subexponential time simulations unless EXPTIME has publishable proofs. Comput. Complex. 3(4), 307–318 (1993)
 [BFS09] Buhrman, H., Fortnow, L., Santhanam, R.: Unconditional lower bounds against advice. In: Albers, S., Marchetti-Spaccamela, A., Matias, Y., Nikoletseas, S., Thomas, W. (eds.) ICALP 2009, Part I. LNCS, vol. 5555, pp. 195–209. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02927-1_18
 [BGS75] Baker, T., Gill, J., Solovay, R.: Relativizations of the P =? NP question. SIAM J. Comput. 4(4), 431–442 (1975)
 [BI15] Backurs, A., Indyk, P.: Edit distance cannot be computed in strongly subquadratic time (unless SETH is false). In: STOC, pp. 51–58 (2015)
 [BI16] Backurs, A., Indyk, P.: Which regular expression patterns are hard to match? In: FOCS, pp. 457–466 (2016)
 [BJS98] Beame, P., Thathachar, J.S., Saks, M.: Time-space tradeoffs for branching programs. J. Comput. Syst. Sci. 63(4), 542–572 (2001). Preliminary version in FOCS 1998
 [BM16] Bringmann, K., Mulzer, W.: Approximability of the discrete Fréchet distance. JoCG 7(2), 46–76 (2016)
 [Bri14] Bringmann, K.: Why walking the dog takes time: Fréchet distance has no strongly subquadratic algorithms unless SETH fails. In: FOCS, pp. 661–670 (2014)
 [BSSV00] Beame, P., Saks, M., Sun, X., Vee, E.: Time-space trade-off lower bounds for randomized computation of decision problems. J. ACM 50(2), 154–195 (2003). Preliminary version in FOCS 2000
 [BV02] Beame, P., Vee, E.: Time-space trade-offs, multiparty communication complexity, and nearest-neighbor problems. In: STOC, pp. 688–697 (2002)
 [BW12] Buss, S.R., Williams, R.: Limits on alternation trading proofs for time-space lower bounds. Comput. Complex. 24(3), 533–600 (2015). Preliminary version in CCC 2012
 [CGI+16] Carmosino, M., Gao, J., Impagliazzo, R., Mikhailin, I., Paturi, R., Schneider, S.: Nondeterministic extensions of the strong exponential time hypothesis and consequences for non-reducibility. In: ACM Conference on Innovations in Theoretical Computer Science (ITCS), pp. 261–270 (2016)
 [CM06] Chateau, A., More, M.: The ultra-weak Ash conjecture and some particular cases. Math. Log. Q. 52(1), 4–13 (2006)
 [COS17] Chen, R., Oliveira, I.C., Santhanam, R.: An average-case lower bound against ACC\(^0\). In: Electronic Colloquium on Computational Complexity (ECCC), vol. 24, no. 173 (2017)
 [Din15] Ding, N.: Some new consequences of the hypothesis that P has fixed polynomial-size circuits. In: Jain, R., Jain, S., Stephan, F. (eds.) TAMC 2015. LNCS, vol. 9076, pp. 75–86. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-17142-5_8
 [DJMM12] Durand, A., Jones, N.D., Makowsky, J.A., More, M.: Fifty years of the spectrum problem: survey and new results. Bull. Symb. Logic 18(4), 505–553 (2012)
 [DST17] Das, B., Scharpfenecker, P., Torán, J.: CNF and DNF succinct graph encodings. Inf. Comput. 253(3), 436–447 (2017)
 [Dvi17] Dvir, Z.: A generating matrix of a good code may have low rigidity. Written by Oded Goldreich (2017). http://www.wisdom.weizmann.ac.il/~oded/MC/209.pdf
 [FK10] Fomin, F.V., Kratsch, D.: Exact Exponential Algorithms. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-16533-7
 [FLvMV05] Fortnow, L., Lipton, R.J., van Melkebeek, D., Viglas, A.: Time-space lower bounds for satisfiability. J. ACM 52(6), 835–865 (2005)
 [For00] Fortnow, L.: Time-space tradeoffs for satisfiability. J. Comput. Syst. Sci. 60(2), 337–353 (2000)
 [For15] Fortnow, L.: Nondeterministic separations. In: Theory and Applications of Models of Computation (TAMC), pp. 10–17 (2015)
 [Fre77] Freivalds, R.: Probabilistic machines can use less running time. In: IFIP Congress, pp. 839–842 (1977)
 [FSW09] Fortnow, L., Santhanam, R., Williams, R.: Fixed-polynomial size circuit bounds. In: CCC, pp. 19–26. IEEE (2009)
 [Gas02] Gasarch, W.I.: The P =? NP poll. ACM SIGACT News 33(2), 34–47 (2002)
 [Gas12] Gasarch, W.I.: Guest column: the second P =? NP poll. ACM SIGACT News 43(2), 53–77 (2012)
 [GM15] Goldreich, O., Meir, O.: Input-oblivious proof systems and a uniform complexity perspective on P/poly. ACM Trans. Comput. Theory (TOCT) 7(4), 16 (2015)
 [GW83] Galperin, H., Wigderson, A.: Succinct representations of graphs. Inf. Control 56(3), 183–198 (1983)
 [Hås01] Håstad, J.: Some optimal inapproximability results. J. ACM 48(4), 798–859 (2001)
 [Hel86] Heller, H.: On relativized exponential and probabilistic complexity classes. Inf. Control 71(3), 231–243 (1986)
 [HPV77] Hopcroft, J., Paul, W., Valiant, L.G.: On time versus space. J. ACM 24(2), 332–337 (1977)
 [IKW02] Impagliazzo, R., Kabanets, V., Wigderson, A.: In search of an easy witness: exponential time vs. probabilistic polynomial time. J. Comput. Syst. Sci. 65(4), 672–694 (2002)
 [Imm88] Immerman, N.: Nondeterministic space is closed under complementation. SIAM J. Comput. 17, 935–938 (1988)
 [IW97] Impagliazzo, R., Wigderson, A.: P = BPP if E requires exponential circuits: derandomizing the XOR lemma. In: STOC, pp. 220–229 (1997)
 [JS74] Jones, N.D., Selman, A.L.: Turing machines and the spectra of first-order formulas. J. Symb. Log. 39(1), 139–150 (1974)
 [Kan82] Kannan, R.: Circuit-size lower bounds and non-reducibility to sparse sets. Inf. Control 55(1), 40–56 (1982)
 [Kho02] Khot, S.: On the power of unique 2-prover 1-round games. In: STOC, pp. 767–775. ACM (2002)
 [KMS17] Khot, S., Minzer, D., Safra, M.: On independent sets, 2-to-2 games, and Grassmann graphs. In: STOC, pp. 576–589 (2017)
 [KMS18] Khot, S., Minzer, D., Safra, M.: Pseudorandom sets in Grassmann graph have near-perfect expansion. In: Electronic Colloquium on Computational Complexity (ECCC), vol. 18, no. 6 (2018)
 [KW98] Köbler, J., Watanabe, O.: New collapse consequences of NP having small circuits. SIAM J. Comput. 28(1), 311–324 (1998)
 [Lip94] Lipton, R.J.: Some consequences of our failure to prove non-linear lower bounds on explicit functions. In: Structure in Complexity Theory Conference, pp. 79–87 (1994)
 [Lip10] Lipton, R.J.: The P=NP Question and Gödel's Lost Letter. Springer, Heidelberg (2010). https://doi.org/10.1007/978-1-4419-7155-5. http://rjlipton.wordpress.com
 [LR13] Lipton, R.J., Regan, K.W.: People, Problems, and Proofs. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41422-0. http://rjlipton.wordpress.com
 [LW13] Lipton, R.J., Williams, R.: Amplifying circuit lower bounds against polynomial time, with applications. Comput. Complex. 22(2), 311–343 (2013)
 [MVW17] Miasnikov, A., Vassileva, S., Weiß, A.: The conjugacy problem in free solvable groups and wreath products of abelian groups is in \(\mathsf{TC}^0\). In: Weil, P. (ed.) CSR 2017. LNCS, vol. 10304, pp. 217–231. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58747-9_20
 [NR04] Naor, M., Reingold, O.: Number-theoretic constructions of efficient pseudo-random functions. J. ACM 51(2), 231–262 (2004)
 [NW94] Nisan, N., Wigderson, A.: Hardness vs randomness. J. Comput. Syst. Sci. 49(2), 149–167 (1994)
 [PY86] Papadimitriou, C.H., Yannakakis, M.: A note on succinct representations of graphs. Inf. Control 71(3), 181–185 (1986)
 [Rei08] Reingold, O.: Undirected connectivity in log-space. J. ACM 55(4), 17:1–17:24 (2008)
 [RR97] Razborov, A., Rudich, S.: Natural proofs. J. Comput. Syst. Sci. 55(1), 24–35 (1997)
 [RT92] Reif, J.H., Tate, S.R.: On threshold circuits and polynomial computation. SIAM J. Comput. 21(5), 896–908 (1992)
 [RTV06] Reingold, O., Trevisan, L., Vadhan, S.: Pseudorandom walks on regular digraphs and the RL vs. L problem. In: STOC, pp. 457–466. ACM (2006)
 [San07] Santhanam, R.: Circuit lower bounds for Merlin-Arthur classes. SIAM J. Comput. 39(3), 1038–1061 (2009). Preliminary version in STOC 2007
 [Sav70] Savitch, W.J.: Relationships between nondeterministic and deterministic tape complexities. J. Comput. Syst. Sci. 4(2), 177–192 (1970)
 [Sze88] Szelepcsényi, R.: The method of forced enumeration for nondeterministic automata. Acta Informatica 26(3), 279–284 (1988)
 [Tam16] Tamaki, S.: A satisfiability algorithm for depth two circuits with a sub-quadratic number of symmetric and threshold gates. In: Electronic Colloquium on Computational Complexity (ECCC), vol. 23, no. 100 (2016)
 [Wil08a] Williams, R.R.: Time-space tradeoffs for counting NP solutions modulo integers. Comput. Complex. 17(2), 179–219 (2008)
 [Wil08b] Williams, R.: Non-linear time lower bound for (succinct) quantified Boolean formulas. In: Electronic Colloquium on Computational Complexity (ECCC), TR08-076 (2008)
 [Wil13a] Williams, R.: Alternation-trading proofs, linear programming, and lower bounds. TOCT 5(2), 6 (2013)
 [Wil13b] Williams, R.: Towards NEXP versus BPP? In: Bulatov, A.A., Shur, A.M. (eds.) CSR 2013. LNCS, vol. 7913, pp. 174–182. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38536-0_15
 [Wil14] Williams, R.: New algorithms and lower bounds for circuits with linear threshold gates. In: STOC, pp. 194–202 (2014)
 [Wil16] Williams, R.R.: Strong ETH breaks with Merlin and Arthur: short non-interactive proofs of batch evaluation. In: CCC, pp. 2:1–2:17 (2016)
 [Wil11] Williams, R.: Nonuniform ACC circuit lower bounds. J. ACM 61(1), 2 (2014). Preliminary version in CCC 2011
 [Wil04] Williams, R.: A new algorithm for optimal 2-constraint satisfaction and its implications. Theor. Comput. Sci. 348(2–3), 357–365 (2005). Preliminary version in ICALP 2004
 [Wil10] Williams, R.: Improving exhaustive search implies superpolynomial lower bounds. SIAM J. Comput. 42(3), 1218–1244 (2013). Preliminary version in STOC 2010
 [WY14] Williams, R., Yu, H.: Finding orthogonal vectors in discrete structures. In: SODA, pp. 1867–1877 (2014)