1 Introduction

We investigate the solution spaces of Boolean constraint satisfaction problems built from atomic constraints by means of conjunction and variable identification. We study three minimization problems in connection with Hamming distance: Given an instance of a constraint satisfaction problem in the form of a generalized conjunctive formula over a set of atomic constraints, the first problem asks to find a satisfying assignment with minimal Hamming distance to a given assignment (NearestSolution, NSol). Note that for this problem we assume neither that the given assignment satisfies the formula nor that the solution is different from the assignment. The second problem is similar to the first one, but this time the given assignment has to satisfy the formula and we look for another solution with minimal Hamming distance (NearestOtherSolution, NOSol). The third problem is to find two satisfying assignments with minimal Hamming distance among all satisfying assignments (MinSolutionDistance, MSD). Note that the dual problem MaxHammingDistance has been studied in [14].

The NSol problem appears in several guises throughout literature. E.g., a common problem in Artificial Intelligence is to find solutions of constraints close to an initial configuration; our problem is an abstraction of this setting for the Boolean domain. Bailleux and Marquis [4] describe such applications in detail and introduce the decision problem DistanceSAT: Given a propositional formula φ, a partial interpretation I, and a bound k, is there a satisfying assignment differing from I in no more than k variables? It is straightforward to show that DistanceSAT corresponds to the decision variant of our problem with existential quantification (called NSol\(_{\text {pp}}^{\mathrm {d}}\) later on). While [4] investigates the complexity of DistanceSAT for a few relevant classes of formulas and empirically evaluates two algorithms, we analyze the decision and the optimization problem for arbitrary semantic restrictions on the formulas.

Hamming distance also plays an important role in belief revision. The result of revising/updating a formula φ by another formula ψ is characterized by the set of models of ψ that are closest to the models of φ. Dalal [15] selects the models of ψ having a minimal Hamming distance to models of φ to be the models that result from the change.

As is common, we analyze the complexity of our optimization problems modulo a parameter that specifies the atomic constraints allowed to occur in the constraint satisfaction problem. We give a complete classification of the approximation complexity with respect to this parameterization. It turns out that our problems can either be solved in polynomial time, or they are complete for a well-known optimization class, or else they are equivalent to well-known hard optimization problems.

Our study can be understood as a continuation of the minimization problems investigated by Khanna et al. in [22], especially that of MinOnes. The MinOnes optimization problem asks for a solution of a constraint satisfaction problem with minimal Hamming weight, i.e., minimal Hamming distance to the 0-vector. Our work generalizes these results by allowing the given vector to differ from zero.

Our work can also be seen as a generalization of questions in coding theory. In fact, our problem MSD restricted to affine relations is the well-known problem MinDistance of computing the minimum distance of a linear code. This quantity is of central importance in coding theory, because it determines the number of errors that the code can detect and correct. Moreover, our problem NSol restricted to affine relations is the problem NearestCodeword of finding the nearest codeword to a given word, which is the basic operation when decoding messages received through a noisy channel. Thus our work can be seen as a generalization of these well-known problems from affine to general relations.

In the case of NearestSolution we are able to apply methods from clone theory, even though the problem turns out to be more intricate than pure satisfiability. The other two problems, however, cannot be shown to be compatible with existential quantification easily, which makes classical clone theory inapplicable. Therefore we have to resort to weak co-clones that require only closure under conjunction and equality. In this connection, we apply the theory developed in [28, 29] as well as the minimal weak bases of Boolean co-clones from [23].

This paper is structured as follows. Section 2 recalls basic definitions and notions. Section 3 introduces the trilogy of optimization problems studied in this paper, namely Nearest Solution (denoted by NSol), Nearest Other Solution (denoted by NOSol), and Minimum Solution Distance (denoted by MSD), as well as their decision versions. It also states our three main results, i.e., a complete classification of complexity for these optimization problems, depicted in Figs. 1, 2, and 3. Section 4 investigates the (non-)applicability of clone theory to our problems. It also provides a duality result for the constraint languages used as parameters. Section 5 contains the proofs of complexity classification results for NearestSolution, Section 6 for NearestOtherSolution, and Section 7 for MinSolutionDistance. Finally, the concluding remarks in Section 8 compare our theorems to previously existing similar results and put our results into perspective.

2 Preliminaries

2.1 Boolean Relations and Relational Clones

An n-ary Boolean relation R is a subset of {0,1}^n; its elements (b1,…,bn) are also written as b1⋯bn. Let V be a set of variables. An atomic constraint, or an atom, is an expression R(x), where R is an n-ary relation and x is an n-tuple of variables from V. Let Γ be a non-empty finite set of Boolean relations, also called a constraint language. A (conjunctive) Γ-formula is a finite conjunction of atoms R1(x1) ∧ ⋯ ∧ Rk(xk), where the Ri are relations from Γ and the xi are variable tuples of suitable arity. For technical reasons in connection with reductions we also allow empty conjunctions (k = 0) here. Such formulas elegantly take care of certain marginal cases at the cost of adding only one additional trivial problem instance.

An assignment is a mapping m: V → {0,1} assigning a Boolean value m(x) to each variable x ∈ V. In a given context we can assume V to be finite, by restricting it e.g. to the variables occurring in a formula. If we impose an arbitrary but fixed order on the variables, say x1,…,xn, then the assignments can be identified with elements from {0,1}^n. The i-th component of a tuple m ∈ {0,1}^n is denoted by m[i] and corresponds to the value of the i-th variable, i.e., m[i] = m(xi). The Hamming weight hw(m) = |{i | m[i] = 1}| of m is the number of 1s in the tuple m. The Hamming distance hd(m, m′) = |{i | m[i] ≠ m′[i]}| of m and m′ is the number of coordinates on which the tuples disagree. The complement \(\overline {m}\) of a tuple m is its pointwise complement, \(\overline {m}[i] = 1- m[i]\).
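Stated operationally (an illustrative sketch, not part of the formal development), these notions read as follows in Python:

```python
def hw(m):
    """Hamming weight: number of 1s in the tuple m."""
    return sum(m)

def hd(m1, m2):
    """Hamming distance: number of coordinates where m1 and m2 disagree."""
    return sum(a != b for a, b in zip(m1, m2))

def complement(m):
    """Pointwise complement of the tuple m."""
    return tuple(1 - b for b in m)

assert hw((0, 1, 1, 0)) == 2
assert hd((0, 1, 1, 0), (1, 1, 0, 0)) == 2
assert complement((0, 1, 1, 0)) == (1, 0, 0, 1)
```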

An assignment m satisfies a constraint R(x1,…,xn) if (m(x1),…,m(xn)) ∈ R holds. It satisfies the formula φ if it satisfies all its atoms; m is said to be a model or solution of φ in this case. We use [φ] to denote the set of models of φ. For a term t, [t] is the set of assignments for which t evaluates to 1. Note that [φ] and [t] represent Boolean relations. If the variables of φ are not explicitly enumerated in parentheses as parameters, they are implicitly considered to be ordered lexicographically. In sets of relations represented this way we usually omit the brackets. A literal is a variable v, or its negation ¬v. Assignments are extended to literals by defining m(¬v) = 1 − m(v).

Table 1 defines Boolean functions and relations needed later on, in particular exclusive or [x ⊕ y], not-all-equal nae^3, k-ary disjunction or^k, and k-ary negated conjunction nand^k.

Table 1 List of some Boolean functions and relations

Throughout the text we refer to different types of Boolean constraint relations following Schaefer’s terminology [27] (see also the monograph [11] and the survey [9]). A Boolean relation R is (1) 1-valid if 1⋯1 ∈ R and 0-valid if 0⋯0 ∈ R, (2) Horn (dual Horn) if R can be represented by a formula in conjunctive normal form (CNF) with at most one unnegated (negated) variable per clause, (3) monotone if it is both Horn and dual Horn, (4) bijunctive if it can be represented by a CNF formula with at most two literals per clause, (5) affine if it can be represented by an affine system of equations Ax = b over \(\mathbb {Z}_{2}\), (6) complementive if for each m ∈ R also \(\overline {m} \in R\), (7) implicative hitting set-bounded+ with bound k (denoted by k-IHS-B+) if R can be represented by a CNF formula with clauses of the form (x1 ∨ ⋯ ∨ xk), (¬x ∨ y), x, and ¬x, (8) implicative hitting set-bounded− with bound k (denoted by k-IHS-B−) if R can be represented by a CNF formula with clauses of the form (¬x1 ∨ ⋯ ∨ ¬xk), (¬x ∨ y), x, and ¬x. A set Γ of Boolean relations is called 0-valid (1-valid, Horn, dual Horn, monotone, affine, bijunctive, complementive, k-IHS-B+, k-IHS-B−) if every relation in Γ has the respective property.
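Several of these classes have classical characterizations by closure properties (polymorphisms; see e.g. the monograph [11]): a relation is Horn iff it is closed under coordinatewise conjunction of any two of its tuples, dual Horn iff closed under disjunction, bijunctive iff closed under the ternary majority operation, and affine iff closed under the ternary exclusive or. A small Python sketch of these tests (illustrative, not part of the paper):

```python
from itertools import product

def closed_under(R, f):
    """Check whether relation R (a set of equal-length 0/1 tuples) is closed
    under the coordinatewise application of the Boolean operation f."""
    k = f.__code__.co_argcount  # arity of f
    return all(
        tuple(f(*coords) for coords in zip(*rows)) in R
        for rows in product(R, repeat=k)
    )

is_horn       = lambda R: closed_under(R, lambda x, y: x & y)         # conjunction
is_dual_horn  = lambda R: closed_under(R, lambda x, y: x | y)         # disjunction
is_affine     = lambda R: closed_under(R, lambda x, y, z: x ^ y ^ z)  # ternary xor
is_bijunctive = lambda R: closed_under(
    R, lambda x, y, z: (x & y) | (x & z) | (y & z))                   # majority

# Example: the implication relation [x -> y] = {00, 01, 11} is Horn,
# dual Horn, and bijunctive, but not affine.
R = {(0, 0), (0, 1), (1, 1)}
assert is_horn(R) and is_dual_horn(R) and is_bijunctive(R) and not is_affine(R)
```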

A formula constructed from atoms by conjunction, variable identification, and existential quantification is called a primitive positive formula (pp-formula). If φ is such a formula, we write again [φ] for its set of models, i.e., the Boolean relation defined by φ. As above the coordinates of this relation are understood to be the variables of φ in lexicographic order, unless otherwise stated by explicit enumeration. We denote by 〈Γ〉 the set of all relations that can be expressed using relations from Γ ∪{≈}, conjunction, variable identification (and permutation), cylindrification, and existential quantification, i.e., the set of all relations that are primitive positively definable from Γ and equality. The set 〈Γ〉 is called the co-clone generated by Γ. A base of a co-clone \(\mathcal {B}\) is a set of relations Γ such that \(\langle {\Gamma }\rangle = \mathcal {B}\), i.e., just a generating set with regard to primitive positive definability including equality. Note that traditionally (e.g. [18]), the notion of base also involves minimality with respect to set inclusion. Our use of the term base is in accordance with [10], where finite bases for all Boolean co-clones have been determined. Some of these are listed in Table 2. The sets of relations being 0-valid, 1-valid, complementive, Horn, dual Horn, affine, bijunctive, 2affine (both bijunctive and affine), monotone, k-IHS-B+, and k-IHS-B each form a co-clone denoted by iI0, iI1, iN2, iE2, iV2, iL2, iD2, iD1, iM2, \(\text {iS}_{00}^{k}\), and \(\text {iS}_{10}^{k}\), respectively; see Table 3.

Table 2 Some relevant Boolean co-clones with bases
Table 3 Sets of Boolean relations with their names determined by co-clone inclusions

We will also use a weaker closure than 〈Γ〉, called conjunctive closure and denoted by 〈Γ〉∧, where the constraint language Γ is closed under conjunctive definitions, but not under existential quantification or the addition of explicit equality constraints.

Sets of relations of the form W = 〈W ∪{≈}〉∧ are called weak systems and are in a one-to-one correspondence with so-called strong partial clones [26]. It is a well-known consequence of the Galois theory developed in [26] that for every co-clone 〈Γ〉 whose corresponding clone is finitely generated (this presents no restriction in the Boolean case), there is a largest partial clone whose total part coincides with that clone, cf. [24, Theorem 20.7.2] or see [28, Theorems 4.6, 4.7, 4.11] for a proof in the Boolean case. This largest partial clone even is a strong partial clone, and hence there is a least weak system W under inclusion such that 〈W〉 = 〈Γ〉. Any finite weak generating set Γ′ of this weak system W, i.e., W = 〈Γ′∪{≈}〉∧, is called a weak base of 〈Γ〉, see [28, Definition 4.2]. Such a set Γ′, in particular, is a finite base of the co-clone 〈Γ〉. Finally, to get from the closure operator 〈Γ′∪{≈}〉∧ (which is hard to handle in the context of our problems) to 〈Γ′〉∧ (which is easy to handle), one needs the notion of irredundancy. A relation R is called irredundant if it has neither duplicate nor fictitious coordinates. It can be observed from the proofs of Proposition 5.2 and Corollary 5.6 in [28], or from [29, Proposition 3.11], that R ∈〈Γ∪{≈}〉∧ implies R ∈〈Γ〉∧ for any irredundant relation R. Following Schnoor [29, p. 30], we call a weak base of 〈Γ〉 consisting exclusively of irredundant relations an irredundant weak base. Thus, if Γ′ is an irredundant weak base of 〈Γ〉, then the minimality of the weak system W = 〈Γ′∪{≈}〉∧ implies that Γ′ ⊆ W ⊆〈Γ″∪{≈}〉∧ for any base Γ″ of 〈Γ〉 (cf. [28, Corollary 4.3]), and thus Γ′ ⊆〈Γ″〉∧ because of irredundancy. Hence, we obtain the following useful tool.

Theorem 1 (Schnoor [29, Corollary 3.12])

If Γ is an irredundant weak base of a co-clone iC, e.g. a minimal weak base of iC, then Γ ⊆〈Γ′〉∧ holds for any base Γ′ of iC.

According to Lagerkvist [23], a minimal weak base is an irredundant weak base satisfying an additional minimality property that ensures small cardinality. The utility of Theorem 1 comes in particular from the fact that Lagerkvist determined minimal weak bases for all finitely generated Boolean co-clones in [23]. For our purposes we note that each of the co-clones iV, iV0, iV1, iV2, iN, iN2, and iI is generated by a minimal weak base consisting of a single relation (Table 4).

Table 4 Minimal weak bases for some co-clones

Another source of weak base relations without duplicate coordinates comes from the following construction: let χn be the 2^n-ary relation that is given by the value tables (in some chosen enumeration) of the n distinct n-ary projection functions. More formally, let β: 2^n →{0,1}^n be the reader’s preferred bijection between the index set 2^n = {0,…,2^n − 1} and the set of all arguments of an n-ary Boolean function—often lexicographic enumeration is chosen here for presentational purposes, but the order of enumeration of the n-tuples does not matter as long as it remains fixed. Then χn = {ei ∘ β | 1 ≤ i ≤ n}, where ei: {0,1}^n →{0,1} denotes the projection function onto the i-th coordinate. Let C be a clone with corresponding co-clone iC. Since iC is closed with respect to intersection of relations of identical arity, for any k-ary relation R, there is a least k-ary relation in iC containing R, scilicet \({C}\circ \langle {R}\rangle :=\bigcap \{R^{\prime }\in \mathrm {i}\mathit {C} | {R^{\prime }\supseteq R, R^{\prime }~k\text {-ary}}\}\). Traditionally, e.g. [24, Sect. 2.8, p. 134] or [25, Definition 1.1.16, p. 48], this relation is denoted by ΓC(R), but here we have chosen a different notation to avoid confusion with constraint languages. It is well known, e.g. [25, Satz 1.1.19(i), p. 50], and easy to see that C ∘〈R〉 is completely determined by the ℓ-ary part of C whenever ℓ ≥ |R|: given any enumeration R = {r1,…,rℓ} (for technical reasons we have to exclude the case ℓ = 0 in this presentation because we do not consider clones with nullary operations here) we have C ∘〈R〉 = {f ∘ (r1,…,rℓ) | f ∈ C, f ℓ-ary}, where f ∘ (r1,…,rℓ) denotes the row-wise application of f to a matrix whose columns are formed by the tuples r1,…,rℓ. Relations of the form C ∘〈χn〉 represent the n-ary part of the clone C as a 2^n-ary relation and are called the n-th graphic of C (cf. e.g. [24, p. 133 and Theorem 2.8.1(b)]). Indeed, the previous characterization of C ∘〈χn〉 yields C ∘〈χn〉 = {f ∘ (e1 ∘ β,…,en ∘ β) | f ∈ C, f n-ary} = {f ∘ (e1,…,en) ∘ β | f ∈ C, f n-ary} = {f ∘ β | f ∈ C, f n-ary}. With the help of this description of C ∘〈χn〉 and standard clone-theoretic manipulations, one can easily verify the following result, identifying possible candidates for irredundant singleton weak bases.
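To make the construction concrete, a small Python sketch (with β fixed to the lexicographic enumeration and illustrative function names) that computes χn and the n-th graphic of a clone whose n-ary part is given as an explicit finite set of functions:

```python
from itertools import product

def chi(n):
    """The 2^n-ary relation chi_n: value tables of the n projection functions,
    with beta chosen as the lexicographic enumeration of {0,1}^n."""
    args = list(product((0, 1), repeat=n))   # beta: {0,...,2^n - 1} -> {0,1}^n
    return {tuple(a[i] for a in args) for i in range(n)}

# chi_2 consists of the value tables of the two binary projections
# on the arguments 00, 01, 10, 11:
assert chi(2) == {(0, 0, 1, 1), (0, 1, 0, 1)}

def clone_graphic(fs, n):
    """C o <chi_n>: value tables (w.r.t. the same beta) of the n-ary part of
    the clone, here supplied as an explicit finite set fs of n-ary functions."""
    args = list(product((0, 1), repeat=n))
    return {tuple(f(*a) for a in args) for f in fs}
```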

Theorem 2 ([28, Theorem 4.11])

Let C be a clone and R = C ∘〈{r1,…,rn}〉 with n ≥ 1. Then C ∘〈χn〉 gives a singleton weak base of 〈{R}〉 without duplicate coordinates.

2.2 Approximability, Reductions, and Completeness

We assume that the reader has a basic knowledge of approximation algorithms and complexity theory, but we briefly recall the central notions; for details see the monographs [3, 11].

A combinatorial optimization problem \(\mathcal {P}\) is a quadruple (I,sol,obj,goal), where:

  • I is the set of admissible instances of \(\mathcal {P}\).

  • sol(x) denotes the set of feasible solutions for every instance x ∈ I.

  • obj(x, y) denotes the non-negative integer measure of y for every instance x ∈ I and every feasible solution y ∈sol(x); obj is also called the objective function.

  • goal ∈{min,max} denotes the optimization goal for \(\mathcal {P}\).

A combinatorial optimization problem is said to be an NP-optimization problem (NPO-problem) if

  • the instances and solutions are recognizable in polynomial time,

  • the size of the solutions in sol(x) is polynomially bounded in the size of x, and

  • the objective function obj is computable in polynomial time.

The optimal value of the objective function for the solutions of an instance x is denoted by OPT(x). In our case the optimization goal will always be minimization, i.e., OPT(x) will be the minimum.

Given an instance x ∈ I with a feasible solution y ∈sol(x) and a real number r ≥ 1, we say that y is r-approximate if obj(x, y) ≤ r·OPT(x) in case of minimization, or obj(x, y) ≥ OPT(x)/r in case of maximization.

Let A be an algorithm that for any instance x of \(\mathcal {P}\) with sol(x) ≠ ∅ returns a feasible solution A(x) ∈sol(x). Given an arbitrary function \(r\colon \mathbb {N} \to [1,\infty )\), we say that A is an r(n)-approximate algorithm for \(\mathcal {P}\) if for any instance x ∈ I having feasible solutions the algorithm returns an r(|x|)-approximate solution, where |x| is the size of x. If an NPO problem \(\mathcal {P}\) admits an r(n)-approximate polynomial-time algorithm, we say that \(\mathcal {P}\) is approximable within r(n).

An NPO problem \(\mathcal {P}\) is in the class PO if the optimum is computable in polynomial time (i.e. if \(\mathcal {P}\) admits a 1-approximate polynomial-time algorithm). \(\mathcal {P}\) is in the class APX (poly-APX) if it is approximable within a constant (polynomial) function in the size of the instance x. NPO is the class of all NPO problems and NPO PB is the class of all NPO problems whose objective function is polynomially bounded. The following inclusions hold for these approximation complexity classes: PO ⊆ APX ⊆ poly-APX ⊆ NPO. All inclusions are strict unless P = NP.

For reductions among decision problems we use the polynomial-time many-one reduction denoted by ≤m. Many-one equivalence between decision problems is denoted by ≡m. For reductions among optimization problems we use approximation preserving reductions, also called AP-reductions, denoted by ≤AP. AP-equivalence between optimization problems is denoted by ≡AP.

We say that an optimization problem \(\mathcal {P}\) AP-reduces to another optimization problem \(\mathcal {Q}\), denoted \(\mathcal {P} \le _{\text {AP}} \mathcal {Q}\), if there are two polynomial-time computable functions f and g and a real constant α ≥ 1 such that for all r > 1 and all \(\mathcal {P}\)-instances x the following conditions hold.

  • f(x) is a \(\mathcal {Q}\)-instance or the generic unsolvable instance ⊥ (which is not part of \(\mathcal {Q}\)).

  • If x admits feasible solutions, then f(x) is different from ⊥ and also admits feasible solutions.

  • For any feasible solution y of f(x), g(x, y) is a feasible solution of x.

  • If y is an r-approximate solution of the \(\mathcal {Q}\)-instance f(x), then g(x, y) is a (1 + (r − 1)α + o(1))-approximate solution of the \(\mathcal {P}\)-instance x, where o(1) refers to the size of x.

Our definition of AP-reducibility slightly extends the one in [3] by introducing a generic unsolvable instance ⊥. This extension allows us to reduce problems having unsolvable instances to problems without them, as long as the unsolvable instances can be detected in polynomial time, by making f map the unsolvable instances to ⊥. This practice has been implicit in previous work, e.g. [22].

We also need a slightly non-standard variation of AP-reductions. We say that an optimization problem \(\mathcal {P}\) AP-Turing-reduces to another optimization problem \(\mathcal {Q}\) if there is a polynomial-time oracle algorithm A and a constant α ≥ 1 such that for all r > 1 on any input x for \(\mathcal {P}\)

  • if all oracle calls with a \(\mathcal {Q}\)-instance x′ are answered with a feasible \(\mathcal {Q}\)-solution y for x′, then A outputs a feasible \(\mathcal {P}\)-solution for x, and

  • if for every call the oracle answers with an r-approximate solution, then A computes a (1 + (r − 1)α + o(1))-approximate solution for the \(\mathcal {P}\)-instance x.

It is straightforward to check that AP-Turing-reductions are transitive. Moreover, if \(\mathcal {P}\) AP-Turing-reduces to \(\mathcal {Q}\) with constant α and \(\mathcal {Q}\) has an r(n)-approximation algorithm, then there is an αr(n)-approximation algorithm for \(\mathcal {P}\).
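One way to verify the claimed bound directly from the definition: since α ≥ 1, the guarantee from the second condition simplifies as

$$1 + (r-1)\alpha + \mathrm{o}(1) = \alpha r + (1-\alpha) + \mathrm{o}(1) \leq \alpha r + \mathrm{o}(1), $$

so answering every oracle call with an r(n)-approximate solution makes A output an αr(n)-approximate solution, up to the vanishing term.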

We will relate our problems to well-known optimization problems, by calling the problem \(\mathcal {P}\) under investigation \(\mathcal {Q}\)-complete if \(\mathcal {P}\equiv _{\text {AP}} \mathcal {Q}\). This notion of completeness is stricter than the one in [22], since the latter relies on A-reductions. For \(\mathcal {Q}\), we will consider the following optimization problems analyzed in [22].

Problem: MinOnes(Γ)

Input: A conjunctive formula φ over relations from Γ.

Solution: An assignment m satisfying φ.

Objective: Minimum Hamming weight hw(m).

Problem: WeightedMinOnes(Γ)

Input: A conjunctive formula φ over relations from Γ and a weight function \(w\colon V \to \mathbb {N}\) assigning non-negative integer weights to the variables of φ.

Solution: An assignment m satisfying φ.

Objective: Minimum value \(\sum _{x: m(x)= 1}w(x)\).

We now define some well-studied problems to which we will relate our problems. Note that these problems do not depend on any parameter.

Problem: NearestCodeword

Input: A matrix \(A \in \mathbb {Z}_{2}^{k\times l}\) and a vector \(m\in {\mathbb {Z}_{2}^{l}}\).

Solution: A vector \(x\in {\mathbb {Z}_{2}^{k}}\).

Objective: Minimum Hamming distance hd(xA, m).

Problem: MinDistance

Input: A matrix \(A\in \mathbb {Z}_{2}^{k\times l}\).

Solution: A non-zero vector \(x \in {\mathbb {Z}_{2}^{l}}\) with Ax = 0.

Objective: Minimum Hamming weight hw(x).
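For intuition, brute-force Python sketches of the two coding problems (illustrative and exponential; the hardness results cited below rule out efficient approximation):

```python
from itertools import product

def nearest_codeword(A, m):
    """Brute force for NearestCodeword: try all messages x in Z_2^k and
    return (x, hd(xA, m)) with minimal distance.  A is a k x l 0/1 matrix
    given as a list of rows, m a 0/1 tuple of length l."""
    k, l = len(A), len(A[0])
    def encode(x):  # the codeword xA over Z_2
        return tuple(sum(x[i] * A[i][j] for i in range(k)) % 2 for j in range(l))
    return min(((x, sum(c != b for c, b in zip(encode(x), m)))
                for x in product((0, 1), repeat=k)),
               key=lambda pair: pair[1])

def min_distance(A):
    """Brute force for MinDistance: minimum Hamming weight of a non-zero
    x in Z_2^l with Ax = 0, i.e. the minimum distance of the code with
    parity-check matrix A (None if the kernel is trivial)."""
    k, l = len(A), len(A[0])
    weights = [sum(x) for x in product((0, 1), repeat=l)
               if any(x) and all(sum(A[i][j] * x[j] for j in range(l)) % 2 == 0
                                 for i in range(k))]
    return min(weights) if weights else None
```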

Problem: MinHornDeletion

Input: A conjunctive formula φ over relations from {x ∨ y ∨ ¬z, x ∧ ¬x}.

Solution: An assignment m to the variables of φ.

Objective: Minimum number of unsatisfied conjuncts of φ.

NearestCodeword, MinDistance and MinHornDeletion are known to be NP-hard to approximate within a factor \(2^{\Omega (\log ^{1-\varepsilon }(n))}\) for every ε > 0 [1, 16, 22]. Thus if a problem \(\mathcal {P}\) is equivalent to any of these problems, it follows that \(\mathcal {P} \notin \text {APX}\) unless P = NP.

2.3 Satisfiability

We also use the classic problem SAT(Γ) asking for the satisfiability of a given conjunctive formula over a constraint language Γ. Schaefer [27] completely classified its complexity. SAT(Γ) is polynomial-time decidable if Γ is 0-valid (Γ ⊆iI0), 1-valid (Γ ⊆iI1), Horn (Γ ⊆iE2), dual Horn (Γ ⊆iV2), bijunctive (Γ ⊆iD2), or affine (Γ ⊆iL2); otherwise it is NP-complete. Moreover, we need the decision problem AnotherSAT(Γ): Given a conjunctive formula over Γ and a satisfying assignment m, is there another satisfying assignment m′ different from m? The complexity of this problem was completely classified by Juban [20]. AnotherSAT(Γ) is polynomial-time decidable if Γ is both 0-valid and 1-valid (Γ ⊆iI), complementive (Γ ⊆iN2), Horn (Γ ⊆iE2), dual Horn (Γ ⊆iV2), bijunctive (Γ ⊆iD2), or affine (Γ ⊆iL2); otherwise it is NP-complete.

2.4 Linear and Integer Programming

A unimodular matrix is a square integer matrix having determinant +1 or −1. A totally unimodular matrix is a matrix for which every square non-singular submatrix is unimodular; it need not be square itself, and all its entries are 0, +1, or −1. If A is a totally unimodular matrix and b is an integral vector, then for any linear functional f such that the linear program min{f(x) | Ax ≥ b} has a (real) minimum, an integral minimum point exists as well. That is, the feasible region {x | Ax ≥ b} is an integral polyhedron. For this reason, linear programming methods can be used to obtain the solutions of integer linear programs in this case. Linear programs can be solved in polynomial time, hence so can integer programs with totally unimodular matrices. For details see the monograph by Schrijver [30].
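As a concrete illustration, the following sketch (our own toy instance, assuming scipy is available) solves such an integer program by plain linear programming: the constraint matrix has consecutive ones in each row, a standard totally unimodular example, so an optimal vertex returned by the LP solver is integral.

```python
import numpy as np
from scipy.optimize import linprog

# Rows with consecutive ones form an interval matrix,
# a classic example of a totally unimodular matrix.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
b = np.ones(3)   # cover every interval: Ax >= 1
c = np.ones(4)   # minimize the number of selected variables

# linprog solves min c@x subject to A_ub@x <= b_ub, so Ax >= b becomes -Ax <= -b.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, 1)] * 4, method="highs")
print(res.x)     # an optimal vertex; integral by total unimodularity, e.g. [0. 1. 0. 1.]
```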

3 Results

This section presents the problems we consider and our results; the proofs follow in subsequent sections. The input to all our problems is a conjunctive formula over a constraint language. The satisfying assignments of the formula, i.e. its models or solutions, form a Boolean relation that can be understood as an associated generalized binary code. As for linear codes, the minimization target is always the Hamming distance between the codewords or models. Our three problems differ in the information additionally available for computing the required Hamming distance.

Given a formula and an arbitrary assignment, the first problem asks for a solution closest to the given assignment.

Problem: NearestSolution(Γ), NSol(Γ)

Input: A conjunctive formula φ over relations from Γ and an assignment m to the variables occurring in φ, which is not required to satisfy φ.

Solution: An assignment m′ satisfying φ (i.e. a codeword of the code described by φ).

Objective: Minimum Hamming distance hd(m, m′).

Note that the problem generalizes the MinOnes problem from [22]. Indeed, if we take the all-zero assignment m = 0⋯0 as part of the input, we get exactly the MinOnes problem as a special case.

Theorem 3 (illustrated in Fig. 1)

For a given Boolean constraint language Γ the optimization problem NSol(Γ) is

  (i) in PO if Γ is
    (a) 2affine (Γ ⊆iD1) or
    (b) monotone (Γ ⊆iM2);

  (ii) APX-complete if
    (a) Γ generates iD2 (〈Γ〉 = iD2), or
    (b) [x ∨ y] ∈〈Γ〉 and Γ is k-IHS-B+ \(({\text {iS}_{0}^{2}} \subseteq \langle {\Gamma }\rangle \subseteq \text {iS}_{00}^{k})\) for some \(k \in \mathbb {N}\), k ≥ 2, or
    (c) [¬x ∨ ¬y] ∈〈Γ〉 and Γ is k-IHS-B− \(({\text {iS}_{1}^{2}} \subseteq \langle {\Gamma }\rangle \subseteq \text {iS}_{10}^{k})\) for some \(k \in \mathbb {N}\), k ≥ 2;

  (iii) NearestCodeword-complete if Γ is exactly affine (iL ⊆〈Γ〉⊆iL2);

  (iv) MinHornDeletion-complete if Γ is
    (a) exactly Horn (iE ⊆〈Γ〉⊆iE2) or
    (b) exactly dual Horn (iV ⊆〈Γ〉⊆iV2);

  (v) poly-APX-complete if Γ does not contain an affine relation and it is
    (a) 0-valid (iN ⊆〈Γ〉⊆iI0) or
    (b) 1-valid (iN ⊆〈Γ〉⊆iI1); and

  (vi) otherwise (iN2 ⊆〈Γ〉) it is NP-complete to decide whether a feasible solution for NSol(Γ) exists.

Fig. 1: Lattice of co-clones with complexity classification for NSol

Proof

The proof is split into several propositions presented in Section 5.

  (i) See Propositions 17 and 18.
  (ii) See Propositions 20, 21, and 22.
  (iii) See Corollary 25 and Proposition 26.
  (iv) See Propositions 29 and 30.
  (v) See Proposition 31.
  (vi) See Proposition 19.

Given a constraint and one of its solutions, the second problem asks for another solution closest to the given one.

Problem: NearestOtherSolution(Γ), NOSol(Γ)

Input: A conjunctive formula φ over relations from Γ and a satisfying assignment m (to the variables occurring in φ).

Solution: An assignment m′ ≠ m satisfying φ.

Objective: Minimum Hamming distance hd(m, m′).

The difference between the problems NearestSolution and NearestOtherSolution is whether we know that the input assignment satisfies the formula. Moreover, for NearestSolution we may output the given assignment if it satisfies the formula, while for NearestOtherSolution we have to output an assignment different from the one given as the input.

Theorem 4 (illustrated in Fig. 2)

For every constraint language Γ the optimization problem NOSol(Γ) is

  (i) in PO if
    (a) Γ is bijunctive (Γ ⊆iD2) or
    (b) Γ is k-IHS-B+ \(({\Gamma }\subseteq \text {iS}_{00}^{k})\) for some \(k \in \mathbb {N}\), k ≥ 2, or
    (c) Γ is k-IHS-B− \(({\Gamma }\subseteq \text {iS}_{10}^{k})\) for some \(k \in \mathbb {N}\), k ≥ 2;

  (ii) MinDistance-complete if Γ is exactly affine (iL ⊆〈Γ〉⊆iL2);

  (iii) MinHornDeletion-complete under AP-Turing-reductions if Γ is
    (a) exactly Horn (iE ⊆〈Γ〉⊆iE2) or
    (b) exactly dual Horn (iV ⊆〈Γ〉⊆iV2);

  (iv) in poly-APX if Γ is
    (a) exactly both 0-valid and 1-valid (〈Γ〉 = iI) or
    (b) exactly complementive (iN ⊆〈Γ〉⊆iN2),
    where NOSol(Γ) is n-approximable but not (n^{1−ε})-approximable for any ε > 0 unless P = NP;

  (v) and otherwise (iI0 ⊆〈Γ〉 or iI1 ⊆〈Γ〉) it is NP-complete to decide whether a feasible solution for NOSol(Γ) exists.

Fig. 2: Lattice of co-clones with complexity classification for NOSol

Proof

The proof is split into several propositions presented in Section 6.

  (i) See Propositions 33 and 34.
  (ii) See Proposition 44.
  (iii) See Corollary 47.
  (iv) See Propositions 35 and 39.
  (v) See Proposition 35.

The third problem does not take any assignments as input, but asks for two solutions which are as close to each other as possible. We optimize once more the Hamming distance between the solutions.

Problem: MinSolutionDistance(Γ), MSD(Γ)

Input: A conjunctive formula φ over relations from Γ.

Solution: Two satisfying truth assignments m ≠ m′ to the variables occurring in φ.

Objective: Minimum Hamming distance hd(m, m′).

The MinSolutionDistance problem generalizes the notion of the minimum distance of an error-correcting code. The following theorem is a more fine-grained analysis of the result published by Vardy in [31], extended to an optimization problem.

Theorem 5 (illustrated in Fig. 3)

For any constraint language Γ the optimization problem MSD(Γ) is

  (i) in PO if Γ is
    (a) bijunctive (Γ ⊆iD2) or
    (b) Horn (Γ ⊆iE2) or
    (c) dual Horn (Γ ⊆iV2);

  (ii) MinDistance-complete if Γ is exactly affine (iL ⊆〈Γ〉⊆iL2);

  (iii) in poly-APX if dup3 ∈〈Γ〉 and Γ is both 0-valid and 1-valid (iN ⊆〈Γ〉⊆iI), where MSD(Γ) is n-approximable but not (n^{1−ε})-approximable for any ε > 0 unless P = NP; and

  (iv) otherwise (iN2 ⊆〈Γ〉 or iI0 ⊆〈Γ〉 or iI1 ⊆〈Γ〉) it is NP-complete to decide whether a feasible solution for MSD(Γ) exists.

Fig. 3: Lattice of co-clones with complexity classification for MSD

Proof

The proof is split into several propositions presented in Section 7.

  (i) See Propositions 48 and 49.
  (ii) See Proposition 55.
  (iii) For Γ ⊆iI, every formula φ over Γ has at least two solutions since it is both 0-valid and 1-valid. Thus TwoSolutionSAT(Γ) is in P, and Proposition 54 yields that MSD(Γ) is n-approximable. By Proposition 56 this approximation is indeed tight.
  (iv) According to [20], AnotherSAT(Γ) is NP-hard for iI0 ⊆〈Γ〉 or iI1 ⊆〈Γ〉. By Lemma 51 it follows that TwoSolutionSAT(Γ) is NP-hard, too. For iN2 ⊆〈Γ〉 we can reduce the NP-hard problem SAT(Γ) to TwoSolutionSAT(Γ). Hence it is NP-complete to decide whether a feasible solution for MSD(Γ) exists in all three cases.

The three optimization problems can be transformed into decision problems in the usual way. We add an integer bound k to the input and ask whether the Hamming distance satisfies the inequality hd(m, m′) ≤ k. This way we obtain the corresponding decision problems NSol^d, NOSol^d, and MSD^d, respectively. Their complexity follows immediately from the theorems above: all cases in PO become polynomial-time decidable, whereas the other cases, which are APX-hard, become NP-complete. We thus obtain the following dichotomies, classifying the decision problems as polynomial-time decidable or NP-complete for all sets Γ of relations.

Corollary 6

For each constraint language Γ,

  • NSol^d(Γ) is in P if Γ is 2affine or monotone, and it is NP-complete otherwise;

  • NOSol^d(Γ) is in P if Γ is bijunctive, k-IHS-B+, or k-IHS-B−, and it is NP-complete otherwise;

  • MSD^d(Γ) is in P if Γ is bijunctive, Horn, or dual Horn, and it is NP-complete otherwise.

4 Applicability of Clone Theory and Duality

We show that clone theory is applicable to the problem NSol, and we provide a duality result for constraint languages that exploits inner symmetries between co-clones; this shortens several proofs in the following sections.

4.1 Nearest Solution

There are two natural versions of NSol(Γ). In one version the formula φ is quantifier-free, while in the other one we do allow existential quantification. We call the former version NSol(Γ) and the latter NSol_pp(Γ), and show that both versions are equivalent.

Let NSol^d(Γ) and \(\textsf {NSol}^{\mathrm {d}}_{\text {pp}}({\Gamma })\) be the decision problems corresponding to NSol(Γ) and NSol_pp(Γ), asking whether there is a satisfying assignment within a given bound.

Proposition 7

For any constraint language Γ, we have \(\text {\textsf {NSol}}^{\mathrm {d}}({\Gamma })\equiv _{\mathrm {m}}\text {\textsf {NSol}}^{\mathrm {d}}_{\text {pp}}({\Gamma })\) and NSol(Γ) ≡AP NSol_pp(Γ).

Proof

The reduction from left to right is trivial in both cases. For the other direction, consider first an instance of \(\textsf {NSol}^{\mathrm {d}}_{\text {pp}}({\Gamma })\) with formula φ, assignment m, and bound k. Let x1,…,xn be the free variables of φ and let y1,…,yℓ be the existentially quantified ones, which we can assume to be disjoint. By discarding variables yi while not changing [φ], we can assume that each variable yi occurs in at least one atom of φ. We construct a quantifier-free formula φ′, where the non-quantified variables of φ get duplicated by a factor λ := (n + ℓ + 1)², such that the effect of quantified variables becomes negligible. For each variable z we define the set B(z) as follows:

$$\begin{array}{@{}rcl@{}} B(z) &=& \left\{\begin{array}{ll} \{{x_{i}^{1}},\dots,x_{i}^{\lambda}\} & \text{if}~z=x_{i}~\text{for some}~i\in \{1, \ldots, n\},\\ \{y_{i}\} & \text{if}~z=y_{i}~\text{for some}~i\in \{1, \ldots, \ell\}. \end{array}\right. \end{array} $$

For every atom R(z1,…,zs) in φ, the quantifier-free formula φ′ over the variables \(\bigcup _{i = 1}^{n} B(x_{i}) \cup \bigcup _{i = 1}^{\ell } B(y_{i})\) contains the atom \(R(z_{1}^{\prime }, \ldots , z_{s}^{\prime })\) for every \((z_{1}^{\prime }, \ldots , z_{s}^{\prime })\) from B(z1) ×⋯ × B(zs). Moreover, we construct an assignment B(m) of φ′ by assigning to every variable \({x_{i}^{j}}\) the value m(xi) and to yi the value 0. Note that because there is an upper bound on the arities of relations from Γ, this is a polynomial-time construction.

We claim that φ has a solution m′ with hd(m, m′) ≤ k if and only if φ′ has a solution m″ with hd(B(m), m″) ≤ kλ + ℓ. First, observe that if m′ with the desired properties exists, then there is an extension \(m^{\prime }_{\mathrm {e}}\) of m′ to the yi that satisfies all atoms. Define m″ by setting \(m^{\prime \prime }({x_{i}^{j}}):= m^{\prime }(x_{i})\) and \(m^{\prime \prime }(y_{i}):= m^{\prime }_{\mathrm {e}}(y_{i})\) for all i and j. Then m″ is clearly a satisfying assignment of φ′. Moreover, m″ and B(m) differ in at most kλ variables among the \({x_{i}^{j}}\). Since there are only ℓ other variables yi, we get hd(m″, B(m)) ≤ kλ + ℓ as desired.

Now suppose m″ satisfies φ′ with hd(B(m), m″) ≤ kλ + ℓ. We may assume for each i that \(m^{\prime \prime }({x_{i}^{1}}) = \cdots = m^{\prime \prime }(x_{i}^{\lambda })\). Indeed, if this is not the case, then setting all \({x_{i}^{j}}\) to \(B(m)({x_{i}^{j}})=m(x_{i})\) will result in a satisfying assignment closer to B(m). After at most n iterations we get some m″ as desired. Now define an assignment m′ for φ by setting \(m^{\prime }(x_{i}):=m^{\prime \prime }({x_{i}^{1}})\). Then m′ satisfies φ, because the variables yi can be assigned values as in m″. Moreover, whenever m′(xi) differs from m(xi), the inequality \(B(m)({x_{i}^{j}}) \neq m^{\prime \prime }({x_{i}^{j}})\) holds for every j. Thus we obtain λ·hd(m, m′) ≤ hd(B(m), m″) ≤ kλ + ℓ. Therefore, we have the inequality hd(m, m′) ≤ k + ℓ/λ and hence hd(m, m′) ≤ k, since ℓ/λ < 1. This completes the many-one reduction.

To see that the construction above is also an AP-reduction, let m″ be an r-approximate solution for φ′ and B(m), i.e., hd(B(m), m″) ≤ r·OPT(φ′, B(m)). Construct m′ as before, so λ·hd(m, m′) ≤ hd(B(m), m″) ≤ r·OPT(φ′, B(m)). Since OPT(φ′, B(m)) ≤ λ·OPT(φ, m) + ℓ as above, we get λ·hd(m, m′) ≤ r(λ·OPT(φ, m) + ℓ). This implies hd(m, m′) ≤ r·OPT(φ, m) + rℓ/λ = (r + o(1))·OPT(φ, m) and shows that the construction is an AP-reduction with α = 1.
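For concreteness, a minimal Python sketch of the blow-up construction from this proof; the representation of formulas as lists of (relation, variable-tuple) atoms and all identifiers are illustrative assumptions, not notation from the paper:

```python
from itertools import product

def blow_up(atoms, free_vars, quant_vars):
    """Quantifier-elimination blow-up from Proposition 7: every free variable
    x gets lambda = (n + l + 1)^2 copies, every formerly quantified variable
    keeps a single copy, and each atom is instantiated on all combinations
    of copies of its arguments."""
    n, l = len(free_vars), len(quant_vars)
    lam = (n + l + 1) ** 2
    B = {x: [f"{x}^{j}" for j in range(1, lam + 1)] for x in free_vars}
    B.update({y: [y] for y in quant_vars})
    new_atoms = [(rel, combo)
                 for rel, args in atoms
                 for combo in product(*(B[z] for z in args))]
    return new_atoms, lam

# Example: R(x1, y1) with free x1 and quantified y1 yields lambda = 9
# atoms R(x1^1, y1), ..., R(x1^9, y1).
atoms, lam = blow_up([("R", ("x1", "y1"))], ["x1"], ["y1"])
assert lam == 9 and len(atoms) == 9
```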

Remark 8

Note that in the reduction from \(\textsf {NSol}^{\mathrm {d}}_{\text {pp}}({\Gamma })\) to NSol^d(Γ) we construct the assignment B(m) as an extension of m by setting all new variables to 0. In particular, if m is the constant 0-assignment, then so is B(m). We use this observation as we continue.

The following four technical results are the missing theoretical backbone of [8], which had to be omitted from [8] due to page limitations. The first of these lemmas allows us to consider constraints with disjoint variables independently.

Lemma 9

Let φ(x, y) = ψ(x) ∧ χ(y) be a Γ-formula over a constraint language Γ and m an assignment over the disjoint variable blocks x and y. Let (φ, m) be an instance of NSol(Γ). Then \(\text {OPT}(\varphi , m) = \text {OPT}(\psi , m\!\!\upharpoonright _{\boldsymbol {x}}) + \text {\text {OPT}}(\chi , m\!\!\upharpoonright _{\boldsymbol {y}})\).

Proof

If s ∈ [φ], then \(s\!\!\upharpoonright _{\boldsymbol {x}} \in [\psi ]\) and \(s\!\!\upharpoonright _{\boldsymbol {y}}\in [\chi ]\). Conversely, if sψ ∈ [ψ] and sχ ∈ [χ], then s := sψ ∪ sχ is a model of φ. If s ∈ [φ] is optimal, i.e. hd(s, m) = OPT(φ, m), then

$$\begin{array}{@{}rcl@{}} \text{OPT}(\varphi,m) = \text{hd}(s,m) &= \text{hd}(s\! \!\upharpoonright_{\boldsymbol{x}},m\! \!\upharpoonright_{\boldsymbol{x}}) +\text{hd}(s\!\upharpoonright_{\boldsymbol{y}},m\!\!\upharpoonright_{\boldsymbol{y}}) \\ &\geq \text{OPT}(\psi,m\! \!\upharpoonright_{\boldsymbol{x}}) +\text{OPT}(\chi,m\!\!\upharpoonright_{\boldsymbol{y}}). \end{array} $$

Conversely, if sψ ∈ [ψ] and sχ ∈ [χ] are optimal solutions for their respective problems, then s := sψ ∪ sχ satisfies

$$\begin{array}{@{}rcl@{}} \text{OPT}(\varphi,m)\leq\text{hd}(s,m) &=& \text{hd}(s\!\!\upharpoonright_{\boldsymbol{x}},m\!\!\upharpoonright_{\boldsymbol{x}}) +\text{hd}(s\!\!\upharpoonright_{\boldsymbol{y}},m\!\!\upharpoonright_{\boldsymbol{y}}) \\ &=& \text{OPT}(\psi,m\!\!\upharpoonright_{\boldsymbol{x}}) +\text{OPT}(\chi,m\!\!\upharpoonright_{\boldsymbol{y}}).\end{array} $$

We can also show that introducing explicit equality constraints does not change the complexity of our problem. We need two introductory lemmas. The first one deals with equalities that do not interfere with the other atoms of the given formula.

Lemma 10

For constraint languages Γ, NSol(Γ ∪{≈}) and NSol^d(Γ ∪{≈}) reduce to particular cases of the respective problems, where for each constraint x ≈ y in the given formula φ at least one of x, y occurs also in some Γ-atom of φ.

Proof

Let (φ, m) be an instance of NSol(Γ ∪{≈}). Without loss of generality we assume φ to be of the form ψ ∧ ε, where ψ is a Γ-formula and ε is a {≈}-formula. Let (Vi)i∈I be the unique finest partition of the variables in ε such that variables x, y are in the same partition class whenever x ≈ y occurs in ε.

For each index i ∈ I we designate a specific variable xi ∈ Vi. Let ψ′ be the formula obtained from ψ by substituting all occurrences of variables y ∈ Vi by xi. Moreover, let I′ be the set of indices i ∈ I such that xi actually occurs in ψ′, and let \(I^{\prime \prime }:=I\smallsetminus I^{\prime }\) be the set of indices without this property. We set \(\varepsilon ^{\prime }:=\bigwedge _{i\in I^{\prime }}\varepsilon _{i}\) and \(\varepsilon ^{\prime \prime }:=\bigwedge _{i\in I^{\prime \prime }}\varepsilon _{i}\), where the formula \(\varepsilon _{i}:=\bigwedge _{y\in V_{i}}(x_{i} \approx y)\) expresses the equivalence of the variables in Vi. Note that the formulas ψ ∧ ε and χ := ψ′ ∧ ε′ ∧ ε″ contain the same variables and have identical sets of models.

Now consider the formula φ′ := ψ′ ∧ ε′ and the assignment \(m^{\prime } := m\!\!\upharpoonright _{V^{\prime }}\), where V′ is the set of variables occurring in φ′. The pair (φ′, m′) is an NSol(Γ ∪{≈})-instance with the additional properties stated in the lemma. By construction we have χ = φ′ ∧ ε″, where the set V′ of variables in φ′ and the set V″ of variables in ε″ are disjoint. By Lemma 9 we obtain \(\text {OPT}(\varphi ,m) = \text {OPT}(\chi ,m) =\text {OPT}(\varphi ^{\prime },m^{\prime }) + \text {OPT}(\varepsilon ^{\prime \prime },m\!\!\upharpoonright _{V^{\prime \prime }})\).

An optimal solution \(s_{\varepsilon ^{\prime \prime }}\) of ε″ and the optimal value \(d:=\text {OPT}(\varepsilon ^{\prime \prime },m\!\!\!\upharpoonright _{V^{\prime \prime }})\) can obviously be computed in polynomial time. Therefore the instance (φ, m, k) of NSol^d(Γ ∪{≈}) corresponds to the instance (φ′, m′, k − d) of the restricted decision problem in the polynomial-time many-one reduction.

Moreover, if s′ is an r-approximate solution of (φ′, m′) for some r ≥ 1, then \(s:=s^{\prime }\cup s_{\varepsilon ^{\prime \prime }}\) is a solution of φ, and we have

$$\text{hd}(s,m) \!=\! \text{hd}(s^{\prime},m^{\prime}) + d \leq r\text{ OPT}(\varphi^{\prime},m^{\prime}) + d \leq r\text{ OPT}(\varphi^{\prime},m^{\prime}) + rd \!=\! r\text{ OPT}(\varphi,m), $$

so the constructed solution s of φ is also r-approximate. This concludes the proof of the AP-reduction with factor α = 1.
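The finest partition (Vi)i∈I used in the preceding proof is simply the partition of the variables of ε into connected components of the graph whose edges are the equality constraints; a minimal union-find sketch (data representation assumed for illustration):

```python
def equality_classes(eq_atoms):
    """Finest partition merging x and y whenever the atom (x = y) occurs:
    connected components of the equality graph, via union-find."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for x, y in eq_atoms:
        parent[find(x)] = find(y)
    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

# Example: x1=x2, x2=x3 and x4=x5 give the classes {x1,x2,x3} and {x4,x5}.
parts = equality_classes([("x1", "x2"), ("x2", "x3"), ("x4", "x5")])
assert sorted(map(sorted, parts)) == [["x1", "x2", "x3"], ["x4", "x5"]]
```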

When dealing with NSol(Γ ∪{≈}), the previous lemma enables us to concentrate on instances where the formula φ has the form

$$\psi(z_1,\dotsc,z_n,x_1,\dotsc,x_t) \land \bigwedge_{i = 1}^t\bigwedge_{x\in V_i} (x_i \approx x), $$

where V1,…,Vt are disjoint sets of variables, being also disjoint from the variables of the Γ-formula ψ. For each 1 ≤ i ≤ t the given assignment m can have equal distance to the zero vector and the all-ones vector on the variables in Vi ∪{xi}, or it can be closer to one of the constant vectors. It is convenient to group the equality constraints according to these three cases. The following lemma discusses how to remove those equality constraints on whose variables m is not equidistant from 0 and 1.

Lemma 11

Let Γ be a constraint language and

$$\psi(z_1,\dotsc,z_{n},x_1,\dotsc,x_{\alpha},v_1,\dotsc,v_{\beta},w_1,\dotsc,w_{\gamma}) $$

be any Γ-formula containing precisely the distinct variables z1,…,zn, x1,…,xα, v1,…,vβ and w1,…,wγ. Consider a formula

$$\varphi := \psi \land \bigwedge_{a = 1}^{\alpha}\bigwedge_{x\in I_{a}^{\prime}} (x_{a} \approx x) \land \bigwedge_{b = 1}^{\beta} \bigwedge_{x\in J_{b}^{\prime}} (v_{b} \approx x) \land \bigwedge_{c = 1}^{\gamma}\bigwedge_{x\in K_{c}^{\prime}} (w_{c} \approx x) $$

where \(I_{1}^{\prime },\dotsc ,I_{\alpha }^{\prime }\), \(J_{1}^{\prime },\dotsc ,J_{\beta }^{\prime }\) and \(K_{1}^{\prime },\dotsc ,K_{\gamma }^{\prime }\) are non-empty sets of variables that are pairwise disjoint and disjoint from the variables in ψ. For 1 ≤ a ≤ α, 1 ≤ b ≤ β and 1 ≤ c ≤ γ we put \(I_{a}:= I_{a}^{\prime } \cup \{x_{a}\}\), \(J_{b}:= J_{b}^{\prime } \cup \{v_{b}\}\) and \(K_{c} := K_{c}^{\prime }\cup \{w_{c}\}\). Moreover, let m be an assignment for φ such that for 1 ≤ a ≤ α, 1 ≤ b ≤ β and 1 ≤ c ≤ γ the following distances satisfy the conditions in the last column:

$$\begin{array}{lll} d_{0,I_{a}} := \text{hd}(m\!\!\upharpoonright_{I_{a}},\boldsymbol{0}), & d_{1,I_{a}} := \text{hd}(m\!\!\upharpoonright_{I_{a}},\boldsymbol{1}), & d_{1,I_{a}} - d_{0,I_{a}} = 0,\\ d_{0,J_{b}} := \text{hd}(m\!\!\upharpoonright_{J_{b}},\boldsymbol{0}), & d_{1,J_{b}} := \text{hd}(m\!\!\upharpoonright_{J_{b}},\boldsymbol{1}), & e_{b} := d_{1,J_{b}} - d_{0,J_{b}} > 0,\\ d_{0,K_{c}} := \text{hd}(m\!\!\upharpoonright_{K_{c}},\boldsymbol{0}), & d_{1,K_{c}} := \text{hd}(m\!\!\upharpoonright_{K_{c}},\boldsymbol{1}), & f_{c} := d_{0,K_{c}} - d_{1,K_{c}} > 0. \end{array} $$

It is possible to construct a formula ψ′, whose size is polynomial in the size of φ, and an assignment M for \(\varphi ^{\prime }:= \psi ^{\prime }\land \bigwedge _{a = 1}^{\alpha }\bigwedge _{x\in I_{a}^{\prime }} (x_{a}\approx x)\) such that the following holds:

  • ψ, φ, φ′ and ψ′ are equisatisfiable;

  • if ψ is satisfiable, then OPT(φ, m) = OPT(φ′, M) + d, where \(d = \sum _{b = 1}^{\beta } d_{0,J_{b}} +\sum _{c = 1}^{\gamma } d_{1,K_{c}}\);

  • for every r ∈ [1,∞), one can produce an r-approximate solution of (φ, m) from any r-approximate solution of (φ′, M) in polynomial time.

Proof

First, we describe how to construct the formula ψ′. In the following we use the abbreviations Z := {z1,…,zn,x1,…,xα}, \(Z^{\prime }:=Z\cup \bigcup _{a = 1}^{\alpha } I_{a}\), V := {v1,…,vβ} and W := {w1,…,wγ}. For every variable u ∈ Z ∪ V ∪ W define a set B(u) of variables as follows:

$$B(u) = \left\{\begin{array}{ll} \{u\} & \text{if } u\in Z\\ \{u^{1},\dotsc,u^{e_{b}}\} & \text{if } u=v_{b}\in V\\ \{u^{1},\dotsc,u^{f_{c}}\} & \text{if } u=w_{c}\in W. \end{array}\right. $$

For each atom R(u1,…,uq) of ψ define a set of atoms \(\{R(u_{1}^{\prime },\dotsc ,u_{q}^{\prime })|{(u_{1}^{\prime },\dotsc ,u_{q}^{\prime })\in \prod _{i = 1}^{q} B(u_{i})}\}\), take the union over all these sets and define ψ′ as the conjunction of all its members, giving a formula over Z ∪ V′ ∪ W′ where \(V^{\prime } =\bigcup _{u\in V} B(u)\) and \(W^{\prime } = \bigcup _{u\in W} B(u)\). Adding again the equality constraints on whose variables m has equal distance from 0 and 1, we get \(\varphi ^{\prime } = \psi ^{\prime }\land \bigwedge _{a = 1}^{\alpha }\bigwedge _{x\in I_{a}^{\prime }} (x_{a}\approx x)\) over Z′ ∪ V′ ∪ W′. This is a polynomial-time construction since the arities of relations in Γ are bounded.

Moreover, we define an assignment M to the variables u of φ′ as follows:

$$\begin{array}{@{}rcl@{}} M(u)=\left\{\begin{array}{ll} m(u)&\text{if } u\in Z^{\prime}\\ 0 &\text{if } u\in V^{\prime}\\ 1 &\text{if } u\in W^{\prime}. \end{array}\right. \end{array} $$

Let S′ be a solution of (φ′, M). If S′ is constant on B(u) for each u ∈ V ∪ W, then put S″ := S′. Otherwise, by letting S″(u) := S′(u) for u ∈ Z′ and for u ∈ B(u′) where u′ ∈ V ∪ W is such that S′ is constant on B(u′), and by defining S″(u) := M(u) = 0 for the remaining variables u ∈ V′ and S″(u) := M(u) = 1 for the remaining variables u ∈ W′, we obtain a model S″ of φ′ satisfying hd(S″, M) ≤ hd(S′, M) and being constant on B(u) for each u ∈ V ∪ W. From S″ we construct an assignment S of φ by defining S(u) := S″(u) for u ∈ Z′, \(S(u):=S^{\prime \prime }({v_{b}^{1}})\) for u ∈ Jb and 1 ≤ b ≤ β, and \(S(u):=S^{\prime \prime }({w_{c}^{1}})\) for u ∈ Kc and 1 ≤ c ≤ γ. It satisfies φ as eb, fc > 0 for 1 ≤ b ≤ β and 1 ≤ c ≤ γ. From these definitions, it follows

$$\begin{array}{@{}rcl@{}} && \text{hd}(S^{\prime\prime},M) = \\ &=& \text{hd}(S^{\prime\prime}\!\upharpoonright_{Z^{\prime}},M\!\upharpoonright_{Z^{\prime}}) + \sum\limits_{b = 1}^{\beta} \text{hd}(S^{\prime\prime}\!\!\upharpoonright_{B(v_{b})},M\!\upharpoonright_{B(v_{b})}) + \sum\limits_{c = 1}^{\gamma} \text{hd}(S^{\prime\prime}\!\!\upharpoonright_{B(w_{c})},M\!\!\upharpoonright_{B(w_{c})})\\ &=& \text{hd}(S^{\prime\prime}\!\!\upharpoonright_{Z^{\prime}},m\!\!\upharpoonright_{Z^{\prime}}) + \sum\limits_{b = 1}^{\beta} S^{\prime\prime}({v_{b}^{1}})\cdot e_{b}\hspace*{4.1pc} + \sum\limits_{c = 1}^{\gamma} (1-S^{\prime\prime}({w_{c}^{1}}))\cdot f_{c},\\ \end{array} $$

because S″ is constant on B(u) for u ∈ V ∪ W and |B(vb)| = eb, |B(wc)| = fc for 1 ≤ b ≤ β and 1 ≤ c ≤ γ; and

$$\begin{array}{@{}rcl@{}} && \text{hd}(S,m) = \\ &\!\!=& \text{hd}(S\!\upharpoonright_{Z^{\prime}},m\!\upharpoonright_{Z^{\prime}}) + \sum\limits_{b = 1}^{\beta} \text{hd}(S\!\upharpoonright_{J_{b}},m\!\upharpoonright_{J_{b}}) \hspace*{2.5pc} + \sum\limits_{c = 1}^{\gamma} \text{hd}(S\!\upharpoonright_{K_{c}},m\!\upharpoonright_{K_{c}})\\ &\!\!=& \text{hd}(S^{\prime\prime}\!\upharpoonright_{Z^{\prime}},m\!\upharpoonright_{Z^{\prime}}) \!+\! \sum\limits_{b = 1}^{\beta} \left( S^{\prime\prime}({v_{b}^{1}})\cdot e_{b} \!\!+ d_{0,J_{b}}\right) \hspace*{8pt}+ \sum\limits_{c = 1}^{\gamma} \left( (1\!-\!S^{\prime\prime}({w_{c}^{1}}))\cdot f_{c} \!+\! d_{1,K_{c}}\right). \end{array} $$

Consequently, hd(S, m) = hd(S″, M) + d, where \(d=\sum _{b = 1}^{\beta } d_{0,J_{b}} + \sum _{c = 1}^{\gamma } d_{1,K_{c}}\).

Using this, we shall prove below that OPT(φ′, M) + d = OPT(φ, m). Thus, if S′ now takes the role of an r-approximate solution of (φ′, M) for some r ≥ 1, then it follows that

$$\begin{array}{@{}rcl@{}} \text{hd}(S,m) = \text{hd}(S^{\prime\prime},M) + d \leq \text{hd}(S^{\prime},M) + d \!&\leq&\! r\text{ OPT}(\varphi^{\prime},M) + d\\ \!&\leq&\! r\text{ OPT}(\varphi^{\prime},M) + rd \!=\! r\text{ OPT}(\varphi,m). \end{array} $$

Let subsequently S′ be such that OPT(φ′, M) = hd(S′, M), and let s be a model of φ. Construct a model s′ of φ′ by putting s′(u) := s(u) for u ∈ Z′ and s′(u) := s(u′) for u ∈ B(u′) and u′ ∈ V ∪ W. As above we get hd(s, m) = hd(s′, M) + d because the definitions imply

$$\begin{array}{@{}rcl@{}} && \text{hd}(s^{\prime},M) =\\ &=& \text{hd}(s^{\prime}\!\!\upharpoonright_{Z^{\prime}},M\!\upharpoonright_{Z^{\prime}}) + \sum\limits_{b = 1}^{\beta} \text{hd}(s^{\prime}\!\!\upharpoonright_{B(v_{b})},M\!\!\upharpoonright_{B(v_{b})}) + \sum\limits_{c = 1}^{\gamma} \text{hd}(s^{\prime}\!\!\upharpoonright_{B(w_{c})},M\!\upharpoonright_{B(w_{c})})\\ &=& \text{hd}(s\!\!\upharpoonright_{Z^{\prime}},m\!\!\upharpoonright_{Z^{\prime}}) + \sum\limits_{b = 1}^{\beta} s(v_{b})\cdot e_{b} \hspace*{4.5pc}+ \sum\limits_{c = 1}^{\gamma} (1-s(w_{c}))\cdot f_{c}\enspace;\\ && \text{hd}(s,m) =\\ &=& \text{hd}(s\!\!\upharpoonright_{Z^{\prime}},m\!\!\upharpoonright_{Z^{\prime}}) + \sum\limits_{b = 1}^{\beta} \text{hd}(s\!\!\upharpoonright_{J_{b}},m\!\!\upharpoonright_{J_{b}}) \hspace*{2.4pc}+ \sum\limits_{c = 1}^{\gamma} \text{hd}(s\!\!\upharpoonright_{K_{c}},m\!\!\upharpoonright_{K_{c}})\\ &=& \text{hd}(s\!\!\upharpoonright_{Z^{\prime}},m\!\!\upharpoonright_{Z^{\prime}}) + \sum\limits_{b = 1}^{\beta} \left( s(v_{b})\cdot e_{b} + d_{0,J_{b}}\right) \hspace*{1pc}+ \sum\limits_{c = 1}^{\gamma} \left( (1-s(w_{c}))\cdot f_{c} + d_{1,K_{c}}\right). \end{array} $$

By the construction of S″ and the minimality of S′, we obtain hd(S″, M) ≤ hd(S′, M) ≤ hd(s′, M). If we additionally require that s be an optimal solution of (φ, m), then hd(s′, M) = hd(s, m) − d ≤ hd(S, m) − d = hd(S″, M). Thus the distances hd(S′, M), hd(S″, M) and hd(s′, M) coincide, which implies the desired equality OPT(φ, m) = hd(s, m) = hd(s′, M) + d = hd(S′, M) + d = OPT(φ′, M) + d.

The previous lemma, in fact, describes an AP-reduction from the specialized version of the problem NSol(Γ ∪{≈}) discussed in Lemma 10 to an even more specialized variant; the analogous statement holds for the decision version, where instances (φ, m, k) can be decided by considering (φ′, M, k − d) instead. Namely, all equality constraints touch variables in Γ-atoms, and the given assignment has equal distance from the constant tuples on each variable block connected by equalities. In the next result we show how to remove also these equality constraints.

Proposition 12

For constraint languages Γ, we have NSol^d(Γ) ≡m NSol^d(Γ ∪{≈}) and NSol(Γ) ≡AP NSol(Γ ∪{≈}).

Proof

The reduction from left to right is trivial. For the other direction, consider first an instance of NSol^d(Γ ∪{≈}) with formula φ, assignment m, and bound k. Applying the reductions indicated in Lemmas 10 and 11, we can assume (also for NSol(Γ ∪{≈})) that φ is of the form \(\psi \land \bigwedge _{a = 1}^{\alpha } \bigwedge _{x\in I^{\prime }_{a}} (x_{a} \approx x)\) with a Γ-formula ψ containing the distinct variables z1,…,zn,x1,…,xα (n ≥ 0, α ≥ 1) and non-empty variable sets \(I^{\prime }_{a}\) for 1 ≤ a ≤ α that are disjoint from each other and from the variables of ψ. Moreover, we can suppose that \(\text {hd}(m\!\upharpoonright _{I_{a}},\boldsymbol {0}) = \text {hd}(m \upharpoonright _{I_{a}},\boldsymbol {1}) =:c_{a}\) for all 1 ≤ a ≤ α, where Ia denotes the set \(I^{\prime }_{a}\cup \{x_{a}\}\).

We define \(c := \sum _{a = 1}^{\alpha } c_{a}\), and we choose some ℓ-element index set I such that α/ℓ < 1, that is, ℓ ≥ α + 1 (we shall place another condition on ℓ at the end). We construct a formula φ′ as follows: For each atom R(u1,…,uq) of ψ we introduce the set \(\{R(u_{1}^{i_{1}},\dotsc ,u_{q}^{i_{q}})|{(i_{1},\dotsc ,i_{q})\in I^{q}}\}\) of atoms, where for 1 ≤ ν ≤ q and i ∈ I we let \(u_{\nu }^{i} := z_{j,i}\) if uν = zj for some 1 ≤ j ≤ n, and \(u_{\nu }^{i} := u_{\nu }\) if else uν ∈{x1,…,xα}. Take the union over all these sets and let φ′ be the conjunction of all atoms in this union. This construction can be carried out in polynomial time since there is a bound on the arities of relations in Γ. Define an assignment M by M(xa) := m(xa) for 1 ≤ a ≤ α and M(zj,i) := m(zj) for 1 ≤ j ≤ n and i ∈ I. We claim that existence of solutions for (φ, m, k) can be decided by checking for solutions of (φ′, M, ℓ(k − c) + α). The argument is similar to that of Proposition 7: ψ is (un)satisfiable if and only if φ and φ′ are, so we have a correct answer in the unsatisfiable case. Otherwise, consider a solution s to (φ, m, k). Letting Z := {z1,…,zn}, we have

$$\begin{array}{@{}rcl@{}} \text{hd}(s,m) = \text{hd}(s\!\!\upharpoonright_{Z},m\!\!\upharpoonright_{Z}) + \sum\limits_{a = 1}^{\alpha} \text{hd}(s\!\!\upharpoonright_{I_{a}},m\!\!\upharpoonright_{I_{a}}) &=&\text{hd}(s\!\!\upharpoonright_{Z},m\!\!\upharpoonright_{Z}) + \sum\limits_{a = 1}^{\alpha} c_{a} \\ &=&\text{hd}(s\!\!\upharpoonright_{Z},m\!\!\upharpoonright_{Z}) + c, \end{array} $$

i.e. \(\text {hd}(s\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z})\leq k-c\). Putting s′(xa) := s(xa) for 1 ≤ a ≤ α and s′(zj,i) := s(zj) for 1 ≤ j ≤ n and i ∈ I, we get a model of φ′, and it follows that \(\text {hd}(s^{\prime }\!\!\upharpoonright _{Z^{\prime }},\)\(M\!\!\upharpoonright _{Z^{\prime }}) =\ell \cdot \text {hd}(s\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z})\leq \ell \cdot (k-c)\), where Z′ := {zj,i ∣ 1 ≤ j ≤ n, i ∈ I}. Therefore, abbreviating X := {x1,…,xα}, we obtain \(\text {hd}(s^{\prime },M) = \text {hd}(s^{\prime \!}\!\upharpoonright _{Z^{\prime }},M\!\!\upharpoonright _{Z^{\prime }})\)\( + \text {hd}(s^{\prime }\!\!\upharpoonright _{X},M\!\!\upharpoonright _{X}) \leq \ell \cdot (k-c)+\alpha \).

Conversely, let S′ be a solution of (φ′, M, ℓ(k − c) + α). As in Proposition 7 we can construct a solution S″ being constant on {zj,i ∣ i ∈ I} for each 1 ≤ j ≤ n. Letting S(x) := S″(xa) for x ∈ Ia and 1 ≤ a ≤ α, and S(zj) := S″(zj,i) for some fixed index i ∈ I and all 1 ≤ j ≤ n, one obtains a model of φ. If S(zj) ≠ m(zj) for some 1 ≤ j ≤ n, then we have S″(zj,i) = S(zj) ≠ m(zj) = M(zj,i) for all i ∈ I. Hence, we have \( \ell \cdot \text {hd}(S\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z}) \leq \text {hd}(S^{\prime \prime }\!\!\upharpoonright _{Z^{\prime }},M\!\!\upharpoonright _{Z^{\prime }}) \leq \text {hd}(S^{\prime \prime },M) \leq \text {hd}(S^{\prime },M) \). Division by ℓ implies \(\text {hd}(S\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z})\leq \text {hd}(S^{\prime },M)/\ell \leq k-c + \alpha /\ell < k-c + 1\), i.e. \(\text {hd}(S\!\!\upharpoonright _{Z},\)\(m\!\!\upharpoonright _{Z})\leq k-c\). From this we finally infer that \(\text {hd}(S,m) = \text {hd}(S\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z}) + c \leq k\).

Suppose now that S′ is an r-approximate solution for (φ′, M) for some r ≥ 1, i.e. we have hd(S′, M) ≤ r OPT(φ′, M). Constructing a model S of φ as before, we obtain \(\ell \text { hd}(S\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z})\leq \text {hd}(S^{\prime },M)\leq r\text { OPT}(\varphi ^{\prime },M)\). Furthermore, from an optimal solution of φ, we get a model s′ of φ′ satisfying

$$\begin{array}{@{}rcl@{}} \text{OPT}(\varphi^{\prime},M)\leq \text{hd}(s^{\prime},M) &=& \text{hd}(s^{\prime}\!\!\upharpoonright_{Z^{\prime}},M\!\!\upharpoonright_{Z^{\prime}}) + \text{hd}(s^{\prime}\!\!\upharpoonright_{X},M\!\!\upharpoonright_{X})\\ &=& \ell(\text{OPT}(\varphi,m) - c) + \text{hd}(s^{\prime}\!\!\upharpoonright_{X},M\!\!\upharpoonright_{X})\\ &\leq& \ell(\text{OPT}(\varphi,m)-c) + \alpha. \end{array} $$

Multiplying this inequality by r, combining it with the previous inequalities, and dividing by ℓ, we thus have \(\text {hd}(S\!\!\!\!\upharpoonright _{Z},m\!\!\!\!\upharpoonright _{Z})\leq r\text { OPT}(\varphi ,m)-rc +r\alpha /\ell \). Note that OPT(φ, m) > 0, because if OPT(φ, m) = 0, then we would have a unique optimal model of φ, namely m. Then \(m\!\!\upharpoonright _{I_{1}}\) would have to be constant, implying \(\text {hd}(m\!\!\upharpoonright _{I_{1}},\boldsymbol {0}) \neq \text {hd}(m\!\!\upharpoonright _{I_{1}},\boldsymbol {1})\), as one distance would be zero and the other one |I1| > 0, contradicting the assumption that both distances equal c1. Therefore, for ℓ ∈ Ω(|φ|²) we have \(\text {hd}(S,m) = \text {hd}(S\!\!\upharpoonright _{Z},m\!\!\!\upharpoonright _{Z}) + c \leq \text {hd}(S\!\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z}) + rc \leq r\text { OPT}(\varphi ,m)+r\alpha /\ell \leq \text {OPT}(\varphi ,m)(r + r\alpha /\ell ) = \text {OPT}(\varphi ,m)(r + \mathrm {o}(1))\). This demonstrates an AP-reduction with factor 1.

Propositions 7 and 12 allow us to switch freely between formulas with quantifiers and equality and those without. Hence we may derive upper bounds in the setting without quantifiers and equality, while using the latter in hardness reductions. In particular, we can use pp-definability when implementing a constraint language Γ′ by another constraint language Γ. Hence it suffices to consider Post's lattice of co-clones to characterize the complexity of NSol(Γ) for every finite constraint language Γ.

Corollary 13

For constraint languages Γ and Γ′, for which the inclusion Γ′ ⊆ 〈Γ〉 holds, we have the reductions NSold(Γ′) ≤m NSold(Γ) and NSol(Γ′) ≤AP NSol(Γ). Thus, if 〈Γ〉 = 〈Γ′〉 is satisfied, then the equivalences NSold(Γ) ≡m NSold(Γ′) and NSol(Γ) ≡AP NSol(Γ′) hold.

Next we prove that, in certain cases, unit clauses in the formula do not change the complexity of NSol.

Proposition 14

Let Γ be a constraint language such that feasible solutions of NSol(Γ) can be found in polynomial time. Then we have NSol(Γ) ≡AP NSol(Γ ∪ {[x],[¬x]}).

Proof

The direction from left to right is obvious. For the other direction, we give an AP-reduction from NSol(Γ ∪{[x],[¬x]}) to NSol(Γ ∪{≈}). The latter is AP-equivalent to NSol(Γ) by Proposition 12.

The idea of the construction is to introduce two sets of variables \(y_{1}, \ldots, y_{n^{2}}\) and \(z_{1}, \ldots, z_{n^{2}}\) such that in any feasible solution all yi and all zi take the same value, respectively. By setting m′(yi) = 1 and m′(zi) = 0 for each i, any feasible solution m″ of small Hamming distance to m′ will have m″(yi) = 1 and m″(zi) = 0 for all i as well, because deviating from this would be prohibitively expensive. Finally, we simulate the unary relations [x] and [¬x] by x ≈ y1 and x ≈ z1, respectively. We now describe the reduction formally.

Consider a formula φ over Γ ∪ {[x],[¬x]} with the variables x1,…,xn and an assignment m. If (φ, m) fails to have feasible solutions, i.e., if φ is unsatisfiable, we can detect this in polynomial time by the assumption of the lemma and return the generic unsatisfiable instance ⊥. Otherwise, we construct a (Γ ∪ {≈})-formula φ′ over the variables x1,…,xn, \(y_{1}, \ldots, y_{n^{2}}\), \(z_{1}, \ldots, z_{n^{2}}\) and an assignment m′. We obtain φ′ from φ by replacing every occurrence of a constraint [x] by x ≈ y1 and every occurrence of [¬x] by x ≈ z1. Finally, we add the atoms yi ≈ y1 and zi ≈ z1 for all i ∈ {2,…,n²}. Let m′ be the assignment of the variables of φ′ given by m′(xi) = m(xi) for each i ∈ {1,…,n}, and m′(yi) = 1 and m′(zi) = 0 for all i ∈ {1,…,n²}. To any feasible solution m″ of φ′ we assign g(φ, m, m″) as follows.

  1. If φ is satisfied by m, we define g(φ, m, m″) to be equal to m.

  2. Else if m″(yi) = 0 holds for all i ∈ {1,…,n²} or m″(zi) = 1 holds for all i ∈ {1,…,n²}, we define g(φ, m, m″) to be any satisfying assignment of φ.

  3. Otherwise, we have m″(yi) = 1 and m″(zi) = 0 for all i ∈ {1,…,n²}. In this case we define g(φ, m, m″) to be the restriction of m″ onto x1,…,xn.

Observe that all variables yi and all zi are forced to take the same value in any feasible solution, respectively, so g(φ, m, m″) is always well-defined. The construction is an AP-reduction: assume that m″ is an r-approximate solution. We will show that g(φ, m, m″) is also an r-approximate solution.
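The following sketch makes the case analysis concrete; the dict-based instance format and the helper names (satisfies_phi, any_model) are illustrative assumptions, not part of the formal reduction.

```python
# A minimal sketch of the mapping g from the case analysis above, assuming
# assignments are dicts and the helpers satisfies_phi / any_model (both
# hypothetical) decide phi under an assignment and produce some model of phi.
def g(phi_vars, n, m, m2, satisfies_phi, any_model):
    ys = [f"y{i}" for i in range(1, n * n + 1)]   # padding variables of phi'
    zs = [f"z{i}" for i in range(1, n * n + 1)]
    if satisfies_phi(m):                          # case 1: m itself is optimal
        return m
    if all(m2[y] == 0 for y in ys) or all(m2[z] == 1 for z in zs):
        return any_model()                        # case 2: padding was flipped
    return {x: m2[x] for x in phi_vars}           # case 3: restrict to x1..xn
```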

Case 1 Here g(φ, m, m″) returns the optimal solution m, so there is nothing to show.

Case 2 Observe first that φ has a solution, because otherwise it would have been mapped to ⊥ and m″ would not exist. Thus, g(φ, m, m″) is well-defined and feasible by construction. Observe that m′ and m″ disagree on all yi or on all zi, so we have hd(m′, m″) ≥ n². Moreover, since φ has a feasible solution, it follows that OPT(φ′, m′) ≤ n. Since m″ is an r-approximate solution, we have that

$$n\text{ OPT}(\varphi^{\prime},m^{\prime}) \leq n^{2}\leq \text{hd}(m^{\prime},m^{\prime\prime})\leq r\text{ OPT}(\varphi^{\prime},m^{\prime})\enspace.$$

If OPT(φ′, m′) = 0, then m′ would have to be a model of φ′, and so its restriction to the xi, i.e. m, would be a model of φ. This is handled in the first case, which is disjoint from the current one; hence OPT(φ′, m′) ≥ 1, and we infer n ≤ r. Consequently, the distance hd(m, g(φ, m, m″)) is bounded above by n ≤ r ≤ r·OPT(φ, m), where the last inequality holds because φ is not satisfied by m and thus the distance of any optimal solution from m is at least 1.

Case 3 The variables xi for which [xi] is a constraint of φ all satisfy g(φ, m, m″)(xi) = 1 by construction. Moreover, we have g(φ, m, m″)(xi) = 0 for all xi for which [¬xi] is a constraint of φ. Consequently, g(φ, m, m″) is feasible. Again, OPT(φ′, m′) ≤ n, so any optimal solution to (φ′, m′) must set all variables yi to 1 and all zi to 0. It follows that OPT(φ′, m′) = OPT(φ, m). Thus we get

$$\text{hd}(m, g(\varphi, m, m^{\prime\prime})) = \text{hd}(m^{\prime}, m^{\prime\prime}) \leq r \cdot \text{OPT}(\varphi^{\prime},m^{\prime}) = r \cdot \text{OPT}(\varphi,m), $$

which completes the proof.

4.2 Inapplicability of Clone Closure

Corollary 13 shows that the complexity of NSol is not affected by existential quantification by giving an explicit reduction from NSolpp to NSol. It does not seem possible to prove the same for NOSol and MSD. However, similar results hold for the conjunctive closure; thus we resort to minimal or irredundant weak bases of co-clones instead of the usual bases.

Proposition 15

Let Γ and Γ′ be constraint languages. If Γ′ ⊆ 〈Γ〉∧ holds, then we have the reductions NOSold(Γ′) ≤m NOSold(Γ) and NOSol(Γ′) ≤AP NOSol(Γ), as well as MSDd(Γ′) ≤m MSDd(Γ) and MSD(Γ′) ≤AP MSD(Γ).

Proof

We prove only the part that Γ′ ⊆ 〈Γ〉∧ implies NOSol(Γ′) ≤AP NOSol(Γ). The other results will be clear from that reduction, since the proof is generic and therefore holds for both NOSol and MSD, as well as for their decision variants.

Let a Γ′-formula φ′ be an instance of NOSol(Γ′). Since Γ′ ⊆ 〈Γ〉∧, every constraint R(x1,…,xk) of φ′ can be written as a conjunction of constraints over relations from Γ. Substitute the latter into φ′, obtaining φ. Now φ is an instance of NOSol(Γ), and φ is only polynomially larger than φ′. As φ and φ′ have the same variables and hence the same models, also the closest distinct models of φ and φ′ are the same.

4.3 Duality

Given a relation R ⊆ {0,1}n, its dual relation is \(\text{dual}(R) = \{\overline{m}\mid m \in R\}\), i.e., the relation containing the complements of tuples from R. Duality naturally extends to sets of relations and co-clones. We define dual(Γ) = {dual(R) ∣ R ∈ Γ} as the set of dual relations to Γ. Since taking complements is involutive, duality is a symmetric relation: if a relation R′ (a set of relations Γ′) is dual to R (to Γ), then R (Γ) is also dual to R′ (to Γ′). By a simple inspection of the bases of co-clones in Table 2, we can easily see that many co-clones are dual to each other; for instance, iE2 is dual to iV2. The following proposition shows that it is sufficient to consider only one half of Post's lattice of co-clones.

Proposition 16

For any constraint language Γ we have

$$\begin{array}{@{}rcl@{}} \text{\textsf{NSol}}^{\mathrm{d}}({\Gamma})&\equiv_{\mathrm{m}}&\text{\textsf{NSol}}^{\mathrm{d}}(\text{dual}({\Gamma})) \qquad\text{and} \quad \hspace*{12pt}\text{\textsf{NSol}}({\Gamma})\equiv_{\text{AP}}\text{\textsf{NSol}}(\text{dual}({\Gamma})),\\ \text{\textsf{NOSol}}^{\mathrm{d}}({\Gamma})&\equiv_{\mathrm{m}}&\text{\textsf{NOSol}}^{\mathrm{d}}(\text{dual}({\Gamma})) \quad\hspace*{2pt} \text{and}\quad \text{\textsf{NOSol}}({\Gamma})\equiv_{\text{AP}}\text{\textsf{NOSol}}(\text{dual}({\Gamma})),\\ \end{array} $$

as well as

$$\begin{array}{@{}rcl@{}} \text{\textsf{MSD}}^{\mathrm{d}}({\Gamma})&\equiv_{\mathrm{m}}&\text{\textsf{MSD}}^{\mathrm{d}}(\text{dual}({\Gamma})) \qquad\text{and}\quad \text{\textsf{MSD}}({\Gamma})\equiv_{\text{AP}}\text{\textsf{MSD}}(\text{dual}({\Gamma})). \end{array} $$

Proof

Let φ be a Γ-formula and m an assignment to φ. We construct a dual(Γ)-formula φ′ by substitution of every atom R(x) by dual(R)(x). The assignment m satisfies φ if and only if \(\overline{m}\) satisfies φ′, where \(\overline{m}\) is the pointwise complement of m. Moreover, \(\text{hd}(m, m^{\prime}) = \text{hd}(\overline{m}, \overline{m}^{\prime})\).
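A small illustration of this substitution, assuming relations are stored as sets of Boolean tuples and formulas as lists of (relation, variables) atoms; this data format is ours, not the paper's.

```python
# A minimal sketch of the duality transformation from Proposition 16.
def dual_relation(R):
    """dual(R) consists of the componentwise complements of the tuples of R."""
    return {tuple(1 - b for b in t) for t in R}

def dualize(formula, m):
    """Map (phi, m) over Gamma to the dual(Gamma)-instance with complemented m."""
    phi_dual = [(frozenset(dual_relation(R)), xs) for R, xs in formula]
    m_bar = {x: 1 - v for x, v in m.items()}
    return phi_dual, m_bar

# Example: R = [x -> y] = {00, 01, 11}; dual(R) = {11, 10, 00} = [y -> x].
print(sorted(dual_relation({(0, 0), (0, 1), (1, 1)})))  # [(0, 0), (1, 0), (1, 1)]
```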

5 Finding the Nearest Solution

This section contains the proof of Theorem 3. We first consider the polynomial-time cases followed by the cases of higher complexity.

5.1 Polynomial-Time Cases

Proposition 17

If a constraint language Γ is both bijunctive and affine (Γ ⊆ iD1), then NSol(Γ) can be solved in polynomial time.

Proof

Since Γ ⊆ iD1 = 〈Γ′〉 with Γ′ := {[x ⊕ y],[x]}, we have NSol(Γ) ≤AP NSol(Γ′) by Corollary 13. Every Γ′-formula φ is equivalent to a linear system of equations over the Boolean ring \(\mathbb{Z}_{2}\) of type x ⊕ y = 1 and x = 1. Substitute the fixed values x = 1 into the equations of the type x ⊕ y = 1 and propagate. If a contradiction is found thereby, reject the input. After an exhaustive application of this rule only equations of the form x ⊕ y = 1 remain. For each of them put an edge {x, y} into E, defining an undirected graph G = (V, E), whose vertices V are the unassigned variables. If G is not bipartite, then φ has no solutions, so we can reject the input. Otherwise, compute a bipartition \(V = L \mathbin{\dot\cup} R\). We assume that G is connected; if not, perform the following algorithm for each connected component (cf. Lemma 9). Assign the value 0 to each variable in L and the value 1 to each variable in R, giving the satisfying assignment m1. Swapping the roles of 0 and 1 w.r.t. L and R we get a model m2. Return the model among m1 and m2 realizing min{hd(m, m1), hd(m, m2)}.
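The sketch below implements the propagation and per-component two-colouring just described, assuming the input is already normalized to equations x ⊕ y = 1 (xor_pairs) and x = 1 (ones); the data format and names are illustrative.

```python
from collections import deque

def nearest_solution_iD1(variables, xor_pairs, ones, m):
    adj = {x: [] for x in variables}
    for x, y in xor_pairs:
        adj[x].append(y); adj[y].append(x)
    fixed = {x: 1 for x in ones}
    queue = deque(ones)
    while queue:                          # propagate the fixed values
        x = queue.popleft()
        for y in adj[x]:
            if y in fixed:
                if fixed[y] == fixed[x]:  # x (+) y = 1 violated
                    return None
            else:
                fixed[y] = 1 - fixed[x]; queue.append(y)
    best, seen = dict(fixed), set(fixed)
    for v in variables:                   # 2-colour the remaining components
        if v in seen:
            continue
        comp, colour, stack = [], {v: 0}, [v]
        seen.add(v)
        while stack:
            x = stack.pop(); comp.append(x)
            for y in adj[x]:
                if y not in colour:
                    colour[y] = 1 - colour[x]; seen.add(y); stack.append(y)
                elif colour[y] == colour[x]:
                    return None           # odd cycle: not bipartite
        flips = sum(m[x] != colour[x] for x in comp)
        keep = flips <= len(comp) - flips  # compare with the swapped colouring
        for x in comp:
            best[x] = colour[x] if keep else 1 - colour[x]
    return best
```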

Proposition 18

If a constraint language Γ is monotone (Γ ⊆ iM2), then the problem NSol(Γ) can be solved in polynomial time.

Proof

We have iM2 = 〈Γ′〉 where Γ′ := {[x → y],[¬x],[x]}. Thus Corollary 13 and Γ ⊆ 〈Γ′〉 imply NSol(Γ) ≤AP NSol(Γ′). The relations [¬x] and [x] determine a unique value for the respective variable, therefore we can eliminate unit clauses and propagate the values. If a contradiction occurs, we reject the input. It thus remains to consider formulas φ containing only binary implicative clauses of type x → y.

Let V be the set of variables in φ, and for i ∈ {0,1} let Vi = {x ∈ V ∣ m(x) = i} be the variables mapped to value i by the assignment m. We transform the formula φ into a linear programming problem as follows. For each clause x → y we add the inequality y ≥ x, and for each variable x ∈ V we add the constraints x ≥ 0 and x ≤ 1. As linear objective function we use \(f(\boldsymbol{x}) = \sum_{x \in V_{0}} x + \sum_{x \in V_{1}} (1 - x)\). For an arbitrary solution m′, it returns the number of variables that change their parity between m and m′, i.e., f(m′) = hd(m, m′). This way we obtain the (integer) linear programming problem (f, Ax ≥ b), where A is a totally unimodular matrix and b is an integral column vector.

The rows of A consist of the left-hand sides of the inequalities y − x ≥ 0, x ≥ 0, and −x ≥ −1, which constitute the system Ax ≥ b. Every entry in A is 0, +1, or −1. Every row of A has at most two non-zero entries, and for the rows with two entries, one entry is +1 and the other is −1. According to Condition (iv) of Theorem 19.3 in [30], this is a sufficient condition for A to be totally unimodular. As A is totally unimodular and b is an integral vector, f has integral minimum points, and one of them can be computed in polynomial time (see e.g. [30, Chapter 19]).
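A sketch of this LP in code, assuming implications are given as pairs (x, y) meaning x → y and m is the given assignment; the helper names are illustrative. Total unimodularity makes the LP relaxation integral, so an ordinary LP solver suffices.

```python
from scipy.optimize import linprog

def nearest_solution_monotone(variables, implications, m):
    idx = {v: i for i, v in enumerate(variables)}
    n = len(variables)
    # f counts flips; minimizing sum_{m(x)=0} x - sum_{m(x)=1} x differs from f
    # only by the constant |V_1|, so it has the same minimizers.
    c = [1.0 if m[v] == 0 else -1.0 for v in variables]
    A_ub, b_ub = [], []
    for x, y in implications:          # x -> y becomes x - y <= 0
        row = [0.0] * n
        row[idx[x]], row[idx[y]] = 1.0, -1.0
        A_ub.append(row); b_ub.append(0.0)
    res = linprog(c, A_ub=A_ub or None, b_ub=b_ub or None,
                  bounds=[(0, 1)] * n, method="highs")
    if not res.success:                # system infeasible: reject
        return None
    return {v: int(round(res.x[idx[v]])) for v in variables}
```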

5.2 Hard Cases

We start off with an easy corollary of Schaefer’s dichotomy.

Proposition 19

Let Γ be a finite set of Boolean relations. If iN2 ⊆ 〈Γ〉, then it is NP-complete to decide whether a feasible solution exists for NSol(Γ); otherwise, NSol(Γ) ∈ poly-APX.

Proof

If iN2 ⊆〈Γ〉 holds, checking the existence of feasible solutions for NSol(Γ)-instances is NP-hard by Schaefer’s theorem [27].

Let (φ, m) be an instance of NSol(Γ). We give an n-approximate algorithm for the other cases, where n denotes the number of variables in φ. If m satisfies φ, return m. Otherwise compute an arbitrary solution m′ of φ, which can be done in polynomial time by Schaefer's theorem. This algorithm is n-approximate: if m satisfies φ, the algorithm returns the optimal solution; otherwise we have OPT(φ, m) ≥ 1 and hd(m, m′) ≤ n, hence the answer m′ of the algorithm is n-approximate.
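The algorithm is short enough to state directly; satisfies and some_model are hypothetical stand-ins for the subroutines the proof invokes (a model check and a Schaefer-style polynomial-time solution finder).

```python
# A minimal sketch of the n-approximation from Proposition 19.
def approx_nsol(phi, m, satisfies, some_model):
    if satisfies(phi, m):
        return m            # optimal: distance 0
    return some_model(phi)  # any model is n-approximate, since OPT >= 1
```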

5.2.1 APX-Complete Cases

We start with reductions from the optimization version of vertex cover. Since the relation [xy] is a straightforward Boolean encoding of vertex cover, we immediately get the following result.

Proposition 20

NSol(Γ) is APX-hard for every constraint language Γ satisfying\({\text {iS}_{0}^{2}} \subseteq \langle {\Gamma }\rangle \)or\({\text {iS}_{1}^{2}} \subseteq \langle {\Gamma }\rangle \).

Proof

We have \({\text {iS}_{0}^{2}} = \langle {\{[x\lor y]\}}\rangle \) and \({\text {iS}_{1}^{2}} = \langle {\{[\neg x \lor \neg y]\}}\rangle \). We discuss the former case, the latter one being symmetric and provable from the first one by Proposition 16.

We encode VertexCover into NSol({[x ∨ y]}). For each edge {x, y} ∈ E of a graph G = (V, E) we add the clause (x ∨ y) to the formula φG. Every model m′ of φG yields a vertex cover {v ∈ V ∣ m′(v) = 1}, and conversely, the characteristic function of any vertex cover satisfies φG. Moreover, we choose m = 0. Then hd(0, m′) is minimal if and only if the number of 1s in m′ is minimal, i.e., if m′ is a minimal model of φG, i.e., if m′ represents a minimum vertex cover of G. Since VertexCover is APX-complete (see e.g. [3]) and NSol({[x ∨ y]}) ≤AP NSol(Γ) (see Corollary 13), the result follows.
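The encoding in code, under the assumption that instances are (relation-name, variables) atom lists; this format is illustrative.

```python
# A sketch of the encoding from Proposition 20: edges become OR-clauses and
# the reference assignment is 0, so nearest solutions are minimum vertex covers.
def vertex_cover_to_nsol(vertices, edges):
    phi = [("or", (u, v)) for u, v in edges]  # clause u v v for every edge
    m = {v: 0 for v in vertices}              # measure distance from 0
    return phi, m
# A model m' of phi corresponds to the cover {v | m'(v) = 1}, and hd(0, m')
# is exactly the size of that cover.
```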

Proposition 21

We have NSol(Γ) ∈ APX for constraint languages Γ ⊆ iD2.

Proof

Γ′ := {[x ⊕ y],[x → y]} is a base of iD2. By Corollary 13 it suffices to show that NSol(Γ′) is in APX. Let (φ, m) be an instance of this problem. Feasibility for φ can be encoded as an integer program as follows: every constraint x ⊕ y induces an equation x + y = 1, every constraint x → y an inequality x ≤ y. If we restrict all variables to {0,1} by the appropriate inequalities, it is clear that an assignment m′ satisfies φ if and only if it satisfies the linear system with these side conditions. As objective function we use \(f(\boldsymbol{x}):=\sum_{x\in V_{0}} x + \sum_{x\in V_{1}} (1-x)\), where Vi is the set of variables mapped to i by m. Clearly, for every solution m′ we have f(m′) = hd(m, m′). The 2-approximation algorithm from [17] for integer linear programs in which every inequality contains at most two variables completes the proof.

Proposition 22

We have NSol(Γ) ∈ APX for constraint languages \({\Gamma}\subseteq \text{iS}_{00}^{\ell}\) with ℓ ≥ 2.

Proof

Γ′ := {[x1 ∨ ⋯ ∨ xℓ],[x → y],[¬x],[x]} is a base of \(\text{iS}_{00}^{\ell}\). By Corollary 13 it suffices to show that NSol(Γ′) is in APX. Let (φ, m) be an instance of this problem. We use an approach similar to the one for the corresponding case in [22], again writing φ as an integer program. We write constraints x1 ∨ ⋯ ∨ xℓ as inequalities x1 + ⋯ + xℓ ≥ 1, constraints x → y as x ≤ y, ¬x as x = 0, and x as x = 1. Moreover, we add x ≥ 0 and x ≤ 1 for each variable x. It is easy to check that the feasible Boolean solutions of φ and of the linear system coincide. As objective function we use \(f(\boldsymbol{x}):=\sum_{x\in V_{0}} x + \sum_{x\in V_{1}} (1-x)\), where Vi is the set of variables mapped to i by m. Clearly, for every solution m′ we have f(m′) = hd(m, m′). Therefore it suffices to approximate the optimal solution for the integer linear program.

To this end, let m* be a (generally non-integral) solution to the relaxation of the linear program, which can be computed in polynomial time. We construct m′ by setting m′(x) = 0 if m*(x) < 1/ℓ and m′(x) = 1 if m*(x) ≥ 1/ℓ. As ℓ ≥ 2, we get hd(m, m′) = f(m′) ≤ ℓ·f(m*) ≤ ℓ·OPT(φ, m). It is easy to check that m′ is a feasible solution, which completes the proof.
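The rounding step in code; relax_values is assumed to map each variable to its value in an optimal solution of the LP relaxation (computing it is standard and omitted here).

```python
# A sketch of the rounding from Proposition 22.
def round_lp(relax_values, l):
    # In a clause x1 + ... + xl >= 1 some variable has value >= 1/l, so it is
    # rounded to 1 and the clause stays satisfied; x <= y is preserved as well.
    return {x: 1 if v >= 1.0 / l else 0 for x, v in relax_values.items()}
```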

5.2.2 NearestCodeword-Complete Cases

This section essentially uses the facts that MinOnes is NearestCodeword-complete for the co-clone iL2 and that it is a special case of NSol. The following result was stated by Khanna et al. for completeness via A-reductions [22, Theorem 2.14]. A closer look at the proof reveals that it also holds for the stricter notion of completeness via AP-reductions that we use. In this respect the proofs of Propositions 23 and 27 spell out the missing details from [8, Propositions 15 and 18].

Proposition 23

MinOnes(Γ) is NearestCodeword-complete via AP-reductions for constraint languages Γ satisfying 〈Γ〉 = iL2.

Proof

According to [22, Lemma 8.13], MinOnes(Γ) is NearestCodeword-hard for iL ⊆〈Γ〉. This proof uses AP-reductions, i.e., NearestCodeword ≤APMinOnes(Γ).

Regarding the other direction, MinOnes(Γ) ≤APNearestCodeword, we first observe that \(\text {odd}^{3}=\{(a_{1},a_{2},a_{3})\in \{0,1\}^{3}\mid \sum _i a_i~\text {odd}\}\) and \(\text {even}^{3}=\{(a_{1},a_{2},a_{3})\in \{0,1\}^{3}\mid \sum _i a_i~\text {even}\}\) perfectly implement every constraint in iL2, i.e., 〈{odd3,even3}〉 = iL2 as shown in [22, Lemma 7.6]. Therefore, for Γ ⊆iL2, the problem WeightedMinOnes(Γ) AP-reduces to WeightedMinOnes({odd3,even3}) [22, Lemma 3.9]. The latter problem AP-reduces to WeightedMinCSP({odd3,even3,[¬x]}) [22, Lemma 8.1], which further AP-reduces to WeightedMinCSP({odd3,even3}) because of [¬x] = [even3(x, x, x)]. In total we thus have that the problem WeightedMinOnes(Γ) AP-reduces to the problem WeightedMinCSP({odd3,even3}). We conclude by observing that MinOnes is a particular case of the WeightedMinOnes problem and that NearestCodeword is the same as WeightedMinCSP({odd3,even3}), yielding MinOnes(Γ) ≤APNearestCodeword.

Lemma 24

We have MinOnes(Γ) ≤AP NSol(Γ) for any constraint language Γ.

Proof

MinOnes(Γ) is a special case of NSol(Γ) where m is the 0-assignment.
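Lemma 24 as code, with nsol_solver a hypothetical solver for NSol(Γ):

```python
# A MinOnes instance is the NSol instance whose reference point is the
# all-zero assignment.
def min_ones(phi, variables, nsol_solver):
    return nsol_solver(phi, {x: 0 for x in variables})
```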

Corollary 25

NSol(Γ) is NearestCodeword-hard for constraint languages Γ satisfying iL ⊆ 〈Γ〉.

Proof

Γ′ := {even4,[x],[¬x]} is a base of iL2. By Proposition 23, MinOnes(Γ′) is NearestCodeword-complete. By Lemma 24, MinOnes(Γ′) reduces to NSol(Γ′). By Proposition 14, NSol(Γ′) is AP-equivalent to NSol({even4}). Finally, because of even4 ∈ iL ⊆ 〈Γ〉 and Corollary 13, NSol({even4}) reduces to NSol(Γ).

Proposition 26

We have NSol(Γ) ≤AP MinOnes({even4,[¬x],[x]}) for constraint languages Γ ⊆ iL2.

Proof

Γ′ := {even4,[¬x],[x]} is a base of iL2. By Corollary 13 it suffices to show NSol(Γ′) ≤AP MinOnes(Γ′).

We proceed by reducing NSol(Γ′) to a subproblem of NSolpp(Γ′) where only instances (φ′, 0) are considered. Then, using Proposition 7 and Remark 8, this reduces to a subproblem of NSol(Γ′) with the same restriction on the assignments, which is exactly MinOnes(Γ′). Note that [x ⊕ y] is equal to [∃z∃z′(even4(x, y, z, z′) ∧ ¬z ∧ z′)], so we can freely use x ⊕ y in any Γ′-formula. Let the formula φ and the assignment m be an instance of NSol(Γ′). We copy all clauses of φ to φ′. For each variable x of φ for which m(x) = 1, we take a new variable x′ and add the constraint x ⊕ x′ to φ′. Moreover, we existentially quantify x. Clearly, there is a bijection I between the satisfying assignments of φ and those of φ′: for every solution s of φ we get a solution I(s) of φ′ by setting, for each x′ introduced in the construction of φ′, the value I(s)(x′) to the complement of s(x). Moreover, we have that hd(m, s) = hd(0, I(s)). This yields a trivial AP-reduction with α = 1.

5.2.3 MinHornDeletion-Complete Cases

Proposition 27 (Khanna et al. [22])

We have MinHornDeletion-completeness for the problems MinOnes({x ∨ y ∨ ¬z, x, ¬x}) and WeightedMinOnes({x ∨ y ∨ ¬z, x ∨ y}) via AP-reductions.

Proof

These results are stated in [22, Theorem 2.14] for completeness via A-reductions. The actual proof in [22, Lemma 8.7 and Lemma 8.14], however, uses AP-reductions, hence the results also hold for our stricter notion of completeness.

Lemma 28

NSol({x ∨ y ∨ ¬z}) ≤AP WeightedMinOnes({x ∨ y ∨ ¬z, x ∨ y}).

Proof

Let the formula φ and the assignment m be an instance of NSol({x ∨ y ∨ ¬z}) over the variables x1,…,xn. Let V1 be the set of variables xi with m(xi) = 1. We construct a {x ∨ y ∨ ¬z, x ∨ y}-formula φ′ by adding to φ for each xi ∈ V1 the constraint xi ∨ xi′, where xi′ is a new variable. We set the weights of the variables of φ′ as follows: for xi ∈ V1 we set w(xi) = 0, all other variables get weight 1. To each satisfying assignment m′ of φ′ we associate the assignment m″ which is the restriction of m′ to the variables of φ. This construction is an AP-reduction.

Note that m″ is feasible if m′ is. Let m′ be an r-approximation of OPT(φ′). Note that whenever m′(xi) = 0 for some xi ∈ V1, then m′(xi′) = 1. The other way round, we may assume that whenever m′(xi) = 1 for xi ∈ V1, then m′(xi′) = 0: if this is not the case, we can change m′ accordingly, decreasing the weight that way. It follows that w(m′) = n0 + n1, where

$$\begin{array}{@{}rcl@{}} n_{0} &=& |\{i\mid x_{i} \in V_{1},\ m^{\prime}(x_{i})= 0\}| = |\{i\mid x_{i} \in V_{1},\ m^{\prime}(x_{i})\neq m(x_{i})\}|\\ n_{1} &=& |\{i\mid x_{i} \notin V_{1},\ m^{\prime}(x_{i})= 1\}| = |\{i\mid x_{i} \notin V_{1},\ m^{\prime}(x_{i})\neq m(x_{i})\}|, \end{array} $$

which means that w(m′) equals hd(m, m″). Analogously, any model s ∈ [φ] can be extended to a model m′ ∈ [φ′] by putting m′(xi′) = 1 if xi ∈ V1 and s(xi) = 0, and m′(xi′) = 0 for the remaining xi ∈ V1; thereby w(m′) = hd(m, s). Consequently, the optima in both problems correspond, that is, we get OPT(φ′) = OPT(φ, m). Hence we deduce hd(m, m″) = w(m′) ≤ r·OPT(φ′) = r·OPT(φ, m).

Proposition 29

For every dual Horn constraint language Γ ⊆ iV2 we have the reduction NSol(Γ) ≤AP WeightedMinOnes({x ∨ y ∨ ¬z, x ∨ y}).

Proof

Since {x ∨ y ∨ ¬z, x, ¬x} is a base of iV2, by Corollary 13 it suffices to prove the reduction NSol({x ∨ y ∨ ¬z, x, ¬x}) ≤AP WeightedMinOnes({x ∨ y ∨ ¬z, x ∨ y}). To this end, first reduce NSol({x ∨ y ∨ ¬z, x, ¬x}) to NSol({x ∨ y ∨ ¬z}) by Proposition 14 and then use Lemma 28.

Proposition 30

NSol(Γ) is MinHornDeletion-hard for finite Γ with iV2 ⊆ 〈Γ〉.

Proof

For Γ′ := {x ∨ y ∨ ¬z, x, ¬x} we have MinHornDeletion ≡AP MinOnes(Γ′) by Proposition 27. Now MinOnes(Γ′) ≤AP NSol(Γ′) ≤AP NSol(Γ) follows, using Lemma 24 and Corollary 13 together with the assumption Γ′ ⊆ iV2 ⊆ 〈Γ〉.

5.2.4 Poly-APX-Hardness

Proposition 31

The problem NSol(Γ) is poly-APX-hard for constraint languages Γ satisfying iN ⊆ 〈Γ〉 ⊆ iI0 or iN ⊆ 〈Γ〉 ⊆ iI1.

Proof

The constraint language Γ1 := {even4,[x → y],[x]} is a base of iI1. MinOnes(Γ1) is poly-APX-hard by Theorem 2.14 of [22] and reduces to NSol(Γ1) by Lemma 24. Since [x → y] = [dup3(x, y, 1)] = [∃z(dup3(x, y, z) ∧ z)], as well as 〈{even4}〉 = iL, 〈{dup3}〉 = iN, and iL ⊆ iN, we have the reductions

$$\textsf{NSol}({\Gamma}_1)\le_{\text{AP}} \textsf{NSol}({\Gamma}_1\cup\{\text{dup}^3\}) \le_{\text{AP}} \textsf{NSol}(\{\text{even}^4,\text{dup}^3,x\}) \equiv_{\text{AP}} \textsf{NSol}(\{\text{dup}^3,x\}) $$

by Corollary 13. Finding feasible solutions of NSol(Γ), where iN ⊆ 〈Γ〉 ⊆ iI0 or iN ⊆ 〈Γ〉 ⊆ iI1, is possible in polynomial time: such a Γ is 0-valid or 1-valid, therefore the all-zero or the all-one tuple is always guaranteed to be a feasible solution. Therefore Proposition 14 implies NSol({dup3, x}) ≡AP NSol({dup3}); the latter problem reduces to NSol(Γ) by Corollary 13, because dup3 ∈ iN ⊆ 〈Γ〉.

6 Finding Another Solution Closest to the Given One

In this section we study the optimization problem NearestOtherSolution. We first consider the polynomial-time cases and then the cases of higher complexity.

6.1 Polynomial-Time Cases

Since we cannot take advantage of clone closure, we must proceed differently. We use the following result based on a theorem by Baker and Pixley [5].

Proposition 32 (Jeavons et al. [19])

Every bijunctive constraint R(x1,…,xn) is equivalent to the conjunction \(\bigwedge_{1 \leq i \leq j \leq n} R_{ij}(x_{i},x_{j})\), where Rij is the projection of R to the coordinates i and j.

Proposition 33

If Γ is bijunctive (Γ ⊆ iD2), then NOSol(Γ) is in PO.

Proof

According to Proposition 32 we may assume that the formula φ is a conjunction of binary atoms R(x, y) and unary constraints R(x, x) of the form [x] or [¬x].

Unary constraints fix the value of the constrained variable and can be eliminated by propagating the value to the other clauses. For each of the remaining variables x, we attempt to construct a model mx of φ with mx(x) ≠ m(x) such that hd(mx, m) is minimal among all models with this property. This can be done in polynomial time as described below. If the construction of mx fails for every variable x, then m is the sole model of φ and the problem is not solvable. Otherwise choose one of the variables x for which hd(mx, m) is minimal and return mx as the second solution m′.

It remains to describe the computation of mx. Initially we set mx(x) := 1 − m(x) and mx(y) := m(y) for all variables y ≠ x, and mark x as flipped. If mx satisfies all atoms we are done. Otherwise let R(u, v) be an atom falsified by mx. If both u and v are marked as flipped, the construction fails: a model mx with the property mx(x) ≠ m(x) does not exist. Otherwise R(u, v) contains a uniquely determined variable v not marked as flipped. Set mx(v) := 1 − m(v), mark v as flipped, and repeat this step. This process terminates after flipping every variable at most once.
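The flip-propagation in code, assuming φ is a list of atoms (R, (u, v)) with R a set of admissible pairs and m a model of φ; this instance format is an illustrative assumption.

```python
# A sketch of Proposition 33: per-variable forced flipping, then take the
# candidate closest to m.
def flip_from(phi, m, x):
    mx = dict(m); mx[x] = 1 - m[x]
    flipped = {x}
    while True:
        bad = next(((R, uv) for R, uv in phi
                    if (mx[uv[0]], mx[uv[1]]) not in R), None)
        if bad is None:
            return mx                    # model with mx[x] != m[x]
        R, (u, v) = bad                  # a falsified atom has a flipped variable
        if u in flipped and v in flipped:
            return None                  # no model flipping x exists
        w = v if u in flipped else u     # the unique unflipped variable
        mx[w] = 1 - m[w]; flipped.add(w)

def nosol_bijunctive(phi, m, variables):
    cands = [mx for mx in (flip_from(phi, m, x) for x in variables) if mx]
    if not cands:
        return None                      # m is the sole model
    return min(cands, key=lambda mx: sum(mx[v] != m[v] for v in variables))
```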

Proposition 34

If \({\Gamma} \subseteq \text{iS}_{00}^{k}\) or \({\Gamma} \subseteq \text{iS}_{10}^{k}\) for some k ≥ 2, then NOSol(Γ) is in PO.

Proof

We perform the proof only for \(\text {iS}_{00}^{k}\). Proposition 16 implies the same result for \(\text {iS}_{10}^{k}\).

The co-clone \(\text{iS}_{00}^{k}\) is generated by Γ′ := {ork,[x → y],[x],[¬x]}. In fact, Γ′ is even a plain base of \(\text{iS}_{00}^{k}\) [12], meaning that every relation in \(\text{iS}_{00}^{k}\) can be expressed as a conjunctive formula over relations in Γ′, without existential quantification or explicit equalities. Hence we may assume that φ is given as a conjunction of Γ′-atoms.

Note that the disjunction ∨ is a polymorphism of Γ′, i.e., for any two solutions m1, m2 of φ their disjunction m1 ∨ m2 – defined by (m1 ∨ m2)(x) = m1(x) ∨ m2(x) for all x – is also a solution of φ. Therefore we get an optimal solution m′ of an instance (φ, m) by flipping in m either some ones to zeros or some zeros to ones, but not both. To see this, assume the optimal solution m′ flips both ones and zeros. Then m ∨ m′ is a solution of φ that is closer to m than m′, which contradicts the optimality of m′.

Unary constraints fix the value of the constrained variable and can be eliminated by propagating the value to the other clauses (including removal of disjunctions containing implied positive literals and shortening of disjunctions containing implied negative literals). This propagation does not lead to contradictions, since m is a model of φ. For each of the remaining variables x, we attempt to construct a model mx of φ with mx(x) ≠ m(x) such that hd(mx, m) is minimal among all models with this property. This can be done in polynomial time as described below. If the construction of mx fails for every variable x, then m is the sole model of φ and the problem is not solvable. Otherwise choose one of the variables x for which hd(mx, m) is minimal and return mx as the second solution m′.

It remains to describe the computation of mx. If m(x) = 0, we flip x to 1 and propagate this change iteratively along the implications, i.e., if x → y is a constraint of φ and m(y) = 0, we flip y to 1 and iterate. This kind of flip never invalidates any disjunctions; it could only lead to contradictions with conditions imposed by negative unit clauses (and since their values were propagated before, such a contradiction would be immediate). For m(x) = 1 we proceed dually, flipping x to 0, removing x from disjunctions if applicable, and propagating this change backward along implications y → x where m(y) = 1. This can possibly lead to immediate inconsistencies with already inferred unit clauses, or it can produce contradictions through empty disjunctions, or it can create the necessity for further flips from 0 to 1 in order to obtain a solution (because in a disjunctive atom all variables with value 1 have been flipped, and thus removed). In all these three cases the resulting assignment does not satisfy φ, and there is no model that differs from m in x and that can be obtained by flipping in one direction only. Otherwise, the resulting assignment satisfies φ, and this is the desired mx. Our process terminates after flipping every variable at most once, since we flip only in one direction (from zeros to ones or from ones to zeros). Thus, mx is computable in polynomial time.
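The easy direction of this propagation in code, assuming unit clauses were already eliminated and implications are given as pairs (u, v) meaning u → v; this direction never fails, while the dual case m(x) = 1 needs the additional checks described above.

```python
# A sketch of the upward propagation from Proposition 34 for the case m(x) = 0.
def flip_up(implications, m, x):
    mx = dict(m); mx[x] = 1
    stack = [x]
    while stack:
        u = stack.pop()
        for a, b in implications:
            if a == u and mx[b] == 0:  # u -> b forces b to flip to 1
                mx[b] = 1; stack.append(b)
    return mx                          # flips zeros to ones only
```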

6.2 Hard Cases

Proposition 35

Let Γ be a constraint language. If iI1 ⊆ 〈Γ〉 or iI0 ⊆ 〈Γ〉 holds, then it is NP-complete to decide whether a feasible solution for NOSol(Γ) exists. Otherwise, NOSol(Γ) ∈ poly-APX.

Proof

Finding a feasible solution to NOSol(Γ) corresponds exactly to the decision problem AnotherSAT(Γ), which is NP-hard if and only if iI1 ⊆ 〈Γ〉 or iI0 ⊆ 〈Γ〉, according to Juban [20]. If AnotherSAT(Γ) is polynomial-time decidable, we can always find a feasible solution for NOSol(Γ) if one exists. Obviously, every feasible solution is an n-approximation of the optimal solution, where n is the number of variables in the input.

6.2.1 Tightness Results

It will be convenient to consider the following decision problem asking for another solution that is not the complement, i.e., that does not have maximal distance from the given one.

Problem: AnotherSATnc(Γ)

Input: A conjunctive formula φ over relations from Γ and an assignment m satisfying φ.

Question: Is there another satisfying assignment m′ of φ, different from m, such that hd(m, m′) < n, where n is the number of variables in φ?

Remark 36

AnotherSATnc(Γ) is NP-complete for iI0 ⊆ 〈Γ〉 and iI1 ⊆ 〈Γ〉, since already AnotherSAT(Γ) is NP-complete in these cases, as shown in [20]. Moreover, AnotherSATnc(Γ) is polynomial-time decidable if Γ is Horn (Γ ⊆ iE2), dual Horn (Γ ⊆ iV2), bijunctive (Γ ⊆ iD2), or affine (Γ ⊆ iL2), for the same reason as for AnotherSAT(Γ): for each variable xi we flip the value m(xi), substitute \(\overline{m}(x_{i})\) for xi, and construct another satisfying assignment if one exists. Consider now the solutions which we get for the different variables xi. Either there is no solution for any variable, in which case AnotherSATnc(Γ) has no solution; or the only solutions obtained are the complement of m, in which case AnotherSATnc(Γ) has no solution either; or else we get a solution m′ with hd(m, m′) < n, which is also a solution for AnotherSATnc(Γ). Hence, taking into account Proposition 38 below, we obtain a dichotomy result also for AnotherSATnc(Γ).

Note that AnotherSATnc(Γ) is not compatible with existential quantification. Let φ(y, x1,…,xn) with model m be an instance of AnotherSATnc(Γ) and let m′ be a solution satisfying hd(m, m′) < n + 1. Now consider the formula φ1(x1,…,xn) = ∃y φ(y, x1,…,xn), obtained by existentially quantifying the variable y, and the tuples m1 and \(m^{\prime}_{1}\) obtained from m and m′ by omitting the first component. Both m1 and \(m^{\prime}_{1}\) are still solutions of φ1, but we cannot guarantee \(\text{hd}(m_{1}, m^{\prime}_{1}) < n\). Hence we need the equivalent of Proposition 15 for this problem, whose proof is analogous.

Proposition 37

The reduction AnotherSATnc(Γ′) ≤m AnotherSATnc(Γ) holds for all constraint languages Γ and Γ′ satisfying Γ′ ⊆ 〈Γ〉∧.

Proposition 38

If a constraint language Γ satisfies 〈Γ〉 = iI or iN ⊆ 〈Γ〉 ⊆ iN2, then AnotherSATnc(Γ) is NP-complete.

Proof

Containment in NP is clear; it remains to show hardness. Since the problem AnotherSATnc is not compatible with existential quantification, we cannot use clone theory, but have to consider the three co-clones iN2, iN, and iI separately and make use of minimal weak bases.

Case 〈Γ〉 = iN. Putting R := {000, 101, 110}, we present a reduction from the problem AnotherSAT({R}), which is NP-hard [20], as 〈{R}〉 = iI0. The problem remains NP-complete if we restrict it to instances (φ, 0), since R is 0-valid and any given model m other than the constant 0-assignment admits the trivial solution m′ = 0. Thus we can perform a reduction from this restricted problem.

Consider the relation RiN = {0000,1010,1100,1111,0101,0011}. Given a formula φ over R, we construct a formula ψ over RiN by replacing every constraint R(x, y, z) with a new constraint RiN(x, y, z, w), where w is a new global variable. Moreover, we set m to the constant 0-assignment. This construction is a many-one reduction from the restricted version of AnotherSAT({R}) to AnotherSATnc({RiN}).

To see this, observe that the tuples in RiN that have a 0 in the last coordinate are exactly those in R × {0}. Thus any solution of φ can be extended to a solution of ψ by assigning 0 to w. Conversely, we observe that any solution m′ of the AnotherSATnc({RiN})-instance (ψ, 0) is different from 0 and 1. As RiN is complementive, we may assume m′(w) = 0. Then m′ restricted to the variables of φ solves the AnotherSAT({R})-instance (φ, 0).

Finally, observe that RiN is a minimal weak base relation of iN and Γ is a base of the co-clone iN; therefore we have RiN ∈ 〈Γ〉∧ by Theorem 1. Now the NP-hardness of AnotherSATnc(Γ) follows from that of AnotherSATnc({RiN}) by Proposition 37.

Case 〈Γ〉 = iN2. We give a reduction from AnotherSATnc({RiN}), which is NP-hard by the previous case. By Theorem 1, 〈Γ〉∧ contains the relation \(R_{\text{iN}_{2}} = \{m\overline{m}\mid m \in R_{\text{iN}}\}\). For an RiN-formula φ(x1,…,xn), we construct a corresponding \(R_{\text{iN}_{2}}\)-formula \(\psi(x_{1}, \ldots, x_{n}, x_{1}^{\prime}, \ldots, x_{n}^{\prime})\) by replacing every constraint RiN(x, y, z, w) with a new constraint \(R_{\text{iN}_{2}}(x, y, z, w, x^{\prime}, y^{\prime}, z^{\prime}, w^{\prime})\). Assignments m for φ extend to assignments M for ψ by setting \(M(x^{\prime}):= \overline{m}(x)\). Conversely, assignments for ψ yield assignments for φ by restricting them to the variables of φ. Because every variable x1,…,xn assigned by models of φ actually occurs in some RiN-atom of φ and hence in some \(R_{\text{iN}_{2}}\)-atom of ψ, and because of the structure of \(R_{\text{iN}_{2}}\), any model of ψ distinct from M and \(\overline{M}\) restricts to a model of φ other than m or \(\overline{m}\). Consequently, this construction is again a reduction from \(\textsf{AnotherSAT}_{\textsf{nc}}(\{R_{\text{iN}}\})\) to \(\textsf{AnotherSAT}_{\textsf{nc}}(\{R_{\text{iN}_{2}}\})\), which reduces itself to AnotherSATnc(Γ) by Proposition 37.

Case 〈Γ〉 = iI. We proceed as in the case 〈Γ〉 = iN, but use RiI = {0000, 0011, 0101, 1111} instead of RiN, and {000, 011, 101} for R. Note that the RiI-tuples with first coordinate 0 are exactly those in {0} × R. The relation RiI is not complementive, but (as every variable assigned by any model of ψ occurs in some atomic RiI-constraint) the only solution m′ with m′(w) = 1 is the constant 1-assignment, which is ruled out by the requirement hd(m, m′) < n. Hence we may again assume m′(w) = 0.

Proposition 39

For a constraint language Γ satisfying 〈Γ〉 = iI or iN ⊆ 〈Γ〉 ⊆ iN2 and any ε > 0 there is no polynomial-time n1−ε-approximation algorithm for NOSol(Γ), unless P = NP.

Proof

Assume there is a constant ε > 0 with a polynomial-time n1−ε-approximation algorithm for NOSol(Γ). We show how to use this algorithm to solve AnotherSATnc(Γ) in polynomial time. Proposition 38 completes the proof.

Let (φ, m) be an instance of AnotherSATnc(Γ) with n variables. If n = 1, then we reject the instance. Otherwise, we construct a new formula φ′ and a new assignment m′ as follows. Let k be the smallest integer greater than 1/ε. Choose a variable x of φ and introduce \(n^{k}-n\) new variables xi for i = 1,…,\(n^{k}-n\). For every such i and every constraint R(y1,…,yℓ) in φ, such that x ∈ {y1,…,yℓ}, construct a new constraint \(R({z_{1}^{i}}, \ldots, z_{\ell}^{i})\) by \({z_{j}^{i}} = x^{i}\) if yj = x and \({z_{j}^{i}} = y_{j}\) otherwise; add all the newly constructed constraints to φ in order to get φ′. Moreover, we extend m to a model m′ of φ′ by setting m′(xi) = m(x). Now run the n1−ε-approximation algorithm for NOSol(Γ) on (φ′, m′). If the answer is \(\overline{m^{\prime}}\), then reject, otherwise accept.

We claim that the algorithm described above is a correct polynomial-time algorithm for the decision problem AnotherSATnc(Γ) when Γ is complementive. Polynomial runtime is clear; it remains to show correctness. If the only solutions to φ are m and \(\overline{m}\), then, as n > 1, the only models of φ′ are m′ and \(\overline{m^{\prime}}\). Hence the approximation algorithm must answer \(\overline{m^{\prime}}\), and the output is correct. Now assume that there is a satisfying assignment ms different from m and \(\overline{m}\). The relation [φ] is complementive, hence we may assume that ms(x) = m(x). It follows that φ′ has a satisfying assignment \(m_{s}^{\prime}\) for which \(0<\text{hd}(m_{s}^{\prime}, m^{\prime})<n\) holds. But then the approximation algorithm must find a satisfying assignment m″ for φ′ with \(\text{hd}(m^{\prime\prime}, m^{\prime}) < n \cdot (n^{k})^{1-\varepsilon} = n^{k(1-\varepsilon)+1}\). Since the inequality k > 1/ε holds, it follows that \(\text{hd}(m^{\prime\prime}, m^{\prime}) < n^{k}\). Consequently, m″ is not the complement of m′, and the output of our algorithm is again correct.

When Γ is not complementive but both 0-valid and 1-valid (〈Γ〉 = iI), we perform the expansion algorithm described above for each variable of the formula φ and reject if the result is the complement for each run. The runtime remains polynomial. If \([\varphi] = \{m,\overline{m}\}\), then indeed every run results in the corresponding \(\overline{m^{\prime}}\), and we correctly reject. Otherwise, we have a model \(m_{s}\in [\varphi]\smallsetminus \{m,\overline{m}\}\), so there is a variable x of φ where \(m_{s}(x)\neq \overline{m}(x)\), i.e. ms(x) = m(x). For the corresponding instance (φ′, m′) the approximation algorithm does not return \(\overline{m^{\prime}}\), wherefore we correctly accept.
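The blow-up step of the proof above in code: the chosen variable x is replicated into \(n^{k}-n\) fresh copies by duplicating every constraint that mentions x. Constraints are (R, vars) pairs and the copy names are hypothetical.

```python
# A sketch of the variable blow-up from Proposition 39.
def blow_up(phi, variables, x, k):
    n = len(variables)
    copies = [f"{x}__{i}" for i in range(1, n**k - n + 1)]
    new_atoms = [(R, tuple(c if v == x else v for v in vs))
                 for c in copies for R, vs in phi if x in vs]
    # m extends to m' by giving every copy the value m(x).
    return phi + new_atoms, variables + copies
```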

6.2.2 MinDistance-Equivalent Cases

In this section we show that affine co-clones lead to problems equivalent to MinDistance. We thereby add the missing details to the rather superficial treatment of this matter given in [6].

Lemma 40

For affine constraint languages Γ (Γ ⊆ iL2) we have NOSol(Γ) ≤AP MinDistance.

Proof

Let the formula φ and the satisfying assignment m be an instance of NOSol(Γ) over the variables x1,…,xn. The input φ can be written as a system Ax = b, with m being a solution of this affine system. A tuple m′ is a solution of Ax = b if and only if it can be written as m′ = m + m0, where m0 is a solution of Ax = 0. The Hamming distance is invariant with respect to affine translations: we have hd(m, m′) = hd(m + m″, m′ + m″) for any tuple m″; in particular, for m″ = −m we obtain hd(m, m′) = hd(m′ − m, 0). Therefore m′ is a solution of Ax = b with minimal Hamming distance to m if and only if m0 = m′ − m is a non-zero solution of the homogeneous system Ax = 0 with minimal Hamming weight. Hence, the problem NOSol(Γ) for affine languages Γ is equivalent to computing a non-trivial solution of minimal weight for a homogeneous system, which is exactly the MinDistance problem.
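The reduction in code over \(\mathbb{Z}_{2}\); min_weight_kernel_vector stands in for a MinDistance oracle returning a nonzero solution of Ax = 0 of minimum Hamming weight (the problem is NP-hard in general, so this helper is an assumption).

```python
import numpy as np

# A sketch of Lemma 40: translate the nearest-other-solution question into a
# minimum-weight kernel vector question.
def nosol_affine(A, b, m, min_weight_kernel_vector):
    # m' solves Ax = b iff m' = m + m0 with A m0 = 0, and hd(m, m') = hw(m0).
    m0 = min_weight_kernel_vector(A)
    if m0 is None:
        return None                              # m is the unique solution
    return (np.asarray(m) + np.asarray(m0)) % 2  # nearest other solution
```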

We need to express an affine sum of an even number of variables by means of the minimal weak base of each of the affine co-clones. In the following lemma the existentially quantified variables are uniquely determined; therefore the existential quantifiers serve only to hide superfluous variables and do not pose any of the problems mentioned before.

Lemma 41

For every \(n\in \mathbb{N}\), n ≥ 1, the constraint x1 ⊕ x2 ⊕ ⋯ ⊕ x2n = 0 can be equivalently expressed by each of the following formulas:

  1. \(\exists y_{0},\ldots,y_{n} \left(\begin{array}{@{}l@{}} y_{0} = 0\land y_{n} = 0\land {}\\ R_{\text{iL}}(y_{0},x_{1},x_{2},y_{1}) \land {}\\ R_{\text{iL}}(y_{1},x_{3},x_{4},y_{2}) \land \cdots \land R_{\text{iL}}(y_{n-1},x_{2n-1},x_{2n},y_{n}) \end{array} \right)\),

  2. \(\exists y_{0},\ldots,y_{2n} \left(\begin{array}{l@{}l@{}} R_{\text{iL}_{0}}(y_{0},x_{1},y_{1},y_{0}) \land {}\\ R_{\text{iL}_{0}}(y_{1},x_{2},y_{2},y_{0}) \land \cdots \land R_{\text{iL}_{0}}(y_{2n-1},x_{2n},y_{2n},y_{2n}) \end{array} \right)\),

  3. \(\exists y_{0},\ldots,y_{2n} \left(\begin{array}{l@{}l@{}} R_{\text{iL}_{1}}(y_{0},x_{1},y_{1},y_{0}) \land {}\\ R_{\text{iL}_{1}}(y_{1},x_{2},y_{2},y_{0}) \land \cdots \land R_{\text{iL}_{1}}(y_{2n-1},x_{2n},y_{2n},y_{2n}) \end{array} \right)\),

  4. ∃y0,…,yn,z0,…,zn,w1,…,w2n \(\left(\begin{array}{l@{}l@{}} y_{0} = 0 \land y_{n} = 0 \land {}\\ R_{\text{iL}_{3}}(y_{0},x_{1},x_{2},y_{1},z_{0},w_{1},w_{2},z_{1}) \land {}\\ R_{\text{iL}_{3}}(y_{1},x_{3},x_{4},y_{2},z_{1},w_{3},w_{4},z_{2}) \land \cdots \land {}\\ R_{\text{iL}_{3}}(y_{n-1},x_{2n-1},x_{2n},y_{n},z_{n-1},w_{2n-1},w_{2n},z_{n}) \end{array} \right)\),

  5. ∃y0,…,y2n,z0,…,z2n,w1,…,w2n \(\left(\begin{array}{l@{}l@{}} R_{\text{iL}_{2}}(y_{0},x_{1},y_{1},z_{0},w_{1},z_{1},y_{0},z_{0}) \land {}\\ R_{\text{iL}_{2}}(y_{1},x_{2},y_{2},z_{1},w_{2},z_{2},y_{0},z_{0}) \land \cdots \land {}\\ R_{\text{iL}_{2}}(y_{2n-1},x_{2n},y_{2n},z_{2n-1},w_{2n},z_{2n}, y_{2n},z_{2n}) \end{array} \right)\),

where the number of existentially quantified variables is linearly bounded in the length of the constraint. Note moreover that in each case any model of x1 ⊕ x2 ⊕ ⋯ ⊕ x2n = 0 uniquely determines the values of the existentially quantified variables.

Proof

Write out the constraint relations following the existential quantifiers as (conjunctions of) equalities. From this the uniqueness of the valuations for the existentially quantified variables is easy to see, and likewise that any model of \(\bigoplus_{i = 1}^{2n} x_{i} = 0\) also satisfies each of the formulas 1 to 5. Adding up the equalities behind the existential quantifiers shows the converse direction.

The following lemma shows that MinDistance is AP-equivalent to a restricted version, containing only constraints generating the minimal weak base, for each co-clone in the affine case.

Lemma 42

For each co-clone \(\mathcal{B}\in \{\text{iL},\text{iL}_{0},\text{iL}_{1},\text{iL}_{2},\text{iL}_{3}\}\) we have \(\text{\textsf{MinDistance}}\le_{\text{AP}} \text{\textsf{NOSol}}(\{R_{\mathcal{B}},[\neg x]\})\).

Proof

Consider a co-clone \(\mathcal {B}\in \{\text {iL},\text {iL}_{0},\text {iL}_{1},\text {iL}_{2},\text {iL}_{3}\}\) and a MinDistance-instance represented by a matrix \(A\in \mathbb {Z}_{2}^{k\times l}\). If one of the columns of A, say the i-th, is zero, then the i-th unit vector is an optimal solution to this instance with optimal value 1. Hence, we assume from now on that none of the columns equals a zero vector.

Every row of A expresses the fact that a sum of n ≤ l variables equals zero. If n is odd, we extend this sum to one with n + 1 summands, thereby introducing a new variable v, which we existentially quantify and confine to zero using a unary [¬x]-constraint. Then we replace the expanded sum by the existential formula from Lemma 41 corresponding to the co-clone \(\mathcal{B}\) under consideration. This way we have introduced only linearly many new variables in l for every row, and for any feasible solution of the MinDistance-problem the values of the existential variables needed to encode it are uniquely determined. Thus, taking the conjunction over all these formulas, we only have a linear growth in the size of the instance.

Next, we show how to deal with the existential quantifiers: First we transform the expression to prenex normal form getting a formula ψ of the form

$$\exists y_1,\dotsc,y_p \varphi(y_1,\dotsc,y_p,x_1,\dotsc,x_l), $$

which holds if and only if Ax = 0 for x = (x1,…,xl). We use the same blow-up construction regarding x1,…,xl as in Proposition 7 and Lemma 11 to make the influence of y1,…,yp on the Hamming distance negligible. For this we put J := {1,…,t} and introduce new variables \({x_{i}^{j}}\) for 1 ≤ i ≤ l and j ∈ J. If u is among x1,…,xl, say u = xi, we define its blow-up set to be \(B(u) = \{{x_{i}^{j}}\mid j\in J\}\); otherwise, for u ∈ {y1,…,yp}, we set B(u) = {u}. Now for each atom R(u1,…,uq) of φ we form the set of atoms \(\{R(u_{1}^{\prime},\dotsc,u_{q}^{\prime})\mid (u_{1}^{\prime},\dotsc,u_{q}^{\prime})\in \prod_{i = 1}^{q} B(u_{i})\}\), and define the quantifier-free formula φ′ to be the conjunction of all atoms in the union of these sets. Note that this construction takes time polynomial in the size of ψ, and hence in the size of the input MinDistance-instance, whenever t is polynomial in the input size, because the atomic relations in ψ are at most octonary.

If s is an assignment of values to x making Ax = 0 true, we define \(s^{\prime}({x_{i}^{j}}):= s(x_{i})\) and extend this to a model of φ′ by assigning the uniquely determined values to y1,…,yp. Let m′ be the model arising in this way from the zero assignment m. If s′ is any model of φ′, then for every 1 ≤ i ≤ l, all j ∈ J and each atom R(u1,…,uq) of φ, the assignment s′ satisfies, in particular, the conjunction \(R(u_{1}^{\prime},\dotsc,u_{q}^{\prime})\land R(u_{1}^{\prime\prime},\dotsc,u_{q}^{\prime\prime})\), where for u ∈ {u1,…,uq} we have u′ = u″ = u if u ∈ {y1,…,yp}, \(u^{\prime}={x_{i}^{1}}\) and \(u^{\prime\prime} = {x_{i}^{j}}\) if u = xi, and \(u^{\prime}=u^{\prime\prime}={x_{k}^{1}}\) if u = xk for some \(k\in \{1,\dotsc,l\}\smallsetminus \{i\}\). Hence, the vectors \((s^{\prime}({x_{1}^{1}}),\dotsc,s^{\prime}({x_{l}^{1}}))\) and \((s^{\prime}({x_{1}^{1}}),\dotsc,s^{\prime}(x_{i-1}^{1}),s^{\prime}({x_{i}^{j}}),s^{\prime}(x_{i + 1}^{1}),\dotsc, s^{\prime}({x_{l}^{1}}))\) both belong to the kernel of A, and so does their difference, which is \(s^{\prime}({x_{i}^{j}}) - s^{\prime}({x_{i}^{1}})\) times the i-th unit vector. As the i-th column of A is non-zero, we must have \(s^{\prime}({x_{i}^{j}}) = s^{\prime}({x_{i}^{1}})\). This also implies that if s′ is zero on \({x_{1}^{1}},\dotsc,{x_{l}^{1}}\), then it must be zero on all \({x_{i}^{j}}\) (1 ≤ i ≤ l, j ∈ J) and thus coincide with m′. Therefore, every feasible solution s′ to the NOSol-instance (φ′, m′) yields a non-zero vector \((s^{\prime}({x_{1}^{1}}),\dotsc,s^{\prime}({x_{l}^{1}}))\) in the kernel of A.

Further, if s′ is an r-approximation to an optimal solution, i.e., if hd(s′, m′) ≤ r·OPT(φ′, m′), then, as \(s^{\prime}({x_{i}^{1}})=s^{\prime}({x_{i}^{j}})\) holds for all j ∈ J and all 1 ≤ i ≤ l, we obtain a solution to the MinDistance problem with Hamming weight w such that tw ≤ hd(s′, m′). Also, any optimal solution to the MinDistance-instance can be extended to a (not necessarily optimal) solution s′ of (φ′, m′), for which one can bound the distance to m′ as follows: OPT(φ′, m′) ≤ hd(s′, m′) ≤ t·OPT(A) + p. Combining these inequalities, we infer tw ≤ rt·OPT(A) + rp, or w ≤ OPT(A)·(r + r/OPT(A) · p/t). We noted above that p is linearly bounded in the size of the input; thus choosing t quadratic in the size of the input bounds w by OPT(A)(r + o(1)), whence we have an AP-reduction with α = 1.

Lemma 43

For constraint languages Γ, where one can decide the existence of, and also find, a feasible solution of NOSol(Γ) in polynomial time, we have the reduction \(\text{\textsf{NOSol}}({\Gamma}) \le_{\text{AP}} \text{\textsf{NOSol}}(({\Gamma}\smallsetminus \{[x],[\neg x]\})\cup \{\approx\})\).

Proof

If an instance (φ, m) does not have feasible solutions, then it does not have nearest other solutions either, so we map it to the generic unsolvable instance ⊥. Consider now formulas φ over variables x1,…,xn with models m where some feasible solution s0 ≠ m exists (and has been computed).

We can assume φ to be of the form \(\psi(x_{1},\dotsc,x_{n}) \land \bigwedge_{i\in I_{1}} [x_{i}] \land \bigwedge_{i\in I_{0}} [\neg x_{i}]\), where ψ is a \(({\Gamma}\smallsetminus \{[x],[\neg x]\})\)-formula and I1, I0 ⊆ {1,…,n}. We transform φ to \(\varphi^{\prime}:= \psi(x_{1},\dotsc,x_{n}) \land \bigwedge_{i\in I_{1}} x_{i} \approx y_{1} \land \bigwedge_{i\in I_{0}} x_{i} \approx z_{1} \land \bigwedge_{i = 1}^{1+n^{2}} (y_{i} \approx y_{1} \land z_{i} \approx z_{1})\) and extend models of φ to models of φ′ in the natural way. Conversely, if s is a model of φ′ with s(yi) = 1 and s(zi) = 0 for all 1 ≤ i ≤ 1 + n², then we can restrict it to a model of φ. Other models of φ′ are not optimal and are mapped to s0. It is not hard to see that this provides an AP-reduction with α = 1.

Proposition 44

For every constraint language Γ satisfying iL ⊆ 〈Γ〉 ⊆ iL2 we have MinDistance ≤AP NOSol(Γ).

Proof

Since we lack compatibility with existential quantification, we shall deal with each co-clone \(\mathcal{B} = \langle{\Gamma}\rangle\) in the interval {iL, iL0, iL1, iL2, iL3} separately. First we perform the reduction from Lemma 42 to \(\textsf{NOSol}(\{R_{\mathcal{B}}, [\neg x]\})\). We then need a reduction to \(\textsf{NOSol}(\{R_{\mathcal{B}}\})\), as the latter reduces to NOSol(Γ) by Proposition 15 and Theorem 1.

This is simple in the cases of iL0 and iL2, since \([\neg x] = \{x\mid R_{\text{iL}_{0}}(x,x,x,x)\}\in \langle{\{R_{\text{iL}_{0}}\}}\rangle_{\land}\) (see Proposition 15) and \([\neg x] = \{x\mid \exists y(R_{\text{iL}_{2}}(x,x,x,y,y,y,x,y))\}\), where the existential quantifier can be handled by an AP-reduction with α = 1 which drops the quantifier and extends every model by assigning 1 to all previously existentially quantified variables. Thereby (optimal) distances between models do not change at all.

In the remaining cases, we reduce \(\textsf{NOSol}(\{R_{\mathcal{B}},[\neg x]\})\le_{\text{AP}} \textsf{NOSol}(\{R_{\mathcal{B}},[x],[\neg x]\})\) and the latter to \(\textsf{NOSol}(\{R_{\mathcal{B}}, \approx\})\) by Lemma 43, which now has to be reduced to \(\textsf{NOSol}(\{R_{\mathcal{B}}\})\). This is obvious for \(\mathcal{B} = \text{iL}\), where equality constraints x ≈ y can be expressed as RiL(x, x, x, y), so ≈ ∈ 〈{RiL}〉∧ (cf. Proposition 15). For iL1 the same can be done using the formula \(\exists z(R_{\text{iL}_{1}}(x,y,z,z))\), where the existential quantifier can be removed by the same sort of simple AP-reduction with α = 1 as employed for iL2. Finally, for iL3 we want to express equality as \(\exists u\exists v(R_{\text{iL}_{3}}(x,x,x,y,u,u,u,v))\). Here, in an AP-reduction, the quantifiers cannot simply be disregarded, as the values of the existentially quantified variables are not constant across all models. They are uniquely determined by the values of x and y for each particular model, though, which allows us to perform a blow-up construction similar to the one in the proof of Lemma 42.

In more detail, given an \(\{R_{\text{iL}_{3}}, \approx\}\)-formula ψ containing variables x1,…,xl, first note that each atomic \(R_{\text{iL}_{3}}\)-constraint \(R_{\text{iL}_{3}}(x_{1},\dotsc,x_{8})\) can be represented as a linear system of equations, namely \(\bigoplus_{i = 1}^{4} x_{i} = 0\) and xi ⊕ xi+4 = 1 for 1 ≤ i ≤ 4. Since equalities xi ≈ xj can be written as xi ⊕ xj = 0, the formula ψ is equivalent to an expression of the form Ax = b, where x = (x1,…,xl). Replacing each equality constraint by the existential formula above and bringing the result into prenex normal form, we get a formula ∃y1,…,yp(φ(y1,…,yp,x1,…,xl)), which is equivalent to ψ and where φ is a conjunctive \(\{R_{\text{iL}_{3}}\}\)-formula. By construction any two models of φ that agree on x1,…,xl must coincide. Thus, introducing variables \({x_{i}^{j}}\) for 1 ≤ i ≤ l and j ∈ J := {1,…,t} and defining φ′ in literally the same way as in the proof of Lemma 42, any model s of ψ yields a model s′ of φ′ by putting \(s^{\prime}({x_{i}^{j}}):=s(x_{i})\) for 1 ≤ i ≤ l and j ∈ J and extending this with the unique values for y1,…,yp satisfying φ(y1,…,yp,x1,…,xl). In this way we obtain a model m′ of φ′ from a given solution m of ψ. Besides, if s′ is any model of φ′, then, as in Lemma 42, the vectors \((s^{\prime}({x_{1}^{1}}),\dotsc,s^{\prime}({x_{l}^{1}}))\) and \((s^{\prime}({x_{1}^{1}}),\dotsc,s^{\prime}(x_{i-1}^{1}),s^{\prime}({x_{i}^{j}}),s^{\prime}(x_{i + 1}^{1}),\dotsc, s^{\prime}({x_{l}^{1}}))\) both satisfy ψ, and thus their difference lies in the kernel of A. Since the variable xi occurs in at least one of the atoms of ψ, the i-th column of A is non-zero, implying that \(s^{\prime}({x_{i}^{j}}) = s^{\prime}({x_{i}^{1}})\) for all j ∈ J and all 1 ≤ i ≤ l. Thus, any model s′ ≠ m′ of φ′ gives a model s ≠ m of ψ by defining \(s(x_{i}):= s^{\prime}({x_{i}^{1}})\) for all 1 ≤ i ≤ l.

The presented construction is an AP-reduction with α = 1, which can be proven completely analogously to the last paragraph of the proof of Lemma 42, choosing t quadratic in the size of ψ.

6.2.3 MinHornDeletion-Equivalent Cases

As in Proposition 38, the need to use the conjunctive closure instead of 〈·〉 causes a case distinction in the proof of the following result, which is the dual variant of [6, Lemma 16]. Correspondingly, Lemma 46 then replaces [6, Lemma 17].

Lemma 45

If Γ is exactly dual Horn (iV ⊆ 〈Γ〉 ⊆ iV2), then one of the following relations is in 〈Γ〉∧: [x → y], [x → y] × {0}, [x → y] × {1}, or [x → y] × {01}.

Proof

The co-clone 〈Γ〉 is equal to iV, iV0, iV1, or iV2. In the case 〈Γ〉 = iV the relation RiV belongs to 〈Γ〉∧ by Theorem 1; because of RiV(y, y, y, x) = [x → y] we have [x → y] ∈ 〈{RiV}〉∧ ⊆ 〈Γ〉∧. The case 〈Γ〉 = iV1 leads to [x → y] × {1} ∈ 〈Γ〉∧ in an analogous manner. The cases 〈Γ〉 = iV0 and 〈Γ〉 = iV2 lead to [x → y] × {0} ∈ 〈Γ〉∧ and [x → y] × {01} ∈ 〈Γ〉∧, respectively, by observing that [S1(y, y, x)] = [S0(¬y, ¬y, ¬x)] = [(¬y ∧ ¬y) ≈ (¬y ∧ ¬x)] = [x → y].

Lemma 46

If Γ is exactly dual Horn (iV ⊆ 〈Γ〉 ⊆ iV2), then the problem NOSol(Γ) is MinHornDeletion-hard.

Proof

There are four cases to consider, namely 〈Γ〉 ∈ {iV, iV0, iV1, iV2}. For simplicity we only present the situation where 〈Γ〉 = iV1; the case 〈Γ〉 = iV2 is very similar, and the other possibilities are even less complicated. At the end we shall give a few hints on how to adapt the proof in these cases.

The basic structure of the proof is as follows: we choose a suitable weak base of iV1 consisting of an irredundant relation R1 and identify a relation H1 ∈ 〈{R1}〉∧ which allows us to encode a sufficiently complicated variant of the MinOnes-problem into NOSol({H1}). By Theorem 1 and Lemma 45 we have H1 ∈ 〈{R1}〉∧ ⊆ 〈Γ〉∧ and [x → y] × {1} ∈ 〈Γ〉∧, wherefore Proposition 15 implies NOSol(Γ′) ≤AP NOSol(Γ) for Γ′ = {H1, [x → y] × {1}}. According to [22, Theorem 2.14(4)], MinHornDeletion is equivalent to MinOnes(Δ) for constraint languages Δ that are dual Horn, not 0-valid, and not implicative hitting set bounded+ with any finite bound, that is, if 〈Δ〉 ∈ {iV1, iV2}. The key point of the construction is to choose R1 and H1 in such a way that we can find a relation G1 satisfying iV1 ⊆ 〈{G1}〉 ⊆ iV2 and ((G1 × {1}) ∪ {0}) × {1} = H1. The latter property will allow us to prove an AP-reduction MinHornDeletion ≡AP MinOnes({G1}) ≤AP NOSol(Γ′), completing the chain.

We first check that R1 := V1 ∘ 〈χ4〉 satisfies 〈{R1}〉 = iV1: by construction, this relation is preserved by the disjunction and by the constant operation with value 1, i.e., 〈{R1}〉 ⊆ iV1. This inclusion cannot be proper, since 0 ∉ R1 (so 〈{R1}〉 ⊈ iI0) and x ∨ (y ∧ z) ∉ R1, while x = (e1β) ∨ (e4β), y = (e1β) ∨ (e2β) and z = (e1β) ∨ (e3β) belong to V1 ∘ 〈χ4〉 (cf. before Theorem 2 for the notation); i.e., the generating function (x, y, z) ↦ x ∨ (y ∧ z) of the clone S00 [13, Figure 2, p. 8] fails to be a polymorphism of R1. For later we note that when β is chosen such that the coordinates of χ4 are ordered lexicographically (and we are going to assume this from now on), then this failure can already be observed within the first seven coordinates of R1. Now according to Theorem 2, the sedenary relation R1 = V1 ∘ 〈χ4〉 is a weak base relation for iV1 without duplicate coordinates, and a brief inspection shows that none of its coordinates is fictitious either. Therefore, R1 is an irredundant weak base relation for iV1. We define H1 to be {(x0,…,x8) ∣ (x0,…,x7,x8,…,x8) ∈ R1}; then clearly H1 ∈ 〈{R1}〉∧. Now we put \(G_{1} := G_{1}^{\prime}\smallsetminus \{\boldsymbol{0}\}\) where \(G_{1}^{\prime} := \{(x_{0},\dotsc,x_{6})\mid (x_{0},\dotsc,x_{8})\in H_{1}\}\), and one quickly verifies that ((G1 × {1}) ∪ {0}) × {1} = H1. Since \(G_{1}^{\prime}\in \langle{H_{1}}\rangle \subseteq \langle{R_{1}}\rangle = \text{iV}_{1}\) and removing the bottom element 0 of a non-trivial join-semilattice with top element still yields a join-semilattice with top element, we have G1 ∈ iV1. With the analogous counterexample as for the relation R1 above, we can show that (x, y, z) ↦ x ∨ (y ∧ z) is not a polymorphism of G1 (because the non-membership is witnessed among the first seven coordinates). Thus, 〈{G1}〉 = iV1; in particular G1, and any relation conjunctively definable from it, is not 0-valid.

For the reduction let now φ(x) = G1(x1) ∧ ⋯ ∧ G1(xk) be an instance of MinOnes({G1}). We construct a corresponding Γ′-formula φ′ as follows.

$$\begin{array}{@{}rcl@{}} \varphi^{\prime\prime}(\boldsymbol{x},y,z) &=& H_{1}({\boldsymbol{x}_{1}},y,z) \land \cdots \land H_{1}({\boldsymbol{x}_{k}},y,z) \\ \varphi^{\prime\prime\prime}(\boldsymbol{x}, {\boldsymbol{x}^{(\boldsymbol{2})}}, \dotsc, {\boldsymbol{x}^{(\boldsymbol{\ell})}},z) &=& \bigwedge_{i = 1}^{\ell} \left( (x_{i} \xrightarrow{z = 1} x_{i}^{(2)}) \land \bigwedge_{j = 2}^{\ell-1} (x_{i}^{(j)} \xrightarrow{z = 1} x_{i}^{(j + 1)}) \land (x_{i}^{(\ell)} \xrightarrow{z = 1} x_{i}) \right)\\ \varphi^{\prime}(\boldsymbol{x}, {\boldsymbol{x}^{(\boldsymbol{2})}}, \dotsc, {\boldsymbol{x}^{(\boldsymbol{\ell})}},y,z) &=& \varphi^{\prime\prime}(\boldsymbol{x},y,z) \land \varphi^{\prime\prime\prime}(\boldsymbol{x}, {\boldsymbol{x}^{(\boldsymbol{2})}}, \dotsc,{\boldsymbol{x}^{(\boldsymbol{\ell})}},z) \end{array} $$

where ℓ = |x| is the number of variables of φ, y and z are new global variables, and where we have written \((u\xrightarrow {w = 1}v)\) to denote ([x → y] × {1})(u, v, w). Let m0 be the assignment to the ℓ² + 2 variables of φ′ given by m0(z) = 1 and m0(v) = 0 for every other variable v. It is clear that (φ′, m0) is an instance of NOSol(Γ′), since m0 satisfies φ′. The formula φ‴ only multiplies each variable x from φ ℓ times and forces x ≈ x(2) ≈ ⋯ ≈ x(ℓ), which is just a technicality for establishing an AP-reduction. The main idea of this proof is the correspondence between the solutions of φ and φ′.
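For concreteness, the following Python sketch builds the atoms of φ′ and the assignment m0 from a MinOnes({G1}) instance. The encoding of G1-atoms as 7-tuples of variable names and the labels "H1" and "IMP1" (for the guarded implication atoms) are our own illustration, not notation from the paper.

    def reduce_minones_to_nosol(constraints, variables):
        """constraints: list of 7-tuples of variable names (the G1-atoms
        of phi); variables: all variables of phi, ell = len(variables).
        Returns the atoms of phi' and the satisfying assignment m0."""
        ell = len(variables)
        atoms = []
        # phi'': one H1-atom per G1-atom, sharing the global variables y, z
        for xs in constraints:
            atoms.append(("H1", tuple(xs) + ("y", "z")))
        # phi''': ell copies of every variable, cyclically chained by the
        # z-guarded implications (u -->_{z=1} v), forcing x ~ x^(2) ~ ... ~ x^(ell)
        for x in variables:
            copies = [x] + [f"{x}({j})" for j in range(2, ell + 1)]
            for u, v in zip(copies, copies[1:] + [copies[0]]):
                atoms.append(("IMP1", (u, v, "z")))
        # m0 maps z to 1 and every other variable to 0
        m0 = {v: 0 for _, vs in atoms for v in vs}
        m0["z"] = 1
        return atoms, m0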

For each solution s of φ(x) there exists a solution s′ of φ′ with s′(y) = 1 (and s′(z) = 1). Every solution s′ of φ′ satisfies s′(z) = 1 and either s′(y) = 0 or s′(y) = 1. Because every variable from x occurs in one of the xi, the assignment m0 is the only solution s′ of φ′ satisfying s′(y) = 0. If otherwise s′(y) = 1, then s′ restricted to the variables x satisfies φ(x), by the correspondence between the relations G1 and H1.

For r ∈ [1, ∞) let s′ be an r-approximate solution of the \(\textsf {NOSol}(\Gamma ^{\prime })\)-instance (φ′, m0). Let \(s := s^{\prime }\!\upharpoonright _{\boldsymbol {x}}\) be the restriction of s′ to the variables of φ. Since s′ ≠ m0, by what we showed before, s′(y) = 1 and s is a solution of φ(x). We have \(\text {OPT}(\varphi ^{\prime }, m_{0}) \geq 2\) and OPT(φ) ≥ 1, since solutions of the \(\textsf {NOSol}(\Gamma ^{\prime })\)-instance \((\varphi ^{\prime },m_{0})\) must be different from m0, whereby y is forced to have value 1, and \([\varphi ]\in \langle {\{G_{1}\}}\rangle _{\wedge }\) is not 0-valid. Moreover, \(\text {hw}(s) = \text {hd}(\boldsymbol {0}, s)\), hd(s′, m0) = ℓ·hw(s) + 1, \(\text {OPT}(\varphi ^{\prime },m_{0}) = \ell \text {OPT}(\varphi ) + 1\), and \(\text {hd}(s^{\prime },m_{0})\leq r\text {OPT}(\varphi ^{\prime },m_{0})\). From this and OPT(φ) ≥ 1 it follows that

$$\begin{array}{@{}rcl@{}} \ell\text{hw}(s) < \ell\text{hw}(s)+ 1 = \text{hd}(s^{\prime},m_{0})&\leq& r\text{OPT}(\varphi^{\prime},m_{0}) = r\ell\text{OPT}(\varphi) + r\\ &\leq& r\ell\text{OPT}(\varphi) + r\text{OPT}(\varphi)\\ &\leq& r\ell\text{OPT}(\varphi) + r\text{OPT}(\varphi) + (r-1)\ell\text{OPT}(\varphi)\\ &=& (2r-1+r/\ell)\ell\text{OPT}(\varphi)\\ &=& (1 + 2(r-1) +r/\ell)\ell\text{OPT}(\varphi). \end{array} $$

Hence s is a (1 + α(r − 1) + o(1))-approximate solution of the instance φ of the problem MinOnes({G1}), where α = 2.

In the case when 〈Γ〉 = iV2, the proof goes through with minor changes: \(R_{2} = \mathrm {V}_{2}\circ \langle {\chi _{4}}\rangle = R_{1}\smallsetminus \{\boldsymbol {1}\}\), so we define H2 and G2 like H1 and G1, just using R2 and H2 in place of R1 and H1. Then we have \(H_{2} = H_{1}\smallsetminus \{\boldsymbol {1}\}\), \(G_{2} = G_{1}\smallsetminus \{\boldsymbol {1}\}\) and 〈{G2}〉 = iV2. Moreover, for the reduction we need an additional global variable w for φ‴ (and φ′), since the encoding of the implication from Lemma 45 requires it (and forces it to zero in every model).

For 〈Γ〉 = iV0 we can use R0 = V0 ∘〈χ4〉 = R2 ∪ {0}; then, letting H0 = {(x0,…,x7) ∣ (x0,…,x7,x7,…,x7) ∈ R0} ∈ 〈{R0}〉, we have H0 = (G2 × {1}) ∪ {0}. On a side note, we observe that H0 = V0 ∘〈χ3〉, which we could use alternatively without detouring via R0. Given the relationship between G2 and H0, we do not need the global variable z in the definition of φ″, but we need it in the definition of φ‴, where the relation given by Lemma 45 necessitates atoms of the form \((u\xrightarrow {z = 0}v)\), forcing z to zero in every model.

The case where 〈Γ〉 = iV is similar to the previous one: we can use the irredundant weak base relation H = V ∘〈χ3〉 = H0 ∪ {1} = (G1 × {1}) ∪ {0}. Except for y in the definition of φ″, no additional global variables are needed in the definition of φ′, because [u → v] atoms are directly available for φ‴.

Corollary 47

If Γ is exactly Horn (iE ⊆ 〈Γ〉 ⊆ iE2) or exactly dual Horn (iV ⊆ 〈Γ〉 ⊆ iV2), then NOSol(Γ) is MinHornDeletion-complete under AP-Turing-reductions.

Proof

Hardness follows from Lemma 46 and duality. Moreover, NOSol(Γ) can be AP-Turing-reduced to NSol(Γ ∪ {[x],[¬x]}) as follows: Given a Γ-formula φ and a model m, we construct for every variable x of φ a formula \(\varphi _{x}= \varphi \land (x\approx \overline {m}(x))\). Then for every x with [φx] ≠ ∅ we run an oracle algorithm for NSol(Γ ∪ {[x],[¬x]}) on (φx, m) and output a result of these oracle calls that is closest to m.
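A minimal sketch of this oracle loop; the formula object with methods variables and and_unit, and the r-approximate oracle nsol_oracle for NSol(Γ ∪ {[x],[¬x]}), are hypothetical interfaces of our own.

    def hd(a, b):
        """Hamming distance of two assignments given as dicts over the
        same variable set."""
        return sum(a[v] != b[v] for v in a)

    def nosol_via_nsol(phi, m, nsol_oracle):
        """AP-Turing reduction: one NSol oracle call per variable of phi.
        phi.and_unit(x, b) conjoins the unit constraint fixing x to b;
        nsol_oracle(psi, m) returns an approximately nearest solution of
        psi, or None if psi has no solution."""
        best = None
        for x in phi.variables():
            # phi_x forces x to differ from m, i.e. fixes x to 1 - m(x)
            phi_x = phi.and_unit(x, 1 - m[x])
            sol = nsol_oracle(phi_x, m)
            if sol is not None and (best is None or hd(m, sol) < hd(m, best)):
                best = sol
        return best        # None means m is the unique solution of phi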

We claim that this algorithm provides indeed an AP-Turing reduction. To see this, observe first that the instance (φ, m) has feasible solutions if and only if this holds for (φx, m) for at least one variable x. Moreover, we have \(\text {OPT}(\varphi ,m) = \min _{x,[\varphi _{x}]\neq \emptyset }(\text {OPT}(\varphi _{x}, m))\). Let A(φ, m) be the answer of the algorithm on (φ, m) and let B(φx, m) be the answers to the oracle calls. Consider a variable x* such that \(\text {OPT}(\varphi ,m) = \min _{x,[\varphi _{x}]\neq \emptyset }(\text {OPT}(\varphi _{x},m)) = \text {OPT}(\varphi _{x^{*}},m)\), and assume that \(B(\varphi _{x^{*}}, m)\) is an r-approximate solution of \((\varphi _{x^{*}},m)\). Then we get

$$\frac{\text{hd}(m, A(\varphi, m))}{\text{OPT}(\varphi, m)} = \frac{\min_{y,[\varphi_y]\neq\emptyset}\text{hd}(m, B(\varphi_y, m))}{\text{OPT}(\varphi_{x^*}, m)} \leq \frac{\text{hd}(m, B(\varphi_{x^*}, m))}{\text{OPT}(\varphi_{x^*}, m)} \leq r . $$

Thus the algorithm is indeed an AP-Turing-reduction from NOSol(Γ) to NSol(Γ ∪{[x],[¬x]}). Note that for Γ ⊆iV2 the problem NSol(Γ ∪{[x],[¬x]}) reduces to MinHornDeletion according to Propositions 29 and 27. Duality completes the proof.

7 Finding the Minimal Distance Between Solutions

In this section we study the optimization problem MinSolutionDistance. We first consider the polynomial-time cases and then the cases of higher complexity.

7.1 Polynomial-Time Cases

We show that for bijunctive constraints the problem MinSolutionDistance can be solved in polynomial time. After stating the result we present an algorithm and analyze its complexity and correctness.

Proposition 48

If Γ is a bijunctive constraint language (Γ ⊆ iD2), then the problem MSD(Γ) is in PO.

By Proposition 32, an algorithm for a bijunctive constraint language Γ may restrict itself to formulas containing at most binary clauses. Alternatively, one can use the plain base

$$\{[x],[\neg x],[x\lor y], [\neg x\lor y], [\neg x\lor\neg y]\} $$

of iD2 exhibited in [12] to see that every relation in Γ can be written as a conjunction of disjunctions of two not necessarily distinct literals. We shall treat these disjunctions as one- or two-element sets of literals when extending the algorithm of Aspvall, Plass, and Tarjan [2] to compute the minimum distance between distinct models of a bijunctive constraint formula.

[Algorithm a — pseudocode figure: computing the minimal solution distance of a bijunctive formula via the order ≤ on literals.]
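The following Python sketch mirrors this procedure. The encoding of literals as (variable, sign) pairs and all identifier names are our own, and the naive closure computation merely stands in for the more careful bookkeeping analyzed below.

    from itertools import product

    def msd_bijunctive(variables, clauses):
        """Minimum Hamming distance between two distinct models of a
        conjunction of one-/two-literal clauses; None if fewer than two
        models exist.  A literal is a pair (x, sign), sign=False meaning
        the negation of variable x."""
        lits = [(x, s) for x in variables for s in (True, False)]
        neg = lambda l: (l[0], not l[1])
        # Each clause {u, v} contributes the implications not-u -> v and
        # not-v -> u; a unit clause {u} contributes not-u -> u.
        le = {(l, l) for l in lits}                      # reflexive pairs
        for clause in clauses:
            clause = list(clause)
            u, v = clause[0], clause[-1]
            le.add((neg(u), v))
            le.add((neg(v), u))
        changed = True                                   # transitive closure
        while changed:
            changed = False
            for (a, b), (c, d) in product(list(le), repeat=2):
                if b == c and (a, d) not in le:
                    le.add((a, d))
                    changed = True
        v0 = {x for x in variables if ((x, True), (x, False)) in le}  # forced 0
        v1 = {x for x in variables if ((x, False), (x, True)) in le}  # forced 1
        if v0 & v1:
            return None                                  # inconsistent formula
        if v0 | v1 == set(variables):
            return None                                  # unique model
        free = [l for l in lits if l[0] not in v0 | v1]  # the literals L'
        classes, seen = [], set()
        for l in free:                                   # classes of L'/~
            if l in seen:
                continue
            cls = {m for m in free if (l, m) in le and (m, l) in le}
            seen |= cls
            classes.append(cls)
        return min(len(c) for c in classes)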

Complexity

The size of \(\mathcal {L}\) is linear in the number of variables; the reflexive closure can be computed in time linear in \(|{\mathcal {L}}|\), the transitive closure in time cubic in \(|{\mathcal {L}}|\), see [32]. The equivalence relation ∼ is the intersection of ≤ restricted to \(\mathcal {L}^{\prime }\) with its inverse (quadratic in \(|{\mathcal {L}^{\prime }}|\)); from it we can obtain the partition \(\mathcal {L}^{\prime }/{\sim }\) in time linear in \(|{\mathcal {L}^{\prime }}|\leq |{\mathcal {L}}|\), including the cardinalities of the equivalence classes and the selection of a class of minimum cardinality. Similarly, the remaining sets from the proof (\(\mathcal {V}_{0}\), \(\mathcal {V}_{1}\), their intersection and union, and thus also \(\mathcal {L}^{\prime }\)) can be computed in polynomial time.

Correctness

The pairs in R arise from interpreting the atomic constraints in φ as implications. By transitivity of implication, the inequality u ≤ v for literals u, v means that every model m of φ satisfies the implication u → v or, equivalently, m(u) ≤ m(v). In particular, x ≤ ¬x implies m(x) = 0, and ¬x ≤ x implies m(x) = 1. Therefore \(\mathcal {V}_{0}\) can be seen to be the set of variables that have to be false in every model of φ, and \(\mathcal {V}_{1}\) the set of variables true in every model.

If \(\mathcal {V}_{0} \cap \mathcal {V}_{1} \neq \emptyset \) holds then the formula φ is inconsistent and has no solution. If \(\mathcal {V}_{0} \cup \mathcal {V}_{1} = \mathcal {V}\) holds, then every variable has a unique fixed value, hence φ has only one solution. Otherwise the formula is consistent and not all variables are fixed, hence there are at least two models.

To determine the minimal number of variables whose values can be flipped between any two models of φ, it suffices to consider the literals without fixed value, \(\mathcal {L}^{\prime }\). If we have u ≤ v and v ≤ u, the literals are equivalent, u ∼ v, and must have the same value in every model. This means that any two distinct models have to differ on all literals of at least one equivalence class in \(\mathcal {L}^{\prime }/{\sim }\). Therefore, the return value of the algorithm is a lower bound for the minimal distance.

To prove that the return value can indeed be attained, we exhibit two models m0 ≠ m1 of φ whose Hamming distance is the least cardinality of an equivalence class in \(\mathcal {L}^{\prime }/{\sim }\). Let \(L \in \mathcal {L}^{\prime }/{\sim }\) be a class of minimum cardinality. Define m0(u) := 0 and m1(u) := 1 for all literals u ∈ L. We extend this by setting m0(w) := m1(w) := 0 for all \(w\in \mathcal {L}\) such that w ≤ u for some u ∈ L, and by m0(w) := m1(w) := 1 for all \(w\in \mathcal {L}\) such that u ≤ w for some u ∈ L. For variables \(v\in \mathcal {V}\) satisfying v ≤ ¬v or ¬v ≤ v we have \(v\in \mathcal {V}_{0}\cup \mathcal {V}_{1}\), and thus \(v\notin \mathcal {L}^{\prime }\); in other words, for \([v]_{\sim } \in \mathcal {L}^{\prime }/{\sim }\) the classes [v]∼ and [¬v]∼ are incomparable. Thus, so far, we have not defined m0 and m1 on a variable \(v\in \mathcal {V}\) and on its negation ¬v at the same time. Of course, fixing a value for a negative literal ¬v implicitly binds the assignment for \(v\in \mathcal {V}\) to the opposite value.

It remains to fix the value of literals in \(\mathcal {L}^{\prime }\) that are neither related to the literals in L nor have fixed values in all models. Suppose \((\bar u, v) \in R\) is a constraint such that the value of at least one literal has not yet been defined. There are three cases: either both literals have not yet received a value, or \(\bar u\) is undefined and v has been assigned the value 1 (either as a fixed value in all models, or because of being greater than a literal in L, or because of being less than a complement of a literal in L), or v is undefined and \(\bar u\) has been assigned the value 0 (either as a fixed value in all models, or because of being smaller than a literal in L, or greater than a complement of a literal in L). All three cases can be handled by defining both models, m0 and m1, on the remaining variables identically: starting with a minimal literal u where m0 and m1 are not yet defined, we assign \(m_{0}(u) := m_{1}(u) := 0\) and \(m_{0}(\overline {u}) := m_{1}(\overline {u}) := 1\).

This way none of the constraints is violated, and m0 and m1 differ only on variables corresponding to literals in L. Iterate this procedure until all variables (and their complements) have been assigned values. If \(\mathcal {L}^{\prime \prime }\subseteq \mathcal {L}^{\prime }\) denotes the set of literals remaining after propagating the values of m0 and m1 on L, then the presented method can be implemented by partitioning \(\mathcal {L}^{\prime \prime }\) into two classes L0 and L1 such that \(L_{0}\cap \{u,\overline {u}\}\) is a singleton for every \(u\in \mathcal {L}^{\prime \prime }\) and each weakly connected component of the quasiordered set \((\mathcal {L}^{\prime \prime },\leq )\) is a subset of either L0 or L1. Then set m0 and m1 to k on the literals belonging to Lk, for k ∈ {0, 1}.

By construction, m0 differs from m1 only in the variables corresponding to the literals in L, so their Hamming distance is |L| as desired. Moreover, both assignments respect the order constraints in \((\mathcal {L},\leq )\). As these faithfully reflect all original atomic constraints, m0 and m1 are indeed models of φ.

Proposition 49

If Γ is a Horn (Γ ⊆ iE2) or a dual Horn (Γ ⊆ iV2) constraint language, then MSD(Γ) is in PO.

We only discuss the Horn case (Γ ⊆iE2), dual Horn (Γ ⊆iV2) being symmetric. The following algorithm improves the description given in [6] by correctly treating two marginal cases where the output is evident.

[Algorithm b — pseudocode figure: computing the minimal solution distance of a Horn formula via resolution, subsumption, and equivalence classes of variables.]
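The sketch below covers only the final phase of the algorithm, assuming the saturation phase (unit resolution/subsumption and hyper-resolution) has already produced the unit clauses \(\mathcal {U}\) and the clause set \(\mathcal {D}\), and that the empty clause is not in \(\mathcal {D}\). The clause encoding and our reading of "dependent variable" (taken from the correctness discussion below) are reconstructions, not the paper's pseudocode.

    def msd_horn_core(variables, D, U):
        """Final phase of the Horn algorithm.  D: saturated set of non-unit
        Horn clauses, each encoded as (frozenset of negated variables, head)
        with head a variable or None; U: dict mapping the variables forced
        by unit clauses to their value."""
        if len(U) == len(variables):
            return None                          # unique model of phi
        dvars = {v for neg, h in D
                   for v in neg | ({h} if h is not None else set())}
        if dvars | set(U) != set(variables):
            return 1                             # an unconstrained variable exists

        def equivalent(x, y):
            # x ~ y iff both binary clauses (not x or y), (not y or x) are in D
            return x == y or ((frozenset({x}), y) in D and (frozenset({y}), x) in D)

        def has_dependent(cls):
            # our reading of dependence: z depends on y1,...,yk if D contains
            # (not z or yi) for every i and the clause
            # (not y1 or ... or not yk or z), the yi not all equivalent to z
            for z in cls:
                for neg, h in D:
                    if (h == z and neg
                            and all((frozenset({z}), y) in D for y in neg)
                            and not all(equivalent(z, y) for y in neg)):
                        return True
            return False

        classes, seen = [], set()                # the partition V/~ on dvars
        for x in sorted(dvars):
            if x not in seen:
                cls = {y for y in dvars if equivalent(x, y)}
                seen |= cls
                classes.append(cls)
        # answer: size of a smallest equivalence class without dependents
        return min(len(c) for c in classes if not has_dependent(c))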

Complexity

The run-time of the algorithm is polynomial in the number of clauses of φ: unit resolution and subsumption can be applied at most once for each variable, and hyper-resolution has to be applied at most once for each variable x and each clause ¬y1 ∨ ⋯ ∨ ¬yk ∨ z or ¬y1 ∨ ⋯ ∨ ¬yk.

Correctness

Adding resolvents and removing subsumed clauses maintains logical equivalence, therefore \(\mathcal {D}\cup \mathcal {U}\) is logically equivalent to φ, i.e., both clause sets have the same models. We note that the sets of variables of \(\mathcal {U}\) and of \(\mathcal {D}\) are disjoint. The unit clauses in \(\mathcal {U}\) are always (uniquely) satisfiable, thus \(\mathcal {D}\) and φ are equisatisfiable. Therefore, if \(\mathcal {D}\) contains the empty clause, φ is also unsatisfiable; otherwise \(\mathcal {D}\) is satisfiable, e.g., by assigning 0 to every \(x\in \mathcal {V}\). In this case, if \(\mathcal {U}\) contains a literal for every variable of φ, the unit clauses in \(\mathcal {U}\) define a unique model of φ.

Otherwise φ has at least two models m1 ≠ m2. In the simplest case some variable x of φ has been left unconstrained by \(\mathcal {D}\) and \(\mathcal {U}\); in this case we can pick any model of \(\mathcal {D}\) and \(\mathcal {U}\) and extend it to two different models of φ with Hamming distance 1 by setting m1(x) = 0 and m2(x) = 1, and m1(y) = m2(y) = 0 for every other variable y outside \(\mathcal {D}\) and \(\mathcal {U}\). For the remaining situations it is sufficient to consider the models of \(\mathcal {D}\) alone, as each model m of \(\mathcal {D}\) extends uniquely to a model of φ by defining m(x) = 1 for \((x)\in \mathcal {U}\) and m(x) = 0 for \((\neg x)\in \mathcal {U}\); hence the minimal Hamming distance between models of φ equals that between models of \(\mathcal {D}\).

We are thus looking for models m1, m2 of \(\mathcal {D}\) such that the size of the difference set Δ(m1,m2) = {x ∣ m1(x) ≠ m2(x)} is minimal. In fact, since the models of Horn formulas are closed under minimum, we may assume m1 < m2, i.e., we have m1(x) = 0 and m2(x) = 1 for all variables x ∈ Δ(m1,m2). Indeed, given two models m2 and \(m_{2}^{\prime }\) of \(\mathcal {D}\) where neither \(m_{2}\leq m_{2}^{\prime }\) nor \(m_{2}^{\prime }\leq m_{2}\), the assignment \(m_{1}= m_{2} \land m_{2}^{\prime }\) is also a model, and it is distinct from m2. Since \(\text {hd}(m_{1},m_{2}) \leq \text {hd}(m_{2},m_{2}^{\prime })\), the minimal Hamming distance is attained between models m1 and m2 satisfying m1 < m2.

Note the following facts regarding the equivalence relation ∼ and dependent variables.

  • If x ∼ y then the two variables must have the same value in every model of \(\mathcal {D}\) in order to satisfy the clauses ¬x ∨ y and ¬y ∨ x. This means that for all models m of \(\mathcal {D}\) and all \(X\in \mathcal {V}/\sim \), we have either m(x) = 0 for all x ∈ X or m(x) = 1 for all x ∈ X.

  • The dependence of variables is acyclic: If, for some l ≥ 2 and every 1 ≤ i < l, the variable zi depends on variables including one, say yi, which is equivalent to zi+1, and if zl = z1, then there is a cycle of binary implications between the variables and thus zi ∼ yi ∼ zj for all i, j, contradicting the definition of dependence.

  • If a variable z depending on y1, …, yk belongs to a difference set Δ(m1,m2), then at least one of the yi also has to belong to Δ(m1,m2): m2(z) = 1 implies m2(yj) = 1 for all j = 1,…,k (because of the clauses ¬z ∨ yj), and m1(z) = 0 implies m1(yi) = 0 for at least one i (because of the clause ¬y1 ∨ ⋯ ∨ ¬yk ∨ z). Therefore Δ(m1,m2) is the union of at least two sets in \(\mathcal {V}/\sim \), namely the equivalence class of z and that of yi.

  • If some z1 ∈ Δ(m1,m2) is equivalent to a variable \(z_{1}^{\prime }\) that depends on some other variables, then among them there is a variable z2 which also belongs to Δ(m1,m2). If the equivalence class of z2 still contains a variable \(z_{2}^{\prime }\) depending on other variables, we can iterate this procedure. In this way we obtain a sequence \(z_{1}\sim z_{1}^{\prime }, z_{2}\sim z_{2}^{\prime }, z_{3}\sim z_{3}^{\prime }, \dotsc \) where \(z_{i}^{\prime }\) depends on variables including zi+1, which is equivalent to \(z_{i + 1}^{\prime }\). Because there are only finitely many variables and because of acyclicity, after a linear number of steps we must reach a variable zn ∈ Δ(m1,m2) whose equivalence class (being a subset of the difference set) does not contain any dependent variables.

Hence the difference between any two models cannot be smaller than the cardinality of the smallest set in \(\mathcal {V}/\sim \) without dependent variables. It remains to show that we can indeed find two such models.

Let X be a set in \(\mathcal {V}/\sim \) of minimal cardinality among the sets without dependent variables, and let m0, m1 be interpretations defined as follows: (1) m0(y) = 0 and m1(y) = 1 if y ∈ X; (2) m0(y) = m1(y) = 1 if y ∉ X and \((\neg x\lor y)\in \mathcal {D}\) for some x ∈ X; (3) m0(y) = m1(y) = 0 otherwise. We have to show that m0 and m1 satisfy all clauses in \(\mathcal {D}\). Let m be either of these models. \(\mathcal {D}\) contains two types of clauses.

  1. Type 1:

    Horn clauses with a positive literal, ¬y1 ∨ ⋯ ∨ ¬yk ∨ z. If m(yi) = 0 for some i, we are done. So suppose m(yi) = 1 for all i = 1,…,k; we have to show m(z) = 1. The condition m(yi) = 1 means that either yi ∈ X (for m = m1) or that there is a clause \((\neg x_{i} \lor y_{i})\in \mathcal {D}\) for some xi ∈ X. We distinguish the two cases z ∈ X and z ∉ X.

    Let z ∈ X. If z = yi for some i, we are done, for we have m(z) = m(yi) = 1. So suppose z ≠ yi for all i. As the elements of X, in particular z and the xi, are equivalent and the binary clauses are closed under resolution, \(\mathcal {D}\) contains the clause ¬z ∨ yi for all i. But this would mean that z is a variable depending on the yi, contradicting the assumption z ∈ X.

    Let z ∉ X, and let x ∈ X. As the elements of X are equivalent and the binary clauses are closed under resolution, \(\mathcal {D}\) contains ¬x ∨ yi for all i. Closure under hyper-resolution with the clause ¬y1 ∨ ⋯ ∨ ¬yk ∨ z means that \(\mathcal {D}\) also contains ¬x ∨ z, whence m(z) = 1.

  2. Type 2:

    Horn clauses with only negative literals, ¬y1 ∨ ⋯ ∨ ¬yk. If m(yi) = 0 for some i, we are done. It remains to show that the assumption m(yi) = 1 for all i = 1,…,k leads to a contradiction. The condition m(yi) = 1 means that either yi ∈ X (for m = m1) or that there is a clause \((\neg x_{i}\lor y_{i})\in \mathcal {D}\) for some xi ∈ X. Let x be some element of X. Since the elements of X are equivalent and the binary clauses are closed under resolution, \(\mathcal {D}\) contains the clause ¬x ∨ yi for all i. But then a hyper-resolution step with the clause ¬y1 ∨ ⋯ ∨ ¬yk would yield the unit clause ¬x, which by construction does not occur in \(\mathcal {D}\). Therefore at least one yi is neither in X nor part of a clause ¬x ∨ yi with x ∈ X, i.e., m(yi) = 0.

7.2 Hard Cases

7.2.1 Two Solution Satisfiability

In this section we study the feasibility problem of MSD(Γ) which is, given a Γ-formula φ, to decide if φ has two distinct solutions.

Problem: TwoSolutionSAT(Γ)

Input: A conjunctive formula φ over the relations from the constraint language Γ.

Question: Are there two satisfying assignments m ≠ m′ of φ?

A priori it is not clear that the tractability of TwoSolutionSAT is fully characterized by co-clones. The problem is that the implementation of the relations of a language Γ by another language Γ′ might not be parsimonious, that is, one solution of a constraint might be blown up into several solutions of its implementation. Fortunately, we can still determine the tractability frontier for TwoSolutionSAT by combining the corresponding results for SAT and AnotherSAT.

Lemma 50

Let Γ be a constraint language for which SAT(Γ) is NP-hard. Then the problem TwoSolutionSAT(Γ) is NP-hard.

Proof

Since SAT(Γ) is NP-hard, there must be a relation R in Γ containing more than one tuple, because every relation containing only one tuple is at the same time Horn, dual Horn, bijunctive, and affine. Given an instance φ of SAT(Γ), construct φ′ as φ ∧ R(y1,…,yℓ), where ℓ is the arity of R and y1,…,yℓ are new variables not appearing in φ. Obviously, φ has a solution if and only if φ′ has at least two solutions. Hence, we have proved SAT(Γ) ≤m TwoSolutionSAT(Γ).
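A toy sketch of this construction, under a hypothetical encoding of formulas as lists of (relation, variable-tuple) atoms:

    def two_solution_instance(phi, R, arity):
        """phi' = phi AND R(y1,...,yl) on fresh variables: since R has at
        least two tuples, phi is satisfiable iff phi' has two distinct
        solutions."""
        fresh = tuple(f"y{i}" for i in range(1, arity + 1))  # new variables
        return list(phi) + [(R, fresh)]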

Lemma 51

Let Γ be a constraint language for which AnotherSAT(Γ) is NP-hard. Then the problem TwoSolutionSAT(Γ) is NP-hard.

Proof

Let a Γ-formula φ and a satisfying assignment m be an instance of the problem AnotherSAT(Γ). Then φ has a solution other than m if and only if it has two distinct solutions.

Lemma 52

Let Γ be a constraint language for which both problems SAT(Γ) and AnotherSAT(Γ) are in P. Then TwoSolutionSAT(Γ) is also in P.

Proof

Let φ be an instance of TwoSolutionSAT(Γ). All polynomial-time decidable cases of SAT(Γ) are constructive, i.e., whenever the problem is polynomial-time decidable, there is also a polynomial-time algorithm computing a satisfying assignment provided one exists. If φ is not satisfiable, we reject the instance. Otherwise, we compute in polynomial time a satisfying assignment m of φ and use the algorithm for AnotherSAT(Γ) on the instance (φ, m) to decide whether φ has a second solution.

Corollary 53

For any constraint language Γ, the problem TwoSolutionSAT(Γ) is in P if both SAT(Γ) and AnotherSAT(Γ) are in P. Otherwise, TwoSolutionSAT(Γ) is NP-hard.

Proposition 54

Let Γ be a constraint language for which TwoSolutionSAT(Γ) is in P. Then there is a polynomial-time n-approximation algorithm for MSD(Γ), where n is the number of variables of the Γ-formula on input.

Proof

Since TwoSolutionSAT(Γ) is in P, both SAT(Γ) and AnotherSAT(Γ) must be in P by Corollary 53. Since SAT(Γ) is in P, we can compute a model m of the input φ in polynomial time if one exists. Now we solve the AnotherSAT(Γ)-instance (φ, m). If it has a solution m′ ≠ m, it is also computable in polynomial time, and we return (m, m′). If we fail anywhere in this process, the MSD(Γ)-instance φ has no feasible solutions; otherwise, hd(m, m′) ≤ n ≤ n · OPT(φ).
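A sketch of this trivial approximation algorithm, with hypothetical polynomial-time procedures sat_solver and another_sat_solver standing in for the decision procedures guaranteed by Corollary 53:

    def msd_n_approximation(phi, sat_solver, another_sat_solver):
        """Returns a pair of distinct models at Hamming distance <= n,
        i.e. an n-approximate MSD answer, or None if no feasible solution
        exists."""
        m = sat_solver(phi)                  # some model, or None
        if m is None:
            return None                      # unsatisfiable
        m2 = another_sat_solver(phi, m)      # a second model, or None
        if m2 is None:
            return None                      # unique model
        # hd(m, m2) <= n <= n * OPT(phi), so (m, m2) is n-approximate
        return (m, m2)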

7.2.2 MinDistance-Equivalent Cases

In this section we show that, as for the NearestOtherSolution problem, the affine cases of MSD are MinDistance-complete.

Proposition 55

MSD(Γ) is MinDistance-complete if the constraint language Γ satisfies the inclusions iL ⊆ 〈Γ〉 ⊆ iL2.

Proof

We prove MSD(Γ) ≡AP NOSol(Γ), which is MinDistance-complete for each constraint language Γ satisfying the inclusions iL ⊆ 〈Γ〉 ⊆ iL2, according to Proposition 44. As the inclusion Γ ⊆ iL2 = 〈{even4, [x], [¬x]}〉 holds, any Γ-formula ψ is expressible as ∃y(A1x + A2y = c). The projection of an affine solution space is again an affine space, so it can be understood as the solution set of a system Ax = b. If (ψ, m0) is an instance of NOSol(Γ), then ψ is an MSD(Γ)-instance, and a feasible solution m1 ≠ m2 of ψ gives a feasible solution m3 := m0 + (m2 − m1) for (ψ, m0), where hd(m0,m3) = hd(m1,m2). Conversely, a solution m3 ≠ m0 of (ψ, m0) yields a feasible answer to the MSD-instance ψ. Thus OPT(ψ) = OPT(ψ, m0), and so NOSol(Γ) ≤AP MSD(Γ). The other way round, if ψ is an MSD-instance, we attempt to solve the system Ax = b defined by it; if there is no solution or a unique one, the instance has no feasible solutions. Otherwise, we have at least two distinct models of ψ; let m0 be one of them. As above we conclude OPT(ψ) = OPT(ψ, m0), and therefore MSD(Γ) ≤AP NOSol(Γ).
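Since the solution space is affine over GF(2), the shift m3 := m0 + (m2 − m1) used in the proof is a coordinatewise XOR. A minimal sketch of the two translations between feasible solutions, assuming models are given as 0/1 numpy vectors:

    import numpy as np

    def nosol_answer_from_msd(m0, m1, m2):
        """Feasible MSD answer (m1, m2) -> feasible NOSol answer m3 != m0
        with hd(m0, m3) = hd(m1, m2); over GF(2) the affine shift
        m0 + (m2 - m1) is a coordinatewise XOR."""
        return m0 ^ m1 ^ m2

    def msd_answer_from_nosol(m0, m3):
        """Conversely, a NOSol answer m3 != m0, paired with m0, is itself
        a feasible MSD answer with the same objective value."""
        return (m0, m3)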

7.2.3 Tightness Results

We prove that Proposition 54 is essentially tight for some constraint languages. This result builds heavily on the previous results from Section 6.2.1.

Proposition 56

For a constraint language Γ satisfying the inclusions iN ⊆ 〈Γ〉 ⊆ iI and any ε > 0 there is no polynomial-time n^{1−ε}-approximation algorithm for MSD(Γ), unless P = NP.

Proof

We show that any polynomial-time n^{1−ε}-approximation algorithm for MSD(Γ) would allow us to decide in polynomial time the problem AnotherSATnc(Γ), which is NP-complete by Proposition 38.

The algorithm works as follows. Given an instance (φ, m) of AnotherSATnc(Γ), it accepts if m is not a constant assignment; since Γ is 0-valid (and 1-valid), this output is correct. If φ has only one variable, it rejects, because then φ has only its two constant models. Otherwise it proceeds as follows.

For each variable x of φ we construct a new formula \(\varphi ^{\prime }_{x}\) as follows. Let k be the smallest integer greater than 1/ε. Introduce n^k − n new variables x^i for i = 1,…,n^k − n. For every i ∈ {1,…,n^k − n} and every constraint R(y1,…,yℓ) of φ with x ∈ {y1,…,yℓ}, construct a new constraint \(R({z_{1}^{i}}, \ldots , z_{\ell }^{i})\) by setting \({z_{j}^{i}} = x^{i}\) if yj = x and \({z_{j}^{i}} = y_{j}\) otherwise; add all the newly constructed constraints to φ to obtain \(\varphi ^{\prime }_{x}\). Note that we can extend any model s of φ to a model s′ of \(\varphi ^{\prime }_{x}\) by setting s′(x^i) = s(x). In particular, this can be done for m, yielding \(m^{\prime }\in [\varphi ^{\prime }_{x}]\). As Γ ⊆ iI = iI0 ∩ iI1, the MSD(Γ)-instance \(\varphi ^{\prime }_{x}\) has feasible solutions; thus we run the n^{1−ε}-approximation algorithm for MSD(Γ) on \(\varphi ^{\prime }_{x}\). If for every x the answer is a pair (m1,m2) with \(m_{2} = \overline {m_{1}}\), then we reject; otherwise we accept.
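A sketch of this amplification step, again under our own (relation, variable-tuple) encoding of constraints; amplify is a hypothetical helper name.

    import math

    def amplify(phi, x, eps):
        """phi: list of (relation, variable-tuple) constraints; returns
        phi'_x, in which every constraint mentioning x is replicated
        n^k - n times with fresh clones of x."""
        n = len({v for _, vs in phi for v in vs})
        k = math.floor(1 / eps) + 1          # smallest integer > 1/eps
        phi_x = list(phi)
        for i in range(1, n**k - n + 1):
            clone = f"{x}^{i}"               # the fresh variable x^i
            for rel, vs in phi:
                if x in vs:
                    phi_x.append((rel, tuple(clone if v == x else v for v in vs)))
        return phi_x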

This procedure is a correct polynomial-time algorithm for AnotherSATnc(Γ). The polynomial runtime is clear; it remains to show correctness. If φ has only constant models, then the same is true for every \(\varphi ^{\prime }_{x}\), since φ contains a variable distinct from x. Thus each approximation must return a pair of complementary constant assignments, and the output is correct. Assume now that there is a model s of φ different from 0 and 1. Then there exists a variable x with s(x) = m(x), because m is constant. It follows that \(\varphi ^{\prime }_{x}\) has a model s′ fulfilling \(\text {OPT}(\varphi ^{\prime }_{x})\leq \text {hd}(s^{\prime }, m^{\prime })<n\), where n is the number of variables of φ. But then the approximation algorithm must find two distinct models m1 ≠ m2 of \(\varphi ^{\prime }_{x}\) satisfying hd(m1,m2) < n · (n^k)^{1−ε} = n^{k(1−ε)+1}. Since we stipulated k > 1/ε, it follows that hd(m1,m2) < n^k. Consequently, we have \(m_{2}\neq \overline {m_{1}}\), and the output of our algorithm is again correct.

8 Concluding Remarks

The problems investigated in this paper are quite natural. In the space of bit-vectors we search for a solution of a formula that is closest to a given point, or for a solution next to a given solution, or for two solutions witnessing the smallest Hamming distance between any two solutions. Our results describe the complexity of exploring the solution space for arbitrary families of Boolean relations. Moreover, our problems generalize problems familiar from the literature: MinOnes, NearestCodeword, and DistanceSAT are instances of our NearestSolution, while MinDistance is the same as our problem MinSolutionDistance when restricting the latter to affine relations.

To prove the results, we first had to extend the notion of AP-reduction. The optimization problems considered in the literature have the property that each instance has at least one feasible solution. This is not the case when looking for nearest solutions regarding a given solution or a prescribed Boolean tuple, as a formula may have just a single solution or no solution at all. Therefore we had to refine the notion of AP-reductions such that it correctly handles instances without feasible solutions.

The complexity of NearestSolution can be classified by the usual approach: We first show that for each constraint language the complexity of the problem does not change when admitting existential quantifiers and equality, and then check all finitely related clones according to Post’s lattice. This approach does not work for the problems NearestOtherSolution and MinSolutionDistance: It does not seem to be possible to show a priori that the complexity remains unaffected under such language extensions. In principle the complexity of a problem might well differ for two constraint languages Γ1 and Γ2 that generate the same co-clone (〈Γ1〉 = 〈Γ2〉) but differ with respect to partial polymorphisms (〈Γ1 ∪ {≈}〉 ≠ 〈Γ2 ∪ {≈}〉). Theorems 4 and 5 finally show that this is not the case, but we learn this only a posteriori. Our method of proof fundamentally relies on irredundant weak bases, which seem to be the perfect fit for such a situation: compatibility with existential quantification is not required a priori, but it follows once the proof succeeds using weak bases alone.

Figure 4 compares the complexity classifications of the three problems. Regarding NearestSolution and NearestOtherSolution, knowing that an assignment is a solution apparently helps in finding a solution nearby. For expressive constraint languages it is NP-complete to decide whether a feasible solution exists at all; for NearestSolution this requires the existence of at least one satisfying assignment, while the other two problems even need two. Kann proved in [21] that MinOnes(Γ) is NPOPB-complete for 〈Γ〉 = BR, where NPOPB is the class of NPO problems with a polynomially bounded objective function. This result implies that NearestSolution(Γ) is NPOPB-complete for 〈Γ〉 = BR as well. It is unclear whether this result also holds for 〈Γ〉 = iN2. It may be possible to find a suitable constraint language Γ satisfying 〈Γ〉 = BR such that MinOnes(Γ) reduces to NearestOtherSolution(Γ) for iI0 ⊆ 〈Γ〉 or iI1 ⊆ 〈Γ〉, thus proving that NOSol(Γ) is NPOPB-complete in these cases. Likewise, the NPOPB-hardness of MSD(Γ) for iN2 ⊆ 〈Γ〉, iI0 ⊆ 〈Γ〉, or iI1 ⊆ 〈Γ〉 remains open for the time being.

Fig. 4 Comparing the complexities: the hard cases (colored blue and black) are the same, whereas the polynomial cases (green) increase from left to right