1 Introduction

In wireless ad-hoc networks, nodes equipped with limited power supply transmit data using a multi-hop path. We study the problem of minimizing the overall power consumption while maintaining full network connectivity, that is, each node can send messages to each other node using some (multi-hop) route through the network. Formally, we study the following optimization problem.

Min-Power Asymmetric Connectivity (MinPAC)

  1. Input:

    A strongly connected directed graph G and a weight function (cost function) \(w \colon A(G) \to \mathbb{N}\).

  2. Task:

    Find a strongly connected spanning subgraph H of G minimizing

    $$ \sum\limits_{v \in V} \max_{vu \in A(H)} w(vu). $$
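To make the objective concrete, here is a minimal Python sketch (helper and variable names are ours, not from the paper) that evaluates this cost for a candidate spanning subgraph H given as a set of arcs: every vertex pays the weight of its most expensive outgoing arc in H.

```python
def pac_cost(vertices, solution_arcs, w):
    """MinPAC objective: every vertex pays the weight of its most expensive
    outgoing arc in the chosen subgraph H (given as a set of arcs)."""
    cost = 0
    for v in vertices:
        out = [w[(v, u)] for (x, u) in solution_arcs if x == v]
        if out:                          # vertices without outgoing arcs pay nothing
            cost += max(out)
    return cost

# toy example: a bidirected triangle; H orients it as a directed cycle
V = {1, 2, 3}
w = {(1, 2): 4, (2, 1): 5, (2, 3): 1, (3, 2): 6, (3, 1): 2, (1, 3): 3}
H = {(1, 2), (2, 3), (3, 1)}
print(pac_cost(V, H, w))                 # 4 + 1 + 2 = 7
```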

Related work

This problem was initially formalized and shown to be NP-complete by Chen and Huang [9]. Since then, there have been numerous publications on polynomial-time approximation algorithms (an asymptotically optimal \(O(\log n)\) approximation [6], a constant approximation factor with symmetric arc weights [4, 9], approximation algorithms for special cases [5, 7, 8]), and hardness results for special cases [7, 10]. To the best of our knowledge, the parameterized complexity of MinPAC has not been investigated yet.

In previous work, we investigated the parameterized complexity of the symmetric version of our problem [2]; the difference to MinPAC is that an undirected graph is given and for every undirected edge in the solution subgraph H both endpoints pay at least the weight of the edge. The asymmetric case turns out to be more involved on a technical level. However, comparable results (to the symmetric case) are achievable.

Our contribution

We show algorithmic results for grid-like and tree-like input graphs as well as parameterized hardness for very restricted cases. Table 1 summarizes our results. We discuss the different parameters subsequently.

Table 1 Overview of our results, using the following notation: n—number of vertices, m—number of arcs, c—number of strongly connected components in the obligatory subgraph (see Section 2), q—number of different arc weights, x—size of a minimum vertex cover in the underlying undirected graph (see Section 3), g—size of a minimum feedback edge set of the underlying undirected graph (see Section 4), h—size of a minimum feedback arc set (see Section 5), PAC—the decision version of MinPAC asking for a solution of cost at most k

It is known that aligning nodes in certain regular grid-like patterns is optimal for fully covering a plane. In such cases, we can assume that the obligatory arcs, that is, arcs that can be included in any optimal solution at no additional cost, induce a small number c of strongly connected components, as there are many arcs of minimum weight. In Section 2, we define obligatory arcs, discuss the connection to grid-like graphs, and present an algorithm that solves MinPAC in linear time when c is a constant.

In Section 3, we study MinPAC parameterized by the number of different arc weights and the vertex cover number. For this combined parameter, we present an exponential-size kernel.

In Section 4, we describe a linear-time algorithm which reduces any input instance to an equivalent instance with at most 20g − 20 vertices and 42g − 42 arcs, where g is the feedback edge number of the underlying undirected graph. The parameter is also motivated by real-world applications in which the feedback edge number is small; for instance, sensor networks along waterways (including canals) are expected to have a small number of feedback edges. It follows from our result that the problem can be solved in polynomial time for \(g \in O(\log n)\), that is, for very tree-like input graphs. In terms of parameterized complexity, this gives us a partial (weights left unbounded) kernelization of MinPAC with respect to the feedback edge number. Using an existing weight-shrinking technique [3], we also provide a “full” polynomial-size kernel with respect to the feedback edge number.

Finally, in Section 5 we derive hardness results for PAC, the decision version of MinPAC. We show that even if the input graph has only binary weights and is almost a DAG (a directed acyclic graph with one additional arc), PAC parameterized by the solution cost is W[2]-hard. This is in sharp contrast to the FPT result for the parameter feedback edge number.

Preliminaries

For \(a \in \mathbb{N}\), we abbreviate \(\{1, \dots , a\}\) by [a]. Throughout this work, we assume that a graph is directed unless stated otherwise. For a graph G = (V,A), we write V (G) to denote V and A(G) to denote A. We abbreviate arcs (u,v) ∈ A by uv. We denote by \(G[V^{\prime }]\) the subgraph induced by \(V^{\prime }\subseteq V(G)\). We use G + vu to denote (V (G) ∪{v,u},A(G) ∪{vu}) and G − vu to denote (V (G),A(G) ∖{vu}). For a vertex \(v \in V(G)\), we write \(N^{+}_{G}(v) = \{ u \mid vu \in A \}\) and \(N^{-}_{G}(v) = \{ u \mid uv \in A \}\) to denote the out- and in-neighborhood of v. We define the degree of v as \(\deg _{G}(v) = |N^{+}_{G}(v) \mathbin {\cup } N^{-}_{G}(v)|\). We say that \(S \subseteq V(G)\) is a strongly connected component if there exists a path from each vertex \(u \in S\) to every other vertex \(v \in S\) in G[S]. We write \(\mathcal {S}_{G}\) to denote the set of strongly connected components. We use \(U_{G}\) to denote the underlying undirected graph of G. We denote the optimal cost of an instance I of MinPAC by OPT(I). The cost of a vertex subset \(V^{\prime } \subseteq V(G)\) in a solution with arcs \(A^{\prime } \subseteq A(G)\) is denoted by \(\text {Cost}(V^{\prime }, A^{\prime }, w) = {\sum }_{v \in V^{\prime }} \max \limits _{vu \in A^{\prime }} w(vu)\). For ease of notation, we write \(w(vu) = \infty \) if \(vu \notin A\).

A parameterized problem π is a set of pairs (I,k), where I denotes the problem instance and k is the parameter. The problem π is fixed-parameter tractable (FPT) if there exists an algorithm solving any instance of π in \(f(k) \cdot |I|^{c}\) time, where f is some computable function and c is some constant. A reduction to a problem kernel is a polynomial-time algorithm that, given an instance (I,k) of π, returns an equivalent instance \((I^{\prime }, k^{\prime })\), such that \(|I^{\prime }| + k^{\prime } \le g(k)\) for some computable function g. Problem kernels are usually achieved by applying data reduction rules. Given an instance (I,k) for MinPAC, our data reduction rules compute in polynomial time a new instance \((I^{\prime }, k^{\prime })\) of MinPAC and a number d. We call a data reduction rule correct if \(\text {OPT}(I) = \text {OPT}(I^{\prime }) + d\).

2 Parameterization by the Number of Strongly Connected Components Induced by the Obligatory Arcs

In this section we present a fixed-parameter algorithm with respect to the number c of strongly connected components (SCCs) induced by obligatory arcs—arcs that can be included into any optimal solution with no additional cost. We find the obligatory arcs by means of lower bounds on costs paid by each vertex.

Definition 1

A vertex lower bound is a function \(\ell \colon V(G) \to \mathbb{N}\) such that for any optimal solution H and any vertex \(v \in V(G)\), it holds that

$$ \ell(v) \le \max_{vu \in A(H)} w(vu). $$

Observe that each vertex \(v \in V(G)\) has at least one outgoing arc in any optimal solution. Hence, the cost paid by v in any optimal solution is at least \(\min \limits _{vu \in A(G)} w(vu)\), so setting \(\ell (v) := \min \limits _{vu \in A(G)} w(vu)\) yields a valid vertex lower bound, which we call the trivial vertex lower bound. Moreover, if a vertex v has only one incoming arc uv, then the cost for the vertex u is at least w(uv), and thus ℓ(u) may be raised to w(uv). Clearly, finding more effective but still efficiently computable vertex lower bounds is challenging on its own.
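The trivial vertex lower bound just described is straightforward to compute; the following Python sketch (our own adjacency representation, assuming weights are given as a dictionary on arcs) also incorporates the unique-incoming-arc observation.

```python
from collections import defaultdict

def trivial_lower_bound(arcs, w):
    """arcs: iterable of (v, u) pairs; w: dict mapping (v, u) -> weight.
    Returns ell with ell[v] a lower bound on the cost v pays in any optimal solution."""
    out_nbrs, in_nbrs = defaultdict(set), defaultdict(set)
    for (v, u) in arcs:
        out_nbrs[v].add(u)
        in_nbrs[u].add(v)
    # every vertex keeps at least one outgoing arc, so it pays at least its cheapest one
    ell = {v: min(w[(v, u)] for u in out_nbrs[v]) for v in out_nbrs}
    # a vertex v with a unique incoming arc uv forces u to pay at least w(uv)
    for v, preds in in_nbrs.items():
        if len(preds) == 1:
            (u,) = preds
            ell[u] = max(ell[u], w[(u, v)])
    return ell
```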

Definition 2

The obligatory subgraph \(G_{\ell}\) induced by a vertex lower bound \(\ell\) for G is the subgraph \((V(G), A_{\ell})\), where \(A_{\ell} = \{ vu \in A(G) \mid w(vu) \le \ell(v) \}\).

It has been shown that sensors are optimally placed for fully covering an area when they are deployed in a triangular lattice pattern [18] or a strip-based pattern [1, 15]. In such cases, there are many arcs of minimum weight. Taking these arcs usually suffices to (almost) achieve strong connectivity. So even the obligatory subgraph induced by the trivial vertex lower bound described above yields a small number of SCCs.

Let \(\ell\) be a vertex lower bound for a graph G. We denote the number of SCCs of the obligatory subgraph \(G_{\ell}\) by \(c = |\mathcal {S}_{G_{\ell }}|\). The number c of (strongly) connected components in the obligatory subgraph has recently been used as parameter to obtain FPT results [2, 17]. In this section, we also provide an FPT result with respect to this parameter. More specifically, we will present an algorithm for MinPAC that runs in \(O(2^{c} c^{2} n + m + 4^{c} c^{2c - 3/2})\) time. Our algorithm runs in three phases. In the first phase, it shrinks the graph to a relevant subgraph in which each vertex v has at most one arc towards each SCC that does not contain v (Algorithm 1). In the second phase, it uses a dynamic programming algorithm to compute the minimum cost to connect each SCC to each subset of other SCCs (Algorithm 2). In the last phase, it exhaustively tries all combinations of connecting SCCs to find an optimal solution (Algorithm 3).

  1. Phase 1

    The following lemma specifies the conditions under which we can remove arcs. It plays a central role in this phase. The basic idea herein is to remove, for each vertex vV (G) and each SCC S, all but the cheapest arc from v to vertices in S.

Lemma 1

Let (G,w) be an instance of MinPAC and let \(\ell\) be a vertex lower bound. Let \(S_{v}, S_{u} \in \mathcal {S}_{G_{\ell }}\) be two distinct SCCs and let \(v \in S_{v}\) and \(u, u^{\prime } \in S_{u}\) be vertices of G with \(w(vu) \le w(vu^{\prime })\). Then, it holds that \(\text {OPT}((G, w)) = \text {OPT}((G - vu^{\prime }, w))\).

Proof

Observe that the removal of an arc does not decrease the cost of an optimal solution. Thus, \(\text {OPT}((G, w)) \le \text {OPT}((G - vu^{\prime }, w))\). Let H be an optimal solution of (G,w). If H does not contain \(vu^{\prime }\), then H is also a solution of \((G - vu^{\prime }, w)\). So suppose that H contains \(vu^{\prime }\). The cost paid by v in H is then at least \(w(vu^{\prime })\). Since u and \(u^{\prime }\) both belong to Su, the subgraph \(H^{\prime } = H + vu - vu^{\prime }\) is a strongly connected spanning subgraph. Since \(w(vu) \le w(vu^{\prime })\), it follows that \(H^{\prime }\) is optimal. Moreover, \(H^{\prime }\) is also a solution of \((G - vu^{\prime }, w)\), and thus \(\text {OPT}((G, w)) \ge \text {OPT}((G - vu^{\prime }, w))\). □

Algorithm 1 (pseudocode listing)
Algorithm 2 (pseudocode listing)
Algorithm 3 (pseudocode listing)

Algorithm 1 exhaustively removes all arcs \(vu^{\prime }\) which satisfy the preconditions of Lemma 1: The algorithm iterates over each arc in G twice. It finds a minimum-weight arc from each vertex to each SCC in the first iteration. In the second iteration, for every pair of an initial vertex and a target SCC, it removes all arcs except one minimum-weight arc.
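The following Python sketch mirrors this two-pass procedure; it is an illustration of the idea rather than the paper's pseudocode, and assumes that scc_of maps every vertex to an identifier of its SCC in the obligatory subgraph.

```python
def relevant_subgraph(arcs, w, scc_of):
    """Keep, for every vertex v and every SCC S not containing v, exactly one
    minimum-weight arc from v into S; arcs inside an SCC are kept untouched."""
    best = {}                                   # (v, target SCC) -> cheapest arc seen so far
    for (v, u) in arcs:                         # first pass
        if scc_of[v] != scc_of[u]:
            key = (v, scc_of[u])
            if key not in best or w[(v, u)] < w[best[key]]:
                best[key] = (v, u)
    kept = set()
    for (v, u) in arcs:                         # second pass
        if scc_of[v] == scc_of[u] or best[(v, scc_of[u])] == (v, u):
            kept.add((v, u))
    return kept
```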

We show subsequently that the resulting instance of MinPAC satisfies the properties listed in the next definition.

Definition 3

Let (G,w) be an instance of MinPAC and let \(\ell\) be a vertex lower bound. We say that a graph \(G_{\ell}^{\text{rel}}\) and a weight function \(w_{\ell}^{\text{rel}}\) with \(V(G) = V(G_{\ell}^{\text{rel}})\), \(A(G_{\ell }^{\text {rel}}) \subseteq A(G)\), and \(w_{\ell}^{\text{rel}}(vu) = w(vu)\) for every \(vu \in A(G_{\ell}^{\text{rel}})\) are a relevant subgraph and a relevant weight function induced by \(\ell\), respectively, if they satisfy the following properties:

  1. (i)

    \(\text{OPT}((G,w)) = \text{OPT}((G_{\ell}^{\text{rel}}, w_{\ell}^{\text{rel}}))\).

  2. (ii)

    For any SCC \(S \in \mathcal {S}_{G_{\ell }}\), it holds that \(G[S] = G_{\ell}^{\text{rel}}[S]\).

  3. (iii)

    For any SCC \(S \in \mathcal {S}_{G_{\ell }}\) and any vertex \(v \notin S\), it holds that \(|\{ vu \in A(G_{\ell}^{\text{rel}}) \mid u \in S \}| \le 1\).

Since it follows from property (ii) that \(\mathcal {S}_{G_{\ell }} = \mathcal {S}_{G_{\ell }^{\text {rel}}}\), we will use them interchangeably.

Lemma 2

Let (G,w) be an instance of MinPAC and let \(\ell\) be a vertex lower bound. Algorithm 1 computes in O(m) time a relevant subgraph \(G_{\ell}^{\text{rel}}\) and a relevant weight function \(w_{\ell}^{\text{rel}}\) induced by \(\ell\).

Proof

For any \(v \in V(G)\) and any \(S \in \mathcal {S}_{G_{\ell }}\), Algorithm 1 sets M(v,S) to a minimum-weight arc from v to a vertex in S in the first iteration. In the second iteration, every arc vu is removed unless v and u belong to the same SCC or \(M(v, S_{u})\) is set to vu. Recall that the optimal cost remains the same after arc removals, as shown in Lemma 1. Thus, Algorithm 1 returns an instance \((G_{\ell}^{\text{rel}}, w_{\ell}^{\text{rel}})\) of MinPAC that satisfies the aforementioned properties. The algorithm spends O(m) time on initializing M (Line 2) and O(m) time on the iteration over the arcs (Line 3 and Line 7). □

  • Phase 2. In this phase, we aim to compute an optimal set of arcs to connect each SCC to all other SCCs. We start with some notation.

Definition 4

Let \(G_{\ell}^{\text{rel}}\) be a relevant subgraph. For any \(S \in \mathcal {S}_{G_{\ell }}\), we define the set of SCCs reachable from S via an arc as

$$ \mathcal{S}_{G,\ell}^{\text{reach}}(S) = \{ S^{\prime} \in \mathcal{S}_{G_{\ell}} \setminus \{ S \} \mid \exists vu \in A(G_\ell^{\text{rel}}). v \in S \wedge u \in S^{\prime} \}. $$

We say that an arc set B is a connector if B connects some SCC S to some set \(\mathcal {T} \subseteq \mathcal {S}_{G,\ell }^{\text {reach}}(S)\) of SCCs reachable from S. Then, our goal is to find a connector of minimum cost for each \(S \in \mathcal {S}_{G_{\ell }}\) and each subset \(\mathcal {T} \subseteq \mathcal {S}_{G,\ell }^{\text {reach}}(S)\). This allows us to compute an optimal solution with an exhaustive search over the connections between SCCs in the last phase.

Definition 5

Let (G,w) be an instance of MinPAC and let \(\ell\) be a vertex lower bound. A minimum-cost connector is a function \(\text {MCC} \colon \mathcal {S}_{G_{\ell }} \times 2^{\mathcal {S}_{G_{\ell }}} \to 2^{A(G_{\ell }^{\text {rel}})}\) such that for any \(S \in \mathcal {S}_{G_{\ell }}\) and any \(\mathcal {T} \subseteq \mathcal {S}_{G,\ell }^{\text {reach}}(S)\) the following properties are satisfied:

  1. 1.

    For any \(S^{\prime } \in \mathcal {T}\), there exist vertices vS and \(u \in S^{\prime }\) with \(vu \in \text {MCC}(S, \mathcal {T})\).

  2. 2.

    There is no subset \(B \subseteq A(G_{\ell }^{\text {rel}})\) that satisfies the above property and that satisfies \(\text {Cost}(S, B, w_{\ell }^{\text {rel}}) < \text {Cost}(S, \text {MCC}(S, \mathcal {T}), w_{\ell }^{\text {rel}})\).

Algorithm 2 computes a minimum-cost connector. For each SCC \(S \in \mathcal {S}_{G_{\ell }}\), we employ dynamic programming over the vertices in S and the subsets of \(\mathcal {S}_{G_{\ell }}\). This gives us a significant speed-up compared to the naïve approach of branching into up to c different neighbors at each vertex: from \(n^{\Theta(c)}\) time to \(O(2^{c} c^{2} n)\) time.
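A cost-only sketch of this dynamic program is given below (in Python, with names of our own choosing): it tabulates, for one SCC S, the minimum cost for every achievable collection of reachable SCCs; the arc sets \(D_{i}(\mathcal{T})\) themselves are omitted and only their costs are stored.

```python
def min_cost_connectors(S_vertices, out_arcs, w, scc_of, S_id):
    """Cost-only variant of the Phase-2 dynamic program for one SCC S.
    S_vertices: vertices of S; out_arcs[v]: arcs (v, u) of the relevant
    subgraph; scc_of[u]: SCC identifier of u; S_id: identifier of S.
    Returns a dict mapping frozensets of reachable SCCs to minimum costs."""
    dp = {frozenset(): 0}                              # no vertex of S processed yet
    for v in S_vertices:
        leaving = [(v, u) for (_, u) in out_arcs[v] if scc_of[u] != S_id]
        new_dp = dict(dp)                              # option: v selects no arc
        for (_, u) in leaving:                         # option: v pays exactly w(v, u)
            covered = frozenset(scc_of[x] for (_, x) in leaving
                                if w[(v, x)] <= w[(v, u)])
            for T, c in dp.items():
                cand = c + w[(v, u)]
                if cand < new_dp.get(T | covered, float("inf")):
                    new_dp[T | covered] = cand
        dp = new_dp
    return dp

def mcc_cost(dp, T):
    """Minimum cost for S to reach every SCC in T (the chosen arcs may reach more)."""
    return min(c for T2, c in dp.items() if T <= T2)
```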

Lemma 3

Given a relevant subgraph \(G_{\ell}^{\text{rel}}\) and a relevant weight function \(w_{\ell}^{\text{rel}}\), Algorithm 2 computes a minimum-cost connector MCC in \(O(2^{c} c^{2} n)\) time.

Proof

We fix some \(S = \{ v_{1}, \dots , v_{n_{S}} \}\). We use \(S_{v}\) to denote the SCC to which the vertex \(v \in V(G_{\ell}^{\text{rel}})\) belongs and \(\mathcal {T}_{B} = \{ S_{u} \mid \exists w. wu \in B \}\) to denote the SCCs containing a terminal vertex of at least one arc in \(B \subseteq A(G_{\ell }^{\text {rel}})\). Note that in the subsequent proof the arcs in B will all have their initial vertex in S. Moreover, the arc set \(B_{i}\) contains only arcs having an initial vertex in \(\{v_{1}, \ldots , v_{i}\} \subseteq S\) and we have \(\mathcal {T}_{B_{i}} = \{ S_{u} \mid \exists j \le i. v_{j}u \in A(G_{\ell }^{\text {rel}})\}\). (See Line 5 of Algorithm 2 for the computation of the arc set \(B_{i}\).)

In order to show that the output of Algorithm 2 satisfies property 1 of Definition 5, we prove the following stronger claim.

Claim 1

For any \(i \in \{0,\dots,n_{S}\}\) and any set \(\mathcal {T} \subseteq \mathcal {T}_{B_{i}}\) of SCCs, the arc set \(D_{i}(\mathcal {T})\) contains exactly one arc that starts in \(\{v_{1},\dots,v_{i}\}\) and ends inside \(S^{\prime }\) for each \(S^{\prime } \in \mathcal {T}\) and contains no other arc.

Proof of Claim

We first show by induction that the claim holds after the initialization phase (Lines 3 to 6). It holds for the base case i = 0 because we have \(D_{0}(\emptyset) = \emptyset\). When i ≥ 1, since \(G_{\ell}^{\text{rel}}\) is a relevant subgraph (see Definition 3), it follows that the set \(\{ v_{i} u \in A(G_{\ell }^{\text {rel}}) \mid u \not \in S, S_{u} \not \in \mathcal {T}_{B_{i - 1}} \}\) on Line 5 contains exactly one arc whose terminal vertex lies in \(S^{\prime }\) for any \(S^{\prime } \in \mathcal {T}_{B_{i}} \setminus \mathcal {T}_{B_{i-1}}\). Since the claim holds for i − 1 by the induction hypothesis, \(B_{i}\) contains exactly one arc that ends inside \(S^{\prime }\) for each \(S^{\prime } \in \mathcal {T}_{B_{i}}\) and no other arc. In Line 6, the algorithm computes from this set \(B_{i}\) of arcs all arcs that end in \(\mathcal {T}\) and assigns this set to \(D_{i}(\mathcal {T})\). Hence \(D_{i}(\mathcal {T})\) contains exactly one arc for each \(S^{\prime } \in \mathcal {T}\) and thus the claim holds for i as well.

We now show—again by induction—that the claim holds after the update phase. Again the claim clearly holds for the base case i = 0. When i ≥ 1, note that each iteration step (Line 9 to 14) corresponds to the case in which the cost for vi is exactly w(viu). The algorithm finds the set Bi,u of arcs that vi can cover with cost w(viu). We verify that the claimed property is maintained after the assignment on Line 12 and Line 14. The assignment on Line 12 is clearly correct by induction hypothesis. Observe that Bi,u contains exactly one arc whose terminal vertex lies in \(S^{\prime }\) for any \(S^{\prime } \in \mathcal {T}_{B_{i, u}}\) since G rel is a relevant subgraph. We can assume from the induction hypothesis that \(D_{i-1}(\mathcal {T} \setminus \mathcal {T}_{B_{i, u}})\) contains exactly one arc that ends inside \(S^{\prime \prime }\) for each \(S^{\prime \prime } \in \mathcal {T} \setminus \mathcal {T}_{B_{i, u}}\) and no further arc. Thus, the assignment on Line 14 ensures that the claim is correct. □

For the second property, we prove by induction over \(i \in \{0,\dots,n_{S}\}\) that for any \(\mathcal {T} \subseteq \mathcal {S}_{G,\ell }^{\text {reach}}(S)\), the cost of S associated with \(D_{i}(\mathcal {T})\) is minimum among all arc sets whose initial vertices lie in \(\{ v_{1}, \dots , v_{i} \}\). It holds in the base case i = 0 because we have \(D_{0}(\emptyset) = \emptyset\). When i ≥ 1, we assume from the induction hypothesis that the values \(\text {Cost}(S, D_{i-1}(\mathcal {T}), w_{\ell }^{\text {rel}})\) are minimum for any \(\mathcal {T} \subseteq \mathcal {S}_{G,\ell }^{\text {reach}}(S)\). Assume towards a contradiction that there exists an arc set \(B^{\prime }\) with initial vertices in \(\{ v_{1}, \dots , v_{i} \}\) that satisfies \(\text {Cost}(S, D_{i}(\mathcal {T}_{B^{\prime }}), w_{\ell }^{\text {rel}}) > \text {Cost}(S, B^{\prime }, w_{\ell }^{\text {rel}})\). Let \(B_{i}^{\prime } \subseteq B^{\prime }\) denote the set of arcs that have \(v_{i}\) as their initial vertex. We distinguish two cases depending on whether \(B_{i}^{\prime }\) is empty.

  1. Case 1.

    If \(B_{i}^{\prime } = \emptyset \), then \(B^{\prime }\) consists of arcs whose initial vertices are in \(\{ v_{1} \dots , v_{i - 1} \}\). Then, we have

    $$ \text{Cost}(S, B^{\prime}, w_\ell^{\text{rel}}) \ge \text{Cost}(S, D_{i - 1}(\mathcal{T}_{B^{\prime}}), w_\ell^{\text{rel}}) \ge \text{Cost}(S, D_{i}(\mathcal{T}_{B^{\prime}}), w_\ell^{\text{rel}}). $$

    Here the first inequality follows from the induction hypothesis, and the second inequality follows from the assignment on Line 12 because \(\mathcal {T}_{B^{\prime }} \subseteq \mathcal {T}_{B_{i-1}}\).

  2. Case 2.

    If \(B_{i}^{\prime } \ne \emptyset \), then the cost of S associated with \(B^{\prime }\) is

    $$ \begin{array}{@{}rcl@{}} \text{Cost}(S, B^{\prime}, w_\ell^{\text{rel}}) &\ge \text{Cost}(S, \mathcal{D}_{i - 1}(\mathcal{T}_{B^{\prime}} \setminus \mathcal{T}_{B_{i}^{\prime}}), w_\ell^{\text{rel}}) + \max_{v_{i} u \in B_{i}^{\prime}} w_\ell^{\text{rel}}(v_{i} u) \\ &\ge \text{Cost}(S, D_{i}(\mathcal{T}_{B^{\prime}}), w_\ell^{\text{rel}}). \end{array} $$

Here the first inequality follows from the induction hypothesis, and the second inequality follows from the assignment on Line 14. In both cases, we have \(\text {Cost}(S, B^{\prime }, w_{\ell }^{\text {rel}}) \ge \text {Cost}(S, D_{i}(\mathcal {T}_{B^{\prime }}), w_{\ell }^{\text {rel}})\), and thus a contradiction is reached.

It remains to analyze the running time. In the initialization phase, we iterate over all vertices (Line 4) and all subsets of \(\mathcal {T}_{B_{i}}\) (Line 6). Note that \(|\mathcal {T}_{B_{i}}| \le |\mathcal {S}_{G_{\ell }}| = c\) as shown in the above claim and hence there are at most \(2^{c}\) subsets. Each iteration takes O(c) time because \(B_{i}\) contains at most c arcs for any \(i \in [n_{S}]\). In the update phase, we iterate over all vertices (Line 7), at most c neighbors (Line 9), and all subsets of \(\mathcal {S}_{G,\ell }^{\text {reach}}(S)\) (Line 10). Each iteration takes O(c) time because \(D_{i}(\mathcal {T})\) contains at most c arcs for any \(i \in [n_{S}]\) and any \(\mathcal {T} \subseteq \mathcal {S}_{G,\ell }^{\text {reach}}(S)\). Thus, the overall running time is \(O(2^{c} c^{2} n)\). □

  1. Phase 3.

    We finally present the search tree algorithm for MinPAC in Algorithm 3. The algorithm “guesses” the connections between the SCCs of \(G_{\ell}\) to obtain an optimal solution. To this end, we first try all possible numbers of outgoing arcs from each SCC. After Line 10, the ith entry of the array \(\mathcal {C}\) contains, for each SCC \(S_{i}\), all SCCs to which \(S_{i}\) has an arc in the solution.
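The sketch below is a deliberately naive stand-in for this phase: instead of the pruned enumeration via the numbers \(k_{1}, \dots, k_{c}\) of outgoing arcs used by Algorithm 3, it simply tries every nonempty subset of reachable SCCs for every SCC, checks strong connectivity of the contracted auxiliary graph, and keeps the cheapest combination (function names and the mcc_cost interface are assumptions of ours).

```python
from itertools import chain, combinations, product

def is_strongly_connected(nodes, arcs):
    """Check reachability of all nodes from an arbitrary start node, both along
    the given arcs and along their reversals."""
    def reaches_all(adj):
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for y in adj.get(x, ()):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen == set(nodes)
    fwd, bwd = {}, {}
    for (a, b) in arcs:
        fwd.setdefault(a, []).append(b)
        bwd.setdefault(b, []).append(a)
    return reaches_all(fwd) and reaches_all(bwd)

def cheapest_scc_connection(sccs, reach_sets, mcc_cost):
    """sccs: SCC identifiers; reach_sets[S]: SCCs reachable from S via an arc of
    the relevant subgraph; mcc_cost(S, T): minimum cost for S to reach every SCC
    in T (e.g. computed by the Phase-2 dynamic program)."""
    if len(sccs) == 1:
        return 0                                  # a single SCC needs no extra arcs
    def nonempty_subsets(fs):
        items = list(fs)
        return chain.from_iterable(combinations(items, r)
                                   for r in range(1, len(items) + 1))
    best = float("inf")
    for choice in product(*(nonempty_subsets(reach_sets[S]) for S in sccs)):
        aux_arcs = [(S, T) for S, targets in zip(sccs, choice) for T in targets]
        if is_strongly_connected(sccs, aux_arcs):
            best = min(best, sum(mcc_cost(S, frozenset(targets))
                                 for S, targets in zip(sccs, choice)))
    return best
```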

Lemma 4

Given a relevant subgraph \(G_{\ell}^{\text{rel}}\), a relevant weight function \(w_{\ell}^{\text{rel}}\), and a minimum-cost connector \(\text {MCC} \colon \mathcal {S}_{G_{\ell }} \times 2^{\mathcal {S}_{G_{\ell }}} \to 2^{A(G_{\ell }^{\text {rel}})}\), Algorithm 3 computes an optimal solution of \((G_{\ell}^{\text{rel}}, w_{\ell}^{\text{rel}})\) in \(O(n + m + 4^{c} c^{2c - 3/2})\) time.

Proof

We first show the correctness of Algorithm 3. In each iteration step (Lines 6 to 10), the algorithm constructs an auxiliary graph \(H_{\text{aux}}\), which is basically the graph obtained from \((V(G), \bigcup ^{c}_{i=1} \text {MCC}(S_{i}, \mathcal {T}_{S_{i}}))\) by contracting each SCC of \(G_{\ell}\) into a single vertex. Since our algorithm performs an exhaustive search, it finds a graph of cost \(\text{OPT}((G_{\ell}^{\text{rel}}, w_{\ell}^{\text{rel}}))\).

We now analyze the running time of the algorithm. For each \(c \le k \le 2c - 2\), the number of sets of integers \(\{ k_{1}, \dots , k_{c} \}\) that satisfy \(k_{i} \ge 1\) for all i ∈ [c] and \({\sum }^{c}_{i=1} k_{i} = k\) is

$$ \begin{array}{@{}rcl@{}} \binom{k - 1}{c - 1} & \le& \binom{2c - 3}{c - 1} = \frac{(2c - 3)!}{(c - 1)! (c - 2)!} \\ &\in& O\left( \frac{\sqrt{2c - 3}}{\sqrt{c - 1}\sqrt{c - 2}} \cdot \frac{(2c - 3)^{2c - 3}}{(c - 1)^{c - 1} (c - 2)^{c - 2}} \right) = O \left( \frac{4^{c}}{\sqrt{c}} \right), \end{array} $$

where the membership is due to Stirling’s approximation. For each fixed set of integers \(\{k_{1}, \dots , k_{c}\}\), the number of sets \(\{ \mathcal {T}_{S_{1}}, \dots , \mathcal {T}_{S_{c}} \}\) the algorithm generates is

$$ \begin{array}{@{}rcl@{}} {\prod}^{c}_{i = 1} \binom{c}{k_{i}} \le {\prod}^{c}_{i =1} c^{k_{i}} = c^{{\sum}^{c}_{i=1} k_{i}} = c^{k}. \end{array} $$

So the total number of iterations (Line 6 to 10) is

$$ \begin{array}{@{}rcl@{}} O\left( \frac{4^{c}}{\sqrt{c}} \cdot \sum\limits^{2c - 2}_{k = c} c^{k}\right) = O(4^{c} \cdot c^{2c - 5/2}). \end{array} $$

We claim that each iteration step runs in O(c) time. Computing the SCCs of \(H_{\text{aux}}\) takes O(c) time because \(H_{\text{aux}}\) contains c vertices and at most 2c − 2 arcs. The algorithm spends O(c) time to compute the cost of H and to update OptCost and the connection array \(\mathcal {C}\). Constructing the output graph takes O(n + m) time. Thus, the overall running time is \(O(n + m + 4^{c} c^{2c - 3/2})\). □

Combining Algorithms 1 to 3, we arrive at our main theorem of this section.

Theorem 1

MinPAC can be solved in \(O(c^{2} \cdot 2^{c} \cdot n + m + 4^{c} \cdot c^{2c - 3/2})\) time.

Proof

Let (G,w) be an instance of MinPAC. We run Algorithms 1 to 3 sequentially to obtain an optimal solution H of \((G_{\ell}^{\text{rel}}, w_{\ell}^{\text{rel}})\). Since \(G_{\ell}^{\text{rel}}\) is a relevant subgraph of G, it follows from Lemma 1 that the graph H is also an optimal solution of (G,w).

The overall running time is then

$$ \begin{array}{@{}rcl@{}} O(cn + m) + O(2^{c} \cdot c^{2} \cdot n) + O(n + m + 4^{c} \cdot c^{2c - 3/2}) \\ = O(c^{2} \cdot 2^{c} \cdot n + m + 4^{c} \cdot c^{2c - 3 / 2}). \end{array} $$

This concludes the proof of the theorem. □

3 Parameterization by the Number of Power Levels

It is fair to assume that the nodes cannot transmit signals with arbitrary power levels due to practical limitations [7]. In fact, many researchers have studied approximation algorithms for the MinPAC problem when only two power levels are available [4, 5, 7, 16]. In this section, we consider the case w: A(G) → Q, where the set of integers \(Q = \{ p_{1}, \dots , p_{q} \}\) represents the available power levels. The parameter q—“the number of numbers”—has been advocated in [12]. The problem remains NP-hard even when q = 2 [9], as can also be seen in our hardness result (Theorem 5). Thus, fixed-parameter tractability is unlikely with this parameter alone. However, using an additional parameter may alleviate this problem. We consider the vertex cover number, as many problems are known to become tractable when this parameter is bounded. Here we define the vertex cover number of a directed graph as the vertex cover number of its underlying undirected graph. Recall that the vertex cover number of an undirected graph is the minimum number of vertices that have to be removed to make it edgeless. Computing a minimum-cardinality vertex cover is NP-hard, but any maximal matching (which can be found in linear time) yields a factor-2 approximation. We present a partial kernelization (unbounded weights) with respect to q + x, where x is the size of a given vertex cover. Afterwards, we strengthen this result to a proper kernel (with a worse but still polynomial running time).
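The factor-2 approximation just mentioned is folklore and easy to implement; a minimal sketch (names ours) greedily computes a maximal matching of the underlying undirected graph and returns the matched endpoints.

```python
def vertex_cover_2approx(arcs):
    """Greedy maximal matching on the underlying undirected graph; the matched
    endpoints form a vertex cover of size at most twice the optimum."""
    cover = set()
    for (v, u) in arcs:
        if v != u and v not in cover and u not in cover:
            cover.update((v, u))        # the edge {v, u} joins the matching
    return cover
```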

Theorem 2

Let I = (G,w) be a MinPAC-instance where \(w \colon A(G) \to Q\) and \(|Q| = q\). Given I and a vertex cover X for G of size x, one can compute an instance \(I^{\prime }\) of MinPAC with at most \((q + 1)^{2x} + x\) vertices and a value \(d \in \mathbb{N}\) such that \(\text {OPT}(I) = \text {OPT}(I^{\prime }) + d\) in \(O(\min \limits (\{ x, \log _{q} n \}) \cdot n + m)\) time.

In order to prove Theorem 2, we first observe that there are some conditions under which a vertex can be included or removed without losing the strong connectivity. Notably, we use Observation 1 to remove “twin” vertices.

Observation 1

Let G be a strongly connected graph with \(u \in V(G)\). If there exists a vertex \(u^{\prime } \in V(G) \setminus \{ u \}\) with \(N^{+}_{G}(u) \subseteq N^{+}_{G}(u^{\prime })\) and \(N^{-}_{G}(u) \subseteq N^{-}_{G}(u^{\prime })\), then the graph \(G[V(G) \setminus \{u\}]\) is strongly connected.

Observation 2

Let G be a strongly connected graph. For any vertices \(v, v^{\prime } \in V(G)\) and any vertex \(u \notin V(G)\), the graph \(G + uv + v^{\prime }u\) is strongly connected.

We define “types” for vertices outside the vertex cover according to the weights of their incident arcs. This helps us to reduce the number of vertices using the observations above. Recall that we write \(w(vu) = \infty \) if \(vu \notin A\).

Definition 6

Let (G,w) be an instance of MinPAC with \(w \colon A(G) \to Q\) and \(|Q| = q\). Let \(X = \{ v_{1}, \dots , v_{x} \}\) be a vertex cover of G. The vertex cover partition is the partition \(\mathcal {P}\) of the vertices in V (G) ∖ X into sets

$$ \begin{array}{@{}rcl@{}} \mathcal{P}_{r_{1}, \dots, r_{2x}} = \{ u \in V(G) \setminus X \mid \forall i \in [x]. w(u v_{i}) = p_{r_{i}} \wedge w(v_{i} u) = p_{r_{i+x}} \}, \end{array} $$

for each \(r_{1}, \dots , r_{2x} \in [q + 1]\). Here we set \(p_{q+1} = \infty \).

We initialize d with 0. In our reduction rule, we remove vertices such that, after the reduction is completed, there is at most one vertex in each set of the vertex cover partition.

Reduction Rule 1

Let \(\mathcal {P}_{r_{1}, \dots , r_{2x}}\) be a set of the vertex cover partition with \(|\mathcal {P}_{r_{1}, \dots , r_{2x}}| > 1\). Delete an arbitrary vertex \(u \in \mathcal {P}_{r_{1}, \dots , r_{2x}}\) and increase d by \(\min \limits _{i \in [x]} p_{r_{i}}\).

Note that since the input graph is strongly connected, the increase in d in Reduction Rule 1 is at most \(\max \limits _{i \in [q]} p_{i} < \infty \).
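A dictionary-based sketch of Reduction Rule 1 (replacing the 2x-dimensional table used later in the proof of Theorem 2; all names are ours) groups the vertices outside the vertex cover by their weight profile and keeps one representative per group.

```python
def apply_reduction_rule_1(vertices, X, w):
    """Sketch of Reduction Rule 1: keep one representative per set of the
    vertex cover partition; every further vertex u of the same set is deleted
    and contributes min_i w(u v_i) to the offset d.  X is the vertex cover,
    w maps arcs (a, b) to weights; missing arcs count as infinity."""
    INF = float("inf")
    X = list(X)
    x_set = set(X)
    seen, kept, d = {}, set(X), 0
    for u in vertices:
        if u in x_set:
            continue
        profile = (tuple(w.get((u, v), INF) for v in X)
                   + tuple(w.get((v, u), INF) for v in X))
        if profile in seen:
            d += min(w.get((u, v), INF) for v in X)   # cheapest outgoing arc of the deleted vertex
        else:
            seen[profile] = u
            kept.add(u)
    return kept, d                                    # arcs incident to deleted vertices are dropped
```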

Lemma 5

Reduction Rule 1 is correct.

Proof

Let I = (G,w) be an instance of MinPAC and let \(I^{\prime }= (G^{\prime }, w^{\prime })\) be the instance obtained by deleting vertex u in a set \(\mathcal {P}_{r_{1}, \dots , r_{2x}}\) of the vertex cover partition for G, as specified in Reduction Rule 1. We show that \(\text {OPT}(I) = \text {OPT}(I^{\prime }) + \min \limits _{i \in [x]} p_{r_{i}}\).

Let H be an optimal solution of I. Since \(|\mathcal {P}_{r_{1}, \dots , r_{2x}}| > 1\), there exists a vertex \(u^{\prime } \in \mathcal {P}_{r_{1}, \dots , r_{2x}} \setminus \{ u \}\). We can assume without loss of generality that \(\max \limits _{uv \in A(H)} w(uv) \le \max \limits _{(u^{\prime }v) \in A(H)} w(u^{\prime }v)\) holds: if it does not hold, we can exchange the role of u and \(u^{\prime }\) in H without changing the cost of the solution, that is, we can update H to \(H^{\prime }\) with

$$ \begin{array}{@{}rcl@{}} A(H^{\prime}) := {}&& \{vv^{\prime} \mid vv^{\prime} \in A(H) \wedge \{v,v^{\prime}\} \cap \{u,u^{\prime}\} = \emptyset \} \cup \{uv \mid u^{\prime}v \in A(H)\} \\ && \cup \{vu \mid vu^{\prime} \in A(H)\} \cup \{u^{\prime}v \mid uv \in A(H)\} \cup \{vu^{\prime} \mid vu \in A(H)\}. \end{array} $$

Then, we can assume that \(N^{+}_{H}(u) \subseteq N^{+}_{H}(u^{\prime })\) and \(N^{-}_{H}(u)\subseteq N^{-}_{H}(u^{\prime })\) hold (otherwise we can add the missing arcs to H without additional cost). Then, it follows from Observation 1 that \(H[V(G^{\prime })]\) is a solution of \(I^{\prime }\). Its cost is at most \(\text {OPT}(I) - \min \limits _{i \in [x]} p_{r_{i}}\) because u pays at least \(\min \limits _{i \in [x]} p_{r_{i}}\) in H. This shows that \(\text {OPT}(I) \ge \text {OPT}(I^{\prime }) + \min \limits _{i \in [x]} p_{r_{i}}\). For the other direction, suppose that \(H^{\prime }\) is an optimal solution of \(I^{\prime }\). Let \(u^{\prime } \in \mathcal {P}_{r_{1}, \dots , r_{2x}} \setminus \{ u \}\) be a vertex and let v be a vertex with \(w(u^{\prime }v) = \min \limits _{i \in [x]} p_{r_{i}}\). Since \(H^{\prime }\) is strongly connected, there exists a vertex \(v^{\prime } \in X\) with \(v^{\prime } u^{\prime } \in A(H^{\prime })\). Due to Observation 2, \(H^{\prime } + uv + v^{\prime }u\) is strongly connected. Observe that the cost for u is \(\min \limits _{i \in [x]} p_{r_{i}}\) and the cost for \(v^{\prime }\) remains unchanged because \(v^{\prime }\) pays at least \(w(v^{\prime }u^{\prime }) = w(v^{\prime }u)\) in \(H^{\prime }\). Hence, \(\text {OPT}(I) \le \text {OPT}(I^{\prime }) + \min \limits _{i \in [x]} p_{r_{i}}\), and altogether \(\text {OPT}(I) = \text {OPT}(I^{\prime }) + \min \limits _{i \in [x]} p_{r_{i}}\). □

We have shown that Reduction Rule 1 is correct. It remains to show that it can be applied in O(xn + m) or \(O(n \log _{q} n + m)\) time to complete the proof of Theorem 2.

Proof of Theorem 2

We present a procedure that transforms an instance I = (G,w) of MinPAC into another instance \(I^{\prime } = (G^{\prime }, w^{\prime })\) with at most \((q + 1)^{2x} + x\) vertices in \(O(\min \limits (\{ x, \log _{q} n \}) \cdot n + m)\) time. In our transformation, we distinguish two cases depending on the input size.

  1. Case 1.

    If \(n \le (q + 1)^{2x} + x\), then we return I as the output of the transformation with d = 0.

  2. Case 2.

    If \(n > (q + 1)^{2x} + x\), then we apply Reduction Rule 1 exhaustively. Let \(\mathcal {P}\) be the vertex cover partition. There are at most \((q + 1)^{2x}\) sets of \(\mathcal {P}\) and each set yields at most one vertex in \(G^{\prime }\). Thus, the reduced instance has at most \((q + 1)^{2x} + x\) vertices. We show that the transformation can be performed in O(xn + m) time. We first build a 2x-dimensional table D, where for each dimension there are q + 1 values. All the \((q + 1)^{2x}\) entries of D are initialized as false. (This can be done in O(n) time, since \(n > (q + 1)^{2x} + x\).) The entry \(D[r_{1},\dots ,r_{2x}]\) represents whether a vertex in the set \(\mathcal {P}_{r_{1}, \dots , r_{2x}}\) has been found. We iterate through all vertices in V (G) ∖ X. For each vertex \(u \in V(G) \setminus X\), we set the corresponding entry in D to true if it is false, and we remove u and its incident arcs if it is true. Since accessing an entry in D takes O(x) time and removing u takes \(O(\deg _{G}(u))\) time, the transformation overall takes O(xn + m) time. Note that \(n > (q + 1)^{2x} + x\) yields that \(2x < \log _{q} n\) and hence the reduction can also be done in \(O(n \log _{q} n + m)\) time.

Notice that Theorem 2 does not show a kernel for the parameter combination vertex cover x plus number of numbers q. In order to obtain a kernel, we will next show how to shrink the weights.

Theorem 3

Let I = (G,w) be an instance of MinPAC where G contains n vertices and m arcs. There is a polynomial-time algorithm that computes a new weight function \(\hat {w}\) such that \(||\hat {w}||_{\infty } < 2^{4m^{3}}(4nm+1)^{m (m+2)}\) and such that any optimal solution T = (V,F) of (G,w) is also an optimal solution for \((G,\hat {w})\).

Proof

We use the notion of α-linearizability from [3], which treats the arc weights as a vector \(\omega \in \mathbb{Q}^{m}\) with \(\omega_{i} = w(e_{i})\) and is defined as follows:

Definition 7 (3)

A function \(f \colon L \times \mathbb{Q}^{m} \to \mathbb{Q}\) with \(L\subseteq {\Sigma }^{*}\) is α-linearizable for some \(\alpha \in \mathbb{N}\), if for all \(\omega \in \mathbb{Q}^{m}\) and for all \(x \in L\) it holds that

  1. 1.

    there exists \(b_{x} \in \mathbb{Q}^{m}\) such that \(f(x,\omega )=b_{x}^{\top } \omega \) and

  2. 2.

    for all \(\omega^{\prime} \in \mathbb{Q}^{m}\) it holds that \(f(x,\omega )=b_{x}^{\top } \omega \) if and only if \(f(x,\omega ^{\prime })=b_{x}^{\top } \omega ^{\prime }\).

To this end, observe that we can rewrite the goal function to fit their notion as follows. Let \(F_{v}:=\{vu\in F\mid u \in N_{G}^{+}(v)\}\) and \(\mathcal {F}:=\{F_{v}\mid v\in V\}\). Then

$$ \text{Cost}(V, F, w) = {\sum}_{F_{v}\in \mathcal{F}} g(F_{v},w),\quad\text{ with } g(F,w)=\max_{e\in F} w(e). $$

Clearly, with A = {e1,…,em} the function \(f \colon A \times \mathbb{Q}^{m} \to \mathbb{Q}\), \((e_{i},\omega) \mapsto \omega_{i} := w(e_{i})\) is 1-linearizable: On the one hand, we have that \(f(e_{i},\omega )=\mathbf {e}_{i}^{\top } \omega \) (where \(\mathbf{e}_{i}\) denotes the unit vector with the ith entry being one). On the other hand, for all \(\omega^{\prime} \in \mathbb{Q}^{m}\) it holds true that \(f(e_{i},\omega )=\mathbf {e}_{i}^{\top } \omega \) if and only if \(f(e_{i},\omega ^{\prime })=\mathbf {e}_{i}^{\top } \omega ^{\prime }\).

By Lemma 4.8 in [3], it follows that Cost(V,F,w) is 2n-linearizable. Finally, Theorem 4.7 in [3] yields the desired weight function \(\hat {w}\). □

Combining Theorems 2 and 3 gives us the desired kernel.

Corollary 1

MinPAC admits an exponential-size kernel with respect to the combined parameter vertex cover plus number of numbers.

4 Parameterization by Feedback Edge Number

In this section we describe a kernelization for MinPAC parameterized by the feedback edge number. The feedback edge number for an undirected graph is the minimum number of edges that have to be removed in order to make it a forest. We define the feedback edge number for a directed graph G as the feedback edge number of its underlying undirected graph UG. Note that a minimum feedback edge set can be computed in linear time. In Section 5, we will show that the parameter feedback arc number, which is the directed counterpart of the feedback edge number, does not allow the design of an FPT algorithm for MinPAC unless P = NP.

The feedback edge number measures how tree-like the input is. From a theoretical perspective this is interesting to analyze because any instance (G,w) of MinPAC is easy to solve if UG is a tree. In this case all edges of UG must correspond to arcs in both directions in G and the optimal solution is G itself. The parameter is also motivated by real world applications in which the feedback edge number is small; for instance, sensor networks along waterways (including canals) are expected to have a small number of feedback edges. In this section we first prove the following theorem which states that MinPAC admits a partial kernel with respect to feedback edge number. Afterwards, we strengthen this result to a proper polynomial kernel (with a worse but still polynomial running time).

Theorem 4

In linear time, one can transform any instance I = (G,w) of MinPAC with feedback edge number g into an instance \(I^{\prime }=(G^{\prime },w^{\prime })\) and compute a value \(d \in \mathbb{N}\) such that \(G^{\prime }\) has at most 20g − 20 vertices, 42g − 42 arcs, and \(\text {OPT}(I)=\text {OPT}(I^{\prime }) + d\).

Corollary 2

MinPAC can be solved in \(O(2^{O(g)} + n + m)\) time.

We will present a set of data reduction rules which shrink any instance of MinPAC to an essentially equivalent instance whose size is bounded as specified in Theorem 4. We simultaneously compute the value d, which we initialize with 0.

Our first reduction rule reduces the weights of arcs outgoing from a vertex by the weight of its cheapest outgoing arc. This ensures that each vertex has at least one outgoing arc of weight zero.

Reduction Rule 2

Let v be a vertex with \(\delta _{v} := \min \limits _{vu \in A(G)} w(vu) > 0\). Update the weights and d as follows:

  1. (i)

    w(vu) = w(vu) − δv for each vuA(G).

  2. (ii)

    d := d + δv.

Lemma 6

Reduction Rule 2 is correct.

Proof

Let I = (G,w) be an instance of MinPAC and let \(v \in V(G)\) be a vertex with \(\delta _{v} = \min \limits _{vu \in A(G)} w(vu) > 0\). Let \(I^{\prime }\) be an instance with reduced weights using Reduction Rule 2. We show that \(\text {OPT}(I) = \text {OPT}(I^{\prime }) + \delta _{v}\).

Let H be an optimal solution of I. Then, H is also a solution of \(I^{\prime }\), where the cost for v is decreased by δv and the cost for every other vertex remains identical. Thus, \(\text {OPT}(I^{\prime })\) is at most OPT(I) − δv and we obtain \(\text {OPT}(I) \geq \text {OPT}(I^{\prime }) + \delta _{v}\). For the other direction, let H be an optimal solution for \(I^{\prime }\). Then H is also a solution for I, where the cost for v is increased by δv and the cost for every other vertex remains identical. Thus, OPT(I) is at most \(\text {OPT}(I^{\prime }) + \delta _{v}\) and we obtain \(\text {OPT}(I) = \text {OPT}(I^{\prime }) + \delta _{v}\). □

Our next reduction rule discards all degree-one vertices.

Reduction Rule 3

Let v be a vertex with \(\deg _{G}(v) = 1\) and let u be its neighbor. Update (G,w) and d as follows:

  1. (i)

    G := G[V (G) ∖{v}].

  2. (ii)

    \(w(u v^{\prime }) := {\max \limits } \{ 0, w(uv^{\prime }) - w(uv) \}\) for each \(uv^{\prime } \in A(G) \setminus \{ uv \}\).

  3. (iii)

    d := d + w(vu) + w(uv).

Lemma 7

Reduction Rule 3 is correct.

Proof

Let I = (G,w) be an instance of MinPAC with an optimal solution H. Let \(v \in V(G)\) be a vertex with \(\deg_{G}(v) = 1\) and let u be its neighbor. (Since G is strongly connected, we have \(uv \in A(G)\) and \(vu \in A(G)\).) Let \(I^{\prime } = (G^{\prime }, w^{\prime })\) be the instance in which v is removed according to Reduction Rule 3. Then, \(H[V(G^{\prime })]\) is a solution of \(I^{\prime }\). Since the cost for u decreases by \(\max \limits _{uv^{\prime } \in A(H)} w(uv^{\prime }) - \max \limits _{uv^{\prime } \in A(H) \setminus \{ uv \}} w^{\prime }(uv^{\prime }) = w(uv)\) and the costs for other vertices remain unchanged, we have \(\text {OPT}(I) \ge \text {OPT}(I^{\prime }) + w(vu) + w(uv)\). For the other direction, let \(H^{\prime }\) be an optimal solution of \(I^{\prime }\). Then, \(H^{\prime } + vu + uv\) is a solution of I. The cost for v is w(vu) and the cost for u is \(\max \limits _{uv^{\prime } \in A(H^{\prime }) \cup \{ uv \}} w(uv^{\prime }) = w(uv) + \max \limits _{uv^{\prime } \in A(H^{\prime })} w^{\prime }(uv^{\prime })\), while the costs for other vertices remain the same. Thus, we obtain \(\text {OPT}(I) \le \text {OPT}(I^{\prime }) + w(vu) + w(uv)\). □

Lemma 8

Reduction Rules 2 and 3 can be exhaustively applied in linear time.

Proof

For each vertex \(v \in V(G)\), set \(\ell (v) := \min \limits _{vu \in A(G)} w(vu)\). Let L be a list of degree-1 vertices. We apply the following procedure as long as L is nonempty. Let v be a vertex taken from L and let u be its neighbor. Remove v and its incident arcs from G, set \(\ell (u) := {\max \limits } \{ \ell (u), w(uv) \}\), and update \(d := d + {\max \limits } \{ w(vu), \ell (v) \}\). If the degree of u becomes 1 after deleting v, then add u to L. Once L is empty, update the weight of each remaining arc \(w(vu) := {\max \limits } \{ 0, w(vu) - \ell (v) \}\). Finally, update \(d := d + \ell(v)\) for each remaining vertex v. It is easy to see that the algorithm runs in linear time. □
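The procedure from this proof can be sketched as follows (Python; our own data structures, assuming the input graph is strongly connected and has at least two vertices).

```python
from collections import defaultdict

def apply_rules_2_and_3(vertices, w):
    """Exhaustive application of Reduction Rules 2 and 3 along the lines of the
    proof of Lemma 8.  vertices: all vertices; w: dict mapping arcs (v, u) to
    weights.  Returns the remaining vertices, the updated weights, and the
    offset d.  Assumes a strongly connected input graph with >= 2 vertices."""
    nbrs = defaultdict(set)
    for (v, u) in w:
        nbrs[v].add(u)
        nbrs[u].add(v)
    # ell(v): cheapest outgoing arc of v (raised later when neighbors disappear)
    ell = {v: min(w[(v, u)] for u in nbrs[v] if (v, u) in w) for v in vertices}
    removed, d = set(), 0
    stack = [v for v in vertices if len(nbrs[v]) == 1]
    while stack:
        v = stack.pop()
        if v in removed or len(nbrs[v]) != 1:
            continue
        (u,) = nbrs[v]                           # the unique remaining neighbor of v
        removed.add(v)
        nbrs[u].discard(v)
        ell[u] = max(ell[u], w[(u, v)])          # u still has to pay for its arc to v
        d += max(w[(v, u)], ell[v])              # cost charged for the deleted vertex v
        if len(nbrs[u]) == 1:
            stack.append(u)
    remaining = [v for v in vertices if v not in removed]
    new_w = {(a, b): max(0, w[(a, b)] - ell[a])
             for (a, b) in w if a not in removed and b not in removed}
    d += sum(ell[v] for v in remaining)          # Reduction Rule 2 on the remaining vertices
    return remaining, new_w, d
```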

Henceforth, we can assume that Reduction Rules 2 and 3 are exhaustively applied. Thus, the underlying undirected graph UG will have no degree-one vertices. It remains to bound the number of vertices that have degree two in UG. Once this is achieved, we can use standard arguments to upper-bound the size of the instance [2].

The rough idea to bound the number of degree-two vertices is as follows: In order to upper-bound the number of degree-two vertices in \(U_{G}\), we consider long paths in \(U_{G}\). A path \(P = (v_{0}, \dots , v_{h+1})\) in \(U_{G}\) is a maximal induced path of G if \(\deg _{G}(v_{0}) > 2\), \(\deg _{G}(v_{h+1}) > 2\), and \(\deg _{G}(v_{i}) = 2\) for all i ∈ [h]. We call the vertices \(\{v_{i} \mid i \in [h]\}\) the inner vertices of P. We will replace the inner vertices of each maximal induced path with at least seven inner vertices by a fixed gadget. The arc weights in the gadget are chosen such that the four possible ways in which the outermost inner vertices are connected inside the path (see Fig. 1 for a visualization of the four cases) are preserved.

Fig. 1

Visualization of the four cases for connectivity inside maximal induced paths (see Observation 3). For cases (R) and (L), at least one of the dotted arcs is not present

Observation 3

Let I = (G,w) be an instance of MinPAC with an optimal solution H. Let \(P = (v_{0}, \dots , v_{h+1})\) be a maximal induced path of G. Then, there are four cases in which v1 and vh are connected inside P in H (Fig. 1):

Case (R):

It holds for every i ∈ [h − 1] that \(v_{i} v_{i+1} \in A(H)\) and there exists k ∈ [h − 1] such that \(v_{k+1} v_{k} \notin A(H)\).

Case (L):

It holds for every i ∈ [h − 1] that \(v_{i+1} v_{i} \in A(H)\) and there exists k ∈ [h − 1] such that \(v_{k} v_{k+1} \notin A(H)\).

Case (B):

It holds for every i ∈ [h − 1] that \(v_{i} v_{i+1} \in A(H)\) and \(v_{i+1} v_{i} \in A(H)\).

Case (N):

There exists k ∈ [h − 1] such that \(v_{k} v_{k+1} \notin A(H)\) and \(v_{k+1} v_{k} \notin A(H)\), and thus it holds for every \(i \in \{ 1, \dots , k - 1, k + 1, \dots , h - 1 \}\) that \(v_{i} v_{i+1} \in A(H)\) and \(v_{i+1} v_{i} \in A(H)\).

Proof

If \(v_{i} v_{i+1} \in A(H)\) holds for all i ∈ [h − 1] or \(v_{i+1} v_{i} \in A(H)\) holds for all i ∈ [h − 1], then we have one of the three cases (R), (L), or (B). Otherwise there exist \(k, k^{\prime } \in [h - 1]\) such that \(v_{k} v_{k+1} \notin A(H)\) and \(v_{k^{\prime }+1} v_{k^{\prime }} \not \in A(H)\). We show that this corresponds to the case (N). To this end, we show that \(k = k^{\prime }\). If \(k < k^{\prime }\), then there is no incoming arc to any vertex of \(S_{1} = \{ v_{k+1}, \dots, v_{k^{\prime }} \}\) from V (G) ∖ S1. This is contradicting the assumption that H is a solution, and hence \(k \ge k^{\prime }\). If \(k > k^{\prime }\), then there is no outgoing arc from any vertex of \(S_{2} = \{ v_{k^{\prime }+1}, \dots , v_{k} \}\) to V (G) ∖ S2. Again this is a contradiction, and hence we obtain \(k \le k^{\prime }\). □

Before giving the gadget to replace the inner vertices of a maximal induced path, we define the cost for the inner vertices in cases (R), (L), and (N).

Definition 8

Let (G,w) be an instance of MinPAC and let \(P = (v_{0}, \dots , v_{h+1} )\) be a maximal induced path of G. We define the cost for the connection inside P in the right direction, the left direction, and neither direction as follows:

$$ \begin{array}{@{}rcl@{}} C_{R} :=\sum\limits_{i = 1}^{h - 1} w(v_{i} v_{i+1}), C_{L} := \sum\limits_{i=1}^{h-1} w(v_{i+1}v_{i}),\\ C_{N} := \sum\limits_{i = 1, i \ne k}^{h-1}{w(v_{i} v_{i+1}) + w(v_{i+1} v_{i})}, \end{array} $$

where

$$ k := \mathop{\text{argmax}}_{i \in [h - 1]} (w(v_{i} v_{i + 1}) + w(v_{i + 1} v_{i}) ). $$

Note that \(C_{R} = \infty \) (or \(C_{L} = \infty \)) if \(v_{i} v_{i+1} \notin A(G)\) (or \(v_{i+1} v_{i} \notin A(G)\)) for some i (recall that \(w(vu) = \infty \) for \(vu \notin A(G)\)). We finally present the gadget to replace the inner vertices of a maximal induced path. The gadget is somewhat more involved than the gadget used in the symmetric version of MinPAC [2] because it needs to encode the four cases seen in Observation 3.

Definition 9

Let \(P=(v_{0}, \dots , v_{h+1} )\) be a maximal induced path. The path-gadget for P is a graph on 6 vertices {v1,vh,a1,a2,b1,b2} and 10 arcs {v1a1,a1v1,vha2,a2vh,a1b1,a2b2,b1a2,b2a1,a1b2,a2b1} with weights defined as follows:

$$ \begin{array}{@{}rcl@{}} w(v_{1} a_{1}) &:=& 0, \quad w(a_{1} v_{1}) := 0, \quad w(v_{h}a_{2}) := 0, \quad w(a_{2}v_{h}) := 0, \\ w(a_{1} b_{1}) &:=& C_{R}, \quad w(a_{2} b_{2}) := C_{L}, \quad w(b_{1}a_{2}) := 0, \quad w(b_{2}a_{1}) := 0, \end{array} $$
$$ \begin{array}{@{}rcl@{}} w(a_{1} b_{2}) &:= \left\{\begin{array}{ll} C_{R} & \text{if } C_{R} \leq C_{N} \text{ or } C_{L} \leq C_{N}, \\ \left\lceil\displaystyle\frac{1}{2} C_{N} \right\rceil & \text{otherwise}, \end{array}\right. \\ w(a_{2} b_{1}) &:= \left\{\begin{array}{ll} C_{L} & \text{if } C_{R} \leq C_{N} \text{ or } C_{L} \leq C_{N},\\ \left\lfloor\displaystyle\frac{1}{2} C_{N} \right\rfloor & \text{otherwise}. \end{array}\right. \end{array} $$
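For concreteness, a small Python sketch (names ours) computes \(C_{R}\), \(C_{L}\), \(C_{N}\), and the resulting gadget weights from the inner vertices of a maximal induced path; missing arcs are treated as having weight \(\infty\), matching the convention above.

```python
import math

def path_gadget_weights(inner, w):
    """Gadget weights for a maximal induced path with inner vertices
    inner = [v_1, ..., v_h], h >= 2.  w maps arcs to weights; arcs that are
    missing are treated as having weight infinity."""
    def wt(a, b):
        return w.get((a, b), math.inf)
    pairs = list(zip(inner, inner[1:]))              # (v_i, v_{i+1}) for i in [h-1]
    C_R = sum(wt(a, b) for a, b in pairs)
    C_L = sum(wt(b, a) for a, b in pairs)
    k = max(range(len(pairs)),
            key=lambda i: wt(*pairs[i]) + wt(pairs[i][1], pairs[i][0]))
    C_N = sum(wt(a, b) + wt(b, a) for i, (a, b) in enumerate(pairs) if i != k)
    if C_R <= C_N or C_L <= C_N:
        w_a1b2, w_a2b1 = C_R, C_L
    else:                                            # here C_N is finite
        w_a1b2, w_a2b1 = math.ceil(C_N / 2), math.floor(C_N / 2)
    return {("v1", "a1"): 0, ("a1", "v1"): 0, ("vh", "a2"): 0, ("a2", "vh"): 0,
            ("a1", "b1"): C_R, ("a2", "b2"): C_L, ("b1", "a2"): 0, ("b2", "a1"): 0,
            ("a1", "b2"): w_a1b2, ("a2", "b1"): w_a2b1}
```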

Observation 4

In Definition 9, it always holds that w(a1b2) ≤ CR, w(a2b1) ≤ CL, and w(a1b2) + w(a2b1) ≥ CN.

Reduction Rule 4

Let P = (v0,…,vh+ 1) be a maximal induced path of G with h ≥ 7. Then, remove the vertices v2,…,vh− 1 of P, add a path-gadget for P with endpoints v1 and vh (see Fig. 2), and keep d unchanged.

Fig. 2

Illustration of Reduction Rule 4. We replace the inner vertices of a maximal induced path with a path-gadget. Bold arcs denote arcs of weight 0. For the weights of other arcs in the path-gadget, see Definition 9. The value d remains unchanged, that is, the costs in both instances are the same.

Lemma 9

Reduction Rule 4 is correct and can be exhaustively applied in linear time.

Proof

Let I = (G,w) be an instance of MinPAC and \(P = (v_{0}, \dots , v_{h+1} )\) be a maximal induced path of G with h ≥ 7. Let \(I^{\prime } = (G^{\prime }, w^{\prime })\) be the instance where the inner vertices of P are replaced by the path-gadget for P. We use \(V_{\text{in}}\) to denote the inner vertices \(\{ v_{1}, \dots, v_{h} \}\) of P and \(V_{\text{new}}\) to denote the vertices \(\{ v_{1}, v_{h}, a_{1}, a_{2}, b_{1}, b_{2} \}\) of the path-gadget. We show that \(\text {OPT}(I) = \text {OPT}(I^{\prime })\).

  1. (≥)

    Let H be an optimal solution of I. Let \(B = A(H) \setminus \{ v_{i} v_{i+1}, v_{i+1} v_{i} \mid i \in [h-1] \}\) be the set of arcs in H that have their initial or terminal vertex outside \(V_{\text{in}}\). Let \(B_{0} = \{ v_{1}a_{1}, a_{1}v_{1}, v_{h}a_{2}, a_{2}v_{h}, b_{1}a_{2}, b_{2}a_{1} \}\) be the weight-zero arcs inside \(V_{\text{new}}\) and let \(B_{1} = \{ a_{1}b_{1}, a_{1}b_{2}, a_{2}b_{1}, a_{2}b_{2} \}\) be the arcs inside \(V_{\text{new}}\) that have weight at least one. We will construct a solution \(H^{\prime }\) of \(I^{\prime }\) such that \(A(H^{\prime }) = B \cup B_{0} \cup B_{1}^{\prime }\) for some \(B_{1}^{\prime } \subseteq B_{1}\) which we specify later (in a case distinction). Thus, in this construction the arcs outside \(V_{\text{in}}\) and \(V_{\text{new}}\) remain identical in H and in \(H^{\prime }\). Hence, it is sufficient to compare the cost for \(V_{\text{in}}\) in H and the cost for \(V_{\text{new}}\) in \(H^{\prime }\). To this end, for \(X \subseteq A(G)\) we define an auxiliary function for the weight of an arc

    $$ \begin{array}{@{}rcl@{}} \mathbf{w}_{X}(vu) = \left\{\begin{array}{ll} w(vu) & \text{if } vu \in X, \\ 0 & \text{otherwise}. \end{array}\right. \end{array} $$

We define \(\mathbf {w}_{X}^{\prime }\) analogously for \(w^{\prime }\) of \(I^{\prime }\). Using this notation, the cost for Vin reads

$$ \begin{array}{@{}rcl@{}} \text{Cost}(V_{\text{in}}, A(H), w) ={} && \sum\limits_{i = 1}^{h} \max \{ \mathbf{w}_{A(H)}(v_{i} v_{i-1}), \mathbf{w}_{A(H)}(v_{i} v_{i+1}) \} \\ ={} && \sum\limits_{i = 1}^{h} \mathbf{w}_{A(H)}(v_{i} v_{i-1}) + \mathbf{w}_{A(H)}(v_{i} v_{i+1}). \\ ={} && \mathbf{w}_{B}(v_{1}v_{0}) + \mathbf{w}_{B}(v_{h}v_{h+1}) \\ && + \sum\limits_{i = 1}^{h - 1} \mathbf{w}_{A(H)}(v_{i} v_{i+1}) + \mathbf{w}_{A(H)}(v_{i+1} v_{i}). \end{array} $$

Here the second equality follows from the assumption that Reduction Rule 2 is applied: it holds for any i ∈ [h] that at least one of w(vivi− 1) or w(vivi+ 1) is 0. On the other hand, the cost for Vnew reads

$$ \begin{array}{@{}rcl@{}} \text{Cost}(V_{\text{new}}, A(H^{\prime}), w^{\prime}) &=& \mathbf{w}_{B}(v_{1}v_{0}) + \mathbf{w}_{B}(v_{h}v_{h+1}) + \max \{ \mathbf{w}_{B_{1}^{\prime}}^{\prime}(a_{1} b_{1}), \mathbf{w}_{B_{1}^{\prime}}^{\prime}(a_{1} b_{2}) \} \\ & &+ \max \{ \mathbf{w}_{B_{1}^{\prime}}^{\prime}(a_{2} b_{1}), \mathbf{w}_{B_{1}^{\prime}}^{\prime}(a_{2} b_{2}) \}. \end{array} $$

We show that \(\delta := \text {Cost}(V_{\text {in}}, A(H), w) - \text {Cost}(V_{\text {new}}, A(H^{\prime }), w^{\prime }) \ge 0\). We rewrite δ, by canceling out the terms wB(v1v0) and wB(vhvh+ 1), as

$$ \begin{array}{@{}rcl@{}} \delta = \text{Cost}(V_{\text{in}}, A(H), w) - \text{Cost}(V_{\text{new}}, A(H^{\prime}), w^{\prime}) = C_{I} - C_{I^{\prime}} \end{array} $$

where CI and \(C_{I^{\prime }}\) are given by

$$ \begin{array}{@{}rcl@{}} C_{I} &=& \sum\limits^{h-1}_{i=1} \mathbf{w}_{A(H)}(v_{i} v_{i+1}) + \mathbf{w}_{A(H)}(v_{i+1} v_{i}) \text{ and } \\ C_{I^{\prime}} &= &\max \{ \mathbf{w}_{B_{1}^{\prime}}^{\prime}(a_{1} b_{1}), \mathbf{w}_{B_{1}^{\prime}}^{\prime}(a_{1} b_{2}) \} + \max \{ \mathbf{w}_{B_{1}^{\prime}}^{\prime}(a_{2} b_{1}), \mathbf{w}_{B_{1}^{\prime}}^{\prime}(a_{2} b_{2}) \}. \end{array} $$

We distinguish between the four cases shown in Observation 3.

  1. Case (R).

    We set \(B_{1}^{\prime } := \{ a_{1}b_{1}, a_{1}b_{2} \}\). Then, \(H^{\prime }\) is a solution because the connectivity from v1 to vh inside the gadget is preserved. Since H contains the arcs \(v_{i} v_{i+1}\) for all i ∈ [h − 1], we have \(C_{I} \ge C_{R}\). As noted in Observation 4, we have \(C_{I^{\prime }} = {\max \limits } \{ w(a_{1}b_{1}), w(a_{1}b_{2}) \} \le C_{R} \le C_{I}\).

  2. Case (L).

    We set \(B_{1}^{\prime } := \{ a_{2}b_{1}, a_{2}b_{2} \}\). Then, \(H^{\prime }\) is a solution because the connectivity from vh to v1 inside the gadget is preserved. Since H contains the arcs \(v_{i+1} v_{i}\) for all i ∈ [h − 1], we have \(C_{I} \ge C_{L}\). As noted in Observation 4, we have \(C_{I^{\prime }} = {\max \limits } \{ w(a_{2}b_{1}), w(a_{2}b_{2} )\} \le C_{L} \le C_{I}\).

  3. Case (B).

    We set \(B_{1}^{\prime } = \{ a_{1}b_{1}, a_{2}b_{2} \}\). Then, \(H^{\prime }\) is a solution because the connectivities from v1 to vh and from vh to v1 are both preserved. Since H contains the arcs \(v_{i} v_{i+1}\) and \(v_{i+1} v_{i}\) for all i ∈ [h − 1], we have \(C_{I} \ge C_{R} + C_{L}\). We also have \(C_{I^{\prime}} = C_{R} + C_{L}\) because the costs for a1 and a2 are CR and CL, respectively.

  4. Case (N).

    Here we have \(C_{I} = C_{N}\). We further distinguish three subcases. If \(C_{R} \le C_{N}\), then we set \(B_{1}^{\prime } := \{ a_{1}b_{1}, a_{1}b_{2} \}\). Then, \(H^{\prime }\) is a solution with \(C_{I^{\prime }} = C_{R} \le C_{N}\). If \(C_{L} \le C_{N}\), then we set \(B_{1}^{\prime } := \{ a_{2}b_{1}, a_{2}b_{2} \}\). Then, \(H^{\prime }\) is a solution with \(C_{I^{\prime }} = C_{L} \le C_{N}\). Otherwise we set \(B_{1}^{\prime } :=\{ a_{1} b_{2}, a_{2} b_{1} \}\). Then, \(H^{\prime }\) is a solution with \(C_{I^{\prime }} = C_{N}\).

  5. (≤)

    Let \(H^{\prime }\) now be an optimal solution of \(I^{\prime }\). We can assume that \(H^{\prime }\) contains all weight-zero arcs in B0 and some of the non-zero-weight arcs in B1. Let \(B = A(H^{\prime }) \setminus (B_{0} \cup B_{1})\). We construct a solution H such that \(A(H)\supseteq B\). We will give H by specifying \(B^{\prime } = A(H) \setminus B\). By the same argument as before we compare the following two quantities:

    $$ \begin{array}{@{}rcl@{}} C_{I} &=& \sum\limits^{h-1}_{i=1} \mathbf{w}_{B^{\prime}}(v_{i} v_{i+1}) + \mathbf{w}_{B^{\prime}}(v_{i+1} v_{i}) \text{ and } \\ C_{I^{\prime}} &=& \max \{ \mathbf{w}_{A(H^{\prime})}^{\prime}(a_{1} b_{1}), \mathbf{w}_{A(H^{\prime})}^{\prime}(a_{1} b_{2}) \}\\ &&+ \max \{ \mathbf{w}_{A(H^{\prime})}^{\prime}(a_{2} b_{1}), \mathbf{w}_{A(H^{\prime})}^{\prime}(a_{2} b_{2}) \}. \end{array} $$
  1. Case 1.

    If there is a path from v1 to vh but no path from vh to v1 inside the path-gadget, then we set \(B^{\prime } = \{ v_{i} v_{i+1} \mid i \in [h - 1] \}\). Then, H is a solution with CI = CR. Since \(H^{\prime }\) contains a1b1, it holds that \(C_{I^{\prime }} \ge C_{R}\).

  2. Case 2.

    If there is a path from vh to v1 but no path from v1 to vh inside the path-gadget, then we set \(B^{\prime } = \{ v_{i+1} v_{i} \mid i \in [h - 1] \}\). Then, H is a solution with CI = CL. Since \(H^{\prime }\) contains a2b2, it holds that \(C_{I^{\prime }} \ge C_{L}\).

  3. Case 3.

    If there are paths from v1 to vh and from vh to v1 inside the path-gadget, then we set \(B^{\prime } = \{ v_{i} v_{i+1}, v_{i+1} v_{i} \mid i \in [h - 1] \}\). Then, H is a solution with CI = CR + CL. Since \(H^{\prime }\) contains a1b1 and a2b2, it holds that \(C_{I^{\prime }} \ge C_{R} + C_{L}\).

  4. Case 4.

    If there is neither a path from v1 to vh nor from vh to v1 inside the path-gadget, then we set \(B^{\prime } = \{ v_{i} v_{i+1}, v_{i+1} v_{i} \mid i \in [h - 1], i \ne k \}\), where k is \(\arg \max \limits _{i \in [h - 1]} w(v_{i} v_{i+1}) + w(v_{i+1} v_{i})\). Then, H is a solution with \(C_{I} = C_{N}\). Since \(H^{\prime }\) neither contains a1b1 nor a2b2 in this case, it must contain a1b2 and a2b1 so that b2 and b1 can be reached in \(H^{\prime }\). As noted in Observation 4, we have \(C_{I^{\prime }} \ge w(a_{1} b_{2}) + w(a_{2} b_{1}) \ge C_{N}\).

Running time. To find maximal induced paths, we start with a degree-two vertex and traverse in both directions until a vertex with degree at least 3 is discovered. If the maximal induced path contains at least 7 inner vertices, then we replace it with a gadget with appropriate weights. The algorithm spends constant time for each inner vertex in the maximal induced path. Since the inner vertices of maximal induced paths are pairwise disjoint, this procedure applies Reduction Rule 4 exhaustively in linear time. □

Remark 1

Reduction Rule 4 cannot be applied when \(U_{G}\) is a large cycle because there is no vertex with degree 3 or larger. However, if \(U_{G}\) is a cycle, then we can easily compute a solution: Let \(v \in V(G)\) be an arbitrary vertex. Then, compute the costs corresponding to the cases (R), (L), and (N) with v1 = vh = v (see Fig. 1). Take the cheapest solution found.
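A short sketch of this routine (Python, names ours; it assumes the cycle has at least three vertices and treats arcs absent from G as having infinite weight):

```python
def solve_cycle(cycle, w):
    """Remark 1: when the underlying undirected graph is a cycle, an optimal
    solution orients the cycle one way, the other way, or keeps both directions
    on every edge except one.  cycle: vertices in cyclic order (>= 3); w: dict
    of arc weights; arcs missing from G count as infinitely expensive."""
    n = len(cycle)
    INF = float("inf")
    edges = [(cycle[i], cycle[(i + 1) % n]) for i in range(n)]

    def cost(arcs):
        # every vertex pays its most expensive selected outgoing arc
        return sum(max(w.get((v, u), INF) for (x, u) in arcs if x == v)
                   for v in cycle)

    forward = edges
    backward = [(b, a) for (a, b) in edges]
    best = min(cost(forward), cost(backward))
    for k in range(n):                     # case (N): drop both arcs of the k-th edge
        arcs = [a for i, e in enumerate(edges) if i != k
                for a in (e, (e[1], e[0]))]
        best = min(best, cost(arcs))
    return best
```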

We have so far shown a reduction rule to remove all degree-one vertices and a gadget to replace every maximal induced path with a fixed number of vertices. As shown in previous work [2], this is sufficient to obtain a linear-size kernel.

Proposition 1 (2)

Any undirected graph G without degree-one vertices contains at most 2g − 2 vertices of degree at least three, where g is the feedback edge number of G.

Proposition 2 (2)

Any connected undirected graph G without degree-one vertices consists of at most 3g − 3 maximal induced paths, where g ≥ 2 is the feedback edge number of G.

We use the two propositions above to prove the main theorem of this section.

Proof of Theorem 4

Let I = (G,w) be an instance of MinPAC with feedback edge number g. We apply Reduction Rules 2 and 3 exhaustively to obtain \(I^{\prime } = (G^{\prime }, w^{\prime })\), in which there is no degree-one vertex. We then obtain \(I^{\prime \prime } = (G^{\prime \prime }, w^{\prime \prime })\), in which the inner vertices of each maximal induced path are replaced with a path-gadget using Reduction Rule 4. It follows from Lemmas 6, 7 and 9 that this transformation is correct and can be done in linear time.

We show that \(G^{\prime \prime }\) has at most 20g − 20 vertices and 42g − 42 arcs. It follows from Propositions 1 and 2 that there are at most 2g − 2 vertices of degree at least three and 3g − 3 maximal induced paths in \({U_{G^{\prime }}}\). After the exhaustive application of Reduction Rule 4, each maximal induced path \((v_{0}, \dots , v_{h+1})\) contains at most 6 inner vertices and 14 arcs (including \(v_{0}v_{1}, v_{1}v_{0}, v_{h}v_{h+1}, v_{h+1}v_{h}\)). Thus, \(G^{\prime \prime }\) contains at most 2g − 2 + 6 ⋅ (3g − 3) = 20g − 20 vertices and at most 14 ⋅ (3g − 3) = 42g − 42 arcs. Note that we count edges between vertices of degree at least three as maximal induced paths with no inner vertices. □

We can finally again use Theorem 3 to bound the weights and hence arrive at the following result.

Corollary 3

MinPAC admits a polynomial-size kernel with respect to the feedback edge number.

5 Parameterized Hardness

In this section we present several hardness results for MinPAC. To this end, we consider the decision variant of MinPAC.

Power Asymmetric Connectivity (PAC)

  1. Input:

    A strongly connected graph G, arc weights \(w \colon A(G) \to \mathbb{N}\), and a budget \(k \in \mathbb{N}\).

  2. Question:

    Is there a strongly connected spanning subgraph H of G, such that Cost(V (G),A(H),w) ≤ k?

We prove that PAC remains NP-hard even if the feedback arc number is 1. This complements the result in Section 4, where we showed that MinPAC parameterized by the feedback edge number admits an FPT algorithm via a kernelization. Recall that the feedback arc number of a directed graph is the minimum number of arcs that have to be removed to make the graph acyclic. Furthermore, we show that PAC is W[2]-hard with respect to the solution cost k. We also show that PAC cannot be solved in subexponential time in the number of vertices assuming the Exponential Time Hypothesis (ETH) [13], which states that 3-Sat cannot be solved in \(2^{o(n+m)}\) time, where n and m are the numbers of variables and clauses in the input formula. Summarizing, we show the following.

Theorem 5

Even if each arc weight is either one or zero and the feedback arc number is 1,

  1. (i)

    PAC is NP-hard,

  2. (ii)

    PAC is W[2]-hard when parameterized by the solution cost k, and

  3. (iii)

    PAC is not solvable in \(2^{o(n)}\) time, unless the ETH fails.

It follows from Theorem 5 (ii) that there (presumably) is no algorithm solving PAC running in \(f(k) \cdot n^{O(1)}\) time. Nonetheless, a simple brute-force algorithm solves PAC in \(n^{{\Theta }(k)}\) time, certifying that PAC is in the class XP with respect to the parameter solution cost. In order to prove the claims of Theorem 5, we use a reduction from the well-studied Set Cover problem.
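To illustrate the XP claim for the zero/one-weight setting of Theorem 5, here is a minimal brute-force sketch (ours; the graph encoding and all names are assumptions): a vertex pays 1 exactly if it keeps an outgoing arc of weight one, so a solution of cost at most k has at most k such vertices, which can be guessed in \(n^{O(k)}\) ways.

```python
from itertools import combinations

def pac_brute_force(n, arcs, k):
    """Brute-force decision procedure for PAC with arc weights in {0, 1}
    (the setting of Theorem 5); an illustrative sketch, not the paper's
    algorithm.

    n     -- number of vertices, labelled 0, ..., n-1
    arcs  -- dict mapping each arc (v, u) to its weight (0 or 1)
    k     -- cost budget

    We guess the set of "paying" vertices, keep all of their outgoing arcs
    together with every weight-zero arc, and test strong connectivity.
    """
    zero_arcs = [a for a, weight in arcs.items() if weight == 0]

    def strongly_connected(arc_set):
        succ = [[] for _ in range(n)]
        pred = [[] for _ in range(n)]
        for v, u in arc_set:
            succ[v].append(u)
            pred[u].append(v)

        def reaches_all(adjacency):
            seen, stack = {0}, [0]
            while stack:
                x = stack.pop()
                for y in adjacency[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            return len(seen) == n

        # strongly connected iff vertex 0 reaches every vertex in the graph
        # and in its reverse
        return reaches_all(succ) and reaches_all(pred)

    for size in range(min(k, n) + 1):
        for payers in combinations(range(n), size):
            kept = set(zero_arcs)
            kept.update(a for a in arcs if a[0] in payers)
            if strongly_connected(kept):
                return True
    return False
```

Adding further weight-zero arcs, or further arcs leaving a vertex that already pays 1, never increases the cost; this is why it suffices to test, for every guess, the inclusion-maximal kept arc set.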

Set Cover

  1. Input:

    A universe U = {u1,…,un}, a set family \( \mathcal {F} = \{ S_{1}, {\dots } , S_{m} \}\) containing sets \(S_{i} \subseteq U\), and an integer \(\ell \).

  2. Question:

    Is there a size-\(\ell \) set cover \(\mathcal {F}^{\prime } \subseteq \mathcal {F}\) (that is, \(\bigcup _{S \in \mathcal {F}^{\prime }} S = U\))?

Set Cover is NP-hard and W[2]-hard with respect to the solution size [11], and it is not solvable in \(2^{o(\vert U \vert + \vert \mathcal {F} \vert )}\) time unless the ETH fails [14].

For the reduction, we use one vertex for each element and each set, and one arc to represent each membership of an element in a set. The construction resembles the one used for Min-Power Symmetric Connectivity [2].

Reduction 1

Given an instance \(I = (U, \mathcal {F}, \ell )\) of Set Cover, we construct an instance \(I^{\prime } = (G, w, k = \ell )\) of PAC as follows. We introduce a vertex \(v_u\) for every \(u \in U\), a vertex \(v_S\) for every \(S \in \mathcal {F}\), and two additional vertices s and t. We construct a graph such that \(V(G) = \{ s, t \} \cup V_{U} \cup V_{\mathcal {F}}\), where \(V_{U} = \{ v_{u} \mid u \in U \}\) and \(V_{\mathcal {F}} = \{ v_{S} \mid S \in \mathcal {F} \}\). For the arcs, we first add an arc ts of weight 0. We then add arcs \(s v_{S}\) and \(v_{S} t\) of weight 0 for every \(S \in \mathcal {F}\) and an arc \(v_{u} t\) of weight 0 for every \(u \in U\). Finally, for every \(S \in \mathcal {F}\) and every \(u \in S\), we add an arc \(v_{S} v_{u}\) of weight 1.

Figure 3 illustrates the reduction to PAC. We can assume that arcs of weight zero (bold arcs in the figure) are part of the solution. The idea is that, in order to obtain a strongly connected subgraph, one has to select at least one incoming arc for each vertex in \(V_U\) such that at most k vertices in \(V_{\mathcal {F}}\) have selected outgoing arcs of weight one.

Fig. 3 Illustration of Reduction 1 on a Set Cover instance with universe U = {1,2,3} and set family \(\mathcal {F} = \{ \{ 2, 3 \}, \{ 1, 2 \} \}\). Bold arcs denote arcs of weight 0; all other arcs have weight 1
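To complement Fig. 3, the following sketch (ours; vertex labels such as ('S', i) and ('u', u) are assumptions chosen for readability) builds the PAC instance of Reduction 1 from a given Set Cover instance and reproduces the example from the figure.

```python
def reduce_set_cover_to_pac(universe, family, budget):
    """Build the PAC instance (G, w, k) of Reduction 1; an illustrative sketch.

    universe -- iterable of elements u
    family   -- list of sets S_0, ..., S_{m-1} over the universe
    budget   -- the Set Cover bound ell, which becomes the PAC budget k

    Vertices are encoded as 's', 't', ('S', i) for the set S_i, and ('u', u)
    for the element u; the returned dict maps each arc (tail, head) to its
    weight.
    """
    arcs = {('t', 's'): 0}                      # the arc ts of weight 0
    for i, S in enumerate(family):
        arcs[('s', ('S', i))] = 0               # arc s v_S of weight 0
        arcs[(('S', i), 't')] = 0               # arc v_S t of weight 0
        for u in S:
            arcs[(('S', i), ('u', u))] = 1      # membership arc v_S v_u, weight 1
    for u in universe:
        arcs[(('u', u), 't')] = 0               # arc v_u t of weight 0
    vertices = ({'s', 't'}
                | {('S', i) for i in range(len(family))}
                | {('u', u) for u in universe})
    return vertices, arcs, budget


# The instance of Fig. 3: U = {1, 2, 3} and F = [{2, 3}, {1, 2}].
V, w, k = reduce_set_cover_to_pac({1, 2, 3}, [{2, 3}, {1, 2}], 2)
```

Removing the single arc ts leaves an acyclic graph (s, then the set vertices, then the element vertices, then t is a topological order), so the constructed instance has feedback arc number 1, as used in the proof of Theorem 5 below.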

To prove the W[2]-hardness, we have to verify that the given reduction is indeed a parameterized reduction.

Definition 10

A parameterized reduction from a parameterized problem \({\Pi } \subseteq {\Sigma }^{*} \times {\Sigma }^{*}\) to a parameterized problem \({\Pi }^{\prime } \subseteq {\Sigma }^{*} \times {\Sigma }^{*}\) is a function that maps any instance \((I,p) \in {\Sigma }^{*} \times {\Sigma }^{*}\) to an instance \((I^{\prime },p^{\prime })\) such that

  1. (i)

    \((I^{\prime },p^{\prime })\) can be computed from (I,p) in \(f(p) \cdot |I|^{O(1)}\) time for some computable function f,

  2. (ii)

    \(p^{\prime } \leq g(p)\) for some computable function g, and

  3. (iii)

    \((I,p) \in {\Pi }\) if and only if \((I^{\prime },p^{\prime }) \in {\Pi }^{\prime }\).

Lemma 10

Reduction 1 is a parameterized reduction from Set Cover parameterized by the solution size to PAC parameterized by the solution cost.

Proof

To prove that Reduction 1 is a parameterized reduction, we verify Definition 10 (i) to (iii). Observe that Reduction 1 can be carried out in time linear in the size of I, which satisfies Definition 10 (i). Definition 10 (ii) is clearly satisfied since \(k = \ell \). For Definition 10 (iii), we show that I has a set cover of size at most \(\ell \) if and only if G has a strongly connected spanning subgraph H of cost at most \(\ell \).

  1. (⇒)

    Let \(\mathcal {F^{\prime }} \subseteq \mathcal {F}\) be a set cover of size at most \(\ell \). Let \(B_{0} = \{ t s \} \cup \{ s v_{S}, v_{S} t \mid S \in \mathcal {F} \} \cup \{ v_{u} t \mid u \in U \}\) be the arcs of weight 0. We claim that \(H = (V(G), B_{0} \cup \{ v_{S^{\prime }} v_{u^{\prime }} \mid S^{\prime } \in \mathcal {F}^{\prime }, u^{\prime } \in S^{\prime } \})\) is a solution of cost at most \(\ell \). Since \(\mathcal {F}^{\prime }\) is a set cover, every vertex in \(V_U\) has at least one incoming arc in H; together with the weight-zero arcs ts, \(s v_{S}\), \(v_{S} t\), and \(v_{u} t\), this implies that H is strongly connected. Since the cost for \(v_{S^{\prime }}\) is at most 1 for every \(S^{\prime } \in \mathcal {F}^{\prime }\) and the cost for every other vertex is 0, the cost of H is at most \(\ell \).

  2. (⇐)

    Let H be a strongly connected spanning subgraph of cost at most \(\ell \). Let \(\mathcal {F}^{\prime } = \{ S \mid \exists u. v_{S} v_{u} \in A(H) \}\). Then, \(\mathcal {F}^{\prime }\) is a set cover because every vertex \(v_u \in V_U\) has at least one incoming arc in H and all incoming arcs of \(v_u\) are of the form \(v_S v_u\) with \(u \in S\). Since every \(S \in \mathcal {F}^{\prime }\) causes a cost of 1 at \(v_S\) and the cost of H is at most \(\ell \), we have \(|\mathcal {F}^{\prime }| \le \ell \) (see the sketch following this proof). □
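As a small illustration of the (⇐) direction, the following sketch (ours; it reuses the vertex encoding of the previous sketch) extracts \(\mathcal {F}^{\prime }\) from the arc set of a solution H and checks that it covers the universe.

```python
def extract_cover(solution_arcs, family, universe):
    """Recover F' = { S_i : some arc v_{S_i} v_u is kept } from a PAC solution.

    solution_arcs -- the arc set A(H) of a strongly connected solution,
                     using the vertex encoding of the previous sketch
    Returns the indices of the chosen sets and whether they cover U.
    """
    chosen = {tail[1] for tail, head in solution_arcs
              if isinstance(tail, tuple) and tail[0] == 'S'
              and isinstance(head, tuple) and head[0] == 'u'}
    covered = set().union(*(family[i] for i in chosen)) if chosen else set()
    return chosen, covered == set(universe)


# For the instance of Fig. 3, keeping all weight-zero arcs together with the
# membership arcs ('S', 1) -> ('u', 1), ('S', 1) -> ('u', 2), and
# ('S', 0) -> ('u', 3) gives a strongly connected solution of cost 2;
# extract_cover then returns ({0, 1}, True), i.e. both sets are chosen.
```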

Now we can prove the statements of the theorem.

Proof of Theorem 5

Theorem 5 (ii) follows from Lemma 10 because Set Cover is W[2]-hard when parameterized by the solution size [11]. For Theorem 5 (i), observe that Reduction 1 is a polynomial-time reduction from Set Cover and that the constructed graph has a feedback arc set of size 1, namely {ts}. For Theorem 5 (iii), observe that the graph constructed by Reduction 1 has \(O(|U| + |\mathcal {F}|)\) vertices. Since Set Cover cannot be solved in \(2^{o(\vert U \vert + \vert \mathcal {F} \vert )}\) time assuming the ETH [14], Theorem 5 (iii) follows. □

Remark 2

We remark that having arcs of weight zero is essential for the W[2]-hardness in Theorem 5 (ii): If \(\min \limits _{vu \in A(G)} w(vu) \ge 1\) for every \(v \in V(G)\), then PAC is trivially FPT with respect to the solution cost (as the cost is at least n). However, even if \(\min \limits _{vu \in A(G)} w(vu) \ge 1\) for every \(v \in V(G)\), PAC is still W[2]-hard with respect to the above lower bound \(k - {\sum }_{v \in V(G)} \min \limits _{vu \in A(G)} w(vu)\); this follows from a modification of Reduction 1 in which every arc weight is increased by one.

6 Conclusion

We started the investigation of the parameterized complexity of MinPAC, providing first tractability and intractability results. We remark that our algorithms run in linear time when the respective parameters are bounded by a constant; thus, we believe that our results are worthwhile for empirical evaluation. There are also several theoretical challenges for future work: Can the running time of the parameterized algorithm with respect to the number c of strongly connected components in the obligatory subgraph be improved to single-exponential? Resolving the parameterized complexity of MinPAC with respect to the single parameter vertex cover number is another task for future work. Finally, problem variants where the solution graph is not only required to be strongly connected but also to have at most a certain diameter might be interesting, both theoretically and from an application point of view, where the number of hops for communication should be limited.