
Locating a semi-obnoxious facility in the special case of Manhattan distances

  • Andrea Wagner
Open Access
Original Article

Abstract

The aim of this work is to locate a semi-obnoxious facility, i.e. to minimize the distances to a given set of customers in order to save transportation costs on the one hand and to avoid undesirable interactions with other facilities within the region by maximizing the distances to the corresponding facilities on the other hand. Hence, the goal is to satisfy economic and environmental issues simultaneously. Due to the contradicting character of these goals, we obtain a non-convex objective function. We assume that distances can be measured by rectilinear distances and exploit the structure of this norm to obtain a very efficient dual pair of algorithms.

Keywords

Obnoxious facility location · Global optimization · Primal and dual algorithms · DC problems

Mathematics Subject Classification

90B85 · 90C26 · 90C46

1 Introduction

This paper deals with the problem of locating a semi-obnoxious facility such as an industrial plant, where the goal is to minimize travel distances to customers and suppliers and to maximize distances to nature reserves and residential areas. First attempts to solve location problems with undesirable facilities appeared in the 1970s (Church and Garfinkel 1978; Dasarathy and White 1980; Goldman and Dearing 1975). Since then, researchers have focused on many different variants of the problem, like different spaces (e.g. networks, discrete settings, \({\mathbb {R}}^2\) or \({\mathbb {R}}^n\)), different distance functions (e.g. Euclidean distance, Manhattan norm, maximum norm, polyhedral gauges), different objective functions (e.g. bi-objective models, dc formulations) and many more. For surveys and summaries on location problems with undesirable facilities the reader is referred for instance to Cappanera (1999), Carrizosa and Plastria (1999), Eiselt and Laporte (1995), Plastria (1996), Wagner (2015).

The case of Manhattan distances in the Euclidean space \({\mathbb {R}}^2\) using a single-objective model is considered for instance in Drezner and Wesolowsky (1991), Nickel and Dudenhöffer (1997). Compared to those works, this paper provides an alternative approach (in \({\mathbb {R}}^n\)) which allows one to obtain a dual pair of algorithms, and a variant of the primal one, that all mainly consist of a sorting process followed by a very efficient procedure to evaluate the objective at all candidate coordinates.

In Wagner et al. (2016) the problem is considered for the general case of mixed gauge distances. A dual pair of algorithms to find exact solutions is developed, based on the following discretization result: Both the primal and the dual problem provide grids w.r.t. attraction and w.r.t. repulsion, such that the primal grid points w.r.t. attraction provide a finite set of candidates for optimal primal solutions and the dual grid points w.r.t. repulsion provide a finite set of candidates for optimal dual solutions. In case of mixed gauge distances, those grids may be very complex, and for determining their grid points it is suggested to apply a generalized version of Benson’s algorithm (Benson 1998; Löhne and Weißing 2015, 2016b).

In Löhne and Wagner (2017) a more general setting is considered. The goal is to minimize the difference of two convex functions where at least one of them is polyhedral convex, i.e. its epigraph is a polyhedral convex set. A dual pair of algorithms is presented, in which the vertices of the epigraphs are determined by solving a polyhedral projection problem, e.g. with a vlp solver (Löhne and Weißing 2016a). The projections of these vertices onto \({\mathbb {R}}^n\) provide finite sets of candidates for optimal solutions.

It turns out that the finite sets of candidates for optimal solutions in Löhne and Wagner (2017) and in Wagner et al. (2016) can efficiently be found by using for instance the implementation Bensolve (Löhne and Weißing 2015).

In case of Manhattan distances the grids have an axes-parallel structure and hence there is no further need for special solvers for multi-objective linear programs or polyhedral projection problems. We exploit the structure of the Manhattan norm in order to obtain more efficient variants of the algorithms presented in Löhne and Wagner (2017) and Wagner et al. (2016).

This paper is organized as follows: In Sect. 2 we introduce the mathematical formulation of the considered optimization problem and review the main results and algorithms presented in Löhne and Wagner (2017) and Wagner et al. (2016) on which this paper is based. In Sect. 3 we derive a method for determining the finite set of grid points, which are candidates for optimal solutions. Furthermore, in Sect. 4, we derive recursive methods for calculating the primal and dual objective values with help of the special structure given by the norm. The resulting algorithms are presented in Sect. 5. In Sect. 6 we provide computational results and compare different solving procedures. We close with a conclusion in Sect. 7.

2 Problem formulation and preliminary results

In this section we provide relevant terms, properties and problem formulations. For more detailed information the reader is referred to Löhne and Wagner (2017) and Wagner et al. (2016).

Let \(x,a\in {\mathbb {R}}^n\). Then the Manhattan distance between x and a is given by \(d_1(x,a)=\sum _{i=1}^n\left| x_i-a_i\right| \) and the corresponding unit ball is given by \(B=\left\{ x\in {\mathbb {R}}^n\left| \;\sum _{i=1}^n\left| x_i\right| \le 1\right. \right\} \).
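In computational terms, the distance \(d_1\) can be sketched as follows (a minimal illustration; the function name `d1` is ours, not from the paper):

```python
def d1(x, a):
    """Manhattan (rectilinear) distance between two points in R^n."""
    return sum(abs(xi - ai) for xi, ai in zip(x, a))

# e.g. d1((1.0, 2.0), (4.0, 0.0)) = |1 - 4| + |2 - 0| = 5.0
```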

The optimization problem under consideration is a dc location problem (difference of convex functions) that can be formulated as
$$\begin{aligned} \min _{x\in {\mathbb {R}}^n}\left\{ g(x)-h(x)\right\} \end{aligned}$$
(P)
with functions \(g,h:\;{\mathbb {R}}^n\rightarrow {\mathbb {R}}_+\) defined as
$$\begin{aligned} g(x):=\sum _{m=1}^{{\overline{M}}}{\overline{w}}_m{d_1}(x,{\overline{a}}^{m}),&h(x):=\sum _{m=1}^{{\underline{M}}}{\underline{w}}_m{d_1}(x,{\underline{a}}^{m}), \end{aligned}$$
where the parameters \({\overline{a}}^1,\ldots ,{\overline{a}}^{{\overline{M}}}\in {\mathbb {R}}^n\), \({\overline{M}}\ge 1\), denote the attracting points with weights \({\overline{w}}_1,\ldots ,{\overline{w}}_{{\overline{M}}}>0\), and \({\underline{a}}^1,\ldots ,{\underline{a}}^{{\underline{M}}}\in {\mathbb {R}}^n\), \({\underline{M}}\ge 1\), denote the repulsive points with weights \({\underline{w}}_1,\ldots ,{\underline{w}}_{{\underline{M}}}>0\). Based on Wagner et al. (2016) the Toland-Singer dual problem (Singer 1979; Toland 1978) results as
$$\begin{aligned} \min _{y\in {\mathbb {R}}^n}\left\{ h^*(y)-g^*(y)\right\} \end{aligned}$$
(D)
where the conjugate functions
$$\begin{aligned} h^*(y)&=\min _{({\underline{y}}^1,\ldots ,{\underline{y}}^{{\underline{M}}})}\left\{ \left. \sum _{m=1}^{{\underline{M}}} \left\langle {\underline{y}}^{m},{\underline{a}}^{m} \right\rangle \right| \,{{{\underline{y}}^{m}\in [-{\underline{w}}_m,{\underline{w}}_m]^n}},\;y=\sum _{m=1}^{{\underline{M}}}{\underline{y}}^m\right\} , \end{aligned}$$
(1)
$$\begin{aligned} g^*(y)&=\min _{({\overline{y}}^1,\ldots ,{\overline{y}}^{{\overline{M}}})}\left\{ \left. \sum _{m=1}^{{\overline{M}}}\left\langle {\overline{y}}^{m},{\overline{a}}^{m} \right\rangle \right| \,{{{\overline{y}}^{m}\in [-{\overline{w}}_m,{\overline{w}}_m]^n}},\;y=\sum _{m=1}^{{\overline{M}}}{\overline{y}}^m\right\} , \end{aligned}$$
(2)
of g and h, respectively, are obtained with help of basic calculus rules for conjugate functions, see e.g. Rockafellar (1997). For the dual pair of optimization problems (P) and (D) it holds (e.g. Singer 2006):
$$\begin{aligned} \min _{x\in {\mathbb {R}}^n}\left\{ g(x)-h(x)\right\} =\min _{y\in {\mathbb {R}}^n}\left\{ h^*(y)-g^*(y)\right\} . \end{aligned}$$
A direct consequence of Theorem 4.2 in Wagner et al. (2016) is the following:

Corollary 1

(Finiteness Criterion) A finite solution of problem (P) exists and is attained if and only if \(\sum _{m=1}^{{\underline{M}}}{\underline{w}}_m\le \sum _{m=1}^{{\overline{M}}}{\overline{w}}_m\).

According to Wagner et al. (2016) the following relations hold: For all \(y\in {{\,\mathrm{dom}\,}}g^*\) there exists a tuple \(\left( {\overline{y}}^1,\ldots ,{\overline{y}}^{{\overline{M}}}\right) \in {\overline{w}}_1{\overline{B}}_1^*\times \ldots \times {\overline{w}}_{{\overline{M}}}{\overline{B}}_{{\overline{M}}}^*\), such that \(\displaystyle y={\overline{y}}^1+\ldots +{\overline{y}}^{{\overline{M}}}\) and \(\displaystyle \bigcap _{m=1,\ldots ,{\overline{M}}}\left[ {\overline{a}}^m+N_{{\overline{w}}_m{\overline{B}}_m^*}({\overline{y}}^m)\right] \ne \emptyset \). Whenever this intersection is non-empty, it coincides with the subdifferential \(\partial g^*(y)\) and the corresponding tuple \(\left( {\overline{y}}^1,\ldots ,{\overline{y}}^{{\overline{M}}}\right) \) directly provides the objective value \(g^*(y)\), i.e.
$$\begin{aligned} \partial g^*\left( y\right) =\bigcap _{m=1}^{{\overline{M}}}\left[ {\overline{a}}^m+N_{{\overline{w}}_m{\overline{B}}_m^*}({\overline{y}}^m)\right]&\text {and}&g^* \left( y\right) =\sum _{m=1}^{{\overline{M}}}\left\langle {\overline{a}}^m,{\overline{y}}^m \right\rangle . \end{aligned}$$
(3)
Moreover,
$$\begin{aligned} \partial g(x)&=\sum _{m=1}^{{\overline{M}}}\mathop {{\mathrm{argmax}}}\limits _{y\in {\overline{w}}_m{\overline{B}}_m^*}\left\langle x-{\overline{a}}^m,y \right\rangle ,&x\in {\mathbb {R}}^n. \end{aligned}$$
(4)
Note that \(B^*\) denotes the dual unit ball, which in case of Manhattan distances is \(B^*=[-1,1]^n\). The extreme points of the subdifferentials in (3) and (4) define the primal and dual grid points w.r.t. attraction. Analogously, the subdifferentials of h and \(h^*\) define the primal and dual grids w.r.t. repulsion. These grids provide finite sets of candidates for optimal solutions, as the following result states.

Theorem 1

(Discretization Result, (Wagner et al. 2016, Theorem 4.11)) Let \(\mathcal {{\overline{I}}}\) denote the set of primal grid points w.r.t. attraction, \(\mathcal {{\underline{I}}}_D\) the set of dual grid points w.r.t. repulsion, and \({\mathcal {X}}\) and \({\mathcal {Y}}\) the sets of minimizers of (P) and (D), respectively. Then, \(\mathcal {{\overline{I}}}\cap {\mathcal {X}}\ne \emptyset \) and \(\mathcal {{\underline{I}}}_D\cap {\mathcal {Y}}\ne \emptyset \).

Due to the special structure of the Manhattan norm we can simplify the algorithms in Löhne and Wagner (2017) and Wagner et al. (2016), which mainly consist of determining all grid points and verifying their optimality. Solving the original non-convex problem (P) is reduced to a sorting process followed by a very efficient procedure to evaluate the objective at all candidate coordinates. No special solvers as used in Löhne and Wagner (2017) and Wagner et al. (2016) are necessary.

The following proposition can be applied to determine primal optimal solutions when dual optimal points are known, and vice versa.

Proposition 1

(Wagner et al. 2016, Remark 3.4) Let \({\mathcal {X}}\) be the set of minimizers of \(g-h\) and \({\mathcal {Y}}\) be the set of minimizers of \(h^*-g^*\). Then
$$\begin{aligned} {\mathcal {X}}&=\bigcup _{y\in {\mathcal {Y}}}\partial g^*(y),&{\mathcal {Y}}&=\bigcup _{x\in {\mathcal {X}}}\partial h(x). \end{aligned}$$

Proposition 2

(Necessary Optimality Conditions, (Horst and Thoai 1999; Tuy 1998)) Let \(g,h: \;{\mathbb {R}}^n\rightarrow {\mathbb {R}}\cup \{+\infty \}\) be proper, convex and closed functions. If \({\hat{x}}\in {{\,\mathrm{dom}\,}}g\cap {{\,\mathrm{dom}\,}}h \) is a global minimizer of \(g-h\) on \({\mathbb {R}}^n\), then \(\partial h({\hat{x}})\subseteq \partial g({\hat{x}})\). Vice versa, if \({\hat{y}}\in {{\,\mathrm{dom}\,}}g^*\cap {{\,\mathrm{dom}\,}}h^* \) is a global minimizer of \(h^*-g^*\) on \({\mathbb {R}}^n\), then \(\partial g^*({\hat{y}})\subseteq \partial h^*({\hat{y}})\).

3 Determination of grid points

In this section we present methods for determining the sets of primal and dual grid points. These grid points are determined in the primal and the dual algorithm for solving the optimization problems (P) and (D).

First of all we reorder and consolidate the coordinates of the existing facilities. For \(i=1,\ldots ,n\) we denote by \({\overline{M}}_i\) the number of different values of the i-th coordinates \({\overline{a}}_i^1,\ldots ,{\overline{a}}_i^{{\overline{M}}}\) of all attracting facilities, sort them in ascending order and consolidate the weights of equal coordinates, such that
$$\begin{aligned} {\overline{\alpha }}_i^1:&= \min \left\{ {\overline{a}}_i^1,\ldots ,{\overline{a}}_i^{{\overline{M}}}\right\} , \end{aligned}$$
(5)
$$\begin{aligned} {\overline{\alpha }}_i^m:&=\min \left\{ {\overline{a}}_i^k\in \left\{ \left. {\overline{a}}_i^1,\ldots ,{\overline{a}}_i^{{\overline{M}}}\right\} \right| \,{\overline{a}}_i^k>{\overline{\alpha }}_i^{m-1}\right\} ,&m=2,\ldots ,{\overline{M}}_i, \end{aligned}$$
(6)
$$\begin{aligned} {\overline{v}}_i^m:&=\sum _{\left. \left\{ k\in \left\{ 1,\ldots ,{\overline{M}}\right\} \right| \,{\overline{a}}_i^k={\overline{\alpha }}_i^m\right\} }{\overline{w}}_k,&m=1,\ldots ,{\overline{M}}_i. \end{aligned}$$
(7)
Analogously, we define
$$\begin{aligned} {\underline{\alpha }}_i^1:&= \min \left\{ {\underline{a}}_i^1,\ldots ,{\underline{a}}_i^{{\underline{M}}}\right\} , \end{aligned}$$
(8)
$$\begin{aligned} {\underline{\alpha }}_i^m:&=\min \left\{ {\underline{a}}_i^k\in \left\{ \left. {\underline{a}}_i^1,\ldots ,{\underline{a}}_i^{{\underline{M}}}\right\} \right| \,{\underline{a}}_i^k>{\underline{\alpha }}_i^{m-1}\right\} ,&m=2,\ldots ,{\underline{M}}_i, \end{aligned}$$
(9)
$$\begin{aligned} {\underline{v}}_i^m:&=\sum _{\left\{ k\in \left\{ 1,\ldots ,{\underline{M}}\right\} \left| \,{\underline{a}}_i^k={\underline{\alpha }}_i^m\right. \right\} }{\underline{w}}_k,&m=1,\ldots ,{\underline{M}}_i. \end{aligned}$$
(10)
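The sorting and consolidation in (5)–(10) can be sketched as follows (a hedged illustration; the function name and data layout are ours):

```python
def consolidate(points, weights, i):
    """Sort the i-th coordinates of the given facilities in ascending order
    and merge the weights of coincident coordinates, cf. (5)-(7)."""
    acc = {}
    for p, w in zip(points, weights):
        acc[p[i]] = acc.get(p[i], 0.0) + w
    alphas = sorted(acc)            # alpha_i^1 < ... < alpha_i^{M_i}
    vs = [acc[a] for a in alphas]   # consolidated weights v_i^1, ..., v_i^{M_i}
    return alphas, vs
```

The same routine applied to the repulsive facilities yields the quantities in (8)–(10).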

3.1 Determining primal grid points

Primal grid points w.r.t. attraction are given by the subdifferentials of \(g^*\), which have a rectangular axes-parallel shape in case of Manhattan distances. To see that, we consider the n components of the sets separately: Let \(({\overline{y}}^1,\ldots ,{\overline{y}}^{{\overline{M}}})\) be such that \(\bigcap _{m=1}^{{\overline{M}}}\left[ {\overline{a}}^m+N_{{{[-{\overline{w}}_m,{\overline{w}}_m]^n}}}({\overline{y}}^m)\right] \ne \emptyset \). Then, by (3),
$$\begin{aligned} \partial g^*\left( y\right)&=\bigcap _{m=1}^{{\overline{M}}}\left[ {\overline{a}}^m+N_{{{[-{\overline{w}}_m,{\overline{w}}_m]^n}}}({\overline{y}}^m)\right] \\&=\bigcap _{m=1}^{{\overline{M}}}\left[ {\overline{a}}^m_1+N_{{{[-{\overline{w}}_m,{\overline{w}}_m]}}}({\overline{y}}^m_1)\right] \times \ldots \times \bigcap _{m=1}^{{\overline{M}}}\left[ {\overline{a}}^m_n+N_{{{[-{\overline{w}}_m,{\overline{w}}_m]}}}({\overline{y}}^m_n)\right] \\&=\bigcap _{m=1}^{{\overline{M}}_1}\left[ {\overline{\alpha }}^m_1+N_{{{[-{\overline{v}}^m_1,{\overline{v}}^m_1]}}}({\overline{y}}^m_1)\right] \times \ldots \times \bigcap _{m=1}^{{\overline{M}}_n}\left[ {\overline{\alpha }}^m_n+N_{{{[-{\overline{v}}^m_n,{\overline{v}}^m_n]}}}({\overline{y}}^m_n)\right] . \end{aligned}$$
Since for \(i=1,\ldots ,n\) and \(m=1,\ldots ,{\overline{M}}_i\) it holds
$$\begin{aligned} {\overline{\alpha }}_i^m+N_{[-{\overline{v}}_i^m,{\overline{v}}_i^m]}({\overline{y}}_i^{m})= {\left\{ \begin{array}{ll} \begin{array}{ll} (-\infty ,{\overline{\alpha }}_i^m],&{}{\overline{y}}_i^m=-{\overline{v}}_i^m,\\ \{{\overline{\alpha }}_i^m\},&{}{\overline{y}}_i^m \in (-{\overline{v}}_i^m,{\overline{v}}_i^m),\\ {[}{\overline{\alpha }}_i^m,+\infty ),&{}{\overline{y}}_i^m= {\overline{v}}_i^m,\\ \end{array} \end{array}\right. } \end{aligned}$$
(11)
we directly obtain the extreme points of the subdifferentials and hence the sets \(\mathcal {{\overline{I}}}\) and \(\mathcal {{\underline{I}}}\) of primal grid points w.r.t. attraction and w.r.t. repulsion, respectively, as
$$\begin{aligned} \mathcal {{\overline{I}}}&:=\left\{ {\overline{\alpha }}^1_1,\ldots ,{\overline{\alpha }}^{{\overline{M}}_1}_1\right\} \times \cdots \times \left\{ {\overline{\alpha }}^1_n,\ldots ,{\overline{\alpha }}^{{\overline{M}}_n}_n\right\} , \end{aligned}$$
(12)
$$\begin{aligned} \mathcal {{\underline{I}}}&:=\left\{ {\underline{\alpha }}^1_1,\ldots ,{\underline{\alpha }}^{{\underline{M}}_1}_1\right\} \times \cdots \times \left\{ {\underline{\alpha }}^1_n,\ldots ,{\underline{\alpha }}^{{\underline{M}}_n}_n\right\} . \end{aligned}$$
(13)
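Given the consolidated coordinates per axis, the grids (12) and (13) are plain Cartesian products, e.g.:

```python
from itertools import product

def grid(alphas_per_axis):
    """Cartesian product of the consolidated coordinates of each axis,
    cf. (12) and (13)."""
    return list(product(*alphas_per_axis))

# grid([[1, 3], [0, 2]]) yields (1, 0), (1, 2), (3, 0), (3, 2)
```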

3.2 Determination of dual grid points

Dual grid points w.r.t. attraction are given by the subdifferentials of g, which also have a rectangular axes-parallel shape in case of Manhattan distances. To see that, we again consider the n components of the sets separately. By (4) we have
$$\begin{aligned} \partial g(x)&=\sum _{m=1}^{{\overline{M}}}\mathop {{\mathrm{argmax}}}\limits _{y\in {\overline{w}}_m{\overline{B}}_m^*}\left\langle x-{\overline{a}}^m,y \right\rangle =\left[ \sum _{m=1}^{{\overline{M}}_1}\mathop {{\mathrm{argmax}}}\limits _{y_1\in [-{\overline{v}}_1^m,{\overline{v}}_1^m]}(x_1-{\overline{\alpha }}_1^m)y_1\right] \times \ldots \times \left[ \sum _{m=1}^{{\overline{M}}_n}\mathop {{\mathrm{argmax}}}\limits _{y_n\in [-{\overline{v}}_n^m,{\overline{v}}_n^m]}(x_n-{\overline{\alpha }}_n^m)y_n\right] , \end{aligned}$$
where for \(i=1,2,\ldots ,n\) and \(m=1,\ldots ,{\overline{M}}_i\) it holds
$$\begin{aligned} \mathop {{\mathrm{argmax}}}\limits _{y_i\in [-{\overline{v}}_i^m,{\overline{v}}_i^m]}(x_i-{\overline{\alpha }}_i^m)y_i= {\left\{ \begin{array}{ll} \{-{\overline{v}}_i^m\},&{} x_i<{\overline{\alpha }}_i^m,\\ {[}-{\overline{v}}_i^m,{\overline{v}}_i^m],&{} x_i={\overline{\alpha }}_i^m,\\ \{{\overline{v}}_i^m\},&{} x_i>{\overline{\alpha }}_i^m. \end{array}\right. } \end{aligned}$$
Hence, we directly obtain the extreme points of the subdifferentials of g and thus the sets \(\mathcal {{\overline{I}}}_D\) and \(\mathcal {{\underline{I}}}_D\) of dual grid points w.r.t. attraction and w.r.t. repulsion, respectively, as
$$\begin{aligned} \mathcal {{\overline{I}}}_D&=\left\{ {\overline{y}}_1^0,\ldots ,{\overline{y}}_1^{{{\overline{M}}}_1}\right\} \times \cdots \times \left\{ {\overline{y}}_n^0,\ldots ,{\overline{y}}_n^{{{\overline{M}}}_n}\right\} ,\\ \mathcal {{\underline{I}}}_D&=\left\{ {\underline{y}}_1^0,\ldots ,{\underline{y}}_1^{{{\underline{M}}}_1}\right\} \times \cdots \times \left\{ {\underline{y}}_n^0,\ldots ,{\underline{y}}_n^{{{\underline{M}}}_n}\right\} , \end{aligned}$$
where for \(i=1,\ldots ,n\) the coordinates \({\overline{y}}_i^k\) can be determined recursively by
$$\begin{aligned} {\overline{y}}_i^k:&= {\left\{ \begin{array}{ll} \displaystyle -\sum _{m=1}^{{{\overline{M}}}_i}{{\overline{v}}}_i^m,&{} k=0,\\ {\overline{y}}_i^{k-1}+2{\overline{v}}_i^k, &{} k=1,2,\ldots ,{{\overline{M}}}_i, \end{array}\right. } \end{aligned}$$
(14)
or explicitly by
$$\begin{aligned} {\overline{y}}_i^k:&= \sum _{m=1}^{k}{{\overline{v}}}_i^m-\sum _{m=k+1}^{{{\overline{M}}}_i}{{\overline{v}}}_i^m,&k=0,1,\ldots ,{{\overline{M}}}_i. \end{aligned}$$
(15)
Analogously, for \(i=1,\ldots ,n\), the coordinates \({\underline{y}}_i^k\) can be determined by
$$\begin{aligned} {\underline{y}}_i^k:&= {\left\{ \begin{array}{ll} \displaystyle -\sum _{m=1}^{{{\underline{M}}}_i}{{\underline{v}}}_i^m,&{} k=0,\\ {\underline{y}}_i^{k-1}+2{\underline{v}}_i^k, &{} k=1,2,\ldots ,{{\underline{M}}}_i, \end{array}\right. } \end{aligned}$$
(16)
or
$$\begin{aligned} {\underline{y}}_i^k:&= \sum _{m=1}^{k}{{\underline{v}}}_i^m-\sum _{m=k+1}^{{{\underline{M}}}_i}{{\underline{v}}}_i^m,&k=0,1,\ldots ,{{\underline{M}}}_i. \end{aligned}$$
(17)
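The recursion (14) (and analogously (16)) can be sketched as:

```python
def dual_coords(vs):
    """Dual grid coordinates y_i^0, ..., y_i^{M_i} from the consolidated
    weights v_i^1, ..., v_i^{M_i} via the recursion (14):
    y^0 = -sum(v), then y^k = y^{k-1} + 2 * v^k."""
    y = [-sum(vs)]
    for v in vs:
        y.append(y[-1] + 2 * v)
    return y

# dual_coords([6.0, 1.0]) = [-7.0, 5.0, 7.0], matching the explicit form (15)
```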
Obviously, in case of Manhattan distances the primal problem (P) and the dual problem (D) provide axes-parallel grid structures.

4 Determining objective values

For a more efficient implementation we derive recursive representations for determining the objective values at the grid coordinates. In the primal algorithm this substitutes the obvious explicit evaluation, and in the dual case it even replaces solving a linear program for each grid candidate.

4.1 Determination of primal objective values

To check all primal grid points w.r.t. attraction for optimality we need to determine the differences \(g(x)-h(x)\) for all \(x\in \mathcal {{\overline{I}}}\). Instead of the functions g and h we may consider subfunctions \(g_1,\ldots , g_n, h_1,\ldots ,h_n:\;{\mathbb {R}}\rightarrow {\mathbb {R}}_+\) such that
$$\begin{aligned} g(x)&=\sum _{i=1}^{n}g_i(x_i),&g_i(x_i):=\sum _{m=1}^{{\overline{M}}_i}{\overline{v}}_i^m\left| x_i-{\overline{\alpha }}_i^{m}\right| , \end{aligned}$$
(18)
$$\begin{aligned} h(x)&=\sum _{i=1}^n h_i(x_i),&h_i(x_i):=\sum _{m=1}^{{\underline{M}}_i}{\underline{v}}_i^m\left| x_i-{\underline{\alpha }}_i^{m}\right| . \end{aligned}$$
(19)
Since the increase of \(g_i\) from a grid coordinate \({\overline{\alpha }}_i^{k-1}\) to \({\overline{\alpha }}_i^{k}\) can be determined by
$$\begin{aligned} g_i({\overline{\alpha }}_i^k)-g_i({\overline{\alpha }}_i^{k-1})=\sum _{m=1}^{k-1}{\overline{v}}_i^m({\overline{\alpha }}_i^k-{\overline{\alpha }}_i^{k-1})-\sum _{m=k}^{{\overline{M}}_i}{\overline{v}}_i^m({\overline{\alpha }}_i^k-{\overline{\alpha }}_i^{k-1}), \end{aligned}$$
we obtain by (14)
$$\begin{aligned} g_i({\overline{\alpha }}_i^k)={\left\{ \begin{array}{ll} \displaystyle \sum _{m=1}^{{\overline{M}}_i}{\overline{v}}_i^m({\overline{\alpha }}_i^m-{\overline{\alpha }}_i^1),&{} k=1,\\ g_i({\overline{\alpha }}_i^{k-1})+{\overline{y}}_i^{k-1}({\overline{\alpha }}_i^k-{\overline{\alpha }}_i^{k-1}),&{} k=2,\ldots ,{\overline{M}}_i, \end{array}\right. } \end{aligned}$$
(20)
and analogously by (16)
$$\begin{aligned} h_i({\underline{\alpha }}_i^k)&={\left\{ \begin{array}{ll} \displaystyle \sum _{m=1}^{{\underline{M}}_i}{\underline{v}}_i^m({\underline{\alpha }}_i^m-{\underline{\alpha }}_i^1),&{}k=1,\\ h_i({\underline{\alpha }}_i^{k-1})+{\underline{y}}_i^{k-1}({\underline{\alpha }}_i^k-{\underline{\alpha }}_i^{k-1}),&{}k=2,\ldots ,{\underline{M}}_i. \end{array}\right. } \end{aligned}$$
(21)
Since a grid coordinate \({\overline{\alpha }}_i^m\) w.r.t. attraction does not necessarily need to be a grid coordinate w.r.t. repulsion, we use the piecewise linearity of the subfunctions \(h_i\) to determine the values \(h_i({\overline{\alpha }}_i^m)\). We obtain for \(k=1,\ldots ,{\overline{M}}_i\)
$$\begin{aligned} h_i({\overline{\alpha }}_i^k)={\left\{ \begin{array}{ll} h_i({\underline{\alpha }}_i^{j})+{\underline{y}}_i^{j}({\overline{\alpha }}_i^k-{\underline{\alpha }}_i^{j}),&{} {\overline{\alpha }}_i^k\in [{\underline{\alpha }}_i^j,{\underline{\alpha }}_i^{j+1}),\;j\in \left\{ 1,\ldots ,{\underline{M}}_i-1\right\} ,\\ h_i({\underline{\alpha }}_i^{1})-{\underline{y}}_i^{0}({\underline{\alpha }}_i^1-{\overline{\alpha }}_i^{k}),&{}{\overline{\alpha }}_i^k<{\underline{\alpha }}_i^1,\\ h_i({\underline{\alpha }}_i^{{\underline{M}}_i})+{\underline{y}}_i^{{\underline{M}}_i}({\overline{\alpha }}_i^k-{\underline{\alpha }}_i^{{\underline{M}}_i}),&{}{\overline{\alpha }}_i^k\ge {\underline{\alpha }}_i^{{\underline{M}}_i}. \end{array}\right. } \end{aligned}$$
(22)
While determining all objective values by applying (18) and (19) has quadratic computational costs, the recursive variant in (20)–(22), involving (14) and (16), has only linear costs. The results of this subsection are applied in Algorithm 1.
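The recursions (20)–(22) can be sketched as follows (function names are ours: `values_and_slopes` returns the objective values of one subfunction together with its left derivatives, and `pl_eval` interpolates a subfunction at an arbitrary coordinate as in (22)):

```python
def values_and_slopes(alphas, vs):
    """Values g_i(alpha_i^1), ..., g_i(alpha_i^{M_i}) via the recursion (20),
    together with the slopes y_i^0, ..., y_i^{M_i} from (14)."""
    y = [-sum(vs)]
    for v in vs:
        y.append(y[-1] + 2 * v)
    g = [sum(v * (a - alphas[0]) for v, a in zip(vs, alphas))]  # case k = 1
    for k in range(1, len(alphas)):
        g.append(g[-1] + y[k] * (alphas[k] - alphas[k - 1]))
    return g, y

def pl_eval(alphas, vals, y, t):
    """Evaluate the piecewise linear subfunction at an arbitrary point t,
    cf. (22); vals and y come from values_and_slopes."""
    if t < alphas[0]:
        return vals[0] - y[0] * (alphas[0] - t)
    for j in range(len(alphas) - 1):
        if alphas[j] <= t < alphas[j + 1]:
            return vals[j] + y[j + 1] * (t - alphas[j])
    return vals[-1] + y[-1] * (t - alphas[-1])
```

For instance, for one coordinate with \(\alpha =(1,3)\) and \(v=(6,1)\) the recursion yields the values \((2,12)\), and interpolation at \(t=2\) gives \(6\left| 2-1\right| +1\left| 2-3\right| =7\).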

4.2 Determination of dual objective values

To check all dual grid points w.r.t. repulsion for optimality we need to determine the differences \(h^*(y)-g^*(y)\), for all \(y\in \mathcal {{\underline{I}}}_D\). In order to avoid solving linear programs as given in (1) and (2), we derive a method for calculating \(g^*(y)\) and \(h^*(y)\) with help of the special structure given by the norm.

Instead of the function \(g^*\) we may consider subfunctions \(g_1^*,\ldots , g_n^*\) which all together add up to \(g^*\), such that
$$\begin{aligned} g^*(y)&=\min \left\{ \sum _{m=1}^{{\overline{M}}}\left\langle {\overline{a}}^m,{\overline{y}}^m \right\rangle \left| \,{\overline{y}}^m\in [-{\overline{w}}_m,{\overline{w}}_m]^n,\,\sum _{m=1}^{{\overline{M}}}{\overline{y}}^m=y \right. \right\} =\sum _{i=1}^{n}g_i^*(y_i), \end{aligned}$$
where for \(i=1,\ldots ,n\) the functions \(g_i^*:\;{\mathbb {R}}\rightarrow {\mathbb {R}}\) are defined as
$$\begin{aligned} g_i^*(y_i)&:=\min \left\{ \sum _{m=1}^{{\overline{M}}_i}{\overline{\alpha }}^m_i{\overline{y}}^m_i \left| \,{\overline{y}}^m_i\in [-{\overline{v}}_m,{\overline{v}}_m],\,\sum _{m=1}^{{\overline{M}}_i}{\overline{y}}^m_i=y_i \right. \right\} . \end{aligned}$$
(23)
By (3) it follows that
$$\begin{aligned} g_i^*\left( {\overline{y}}_i=\sum _{m=1}^{{\overline{M}}_i}{\overline{y}}_i^{m}\right) =\sum _{m=1}^{{\overline{M}}_i}{{\overline{\alpha }}_i^m{\overline{y}}_i^{m}}\Leftrightarrow & {} \bigcap _{m=1}^{{\overline{M}}_i}\left[ {\overline{\alpha }}_i^m+N_{[-{\overline{v}}_i^m,{\overline{v}}_i^m]}({\overline{y}}_i^{m})\right] \ne \emptyset . \end{aligned}$$
Thus, by (11), for the dual grid components \({\overline{y}}_i^k\) w.r.t. attraction as defined in (14) and (15), we obtain for \(i=1,\ldots ,n\) the values
$$\begin{aligned} g_i^*({\overline{y}}_i^k)={\left\{ \begin{array}{ll} \displaystyle -\sum _{m=1}^{{\overline{M}}_i}{\overline{\alpha }}_i^m{\overline{v}}_i^m,&{} k=0,\\ g_i^*({\overline{y}}_i^{k-1})+2{{\overline{\alpha }}}_i^k{{\overline{v}}}_i^k,&{} k=1,2,\ldots ,{{\overline{M}}}_i, \end{array}\right. } \end{aligned}$$
(24)
or explicitly
$$\begin{aligned} g_i^*({\overline{y}}_i^k)&=\displaystyle \sum _{m=1}^{k}{\overline{\alpha }}_i^m{\overline{v}}_i^m-\sum _{m=k+1}^{{{\overline{M}}}_i}{{\overline{\alpha }}}_i^m{{\overline{v}}}_i^m,&k=0,1,\ldots ,{{\overline{M}}}_i. \end{aligned}$$
Analogously, for the dual grid components \({\underline{y}}_i^k\) w.r.t. repulsion as defined in (16) and (17), we obtain for \(i=1,\ldots ,n\) the values
$$\begin{aligned} h_i^*({\underline{y}}_i^k)={\left\{ \begin{array}{ll} \displaystyle -\sum _{m=1}^{{\underline{M}}_i}{\underline{\alpha }}_i^m{\underline{v}}_i^m,&{} k=0,\\ h_i^*({\underline{y}}_i^{k-1})+2{{\underline{\alpha }}}_i^k{{\underline{v}}}_i^k,&{} k=1,2,\ldots ,{{\underline{M}}}_i, \end{array}\right. } \end{aligned}$$
(25)
or explicitly
$$\begin{aligned} h_i^*({\underline{y}}_i^k)&=\displaystyle \sum _{m=1}^{k}{\underline{\alpha }}_i^m{\underline{v}}_i^m-\sum _{m=k+1}^{{{\underline{M}}}_i}{{\underline{\alpha }}}_i^m{{\underline{v}}}_i^m,&k=0,1,\ldots ,{{\underline{M}}}_i. \end{aligned}$$
Obviously, the functions \(g_i^*\) and \(h_i^*\) are piecewise linear.
Since a dual grid coordinate \({\underline{y}}_i^k\) w.r.t. repulsion does not necessarily need to be a grid point w.r.t. attraction, we use the piecewise linearity of the subfunctions \(g_i^*\) to determine the value \(g_i^*({\underline{y}}_i^k)\). Assume that a finite solution does exist, i.e. \(\sum _{m=1}^{{\underline{M}}}{\underline{w}}_m\le \sum _{m=1}^{{\overline{M}}}{\overline{w}}_m\). Then \({\overline{y}}_i^0\,\le \,{\underline{y}}_i^k\,\le \,{\overline{y}}_i^{{{\overline{M}}}_i}\) holds for \(i=1,\ldots ,n\). In particular, there exists \(j\in \left\{ 1,\ldots ,{{\overline{M}}}_i\right\} \) such that \({\overline{y}}_i^{j-1}\,\le \,{\underline{y}}_i^k\,\le \,{\overline{y}}_i^j.\) Since \(g_i^*\) is linear in \([{\overline{y}}_i^{j-1},{\overline{y}}_i^j]\) we have
$$\begin{aligned} g_i^*({\underline{y}}_i^k)&= {\left\{ \begin{array}{ll} g_i^*\big ({\overline{y}}_i^{j}\big ),&{} {\underline{y}}_i^k={\overline{y}}_i^{j},\\ g_i^*\big ({\overline{y}}_i^{j-1}\big )+{\overline{\alpha }}_i^{j}\bigg ({\underline{y}}_i^k-{\overline{y}}_i^{j-1}\bigg ),&{} {\underline{y}}_i^k \in [{\overline{y}}_i^{j-1},{\overline{y}}_i^{j}). \end{array}\right. } \end{aligned}$$
(26)
While determining all objective values by applying (23) and the analogous program for \(h_i^*(y_i)\) involves \(2\cdot {\underline{M}}_i\) linear programs for \(i=1,\ldots ,n\), the recursive variant using (24), (25) and (26) has linear costs only.
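The recursions (24)–(26) admit the same kind of sketch (again hedged; the names are ours):

```python
def conj_values(alphas, vs):
    """g_i^*(y_i^0), ..., g_i^*(y_i^{M_i}) via the recursion (24)."""
    g = [-sum(a * v for a, v in zip(alphas, vs))]
    for a, v in zip(alphas, vs):
        g.append(g[-1] + 2 * a * v)
    return g

def conj_at(ybars, gstar, alphas, t):
    """Evaluate g_i^* at a dual repulsion coordinate t via (26); assumes
    ybars[0] <= t <= ybars[-1], i.e. the finiteness criterion holds."""
    for j in range(1, len(ybars)):
        if t < ybars[j]:
            return gstar[j - 1] + alphas[j - 1] * (t - ybars[j - 1])
    return gstar[-1]
```

For \(\alpha =(1,3)\), \(v=(6,1)\) one gets the values \((-9,3,9)\) at the coordinates \((-7,5,7)\); interpolating at \(t=0\) gives \(-9+1\cdot 7=-2\), which is the optimal value of the linear program (23) for \(y_i=0\).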
Moreover, by (11), we obtain the assignment
$$\begin{aligned} \partial g^*_i(y_i)= \bigcap _{m=1}^{{\overline{M}}_i}[{\overline{\alpha }}_i^m+N_{[-{\overline{v}}_i^m,{\overline{v}}_i^m]}({\overline{y}}_i^{m})]= {\left\{ \begin{array}{ll} (-\infty ,{\overline{\alpha }}^{1}_i],&{} y_i={\overline{y}}_i^0,\\ {[}{\overline{\alpha }}^k_i,{{\overline{\alpha }}}^{k+1}_i],&{} y_i ={\overline{y}}_i^k,\\ {[}{{\overline{\alpha }}}^{{\overline{M}}_i}_i,+\infty ),&{}y_i ={\overline{y}}_i^{{\overline{M}}_i},\\ \left\{ {\overline{\alpha }}^{k+1}_i\right\} ,&{} y_i \in ({\overline{y}}_i^k,{\overline{y}}_i^{k+1}), \end{array}\right. } \end{aligned}$$
(27)
which is applied in Algorithm 2 to easily deduce primal optimal solutions from dual ones, see Proposition 1. We obtain an analogous assignment between primal and dual elements w.r.t. repulsion.

The results of this subsection are applied in Algorithm 2.
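The case distinction (27) translates into a small lookup (a sketch; intervals are returned as endpoint pairs, with infinite endpoints for the unbounded cases and a repeated endpoint for a singleton):

```python
import math

def subdiff_conj(alphas, ybars, y):
    """Subdifferential of g_i^* at y via the case distinction (27),
    returned as an interval (lo, hi); lo == hi encodes a singleton."""
    if y == ybars[0]:
        return (-math.inf, alphas[0])
    if y == ybars[-1]:
        return (alphas[-1], math.inf)
    for k in range(1, len(ybars)):
        if y == ybars[k]:
            return (alphas[k - 1], alphas[k])
        if y < ybars[k]:
            return (alphas[k - 1], alphas[k - 1])  # singleton {alpha_i^k}
```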

4.3 Alternative variant

In Nickel and Dudenhöffer (1997) the authors provide an algorithm with a solving strategy similar to that of Algorithm 1: Based on the piecewise linearity of the objective function (facilities are not distinguished into attracting and repulsive ones), their algorithm checks each grid point for local optimality and evaluates all locally minimal points to find a global minimum of the objective. Since not all grid points are evaluated, the objective values are determined explicitly. The following provides a combination of Algorithm 1 and the algorithm in Nickel and Dudenhöffer (1997).

Let us reformulate Problem (P) as follows:
$$\begin{aligned} \min \limits _{x\in \mathbb {R}^{n}}\left\{ f(x):=\sum _{m=1}^{M}w_m\,d_1(x,a^m)\right\} , \end{aligned}$$
(P′)
where \(M:={\overline{M}}+{\underline{M}}\) and
$$\begin{aligned} w_m:={\left\{ \begin{array}{ll} {\overline{w}}_m,&{} m=1,\ldots ,{\overline{M}},\\ -{\underline{w}}_m,&{} m={\overline{M}}+1,\ldots ,{\overline{M}}+{\underline{M}}, \end{array}\right. }&a^m:={\left\{ \begin{array}{ll} {\overline{a}}^m, &{}m=1,\ldots ,{\overline{M}},\\ {\underline{a}}^m, &{}m={\overline{M}}+1,\ldots ,{\overline{M}}+{\underline{M}}. \end{array}\right. } \end{aligned}$$
(28)
As in (5)–(10) we reorder and consolidate the coordinates of the existing facilities. For \(i=1,\ldots ,n\) we denote by \(M_i\) the number of different values of the i-th coordinates of all facilities \(a_i^1,\ldots ,a_i^{M}\), sort them in ascending order and consolidate the weights of equal coordinates, such that
$$\begin{aligned} \alpha _i^1:&= \min \left\{ a_i^1,\ldots ,a_i^{M}\right\} , \end{aligned}$$
(29)
$$\begin{aligned} \alpha _i^m:&=\min \left\{ a_i^k\in \left\{ \left. a_i^1,\ldots ,a_i^{M}\right\} \right| \,a_i^k>\alpha _i^{m-1}\right\} ,&m=2,\ldots ,M_i, \end{aligned}$$
(30)
$$\begin{aligned} v_i^m:&=\sum _{\left. \left\{ k\in \left\{ 1,\ldots ,M\right\} \right| \,a_i^k=\alpha _i^m\right\} }w_k,&m=1,\ldots ,M_i. \end{aligned}$$
(31)
The following variables correspond to the derivatives of f from the left of all grid points, see Nickel and Dudenhöffer (1997),
$$\begin{aligned} y_i^k:&= {\left\{ \begin{array}{ll} \displaystyle -\sum _{m=1}^{{M}_i}{v}_i^m,&{} k=0,\\ y_i^{k-1}+2v_i^k, &{} k=1,2,\ldots ,{M}_i. \end{array}\right. } \end{aligned}$$
(32)
Instead of checking for local minimality, and if so, evaluating explicitly, we determine all objective values recursively as we have done in Sect. 4.1 and obtain
$$\begin{aligned} f_i(\alpha _i^k)={\left\{ \begin{array}{ll} \displaystyle \sum _{m=1}^{M_i}v_i^m(\alpha _i^m-\alpha _i^1),&{} k=1,\\ f_i(\alpha _i^{k-1})+y_i^{k-1}(\alpha _i^k-\alpha _i^{k-1}),&{} k=2,\ldots ,M_i. \end{array}\right. } \end{aligned}$$
(33)
The combined results of this subsection are applied in Algorithm 3.
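The reformulation (28) amounts to concatenating the facilities with signed weights; afterwards the consolidation and the recursion of Sect. 4.1 apply verbatim. A hedged sketch (the function name is ours):

```python
def merge_facilities(attr_pts, attr_w, rep_pts, rep_w):
    """Reformulation (28): a single list of facilities in which the
    repulsive weights enter with a negative sign."""
    pts = list(attr_pts) + list(rep_pts)
    ws = list(attr_w) + [-w for w in rep_w]
    return pts, ws
```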

5 Primal and dual algorithm

Based on the derived results, we can formulate the simplified algorithms for locating a semi-obnoxious facility in the special case of Manhattan distances.

Assume that the finiteness criterion is satisfied, i.e. \(\sum _{m=1}^{{\overline{M}}}{\overline{w}}_m\ge \sum _{m=1}^{{\underline{M}}}{\underline{w}}_m\), see Corollary 1. Then, the algorithms can be formulated as follows:

Algorithm 1

(Primal Algorithm)
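The algorithm itself is displayed as a figure in the original. Since the objective \(g-h=\sum _{i}(g_i-h_i)\) is separable across coordinates, its overall flow can be sketched as follows; this is our own minimal reconstruction under the assumption that the finiteness criterion of Corollary 1 holds, not the authors' verbatim pseudocode:

```python
def _consolidate(points, weights, i):
    # Sort the i-th coordinates and merge weights of coincident ones, cf. (5)-(10).
    acc = {}
    for p, w in zip(points, weights):
        acc[p[i]] = acc.get(p[i], 0.0) + w
    al = sorted(acc)
    return al, [acc[a] for a in al]

def _values_and_slopes(al, vs):
    # Objective values at the grid coordinates via (20)/(21), slopes via (14)/(16).
    y = [-sum(vs)]
    for v in vs:
        y.append(y[-1] + 2 * v)
    f = [sum(v * (a - al[0]) for v, a in zip(vs, al))]
    for k in range(1, len(al)):
        f.append(f[-1] + y[k] * (al[k] - al[k - 1]))
    return f, y

def _pl_eval(al, vals, y, t):
    # Piecewise linear interpolation as in (22).
    if t < al[0]:
        return vals[0] - y[0] * (al[0] - t)
    for j in range(len(al) - 1):
        if al[j] <= t < al[j + 1]:
            return vals[j] + y[j + 1] * (t - al[j])
    return vals[-1] + y[-1] * (t - al[-1])

def primal_solve(attr_pts, attr_w, rep_pts, rep_w):
    # Minimize g - h coordinate-wise over the attraction grid coordinates.
    n = len(attr_pts[0])
    x, total = [], 0.0
    for i in range(n):
        aa, va = _consolidate(attr_pts, attr_w, i)
        ar, vr = _consolidate(rep_pts, rep_w, i)
        ga, _ = _values_and_slopes(aa, va)
        hr, yr = _values_and_slopes(ar, vr)
        diffs = [ga[k] - _pl_eval(ar, hr, yr, aa[k]) for k in range(len(aa))]
        best = min(range(len(aa)), key=diffs.__getitem__)
        x.append(aa[best])
        total += diffs[best]
    return x, total
```

For two attracting facilities at (0, 0) and (2, 2) with unit weights and one repulsive facility at (1, 1) with weight 0.5, this sketch returns one of the minimizers, here (0, 0), with objective value 3.0.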

Algorithm 2

(Dual Algorithm)

Remark 1

In the primal algorithm, we need to evaluate all grid coordinates w.r.t. attraction. Depending on the input data, we do not necessarily need to determine all grid coordinates w.r.t. repulsion. In fact, due to (22), we need only the coordinates
$$\begin{aligned} {\underline{\alpha }}_i^1,{\underline{\alpha }}_i^2,\ldots ,\min _{k=1,\ldots ,{\underline{M}}_i}\left\{ {\underline{\alpha }}_i^k|\;{\underline{\alpha }}_i^k>{\overline{\alpha }}_i^{{\overline{M}}_i}\right\} . \end{aligned}$$
Thus, instead of pre-calculating all dual grid coordinates, we only determine those that are really necessary and include this into Step (ii). Analogously, in the dual algorithm, we only need to determine the dual grid coordinates w.r.t. attraction
$$\begin{aligned} {\overline{y}}_i^1,{\overline{y}}_i^2,\ldots ,\min _{k=0,\ldots ,{\overline{M}}_i}\left\{ {\overline{y}}_i^k|\;{\overline{y}}_i^k>{\underline{y}}_i^{{\underline{M}}_i}\right\} \end{aligned}$$
and include this into Step (iii).

Remark 2

Whether or not the complete interval between two adjacent optimal grid coordinates \({\overline{\alpha }}_i^k\) and \({\overline{\alpha }}_i^{k+1}\) belongs to the set of optimal points can be decided by applying Proposition 2 together with the piecewise linearity of the functions g and h. By this result, the interval is optimal whenever there exists \(q\in \left\{ 1,\ldots ,{\underline{M}}_i\right\} \) such that \([{\overline{\alpha }}_i^k,\;{\overline{\alpha }}_i^{k+1}]\subseteq [{\underline{\alpha }}_i^q,\;{\underline{\alpha }}_i^{q+1}]\). This property may also be used when implementing the algorithms to reduce the number of grid coordinates to be checked.
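The containment test of Remark 2 amounts to locating \({\overline{\alpha }}_i^k\) in the sorted dual grid and comparing the right endpoint of that cell; a sketch using binary search (our naming, assuming the dual grid is given as a sorted list):

```python
import bisect

def interval_is_optimal(lo, hi, dual_grid):
    """Return True iff the interval [lo, hi] between two adjacent optimal
    primal grid coordinates lies inside a single cell
    [dual_grid[q], dual_grid[q+1]] of the sorted dual grid."""
    q = bisect.bisect_right(dual_grid, lo) - 1  # cell with left end <= lo
    return 0 <= q < len(dual_grid) - 1 and hi <= dual_grid[q + 1]
```

For a dual grid `[0, 2, 5]`, the interval `[2.5, 4]` lies inside the cell `[2, 5]` and is accepted, while `[1, 3]` straddles the grid coordinate 2 and is rejected.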

Algorithm 3

(Variant of the Primal Algorithm)

6 Computational results

We solve several randomly generated instances of Problem (P) with up to 2,000,000 facilities, where different ratios of the numbers \({\overline{M}}\) and \({\underline{M}}\) of attracting and repulsive facilities, respectively, are considered. Tables 1, 2 and 3 state the computational results of Algorithms 1, 2 and 3, respectively. We furthermore compare our results with the algorithm presented in Nickel and Dudenhöffer (1997), see Tables 4 and 5. All tables are based on the same generated input data. The locations of all facilities are sampled from the continuous uniform distribution over the interval \((-0.5, 0.5)\). The number of digits is limited to \(\lceil \lg ({\overline{M}}+{\underline{M}})\rceil \) to make repeating entries possible but not dominant. The weights of all facilities are generated uniformly over the interval (0, 1). The generated weights \({\underline{w}}_1,\ldots ,{\underline{w}}_{{\underline{M}}}\) are scaled such that their sum equals 1.0.
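An instance generator along the lines just described might look as follows; the names and the reading of \(\lg \) as \(\log _{10}\) are our assumptions, and the paper's MATLAB code is not reproduced here.

```python
import math
import random

def generate_instance(M_attr, M_rep, n=2, seed=0):
    """Random instance: locations uniform on (-0.5, 0.5), rounded to
    ceil(lg(M_attr + M_rep)) digits; attraction weights uniform on (0, 1);
    repulsion weights uniform on (0, 1), scaled so that they sum to 1.0."""
    rng = random.Random(seed)
    digits = math.ceil(math.log10(M_attr + M_rep))
    def point():
        return tuple(round(rng.uniform(-0.5, 0.5), digits) for _ in range(n))
    attr = [point() for _ in range(M_attr)]
    rep = [point() for _ in range(M_rep)]
    w_attr = [rng.uniform(0, 1) for _ in range(M_attr)]
    w_rep = [rng.uniform(0, 1) for _ in range(M_rep)]
    total = sum(w_rep)
    w_rep = [w / total for w in w_rep]  # scale repulsion weights to sum 1.0
    return attr, w_attr, rep, w_rep
```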
Table 1
Computational results (running time in seconds) using the primal Algorithm 1, \(n=2\)

\({\overline{M}}\) \ \({\underline{M}}\) |    100 |   1000 | 10,000 | 100,000 | 1,000,000
      100 |  0.038 |  0.018 |  0.113 |   0.787 |     7.836
     1000 |  0.025 |  0.025 |  0.115 |   0.784 |     7.748
   10,000 |  0.091 |  0.096 |  0.170 |   0.862 |     7.824
  100,000 |  0.772 |  0.786 |  0.843 |   1.517 |     8.557
1,000,000 |  7.774 |  8.075 |  7.924 |   8.627 |    15.325

All algorithms were implemented in MATLAB R2017a. All examples were run on a computer with an Intel® Core™ i5-6300U CPU at 2.40 GHz.

The computational effort of Algorithms 1, 2, 3 and the algorithm in Nickel and Dudenhöffer (1997) consists of two main parts: first, sorting coordinates and consolidating weights, and second, determining objective values. The recursive structure derived for evaluating all objectives in Algorithms 1, 2 and 3 induces linear computational cost \({\mathcal {O}}(M)\). Depending on the input data, each of the three algorithms might perform slightly better than the other two; for instance, data generated as described above gives a small advantage to Algorithm 3.

In contrast, the algorithm in Nickel and Dudenhöffer (1997) determines the objective values explicitly at all local minima. Thus, its computational performance also depends on the number of local minima. The resulting running times and the corresponding numbers of local minima are shown in Tables 4 and 5.

Although the asymptotic computational complexity of all the algorithms is driven by the \({\mathcal {O}}(M\log M)\) sorting part, the computational experiments show that the second part, evaluating all candidates (either all grid coordinates in Algorithms 1, 2 and 3, or all local minima as in Nickel and Dudenhöffer (1997)), can become a crucial part of the running time.
Table 2
Computational results (running time in seconds) using the dual Algorithm 2, \(n=2\)

\({\overline{M}}\) \ \({\underline{M}}\) |    100 |   1000 | 10,000 | 100,000 | 1,000,000
      100 |  0.109 |  0.066 |  0.299 |   2.118 |    21.419
     1000 |  0.033 |  0.058 |  0.240 |   2.169 |    21.387
   10,000 |  0.140 |  0.158 |  0.350 |   2.363 |    22.535
  100,000 |  1.209 |  1.229 |  1.435 |   3.371 |    22.726
1,000,000 | 12.034 | 12.134 | 12.273 |  14.422 |    33.867

Table 3
Computational results (running time in seconds) using Algorithm 3, \(n=2\)

\({\overline{M}}\) \ \({\underline{M}}\) |    100 |   1000 | 10,000 | 100,000 | 1,000,000
      100 |  0.017 |  0.015 |  0.092 |   0.782 |     7.273
     1000 |  0.015 |  0.022 |  0.087 |   0.693 |     6.598
   10,000 |  0.064 |  0.093 |  0.116 |   0.642 |     6.486
  100,000 |  0.694 |  0.680 |  0.680 |   1.289 |     8.328
1,000,000 |  5.814 |  5.885 |  5.818 |   7.094 |    13.470

Table 4
Computational results (running time in seconds) using the algorithm in Nickel and Dudenhöffer (1997), \(n=2\)

\({\overline{M}}\) \ \({\underline{M}}\) |    100 |   1000 |  10,000 | 100,000 | 1,000,000
      100 |  0.009 |  0.066 |   0.238 |   2.850 |    25.720
     1000 |  0.042 |  0.134 |   1.186 |  13.092 |    78.199
   10,000 |  0.452 |  0.949 |   4.415 |  16.077 |   376.124
  100,000 |  5.778 | 13.185 |  28.969 | 139.800 |  1230.201
1,000,000 | 34.860 | 91.168 | 635.097 | 740.419 |  2737.920

Table 5
Pairs \([z_1, z_2]\) of numbers of local minima for the two directions \(i=1,2\) detected in the randomly generated data sets

\({\overline{M}}\) \ \({\underline{M}}\) |      100 |     1000 |     10,000 |    100,000 |  1,000,000
      100 |   [8, 8] | [14, 17] |     [4, 7] |    [4, 12] |    [2, 12]
     1000 |  [11, 8] |  [31, 9] |   [36, 30] |   [61, 19] |   [23, 24]
   10,000 | [15, 12] | [13, 40] |   [84, 62] |   [41, 51] | [104, 129]
  100,000 | [19, 16] | [37, 44] |   [97, 68] | [211, 254] | [186, 484]
1,000,000 |  [6, 14] | [15, 38] | [115, 272] | [115, 305] | [619, 280]

The algorithms in Löhne and Wagner (2017) and Wagner et al. (2016) find all grid points, whose number can be up to \({\overline{M}}^n\) or \({\underline{M}}^n\) in the primal and the dual case, respectively. In contrast, Algorithms 1 and 2 determine all occurring grid coordinates, whose number is at most \({\overline{M}}\cdot n\) or \({\underline{M}}\cdot n\), respectively. Thus the number of candidates to be checked for optimality is much smaller. Additionally, determining the objective value at each candidate requires less computational effort owing to the method used (recursive rather than explicit determination or linear programs). Furthermore, the effort to find the candidate set by solving projection problems, as described in Löhne and Wagner (2017) and Wagner et al. (2016), increases significantly with the dimension n. In contrast, the separation of the problem into n subproblems is much more efficient: the computational effort grows only linearly with n.
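A back-of-the-envelope comparison (instance size chosen by us for illustration) shows how strongly the two candidate counts diverge:

```python
# Candidate-set sizes for an illustrative instance with M facilities in
# dimension n: up to M**n grid *points* for the projection-based methods
# versus at most M*n grid *coordinates* for Algorithms 1 and 2.
M, n = 1000, 3
print(M ** n)  # 1000000000 candidate grid points
print(M * n)   # 3000 candidate grid coordinates
```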

7 Conclusion

It turns out that exploiting the special structure of Manhattan distances, instead of applying methods for mixed gauge distances as in Wagner et al. (2016) or even more general dc structures as in Löhne and Wagner (2017), leads to an improvement in computational effort: the given non-convex optimization problem (P) can be solved by a primal or a dual algorithm, both of which consist of a sorting process followed by a very efficient procedure to evaluate the objective at all candidate coordinates. No special solvers for polyhedral projection problems, vector linear programs, or linear or convex problems are needed. Such a significant simplification of the solution procedure makes it worthwhile to handle the special case of Manhattan distances separately.

Notes

Acknowledgements

Open access funding provided by Vienna University of Economics and Business (WU). The author thanks both anonymous reviewers for their careful reading and valuable comments, which inspired in particular Algorithm 3 and Sect. 6.

References

  1. Benson H (1998) An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem. J Global Optim 13:1–24
  2. Cappanera P (1999) A survey on obnoxious facility location problems. Technical report, University of Pisa, Pisa
  3. Carrizosa E, Plastria F (1999) Location of semi-obnoxious facilities. Stud Locat Anal 12:1–27
  4. Church RL, Garfinkel RS (1978) Locating an obnoxious facility on a network. Transp Sci 12(2):107–118
  5. Dasarathy B, White LJ (1980) A maxmin location problem. Oper Res 28(6):1385–1401
  6. Drezner Z, Wesolowsky GO (1991) The Weber problem on the plane with some negative weights. INFOR 29(2):87–99
  7. Eiselt HA, Laporte G (1995) Objectives in location problems. In: Drezner Z (ed) Facility location: a survey of applications and methods. Springer series in operations research. Springer, Berlin
  8. Goldman AJ, Dearing PM (1975) Concepts of optimal location for partially noxious facilities. Bull Oper Res Soc Am 23(1):B-31
  9. Horst R, Thoai NV (1999) DC programming: overview. J Optim Theory Appl 103(1):1–43
  10. Löhne A, Wagner A (2017) Solving dc programs with a polyhedral component utilizing a multiple objective linear programming solver. J Global Optim 69(2):369–385
  11. Löhne A, Weißing B. Bensolve: VLP solver, version 2.0.1. www.bensolve.org
  12. Löhne A, Weißing B (2016) Equivalence between polyhedral projection, multiple objective linear programming and vector linear programming. Math Methods Oper Res 84(2):411–426
  13. Löhne A, Weißing B (2016) The vector linear program solver Bensolve: notes on theoretical background. Eur J Oper Res 260(3):807–813
  14. Nickel S, Dudenhöffer EM (1997) Weber's problem with attraction and repulsion under polyhedral gauges. J Global Optim 11(4):409–432
  15. Plastria F (1996) Optimal location of undesirable facilities: a selective overview. Belg J Oper Res Stat Comput Sci 36(2–3):109–127
  16. Rockafellar RT (1997) Convex analysis. Princeton landmarks in mathematics. Princeton University Press, Princeton (reprint of the 1970 original)
  17. Singer I (1979) A Fenchel–Rockafellar type duality theorem for maximization. Bull Aust Math Soc 20(2):193–198
  18. Singer I (2006) Duality for nonconvex approximation and optimization. CMS books in mathematics, vol 24. Springer, New York
  19. Toland JF (1978) Duality in nonconvex optimization. J Math Anal Appl 66(2):399–415
  20. Tuy H (1998) Convex analysis and global optimization. Nonconvex optimization and its applications, vol 22. Kluwer Academic Publishers, Dordrecht
  21. Wagner A (2015) A new duality-based approach for the problem of locating a semi-obnoxious facility. Ph.D. thesis, Martin-Luther-University, Halle-Saale, Germany
  22. Wagner A, Martinez-Legaz JE, Tammer C (2016) Locating a semi-obnoxious facility: a Toland–Singer duality based approach. J Convex Anal 23(4):1073

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Institute for Statistics and Mathematics, Vienna University of Economics and Business, Vienna, Austria
