1 Introduction

Regularized risk minimization is a well-known paradigm in machine learning:

$$\begin{aligned} \min _{\mathbf {w}} P\left( \mathbf {w}\right) := \lambda \sum _{j} \phi _{j}\left( w_{j}\right) + \frac{1}{m} \sum _{i=1}^{m} \ell \left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle , y_{i}\right) . \end{aligned}$$
(1)

Here, we are given m training data points \(\mathbf {x}_{i} \in \mathbb {R}^{d}\) and their corresponding labels \(y_{i}\), while \(\mathbf {w}\in \mathbb {R}^{d}\) is the parameter of the model. Furthermore, \(w_{j}\) denotes the j-th component of \(\mathbf {w}\), while \(\phi _{j}\left( \cdot \right) \) is a convex function which penalizes complex models. \(\ell \left( \cdot , \cdot \right) \) is a loss function, which is convex in \(\mathbf {w}\). Moreover, \(\left\langle \cdot ,\cdot \right\rangle \) denotes the Euclidean inner product, and \(\lambda > 0\) is a scalar which trades off between the average loss and the regularizer. For brevity, we will use \(\ell _{i}\left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \) to denote \(\ell \left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle , y_{i}\right) \).

Many well-known models can be derived by specializing (1). For instance, if \(y_{i} \in \left\{ \pm 1\right\} \), then setting \(\phi _{j}(w_{j}) = \frac{1}{2}w_{j}^{2}\) and \(\ell _{i}\left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) = \max \left( 0, 1 - y_{i} \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \) recovers binary linear support vector machines (SVMs) [23]. On the other hand, using the same regularizer but changing the loss function to \(\ell _{i}\left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) = \log \left( 1 + \exp \left( -y_{i} \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \right) \) yields regularized logistic regression [11]. Similarly, setting \(\phi _{j}\left( w_{j}\right) =\left| w_{j}\right| \) leads to sparse learning such as LASSO [11] with \(\ell _{i}\left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) =\frac{1}{2} \left( y_i -\left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) ^{2}\).
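To make these specializations concrete, the following minimal numpy sketch (ours, not any reference implementation; names are illustrative) evaluates the regularized risk (1) for each choice of loss and regularizer:

```python
import numpy as np

def hinge(u, y):    return np.maximum(0.0, 1.0 - y * u)   # linear SVM loss
def logistic(u, y): return np.log1p(np.exp(-y * u))       # logistic loss
def squared(u, y):  return 0.5 * (y - u) ** 2             # squared loss (LASSO)

def l2(w): return 0.5 * w ** 2                            # phi_j(w_j) = w_j^2 / 2
def l1(w): return np.abs(w)                               # phi_j(w_j) = |w_j|

def P(w, X, y, lam, loss, phi):
    """Regularized risk (1): lam * sum_j phi(w_j) + (1/m) sum_i loss(<w,x_i>, y_i)."""
    return lam * phi(w).sum() + loss(X @ w, y).mean()
```

For instance, `P(w, X, y, lam, hinge, l2)` evaluates the linear SVM objective, while `P(w, X, y, lam, squared, l1)` evaluates the LASSO objective.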

A number of specialized as well as general purpose algorithms have been proposed for minimizing the regularized risk. For instance, if both the loss and the regularizer are smooth, as is the case with logistic regression, then quasi-Newton algorithms such as L-BFGS [17] have been found to be very successful. On the other hand, for smooth regularizers but non-smooth loss functions, Teo et al. [27] proposed a bundle method for regularized risk minimization (BMRM). Another popular first-order solver is the alternating direction method of multipliers (ADMM) [4]. These optimizers belong to the broad class of batch minimization algorithms; that is, in order to perform a parameter update, at every iteration they compute the regularized risk \(P(\mathbf {w})\) as well as its gradient

$$\begin{aligned} \nabla P\left( \mathbf {w}\right) = \lambda \sum _{j=1}^{d} \nabla \phi _{j}\left( w_{j}\right) \mathbf {e}_{j} + \frac{1}{m} \sum _{i=1}^{m} \nabla \ell _{i} \left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \mathbf {x}_{i}, \end{aligned}$$
(2)

where \(\mathbf {e}_{j}\) denotes the j-th standard basis vector. Both \(P\left( \mathbf {w}\right) \) as well as the gradient \(\nabla P\left( \mathbf {w}\right) \) take O(md) time to compute, which is computationally expensive when m, the number of data points, is large. Batch algorithms can be efficiently parallelized, however, by exploiting the fact that the empirical risk \(\frac{1}{m} \sum _{i=1}^{m} \ell _{i} \left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \) as well as its gradient \(\frac{1}{m} \sum _{i=1}^{m} \nabla \ell _{i} \left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \mathbf {x}_{i}\) decompose over the data points, and therefore one can compute \(P\left( \mathbf {w}\right) \) and \(\nabla P\left( \mathbf {w}\right) \) in a distributed fashion [7].
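As an illustration, the batch gradient (2) for \(L_2\)-regularized logistic regression can be sketched as follows (a simplified sketch of ours; a distributed variant would compute the data-dependent term on partitions of X and sum the partial results across workers):

```python
import numpy as np

def grad_P(w, X, y, lam):
    """Full-batch gradient (2) for L2-regularized logistic regression."""
    m = X.shape[0]
    u = X @ w                                    # <w, x_i> for all i: O(md) work
    # d/du log(1 + exp(-y u)) = -y / (1 + exp(y u))
    dloss = -y / (1.0 + np.exp(y * u))
    return lam * w + (X.T @ dloss) / m           # nabla phi_j(w_j) = w_j for L2
```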

Batch algorithms, unfortunately, are known to be unfavorable for large scale machine learning both empirically and theoretically [3]. It is now widely accepted that stochastic algorithms which process one data point at a time are more effective for regularized risk minimization. In a nutshell, the idea here is that (2) can be stochastically approximated by

$$\begin{aligned} \mathbf {g}_{i} = \lambda \sum _{j=1}^{d} \nabla \phi _{j}\left( w_{j}\right) \mathbf {e}_{j} + \nabla \ell _{i} \left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \mathbf {x}_{i}, \end{aligned}$$
(3)

when i is chosen uniformly at random from \(\left\{ 1,{\ldots },m\right\} \). Note that \(\mathbf {g}_{i}\) is an unbiased estimator of the true gradient \(\nabla P\left( \mathbf {w}\right) \); that is, \(\mathbb {E}_{i\in \left\{ 1,{\ldots }, m\right\} } \left[ \mathbf {g}_{i}\right] = \nabla P\left( \mathbf {w}\right) \). Now we can replace the true gradient by this stochastic gradient to approximate a gradient descent update as

$$\begin{aligned} \mathbf {w}\leftarrow \mathbf {w}- \eta \mathbf {g}_{i}, \end{aligned}$$
(4)

where \(\eta \) is a step size parameter. Computing \(\mathbf {g}_i\) only takes O(d) effort, which is independent of m, the number of data points. Bottou and Bousquet [3] show that stochastic optimization is asymptotically faster than gradient descent and other second-order batch methods such as L-BFGS for regularized risk minimization.
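A minimal serial sketch of updates (3) and (4), again for the \(L_2\)-regularized logistic loss (ours, for illustration only):

```python
import numpy as np

def sgd(X, y, lam, eta=0.1, epochs=5, seed=0):
    """Serial SGD: each step costs O(d), independent of m."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(m):            # i drawn (pseudo-)uniformly
            u = X[i] @ w
            g = lam * w - y[i] * X[i] / (1.0 + np.exp(y[i] * u))   # eq. (3)
            w -= eta * g                                           # eq. (4)
    return w
```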

However, a drawback of update (4) is that it is no longer easy to parallelize. Usually, the computation of \(\mathbf {g}_{i}\) in (3) is a very lightweight operation for which parallel speed-up can rarely be expected. On the other hand, one cannot execute multiple updates of (4) simultaneously, since computing \(\mathbf {g}_{i}\) requires reading the latest value of \(\mathbf {w}\), while updating (4) requires writing to the components of \(\mathbf {w}\). The problem is even more severe in distributed memory systems, where the cost of communication between processors is significant.

Existing parallel stochastic optimization algorithms try to work around these difficulties in a somewhat ad-hoc manner (see Sect. 4). In this paper, we take a fundamentally different approach and propose a reformulation of the regularized risk (1), for which one can naturally derive a parallel stochastic optimization algorithm. Our technical contributions are:

  • We reformulate regularized risk minimization as an equivalent saddle-point problem, and show that it can be solved via a new distributed stochastic optimization (DSO) algorithm.

  • We prove an \(O\left( 1/\sqrt{T}\right) \) rate of convergence for DSO that is independent of the number of processors, and show theoretically that the total time required scales almost linearly with the number of processors.

  • We verify with empirical evaluations that when used for training linear support vector machines (SVMs) or binary logistic regression models, DSO is comparable to general-purpose stochastic (e.g., Zinkevich et al. [33]) or batch (e.g., Teo et al. [27]) optimizers.

2 Reformulating Regularized Risk Minimization

We begin by reformulating the regularized risk minimization problem as an equivalent saddle-point problem. Towards this end, we first rewrite (1) by introducing an auxiliary variable \(u_i\) for each \(\mathbf {x}_{i}\):

$$\begin{aligned} \min _{\mathbf {w}, \mathbf {u}} \; \; \lambda \sum _{j=1}^{d} \phi _{j}\left( w_{j}\right) + \frac{1}{m} \sum _{i=1}^{m} \ell _{i}\left( u_i\right) \quad \mathrm { s.t. } \; \; u_i=\left\langle \mathbf {w},\mathbf {x}_i \right\rangle \quad \forall \, i=1,{\ldots },m. \end{aligned}$$
(5)

By introducing Lagrange multipliers \(\alpha _i\) to eliminate the constraints, we obtain

$$\begin{aligned} \min _{\mathbf {w}, \mathbf {u}} \max _{\varvec{\alpha }} \lambda \sum _{j=1}^{d} \phi _{j}\left( w_{j}\right) + \frac{1}{m} \sum _{i=1}^{m} \left[ \ell _{i}\left( u_i\right) + \alpha _i\left( u_i -\left\langle \mathbf {w},\mathbf {x}_i \right\rangle \right) \right] . \end{aligned}$$

Here \(\mathbf {u}\) denotes a vector whose components are \(u_{i}\). Likewise, \(\varvec{\alpha }\) is a vector whose components are \(\alpha _{i}\). Since the objective function (5) is convex and the constraints are linear, strong duality applies [5]. Therefore, we can switch the maximization over \(\varvec{\alpha }\) and the minimization over \(\mathbf {w}, \mathbf {u}\). Note that \(\min _{u_i} \alpha _i u_i + \ell _{i}(u_i)\) can be written as \(-\ell _{i}^{\star } (-\alpha _{i})\), where \(\ell _i^{\star }(\cdot )\) is the Fenchel-Legendre conjugate of \(\ell _{i}(\cdot )\) [5]. The above transformations yield our formulation:

$$\begin{aligned} \max _{\varvec{\alpha }} \min _{\mathbf {w}} f\left( \mathbf {w}, \varvec{\alpha }\right) := \lambda \sum _{j=1}^{d} \phi _{j}\left( w_{j}\right) - \frac{1}{m} \sum _{i=1}^{m} \alpha _{i} \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle -\frac{1}{m} \sum _{i=1}^{m} \ell ^{\star }_{i}\left( -\alpha _{i}\right) . \end{aligned}$$
(6)

If we analytically minimize \(f(\mathbf {w}, \varvec{\alpha })\) in terms of \(\mathbf {w}\) to eliminate it, then we obtain the so-called dual objective, which is a function of \(\varvec{\alpha }\) alone. Moreover, any combination of \(\mathbf {w}^{*}\), a solution of the primal problem (1), and \(\varvec{\alpha }^{*}\), a solution of the dual problem, forms a saddle point of \(f\left( \mathbf {w}, \varvec{\alpha }\right) \) [5]. In other words, minimizing the primal, maximizing the dual, and finding a saddle point of \(f\left( \mathbf {w}, \varvec{\alpha }\right) \) are all equivalent problems.
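As a concrete instance (a worked example we add for illustration), consider the hinge loss \(\ell _i(u) = \max (0, 1 - y_i u)\) with \(y_i \in \{\pm 1\}\). A direct computation of the conjugate gives

$$\begin{aligned} \ell _i^{\star }(s) = \sup _u \left\{ s u - \ell _i(u) \right\} = {\left\{ \begin{array}{ll} s y_i &{} \text {if } s y_i \in [-1, 0], \\ +\infty &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$

so that \(-\ell _i^{\star }(-\alpha _i) = \alpha _i y_i\) with the constraint \(\alpha _i y_i \in [0, 1]\); substituting this into (6) recovers the familiar box-constrained dual of the linear SVM.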

2.1 Stochastic Optimization

Let \(x_{ij}\) denote the j-th coordinate of \(\mathbf {x}_{i}\), and \(\varOmega _{i} := \left\{ j: x_{ij} \ne 0\right\} \) denote the non-zero coordinates of \(\mathbf {x}_{i}\). Similarly, let \(\bar{\varOmega }_{j} := \left\{ i: x_{ij} \ne 0\right\} \) denote the set of data points whose j-th coordinate is non-zero, and let \(\varOmega := \left\{ \left( i,j\right) : x_{ij} \ne 0\right\} \) denote the set of all non-zero coordinates in the training dataset \(\mathbf {x}_{1},{\ldots }, \mathbf {x}_{m}\). Then, \(f(\mathbf {w}, \varvec{\alpha })\) can be rewritten as

$$\begin{aligned} f\left( \mathbf {w}, \varvec{\alpha }\right)&= \sum _{\left( i,j\right) \in \varOmega } \frac{\lambda \phi _j\left( w_j\right) }{\left| \bar{\varOmega }_j\right| } - \frac{\ell ^{\star }_i(-\alpha _i)}{m\left| \varOmega _i\right| } - \frac{\alpha _i w_jx_{ij}}{m} \\&=: \sum _{\left( i,j\right) \in \varOmega } f_{i,j}\left( w_{j}, \alpha _{i}\right) , \end{aligned}$$

where \(|\cdot |\) denotes the cardinality of a set. Remarkably, each component \(f_{i,j}\) in the above summation depends only on one component \(w_{j}\) of \(\mathbf {w}\) and one component \(\alpha _{i}\) of \(\varvec{\alpha }\). This allows us to derive an optimization algorithm which is stochastic in terms of both i and j. Let us define

$$\begin{aligned} \mathbf {g}_{i,j} := \Bigg ( \left| \varOmega \right| \left( \frac{\lambda \nabla \phi _{j} \left( w_j\right) }{\left| \bar{\varOmega }_{j}\right| } - \frac{\alpha _{i} x_{ij}}{m} \right) \mathbf {e}_{j}, \left| \varOmega \right| \left( \frac{\nabla \ell ^{\star }_{i}(-\alpha _{i})}{ m \left| \varOmega _{i}\right| } - \frac{w_{j} x_{ij}}{m} \right) \mathbf {e}_{i} \Bigg ). \end{aligned}$$
(7)

Under the uniform distribution over \(\left( i,\,j\right) \in \varOmega \), one can easily see that \(\mathbf {g}_{i,\,j}\) is an unbiased estimate of the gradient of \(f\left( \mathbf {w}, \varvec{\alpha }\right) \), that is, \(\mathbb {E}_{\left\{ \left( i,\,j\right) \in \varOmega \right\} } \left[ \mathbf {g}_{i,\,j}\right] = \left( \nabla _{\mathbf {w}} f\left( \mathbf {w}, \varvec{\alpha }\right) , \nabla _{\varvec{\alpha }} f\left( \mathbf {w}, \varvec{\alpha }\right) \right) \). Since we are interested in finding a saddle point of \(f\left( \mathbf {w}, \varvec{\alpha }\right) \), our stochastic optimization algorithm uses the stochastic gradient \(\mathbf {g}_{i,\,j}\) to take a descent step in \(\mathbf {w}\) and an ascent step in \(\varvec{\alpha }\) [19]:

$$\begin{aligned} w_j \leftarrow w_j - \eta \left( \frac{\lambda \nabla \phi _j\left( w_j\right) }{\left| \bar{\varOmega }_j\right| } - \frac{\alpha _i x_{ij}}{m}\right) ,\ \alpha _i \leftarrow \alpha _i + \eta \left( \frac{ \nabla \ell ^{\star }_i(-\alpha _i)}{m\left| \varOmega _i\right| } - \frac{w_j x_{ij}}{m} \right) . \end{aligned}$$
(8)

Surprisingly, the time complexity of update (8) is independent of the size of the data; it is O(1). Compare this with the O(md) complexity of a batch update and the O(d) complexity of regular stochastic gradient descent.

Note that in the above discussion, we implicitly assumed that \(\phi _{j}\left( \cdot \right) \) and \(\ell _{i}^{\star }\left( \cdot \right) \) are differentiable. If that is not the case, then their derivatives can be replaced by sub-gradients [5]. This approach can therefore deal with a wide range of regularized risk minimization problems.
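For illustration, one epoch of the serial saddle-point updates (8) can be sketched as follows for the \(L_2\) regularizer \(\phi _j(s) = \frac{1}{2}s^2\). This is our sketch, not the paper's implementation; `d_ell_star(alpha_i, i)` is a hypothetical placeholder for \(\nabla \ell _i^{\star }(-\alpha _i)\) (for the hinge loss above it is simply \(y_i\) on the feasible box), and the sparsity bookkeeping is simplified:

```python
import numpy as np

def saddle_epoch(w, alpha, X, lam, eta, rows, cols, d_ell_star, seed=0):
    """One serial epoch of updates (8); rows[k], cols[k] enumerate the
    nonzero entries (i, j) of the data matrix X."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    omega_bar = np.bincount(cols)        # |Omega_bar_j|: nonzeros in column j
    omega = np.bincount(rows)            # |Omega_i|: nonzeros in row i
    for k in rng.permutation(len(rows)):
        i, j = rows[k], cols[k]
        gw = lam * w[j] / omega_bar[j] - alpha[i] * X[i, j] / m
        ga = d_ell_star(alpha[i], i) / (m * omega[i]) - w[j] * X[i, j] / m
        w[j] -= eta * gw                 # descent step in w_j: O(1)
        alpha[i] += eta * ga             # ascent step in alpha_i: O(1)
    return w, alpha
```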

3 Parallelization

The minimax formulation (6) not only admits an efficient stochastic optimization algorithm, but also allows us to derive a distributed stochastic optimization (DSO) algorithm. The key observation underlying DSO is the following: Given \(\left( i,\,j\right) \) and \(\left( i', j'\right) \) both in \(\varOmega \), if \(i \ne i'\) and \(j \ne j'\) then one can simultaneously perform update (8) on \((w_{j}, \alpha _{i})\) and \((w_{j'}, \alpha _{i'})\). In other words, the updates to \(w_{j}\) and \(\alpha _{i}\) are independent of the updates to \(w_{j'}\) and \(\alpha _{i'}\), as long as \(i \ne i'\) and \(j \ne j'\).

Before we formally describe DSO, we would like to present some intuition using Fig. 1. Here we assume that we have 4 processors. The data matrix X is an \(m \times d\) matrix formed by stacking \(\mathbf {x}_{i}^{\top }\) for \(i=1,{\ldots },m\), while \(\mathbf {w}\) and \(\varvec{\alpha }\) denote the parameters to be optimized. The non-zero entries of X are marked by an \(\mathrm {x}\) in the figure. Initially, both parameters as well as rows of the data matrix are partitioned across processors as depicted in Fig. 1 (left); colors in the figure denote ownership: e.g., the first processor owns a fraction of the data matrix and a fraction of the parameters \(\varvec{\alpha }\) and \(\mathbf {w}\) (denoted \(\mathbf {w}^{\left( 1\right) }\) and \(\varvec{\alpha }^{\left( 1\right) }\)), shaded in red. Each processor samples a non-zero entry \(x_{ij}\) of X within the dark shaded rectangular region (active area) depicted in the figure, and updates the corresponding \(w_{j}\) and \(\alpha _{i}\). After performing updates, the processors stop and exchange coordinates of \(\mathbf {w}\). This defines an inner iteration. After each inner iteration, ownership of the \(\mathbf {w}\) variables, and hence the active area, changes as shown in Fig. 1 (right). If there are p processors, then p inner iterations define an epoch. Each coordinate of \(\mathbf {w}\) is updated by each processor at least once in an epoch. The algorithm iterates over epochs until convergence.

Four points are worth noting. First, since the active area of each processor does not share row or column coordinates with the active areas of the other processors, as per our key observation above, the updates can be carried out by each processor in parallel without any need for intermediate communication with other processors. Second, we partition and distribute the data only once. The coordinates of \(\varvec{\alpha }\) are partitioned at the beginning and are not exchanged by the processors; only coordinates of \(\mathbf {w}\) are exchanged. This means that the cost of communication is independent of m, the number of data points. Third, our algorithm works in shared memory, distributed memory, and hybrid (multiple threads on multiple machines) architectures. Fourth, the \(\mathbf {w}\) parameter is distributed across multiple machines without redundant storage, which makes the algorithm scale linearly in terms of space complexity. Compare this with the fact that most parallel optimization algorithms require each local machine to hold a copy of \(\mathbf {w}\).

Fig. 1. Illustration of DSO with 4 processors. The rows of the data matrix X as well as the parameters \(\mathbf {w}\) and \(\varvec{\alpha }\) are partitioned as shown. Colors denote ownerships. The active area of each processor is in dark colors. Left: the initial state. Right: the state after one bulk synchronization. (Color figure online)

To formally describe DSO, suppose p processors are available, and let \(I_{1},{\ldots }, I_{p}\) denote a fixed partition of the set \(\left\{ 1,{\ldots }, m\right\} \) and \(J_{1},{\ldots }, J_{p}\) denote a fixed partition of the set \(\left\{ 1,{\ldots }, d\right\} \) such that \(\left| I_{q}\right| \approx \left| I_{q'}\right| \) and \(\left| J_{r}\right| \approx \left| J_{r'}\right| \) for any \(1 \le q,q',r,r' \le p\). We partition the data \(\left\{ \mathbf {x}_{1},{\ldots }, \mathbf {x}_{m}\right\} \) and the labels \(\left\{ y_{1},{\ldots }, y_{m}\right\} \) into p disjoint subsets according to \(I_{1},{\ldots }, I_{p}\) and distribute them to p processors. The parameters \(\left\{ \alpha _{1},{\ldots }, \alpha _{m}\right\} \) are partitioned into p disjoint subsets \(\varvec{\alpha }^{(1)},{\ldots }, \varvec{\alpha }^{(p)}\) according to \(I_{1},{\ldots }, I_{p}\) while \(\left\{ w_{1},{\ldots }, w_{d}\right\} \) are partitioned into p disjoint subsets \(\mathbf {w}^{(1)},{\ldots }, \mathbf {w}^{(p)}\) according to \(J_{1},{\ldots }, J_{p}\) and distributed to p processors, respectively. The partitioning of \(\left\{ 1,{\ldots }, m\right\} \) and \(\left\{ 1,{\ldots }, d\right\} \) induces a \(p \times p\) partition of \(\varOmega \):

$$\begin{aligned} \varOmega ^{(q,r)} := \left\{ (i,j) \in \varOmega \;:\; i \in I_q, j \in J_r \right\} , \;\; \forall \, q,r \in \left\{ 1,{\ldots }, p\right\} . \end{aligned}$$

The execution of DSO proceeds in epochs, and each epoch consists of p inner iterations; at the beginning of the r-th inner iteration (\(r \ge 1\)), processor q owns \(\mathbf {w}^{\left( \sigma _{r}\left( q\right) \right) }\) where \(\sigma _{r}\left( q\right) := \left\{ \left( q + r - 2\right) \text { mod } p\right\} + 1\), and executes stochastic updates (8) on coordinates in \(\varOmega ^{\left( q,\sigma _{r}\left( q\right) \right) }\). Since these updates only involve variables in \(\varvec{\alpha }^{\left( q\right) }\) and \(\mathbf {w}^{\left( \sigma _{r}\left( q\right) \right) }\), no communication between processors is required to execute them. After every processor has finished its updates, \(\mathbf {w}^{\left( \sigma _{r}\left( q\right) \right) }\) is sent to machine \(\sigma _{r+1}^{-1}\left( \sigma _{r}\left( q\right) \right) \) and the algorithm moves on to the \((r+1)\)-st inner iteration. Detailed pseudo-code for the DSO algorithm can be found in Algorithm 1.

Algorithm 1. Distributed stochastic optimization (DSO).
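A compact sketch of the schedule in Algorithm 1 follows (our serial simulation under assumed data structures: `parts[q][r]` lists the nonzero pairs \((i,j)\) of \(\varOmega ^{(q+1,r+1)}\) in 0-based indexing, and `update` applies (8); in the real algorithm the inner q-loop runs on p machines in parallel):

```python
def dso_epoch(w, alpha, parts, p, update):
    """One epoch of DSO (Algorithm 1), simulated serially."""
    for r in range(p):                       # p inner iterations per epoch
        for q in range(p):                   # in DSO: p workers, in parallel
            s = (q + r) % p                  # 0-based sigma_r(q)
            for (i, j) in parts[q][s]:       # active area Omega^{(q, sigma_r(q))}
                update(w, alpha, i, j)       # stochastic update (8)
        # bulk synchronization: each worker sends its w-block to the next
        # owner, so ownership of the w-blocks rotates before iteration r + 1
    return w, alpha
```

Since s differs across q for any fixed r, no two workers touch the same w-block or \(\varvec{\alpha }\)-block, which is exactly the independence property noted above.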

3.1 Convergence Analysis

It is known that the stochastic procedure in Sect. 2.1 is guaranteed to converge to a saddle point of \(f(\mathbf {w},\varvec{\alpha })\) if the pairs \((i,\,j)\) are sampled uniformly at random [19]. The main technical difficulty in proving convergence in our case is that, due to its distributed nature, DSO does not sample the \(\left( i,\,j\right) \) coordinates uniformly at random. Therefore, we first prove that DSO is serializable in a certain sense, that is, there exists an ordering of the updates such that replaying them on a single machine would recover the same solution produced by DSO. We then analyze this serial algorithm to establish convergence. We believe that this proof technique is of independent interest, and it differs significantly from the convergence analyses of other parallel stochastic algorithms, which typically rely on assumptions about the correlation between data points, e.g., [6, 15]. We first formally state the main theorem, then prove three lemmas, and finally give the proof of the main theorem in the last part of this section.

Theorem 1

Let \(\left( \mathbf {w}^{t}, \varvec{\alpha }^{t}\right) \) and \(\left( \tilde{\mathbf {w}}^{t}, \tilde{\varvec{\alpha }}^{t}\right) := \left( \frac{1}{t}\sum _{s=1}^{t} \mathbf {w}^{s}, \frac{1}{t} \sum _{s=1}^{t} \varvec{\alpha }^{s}\right) \) denote the parameter values and the averaged parameter values, respectively, after the t-th epoch of Algorithm 1. Moreover, assume that \(\left\| \mathbf {w}\right\| ,\left\| \varvec{\alpha }\right\| ,\left| \nabla \phi _j(w_j)\right| ,\left| \nabla \ell _i^\star \left( -\alpha _{i}\right) \right| ,\) and \(\lambda \) are upper bounded by a constant \(c>1\). Then, there exists a constant C, dependent only on c, such that after T epochs the duality gap satisfies

$$\begin{aligned} \max _{\varvec{\alpha }} f\left( \tilde{\mathbf {w}}^{T}, \varvec{\alpha }\right) - \min _{\mathbf {w}} f\left( \mathbf {w}, \tilde{\varvec{\alpha }}^{T}\right) \le C \frac{\sqrt{d}}{\sqrt{T}}. \end{aligned}$$
(9)

On the other hand, if \(\phi _j(s) = \frac{1}{2}s^2\), \(\sqrt{{\max _i\left| \varOmega _i\right| }} < m\), and \(\eta _t < \frac{1}{\lambda }\) hold, then there exists a different constant \(C'\), dependent only on c, such that

$$\begin{aligned} \max _{\varvec{\alpha }'} f\left( \tilde{\mathbf {w}}^{T}{,}\varvec{\alpha }'\right) - \min _{\mathbf {w}'} f\left( \mathbf {w}', \tilde{\varvec{\alpha }}^{T}\right) \le \frac{C'}{\sqrt{T}}. \end{aligned}$$
(10)

The first lemma states that there exists an ordering of the pairs of coordinates \((i,\,j)\) that recovers the solution produced by DSO.

Lemma 1

During the inner iterations of the t-th epoch of Algorithm 1, let us index all \((i,\,j)\in \varOmega \) as \(\left( i_{k}, j_{k}\right) \) for \(k=1,{\ldots },|\varOmega |\) as follows: \(a<b\) if the update to \(\left( w_{j_{a}}, \alpha _{i_{a}}\right) \) was performed before the update to \(\left( w_{j_{b}}, \alpha _{i_{b}}\right) \); if \(\left( w_{j_{a}}, \alpha _{i_{a}}\right) \) and \(\left( w_{j_{b}}, \alpha _{i_{b}}\right) \) were updated at the same time, because p processors update the parameters simultaneously, then ties are broken by the rank of the processor performing the update. Denote the parameter values after k updates by \(\left( \mathbf {w}_{k}^{t}, \varvec{\alpha }_{k}^{t}\right) \). For all k and t we have

$$\begin{aligned}&\mathbf {w}^t_k = \mathbf {w}^t_{k-1} -\eta _t \nabla _{\mathbf {w}} f_k\left( \mathbf {w}^t_{k-1} ,\varvec{\alpha }^t_{k-1}\right) \end{aligned}$$
(11)
$$\begin{aligned}&\varvec{\alpha }^t_k = \varvec{\alpha }^t_{k-1} +\eta _t \nabla _{\varvec{\alpha }} f_k\left( \mathbf {w}^t_{k-1},\varvec{\alpha }^t_{k-1}\right) , \end{aligned}$$
(12)

where

$$\begin{aligned} f_{k} (\mathbf {w}, \varvec{\alpha }) := f_{i_{k}, j_{k}}\left( w_{j_k}, \alpha _{i_k}\right) . \end{aligned}$$

Proof

Let q be the processor which performed the k-th update in the \(\left( t+1\right) \)-st epoch, and let \(\left( k-\delta \right) \) be the most recent previous update done by processor q. There exist \(1 \le \delta ',\delta '' \le \delta \) such that \(\left( \mathbf {w}^t_{k-\delta '}, \varvec{\alpha }^t_{k-\delta ''}\right) \) are the parameter values read by the q-th processor to perform the k-th update. Because of our data partitioning scheme, only q can change the value of the \(i_k\)-th component of \(\varvec{\alpha }\) and the \(j_k\)-th component of \(\mathbf {w}\). Therefore, we have

$$\begin{aligned} \alpha ^t_{k-1,i_k} = \alpha ^t_{\kappa ,i_k} \ \mathrm { and } \ w^t_{k-1, j_k} = w^t_{\kappa , j_{k}}, \quad \text {for all } k-\delta \le \kappa \le k-1. \end{aligned}$$

Since \(f_k\) is invariant to changes in any coordinate other than \(\left( i_k,j_k\right) \), we have

$$\begin{aligned} f_k\left( \mathbf {w}^t_{k-\delta '} ,\varvec{\alpha }^t_{k-\delta ''}\right) = f_k\left( \mathbf {w}^t_{k-1} ,\varvec{\alpha }^t_{k-1}\right) . \end{aligned}$$

The claim holds because we can write the k-th update formula as

$$\begin{aligned} \mathbf {w}^t_k&= \mathbf {w}^t_{k-1} -\eta _t \nabla _{\mathbf {w}} f_k\left( \mathbf {w}^t_{k-\delta '} ,\varvec{\alpha }^t_{k-\delta ''}\right) \text { and } \end{aligned}$$
(13)
$$\begin{aligned} \varvec{\alpha }^t_k&= \varvec{\alpha }^t_{k-1} +\eta _t \nabla _{\varvec{\alpha }} f_k\left( \mathbf {w}^t_{k-\delta '} ,\varvec{\alpha }^t_{k-\delta ''}\right) . \end{aligned}$$
(14)

   \(\square \)

Next, we prove a technical lemma giving a sufficient condition for global convergence of general iterative algorithms on convex-concave functions. Note that it is closely related to well-known results on convex functions (e.g., Theorem 3.2.2 in [20] and Lemma 14.1 in [24]).

Lemma 2

Suppose there exist \(D>0\) and \(C>0\) such that for all \(\left( \mathbf {w}, \varvec{\alpha }\right) \) and \(\left( \mathbf {w}', \varvec{\alpha }'\right) \) we have \(\left\| \mathbf {w}- \mathbf {w}'\right\| ^{2} + \left\| \varvec{\alpha }- \varvec{\alpha }'\right\| ^{2} \le D\), and for all \(t = 1,{\ldots }, T\) and all \(\left( \mathbf {w},\varvec{\alpha }\right) \) we have

$$\begin{aligned} \left\| \mathbf {w}^{t+1} - \mathbf {w}\right\| ^{2} + \left\| \varvec{\alpha }^{t+1} - \varvec{\alpha }\right\| ^{2} \le&\left\| \mathbf {w}^{t} - \mathbf {w}\right\| ^{2} + \left\| \varvec{\alpha }^{t} - \varvec{\alpha }\right\| ^{2} \nonumber \\&- 2\eta _{t} \left( f\left( \mathbf {w}^{t},\varvec{\alpha }\right) - f\left( \mathbf {w},\varvec{\alpha }^{t}\right) \right) + C\eta _{t}^{2}, \end{aligned}$$
(15)

then setting \(\eta _{t} = \sqrt{\frac{D}{2Ct}}\) ensures that

$$\begin{aligned} \max _{\varvec{\alpha }'} f\left( \tilde{\mathbf {w}}^{T}{,}\varvec{\alpha }'\right) - \min _{\mathbf {w}'} f\left( \mathbf {w}', \tilde{\varvec{\alpha }}^{T}\right) \le \sqrt{\frac{2DC}{T}}. \end{aligned}$$
(16)

Proof

Rearrange (15) and divide by \(\eta _{t}\) to obtain

$$\begin{aligned} 2\left( f\left( \mathbf {w}^{t},\varvec{\alpha }\right) - f\left( \mathbf {w},\varvec{\alpha }^{t}\right) \right) \le \eta _{t} C + \frac{1}{\eta _t}&\Big ( \left\| \mathbf {w}^{t} - \mathbf {w}\right\| ^ 2 + \left\| \varvec{\alpha }^{t} - \varvec{\alpha }\right\| ^{2} \\&-\left\| \mathbf {w}^{t+1} - \mathbf {w}\right\| ^{2} - \left\| \varvec{\alpha }^{t+1} - \varvec{\alpha }\right\| ^{2} \Big ). \end{aligned}$$

Summing the above for \(t=1,{\ldots }, T\) yields

$$\begin{aligned} \nonumber 2\sum _{t=1}^{T} f\left( \mathbf {w}^{t},\varvec{\alpha }\right) - 2\sum _{t=1}^{T} f\left( \mathbf {w},\varvec{\alpha }^{t}\right)&\le \sum _{t=1}^{T} \eta _{t} C + \frac{1}{\eta _{1}}\left( \left\| \mathbf {w}^{1} - \mathbf {w}\right\| ^{2} + \left\| \varvec{\alpha }^{1} - \varvec{\alpha }\right\| ^{2}\right) \\ \nonumber&\quad + \sum _{t=2}^{T-1} \left( \frac{1}{\eta _{t+1}} - \frac{1}{\eta _{t}}\right) \left( \left\| \mathbf {w}^{t} - \mathbf {w}\right\| ^{2} + \left\| \varvec{\alpha }^{t} - \varvec{\alpha }\right\| ^{2}\right) \\ \nonumber&\quad - \frac{1}{\eta _{T}} \left( \left\| \mathbf {w}^{T+1} - \mathbf {w}\right\| ^{2} + \left\| \varvec{\alpha }^{T+1} - \varvec{\alpha }\right\| ^{2}\right) \\ \nonumber&\le \sum _{t=1}^{T} \eta _{t} C + \frac{1}{\eta _{1}} D + \sum _{t=2}^{T-1} \left( \frac{1}{\eta _{t+1}} - \frac{1}{\eta _{t}}\right) D \\&\le \sum _{t=1}^{T} \eta _{t} C + \frac{1}{\eta _{T}} D. \end{aligned}$$
(17)

On the other hand, thanks to convexity in \(\mathbf {w}\) and concavity in \(\varvec{\alpha }\), we see

$$\begin{aligned} f\left( \tilde{\mathbf {w}}^{T},\varvec{\alpha }\right) \le \frac{1}{T} \sum _{t=1}^{T} f\left( \mathbf {w}^{t},\varvec{\alpha }\right) , \ \text { and } \ -f\left( \mathbf {w},\tilde{\varvec{\alpha }}^{T}\right) \le \frac{1}{T} \sum _{t=1}^{T} {-f}\left( \mathbf {w},\varvec{\alpha }^{t}\right) . \end{aligned}$$

Using these bounds in (17) and setting \(\eta _{t} = \sqrt{\frac{D}{2Ct}}\) leads to the following inequalities

$$\begin{aligned} f\left( \tilde{\mathbf {w}}^{T},\varvec{\alpha }\right) - f\left( \mathbf {w},\tilde{\varvec{\alpha }}^{T}\right) \le \frac{\sum _{t=1}^{T} \eta _{t} C + \frac{1}{\eta _{T}} D}{2T} \le \frac{\sqrt{DC}}{2T} \sum _{t=1}^{T} \frac{1}{\sqrt{2t}} + \sqrt{\frac{DC}{2T}}. \end{aligned}$$

The claim in (16) follows by using \(\sum _{t=1}^{T} \frac{1}{\sqrt{2t}} \le \sqrt{2T}\).    \(\square \)
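For completeness, this last inequality is the standard integral comparison (a step we spell out, not in the original text):

$$\begin{aligned} \sum _{t=1}^{T} \frac{1}{\sqrt{2t}} = \frac{1}{\sqrt{2}} \sum _{t=1}^{T} \frac{1}{\sqrt{t}} \le \frac{1}{\sqrt{2}} \int _{0}^{T} \frac{dt}{\sqrt{t}} = \frac{2\sqrt{T}}{\sqrt{2}} = \sqrt{2T}. \end{aligned}$$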

In order to derive (9), the constant C in (15) must be of order d; to obtain (10) in the case of the \(L_2\) regularizer, it must depend only on c. The last lemma establishes both facts. Its proof is technical and related to techniques outlined in Nedić and Bertsekas [18]; see the Appendix for the proof.

Lemma 3

Under the assumptions outlined in Theorem 1 and the updates (13) and (14), inequality (15) is satisfied with C of the form \(C=C_1 d\), and with \(C=C_2\) in the case of the \(L_2\) regularizer. Here \(C_1\) and \(C_2\) depend only on c.

Given these three lemmas, the proof of Theorem 1 is straightforward.

Proof

Because the parameters produced by Algorithm 1 are the same as those defined by (13) and (14), it is sufficient to show that (13) and (14) lead to the statements of the theorem. From Lemma 3 and the fact that \(\left\| \mathbf {w}- \mathbf {w}'\right\| ^{2} + \left\| \varvec{\alpha }- \varvec{\alpha }'\right\| ^{2} \le 8c^2\), (16) of Lemma 2 holds with \(\sqrt{CD} = 2c\sqrt{2C_1 d}\) in the general case and with \(\sqrt{CD} = 2c\sqrt{2C_2}\) in the case of the \(L_2\) regularizer, where \(C_1\) and \(C_2\) depend only on c. This immediately implies (9) and (10).    \(\square \)

To understand the implications of the above theorem, let us assume that Algorithm 1 is run with \(p \le \min \left( m, d\right) \) processors with a partitioning of \(\varOmega \) such that \(\left| \varOmega ^{\left( q, \sigma _{r}\left( q\right) \right) }\right| \approx \frac{\left| \varOmega \right| }{p^{2}}\) and \(\left| J_q\right| \approx \frac{d}{p}\) for all q. Let \(T_\mathrm{u}\) denote the time taken to perform the updates in one epoch, which is \(\mathcal {O}(|\varOmega |)\). Moreover, let \(T_\mathrm{c}\) denote the time taken to communicate \(\mathbf {w}\) across the network, which is \(\mathcal {O}(d)\), and assume that communicating a subset of \(\mathbf {w}\) takes time proportional to its cardinality. Under these assumptions, the time for each inner iteration of Algorithm 1 can be written as

$$\begin{aligned} \frac{\left| \varOmega ^{\left( q,\sigma _{r}\left( q\right) \right) }\right| }{\left| \varOmega \right| } T_\mathrm{u} + \frac{\left| J_{\sigma _{r}\left( q\right) }\right| }{d} T_\mathrm{c} \approx \frac{ T_\mathrm{u} }{p^2} + \frac{T_\mathrm{c}}{p}. \end{aligned}$$

Since there are p inner iterations per epoch, the time required to finish an epoch is \(T_\mathrm{u} / {p} + T_\mathrm{c}\). As per Theorem 1 the number of epochs to obtain an \(\epsilon \) accurate solution is independent of p. Therefore, one can conclude that DSO scales linearly in p as long as \( T_\mathrm{u}/{ T_\mathrm{c}} \gg p \) holds. As is to be expected, for large enough p the cost of communication \(T_\mathrm{c}\) will eventually dominate.
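This cost model is simple enough to state in a few lines (a sketch of the model above; `T_u` and `T_c` are the assumed update and communication constants, not measured values):

```python
def epoch_time(p, T_u, T_c):
    """Predicted time per epoch: p inner iterations, each costing
    T_u / p**2 for updates and T_c / p for communicating a w-block."""
    return p * (T_u / p**2 + T_c / p)   # = T_u / p + T_c
```

Near-linear speedup in p persists as long as the `T_u / p` term dominates `T_c`, i.e., while \(T_\mathrm{u}/T_\mathrm{c} \gg p\).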

4 Related Work

Effective parallelization of stochastic optimization for regularized risk minimization has received significant research attention in recent years. Because of space limitations, our review of related work will unfortunately only be partial.

The key difficulty in parallelizing update (4) is that gradient calculation requires us to read, while updating the parameter requires us to write to, the coordinates of \(\mathbf {w}\). Consequently, updates have to be executed in serial. Existing work has focused on working around this limitation of stochastic optimization by either (a) introducing strategies for computing the stochastic gradient in parallel (e.g., Langford et al. [15]), (b) updating the parameter in parallel (e.g., Bradley et al. [6], Recht et al. [21]), (c) performing independent updates and combining the resulting parameter vectors (e.g., Zinkevich et al. [33]), or (d) periodically exchanging information between processors (e.g., Bertsekas and Tsitsiklis [2]). While the former two strategies are popular in the shared memory setting, the latter two are popular in the distributed memory setting. In many cases the convergence bounds depend on the amount of correlation between data points and are limited to the case of a strongly convex regularizer (Hsieh et al. [12], Yang [30], Zhang and Xiao [32]). In contrast, our bounds in Theorem 1 do not depend on such properties of the data and are more general.

Algorithms that use a so-called parameter server to synchronize variable updates across processors have recently become popular (e.g., Li et al. [16]). The main drawback of these methods is that it is not easy to "serialize" the updates, that is, to replay the updates on a single machine. This makes proving convergence guarantees, and debugging such frameworks, rather difficult, although some recent progress has been made [16].

The observation that updates on individual coordinates of the parameters can be carried out in parallel has been used for other models. In the context of Latent Dirichlet Allocation, Yan et al. [29] used a similar observation to derive an efficient GPU based collapsed Gibbs sampler. On the other hand, for matrix factorization Gemulla et al. [10] and Recht and Ré [22] independently proposed parallel algorithms based on a similar idea. However, to the best of our knowledge, rewriting (1) as a saddle point problem in order to discover parallelism is our novel contribution.

Table 1. Summary of the datasets used in our experiments. m is the total # of examples, d is the # of features, s is the feature density (% of features that are non-zero). K/M/G denotes a thousand/million/billion.

5 Experimental Results

5.1 Dataset and Implementation Details

We implemented DSO, SGD, and PSGD ourselves, while for BMRM we used the optimized implementation available from the toolkit for advanced optimization (TAO, https://bitbucket.org/sarich/tao-2.2). All algorithms are implemented in C++ and use MPI for communication. In our multi-machine experiments, each algorithm was run on four machines with eight cores per machine. DSO, SGD, and PSGD used AdaGrad [8] step size adaptation, and we used the stochastic variance reduced gradient (SVRG) of Johnson and Zhang [14] to accelerate the updates of DSO. In the multi-machine setting, DSO initializes the parameters of each MPI process by executing twenty iterations of dual coordinate descent [9] on the local data to initialize the \(w_j\) and \(\alpha _i\) parameters; the \(w_j\) values are then averaged across machines. We chose binary logistic regression and linear SVM as test problems, i.e., \(\phi _j(s)=\frac{1}{2}s^2\) with \(\ell _i(u)=\log (1+\exp (-u))\) and \(\ell _i(u)=[1-u]_+\), respectively. To prevent degeneracy in logistic regression, the values of the \(\alpha _{i}\)'s are restricted to \((10^{-14}, 1-10^{-14})\), while in the case of linear SVM they are restricted to [0, 1]. Similarly, the \(w_{j}\)'s are restricted to lie in the interval \([-1/\sqrt{\lambda }, 1/\sqrt{\lambda }]\) for linear SVM and \([-\sqrt{\log (2)/\lambda }, \sqrt{\log (2)/\lambda }]\) for logistic regression, following the idea of Shalev-Shwartz et al. [25].
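As an illustration of the safeguards above, a single AdaGrad-style step with box clipping might look as follows (a sketch under our own naming, not the paper's exact C++ implementation; `lo`/`hi` stand in for interval endpoints such as \(-1/\sqrt{\lambda }\) and \(1/\sqrt{\lambda }\)):

```python
import numpy as np

def adagrad_step(w, g, h, eta0=0.1, lo=-1.0, hi=1.0, eps=1e-8):
    """One AdaGrad step [8] with projection onto the box [lo, hi]."""
    h = h + g * g                             # accumulated squared gradients
    w = w - eta0 * g / (np.sqrt(h) + eps)     # per-coordinate step size
    return np.clip(w, lo, hi), h
```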

Fig. 2. The average time per epoch using p machines on the webspam-t dataset.

5.2 Scalability of DSO

We first verify that the per-epoch complexity of DSO scales as \(T_\mathrm{u} / p + T_\mathrm{c}\), as predicted by our analysis in Sect. 3.1. Towards this end, we took the webspam-t dataset of Webb et al. [28], which is one of the largest datasets we could comfortably fit on a single machine. We let \(p \in \left\{ 1, 2, 4, 8, 16\right\} \) while fixing the number of cores on each machine to four.

Using the average time per epoch on one and two machines, one can estimate \(T_\mathrm{u}\) and \(T_\mathrm{c}\). Given these values, one can then predict the time per iteration for other values of p. Figure 2 shows the predicted time and the measured time averaged over 40 epochs. As can be seen, the time per epoch indeed goes down as \(\approx 1/p\), as predicted by the theory. The test error and objective function values on multiple machines were very close to those observed on a single machine, thus confirming Theorem 1.
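Concretely, the two constants follow from the model \(T_\mathrm{u}/p + T_\mathrm{c}\) and two measurements (the numbers below are illustrative, not our measured values):

```python
t1, t2 = 100.0, 60.0          # average seconds/epoch measured at p = 1, 2
T_u = 2.0 * (t1 - t2)         # since t1 - t2 = T_u / 2
T_c = t1 - T_u
for p in (4, 8, 16):
    print(p, T_u / p + T_c)   # predicted time per epoch for larger p
```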

5.3 Comparison with Other Solvers

In our single machine experiments we compare DSO with stochastic gradient descent (SGD) and bundle methods for regularized risk minimization (BMRM) of Teo et al. [27]. In our multi-machine experiments we compare with parallel stochastic gradient descent (PSGD) of Zinkevich et al. [33] and BMRM. We chose these competitors because, just like DSO, they are general purpose solvers for regularized risk minimization (1), and hence can solve non-smooth problems such as SVMs as well as smooth problems such as logistic regression. Moreover, BMRM is a specialized solver for regularized risk minimization, which has similar performance to other first-order solvers such as ADMM.

Fig. 3. The test error of different optimization algorithms on linear SVM with the real-sim dataset, as a function of the number of iterations.

Fig. 4. The test error of different optimization algorithms on logistic regression with the webspam-t dataset, as a function of elapsed time.

We selected two representative datasets and two values of the regularization parameter, \(\lambda \in \left\{ 10^{-5}, 10^{-6}\right\} \), to present our results. For the single machine experiments we used the real-sim dataset from Hsieh et al. [13], while for the multi-machine experiments we used webspam-t. Details of the datasets can be found in Table 1 in the appendix. We use the test error rate as our comparison metric, since stochastic optimization algorithms are efficient in terms of minimizing the generalization error, not the training error [3]. The results of the single machine experiments on linear SVM training can be found in Fig. 3. As can be seen, DSO shows efficiency comparable to that of SGD and outperforms BMRM. This demonstrates that saddle-point optimization is a viable strategy even in the serial setting.

Our multi-machine experimental results for linear SVM training can be found in Fig. 5. As can be seen, PSGD converges very quickly, but the quality of the final solution is poor; this is probably because PSGD only solves processor-local problems and is not guaranteed to converge to the global optimum. On the other hand, both BMRM and DSO converge to solutions of similar quality, and at fairly comparable rates. Similar trends were observed for logistic regression; we therefore only show the results with \(\lambda = 10^{-5}\) in Fig. 4.

Fig. 5. Test errors of different parallel optimization algorithms on linear SVM with the webspam-t dataset, as a function of elapsed time.

5.4 Terascale Learning with DSO

Next, we demonstrate the scalability of DSO on one of the largest publicly available datasets. Following the same experimental setup as Agarwal et al. [1], we work with the splice site recognition dataset [26] which contains 50 million training data points, each of which has around 11.7 million dimensions. Each datapoint has approximately 2000 non-zero coordinates and the entire dataset requires around 3 TB of storage. Previously [26], it has been shown that sub-sampling reduces performance, and therefore we need to use the entire dataset for training.

Similar to Agarwal et al. [1], our goal is not to show the best classification accuracy on this data (this is best left to domain experts and feature designers). Instead, we wish to demonstrate the scalability of DSO and establish that (a) it can scale to such massive datasets, and (b) the empirical performance as measured by AUPRC (Area Under Precision-Recall Curve) improves as a function of time.

Fig. 6. AUPRC (Area Under Precision-Recall Curve) as a function of elapsed time on linear SVM with the splice site recognition dataset.

We used 14 machines with 8 cores per machine to train a linear SVM, and plot AUPRC as a function of time in Fig. 6. Since PSGD did not perform well in earlier experiments, here we restrict our comparison to BMRM. This experiment demonstrates one of the advantages of stochastic optimization, namely that the test performance increases steadily as a function of the number of iterations. On the other hand, for a batch solver like BMRM the AUPRC fluctuates as a function of the iteration number. The practical consequence of this observation is that one usually needs to wait for a batch optimizer to converge before using the resulting solution. In contrast, even the partial solutions produced by a stochastic optimizer such as DSO usually exhibit good generalization properties.

6 Discussion and Conclusion

We presented a new reformulation of regularized risk minimization as a saddle point problem and showed that one can derive an efficient distributed stochastic optimizer (DSO). We also proved rates of convergence of DSO. Unlike other solvers, our algorithm does not require strong convexity and thus has wider applicability. Our experimental results show that DSO is competitive with state-of-the-art optimizers such as BMRM and SGD, and outperforms simple parallel stochastic optimization algorithms such as PSGD.

A natural next step is to derive an asynchronous version of the DSO algorithm along the lines of the NOMAD algorithm proposed by Yun et al. [31]. Our convergence proof, which relies only on the existence of an equivalent serial sequence of updates, will still apply. Of course, there is also room to further improve the performance of DSO by deriving better step size adaptation schedules and by exploiting memory caching to speed up random access.