Abstract
This work focuses on distributed optimization for multi-task learning with matrix sparsity regularization. We propose a fast communication-efficient distributed optimization method for solving the problem. With the proposed method, training data of different tasks can be geo-distributed over different local machines, and the tasks can be learned jointly through the matrix sparsity regularization without a need to centralize the data. We theoretically prove that our proposed method enjoys a fast convergence rate for different types of loss functions in the distributed environment. To further reduce the communication cost during the distributed optimization procedure, we propose a data screening approach to safely filter inactive features or variables. Finally, we conduct extensive experiments on both synthetic and real-world datasets to demonstrate the effectiveness of our proposed method.
1 Introduction
Multi-task learning (MTL) (Caruana 1997) aims to jointly learn multiple machine learning tasks by exploiting their commonality to boost the generalization performance of each task. Like many standard machine learning techniques, MTL assumes that a single machine can access all training data over different tasks. In practice, however, especially in the context of smart cities, training data for different tasks is owned by different organizations and geo-distributed over different local machines; centralizing the data may incur expensive data transmission and raise privacy and security issues. Take personalized healthcare as a motivating example. In this context, learning a personalized healthcare prediction model from each user’s personal data, including his/her profile and various sensor readings from his/her mobile device, is considered a separate task. On one hand, the personal data may be too sparse to learn a precise prediction model for each task, so MTL is desired. On the other hand, some users may not be willing to share their personal data, which prevents the application of standard MTL methods. A distributed MTL algorithm is therefore preferable. However, if the distributed MTL algorithm requires frequent communication to obtain an optimal prediction model for each task, users have to bear expensive data transmission costs, which is impractical. Therefore, designing a communication-efficient MTL algorithm in the distributed computing environment is crucial to addressing the aforementioned problem.
Though a number of distributed machine learning frameworks have been proposed, most of them focus on single-task learning problems (Li et al. 2014; Boyd et al. 2011; Jaggi et al. 2014; Ma et al. 2015). In particular, COCOA+ has been proposed as a general distributed machine learning framework for strongly convex learning problems (Smith et al. 2017b; Ma et al. 2015; Jaggi et al. 2014). To handle non-strongly convex regularizers (e.g., the \(\ell _1\)-norm), Smith et al. (2015, 2017b) extended COCOA+ by directly solving the primal problem instead of its dual. However, their method requires the data to be distributed by features rather than by instances. In our problem setting, we suppose the training data for different tasks is originally geo-distributed over different machines. In this case, to use the method proposed in Smith et al. (2015, 2017b), one would first have to centralize the data of all the tasks and then re-distribute it w.r.t. different sets of features, which is impractical.
In this paper, different from previous methods, we focus on the MTL formulation with an \(\ell _{2,1}\)-norm regularization on the weight matrix over all the tasks, and offer a communication-efficient distributed optimization framework to solve it. Specifically, we make two main contributions: (1) We first present an efficient distributed optimization method that enjoys a fast convergence rate for solving the \(\ell _{2,1}\)-norm regularized MTL problem. To achieve this, we carefully design a subproblem for each local worker by incorporating an extrapolation step on the dual variables. We theoretically prove that with the well-designed local subproblem, our proposed method obtains a faster convergence rate than COCOA+ (Ma et al. 2015; Smith et al. 2017b), especially on ill-conditioned problems. Recently, Ma et al. (2017) also attempted to improve the convergence rate of COCOA+. However, our acceleration scheme differs from theirs: with a strongly convex regularizer, the acceleration in Ma et al. (2017) applies only to Lipschitz continuous losses, while our method improves the convergence rate for both smooth and Lipschitz continuous losses. (2) To further reduce the communication cost at each round when handling extremely high-dimensional data, we propose a dynamic feature screening approach that progressively eliminates the features associated with zero values in the optimal solution. Consequently, the communication cost can be substantially reduced, as only a few features are associated with nonzero values in the solution due to the sparsity regularization. Note that although several data or feature screening approaches exist for single-task learning and MTL problems, we believe ours is the first designed to reduce communication cost in distributed optimization.
Recently, there have been several attempts at developing distributed optimization frameworks for MTL. Baytas et al. (2016) and Xie et al. (2017) proposed asynchronous proximal gradient based algorithms for distributed MTL. Their methods, however, are communication-heavy, as gradients need to be frequently communicated among machines. Wang et al. (2016) proposed a Distributed debiased Sparse Multi-task Lasso (DSML) algorithm. In DSML, there is only one round of communication between the local workers and the master. However, it requires the local workers to perform heavy computation (i.e., estimating a \(d \times d\) sparse matrix) to obtain a debiased lasso solution. More importantly, DSML makes a stronger assumption to ensure support recovery. More recently, to trade off local computation against global communication, COCOA+ was extended to multi-task relationship learning by Liu et al. (2017). This problem was further studied in Smith et al. (2017a) by considering statistical and systems challenges. Note that our work differs from Liu et al. (2017) and Smith et al. (2017a) in two ways: (1) Our proposed method enjoys a faster convergence rate than the ones analyzed in Liu et al. (2017) and Smith et al. (2017a), whose rates match COCOA+. (2) We study a different MTL model. Specifically, Liu et al. (2017) and Smith et al. (2017a) studied the task-relationship-based MTL model (Zhang and Yeung 2010), while our problem is feature-based MTL; the two are different, as discussed in Zhang and Yang (2017). Moreover, as our work focuses on a feature-based MTL model with sparsity (Obozinski et al. 2010, 2011; Wang and Ye 2015), it enables us to design a tailored feature screening technique to further reduce the communication cost. Unlike our framework, decentralized MTL methods have also been studied in Wang et al. (2018), Bellet et al. (2018), Vanhaesebrouck et al. (2017) and Zhang et al. (2018). However, these approaches may incur heavier communication cost, because frequent communication between tasks is often required in MTL.
2 Notation and preliminaries
Throughout this paper, \({\mathbf {w}} \in \mathbb {R}^{dK}\) and \({\mathbf {W}} \in \mathbb {R}^{d \times K}\) denote a vector and a matrix, respectively, and \({\mathscr {G}}\) denotes a set.
\([m] \mathop {=}\limits ^{\mathrm{def}}\{i ~|~ 1 \le i \le m, i \in \mathbb {N}\}\), \(\big \{\mathscr {G}_j^{}\big \}_{j=1}^d\): \(\mathscr {G}_j^{} \mathop {=}\limits ^{\mathrm{def}}\big \{(k-1)d + j~|~ k \in [K] \big \}\), \([x]_+ \mathop {=}\limits ^{\mathrm{def}}\max (x, 0)\).
\(w_i^{}\) and \(W_{ij}^{}\): the ith and (i, j)th entries of \({\mathbf {w}}\) and \({\mathbf {W}}\), respectively.
\({\mathbf {W}}_{i\varvec{\cdot }}^{}\): the ith row of \({\mathbf {W}}\), \({\mathbf {w}}_{\mathscr {G}}^{} \mathop {=}\limits ^{\mathrm{def}}\{w_i^{} ~|~ i \in \mathscr {G}\},{\mathbf {W}}_{\mathscr {G}\varvec{\cdot }} \mathop {=}\limits ^{\mathrm{def}}\{{\mathbf {W}}_{i \varvec{\cdot }} ~|~ i \in \mathscr {G}\}\).
\({\mathbf {0}}\): a vector or matrix with all its entries equal to 0, \({\mathbf {I}}\): identity matrix.
\(\Vert {\mathbf {w}}\Vert \mathop {=}\limits ^{\mathrm{def}}\sqrt{\langle {\mathbf {w}}, {\mathbf {w}}\rangle }\): \(\ell _2\)-norm of \({\mathbf {w}}\), \(\Vert {\mathbf {W}}\Vert _\text {F}^{} \mathop {=}\limits ^{\mathrm{def}}\sqrt{\mathrm{tr}[{\mathbf {W}}^\top {\mathbf {W}}]}\): Frobenius norm of \({\mathbf {W}}\).
\(\Vert {\mathbf {w}}\Vert _{2,1}^{} \mathop {=}\limits ^{\mathrm{def}}\sum _{j=1}^d \Vert {\mathbf {w}}_{\mathscr {G}_j^{}}\Vert \) and \(\Vert {\mathbf {W}}\Vert _{2,1}^{} \mathop {=}\limits ^{\mathrm{def}}\sum _{j=1}^d \Vert {\mathbf {W}}_{j\varvec{\cdot }}\Vert \): \(\ell _{2,1}^{}\)-norm of \({\mathbf {w}}\) and \({\mathbf {W}}\), respectively.
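As a quick sanity check of this notation, the group \(\mathscr {G}_j\) collects the jth feature across all K tasks, so the vector and matrix forms of the \(\ell _{2,1}^{}\)-norm coincide when \({\mathbf {w}}\) stacks the columns of \({\mathbf {W}}\). A minimal NumPy sketch (names and shapes are illustrative, not from the paper):

```python
import numpy as np

d, K = 4, 3
rng = np.random.default_rng(0)
W = rng.standard_normal((d, K))     # weight matrix, one column per task
w = W.T.reshape(-1)                 # w = [(w^1)^T, ..., (w^K)^T]^T in R^{dK}

# group G_j collects the j-th feature of every task: indices k*d + j (0-based)
def l21_vec(w, d, K):
    return sum(np.linalg.norm(w[[k * d + j for k in range(K)]]) for j in range(d))

l21_mat = np.linalg.norm(W, axis=1).sum()   # sum of row norms of W

assert np.isclose(l21_vec(w, d, K), l21_mat)
```

Summing the Euclidean norms of the rows of \({\mathbf {W}}\) is what encourages entire rows, i.e., features shared across tasks, to vanish.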
Definition 1
A function \(f(\cdot )\) is L-Lipschitz continuous with respect to \(\Vert \cdot \Vert \), if \(\forall {\mathbf {w}}, \widehat{{\mathbf {w}}} \in \mathbb {R}^d\) it holds that \(|f(\widehat{{\mathbf {w}}}) - f({\mathbf {w}})| \le L \Vert \widehat{{\mathbf {w}}} - {\mathbf {w}}\Vert \).
Definition 2
A function \(f(\cdot )\) is L-smooth with respect to \(\Vert \cdot \Vert \), if \(\forall {\mathbf {w}}, \widehat{{\mathbf {w}}} \in \mathbb {R}^d\) it holds that \(f(\widehat{{\mathbf {w}}}) \le f({\mathbf {w}}) + \langle \nabla f({\mathbf {w}}), \widehat{{\mathbf {w}}} - {\mathbf {w}}\rangle + L\Vert \widehat{{\mathbf {w}}} - {\mathbf {w}}\Vert ^2/2\).
Definition 3
A function \(f(\cdot )\) is \(\gamma \)-strongly convex with respect to \(\Vert \cdot \Vert \), if \(\forall {\mathbf {w}}, \widehat{{\mathbf {w}}} \in \mathbb {R}^d\) it holds that \(f(\widehat{{\mathbf {w}}}) \ge f({\mathbf {w}}) + \langle \nabla f({\mathbf {w}}), \widehat{{\mathbf {w}}} - {\mathbf {w}} \rangle + \gamma \left\| \widehat{{\mathbf {w}}} - {\mathbf {w}}\right\| ^2/2\).
Definition 4
For function \(f(\cdot )\), its convex conjugate \(f_{}^*(\cdot )\) is defined as \(f_{}^*(\varvec{\alpha }) \mathop {=}\limits ^{\mathrm{def}}\sup _{{\mathbf {w}}} \big \{ \langle \varvec{\alpha }, {\mathbf {w}}\rangle - f({\mathbf {w}}) \big \}\).
Lemma 1
(Hiriart-Urruty and Lemaréchal 1993) Assume that function f is closed and convex. If f is \((1/\gamma )\)-smooth w.r.t. \(\Vert \cdot \Vert \), then \(f^*\) is \(\gamma \)-strongly convex w.r.t. the dual norm \(\Vert \cdot \Vert _*\).
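Lemma 1 can be checked numerically on a simple example. The sketch below (an illustration of the lemma, not part of the paper) takes the L-smooth function \(f(x) = Lx^2/2\), computes its conjugate by brute-force maximization over a grid, and confirms that it matches the closed form \(f^*(a) = a^2/(2L)\), which is \((1/L)\)-strongly convex:

```python
import numpy as np

L = 4.0                              # f(x) = L x^2 / 2 is L-smooth
f = lambda x: 0.5 * L * x**2

# numerical conjugate: f*(a) = sup_x { a*x - f(x) }, sup taken over a fine grid
xs = np.linspace(-50, 50, 200001)
def f_conj(a):
    return np.max(a * xs - f(xs))

# closed form: f*(a) = a^2 / (2L), a (1/L)-strongly convex function
for a in (-3.0, 0.0, 1.5, 7.0):
    assert abs(f_conj(a) - a**2 / (2 * L)) < 1e-4
```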
3 Problem setup
For simplicity, we consider the setting with K tasks distributed over K workers.Footnote 1 For each task k, we have \(n_k^{}\) labeled instances \(\{{\mathbf {x}}_i^k, y_i^k\}_{i=1}^{n_k^{}}\) stored locally on worker k, where \({\mathbf {x}}_i^k \in \mathbb {R}^d\) is the ith input, and \(y_i^k\) is the corresponding output. Our goal is to jointly learn different models in terms of \({\mathbf {w}}^k \in \mathbb {R}^d, k \in [K]\) for each task. For ease of presentation, we define
\(n \mathop {=}\limits ^{\mathrm{def}}\sum _{k=1}^K n_k^{}\): the total number of training instances over all the tasks.
\({\mathbf {X}}^k \mathop {=}\limits ^{\mathrm{def}}\big [{\mathbf {x}}_1^k, \ldots , {\mathbf {x}}_{n_k^{}}^k\big ] \in \mathbb {R}^{d \times n_k^{}}\) and \({\mathbf {y}}^k \mathop {=}\limits ^{\mathrm{def}}\big [y_1^k, \ldots , y_{n_k^{}}^k\big ]^\top \in \mathbb {R}^{n_k^{}}\): the input and output for task k.
\({\mathbf {W}} \mathop {=}\limits ^{\mathrm{def}}[{\mathbf {w}}^1, \ldots , {\mathbf {w}}^K] \in \mathbb {R}^{d \times K}\): the weight matrix over all the tasks.
\({\mathbf {A}} \mathop {=}\limits ^{\mathrm{def}}\mathrm{diag}\big ({\mathbf {X}}^1, \ldots , {\mathbf {X}}^K\big ) \in \mathbb {R}^{dK \times n}\): the block-diagonal data matrix, \({\mathbf {w}} \mathop {=}\limits ^{\mathrm{def}}[({\mathbf {w}}^1)^\top , \ldots , ({\mathbf {w}}^K)^\top ]^\top \in \mathbb {R}^{dK}\).
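Because \({\mathbf {A}}\) is block-diagonal, \({\mathbf {A}}\varvec{\alpha }\) is simply the per-task products \({\mathbf {X}}^k\varvec{\alpha }^k\) stacked, which is what later allows each worker to compute its share locally. A small NumPy sketch of this bookkeeping (shapes are illustrative):

```python
import numpy as np

d, K = 3, 2
nks = [4, 5]
rng = np.random.default_rng(1)
Xs = [rng.standard_normal((d, nk)) for nk in nks]   # X^k in R^{d x n_k}
alphas = [rng.standard_normal(nk) for nk in nks]    # dual block of task k

# A = diag(X^1, ..., X^K) in R^{dK x n}
n = sum(nks)
A = np.zeros((d * K, n))
col = 0
for k, X in enumerate(Xs):
    A[k * d:(k + 1) * d, col:col + nks[k]] = X
    col += nks[k]

alpha = np.concatenate(alphas)
# A @ alpha is just the per-task products X^k alpha^k stacked
assert np.allclose(A @ alpha, np.concatenate([X @ a for X, a in zip(Xs, alphas)]))
```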
We focus on the following MTL formulation with sparsity regularization (Obozinski et al. 2010, 2011; Lee et al. 2010; Wang and Ye 2015):
where \(f({\mathbf {w}}) \mathop {=}\limits ^{\mathrm{def}}\sum _{k=1}^K \sum _{i=1}^{n_k^{}} f_{ki}^{}\big (\langle {\mathbf {x}}_i^k, {\mathbf {w}}^k\rangle \big )\), \(f_{ki}^{}\big (\langle {\mathbf {x}}_i^k, {\mathbf {w}}^k\rangle \big )\) is the loss function of the kth task on the ith data point \(({\mathbf {x}}_i^k, y_i^k)\), and \(\rho \in (0, 1)\). The group sparsity regularization \(\Vert {\mathbf {W}}\Vert _{2,1}\) aims to improve the generalization performance of each task by selecting important features; its effect on the overall objective is controlled by the parameter \(\lambda \). Note that the regularization term \(\Vert {\mathbf {W}}\Vert _\text {F}^2\) not only controls the complexity of each linear model but also facilitates distributed optimization.Footnote 2 One can rewrite (1) in the following vectorized form,
where \(g({\mathbf {w}}) \mathop {=}\limits ^{\mathrm{def}}\rho \sum _{j=1}^d \Vert {\mathbf {w}}_{\mathscr {G}_j^{}}\Vert + (1 - \rho )\Vert {\mathbf {w}}\Vert ^2/2\).
3.1 Dual problem
It is well known that in the dual problem there is a dual variable associated with each training instance. This property makes the dual problem more tractable for distributed optimization when training instances are stored on different workers. Let \(\varvec{\alpha }= [\alpha _1^1, \ldots , \alpha _{n_{K}^{}}^K]^\top \in \mathbb {R}^n\). As derived in “Appendix A”, the dual problem of (2) is
where \(f^*(-\varvec{\alpha }) \mathop {=}\limits ^{\mathrm{def}}\sum _{k=1}^K \sum _{i=1}^{n_k^{}} f_{ki}^*(-\alpha _i^k)\), \(f_{ki}^*(\cdot )\) is the conjugate function of \(f_{ki}(\cdot )\) and
Let \({\mathbf {w}}_\star \) and \(\varvec{\alpha }_\star ^{}\) be optimal solutions to (2) and (3), respectively. One can obtain a primal solution \({\mathbf {w}}(\varvec{\alpha })\) from any dual feasible \(\varvec{\alpha }\) via
Thus, the duality gap at \(\varvec{\alpha }\) is \(G(\varvec{\alpha }) \mathop {=}\limits ^{\mathrm{def}}P({\mathbf {w}}(\varvec{\alpha })) - (-D(\varvec{\alpha })) = P({\mathbf {w}}(\varvec{\alpha })) + D(\varvec{\alpha })\).
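Since the displayed equations (2)–(4) are not reproduced above, the following toy sketch assumes a standard CoCoA-style scaling, \(P({\mathbf {w}}) = \tfrac{1}{n}\sum _i f_i(\langle {\mathbf {x}}_i, {\mathbf {w}}\rangle ) + \lambda g({\mathbf {w}})\) with \(D(\varvec{\alpha }) = \tfrac{1}{n}\sum _i f_i^*(-\alpha _i) + \lambda g^*({\mathbf {A}}\varvec{\alpha }/(\lambda n))\) and \({\mathbf {w}}(\varvec{\alpha }) = \nabla g^*({\mathbf {A}}\varvec{\alpha }/(\lambda n))\); it only illustrates that the gap \(G(\varvec{\alpha }) = P({\mathbf {w}}(\varvec{\alpha })) + D(\varvec{\alpha })\) is nonnegative and vanishes at the optimum:

```python
import numpy as np

# 1-D toy: one task, one instance x = 1, squared loss f(z) = (z - y)^2 / 2,
# g(w) = w^2 / 2 (i.e. rho = 0), n = 1. The CoCoA-style scaling above is an
# assumption here; the paper's exact forms are its Eqs. (2)-(4).
y, lam = 2.0, 0.5

P = lambda w: 0.5 * (w - y) ** 2 + lam * 0.5 * w ** 2
f_conj = lambda u: 0.5 * u ** 2 + u * y        # conjugate of the squared loss
D = lambda a: f_conj(-a) + 0.5 * a ** 2 / lam  # lam * g*(a / lam) = a^2 / (2 lam)
w_of = lambda a: a / lam                       # primal point recovered from alpha

gap = lambda a: P(w_of(a)) + D(a)              # G(alpha) = P(w(alpha)) + D(alpha)

a_star = lam * y / (1 + lam)                   # minimizer of D (computed by hand)
assert abs(gap(a_star)) < 1e-12                # zero gap at the optimum
assert all(gap(a) >= -1e-12 for a in np.linspace(-3, 3, 61))  # gap never negative
```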
4 Efficient distributed optimization
For ease of presentation, we introduce some additional notation. Let \(\{\mathscr {P}_k^{}\}_{k=1}^K\) be a partition of [n] such that \(\varvec{\alpha }_{\mathscr {P}_k^{}}^{} \in \mathbb {R}^{n_k^{}}\) are the dual variables associated with the training instances of the kth task. For \(k \in [K]\), \({\mathbf {A}} \in \mathbb {R}^{dK \times n}\) and \({\mathbf {z}} \in \mathbb {R}^n\), we define
\(\widehat{{\mathbf {A}}}_{}^{k} \in \mathbb {R}^{dK \times n}\): \(\big (\widehat{{\mathbf {A}}}_{}^{k}\big )_{\varvec{\cdot } i} \mathop {=}\limits ^{\mathrm{def}}{\mathbf {A}}_{\varvec{\cdot } i} \) if \(i \in \mathscr {P}_k^{}\), otherwise \({\mathbf {0}}\).
\(\widehat{\varvec{\alpha }}^k\in \mathbb {R}^n\): \(\big (\widehat{\varvec{\alpha }}^k\big )_i \mathop {=}\limits ^{\mathrm{def}}\alpha _i\) if \(i \in \mathscr {P}_k^{}\), otherwise 0, \(\varvec{\alpha }^k \in \mathbb {R}^{n_k^{}}\): \(\varvec{\alpha }^k \mathop {=}\limits ^{\mathrm{def}}\varvec{\alpha }_{\mathscr {P}_k^{}}^{}\), \(f_k^*(-\widehat{\varvec{\alpha }}^k) \mathop {=}\limits ^{\mathrm{def}}\sum _{i \in \mathscr {P}_k^{}} f_{ki}^*(-\alpha _i^k)\).
Recall that we assume \(\{{\mathbf {X}}_{}^{k}, {\mathbf {y}}^{k}\}_{k=1}^K\) are stored over K local workers. Therefore, it is highly desirable to develop a communication-efficient distributed optimization method to solve (3). Note that one can adopt COCOA+ (Ma et al. 2015; Smith et al. 2017b) to solve the dual problem, similar to the idea of adopting COCOA+ for distributed multi-task relationship learning (Liu et al. 2017; Smith et al. 2017a). However, the convergence rate of such a COCOA+-based approach fails to reach the best achievable one, as discussed in Arjevani and Shamir (2015). To address this problem, we present an efficient distributed optimization method that solves (3) with a faster convergence rate than the COCOA+-based approach. The high-level idea of the proposed method is summarized in Algorithm 1, and the details are discussed as follows.
In order to minimize (3) with respect to \(\varvec{\alpha }\) in a distributed environment, one needs to design a subproblem for each worker such that the objective value of (3) decreases when each worker minimizes its local subproblem by accessing only its local data. In (3), the term \(f_{}^*(\cdot )\) is separable over examples on different workers, but \(g_{}^*(\cdot )\) is not. Note that \(g_{}^*(\cdot )\) is a smooth function. By Definition 2, it admits a quadratic upper bound around a reference point \({\mathbf {u}}\), and this bound is separable across workers. By making use of this upper bound, one can design a subproblem for each worker such that \(D(\varvec{\alpha })\) decreases if each worker minimizes its local subproblem. Let \(\eta \mathop {=}\limits ^{\mathrm{def}}(1 - \rho )\lambda n^2\). The following subproblem is used for the kth worker at the tth iteration:
where \({\mathbf {u}}_t^{}\) is a reference point at the tth iteration and
It can be proved that \(D(\varvec{\alpha }_t^{}) \le \textstyle {\sum }_{k=1}^K L_k\big (\widehat{\varvec{\alpha }}_t^k; \widehat{{\mathbf {u}}}_t^k, {\mathbf {w}}({\mathbf {u}}_t^{})\big )\) holds for any \({\mathbf {u}}_t^{}\). Therefore, \(D(\varvec{\alpha })\) can be minimized by having each local worker solve its own local subproblem (5). Given \({\mathbf {w}}({\mathbf {u}}_t^{})\), each subproblem can be minimized by accessing only the corresponding local data \(({\mathbf {X}}^k, {\mathbf {y}}^k)\).
In the literature on distributed optimization, e.g., COCOA+-based approaches (Ma et al. 2015; Smith et al. 2017a, b; Liu et al. 2017), the reference point \({\mathbf {u}}_t^{}\) is set to the solution of the last iteration, \(\varvec{\alpha }_{t-1}^{}\). As a result, the convergence rate of COCOA+-based approaches fails to reach the best achievable one, as discussed in Arjevani and Shamir (2015). In contrast, \({\mathbf {u}}_t^{}\) in our proposed method is set as follows,
where \(\theta _t^{}\) is the solution of
where \(\vartheta \mathop {=}\limits ^{\mathrm{def}}\mu /n\). The definition of \({\mathbf {u}}_{t+1}^{}\) implies
Specifically, \({\mathbf {u}}_{t+1}^{}\) is obtained by an extrapolation from \(\varvec{\alpha }_t^{}\) and \(\varvec{\alpha }_{t-1}^{}\), similar to Nesterov’s acceleration technique (Nesterov 2013). As we will see, this technique yields a faster convergence rate compared to COCOA+-based approaches (Ma et al. 2015; Smith et al. 2017a, b; Liu et al. 2017). Recently, Zheng et al. (2017) presented an accelerated distributed alternating dual maximization algorithm for single task learning, where an extrapolation is applied on the primal variable for acceleration. For smooth losses, they only proved the accelerated convergence rate in terms of primal suboptimality, while we also prove it for the duality gap, which is a stronger result.
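The effect of the extrapolation step can be illustrated on a toy strongly convex quadratic. The sketch below is not Algorithm 1: it uses the classic constant Nesterov momentum coefficient \((1-\sqrt{q})/(1+\sqrt{q})\) rather than the \(\theta _t\) recursion in (7), but it shows why an extrapolated reference point beats the plain update \({\mathbf {u}}_t = \varvec{\alpha }_{t-1}\) on an ill-conditioned problem:

```python
import numpy as np

# Toy strongly convex quadratic D(a) = a^T Q a / 2; compare the plain update
# (COCOA+-style u_t = a_{t-1}) against an extrapolated reference point.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
Q = M.T @ M + 0.01 * np.eye(20)          # ill-conditioned PSD matrix
evals = np.linalg.eigvalsh(Q)
mu, Lmax = evals[0], evals[-1]
step = 1.0 / Lmax
beta = (1 - np.sqrt(mu / Lmax)) / (1 + np.sqrt(mu / Lmax))  # classic momentum

a0 = rng.standard_normal(20)
a_gd, a_prev, a_acc = a0.copy(), a0.copy(), a0.copy()
for _ in range(300):
    a_gd = a_gd - step * (Q @ a_gd)      # plain gradient step at a_{t-1}
    u = a_acc + beta * (a_acc - a_prev)  # extrapolated reference point
    a_prev, a_acc = a_acc, u - step * (Q @ u)  # gradient step taken at u

D = lambda a: 0.5 * a @ Q @ a
assert D(a_acc) < D(a_gd)                # extrapolation wins on ill-conditioned Q
```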
Remark 1
In each iteration of Algorithm 1, \({\mathbf {w}}({\mathbf {u}}_t^{})\) and \(\{{\mathbf {A}}\widehat{\varvec{\alpha }}_t^k\}_{k=1}^K\) are communicated between master and workers. By the definitions of \({\mathbf {A}}\) and \(\widehat{\varvec{\alpha }}_t^k\), we note that \(\big ({\mathbf {w}}({\mathbf {u}}_t^{})\big )^k \in \mathbb {R}^d\) and \({\mathbf {X}}_{}^k\varvec{\alpha }_t^k \in \mathbb {R}^d\) are actually communicated between master and the kth worker. Therefore, its communication cost for each iteration is the same as COCOA+ in which \(\big ({\mathbf {w}}(\varvec{\alpha }_t^{})\big )^k \in \mathbb {R}^d\) and \({\mathbf {X}}_{}^k\varvec{\alpha }_t^k \in \mathbb {R}^d\) are communicated. Note that \({\mathbf {w}}({\mathbf {u}}_{t+1}^{})\) depends on \({\mathbf {A}}\varvec{\alpha }_t^{}\) but also \({\mathbf {A}}\varvec{\alpha }_{t-1}^{}\), therefore we can keep a copy of \({\mathbf {A}}\varvec{\alpha }_{t-1}^{}\) on the master until iteration t. In this way, no extra communication cost is induced in each iteration by our method for acceleration.
5 Convergence analysis
In this section, we analyze the convergence rate of the proposed method and show that it is faster than that of COCOA+-based approaches. All the proofs can be found in “Appendix”. In our analysis, we assume that all \(f_{ki}^*, k \in [K], i \in [n_k^{}]\) are \(\mu \)-strongly convex (\(\mu \ge 0\)) with respect to the norm \(\Vert \cdot \Vert \). According to Lemma 1, this is equivalent to assuming that all \(f_{ki}^{}\), for \(k \in [K]\) and \(i \in [n_k^{}]\), are \((1/\mu )\)-smooth with respect to the norm \(\Vert \cdot \Vert \). Since \(\mu \) is allowed to be 0, our analysis also covers the case where all \(f_{ki}^*\), for \(k \in [K]\) and \(i \in [n_k^{}]\), are only generally convex (i.e., \(\mu = 0\)), which implies that all \(f_{ki}\) for \(k \in [K]\) and \(i \in [n_k^{}]\) are Lipschitz continuous instead of smooth. To facilitate the analysis, we also assume that the local subproblem \(L_k\big (\widehat{\varvec{\alpha }}_t^k; \widehat{{\mathbf {u}}}_t^k, {\mathbf {w}}({\mathbf {u}}_t^{})\big )\) is solved exactly for any \(k \in [K]\) and \(t \ge 1\).
By defining \(\zeta _t^{}\mathop {=}\limits ^{\mathrm{def}}\theta _t^2/\eta \), (7) becomes
For any \(t \ge 1\) and \(k \in [K]\), \(\widehat{{\mathbf {v}}}_t^k\) is defined as
In addition, the suboptimality on dual objective function \(\epsilon _D^t\) is defined as \(\epsilon _D^t \mathop {=}\limits ^{\mathrm{def}}D(\varvec{\alpha }_t^{}) - D(\varvec{\alpha }_\star ^{}), t \ge 0\). By using the above notations, the following lemma shows that there is an upper bound for the suboptimality \(\epsilon _D^t\). As we will see, this is the foundation for analyzing the convergence rate of duality gap.
Lemma 2
Consider applying Algorithm 1 to solve (3), the following inequality holds for any \(t \ge 1\),
where \(R^t = \frac{\zeta _t^{}}{2} \sum _{k=1}^K \big \Vert {\mathbf {A}}\big (\widehat{\varvec{\alpha }}_\star ^k - \widehat{{\mathbf {v}}}_t^k\big )\big \Vert ^2, \gamma _t^{} = \prod _{i=1}^t \big (1 - \theta _i\big )\) for any \(t \ge 1\) and \(\gamma _0^{} = 1\).
The form of \(\gamma _t^{}\) determines the convergence rate of Algorithm 1. Next, we therefore study the convergence rate via upper bounds on \(\gamma _t^{}\) under different settings of the loss function.
5.1 Convergence rate for smooth losses
By applying Lemma 2, the following lemma characterizes the effect of iterations of Algorithm 1 when the loss functions \(f_{ki}^{}\)’s are \((1/\mu )\)-smooth for any \(k \in [K]\) and \(i \in [n_k^{}]\).
Lemma 3
Assume the loss functions \(f_{ki}^{}\)’s are \((1/\mu )\)-smooth for any \(k \in [K]\) and \(i \in [n_k^{}]\). If \(\theta _0^{} = \sqrt{\vartheta \eta }\) and \((1-\rho )\lambda \mu n \le 1\), then the following inequality holds for any \(t \ge 1\)
Let \(\sigma _\text {max}^{} \mathop {=}\limits ^{\mathrm{def}}\max _{\varvec{\alpha }\ne 0} \Vert {\mathbf {A}}\varvec{\alpha }\Vert _{}^2/\Vert \varvec{\alpha }\Vert _{}^2\). By applying Lemma 3, the next theorem shows the communication complexities for smooth losses in terms of dual objective and duality gap.
Theorem 1
Assume the loss functions \(f_{ki}^{}\)’s are \((1/\mu )\)-smooth for any \(k \in [K]\) and \(i \in [n_k^{}]\). If \(\theta _0^{} = \sqrt{\vartheta \eta }\) and \((1-\rho )\lambda \mu n \le 1\), then after T iterations in Algorithm 1 with
\(D\big (\varvec{\alpha }_T^{}\big ) - D\big (\varvec{\alpha }_\star ^{}\big ) \le \epsilon _D^{}\) holds. Furthermore, after T iterations with
it holds that \(P\big ({\mathbf {w}}(\varvec{\alpha }_T^{})) - (-D(\varvec{\alpha }_T^{})) \le \epsilon _G^{}\).
Following Zhang and Xiao (2017), we define the condition number \(\kappa \) as \(\kappa \mathop {=}\limits ^{\mathrm{def}}\max _{k,i} \Vert {\mathbf {x}}_i^k\Vert ^2/(\lambda \mu )\). With the above analysis, the communication complexity of our method is linear in \(\sqrt{\kappa }\), whereas it is linear in \(\kappa \) for COCOA+-based approaches (Ma et al. 2015; Smith et al. 2017b). The value of \(\kappa \) is typically on the order of n, as \(\lambda \) is usually set on the order of 1/n (Bousquet and Elisseeff 2002). Therefore, our method is expected to converge faster than COCOA+-based approaches.
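To put the \(\sqrt{\kappa }\) versus \(\kappa \) dependence in perspective, a back-of-the-envelope comparison (constants and log factors ignored; the values of n are purely illustrative):

```python
# Rounds scale linearly in kappa for COCOA+-based approaches, but in
# sqrt(kappa) for the proposed method. With kappa on the order of n,
# the saving grows with the dataset size.
for n in (10**4, 10**6):
    kappa = n                      # kappa is typically on the order of n
    cocoa, ours = kappa, kappa ** 0.5
    print(f"kappa={kappa:g}: ~{cocoa:g} rounds (COCOA+) vs ~{ours:g} rounds (ours)")
```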
5.2 Convergence rate for Lipschitz continuous losses
Next, we present the convergence rate of Algorithm 1 when the loss functions are only generally convex and Lipschitz continuous.
Theorem 2
Assume the loss functions \(f_{ki}^{}\)’s are generally convex and L-Lipschitz continuous for any \(k \in [K]\), \(i \in [n_k^{}]\). If \(\theta _0 = 1\), the following inequality holds for any \(t \ge 1\)
After T iterations in Algorithm 1 with
it holds that \(D\big (\varvec{\alpha }_T^{}\big ) - D\big (\varvec{\alpha }_\star ^{}\big ) \le \epsilon _D^{}\).
Remark 2
For generally convex loss functions, the dual objective obtained by Algorithm 1 decreases at rate \(O(1/t^2)\) instead of the \(O(1/t)\) rate of COCOA+. Therefore, the complexity for obtaining an \(\epsilon _D^{}\)-suboptimal solution is \(O(\sqrt{1/\epsilon _D^{}})\), which is better than that of COCOA+ (i.e., \(O(1/\epsilon _D^{})\)).
6 Further reducing communication cost via dynamic feature screening
In Sect. 4, we presented an acceleration method for the distributed optimization of (3) that reduces the communication cost in terms of the number of communication rounds. As discussed in Remark 1, the communication cost of our method in each iteration is linear in the number of features d, the same as previous distributed optimization methods for sparsity-regularized problems. This can be expensive for high-dimensional data. To address this issue, we present a method to reduce the communication cost of each iteration by exploiting the sparsity of \({\mathbf {w}}_\star \) (Bonnefoy et al. 2015; Fercoq et al. 2015; Ndiaye et al. 2017). It is well known that the \(\ell _{2,1}\)-norm regularization produces a row-sparse pattern in \({\mathbf {W}}_{\star }^{}\) (Obozinski et al. 2011, 2010; Yuan et al. 2006; Zou and Hastie 2005). In other words, \(({{\mathbf {w}}_\star })_{\mathscr {G}_{j}^{}}\) will be \({\mathbf {0}}\) for most \(\mathscr {G}_{j}^{},j \in [d]\). Hereafter, we refer to the jth feature as an inactive feature if \(({\mathbf {w}}_\star )_{\mathscr {G}_{j}^{}} = {\mathbf {0}}\), and as an active feature otherwise. The key idea of feature screening is to identify inactive features before sending the updated information to the workers (Line 4 in Algorithm 1). In this way, the communication cost can be reduced, since it is linear in the number of active features.
To identify inactive features, we need to exploit the KKT condition of (2)
By examining the subgradient of \(\Vert \cdot \Vert \), whose elements at \({\mathbf {0}}\) have norm at most 1, \(({{\mathbf {w}}_\star })_{\mathscr {G}_{j}^{}} = {\mathbf {0}}\) holds whenever the corresponding subgradient term has norm strictly less than 1. Combining this fact with (16), we have
It can be shown that one obtains the exact optimum even without considering these inactive features during optimization. Therefore, one can reduce the communication cost by discarding them, so that less information needs to be communicated. To use (17) to identify inactive features, one needs \(\varvec{\alpha }_\star ^{}\), which is unknown before problem (3) is solved. Next, we show that a feasible set \(\mathscr {F}\) containing \(\varvec{\alpha }_\star ^{}\) can be constructed by using the strong convexity of \(D(\varvec{\alpha })\).
Crucial value \(\lambda _\text {max}\): In view of (17) and (15), there exists a crucial value \(\lambda _\text {max}\) such that \({\mathbf {w}}_\star = {\mathbf {0}}\) for any \(\lambda \ge \lambda _\text {max}\). Let \({\mathbf {r}} = [f_{11}'(0), \ldots , f_{Kn_K^{}}'(0)] \in \mathbb {R}^n\); then (15) implies that \(\varvec{\alpha }_\star ^{} = {\mathbf {r}}\) when \({\mathbf {w}}_\star ^{} = {\mathbf {0}}\). By substituting \(\varvec{\alpha }_\star ^{}\) into (17), we obtain \(\lambda _\text {max} = \max _{j \in [d]} \Vert {\mathbf {A}}_{\mathscr {G}_{j}^{}}{\mathbf {r}}\Vert /(\rho n)\). It is trivial to obtain the closed-form solution \({\mathbf {w}}_\star = {\mathbf {0}}\) and \(\varvec{\alpha }_\star ^{} = {\mathbf {r}}\) if \(\lambda \ge \lambda _\text {max}\). Therefore, we focus only on the case \(\lambda < \lambda _\text {max}\).
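With the block-diagonal structure of \({\mathbf {A}}\), the rows indexed by \(\mathscr {G}_j\) pair the jth feature of each task with the corresponding block of \({\mathbf {r}}\), so \(\lambda _\text {max}\) can be computed per feature and reduced with a max. A hedged NumPy sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

# lambda_max = max_j ||A_{G_j} r|| / (rho * n). With block-diagonal A,
# the k-th entry of A_{G_j} r is <X^k_{j.}, r^k>, i.e. the j-th feature
# row of task k against that task's block of r.
def lambda_max(Xs, rs, rho):
    """Xs: list of (d, n_k) arrays; rs: list of (n_k,) arrays of f'_{ki}(0)."""
    n = sum(r.size for r in rs)
    corr = np.stack([X @ r for X, r in zip(Xs, rs)], axis=1)  # (d, K)
    return np.linalg.norm(corr, axis=1).max() / (rho * n)

# toy data: squared loss f_ki(z) = (z - y_i^k)^2 / 2 has f'_{ki}(0) = -y_i^k
rng = np.random.default_rng(2)
Xs = [rng.standard_normal((6, 10)), rng.standard_normal((6, 8))]
ys = [rng.standard_normal(10), rng.standard_normal(8)]
lam_max = lambda_max(Xs, [-y for y in ys], rho=0.9)
assert lam_max > 0
```

For any \(\lambda \ge \) this value, the all-zeros weight matrix is already optimal, so screening removes every feature.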
Feasible set of \(\varvec{\alpha }_\star ^{}\): Lemma 1 implies that \(D(\varvec{\alpha })\) is strongly convex if the \(f_{ki}^{}\)’s are smooth for all k and i. Using this fact, the dual optimal solution \(\varvec{\alpha }_\star ^{}\) can be bounded in terms of any dual feasible \(\varvec{\alpha }\) and its duality gap \(G(\varvec{\alpha })\), as stated in the following lemma.
Lemma 4
Assume the loss functions \(f_{ki}^{}\)’s are \((1/\mu )\)-smooth for any \(k \in [K],i \in [n_k^{}]\). For any dual feasible solution \(\varvec{\alpha }\), it holds that \(\varvec{\alpha }_\star ^{} \in \mathscr {F} \mathop {=}\limits ^{\mathrm{def}}\big \{\varvec{\theta }~|~ \Vert \varvec{\theta }- \varvec{\alpha }\Vert \le \sqrt{2G(\varvec{\alpha })n/\mu } \big \}\).
By using Lemma 4, (17) can be relaxed as
In other words, we need to solve the following problem
Although it is non-convex, the global optimum of (19) can be obtained by using the result in Gay (1981). Let us define \({\mathbf {H}} \in \mathbb {R}^{K \times K}, {\mathbf {g}} \in \mathbb {R}^K, \upsilon _j, \mathscr {I}_j, \bar{\mathscr {I}}_j\) and \(\bar{{\mathbf {s}}} \in \mathbb {R}^K\) as
\({\mathbf {H}} \mathop {=}\limits ^{\mathrm{def}}-\mathrm{diag}\big (2\Vert {\mathbf {X}}_{j\varvec{\cdot }}^1\Vert ^2, \ldots , 2\big \Vert {\mathbf {X}}_{j\varvec{\cdot }}^K\big \Vert ^2\big )\), \({\mathbf {g}} \mathop {=}\limits ^{\mathrm{def}}-2{\big [\big \Vert {\mathbf {X}}_{j\varvec{\cdot }}^1\big \Vert \big |\big \langle {\mathbf {X}}_{j\varvec{\cdot }}^1, \varvec{\alpha }^1\big \rangle \big |, \ldots , \big \Vert {\mathbf {X}}_{j\varvec{\cdot }}^K\big \Vert \big |\big \langle {\mathbf {X}}_{j\varvec{\cdot }}^K, \varvec{\alpha }^K\big \rangle \big | \big ]}^\top \).
\(\upsilon _j \mathop {=}\limits ^{\mathrm{def}}\max _{k \in [K]} \big \Vert {\mathbf {X}}_{j\varvec{\cdot }}^k\big \Vert ^2\), \(\mathscr {I}_j \mathop {=}\limits ^{\mathrm{def}}\Big \{k ~\big |~ \big \Vert {\mathbf {X}}_{j\varvec{\cdot }}^k\big \Vert ^2 = \upsilon _j, k \in [K] \Big \}\), \(\bar{\mathscr {I}}_j \mathop {=}\limits ^{\mathrm{def}}[K]\setminus \mathscr {I}_j\).
\(\bar{s}_k^{} \mathop {=}\limits ^{\mathrm{def}}\frac{\big \Vert {\mathbf {X}}_{j\varvec{\cdot }}^k\big \Vert \big |\big \langle {\mathbf {X}}_{j\varvec{\cdot }}^k, \varvec{\alpha }^k \big \rangle \big |}{\upsilon _j^{} - \big \Vert {\mathbf {X}}_{j\varvec{\cdot }}^k\big \Vert ^2} ~\text {if}~ k \in \bar{\mathscr {I}}_j, ~\text {otherwise}~ \bar{s}_k^{} \mathop {=}\limits ^{\mathrm{def}}0\).
By using the above notations, the solution of (19) is given in the following lemma.
Lemma 5
If \(\upsilon _j^{} = 0\), the maximum value of (19) is 0. Otherwise, the upper bound is
where \(\vartheta _\star \) and \({\mathbf {s}}_\star \) are defined as follows: (a) \(\vartheta _\star = 2 \upsilon _j\) and \({\mathbf {s}}_\star = \bar{{\mathbf {s}}} + \widehat{{\mathbf {s}}}\) if 1) \(\exists ~\widehat{{\mathbf {s}}} \in \mathbb {R}^K\) with \(\widehat{{\mathbf {s}}}_{\mathscr {I}_j} = {\mathbf {0}}\) and \(\Vert \bar{{\mathbf {s}}} + \widehat{{\mathbf {s}}}\Vert = \sqrt{2G(\varvec{\alpha })n/\mu }\), and 2) \(\left\langle {\mathbf {X}}_{j\varvec{\cdot }}^k, \varvec{\theta }_k^{}\right\rangle = 0, \forall k \in \mathscr {I}_j\). (b) Otherwise, \(\vartheta _\star > 2 \upsilon _j\) is the solution of \(\Vert \left( {\mathbf {H}} + \vartheta _\star {\mathbf {I}}\right) ^{-1} {\mathbf {g}}\Vert = \sqrt{2G(\varvec{\alpha })n/\mu }\), and \({\mathbf {s}}_\star = - \left( {\mathbf {H}} + \vartheta _\star {\mathbf {I}}\right) ^{-1}{\mathbf {g}}\).
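In case (b), since \({\mathbf {H}}\) is diagonal, \(\Vert ({\mathbf {H}} + \vartheta {\mathbf {I}})^{-1}{\mathbf {g}}\Vert \) is strictly decreasing in \(\vartheta \) on \((2\upsilon _j, \infty )\), so the root can be found by any safeguarded one-dimensional method. The sketch below uses bisection for simplicity (the paper uses Newton's method); the inputs u, c, and radius r are illustrative stand-ins for \(\Vert {\mathbf {X}}_{j\varvec{\cdot }}^k\Vert ^2\), \(|\langle {\mathbf {X}}_{j\varvec{\cdot }}^k, \varvec{\alpha }^k\rangle |\), and \(\sqrt{2G(\varvec{\alpha })n/\mu }\):

```python
import numpy as np

# Case (b) of Lemma 5: solve ||(H + v I)^{-1} g|| = r for v > 2 * max_k u_k,
# with H = -diag(2 u_1, ..., 2 u_K) and g_k = -2 sqrt(u_k) c_k.
def solve_secular(u, c, r, tol=1e-12):
    g = -2.0 * np.sqrt(u) * c
    phi = lambda v: np.linalg.norm(g / (v - 2.0 * u))  # (H + vI)^{-1} g, entrywise
    lo = 2.0 * u.max() + 1e-9                          # phi blows up near 2*max(u)
    hi = 2.0 * u.max() + np.linalg.norm(g) / r + 1.0   # guarantees phi(hi) < r
    while hi - lo > tol * hi:                          # bisection on decreasing phi
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > r else (lo, mid)
    v = 0.5 * (lo + hi)
    return v, -g / (v - 2.0 * u)                       # v_star, s_star

u = np.array([1.0, 2.0, 0.5])   # per-task squared row norms of feature j
c = np.array([0.3, 0.8, 0.1])   # per-task |<X^k_{j.}, alpha^k>|
r = 0.25                        # radius sqrt(2 G(alpha) n / mu)
v_star, s_star = solve_secular(u, c, r)
assert v_star > 2 * u.max()
assert abs(np.linalg.norm(s_star) - r) < 1e-6          # s_star sits on the ball
```

A safeguarded Newton iteration on the same scalar equation would converge in a handful of steps, which is the behavior the paper reports.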
To perform screening every p iterations, one can simply add the following three lines before line 4 in Algorithm 1.
if \(t\%p = 0\) then
Call Algorithm 2
end if
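In code, the modification amounts to a guarded call at the top of each outer iteration. A minimal Python sketch, where `screen_features` (standing in for Algorithm 2) and `solve_subproblems` (the remainder of an Algorithm 1 iteration) are hypothetical names, not the authors' implementation:

```python
def outer_loop(T, p, active, solve_subproblems, screen_features):
    """Run T communication rounds of the outer loop, screening every p rounds."""
    for t in range(T):
        if t % p == 0:                        # the guard added before line 4
            active = screen_features(active)  # Algorithm 2: drop inactive features
        solve_subproblems(active)             # rest of the Algorithm 1 iteration
    return active
```

With this structure, screening runs at rounds \(t = 0, p, 2p, \ldots\), and the shrinking active set is what the local solvers operate on.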
Costs of Screening: Note that the screening is performed on the master every p iterations.
By carefully examining the detailed screening rule, one can see that the master only needs \({\mathbf {A}}\varvec{\alpha }_t^{}\) to evaluate the screening rule. Even without screening, \({\mathbf {A}}\varvec{\alpha }_t^{}\) needs to be computed and sent to the master in each iteration, as stated in Algorithm 1 and Remark 1. Therefore, the feature screening does not induce any extra communication cost.
Regarding the computational cost, we note that the cost of screening depends on the number of active features, which is at most d (and shrinks as more features are screened out). As shown in Lemma 5, the screening problem for each feature is a one-dimensional optimization problem. It either has a closed-form solution (Case 1) or can be efficiently solved by Newton's method (Case 2), which usually takes fewer than 5 iterations to reach an accuracy of \(10^{-15}\).
More importantly, screening out inactive features substantially reduces the cost of the optimization, especially the local computation. Recall that the local SDCA computation complexity is O(Hd), where H is the number of local SDCA iterations and is usually more than \(10^5\). Compared to the local SDCA computation cost, the cost of screening is negligible.
We note that Ndiaye et al. (2015) also presented a feature screening method for multi-task learning. However, in their work, all tasks are assumed to share the same training data, while our method allows each task to have its own training data. Consequently, the feature screening problem (19) becomes non-convex instead of convex, which is different from and more challenging than that studied in Ndiaye et al. (2015). In addition, Wang and Ye (2015) developed a static screening rule that exploits the solution at another regularization parameter and only performs screening before the optimization procedure. By contrast, our screening rule is dynamic and relies on a weaker assumption: it exploits the latest solution to repeatedly perform screening during optimization. Therefore, our screening is more practical and performs better empirically.
Difference between Our Proposed Method and COCOA+ We denote the proposed method by \(\texttt {DMTL}_{S}\). There are two main differences between \(\hbox {DMTL}_S\) and COCOA+. First, \(\hbox {DMTL}_S\) constructs the subproblem (5) using an extrapolation of the solutions from the last two iterations, which enables an accelerated convergence rate. In contrast, COCOA+ only uses the solution of the last iteration. Second, \(\hbox {DMTL}_S\) employs a dynamic feature screening method that exploits the sparsity of the model to reduce the communication cost of each iteration.
7 Experiments
7.1 Experimental setting
In the previous sections, we presented our method with a focus on distributed MTL. We now conduct experiments to show the advantages of the proposed method for MTL. In fact, our approach can also be extended to distributed single task learning (STL); the details are provided in the "Appendix".
To demonstrate the advantages of \(\texttt {DMTL}_S\), we compare \(\texttt {DMTL}_S\) with a COCOA+-based approach (Ma et al. 2015; Smith et al. 2017b) and its extension MOCHA (Smith et al. 2017a) for solving the dual problem (3). In our experiments, the squared loss is used for regression, and the smoothed hinge loss (Shalev-Shwartz and Zhang 2013) is used for classification, with \(\mu = 0.5\) for all experiments. It is easy to see that \(f_{ki}^{}\) is \((1/\mu )\)-smooth. For ease of comparison, the local subproblems are solved using SDCA (Shalev-Shwartz and Zhang 2013) for all methods. The number of SDCA iterations is set to \(H = 10^4\) for all datasets.
We run all experiments on a local server with 64 worker cores. A distributed environment is simulated on the machine using the distributed platform Petuum (Xing et al. 2015),Footnote 3 and the workers for each task are assigned to isolated processes that communicate solely through the platform. Regarding performance, we evaluate the number of communication iterations required by different methods to obtain a solution with a prescribed duality gap. Due to the limitation of computational resources, we are not able to perform experiments in a real distributed environment. However, the results (i.e., the numbers of communication iterations) reported in this paper do not depend on the environment in which they are obtained. Compared to COCOA+, the additional computation incurred by our method is negligible: the computational complexity of each iteration of COCOA+ is \(O(H \times d)\), while the additional computations required by our method for acceleration and feature screening are O(d) and O(d), respectively. This cost is negligible compared to that of SDCA because H is usually around \(10^5\).
We conduct experiments on the following three datasets (Table 1).
Synthetic Data contains \(K = 10\) regression tasks and is generated using \(y_i^k = \langle {\mathbf {x}}_i^k, {\mathbf {w}}^k\rangle + \epsilon \). The number of examples for each task is randomly generated and ranges from 903 to 1098. \({\mathbf {x}}_i^k \in \mathbb {R}^{50,000}\) is drawn from \(\mathscr {N}({\mathbf {0}}, {\mathbf {I}})\) and \(\epsilon \sim \mathscr {N}({\mathbf {0}}, 0.5 {\mathbf {I}})\). To obtain a \({\mathbf {W}}\) with row sparsity, we randomly select 400 dimensions from [d] and generate their entries from \(\mathscr {N}({\mathbf {0}}, {\mathbf {I}})\), shared across all tasks. For each task, extra noise from \(\mathscr {N}({\mathbf {0}}, 0.5 {\mathbf {I}})\) is added to \({\mathbf {W}}\).
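The generation protocol above can be sketched as follows. This is not the authors' code: `make_synthetic` is a hypothetical name, and the dimensions are scaled down (the paper uses \(d = 50{,}000\), task sizes in [903, 1098], and 400 non-zero rows) so the sketch runs quickly:

```python
import numpy as np

def make_synthetic(K=10, d=500, n_range=(90, 110), n_active=40, seed=0):
    rng = np.random.default_rng(seed)
    W = np.zeros((d, K))
    rows = rng.choice(d, size=n_active, replace=False)        # shared sparse support
    W[rows] = rng.standard_normal((n_active, 1))              # same values for all tasks
    W[rows] += np.sqrt(0.5) * rng.standard_normal((n_active, K))  # per-task perturbation
    data = []
    for k in range(K):
        n_k = int(rng.integers(*n_range, endpoint=True))      # random task size
        X = rng.standard_normal((n_k, d))
        y = X @ W[:, k] + np.sqrt(0.5) * rng.standard_normal(n_k)
        data.append((X, y))
    return W, data
```

The key property preserved from the description is that the non-zero rows of \({\mathbf {W}}\) are shared across tasks (row sparsity), with independent per-task perturbations on those rows.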
News20 (Lang 1995) is a collection of around 20,000 documents from 20 different newsgroups. To construct a multi-task learning problem, we create 5 binary classification tasks using the data of all 5 groups under comp as positive examples. For the negative examples, we choose data from misc.forsale, rec.autos, rec.motorcycles, rec.sport.baseball and rec.sport.hockey. The number of training examples for each task ranges from 1163 to 1190, and the number of features is 34,967.
MDS (Blitzer et al. 2007) includes product reviews from 25 domains on Amazon. We use the 22 domains that each have more than 100 examples for multi-task binary sentiment classification. To simulate MTL, we randomly select 1000 examples as training data for each domain with more than 1000 examples. Consequently, the number of training examples for each domain ranges from 220 to 1000. The number of features is 10,000.
7.2 Results of faster convergence rate
To test the convergence rate of \(\texttt {DMTL}_S\), we compare it with the COCOA+-based approach for solving (3) under varying values of \(\lambda \). In view of Sect. 6, we choose \(\lambda = 10^{-2} \lambda _\text {max}\) and \(\lambda = 10^{-3} \lambda _\text {max}\) to solve (3). We set \(\varvec{\alpha }_0^{} = {\mathbf {0}}\) for all methods and \(\rho = 0.9\) for all experiments.
Figure 1 shows the comparison results in terms of the number of communication iterations used by \(\texttt {DMTL}_S\) and COCOA+ to obtain a solution meeting a prescribed duality gap. From Fig. 1, we can observe that:
\(\texttt {DMTL}_S\) is significantly faster than COCOA+ in terms of the number of iterations needed to meet a prescribed duality gap. Taking the synthetic dataset and News20 as examples, to obtain a solution at \(\lambda = 10^{-3}\lambda _\text {max}\) with duality gap \(10^{-5}\), \(\texttt {DMTL}_S\) obtains speedups of a factor of 6.64 and 6.94 over COCOA+ on the two datasets, respectively.
Generally, the speedup obtained by \(\texttt {DMTL}_S\) is more significant for small values of \(\lambda \). For example, when \(\lambda = 10^{-2} \lambda _\text {max}\), \(\texttt {DMTL}_S\) converges 4.81 and 4.05 times faster than COCOA+ on the synthetic dataset and News20, respectively. In contrast, the speedups increase to 7.00 and 5.70 when \(\lambda = 10^{-3} \lambda _\text {max}\).
The improvement is more pronounced when a higher precision is used as the stopping criterion. Taking News20 with \(\lambda = 10^{-3} \lambda _\text {max}\) as an example, the speedups of \(\texttt {DMTL}_S\) over COCOA+ are 4.00, 4.94, 5.70 and 6.94 when the duality gaps are \(10^{-2}\), \(10^{-3}\), \(10^{-4}\) and \(10^{-5}\), respectively.
7.3 Robustness to stragglers
In Smith et al. (2017a), MOCHA was proposed to improve COCOA+ by handling systems heterogeneity, e.g., stragglers: some workers are considerably slower than others and fail to return a solution of the prescribed accuracy in some iterations. Here, we compare our method on News20 with COCOA+ equipped with the heterogeneity-handling mechanism of Smith et al. (2017a), and show that our method converges faster even when stragglers exist. Specifically, we follow the setting of Smith et al. (2017a) and use different values of H for different workers to simulate the effect of stragglers. The value of H for each iteration is drawn from \([0.9 n_\text {min}, n_\text {min}]\) to simulate a low variability environment and from \([0.5 n_\text {min}, n_\text {min}]\) to simulate a high variability environment, where \(n_\text {min} = \min _{k} n_k^{}\).
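The straggler simulation can be sketched in a few lines; `draw_local_iters` is a hypothetical helper that draws a per-worker, per-iteration budget H from the ranges described above:

```python
import random

def draw_local_iters(n_min, variability="low", rng=random):
    """Draw a per-iteration local budget H for one worker."""
    lo_frac = 0.9 if variability == "low" else 0.5  # high variability: [0.5 n_min, n_min]
    return rng.randint(int(lo_frac * n_min), n_min)
```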
As shown in Fig. 2, our method is still able to substantially reduce the number of communication rounds in both low and high variability environments. This shows that \(\texttt {DMTL}_S\) is empirically robust to stragglers, although our analysis assumes that the local subproblem is solved exactly.
7.4 Results of reduced communication cost
To demonstrate the effect of dynamic screening on reducing communication cost, we perform a warm-start cross-validation experiment on News20 and MDS. Specifically, we solve (3) with 50 different values of \(\lambda \), \(\{\lambda _i\}_{i=1}^{50}\), which are equally distributed on the logarithmic grid from \(0.01 \lambda _\text {max}\) to \(0.3 \lambda _\text {max}\) and solved sequentially (i.e., the solution for \(\lambda _i\) is used as the initialization for \(\lambda _{i-1}\)). To evaluate the total communication cost over the 50 values of \(\lambda \), we count the total number of vectors of dimension d communicated by each worker. We experiment with the following two settings: 1) \(\texttt {DMTL}_S\) without dynamic screening (Without DS), and 2) \(\texttt {DMTL}_S\) with dynamic screening (With DS). Figure 3 presents the total communication cost used by \(\texttt {DMTL}_S\) without and with dynamic screening to solve (3) over \(\{\lambda _i\}_{i=1}^{50}\) on News20 and MDS.
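The warm-start path can be sketched as follows. Our reading of the description is that \(\lambda \) decreases from \(0.3\lambda _\text {max}\) to \(0.01\lambda _\text {max}\), each solution initializing the next run; `solve` is a placeholder for one full optimization at a fixed \(\lambda \):

```python
import numpy as np

def warm_start_path(lam_max, solve, alpha0, num=50):
    # lambda values on a log grid, largest first, so each solution warm-starts the next
    lams = np.logspace(np.log10(0.3 * lam_max), np.log10(0.01 * lam_max), num)
    alpha, sols = alpha0, []
    for lam in lams:
        alpha = solve(lam, alpha)   # one full optimization at this lambda
        sols.append(alpha)
    return lams, sols
```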
From Fig. 3, we can observe that:
The communication cost is substantially reduced by the proposed dynamic screening because most of the inactive features are progressively identified and discarded during optimization. For example, when the prescribed duality gap is \(10^{-7}\), the communication cost reduction achieved by the proposed method is \(83.32\%\) and \(67.43\%\) on News20 and MDS, respectively.
The advantage of dynamic screening is more significant when a higher precision is used as the stopping criterion. On News20, the speedup increases from 5.99 to 8.63 when the duality gap changes from \(10^{-7}\) to \(10^{-8}\). This is because more inactive features can be screened out as a more accurate solution is obtained.
More importantly, the benefit of the proposed dynamic screening is more pronounced for problems of higher dimensionality. Taking the duality gap of \(10^{-8}\) as an example, the speedups obtained by dynamic screening are 8.63 and 4.14 on News20 and MDS, respectively, where News20 has a much higher dimensionality than MDS.
8 Conclusion
In this paper, we present a new distributed optimization method, \(\texttt {DMTL}_S\), for MTL with matrix sparsity regularization. We provide theoretical convergence analysis for \(\texttt {DMTL}_S\). We also propose a data screening method to further reduce the communication cost. We carefully design and conduct extensive experiments on both synthetic and real-world datasets to verify the faster convergence rate and the reduced communication cost of \(\texttt {DMTL}_S\) in comparison with two state-of-the-art baselines, COCOA+ and MOCHA.
Notes
In general, the numbers of tasks and workers can be different.
Note that in this work we assume the regularizer is strongly convex which is the same as in COCOA+. As discussed in Sect. 1, for non-strongly convex regularizer, though an extension of COCOA+ has been proposed in Smith et al. (2015, 2017b), it is not practical for real-world scenarios as data needs to be geo-distributed by features rather than instances over local workers. In fact, our proposed method can also be applied to accelerate the approach proposed in Smith et al. (2015, 2017b). However, how to develop a distributed optimization algorithm when data is geo-distributed by instances and the regularizer of the objective is non-strongly convex is still an open problem. We leave this to our future study.
Note that our method can be implemented in other distributed platforms.
References
Arjevani, Y., & Shamir, O. (2015). Communication complexity of distributed convex learning and optimization. In Proceedings of NIPS.
Baytas, I. M., Yan, M., Jain, A. K., & Zhou, J. (2016). Asynchronous multi-task learning. In Proceedings of ICDM.
Bellet, A., Guerraoui, R., Taziki, M., & Tommasi, M. (2018). Personalized and private peer-to-peer machine learning. In Proceedings of AISTATS.
Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., & Wortman, J. (2007). Learning bounds for domain adaptation. In Proceedings of NIPS.
Bonnefoy, A., Emiya, V., Ralaivola, L., & Gribonval, R. (2015). Dynamic screening: Accelerating first-order algorithms for the lasso and group-lasso. IEEE Transactions on Signal Processing, 63(19), 5121–5132.
Bousquet, O., & Elisseeff, A. (2002). Stability and generalization. Journal of Machine Learning Research, 2, 499–526.
Boyd, S. P., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), 1–122.
Caruana, R. (1997). Multitask learning. Machine Learning, 28(1), 41–75.
Dünner, C., Forte, S., Takác, M., & Jaggi, M. (2016). Primal–dual rates and certificates. In Proceedings of ICML (pp 783–792).
Fercoq, O., Gramfort, A., & Salmon, J. (2015). Mind the duality gap: Safer rules for the lasso. In Proceedings of ICML.
Gay, D. (1981). Computing optimal locally constrained steps. SIAM Journal on Scientific and Statistical Computing, 2(2), 186–197.
Hiriart-Urruty, J. B., & Lemaréchal, C. (1993). Convex analysis and minimization algorithms II: Advanced theory and bundle methods. Berlin: Springer.
Jaggi, M., Smith, V., Takác, M., Terhorst, J., Krishnan, S., Hofmann, T., et al. (2014). Communication-efficient distributed dual coordinate ascent. In Proceedings of NIPS.
Lang, K. (1995). Newsweeder: Learning to filter netnews. In Proceedings of ICML.
Lee, S., Zhu, J., & Xing, E. P. (2010). Adaptive multi-task lasso: With application to eQTL detection. In Proceedings of NIPS.
Li, M., Andersen, D. G., Smola, A. J., & Yu, K. (2014). Communication efficient distributed machine learning with the parameter server. In Proceedings of NIPS.
Liu, S., Pan, S. J., & Ho, Q. (2017). Distributed multi-task relationship learning. In Proceedings of SIGKDD
Ma, C., Jaggi, M., Curtis, F. E., Srebro, N., & Takáč, M. (2017). An accelerated communication-efficient primal–dual optimization framework for structured machine learning. arXiv preprint arXiv:1711.05305.
Ma, C., Smith, V., Jaggi, M., Jordan, M. I., Richtárik, P., & Takác, M. (2015). Adding vs. averaging in distributed primal–dual optimization. In Proceedings of ICML.
Ndiaye, E., Fercoq, O., Gramfort, A., & Salmon, J. (2015). Gap safe screening rules for sparse multi-task and multi-class models. In Proceedings of NIPS.
Ndiaye, E., Fercoq, O., Gramfort, A., & Salmon, J. (2017). Gap safe screening rules for sparsity enforcing penalties. Journal of Machine Learning Research, 18, 128:1–128:33.
Nesterov, Y. (2013). Introductory lectures on convex optimization: A basic course. Berlin: Springer.
Obozinski, G., Taskar, B., & Jordan, M. (2010). Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20(2), 231–252.
Obozinski, G., Wainwright, M. J., & Jordan, M. I. (2011). Support union recovery in high-dimensional multivariate regression. The Annals of Statistics, 39(1), 1–47.
Shalev-Shwartz, S., & Zhang, T. (2013). Stochastic dual coordinate ascent methods for regularized loss. Journal of Machine Learning Research, 14(1), 567–599.
Smith, V., Chiang, C., Sanjabi, M., & Talwalkar, A. S. (2017a). Federated multi-task learning. In Proceedings of NIPS.
Smith, V., Forte, S., Jordan, M. I., & Jaggi, M. (2015). L1-regularized distributed optimization: A communication-efficient primal–dual framework. CoRR arXiv:1512.04011.
Smith, V., Forte, S., Ma, C., Takáč, M., Jordan, M. I., & Jaggi, M. (2017b). Cocoa: A general framework for communication-efficient distributed optimization. Journal of Machine Learning Research, 18, 230:1–230:49.
Vanhaesebrouck, P., Bellet, A., & Tommasi, M. (2017). Decentralized collaborative learning of personalized models over networks. In Proceedings of AISTATS.
Wang, J., Kolar, M., & Srebro, N. (2016). Distributed multi-task learning. In Proceedings of AISTATS.
Wang, J., & Ye, J. (2015). Safe screening for multi-task feature learning with multiple data matrices. In Proceedings of ICML.
Wang, W., Wang, J., Kolar, M., & Srebro, N. (2018). Distributed stochastic multi-task learning with graph regularization. arXiv preprint arXiv:1802.03830.
Xie, L., Baytas, I. M., Lin, K., & Zhou, J. (2017). Privacy-preserving distributed multi-task learning with asynchronous updates. In Proceedings of SIGKDD.
Xing, E. P., Ho, Q., Dai, W., Kim, J. K., Wei, J., Lee, S., et al. (2015). Petuum: A new platform for distributed machine learning on big data. IEEE Transactions on Big Data, 1(2), 49–67.
Yuan, M., Ekici, A., Lu, Z., & Monteiro, R. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B, 68(1), 49–67.
Zhang, C., Zhao, P., Hao, S., Soh, Y. C., Lee, B., Miao, C., et al. (2018). Distributed multi-task classification: A decentralized online learning approach. Machine Learning, 107(4), 727–747.
Zhang, Y., & Xiao, L. (2017). Stochastic primal–dual coordinate method for regularized empirical risk minimization. Journal of Machine Learning Research, 18, 18:1–18:42.
Zhang, Y., & Yang, Q. (2017). A survey on multi-task learning. CoRR arXiv:1707.08114.
Zhang, Y., & Yeung, D. Y. (2010). A convex formulation for learning task relationships in multi-task learning. In Proceedings of UAI.
Zheng, S., Wang, J., Xia, F., Xu, W., & Zhang, T. (2017). A general distributed dual coordinate optimization framework for regularized loss minimization. Journal of Machine Learning Research, 18, 115:1–115:52.
Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B, 67(2), 301–320.
Acknowledgements
This work is supported by NTU Singapore Nanyang Assistant Professorship (NAP) Grant M4081532.020 and Singapore MOE AcRF Tier-2 Grant MOE2016-T2-2-060.
Editors: Kee-Eung Kim and Jun Zhu.
Appendices
Appendix A: Dual problem
By introducing \(z_i^k\) for each \(f_{ki}^{}\), one can rewrite (2) as
Let \(-\frac{1}{n}\alpha _i^k\) be the Lagrangian multiplier for the (k, i)th constraint. For convenience, let
Then, the Lagrangian is
The dual problem can be obtained by taking the infimum with respect to both \({\mathbf {w}}\) and \({\mathbf {z}}\)
where
Regarding the explicit form of \(g^*\big ( \frac{{\mathbf {A}}\varvec{\alpha }}{\lambda n} \big )\), it can be shown that
The optimality condition of the above problem implies
The definition of subgradient implies
Otherwise, we have
which implies
Combining these two cases together, we obtain
Then, the conjugate of \(g({\mathbf {w}})\) is
Therefore, the dual problem of (2) is
Let \({\mathbf {w}}_\star \) and \(\varvec{\alpha }_\star ^{}\) denote the primal and dual optimal solutions, respectively. From (20) and (21) the KKT condition of (2) establishes
Appendix B: Convergence analysis
To facilitate the proof, we first introduce some useful notations and technical lemmas. It is easy to verify that \(\widehat{{\mathbf {u}}}_t^k\) can be rewritten as
For any \(t \ge 0\), we define \(\varvec{\beta }_t\) as \(\varvec{\beta }_t \mathop {=}\limits ^{\mathrm{def}}\big ({\mathbf {u}}_t^{} - \varvec{\alpha }_t^{}\big )/\eta \Rightarrow \varvec{\alpha }_t^{} = {\mathbf {u}}_t^{} - \eta \varvec{\beta }_t~~\forall k \in [K]\).
Lemma 6
(Dünner et al. 2016) Consider the following pair of optimization problems, which are dual to each other:
where \(f^*\) is \(\mu \)-strongly convex with respect to a norm \(\Vert \cdot \Vert _{f^*}\) and \(g^*\) is \(1/\beta \)-smooth with respect to a norm \(\Vert \cdot \Vert _{g^*}\). Let \(\sigma _\text {max}^{} = \max _{\varvec{\alpha }\ne 0} \Vert {\mathbf {A}}\varvec{\alpha }\Vert _{g^*}^2/\Vert \varvec{\alpha }\Vert _{f^*}^2\). Suppose an arbitrary optimization algorithm is applied to the first problem and it produces a sequence of (possibly random) iterates \(\{\varvec{\alpha }_t^{}\}_{t=0}^{\infty }\) such that there exist \(C \in (0, 1]\) and \(D \ge 0\) such that
Then, for any
it holds that \({\mathbb {E}}\big [P({\mathbf {w}}(\varvec{\alpha }_t^{})) - (-D(\varvec{\alpha }_t^{})) \big ] \le \epsilon \).
Remark 3
This lemma enables us to transfer the convergence rate of the objective function to the convergence rate of the duality gap.
Lemma 7
For any \(t \ge 1\), the following identities hold
Proof
First, we show that (24) can be proved by using the definition of \(\widehat{{\mathbf {u}}}_t^k\) and \(\zeta _t^{}\).
which implies \(\theta _t^{} \zeta _{t-1}^{}\big (\widehat{{\mathbf {u}}}_t^k - \widehat{{\mathbf {v}}}_{t-1}^k\big )/\zeta _t^{} = \big (\widehat{\varvec{\alpha }}_{t-1}^k - \widehat{{\mathbf {u}}}_t^k\big )\). Next, (25) can be shown by using \(\widehat{\varvec{\beta }}_t^k\) and \(\zeta _t^{} = \theta _t^2/\eta \). Following from the definition of \(\widehat{{\mathbf {v}}}_t^k\), we have
which implies
To prove (26), we need to use (24) and (25). By using the definition of \(\zeta _t^{}\) and (25), one can show that
which can be rewritten as
Finally, we prove (27) by using (24) and (25).
By using (24), we obtain
This completes the proof. \(\square \)
1.1 Proof of Lemma 2
Lemma 2
Consider applying Algorithm 1 to solve (3), the following inequality holds for any \(t \ge 1\),
where \(R^t = \frac{\zeta _t^{}}{2} \sum _{k=1}^K \big \Vert {\mathbf {A}}\big (\widehat{\varvec{\alpha }}_\star ^k - \widehat{{\mathbf {v}}}_t^k\big )\big \Vert ^2, \gamma _t^{} = \prod _{i=1}^t \big (1 - \theta _i\big )\) for any \(t \ge 1\) and \(\gamma _0^{} = 1\).
Proof
Following from the optimality condition of \(\widehat{\varvec{\alpha }}_t^k\), the following holds for any \(k \in [K]\)
By using the fact \(f^*\) is \(\mu \)-strongly convex, the following inequality holds for any \({\mathbf {z}} \in \mathbb {R}^n\)
Substituting (28) into the above inequality, we obtain
By using the fact that \(g^*\) is \(1/(1 - \rho )\)-smooth and convex, the following inequality holds for any \({\mathbf {z}} \in \mathbb {R}^n\)
where the last inequality is obtained by using the fact that \({\mathbf {A}}\) is a block diagonal matrix. Thus,
which implies
Substituting \({\mathbf {z}} = \varvec{\alpha }_\star ^{}\) and \({\mathbf {z}} = \varvec{\alpha }_{t-1}\) into (29), we obtain
Combining these two inequalities together with coefficients \(\theta _t^{}\) and \((1 - \theta _t^{})\), respectively, we obtain
which is equivalent to
Substituting (26) into the above inequality, we obtain
which is equivalent to
Substituting (27) into the above inequality, we obtain
which can be rewritten as
This implies
Applying the above inequality for \(i = 1\) to t, we obtain
By using the definition of \(\gamma _t\), the above inequality can be rewritten as
This completes the proof. \(\square \)
1.2 Convergence analysis for smooth losses
1.2.1 Proof of Lemma 3
Lemma 3
Assume the loss functions \(f_{ki}^{}\)’s are \((1/\mu )\)-smooth for any \(k \in [K]\) and \(i \in [n_k^{}]\). If \(\theta _0^{} = \sqrt{\vartheta \eta }\) and \((1-\rho )\lambda \mu n \le 1\), then the following inequality holds for any \(t \ge 1\)
Proof
It can be proved by using Lemma 2. From Lemma 1, we know that \(f_{ki}^{*}\) are \(\mu \)-strongly convex for any \(k \in [K], i \in [n_k^{}]\) since \(f_{ki}^{}\) is \((1/\mu )\)-smooth. If \(\zeta _{t-1}^{} \ge \vartheta \), then \(\zeta _t^{} = (1- \theta _t^{})\zeta _{t-1}^{} + \vartheta \theta _t^{} \ge (1 - \theta _t^{}) \vartheta + \theta _t^{} \vartheta = \vartheta \). Therefore, \(\zeta _t^{} \ge \vartheta \) holds for any \(t \ge 1\) since \(\zeta _0 \ge \vartheta \). Hence,
Then, \(\gamma _t\) can be bounded
Substituting this result and \({\mathbf {v}}_0 = \varvec{\alpha }_0^{}\) into (11), we obtain
This completes the proof. \(\square \)
1.2.2 Proof of Theorem 1
Theorem 1
Assume the loss functions \(f_{ki}^{}\)’s are \((1/\mu )\)-smooth for any \(k \in [K]\) and \(i \in [n_k^{}]\). If \(\theta _0^{} = \sqrt{\vartheta \eta }\) and \((1-\rho )\lambda \mu n \le 1\), then after T iterations in Algorithm 1 with
\(D\big (\varvec{\alpha }_T^{}\big ) - D\big (\varvec{\alpha }_\star ^{}) \le \epsilon _D^{}\) holds. Furthermore, after T iterations with
it holds that \(P\big ({\mathbf {w}}(\varvec{\alpha }_T^{})) - (-D(\varvec{\alpha }_T^{})) \le \epsilon _G^{}\).
Proof
It is easy to see that \(D(\varvec{\alpha })\) is \(\vartheta \)-strongly convex since \(f_{ki}^{}\) is \((1/\mu )\)-smooth for any \(k \in [K], i \in [n_k^{}]\). It implies
By using this result, (12) can be rewritten as
where the last upper bound will be smaller than \(\epsilon _D\) if
By applying Lemma 6, we know that for any
it holds that \(D(\varvec{\alpha }_T) - P\big ({\mathbf {w}}(\varvec{\alpha }_T)) \le \epsilon _G\). \(\square \)
1.3 Convergence analysis for Lipschitz continuous losses: Proof of Theorem 2
Theorem 2
Assume the loss functions \(f_{ki}^{}\)’s are generally convex and L-Lipschitz continuous for any \(k \in [K]\), \(i \in [n_k^{}]\). If \(\theta _0 = 1\), the following inequality holds for any \(t \ge 1\)
After T iterations in Algorithm 1 with
it holds that \(D\big (\varvec{\alpha }_T^{}\big ) - D\big (\varvec{\alpha }_\star ^{}) \le \epsilon _D^{}\).
Proof
It can be proved by using Lemma 2. It is easy to see that \(f_{ki}^*\) are generally convex (i.e., \(\mu = 0\)) since \(f_{ki}^{}\) are L-Lipschitz continuous for any \(k \in [K], i \in [n_k^{}]\). By using the definition of \(\zeta _t^{}\) and the fact that \(\mu = 0\), we obtain \(\gamma _t = (1- \theta _t^{})\gamma _{t-1} = \zeta _t^{}/\zeta _{t-1}^{} \gamma _{t-1}\). Applying this identity from \(i = 1\) to t, we obtain \(\gamma _t = \gamma _0\zeta _t^{}/\zeta _0 = \zeta _t^{}/\zeta _0\). In addition, we can obtain \(\theta _t^{} = \big (\gamma _{t-1} - \gamma _t\big )/\gamma _{t-1}\) from \(\gamma _t = (1 - \theta _t^{})\gamma _{t-1}\). Therefore,
By using \(\theta _t^2/\eta = \zeta _t^{}\), we obtain \(1/\gamma _t - 1/\gamma _{t-1} \ge 0.5 \sqrt{\eta \zeta _0} = 0.5 \sqrt{\zeta _0(1 - \rho )\lambda n^2}\). Combining the above inequality from \(i = 1\) to \(i = t\), we obtain
Substituting this result into (11), we obtain
Since \(\theta _0 = 1\), we have \(\zeta _0 = \theta _0^2/\eta = 1/((1 - \rho )\lambda n^2)\). Substituting the value of \(\zeta _0\) into (13), we obtain
where the last upper bound will be smaller than \(\epsilon _D\) if
This completes the proof. \(\square \)
Appendix C: More details of dynamic feature screening
1.1 Proof of Lemma 4
Lemma 4
Assume the loss functions \(f_{ki}^{}\)’s are \((1/\mu )\)-smooth for any \(k \in [K],i \in [n_k^{}]\). For any dual feasible solution \(\varvec{\alpha }\), it holds that \(\varvec{\alpha }_\star ^{} \in \mathscr {F} \mathop {=}\limits ^{\mathrm{def}}\big \{\varvec{\theta }~|~ \Vert \varvec{\theta }- \varvec{\alpha }\Vert \le \sqrt{2G(\varvec{\alpha })n/\mu } \big \}\).
Proof
Since \(f_{ki}^{}\) are \((1/\mu )\)-smooth for any \(k \in [K], i \in [n_k^{}]\), \(D(\varvec{\alpha })\) is \((\mu /n)\)-strongly convex.
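The chain of inequalities, omitted from this version of the text, can be reconstructed as follows (a sketch consistent with the steps (a)-(c) referenced below, writing \(G(\varvec{\alpha }) = P({\mathbf {w}}(\varvec{\alpha })) + D(\varvec{\alpha })\) for the duality gap):

```latex
\begin{aligned}
\frac{\mu}{2n}\left\Vert \varvec{\alpha} - \varvec{\alpha}_\star \right\Vert^2
&\overset{(a)}{\le} D(\varvec{\alpha}) - D(\varvec{\alpha}_\star)
  - \left\langle \nabla D(\varvec{\alpha}_\star),
      \varvec{\alpha} - \varvec{\alpha}_\star \right\rangle \\
&\overset{(b)}{\le} D(\varvec{\alpha}) + P\big({\mathbf{w}}(\varvec{\alpha})\big)
  - \left\langle \nabla D(\varvec{\alpha}_\star),
      \varvec{\alpha} - \varvec{\alpha}_\star \right\rangle \\
&\overset{(c)}{\le} D(\varvec{\alpha}) + P\big({\mathbf{w}}(\varvec{\alpha})\big)
  = G(\varvec{\alpha}),
\end{aligned}
```

Here (b) uses \(-D(\varvec{\alpha }_\star ) \le P({\mathbf {w}}(\varvec{\alpha }))\) and (c) uses \(\langle \nabla D(\varvec{\alpha }_\star ), \varvec{\alpha }- \varvec{\alpha }_\star \rangle \ge 0\) for any dual feasible \(\varvec{\alpha }\).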
where (a) follows from the fact that \(D(\varvec{\alpha })\) is \((\mu /n)\)-strongly convex, (b) is obtained by applying the weak duality theorem, and (c) follows from the optimality of \(\varvec{\alpha }_\star ^{}\). Therefore, we obtain
This completes the proof. \(\square \)
Before proving Lemma 5, we first introduce the following lemma.
Lemma 8
(Gay 1981) Let us consider the following minimization problem
where \({\mathbf {H}} \in \mathbb {R}^{n \times n}\) is a symmetric matrix, \({\mathbf {D}} \in \mathbb {R}^{n \times n}\) is a nonsingular matrix, and \(\delta > 0\). Then, \({\mathbf {s}}_\star \) minimizes \(\psi ({\mathbf {s}})\) over the constraint set if and only if there exists a \(\vartheta _\star \ge 0\) such that
This \(\vartheta _\star \) is unique.
Next, we prove Lemma 5 by using Lemma 8.
Lemma 5
If \(\upsilon _j^{} = 0\), the maximum value of (19) is 0. Otherwise, the upper bound is
where \(\vartheta _\star \) and \({\mathbf {s}}_\star \) are defined as follows: (a) \(\vartheta _\star = 2 \upsilon _j\) and \({\mathbf {s}}_\star = \bar{{\mathbf {s}}} + \widehat{{\mathbf {s}}}\) if 1) \(\exists ~\widehat{{\mathbf {s}}} \in \mathbb {R}^K\) with \(\widehat{{\mathbf {s}}}_{\mathscr {I}_j} = {\mathbf {0}}\) and \(\Vert \bar{{\mathbf {s}}} + \widehat{{\mathbf {s}}}\Vert = \sqrt{2G(\varvec{\alpha })n/\mu }\), and 2) \(\left\langle {\mathbf {X}}_{\cdot j}^t, \varvec{\theta }_t^{}\right\rangle = 0, \forall t \in \mathscr {I}_j\). (b) Otherwise, \(\vartheta _\star > 2 \upsilon _j\) is the solution of \(\Vert \left( {\mathbf {H}} + \vartheta _\star {\mathbf {I}}\right) ^{-1} {\mathbf {g}}\Vert = \sqrt{2G(\varvec{\alpha })n/\mu }\), and \({\mathbf {s}}_\star = - \left( {\mathbf {H}} + \vartheta _\star {\mathbf {I}}\right) ^{-1}{\mathbf {g}}\).
Proof
Let \({\mathbf {z}} = \varvec{\theta }- \varvec{\alpha }\), then (19) is equivalent to
The objective can be relaxed as follows
Let \({\mathbf {s}} \in \mathbb {R}^K\) with \(s_k = \Vert {\mathbf {z}}^k\Vert \); we then define \(\psi ({\mathbf {s}})\) as \(\psi ({\mathbf {s}}) = \frac{1}{2} \langle {\mathbf {s}}, {\mathbf {H}}{\mathbf {s}} \rangle + \langle {\mathbf {g}}, {\mathbf {s}}\rangle \). By using the relaxed objective function, (19) becomes
where \(\min _{\Vert {\mathbf {s}}\Vert \le \sqrt{2G(\varvec{\alpha })n/\mu }} \psi ({\mathbf {s}})\) can be rewritten in the form of (30) by defining \({\mathbf {D}} = {\mathbf {I}}\) and \(\delta = \sqrt{2G(\varvec{\alpha })n/\mu }\). Then, Lemma 8 implies there exists a unique \(\vartheta _\star \) such that
which implies \(\vartheta _\star > 0\) since \(\upsilon _j > 0\). Then, the problem can be considered in two cases: \(\vartheta _\star = 2 \upsilon _j\) and \(\vartheta _\star > 2 \upsilon _j\). Given \(\vartheta _\star \) and \({\mathbf {s}}_\star \), \(\psi ({\mathbf {s}}_\star )\) can be formulated by using (32) and (33)
which implies the upper bound of (19) is
Next, we derive the values of \({\mathbf {s}}_\star \) for \(\vartheta _\star = 2 \upsilon _j\) and \(\vartheta _\star > 2 \upsilon _j\), respectively.
Case 1: \(\vartheta _\star = 2 \upsilon _j\). In this case, (32) and (33) imply \(({\mathbf {H}} + 2 \upsilon _j{\mathbf {I}}){\mathbf {s}}_\star = - {\mathbf {g}}\) and \(\Vert {\mathbf {s}}_\star \Vert = \delta \), which is equivalent to the conditions 1) and 2) stated in case (a) of Lemma 5. Therefore, if all of these conditions hold, then \(\vartheta _\star = 2\upsilon _j\); otherwise, \(\vartheta _\star > 2\upsilon _j\), which is discussed in the following case.
Case 2: \(\vartheta _\star > 2 \upsilon _j\). In this case, \({\mathbf {H}} + \vartheta _\star {\mathbf {I}}\) is an invertible matrix. From (32) and (33), we obtain
which implies \({\mathbf {s}}_\star = -\left( {\mathbf {H}} + \vartheta _\star {\mathbf {I}}\right) ^{-1}{\mathbf {g}}\) and \(\big \Vert \big ({\mathbf {H}} + \vartheta _\star {\mathbf {I}}\big )^{-1}{\mathbf {g}}\big \Vert = \sqrt{2G(\varvec{\alpha })n/\mu }\). This completes the proof. \(\square \)
Lemma 5 shows that a global optimum \(\vartheta _\star \) exists; however, an algorithm is needed to compute its value in the case \(\vartheta _\star > 2 \upsilon _j\). Note that \(\vartheta _\star \in (2 \upsilon _j, \infty )\) is the unique solution of
The above equation can be solved efficiently by Newton's method. Alternatively, since the left-hand side is monotone in \(\vartheta _\star \), the bisection method also applies.
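To make the root-finding step concrete, the following is a minimal sketch, not the implementation used in the paper: it assumes a diagonal \({\mathbf {H}} = \mathrm {diag}(h)\) so that \(\Vert ({\mathbf {H}} + \vartheta {\mathbf {I}})^{-1}{\mathbf {g}}\Vert \) has a closed form, and the names `secular` and `solve_theta` are illustrative. Because the norm is strictly decreasing in \(\vartheta \) on \((2\upsilon _j, \infty )\), bisection is guaranteed to converge.

```python
import math

def secular(theta, h, g, delta):
    """phi(theta) = ||(H + theta*I)^{-1} g|| - delta for diagonal H = diag(h).

    phi is strictly decreasing in theta, so it has a unique root on
    (2*upsilon_j, inf) whenever phi(2*upsilon_j) > 0.
    """
    return math.sqrt(sum(gi ** 2 / (hi + theta) ** 2
                         for hi, gi in zip(h, g))) - delta

def solve_theta(h, g, delta, lower, tol=1e-10, max_iter=200):
    """Bisection for the unique root of the secular equation above `lower`."""
    upper = lower + 1.0
    while secular(upper, h, g, delta) > 0:   # grow a bracket [lower, upper]
        upper *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lower + upper)
        if secular(mid, h, g, delta) > 0:
            lower = mid
        else:
            upper = mid
        if upper - lower < tol:
            break
    return 0.5 * (lower + upper)
```

For example, with \(h = (1, 2)\), \(g = (3, 4)\), \(\delta = 1\), and `lower = 0.5` playing the role of \(2\upsilon _j\), `solve_theta` returns the \(\vartheta _\star \) at which the secular function vanishes. A Newton variant would replace the bisection update with a step using the derivative of the left-hand side.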
Appendix D: More details on single task learning
In this section, we provide more details on the extension of our method to single-task learning. Specifically, we consider the following elastic-net regularized learning problem (Zou and Hastie 2005), which combines \(\ell _1\)- and \(\ell _2\)-norm regularization
Then, the local subproblem for each worker is
where \(\eta \mathop {=}\limits ^{\mathrm{def}}(1 - \rho )\lambda n^2\) and a safe value for \(\sigma '\) is \(\sigma ' = K\) (Ma et al. 2015). We compare the performance of our method with COCOA+ on two datasets (Table 2) with the smoothed hinge loss (Shalev-Shwartz and Zhang 2013)
where \(\mu \) is set to \(\mu = 0.5\).
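For reference, the smoothed hinge loss of Shalev-Shwartz and Zhang (2013) can be sketched as follows, assuming its standard piecewise-quadratic form; the function name is illustrative.

```python
def smoothed_hinge(margin, mu=0.5):
    """Smoothed hinge loss of Shalev-Shwartz and Zhang (2013).

    margin = y * <w, x>. The loss is 0 for margins >= 1, linear for
    margins <= 1 - mu, and quadratic in between, which makes the
    loss (1/mu)-smooth while agreeing with the hinge loss outside
    the transition region.
    """
    if margin >= 1.0:
        return 0.0
    if margin <= 1.0 - mu:
        return 1.0 - margin - mu / 2.0
    return (1.0 - margin) ** 2 / (2.0 * mu)
```

With \(\mu = 0.5\) as in our experiments, a margin of \(0.75\) falls in the quadratic region and incurs a loss of \((1 - 0.75)^2 / (2 \cdot 0.5) = 0.0625\), while any margin of at least \(1\) incurs zero loss.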
In our experiments, 8 workers are used (i.e., \(K = 8\)) and \(\rho = 0.9\) for both datasets. SDCA (Shalev-Shwartz and Zhang 2013) is used as the local solver for both methods, and H is set to \(H = 5 \times 10^5\). We evaluate the two methods for \(\lambda = 10^{-2} \lambda _\text {max}\) and \(\lambda = 10^{-3} \lambda _\text {max}\). Figure 4 compares the number of communication iterations used by our method and COCOA+ to reach a solution meeting a prescribed duality gap. In addition, we evaluate the effect of dynamic screening in further reducing the communication cost, using the same setting as that presented in Sect. 7.4. Figure 5 presents the total communication cost used by our method, without and with dynamic screening, to solve (34) on RCV1 and URL. As observed, the proposed method performs as well in the single-task setting as it does for MTL.
Cite this article
Zhou, Q., Chen, Y. & Pan, S.J. Communication-efficient distributed multi-task learning with matrix sparsity regularization. Mach Learn 109, 569–601 (2020). https://doi.org/10.1007/s10994-019-05847-6