
Robust and sparse regression in generalized linear model by stochastic optimization

  • Takayuki Kawashima
  • Hironori Fujisawa
Original Paper · Information Theory and Statistics

Abstract

The generalized linear model (GLM) plays a key role in regression analyses. For high-dimensional data, the sparse GLM has been used, but it is not robust against outliers. Recently, robust methods have been proposed for specific examples of the sparse GLM. Among them, we focus on the robust and sparse linear regression based on the \(\gamma\)-divergence. The estimator based on the \(\gamma\)-divergence has strong robustness under heavy contamination. In this paper, we extend the robust and sparse linear regression based on the \(\gamma\)-divergence to the robust and sparse GLM based on the \(\gamma\)-divergence, and adopt a stochastic optimization approach to obtain the estimate. We use the randomized stochastic projected gradient descent as the stochastic optimization approach and extend its established convergence property to the classical first-order necessary condition. By virtue of the stochastic optimization approach, we can efficiently estimate parameters for very large problems. In particular, we present linear regression, logistic regression and Poisson regression with \(L_1\) regularization in detail as specific examples of the robust and sparse GLM. In numerical experiments and real data analysis, the proposed method outperformed comparative methods.

Keywords

Sparse · Robust · Divergence · Stochastic gradient descent · Generalized linear model

1 Introduction

Regression analysis is a fundamental tool in data analysis. The generalized linear model (GLM) (Nelder and Wedderburn 1972; McCullagh and Nelder 1989) is often used and includes many important regression models such as linear regression, logistic regression and Poisson regression. Recently, sparse modeling has become popular in the GLM to treat high-dimensional data and, for some specific examples of the GLM, robust methods have also been incorporated [linear regression: Khan et al. (2007), Alfons et al. (2013); logistic regression: Bootkrajang and Kabán (2013), Chi and Scott (2014)].

Kawashima and Fujisawa (2017) proposed a robust and sparse regression based on the \(\gamma\)-divergence (Fujisawa and Eguchi 2008), which has a strong robustness in the sense that the latent bias can be sufficiently small even under heavy contamination. The proposed method showed better performance than past methods by virtue of this strong robustness. A coordinate descent algorithm with a majorization–minimization algorithm (MM algorithm) (Hunter and Lange 2004) was constructed as an efficient estimation procedure for linear regression, but it is not always useful for the GLM. In particular, for Poisson regression with \(L_1\) regularization based on the \(\gamma\)-divergence, the objective function includes a hypergeometric series and demands a high computational cost. To overcome this problem, we propose a new estimation procedure with a stochastic optimization approach, which largely reduces the computational cost and is easily applicable to any example of the GLM. Among the many stochastic optimization approaches, we adopt the randomized stochastic projected gradient descent (RSPG) proposed by Ghadimi et al. (2016).

In Sect. 2, we review the robust and sparse regression based on the \(\gamma\)-divergence. In Sect. 3, the RSPG is explained with regularized expected risk minimization. In Sect. 4, an online algorithm is proposed for the GLM and the robustness of the online algorithm is described with some typical examples of the GLM. In Sect. 5, the convergence property of the RSPG is extended to the classical first-order necessary condition. In Sects. 6 and 7, numerical experiments and real data analysis illustrate better performance than comparative methods. Concluding remarks are given in Sect. 8.

2 Regression via \(\gamma\)-divergence

2.1 Regularized empirical risk minimization

We suppose g is the underlying probability density function and f is a parametric probability density function. Let us define the \(\gamma\)-cross entropy for regression given by
$$\begin{aligned}&d_{\gamma } (g(y|x),f(y|x);g(x)) \\&\quad = -\frac{1}{\gamma } \log \int \frac{ \int g(y|x) f(y|x)^{\gamma } {\text{d}}y }{ \left( \int f( y|x)^{1+\gamma }{\text{d}}y\right) ^\frac{\gamma }{1+\gamma }} g(x){\text{d}}x \\&\quad = -\frac{1}{\gamma } \log \int \int \frac{ f(y|x)^{\gamma } }{ \left( \int f(y|x)^{1+\gamma }{\text{d}}y\right) ^\frac{\gamma }{1+\gamma }} g(x,y){\text{d}}x{\text{d}}y, \\&\quad = -\frac{1}{\gamma } \log E_{g(x,y)} \left[ \frac{ f(y|x)^{\gamma } }{ \left( \int f(y|x)^{1+\gamma }{\text{d}}y\right) ^\frac{\gamma }{1+\gamma }} \right] . \end{aligned}$$
The \(\gamma\)-divergence for regression is defined by
$$\begin{aligned}&D_{\gamma } (g(y|x),f(y|x);g(x)) \\&\quad =-d_{\gamma } (g(y|x),g(y|x);g(x)) +d_{\gamma } (g(y|x),f(y|x);g(x)). \end{aligned}$$
The main idea of robustness in the \(\gamma\)-divergence is the density power weight \(f(y|x)^{\gamma }\), which gives a small weight to the terms related to outliers. Then, the parameter estimation using the \(\gamma\)-divergence becomes robust against outliers and is known to have strong robustness, which implies that the latent bias can be sufficiently small even under heavy contamination. More details about the robustness properties were investigated by Fujisawa and Eguchi (2008), Kanamori and Fujisawa (2015), and Kawashima and Fujisawa (2017).
Let \(f(y|x;\theta )\) be the parametric probability density function with parameter \(\theta\). The target parameter can be considered by
$$\begin{aligned} \theta ^*_{\gamma }&= \mathop {\mathrm{argmin}}\limits _{\theta } D_{\gamma }(g(y|x),f(y|x;\theta );g(x)) \nonumber \\&= \mathop {\mathrm{argmin}}\limits _{\theta } d_{\gamma }(g(y|x),f(y|x;\theta );g(x)). \end{aligned}$$
Moreover, we can also consider the target parameter with a convex regularization term, given by
$$\begin{aligned} \theta ^*_{\gamma , {\text {pen}}}&= \mathop {\mathrm{argmin}}\limits _{\theta } D_{\gamma }(g(y|x),f(y|x;\theta );g(x)) + \lambda P(\theta ) \nonumber \\&= \mathop {\mathrm{argmin}}\limits _{\theta } d_{\gamma }(g(y|x),f(y|x;\theta );g(x))+ \lambda P(\theta ) , \end{aligned}$$
(1)
where \(P(\theta )\) is a convex regularization term for the parameter \(\theta\) and \(\lambda\) is a tuning parameter. As examples of the convex regularization term, we can consider the \(L_1\) penalty (Lasso, Tibshirani 1996), the elastic net (Zou and Hastie 2005), the indicator function of a closed convex set (Kivinen and Warmuth 1995; Duchi et al. 2008) and so on. In what follows, we refer to the regression based on the \(\gamma\)-divergence as the \(\gamma\)-regression.
Let \((x_1,y_1) ,\ldots , (x_n,y_n)\) be the observations randomly drawn from the underlying distribution g(x, y). The \(\gamma\)-cross entropy can be empirically estimated by
$$\begin{aligned}&\bar{d}_\gamma (f(y|x;\theta )) = -\frac{1}{\gamma } \log \frac{1}{n} \sum _{i=1}^n \frac{ f(y_i|x_i;\theta )^{\gamma } }{ \left( \int f(y|x_i;\theta )^{1+\gamma }{\text{d}}y\right) ^\frac{\gamma }{1+\gamma }} . \end{aligned}$$
By virtue of (1), the sparse \(\gamma\)-estimator can be proposed by
$$\begin{aligned} \hat{ \theta }_{\gamma , {\text {pen}}}&=\mathop {\mathrm{argmin}}\limits _{\theta } \bar{d}_\gamma (f(y|x;\theta ))+ \lambda P(\theta ) . \end{aligned}$$
(2)
To obtain the minimizer, we must solve a non-convex and non-smooth optimization problem. Iterative estimation algorithms for such a problem cannot easily achieve numerical stability and efficiency.
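For concreteness, the following sketch (a hypothetical helper, not the authors' implementation) evaluates the objective in (2) for the Gaussian linear model \(f(y|x;\theta )=\phi (y;\beta _0+x^T\beta ,\sigma ^2)\), for which \(\int f(y|x;\theta )^{1+\gamma }{\text{d}}y=(2\pi \sigma ^2)^{-\gamma /2}(1+\gamma )^{-1/2}\):

```python
import numpy as np

def gamma_empirical_objective(X, y, beta0, beta, sigma2, gamma, lam):
    """Empirical gamma-cross entropy (2) plus an L1 penalty for the Gaussian linear model."""
    mu = beta0 + X @ beta
    f_pow = ((2 * np.pi * sigma2) ** (-0.5)
             * np.exp(-(y - mu) ** 2 / (2 * sigma2))) ** gamma   # f(y_i|x_i; theta)^gamma
    # (int f(y|x; theta)^{1+gamma} dy)^{gamma/(1+gamma)}; the integral has a closed
    # form for the Gaussian density
    denom = ((2 * np.pi * sigma2) ** (-gamma / 2) * (1 + gamma) ** (-0.5)) ** (gamma / (1 + gamma))
    d_gamma = -np.log(np.mean(f_pow / denom)) / gamma
    return d_gamma + lam * np.sum(np.abs(beta))
```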

2.2 MM algorithm for \(\gamma\)-regression

Kawashima and Fujisawa (2017) proposed an iterative estimation algorithm for (2) by the MM algorithm (Hunter and Lange 2004). It has a monotone decreasing property, i.e., the objective function monotonically decreases at each iterative step, which leads to numerical stability and efficiency. In particular, the linear regression with \(L_1\) penalty was considered in detail.

Here, we explain the idea of the MM algorithm briefly. Let \(h(\eta )\) be the objective function. Let us prepare a majorization function \(h_{\text {MM}}\) satisfying
$$\begin{aligned} h_{\text {MM}}(\eta ^{(m)}|\eta ^{(m)})&= h(\eta ^{(m)}), \\ h_{\text {MM}}(\eta |\eta ^{(m)})&\ge h(\eta ) \ \ \text{ for } \text{ all } \eta , \end{aligned}$$
where \(\eta ^{(m)}\) is the parameter at the m-th iterative step for \(m=0,1,2,\ldots\). The MM algorithm optimizes the majorization function instead of the objective function as follows:
$$\begin{aligned} \eta ^{(m+1)} = \mathop {\mathrm{argmin}}\limits _{\eta } h_{\text {MM}}(\eta |\eta ^{(m)}). \end{aligned}$$
Then, we can show that the objective function \(h(\eta )\) monotonically decreases at each iterative step, because
$$\begin{aligned} h(\eta ^{(m)})&= h_{\text {MM}}(\eta ^{(m)}|\eta ^{(m)}) \\&\ge h_{\text {MM}}(\eta ^{(m+1)}|\eta ^{(m)}) \\&\ge h(\eta ^{(m+1)}). \end{aligned}$$
Note that \(\eta ^{(m+1)}\) does not need to be the minimizer of \(h_{\text {MM}}(\eta |\eta ^{(m)})\). We only need
$$\begin{aligned} h_{\text {MM}}(\eta ^{(m)}|\eta ^{(m)}) \ge h_{\text {MM}}(\eta ^{(m+1)}|\eta ^{(m)}). \end{aligned}$$
The key issue in the MM algorithm is how to construct the majorization function \(h_{\text {MM}}\).
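A minimal sketch of the resulting MM loop is given below; `h` and `argmin_h_mm` are hypothetical user-supplied callables for the objective and for (approximately) minimizing the surrogate:

```python
def mm_optimize(h, argmin_h_mm, eta_init, max_iter=100, tol=1e-8):
    """Generic MM loop: repeatedly decrease the majorization function h_MM(. | eta_m).
    The monotone decreasing property guarantees h(eta_new) <= h(eta) at every step."""
    eta = eta_init
    for _ in range(max_iter):
        eta_new = argmin_h_mm(eta)          # minimize (or merely decrease) the surrogate at eta^(m)
        if h(eta) - h(eta_new) < tol:       # stop when the objective no longer decreases
            return eta_new
        eta = eta_new
    return eta
```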
In Kawashima and Fujisawa (2017), the following majorization function was proposed using Jensen’s inequality:
$$\begin{aligned} h_{\text {MM}}(\theta |\theta ^{(m)}) = - \frac{1}{\gamma } \sum _{i=1}^n \alpha ^{(m)}_{i} \log \left\{ \frac{ f(y_i|x_i;\theta )^{\gamma } }{ \left( \int f(y|x_i;\theta )^{1+\gamma }{\text{d}}y\right) ^{\frac{\gamma }{1+\gamma }} } \right\} + \lambda P(\theta ) , \end{aligned}$$
(3)
where
$$\begin{aligned} \alpha ^{(m)}_{i}= \frac{ \frac{ f(y_i|x_i;\theta ^{(m)} )^{\gamma } }{ \left( \int f(y|x_i;\theta ^{(m)})^{1+\gamma }{\text{d}}y\right) ^{\frac{\gamma }{1+\gamma }} } }{ \sum _{l=1}^n \frac{ f(y_l|x_l;\theta ^{(m)} )^{\gamma } }{ \left( \int f(y|x_l;\theta ^{(m)})^{1+\gamma }{\text{d}}y\right) ^{\frac{\gamma }{1+\gamma }} } }. \end{aligned}$$
Moreover, for linear regression \(y=\beta _0 + x^T \beta + e \ (e \sim N(0,\sigma ^2) )\) with \(L_1\) regularization, the following majorization function and iterative estimation algorithm based on a coordinate descent method were obtained:
$$\begin{aligned} h_{\text {MM}, \ {\text {linear}}}(\theta |\theta ^{(m)})&= \frac{1}{2(1+\gamma )} \log \sigma ^{2} + \frac{1}{2} \sum _{i=1}^n \alpha ^{(m)}_i \frac{(y_i -\beta _0 - x_{i}^{T}\beta )^{2}}{\sigma ^{2}} +\lambda ||\beta ||_1 ,\\ \beta _0^{(m+1)}&= \sum _{i=1}^n \alpha _i^{(m)} (y_i-{x_i}^T \beta ^{(m)}) , \\ \beta _{j}^{(m+1)}&= \frac{ S\left( \sum _{i=1}^n \alpha ^{(m)}_{i}(y_i-\beta ^{(m+1)}_0 - r_{i,-j}^{(m)} )x_{i j} , \ {\sigma ^2}^{(m)}\lambda \right) }{ \left( \sum _{i=1}^n \alpha ^{(m)}_i x^{2}_{ij} \right) } \ \ (j=1,\ldots ,p), \\ {\sigma ^{2}}^{(m+1)}&= (1 + \gamma ) \sum _{i=1}^n \alpha ^{(m)}_i \left( y_i - \beta ^{(m+1)}_0 - x^{T}_{i} \beta ^{(m+1)}\right) ^{2} , \end{aligned}$$
where \(S(t,\lambda )=\text {sign}(t)(|t|-\lambda )_{+}\) is the soft-thresholding operator and \(r_{i,-j}^{(m)}=\sum _{ k \ne j} x_{ik} ( \mathbb {1}_{( k < j) } \beta _k^{(m+1)} + \mathbb {1}_{ (k > j) } \beta _k^{(m)} )\).
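A minimal sketch of the soft-thresholding operator S and of one coordinate update of \(\beta _j\) with the weights \(\alpha _i^{(m)}\) held fixed; for brevity the partial residual is formed from the current \(\beta\) vector, which coincides with \(r_{i,-j}^{(m)}\) when the coordinates are updated in place:

```python
import numpy as np

def soft_threshold(t, lam):
    """Soft-thresholding operator S(t, lam) = sign(t) * max(|t| - lam, 0)."""
    return np.sign(t) * np.maximum(np.abs(t) - lam, 0.0)

def update_beta_j(j, X, y, alpha, beta0, beta, sigma2, lam):
    """One coordinate-descent step for beta_j in the MM sub-problem (weights alpha fixed)."""
    r = y - beta0 - X @ beta + X[:, j] * beta[j]        # partial residual excluding x_j * beta_j
    num = soft_threshold(np.sum(alpha * r * X[:, j]), sigma2 * lam)
    return num / np.sum(alpha * X[:, j] ** 2)
```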

2.3 Sparse \(\gamma\)-Poisson regression case

Typical GLMs are linear regression, logistic regression, and Poisson regression. The former two regressions are easily treated with the above coordinate descent algorithm, but the Poisson regression has a problem, as described in the following. Here, we consider a Poisson regression with a regularization term. Let \(f(y|x;\theta )\) be the conditional density with \(\theta =(\beta _0,\beta )\), given by
$$\begin{aligned} f(y|x;\theta ) = \frac{\exp (-\mu _{x}(\theta )) }{y!} \mu _{x}(\theta )^y, \end{aligned}$$
where \(\mu _{x}(\theta ) = \mu _{x}(\beta _0,\beta ) = \exp (\beta _0+x^T\beta )\). By virtue of (3), we can obtain the majorization function for Poisson regression with a regularization term, given by
$$\begin{aligned} h_{\text {MM}, \ {\text {poisson}}}(\theta |\theta ^{(m)}) =&- \sum _{i=1}^n \alpha _{i}^{(m)} \log \frac{\exp (-\mu _{x_i}(\theta )) }{y_{i}!} \mu _{x_i}(\theta )^{y_i} \nonumber \\&+\frac{1}{1+\gamma } \sum _{i=1}^n \alpha _{i}^{(m)} \log \left\{ \sum _{y=0}^\infty \frac{ \exp ( -(1+\gamma )\mu _{x_i}(\theta ))}{y!^{1+\gamma }} \mu _{x_i}(\theta )^{(1+\gamma )y} \right\} + \lambda P(\theta ). \end{aligned}$$
(4)
The second term on the right-hand side of (4) contains a hypergeometric series; hence, we cannot obtain a closed-form update in the MM algorithm with respect to the parameters \(\beta _0, \beta\), although this series converges (see Appendix 1). Therefore, we cannot derive an efficient iterative estimation algorithm based on a coordinate descent method in a way similar to Kawashima and Fujisawa (2017). Other sparse optimization methods which use a linear approximation of the loss function, e.g., proximal gradient descent (Nesterov 2007; Duchi and Singer 2009; Beck and Teboulle 2009), can solve (4). However, these methods require at least \(n\) (the sample size) approximate evaluations of the hypergeometric series at each iterative step of the sub-problem \(\mathop {\mathrm{argmin}}\limits _{\theta } h_{\text {MM}} (\theta |\theta ^{(m)})\). Therefore, the computational cost is high, especially for very large problems. We need another optimization approach to overcome these problems. In this paper, we consider minimizing the regularized expected risk (1) directly by a stochastic optimization approach. In what follows, we refer to the sparse \(\gamma\)-regression in the GLM as the sparse \(\gamma\)-GLM.
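To illustrate the issue, the series appearing in (4) can only be evaluated approximately, e.g., by truncation; the following sketch (with a hypothetical truncation level `y_max`) computes \(\sum _{y=0}^{\infty } \exp (-(1+\gamma )\mu )\,\mu ^{(1+\gamma )y}/(y!)^{1+\gamma }\) on the log scale for numerical stability:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def poisson_power_sum(mu, gamma, y_max=1000):
    """Truncated evaluation of sum_{y>=0} exp(-(1+gamma)*mu) * mu^{(1+gamma)*y} / (y!)^{1+gamma}."""
    y = np.arange(y_max + 1)
    log_terms = (1 + gamma) * (y * np.log(mu) - gammaln(y + 1) - mu)
    return np.exp(logsumexp(log_terms))
```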

3 Stochastic optimization approach for regularized expected risk minimization

The regularized expected risk minimization is generally the following form:
$$\begin{aligned} \varPsi ^{*} :=\min _{\theta \in \varTheta } \left\{ \varPsi (\theta ) :=E_{(x,y)} \left[ l((x,y);\theta ) \right] + \lambda P(\theta ) \right\} , \end{aligned}$$
(5)
where \(\varTheta\) is a closed convex set in \(\mathbb {R}^n\), l is a loss function with a parameter \(\theta\), and \(\varPsi (\theta )\) is bounded below over \(\varTheta\) by \(\varPsi ^* > - \infty\). A stochastic optimization approach solves (5) sequentially. More specifically, we draw a sequence of i.i.d. paired samples \((x_1,y_1),(x_2,y_2),\ldots ,(x_t,y_t),\ldots\) and, at the t-th time, update the parameter \(\theta ^{(t)}\) based on the latest paired sample \((x_t,y_t)\) and the previously updated parameter \(\theta ^{(t-1)}\). Therefore, it requires low computational complexity per iteration, and stochastic optimization can scale well for very large problems.

3.1 Stochastic gradient descent

The stochastic gradient descent (SGD) is one of the most popular stochastic optimization approaches and is widely used in the machine learning community (Bottou 2010). The SGD takes the form
$$\begin{aligned} \theta ^{(t+1)} = \mathop {\mathrm{argmin}}\limits _{\theta \in \varTheta } \left\langle \nabla l((x_t,y_t);\theta ^{(t)}) , \theta \right\rangle + \lambda P(\theta ) +\frac{1}{2 \eta _t}\Vert \theta -\theta ^{(t)} \Vert ^{2}_2, \end{aligned}$$
(6)
where \(\eta _t\) is a step size parameter. For some important examples of \(P(\theta )\), e.g., \(L_1\) regularization, (6) can be solved in a closed form.
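For example, when \(P(\theta )=\Vert \theta \Vert _1\) and \(\varTheta =\mathbb {R}^p\), the minimizer of (6) is a gradient step followed by soft-thresholding (the proximal operator of \(\eta _t\lambda \Vert \cdot \Vert _1\)); a minimal sketch, with unpenalized components simply skipping the thresholding:

```python
import numpy as np

def sgd_l1_step(theta, grad, eta, lam):
    """Closed-form solution of (6) for the L1 penalty on an unconstrained domain."""
    z = theta - eta * grad                                      # gradient step
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)  # soft-thresholding
```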
When the loss function l is convex (possibly non-differentiable) and \(\eta _t\) is set appropriately, e.g., \(\eta _t= \mathcal{O} \left( \frac{1}{ \sqrt{t}} \right)\), under some mild conditions, the convergence property was established for the average of the iterates, i.e., \(\bar{\theta }_{T}=\frac{1}{T} \sum _{t=1}^T \theta ^{(t)}\), as follows [see, e.g., Bubeck (2015)]:
$$\begin{aligned} E \left[ \varPsi (\bar{\theta }_T) \right] - \varPsi ^* \le \mathcal{O} \left( \frac{1}{\sqrt{T}} \right) , \end{aligned}$$
where the expectation is taken with respect to the past paired samples \((x_1, y_1), \ldots , (x_T,y_T)\). Moreover, for some variants of the SGD, e.g., the RDA (Xiao 2010), mirror descent (Duchi et al. 2010) and Adagrad (Duchi et al. 2011), the convergence property was established under similar assumptions.

These methods assume that the loss function is convex to establish the convergence property, but the loss function is non-convex in our problem (1). Hence, we cannot adopt these methods directly. Recently, for a non-convex loss function with a convex regularization term, the randomized stochastic projected gradient (RSPG) was proposed by Ghadimi et al. (2016), and its convergence property was established under some mild conditions. Therefore, we consider applying the RSPG to our problem (1).

3.2 Randomized stochastic projected gradient

First, we explain the RSPG, following Ghadimi et al. (2016). The RSPG takes the form
$$\begin{aligned} \theta ^{(t+1)} = \mathop {\mathrm{argmin}}\limits _{\theta \in \varTheta } \left\langle \frac{1}{ m_t} \sum _{i=1}^{m_t} \nabla l((x_{t,i},y_{t,i});\theta ^{(t)}) , \theta \right\rangle + \lambda P(\theta ) +\frac{1}{ \eta _t} V( \theta , \theta ^{(t)} ) , \end{aligned}$$
(7)
where \(m_t\) is the size of mini-batch at t-th time, \((x_{t,i},y_{t,i})\) is the i-th mini-batch sample at t-th time and
$$\begin{aligned} V(a,b) = w(a) -w(b) - \langle \nabla w(b) , a-b \rangle , \end{aligned}$$
where w is a continuously differentiable and \(\alpha\)-strongly convex function satisfying \(\langle a - b , \nabla w(a) - \nabla w(b) \rangle \ge \alpha \Vert a - b \Vert ^2\) for \(a , b \in \varTheta\). When \(w(\theta ) = \frac{1}{2} || \theta ||_2^2\), i.e., \(V(\theta , \theta ^{(t)}) = \frac{1}{2} || \theta - \theta ^{(t)} ||_2^2\), (7) is almost equal to (6).

Here, we note two differences between the RSPG and the SGD. One is that the RSPG uses a mini-batch strategy, i.e., it takes multiple samples at the t-th time. The other is that the RSPG randomly selects the output \(\hat{\theta }\) from \(\left\{ \theta ^{(1)}, \ldots , \theta ^{(T)} \right\}\) according to a certain probability distribution instead of taking the average of the iterates. This is because, for non-convex stochastic optimization, the later iterates do not always gather around a local minimum, and the average of the iterates may not work in such a non-convex case.

Next, we show the implementation of the RSPG in Algorithm 1. However, Algorithm 1 may have a large deviation of the output because only one final output is selected via the probability mass function \(P_R\). Therefore, Ghadimi et al. (2016) also proposed the two-phase RSPG (2-RSPG), which has a post-optimization phase. In the post-optimization phase, multiple outputs are selected and validated to determine the final output, as shown in Algorithm 2. This can be expected to achieve a better complexity result for finding an \((\epsilon ,\varLambda )\)-solution, i.e., Prob\(\left\{ C(\theta ^{(R)}) \le \epsilon \right\} \ge 1- \varLambda\), where C is some convergence criterion, for some \(\epsilon > 0\) and \(\varLambda \in (0,1)\). For more detailed descriptions and proofs, we refer to Sect. 4 of Ghadimi et al. (2016).
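A minimal sketch of the RSPG iteration (7) with \(V(a,b)=\frac{1}{2}\Vert a-b\Vert _2^2\), an \(L_1\) penalty, and constant step size and mini-batch size is given below; `grad_fn` and `sample_fn` are hypothetical user-supplied callables, and with a constant step size the selection distribution \(P_R\) reduces to the uniform distribution on \(\{1,\ldots ,T\}\). The 2-RSPG would additionally re-evaluate \(N_{\text {cand}}\) candidate outputs on held-out samples in the post-optimization phase and return the best one.

```python
import numpy as np

def rspg(grad_fn, sample_fn, theta0, eta, lam, T, m, seed=0):
    """Minimal sketch of the RSPG with V(a, b) = ||a - b||_2^2 / 2 and an L1 penalty.
    grad_fn(batch, theta) returns the averaged mini-batch gradient and sample_fn(m)
    draws m i.i.d. pairs; both are hypothetical user-supplied callables."""
    rng = np.random.default_rng(seed)
    thetas = []
    theta = np.asarray(theta0, dtype=float)
    for _ in range(T):
        g = grad_fn(sample_fn(m), theta)                 # mini-batch gradient at theta^(t)
        z = theta - eta * g                              # gradient step
        theta = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)  # prox of eta*lam*||.||_1
        thetas.append(theta)
    # with a constant step size, the selection distribution P_R is uniform on {1,...,T}
    return thetas[rng.integers(T)]
```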

4 Online robust and sparse GLM

In this section, we present the sparse \(\gamma\)-GLM with the stochastic optimization approach for three specific examples: linear regression, logistic regression, and Poisson regression with \(L_1\) regularization. In what follows, we refer to the sparse \(\gamma\)-GLM with the stochastic optimization approach as the online sparse \(\gamma\)-GLM.

To apply the RSPG to our method (1), we prepare the following monotone transformation of the \(\gamma\)-cross entropy for regression in (1):
$$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{\theta \in \varTheta } E_{g(x,y)} \left[ - \frac{ f(y|x;\theta )^{\gamma } }{ \left( \int f(y|x;\theta )^{1+\gamma }{\text{d}}y\right) ^\frac{\gamma }{1+\gamma }} \right] + \lambda P(\theta ) , \end{aligned}$$
(8)
and we suppose that \(\varTheta\) is \(\mathbb {R}^n\) or a closed ball with a sufficiently large radius. Then, we can apply the RSPG to (8) and, by virtue of (7), the update formula takes the form
$$\begin{aligned} \theta ^{(t+1)} = \mathop {\mathrm{argmin}}\limits _{\theta \in \varTheta } \left\langle - \frac{1}{m_t} \sum _{i=1}^{m_t} \nabla \frac{f(y_{t,i}|x_{t,i};\theta ^{(t)})^{\gamma }}{\left( \int f(y|x_{t,i};\theta ^{(t)})^{1+\gamma }{\text{d}}y\right) ^{\frac{\gamma }{1+\gamma }}} , \theta \right\rangle + \lambda P(\theta ) +\frac{1}{ \eta _t} V( \theta , \theta ^{(t)} ) . \end{aligned}$$
(9)
More specifically, we suppose that \(V(\theta , \theta ^{(t)}) = \frac{1}{2} || \theta - \theta ^{(t)} ||_2^2\) because the update formula can then be obtained in closed form for some important sparse regularization terms, e.g., \(L_1\) regularization and the elastic net. We illustrate the update algorithms based on Algorithm 1 for the three specific examples. The update algorithms based on Algorithm 2 are obtained in a similar manner.

To implement our methods, we need to determine some tuning parameters, e.g., the step size \(\eta _t\) and the mini-batch size \(m_t\). In Sect. 5, we discuss how to determine these tuning parameters in detail.

4.1 Online sparse \(\gamma\)-linear regression

Let \(f(y|x;\theta )\) be the conditional density with \(\theta =(\beta _0, \beta ^T, \sigma ^2)^T\), given by
$$\begin{aligned} f(y|x;\theta )=\phi (y;\beta _0+x^T\beta ,\sigma ^2), \end{aligned}$$
where \(\phi (y;\mu ,\sigma ^2)\) is the normal density with mean parameter \(\mu\) and variance parameter \(\sigma ^2\). Suppose that \(P(\theta )\) is the \(L_1\) regularization \(||\beta ||_1\). Then, by virtue of (9), we can obtain the update formula given by
$$\begin{aligned}&\left( \beta _0^{(t+1)},\beta ^{(t+1)},{\sigma ^{2}}^{(t+1)} \right) \nonumber \\&\quad = \mathop {\mathrm{argmin}}\limits _{\beta _0,\beta ,\sigma ^{2}} \xi _1 (\beta _0^{(t)}) \beta _0 + \langle \xi _2 (\beta ^{(t)}) , \beta \rangle + \xi _3 ( {\sigma ^2}^{(t)}) \sigma ^{2} \nonumber \\&\qquad + \lambda \Vert \beta \Vert _1 + \frac{1}{2\eta _t} \Vert \beta _0-\beta _0^{(t)} \Vert ^{2}_2 +\frac{1}{2\eta _t} \Vert \beta -\beta ^{(t)} \Vert ^{2}_2 +\frac{1}{2\eta _t} \Vert \sigma ^2-{\sigma ^2}^{(t)} \Vert ^{2}_2 , \end{aligned}$$
(10)
where
$$\begin{aligned} \xi _1(\beta _0^{(t)})&= - \frac{1}{m_t} \sum _{i=1}^{m_t} \left[ \frac{\gamma (y_{t,i} -\beta _0^{(t)} - {x_{t,i}}^T \beta ^{(t)} )}{ {\sigma ^2}^{(t)} } \left( \frac{1+\gamma }{2 \pi {\sigma ^2}^{(t)}} \right) ^{\frac{\gamma }{2(1+\gamma )}} \exp \left\{ - \frac{\gamma (y_{t,i} -\beta _0^{(t)} - {x_{t,i}}^T \beta ^{(t)})^2 }{2 {\sigma ^2}^{(t)}} \right\} \right] , \\ \xi _2(\beta ^{(t)})&= - \frac{1}{m_t} \sum _{i=1}^{m_t} \left[ \frac{\gamma (y_{t,i} -\beta _0^{(t)} - {x_{t,i}}^T \beta ^{(t)} )}{ {\sigma ^2}^{(t)} } \left( \frac{1+\gamma }{2 \pi {\sigma ^2}^{(t)}} \right) ^{\frac{\gamma }{2(1+\gamma )}} \exp \left\{ - \frac{\gamma (y_{t,i} -\beta _0^{(t)} - {x_{t,i}}^T \beta ^{(t)})^2 }{2 {\sigma ^2}^{(t)}} \right\} x_{t,i} \right] , \\ \xi _3( {\sigma ^2}^{(t)})&= \frac{1}{m_t} \sum _{i=1}^{m_t} \left[ \frac{\gamma }{2} \left( \frac{1+\gamma }{2 \pi {\sigma ^2}^{(t)} } \right) ^{\frac{\gamma }{2(1+\gamma )}} \left\{ \frac{1}{(1+\gamma ) {\sigma ^2}^{(t)} } - \frac{(y_{t,i} -\beta _0^{(t)} - {x_{t,i}}^T \beta ^{(t)} )^2}{ {\sigma ^4}^{(t)}} \right\} \right. \\&\quad \left. \exp \left\{ - \frac{\gamma (y_{t,i} - \beta _0^{(t)} - {x_{t,i}}^T \beta ^{(t)} )^2 }{2 {\sigma ^2}^{(t)}} \right\} \right] . \end{aligned}$$
Consequently, we can obtain the update algorithm, as shown in Algorithm 3.
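As a concrete illustration, the mini-batch gradients \(\xi _1, \xi _2, \xi _3\) can be computed as in the following sketch (a hypothetical helper written directly from the displayed expressions, not the authors' implementation):

```python
import numpy as np

def gamma_linear_gradients(Xb, yb, beta0, beta, sigma2, gamma):
    """Mini-batch stochastic gradients (xi_1, xi_2, xi_3) of the transformed loss (8)
    for the Gaussian linear model, written from the expressions displayed above."""
    r = yb - beta0 - Xb @ beta                                   # residuals on the mini-batch
    c = ((1 + gamma) / (2 * np.pi * sigma2)) ** (gamma / (2 * (1 + gamma)))
    w = np.exp(-gamma * r ** 2 / (2 * sigma2))                   # density power weights
    xi1 = -np.mean(gamma * r / sigma2 * c * w)
    xi2 = -(gamma * c / sigma2) * (Xb.T @ (r * w)) / len(yb)
    xi3 = np.mean(gamma / 2 * c * (1 / ((1 + gamma) * sigma2) - r ** 2 / sigma2 ** 2) * w)
    return xi1, xi2, xi3
```

With \(V(\theta ,\theta ^{(t)})=\frac{1}{2}\Vert \theta -\theta ^{(t)}\Vert _2^2\) and the constraint inactive, minimizing (10) then reduces to \(\beta _0^{(t+1)}=\beta _0^{(t)}-\eta _t\xi _1\), \(\beta ^{(t+1)}=S(\beta ^{(t)}-\eta _t\xi _2,\ \eta _t\lambda )\) applied componentwise with the soft-thresholding operator S, and \({\sigma ^2}^{(t+1)}={\sigma ^2}^{(t)}-\eta _t\xi _3\) (a projection may additionally be needed to keep \(\sigma ^2\) positive).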
Here, we briefly show the robustness of online sparse \(\gamma\)-linear regression. For simplicity, we consider the intercept parameter \(\beta _0\). Suppose that \((x_{t,k},y_{t,k})\) is an outlier at the t-th time. The conditional probability density \(f(y_{t,k}|x_{t,k};\theta ^{(t)})\) can then be expected to be sufficiently small. We see from \(f(y_{t,k}|x_{t,k};\theta ^{(t)}) \approx 0\) and (10) that
$$\begin{aligned}&\beta _0^{(t+1)} \nonumber \\&\quad = \mathop {\mathrm{argmin}}\limits _{\beta _0} - \frac{1}{m_t} \sum _{1 \le i \ne k \le m_t} \left[ \frac{\gamma (y_{t,i} -\beta _0^{(t)} - {x_{t,i}}^T \beta ^{(t)} )}{ {\sigma ^2}^{(t)} } \left( \frac{1+\gamma }{2 \pi {\sigma ^2}^{(t)}} \right) ^{\frac{\gamma }{2(1+\gamma )}} \exp \left\{ - \frac{\gamma (y_{t,i} -\beta _0^{(t)} - {x_{t,i}}^T \beta ^{(t)})^2 }{2 {\sigma ^2}^{(t)}} \right\} \right] \times \beta _0 \nonumber \\&\qquad \underset{\approx 0}{ \underline{ - \frac{1}{m_t} \frac{ \gamma (1+\gamma )^{\frac{\gamma }{2(1+\gamma )}} (y_{t,k} -\beta _0^{(t)} - {x_{t,k}}^T \beta ^{(t)} )}{ {\sigma ^2}^{(t)} } \left( 2 \pi {\sigma ^2}^{(t)} \right) ^{\frac{\gamma ^2}{2(1+\gamma )}} f(y_{t,k} | x_{t,k} ; \theta ^{(t)})^{\gamma } } } \times \beta _0 + \frac{1}{2\eta _t} \Vert \beta _0-\beta _0^{(t)} \Vert ^{2}_2 . \end{aligned}$$
Therefore, the effect of an outlier is naturally ignored in (10). Similarly, we can also see the robustness for parameters \(\beta\) and \(\sigma ^2\).

4.2 Online sparse \(\gamma\)-logistic regression

Let \(f(y|x;\theta )\) be the conditional density with \(\theta =(\beta _0,\beta ^T)^T\), given by
$$\begin{aligned} f(y|x;\beta _0,\beta )=F( \tilde{x}^T \theta )^y (1 - F(\tilde{x}^T \theta ) )^{(1-y)} , \end{aligned}$$
where \(\tilde{x}=(1,x^T)^T\) and \(F(u)= \frac{1}{1+\exp (-u)}\). Then, by virtue of (9), we can obtain the update formula given by
$$\begin{aligned}&\left( \beta _0^{(t+1)}, \beta ^{(t+1)} \right) \nonumber \\&\quad = \mathop {\mathrm{argmin}}\limits _{\beta _0, \beta } \nu _1(\beta _0^{(t)}) \beta _0 + \langle \nu _2(\beta ^{(t)}) , \beta \rangle + \lambda || \beta ||_1 +\frac{1}{2\eta _t} \Vert \beta _0-\beta _0^{(t)} \Vert ^{2}_2 +\frac{1}{2\eta _t} \Vert \beta -\beta ^{(t)} \Vert ^{2}_2 , \end{aligned}$$
(11)
where
$$\begin{aligned} \nu _1(\beta _0^{(t)})&= - \frac{1}{m_t} \sum _{i=1}^{m_t} \left[ \frac{ \gamma \exp ( \gamma y_{t,i} \tilde{x}_{t,i}^T \theta ^{(t)} ) \left\{ y_{t,i} - \frac{ \exp ( (1+\gamma ) \tilde{x}_{t,i}^T \theta ^{(t)} ) }{ 1+\exp ( (1+\gamma ) \tilde{x}_{t,i}^T \theta ^{(t)} ) } \right\} }{ \left\{ 1+\exp ( (1+\gamma ) \tilde{x}_{t,i}^T \theta ^{(t)} ) \right\} ^{\frac{\gamma }{1+\gamma }} } \right] , \\ \nu _2(\beta ^{(t)})&= - \frac{1}{m_t} \sum _{i=1}^{m_t} \left[ \frac{ \gamma \exp ( \gamma y_{t,i} \tilde{x}_{t,i}^T \theta ^{(t)} ) \left\{ y_{t,i} - \frac{ \exp ( (1+\gamma ) \tilde{x}_{t,i}^T \theta ^{(t)} ) }{ 1+\exp ( (1+\gamma ) \tilde{x}_{t,i}^T \theta ^{(t)} ) } \right\} }{ \left\{ 1+\exp ( (1+\gamma ) \tilde{x}_{t,i}^T \theta ^{(t)} ) \right\} ^{\frac{\gamma }{1+\gamma }} } x_{t,i} \right] . \end{aligned}$$
Consequently, we can obtain the update algorithm as shown in Algorithm 4. In a similar way to online sparse \(\gamma\)-linear regression, we can also see the robustness for parameters \(\beta _0\) and \(\beta\) in online sparse \(\gamma\)-logistic regression (11).
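A sketch of the mini-batch gradients \(\nu _1, \nu _2\) (again a hypothetical helper; for large \(|\tilde{x}^T\theta |\) a numerically safer evaluation of the exponentials would be advisable):

```python
import numpy as np

def gamma_logistic_gradients(Xb, yb, beta0, beta, gamma):
    """Mini-batch stochastic gradients (nu_1, nu_2) for the Bernoulli model,
    written from the expressions displayed above (x~ = (1, x^T)^T)."""
    u = beta0 + Xb @ beta                                  # x~^T theta on the mini-batch
    s = 1 + np.exp((1 + gamma) * u)
    common = (gamma * np.exp(gamma * yb * u)
              * (yb - np.exp((1 + gamma) * u) / s)
              / s ** (gamma / (1 + gamma)))
    nu1 = -np.mean(common)
    nu2 = -(Xb.T @ common) / len(yb)
    return nu1, nu2
```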

4.3 Online sparse \(\gamma\)-Poisson regression

Let \(f(y|x;\theta )\) be the conditional density with \(\theta =(\beta _0,\beta ^T)^T\), given by
$$\begin{aligned} f(y|x;\theta ) = \frac{\exp (-\mu _{x}(\theta ) ) }{y!} \mu _{x}(\theta )^y, \end{aligned}$$
where \(\mu _{x}(\theta ) = \mu _{x}(\beta _0,\beta ) = \exp (\beta _0+x^T\beta )\). Then, by virtue of (9), we can obtain the update formula given by
$$\begin{aligned}&\left( \beta _0^{(t+1)}, \beta ^{(t+1)} \right) \nonumber \\&\quad =\mathop {\mathrm{argmin}}\limits _{\beta _0, \beta } \zeta _1(\beta _0^{(t)}) \beta _0 + \langle \zeta _{2}(\beta ^{(t)}) , \beta \rangle + \lambda || \beta ||_1 +\frac{1}{2\eta _t} \Vert \beta _0-\beta _0^{(t)} \Vert ^{2}_2 +\frac{1}{2\eta _t} \Vert \beta -\beta ^{(t)} \Vert ^{2}_2 , \end{aligned}$$
(12)
where
$$\begin{aligned} \zeta _1(\beta _0^{(t)})&= \frac{1}{m_t} \sum _{i=1}^{m_t} \left[ \frac{ \gamma f(y_{t,i}|x_{t,i};\theta ^{(t)})^\gamma \left\{ \sum _{y=0}^\infty (y - y_{t,i}) f(y|x_{t,i};\theta ^{(t)})^{1+\gamma } \right\} }{ \left\{ \sum _{y=0}^\infty f(y|x_{t,i};\theta ^{(t)})^{1+\gamma } \right\} ^{\frac{1+2\gamma }{1+\gamma }} } \right] , \\ \zeta _{2}(\beta ^{(t)})&= \frac{1}{m_t} \sum _{i=1}^{m_t} \left[ \frac{ \gamma f(y_{t,i}|x_{t,i};\theta ^{(t)})^\gamma \left\{ \sum _{y=0}^\infty (y - y_{t,i}) f(y|x_{t,i};\theta ^{(t)})^{1+\gamma } \right\} }{ \left\{ \sum _{y=0}^\infty f(y|x_{t,i};\theta ^{(t)})^{1+\gamma } \right\} ^{\frac{1+2\gamma }{1+\gamma }} } x_{t,i} \right] . \end{aligned}$$
Here, two types of hypergeometric series appear in (12). We can show that they converge as follows:

Lemma 1

If the term \(\mu _{x_{t,i}}(\beta _0^{(t)}, \beta ^{(t)})\) is bounded, \(\sum _{y=0}^\infty f(y|x_{t,i};\theta ^{(t)})^{1+\gamma }\) and \(\sum _{y=0}^\infty (y- y_{t,i} ) f(y|x_{t,i};\theta ^{(t)})^{1+\gamma }\) converge.

The proof is in Appendix 1.

Consequently, we can obtain the update algorithm as shown in Algorithm 5. In a similar way to online sparse \(\gamma\)-linear regression, we can also see the robustness for the parameters \(\beta _0\) and \(\beta\) in online sparse \(\gamma\)-Poisson regression (12). Moreover, this update algorithm requires at most \(2n=2 \times \sum _{t=1}^T m_t\) approximate evaluations of the hypergeometric series in Algorithm 5, i.e., at most twice the sample size. Therefore, we can achieve a significant reduction in computational complexity.
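A sketch of the mini-batch gradients \(\zeta _1, \zeta _2\), with both series evaluated by truncation at a hypothetical level `y_max` (Lemma 1 guarantees that the series converge for a bounded mean):

```python
import numpy as np
from scipy.special import gammaln

def gamma_poisson_gradients(Xb, yb, beta0, beta, gamma, y_max=1000):
    """Mini-batch stochastic gradients (zeta_1, zeta_2) for the Poisson model;
    the two series over y are evaluated by truncation at y_max."""
    mu = np.exp(beta0 + Xb @ beta)                           # mu_x(theta) on the mini-batch
    y = np.arange(y_max + 1)
    # log f(y|x_i)^{1+gamma} for every mini-batch sample i (rows) and count y (columns)
    log_f1g = (1 + gamma) * (np.outer(np.log(mu), y) - gammaln(y + 1) - mu[:, None])
    f1g = np.exp(log_f1g)
    S0 = f1g.sum(axis=1)                                     # sum_y f(y|x_i)^{1+gamma}
    S1 = (f1g * (y[None, :] - yb[:, None])).sum(axis=1)      # sum_y (y - y_i) f(y|x_i)^{1+gamma}
    log_f = yb * np.log(mu) - gammaln(yb + 1) - mu           # log f(y_i|x_i)
    common = gamma * np.exp(gamma * log_f) * S1 / S0 ** ((1 + 2 * gamma) / (1 + gamma))
    zeta1 = np.mean(common)
    zeta2 = (Xb.T @ common) / len(yb)
    return zeta1, zeta2
```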

5 Convergence property of online sparse \(\gamma\)-GLM

In this section, we show the global convergence property of the RSPG established by Ghadimi et al. (2016). Moreover, we extend it to the classical first-order necessary condition, i.e., at a local minimum, the directional derivative, if it exists, is non-negative for any direction [see, e.g., Borwein and Lewis (2010)].

First, we show the global convergence property of the RSPG. To apply to online sparse \(\gamma\)-GLM, we slightly modify some notations. We consider the following optimization problem (5) again:
$$\begin{aligned} \varPsi ^* :=\min _{\theta \in \varTheta } \underset{ :=\varPsi (\theta ) }{ \underline{ E_{(x,y)} \left[ l((x,y);\theta ) \right] + \lambda P(\theta ) } }, \end{aligned}$$
where \(E_{(x,y)} \left[ l((x,y);\theta ) \right]\) is continuously differentiable and possibly non-convex. The update formula (7) of the RSPG is as follows:
$$\begin{aligned} \theta ^{(t+1)} = \mathop {\mathrm{argmin}}\limits _{ \theta \in \varTheta } \left\langle \frac{1}{ m_t} \sum _{i=1}^{m_t} \nabla l((x_{t,i},y_{t,i});\theta ^{(t)} ) , \theta \right\rangle + \lambda P(\theta ) +\frac{1}{ \eta _t} V( \theta , \theta ^{(t)} ) , \end{aligned}$$
where
$$\begin{aligned} V(a,b) = w(a) -w(b) - \langle \nabla w(b) , a-b \rangle , \end{aligned}$$
and w is a continuously differentiable and \(\alpha\)-strongly convex function satisfying \(\langle a - b , \nabla w(a) - \nabla w(b) \rangle \ge \alpha \Vert a - b \Vert ^2\) for \(a , b \in \varTheta\). We make the following assumptions.

Assumption 1

\(\nabla E_{(x,y)} \left[ l((x,y);\theta ) \right]\) is L-Lipschitz continuous for some \(L>0\), i.e.,
$$\begin{aligned} \Vert \nabla E_{(x,y)} \left[ l((x,y);\theta _1) \right] - \nabla E_{(x,y)} \left[ l((x,y);\theta _2) \right] \Vert < L \Vert \theta _1 - \theta _2 \Vert , \text{ for } \text{ any } \ \theta _1, \theta _2 \ \in \varTheta . \end{aligned}$$
(13)

Assumption 2

For any \(t \ge 1\),
$$\begin{aligned}&E_{(x_t,y_t)} \left[ \nabla l((x_t,y_t);\theta ^{(t)}) \right] = \nabla E_{(x_t,y_t)} \left[ l((x_t,y_t);\theta ^{(t)}) \right] , \end{aligned}$$
(14)
$$\begin{aligned}&E_{(x_t,y_t)} \left[ \left\| \nabla l((x_t,y_t);\theta ^{(t)}) -\nabla E_{(x_t,y_t)} \left[ l((x_t,y_t);\theta ^{(t)}) \right] \right\| ^2 \right] \le \tau ^2, \end{aligned}$$
(15)
where \(\tau > 0\) is a constant.
Let us define
$$\begin{aligned} P_{X,R}&= \frac{1}{\eta _{R}} \left( \theta ^{(R)} -\theta ^{+} \right) , \\ \tilde{P}_{X,R}&= \frac{1}{\eta _{R}} \left( \theta ^{(R)} -\tilde{\theta }^{+} \right) , \end{aligned}$$
where
$$\begin{aligned} \theta ^{+}&= \mathop {\mathrm{argmin}}\limits _{\theta \in \varTheta } \left\langle \nabla E_{(x,y)} \left[ l((x,y); \theta ^{(R)} ) \right] , \theta \right\rangle + \lambda P(\theta ) +\frac{1}{\eta _R} V( \theta , \theta ^{(R)} ), \nonumber \\ \tilde{\theta }^{+}&= \mathop {\mathrm{argmin}}\limits _{\theta \in \varTheta } \left\langle \frac{1}{ m_{R} } \sum _{i=1}^{m_{R}} \nabla l((x_{R,i},y_{R,i}); \theta ^{(R)} ) , \theta \right\rangle + \lambda P(\theta ) +\frac{1}{\eta _R} V( \theta , \theta ^{(R)} ). \end{aligned}$$
(16)
Then, the following global convergence property was obtained.

Theorem 1

[Global Convergence Property in Ghadimi et al. (2016)]

Suppose that the step sizes \(\left\{ \eta _{t} \right\}\) are chosen such that \(0< \eta _{t} \le \frac{\alpha }{L}\) with \(\eta _{t} < \frac{\alpha }{L}\) for at least one t, and the probability mass function \(P_{R}\) is chosen such that for any \(t=1,\ldots ,T\),
$$\begin{aligned} P_{R}(t) :=\text{ Prob } \left\{ R=t \right\} = \frac{\alpha \eta _{t} - L \eta _{t}^2 }{ \sum _{t=1}^T \left( \alpha \eta _{t} - L \eta _{t}^2 \right) }. \end{aligned}$$
(17)
Then, we have
$$\begin{aligned} E \left[ || \tilde{P}_{X,R} ||^2 \right] \le \frac{ L D^2_{\varPsi } + \left( \tau ^2 / \alpha \right) \sum _{t=1}^T \left( \eta _{t} / m_{t} \right) }{ \sum _{t=1}^T \left( \alpha \eta _{t} - L \eta _{t}^2 \right) } , \end{aligned}$$
where the expectation was taken with respect to R and past samples \((x_{t,i}, y_{t,i}) \ (t=1,\ldots ,T; \ i=1,\ldots , m_{t} )\) and \(D_{ \varPsi }= \left[ \frac{ \varPsi ( \theta ^{(1)} ) - \varPsi ^* }{L} \right] ^{\frac{1}{2}}\).

Proof

See Ghadimi et al. (2016), Theorem 2. \(\square\)

In particular, Ghadimi et al. (2016) investigated the constant step size and mini-batch size policy as follows.

Corollary 1

[Global Convergence Property with constant step size and mini-batch size in Ghadimi et al. (2016)]

Suppose that the step sizes and mini-batch sizes are \(\eta _{t} = \frac{\alpha }{2L}\) and \(m_t =m \ (\ge 1)\) for all \(t=1, \ldots , T\), and the probability mass function \(P_{R}\) is chosen as (17). Then, we have
$$\begin{aligned} E \left[ \Vert \tilde{P}_{X,R} \Vert ^2 \right] \le \frac{4 L^2 D^2_{\varPsi }}{\alpha ^2 T} + \frac{2 \tau ^2}{ \alpha ^2 m} \ \ {\text {and}} \ \ E \left[ \Vert P_{X,R} \Vert ^2 \right] \le \frac{8 L^2 D^2_{\varPsi }}{\alpha ^2 T} + \frac{6 \tau ^2}{ \alpha ^2 m} . \end{aligned}$$
Moreover, the appropriate choice of mini-batch size m is given by
$$\begin{aligned} m = \left\lceil \min \left\{ \max \left\{ 1 , \frac{\tau \sqrt{6 N } }{4 L \tilde{D} } \right\} , N \right\} \right\rceil , \end{aligned}$$
where \(\tilde{D} >0\) and \(N\left( = m \times T\right)\) is the number of total samples. Then, with the above setting, we have the following result
$$\begin{aligned} \frac{\alpha ^2}{L} E \left[ \Vert P_{X,R} \Vert ^2 \right] \le \frac{16 L D^2_{\varPsi } }{N} + \frac{4 \sqrt{6} \tau }{ \sqrt{ N}} \left( \frac{D^2_{\varPsi } }{\tilde{D}} + \tilde{D} \max \left\{ 1, \frac{\sqrt{6} \tau }{ 4 L \tilde{D} \sqrt{N} } \right\} \right) . \end{aligned}$$
(18)
Furthermore, when N is relatively large, the optimal choice of \(\tilde{D}\) would be \(D_{\varPsi }\) and (18) reduces to
$$\begin{aligned} \frac{\alpha ^2}{L} E \left[ \Vert P_{X,R} \Vert ^2 \right] \le \frac{16 L D^2_{\varPsi } }{N} + \frac{8 \sqrt{6} D_{\varPsi } \tau }{ \sqrt{ N}} . \end{aligned}$$

Proof

See Ghadimi et al. (2016), Corollary 4. \(\square\)

Then, using (18) and Markov’s inequality, the following complexity result can be established by Ghadimi et al. (2016):
$$\begin{aligned} \text{ Prob } \left( \Vert P_{X,R} \Vert ^2 \ge \frac{ \kappa L B_{N}}{\alpha ^2} \right) \le \frac{1}{\kappa } \text{ for } \text{ any } \kappa >0, \end{aligned}$$
(19)
where \(B_{N} = \frac{16 L D^2_{\varPsi } }{N} + \frac{4 \sqrt{6} \tau }{ \sqrt{ N}} \left( \frac{D^2_{\varPsi } }{\tilde{D}} + \tilde{D} \max \left\{ 1, \frac{\sqrt{6} \tau }{ 4 L \tilde{D} \sqrt{N} } \right\} \right)\). In the 2-RSPG, this complexity result can be improved as follows:
$$\begin{aligned} \text{ Prob } \left( \Vert P_{X,R_{\text{ s }}} \Vert ^2 \ge \frac{2}{\alpha ^2} \left( 4L B_{N} + \frac{3 \kappa \tau ^2}{N} \right) \right) \le \frac{N_{\text {cand}}}{\kappa } + 2^{-N_{\text {cand}}} \text{ for } \text{ any } \kappa >0, \end{aligned}$$
(20)
where \(N_{\text {cand}}\) is the number of candidates for the output in Algorithm 2. For more detailed descriptions and proofs of complexity results, we refer to the Sect. 4.2 in Ghadimi et al. (2016).

Finally, we extend (18) to the classical first-order necessary condition as follows.

Theorem 2

[Modified Global Convergence Property]

Under the same assumptions as in Theorem 1 and for arbitrarily large N, we can expect \(P_{X,R} \approx 0\) with the probabilities in (19) and (20) for the RSPG and 2-RSPG, respectively. Then, for any direction \(\delta\) and \(\theta ^{(R)} \in \mathrm {relint} \left( \varTheta \right)\), we have
$$\begin{aligned}&\varPsi ^{'}( \theta ^{(R)}; \delta ) = \lim _{ k \downarrow 0} \frac{ \varPsi (\theta ^{(R)} + k\delta ) - \varPsi (\theta ^{(R)}) }{k} \ge 0 \nonumber \\&\qquad \text{ with } \text{ probability } \text{ of } (19) \text{ and } (20) \text{ in } \text{ RSPG } \text{ and } \text{2-RSPG, } \text{ respectively. } \end{aligned}$$
(21)

The proof is in Appendix 2.

We adopted the following parameter setting in online sparse \(\gamma\)-GLM:
$$\begin{aligned} \text{ step } \text{ size: } \eta _{t}&= \frac{1}{2L}, \\ \text{ mini-batch } \text{ size: } m_{t}&= \left\lceil \min \left\{ \max \left\{ 1 , \frac{\tau \sqrt{6 N } }{4 L \tilde{D} } \right\} , N \right\} \right\rceil . \end{aligned}$$
More specifically, when the (approximate) minimum value of the objective function \(\varPsi ^*\) is known, e.g., when the objective function is non-negative, we should use \(D_{\varPsi }\) instead of \(\tilde{D}\). In the numerical experiments, we used \(D_{\varPsi }\) because we could obtain \(\varPsi ^*\) in advance. In the real data analysis, we could not obtain \(\varPsi ^*\) in advance; hence, we used several values of \(\tilde{D}\), i.e., several values of the mini-batch size \(m_{t}\).
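A sketch of this setting (with \(\alpha =1\) because \(w(\theta )=\frac{1}{2}\Vert \theta \Vert _2^2\)); the inputs L, \(\tau\) and \(\tilde{D}\) are assumed to be estimated in advance from a pilot sample:

```python
import numpy as np

def rspg_parameters(L, tau, D_tilde, N):
    """Constant step size and mini-batch size used for online sparse gamma-GLM."""
    eta = 1.0 / (2.0 * L)                                        # eta_t = alpha / (2L) with alpha = 1
    m = int(np.ceil(min(max(1.0, tau * np.sqrt(6.0 * N) / (4.0 * L * D_tilde)), N)))
    T = N // m                                                   # number of iterations (N = m * T)
    return eta, m, T
```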

Here, we discuss Assumptions 1 and 2 in the case of online sparse \(\gamma\)-GLM. For all examples in Sect. 4, \(l((x,y);\theta )\) is twice continuously differentiable, so \(\nabla l((x,y);\theta )\) is locally Lipschitz continuous over a compact domain. Therefore, Assumption 1 holds locally. In particular, Assumption 1 holds globally, i.e., \(\nabla l\) is (globally) Lipschitz continuous, in online sparse \(\gamma\)-logistic regression. From the expressions of the stochastic gradients in (10), (11) and (12), it is easy to verify that (14) in Assumption 2 holds. On the other hand, (15) in Assumption 2 is generally hard to verify precisely. As an alternative, we can check in advance, using finite samples, that (15) practically holds.

6 Numerical experiments

In this section, we present the numerical results of online sparse \(\gamma\)-linear regression. We compared online sparse \(\gamma\)-linear regression based on the RSPG with online sparse \(\gamma\)-linear regression based on the SGD, which does not guarantee convergence in the non-convex case. The RSPG has two variants, shown in Algorithms 1 and 2. In this experiment, we adopted the 2-RSPG for numerical stability. In what follows, we refer to the 2-RSPG as the RSPG. As a comparative method, we implemented the SGD with the same parameter setting described in Sect. 3.1. All results were obtained in R version 3.3.0 on an Intel Core i7-4790K machine.

6.1 Linear regression models for simulation

We used the simulation model given by
$$\begin{aligned} y=\beta _0+\beta _1 x_1 + \beta _2 x_2+ \cdots +\beta _p x_p + e, \quad e \sim N(0,0.5^2). \end{aligned}$$
The sample size and the number of explanatory variables were set to be \(N=10000, 30000\) and \(p=1000, 2000\), respectively. The true coefficients were given by
$$\begin{aligned} \beta _1&= 1 ,\quad \beta _2 = 2,\quad \beta _4 = 4,\quad \beta _7 = 7,\quad \beta _{11} =11,\\ \beta _j&= 0 \quad \text{ for } \ j \in \{0, \ldots ,p \} \backslash \{1,2,4,7,11\}. \end{aligned}$$
We arranged a broad range of regression coefficients to observe sparsity for various degrees of regression coefficients. The explanatory variables were generated from a normal distribution \(N(0,\varSigma )\) with \(\varSigma =(0.2^{|i-j|})_{1 \le i,j \le p }\). We generated 30 random samples.

Outliers were incorporated into the simulations. We set the outlier ratio to \(\epsilon =0.2\), and the outliers were generated around the middle part of the range of the explanatory variables: for the outliers, the explanatory variables were generated from \(N(0,0.5^2)\) and the error terms were generated from \(N(20,0.5^2)\).
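A sketch that generates one replication of this design (the function name and seed are ours, not part of the original experiments):

```python
import numpy as np

def simulate_data(N, p, outlier_ratio=0.2, seed=0):
    """Simulation design of Sect. 6.1: AR(1)-type correlated covariates, sparse true
    coefficients, and a fraction of outliers with x ~ N(0, 0.5^2) and e ~ N(20, 0.5^2)."""
    rng = np.random.default_rng(seed)
    Sigma = 0.2 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=N)
    beta = np.zeros(p)
    beta[[0, 1, 3, 6, 10]] = [1, 2, 4, 7, 11]           # beta_1, beta_2, beta_4, beta_7, beta_11
    e = rng.normal(0.0, 0.5, size=N)
    out = rng.choice(N, size=int(outlier_ratio * N), replace=False)
    X[out] = rng.normal(0.0, 0.5, size=(len(out), p))   # outlying covariates
    e[out] = rng.normal(20.0, 0.5, size=len(out))       # outlying errors
    y = X @ beta + e                                    # beta_0 = 0
    return X, y, beta
```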

6.2 Performance measure

The empirical regularized risk and the (approximated) expected regularized risk were used to verify the fitness of regression:
$$\begin{aligned} \text {EmpRisk}&= \frac{1}{N} \sum _{i=1}^N - \frac{ f(y_i|x_i;\hat{\theta })^{\gamma } }{ \left( \int f(y|x_i;\hat{\theta })^{1+\gamma }{\text{d}}y\right) ^\frac{\gamma }{1+\gamma }} + \lambda \Vert \hat{\beta } \Vert _1 , \\ \text {ExpRisk}&= E_{g(x,y)} \left[ - \frac{ f(y|x;\hat{\theta })^{\gamma } }{ \left( \int f(y|x;\hat{\theta })^{1+\gamma }{\text{d}}y\right) ^\frac{\gamma }{1+\gamma }} \right] + \lambda \Vert \hat{\beta } \Vert _1 \\&\approx \frac{1}{N_{test}} \sum _{i=1}^{N_{test}} - \frac{ f(y^*_i|x^*_i;\hat{\theta })^{\gamma } }{ \left( \int f(y|x^*_i;\hat{\theta })^{1+\gamma }{\text{d}}y\right) ^\frac{\gamma }{1+\gamma }} + \lambda \Vert \hat{\beta } \Vert _1, \end{aligned}$$
where \(f(y|x;\hat{\theta })=\phi (y;\hat{\beta _0}+x^T\hat{\beta },\hat{\sigma }^2)\) and \((x_i^*, y_i^*)\) (\(i=1, \ldots ,N_\text {test}\)) are test samples generated from the simulation model with outlier scheme. In this experiment, we used \(N_{\text {test}}=70000\).

6.3 Initial point and tuning parameter

In our method, we need an initial point and some tuning parameters to obtain the estimate. Therefore, we used \(N_{\text {init}}=200\) samples to estimate, in advance, an initial point and the other parameters L in (13) and \(\tau ^2\) in (15). We suggest the following ways to prepare an initial point: the estimate of another conventional robust and sparse regression method would give a good initial point, and the estimate of the RANSAC (random sample consensus) algorithm would also give a good initial point. In this experiment, we added noise to the estimate of the RANSAC and used it as the initial point.

For estimating L and \(\tau ^2\), we followed the approach in Sect. 6 of Ghadimi et al. (2016). Moreover, we used the following values of the tuning parameters in this experiment. The parameter \(\gamma\) in the \(\gamma\)-divergence was set to 0.1. The parameter \(\lambda\) of the \(L_1\) regularization was set to \(10^{-1}, 10^{-2}, 10^{-3}\).

The RSPG needs the number of candidates \(N_{\text {cand}}\) and the number of post-samples \(N_{\text {post}}\) for the post-optimization described in Algorithm 2. We used \(N_{\text {cand}}=5\) and \(N_{\text {post}}= \left\lceil N/10 \right\rceil\).

6.4 Result

Tables 1, 2 and 3 show the EmpRisk, ExpRisk and computation time for \(\lambda = 10^{-3}, 10^{-2} \text{ and } 10^{-1}\). Except for the computation time, our method outperformed the comparative methods for several sample sizes and dimensions. We observe that the SGD, which is not theoretically guaranteed to converge for a non-convex loss, did not reach a stationary point numerically. In terms of computation time, our method was comparable to the SGD.
Table 1

EmpRisk, ExpRisk, and computation time for \(\lambda =10^{-3}\)

\(N=10{,}000\), \(p=1000\) (left block) and \(N=30{,}000\), \(p=1000\) (right block)

| Methods | EmpRisk | ExpRisk | Time | EmpRisk | ExpRisk | Time |
|---|---|---|---|---|---|---|
| RSPG | -0.629 | -0.628 | 75.2 | -0.692 | -0.691 | 78.3 |
| SGD with 1 mini-batch | -0.162 | -0.155 | 95.9 | -0.365 | -0.362 | 148 |
| SGD with 10 mini-batch | \(1.1\times 10^{-2}\) | \(1.45\times 10^{-2}\) | 73.2 | \(5.71\times 10^{-2}\) | \(5.6\times 10^{-2}\) | 73.7 |
| SGD with 30 mini-batch | \(4.79\times 10^{-2}\) | \(5.02\times 10^{-2}\) | 71.4 | \(5.71\times 10^{-2}\) | \(-5.6\times 10^{-2}\) | 73.7 |
| SGD with 50 mini-batch | \(6.03\times 10^{-2}\) | \(6.21\times 10^{-2}\) | 71.1 | \(-3.98\times 10^{-2}\) | \(-3.88\times 10^{-2}\) | 238 |

\(N=10{,}000\), \(p=2000\) (left block) and \(N=30{,}000\), \(p=2000\) (right block)

| Methods | EmpRisk | ExpRisk | Time | EmpRisk | ExpRisk | Time |
|---|---|---|---|---|---|---|
| RSPG | -0.646 | -0.646 | 117 | -0.696 | -0.696 | 125 |
| SGD with 1 mini-batch | 0.187 | 0.194 | 145 | \(-3.89\times 10^{-2}\) | \(-3.56\times 10^{-2}\) | 251 |
| SGD with 10 mini-batch | 0.428 | 0.431 | 99.2 | 0.357 | 0.359 | 112 |
| SGD with 30 mini-batch | 0.479 | 0.481 | 95.7 | 0.442 | 0.443 | 101 |
| SGD with 50 mini-batch | 0.496 | 0.499 | 166 | 0.469 | 0.47 | 337 |

Table 2

EmpRisk, ExpRisk, and computation time for \(\lambda =10^{-2}\)

\(N=10{,}000\), \(p=1000\) (left block) and \(N=30{,}000\), \(p=1000\) (right block)

| Methods | EmpRisk | ExpRisk | Time | EmpRisk | ExpRisk | Time |
|---|---|---|---|---|---|---|
| RSPG | -0.633 | -0.632 | 75.1 | -0.65 | -0.649 | 78.4 |
| SGD with 1 mini-batch | -0.322 | -0.322 | 96.1 | -0.488 | -0.487 | 148 |
| SGD with 10 mini-batch | 1.36 | 1.37 | 73.4 | 0.164 | 0.165 | 79.7 |
| SGD with 30 mini-batch | 2.61 | 2.61 | 71.6 | 1.34 | 1.34 | 73.9 |
| SGD with 50 mini-batch | 3.08 | 3.08 | 409 | 1.95 | 1.95 | 576 |

\(N=10{,}000\), \(p=2000\) (left block) and \(N=30{,}000\), \(p=2000\) (right block)

| Methods | EmpRisk | ExpRisk | Time | EmpRisk | ExpRisk | Time |
|---|---|---|---|---|---|---|
| RSPG | -0.647 | -0.646 | 117 | -0.66 | -0.66 | 125 |
| SGD with 1 mini-batch | -0.131 | -0.13 | 144 | -0.436 | -0.435 | 250 |
| SGD with 10 mini-batch | 3.23 | 3.23 | 99.1 | 0.875 | 0.875 | 112 |
| SGD with 30 mini-batch | 5.63 | 5.63 | 95.6 | 3.19 | 3.19 | 100 |
| SGD with 50 mini-batch | 6.52 | 6.53 | 503 | 4.38 | 4.38 | 675 |

Table 3

EmpRisk, ExpRisk, and computation time for \(\lambda =10^{-1}\)

\(N=10{,}000\), \(p=1000\) (left block) and \(N=30{,}000\), \(p=1000\) (right block)

| Methods | EmpRisk | ExpRisk | Time | EmpRisk | ExpRisk | Time |
|---|---|---|---|---|---|---|
| RSPG | -0.633 | -0.632 | 74.6 | -0.64 | -0.639 | 78.1 |
| SGD with 1 mini-batch | -0.411 | -0.411 | 95.6 | -0.483 | -0.482 | 148 |
| SGD with 10 mini-batch | 0.483 | 0.483 | 72.9 | \(-4.56\times 10^{-2}\) | \(-4.5\times 10^{-2}\) | 79.6 |
| SGD with 30 mini-batch | 1.53 | 1.53 | 71.1 | 0.563 | 0.563 | 73.7 |
| SGD with 50 mini-batch | 2.39 | 2.39 | 70.8 | 0.963 | 0.963 | 238 |

\(N=10{,}000\), \(p=2000\) (left block) and \(N=30{,}000\), \(p=2000\) (right block)

| Methods | EmpRisk | ExpRisk | Time | EmpRisk | ExpRisk | Time |
|---|---|---|---|---|---|---|
| RSPG | -0.654 | -0.653 | 116 | -0.66 | -0.66 | 130 |
| SGD with 1 mini-batch | -0.462 | -0.461 | 144 | -0.559 | -0.558 | 262 |
| SGD with 10 mini-batch | 0.671 | 0.672 | 98.9 | \(-9.71\times 10^{-2}\) | \(-9.62\times 10^{-2}\) | 116 |
| SGD with 30 mini-batch | 2.43 | 2.44 | 95.4 | 0.697 | 0.697 | 104 |
| SGD with 50 mini-batch | 4.02 | 4.02 | 165 | 1.32 | 1.32 | 340 |

7 Application to real data

We applied our method, the online sparse \(\gamma\)-Poisson regression, to the real data set 'Online News Popularity' (Fernandes et al. 2015), which is available at https://archive.ics.uci.edu/ml/datasets/online+news+popularity. We compared our method with the sparse Poisson regression implemented by the R-package 'glmnet' with the default parameter setting.

The Online News Popularity data set contains 39,644 samples with 58 explanatory variables. We divided the data set into 20,000 training and 19,644 test samples. In Online News Popularity, the exposure time of each sample is different; hence, we used the \(\log\)-transformed feature value 'timedelta' as the offset term. Moreover, 2000 training samples were randomly selected and outliers were incorporated into these training samples as follows:
$$\begin{aligned} y_{\text {outlier},i} = y_{i} + 100 \times t_{i} \quad (i=1, \ldots , 2000), \end{aligned}$$
where i is the index of a randomly selected sample, \(y_{i}\) is its response variable, and \(t_{i}\) is its offset term.
As a measure of predictive performance, the root trimmed mean squared prediction error (RTMSPE) was computed for the test samples given by
$$\begin{aligned} \text {RTMSPE} =\sqrt{ \frac{1}{h} \sum _{j=1}^h e_{[j]}^2 }, \end{aligned}$$
where \(e_j^2= \left( y_{j} - \left\lfloor \exp \left( \log (t_{j}) + \hat{\beta }_0 + x_j^{T} \hat{\beta } \right) \right\rfloor \right) ^2\), \(e_{[1]}^2 \le \cdots \le e_{[19644]}^2\) are the order statistics of \(e_1^2, \ldots , e_{19644}^2\) and \(h= \lfloor (19644+1)(1-\alpha ) \rfloor\) with \(\alpha =0.05, \ldots , 0.3\).
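A sketch of the RTMSPE computation (hypothetical helper following the definition above):

```python
import numpy as np

def rtmspe(y_test, t_test, X_test, beta0_hat, beta_hat, alpha=0.05):
    """Root trimmed mean squared prediction error with a log offset; the
    100*alpha% largest squared errors are trimmed before averaging."""
    pred = np.floor(np.exp(np.log(t_test) + beta0_hat + X_test @ beta_hat))
    e2 = np.sort((y_test - pred) ** 2)
    h = int(np.floor((len(y_test) + 1) * (1 - alpha)))
    return np.sqrt(np.mean(e2[:h]))
```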
In our method, we need an initial point and some tuning parameters to obtain the estimate. Therefore, we used \(N_{\text {init}}=200\) samples to estimate, in advance, an initial point and the other parameters L in (13) and \(\tau ^2\) in (15). In this experiment, we used the estimate of the RANSAC as the initial point. For estimating L, we followed the approach in Ghadimi et al. (2016), pages 298–299. Moreover, we used the following values of the tuning parameters in this experiment. The parameter \(\gamma\) in the \(\gamma\)-divergence was set to 0.1, 0.5 and 1.0. The parameter \(\lambda\) of the \(L_1\) regularization was selected by the robust cross-validation proposed by Kawashima and Fujisawa (2017), which is given by:
$$\begin{aligned} \text{ RoCV }(\lambda ) = - \frac{1}{n} \sum _{i=1}^n \frac{ f(y_i|x_i;\hat{\theta }^{[-i]})^{\gamma _0} }{ \left( \int f(y|x_i;\hat{\theta }^{[-i]})^{1+\gamma _0}{\text{d}}y\right) ^\frac{\gamma _0}{1+\gamma _0}} , \end{aligned}$$
where \(\hat{\theta }^{[-i]}\) is the parameter estimated with the i-th observation deleted and \(\gamma _0\) is an appropriate tuning parameter. In this experiment, \(\gamma _0\) was set to 1.0. The mini-batch size was set to 100, 200 and 500. The RSPG needs the number of candidates \(N_{\text {cand}}\) and the number of post-samples \(N_{\text {post}}\) for the post-optimization described in Algorithm 2. We used \(N_{\text {cand}}=5\) and \(N_{\text {post}}= \left\lceil N/10 \right\rceil\). The best results of our method and the comparative method are shown in Table 4. All results were obtained in R version 3.3.0 on an Intel Core i7-4790K machine. Table 4 shows that our method performed better than the sparse Poisson regression.
Table 4

Root trimmed mean squared prediction error in test samples (trimming fraction \(100\alpha \%\))

| Methods | \(5\%\) | \(10\%\) | \(15\%\) | \(20\%\) | \(25\%\) | \(30\%\) |
|---|---|---|---|---|---|---|
| Our method | 2419.3 | 1760.2 | 1423.7 | 1215.7 | 1064 | 948.9 |
| Sparse Poisson regression | 2457.2 | 2118.1 | 1902.5 | 1722.9 | 1562.5 | 1414.1 |

8 Conclusions

We proposed the online robust and sparse GLM based on the \(\gamma\)-divergence. We applied a stochastic optimization approach to reduce the computational complexity and to overcome the computational problem with the hypergeometric series in Poisson regression. We adopted the RSPG, which guarantees the global convergence property for non-convex stochastic optimization problems, as the stochastic optimization approach. We proved that the global convergence property can be extended to the classical first-order necessary condition. In this paper, linear/logistic/Poisson regression problems with \(L_1\) regularization were illustrated in detail. As a result, not only the Poisson case but also the linear/logistic cases can scale well for very large problems by virtue of the stochastic optimization approach. To the best of our knowledge, there was no efficient method for robust and sparse Poisson regression, but we have succeeded in proposing an efficient estimation procedure with an online strategy. The numerical experiments and real data analysis suggested that our methods have good performance in terms of both accuracy and computational cost. However, there remain some problems in Poisson regression, e.g., overdispersion (Dean and Lawless 1989) and zero-inflated Poisson (Lambert 1992). Therefore, it would be useful to extend the Poisson regression to the negative binomial regression and the zero-inflated Poisson regression in future work. Moreover, the accelerated RSPG was proposed in Ghadimi and Lan (2016), and we could adopt it as a stochastic optimization approach to achieve faster convergence than the RSPG.


Acknowledgements

This work was partially supported by JSPS KAKENHI Grant Number 17K00065.

References

  1. Alfons, A., Croux, C., & Gelper, S. (2013). Sparse least trimmed squares regression for analyzing high-dimensional large data sets. The Annals of Applied Statistics, 7(1), 226–248.
  2. Beck, A., & Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1), 183–202. https://doi.org/10.1137/080716542.
  3. Bootkrajang, J., & Kabán, A. (2013). Classification of mislabelled microarrays using robust sparse logistic regression. Bioinformatics, 29(7), 870–877. https://doi.org/10.1093/bioinformatics/btt078.
  4. Borwein, J., & Lewis, A. S. (2010). Convex analysis and nonlinear optimization: Theory and examples. Berlin: Springer Science & Business Media.
  5. Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010 (pp. 177–186). Springer.
  6. Bubeck, S. (2015). Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3–4), 231–357. https://doi.org/10.1561/2200000050.
  7. Chi, E. C., & Scott, D. W. (2014). Robust parametric classification and variable selection by a minimum distance criterion. Journal of Computational and Graphical Statistics, 23(1), 111–128. https://doi.org/10.1080/10618600.2012.737296.
  8. Dean, C., & Lawless, J. F. (1989). Tests for detecting overdispersion in Poisson regression models. Journal of the American Statistical Association, 84(406), 467–472. https://doi.org/10.1080/01621459.1989.10478792.
  9. Duchi, J., & Singer, Y. (2009). Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10, 2899–2934.
  10. Duchi, J., Shalev-Shwartz, S., Singer, Y., & Chandra, T. (2008). Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, ICML '08 (pp. 272–279). New York, NY, USA: ACM. https://doi.org/10.1145/1390156.1390191.
  11. Duchi, J. C., Shalev-Shwartz, S., Singer, Y., & Tewari, A. (2010). Composite objective mirror descent. In COLT 2010 - The 23rd Conference on Learning Theory (pp. 14–26).
  12. Duchi, J. C., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12, 2121–2159.
  13. Fernandes, K., Vinagre, P., & Cortez, P. (2015). A proactive intelligent decision support system for predicting the popularity of online news. In F. Pereira, P. Machado, E. Costa, & A. Cardoso (Eds.), Progress in artificial intelligence (pp. 535–546). Cham: Springer International Publishing.
  14. Fujisawa, H., & Eguchi, S. (2008). Robust parameter estimation with a small bias against heavy contamination. Journal of Multivariate Analysis, 99(9), 2053–2081.
  15. Ghadimi, S., & Lan, G. (2016). Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Mathematical Programming, 156(1), 59–99. https://doi.org/10.1007/s10107-015-0871-8.
  16. Ghadimi, S., Lan, G., & Zhang, H. (2016). Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Mathematical Programming, 155(1–2), 267–305. https://doi.org/10.1007/s10107-014-0846-1.
  17. Hunter, D. R., & Lange, K. (2004). A tutorial on MM algorithms. The American Statistician, 58(1), 30–37.
  18. Kanamori, T., & Fujisawa, H. (2015). Robust estimation under heavy contamination using unnormalized models. Biometrika, 102(3), 559–572.
  19. Kawashima, T., & Fujisawa, H. (2017). Robust and sparse regression via \(\gamma\)-divergence. Entropy, 19, 608. https://doi.org/10.3390/e19110608.
  20. Khan, J. A., Van Aelst, S., & Zamar, R. H. (2007). Robust linear model selection based on least angle regression. Journal of the American Statistical Association, 102(480), 1289–1299.
  21. Kivinen, J., & Warmuth, M. K. (1995). Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132, 1–63.
  22. Lambert, D. (1992). Zero-inflated Poisson regression, with an application to defects in manufacturing. Technometrics, 34(1), 1–14.
  23. McCullagh, P., & Nelder, J. (1989). Generalized linear models (2nd ed.). Chapman and Hall/CRC Monographs on Statistics and Applied Probability Series. London: Chapman & Hall.
  24. Nelder, J. A., & Wedderburn, R. W. M. (1972). Generalized linear models. Journal of the Royal Statistical Society, Series A (General), 135(3), 370–384.
  25. Nesterov, Y. (2007). Gradient methods for minimizing composite objective function. CORE Discussion Papers 2007076, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE).
  26. Rockafellar, R. T. (1970). Convex analysis. Princeton Mathematical Series. Princeton: Princeton University Press.
  27. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58, 267–288.
  28. Xiao, L. (2010). Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11, 2543–2596.
  29. Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B, 67(2), 301–320.

Copyright information

© Japanese Federation of Statistical Science Associations 2019

Authors and Affiliations

  1. Department of Mathematical and Computing Science, Tokyo Institute of Technology, Tokyo, Japan
  2. The Institute of Statistical Mathematics, Tokyo, Japan
  3. Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan
