
Analysis of Error Terms of Signatures Based on Learning with Errors

Conference paper
Information Security and Cryptology – ICISC 2016 (ICISC 2016)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 10157)


Abstract

Lyubashevsky proposed a lattice-based digital signature scheme based on the short integer solution (SIS) problem that does not use trapdoor matrices [12]. Bai and Galbraith showed that the hardness assumption of Lyubashevsky’s scheme can be changed from SIS to SIS and learning with errors (LWE) [4]. Using this change, they were able to compress the signatures. However, Bai and Galbraith’s scheme has additional rejection steps in its algorithms, and these rejection steps decrease the acceptance rate of the signing algorithm. We show mathematically that the rejection step in the key generation algorithm of [4] is unnecessary. Using this fact, we propose a scheme modified from that of [4] and double the acceptance rate of the signing algorithm. Furthermore, our implementation results show that our scheme is about two times faster than that of [4] under similar parameter settings.

S. Jung and D. Roh—National Security Research Institute, Republic of Korea.


References

  1. Ajtai, M.: Generating hard instances of lattice problems. In: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pp. 99–108. ACM (1996)


  2. Ajtai, M., Dwork, C.: A public-key cryptosystem with worst-case/average-case equivalence. In: Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pp. 284–293. ACM (1997)


  3. Applebaum, B., Cash, D., Peikert, C., Sahai, A.: Fast cryptographic primitives and circular-secure encryption based on hard learning problems. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 595–618. Springer, Heidelberg (2009). doi:10.1007/978-3-642-03356-8_35


  4. Bai, S., Galbraith, S.D.: An improved compression technique for signatures based on learning with errors. In: Benaloh, J. (ed.) CT-RSA 2014. LNCS, vol. 8366, pp. 28–47. Springer, Heidelberg (2014). doi:10.1007/978-3-319-04852-9_2


  5. Bernstein, D.J., Lange, T.: eBACS: ECRYPT Benchmarking of Cryptographic Systems (2009)


  6. Chen, Y., Nguyen, P.Q.: BKZ 2.0: better lattice security estimates. In: Lee, D.H., Wang, X. (eds.) ASIACRYPT 2011. LNCS, vol. 7073, pp. 1–20. Springer, Heidelberg (2011). doi:10.1007/978-3-642-25385-0_1


  7. Dagdelen, Ö., Bansarkhani, R., Göpfert, F., Güneysu, T., Oder, T., Pöppelmann, T., Sánchez, A.H., Schwabe, P.: High-speed signatures from standard lattices. In: Aranha, D.F., Menezes, A. (eds.) LATINCRYPT 2014. LNCS, vol. 8895, pp. 84–103. Springer, Heidelberg (2015). doi:10.1007/978-3-319-16295-9_5


  8. Ducas, L., Durmus, A., Lepoint, T., Lyubashevsky, V.: Lattice signatures and bimodal Gaussians. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013. LNCS, vol. 8042, pp. 40–56. Springer, Heidelberg (2013). doi:10.1007/978-3-642-40041-4_3


  9. Goldreich, O., Goldwasser, S., Halevi, S.: Public-key cryptosystems from lattice reduction problems. In: Kaliski, B.S. (ed.) CRYPTO 1997. LNCS, vol. 1294, pp. 112–131. Springer, Heidelberg (1997). doi:10.1007/BFb0052231


  10. Hoffstein, J., Howgrave-Graham, N., Pipher, J., Silverman, J.H., Whyte, W.: NTRUSign: digital signatures using the NTRU lattice. In: Joye, M. (ed.) CT-RSA 2003. LNCS, vol. 2612, pp. 122–140. Springer, Heidelberg (2003). doi:10.1007/3-540-36563-X_9


  11. Lyubashevsky, V.: Lattice-based identification schemes secure under active attacks. In: Cramer, R. (ed.) PKC 2008. LNCS, vol. 4939, pp. 162–179. Springer, Heidelberg (2008). doi:10.1007/978-3-540-78440-1_10


  12. Lyubashevsky, V.: Lattice signatures without trapdoors. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 738–755. Springer, Heidelberg (2012). doi:10.1007/978-3-642-29011-4_43


  13. Micciancio, D., Regev, O.: Worst-case to average-case reductions based on Gaussian measures. SIAM J. Comput. 37(1), 267–302 (2007)


  14. Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. J. ACM (JACM) 56(6), 34 (2009)



Author information

Corresponding author: Jeongsu Kim.

A Appendix: Reusability of Error Terms

We have proved that \(\Vert \mathbf {Ec}\Vert _\infty \) is sufficiently small for a single \(\mathbf {E}\) and a single \(\mathbf {c}\in \mathcal {B}_{n,w}\). It is also important to check that \(\Vert \mathbf {Ec}_i\Vert _\infty \) is small for a single arbitrary \(\mathbf {E}\) and several arbitrary \(\mathbf {c}_i\in \mathcal {B}_{n,w}\).

Remark 4

The events \(\Vert \mathbf {Ec}_i\Vert _\infty \le L\) may be dependent on one another. For example, if we let \(n=3\), \(w=2\), \(\mathbf {c}_1=(1,0,-1)^T\), and \(\mathbf {c}_2=(-1,1,0)^T\), then the two events \(\Vert \mathbf {Ec}_1\Vert _\infty \le L\) and \(\Vert \mathbf {Ec}_2\Vert _\infty \le L\) are dependent.
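This dependence can be checked numerically. The sketch below takes \(m=1\) and toy values \(\sigma =2\), \(L=3\) (not the paper’s parameters), truncating the discrete Gaussian to \([-T,T]\); it computes the two event probabilities and their joint probability exactly (up to truncation) and finds that the joint probability exceeds the product, so the events are positively dependent.

```python
import itertools, math

# Toy check of Remark 4: for n=3, w=2, c1=(1,0,-1)^T, c2=(-1,1,0)^T,
# the events |E c1| <= L and |E c2| <= L are dependent. We take m=1,
# a hypothetical sigma=2 and L=3, and truncate the discrete Gaussian
# to [-T, T]; T=12 makes the ignored tail mass negligible.
sigma, L, T = 2.0, 3, 12
rho = {a: math.exp(-a * a / (2 * sigma ** 2)) for a in range(-T, T + 1)}
Z = sum(rho.values())

pA = pB = pAB = 0.0
for e1, e2, e3 in itertools.product(range(-T, T + 1), repeat=3):
    wgt = rho[e1] * rho[e2] * rho[e3] / Z ** 3
    inA = abs(e1 - e3) <= L    # |E c1| <= L, since E c1 = e1 - e3
    inB = abs(-e1 + e2) <= L   # |E c2| <= L, since E c2 = -e1 + e2
    pA += wgt * inA
    pB += wgt * inB
    pAB += wgt * (inA and inB)

print(pA, pB, pAB, pA * pB)  # pAB != pA * pB, so the events are dependent
```

Both events involve \(e_1\) (the shared column of \(\mathbf {E}\)), which is exactly why independence fails.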

Ignorable Probability. We first define an ignorable probability with respect to \(r\in \mathbb {N}\), a notion distinct from that of a negligible probability.

Definition 2

A probability \(\delta _r\) is ignorable with respect to \(r\in \mathbb {N}\) if

$$ \delta _r\le 1-(1-\epsilon )^r $$

for some negligible probability \(\epsilon \).

Also, for convenience, we define \({{\mathrm{abs}}}(\mathbf {A})\) to be \((|a_{ij}|)\) where \(\mathbf {A}=(a_{ij})\).
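As a quick numerical illustration of Definition 2 (with toy values of \(\epsilon \) and r, not the paper’s parameters): by the union bound, \(1-(1-\epsilon )^r\le r\epsilon \), so an ignorable probability stays small as long as r is much smaller than \(1/\epsilon \).

```python
import math

# Definition 2 in practice: if a single event fails with negligible
# probability eps, then across r trials the bound 1-(1-eps)^r applies,
# and it is itself at most r*eps. eps and r are toy values.
eps, r = 2.0 ** -40, 2 ** 30
ignorable = 1.0 - (1.0 - eps) ** r
assert ignorable <= r * eps                        # union bound
assert ignorable >= -math.expm1(-r * eps) * 0.99   # close to 1 - e^{-r*eps}
print(ignorable, r * eps)
```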

We need to check that the probability

$$ p:={{\mathrm{Pr}}}\left[ \mathbf {E}\leftarrow D_\sigma ^{m\times n},\ \mathbf {c}_i\leftarrow \mathcal {B}_{n,w} : \left\| \mathbf {Ec}_i \right\| _\infty \le L \textit{ for }i=1,\cdots ,r \right] $$

for some \(r\in \mathbb {N}\) is large enough so that \(1-p\) is ignorable with respect to r.

Before analyzing p, we define some notation used in this section. We let \(\mathbf {c}_i=(c_{i,1},c_{i,2},\cdots ,c_{i,n})^T\) and let \(\theta _i\) be a permutation on the set \(\{1,\cdots ,n\}\) such that \(c'_{i,1}:=c_{i,\theta _i(1)},c'_{i,2}:=c_{i,\theta _i(2)},\cdots ,c'_{i,w}:=c_{i,\theta _i(w)}\) are all the nonzero entries of \(\mathbf {c}_i\). We also let \(X_1,\cdots ,X_n\) be iid discrete Gaussian random variables with mean 0 and standard deviation \(\sigma \). Furthermore, we let \(X_{i,1},\cdots ,X_{i,w}\) be the distinct random variables in \(\{X_1,\cdots ,X_n\}\) given by \(X_{i,j}:=X_{\theta _i(j)}\). By construction,

$$ c_{i,1}X_1+c_{i,2}X_2+\cdots +c_{i,n}X_n=c'_{i,1}X_{i,1}+c'_{i,2}X_{i,2}+\cdots +c'_{i,w}X_{i,w}. $$

If we let

$$\begin{aligned} p_a:= & {} {{\mathrm{Pr}}}\left[ \left| c_{i,1}X_1+c_{i,2}X_2+\cdots +c_{i,n}X_n \right| \le L \textit{ for } i=1,\cdots ,r \right] \\= & {} {{\mathrm{Pr}}}\left[ \left| c'_{i,1}X_{i,1}+c'_{i,2}X_{i,2}+\cdots +c'_{i,w}X_{i,w} \right| \le L \textit{ for } i=1,\cdots ,r \right] , \end{aligned}$$

then \(p=p_a^m\). Therefore, it is enough to show that \(p_a\) is large enough. However, it is not easy to analyze \(p_a\) directly because of the alternating signs of \(\mathbf {c}_i\)’s entries. Instead, we consider the probability

$$\begin{aligned} p_b:= & {} {{\mathrm{Pr}}}\left[ |c_{i,1}||X_1|+|c_{i,2}||X_2|+\cdots +|c_{i,n}||X_n| \le L \textit{ for } i=1,\cdots ,r \right] \\= & {} {{\mathrm{Pr}}}\left[ |c'_{i,1}||X_{i,1}|+|c'_{i,2}||X_{i,2}|+\cdots +|c'_{i,w}||X_{i,w}| \le L \textit{ for } i=1,\cdots ,r \right] \\= & {} {{\mathrm{Pr}}}\left[ |X_{i,1}|+|X_{i,2}|+\cdots + |X_{i,w}| \le L \textit{ for } i=1,\cdots ,r \right] \end{aligned}$$

which is smaller than \(p_a\). We also consider the probability

$$\begin{aligned} p_c:={{\mathrm{Pr}}}\left[ |Y_{i,1}|+|Y_{i,2}|+\cdots + |Y_{i,w}| \le L \textit{ for } i=1,\cdots ,r \right] \end{aligned}$$

where \(Y_{i,j}\) are iid discrete Gaussian random variables with mean 0 and standard deviation \(\sigma \) for every i and j. We can show that \(p_c \le p_b\) from Theorem 4. Therefore, since \(p_c\le p_b\le p_a\), it is enough to show that \(p_c\) is big enough so that \(1-p_c^m\) is ignorable with respect to r.

Remark 5

\(p_c^m={{\mathrm{Pr}}}[ \Vert {{\mathrm{abs}}}(\mathbf {E}_i) {{\mathrm{abs}}}(\mathbf {c}_i) \Vert _\infty \le L \textit{ for } i=1,\cdots ,r ] \) where the \(\mathbf {E}_i\)’s are mutually independent \(m\times n\) matrices of discrete Gaussian random variables with mean 0 and standard deviation \(\sigma \). Therefore, \(p_c^m=\Pr [\Vert {{\mathrm{abs}}}(\mathbf {E}){{\mathrm{abs}}}(\mathbf {c})\Vert _\infty \le L]^r\).

Now, let

$$ p_*:={{\mathrm{Pr}}}[|Y_1|+|Y_2|+\cdots +|Y_w|\le L] $$

where the \(Y_{j}\) are iid discrete Gaussian random variables with mean 0 and standard deviation \(\sigma \) for \(j=1,\cdots ,w\). Then \(p_c=p_*^r\), and \(\Pr [\Vert {{\mathrm{abs}}}(\mathbf {E}){{\mathrm{abs}}}(\mathbf {c})\Vert _\infty \le L]=p_*^m\). If the probability

$$ 1-\Pr [\Vert {{\mathrm{abs}}}(\mathbf {E}){{\mathrm{abs}}}(\mathbf {c})\Vert _\infty \le L]=1-p_*^m $$

is negligible, then we can finally say that the probabilities

$$\begin{aligned} 1-p=1-p_a^m\le 1-p_b^m\le 1-p_c^m=1-(p_*^m)^r=1-(1-(1-p_*^m))^r \end{aligned}$$

are ignorable with respect to r.

In order to prove that \(1-p_*^m\) is negligible, we first check that \(1-p_*\) is negligible.

Theorem 3

Let \(Y_1,\cdots ,Y_w\) be iid discrete Gaussian random variables with mean 0 and standard deviation \(\sigma \). Then

$$ {{\mathrm{Pr}}}[|Y_1|+\cdots +|Y_w|>L]<2^w e^{-L^2/2\sigma ^2 w}. $$

Proof

From Markov’s inequality, for any \(t>0\),

$$\begin{aligned} {{\mathrm{Pr}}}[|Y_1|+\cdots +|Y_w|>L]= & {} {{\mathrm{Pr}}}\left[ e^{{t\over \sigma ^2}(|Y_1|+\cdots +|Y_w|)}>e^{{t\over \sigma ^2}L} \right] \\\le & {} {\mathrm {E}\left[ e^{\frac{t}{\sigma ^2}(|Y_1|+\cdots +|Y_w|)}\right] \over e^{{t\over \sigma ^2}L}}={\mathrm {E}\left[ e^{{t\over \sigma ^2}|Y_1|} \right] ^w \over e^{{t\over \sigma ^2}L}} \end{aligned}$$

Also,

$$\begin{aligned} \mathrm {E}\left[ e^{{t\over \sigma ^2}|Y_1|} \right]= & {} \sum _{a\in \mathbb {Z}}e^{{t\over \sigma ^2}|a|}f_\sigma (a)\\= & {} {\rho _\sigma (0)\over \rho _\sigma (\mathbb {Z})}+\sum _{a\in \mathbb {N}}2e^{{t\over \sigma ^2}a}{\rho _\sigma (a)\over \rho _\sigma (\mathbb {Z})}\\= & {} {1\over \rho _\sigma (\mathbb {Z})}\left( 1+\sum _{a\in \mathbb {N}}2e^{{t\over \sigma ^2}a}e^{-{a^2\over 2\sigma ^2}} \right) \\< & {} {e^{t^2\over 2\sigma ^2}\over \rho _\sigma (\mathbb {Z})}\left( 2e^{-{t^2\over 2\sigma ^2}}+\sum _{a\in \mathbb {N}} 2e^{-{1\over 2\sigma ^2}(a-t)^2}\right) \\< & {} e^{t^2\over 2\sigma ^2 }\left( {\rho _{t,\sigma }(\mathbb {Z})\over \rho _\sigma (\mathbb {Z})}+{\rho _{-t,\sigma }(\mathbb {Z})\over \rho _\sigma (\mathbb {Z})} \right) \end{aligned}$$

where \(\rho _{\mu ,\sigma }(\mathbb {Z}):=\sum _{a\in \mathbb {Z}}e^{-(a-\mu )^2/2\sigma ^2}\) for \(\mu ,\sigma \in \mathbb {R}\). From Lemma 2.9 of [13], we know that \({\rho _{\pm t,\sigma }(\mathbb {Z})\over \rho _\sigma (\mathbb {Z})}\le 1\). Therefore, letting \(t=L/w\), which minimizes the resulting upper bound \(2^w e^{(wt^2/2-tL)/\sigma ^2}\), we get \({{\mathrm{Pr}}}[|Y_1|+\cdots +|Y_w|>L]<2^w e^{-L^2/2\sigma ^2 w}\).    \(\square \)
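The role of the choice \(t=L/w\) can be made explicit: the argument above yields the bound \(2^w e^{(wt^2/2-tL)/\sigma ^2}\) for every \(t>0\), and the exponent is minimized as follows.

```latex
\frac{d}{dt}\left(\frac{wt^2/2-tL}{\sigma^2}\right)
  = \frac{wt-L}{\sigma^2} = 0
  \quad\Longrightarrow\quad t=\frac{L}{w},
\qquad
\frac{w(L/w)^2/2-(L/w)L}{\sigma^2} = -\frac{L^2}{2\sigma^2 w}.
```

Substituting this optimal t back in gives the stated bound \(2^w e^{-L^2/2\sigma ^2 w}\).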

Since we set the bound \(L=\lambda \sigma w\), we have

$$\begin{aligned} 1-p_*^m=1-(1-(1-p_*))^m < 1-(1-2^w e^{-L^2/2\sigma ^2 w})^m =1-(1-2^w e^{-\lambda ^2w/2})^m. \end{aligned}$$

According to the parameters we have chosen, \(1-(1-2^w e^{-\lambda ^2w/2})^m< 2^{-55}\).

In fact, we can compute \(p_*\) exactly once the parameters are chosen. Let \(\mathbf {p}:=(p_0,p_1,\cdots ,p_L)^T\) be the vector whose entry \(p_i\) is the probability that the absolute value of a discrete Gaussian random variable with mean 0 and standard deviation \(\sigma \) equals i, for \(i=0,\cdots ,L\). In other words, \(p_0=f_\sigma (0)\) and \(p_i=2f_\sigma (i)\) for \(i=1,\cdots ,L\). If we let \(Y_1,Y_2,\cdots \) be iid discrete Gaussian random variables with mean 0 and standard deviation \(\sigma \), then \(\mathbf {p}\) lists the probabilities \(\Pr [|Y_j|=i]\) for \(i=0,\cdots ,L\). If we convolve \(\mathbf {p}\) with itself w times (the convolution of two vectors \(\mathbf {a}=(a_0,\cdots ,a_s)^T\) and \(\mathbf {b}=(b_0,\cdots ,b_t)^T\) is defined as \(\mathbf {a} *\mathbf {b}:=(a_0 b_0,\cdots , \sum _{i+j=k}a_i b_j , \cdots ,a_s b_t)^T\in \mathbb {R}^{s+t+1}\)),

$$ \underbrace{\mathbf {p}*\cdots *\mathbf {p}}_w=(p'_0, p'_1, \cdots ,p'_L, \cdots , p'_{wL})^T $$

then \(p'_i\) is exactly the probability that \(|Y_1|+\cdots +|Y_w|=i\) for \(i=0,\cdots ,L\). Since we can calculate \(\mathbf {p}\), we can calculate the probability \(p_*=\sum _{i=0}^Lp'_i\) that \(|Y_1|+\cdots +|Y_w|\le L\), and hence \(1-p_*^m\). From our parameter settings, \(1-p_*^m \le 2^{-62}\) for \(n=512\), \(1-p_*^m\le 2^{-66}\) for \(n=400\), and \(1-p_*^m\approx 2^{-60}\) for \(n=640\).
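The convolution procedure above can be sketched directly. The values of \(\sigma \) and w below are toy choices (the paper’s parameters are not repeated here), with \(L=2w\sigma \), i.e. \(\lambda =2\); the truncation point T for the normalizer \(\rho _\sigma (\mathbb {Z})\) is also an implementation choice.

```python
import math

# Exact computation of p_* = Pr[|Y_1|+...+|Y_w| <= L] via the w-fold
# convolution described above. sigma and w are toy values; L = 2*w*sigma.
sigma, w = 2.0, 4
L = int(2 * w * sigma)

T = 200  # truncation range for the normalizer rho_sigma(Z)
rho = lambda a: math.exp(-a * a / (2 * sigma ** 2))
Z = rho(0) + 2 * sum(rho(a) for a in range(1, T))

# p[i] = Pr[|Y| = i]; entries above L cannot contribute to p'_i for i <= L,
# because every summand of a total <= L is itself <= L.
p = [rho(0) / Z] + [2 * rho(i) / Z for i in range(1, L + 1)]

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

conv = [1.0]
for _ in range(w):          # w-fold convolution of p with itself
    conv = convolve(conv, p)

p_star = sum(conv[: L + 1])  # Pr[|Y_1|+...+|Y_w| <= L]
tail = 1.0 - p_star
print(p_star, tail)

# Theorem 3's Chernoff-style bound must dominate the exact tail
assert tail < 2 ** w * math.exp(-L ** 2 / (2 * sigma ** 2 * w))
```

Note how much tighter the exact tail is than the closed-form bound of Theorem 3, which matches the paper’s strategy of computing \(p_*\) exactly for the chosen parameters.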

Since we have seen that \(1-p_*^m\) is negligibly small, it remains to prove that \(p_c\le p_b\), which follows directly from Theorem 4. Before presenting Theorem 4, we need the lemma below.

Lemma 3

Let \(\{p_i\}_i\) and \(\{q_i\}_i\) be sequences of nonnegative reals such that \(\sum _{i=0}^\infty p_i=\sum _{i=0}^\infty q_i<\infty \). If there exists \(N> 0\) such that \(p_i \le q_i\) for all \(i\ge N\) and \(p_i > q_i\) for all \(i<N\), then for any nonnegative decreasing sequence \(\{w_i\}_i\),

$$\begin{aligned} \sum _{i=0}^m w_i p_i \ge \sum _{i=0}^m w_i q_i \text { for any } m\ge 0. \end{aligned}$$

Proof

First, consider \(C_m:=\sum _{i=0}^m (p_i-q_i )\). If \(m<N\), then clearly \(C_m\ge 0\). If \(m\ge N\), then \(p_i-q_i\le 0\) for all \(i \ge N\), so \(C_{N-1},C_N,C_{N+1},\cdots \) is a decreasing sequence that converges to \(\sum _{i=0}^\infty (p_i-q_i)=0\); hence \(C_m\ge 0\) in this case as well. Therefore \(C_m \ge 0\) for all \(m \ge 0\). Now consider

$$\begin{aligned} S_m:=\sum _{i=0}^m w_i(p_i-q_i)=\sum _{i=0}^m w_i p_i-\sum _{i=0}^m w_i q_i. \end{aligned}$$

If \(m<N\), then \(S_m\ge 0\). If \(m\ge N\), then

$$\begin{aligned} S_m\ge w_N \sum _{i=0}^m \left( p_i-q_i \right) \ge 0. \end{aligned}$$

Therefore, \(S_m=\sum _{i=0}^m w_i p_i-\sum _{i=0}^m w_i q_i\ge 0\) for any \(m\ge 0\).    \(\square \)
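Lemma 3 can be sanity-checked numerically. The concrete sequences below are illustrative only: p puts more mass at small indices, q is flat, they have equal total mass and cross exactly once, and the weights are nonnegative and decreasing, so every weighted partial sum of p should dominate that of q.

```python
# Numerical sanity check of Lemma 3 with illustrative sequences.
p = [0.4, 0.3, 0.2, 0.1]      # p_i > q_i for i < N = 2, p_i <= q_i after
q = [0.25, 0.25, 0.25, 0.25]  # equal total mass: sum(p) == sum(q) == 1
wts = [1.0, 0.8, 0.5, 0.1]    # nonnegative and decreasing

assert abs(sum(p) - sum(q)) < 1e-12
for m in range(len(p)):
    lhs = sum(wts[i] * p[i] for i in range(m + 1))
    rhs = sum(wts[i] * q[i] for i in range(m + 1))
    assert lhs >= rhs, (m, lhs, rhs)
print("Lemma 3 inequality holds for every partial sum")
```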

Using the previous lemma, we can now prove the theorem below.

Theorem 4

For any positive integer r, let \(X_1,\cdots ,X_n\), \(Y_j\), and \(Y_{i,j}\) be iid discrete random variables for \(i=1,\cdots ,r\) and \(j=1,\cdots ,w\). For each i, let \(X_{i,1}, X_{i,2}, \cdots , X_{i,w}\) be distinct random variables in \(\left\{ X_1,\cdots ,X_n \right\} \) (\(X_{i_0,j_0}=X_{i_1,j_1}\) can happen when \(i_0\ne i_1\)). Then

$$\begin{aligned}&{{\mathrm{Pr}}}\left[ \left| X_{i,1}\right| +\left| X_{i,2}\right| +\cdots +\left| X_{i,w}\right| \le L \text { for }i=1,\cdots ,r\right] \\\ge & {} {{\mathrm{Pr}}}\left[ \left| Y_{i,1}\right| +\left| Y_{i,2}\right| +\cdots +\left| Y_{i,w}\right| \le L \text { for }i=1,\cdots ,r\right] \\= & {} {{\mathrm{Pr}}}\left[ \left| Y_1 \right| +\left| Y_2 \right| +\cdots +\left| Y_w \right| \le L\right] ^{r} \end{aligned}$$

Proof

Using Lemma 3, we can now prove Theorem 4. Let

$$\begin{aligned} A_i :=\left\{ \left| X_{i,1}\right| +\left| X_{i,2}\right| +\cdots +\left| X_{i,w}\right| \le L \right\}&\text {and}&\\ A'_i :=\left\{ \left| Y_{i,1}\right| +\left| Y_{i,2}\right| +\cdots +\left| Y_{i,w}\right| \le L \right\} . \end{aligned}$$

Then the inequality in Theorem 4 can be represented as \({{\mathrm{Pr}}}[\cap _{i=1}^r A_i]\ge {{\mathrm{Pr}}}[\cap _{i=1}^r A'_i]=\prod _{i=1}^r{{\mathrm{Pr}}}[ A'_i]\). If we show that

$$ {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s}A_i\right] \ge {{\mathrm{Pr}}}\left[ A'_s\right] {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i\right] $$

for \(s\ge 2\), then, applying this repeatedly and using \({{\mathrm{Pr}}}[A_1]={{\mathrm{Pr}}}[A'_1]\),

$$ {{\mathrm{Pr}}}\left[ \cap _{i=1}^{r}A_i\right] \ge {{\mathrm{Pr}}}\left[ A_1\right] \prod _{s=2}^{r}{{\mathrm{Pr}}}\left[ A'_s\right] =\prod _{i=1}^{r}{{\mathrm{Pr}}}\left[ A'_i\right] . $$

Since \({{\mathrm{Pr}}}[A_s]={{\mathrm{Pr}}}[A'_s]\), the inequality above is equivalent to \({{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, A_s\right] \ge {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i\right] \). Therefore, it is enough to compare the probabilities \({{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, A_s\right] \) and \({{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i\right] \) for \(s\ge 2\). If none of \(X_{s,1},\cdots ,X_{s,w}\) appears among the random variables defining \(A_1,\cdots ,A_{s-1}\), then \(\cap _{i=1}^{s-1}A_i\) and \(A_s\) are independent, i.e. \({{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, A_s\right] ={{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i\right] \). Suppose otherwise. Without loss of generality, we may assume that \(X_{s,1},\cdots ,X_{s,k}\) appear among the random variables defining \(A_1,\cdots ,A_{s-1}\) while \(X_{s,k+1},\cdots ,X_{s,w}\) do not,

for some \(k\le w\). Then the two probabilities can be represented as

$$\begin{aligned} {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, A_s\right]= & {} \sum _{t_1=0}^{L}{{\mathrm{Pr}}}\left[ \left| X_{s,1}\right| =t_1 \,\big |\, A_s\right] {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, \left| X_{s,1}\right| =t_1,\ \left| X_{s,2}\right| +\cdots +\left| X_{s,w}\right| \le L-t_1\right] ,\\ {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i\right]= & {} \sum _{t_1\ge 0}{{\mathrm{Pr}}}\left[ \left| X_{s,1}\right| =t_1\right] {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, \left| X_{s,1}\right| =t_1\right] . \end{aligned}$$

If we let

$$\begin{aligned} p_{t_1}:= & {} {{\mathrm{Pr}}}\left[ \left| X_{s,1}\right| =t_1 \,\big |\, A_s\right] ,\qquad q_{t_1}:={{\mathrm{Pr}}}\left[ \left| X_{s,1}\right| =t_1\right] ,\\ w'_{t_1}:= & {} {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, \left| X_{s,1}\right| =t_1,\ \left| X_{s,2}\right| +\cdots +\left| X_{s,w}\right| \le L-t_1\right] ,\\ w_{t_1}:= & {} {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, \left| X_{s,1}\right| =t_1\right] , \end{aligned}$$

then we can see that \(p_{t_1}\) and \(q_{t_1}\) are probability mass functions of nonnegative discrete random variables, and \(\left\{ w_{t_1}\right\} _{t_1}\) is a decreasing sequence. Note that

$$\begin{aligned} p_{t_1}= & {} {{\mathrm{Pr}}}\left[ \left| X_{s,1}\right| =t_1\right] {{{{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| +\cdots \left| X_{s,w}\right| \le L-t_1\right] }\over {{{\mathrm{Pr}}}\left[ \left| X_{s,1}\right| +\cdots \left| X_{s,w}\right| \le L\right] }}\\= & {} q_{t_1}{{{{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| +\cdots \left| X_{s,w}\right| \le L-t_1\right] }\over {{{\mathrm{Pr}}}\left[ \left| X_{s,1}\right| +\cdots \left| X_{s,w}\right| \le L\right] }}\\ l_{t_1}:= & {} {{{{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| +\cdots \left| X_{s,w}\right| \le L-t_1\right] }\over {{{\mathrm{Pr}}}\left[ \left| X_{s,1}\right| +\cdots \left| X_{s,w}\right| \le L\right] }}. \end{aligned}$$

Note that \(\{ l_{t_1}\}_{t_1}\) is a decreasing sequence that converges to 0 and that \(l_0>1\). Therefore, there exists \(N_1>0\) such that \(p_{t_1}\le q_{t_1}\) for all \(t_1\ge N_1\) and \(p_{t_1} \ge q_{t_1}\) for all \(t_1<N_1\). If we show that \(w'_{t_1}\ge w_{t_1}\) for \(0\le t_1\le L\), then from Lemma 3,

$$ {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, A_s\right] =\sum _{t_1=0}^{L} p_{t_1}w'_{t_1}\ge \sum _{t_1=0}^{L} p_{t_1}w_{t_1}\ge \sum _{t_1\ge 0}q_{t_1}w_{t_1}={{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i\right] . $$

Therefore, it is enough to show that \(w'_{t_1}\ge w_{t_1}\) for \(0\le t_1\le L\). We can also write

$$\begin{aligned} w'_{t_1}= & {} \sum _{t_2=0}^{L-t_1}{{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| =t_2 \,\big |\, \left| X_{s,2}\right| +\cdots +\left| X_{s,w}\right| \le L-t_1\right] \\&\times {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, \left| X_{s,1}\right| =t_1,\left| X_{s,2}\right| =t_2,\ \left| X_{s,3}\right| +\cdots +\left| X_{s,w}\right| \le L-t_1-t_2\right] ,\\ w_{t_1}= & {} \sum _{t_2\ge 0}{{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| =t_2\right] {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, \left| X_{s,1}\right| =t_1,\left| X_{s,2}\right| =t_2\right] . \end{aligned}$$

Similarly, if we let

$$\begin{aligned} p_{t_1,t_2}:= & {} {{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| =t_2 \,\big |\, \left| X_{s,2}\right| +\cdots +\left| X_{s,w}\right| \le L-t_1\right] ,\qquad q_{t_1,t_2}:={{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| =t_2\right] ,\\ w'_{t_1,t_2}:= & {} {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, \left| X_{s,1}\right| =t_1,\left| X_{s,2}\right| =t_2,\ \left| X_{s,3}\right| +\cdots +\left| X_{s,w}\right| \le L-t_1-t_2\right] ,\\ w_{t_1,t_2}:= & {} {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, \left| X_{s,1}\right| =t_1,\left| X_{s,2}\right| =t_2\right] , \end{aligned}$$

then we can see that \(\{p_{t_1,t_2}\}_{t_2}\) and \(\{q_{t_1,t_2}\}_{t_2}\) are probability mass functions of nonnegative discrete random variables, and \(\{w_{t_1,t_2}\}_{t_2}\) is a decreasing sequence. Note that

$$\begin{aligned} p_{t_1,t_2}= & {} {{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| =t_2\right] {{{{\mathrm{Pr}}}\left[ \left| X_{s,3}\right| +\cdots \left| X_{s,w}\right| \le (L-t_1)-t_2\right] }\over {{{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| +\cdots \left| X_{s,w}\right| \le L-t_1\right] }}\\= & {} q_{t_1,t_2} {{{{\mathrm{Pr}}}\left[ \left| X_{s,3}\right| +\cdots \left| X_{s,w}\right| \le (L-t_1)-t_2\right] }\over {{{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| +\cdots \left| X_{s,w}\right| \le L-t_1\right] }}\\ l_{t_1,t_2}:= & {} {{{{\mathrm{Pr}}}\left[ \left| X_{s,3}\right| +\cdots \left| X_{s,w}\right| \le (L-t_1)-t_2\right] }\over {{{\mathrm{Pr}}}\left[ \left| X_{s,2}\right| +\cdots \left| X_{s,w}\right| \le L-t_1\right] }}. \end{aligned}$$

Note that \(\{l_{t_1,t_2}\}_{t_2}\) is a decreasing sequence that converges to 0 and that \(l_{t_1,0}>1\). Therefore, there exists \(N_2>0\) such that \(p_{t_1,t_2}\le q_{t_1,t_2}\) for all \(t_2\ge N_2\) and \(p_{t_1,t_2} \ge q_{t_1,t_2}\) for all \(t_2<N_2\). If we show that \(w'_{t_1,t_2}\ge w_{t_1,t_2}\) for \(0\le t_1+t_2\le L\), then from Lemma 3,

$$ w'_{t_1}=\sum _{t_2=0}^{L-t_1} p_{t_1,t_2}w'_{t_1,t_2}\ge \sum _{t_2=0}^{L-t_1} p_{t_1,t_2}w_{t_1,t_2}\ge \sum _{t_2\ge 0}q_{t_1,t_2}w_{t_1,t_2}=w_{t_1}. $$

Therefore, it is enough to show that \(w'_{t_1,t_2}\ge w_{t_1,t_2}\) for \(0\le t_1+t_2\le L\). Using similar notations and arguments, we can see that it is enough to prove that \(w'_{t_1,\cdots ,t_k}\ge w_{t_1,\cdots ,t_k}\) for \(0\le t_1+\cdots +t_k\le L\). Since the random variables \(X_{s,k+1},X_{s,k+2},\cdots ,X_{s,w}\) are independent of the random variables defining \(\cap _{i=1}^{s-1}A_i\),

$$\begin{aligned} w'_{t_1,\cdots ,t_k}= & {} {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, \left| X_{s,1}\right| =t_1,\cdots ,\left| X_{s,k}\right| =t_k,\ \left| X_{s,k+1}\right| +\cdots +\left| X_{s,w}\right| \le L-t_1-\cdots -t_k\right] \\= & {} {{\mathrm{Pr}}}\left[ \cap _{i=1}^{s-1}A_i \,\big |\, \left| X_{s,1}\right| =t_1,\cdots ,\left| X_{s,k}\right| =t_k\right] =w_{t_1,\cdots ,t_k}. \end{aligned}$$

   \(\square \)

Since Theorem 4 directly implies \(p_c\le p_b\), we can say that \(1-p\) is ignorable with respect to r. According to our parameter settings, if \(r=2^{30}\), then \(1-p\le 1-(p_*^m)^r\le 2^{-30}\). In other words, the probability that \(\Vert \mathbf {Ec}_i\Vert _\infty \le L\) for \(i=1,\cdots ,2^{30}\) is at least \(1-2^{-30}\) for \(L=2w\sigma \). We consider this probability high enough to justify eliminating the rejection process for the keys in Algorithm 1 of [4].
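Theorem 4 can also be sanity-checked exactly on a tiny instance. The sketch below takes \(n=3\), \(w=2\), \(r=2\) with toy values of \(\sigma \) and L and a truncated discrete Gaussian; the two events share the variable \(X_2\), and their joint probability indeed dominates the product for independent copies.

```python
import itertools, math

# Tiny exact instance of Theorem 4: A_1 = {|X_1|+|X_2| <= L} and
# A_2 = {|X_2|+|X_3| <= L} share X_2. sigma, L, T are toy values.
sigma, L, T = 2.0, 5, 12
rho = {a: math.exp(-a * a / (2 * sigma ** 2)) for a in range(-T, T + 1)}
Z = sum(rho.values())

p_joint = p_single = 0.0
for x1, x2, x3 in itertools.product(range(-T, T + 1), repeat=3):
    wgt = rho[x1] * rho[x2] * rho[x3] / Z ** 3
    a1 = abs(x1) + abs(x2) <= L
    a2 = abs(x2) + abs(x3) <= L
    p_joint += wgt * (a1 and a2)
for y1, y2 in itertools.product(range(-T, T + 1), repeat=2):
    p_single += rho[y1] * rho[y2] / Z ** 2 * (abs(y1) + abs(y2) <= L)

# Theorem 4: Pr[A_1 and A_2] >= Pr[|Y_1|+|Y_2| <= L]^2
assert p_joint >= p_single ** 2
print(p_joint, p_single ** 2)
```

The shared variable makes the events positively associated, which is exactly the direction of the inequality the theorem guarantees.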


Copyright information

© 2017 Springer International Publishing AG


Cite this paper

Kim, J. et al. (2017). Analysis of Error Terms of Signatures Based on Learning with Errors. In: Hong, S., Park, J. (eds) Information Security and Cryptology – ICISC 2016. ICISC 2016. Lecture Notes in Computer Science(), vol 10157. Springer, Cham. https://doi.org/10.1007/978-3-319-53177-9_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-53176-2

  • Online ISBN: 978-3-319-53177-9

