Logistic regression model training based on the approximate homomorphic encryption
Abstract
Background
Security concerns have been raised since big data became a prominent tool in data analysis. For instance, many machine learning algorithms aim to generate prediction models from training data that contain sensitive information about individuals. The cryptography community considers secure computation a solution for privacy protection, and practical requirements have triggered research on the efficiency of cryptographic primitives.
Methods
This paper presents a method to train a logistic regression model without information leakage. We apply the homomorphic encryption scheme of Cheon et al. (ASIACRYPT 2017) for efficient arithmetic over real numbers, and devise a new encoding method to reduce the storage requirements of the encrypted database. In addition, we adapt Nesterov’s accelerated gradient method to reduce the number of iterations as well as the computational cost while maintaining the quality of the output classifier.
Results
Our method shows state-of-the-art performance of a homomorphic encryption system in a real-world application. The submission based on this work was selected as the best solution of Track 3 at the iDASH privacy and security competition 2017. For example, it took about six minutes to obtain a logistic regression model from a dataset consisting of 1579 samples, each of which has 18 features and a binary outcome variable.
Conclusions
We present a practical solution for outsourcing analysis tools such as logistic regression analysis while preserving the data confidentiality.
Keywords
Homomorphic encryption · Machine learning · Logistic regression
Abbreviations
AUC: Area under the receiver operating characteristic curve
CV: Cross validation
GD: Gradient descent
HE: Homomorphic encryption
ML: Machine learning
Background
Machine learning (ML) is a class of methods in artificial intelligence whose characteristic feature is that they do not give the solution of a particular problem directly but learn a process for finding solutions to a set of similar problems. The theory of ML appeared in the early 1960s on the basis of achievements in cybernetics [1] and gave impetus to the development of the theory and practice of technically complex learning systems [2]. The goal of ML is to partially or fully automate the solution of complicated tasks in various fields of human activity.
The scope of ML applications is constantly expanding; however, with the rise of ML, security has become an important issue. For example, many medical decisions rely on logistic regression models, and biomedical data usually contain confidential information about individuals [3] that should be treated carefully. Therefore, privacy and security of data are major concerns, especially when deploying outsourced analysis tools.
There have been several studies on secure computation based on cryptographic primitives. Nikolaenko et al. [4] presented a privacy-preserving linear regression protocol on horizontally partitioned data using Yao’s garbled circuits [5]. Multi-party computation techniques have also been applied to privacy-preserving logistic regression [6, 7, 8]. However, this approach is vulnerable when a party behaves dishonestly, and the assumptions behind secret sharing are quite different from those of outsourced computation.
Homomorphic encryption (HE) is a cryptosystem that allows us to perform certain arithmetic operations on encrypted data and receive an encrypted result that corresponds to the result of the same operations performed on the plaintext. Several papers have already discussed ML with HE techniques. Wu et al. [9] used the Paillier cryptosystem [10] and approximated the logistic function by polynomials, but the computational cost grew exponentially in the degree of the approximating polynomial. Aono et al. [11] and Xie et al. [12] used an additive HE scheme to aggregate intermediate statistics. However, the scenario of Aono et al. relies on the client to decrypt these intermediate statistics, and the method of Xie et al. requires an expensive computation to calculate them. The research most closely related to this paper is the work of Kim et al. [13], which also used HE-based ML. However, its encrypted data size and learning time were highly dependent on the number of features, so the performance on large datasets was not practical in terms of storage and computational cost.
Since 2011, the iDASH Privacy and Security Workshop has assembled specialists in privacy technology to discuss issues of biomedical data sharing, together with the main stakeholders, who provided an overview of the main uses of the data, the relevant laws and regulations, and their own views on privacy. In addition, annual competitions have been held on the basis of the workshop since 2014. The goal of this challenge is to evaluate the performance of state-of-the-art methods that ensure rigorous data confidentiality during data analysis in a cloud environment.
In this paper, we provide a solution to the third track of the iDASH 2017 competition, which aims to develop HE-based secure solutions for building an ML model (i.e., logistic regression) on encrypted data. We propose a general, practical solution for HE-based ML that demonstrates good performance and low storage costs. In practice, our output quality is comparable to that of the unencrypted learning case. As a basis, we use the HE scheme for approximate arithmetic [14]. To improve the performance, we apply several additional techniques, including a packing method that reduces the required storage space and computation time. We also adopt Nesterov’s accelerated gradient [15] to increase the speed of convergence. As a result, we can obtain a high-accuracy classifier using only a small number of iterations.
We give an open-source implementation [16] to demonstrate the performance of our HE-based ML method. With our packing method we can encrypt the dataset of 1579 samples and 18 features using 39 MB of memory. The encrypted learning time is about six minutes. We also run our implementation on the datasets used in [13] to compare the results. For example, training a logistic regression model took about 3.6 min using about 0.02 GB of storage, compared to 114 min and 0.69 GB for Kim et al. [13], on a dataset of 1253 samples with 9 features each.
Methods
Logistic regression
Logistic regression, or the logit model, is an ML model used to predict the probability of occurrence of an event by fitting data to a logistic curve [17]. It is widely used in various fields including machine learning, biomedicine [18], genetics [19], and the social sciences [20].
Gradient descent
Gradient descent (GD) is an iterative method for finding a local extremum (minimum or maximum) of a function by moving along its gradient: to minimize, one steps in the direction opposite to the gradient. The step size along this direction can be chosen by one-dimensional (line search) optimization methods.
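As a plain (unencrypted) illustration, the following minimal Python sketch minimizes a one-dimensional function by stepping against its gradient; the example function, step size, and iteration count are illustrative choices, not parameters from this paper.

```python
def gradient_descent(grad, x0, alpha=0.1, iters=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(iters):
        x -= alpha * grad(x)  # move opposite to the gradient direction
    return x

# Example: f(x) = (x - 3)^2 has gradient 2*(x - 3) and minimum at x = 3.
xmin = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

With a fixed step size the error shrinks geometrically here; in the encrypted setting the step-size schedule matters more because each iteration consumes ciphertext levels.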
Nesterov’s accelerated gradient
The GD method can zigzag around a local optimum, and this behavior becomes more typical as the number of variables of the objective function grows. Many GD optimization algorithms are widely used to overcome this phenomenon. The momentum method, for example, dampens oscillation by accumulating an exponential moving average of the gradient of the loss function J:
\[ \mathbf{v}^{(t+1)} = \gamma_{t}\cdot \mathbf{v}^{(t)} + \alpha_{t}\cdot \nabla J\left(\boldsymbol{\beta}^{(t)}\right), \qquad \boldsymbol{\beta}^{(t+1)} = \boldsymbol{\beta}^{(t)} - \mathbf{v}^{(t+1)}, \]
where 0<γ_{t}<1 is a moving average smoothing parameter. Nesterov’s accelerated gradient [15] instead evaluates the gradient at a look-ahead point and updates the pair of vectors as
\[ \boldsymbol{\beta}^{(t+1)} = \mathbf{v}^{(t)} - \alpha_{t}\cdot \nabla J\left(\mathbf{v}^{(t)}\right), \qquad \mathbf{v}^{(t+1)} = (1-\gamma_{t})\cdot \boldsymbol{\beta}^{(t+1)} + \gamma_{t}\cdot \boldsymbol{\beta}^{(t)}. \tag{1} \]
Approximate homomorphic encryption
HE is a cryptographic scheme that allows us to carry out operations on encrypted data without decryption. Cheon et al. [14] presented a method to construct a HE scheme for arithmetic of approximate numbers (called HEAAN in what follows). The main idea is to treat an encryption noise as part of error occurring during approximate computations. That is, an encryption ct of message \(m \in {\mathcal {R}}\) by a secret key sk for a ciphertext modulus q will have a decryption structure of the form 〈ct,sk〉=m+e (mod q) for some small e.
KeyGen(1^{λ}).
  – For an integer L that corresponds to the largest ciphertext modulus level, given the security parameter λ, output the ring dimension N which is a power of two.
  – Set the small distributions χ_{key},χ_{err},χ_{enc} over \({\mathcal R}\) for secret, error, and encryption, respectively.
  – Sample a secret s←χ_{key}, a random \(a\leftarrow {\mathcal R}_{L}\) and an error e←χ_{err}. Set the secret key as sk←(1,s) and the public key as \(\mathsf {pk}\leftarrow (b,a)\in {\mathcal R}_{L}^{2}\) where b←−as+e (mod 2^{L}).
KSGen_{sk}(s^{′}). For \(s'\in {\mathcal R}\), sample a random \(a^{\prime }\leftarrow {\mathcal R}_{2 \cdot L}\) and an error e^{′}←χ_{err}. Output the switching key as \(\mathsf {swk}\leftarrow (b^{\prime },a^{\prime })\in {\mathcal R}_{2\cdot L}^{2}\) where b^{′}←−a^{′}s+e^{′}+2^{L}s^{′} (mod 2^{2·L}).
  – Set the evaluation key as evk←KSGen_{sk}(s^{2}).
Enc_{pk}(m). For \(m\in {\mathcal R}\), sample v←χ_{enc} and e_{0},e_{1}←χ_{err}. Output v·pk+(m+e_{0},e_{1}) (mod 2^{L}).
Dec_{sk}(ct). For \(\mathsf {ct}= (c_{0},c_{1})\in {\mathcal R}_{\ell }^{2}\), output c_{0}+c_{1}·s (mod 2^{ℓ}).
Add(ct_{1},ct_{2}). For \(\mathsf {ct}_{1},\mathsf {ct}_{2}\in {\mathcal R}_{\ell }^{2}\), output ct_{add}←ct_{1}+ct_{2} (mod 2^{ℓ}).
CMult_{evk}(ct;c). For \(\mathsf {ct}\in {\mathcal R}_{\ell }^{2}\) and a constant \(c\in {\mathcal R}\), output ct^{′}←c·ct (mod 2^{ℓ}).
Mult_{evk}(ct_{1},ct_{2}). For \(\mathsf {ct}_{1}=(b_{1},a_{1}),\mathsf {ct}_{2}=(b_{2},a_{2})\in {\mathcal R}_{\ell }^{2}\), let (d_{0},d_{1},d_{2})=(b_{1}b_{2},a_{1}b_{2}+a_{2}b_{1},a_{1}a_{2}) (mod 2^{ℓ}). Output ct_{mult}←(d_{0},d_{1})+⌊2^{−L}·d_{2}·evk⌉ (mod 2^{ℓ}).
ReScale(ct;p). For a ciphertext \(\mathsf {ct}\in {\mathcal R}_{\ell }^{2}\) and an integer p, output ct^{′}←⌊2^{−p}·ct⌉ (mod 2^{ℓ−p}).
Encode(w;p). For \(\mathbf {w} \in {\mathbb {R}}^{k}\) with k≤N/2 a power of two, output the polynomial \(m \leftarrow \phi (2^{p}\cdot \mathbf {w})\in {\mathcal R}\).
Decode(m;p). For a plaintext \(m \in {\mathcal R}\), the encoding of a vector of k≤N/2 (a power of two) messages, output the vector \(\mathbf {w} \leftarrow \phi ^{-1} (m / 2^{p}) \in {\mathbb {R}}^{k}\).
Rotate_{rk}(ct;r). For the rotation keys rk, output a ciphertext ct^{′} encrypting the plaintext vector of ct rotated by r positions.
Refer to [14] for the technical details and noise analysis.
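The interplay of Encode, Mult, and ReScale can be mimicked on plain numbers: a real value is scaled by 2^p and rounded, a slot-wise product doubles the scale to 2^{2p}, and rescaling divides it back down. The Python sketch below illustrates only this fixed-point bookkeeping, with no encryption, ring arithmetic, or noise; it is an aid to intuition, not the HEAAN implementation.

```python
def encode(w, p):
    # scale each real entry by 2**p and round to an integer
    return [round(x * 2**p) for x in w]

def decode(m, p):
    return [c / 2**p for c in m]

def rescale(m, p):
    # a slot-wise product carries scale 2**(2p); drop p bits to return to 2**p
    return [round(c / 2**p) for c in m]

p = 30
a = encode([0.5, 1.25], p)
b = encode([2.0, 0.8], p)
prod = rescale([x * y for x, y in zip(a, b)], p)
result = decode(prod, p)   # approximately [1.0, 1.0]
```

The rounding in encode and rescale introduces a small error, which is exactly the kind of error the approximate HE scheme treats as part of the computation noise.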
Database encoding
For efficient computation, it is crucial to find a good encoding method for the given database. The HEAAN scheme supports the encryption of a plaintext vector and slot-wise operations over the encryption. However, our learning data are represented by a matrix (z_{ij})_{1≤i≤n,0≤j≤f}. A recent work [13] used the column-wise approach, i.e., a vector of one feature's data (z_{ij})_{1≤i≤n} is encrypted in a single ciphertext. Consequently, that method required (f+1) ciphertexts to encrypt the whole dataset. Our encoding instead packs the whole matrix row-wise, so that the entire dataset fits into a single ciphertext (or a few ciphertexts for large datasets).
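The row-major packing can be pictured in plain Python: the n×(f+1) matrix is flattened so that sample i occupies a contiguous block of f+1 slots. This sketch only illustrates the slot layout; in the actual scheme the flattened vector goes through the HEAAN Encode routine before encryption.

```python
def pack_matrix(Z):
    # flatten row-major: sample i occupies slots [i*(f+1), (i+1)*(f+1))
    return [value for row in Z for value in row]

Z = [[1.0, 0.2, 0.5],   # sample 0: intercept slot plus 2 features
     [1.0, 0.7, 0.1]]   # sample 1
slots = pack_matrix(Z)
```

Because all samples live in one plaintext vector, one homomorphic operation acts on the whole dataset at once, which is where the storage and runtime savings over the column-wise encoding come from.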
Polynomial approximation of the sigmoid function
One limitation of the existing HE cryptosystems is that they only support polynomial arithmetic operations. The evaluation of the sigmoid function is an obstacle for the implementation of the logistic regression since it cannot be expressed as a polynomial.
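A standard workaround, used in this line of work, is to replace the sigmoid by a low-degree polynomial fitted over the range where the inputs are expected to fall. The sketch below computes a least-squares fit of σ(x)=1/(1+e^{−x}) on [−8,8] in pure Python via the normal equations; the interval, sample count, and degree are illustrative assumptions, not the paper's exact fitting procedure.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstsq_poly(f, degree, lo, hi, samples=200):
    """Least-squares polynomial fit of f on [lo, hi] via the normal equations."""
    xs = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
    ys = [f(x) for x in xs]
    n = degree + 1
    # Normal equations A^T A c = A^T y for the Vandermonde matrix A
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            m = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= m * ata[col][c]
            aty[r] -= m * aty[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = aty[r] - sum(ata[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / ata[r][r]
    return coeffs  # coeffs[i] multiplies x**i

g3 = lstsq_poly(sigmoid, 3, -8.0, 8.0)
def approx(x):
    return sum(c * x ** i for i, c in enumerate(g3))
```

The fit is only good inside the chosen interval, so the encrypted inputs must be kept within it; higher degrees (the g_5 and g_7 used later) buy accuracy at the price of more ciphertext levels.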
Homomorphic evaluation of the gradient descent
This section explains how to securely train the logistic regression model using the HEAAN scheme. To be precise, we explicitly describe the full pipeline of the evaluation of the GD algorithm. We adopt the same assumptions as in the previous section, so that the whole database can be encrypted in a single ciphertext.
Step 2: To obtain the inner product \(\mathbf {z}_{i}^{T} {\boldsymbol {\beta }}^{(t)}\), the public cloud aggregates the values of \(z_{ij}\beta _{j}^{(t)}\) in the same row. This step can be done by adapting the incomplete column shifting operation.
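On plaintext vectors, slot rotation behaves like a cyclic shift, and aggregating f adjacent slots takes only log2(f) rotate-and-add passes. The Python sketch below (lists standing in for ciphertext slots, with f assumed to be a power of two for simplicity) illustrates the aggregation pattern that the encrypted column shifting realizes with Rotate and Add:

```python
def rotate(slots, r):
    # cyclic left shift, mimicking the Rotate operation on ciphertext slots
    return slots[r:] + slots[:r]

def row_sums(slots, f):
    """After log2(f) rotate-and-add passes, slot i*f holds the sum of row i."""
    res = slots[:]
    shift = 1
    while shift < f:
        res = [a + b for a, b in zip(res, rotate(res, shift))]
        shift *= 2
    return res

# Two rows of f = 4 entries packed into 8 slots.
out = row_sums([1, 2, 3, 4, 5, 6, 7, 8], f=4)
```

Because the number of passes is logarithmic in f, the cost of computing all n inner products at once grows only mildly with the number of features.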
Homomorphic evaluation of Nesterov’s accelerated gradient
The performance of leveled HE schemes highly depends on the depth of a circuit to be evaluated. The bottleneck of homomorphic evaluation of the GD algorithm is that we need to repeat the update of weight vector β^{(t)} iteratively. Consequently, the total depth grows linearly on the number of iterations and it should be minimized for practical implementation.
For the homomorphic evaluation of Nesterov’s accelerated gradient, the client sends one more ciphertext \(\mathsf {ct}_{v}^{(0)}\) encrypting the initial vector v^{(0)} to the public cloud. Then the server uses an encryption ct_{z} of the dataset Z to update the two ciphertexts \(\mathsf {ct}_{v}^{(t)}\) and \(\mathsf {ct}_{\beta }^{(t)}\) at each iteration. One can securely compute β^{(t+1)} in the same way as in the previous section. Nesterov’s accelerated gradient requires one more step: computing the second equation of (1) to obtain an encryption of v^{(t+1)} from \(\mathsf {ct}_{\beta }^{(t)}\) and \(\mathsf {ct}_{\beta }^{(t+1)}\).
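The update order that the server mirrors on ciphertexts can be checked against a plaintext reference. The sketch below trains a logistic model on label-folded samples z_i = y_i·x_i with a Nesterov-style extrapolation; the fixed extrapolation weight 0.9, the toy data, and the gradient-ascent form of the update are illustrative assumptions, not the paper's exact parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(Z, iters=20, mu=0.9):
    """Nesterov-style logistic regression on label-folded rows z_i = y_i * x_i."""
    f = len(Z[0])
    beta = [0.0] * f        # model weights
    v = list(beta)          # look-ahead (extrapolated) point
    for t in range(iters):
        alpha = 10.0 / (t + 1)          # harmonic learning-rate schedule
        # gradient of the mean log-likelihood, evaluated at the look-ahead point
        grad = [0.0] * f
        for z in Z:
            s = sigmoid(-sum(zj * vj for zj, vj in zip(z, v)))
            for j in range(f):
                grad[j] += s * z[j] / len(Z)
        beta_new = [vj + alpha * gj for vj, gj in zip(v, grad)]
        # extrapolate: v = beta_new + mu * (beta_new - beta), which matches the
        # (1 - gamma) * beta_new + gamma * beta form with gamma = -mu
        v = [bn + mu * (bn - b) for bn, b in zip(beta_new, beta)]
        beta = beta_new
    return beta

# Toy separable data: x = (intercept, feature), y = +1 for positive features
# and y = -1 for negative ones; each row is z = y * x.
Z = [(1, 2.0), (1, 1.5), (-1, 1.0), (-1, 2.0)]
beta = train(Z)
margins = [sum(zj * bj for zj, bj in zip(z, beta)) for z in Z]
```

A sample is classified correctly exactly when its margin z_i^T β is positive, so checking the margins after a handful of iterations is a quick sanity test of the update order before porting it to ciphertexts.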
Results
In this section, we present parameter sets with experimental results. Our implementation is based on the HEAAN library [21] that implements the approximate HE scheme of Cheon et al. [14]. The source code is publicly available at github [16].
Parameters settings
We explain how to choose the parameter sets for the homomorphic evaluation of the (Nesterov) GD algorithm, together with a security analysis. We start with the parameter L, the bit size of a fresh ciphertext modulus. The modulus of a ciphertext is reduced after the ReScale operations and after the evaluation of an approximating polynomial g(x).
The dimension of the cyclotomic ring \({\mathcal {R}}\) is chosen as N=2^{16} following the security estimator of Albrecht et al. [22] for the learning with errors problem. In this case, the bit size L of a fresh ciphertext modulus should be bounded by 1284 to ensure the security level λ=80 against known attacks. Hence we run ITERNUM=9 iterations of the GD algorithm when g=g_{3}, and ITERNUM=7 iterations when g=g_{5} or g=g_{7}.
The smoothing parameter γ_{t} is chosen in accordance with [15]. The proper choice of the GD learning rate α_{t} normally depends on the problem at hand. Choosing α_{t} too small leads to slow convergence, while choosing it too large can lead to divergence or to fluctuation near a local optimum. It is often tuned by trial and error, which we were not able to perform. Under these conditions a harmonic progression seems to be a good candidate, and we choose the learning rate \(\alpha _{t} = \frac {10}{t+1}\) in our implementation.
Implementation
All experiments were performed on a machine with an Intel Xeon E5-2620 v4 CPU at 2.10 GHz.
Task for the iDASH challenge. In the genomic data privacy and security protection competition 2017, the goal of Track 3 was to devise a weight vector to predict the disease using genotype and phenotype data (Additional file 1: iDASH). This dataset consists of 1579 samples, each of which has 18 features and a cohort label (disease vs. healthy). Since we use the ring dimension N=2^{16}, we can pack only up to N/2=2^{15} dataset values in a single ciphertext, but we have 1579×19>2^{15} values to pack. We overcome this issue by dividing the dataset into two parts of sizes 1579×16 and 1579×3 and encoding them separately into two ciphertexts. In general, this method applies to datasets with any number of features: the dataset can be encrypted into ⌈(f+1)·n·(N/2)^{−1}⌉ ciphertexts.
Implementation results for the iDASH dataset with 10-fold CV

Sample num | Feature num | deg g | Iter num | Enc time | Learn time | Storage | Accuracy | AUC
1579       | 18          | 3     | 9        | 4 s      | 7.94 min   | 0.04 GB | 61.72%   | 0.677
1579       | 18          | 5     | 7        | 4 s      | 6.07 min   | 0.04 GB | 62.87%   | 0.689
1579       | 18          | 7     | 7        | 4 s      | 7.01 min   | 0.04 GB | 62.36%   | 0.689
Comparison. We present experimental results comparing the performance of our implementation to [13]. For a fair comparison, we use the same 5-fold CV technique on five datasets: the Myocardial Infarction dataset from Edinburgh [23] (Additional file 2: Edinburgh), the Low Birth Weight Study (Additional file 3: lbw), NHANES III (Additional file 4: nhanes3), the Prostate Cancer Study (Additional file 5: pcs), and the Umaru Impact Study (Additional file 6: uis) [24, 25, 26, 27]. All datasets have a single binary outcome variable.
Implementation results for the other datasets with 5-fold CV

Dataset    | Sample num | Feature num | Method | deg g | Iter num | Enc time | Learn time | Storage | Accuracy | AUC
Edinburgh  | 1253       | 9           | Ours   | 5     | 7        | 2 s      | 3.6 min    | 0.02 GB | 91.04%   | 0.958
Edinburgh  | 1253       | 9           | [13]   | 3     | 25       | 12 s     | 114 min    | 0.69 GB | 86.03%   | 0.956
Edinburgh  | 1253       | 9           | [13]   | 7     | 20       | 12 s     | 114 min    | 0.71 GB | 86.19%   | 0.954
lbw        | 189        | 9           | Ours   | 5     | 7        | 2 s      | 3.3 min    | 0.02 GB | 69.19%   | 0.689
lbw        | 189        | 9           | [13]   | 3     | 25       | 11 s     | 99 min     | 0.67 GB | 69.30%   | 0.665
lbw        | 189        | 9           | [13]   | 7     | 20       | 11 s     | 86 min     | 0.70 GB | 69.29%   | 0.678
nhanes3    | 15649      | 15          | Ours   | 5     | 7        | 14 s     | 7.3 min    | 0.16 GB | 79.22%   | 0.717
nhanes3    | 15649      | 15          | [13]   | 3     | 25       | 21 s     | 235 min    | 1.15 GB | 79.23%   | 0.732
nhanes3    | 15649      | 15          | [13]   | 7     | 20       | 21 s     | 208 min    | 1.17 GB | 79.23%   | 0.737
pcs        | 379        | 9           | Ours   | 5     | 7        | 2 s      | 3.5 min    | 0.02 GB | 68.27%   | 0.740
pcs        | 379        | 9           | [13]   | 3     | 25       | 11 s     | 103 min    | 0.68 GB | 68.85%   | 0.742
pcs        | 379        | 9           | [13]   | 7     | 20       | 11 s     | 97 min     | 0.70 GB | 69.12%   | 0.750
uis        | 575        | 8           | Ours   | 5     | 7        | 2 s      | 3.5 min    | 0.02 GB | 74.44%   | 0.603
uis        | 575        | 8           | [13]   | 3     | 25       | 10 s     | 104 min    | 0.61 GB | 74.43%   | 0.585
uis        | 575        | 8           | [13]   | 7     | 20       | 10 s     | 96 min     | 0.63 GB | 75.43%   | 0.617
Discussion
The rapid growth of computing power initiated the study of more complicated ML algorithms in various fields including biomedical data analysis [28, 29]. HE is a promising solution for the privacy issue, but its efficiency in real applications remains an open question. A natural next step would be to extend this work to other ML algorithms such as deep learning.
One constraint of our approach is that the number of iterations of the GD algorithm is limited by the choice of HE parameters. In terms of asymptotic complexity, applying the bootstrapping method for the approximate HE scheme [30] to the GD algorithm would achieve a computational cost linear in the number of iterations.
Conclusion
In this paper, we presented a solution to homomorphically evaluate the learning phase of a logistic regression model using the gradient descent algorithm and the approximate HE scheme. Our solution demonstrates good performance, and the quality of learning is comparable to that of the unencrypted case. Our encoding method can be easily extended to large-scale datasets, which shows the practical potential of our approach.
Notes
Acknowledgements
The authors would like to thank the editor and reviewers for their thoughtful comments and constructive suggestions, which greatly helped us improve the quality of this manuscript. The authors also thank Jinhyuck Jeong for his valuable comments on the technical part of the manuscript.
Funding
This work was partly supported by Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No.B0717160098) and by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIP) (No.2017R1A5A1015626).
MK was supported in part by NIH grants U01TR002062 and U01EB023685. Publication of this article has been funded by the NRF Grant funded by the Korean Government (MSIT) (No.2017R1A5A1015626).
Availability of data and materials
All datasets are available in the Additional files provided with the publication. The HEAAN library is available at https://github.com/kimandrik/HEAAN. Our implementation is available at https://github.com/kimandrik/HEML.
About this supplement
This article has been published as part of BMC Medical Genomics Volume 11 Supplement 4, 2018: Proceedings of the 6th iDASH Privacy and Security Workshop 2017. The full contents of the supplement are available online at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume11supplement4.
Authors’ contributions
JHC designed and supervised the study. KL analyzed the data. AK drafted the source code and MK optimized it. AK and MK performed the experiments. AK and YS are major contributors in writing the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
All authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary material
References
1. Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 1959;3(3):210–29.
2. Dietz E. Application of logistic regression and logistic discrimination in medical decision making. Biom J. 1987;29(6):747–51.
3. Rousseau D. Biomedical Research: Changing the Common Rule by David Rousseau – Ammon & Rousseau Translations. 2017. https://www.ammonrousseau.com/changingtherulesbydavidrousseau/ [Accessed 19 Aug 2017]. Available from: http://www.webcitation.org/6spHgiYRI.
4. Nikolaenko V, Weinsberg U, Ioannidis S, Joye M, Boneh D, Taft N. Privacy-preserving ridge regression on hundreds of millions of records. In: 2013 IEEE Symposium on Security and Privacy (SP). IEEE; 2013. p. 334–48.
5. Yao ACC. How to generate and exchange secrets. In: 27th Annual Symposium on Foundations of Computer Science. IEEE; 1986. p. 162–7.
6. El Emam K, Samet S, Arbuckle L, Tamblyn R, Earle C, Kantarcioglu M. A secure distributed logistic regression protocol for the detection of rare adverse drug events. J Am Med Inform Assoc. 2012;20(3):453–61.
7. Nardi Y, Fienberg SE, Hall RJ. Achieving both valid and secure logistic regression analysis on aggregated data from different private sources. J Priv Confidentiality. 2012;4(1):9.
8. Mohassel P, Zhang Y. SecureML: a system for scalable privacy-preserving machine learning. In: 2017 IEEE Symposium on Security and Privacy. 2017.
9. Wu S KH, Teruya T, Kawamoto J, Sakuma J. Privacy-preservation for stochastic gradient descent application to secure logistic regression. 27th Annu Conf Japan Soc Artif Intell. 2013;1–4.
10. Paillier P. Public-key cryptosystems based on composite degree residuosity classes. In: International Conference on the Theory and Applications of Cryptographic Techniques. Springer; 1999. p. 223–38.
11. Aono Y, Hayashi T, Trieu Phong L, Wang L. Scalable and secure logistic regression via homomorphic encryption. In: Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. ACM; 2016. p. 142–4.
12. Xie W, Wang Y, Boker SM, Brown DE. PrivLogit: efficient privacy-preserving logistic regression by tailoring numerical optimizers. arXiv preprint arXiv:1611.01170. 2016.
13. Kim M, Song Y, Wang S, Xia Y, Jiang X. Secure logistic regression based on homomorphic encryption: design and evaluation. JMIR Med Inform. 2018;6(2).
14. Cheon JH, Kim A, Kim M, Song Y. Homomorphic encryption for arithmetic of approximate numbers. In: Advances in Cryptology – ASIACRYPT 2017: 23rd International Conference on the Theory and Application of Cryptology and Information Security. Springer; 2017. p. 409–37.
15. Nesterov Y. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady. 1983;27:372–6.
16. Cheon JH, Kim A, Kim M, Lee K, Song Y. Implementation for iDASH competition 2017. 2017. https://github.com/kimandrik/HEML [Accessed 11 July 2018]. Available from: http://www.webcitation.org/70qbe6xii.
17. Harrell FE. Ordinal logistic regression. In: Regression Modeling Strategies. Springer; 2001. p. 331–43.
18. Lowrie EG, Lew NL. Death risk in hemodialysis patients: the predictive value of commonly measured variables and an evaluation of death rate differences between facilities. Am J Kidney Dis. 1990;15(5):458–82.
19. Lewis CM, Knight J. Introduction to genetic association studies. Cold Spring Harb Protoc. 2012;2012(3):068163.
20. Gayle V, Lambert PS. Logistic regression models in sociological research. 2009.
21. Cheon JH, Kim A, Kim M, Song Y. Implementation of HEAAN. 2016. https://github.com/kimandrik/HEAAN [Accessed 19 Aug 2017]. Available from: http://www.webcitation.org/6spMzVJ6U.
22. Albrecht MR, Player R, Scott S. On the concrete hardness of learning with errors. J Math Cryptol. 2015;9(3):169–203.
23. Kennedy R, Fraser H, McStay L, Harrison R. Early diagnosis of acute myocardial infarction using clinical and electrocardiographic data at presentation: derivation and evaluation of logistic regression models. Eur Heart J. 1996;17(8):1181–91.
24. lbw: Low Birth Weight study data. 2017. https://rdrr.io/rforge/LogisticDx/man/lbw.html [Accessed 19 Aug 2017]. Available from: http://www.webcitation.org/6spNFX2b5.
25. nhanes3: NHANES III data. 2017. https://rdrr.io/rforge/LogisticDx/man/nhanes3.html [Accessed 19 Aug 2017]. Available from: http://www.webcitation.org/6spNJJFDx.
26. pcs: Prostate Cancer Study data. 2017. https://rdrr.io/rforge/LogisticDx/man/pcs.html [Accessed 19 Aug 2017]. Available from: http://www.webcitation.org/6spNLXr5a.
27. uis: UMARU IMPACT Study data. 2017. https://rdrr.io/rforge/LogisticDx/man/uis.html [Accessed 19 Aug 2017]. Available from: http://www.webcitation.org/6spNOLB9n.
28. Wang Y. Application of deep learning to biomedical informatics. Int J Appl Sci Res Rev. 2016.
29. Ravì D, Wong C, Deligianni F, Berthelot M, Andreu-Perez J, Lo B, Yang GZ. Deep learning for health informatics. IEEE J Biomed Health Inform. 2017;21(1):4–21.
30. Cheon JH, Han K, Kim A, Kim M, Song Y. Bootstrapping for approximate homomorphic encryption. In: Advances in Cryptology – EUROCRYPT 2018: Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer; 2018. p. 360–84.
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.