Abstract
An equality constrained optimization problem with a deterministic objective function and constraints given in the form of mathematical expectation is considered. The constraints are transformed into the Sample Average Approximation (SAA) form, resulting in a deterministic problem. A method that combines a variable sample size procedure with a line search is applied to a penalty reformulation. The method generates a sequence that converges towards first-order critical points. The final stage of the optimization procedure employs the full sample, so the SAA problem is eventually solved, but at significantly lower cost. Preliminary numerical results show that the proposed method can produce significant savings compared to the SAA method and some heuristic sample update counterparts, while generating a solution of the same quality.
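For orientation, the following display sketches the problem class and the reformulations described above; the quadratic penalty with parameter \(\mu\) is an illustrative assumption about the penalty reformulation, not necessarily the exact form used in the paper:

\[
\min_{x} f(x) \ \ \text{s.t.} \ \ \mathbb{E}[G(x,\xi)] = 0, \qquad
\hat{g}_N(x) := \frac{1}{N}\sum_{i=1}^{N} G(x,\xi_i), \qquad
\hat{\theta}_N(x) := f(x) + \frac{\mu}{2}\,\big\Vert \hat{g}_N(x)\big\Vert^2 ,
\]

where \(\xi_1,\dots ,\xi_N\) is the sample defining the SAA constraint \(\hat{g}_N(x)=0\) and \(\hat{\theta}_N\) is the penalty function minimized with a variable sample size \(N\).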
Acknowledgements
We are grateful to the Associate Editor and reviewers whose comments helped us to improve the paper. N. Krejić and N. Krklec Jerinkić are supported by the Serbian Ministry of Education, Science and Technological Development, Grant No. 174030.
Appendix
Algorithm 2
- Step 0: Input parameters: \(dm_{k}\), \(\epsilon _{\delta }^{N_{k}}(x_{k})\), \(x_k\), \(N_k\), \(N_{k}^{min}\), \(\nu _{1}\in (0,1)\).
- Step 1: Determine \(N_{k+1}\):
  - 1) If \(dm_{k}=\epsilon _{\delta }^{N_{k}}(x_{k})\), set \(N_{k+1}=N_{k}\).
  - 2) If \(dm_{k}>\epsilon _{\delta }^{N_{k}}(x_{k})\): starting with \(N=N_{k}\), while \(dm_{k}>\frac{N_k}{N}\;\epsilon _{\delta }^{N}(x_{k})\) and \(N>N_{k}^{min}\), decrease \(N\) by 1 and calculate \(\epsilon _{\delta }^{N}(x_{k})\). Set \(N_{k+1}=N\).
  - 3) If \(dm_{k}<\epsilon _{\delta }^{N_{k}}(x_{k})\):
    - i) If \(dm_{k}\ge \nu _{1} \epsilon _{\delta }^{N_{k}}(x_{k})\): starting with \(N=N_{k}\), while \(dm_{k}<\frac{N_k}{N}\;\epsilon _{\delta }^{N}(x_{k})\) and \(N<N_{max}\), increase \(N\) by 1 and calculate \(\epsilon _{\delta }^{N}(x_{k})\). Set \(N_{k+1}=N\).
    - ii) If \(dm_{k}< \nu _{1}\epsilon _{\delta }^{N_{k}}(x_{k})\), set \(N_{k+1}=N_{max}\).
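To make the branching of Algorithm 2 concrete, here is a minimal Python sketch of the sample-size update. The function name, the callable `eps` returning \(\epsilon _{\delta }^{N}(x_{k})\) for a candidate \(N\), and the default value of `nu1` are illustrative assumptions; the quantities themselves are computed as defined in the paper.

```python
def update_sample_size(dm_k, eps, N_k, N_min, N_max, nu1=0.1):
    """Sketch of Algorithm 2: choose the next sample size N_{k+1}.

    dm_k  -- decrease measure at the current iterate x_k
    eps   -- callable; eps(N) returns the lack-of-precision estimate
             epsilon_delta^N(x_k) for a candidate sample size N
    N_k   -- current sample size
    N_min -- lower bound N_k^min on the sample size
    N_max -- full (maximal) sample size
    nu1   -- parameter nu_1 in (0, 1); 0.1 is an illustrative default
    """
    eps_k = eps(N_k)
    if dm_k == eps_k:
        # Case 1): the decrease matches the precision, keep the sample size.
        return N_k
    if dm_k > eps_k:
        # Case 2): the decrease is large relative to the precision;
        # reduce the sample size as long as the test still holds.
        N = N_k
        while N > N_min and dm_k > (N_k / N) * eps(N):
            N -= 1
        return N
    # Case 3): dm_k < eps_k, the decrease is small relative to the precision.
    if dm_k >= nu1 * eps_k:
        # Case 3 i): moderately small decrease, increase the sample size gradually.
        N = N_k
        while N < N_max and dm_k < (N_k / N) * eps(N):
            N += 1
        return N
    # Case 3 ii): very small decrease, switch to the full sample.
    return N_max
```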
Algorithm 3
We say that we have not made a big enough decrease of the function \(\hat{\theta }_{N_{k+1}}\) if the following inequality holds,
where \(l(k)\) is the iteration at which we started using the sample size \(N_{k+1}\) for the last time.
- Step 0: Input parameters: \(N_{k}\), \(N_{k+1}\), \(N_{k}^{min}\).
- Step 1: Determine \(N_{k+1}^{min}\):
  - 1) If \(N_{k+1}\le N_{k}\), then \(N_{k+1}^{min}=N_{k}^{min}\).
  - 2) If \(N_{k+1}> N_{k}\), then:
    - i) if \(N_{k+1}\) is a sample size which has not been used so far, then \(N_{k+1}^{min}=N_{k}^{min}\);
    - ii) if \(N_{k+1}\) is a sample size which has already been used and we have made a big enough decrease of the function \(\hat{\theta }_{N_{k+1}}\) since the last time it was used, then \(N_{k+1}^{min}=N_{k}^{min}\);
    - iii) if \(N_{k+1}\) is a sample size which has already been used and we have not made a big enough decrease of the function \(\hat{\theta }_{N_{k+1}}\) since the last time it was used, then \(N_{k+1}^{min}=N_{k+1}\).
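A corresponding minimal Python sketch of the lower-bound update in Algorithm 3 is given below. Because the "big enough decrease" test is the inequality stated above, it is represented here simply as a Boolean flag supplied by the caller; the function and argument names are illustrative assumptions.

```python
def update_min_sample_size(N_k, N_next, N_min_k, used_before, big_enough_decrease):
    """Sketch of Algorithm 3: choose the next lower bound N_{k+1}^min.

    N_k                 -- current sample size N_k
    N_next              -- next sample size N_{k+1} produced by Algorithm 2
    N_min_k             -- current lower bound N_k^min
    used_before         -- True if the sample size N_{k+1} has been used at
                           some earlier iteration
    big_enough_decrease -- True if a big enough decrease of theta_hat_{N_{k+1}}
                           has been made since N_{k+1} was last used
    """
    if N_next <= N_k:
        # Case 1): the sample size did not increase, keep the lower bound.
        return N_min_k
    # Case 2): the sample size increased.
    if not used_before or big_enough_decrease:
        # Cases 2 i) and 2 ii): keep the lower bound.
        return N_min_k
    # Case 2 iii): no sufficient decrease since N_{k+1} was last used,
    # raise the lower bound to N_{k+1}.
    return N_next
```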