Abstract
In this chapter, we provide the necessary foundation to completely design and implement the SVM optimization algorithm. The concepts are described so that they can be broadly applied to general-purpose optimization problems.
Notes
1. More details about this table-based method can be found in [1].
2. We will not detail this information here, but will discuss it later in a convex optimization scenario. We suggest [1] as a more detailed introduction to linear optimization problems.
3. More details in [1].
4. More details in [23].
5. Observe that we build the matrix B using the columns associated with the variables indexing rows.
6. Remember that the original constraints were modified to assume the equality form by using slack variables (see the first sketch after these notes).
7. It ends up concave after applying the minus sign.
8. A matrix $M$ is referred to as positive definite if $[\alpha_i\ \alpha_j]\, M\, [\alpha_i\ \alpha_j]^\top > 0$ for any nonzero vector $[\alpha_i\ \alpha_j]^\top$ (see the second sketch after these notes).
9. We could simply have added slack variables $\pi_3$ and $\pi_4$ to obtain the same results in the minimization form; however, we decided to use this formulation to follow the proposal in [11].
10. Other scenarios may require a solver to approximate the solution.
11. We detail and implement most of that paper, but we do not consider its rank reduction.
12. Meaning we wish them to have as little relevance as possible for our problem, since they are associated with relaxation terms.
13. Jacobian matrices must be square so that the input and output spaces have the same dimensionality, allowing inverse transformations (see the third sketch after these notes).
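The simplex machinery in notes 5 and 6 can be made concrete. Below is a minimal NumPy sketch, with illustrative data (A, b, and the starting basis are not from the chapter): the inequality constraints Ax ≤ b are turned into the equality form [A | I]x′ = b by appending slack variables, and the basis matrix B is built from the columns of the variables currently indexing rows.

```python
import numpy as np

# Illustrative data (not from the chapter): constraints A x <= b, x >= 0.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([8.0, 9.0])

# Note 6: append one slack variable per row to turn A x <= b into [A | I] x' = b.
A_eq = np.hstack([A, np.eye(A.shape[0])])   # shape (2, 4)

# Note 5: the basis matrix B collects the columns of the variables
# currently indexing rows. A common starting basis: the slack columns (2, 3).
basis = [2, 3]
B = A_eq[:, basis]

# Basic solution x_B = B^{-1} b (here B = I, so x_B = b).
x_B = np.linalg.solve(B, b)
print(x_B)  # [8. 9.]
```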
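Note 8's condition can also be verified numerically. A symmetric matrix M is positive definite exactly when a Cholesky factorization succeeds (equivalently, when all of its eigenvalues are positive); the matrix below is illustrative, not taken from the chapter.

```python
import numpy as np

def is_positive_definite(M):
    """Check x^T M x > 0 for all nonzero x via Cholesky factorization,
    which succeeds exactly for symmetric positive-definite matrices."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # eigenvalues 1 and 3, hence positive definite
print(is_positive_definite(M))    # True

x = np.array([0.5, -1.0])
print(x @ M @ x > 0)              # True for this particular nonzero x
```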
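To illustrate note 13, the sketch below uses a hypothetical map f(x, y) = (x + y, xy), whose 2 × 2 Jacobian is invertible wherever its determinant x − y is nonzero, so the inverse transformation exists locally.

```python
import numpy as np

def jacobian(x, y):
    # Jacobian of the illustrative map f(x, y) = (x + y, x * y):
    # each row is the gradient of one output component.
    return np.array([[1.0, 1.0],
                     [y,   x  ]])

J = jacobian(3.0, 1.0)
det = np.linalg.det(J)       # x - y = 2, nonzero: f is locally invertible here
print(det)

J_inv = np.linalg.inv(J)     # the inverse transformation exists locally
print(J_inv @ J)             # approximately the identity matrix
```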
References
M.S. Bazaraa, J.J. Jarvis, H.D. Sherali, Linear Programming and Network Flows (Wiley, Hoboken, 2010)
S. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, New York, 2004)
G.B. Dantzig, M.N. Thapa, Linear Programming 2: Theory and Extensions (Springer, Berlin, 2006)
M.C. Ferris, T.S. Munson, Interior-point methods for massive support vector machines. SIAM J. Optim. 13(3), 783–804 (2002)
A.V. Fiacco, G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Classics in Applied Mathematics (Society for Industrial and Applied Mathematics, Philadelphia, 1990)
S. Fine, K. Scheinberg, Efficient SVM training using low-rank kernel representations. J. Mach. Learn. Res. 2, 243–264 (2002)
J.A. Freeman, D.M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques. Addison-Wesley Computation and Neural Systems Series (Addison-Wesley, Boston, 1991)
R. Frisch, The logarithmic potential method of convex programming with particular application to the dynamics of planning for national development: synopsis of a communication to be presented at the international colloquium of econometrics in Paris 23–28 May 1955, Technical report, University of Oslo, Institute of Economics, 1955
P.E. Gill, W. Murray, M.A. Saunders, J.A. Tomlin, M.H. Wright, On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method. Math. Program. 36(2), 183–209 (1986)
A.J. Hoffman, M. Mannos, D. Sokolowsky, N. Wiegmann, Computational experience in solving linear programs. J. Soc. Ind. Appl. Math. 1, 17–33 (1953)
P.A. Jensen, J.F. Bard, Operations Research Models and Methods. Operations Research: Models and Methods (Wiley, Hoboken, 2003)
N. Karmarkar, A new polynomial-time algorithm for linear programming. Combinatorica 4(4), 373–396 (1984)
W. Karush, Minima of functions of several variables with inequalities as side conditions, Master’s thesis, Department of Mathematics, University of Chicago, Chicago, IL, 1939
H.W. Kuhn, A.W. Tucker, Nonlinear programming, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA (University of California Press, Berkeley, 1951), pp. 481–492
J.T. Ormerod, M.P. Wand, Low Rank Quadratic Programming (R Foundation for Statistical Computing, Vienna, 2015)
J.T. Ormerod, M.P. Wand, I. Koch, Penalised spline support vector classifiers: computational issues. Comput. Stat. 23(4), 623–641 (2008)
PatrickJMT, Linear programming (2008). https://youtu.be/M4K6HYLHREQ
PatrickJMT, Linear programming word problem - example 1 (2010). https://youtu.be/2ACJ9ewUC6U
PatrickJMT, The simplex method - finding a maximum/word problem example (part 1 to 5) (2010). https://youtu.be/gRgsT9BB5-8
B. Schölkopf, A.J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (MIT Press, Cambridge, 2001)
C.E. Shannon, Prediction and entropy of printed English. Bell Syst. Tech. J. 30, 50–64 (1951)
C.E. Shannon, W. Weaver, A Mathematical Theory of Communication (University of Illinois Press, Champaign, 1963)
G. Strang, Introduction to Linear Algebra (Wellesley-Cambridge Press, Wellesley, 2009)
E. Süli, D.F. Mayers, An Introduction to Numerical Analysis (Cambridge University Press, Cambridge, 2003)
T. Terlaky, Interior Point Methods of Mathematical Programming, 1st edn. (Springer, Berlin, 1996)
Mini-videos de Matemáticas, Dualidad (2013). https://youtu.be/KMmgF3ZaBRE