SiLVR: Projection Pursuit for Response Surface Modeling

  • Chapter in Machine Learning in VLSI Computer-Aided Design

Abstract

Circuit performance metrics depend on several design and process parameters of the circuit's components. This relationship is often highly nonlinear and, because of the large parameter count, lives in a high-dimensional space. We look at a response surface modeling approach that captures this relationship effectively by extracting the dominant latent variables in the input space, i.e., the directions that primarily influence the performance metric. The approach is reminiscent of projection pursuit, but applies that technique via a carefully crafted neural network architecture. The network is grown dynamically in stages: each stage extracts the next dominant latent variable and models the relationship between the performance metric and that latent variable.

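The stagewise, dynamically grown architecture described above maps naturally onto projection pursuit regression: each stage finds one projection direction (a latent variable) and a one-dimensional nonlinear function of it, and later stages fit whatever residual the earlier stages left behind. The following is a minimal, illustrative Python sketch of that idea; the function names (fit_stage, silvr_fit, predict_stage) and the small tanh ridge network are assumptions made here for illustration, not the chapter's actual SiLVR implementation.

```python
import numpy as np

def fit_stage(X, r, n_hidden=5, lr=0.05, epochs=3000, rng=None):
    """Fit one stage against the current residual r: a projection vector w
    (giving the latent variable t = X @ w) and a sigmoidal ridge function
    g(t) = sum_j a_j * tanh(b_j * t + c_j) + d, by joint gradient descent
    on the mean squared error."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, p = X.shape
    w = rng.normal(size=p)
    w /= np.linalg.norm(w)                      # start from a unit direction
    a = 0.1 * rng.normal(size=n_hidden)
    b = rng.normal(size=n_hidden)
    c = np.zeros(n_hidden)
    d = float(r.mean())
    for _ in range(epochs):
        t = X @ w                               # latent variable per sample
        z = np.tanh(np.outer(t, b) + c)         # hidden activations, (n, h)
        err = z @ a + d - r                     # prediction error
        dz = (1.0 - z**2) * (err[:, None] * a)  # back-prop through tanh
        w -= lr * (X.T @ (dz @ b)) / n
        a -= lr * (z.T @ err) / n
        b -= lr * (dz.T @ t) / n
        c -= lr * dz.sum(axis=0) / n
        d -= lr * err.mean()
    return w, (a, b, c, d)

def predict_stage(X, w, params):
    a, b, c, d = params
    return np.tanh(np.outer(X @ w, b) + c) @ a + d

def silvr_fit(X, y, n_stages=3):
    """Grow the model one stage at a time; each new stage models the
    residual that the earlier latent variables could not explain."""
    stages, r = [], np.asarray(y, dtype=float).copy()
    for _ in range(n_stages):
        w, params = fit_stage(X, r)
        stages.append((w, params))
        r = r - predict_stage(X, w, params)
    return stages

def predict(X, stages):
    return sum(predict_stage(X, w, p) for w, p in stages)

# Toy usage: a response driven by one dominant latent direction of a
# 10-parameter input, recovered by the first stage.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
y = np.tanh(X[:, 0] - X[:, 1]) + 0.05 * rng.normal(size=400)
stages = silvr_fit(X, y, n_stages=2)
print("RMSE:", np.sqrt(np.mean((predict(X, stages) - y) ** 2)))
```

This residual-fitting loop mirrors classical projection pursuit regression in the style of Friedman and Stuetzle, with the nonparametric smoother replaced by a small sigmoidal network, as the abstract suggests.
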
Author information

Correspondence to Amith Singhee.

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Singhee, A. (2019). SiLVR: Projection Pursuit for Response Surface Modeling. In: Elfadel, I., Boning, D., Li, X. (eds) Machine Learning in VLSI Computer-Aided Design. Springer, Cham. https://doi.org/10.1007/978-3-030-04666-8_16

  • DOI: https://doi.org/10.1007/978-3-030-04666-8_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04665-1

  • Online ISBN: 978-3-030-04666-8

  • eBook Packages: Engineering, Engineering (R0)
