
Sparse Relevance Kernel Machine-Based Performance Dependency Analysis of Analog and Mixed-Signal Circuits

  • Honghuang Lin
  • Asad Khan
  • Peng Li

Abstract

Design optimization, verification, and failure diagnosis of analog and mixed-signal (AMS) circuits require accurate models that reliably capture the complex dependencies of circuit performances on essential circuit and test parameters, such as design parameters, process variations, and test signatures. We present a novel Bayesian learning technique, the sparse relevance kernel machine (SRKM), for characterizing analog circuits with sparse statistical regression models. SRKM produces reliable classification models learned from simulation data with a limited number of samples but a large number of parameters, and, as part of the overall learning framework, computes a probabilistically inferred weighting factor quantifying the criticality of each parameter, hence offering a powerful enabler for variability modeling, failure diagnosis, and test development. Compared with other popular learning-based techniques, the proposed SRKM produces more accurate models, requires less training data, and extracts more reliable parametric rankings. The effectiveness of SRKM is demonstrated in examples including statistical variability modeling of a low-dropout regulator (LDO), built-in self-test (BIST) development for a charge-pump phase-locked loop (PLL), and the construction of statistical variability models for a commercial automotive interface design.
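The abstract's two technical ingredients, a kernel machine made sparse by Bayesian inference and a probabilistically inferred per-parameter relevance ranking, can be approximated with off-the-shelf tools. The Python sketch below is illustrative only and is not the chapter's SRKM: it assumes scikit-learn's ARDRegression, substitutes a synthetic dataset for circuit simulation data, and stands in for SRKM's joint inference with two separate automatic-relevance-determination (ARD) fits.

```python
# Illustrative sketch only -- not the SRKM of this chapter. Approximates its two
# outputs (a sparse kernel model and a per-parameter criticality ranking) with
# scikit-learn's sparse Bayesian (ARD) regression on synthetic data.
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Hypothetical data: 200 Monte Carlo samples of 10 circuit/process parameters;
# only parameters 0 and 1 actually drive the performance metric y.
X = rng.normal(size=(200, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=200)

# RVM-style sparse kernel model: ARD over the columns of the kernel matrix
# prunes most training samples, leaving a small set of "relevance vectors".
K = rbf_kernel(X, X, gamma=0.5)
kernel_model = ARDRegression().fit(K, y)
relevance_vectors = np.flatnonzero(np.abs(kernel_model.coef_) > 1e-3)
print(f"{len(relevance_vectors)} relevance vectors out of {len(X)} samples")

# Linear ARD proxy for SRKM's per-parameter weighting: a small learned weight
# precision lambda_[j] indicates that parameter j is relevant to y.
feature_model = ARDRegression().fit(X, y)
ranking = np.argsort(feature_model.lambda_)
print("parameter ranking (most critical first):", ranking)
```

Unlike this two-fit proxy, SRKM infers sample-level sparsity and parameter-level relevance weights jointly within a single Bayesian learning framework, which is what lets it cope with many parameters but few simulation samples.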

Notes

Acknowledgements

This material is based upon work supported by the Semiconductor Research Corporation (SRC) through Texas Analog Center of Excellence at the University of Texas at Dallas (Task ID:2712.004).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Texas Instruments, Dallas, USA
