
Fast PVT Verification and Design

Efficiently Managing Process-Voltage-Temperature Corners

Abstract

This chapter explores how to design circuits under PVT variation effects, as opposed to statistical process variation effects. Process, voltage, and temperature (PVT) variations are taken into account by individually varying P, V, and T over their allowable ranges and analyzing the resulting combinations, or so-called PVT corners. In modern designs, there can be hundreds or thousands of PVT corners. This chapter reviews design flows to handle PVT variations and compares them in terms of relative speed and accuracy. It introduces a "Fast PVT" flow and shows how that flow has excellent speed and accuracy characteristics. It describes the Fast PVT algorithm, which is designed to quickly extract the most relevant PVT corners. These corners can be used within a fast and accurate iterative design loop. Furthermore, on a suite of benchmark circuits, Fast PVT reliably verifies designs on average 5x faster than testing all corners. This chapter concludes with design examples based on the production use of Fast PVT technology by industrial circuit designers.


References

  • Cadence Design Systems Inc. (2012) Cadence® Virtuoso® Spectre® Circuit Simulator, http://www.cadence.com

  • Cressie N (1989) Geostatistics. Am Statistician 43:192–202


  • Huyer W, Neumaier A (1999) Global optimization by multilevel coordinate search. J Global Optim 14:331–355


  • Jones DR, Schonlau M, Welch WJ (1998) Efficient global optimization of expensive black-box functions. J Global Optim 13:455–492


  • Land AH, Doig AG (1960) An automatic method of solving discrete programming problems. Econometrica 28(3):497–520


  • Montgomery DC (2004) Design and analysis of experiments, 6th edn. Wiley, Hoboken


  • Rasmussen CE, Williams CKI (2006) Gaussian processes for machine learning, MIT Press, Cambridge, MA


  • Sasena MJ (2002) Flexibility and efficiency enhancements for constrained global optimization with kriging approximations, PhD thesis, University of Michigan


  • Solido Design Automation Inc. (2012) Variation Designer, http://www.solidodesign.com


Author information


Correspondence to Trent McConaghy.

Appendices

Appendix A: Details of Fast PVT Verification Algorithm

2.1.1 Detailed Algorithm Description

We now give a more detailed description of the Fast PVT algorithm. We do so in two parts: first we show how to recast the task as a global optimization problem, then we show how that problem can be solved quickly and reliably with an advanced model-building optimization technique.

We can cast the aim of finding worst-case corners as a global optimization problem. Consider x as a point in PVT space, i.e. a PVT corner. Therefore, x has a value for the model set (or for each device type, if separate per-device models are used), supply voltage V_dd, load R_load, C_load, temperature T, etc. We are given the discrete set of N_C possible PVT corners \( \mathbf{X}_{all} = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_{N_C}\} \), and a SPICE-simulated output response f(x) for each corner. We aim to find x*, the optimal PVT corner that gives the minimum (or maximum, depending on the output) f(x). Collecting this together in an optimization formulation, we get:

$$ \mathbf{x}^{*} = \mathrm{argmin}\, f(\mathbf{x}) \quad \text{subject to } \mathbf{x} \in \mathbf{X}_{all} $$

Now, given the aims of speed and reliability, the challenge is to solve this global optimization problem with as few evaluations of f(x) as possible (to minimize simulations), yet reliably find the x* that returns the global minimum, which is the true worst-case corner.
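For concreteness, the Python sketch below shows the exhaustive baseline implied by this formulation: enumerate every corner in X_all, simulate each one, and keep the argmin. The `simulate_corner` function is a hypothetical stand-in for one SPICE run; the point is that this baseline costs N_C simulations, which is exactly the cost Fast PVT seeks to avoid.

```python
from itertools import product

def simulate_corner(corner):
    """Hypothetical stand-in for one SPICE simulation returning f(x)."""
    raise NotImplementedError("replace with a call to your simulator")

def worst_case_exhaustive(axes):
    """axes: dict such as {'model': [...], 'vdd': [...], 'temp': [...]}.
    Builds X_all as the cross-product of the axes and returns the corner
    that minimizes f(x); costs one simulation per corner."""
    names = list(axes)
    corners = [dict(zip(names, values))
               for values in product(*(axes[n] for n in names))]
    return min(corners, key=simulate_corner)
```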

Fast PVT approaches this optimization problem with an advanced model-building optimization approach that explicitly leverages modeling error.

We now detail the steps in the approach, as shown in Fig. 2.11.

Fig. 2.11 Fast PVT verification algorithm

Step 1: Raw initial samples: Fast PVT generates a set of initial samples \( \mathbf{X} = \mathbf{X}_{init} \) in PVT space using design of experiments (DOE) (Montgomery 2004). Specifically, the full set of PVT corners is bounded by a hypercube, then DOE selects a fraction of the corners of the hypercube in a structured fashion.

Simulate initial samples: Run SPICE on the initial samples to compute all initial output values: \( \mathbf{y} = \mathbf{y}_{init} = f(\mathbf{X}_{init}) \).
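As an illustration of Step 1, the sketch below builds a simple two-level half-fraction design: each PVT variable is represented by its extreme values, and only the bounding-hypercube corners whose ±1 levels multiply to +1 are kept. This is only one structured way to pick "a fraction of the corners of the hypercube"; the actual DOE used by Fast PVT may differ.

```python
import math
from itertools import product

def initial_doe_samples(low, high):
    """low, high: dicts mapping each PVT variable name to its min / max value.
    Returns the half of the bounding hypercube's corners whose +/-1 levels
    multiply to +1 (a two-level half-fraction design)."""
    names = list(low)
    samples = []
    for levels in product((-1, +1), repeat=len(names)):
        if math.prod(levels) != +1:   # drop the aliased half of the corners
            continue
        samples.append({n: (high[n] if lv > 0 else low[n])
                        for n, lv in zip(names, levels)})
    return samples
```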

Step 2: Construct model mapping X → y: Here, Fast PVT constructs a regressor (an RSM) mapping the PVT input variables to the SPICE-simulated output values. The choice of regressor is crucial. Recall that a linear or quadratic model makes unreasonably strong assumptions about the nature of the mapping. We do not want to make any such assumptions; the model must be able to handle arbitrarily nonlinear mappings. Furthermore, the regressor must not only predict an output value for unseen input PVT points, it must also be able to report its confidence in that prediction. Confidence should approach 100 % at points that have previously been simulated, and decrease as distance from simulated points increases.

An approach that fits these criteria is Gaussian process models (GPMs, a.k.a. kriging) (Cressie 1989). GPMs exploit the relative distances among training points, and the distance from the input point to the training points, when predicting output values and the uncertainty of those predictions. For further details on GPMs, we refer the reader to Appendix B.
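The sketch below illustrates the required regressor interface, using scikit-learn's off-the-shelf Gaussian process regressor as a stand-in for the GPM of Appendix B (the production model may differ). The key property shown is that prediction returns both a mean and an uncertainty, and the uncertainty shrinks toward zero at already-simulated points.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def fit_gpm(X, y):
    """X: (N, n) array of simulated PVT corners; y: (N,) simulated outputs."""
    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X.shape[1]))
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predicted value g(x) and uncertainty s(x) for the unsimulated corners:
# mean, std = fit_gpm(X, y).predict(X_left, return_std=True)
```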

Step 3: Choose new sample x_new: Once the model is constructed, we use it to choose the next PVT corner x_new from the remaining candidate corners X_left = X_all \ X. One approach might be to simply choose the x that gives the minimum predicted output value g(x):

$$ \mathbf{x}_{new} = \mathrm{argmin}\, g(\mathbf{x}) \quad \text{subject to } \mathbf{x} \in \mathbf{X}_{left} $$

However, this is problematic. While such an approach optimizes f(x) in regions where near-worst-case values have already been simulated, there may be other regions with relatively few simulations, where the simulated values can differ substantially from the model's predictions. These are model blind spots, and if such a region contains the true worst-case value, this simple approach will fail.

GPMs, however, are aware of their blind spots because they can report their uncertainty. So, we can choose x_new by also including the uncertainty s^2(x), where X_left is the set of remaining unsimulated corners from X_all:

$$ \mathbf{x}_{new} = \mathrm{argmin}\, h\!\left(g(\mathbf{x}),\, s^{2}(\mathbf{x})\right) \quad \text{s.t. } \mathbf{x} \in \mathbf{X}_{left} $$

where h is an infill criterion function that combines g(x) and s^2(x) in some fashion. There are many options for h, but a robust choice is the lower confidence bound (LCB) (Sasena 2002): it returns the x_new with the minimum value of the lower bound of the model's confidence interval. Mathematically, LCB is simply a weighted sum of g(x) and s^2(x).
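A minimal sketch of this infill step, assuming the lower confidence bound takes the common form h(g, s) = g(x) − k·s(x) with a fixed weight k (k ≈ 2 roughly corresponds to a 95 % lower bound); the exact weighting used by Fast PVT is not specified here.

```python
import numpy as np

def choose_new_sample(model, X_left, k=2.0):
    """Return the index (into X_left) of the next corner to simulate."""
    mean, std = model.predict(X_left, return_std=True)  # g(x), s(x)
    lcb = mean - k * std                                 # lower confidence bound
    return int(np.argmin(lcb))
```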

Step 4: Simulate new sample; update: Run SPICE on the new sample: \( y_{new} = f(\mathbf{x}_{new}) \). We update the training data with the latest point: \( \mathbf{X} = \mathbf{X} \cup \mathbf{x}_{new} \) and \( \mathbf{y} = \mathbf{y} \cup y_{new} \).

Step 5: Stop if converged: Here, Fast PVT stops once it is confident that it has found the true worst-case. Specifically, it stops when it has determined that there is very low probability of finding any output values that are worse than the ones it has seen.
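Putting Steps 1 through 5 together, the sketch below shows the shape of the overall loop, reusing fit_gpm from the earlier sketch. The stopping rule shown (quit when the best predicted lower bound over the unsimulated corners is no better than the worst case already simulated) is a simplified stand-in for Fast PVT's actual convergence test.

```python
import numpy as np

def fast_pvt_loop(X_all, simulate, init_idx, k=2.0, max_sims=1000):
    """X_all: (N_C, n) array of all PVT corners; simulate: callable that runs
    SPICE on one corner; init_idx: indices of the initial DOE samples."""
    idx = list(init_idx)
    y = [simulate(X_all[i]) for i in idx]                # Step 1: simulate initial samples
    while len(idx) < min(max_sims, len(X_all)):
        model = fit_gpm(X_all[idx], np.array(y))         # Step 2: build the GPM
        left = [i for i in range(len(X_all)) if i not in idx]
        mean, std = model.predict(X_all[left], return_std=True)
        lcb = mean - k * std
        if lcb.min() >= min(y):                          # Step 5: nothing predicted to be worse
            break
        new = left[int(np.argmin(lcb))]                  # Step 3: choose x_new
        idx.append(new)
        y.append(simulate(X_all[new]))                   # Step 4: simulate x_new and update
    worst = int(np.argmin(y))
    return X_all[idx[worst]], y[worst]
```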

2.1.2 Illustrative Example of Fast PVT Convergence

Figure 2.12 shows an example Fast PVT verification convergence curve, plotting output value versus sample number. The first 20 samples are the initial samples X_init and y_init. After that, each subsequent sample x_new is chosen with adaptive modeling. The predicted lower bound shown is the minimum of the 95 %-confidence predicted lower bounds across all unsimulated PVT corners (X_left). The PVT corner with this minimum value is chosen as the next sample x_new. That new sample is simulated.

Fig. 2.12 Example of Fast PVT convergence

The dashed line in Fig. 2.12 is the minimum simulated value so far. We see that immediately after the initial samples, the first x_new finds a significantly lower simulated output value f(x_new). Over the course of the next several samples, Fast PVT finds even lower simulated values. Then the minimum-value curve flattens and does not decrease further. Simultaneously, from samples 20 to 40, we see that the predicted lower bound hovers around an output value of 30, but after that the lower bound increases, creating an ever-larger gap from the minimum simulated value. This gap grows because X_left has run out of corners that are close to worst-case, hence the remaining next-best corners have output values much higher than the already-simulated worst case. As this gap grows, confidence that the worst case has been found increases further, and at some point there is enough confidence to stop.

Appendix B: Gaussian Process Models

2.1.1 Introduction

Most regression approaches take the functional form:

$$ g(\mathbf{x}) = \sum_{i=1}^{N_B} w_i\, g_i(\mathbf{x}) + \varepsilon $$

where g(x) is an approximation of the true function f(x). There are N_B basis functions; each basis function g_i(x) has weight w_i. Error is ε. Because g_i(x) can be an arbitrary nonlinear function, this model formulation covers linear models, polynomials, splines, neural networks, support vector machines, and more. The overall class of models is called generalized linear models (GLMs). Model fitting reduces to finding the w_i and g_i(x) that optimize criteria such as minimizing mean-squared error on the training data, possibly plus regularization terms. These models assume that the error ε is normally distributed with mean zero, and that there is no error correlation between training points.

In this formulation, the error distribution remains constant throughout input variable space; it does not reduce to zero as one approaches the points that have already been simulated. This does not make sense for SPICE-simulated data: the model should have 100 % confidence (zero error) at previously simulated points, and error should increase as one draws away from the simulated points. Restating this, the model confidence should change depending on the input point.

2.1.2 Towards Gaussian Process Models (GPMs)

We can create a functional form where the model confidence depends on the input point:

$$ g(\mathbf{x}) = \sum_{i=1}^{N_B} w_i\, g_i(\mathbf{x}) + \varepsilon(\mathbf{x}) $$

Note how the error ε is now a function of the input point x. The question is now how to choose w_i, g_i(x), and ε(x) given our training data X and y. A regressor approach that fits our criteria of using ε(x) and handling arbitrary nonlinear mappings is the Gaussian process model approach (GPMs, a.k.a. kriging). GPMs originated in the geostatistics literature (Cressie 1989), have become popular in the global optimization literature (Jones et al. 1998), and appeared later in the machine learning literature (Rasmussen and Williams 2006). GPMs have such a powerful approach to modeling ε(x) that they can replace the first term of g(x) with a constant μ, giving the form:

$$ g(\mathbf{x}) = \mu + \varepsilon(\mathbf{x}) $$

In GPMs, ε(x) is normally distributed with mean zero and variance represented by a special matrix R. R is a function of the N training input points X, where the correlation between input points x_i and x_j is R_ij = corr(x_i, x_j) = exp(−d(x_i, x_j)), and d is a weighted distance measure \( d(\mathbf{x}_i, \mathbf{x}_j) = \sum_{h=1}^{n} \theta_h\, |x_{i,h} - x_{j,h}|^{p_h} \). This makes intuitive sense: as two points x_i and x_j get closer together, their distance d goes to zero, and therefore their correlation R_ij goes to one. The distance measure d is parameterized by the n-dimensional vectors θ and p, which characterize the relative importance and smoothness of the input variables. θ and p are learned via maximum-likelihood estimation (MLE) on the training data.
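The correlation function above transcribes directly into code. The sketch below does so, assuming θ and p are already given (in practice they come from MLE as described); it is a naive O(N²) construction for illustration only.

```python
import numpy as np

def corr(xi, xj, theta, p):
    """Correlation of two PVT points; approaches 1 as the points coincide."""
    return np.exp(-np.sum(theta * np.abs(xi - xj) ** p))

def corr_matrix(X, theta, p):
    """N x N correlation matrix R over the training points X (shape (N, n))."""
    N = len(X)
    R = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            R[i, j] = corr(X[i], X[j], theta, p)
    return R
```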

From the general form \( g(\mathbf{x}) = \mu + \varepsilon(\mathbf{x}) \), which characterizes the distribution, GPMs predict values for an unseen x via the following relationship:

$$ g(\mathbf{x}) = \mu + \mathbf{r}^{T}\mathbf{R}^{-1}\left(\mathbf{y} - \mathbf{1}\mu\right) $$

where the second term adjusts the prediction away from the mean based on how the input point x correlates with the N training input points X. Specifically, r = r(x) = {corr(x, x_1), …, corr(x, x_N)}. Once again, this formula follows intuition: as x gets closer to a training point x_i, the influence of that training point x_i, and of its corresponding output value y_i, becomes progressively greater.

Recall that we want a regressor not just to predict an output value g(x), but also to report the uncertainty in its predicted output value. In GPMs, this is an estimate of the variance s^2:

$$ s^{2}(\mathbf{x}) = \sigma^{2}\left(1 - \mathbf{r}^{T}\mathbf{R}^{-1}\mathbf{r} + \frac{\left(1 - \mathbf{1}^{T}\mathbf{R}^{-1}\mathbf{r}\right)^{2}}{\mathbf{1}^{T}\mathbf{R}^{-1}\mathbf{1}}\right) $$

In the above formulae, μ and σ^2 are estimated via analytical equations that depend on X and y. For further details, we refer the reader to Jones et al. (1998).
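The two prediction equations also translate directly into code. The sketch below reuses corr and corr_matrix from the previous sketch and assumes μ, σ², θ, and p have already been estimated from the training data (X, y); it uses an explicit matrix inverse for clarity, whereas a practical implementation would prefer a Cholesky factorization.

```python
import numpy as np

def gpm_predict(x, X, y, mu, sigma2, theta, p):
    """Return the predicted mean g(x) and variance s^2(x) at an unseen point x."""
    R = corr_matrix(X, theta, p)                          # training-point correlations
    r = np.array([corr(x, xi, theta, p) for xi in X])     # correlations of x to training points
    Rinv = np.linalg.inv(R)                               # Cholesky preferred in practice
    ones = np.ones(len(X))
    g = mu + r @ Rinv @ (y - ones * mu)
    s2 = sigma2 * (1.0 - r @ Rinv @ r
                   + (1.0 - ones @ Rinv @ r) ** 2 / (ones @ Rinv @ ones))
    return g, s2
```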

2.1.3 GPM Construction Time

With GPMs, construction time increases linearly with the number of parameters, and with the square of the number of training samples. In practical terms, this is not an issue for 5 or 10 input PVT variables with up to ≈500 corners sampled so far, or for 20 input PVT variables and ≈150 samples; but it does start to become noticeable if the number of input variables or the number of samples grows much beyond that.

In order for model construction not to become a bottleneck, the Fast PVT algorithm behaves as follows:

  • Once 180 simulations have been reached, it only builds models every 5 simulations rather than after every new simulation, and the interval between model builds grows with the number of simulations: interval = max(5, 0.04 × number of simulations). See the sketch after this list.

  • If Fast PVT has not converged by 1000 simulations, it simply simulates the rest of the full-factorial corners.
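A small sketch of the rebuild schedule described in the first bullet, assuming last_build records the simulation count at the most recent model build:

```python
def should_rebuild_model(num_sims, last_build):
    """num_sims: simulations completed so far; last_build: count at the last model build."""
    if num_sims <= 180:
        return True                                # rebuild after every simulation
    interval = max(5, int(0.04 * num_sims))        # interval grows with simulation count
    return num_sims - last_build >= interval
```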


Copyright information

© 2013 Springer Science+Business Media New York

About this chapter

Cite this chapter

McConaghy, T., Breen, K., Dyck, J., Gupta, A. (2013). Fast PVT Verification and Design. In: Variation-Aware Design of Custom Integrated Circuits: A Hands-on Field Guide. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-2269-3_2


  • DOI: https://doi.org/10.1007/978-1-4614-2269-3_2


  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4614-2268-6

  • Online ISBN: 978-1-4614-2269-3

