
3-Sigma Verification and Design

Rapid Design Iterations with Monte Carlo Accuracy

Variation-Aware Design of Custom Integrated Circuits: A Hands-on Field Guide

Abstract

This chapter explores how to efficiently design circuits that account for statistical process variation, with target yields of two to three sigma (95–99.86 %), a yield range appropriate for typical analog, RF, and I/O circuits. It reviews various design flows that handle statistical process variations and compares them in terms of speed and accuracy, showing how a sigma-driven corner flow has excellent speed and accuracy characteristics. It then describes the key algorithms needed to enable the sigma-driven corner flow, namely sigma-driven corner extraction and confidence-based statistical verification. Enabling technologies include Monte Carlo sampling, Optimal Spread Sampling, confidence intervals, and 3σ corner extraction.


Notes

  1. Or, if the designer is applying an automated sizer, we want to be independent of the competence of the sizer.

  2. To be precise, the non-Gaussian distribution is estimated with Kernel Density Estimation (KDE).

  3. It is accurate to the extent that the statistical MOS models are accurate.

  4. We say “implicitly” because the designer has no clear way to see the sample’s relation to yield.

  5. Benchmarks in Sect. 4.3.2 elaborate on this statement.

  6. A box plot is another way of visually representing a distributed set of sample data. The box contains 50 % of the samples, from the 25th to the 75th percentile. The lines extending from the box go to the minimum and maximum sample values seen.

  7. Within a target confidence level, such as 95 %, which means that 19 times out of 20 the conclusion is valid.

  8. An approximate answer, under light assumptions, is ≈80 MC samples for 2σ and ≈1,400 MC samples for 3σ. Section 4.6.1 provides a detailed answer.

  9. In the case of yield, we actually chose a spec value such that the true yield value would be ≈95 %.

References

  • Boggs PT, Tolle JW (1995) Sequential quadratic programming. Acta Numer, pp 1–50

  • Cools R, Kuo FY, Nuyens D (2006) Constructing embedded lattice rules for multivariate integration. SIAM J Sci Comput 28(6):2162–2188

  • Cortes C, Vapnik VN (1995) Support-vector networks. Mach Learn 20

  • Cranley R, Patterson T (1976) Randomization of number theoretic methods for multiple integration. SIAM J Numer Anal 13(6):904–914

  • Cressie N (1989) Geostatistics. Am Stat 43:192–202

  • Drennan PG, McAndrew CC (2003) Understanding MOSFET mismatch for analog design. IEEE J Solid State Circuits (JSSC) 38(3):450–456

  • Faure H (1982) Discrépance de suites associées à un système de numération (en dimension s). Acta Arithmetica 61:337–351

  • Graeb H (2007) Analog design centering and sizing. Springer, Dordrecht

  • Halton J (1960) On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals. Numer Math 2:84–90

  • Hammersley J (1960) Monte Carlo methods for solving multivariate problems. Ann NY Acad Sci 86:844–874

  • Hershenson MDM, Boyd SP, Lee TH (1998) GPCAD: a tool for CMOS op-amp synthesis. In: Proceedings of international conference on computer-aided design (ICCAD), pp 296–303

  • Jaffari J (2011) On efficient LHS-based yield analysis of analog circuits. IEEE Trans Comput Aided Des Integr Circuits Syst 30(1):159–163

  • Keramat M, Kielbasa R (1997) Latin hypercube sampling Monte Carlo estimation of average quality index for integrated circuits. Analog Integr Circ Sig Process 14(1–2):131–142

  • Korobov NM (1959) The approximate computation of multiple integrals. Dokl Akad Nauk SSSR 124:1207–1210 (in Russian; referenced by L’Ecuyer and Lemieux 2000)

  • L’Ecuyer P, Lemieux C (2000) Variance reduction via lattice rules. Manage Sci 46(9):1214–1235

  • Li X, McAndrew CC, Wu W, Chaudry S, Victory J, Gildenblat G (2010) Statistical modeling with the PSP MOSFET model. IEEE Trans Comput Aided Des Integr Circuits Syst 29(4):599–606

  • Li X, Pileggi L (2008) Quadratic statistical max approximation for parametric yield estimation of analog/RF integrated circuits. IEEE Trans Comput Aided Des Integr Circuits Syst 27(5):831–843

  • Liu B, Gielen G (2012) A fast analog circuit yield estimation method for medium and high dimensional problems. In: Proceedings of design automation and test in Europe (DATE), Dresden, March 2012

  • Magma Design Automation (2012) Titan ADX product page, http://www.magma-da.com/products-solutions/analogmixed/titanADX.aspx (last accessed May 21, 2012). Magma is now part of Synopsys, Inc.

  • Matsumoto M, Nishimura T (1998) Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator. ACM Trans Model Comput Simul 8(1):3–30

  • McAndrew CC, Stevanovic I, Li X, Gildenblat G (2010) Extensions to backward propagation of variance for statistical modeling. IEEE Des Test Comput 27(2):36–43

  • McConaghy T (2009) Latent variable symbolic regression for high-dimensional inputs. In: Riolo R, O’Reilly U-M, McConaghy T (eds) Genetic programming theory and practice VII. Springer, NY (invited paper)

  • McConaghy T (2011) High-dimensional statistical modeling and analysis of custom integrated circuits. In: Proceedings of custom integrated circuits conference (CICC)

  • McKay M, Beckman R, Conover W (1979; 2000) A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 42(1):55–61

  • Niederreiter H (1987) Point sets and sequences with small discrepancy. Monatshefte für Mathematik, pp 104–133

  • Niederreiter H, Xing C (1998) The algebraic-geometry approach to low-discrepancy sequences, vol 127. Springer-Verlag, Berlin, pp 139–160

  • Owen A (1998) Latin supercube sampling for very high-dimensional simulations. ACM Trans Model Comput Simul 8(1):71–102

  • Park SK, Miller KW (1988) Random number generators: good ones are hard to find. Commun ACM 31(10):1192–1201

  • Schenkel F et al (2001) Mismatch analysis and direct yield optimization by spec-wise linearization and feasibility-guided search. In: Proceedings of design automation conference (DAC), pp 858–863

  • Sinescu V, L’Ecuyer P (2011) Existence and construction of shifted lattice rules with an arbitrary number of points and bounded worst-case error for general weights. J Complexity 27(5):449–465

  • Singhee A, Rutenbar RA (2010) Why quasi-Monte Carlo is better than Monte Carlo or Latin hypercube sampling for statistical circuit analysis. IEEE Trans Comput Aided Des Integr Circuits Syst 29(11):1763–1776

  • Silva LG, Silveira LM, Phillips JR (2007) Efficient computation of the worst-delay corner. In: Proceedings of design automation and test in Europe (DATE), March 2007

  • Sloan IH (1994) Lattice methods for multiple integration. Oxford University Press, Oxford

  • Sobol I (1967) On the distribution of points in a cube and the approximate evaluation of integrals. Comput Math Math Phys 7:86–112

  • Synopsys Inc. (2012) Synopsys® HSPICE®, http://www.synopsys.com

  • Tao L (2011) A numerical integration-based yield estimation method for integrated circuits. J Semicond 32

  • Veetil V, Chopra K, Blaauw D, Sylvester D (2011) Fast statistical static timing analysis using smart Monte Carlo techniques. IEEE Trans Comput Aided Des Integr Circuits Syst 30(6):852–865

  • Yao P et al (2012) Understanding and designing for variation in GLOBALFOUNDRIES 28 nm technology. In: Proceedings of design automation conference (DAC), San Francisco, June 2012

  • Zhang H, Chen T-H, Ting M-Y, Li X (2009) Efficient design-specific worst-case corner extraction for integrated circuits. In: Proceedings of design automation conference (DAC), pp 386–389


Author information


Corresponding author

Correspondence to Trent McConaghy.

Appendices

Appendix A: Density-Based Yield Estimation on >1 Outputs

A.1 Introduction

This section discusses the challenge of yield estimation, with a focus on the application to corner extraction (Sect. 4.3.3). As we will see, density-based approaches provide the necessary resolution, but need special consideration for >1 output. Of the possible approaches to handle >1 output, the “Blocking Min” approach provides the requisite speed, accuracy, and scalability.

Given a set of Monte Carlo (MC) sample output values, there are two main ways to estimate yield: binomial and density-based. Chapter 3 introduced these approaches, and included a description of how confidence intervals for each approach were calculated. It did not discuss how a corner extraction algorithm might use yield estimates, or how density estimation might handle >1 outputs. This section covers those topics.

A.2 Binomial-Based Yield Estimation on >1 Outputs

In the binomial approach to estimate yield, one counts the number of MC samples that are feasible on all outputs, and the total number of samples. The yield estimate is simply the ratio of (number feasible)/(total number).

From this simple description, Fig. 4.32 shows the information flow in more detail. We will use this view as a framework to present a new technique for yield estimation. At the top of Fig. 4.32, we have 6 Monte Carlo samples, each of which has a value for output AV and for output PWR. Each Monte Carlo value for AV and for PWR is compared to its spec, and marked as feasible = True (T) or feasible = False (F). Then, on each sample, the T/F value for output AV is merged with the T/F value for PWR via the AND operator (T only if both input values are T). This is a blocking operation in the statistical sense, because the blocks of data that belong together (the T/F values for each output within a given MC sample) are kept together. Now we have one T/F value for each Monte Carlo sample. The yield estimate is the ratio of (number feasible, i.e. T’s)/(total number, i.e. T’s and F’s).

Fig. 4.32 Flow of information in binomial counting-based yield estimation
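To make the counting flow concrete, here is a minimal Python sketch (illustrative only, not the book’s implementation; the output names, spec limits, and sample values are hypothetical) that applies each spec test and then the blocking AND across outputs:

```python
import numpy as np

def binomial_yield(samples, specs):
    """Binomial (counting) yield estimate.

    samples: dict mapping output name -> 1-D array of MC values, one per sample.
    specs:   dict mapping output name -> (lower_limit, upper_limit);
             use -inf/+inf for one-sided specs.
    """
    n = len(next(iter(samples.values())))
    feasible = np.ones(n, dtype=bool)          # start with all samples feasible
    for name, values in samples.items():
        lo, hi = specs[name]
        ok = (values >= lo) & (values <= hi)   # per-output T/F vector
        feasible &= ok                         # blocking AND across outputs
    return feasible.sum() / n                  # (number feasible) / (total number)

# Hypothetical 6-sample example with outputs AV (dB) and PWR (A), as in Fig. 4.32.
samples = {"AV":  np.array([61.2, 59.8, 62.5, 60.4, 58.9, 63.1]),
           "PWR": np.array([1.1e-3, 0.9e-3, 1.4e-3, 1.0e-3, 1.2e-3, 0.8e-3])}
specs = {"AV": (60.0, np.inf), "PWR": (-np.inf, 1.3e-3)}
print(binomial_yield(samples, specs))          # fraction of samples passing all specs
```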

Recall that we are estimating yield, with an eye towards the application of 3σ corner extraction. In 3σ corner extraction, we want to be able to make small changes to specifications out at ≈3σ, and get back slightly different estimates for yield. That is, it needs fine-grained precision at 3σ. This is especially necessary in an optimization formulation for corner extraction (Sect. 4.4.3). The problem is that for the binomial approach to start to have good precision out at 3σ, it needs 1,000 samples or so. While 1,000–2,000 samples are reasonable for verification, that is quite an expensive demand for corner extraction, which does not need to be as accurate as verification.

Since a binomial MC approach does not provide us with the desired resolution for 3σ corner extraction, let us examine density estimation to see how well it might fit.

A.3 Density-Based Yield Estimation on One Output

We first discuss the density-estimation approach to estimate yield on one output, then consider how we might handle >1 outputs. Figure 4.33 reviews how yield is calculated from a density-estimated PDF. Quite simply, it is the result of integrating under the PDF in the range from −∞ to the spec value (or spec value to +∞). Density estimation has more fine-grained precision at 3σ than binomial: small changes made to the spec value lead to small changes in yield estimate. This makes it a better fit in the optimization formulation for corner extraction (Sect. 4.4.3). Density estimation will work fine out at 3σ even with 50–100 MC samples; it assumes that it can safely extrapolate. This will be accurate on some circuits, where performance does not drop off sharply. On other circuits it will not be as accurate, but for corner extraction that’s fine because the verification step will catch it. (And if there is failure in the verification step, then the new corner extraction round will have better accuracy because it will have more MC samples to work from.)

Fig. 4.33 Computing yield from a one-dimensional density-estimated PDF
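A minimal sketch of this calculation, assuming a Gaussian kernel density estimator (scipy’s gaussian_kde) as the density model; the sample data and spec value are hypothetical:

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_yield_1d(values, spec, upper_is_spec=True):
    """Density-based yield for one output: fit a KDE to the MC values, then
    integrate the estimated PDF over the feasible side of the spec."""
    kde = gaussian_kde(values)
    if upper_is_spec:          # feasible region is (-inf, spec]
        return kde.integrate_box_1d(-np.inf, spec)
    else:                      # feasible region is [spec, +inf)
        return kde.integrate_box_1d(spec, np.inf)

# Hypothetical example: 100 MC values of an output, with an upper spec limit.
rng = np.random.default_rng(0)
values = rng.normal(loc=1.0e-3, scale=0.1e-3, size=100)
print(density_yield_1d(values, spec=1.3e-3))   # smooth estimate, even near 3-sigma
```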

A.4 Density-Based Yield Estimation on >1 Outputs

When we consider using density estimation to compute a yield across >1 outputs, the discussion gets more complex. Figure 4.34 illustrates the target flow. At the top, we have raw MC output values coming in. At the bottom, we want to build some sort of density model or models, and somehow integrate over that, to get an overall yield estimate. The question is: what are the appropriate steps in between? It turns out there are a few different options, with different pros and cons. Let us examine them.

Fig. 4.34 Flow of information in density-based yield estimation: the challenge

One approach is to do n-dimensional density estimation across the n outputs, then simply integrate directly. Figure 4.35 illustrates. The challenge with this approach is that density estimation has poor scalability properties: the quality of the density models degrades badly as dimensionality increases. Even 5 dimensions gives quite poor density models, and many circuit applications might even have ≫5 outputs.

Fig. 4.35 Yield estimation via n-dimensional density estimation

Another approach is to estimate the PDF of each output one at a time, then combine them somehow. This is a subproblem in some approaches to do statistical static timing analysis (SSTA), which need to combine two input delay PDFs into a single “worst-case” (max) delay PDF. The idea is to approximate the “max” operator with a linear function, where the linear function is calibrated by the incoming PDFs. This approach has been extended to analog circuits, and the “Linear Max” function extended to a “Quadratic Max” (Li and Pileggi 2008). However, this approach only handles unimodal PDFs, and the “max” operator induces error which degrades the overall accuracy of yield estimation.

The final approach is a novel technique which we call the “Blocking Min”. It does not suffer from the accuracy issues of the other approaches, and it scales to an arbitrarily high number of outputs. The core idea is as follows. Looking back at the Linear/Quadratic Max approaches, we see that PDFs are estimated one at a time, then combined. This ignores the natural grouping of outputs into individual MC samples. In contrast, the Blocking Min exploits these natural groupings: it keeps each grouping together and applies a “min” operator to the real MC sample values, to compress >1 outputs into a single scalar “combined” output. Only at this point is a PDF estimated, from the “combined” output values.

Figure 4.36 illustrates the Blocking Min approach. At the top is the raw data, with a value of AV and PWR for each MC sample. We want to apply a “min” operator to these, but cannot do so directly because AV’s units (dB) are not the same as PWR’s units (amps). So, we rescale each set of output values into margin values via a Cpk-like formula, where the margin is ≥ 0 if feasible, < 0 if infeasible, 0 on the boundary, and has a standard deviation of ≈1.0. Specifically,

$$ margin_{i,j} = \min\left( \frac{USL_{i} - v_{i,j}}{\widehat{\sigma_{i}}},\; \frac{v_{i,j} - LSL_{i}}{\widehat{\sigma_{i}}} \right), $$

where v_{i,j} is MC sample j of output i, USL_i is the upper spec limit of output i, LSL_i is the lower spec limit, and \(\widehat{\sigma_{i}}\) is the estimated standard deviation of output i.

Fig. 4.36 Flow of information in density-based yield estimation: the solution is the Blocking Min

Once we have computed the margin for each output of each MC sample, we can apply the min operator across the outputs of an MC sample, to get the overall margin for each MC sample. Then, we build a density model for overall margin, from that set of samples. The overall yield is simply the area under the density model for a value ≥0.
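A minimal sketch of the full Blocking Min pipeline, again using a KDE as the one-dimensional density model; the output names, spec limits, and data are hypothetical:

```python
import numpy as np
from scipy.stats import gaussian_kde

def blocking_min_yield(samples, specs):
    """Blocking Min yield estimate across >1 outputs.

    samples: dict output name -> 1-D array of MC values (same length per output).
    specs:   dict output name -> (LSL, USL); use -inf/+inf for one-sided specs.
    """
    margins = []
    for name, v in samples.items():
        lsl, usl = specs[name]
        sigma = v.std(ddof=1)                       # estimated std dev of output i
        # Cpk-like margin: >= 0 feasible, < 0 infeasible, ~unit standard deviation
        m = np.minimum((usl - v) / sigma, (v - lsl) / sigma)
        margins.append(m)
    overall = np.min(np.vstack(margins), axis=0)    # blocking "min" per MC sample
    kde = gaussian_kde(overall)                     # a single 1-D density model
    return kde.integrate_box_1d(0.0, np.inf)        # area where overall margin >= 0

# Hypothetical AV/PWR data, matching the style of Fig. 4.36:
samples = {"AV":  np.random.default_rng(1).normal(61.0, 0.8, 200),
           "PWR": np.random.default_rng(2).normal(1.0e-3, 0.15e-3, 200)}
specs = {"AV": (60.0, np.inf), "PWR": (-np.inf, 1.3e-3)}
print(blocking_min_yield(samples, specs))
```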

The Blocking Min is fast because it only needs to estimate a single 1-dimensional PDF. It is scalable because it compresses the multiple outputs into a single dimension. It is accurate because it does not make any linear or quadratic approximations in the course of compressing to a single dimension, thanks to the “blocking” action to compute overall margin. Furthermore, it can handle multimodal distributions and other highly non-Gaussian distributions.

The Blocking Min suits the application of yield estimation for corner extraction quite naturally. It is fast, accurate, and scalable as discussed; and because it uses density estimation it provides high resolution to support an optimization-style tuning of specifications.

Appendix B: Details of Low-Discrepancy Sampling

This section has three parts: a detailed literature review of low-discrepancy sampling (Sect. B.1), followed by descriptions of Optimal Spread Sampling (OSS) for point sets (Sect. B.2) and for point sequences (Sect. B.3).

B.1: Detailed Review of Low-Discrepancy Sampling

In the literature, “well-spread” sampling is most commonly known as “low-discrepancy sampling”. The lower the discrepancy, the better the samples are spread. A simple example of a discrepancy measure is the (negative) minimum distance between all points in a sample set; more complex measures exist in the literature. Sampling can be done to generate a single point set holding N items, or to generate a point sequence one sample at a time and make continuous estimates using those samples.
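For illustration, here is a sketch of that simple spread measure (an illustrative helper, not a measure used by any particular tool):

```python
import numpy as np
from scipy.spatial.distance import pdist

def neg_min_distance(points):
    """Simple spread measure: the (negative) minimum pairwise distance of a
    point set. Lower values indicate better spread (lower 'discrepancy')."""
    return -pdist(points).min()

rng = np.random.default_rng(0)
print(neg_min_distance(rng.random((100, 2))))   # 100 pseudo-random points in 2-D
```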

Low-discrepancy sampling has its origins in Quasi-Monte Carlo (QMC) methods as well as cubature methods, both of which were developed with a focus on numerical integration. Modern low-discrepancy techniques can be classified into two main categories: Digital Nets and Lattice Rules. Digital Nets encompass many traditional QMC techniques, including Halton (1960), Sobol (1967), Faure (1982), Hammersley (1960), Niederreiter (1987), and Niederreiter-Xing sampling (1998). Lattice Rules encompass many traditional techniques including orthogonal arrays, Latin Hypercube Sampling (McKay et al. 1979; 2000), and Latin Supercube Sampling (Owen 1998). While Digital Net methods were traditionally designed for point sequences, they can be used for point sets; and while Lattice Rules methods were traditionally designed for point sets, researchers have shown how to alter them for use as point sequences (Cools et al. 2006).

The CAD field has explored some low-discrepancy sampling approaches. Latin Hypercube Sampling (LHS) (McKay et al. 2000) is quite simple and quite popular (Keramat and Kielbasa 1997; Jaffari 2011; Tao 2011; Liu and Gielen 2012). It works as follows. If one aims to generate n random samples in d dimensions, then each dimension is divided into n bins of equal probability. When drawing the samples, each bin is drawn from exactly once. Overall, this means that there will be good spread for each dimension, independently of the other dimensions. However, LHS does nothing to ensure that there is good spread among points in >1 dimension. This matters when there are interactions among random (process) variables in the mapping to output variables. As we saw in Sect. 4.6.7, LHS does well on circuits where the interactions are weak, and not as well on circuits with stronger or higher-order interactions. There are techniques to improve LHS on second-order interactions (e.g. Jaffari 2011), but these increase complexity, increase runtime, and still do not handle higher-order interactions.
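A minimal sketch of this basic LHS construction (illustrative; not the implementation used in any benchmark):

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Basic Latin Hypercube Sampling: n points in the d-dimensional unit cube.
    Each dimension is split into n equal-probability bins; each bin is used once."""
    rng = np.random.default_rng(rng)
    samples = np.empty((n, d))
    for j in range(d):
        # one random point inside each of the n bins, visited in a random order
        offsets = rng.random(n)
        perm = rng.permutation(n)
        samples[:, j] = (perm + offsets) / n
    return samples

points = latin_hypercube(n=8, d=3, rng=0)   # 8 samples over 3 (process) variables
```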

Other circuit CAD researchers have explored variants of QMC methods such as Sobol’ sampling. Many QMC methods do poorly in >10 or so dimensions, so the research has focused on workarounds to handle hundreds or thousands of dimensions. Singhee and Rutenbar (2010) bypassed the issue by doing a short “pilot” run first to estimate the relative importance of each process variable, then focusing the QMC sampling on the 10 most important dimensions. Veetil et al. (2011) took a similar approach, focusing QMC methods on the most important variables. Of course, this only helps if 10 process variables have most of the impact. McConaghy (2009) showed a representative circuit problem where the first 10 variables had only 50 % of the impact, and it took 85 variables to get 95 % of the impact. A further challenge is that most circuits have >1 output. With 5 outputs, each with different high-impact variables, one would need 5× more “important” variables, or would have to assign just 10/5 = 2 important variables per output.

Optimal Spread Sampling (OSS) is a low-discrepancy sampling technique that draws ideas from both Digital Nets and Lattice Rules, building on recent advances in those fields rather than on the older LHS and QMC approaches used in other circuit references. These advances give it properties that greatly improve on the older LHS and QMC techniques: it generates points with good spread in all dimensions simultaneously, rather than just one dimension at a time like LHS, and it can scale to thousands or hundreds of thousands of input variables without resorting to heuristics like the recent QMC circuit techniques.

B.2: Creating a Point Set with Optimal Spread Sampling

This section describes how to create a set of n uniformly-distributed samples in d-dimensional space using Optimal Spread Sampling (OSS).

OSS gives the point set

$$ \mathbf{P} = \{ k \cdot \mathbf{z} / n \}, \quad k = 0, 1, \ldots, n - 1 $$

where {x} is the fractional part of x, i.e. {x} = x − floor(x); and z = (z_1, …, z_d) is the generating vector, a d-dimensional integer vector having no factor in common with n.

Once a z is determined, generating a point set P is straightforward, using the equation above. The challenge is to determine z for the given n and d. The OSS algorithm implicitly holds a Fourier-series approximation of all possible functions, and optimizes across all possible z to minimize the discrepancy measure of worst-case error (Sloan 1994). This optimization has time complexity O(d n log(n)) and memory complexity O(n), where d is the number of process variables and n is the number of samples.

Like many low-discrepancy approaches, OSS samples can be “randomized” so that OSS becomes a variance reduction technique: it takes samples in the uniform (0,1) space, yet retains its high uniformity when taken as a point set. A simple way to do this is with the random shift modulo technique (aka Cranley-Patterson rotation) (Cranley and Patterson 1976), which draws a single d-dimensional point u ~ unif_d(0,1) and adds it to the point set P, modulo 1. This operation can equivalently be incorporated into the equation for the point set when computing P. A second randomization, permuting z before generating a new point set, ensures that all variables are treated equally.

Table 4.3 presents the pseudocode to generate a (randomized) set of n points P in unif_d(0,1) space.

Table 4.3 Procedure UniformOssSet()

The vector z is computed in step 1 of Table 4.3. There are many techniques to accomplish this, from early, simple techniques (Korobov 1959) to modern techniques that are more involved and complex, but scalable (Sinescu and L’Ecuyer 2011). Step 2 and step 7 accomplish the Cranley-Patterson rotation for randomization; step 2 draws a point from a uniform distribution using pseudo-random sampling, and step 7 performs the actual rotation via the addition of u_j and the modulo operator {}. Step 3 ensures all variables are treated equally. Steps 4–8 iteratively build up the point set P. Each entry in P is a value for one variable j of one sample k. The key operation is step 7, where the base value is the product of k and z_j, scaled down by the number of samples n (then “randomized” via the Cranley-Patterson rotation).
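A minimal sketch of this point-set construction, assuming a generating vector z has already been computed (step 1 is not shown; the Korobov-style z below is a placeholder for illustration, not an optimized OSS generating vector):

```python
import numpy as np

def oss_point_set(n, z, rng=None):
    """Rank-1 lattice point set with Cranley-Patterson rotation, following the
    structure of Table 4.3. `z` is a precomputed d-dimensional generating vector
    (its construction, step 1, is outside this sketch)."""
    rng = np.random.default_rng(rng)
    u = rng.random(len(z))                      # step 2: random shift u ~ unif_d(0,1)
    z = rng.permutation(np.asarray(z))          # step 3: treat all variables equally
    k = np.arange(n).reshape(-1, 1)             # steps 4-8: one row per sample k
    return (k * z / n + u) % 1.0                # step 7: {k*z_j/n + u_j}

# Placeholder Korobov-style generating vector, purely for illustration; a real OSS
# implementation would optimize z for the given n and d (Sinescu and L'Ecuyer 2011).
n, d, a = 64, 5, 19
z = np.array([pow(a, j, n) for j in range(d)])
points = oss_point_set(n, z, rng=0)             # n x d array in unif_d(0,1) space
```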

In our experience, the extra computational cost of this algorithm is negligible compared to the cost of pseudo-random number generation.

B.3: Creating a Point Sequence with Optimal Spread Sampling

Figure 4.23 introduced the possibility that Optimal Spread Sampling (OSS) could be used to generate not just sets, but sequences too. A sequence is desirable for “anytime” style algorithms, where each additional step of the algorithm provides incremental value to the user, rather than relying on the algorithm to complete fully before results are available. For Monte Carlo sampling, a sequence of well-spread points is useful to give on-the-fly information to the user during sampling, rather than waiting until a full sampling run is complete. This section describes how OSS sequences can be generated.

OSS sequences are possible when the upper limit on n can be estimated; this occurs in many practical problems such as when the user has pre-specified the number of process points, or the target yield to verify a design (from which the number of points can be estimated under mild assumptions). Then the core idea is to embed smaller point sets in successively larger point sets, as Fig. 4.23 shows.

In mathematical terms, if P_m is the point set from OSS with b^m points, then P_1 is a subset of P_2, which is a subset of P_3, and so on.

To use this practically, we choose a base value b (e.g. 2), let m_1 = 1, then compute the minimal m_2 such that b^{m_2} ≥ n. Then, we compute a z which works across the whole range of m values m = {m_1, m_1 + 1, …, m_2}, to account for many possible point sets simultaneously. This has runtime O(d·n·(log n)^2). With z in hand, points are first drawn from set P_1, then from P_2 \ P_1, then from P_3 \ (P_2 ∪ P_1), and so forth. Each set’s points can be ordered randomly, with Gray codes, or with the radical inverse (Niederreiter 1987).

Table 4.4 presents the pseudocode to generate a sequence of n points. Compared to the approach for sets (Table 4.3), it has an outer loop on the exponent m (step 7), and each sample divides by b^m rather than by n (step 12). It uses i for the sample index (steps 6, 10, 12). Since each set P_{m+1} embeds all smaller sets, the sequence must avoid re-drawing their points; this turns out to be easy, because the points of P_{m+1} that also belong to smaller sets are exactly those whose indices k are multiples of b (step 9).

Table 4.4 Procedure UniformOssSequence()
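A minimal sketch of the sequence construction described above, assuming a generating vector z that works across the whole range of set sizes (its construction is not shown; the z below is a placeholder, not an optimized one):

```python
import numpy as np

def oss_sequence(n_max, z, b=2, rng=None):
    """Generator for an embedded-lattice ("OSS-style") point sequence, following
    the structure described for Table 4.4. One Cranley-Patterson shift is shared
    by all embedded sets so that the embedding property is preserved."""
    rng = np.random.default_rng(rng)
    z = np.asarray(z, dtype=float)
    u = rng.random(len(z))                        # single random shift, applied to all sets
    m2 = int(np.ceil(np.log(n_max) / np.log(b)))  # smallest m2 with b**m2 >= n_max
    emitted = 0
    for m in range(1, m2 + 1):                    # outer loop on exponent m
        bm = b ** m
        for k in range(bm):
            if m > 1 and k % b == 0:
                continue                          # index k is a multiple of b: this point
                                                  # was already emitted by a smaller set
            yield (k * z / bm + u) % 1.0          # divide by b**m rather than by n
            emitted += 1
            if emitted >= n_max:
                return

# Illustrative use with a placeholder generating vector (not an optimized one):
z = np.array([1, 19, 25, 41, 53])
seq = list(oss_sequence(n_max=100, z=z, b=2, rng=0))   # 100 well-spread 5-D points
```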

The Optimal Spread Sampling option in Solido Variation Designer draws samples with an OSS sequence.


Copyright information

© 2013 Springer Science+Business Media New York

About this chapter

Cite this chapter

McConaghy, T., Breen, K., Dyck, J., Gupta, A. (2013). 3-Sigma Verification and Design. In: Variation-Aware Design of Custom Integrated Circuits: A Hands-on Field Guide. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-2269-3_4


  • DOI: https://doi.org/10.1007/978-1-4614-2269-3_4

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4614-2268-6

  • Online ISBN: 978-1-4614-2269-3

