This chapter discusses some of the DAC correction methods that use exact error information acquired from measurements. The discussion concerns the correction only; the measurement and error processing are beyond the scope of this chapter. Three principal correction methods are discussed: self-calibration in Sect. 4.2, mapping in Sect. 4.3, and digital pre-distortion in Sect. 4.4. These methods are characterized by the way they utilize DAC intrinsic and extrinsic redundancies. These characteristics are further used in Chap. 7 to propose a general classification of the DAC correction methods.

1 Introduction

The open literature provides various DAC correction methods that can improve the DAC performance using exact information about the DAC errors. Broadly, these methods use exact error information to provide exact error correction. This avoids background correction activity that increases power consumption and can potentially deteriorate performance. These methods do not rely on a detailed understanding of the error mechanisms and hence they can correct “unexpected” errors, e.g. due to migrating to other technologies or integrating an IP block within another system.

This chapter discusses a selected set of state-of-the-art DAC correction methods that use self-measurements. That is to say, these methods illustrate state-of-the-art principles for improving performance using exact error information. Some of these principles can be used together or in combination with correction methods that do not use exact error information (Chap. 3). Section 4.2 discusses the self-calibration of currents method. Section 4.3 discusses the unary code mapping method. Section 4.4 discusses the digital pre-distortion method. A general discussion is provided in Sect. 4.5. Finally, conclusions are drawn in Sect. 4.6.

2 Self-Calibration of DAC Current Cells

Self-calibration of the switch-current (SI) cell is a direct and powerful method to correct the DAC errors. This method directly targets the errors: it first measures the errors, then processes the measurement information, and finally applies correction. In this book, self-calibration is defined as follows. “Self” means that the correction method is fully autonomous: the error measurement, the algorithm, and the error correction are integrated on-chip and no special activity from the customer is required. Thus, considering the DAC as a black box, there are no functional differences between a self-calibrated DAC and an intrinsic DAC. “Calibration of the SI cell” means a correction method that measures and corrects the DAC errors in the same domain (e.g. the analog domain for amplitude and timing errors). This definition is meant to distinguish the self-calibration correction method from other DAC correction methods for the sake of classifying their properties.

Figure 4.1 shows a generalized block diagram of the self-calibration methods for current-steering DACs. The self-calibration methods always add extrinsic redundancy to the DAC design, i.e. circuits not directly used for the purpose of D/A conversion. The measurement is implemented in the analog domain (including, in some cases, the time domain). The error processing can be either in the analog or in the digital domain. The correction is always in the analog domain. Self-calibration can correct both amplitude and timing errors.

Fig. 4.1 A generalized block diagram of self-calibrated CS DACs

2.1 Amplitude Errors Self-Calibration

The transistor mismatch errors cause inaccurate current generation in the DAC current cell, which deteriorates both the static and the dynamic DAC performance. Particularly in combination with other error mechanisms, the DAC performance degradation can be significant. Therefore, correcting the amplitude current cell errors not only improves the accuracy of the DAC analog output generation but also reduces errors due to mechanisms that correlate with the current amplitudes, as discussed in Chap. 5.

The methods for amplitude error calibration are diverse. Some are completely analog, e.g. [27, 34]; others use digital error processing [35, 36]; there are background and foreground calibration methods, etc. However, the common property is that the errors are identified and corresponding corrections are applied. It is also common to observe improvement over the whole DAC bandwidth, i.e. for both low and high input signal frequencies.

The common conceptual idea of the self-calibration methods is to acquire sufficiently accurate a-posteriori knowledge about the amplitude errors and to apply the appropriate correction. That is how the dependence on the actual IC fabrication process is minimized and the chip yield is improved. Equation 4.1 shows the generation of a current \({I_k}\) that can be seen as either a DAC output current or an individual current source. In either case, there is a time-invariant error \({I_{{e_k}}}\) that accounts for the difference from the nominal design value \(\overline I \) (time-variant errors are not considered here).

$${I_k} = \overline I + {I_{{e_k}}}$$
(4.1)

The self-calibration methods apply a correction current \({I_{cor}}\) that is chosen to be equal to the measured error \({I_{{e_k}}}\), as shown in Eq. 4.2. That is how the error \({I_{{e_k}}}\) is compensated and its effect is practically removed.

$${I_k} = \overline I + {I_{{e_k}}} - {I_{cor}}\xrightarrow {{I_{{e_k}}} = {I_{cor}}}\overline I $$
(4.2)
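As a behavioral illustration of Eqs. 4.1 and 4.2, the following sketch models a bank of current cells with random mismatch and a quantized correction current. It is a numerical model only, not a circuit; the mismatch spread `sigma` and the correction-DAC step `dac_step` are hypothetical values chosen for the example.

```python
import random

def self_calibrate(nominal_i, n_cells, sigma, dac_step):
    """Behavioral sketch of amplitude self-calibration (Eqs. 4.1-4.2).

    Each cell current is I_k = I_nom + I_e_k.  The measured error is
    cancelled by a correction current I_cor, quantized to the step of a
    hypothetical correction DAC, so a small residual error remains.
    """
    cells = [nominal_i + random.gauss(0.0, sigma) for _ in range(n_cells)]
    calibrated = []
    for i_k in cells:
        i_e = i_k - nominal_i                      # self-measurement
        i_cor = round(i_e / dac_step) * dac_step   # quantized I_cor
        calibrated.append(i_k - i_cor)             # I_k - I_cor -> I_nom
    return cells, calibrated

random.seed(1)
raw, cal = self_calibrate(nominal_i=1.0, n_cells=64, sigma=0.02, dac_step=0.001)
worst_raw = max(abs(i - 1.0) for i in raw)
worst_cal = max(abs(i - 1.0) for i in cal)   # bounded by dac_step / 2
```

The residual after calibration is limited by the resolution of the correction circuit, which illustrates why the operational range and step of the correction DAC must be designed from a-priori error knowledge.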

Adding extrinsic redundancy to the DAC core creates unavoidable interaction with the intrinsic DAC operation, which poses a risk of deteriorating the intrinsic performance. This risk is more considerable at high speeds, i.e. at high input signal frequencies and sampling rates. The main aspects of this risk include:

  1. Performance degradation due to the self-measurement and self-correction circuits;

  2. Reduced correction effectiveness at high speeds;

  3. Performance degradation due to interference from calibration activity.

In order to add the self-measurement and self-correction circuits, the optimal intrinsic DAC architecture has to be adapted, since both the error measurement and the correction are performed in the analog domain. Therefore, the DAC intrinsic behavior unavoidably deviates from its optimal performance. Balancing the performance improvement against this deterioration is an important analysis step in the design of a DAC self-calibration method. The correction is static, in the sense that the targeted errors are static. Furthermore, it is always passive for the foreground self-calibration methods, e.g. [35], and always active for the background self-calibration methods, e.g. [27]. At high speeds, other error mechanisms become dominant, e.g. timing errors due to mismatched sampling moments. That is to say, the self-calibration methods for amplitude errors are most effective at low speeds. Therefore, it is important to analyze up to what speeds the calibration advantages outweigh the disadvantages. Furthermore, for the foreground self-calibration methods the correction is time-invariant, while there are often time-variant errors, e.g. errors due to temperature changes. An apparent solution is offered by the background calibration methods, in which the errors are continually measured and corrected. However, these methods may additionally deteriorate the DAC performance due to the activity of the calibration method while the DAC performs D/A conversion.

2.2 Timing Errors Self-Calibration

The mismatch errors between the switch-transistor threshold voltages Vth add timing errors to the data sampling moments \({t_{on}}\). Since the switching transistors (e.g. of the synchronization latches, clock and data buffers, DAC current cells) need to be small to achieve high speeds, their mismatch errors cannot be reduced by design (i.e. by big layout units placed close to each other). Therefore, the timing errors can substantially limit the DAC performance at high speeds.

Self-calibration of timing errors is similar to amplitude error self-calibration. Instead of correcting the current amplitudes, the self-calibration corrects the sampling moments \({t_{on}}\). Due to various transistor mismatches in the data chain “latches-data drivers-current switches” and due to various spatial delays and parasitic capacitances, the current cell sampling moments \({t_{on}}\) are not identical. They deviate from the nominal sampling moment \(\overline {{t_{on}}} \) by an error \({t_e}\), see Eq. 4.3 and Fig. 5.3.

$${t_{on}} = \overline {{t_{on}}} + {t_e}. $$
(4.3)

The timing error \({t_e}\) is a time-invariant error, because it is due to time-invariant error sources and mechanisms. Note that a distinction should be made between the time-invariant error \({t_e}\) and clock jitter, which is time-variant (a similar distinction as between amplitude mismatches and noise). The timing self-calibration method measures \({t_e}\) and applies an appropriate correction \({t_{cor}}\).

$${t_{on}} = \overline {{t_{on}}} + {t_e} - {t_{cor}}\xrightarrow {{t_e} = {t_{cor}}}\overline {{t_{on}}} $$
(4.4)
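The correction of Eq. 4.4 can be sketched in the same behavioral way as the amplitude case. The measured errors and the 1 ps delay-line step below are hypothetical values; the point is that the residual after calibration is bounded by half the realizable delay step.

```python
def calibrate_timing(t_errors, delay_step):
    """Sketch of timing self-calibration (Eqs. 4.3-4.4): each measured
    error t_e is cancelled by the nearest realizable delay tap t_cor,
    leaving a residual bounded by half the delay-line step."""
    residuals = []
    for t_e in t_errors:
        t_cor = round(t_e / delay_step) * delay_step  # nearest tap
        residuals.append(t_e - t_cor)                 # t_e - t_cor
    return residuals

# Hypothetical measured errors (in ps) and a 1 ps correction step:
res = calibrate_timing([3.7, -1.2, 0.4, -5.9], delay_step=1.0)
```

This also makes visible why the measurement resolution is the bottleneck: the residual timing error can never be smaller than what the measurement and delay-line steps allow.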

The work of [37] presents an approach that can measure the timing errors and correct them via DDL-based delays in the data chain.

The major limitation of the timing error self-calibration methods is the measurement of the timing errors \({t_e}\). There is no literature source yet that reports timing error measurement in practice. Note that this correction method is most effective for high speed DAC input signals, when a lot of switching activity is present. Thus, the resolution step of the timing error measurement needs to be very small.

2.3 Discussion

The DAC self-calibration methods use extrinsic DAC redundancy. That is why the risks of intrinsic DAC performance deterioration should always be carefully considered. The correction is always in the analog domain and interacts with the intrinsic DAC current generation. However, the self-calibration corrects the errors based on a-posteriori error knowledge and hence reduces the DAC accuracy dependence on the IC fabrication process. As a second-order consideration, the self-calibration methods also rely on a-priori error knowledge in order to be able to appropriately design the operational ranges of their correction circuits. In other words, the correction should be able to cover the errors within the predicted range. Since the mismatch errors are corrected, the intrinsic DAC transistors have relaxed accuracy requirements and hence significant silicon area can be saved.

3 Mapping

The mapping methods use a-posteriori knowledge about the particular mismatch errors between the current cells and apply correction via combining the current cells in such a way that their errors mutually compensate each other. Thus, the mapping methods measure the errors in the analog domain, process the errors in the digital domain and compensate them in the digital domain (applying a map) with an analog effect (using the mutual error compensation). Figure 4.2 shows a conceptual diagram of a DAC using a mapping correction method. Note that the intrinsic DAC core is indicated as a DAC system, which may be a single unary or segmented DAC, a sub-binary radix DAC, or multiple parallel sub-DACs.

Fig. 4.2 A conceptual diagram of a DAC using a mapping correction method

A map can be defined as a P bit particular representation of the N bit input digital word, with P > N always. From this definition follows a property of these methods: to apply a mapping technique, the presence of DAC current cell redundancy is necessary, since P > N is always required. Note that the N bit input word is binary and hence features no code redundancy, while P < N would lose input information.

The requirement for the DAC intrinsic redundancy (P bits represent N bit data with P > N) implies in the general case that there are multiple combinations of the DAC resources, i.e. current cells, to represent an input digital word W. Each of these combinations is unique in the analog domain, because of the current source mismatch errors. The coding efficiency is exchanged for redundancy, which is the source of the DAC performance improvement. The mapping algorithms select the optimal combinations.

Generally, the MAP block can be defined as a \(\left( {{2^N} - 1} \right) \times P\) memory block. However, more efficient solutions may exist for certain applications of the mapping correction method. The content of the memory block is the particular mapping function, representing the chosen map Mk as a time-invariant correspondence between the N input binary bits and the P output bits. The map Mk can be applied to correct either DAC static or dynamic errors, but not both at once, since the amplitude and timing errors are different. A compromise between the two may also be the target of the map selection algorithm.
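As a toy illustration of the MAP block as a memory, the sketch below stores the plain thermometer map M1 for a hypothetical N = 3 DAC. A calibrated MAP would instead store permuted P-bit words chosen from the measured errors; only the storage structure is shown here.

```python
def build_thermometer_map(n_bits):
    """The MAP block modeled as a small memory: for each input code w
    it stores a P-bit unary word (P = 2^N - 1).  Shown here is the
    conventional thermometer map M1; a calibrated MAP would store
    error-aware permutations of these words instead."""
    p = 2 ** n_bits - 1
    return {w: [1] * w + [0] * (p - w) for w in range(p + 1)}

m1 = build_thermometer_map(3)   # N = 3, so P = 7 unary bits per entry
```

For example, `m1[5]` holds the seven unary bits driving the current cells for input code 5, with exactly five cells switched on.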

To reduce the size of the MAP block, digital logic instead of memory can be used. Usually, these logic circuits offer only a limited number of available maps. When the correlation between these available maps is low, a much more efficient improvement is achieved using logic circuits than using memory blocks. Some examples of such digital logic are programmable decoders (e.g. proposal in Chap. 11) and matrices based on switches, such as the butterfly switch matrix [38].

3.1 Low Level Maps for DAC Unary Current Cells

The following text briefly presents the mapping method for unary DACs. It is the simplest form of the mapping methods, compared to the mapping for sub-binary radix DACs and the high-level mapping method proposed in Chap. 12. The text considers the unary DAC intrinsic core, but all examples are also applicable to the unary MSB part of segmented DAC architectures. Basic definitions of the mapping function are given, the limitations of the method are considered, and finally a discussion is provided.

To illustrate the concept of coding redundancy in DACs, consider the thermometer (unary) digital coding, where intrinsic redundancy is present. For the unary coded DAC, each input bit represents an LSB step. Thus, to convert N binary bits, a unary DAC uses \({2^N} - 1\) unary bits that control \({2^N} - 1\) unary current cells, as shown in Fig. 4.3.

Fig. 4.3 Unary CS DAC controlled by unary digital data

Let the digital unary bits be \({u_k},\;k = 1,2,\ldots,{2^N} - 1\). To convert the input W[nT], w digital unary bits are set to 1 and \({2^N} - 1 - w\) digital unary bits are set to 0. The thermometer unary coding is defined as \({u_i} = 1,\;i = 1,2,\ldots,w\) and \({u_j} = 0,\;j = w + 1,\ldots,{2^N} - 1\). However, the thermometer coding is only one possible combination of the unary bits that represents W[nT]. The number of such combinations is given by:

$$ {}_{\left( {{2^N} - 1} \right)}{C_w} = \left( {\begin{array}{*{20}{c}} {{2^N} - 1}\\ w \end{array}} \right) = \frac{{\left( {{2^N} - 1} \right)!}}{{w!\left( {{2^N} - 1 - w} \right)!}} $$
(4.5)

In the digital domain it makes no difference which combination is chosen for W[nT], since there are no mismatch errors, whilst in the analog domain there are \({}_{\left( {{2^N} - 1} \right)}{C_w}\) different representations of W[nT], due to the presence of the mismatch errors \({I_{{e_k}}}\). The distribution of the DAC outputs over all \({}_{\left( {{2^N} - 1} \right)}{C_w}\) combinations has its expected value equal to the sum of the nominal currents, i.e. \(w \cdot \overline {{I_u}} \), and its deviation is defined by the statistical properties of the errors \({I_{{e_k}}}\).
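The redundancy counted by Eq. 4.5 can be illustrated numerically. In this sketch the 5% mismatch spread is a hypothetical value; the point is that digitally equivalent combinations produce different analog outputs, from which a mapping algorithm can pick the one closest to the nominal value.

```python
from itertools import combinations
import random

random.seed(0)
N = 3
n_cells = 2 ** N - 1                       # 7 unary cells
i_nom = 1.0                                # nominal unary current
cells = [i_nom + random.gauss(0.0, 0.05) for _ in range(n_cells)]

w = 3                                      # input code: w cells switched on
combos = list(combinations(range(n_cells), w))   # Eq. 4.5: C(7, 3) = 35

# Digitally identical, analogically distinct: each combination sums a
# different set of mismatch errors; mapping exploits this spread.
outputs = [sum(cells[i] for i in c) for c in combos]
best = min(outputs, key=lambda x: abs(x - w * i_nom))
```

The spread of `outputs` around \(w \cdot \overline {{I_u}} \) is exactly the redundancy that the mapping algorithms trade coding efficiency for.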

The collection of chosen combinations for all input codes defines the chosen map M. Within the set of all possible maps, some specific sub-sets can be pointed out.

The sub-set of thermometer maps includes those maps that feature strong correlation between the chosen combinations for all input codes, as in the thermometer coding. That is to say, the unary code combinations for adjacent codes differ by only one added current cell, like in the conventional thermometer coding. Therefore, these maps share the advantages of the thermometer coding: low glitch energy and low power consumption for incremental code transitions. Note that for non-thermometer maps, the chosen combinations for two adjacent codes may differ significantly in the selected current sources, and hence a particular code increment can trigger many current switching events, which generate glitch energy and consume power.

Various sub-sets can be defined to include maps that can be implemented using common hardware resources. Since the general mapping approach requires large hardware resources at high resolutions, such hardware realizations of the mapping block allow a more efficient implementation of the mapping method, e.g. programmable decoders, as proposed in Chap. 11, and matrices based on switches, such as the butterfly switch matrix [38].

In practice, the choice of the combination C is realized by a \({2^N} - 1\) to \({2^N} - 1\) decoder block (the MAP) that is placed after the binary-to-thermometer decoder, as shown in Fig. 4.4.

Fig. 4.4 A mapping correction method for a unary DAC

A full MAP decoder block can realize all possible permutations of the \({2^N} - 1\) codes, i.e. \(\left( {{2^N} - 1} \right)!\) different maps \({M_j},\;j \in \left[ {1,2,\ldots,\left( {{2^N} - 1} \right)!} \right]\). An illustrative example of the mapping process is shown in Fig. 4.5. The first matrix shows the nominally identical DAC current cells. A map Mj assigns a switching sequence order to each of the current cells. In the case of thermometer coding, the switching sequence order coincides with the order of the DAC current cells. This map is considered the first map M1 (shown at top-right). However, many other switching sequences also exist, e.g. as shown in Fig. 4.5 (bottom two) for the case of the thermometer maps sub-set. The mapping method knows each \({I_{{e_k}}}\) and chooses the map that features the lowest INLmax.

Fig. 4.5 Mapping of the DAC unary current cells to a switching sequence
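A brute-force version of this map selection can be sketched for a toy 3-cell DAC with hypothetical measured currents. At realistic resolutions the \(\left( {{2^N} - 1} \right)!\) search space makes exhaustive search infeasible, so practical algorithms rely on heuristics or restricted map sub-sets; the sketch only shows the selection criterion.

```python
from itertools import permutations

def inl_max(order, cells, i_nom):
    """INL_max of a unary DAC under switching order `order`: the worst
    cumulative sum of cell errors along the switching sequence."""
    worst, acc = 0.0, 0.0
    for k in order:
        acc += cells[k] - i_nom   # add this cell's mismatch error
        worst = max(worst, abs(acc))
    return worst

# Toy 3-cell DAC with hypothetical measured currents:
i_nom = 1.0
cells = [1.03, 0.96, 1.02]
best = min(permutations(range(3)), key=lambda p: inl_max(p, cells, i_nom))
```

Here the errors +0.03, −0.04, +0.02 partly cancel when the switching order interleaves positive and negative errors, which is exactly the mutual error compensation the mapping method exploits.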

Note that the mapping method is discussed here as using a static map, without loss of generality. However, a combination of the mapping method and the DEM method (see Chap. 3.5) can potentially produce better results than either method separately. The DEM method can be applied over a set of selected maps.

The main limitation of the low-level unary mapping method is the coupling between the correction and the DAC architecture. The method can be applied only to the DAC unary current cells and hence, in practice, only to the unary MSB part of segmented DAC architectures. The effectiveness of the method depends on how many DAC input bits are decoded in a unary way. However, the complexity of the method and its hardware requirements grow exponentially with each extra binary bit decoded in a unary way.

The example above considers the low-level maps, where the intrinsic coding redundancy is applied at the level of the current sources, e.g. unary current sources. Note that other low-level mapping techniques also exist, e.g. redundancy in sub-binary radix coded DACs [11]. There are also high-level mapping techniques, where the redundancy is at the DAC functional level. If there are multiple parallel sub-DACs available to convert the digital word W, there is intrinsic redundancy in how W[nT] can be split between the different sub-DACs, e.g. \({W_{subDAC1}}[nT] + {W_{subDAC2}}[nT] = W[nT]\).

3.2 Low-Level Maps for Sub-binary Radix DACs

The following text briefly presents the low-level mapping correction techniques for sub-binary radix DACs. The key factors to achieve sufficient intrinsic redundancy for the mapping technique are:

  1. The DAC currents are designed with scaling coefficients smaller than 2;

  2. The number of DAC currents exceeds the number of DAC input bits.

The method measures the errors and selects such a map for the DAC input bits that the DAC output current is sufficiently linear. Since the method can tolerate large amounts of mismatch error, the current source transistors can be designed small. The method requires far fewer additional hardware resources than equivalent methods for the unary DAC. Hence, very small DACs with high linearity can be designed.

The digital coding can be represented as a continuum of the scaling coefficients of the individual bit weights of the digital word, as shown in Fig. 4.6. The binary coding, with a scaling coefficient of 2, is the most efficient coding to process bit information, since a digital bit itself has two states, 0 or 1. However, the binary coding features no redundancy and hence the low-level mapping techniques cannot be used. By contrast, the unary coding, with a scaling coefficient of 1, features large amounts of redundancy and hence low-level mapping techniques are possible. However, the unary coding is highly inefficient in terms of data processing and hence the mapping techniques require a lot of resources to implement. A trade-off between these two choices is found in the sub-binary radix scaling. The transfer characteristic of a sub-binary radix DAC features redundancy which can be combined with a mapping technique to achieve a targeted DAC linearity, see [11, 39–41].

Fig. 4.6 The digital coding continuum of the bit weight scaling

The main conceptual idea of the sub-binary radix DACs is that the redundancy in the DAC transfer characteristic guarantees that the current mismatch errors do not cause \({DNL_k} > 0.5\,LSB\) for any code k. All errors of \({DNL_k} < 0.5\,LSB\) (including \({DNL_k} < -0.5\,LSB\)) can be corrected by the mapping technique. Figure 4.7 shows the basic conceptual diagram of the sub-binary radix DAC mapping technique.

Fig. 4.7 A conceptual diagram of a sub-binary DAC with pre-correction, e.g. [11, 39, 40]
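The mapping freedom created by a radix below 2 can be sketched with a simple greedy, MSB-first selection over hypothetical radix-1.8 weights. This is only a conceptual model: a real algorithm works on the measured currents rather than the nominal weights, and the radix value here is an assumption for illustration.

```python
def map_code(target, weights):
    """Greedy MSB-first selection over sub-binary radix weights: with a
    radix below 2 the weights overlap, so skipping a too-large weight
    can still be compensated by the weights below it."""
    bits, total = [], 0.0
    for w in sorted(weights, reverse=True):
        if total + w <= target + 1e-9:
            bits.append(1)
            total += w
        else:
            bits.append(0)
    return bits, total

# Hypothetical radix-1.8 weight set (mismatch would perturb these):
weights = [1.8 ** k for k in range(6)]     # 1, 1.8, 3.24, 5.83, ...
bits, total = map_code(10.0, weights)      # approximate code "10"
```

Because adjacent weights overlap, the selected subset lands within roughly one LSB weight of the target, which is the redundancy the measured-error mapping then refines into a linear transfer.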

All N + R current sources of the sub-binary radix DAC are measured. An algorithm selects the appropriate map that delivers linear N bit DAC performance. Note that the size of the map is N:(N + R). The work of [11] reports an implementation with N = 12 and R = 4. Thus, the map is much smaller than the maps required by the unary current mapping techniques, which are of the order of \(N:{2^N} - 1\).

The common limitation of the mapping techniques is that usually only a single map can be used at a time. Therefore, the map needs to be a compromise between the static, low-speed, and high-speed DAC performance. This trade-off is particularly sharp for the sub-binary radix DAC mapping techniques. The dynamic responses of the sub-binary radix DAC current cells can hardly be matched and hence significant timing errors are generated. Matching the current cell responses is difficult, since no unit elements for the sub-binary radix cells can be used in the layout of the data propagation chain, i.e. latches-data drivers-current switches.

The sub-binary radix DAC mapping techniques measure the DAC errors in the analog domain. The actual correction is in the digital domain, but the correction effect is in the analog domain.

The size of the pre-processor digital circuits is highly reduced in comparison with the unary current cells mapping approaches. However, the matching of the current cell dynamic responses is compromised.

4 Digital Pre-distortion

The digital pre-correction methods measure the errors in the analog domain, process the errors in the digital domain and compensate the errors in the digital data domain with an analog effect.

The DAC pre-distortion correction methods measure the linearity of the DAC, calculate the approximate deviation from the desired linear performance and then add an inverse-linearity function to the DAC digital input. The process may be iterative until the DAC output becomes linear, e.g. [42, 43]. Figure 4.8 shows a block diagram of the DAC pre-distortion method.

Fig. 4.8 A block diagram of the DAC pre-distortion method
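A minimal sketch of the iterative pre-distortion loop is shown below. The mildly compressive transfer function is a hypothetical stand-in for a measured DAC characteristic; in a real system `dac` would be the measured analog response, not a formula.

```python
def predistort(dac, codes, n_iter=6):
    """Iterative pre-distortion sketch: measure the static DAC output
    for each (corrected) code and nudge the digital input against the
    deviation from the ideal straight line until it converges."""
    corr = {c: float(c) for c in codes}
    for _ in range(n_iter):
        for c in codes:
            err = dac(corr[c]) - c      # measured deviation from ideal
            corr[c] -= err              # inverse-linearity update
    return corr

# Hypothetical mildly compressive DAC transfer: out = x - 0.001 x^2
dac = lambda x: x - 0.001 * x * x
codes = range(0, 100, 10)
corr = predistort(dac, codes)
```

After a few iterations the pre-distorted inputs make the measured output track the ideal line closely; a transfer with a large positive DNL “gap”, by contrast, has output levels that no input adjustment can reach, which is the limitation noted in [42].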

A popular approach in the literature is to include in the pre-distortion correction the non-linearity of the whole transmitter front-end, such as shown in Fig. 4.9, e.g. [44, 45].

Fig. 4.9 Pre-distortion correction for the whole transmitter front-end

The method has difficulties with the ‘gaps’ in the DAC transfer characteristic, according to [42]. These are large positive DNL errors that cause distortion for which it is difficult to find a compensating pre-distortion function. In addition, power supply noise and clock jitter problems cannot be compensated by the pre-distortion methods, according to [43].

The DAC pre-distortion method measures the DAC linearity in the analog domain and corrects for it in the digital domain. This method shares many similarities with the mapping correction methods. However, it does not rely on DAC intrinsic redundancy like the DAC mapping correction methods, but rather on an extrinsic redundancy that adds counter-distortion in the system, aiming at cancellation of the overall distortion.

5 Discussion

This chapter demonstrates different DAC correction methods using exact error information. Further in this book, Chap. 7 introduces a classification of the DAC correction methods to systematize them, to identify missing methods, and to derive clues for filling these gaps.

Table 4.1 gives a short summary of the discussed DAC correction methods, ordering them according to four discussed classes: using intrinsic or extrinsic redundancy, and implemented at low or high level. Three notations are used: “Yes” when the method exists within a class; “No” when the method does not exist within a class; and “N/A” when the method cannot exist within a class (e.g. if it would contradict its definition). Note that all boxes indicated with “No” may eventually be filled in by methods that do not yet exist in the open literature.

Table 4.1 Summary of the discussed DAC correction methods

6 Conclusions

An overview of DAC correction methods that use self-measurement has been presented. The overview is based on recently published state-of-the-art works. The main aspects of each correction method were discussed. The targeted error mechanisms of the two methods, self-calibration of currents and unary code mapping, are similar. However, each method has its own advantages and disadvantages. Self-calibration is capable of correcting both random and systematic mismatch errors, but interacts with the analog DAC signal generation. The mapping method avoids interacting with the analog DAC signal generation, but is less powerful in correcting systematic mismatch errors. Therefore, it is also argued that applying a DAC correction method is a matter of trade-offs, since each correction method brings its own particular advantages and disadvantages. Further in the book, Chap. 7 proposes a classification of DAC correction methods.