Frequency-Hopping Systems

  • Don Torrieri


Frequency hopping is the periodic changing of the carrier frequency of a transmitted signal. This time-varying characteristic potentially endows a communication system with great strength against interference. Whereas a direct-sequence system relies on spectral spreading, spectral despreading, and filtering to suppress interference, the basic mechanism of interference suppression in a frequency-hopping system is avoidance. When avoidance fails, it is only temporary because of the periodic changing of the carrier frequency. The impact of the interference is further mitigated by the pervasive use of channel codes, which are more essential for frequency-hopping systems than for direct-sequence systems. The basic concepts, spectral and performance aspects, and coding and modulation issues of frequency-hopping systems are presented in this chapter. The effects of partial-band interference and multitone jamming are examined, and the most important issues in the design of frequency synthesizers are described.

3.1 Concepts and Characteristics

The sequence of carrier frequencies transmitted by a frequency-hopping system is called the frequency-hopping pattern, and the set of M possible carrier frequencies {f1, f2, …, f M } is the hopset. The rate at which the carrier frequency changes is the hop rate, and frequency hopping occurs over a frequency band called the hopping band, which includes M frequency channels. Each frequency channel is defined as a spectral region that includes a single carrier frequency of the hopset as its center frequency and has a bandwidth B large enough to include most of the power in a signal pulse with a specific carrier frequency. Figure 3.1 illustrates the frequency channels associated with a particular frequency-hopping pattern. The time interval between hops is called the hop interval. The length of the hop interval is the hop duration and is denoted as T h . The hopping band has hopping bandwidth W ≥ MB.
Fig. 3.1

Frequency-hopping pattern
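As a numerical sketch of this channelization (the function name and parameter values are illustrative, and contiguous equally spaced frequency channels are assumed):

```python
def hopset(f_low, W, B):
    """Center frequencies of the M = floor(W/B) contiguous frequency
    channels of bandwidth B that fit in a hopping band [f_low, f_low + W]."""
    M = int(W // B)
    return [f_low + (i + 0.5) * B for i in range(M)]

# Example (illustrative numbers): a 100-MHz hopping band with 25-kHz channels.
freqs = hopset(400e6, 100e6, 25e3)
print(len(freqs))            # M = 4000 frequency channels
print(freqs[0], freqs[-1])   # 400.0125 MHz ... 499.9875 MHz
```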

Figure 3.2 (a) depicts the general form of a frequency-hopping transmitter. Pattern-control bits, which are the output bits of a pattern generator, change at the hop rate so that a frequency synthesizer produces a frequency-hopping pattern. The data-modulated signal is mixed with the frequency-hopping pattern to produce the frequency-hopping signal. If the data modulation is some form of angle modulation ϕ(t), then the received signal for the ith hop is
$$ \begin{aligned} s(t)=\sqrt{2\mathbb{E}_{s}/T_{s}}\cos\left[ 2\pi f_{ci}t+\phi(t)+\phi _{i}\right] ,\;\;\;(i-1)T_{h}\leq t\leq iT_{h} {} \end{aligned} $$
where \(\mathbb {E}_{s}\) is the energy per symbol, T s is the symbol duration, f ci is the carrier frequency for the ith hop, and ϕ i is a random phase angle for the ith hop.
Fig. 3.2

General form of frequency-hopping system: (a) transmitter and (b) receiver

The frequency-hopping pattern produced by the receiver synthesizer of Figure 3.2 (b) is synchronized with the pattern produced by the transmitter, but is offset by a fixed intermediate frequency, which may be zero. The mixing operation removes the frequency-hopping pattern from the received signal and is hence called dehopping. The mixer output is applied to a bandpass filter that excludes double-frequency components and power that originated outside the appropriate frequency channel and produces the data-modulated dehopped signal, which has the form of (3.1) with f ci for all hops replaced by the common intermediate frequency.

Although it provides no advantage against white noise, frequency hopping enables signals to hop out of frequency channels with interference or slow frequency-selective fading. To fully escape from the effects of narrowband interference signals, disjoint frequency channels are necessary. The disjoint channels may be contiguous or have unused spectral regions between them. Some spectral regions with steady interference or a susceptibility to fading may be omitted from the hopset, a process called spectral notching.

To ensure that a frequency-hopping pattern is difficult to reproduce or dehop by an opponent, the pattern should be pseudorandom with a large period and an approximately uniform distribution over the frequency channels. The pattern generator is a nonlinear sequence generator that maps each generator state to the pattern-control bits that specify a frequency. The linear span or linear complexity of a nonlinear sequence is the number of stages of the shortest linear feedback shift register that can generate the sequence or the successive generator states. A large linear span inhibits the reconstruction of a frequency-hopping pattern from a short segment of it. More is required of frequency-hopping patterns to alleviate multiple-access interference when similar frequency-hopping systems are part of a network (Section  7.4 ).
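The linear span of a binary sequence can be computed with the Berlekamp-Massey algorithm. The following sketch (illustrative, GF(2) only) shows why a linear generator is insecure: an m-sequence of period 15 has a linear span of only 4, so a short intercepted segment suffices to reconstruct it.

```python
def berlekamp_massey(s):
    """Berlekamp-Massey over GF(2): returns the linear span (length of
    the shortest linear feedback shift register) of the binary sequence s."""
    n = len(s)
    C, B = [1] + [0] * n, [1] + [0] * n   # current and previous connection polynomials
    L, m = 0, 1
    for i in range(n):
        # discrepancy between s[i] and the current LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            T = C[:]
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            m += 1
    return L

# An m-sequence from the primitive polynomial x^4 + x + 1
# (recurrence s[i] = s[i-1] XOR s[i-4]) has linear span 4.
s = [1, 0, 0, 0]
for i in range(4, 30):
    s.append(s[i - 1] ^ s[i - 4])
print(berlekamp_massey(s))   # 4, despite the period of 15
```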

An architecture that enhances the transmission security of a frequency-hopping system is shown in Figure 3.3. The structure or algorithm of the pattern generator is determined by a set of control bits that comprise the spread-spectrum key and the time-of-day (TOD). The spread-spectrum key, which is the ultimate source of security, is a set of bits that are changed infrequently. The spread-spectrum key may be generated by combining secret bits with two sets of address bits that identify both the transmitting device and the receiving device at the other end of a communication link. The TOD is a set of bits that are derived from the stages of the TOD counter and change with every transition of the TOD clock. For example, the spread-spectrum key might change daily, whereas the TOD might change every second. The purpose of the TOD is to vary the pattern-generator algorithm without constantly changing the spread-spectrum key. In effect, the pattern-generator algorithm is controlled by a time-varying key. The hop-rate clock, which regulates when the changes occur in the pattern generator, operates at a much higher rate than the TOD clock. In a receiver, the hop-rate clock is produced by the synchronization system. In both a transmitter and a receiver, the TOD clock may be derived from the hop-rate clock. The TOD control bits initiate or reset the TOD counter to a desired state.
Fig. 3.3

Secure method of synthesizer control

A frequency-hopping pulse with a fixed carrier frequency occurs during a portion of the hop interval called the dwell interval. As illustrated in Figure 3.4, the dwell time is the duration of the dwell interval during which the channel symbols are transmitted and the peak amplitude occurs. The hop duration T h is equal to the sum of the dwell time T d and the switching time T sw . The switching time is equal to the dead time, which is the duration of the interval when no signal is present, plus the rise and fall times of a pulse. Even if the switching time is insignificant in the transmitted signal, it is more substantial in the dehopped signal in the receiver because of the imperfect synchronization of received and receiver-generated waveforms. The nonzero switching time, which may include an intentional guard time, decreases the transmitted symbol duration T s . If T so is the symbol duration in the absence of frequency hopping, then T s  = T so (T d /T h ). The reduction in symbol duration expands the transmitted spectrum and thereby reduces the number of frequency channels within a fixed hopping band. Since the receiver filtering ensures that rise and fall times of pulses have durations on the order of a symbol duration, T sw  > T s in practical systems.
Fig. 3.4

Time durations and amplitude changes of a frequency-hopping pulse
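A short numerical sketch of this symbol-duration reduction (the parameter values are illustrative):

```python
def hopped_symbol_duration(T_so, T_h, T_sw):
    """T_s = T_so * (T_d / T_h), where the dwell time T_d = T_h - T_sw
    and T_so is the symbol duration in the absence of frequency hopping."""
    return T_so * (T_h - T_sw) / T_h

# Example (illustrative numbers): 10-ms hops, 1-ms switching time,
# 100-us symbols without hopping.
T_s = hopped_symbol_duration(100e-6, 10e-3, 1e-3)
print(T_s)             # about 9e-05 s: symbols shortened to 90 us
print(100e-6 / T_s)    # ~1.11: spectral expansion factor T_h / T_d
```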

Frequency hopping may be classified as fast or slow. Fast frequency hopping occurs if there is more than one hop for each information symbol. Slow frequency hopping ensues if one or more information symbols are transmitted in the time interval between frequency hops. Although these definitions do not refer to the absolute hop rate, fast frequency hopping is an option only if a hop rate that exceeds the information-symbol rate can be implemented. Slow frequency hopping is preferable because the transmitted waveform is much more spectrally compact (see Section 3.4) and the overhead cost of the switching time is greatly reduced.

To obtain the full advantage of block or convolutional channel codes in a slow frequency-hopping system, the code symbols should be interleaved in such a way that the symbols of a block codeword or the symbols within a few free distances in a convolutional code fade independently. In frequency-hopping systems operating over a frequency-selective fading channel, the realization of this independence requires certain constraints among the system parameter values (Section  6.7 ).

Frequency-selective fading and Doppler shifts make it difficult to maintain phase coherence from hop to hop between frequency synthesizers in the transmitter and the receiver. Furthermore, the time-varying delay between the frequency changes of the received signal and those of the synthesizer output in the receiver causes the phase shift in the dehopped signal to differ for each hop interval. Thus, frequency-hopping systems use noncoherent or differentially coherent demodulators unless a pilot signal is available, the hop duration is very long, or elaborate iterative phase estimation (perhaps as part of turbo decoding) is used.

In military applications, the ability of frequency-hopping systems to avoid interference is potentially neutralized by a repeater jammer (also known as a follower jammer), which is a device that intercepts a signal, processes it, and then transmits jamming at the same center frequency. To be effective against a frequency-hopping system, the jamming energy must reach the victim receiver before it hops to a new carrier frequency. Thus, the hop rate is the critical factor in protecting a system against a repeater jammer. Required hop rates and the limitations of repeater jamming are analyzed in [98].

3.2 Frequency Hopping with Orthogonal CPFSK

In a slow frequency-hopping system with frequency-shift keying as its data modulation, the implementation of phase continuity from symbol to symbol within a hop dwell interval prevents excessive spectral splatter outside a frequency channel (Section 3.3). Thus, the data modulation is called continuous-phase frequency-shift keying (CPFSK). In a frequency-hopping system with CPFSK (FH/CPFSK system), one of a set S q of q CPFSK frequencies is selected to offset the carrier frequency for each transmitted symbol within each hop dwell interval. The general transmitter of an FH/CPFSK system is illustrated in Figure 3.5 (a). The pattern-generator output bits, which define a carrier frequency, and the digital symbols, which define a frequency in S q , are combined to determine the frequency generated by the synthesizer during a symbol interval. In the standard design, the q subchannels are contiguous, and each set of q subchannels constitutes a frequency channel within the hopping band.
Fig. 3.5

Orthogonal FH/CPFSK (a) transmitter and (b) noncoherent receiver

An orthogonal FH/CPFSK system adds frequency hopping to orthogonal CPFSK signals (Section  1.2 ). Figure 3.5 (b) depicts the main elements of a noncoherent orthogonal FH/CPFSK receiver. The frequency-hopping carrier frequency is removed, and the resulting orthogonal CPFSK signal is applied to the demodulator and decoder. Each matched filter in the bank of matched filters corresponds to a CPFSK subchannel.

Illustrative System

To illustrate some basic issues of frequency-hopping communications, we consider an orthogonal FH/CPFSK system that uses a repetition code as its channel code, suboptimal metrics, and the receiver of Figure 3.5 (b). The system has significant deficiencies, such as the very weak repetition code, but is amenable to an approximate mathematical analysis that provides insight into the design issues of a frequency-hopping system. Each information symbol is transmitted as a codeword of n identical code symbols. The interference is modeled as wideband Gaussian noise uniformly distributed over part of the hopping band. Along with perfect dehopping and negligible switching times, either slow frequency hopping with ideal interleaving over many hop intervals or fast frequency hopping is assumed. Both ensure the independence of code-symbol errors.

The difficulty of implementing the maximum-likelihood metric ( 1.80 ) leads to the consideration of suboptimal metrics. The square-law metric defined by ( 1.84 ), which performs linear square-law combining, has the advantage that no channel-state information or side information about the reliability of the code symbols is required for its computation. However, this metric is unsuitable against strong partial-band interference because a single large symbol metric can corrupt the codeword metric.

A dimensionless metric that is computationally simpler than ( 1.80 ) but retains the division by N0i is the nonlinear square-law metric:
$$ \begin{aligned} U(l)=\sum_{i=1}^{n}\frac{R_{li}^{2}}{N_{0i}}\;,\;\;\;l=1,2,\ldots,q {} \end{aligned} $$
where R li is the sample value of the envelope-detector output that is associated with code symbol i of candidate information-symbol l, and N0i/2 is the two-sided power spectral density (PSD) of the interference and noise over all the CPFSK subchannels during code symbol i. The advantage of this metric is that the impact of a large R li is alleviated by a large N0i, which must be known. The subsequent analysis is for the nonlinear square-law metric.
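As a numerical sketch of the metric computation (the sample values are illustrative), note how the weighting by N0i rescues a decision that the plain square-law metric gets wrong:

```python
import numpy as np

def nonlinear_metric(R, N0):
    """Nonlinear square-law metric (3.2): U(l) = sum_i R_{li}^2 / N0_i.
    R[l, i] is the envelope-detector sample for candidate l, code symbol i;
    N0[i] is the interference-plus-noise PSD level during code symbol i."""
    return (np.asarray(R, float)**2 / np.asarray(N0, float)).sum(axis=1)

# Illustrative samples: candidate 0 was transmitted; one repetition of
# candidate 1's subchannel is hit by strong partial-band interference.
R  = [[2.0, 0.5, 1.8],
      [0.4, 3.0, 0.3]]
N0 = [1.0, 10.0, 1.0]          # the second code symbol saw interference
print(np.argmax(nonlinear_metric(R, N0)))          # 0: correct decision
print(np.argmax((np.asarray(R)**2).sum(axis=1)))   # 1: plain square law fails
```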
The union bound ( 1.26 ) implies that the information-symbol error probability satisfies
$$ \begin{aligned} P_{is}\leq(q-1)P_{2} {} \end{aligned} $$
where P2 is the probability of an error in comparing the metric associated with the transmitted information symbol with the metric associated with an alternative one. We assume that there are enough frequency channels that n distinct carrier frequencies are used for the n code symbols. Since the CPFSK tones are orthogonal, the code-symbol metrics \(\{R_{li}^{2}\}\) are independent and identically distributed for all values of l and i (Section  1.2 ). Therefore, the Chernoff bound given by ( 1.145 ) and ( 1.144 ) with α = 1/2 yields
$$ \begin{aligned} P_{2}\leq\frac{1}{2}Z^{n} {} \end{aligned} $$
$$ \begin{aligned} Z=\min_{0<s<s_{1}}E\left[ \exp\left\{ \frac{s}{N_{1}}\left( R_{2}^{2} -R_{1}^{2}\right) \right\} \right] {} \end{aligned} $$
where R1 is the sampled output of an envelope detector when the desired signal is present at the input of the associated matched filter, R2 is the output from a different matched filter when the desired signal is absent at its input, and N1/2 is the two-sided PSD of the interference and noise that is assumed to cover all the CPFSK subchannels during a code symbol and may vary from symbol to symbol. Since the CPFSK tones are orthogonal, and hence q-ary symmetric, ( 1.95 ), (3.3), and (3.4) give an upper bound on the information-bit error probability:
$$ \begin{aligned} P_{b}\leq\frac{q}{4}Z^{n}. {} \end{aligned} $$
The received signal has the form
$$ \begin{aligned} r(t)=\sqrt{2\mathbb{E}_{s}/T_{s}}\cos\left[ 2\pi\left(f_{c}+f_{1}\right) t+\theta\right] +n_{1}(t),\ 0\leq t\leq T_{s} \end{aligned}$$
where \(\mathbb {E}_{s}\) is the energy per code symbol, T s is a code-symbol duration, and n1(t) is a zero-mean Gaussian interference-and-noise process. Assuming that n1(t) is white,
$$ \begin{aligned} E[n_{1}(t)n_{1}(t+\tau)]=\frac{N_{1}}{2}\delta(\tau). {} \end{aligned} $$
As in Section  1.5 , \(R_{l}^{2}\) may be expressed as
$$ \begin{aligned} R_{l}^{2}=R_{lc}^{2}+R_{ls}^{2}\;,\;\;\;l=1,2. {} \end{aligned} $$
Using the orthogonality of the symbol waveforms and (3.7) and assuming that f c  + f l  >> 1/T s in ( 1.99 ) and ( 1.100 ), we verify the independence of the Gaussian random variables R1c, R1s, R2c, and R2s. We obtain the moments
$$ \begin{aligned} E[R_{1c}] & =\sqrt{\mathbb{E}_{s}T_{s}/2}\cos\theta\;,\;\;\;E[R_{1s} ]=\sqrt{\mathbb{E}_{s}T_{s}/2}\sin\theta{} \end{aligned} $$
$$ \begin{aligned} E[R_{2c}] & =E[R_{2s}]=0\;{} \end{aligned} $$
$$ \begin{aligned} \mathrm{var}(R_{lc}) & =\mathrm{var}(R_{ls})=N_{1}T_{s}/4\;,\;\;\;l=1,2. {} \end{aligned} $$
For a Gaussian random variable X with mean m and variance σ2, a straightforward calculation yields
$$ \begin{aligned} E[\exp(aX^{2})]=\frac{1}{\sqrt{1-2a\sigma^{2}}}\exp\left( \frac{am^{2} }{1-2a\sigma^{2}}\right) \;,\;\;\;a<\frac{1}{2\sigma^{2}} {} \end{aligned} $$
which can be used to partially evaluate the expectation in (3.5). After conditioning on N1, (3.8) to (3.12) and the substitution of λ = sT s /2 give
$$ \begin{aligned} Z=\min_{0<\lambda<1}E\left[ \frac{1}{1-\lambda^{2}}\exp\left( -\frac {\lambda\mathbb{E}_{s}/N_{1}}{1+\lambda}\right) \right] {} \end{aligned} $$
where the remaining expectation is with respect to the statistics of N1.
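The moment formula (3.12) can be checked numerically; the following sketch (illustrative parameter values) compares the closed form with direct integration of the Gaussian density:

```python
import numpy as np

def mgf_of_square(a, m, sigma):
    """Closed form (3.12): E[exp(a X^2)] for X ~ N(m, sigma^2),
    valid for a < 1/(2 sigma^2)."""
    assert a < 1.0 / (2.0 * sigma**2)
    return np.exp(a * m**2 / (1.0 - 2.0 * a * sigma**2)) / np.sqrt(1.0 - 2.0 * a * sigma**2)

# Cross-check against a Riemann sum of exp(a x^2) times the Gaussian density.
a, m, sigma = 0.3, 1.2, 0.8
dx = 1e-4
x = np.arange(m - 10 * sigma, m + 10 * sigma, dx)
pdf = np.exp(-(x - m)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
numeric = (np.exp(a * x**2) * pdf).sum() * dx
print(mgf_of_square(a, m, sigma), numeric)   # the two values agree to several digits
```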
To simplify the analysis, we assume that the thermal noise is negligible. When a repetition symbol encounters no interference, N1 = 0; when it does, N1 = It0/μ, where μ is the fraction of the hopping band with interference, and It0 is the PSD that would exist if the interference power were uniformly spread over the entire hopping band. We define
$$ \begin{aligned} \gamma=\frac{\left( \log_{2}q\right) \mathbb{E}_{b}}{I_{t0}} {} \end{aligned} $$
where log2q is the number of bits per information symbol, and \(\mathbb {E}_{b}\) is the energy per information bit. Since μ is the probability that interference is encountered by a received code symbol and \(\mathbb {E}_{s}=\left ( \log _{2}q\right ) \mathbb {E}_{b}/n\), (3.13) becomes
$$ \begin{aligned} Z=\min_{0<\lambda<1}\left[ \frac{\mu}{1-\lambda^{2}}\exp\left( -\frac{\lambda\mu\gamma/n}{1+\lambda}\right) \right] . {} \end{aligned} $$
Using calculus, we find that
$$ \begin{aligned} Z=\frac{\mu}{1-\lambda_{1}^{2}}\exp\left( -\frac{\lambda_{1}\mu\gamma /n}{1+\lambda_{1}}\right) {} \end{aligned} $$
$$ \begin{aligned} \lambda_{1}=-\left( \frac{1}{2}+\frac{\mu\gamma}{4n}\right) +\left[ \left( \frac{1}{2}+\frac{\mu\gamma}{4n}\right) ^{2}+\frac{\mu\gamma}{2n}\right] ^{1/2}. {} \end{aligned} $$
Substituting (3.16) and (3.14) into (3.6), we obtain
$$ \begin{aligned} P_{b}\leq\frac{q}{4}\left( \frac{\mu}{1-\lambda_{1}^{2}}\right) ^{n} \exp\left( -\frac{\lambda_{1}\mu\gamma}{1+\lambda_{1}}\right) . {} \end{aligned} $$
Suppose that the interference is worst-case partial-band interference, which occurs when the interference power is distributed in the most damaging way. An approximate upper bound on P b is obtained by maximizing the right-hand side of (3.18) with respect to μ, where 0 ≤ μ ≤ 1. Calculus yields the maximizing value of μ:
$$ \begin{aligned} \mu_{0}=\min\left( \frac{3n}{\gamma},1\right) . {} \end{aligned} $$
Since μ0 is obtained by maximizing a bound rather than an equality, it is not necessarily equal to the actual worst-case μ. Setting μ = μ0 in (3.18), we obtain the approximate upper bound on P b for worst-case partial-band interference:
$$ P_{b}\leq\begin{cases}\frac{q}{4}\left( \frac{4n}{e\gamma}\right) ^{n}, & n\leq\gamma/3\\ \frac{q}{4}(1-\lambda_{0}^{2})^{-n}\exp\left( -\frac{\lambda_{0}\gamma }{1+\lambda_{0}}\right) , & n>\gamma/3 \end{cases}$$
$$ \begin{aligned} \lambda_{0}=-\left( \frac{1}{2}+\frac{\gamma}{4n}\right) +\left[ \left( \frac{1}{2}+\frac{\gamma}{4n}\right) ^{2}+\frac{\gamma}{2n}\right] ^{1/2}. \end{aligned} $$
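The maximizing fraction (3.19) can be checked by a direct numerical search over μ; this sketch (illustrative parameter values) evaluates the bound (3.18) through (3.16) and (3.17):

```python
import numpy as np

def chernoff_Z(mu, gamma, n):
    """Z of (3.16)-(3.17): Chernoff factor when a fraction mu of the
    hopping band contains the interference."""
    x = mu * gamma / (4.0 * n)                       # mu*gamma/(4n)
    lam = -(0.5 + x) + np.sqrt((0.5 + x)**2 + 2.0 * x)
    return (mu / (1.0 - lam**2)) * np.exp(-lam * mu * gamma / (n * (1.0 + lam)))

# Numerically maximize the bound (3.18) over mu and compare with (3.19).
gamma, n = 40.0, 4                  # illustrative values with n < gamma/3
mus = np.linspace(1e-4, 1.0, 20000)
mu_star = mus[np.argmax(chernoff_Z(mus, gamma, n)**n)]
print(mu_star, min(3.0 * n / gamma, 1.0))   # both near 0.3
```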

Further analysis requires some facts about convex functions. A real-valued function f over an interval is a convex function if \(f\left [ \alpha x+\left ( 1-\alpha \right ) y\right ] \leq \alpha f\left ( x\right ) +\left ( 1-\alpha \right ) f\left ( y\right ) \) for all x and y in the interval and \(\alpha \in \left [ 0,1\right ]\). A sufficient condition for f to be a convex function is for it to have a nonnegative second derivative (see Section  7.3 ). A local minimum of a convex function over an interval is a global minimum, and similarly a local maximum is a global maximum [19].

If the value of γ is known, then the number of repetitions can be chosen to minimize the upper bound on P b for worst-case partial-band interference. We treat n as a continuous variable such that n ≥ 1 and let n0 denote the minimizing value of n. A calculation indicates that the derivative with respect to n of the second line on the right-hand side of (3.20) is positive. Therefore, if γ/3 < 1 so that the second line is applicable for n ≥ 1, then n0 = 1. If γ/3 > 1, the upper bound on P b is minimized by minimizing the strictly convex function \(f\left ( n\right ) =\left ( q/4\right ) \left ( 4n/e\gamma \right ) ^{n}\) over the compact set \(\left [ 1,\gamma /3\right ]\). Since \(f\left ( n\right ) \) is strictly convex, a stationary point of \(f\left ( n\right ) \) represents a global minimum. Further calculation yields
$$ \begin{aligned} n_{0}=\max\left( \frac{\gamma}{4}\;,\;1\right) . {} \end{aligned} $$
Since n must be an integer, the optimal number of repetitions against worst-case partial-band interference is approximately ⌊n0⌋, where ⌊x⌋ denotes the largest integer less than or equal to x. Equation (3.22) indicates that the optimal number of repetitions increases with γ.
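A numerical sketch (illustrative values) of the benefit of choosing n0 repetitions rather than n = 1:

```python
import math

def worst_case_bound(q, gamma, n):
    """First line of (3.20): (q/4)(4n/(e*gamma))**n, valid for n <= gamma/3."""
    return (q / 4.0) * (4.0 * n / (math.e * gamma))**n

q, gamma = 2, 40.0
n0 = max(int(gamma / 4.0), 1)            # integer part of (3.22)
print(n0)                                # 10
print(worst_case_bound(q, gamma, 1))     # ~1.8e-2: no repetition coding
print(worst_case_bound(q, gamma, n0))    # ~2.3e-5 = (q/4)exp(-gamma/4)
```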
The approximate upper bound on P b for worst-case partial-band interference and the optimal number of repetitions is obtained by substituting the preceding results for n0 and (3.14) into (3.20), which gives
$$ P_{b}\leq\begin{cases}\frac{q}{4}(1-\lambda_{0}^{2})^{-1}\exp\left( -\frac{\lambda_{0}\gamma}{1+\lambda_{0}}\right) , & \frac{\mathbb{E}_{b}}{I_{t0}}<\frac{3}{\log_{2}q}\\ \frac{q}{e\left( \log_{2}q\right) }\left( \frac{\mathbb{E}_{b}}{I_{t0}}\right)^{-1}, & \frac{3}{\log_{2}q}\leq\frac{\mathbb{E}_{b}}{I_{t0}}<\frac{4}{\log_{2}q}\\ \frac{q}{4}\exp\left( -\frac{\left( \log_{2}q\right) \mathbb{E}_{b}/I_{t0}}{4}\right) , & \frac{\mathbb{E}_{b}}{I_{t0}}\geq\frac{4}{\log_{2}q} \end{cases}$$
which indicates that the upper bound on P b decreases exponentially with \(\mathbb {E}_{b}/I_{t0}\) only if the optimal number of repetitions is chosen and γ ≥ 4, that is, \(\mathbb {E}_{b}/I_{t0}\) ≥ 4/log2q. Figure 3.6 illustrates the upper bound on P b as a function of \(\mathbb {E}_{b}/I_{t0}\) for q = 2, 4, and 8 and both the optimal number of repetitions and n = 1. The figure indicates that the nonlinear square-law metric and optimal repetitions sharply limit the performance degradation caused by worst-case partial-band interference relative to full-band interference with the same power. For example, setting N0 → It0 in ( 1.94 ) and q = 2 in (3.23) and then comparing the equations, we find that this degradation is approximately 3 dB for binary CPFSK.
Fig. 3.6

Upper bound on the bit error probability of the frequency-hopping system in the presence of worst-case partial-band interference for q = 2, 4, and 8 and both the optimal number of repetitions and n = 1

Substituting (3.22) into (3.19), we obtain
$$ \begin{aligned} \mu_{0}=\left\{ \begin{matrix} {1,} & \frac{\mathbb{E}_{b}}{I_{t0}}<\frac{3}{\log_{2}q}\\ \frac{3}{\log_{2}q}\left( \frac{\mathbb{E}_{b}}{I_{t0}}\right) ^{-1}, & \frac{3}{\log_{2}q}\leq\frac{\mathbb{E}_{b}}{I_{t0}}<\frac{4}{\log_{2}q}\\ {\frac{3}{4},} & \frac{\mathbb{E}_{b}}{I_{t0}}\geq\frac{4}{\log_{2}q} \end{matrix} \right. {} \end{aligned} $$
which indicates that if the optimal number of repetitions is used, then the worst-case interference covers three-fourths or more of the hopping band.

For frequency hopping with binary CPFSK and the nonlinear square-law metric, a more precise derivation [44] that does not use the Chernoff bound confirms that (3.23) provides an approximate upper bound on the information-bit error rate caused by worst-case partial-band interference when the thermal noise is negligible, although the optimal number of repetitions is much smaller than is indicated by (3.22). Thus, the appropriate weighting of terms in the nonlinear square-law metric prevents the domination by a single corrupted symbol metric and limits the inherent noncoherent combining loss resulting from the fragmentation of the symbol energy.

The implementation of the nonlinear square-law metric requires the measurement of the interference power. An iterative method of power estimation based on the expectation-maximization algorithm (Section  9.1 ) provides approximate maximum-likelihood estimates, but a stronger code than the repetition code is required. Other strategies are to revert either to hard-decision decoding or to the square-law metric with clipping or soft-limiting of each envelope-detector output R li . Both strategies prevent a single corrupted sample from undermining the codeword detection. Although clipping is potentially more effective than hard-decision decoding, its implementation requires an accurate measurement of the signal power for properly setting the clipping level. Instead of the square-law metric with clipping, an easily implemented suboptimal metric has the form
$$ \begin{aligned} U(l)=\sum_{i=1}^{n}f\left( \frac{R_{li}^{2}}{\sum_{j=1}^{n}R_{lj}^{2} }\right) ,\;\;\;l=1,2,\ldots,q \end{aligned} $$
for some monotonically increasing function \(f\left ( \cdot \right )\). Although this metric does not require the estimation of each N0i, its performance remains sensitive to the variations in each N0i.

Multitone Jamming

Orthogonal CPFSK tones produce negligible responses in the incorrect subchannels if the frequency separation between tones is k/T s , where k is a nonzero integer, and T s denotes the symbol duration. To maximize the hopset size when the CPFSK subchannels are contiguous, k = 1 is selected. Consequently, the bandwidth of a frequency channel for slow frequency hopping with many symbols per dwell interval and negligible switching time is
$$ \begin{aligned} B\approx\frac{q}{T_{s}}=\frac{q}{T_{b}\log_{2}q} {} \end{aligned} $$
where T b is the duration of a bit, and the factor log2q accounts for the increase in symbol duration when a nonbinary modulation is used. If the hopping band has bandwidth W and uniformly separated contiguous frequency channels of bandwidth B are assigned, the hopset size is
$$ \begin{aligned} M=\left\lfloor \frac{W}{B}\right\rfloor {} \end{aligned} $$
and thus
$$ \begin{aligned} M=\left\lfloor \frac{WT_{b}\log_{2}q}{q}\right\rfloor . {} \end{aligned} $$
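A numerical sketch of (3.28) with illustrative parameter values:

```python
import math

def hopset_size(WTb, q):
    """Equation (3.28): M = floor(W*Tb*log2(q)/q), where W*Tb is the
    (dimensionless) product of hopping bandwidth and bit duration."""
    return math.floor(WTb * math.log2(q) / q)

# Illustrative numbers: W = 100 MHz and Tb = 62.5 us give W*Tb = 6250.
WTb = 6250.0
for q in (2, 4, 8):
    print(q, hopset_size(WTb, q))   # binary and quaternary tie; q = 8 gives fewer channels
```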

When the CPFSK subchannels are contiguous, it is not advantageous to a jammer to transmit the jamming in all the subchannels of a CPFSK set because only a single subchannel needs to be jammed to cause a symbol error. A sophisticated jammer with knowledge of the spectral locations of the CPFSK sets can cause increased system degradation by placing one jamming tone or narrowband jamming signal in every CPFSK set, which is called multitone jamming.

To assess the impact of this sophisticated multitone jamming on hard-decision decoding in the receiver of Figure 3.5 (b), we assume that thermal noise is absent and that each jamming tone coincides with one CPFSK tone in a frequency channel encompassing q orthogonal CPFSK tones. Whether a jamming tone coincides with the transmitted CPFSK tone or an incorrect one, there will be no symbol error if the desired-signal power S exceeds the jamming power. Thus, if I t is the total available jamming power, then the jammer can maximize symbol errors by placing tones with power levels slightly above S whenever possible in approximately J frequency channels such that
$$ \begin{aligned} J=\left\{ \begin{matrix} 1,\;\;\;\; & I_{t}<S\\ \left\lfloor {\frac{I_{t}}{S}}\right\rfloor , & S\leq I_{t}<MS\\ M,\;\;\; & \;MS\leq I_{t}. \end{matrix} \right. {} \end{aligned} $$
If a transmitted tone enters a jammed frequency channel and I t  ≥ S, then with probability (q − 1)/q, the jamming tone does not coincide with the transmitted tone and causes a symbol error after a hard decision is made. If the jamming tone does coincide with the correct tone, it does not cause a symbol error in the absence of thermal noise. Since J/M is the probability with which a frequency channel is jammed, and no error occurs if I t  < S, the symbol error probability is
$$ \begin{aligned} P_{s}=\left\{ \begin{matrix} 0, & \;\;\;\;\;I_{t}<S\\ {\frac{J}{M}}\left( {\frac{q-1}{q}}\right) , & \ \ \ \ \ I_{t}\geq S. \end{matrix} \right. {} \end{aligned} $$
Substitution of (3.28) and (3.29) into (3.30) and the approximation ⌊x⌋≈ x yield
$$ \begin{aligned} P_{s}=\left\{ \begin{matrix} {\frac{q-1}{q},} & \;\;\;\;{\frac{\mathbb{E}_{b}}{I_{t0}}}<{\frac{q}{\log _{2}q}}\\ \left( {\frac{q-1}{\log_{2}q}}\right) \left( {\frac{\mathbb{E}_{b}} {I_{t0}}}\right) ^{-1}, & \;\;\;\;{\frac{q}{\log_{2}q}}\leq{\frac {\mathbb{E}_{b}}{I_{t0}}}\leq WT_{b}\\ 0, & {\frac{\mathbb{E}_{b}}{I_{t0}}}>WT_{b} \end{matrix} \right. {} \end{aligned} $$
where \(\mathbb {E}_{b}=ST_{b}\) denotes the energy per bit, and It0 = I t /W denotes the PSD of the interference power that would exist if it were uniformly spread over the hopping band. This equation exhibits an inverse linear dependence of P s on \(\mathbb {E}_{b}/I_{t0}\), which indicates that the jamming has an impact that is qualitatively similar to that of Rayleigh fading and to what is observed in Figure 3.6 for worst-case partial-band interference and n = 1. The symbol error probability increases with q, which is the opposite of what is observed for worst-case partial-band interference when n is optimized. The reason for this increase in P s is the increase in the bandwidth of each frequency channel as q increases, which provides a larger target for multitone jamming. Thus, binary CPFSK is advantageous in relation to this sophisticated multitone jamming.
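The three regimes of (3.31) can be sketched as follows (illustrative values):

```python
import math

def Ps_multitone(q, eb_it0, WTb):
    """Symbol error probability (3.31) under one jamming tone per q-ary
    CPFSK set, with thermal noise neglected; eb_it0 is Eb/It0."""
    if eb_it0 < q / math.log2(q):
        return (q - 1) / q          # every channel jammed with power above S
    if eb_it0 <= WTb:
        return ((q - 1) / math.log2(q)) / eb_it0   # inverse-linear region
    return 0.0                       # jamming power insufficient: It < S

# Inverse-linear region: P_s falls only linearly with Eb/It0 and grows with q.
print(Ps_multitone(2, 100.0, 1e4))   # 0.01
print(Ps_multitone(8, 100.0, 1e4))   # ~0.0233, worse than binary
```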

3.3 Frequency Hopping with DPSK and CPM

In a network of frequency-hopping systems and a fixed hopping bandwidth, it is highly desirable to choose a spectrally compact data modulation so that the hopset is large and hence the number of collisions among the frequency-hopping signals is kept low. A spectrally compact modulation also helps to ensure that the bandwidth of a frequency channel is less than the coherence bandwidth (Section  6.3 ) so that equalization in the receiver is not necessary.

The limiting of spectral splatter is another desirable characteristic of data modulation for frequency hopping. Spectral splatter is the interference produced in frequency channels other than that being used by a frequency-hopping pulse. It is caused by the time-limited nature of transmitted pulses. The degree to which spectral splatter causes errors depends primarily on the separation F s between carrier frequencies and the percentage of the signal power included in a frequency channel. In practice, this percentage must be at least 90% to avoid signal distortion and is often more than 95%. Usually, only pulses in adjacent channels produce a significant amount of spectral splatter in a frequency channel.

The adjacent splatter ratio K s is the ratio of the power due to spectral splatter from an adjacent channel to the corresponding power that arrives at the receiver in that channel. For example, if B is the bandwidth of a frequency channel that includes 97% of the signal power and F s  ≥ B, then no more than 1.5% of the power from a transmitted pulse can enter an adjacent channel on one side of the frequency channel used by the pulse; therefore, K s  ≤ 0.015/0.97 = 0.0155. A given maximum value of K s can be reduced by an increase in F s , but eventually the number of frequency channels M must be reduced if the hopping bandwidth is fixed. As a result, the rate at which users hop into the same channel increases. This increase may cancel any improvement due to the reduction of the spectral splatter. The opposite procedure (reducing F s and B so that more frequency channels become available) increases not only the spectral splatter but also signal distortion and intersymbol interference, so the amount of useful reduction is limited.
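The bound in the example above can be reproduced directly (the function name is illustrative):

```python
def adjacent_splatter_bound(in_channel_fraction):
    """Upper bound on the adjacent splatter ratio K_s when a fraction
    `in_channel_fraction` of the signal power lies inside the channel
    bandwidth B and Fs >= B, so at most half of the out-of-channel power
    enters one adjacent channel."""
    out_one_side = (1.0 - in_channel_fraction) / 2.0
    return out_one_side / in_channel_fraction

print(round(adjacent_splatter_bound(0.97), 4))   # 0.0155, as in the text
```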

To avoid spectral spreading due to amplifier nonlinearity, it is desirable for the data modulation to have a constant amplitude, as it is often impossible to implement a filter with the appropriate bandwidth and center frequency for spectral shaping of a signal after it emerges from the final power amplifier. Since they have constant amplitudes and do not require coherent demodulation, good modulation candidates are differential phase-shift keying (DPSK) and the spectrally compact forms of continuous-phase modulation.


A DPSK demodulator compares the phases of two successive received symbols. If the magnitude of the phase difference does not exceed π/2, then the demodulator decides that a 1 was transmitted; otherwise, it decides that a 0 was transmitted.

Consider multitone jamming of an FH/DPSK system with negligible thermal noise. Each tone is assumed to have a frequency identical to the center frequency of one of the frequency channels. The composite signal, consisting of the transmitted signal and the jamming tone, has a constant phase over two successive received symbols in the same hop dwell interval if a 1 was transmitted and the thermal noise is absent; thus, the demodulator correctly detects the 1.

Suppose that a 0 was transmitted, and that the desired signal is \(\sqrt {2S}\cos 2\pi f_{c}t\) during the first symbol interval, where S is the average power and f c is the carrier frequency of the frequency-hopping signal during the dwell interval. When a jamming tone is present, trigonometric identities indicate that the composite signal during the first symbol interval may be expressed as
$$ \begin{aligned} \sqrt{2S}\cos2\pi f_{c}t+\sqrt{2I}\cos\left( 2\pi f_{c}t+\theta\right) =\sqrt{2S+2I+4\sqrt{SI}\cos\theta}\cos\left( 2\pi f_{c}t+\phi_{1}\right)\end{aligned} $$
where I is the average power of the jamming tone, θ is the phase of the tone relative to the phase of the transmitted signal, and ϕ1 is the phase of the composite signal:
$$ \begin{aligned} \phi_{1}=\tan^{-1}\left( \frac{\sqrt{I}\sin\theta}{\sqrt{S}+\sqrt{I} \cos\theta}\right) . {} \end{aligned} $$
Since the desired signal during the second symbol is \(-\sqrt {2S}\cos 2\pi f_{c}t\), the composite signal during the second symbol interval is
$$ \begin{aligned} -\sqrt{2S}\cos2\pi f_{c}t+\sqrt{2I}\cos\left( 2\pi f_{c}t+\theta\right) =\sqrt{2S+2I-4\sqrt{SI}\cos\theta}\cos\left( 2\pi f_{c}t+\phi_{2}\right)\end{aligned} $$
where
$$ \begin{aligned} \phi_{2}=\tan^{-1}\left( \frac{\sqrt{I}\sin\theta}{-\sqrt{S}+\sqrt{I} \cos\theta}\right) . {} \end{aligned} $$
Using trigonometry, it is found that
$$ \begin{aligned} \phi_{2}-\phi_{1}=\cos^{-1}\left[ \frac{I-S}{\sqrt{S^{2}+I^{2}+2SI(1-2\cos ^{2}\theta)}}\right] . {} \end{aligned} $$
If I ≥ S, then |ϕ2 − ϕ1|≤ π/2, so the demodulator incorrectly decides that a 1 was transmitted. If I < S, no mistake is made. Thus, multitone jamming with total power I t is most damaging when J frequency channels given by (3.29) are jammed and each tone has power I = I t /J. If the information bits 0 and 1 are equally likely, then the symbol error probability given that a frequency channel is jammed with I ≥ S is P s  = 1/2, the probability that a 0 was transmitted. Therefore, P s  = J/2M if I t  ≥ S, and P s  = 0, otherwise. Using (3.27) and (3.29) with \(S=\mathbb {E}_{b}/T_{b},\) I t  = It0W, and ⌊x⌋≈ x, we obtain the symbol error probability for FH/DPSK and multitone jamming:
$$ \begin{aligned} P_{s}\approx\left\{ \begin{matrix} {\frac{1}{2},} & \;\;\;\;{\frac{\mathbb{E}_{b}}{I_{t0}}}<BT_{b}\\ \frac{1}{2}BT_{b}\left( {\frac{\mathbb{E}_{b}}{I_{t0}}}\right) ^{-1}, & \;\;\;\;BT_{b}\leq{\frac{\mathbb{E}_{b}}{I_{t0}}}\leq WT_{b}\\ 0, & \ {\frac{\mathbb{E}_{b}}{I_{t0}}}>WT_{b} \end{matrix} \right. {} \end{aligned} $$
which indicates that multitone jamming has a much stronger impact on P s than white Gaussian noise of the same power.
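The decision rule underlying this result can be verified numerically from the expressions for ϕ1 and ϕ2; a short sketch, with unit symbol energy and a grid of tone phases θ (the specific powers S and I are illustrative):

```python
import math

def dpsk_phase_difference(S, I, theta):
    """Phase difference phi2 - phi1 of the composite signal over two successive
    symbols when a 0 is transmitted (sign reversal of the desired signal) and a
    jamming tone of power I and relative phase theta is present."""
    phi1 = math.atan2(math.sqrt(I) * math.sin(theta),
                      math.sqrt(S) + math.sqrt(I) * math.cos(theta))
    phi2 = math.atan2(math.sqrt(I) * math.sin(theta),
                      -math.sqrt(S) + math.sqrt(I) * math.cos(theta))
    d = phi2 - phi1
    return math.atan2(math.sin(d), math.cos(d))   # wrap to (-pi, pi]

thetas = [k * 2 * math.pi / 100 for k in range(100)]
# I >= S: |phi2 - phi1| <= pi/2, so the demodulator decides 1 (a symbol error)
errors_strong = all(abs(dpsk_phase_difference(1.0, 2.0, t)) <= math.pi / 2 + 1e-12
                    for t in thetas)
# I < S: |phi2 - phi1| > pi/2, so the transmitted 0 is detected correctly
correct_weak = all(abs(dpsk_phase_difference(1.0, 0.5, t)) > math.pi / 2
                   for t in thetas)
print(errors_strong, correct_weak)
```

The sign of cos(ϕ2 − ϕ1) equals the sign of I − S for every θ, which is the content of (3.36).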


We define the frequency pulse g(t) as a piece-wise continuous function that vanishes outside the interval \(\left [ 0,LT_{s}\right ] \); that is,
$$ \begin{aligned} g(t)=0\;,\;\;\;t<0\;,\;\;\;t>LT_{s} \end{aligned} $$
where L is a positive integer and T s is the symbol duration. The function is normalized so that
$$ \begin{aligned} \int_{0}^{LT_{s}}g(x)dx=\frac{1}{2}. \end{aligned} $$
The phase response is defined as the continuous function
$$ \begin{aligned} \phi\left( t\right) =\left\{ \begin{array} [c]{cc} 0, & t<0\\ \int_{0}^{t}g(x)dx, & 0\leq t\leq LT_{s}\\ 1/2, & t>LT_{s}. \end{array} \right. {} \end{aligned} $$
The general form of a signal with continuous-phase modulation (CPM) is
$$ \begin{aligned} s(t)=A\cos[2\pi f_{c}t+\phi(t,\boldsymbol{\alpha})] {} \end{aligned} $$
where A is the amplitude, f c is the carrier frequency, and ϕ(t, α) is the phase function that carries the message. The phase function has the form
$$ \begin{aligned} \phi(t,\boldsymbol{\alpha})=2\pi h\sum_{i=-\infty}^{\infty}\alpha_{i} \phi(t-iT_{s})\ {} \end{aligned} $$
where h is a constant called the deviation ratio or modulation index, and the vector α is a sequence of q-ary channel symbols. Each symbol α i takes one of q values; if q is even, the values are ± 1, ±3, …, ±(q − 1). The phase function is continuous and (3.42) indicates that the phase in any specified symbol interval depends on the previous symbols.
Since g(t) is a piece-wise continuous function, ϕ(t, α) is differentiable. The frequency function of the CPM signal, which is proportional to the derivative of ϕ(t, α), is
$$ \begin{aligned} \frac{1}{2\pi}\phi^{\prime}(t,\boldsymbol{\alpha})=h\sum_{i=-\infty}^{n} \alpha_{i}g(t-iT_{s}),\ \ nT_{s}\leq t\leq\left( n+1\right) T_{s}.\ {} \end{aligned} $$
If L = 1, the continuous-phase modulation is called a full-response modulation; if L > 1, it is called a partial-response modulation, and each frequency pulse extends over two or more symbol intervals. The normalization condition for a full-response modulation implies that the phase change over a symbol interval is equal to hπα i .
Continuous-phase frequency-shift keying (CPFSK) is a full-response subclass of CPM for which the instantaneous frequency is constant over each symbol interval. Because of the normalization, a CPFSK frequency pulse is given by
$$ \begin{aligned} g(t)=\left\{ \begin{matrix} {\frac{1}{2T_{s}}}\;,\;\;\;\;0\leq t\leq T_{s}\\ 0\;,\;\;\;\;\;\mathrm{otherwise} \end{matrix} \right. {} \end{aligned} $$
and its phase response is
$$ \begin{aligned} \phi\left( t\right) =\left\{ \begin{array} [c]{cc} 0, & t<0\\ {\frac{t}{2T_{s}},}\; & 0\leq t\leq T_{s}\\ \frac{1}{2}, & t>T_{s}. \end{array} \right. {} \end{aligned} $$
A CPFSK signal shifts among frequencies separated by f d  = h/T s . The substitution of (3.45) into (3.42) indicates that
$$ \begin{aligned} \phi(t,\boldsymbol{\alpha})=\pi h\sum_{i=-\infty}^{n-1}\alpha_{i}+\frac{\pi h\alpha_{n}}{T_{s}}(t-nT_{s}),\ nT_{s}\leq t\leq\left( n+1\right) T_{s}. {} \end{aligned} $$
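The phase trajectory (3.47) is easy to generate directly; a minimal sketch for binary CPFSK, with illustrative values h = 1/2, T s  = 1, and a short symbol sequence:

```python
import math

h, Ts = 0.5, 1.0             # deviation ratio and symbol duration (illustrative)
symbols = [1, -1, -1, 1, 1]  # alpha_n for n = 0, 1, 2, ...

def phase(t):
    """Excess phase phi(t, alpha) of (3.47) for t >= 0."""
    n = min(int(t // Ts), len(symbols) - 1)
    acc = math.pi * h * sum(symbols[:n])   # accumulated phase of past symbols
    return acc + math.pi * h * symbols[n] * (t - n * Ts) / Ts

# The phase is continuous and changes by pi*h*alpha_n over each symbol interval
for n, a in enumerate(symbols):
    assert abs(phase((n + 1) * Ts - 1e-9) - phase(n * Ts) - math.pi * h * a) < 1e-6
```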

The main difference between CPFSK and FSK is that h can have any positive value for CPFSK but is restricted to integer values for FSK so that the tones are orthogonal to each other. Both modulations may be detected with matched filters, envelope detectors, and frequency discriminators. Although CPFSK explicitly requires phase continuity and FSK does not, FSK is almost always implemented with phase continuity to avoid the generation of spectral splatter, and hence is equivalent to CPFSK with h = 1. Minimum-shift keying (MSK) is defined as binary CPFSK with h = 1/2, and hence the two frequencies are separated by f d  = 1/2T s .

With multisymbol noncoherent detection [110], CPFSK systems can provide a better symbol error probability than coherent BPSK systems without multisymbol detection. For r-symbol detection, the optimal receiver correlates the received waveform over all possible r-symbol patterns before making a decision. The drawback is the considerable implementation complexity of multisymbol detection, even for three-symbol detection.

Many communication signals are modeled as bandpass signals having the form
$$ \begin{aligned} s(t)=\operatorname{Re}\left[ s_{l}\left( t\right) \exp\left(j2\pi f_{c}t\right) \right] \end{aligned} $$
where \(j=\sqrt {-1},\) and \(s_{l}\left ( t\right ) \) is a complex-valued function. Multiplication of s(t) by \(2\exp \left ( -j2\pi f_{c}t\right ) \) and lowpass filtering produces \(s_{l}\left ( t\right ) ,\) which is called the complex envelope or equivalent lowpass waveform of s(t). We consider quadrature modulations that have the form
$$ \begin{aligned} s_{l}(t)=\frac{A[d_{1}(t)+jd_{2}(t)]\exp\left(j\theta\right) }{\sqrt{2}} \end{aligned} $$
where d1(t) and d2(t) are data modulations, and θ is the random phase uniformly distributed over [0, 2π). Then s(t) may be expressed as
$$ \begin{aligned} s(t)=\frac{A}{\sqrt{2}}d_{1}(t)\cos{}(2\pi f_{c}t+\theta)-\frac{A}{\sqrt{2} }d_{2}(t)\sin{}(2\pi f_{c}t+\theta) {} \end{aligned} $$
where the \(\sqrt {2}\) has been inserted because the power is divided between the quadrature components.
We consider data modulations with the idealized form
$$ \begin{aligned} d_{i}(t)=\sum_{k=-\infty}^{\infty}a_{ik}\psi(t-kT_{s}-T_{0}-t_{i}),\;\;\;i=1,2 {} \end{aligned} $$
where {a ik } is a sequence of independent, identically distributed random variables, a ik  = +1 with probability 1/2 and a ik  = −1 with probability 1/2, ψ(t) is a pulse waveform, T s is the pulse duration and symbol duration, t i is the relative pulse offset, and T0 is an independent random variable that is uniformly distributed over the interval [0, T s ) and reflects the arbitrariness of the origin of the coordinate system. Equation (3.50) describes an infinite stream of data symbols, which is an approximation that serves to simplify the evaluations of the autocorrelations and PSDs. As shown in Section 3.4, a finite set of symbols has a less compact spectrum.
Since a ik is independent of a in when n ≠ k, it follows that E[a ik a in ] = 0, n ≠ k. Therefore, the autocorrelation of the real-valued d i (t) is
$$ \begin{aligned} R_{di}(\tau) & =E[d_{i}(t)d_{i}(t+\tau)]\\ & =\sum_{k=-\infty}^{\infty}E[\psi(t-kT_{s}-T_{0}-t_{i})\psi(t-kT_{s} -T_{0}-t_{i}+\tau)].\;\;\; \end{aligned} $$
Expressing the expected value as an integral over the range of T0 and changing variables, we obtain
$$ \begin{aligned} R_{di}(\tau) & =\sum_{k=-\infty}^{\infty}\frac{1}{T_{s}}\int_{t-kT_{s} -T_{s}-t_{i}}^{t-kT_{s}-t_{i}}\psi(x)\psi(x+\tau)dx\\ & =\frac{1}{T_{s}}\int_{-\infty}^{\infty}\psi(x)\psi(x+\tau)dx,\;\;\;i=1,2. {} \end{aligned} $$
This equation indicates that d1(t) and d2(t) are wide-sense-stationary processes with the same autocorrelation. The autocorrelation of the complex-valued s l (t) is
$$ \begin{aligned} R_{l}(\tau)=E[s_{l}(t)s_{l}^{\ast}(t+\tau)]. \end{aligned} $$
The independence of d1(t) and d2(t) implies that
$$ \begin{aligned} R_{l}(\tau)=\frac{A^{2}}{2}R_{d1}(\tau)+\frac{A^{2}}{2}R_{d2}(\tau). {} \end{aligned} $$
The two-sided PSD S l (f) of the complex envelope s l (t) is the Fourier transform (Appendix C.1 ) of R l (τ). The Fourier transform of ψ(t) is
$$ \begin{aligned} G(f)=\int_{-\infty}^{\infty}\psi(t)e^{-j2\pi ft}dt. {} \end{aligned} $$
From (3.54), (3.52), and the convolution theorem (Appendix C.1 ), we obtain the PSD
$$ \begin{aligned} S_{l}(f)=A^{2}\frac{|G(f)|{}^{2}}{T_{s}}. {} \end{aligned} $$
In a quadriphase-shift keying (QPSK) signal, d1(t) and d2(t) are modeled as independent random binary sequences with t1 = t2 = 0 and symbol duration T s  = 2T b in (3.50), where T b is a bit duration. If ψ(t) is rectangular with unit amplitude over [0, 2T b ], then (3.56), (3.55), and an elementary integration indicate that the PSD for QPSK is
$$ \begin{aligned} S_{l}(f)=2A^{2}T_{b}\,\mathrm{sinc}^{2}(2T_{b}f) \end{aligned} $$
which is the same as the PSD for BPSK.
A binary minimum-shift-keying (MSK) signal with the same component amplitude can be represented by (3.49) and (3.50) with t1 = 0, t2 = −T b , T s  = 2T b , and
$$ \begin{aligned} \psi(t)=\sqrt{2}\sin\left( \frac{\pi t}{2T_{b}}\right) \;,\;\;\;0\leq t<2T_{b}. \end{aligned} $$
A straightforward evaluation of G(f) using trigonometry and trigonometric integrals gives the PSD of MSK:
$$ \begin{aligned} S_{l}(f)=\frac{16A^{2}T_{b}}{\pi^{2}}\left[ \frac{\cos{}(2\pi T_{b}f)} {16T_{b}^{2}f^{2}-1}\right] ^{2}. {} \end{aligned} $$
A measure of the spectral compactness of a signal is provided by the fractional in-band power F ib (b) defined as the fraction of power for \(f\in \left [ -b,b\right ]\). Thus,
$$ \begin{aligned} F_{ib}(b)=\frac{\int_{-b}^{b}S_{l}(f)df}{\int_{-\infty}^{\infty}S_{l} (f)df},\;\;\;b\geq0. {} \end{aligned} $$
The bandwidth B of a frequency channel is determined by setting F ib (B/2) equal to the fraction of demodulated signal power in a frequency channel. The autocorrelations of the complex envelopes of practical signals with symbol duration T s are functions of τ/T s  : \(R_{l}(\tau )=f_{1}\left ( \tau /T_{s}\right )\). For these autocorrelations, the PSD of the complex envelope has the form \(S_{l}(f)=T_{s}f_{2}\left (fT_{s}\right )\). Therefore, B is inversely proportional to T s . We define the normalized bandwidth as
$$ \begin{aligned} \zeta=BT_{s}. {} \end{aligned} $$
Usually, F ib (B/2) must be at least 0.9 to prevent significant signal distortion and performance degradation in communications over a bandlimited channel. The normalized bandwidth for which F ib (B/2) = 0.99 is approximately 1.2 for binary MSK, but approximately 8 for BPSK.
The fractional out-of-band power of the complex envelope is defined as F ob (f) = 1 − F ib (f). The adjacent-splatter ratio, which is due to out-of-band power on one side of the center frequency, has the upper bound given by
$$ \begin{aligned} K_{s}<\frac{1}{2}F_{ob}(B/2). {} \end{aligned} $$
The closed-form expressions for the PSDs of QPSK and binary MSK are used to generate Figure 3.7. The graphs depict F ob (f) in decibels as a function of f in units of 1/T b , where T b  = T s /log2q for a q-ary modulation.
Fig. 3.7

Fractional out-of-band power for equivalent lowpass waveforms of QPSK and MSK
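Curves like those in Figure 3.7 can be approximated by numerically integrating the closed-form PSDs (3.57) and (3.59); a sketch, assuming A = T b  = 1 and a truncated frequency grid:

```python
import numpy as np

Tb, A = 1.0, 1.0
u = np.arange(0.0005, 400.0, 0.001)   # positive frequencies f*Tb; offset avoids 0/0 points

qpsk = 2 * A**2 * Tb * np.sinc(2 * Tb * u)**2                      # (3.57)
msk = (16 * A**2 * Tb / np.pi**2) * (np.cos(2 * np.pi * Tb * u) /
                                     (16 * Tb**2 * u**2 - 1))**2   # (3.59)

def bandwidth_99(psd):
    """Smallest normalized bandwidth B*Tb containing 99% of the power."""
    cdf = np.cumsum(psd) / np.sum(psd)        # one-sided cumulative power (PSDs are even)
    return 2 * u[np.searchsorted(cdf, 0.99)]  # two-sided bandwidth B = 2b

b_msk, b_qpsk = bandwidth_99(msk), bandwidth_99(qpsk)
print(b_msk, b_qpsk)   # MSK is far more spectrally compact than QPSK/BPSK
```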

An even more compact spectrum than binary MSK is obtained by passing the MSK frequency pulses through a Gaussian filter with transfer function
$$ \begin{aligned} H(f)=\exp\left[ -\frac{\ln2}{2B_{1}^{2}}f^{2}\right] \end{aligned} $$
where B1 is the 3-dB bandwidth, the positive frequency at which \(\left \vert H(f)\right \vert ^{2}\) falls to \(\left \vert H(0)\right \vert ^{2}/2\). The filter response to a unit-amplitude MSK frequency pulse is the Gaussian MSK (GMSK) pulse:
$$ \begin{aligned} g(t)=Q\left[ \frac{2\pi B_{1}}{\sqrt{\ln2}}(t-\frac{T_{s}}{2})\right] -Q\left[ \frac{2\pi B_{1}}{\sqrt{\ln2}}(t+\frac{T_{s}}{2})\right] \end{aligned} $$
where T s  = T b . As B1 decreases, the spectrum of a GMSK signal becomes more compact. However, each pulse is longer, and hence there is more intersymbol interference. If B1T s  = 0.3, which is specified in the Global System for Mobile Communications (GSM) cellular system, the normalized bandwidth for which F ib (B/2) = 0.99 is approximately ζ = 0.92. Each pulse may be truncated for ∣t∣ > 1.5T s with little loss. The performance loss relative to MSK is approximately 0.46 dB for coherent demodulation and presumably also for noncoherent demodulation.
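The GMSK pulse can be generated from the Q-function expression above; a minimal sketch with the GSM value B1T s  = 0.3 (T s  = 1 is an illustrative normalization):

```python
import math

def Q(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

Ts, B1 = 1.0, 0.3                              # B1*Ts = 0.3, as in GSM
c = 2 * math.pi * B1 / math.sqrt(math.log(2))

def gmsk_pulse(t):
    """Gaussian-filtered response to a unit-amplitude pulse of duration Ts."""
    return Q(c * (t - Ts / 2)) - Q(c * (t + Ts / 2))

# The response to a unit-amplitude pulse of duration Ts has total area Ts
# (unit DC gain); truncation to |t| <= 1.5*Ts loses very little of that area.
dt = 0.001
area = sum(gmsk_pulse(-1.5 * Ts + k * dt) * dt for k in range(int(3 * Ts / dt)))
print(area)
```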
The cross-correlation parameter for two transmitted signals s1(t) and s2(t), each with energy \(\mathbb {E}_{s}\), is defined as
$$ \begin{aligned} C=\frac{1}{\mathbb{E}_{s}}\int_{0}^{T_{s}}s_{1}(t)s_{2}(t)dt. {} \end{aligned} $$
For binary CPFSK, the two possible transmitted signals, each representing a different channel symbol, are
$$ \begin{aligned} s_{1}(t)=\sqrt{2\mathbb{E}_{s}/T_{s}}\cos{}(2\pi f_{1}t+\phi_{1})\;,\;\;\;s_{2} (t)=\sqrt{2\mathbb{E}_{s}/T_{s}}\cos{}(2\pi f_{2}t+\phi_{2}). {} \end{aligned} $$
The substitution of these equations into (3.65), a trigonometric expansion and discarding of an integral that is negligible if (f1 + f2)T s  >> 1, and the evaluation of the remaining integral give
$$ \begin{aligned} C=\frac{1}{2\pi f_{d}T_{s}}[\sin{}(2\pi f_{d}T_{s}+\phi_{d})-\sin\phi _{d}]\;,\;\;\;f_{d}\neq0 {} \end{aligned} $$
where f d  = f1 − f2 and ϕ d  = ϕ1 − ϕ2. Because of the phase synchronization in a coherent demodulator, we may take ϕ d  = 0. Therefore, the orthogonality condition C = 0 is satisfied if h = f d T s  = k/2, where k is any nonzero integer. The smallest value of h for which C = 0 is h = 1/2, which corresponds to MSK.
In a noncoherent demodulator, ϕ d is a random variable that is assumed to be uniformly distributed over [0, 2π). Equation (3.67) indicates that E[C] = 0 for all values of h. The variance of C is
$$ \begin{aligned} var(C) & =\left( \frac{1}{2\pi f_{d}T_{s}}\right) ^{2}E\left[ \sin ^{2}(2\pi f_{d}T_{s}+\phi_{d})+\sin^{2}\phi_{d}-2\sin\phi_{d}\sin{}(2\pi f_{d}T_{s}+\phi_{d})\right] \\ & =\left( \frac{1}{2\pi f_{d}T_{s}}\right) ^{2}(1-\cos2\pi f_{d} T_{s})\\ & =\frac{1}{2}\mathrm{sinc}^{2}h. {} \end{aligned} $$
Since var(C) ≠ 0 for h = 1/2, MSK does not provide orthogonal signals for noncoherent demodulation. If h is any nonzero integer, then both (3.68) and (3.67) indicate that the two CPFSK signals are orthogonal for any ϕ d . This result justifies the previous assertion that FSK tones must be separated by f d  = k/T s to provide noncoherent orthogonal signals.
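Both the orthogonality condition and (3.68) can be confirmed by direct numerical integration of (3.65); a sketch, with an illustrative carrier f1 = 20/T s chosen so that the discarded double-frequency term is negligible:

```python
import math

Ts, f1 = 1.0, 20.0   # illustrative carrier with (f1 + f2)*Ts >> 1

def C(h, phi_d, n=5000):
    """Cross-correlation (3.65) of two unit-energy CPFSK tones with f_d = h/Ts,
    evaluated by midpoint-rule integration over one symbol interval."""
    f2 = f1 - h / Ts
    dt = Ts / n
    return sum((2 / Ts) * math.cos(2 * math.pi * f1 * (k + 0.5) * dt)
               * math.cos(2 * math.pi * f2 * (k + 0.5) * dt + phi_d) * dt
               for k in range(n))

# Coherent case (phi_d = 0): h = 1/2 gives orthogonality (MSK)
assert abs(C(0.5, 0.0)) < 1e-3

# Noncoherent case: averaging C^2 over phi_d approximates (1/2)*sinc^2(h)
h = 0.5
phases = [k * 2 * math.pi / 64 for k in range(64)]
var_c = sum(C(h, p)**2 for p in phases) / len(phases)
assert abs(var_c - 0.5 * (math.sin(math.pi * h) / (math.pi * h))**2) < 1e-2
```

The nonzero variance at h = 1/2 confirms that MSK tones are not orthogonal under noncoherent demodulation.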
Consider the multitone jamming of an FH/CPM or FH/CPFSK system in which the thermal noise is absent and each jamming tone is randomly placed within a single frequency channel. It is reasonable to assume that a symbol error occurs with probability (q − 1)/q when the frequency channel contains a jamming tone with power exceeding S. The substitution of (3.27), (3.20), \(\mathbb {E}_{b}=ST_{b}\), and It0 = I t /W into (3.22) yields
$$ \begin{aligned} P_{s}=\left\{ \begin{matrix} {\frac{q-1}{q},} & \;\;\;\;{\frac{\mathbb{E}_{b}}{I_{t0}}}<BT_{b}\\ \left( {\frac{q-1}{q}}\right) BT_{b}\left( {\frac{\mathbb{E}_{b}}{I_{t0}} }\right) ^{-1}, & \;\;\;\;BT_{b}\leq{\frac{\mathbb{E}_{b}}{I_{t0}}}\leq WT_{b}\\ 0, & \ {\frac{\mathbb{E}_{b}}{I_{t0}}}>WT_{b} \end{matrix} \right. {} \end{aligned} $$
for sophisticated multitone jamming. The symbol error probability increases with B because the enlarged frequency channels present better targets for the multitone jamming.
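The piecewise expression (3.70) translates directly into code; a minimal sketch (the values of BT b and WT b are illustrative):

```python
def ps_multitone(eb_over_it0, q, BTb, WTb):
    """Symbol error probability (3.70) for FH/CPM under sophisticated
    multitone jamming; eb_over_it0 is the ratio E_b/I_t0."""
    worst = (q - 1) / q
    if eb_over_it0 < BTb:
        return worst            # every jammed channel causes errors
    if eb_over_it0 <= WTb:
        return worst * BTb / eb_over_it0   # inverse-linear region
    return 0.0                  # jammer cannot place a tone of power > S anywhere

# The expression is continuous at E_b/I_t0 = B*T_b and decreases as 1/(E_b/I_t0)
assert abs(ps_multitone(1.0, 2, 1.0, 100.0) - 0.5) < 1e-12
assert abs(ps_multitone(10.0, 2, 1.0, 100.0) - 0.05) < 1e-12
assert ps_multitone(200.0, 2, 1.0, 100.0) == 0.0
```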

As implied by Figure 3.7, the bandwidth requirement of DPSK with F ib (B/2) > 0.9, which is the same as that of BPSK or QPSK and less than that of orthogonal FSK, exceeds that of MSK. Thus, if the hopping bandwidth W is fixed, the number of frequency channels available for FH/DPSK is smaller than it is for noncoherent FH/MSK. This increase in B and reduction in the number of frequency channels offset the intrinsic performance advantage of FH/DPSK and imply that noncoherent FH/MSK gives a lower P s than FH/DPSK in the presence of worst-case multitone jamming, as indicated in (3.37). Alternatively, if the bandwidth of a frequency channel is fixed, an FH/DPSK signal experiences more distortion and spectral splatter than an FH/MSK signal. Any pulse shaping of the DPSK symbols alters their constant amplitude. An FH/DPSK system is also more sensitive to Doppler shifts and frequency instabilities than an FH/MSK system. Another disadvantage of FH/DPSK is the usual lack of phase coherence from hop to hop, which necessitates an extra phase-reference symbol at the start of every dwell interval. This extra symbol reduces \(\mathbb {E}_{s}\) by a factor (N − 1)/N, where N is the number of symbols per hop or dwell interval and N ≥ 2. Thus, DPSK is less suitable than noncoherent MSK for most applications of frequency-hopping communications, and the main competition for MSK comes from other forms of CPM.

3.4 Power Spectral Density of FH/CPM

The finite extent of the dwell intervals causes an expansion of the spectral density of an FH/CPM signal relative to a CPM signal of the same duration. The reason is that an FH/CPM signal has a continuous phase over each dwell interval of N symbols, but has a phase discontinuity every T h  = NT s  + T sw seconds at the beginning of the next dwell interval. The signal may be expressed as
$$ \begin{aligned} s(t)=A\sum_{i=-\infty}^{\infty}w(t-iT_{h},T_{d})\cos\left[ 2\pi f_{ci} t+\phi(t,\boldsymbol{\alpha})+\theta_{i}\right] {} \end{aligned} $$
where A is the amplitude during a dwell interval, w(t, T d ) as defined in ( 2.48 ) is a unit-amplitude rectangular pulse of duration T d  = NT s , f ci is the carrier frequency during hop-interval i, ϕ(t, α) is the phase function defined by (3.42), and θ i is the phase at the beginning of dwell-interval i.
The PSD of the complex envelope of an FH/CPM signal, which is the same as the dehopped PSD, depends on the number of symbols per dwell interval N because of the finite dwell time. To simplify the derivation of the PSD, we neglect the switching time and set T h  = T d  = NT s . Let w(t) = 1, 0 ≤ t < NT s , and w(t) = 0, otherwise. The complex envelope of the FH/CPM signal is
$$ \begin{aligned} F\left( t,\boldsymbol{\alpha}\right) =A\sum_{i=-\infty}^{\infty}w(t-iNT_{s} )\exp\left[\,j\phi(t,\boldsymbol{\alpha})+j\theta_{i}\right] \end{aligned} $$
where the \(\left \{ \theta _{i}\right \} \) are assumed to be independent and uniformly distributed over [0, 2π). Therefore, \(E\left [ \exp \left (j\theta _{i}-j\theta _{k}\right ) \right ] =0,i\neq k,\) and the autocorrelation of \(F\left ( t,\boldsymbol {\alpha }\right ) \) is
$$ \begin{aligned} R_{f}\left( t,t+\tau\right) & =E\left[\,F^{\ast}\left( t,\boldsymbol{\alpha }\right) F\left( t+\tau,\boldsymbol{\alpha}\right) \right] \\ & =A^{2}\sum_{i=-\infty}^{\infty}w(t-iNT_{s})w(t+\tau-iNT_{s})R_{c}\left( t,t+\tau\right) {} \end{aligned} $$
where the asterisk denotes the complex conjugate, and the autocorrelation of the complex envelope of the underlying CPM signal is
$$ \begin{aligned} R_{c}\left( t,t+\tau\right) =E\left\{ \exp\left[\,j\phi(t+\tau ,\boldsymbol{\alpha})-j\phi(t,\boldsymbol{\alpha})\right] \right\} . {} \end{aligned} $$
Assuming that the symbols of α are independent and identically distributed, (3.42) and (3.73) imply that \(R_{c}\left ( t,t+\tau \right ) \) is periodic in t with period T s . Although the series in (3.72) is infinite, each term in the series has a duration less than NT s . Therefore, the periodicity of \(R_{c}\left ( t,t+\tau \right ) \) implies that \(R_{f}\left ( t,t+\tau \right ) \) is periodic in t with period NT s .
The average autocorrelation of \(F\left ( t,\boldsymbol {\alpha }\right ) \), found by substituting (3.72) into the definition ( 2.14 ), is
$$ \begin{aligned} R_{f}\left( \tau\right) =\frac{A^{2}}{NT_{s}}\int\nolimits_{0}^{NT_{s}} R_{f}\left( t,t+\tau\right) dt. {} \end{aligned} $$
Equation (3.73) indicates that \(R_{c}\left ( t,t-\tau \right ) =\) \(R_{c}^{\ast }\left ( t-\tau ,t\right )\). This equation, (3.72), and (3.74) imply that
$$ \begin{aligned} R_{f}\left( -\tau\right) =R_{f}^{\ast}\left( \tau\right) . {} \end{aligned} $$
Since w(t − iNT s )w(t + τ − iNT s ) = 0 if τ ≥ NT s in (3.72 ),
$$ \begin{aligned} R_{f}\left( \tau\right) =0,\ \ \tau\geq NT_{s}. \end{aligned} $$
Thus, only \(R_{f}\left ( \tau \right ) \) for 0 ≤ τ < NT s remains to be evaluated.
Since w(t − iNT s )w(t + τ − iNT s ) = 0 if τ ≥ 0 and \(t\in \left [ NT_{s}-\tau ,NT_{s}\right ]\), the substitution of (3.72) into (3.74) yields
$$ \begin{aligned} R_{f}\left( \tau\right) =\frac{A^{2}}{NT_{s}}\int\nolimits_{0}^{NT_{s}-\tau }R_{c}\left( t,t+\tau\right) dt,\ \tau\in\lbrack0,NT_{s}). {} \end{aligned} $$
Let τ = νT s  + 𝜖, where ν is a nonnegative integer, 0 ≤ ν < N, and 0 ≤ 𝜖 < T s . Since \(R_{c}\left ( t,t+\tau \right ) \) is periodic in t with period T s , the integration interval in (3.77) can be divided into smaller intervals with similar integrals. Thus, we obtain
$$ \begin{aligned} R_{f}\left( \tau\right) & =A^{2}\left[ \frac{N-\nu-1}{NT_{s}} \int\nolimits_{0}^{T_{s}}R_{c}\left( t,t+\tau\right) dt+\frac{1}{NT_{s}} \int\nolimits_{0}^{T_{s}-\epsilon}R_{c}\left( t,t+\tau\right) dt\right] ,\\ \nu & =\left\lfloor \tau/T_{s}\right\rfloor <N,\ \epsilon=\tau-\nu T_{s},\ \tau=\nu T_{s}+\epsilon. {}\end{aligned} $$
Assume that the symbols of α are statistically independent; then the substitution of (3.42) into (3.73) yields an infinite product of expectations. The finite duration of \(\phi \left ( t\right ) \) in (3.40) implies that if \(k>\left ( t+\tau \right ) /T_{s}\) or k < 1 − L, then \(\phi \left ( t+\tau -kT_{s}\right ) -\phi \left ( t-kT_{s}\right ) =0,\) and the corresponding factors in the infinite product are unity. Thus, we obtain
$$ \begin{aligned} R_{c}\left( t,t+\tau\right) & =\prod_{k=1-L}^{\left\lfloor \left( t+\tau\right) /T_{s}\right\rfloor }E\left\{ \exp[\,j2\pi h\alpha_{k}\phi _{d}\left( t,\tau,k\right) ]\right\} ,\\ \ t & \in\lbrack0,T_{s}),\tau\in\lbrack0,NT_{s}) {} \end{aligned} $$
$$ \begin{aligned} \phi_{d}\left( t,\tau,k\right) =\phi\left( t+\tau-kT_{s}\right) -\phi\left( t-kT_{s}\right) . {} \end{aligned} $$
If each symbol is equally likely to have any of the q possible values, then
$$ \begin{aligned} R_{c}\left( t,t+\tau\right) =\prod_{k=1-L}^{\left\lfloor \left( t+\tau\right) /T_{s}\right\rfloor }\left\{ \frac{1}{q}\sum \limits_{l=-\left( q-1\right) ,odd}^{q-1}\exp\left[\,j2\pi hl\phi_{d}\left( t,\tau,k\right) \right] \right\} {} \end{aligned} $$
where the sum only includes odd values of the index l. After a change of the index in the sum to \(m=\left ( l+q-1\right ) /2,\) (3.81) can be evaluated as a geometric series. We obtain
$$ \begin{aligned} R_{c}\left( t,t+\tau\right) & =\prod_{k=1-L}^{\left\lfloor \left( t+\tau\right) /T_{s}\right\rfloor }\frac{1}{q}\frac{\sin\left[ 2\pi hq\phi_{d}\left( t,\tau,k\right) \right] }{\sin\left[ 2\pi h\phi _{d}\left( t,\tau,k\right) \right] },\\ \ \ t & \in\lbrack0,T_{s}),\tau\in\lbrack0,NT_{s}) {} \end{aligned} $$
where a factor in the product is set equal to +1 if \(2h\phi _{d}\left ( t,\tau ,k\right ) \ \)is equal to an even integer, and is set equal to −1 if \(2h\phi _{d}\left ( t,\tau ,k\right ) \ \)is equal to an odd integer.
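The reduction of the symbol average in (3.81) to the sine ratio in (3.82) can be spot-checked numerically; a short sketch for q = 4 (the values of x = 2πhϕ d are arbitrary test points):

```python
import cmath
import math

def symbol_average(q, x):
    """(1/q) * sum over odd l in [-(q-1), q-1] of exp(j*l*x),
    with x = 2*pi*h*phi_d, as in (3.81)."""
    return sum(cmath.exp(1j * l * x) for l in range(-(q - 1), q, 2)) / q

def sine_ratio(q, x):
    """Closed form (1/q) * sin(q*x)/sin(x) from (3.82)."""
    return math.sin(q * x) / (q * math.sin(x))

# The geometric-series identity holds at generic points (sin(x) != 0)
for x in [0.1, 0.7, 1.3, 2.9]:
    assert abs(symbol_average(4, x) - sine_ratio(4, x)) < 1e-12
```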
Equation (3.82) indicates that \(R_{c}\left ( t,t+\tau \right ) \) is real-valued, and then (3.77) and (3.75) indicate that \(R_{f}\left ( \tau \right ) \) is a real-valued, even function. Therefore, the average PSD of the dehopped signal, which is the Fourier transform of the average autocorrelation \(R_{f}\left ( \tau \right )\), is
$$ \begin{aligned} S_{l}\left(f\right) =2\int\nolimits_{0}^{NT_{s}}R_{f}\left( \tau\right) \cos\left( 2\pi f\tau\right) d\tau. {} \end{aligned} $$
The average PSD can be calculated by substituting (3.78) and (3.82) into (3.83) and then numerically evaluating the integrals, which extend over finite intervals [41]. The PSD of CPM without frequency hopping but with many data symbols is obtained by taking N → ∞.
For FH/CPFSK with binary CPFSK, L = 1, q = 2, and analytical simplifications are computationally useful. Substitution of (3.80) into (3.82) and a trigonometric identity yield
$$ \begin{aligned} R_{c}\left( t,t+\tau\right) & =\cos\{2\pi h[\phi\left( t+\tau\right) -\phi\left( t\right) ]\}\prod_{k=1}^{\left\lfloor \left( t+\tau\right) /T_{s}\right\rfloor }\cos\left[ 2\pi h\phi\left( t-kT_{s}+\tau\right) \right] ,\\ 0 & \leq t<T_{s},\ \tau\in\lbrack0,NT_{s}) \end{aligned} $$
where \(\prod \limits _{k=1}^{0}\left ( \cdot \right ) =1.\) Let τ = νT s  + 𝜖, where ν is a nonnegative integer, 0 ≤ ν < N, and 0 ≤ 𝜖 < T s . Substituting (3.45) and evaluating the product, we obtain
$$ \begin{aligned} R_{c}\left( t,t+\tau\right) =\cos\left( a\epsilon\right) ,\ \ 0\leq t<T_{s}-\epsilon,\ \nu=0 {} \end{aligned} $$
$$ \begin{aligned} R_{c}\left( t,t+\tau\right) & =\left( \cos\pi h\right) ^{\nu-1} \cos\left( at-\pi h\right) \cos\left( at+a\epsilon\right) ,\\ 0 & \leq t<T_{s}-\epsilon,\ 1\leq\nu<N\end{aligned} $$
$$ \begin{aligned} R_{c}\left( t,t+\tau\right) & =\left( \cos\pi h\right) ^{\nu}\cos\left( at-\pi h\right) \cos\left( at+a\epsilon-\pi h\right) ,\\ T_{s}-\epsilon & \leq t<T_{s},\ 0\leq\nu<N \end{aligned} $$
$$ \begin{aligned} a=\pi h/T_{s},\ \nu=\left\lfloor \tau/T_{s}\right\rfloor <N,\ \epsilon=\tau-\nu T_{s},\ \tau=\nu T_{s}+\epsilon. {} \end{aligned} $$
If h = 1/2, then \(\left ( \cos \pi /2\right ) ^{\nu }=0,\) ν ≥ 1, and \(\left ( \cos \pi /2\right ) ^{0}=1.\) Substitution of \(R_{c}\left ( t,t+\tau \right ) \) into (3.78), use of trigonometric identities, and the evaluation of basic trigonometric integrals yield
$$ \begin{aligned} R_{f}\left( \tau\right) =\frac{A^{2}}{NT_{s}}\left\{ \left( N-1\right) \left[ \left( T_{s}-\frac{\tau}{2}\right) \cos a\tau+\frac{\sin a\tau}{2a}\right] +\left( T_{s}-\tau\right) \cos a\tau\right\} ,\ \ 0\leq\tau<T_{s} {} \end{aligned} $$
$$ \begin{aligned} R_{f}\left( \tau\right) =\frac{A^{2}\left( N-1\right) }{NT_{s}}\left[ \frac{\cos a\epsilon}{2a}-\frac{\left( T_{s}-\epsilon\right) \sin a\epsilon}{2}\right] ,\ \ \epsilon=\tau-T_{s},\ \ T_{s}\leq\tau<2T_{s} {} \end{aligned} $$
and \(R_{f}\left ( \tau \right ) =0\) for 2T s  ≤ τ < NT s . Equation (3.83) indicates that the PSD is
$$ \begin{aligned} S_{l}\left(f\right) =2\sum_{\nu=0}^{N-1}\int\nolimits_{0}^{T_{s}} R_{f}\left( \nu T_{s}+\epsilon\right) \cos[2\pi f\left( \nu T_{s} +\epsilon\right) ]d\epsilon. {} \end{aligned} $$
Substitution of (3.89) and (3.90) into (3.91) followed by analytical or numerical integrations gives the PSDs for FH/CPFSK with binary CPFSK.
When frequency hopping is absent, L = 1, and there are many data symbols per hop, we take N → ∞ in (3.78), (3.82), and (3.83) to enable the calculation of a closed-form expression for the PSD, but only after a very lengthy series of evaluations of trigonometric integrals and algebraic and trigonometric manipulations. For CPFSK with L = 1, the PSD is [69]
$$ \begin{aligned} S_{l}(f)=A^{2}\frac{T_{s}}{q}\sum_{n=1}^{q}\left[ A_{n}^{2}\left(f\right) +\frac{2}{q}\sum_{m=1}^{q}A_{n}\left(f\right) A_{m}\left(f\right) B_{nm}\left(f\right) \right] {} \end{aligned} $$
$$ \begin{aligned} A_{n}\left(f\right) =\frac{\sin\left[ \pi fT_{s}-\frac{\pi h}{2}\left( 2n-q-1\right) \right] }{\pi fT_{s}-\frac{\pi h}{2}\left( 2n-q-1\right) } {} \end{aligned} $$
$$ \begin{aligned} B_{nm}\left(f\right) =\frac{\cos\left( 2\pi fT_{s}-\alpha_{nm}\right) -\Phi\cos\alpha_{nm}}{1+\Phi^{2}-2\Phi\cos2\pi fT_{s}} \end{aligned} $$
$$ \begin{aligned} \alpha_{nm}=\pi h\left( n+m-q-1\right) ,\ \ \ \Phi=\frac{\sin q\pi h}{q\sin\pi h}. \end{aligned} $$
If the denominator in (3.93) is zero, we set \(A_{n}\left (f\right ) =1.\) If h is an even integer, we set Φ = 1; if h is an odd integer, we set Φ = −1. For the special case of MSK, the substitution of q = 2 and h = 1/2 into the preceding equations and considerable mathematical simplification lead again to (3.59).
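As a check, the closed form (3.92)-(3.95), with the deviation ratio included in α nm  = πh(n + m − q − 1), reproduces the MSK spectrum (3.59) when q = 2 and h = 1/2; a numerical sketch (A = T s  = 1 assumed):

```python
import math

def cpfsk_psd(f, q, h, Ts=1.0, A=1.0):
    """Closed-form CPFSK PSD (3.92)-(3.95), with alpha_nm = pi*h*(n+m-q-1)."""
    def An(n):
        x = math.pi * f * Ts - (math.pi * h / 2) * (2 * n - q - 1)
        return 1.0 if abs(x) < 1e-12 else math.sin(x) / x   # sinc limit at x = 0
    sqh, sh = math.sin(q * math.pi * h), math.sin(math.pi * h)
    # For integer h, Phi is set to +1 (h even) or -1 (h odd)
    Phi = sqh / (q * sh) if abs(sh) > 1e-12 else (1.0 if round(h) % 2 == 0 else -1.0)
    def Bnm(n, m):
        a_nm = math.pi * h * (n + m - q - 1)
        return ((math.cos(2 * math.pi * f * Ts - a_nm) - Phi * math.cos(a_nm)) /
                (1 + Phi**2 - 2 * Phi * math.cos(2 * math.pi * f * Ts)))
    total = 0.0
    for n in range(1, q + 1):
        total += An(n)**2 + (2 / q) * sum(An(n) * An(m) * Bnm(n, m)
                                          for m in range(1, q + 1))
    return A**2 * (Ts / q) * total

def msk_psd(f, Tb=1.0, A=1.0):
    """MSK PSD (3.59)."""
    return (16 * A**2 * Tb / math.pi**2) * (math.cos(2 * math.pi * Tb * f) /
                                            (16 * Tb**2 * f**2 - 1))**2

# q = 2, h = 1/2 reduces to the MSK spectrum (away from removable singularities)
for f in [0.05, 0.1, 0.37, 0.8]:
    assert abs(cpfsk_psd(f, 2, 0.5) - msk_psd(f)) < 1e-9
```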
The normalized 99% bandwidth is determined from (3.60) by setting F ib (B/2) = 0.99 and solving for ζ = BT s . The normalized 99% bandwidths of FH/CPFSK with deviation ratios h = 0.5 (MSK) and h = 0.7 are listed in Table 3.1 for different values of N. As N increases, the PSD becomes more compact and approaches that of CPFSK without frequency hopping. For N ≥ 64, the frequency hopping causes little spectral spreading.
Table 3.1

Normalized bandwidth (99%) for FH/CPFSK versus the number N of symbols per dwell interval, for deviation ratios h = 0.5 and h = 0.7; the final row gives the no-hopping limit
Fast frequency hopping, which corresponds to N = 1, entails a very large 99% bandwidth. This fact and the long switching times are the main reasons why slow frequency hopping is preferable to fast frequency hopping and is the predominant form. Consequently, frequency hopping is always assumed to be the slow form unless explicitly stated otherwise.

An advantage of FH/CPFSK with h < 1 or FH/GMSK is that it requires less bandwidth than orthogonal CPFSK (h = 1). The increased number of frequency channels due to the decreased bandwidth does not improve performance over the additive white Gaussian noise (AWGN) channel. However, the increase is advantageous against a fixed number of interference tones, optimized jamming, and multiple-access interference in a network of frequency-hopping systems (Section  9.4 ).

3.5 Digital Demodulation of FH/CPFSK

In principle, the frequency hopping/continuous-phase frequency-shift keying (FH/CPFSK) receiver performs symbol-rate sampling of matched-filter outputs. However, the details of the actual implementation are more involved, primarily because aliasing must be avoided. A practical digital demodulator is described in this section.

The dehopped FH/CPFSK signal during a dwell interval has the form
$$ \begin{aligned} s_{1}(t)=A\cos\left[ 2\pi f_{1}t+\phi(t,\boldsymbol{\alpha})+\phi_{0}\right] \end{aligned} $$
where A is the amplitude, f1 is the intermediate frequency (IF), ϕ(t, α) is the phase function, and ϕ0 is the initial phase. This signal is applied to the noncoherent digital demodulator illustrated in Figure 3.8. An error in the estimated carrier frequency used in the dehopping leads to an f1 that differs from the desired IF f IF by the IF offset frequency f e  = f1 − f IF . The quadrature downconverter, which is shown in Figure  2.17 , uses a sinusoidal signal at frequency f IF  − f o , where f o is the baseband offset frequency, and a pair of mixers to produce in-phase and quadrature components near baseband. The mixer outputs are passed through lowpass filters to remove the double-frequency components. The filter outputs are the in-phase and quadrature CPFSK signals with center frequency at f o  + f e . As shown in Figure 3.8, each of these signals is sampled at the rate f s by an analog-to-digital converter (ADC).
Fig. 3.8

Digital demodulator of dehopped FH/CPFSK signal

A critical choice in the design of the digital demodulator is the sampling rate of the ADCs. This rate must be large enough to prevent aliasing and to accommodate the IF offset. To simplify the demodulator implementation, it is highly desirable for the sampling rate to be an integer multiple of the symbol rate 1/T s . Thus, we assume a sampling rate f s  = L/T s , where L is a positive integer.

To determine the appropriate sampling rate and offset frequency, we use the sampling theorem (Appendix D.4 ), which relates a continuous-time signal x(t) and the discrete-time sequence x n  = x(nT) in the frequency domain:
$$ \begin{aligned} X(e^{j2\pi fT})=\frac{1}{T}\sum_{i=-\infty}^{\infty}X\left(f-\frac{i} {T}\right) {} \end{aligned} $$
where X(f) is the Fourier transform of x(t), X(ej2πfT) is the discrete-time Fourier transform (DTFT) of x n defined as
$$ \begin{aligned} X(e^{j2\pi fT})=\sum_{n=-\infty}^{\infty}x_{n}e^{-j2\pi nfT}\ {} \end{aligned} $$
and 1/T is the sampling rate. The DTFT X(ej2πfT) is a periodic function of frequency f with period 1/T. If x(t) is sampled at rate 1/T, then x(t) can be recovered from the samples if the transform is bandlimited so that X(f) = 0 for \(\left \vert \,f\right \vert >1/2T\). If the sampling rate is not high enough to satisfy this condition, then the terms of the sum in (3.97) overlap, which is called aliasing, and the samples may not correspond to a unique continuous-time signal.
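The aliasing described by (3.97) is easy to demonstrate numerically. The following sketch, with illustrative values that are not from the text, samples a tone above the Nyquist frequency 1/2T and shows that its samples coincide with those of a lower-frequency alias:

```python
import numpy as np

def alias_frequency(f, fs):
    """Frequency to which a real tone at f appears to fold when sampled at fs."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

fs = 8000.0            # sampling rate 1/T in Hz (assumed value)
f_tone = 5000.0        # tone above the Nyquist frequency fs/2

n = np.arange(64)
x = np.cos(2 * np.pi * f_tone * n / fs)                           # 5 kHz tone
x_alias = np.cos(2 * np.pi * alias_frequency(f_tone, fs) * n / fs)

# The samples of the 5 kHz tone are indistinguishable from those of a 3 kHz
# tone, because the shifted replicas in (3.97) overlap when |f| > 1/2T.
assert np.allclose(x, x_alias)
print(alias_frequency(f_tone, fs))  # -> 3000.0
```

Because the two sample sequences are identical, no processing of the samples can recover the original 5 kHz tone, which is the sense in which aliased samples fail to correspond to a unique continuous-time signal.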
The lowpass filters of the quadrature downconverter have bandwidths wide enough to accommodate both f e and the bandwidth of s1(t). The receiver timing or symbol synchronization may be derived from the frequency-hopping pattern synchronization (Section  4.6 ). If the timing is correct, then the sampled ADC output due to the desired signal in the upper branch of Figure 3.8 is the sequence
$$ \begin{aligned} x_{n}=A\cos\left[ 2\pi\left(f_{o}+f_{e}\right) nT_{s}/L+\phi (nT_{s}/L,\boldsymbol{\alpha})+\phi_{0}\right] {} \end{aligned} $$
where ϕ(nT s /L, α) is the sampled phase function of the CPFSK modulation, and ϕ0 is the unknown phase. A similar sequence
$$ \begin{aligned} y_{n}=A\sin\left[ 2\pi\left(f_{o}+f_{e}\right) nT_{s}/L+\phi (nT_{s}/L,\boldsymbol{\alpha})+\phi_{0}\right] \end{aligned} $$
is produced in the lower branch. Equation (3.46) indicates that during symbol interval m,
$$ \begin{aligned} \phi(nT_{s}/L,\boldsymbol{\alpha})=\frac{\pi h\alpha_{m}}{L}(n-Lm)+\phi _{1}\left( m\right) ,\ Lm\leq n\leq Lm+L-1 {} \end{aligned} $$
where α m denotes the symbol received during the interval,
$$ \begin{aligned} \phi_{1}\left( m\right) =\pi h\sum_{i=-\infty}^{m-1}\alpha_{i} \end{aligned} $$
and the {α i } are previous symbols.
The Fourier transforms of the in-phase and quadrature outputs of the quadrature downconverter occupy the upper band [ f o  + f e  − B/2, f o  + f e  + B/2] and the lower band [−f o  − f e  − B/2, − f o  − f e  + B/2], where B is the one-sided bandwidth of the dehopped FH/CPFSK signal. To avoid aliasing when the sampling rate is L/T s , the sampling theorem requires that
$$ \begin{aligned} f_{o}+f_{e}+\frac{B}{2}<\frac{L}{2T_{s}}. {} \end{aligned} $$
To prevent the upper and lower bands from overlapping, and hence distorting the DTFTs, it is necessary that f o  + f e  − B/2 > −f o  − f e  + B/2. Thus, the necessary condition is
$$ \begin{aligned} f_{o}>f_{\max}+\frac{B}{2} {} \end{aligned} $$
where \(f_{\max }\geq \) \(\left \vert \,f_{e}\right \vert \) is the maximum IF offset frequency that is likely to occur. Combining this inequality with (3.103), we obtain another necessary condition:
$$ \begin{aligned} L>2T_{s}\left( 2f_{\max}+B\right). {} \end{aligned} $$
Inequalities (3.104) and (3.105) indicate that both the baseband offset frequency and the sampling rate must be increased as \(f_{\max }\) or B increases.
If we set
$$ \begin{aligned} f_{o}=\frac{L}{4T_{s}} {} \end{aligned} $$
then (3.104) is automatically satisfied when (3.105) is satisfied. Thus, if \(f_{\max }<B/2,\) then L = 4BT s and f o  = B are satisfactory choices.
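The design rules (3.103) to (3.106) can be collected into a short routine. This sketch picks the smallest integer L satisfying (3.105) and sets f_o = L/4T_s; the bandwidth, maximum offset, and symbol rate below are hypothetical values chosen only for illustration:

```python
import math

def design_sampling(B, f_max, T_s):
    """Smallest integer L satisfying (3.105), with f_o = L/(4*T_s) per (3.106)."""
    L = math.floor(2 * T_s * (2 * f_max + B)) + 1   # smallest L > 2*T_s*(2*f_max + B)
    f_o = L / (4 * T_s)
    return L, f_o

# Assumed numbers: 25 kHz dehopped bandwidth, 10 kHz maximum IF offset,
# and a symbol rate of 10 ksymbols/s.
B, f_max, T_s = 25e3, 10e3, 1e-4
L, f_o = design_sampling(B, f_max, T_s)

assert L > 2 * T_s * (2 * f_max + B)          # (3.105)
assert f_o > f_max + B / 2                    # (3.104)
assert f_o + f_max + B / 2 < L / (2 * T_s)    # (3.103) for the worst-case offset
print(L, f_o)
```

The assertions confirm the claim in the text: once (3.105) holds and f_o is set by (3.106), the no-aliasing condition (3.103) and the band-separation condition (3.104) follow automatically.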
Once the ADC outputs with appropriate DTFTs are produced, the offset frequency plays no further role. It is removed by a complex demodulator that computes
$$ \begin{aligned} z_{n} & =\left( x_{n}+jy_{n}\right) \exp\left[ -j2\pi f_{o} nT_{s}/L-j\widehat{\phi}_{1}\left( m\right) \right] \\ & =A\exp\left\{j2\pi\left[ n\left( \frac{2f_{e}T_{s}+h\alpha_{m}} {2L}\right) -\frac{h\alpha_{m}m}{2}\right] +j\phi_{e}\left( m\right) \right\} ,\\ \ Lm & \leq n\leq Lm+L-1 {} \end{aligned} $$
where \(\widehat {\phi }_{1}\left ( m\right ) \) is the estimate of \(\phi _{1}\left ( m\right ) \) obtained from previous demodulated symbols, and \(\phi _{e}\left ( m\right ) =\phi _{0}+\phi _{1}\left ( m\right ) -\widehat {\phi }_{1}\left ( m\right )\). As shown in Figure 3.8, this sequence and the accompanying noise are applied to a metric generator.
The metric generator comprises q discrete-time symbol-matched filters. Symbol-matched filter k of the metric generator, which is matched to symbol β k , has an impulse response g k,l of length L:
$$ \begin{aligned} g_{k,l}=\exp\left\{j2\pi\left[ \frac{h\beta_{k}(L-1-l)}{2L}\right] \right\} ,\ 0\leq l\leq L-1,\ 1\leq k\leq q. {} \end{aligned} $$
The response of symbol-matched filter k to z n at discrete-time Lm + L − 1, which is denoted by C k (m), is the result of a complex discrete-time convolution:
$$ \begin{aligned} C_{k}(m)=\sum_{n=Lm}^{Lm+L-1}z_{n}g_{k,Lm+L-1-n}^{\ast},\ 1\leq k\leq q {} \end{aligned} $$
which is generated at the symbol rate. We define the function
$$ \begin{aligned} D\left( \theta,L\right) =\left\{ \begin{array} [c]{cc} \frac{\sin\left( \theta L/2\right) }{\sin\left( \theta/2\right) } & 0<\theta<2\pi\\ L & \theta=0 \end{array} \right. {} \end{aligned} $$
which is positive and monotonically decreasing over 0 ≤ θ < 2π/L. We assume that f e and h are small enough that
$$ \begin{aligned} \left\vert h\left( \alpha_{m}-\beta_{k}\right) +2f_{e}T_{s}\right\vert <2,\ 1\leq k\leq q. {} \end{aligned} $$
Substituting (3.107) and (3.108) into (3.109), changing the summation index, evaluating the resulting geometric series, and then using (3.110), we obtain the q selected matched-filter outputs in the absence of noise:
$$ \begin{aligned} C_{k}(m) & =AD\left( \left[ \frac{\pi h\left( \alpha_{m}-\beta _{k}\right) }{L}+\frac{2\pi f_{e}T_{s}}{L}\right] ,L\right) \\ & \times\exp\left\{j\pi\left[ \frac{\left( L-1\right) \left( 2f_{e}T_{s}+h\left( \alpha_{m}-\beta_{k}\right) \right) }{2L}+2f_{e} T_{s}m\right] +j\phi_{e}\left( m\right) \right\} \\ \ 1 & \leq k\leq q. {} \end{aligned} $$
These outputs are generated at the symbol rate.
For noncoherent detection, the q symbol metrics produced at the symbol rate are the magnitudes of the selected matched-filter outputs. Assuming that \(\beta _{k_{0}}\)=α m , the symbol metrics in the absence of noise are
$$ \begin{aligned} \left\vert C_{k}(m)\right\vert =\left\{ \begin{array} [c]{cc} AD\left( \frac{2\pi f_{e}T_{s}}{L},L\right) , & k=k_{0}\\ AD\left( \left[ \frac{\pi h\left( \alpha_{m}-\beta_{k}\right) }{L} +\frac{2\pi f_{e}T_{s}}{L}\right] ,L\right) , & k\neq k_{0} \end{array} \right. {} \end{aligned} $$
where (3.111) ensures that \(D\left ( \cdot ,L\right ) \) is positive and monotonically decreasing. When hard decisions are made, the largest of the \(\left \vert C_{k}(m)\right \vert \) determines the symbol decision. Equation (3.113) indicates that if (3.111) is satisfied, then as h increases, the distinctions among the symbol metrics increase, and hence the symbol metrics become increasingly favorable for either hard-decision or soft-decision decoding. However, the CPFSK bandwidth also increases, so the frequency-hopping system can accommodate fewer frequency channels, which weakens it against multitone jamming and multiple-access interference. This tradeoff is examined in Section  9.4 .
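The chain (3.107) to (3.110) and the closed form (3.113) can be checked with a few lines of code. The sketch below uses illustrative parameters that are assumptions rather than values from the text: binary CPFSK with h = 0.7, L = 8 samples per symbol, a small normalized offset, and an exact phase estimate during symbol interval m = 0.

```python
import numpy as np

def D(theta, L):
    """Dirichlet-style function of (3.110)."""
    return L if theta == 0 else np.sin(theta * L / 2) / np.sin(theta / 2)

h, L = 0.7, 8                         # modulation index, samples per symbol
symbols = [-1.0, 1.0]                 # beta_1, beta_2 for binary CPFSK (q = 2)
f_e_Ts, A, phi_e = 0.01, 1.0, 0.3     # normalized offset, amplitude, residual phase
alpha_m = 1.0                         # transmitted symbol (interval m = 0)

# z_n of (3.107) for m = 0 with an exact phase estimate.
n = np.arange(L)
z = A * np.exp(1j * (np.pi * n * (2 * f_e_Ts + h * alpha_m) / L + phi_e))

# Matched filters (3.108) and their outputs (3.109) at the end of the interval.
C = []
for beta in symbols:
    g = np.exp(1j * np.pi * h * beta * (L - 1 - n) / L)
    C.append(np.sum(z * np.conj(g[::-1])))

# Verify the closed form (3.113) for both symbol metrics.
for beta, Ck in zip(symbols, C):
    theta = (np.pi * h * (alpha_m - beta) + 2 * np.pi * f_e_Ts) / L
    assert np.isclose(abs(Ck), A * abs(D(theta, L)))

# The filter matched to the transmitted symbol yields the largest metric.
assert abs(C[1]) > abs(C[0])
```

The final assertion is the hard-decision rule: the magnitude of the output of the filter matched to the transmitted symbol dominates the mismatched output, whose magnitude is reduced by the D function evaluated away from its peak.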
In the absence of noise, (3.112) indicates that the sequence obtained by taking, at each discrete-time m, the matched-filter output with the largest magnitude is
$$ \begin{aligned} C_{k_{0}}(m)=AD\left( \frac{2\pi f_{e}T_{s}}{L},L\right) \exp\left\{j\pi\left[ \frac{\left( L-1\right) f_{e}T_{s}}{L}+2f_{e}T_{s}m\right] +j\phi_{e}\left( m\right) \right\} \end{aligned} $$
which has a complex exponential with a component that varies linearly with f e m. Therefore, we can estimate the IF offset frequency f e from the DTFT of successive values of this symbol-rate sequence [68]. Iterative decision-directed feedback can then be used to reduce f e so that (3.111) is increasingly likely to be satisfied.
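As a simple alternative to the DTFT approach of [68], the linear phase progression of (3.114) can be exploited directly: the phase increment between successive symbol-rate outputs equals 2π f e T s . The following noise-free sketch, with assumed parameters and the residual phase ϕ e taken as zero, recovers the normalized offset from the average phase increment:

```python
import numpy as np

def D(theta, L):
    """Dirichlet-style function of (3.110)."""
    return L if theta == 0 else np.sin(theta * L / 2) / np.sin(theta / 2)

# Assumed values: normalized offset to be estimated, samples per symbol,
# amplitude, and number of observed symbol intervals.
f_e_Ts, L, A, M = 0.01, 8, 1.0, 16

# Symbol-rate sequence C_{k0}(m) of (3.114) with phi_e = 0 (noise-free).
m = np.arange(M)
Ck0 = A * D(2 * np.pi * f_e_Ts / L, L) * np.exp(
    1j * np.pi * ((L - 1) * f_e_Ts / L + 2 * f_e_Ts * m))

# Each successive output advances in phase by 2*pi*f_e*T_s, so averaging the
# one-lag phase increments yields the normalized offset estimate.
increment = np.angle(np.mean(Ck0[1:] * np.conj(Ck0[:-1])))
f_e_Ts_hat = increment / (2 * np.pi)

assert np.isclose(f_e_Ts_hat, f_e_Ts)
```

In a decision-directed loop, this estimate would be fed back to reduce the residual offset, making (3.111) increasingly likely to be satisfied on subsequent iterations.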

3.6 Partial-Band Interference and Channel Codes

If partial-band interference has power that is uniformly distributed over J frequency channels out of M in the hopping band, then the fraction of the hopping band with interference is
$$ \begin{aligned} \mu=\frac{J}{M}. {} \end{aligned} $$
The interference PSD in each of the interfered channels is It0/2μ, where It0/2 denotes the interference PSD that would exist if the interference power were uniformly distributed over the hopping band. When the frequency-hopping signal uses a carrier frequency that lies within the spectral region occupied by the partial-band interference, this interference is modeled as AWGN, which increases the two-sided noise PSD from N0/2 to N0/2 + It0/2μ. Therefore, for hard-decision decoding, the symbol error probability is
$$ \begin{aligned} P_{s} & =\mu G\left( \frac{\mathbb{E}_{s}}{N_{0}+I_{t0}/\mu}\right) +(1-\mu)G\left( \frac{\mathbb{E}_{s}}{N_{0}}\right) \\ & \approx\mu G\left( \frac{\mu\mathbb{E}_{s}}{I_{t0}}\right) ,I_{t0}>>\mu N_{0} {} \end{aligned} $$
where the conditional symbol error probability is a monotonically decreasing function G(x) that depends on the modulation and fading.
Consider an orthogonal FH/CPFSK system with noncoherent detection and hard decisions. For the AWGN channel, ( 1.93 ) indicates that
$$ \begin{aligned} G(x)=\sum_{i=1}^{q-1}{\frac{(-1)^{i+1}}{i+1}}{\binom{q-1}{i}}\exp\left[ -{\frac{ix}{(i+1)}}\right] \text{ (AWGN)} {} \end{aligned} $$
where q is the alphabet size of the orthogonal CPFSK symbols. For binary orthogonal CPFSK, this equation reduces to
$$ \begin{aligned} G(x)=\frac{1}{2}\exp(-\frac{x}{2}) {} \end{aligned} $$
and (3.116) yields
$$ \begin{aligned} P_{s}\approx\frac{\mu}{2}\exp\left( -\frac{\mu\mathbb{E}_{s}}{2I_{t0} }\right) ,0\leq\mu\leq1. {} \end{aligned} $$
For the fading channel, the symbol energy may be expressed as \(\mathbb {E} _{s}\alpha ^{2}\), where \(\mathbb {E}_{s}\) represents the average energy, and α is a random fading amplitude with E[α2] = 1. For Ricean fading, which is fully discussed in Section  6.2 , the density function of α is
$$ \begin{aligned} f_{\alpha}(r)=2(\kappa+1)r\exp\{-\kappa-(\kappa+1)r^{2}\}I_{0}(\sqrt {\kappa(\kappa+1)}2r)u(r) {} \end{aligned} $$
where κ is the Rice factor, and u(r) is the unit step function. After x is replaced with xα2 in (3.117), an integration over the density function (3.120) and the use of ( 1.92 ) yield
$$ \begin{aligned} G(x) & =\sum_{i=1}^{q-1}(-1)^{i+1}{\binom{q-1}{i}}\frac{\kappa+1} {\kappa+1+(\kappa+1+x)i}\exp\left[ -\frac{\kappa\,x\,i}{\kappa+1+(\kappa +1+x)i}\right] \\ & \text{(Ricean).} {} \end{aligned} $$
For Rayleigh fading and binary orthogonal FH/CPFSK, we set κ = 0 and q = 2 in (3.121) to obtain
$$ \begin{aligned} G(x)=\frac{1}{2+x}\text{ (Rayleigh, binary).} \end{aligned} $$
This equation and (3.116) indicate that the worst-case value of μ in the presence of strong interference is μ0 = 1. Thus, for binary orthogonal FH/CPFSK over the Rayleigh channel, strong interference spread uniformly over the entire hopping band hinders communications more than interference concentrated over part of the band.
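The contrast between the AWGN and Rayleigh channels can be verified numerically. For the exponential form (3.119), differentiating μP s (μ) gives an interior worst-case fraction μ0 = 2I t0/\(\mathbb {E}_{s}\) when \(\mathbb {E}_{s}/I_{t0}\geq 2\); the Rayleigh form (3.122) is monotone in μ, so μ0 = 1 as stated above. The sketch below assumes \(\mathbb {E}_{s}/I_{t0}=10\) and neglects noise:

```python
import numpy as np

# P_s(mu) of (3.116) for binary orthogonal FH/CPFSK with strong interference:
# G from (3.119) on the AWGN channel and from (3.122) on the Rayleigh channel.
Es_It0 = 10.0                         # assumed ratio (10 dB), noise neglected
mu = np.linspace(0.001, 1.0, 1000)

Ps_awgn = (mu / 2) * np.exp(-mu * Es_It0 / 2)      # (3.119) in (3.116)
Ps_rayleigh = mu / (2 + mu * Es_It0)               # (3.122) in (3.116)

# AWGN channel: P_s peaks at the interior point mu_0 = 2*I_t0/E_s = 0.2,
# so partial-band interference is worse than full-band interference.
assert abs(mu[np.argmax(Ps_awgn)] - 2 / Es_It0) < 0.01

# Rayleigh channel: P_s increases monotonically, so full-band interference
# (mu = 1) is the most damaging, as stated in the text.
assert np.all(np.diff(Ps_rayleigh) > 0)
assert np.argmax(Ps_rayleigh) == len(mu) - 1
```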

If a large amount of interference power is received over a small portion of the hopping band, then unless accurate channel-state information is available, soft-decision decoding metrics for the AWGN channel may be ineffective because of the possible dominance of a path or code metric by a single symbol metric (see Section  2.6 on pulsed interference). This dominance is reduced by hard decisions or the use of a practical two- or three-bit quantization of symbol metrics instead of unquantized symbol metrics.

Reed-Solomon Codes

For orthogonal FH/CPFSK with a negligible switching time and a channel code, the relation between the code-symbol duration T s and the information-bit duration T b is T s  = r(log2q)T b , where r is the code rate. Therefore, the energy per channel symbol is
$$ \begin{aligned} \mathbb{E}_{s}=r(\log_{2}q)\mathbb{E}_{b}. {} \end{aligned} $$
The use of a Reed-Solomon code with orthogonal CPFSK is advantageous against partial-band interference for two principal reasons. First, a Reed-Solomon code is maximum-distance separable (Section  1.1 ) and hence accommodates many erasures. Second, the use of nonbinary orthogonal CPFSK symbols to represent code symbols allows a relatively large symbol energy, as indicated by (3.123).
Consider an orthogonal FH/CPFSK system that uses a Reed-Solomon code with no erasures in the presence of partial-band interference and Ricean fading. The demodulator comprises a parallel bank of noncoherent detectors and a device that makes hard decisions. In a frequency-hopping system, symbol interleaving within a hop dwell interval and among different dwell intervals and subsequent deinterleaving in the receiver are used to disperse errors due to the fading or interference. This dispersal facilitates the removal of the errors by the decoder. The frequency channels are assumed to be separated enough and the interleaving sufficient to ensure independent symbol errors. Orthogonal CPFSK modulation implies a q-ary symmetric channel. Therefore, for hard-decision decoding of loosely packed Reed-Solomon codes, ( 1.23 ) and ( 1.95 ) indicate that
$$ \begin{aligned} P_{b}\approx\frac{q}{2(q-1)}\sum_{i=t+1}^{n}{\binom{n-1}{i-1}}P_{s} ^{i}(1-P_{s})^{n-i}. {} \end{aligned} $$
Figure 3.9 shows P b for an FH/CPFSK system with q = 32, \(\mathbb {E} _{b}/I_{t0}=10\) dB, and an extended Reed-Solomon (32,12) code in the presence of Ricean fading. The bit signal-to-noise ratio is \(SNR=\mathbb {E}_{b} /N_{0},\) and the bit signal-to-interference ratio is \(SIR=\mathbb {E} _{b}/I_{t0}.\) Equations (3.116), (3.121), (3.123), and (3.124) are applicable. For κ > 0, the graphs exhibit peaks as the fraction of the band with interference varies. These peaks indicate that the concentration of the interference power over part of the hopping band (perhaps intentionally by a jammer) is more damaging than uniformly distributed interference. The peaks become sharper and occur at smaller values of μ as \(\mathbb {E}_{b}/I_{t0}\) increases. For Rayleigh fading, which corresponds to κ = 0, peaks are absent in the figure, and full-band interference is the most damaging.
Fig. 3.9

Performance of an orthogonal FH/CPFSK system with Reed-Solomon (32,12) code, q = 32, no erasures, SIR = 10 dB, and Ricean factor κ
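A short numerical sketch can reproduce the qualitative behavior of Figure 3.9. The operating point below (\(\mathbb {E}_{b}/I_{t0}=10\) dB, \(\mathbb {E}_{b}/N_{0}=30\) dB, and Rice factors κ = 16 and κ = 0) is an assumption for illustration; the code evaluates (3.116), (3.121), (3.123), and (3.124):

```python
import numpy as np
from math import comb, exp

q, n, k = 32, 32, 12                  # alphabet size and Reed-Solomon (32,12)
t = (n - k) // 2                      # error-correction capability
r = k / n
Eb_It0, Eb_N0 = 10.0, 1000.0          # assumed 10 dB SIR and 30 dB SNR
Es_It0 = r * np.log2(q) * Eb_It0      # (3.123)
Es_N0 = r * np.log2(q) * Eb_N0

def G_ricean(x, kappa):
    """Conditional symbol error probability (3.121)."""
    s = 0.0
    for i in range(1, q):
        den = kappa + 1 + (kappa + 1 + x) * i
        s += (-1) ** (i + 1) * comb(q - 1, i) * (kappa + 1) / den \
             * exp(-kappa * x * i / den)
    return s

def Pb(mu, kappa):
    x1 = 1.0 / (1.0 / Es_N0 + 1.0 / (mu * Es_It0))      # E_s/(N_0 + I_t0/mu)
    Ps = mu * G_ricean(x1, kappa) + (1 - mu) * G_ricean(Es_N0, kappa)  # (3.116)
    return q / (2 * (q - 1)) * sum(comb(n - 1, i - 1) * Ps ** i
                                   * (1 - Ps) ** (n - i)
                                   for i in range(t + 1, n + 1))       # (3.124)

mus = np.linspace(0.01, 1.0, 100)
pb16 = np.array([Pb(mu, 16.0) for mu in mus])   # strong specular component
pb0 = np.array([Pb(mu, 0.0) for mu in mus])     # Rayleigh fading

assert np.all((pb16 >= 0) & (pb16 <= 1)) and np.all((pb0 >= 0) & (pb0 <= 1))
assert 0 < np.argmax(pb16) < len(mus) - 1   # interior peak: partial band worst
assert np.argmax(pb0) == len(mus) - 1       # Rayleigh: full band worst
```

The two final assertions mirror the description of Figure 3.9: a pronounced interior peak in P b for large κ, and monotone growth toward μ = 1 for Rayleigh fading.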

Much better performance against partial-band interference can be obtained by inserting erasures (Section  1.1 ) among the demodulator output symbols before the symbol deinterleaving and hard-decision decoding. The decision to erase is made independently for each code symbol. It is based on channel-state information, which indicates the codeword symbols that have a high probability of being incorrectly demodulated. The channel-state information must be reliable so that only degraded symbols are erased.

Channel-state information may be obtained from N t known pilot symbols that are transmitted along with the data symbols in each dwell interval of a frequency-hopping signal. A hit is said to occur in a dwell interval if the signal encounters partial-band interference during the interval. If δ or more of the N t pilot symbols are incorrectly demodulated, then the receiver decides that a hit has occurred, and all N symbols in the same dwell interval are erased. Only one symbol of a codeword is erased if the interleaving ensures that only a single symbol of the codeword is in any particular dwell interval. Pilot symbols decrease the information rate, but this loss is negligible if N t  << N, which is assumed henceforth.

The probability of the erasure of a code symbol is
$$ \begin{aligned} P_{\epsilon}=\mu P_{\epsilon1}+(1-\mu)P_{\epsilon0} {} \end{aligned} $$
where P𝜖1 is the erasure probability given that a hit occurred, and P𝜖0 is the erasure probability given that no hit occurred. If δ or more errors among the N t known pilot symbols cause an erasure, then
$$ \begin{aligned} P_{\epsilon i}=\sum_{j=\delta}^{N_{t}}\binom{N_{t}}{j}P_{si}^{j} (1-P_{si})^{N_{t}-j}\;,\;\;\;i=0,1 {} \end{aligned} $$
where Ps1 is the conditional channel-symbol error probability given that a hit occurred, and Ps0 is the conditional channel-symbol error probability given that no hit occurred.
A codeword symbol error can only occur if there is no erasure. Since pilot and codeword symbol errors are statistically independent when the partial-band interference is modeled as a white Gaussian process, the probability of a codeword symbol error is
$$ \begin{aligned} P_{s}=\mu(1-P_{\epsilon1})P_{s1}+(1-\mu)(1-P_{\epsilon0})P_{s0} {} \end{aligned} $$
and the conditional channel-symbol error probabilities are
$$ \begin{aligned} P_{s1}=G\left( \frac{\mathbb{E}_{s}}{N_{0}+I_{t0}/\mu}\right) \;,\;\;\;P_{s0}=G\left( \frac{\mathbb{E}_{s}}{N_{0}}\right) {} \end{aligned} $$
where G(x) depends on the modulation and fading.
The word error probability for errors-and-erasures decoding is upper-bounded in ( 1.24 ). Since most word errors result from decoding failures, it is reasonable to assume that P b  ≈ P w /2. Therefore, the information-bit error probability is given by
$$ \begin{aligned} P_{b}\approx\frac{1}{2}\sum_{j=0}^{n}\sum_{i=i_{0}}^{n-j}{\binom{n}{j}} {\binom{n-j}{i}}P_{s}^{i}P_{\epsilon}^{j}(1-P_{s}-P_{\epsilon})^{n-i-j} {} \end{aligned} $$
where \(i_{0}=\max (0,\lceil (d_{m}-j)/2\rceil )\) and ⌈x⌉ denotes the smallest integer greater than or equal to x.
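Equations (3.125) to (3.129) chain together as follows. The sketch below uses the code and erasure parameters of Figure 3.10 (q = 32, Reed-Solomon (32,12), N t  = 2, δ = 1) with an assumed operating point of \(\mathbb {E}_{b}/N_{0}=20\) dB and \(\mathbb {E}_{b}/I_{t0}=10\) dB:

```python
import numpy as np
from math import comb, exp, ceil

q, n, k, Nt, delta = 32, 32, 12, 2, 1
dm = n - k + 1                          # minimum distance of the MDS code
r = k / n
Eb_N0, Eb_It0 = 100.0, 10.0             # assumed 20 dB and 10 dB
Es_N0 = r * np.log2(q) * Eb_N0          # (3.123)
Es_It0 = r * np.log2(q) * Eb_It0

def G(x):
    """(3.117): symbol error probability of q-ary orthogonal noncoherent CPFSK."""
    return sum((-1) ** (i + 1) / (i + 1) * comb(q - 1, i) * exp(-i * x / (i + 1))
               for i in range(1, q))

def P_erasure(P):
    """(3.126): probability of delta or more errors among the Nt pilot symbols."""
    return sum(comb(Nt, j) * P ** j * (1 - P) ** (Nt - j)
               for j in range(delta, Nt + 1))

def Pb(mu):
    x1 = 1.0 / (1.0 / Es_N0 + 1.0 / (mu * Es_It0))       # E_s/(N_0 + I_t0/mu)
    Ps1, Ps0 = G(x1), G(Es_N0)                           # (3.128)
    Pe1, Pe0 = P_erasure(Ps1), P_erasure(Ps0)
    Pe = mu * Pe1 + (1 - mu) * Pe0                       # (3.125)
    Ps = mu * (1 - Pe1) * Ps1 + (1 - mu) * (1 - Pe0) * Ps0   # (3.127)
    total = 0.0                                          # (3.129)
    for j in range(n + 1):
        for i in range(max(0, ceil((dm - j) / 2)), n - j + 1):
            total += comb(n, j) * comb(n - j, i) \
                     * Ps ** i * Pe ** j * (1 - Ps - Pe) ** (n - i - j)
    return total / 2

pb = np.array([Pb(mu) for mu in np.linspace(0.01, 1.0, 50)])
assert np.all((pb >= 0) & (pb <= 1))
```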
Figures 3.10 and 3.11 plot the bit error probability P b given by (3.125) to (3.129) for an FH/CPFSK system with orthogonal CPFSK and errors-and-erasures decoding. In Figure 3.10, the FH/CPFSK system transmits over the AWGN channel and uses q = 32, an extended Reed-Solomon (32,12) code, N t  = 2, and δ = 1. A comparison of this figure with the κ = ∞ graphs of Figure 3.9 indicates that when \(\mathbb {E} _{b}/N_{0} = 20\) dB, erasures provide nearly a 7 dB improvement in the required \(\mathbb {E}_{b}/I_{t0}\) for P b  = 10−5. The erasures also confer strong protection against partial-band interference that is concentrated in less than 20% of the hopping band.
Fig. 3.10

Performance of orthogonal FH/CPFSK system over the AWGN channel with Reed-Solomon (32,12) code, q = 32, N t  = 2, and δ = 1

Fig. 3.11

Performance of orthogonal FH/CPFSK system over the AWGN channel with Reed-Solomon (8,3) code, q = 8, N t  = 4, and δ = 1

There are other options for generating channel-state information in addition to demodulating pilot symbols. A radiometer (Section  10.2 ) may be used to measure the energy in the current frequency channel, a future channel, or an adjacent channel. Erasures are inserted if the energy is inordinately large. This method does not have the overhead cost in information rate that is associated with the use of pilot symbols. Other methods include attaching a parity-check bit to each code symbol representing multiple bits to check whether the symbol was correctly received, or using the soft information provided by the inner decoder of a concatenated code.

Consider the receiver for noncoherent detection of orthogonal CPFSK signals shown in Figure 3.5 (b). The envelope-detector outputs provide the symbol metrics used in several low-complexity schemes for erasure insertion [4]. The output threshold test (OTT) compares the largest symbol metric with a threshold to determine whether the corresponding demodulated symbol should be erased. The ratio threshold test (RTT) computes the ratio of the largest symbol metric to the second largest one. This ratio is then compared with a threshold to determine an erasure. If the values of both \(\mathbb {E}_{b}/N_{0}\) and \(\mathbb {E}_{b}/I_{t0}\) are known, then optimal thresholds for the OTT, the RTT, or a hybrid method can be calculated. It is found that the OTT is resilient against fading and tends to outperform the RTT when \(\mathbb {E}_{b}/I_{t0}\) is sufficiently low, but the opposite is true when \(\mathbb {E}_{b}/I_{t0}\) is sufficiently high. The main disadvantage of the OTT and the RTT relative to the pilot-symbol method is the need to estimate \(\mathbb {E}_{b}/N_{0}\) and either \(\mathbb {E}_{b}/I_{t0}\) or \(\mathbb {E}_{b}/(N_{0}+I_{t0})\). The joint maximum-output ratio threshold test (MO-RTT) uses both the maximum and the second largest of the symbol metrics. It is robust against both fading and partial-band interference.

Proposed erasure methods are based on the use of orthogonal CPFSK symbols, and their performances against partial-band interference improve as the alphabet size q increases. For a fixed hopping band, the number of frequency channels decreases as q increases, thereby making an FH/CPFSK system more vulnerable to multitone jamming or multiple-access interference (Chapter  7 ).

Figure 3.11 depicts P b for an FH/CPFSK system over the AWGN channel with q = 8, an extended Reed-Solomon (8,3) code, N t  = 4, and δ = 1. A comparison of Figures 3.11 and 3.10 indicates that reducing the alphabet size while preserving the code rate has increased the system sensitivity to \(\mathbb {E}_{b}/N_{0}\), increased the susceptibility to interference concentrated in a small fraction of the hopping band, and raised the required \(\mathbb {E}_{b}/I_{t0}\) for a specified P b by 5 to 9 dB.

Another approach is to represent each nonbinary code symbol as a sequence of log2q consecutive binary channel symbols. Then, an FH/MSK or FH/DPSK system can be implemented to provide a large number of frequency channels and hence better protection against multiple-access interference. Equations (3.125), (3.126), (3.128), and (3.129) are applicable. However, since a code-symbol error occurs if any of its \(\log _{2}q\) component channel symbols is incorrect, (3.127) is replaced by
$$ \begin{aligned} P_{s}=1-[1-\mu(1-P_{\epsilon1})P_{s1}-(1-\mu)(1-P_{\epsilon0})P_{s0} ]^{\log_{2}q}. {} \end{aligned} $$
For the AWGN channel, (3.118) applies to MSK, whereas
$$ \begin{aligned} G(x)=\frac{1}{2}\exp(-x) {} \end{aligned} $$
applies to DPSK. The results for an FH/DPSK system with an extended Reed-Solomon (32,12) code, N t  = 10 binary pilot symbols, and δ = 1 are shown in Figure 3.12. We assume that N >> 10 so that the loss due to the reference symbol in each dwell interval is negligible. The graphs in Figure 3.12 are similar in form to those of Figure 3.10, but the transmission of binary rather than nonbinary symbols has caused an approximately 10 dB increase in the required \(\mathbb {E}_{b}/I_{t0}\) for a specified P b . Figure 3.12 is applicable to orthogonal CPFSK and MSK if \(\mathbb {E}_{b}/I_{t0}\) and \(\mathbb {E}_{b}/N_{0}\) are both increased by 3 dB.
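The composite symbol error probability (3.130) can be contrasted with its per-bit counterpart in a few lines. The sketch below disables erasures (P 𝜖1 = P 𝜖0 = 0) and uses hypothetical energy ratios with the DPSK function (3.131):

```python
from math import exp, log2

q = 32                                   # bits per code symbol: log2(q) = 5
mu, Es_N0, Es_It0 = 0.2, 50.0, 5.0       # assumed per-binary-symbol ratios

def G_dpsk(x):
    """(3.131): conditional bit error probability for DPSK on AWGN."""
    return 0.5 * exp(-x)

# Error probability of one binary channel symbol, per (3.116)/(3.128).
x1 = 1.0 / (1.0 / Es_N0 + 1.0 / (mu * Es_It0))
p_bit = mu * G_dpsk(x1) + (1 - mu) * G_dpsk(Es_N0)

# (3.130) with erasures disabled: a q-ary code symbol is wrong if any of
# its log2(q) component binary symbols is wrong.
Ps = 1 - (1 - p_bit) ** log2(q)

assert 0 < p_bit < Ps < 1                # symbol error exceeds bit error
```

The inequality makes the text's point concrete: representing each code symbol by several binary channel symbols multiplies the exposure of that symbol to errors, which is one reason binary transmission raises the required \(\mathbb {E}_{b}/I_{t0}\).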
Fig. 3.12

Performance of the FH/DPSK system over the AWGN channel with Reed-Solomon (32,12) code, binary channel symbols, N t  = 10, and δ = 1

An alternative to erasures that uses binary channel symbols is an FH/DPSK system with concatenated coding (Section  1.5 ). Consider a concatenated code comprising a Reed-Solomon (n, k) outer code, a binary convolutional inner code, and a channel interleaver to ensure independent channel-symbol errors. After demodulation and deinterleaving in the receiver, the inner Viterbi decoder performs hard-decision decoding to limit the impact of individual symbol metrics. For the AWGN channel, the symbol error probability is given by (3.119) and (3.131). The probability of a Reed-Solomon symbol error, Ps1, at the output of the Viterbi decoder is upper-bounded by ( 1.147 ) and ( 1.118 ), and ( 1.148 ) then provides an upper bound on P b . Figure 3.13 depicts this bound for an outer Reed-Solomon (31,21) code and an inner rate-1/2, K = 7 convolutional code. This concatenated code provides a better performance than the Reed-Solomon (32,12) code with binary channel symbols, but a much worse performance than the latter code with nonbinary channel symbols. Figures 3.10 through 3.13 indicate that a reduction in the alphabet size for channel symbols increases the system susceptibility to partial-band interference. The primary reasons are the reduced energy per channel symbol and the abandonment of nonbinary orthogonal CPFSK signaling.
Fig. 3.13

Performance of FH/DPSK system over the AWGN channel with concatenated code, binary channel symbols, and hard decisions. Inner code is convolutional (rate = 1/2, K = 7) code, and outer code is Reed-Solomon (31,21) code

Trellis-Coded Modulation

Trellis-coded modulation (Section  1.3 ) is a combined coding and modulation method that is usually applied to coherent digital communications over bandlimited channels. Multilevel and multiphase modulations are used to enlarge the signal constellation while not expanding the bandwidth beyond what is required for the uncoded signals. Since the signal constellation is more compact, there is some modulation loss that detracts from the coding gain, but the overall gain can be substantial. Since a noncoherent demodulator is usually required for frequency-hopping communications, the usual coherent trellis-coded modulations are not suitable. Instead, the trellis coding may be implemented by expanding the signal set for q/2-ary CPFSK to q-ary CPFSK. Although the frequency tones are uniformly spaced, they can be nonorthogonal to limit or avoid bandwidth expansion.

Trellis-coded 4-ary CPFSK is illustrated in Figure 3.14 for a system that uses a four-state, rate-1/2 convolutional code followed by a symbol mapper. The signal set partitioning, shown in Figure 3.14 (a), partitions the set of four signals or tones into two subsets, each with two tones. The partitioning doubles the frequency separation between tones from Δ Hz to 2Δ Hz. The mapping of the code bits produced by the convolutional encoder into signals is indicated. In Figure 3.14 (b), the numerical labels denote the signal assignments associated with the state transitions in the trellis for a four-state encoder. The bandwidth of the frequency channel that accommodates the four tones is B ≈ 4 Δ.
Fig. 3.14

Rate-1/2, four-state trellis-coded 4-ary FSK: (a) signal set partitioning and mapping of bits to signals, and (b) mapping of signals to state transitions

There is a tradeoff in the choice of Δ because a small Δ allows more frequency channels and thereby limits the effect of multiple-access interference or multitone jamming, whereas a large Δ tends to improve the system performance against partial-band interference. If a trellis code uses four orthogonal tones with spacing Δ = 1/T b , where T b is the bit duration, then B ≈ 4/T b . The same bandwidth results when an FH/CPFSK system uses two orthogonal tones, a rate-1/2 code, and binary channel symbols because B ≈ 2/T s  = 4/T b . The same bandwidth also results when a rate-1/2 binary convolutional code is used and each pair of code symbols is mapped into a 4-ary channel symbol. The performance of the four-state, trellis-coded, rate-1/2, 4-ary CPFSK frequency-hopping system [118] indicates that it is not as strong against worst-case partial-band interference as an FH/CPFSK system with a rate-1/2 convolutional code and 4-ary channel symbols or an FH/CPFSK system with a Reed-Solomon (32,16) code and errors-and-erasures decoding. Thus, trellis-coded modulation is relatively weak against partial-band interference. The advantage of trellis-coded modulation in a frequency-hopping system is its relatively low implementation complexity.

Turbo and LDPC Codes

Turbo and LDPC codes (Chapter  1 ) are potentially the most effective codes for suppressing partial-band interference if the system latency and computational complexity of these codes are acceptable. A turbo-coded frequency-hopping system that uses spectrally compact channel symbols also resists multiple-access interference. Accurate estimates of channel parameters, such as the variance of the interference and noise and the fading amplitude, are needed in the iterative decoding algorithms. When the channel dynamics are slower than the hop rate, all the received symbols of a dwell interval may be used in estimating the channel parameters associated with that dwell interval. After each iteration by a component decoder, its log-likelihood ratios are updated and the extrinsic information is transferred to the other component decoder. A channel estimator can convert a log-likelihood ratio transferred after a decoder iteration into a posteriori probabilities that can be used to improve the estimates of the fading attenuation and the noise variance for each dwell interval (Section  9.4 ). The operation of a receiver with iterative LDPC decoding and channel estimation is similar.

Known symbols may be inserted into the transmitted code symbols to facilitate the estimation, but the energy per information bit is reduced. Increasing the number of symbols per hop improves the potential estimation accuracy. However, since the reduction in the number of independent hops per information block of fixed size decreases the diversity, and hence the independence of errors, there is an upper limit on the number of symbols per hop beyond which a performance degradation occurs.

A turbo code can still provide a fairly good performance against partial-band interference, even if only the presence or absence of strong interference during each dwell interval is detected. Channel-state information about the occurrence of interference during a dwell interval can be obtained by hard-decision decoding of the output of one of the component decoders of a turbo code with parallel concatenated codes or the inner decoder of a serially concatenated turbo code. The metric for determining the occurrence of interference is the Hamming distance between the binary sequence resulting from the hard decisions and the codewords obtained by bounded-distance decoding.
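The Hamming-distance test described above can be sketched with a toy code. The snippet below is an illustration, not the text's implementation: a length-7 binary repetition code stands in for the component decoder's code, and a hit is declared when the distance from the hard-decision sequence to the nearest codeword exceeds a threshold:

```python
import numpy as np

def detect_hit(hard_bits, threshold=2):
    """Declare interference if the Hamming distance from the hard decisions to
    the nearest length-7 repetition codeword (all-zeros or all-ones) exceeds
    the threshold, which is the bounded-distance idea in miniature."""
    ones = int(np.sum(hard_bits))
    d = min(ones, hard_bits.size - ones)     # distance to nearest codeword
    return d > threshold

clean = np.array([1, 0, 0, 0, 0, 0, 0])    # dwell with one channel error
jammed = np.array([1, 0, 1, 1, 0, 0, 1])   # dwell with heavy errors (a hit)

assert not detect_hit(clean)               # small distance: no hit declared
assert detect_hit(jammed)                  # large distance: hit declared
```

In an actual turbo-coded receiver, the codeword would come from bounded-distance decoding of a component decoder's hard decisions, and a declared hit would downweight or erase the metrics of that dwell interval.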

3.7 Hybrid Systems

Frequency-hopping systems reject interference by avoiding it, whereas direct-sequence systems reject interference by spreading and then filtering it. Channel codes are more essential for frequency-hopping systems than for direct-sequence systems because partial-band interference is a more pervasive threat than high-power pulsed interference. When frequency-hopping and direct-sequence systems are constrained to use the same fixed bandwidth, direct-sequence systems have an inherent advantage because they can use coherent BPSK rather than a noncoherent modulation. Coherent BPSK has an approximately 4 dB advantage relative to noncoherent MSK over the AWGN channel and an even larger advantage over fading channels. However, the potential performance advantage of direct-sequence systems is often illusory for practical reasons. A major advantage of frequency-hopping systems relative to direct-sequence systems is that it is possible to hop into noncontiguous frequency channels over a much wider band than can be occupied by a direct-sequence signal. This advantage more than compensates for the relatively inefficient noncoherent demodulation that is usually required for frequency-hopping systems. Other major advantages of frequency hopping are the possibility of excluding frequency channels with steady or frequent interference, the reduced susceptibility to the near-far problem (Section  7.7 ), and the relatively rapid acquisition of the frequency-hopping pattern (Section  4.7 ). A disadvantage of frequency hopping is that it is not amenable to transform-domain or nonlinear adaptive filtering (Section  5.3 ) to reject narrowband interference within a frequency channel. In practical systems, the dwell time is too short for adaptive filtering to have a significant effect.

A hybrid frequency-hopping direct-sequence (FH/DS) system is a frequency-hopping system that uses direct-sequence spreading during each dwell interval or, equivalently, a direct-sequence system in which the carrier frequency changes periodically. In the transmitter of the hybrid FH/DS system of Figure 3.15, a single code generator controls both the spreading sequence and the hopping pattern. The spreading sequence is added modulo-2 to the data sequence. Hops occur periodically after a fixed number of sequence chips. Because of the phase changes due to the frequency hopping, a noncoherent modulation such as CSK or DPSK is usually required unless the hop rate is very low. In the noncoherent receiver, the frequency hopping and the spreading sequence of the FH/DS signal are removed in succession to produce the inputs to the metric generator. The metric generators for CSK and DPSK are shown in Figures 2.30 and 2.31, respectively.
Fig. 3.15

Hybrid frequency-hopping direct-sequence system: (a) transmitter and (b) receiver. BF bandpass filter, QD quadrature downconverter, CMFs chip-matched filters, ADCs analog-to-digital converters, SSG spreading-sequence generator

Serial-search acquisition occurs in two stages. The first stage aligns the hopping patterns; the second stage, which searches over the unknown timing of the spreading sequence, completes acquisition rapidly because the first stage has reduced the timing uncertainty to a fraction of a hop duration.

In principle, the receiver of a hybrid FH/DS system curtails partial-band interference by both dehopping and despreading, but diminishing returns are encountered. The hopping of the FH/DS signal allows avoidance of the interference spectrum part of the time. When the system hops into a frequency channel with interference, the interference is spread and filtered by the hybrid receiver. However, during a hop interval, interference that would be avoided by an ordinary frequency-hopping receiver is passed by the bandpass filter of a hybrid receiver because the bandwidth must be large enough to accommodate the direct-sequence signal that remains after the dehopping. This large bandwidth also limits the number of available frequency channels, which increases the susceptibility to narrowband interference and the near-far problem. Thus, hybrid FH/DS systems are seldom used, except perhaps in specialized military applications, because the additional direct-sequence spreading weakens the major strengths of frequency hopping.

3.8 Frequency Synthesizers

A frequency synthesizer [77, 78, 79, 86] converts a standard reference frequency into a different desired frequency. In a frequency-hopping system, each hopset frequency must be synthesized. In practical applications, a hopset frequency has the form
$$ \begin{aligned} f_{hi}=f_{c}+r_{i}f_{r}\text{ , }\ \ i=1,2,\ldots,M \end{aligned} $$
where the \(\{r_{i}\}\) are rational numbers, \(f_{r}\) is the reference frequency, and \(f_{c}\) is a frequency in the spectral band of the hopset. The reference signal, which is a tone at the reference frequency, is usually generated by dividing or multiplying by a positive integer the frequency of the tone produced by a stable source, such as an atomic or crystal oscillator. The use of a single reference signal ensures that any output frequency of the synthesizer has the same stability and accuracy as the reference. The three fundamental types of frequency synthesizers are the direct, digital, and indirect synthesizers. Most practical synthesizers are hybrids of these fundamental types.

Direct Frequency Synthesizer

A direct frequency synthesizer uses frequency multipliers and dividers, mixers, bandpass filters, and electronic switches to produce signals with the desired frequencies. Direct frequency synthesizers provide both very fine resolution and high frequencies, but often require a very large amount of hardware and do not provide a phase-continuous output after frequency changes. Although a direct synthesizer can be realized with programmable dividers and multipliers, the standard approach is to use the double-mix-divide (DMD) module illustrated in Figure 3.16. The reference signal at frequency \(f_{r}\) is mixed with a tone at the fixed frequency \(f_{a}\). The bandpass filter selects the sum frequency \(f_{r}+f_{a}\) produced by the mixer. A second mixing and filtering operation with a tone at \(f_{b}+f_{1}\) produces the sum frequency \(f_{r}+f_{a}+f_{b}+f_{1}\). If \(f_{b}\) is a fixed frequency such that
$$ \begin{aligned} f_{b}=9f_{r}-f_{a} {} \end{aligned} $$
then the second bandpass filter produces the frequency \(10f_{r}+f_{1}\). The divider, which reduces the frequency of its input by a factor of 10, produces the output frequency \(f_{r}+f_{1}/10\). In principle, a single mixer and bandpass filter could produce this output frequency, but using two mixers and bandpass filters simplifies the filters. Each bandpass filter must select the sum frequency while suppressing the difference frequency and the mixer input frequencies, which may enter the filter because of mixer leakage. A bandpass filter becomes prohibitively complex and expensive as the sum frequency approaches one of these other frequencies.
Fig. 3.16

Double-mix-divide module. BPF bandpass filter

Arbitrary frequency resolution is achievable by cascading enough double-mix-divide modules. Figure 3.17 illustrates a direct frequency synthesizer that provides two-digit resolution. When the synthesizer is used in a frequency-hopping system, the control bits are generated by the pattern generator, which determines the frequency-hopping pattern. Each set of control bits selects a single tone that the decade switch passes to a double-mix-divide module. The ten tones that are available to the decade switches may be produced by applying the reference frequency to appropriate frequency multipliers and dividers in the tone generator. Equation (3.133) ensures that the output frequency of the second bandpass filter in double-mix-divide module 2 is \(10f_{r}+f_{2}+f_{1}/10\). Thus, the final synthesizer output frequency is \(f_{r}+f_{2}/10+f_{1}/100\).
Fig. 3.17

Direct frequency synthesizer with two-digit resolution. DMD double-mix-divide

Example 3.1

It is desired to produce a 1.79 MHz tone. Let \(f_{r}=1\) MHz and \(f_{b}=5\) MHz. The ten tones provided to the decade switches are 5, 6, 7, …, 14 MHz so that \(f_{1}\) and \(f_{2}\) can range from 0 to 9 MHz. Equation (3.133) yields \(f_{a}=4\) MHz. If \(f_{1}=9\) MHz and \(f_{2}=7\) MHz, then the output frequency is \(f_{r}+f_{2}/10+f_{1}/100=1.79\) MHz. The frequencies \(f_{a}\) and \(f_{b}\) are such that the designs of the bandpass filters inside the modules are reasonably simple. \(\Box \)
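The cascade arithmetic of Figure 3.17 is easy to check numerically. The following sketch (the function name is illustrative, not from the text) models each double-mix-divide module under an ideal-filter assumption; note that an output of \(f_{r}+f_{2}/10+f_{1}/100=1.79\) MHz requires the digit tones corresponding to \(f_{1}=9\) MHz and \(f_{2}=7\) MHz.

```python
def dmd_module(f_in, f_a, f_b, f_digit):
    """One double-mix-divide stage (ideal filters assumed): the input is
    upconverted by f_a, then by f_b + f_digit, and divided by 10."""
    after_first_bpf = f_in + f_a                         # sum frequency selected
    after_second_bpf = after_first_bpf + f_b + f_digit   # second sum frequency
    return after_second_bpf / 10.0                       # fixed divide-by-10

# Example 3.1 values: f_r = 1 MHz, f_b = 5 MHz, so (3.133) gives f_a = 4 MHz
f_r, f_a, f_b = 1.0, 4.0, 5.0
out1 = dmd_module(f_r, f_a, f_b, f_digit=9.0)   # module 1: f_r + f1/10
out2 = dmd_module(out1, f_a, f_b, f_digit=7.0)  # module 2: f_r + f2/10 + f1/100
print(round(out2, 6))  # 1.79
```

Reversing the two digit tones would instead produce 1.97 MHz, which illustrates that the tone applied to the first module sets the less significant digit.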

Direct Digital Synthesizer

A direct digital synthesizer converts the stored sample values of a sinusoidal wave into a continuous-time sinusoidal wave with a specified frequency. The periodic and symmetric character of a sine wave implies that only sample values for the first quadrant of the sine wave need to be stored. Figure 3.18 illustrates the principal components of a digital frequency synthesizer. The reference signal is a sinusoidal signal with reference frequency \(f_{r}\). The sine table stores N values of sin θ for \(\theta =0,2\pi /N,\ldots ,2\pi \left ( N-1\right ) /N.\) Frequency data, which is produced by the pattern control bits in a frequency-hopping system, determines the synthesized frequency by specifying a phase increment 2πk/N, where k is an integer. The phase accumulator, which is a discrete-time integrator, converts the phase increment into successive samples of the phase by adding the increment to the content of an internal register after every cycle of the reference signal, that is, at the rate \(f_{r}\). A phase sample \(\theta \left ( n\right ) =n\left ( 2\pi k/N\right )\), where n denotes the reference-signal cycle, defines an address in the sine table or memory in which the values of \(\sin \left ( \theta \left ( n\right ) \right ) \) are stored. Each value is applied to a digital-to-analog converter (DAC), which performs a sample-and-hold operation at a sampling rate equal to the reference frequency \(f_{r}\). The DAC output is applied to an anti-aliasing lowpass filter, the output of which is the desired analog signal.
Fig. 3.18

Digital frequency synthesizer. DAC digital-to-analog converter
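The accumulator-and-table mechanism can be sketched in a few lines. This is a minimal model with illustrative names, assuming an ideal full-length sine table (no phase truncation and no DAC quantization); it shows that a phase increment of k produces the frequency \(kf_{r}/N\).

```python
import math

def dds_samples(k, N, num_samples):
    """Minimal direct digital synthesizer model: the phase accumulator adds
    the increment k modulo N once per reference cycle, and each phase index
    addresses an ideal sine table of N entries."""
    sine_table = [math.sin(2 * math.pi * i / N) for i in range(N)]
    acc = 0                      # phase accumulator contents
    out = []
    for _ in range(num_samples):
        out.append(sine_table[acc])
        acc = (acc + k) % N      # modulo-N phase accumulation
    return out

# k = 5 with N = 100 synthesizes f0 = 5*f_r/100, i.e., 20 samples per period
samples = dds_samples(k=5, N=100, num_samples=40)
```

With k = 5 and N = 100, the output repeats every 20 samples, consistent with \(f_{0k}=kf_{\min }\).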

Let ν denote the number of bits used to represent the \(N\leq 2^{\nu }\) possible values of the phase accumulator contents. Since the contents are updated at the rate \(f_{r}\), the longest period of distinct phase samples before repetition is \(N/f_{r}\). Therefore, since the sample values of \(\sin \left ( \theta \left ( n\right ) \right ) \) are applied to the DAC at the rate \(f_{r}\), the smallest frequency that can be generated by the direct digital synthesizer is
$$ \begin{aligned} f_{\min}=\frac{f_{r}}{N} {} \end{aligned} $$
and the corresponding phase sample is \(\theta \left ( n\right ) =2\pi n/N.\) The output frequency \(f_{0k}\) is produced when every kth stored sample value of the phase θ is applied to the DAC at the rate \(f_{r}\). Thus, if the phase advances by \(2\pi k/N\) after every cycle of the reference signal so that \(\theta \left ( n\right ) =2\pi kn/N\), then
$$ \begin{aligned} f_{0k}=kf_{\min},\ 1\leq k\leq N \end{aligned} $$
which implies that \(f_{\min }\) is the frequency resolution.
The maximum frequency \(f_{\max }\) that can be generated is produced by using only a few samples of sin θ per period. From the Nyquist sampling theorem, it is known that \(f_{\max }<f_{r}/2\) is required to avoid aliasing. Practical DAC and lowpass-filter requirements further limit \(f_{\max }\) to approximately \(0.4f_{r}\) or less. Thus, \(q\geq 2.5\) samples of sin θ per period are used in synthesizing \(f_{\max }\), and
$$ \begin{aligned} f_{\max}=\frac{f_{r}}{q}. {} \end{aligned} $$
The lowpass filter may be implemented with a linear phase across a flat passband extending slightly above \(f_{\max }\). The frequencies f r and \(f_{\max }\) are limited by the speed of the DAC.
Suppose that \(f_{\min }\) and \(f_{\max }\) are specified minimum and maximum frequencies that must be produced by a synthesizer. Equations (3.134) and (3.136) imply that \(qf_{\max }/f_{\min }=N\leq 2^{\nu },\) and the required number of accumulator bits is
$$ \begin{aligned} \nu=\lfloor\log_{2}(qf_{\max}/f_{\min})\rfloor+1. {} \end{aligned} $$
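Equations (3.134)-(3.137) translate directly into a small sizing calculation. The sketch below (the function name is illustrative) applies the numbers used in Example 3.2:

```python
import math

def accumulator_bits(f_min, f_max, q=2.5):
    """Phase-accumulator width from (3.137):
    nu = floor(log2(q*f_max/f_min)) + 1,
    which follows from q*f_max/f_min = N <= 2**nu."""
    return math.floor(math.log2(q * f_max / f_min)) + 1

# Covering 1 kHz to 1 MHz for any q between 2.5 and 4
print(accumulator_bits(1e3, 1e6, q=2.5))  # 12
print(accumulator_bits(1e3, 1e6, q=4.0))  # 12
```

Both endpoints of the practical range of q give the same 12-bit accumulator, as found in Example 3.2.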

The sine table stores \(2^{n}\) words, each comprising m bits, and hence has a memory requirement of \(2^{n}m\) bits. The memory requirements of a sine table can be reduced by using trigonometric identities and hardware multipliers. Each stored word represents one possible value of sin θ in the first quadrant or, equivalently, one possible magnitude of sin θ. The input to the sine table comprises n + 2 parallel bits. The two most significant bits are the sign bit and the quadrant bit. The sign bit specifies the polarity of sin θ. The quadrant bit specifies whether θ is in the first or third quadrants or in the second or fourth quadrants. The n least significant bits of the input determine the address in which the magnitude of sin θ is stored. The address specified by the n least significant bits is appropriately modified by the quadrant bit when θ is in the second or fourth quadrants. The sign bit along with the m output bits of the sine table are applied to the DAC. The maximum number of table addresses that the phase accumulator can specify is \(2^{\nu }\), but if ν input lines were applied to the sine table, the memory requirement would generally be impractically large. Since n + 2 bits are needed to address the sine table, the ν − n − 2 least significant bits in the accumulator contents are not used to address the table.

If n is decreased, the memory requirement of the sine table is reduced, but the truncation of the potential accumulator output from ν to n + 2 bits causes the table output to remain constant for as many as \(2^{\nu -n-2}\) successive samples of sin θ. Since the desired phase \(\theta \left ( n\right ) \) is only approximated by \(\widehat {\theta }\left ( n\right )\), the input to the sine table, there are spurious spectral lines in the spectrum of the table output. The phase error, \(\theta _{e}\left ( n\right ) =\theta \left ( n\right ) -\widehat {\theta }\left ( n\right )\), is a periodic ramp with a discrete spectrum. For \(\theta _{e}\left ( n\right ) \ll 1,\) a trigonometric expansion indicates that
$$ \begin{aligned} \sin\left( \theta\left( n\right) \right) \approx\sin[\widehat{\theta }\left( n\right) ]+\theta_{e}\left( n\right) \cos[\widehat{\theta}\left( n\right) ]. {} \end{aligned} $$
The amplitudes of the spurious spectral lines are determined by the coefficients of the Fourier series of \(\theta _{e}\left ( n\right )\), and the largest amplitude is \(2^{-\left ( n+2\right ) }\). The power in the largest of the spectral lines, which is often the limiting factor in applications, is
$$ \begin{aligned} E_{s}=(2^{-\left( n+2\right) })^{2}=(-6n-12)\;dB. {} \end{aligned} $$
There are several methods, each entailing an additional implementation cost, that can reduce the peak amplitudes of the spurious spectral lines. The error feedforward method uses an estimate of \(\theta _{e}\left ( n\right ) \) to compute the right-hand side of (3.138), which then provides a more accurate estimate of the desired \(\sin \left ( \theta \left ( n\right ) \right )\). The computational requirements are a multiplication and an addition with m bits of precision at the sample rate. Other methods of reducing the peak amplitudes are based on dithering and error feedback.
Since m output bits of the sine table specify the magnitude of sin θ, the quantization error produces the worst-case amplitude-quantization noise power
$$ \begin{aligned} E_{q}=(2^{-m})^{2}\cong-6m\;dB. {} \end{aligned} $$
Quantization noise increases the amplitudes of spurious spectral lines in the table output.
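As a numerical check of (3.139) and (3.140), the exact dB levels lie close to the −6n − 12 and −6m approximations. The following sketch uses illustrative function names:

```python
import math

def spur_power_db(n):
    """Power of the largest phase-truncation spur, amplitude 2**-(n+2)."""
    return 20 * math.log10(2.0 ** -(n + 2))   # approximately -6n - 12 dB

def quant_noise_db(m):
    """Worst-case amplitude-quantization noise power for m-bit table words."""
    return 20 * math.log10(2.0 ** -m)         # approximately -6m dB

print(round(spur_power_db(8), 1), round(quant_noise_db(8), 1))  # -60.2 -48.2
```

For n = m = 8, the approximations give −60 dB and −48 dB, the values used in Example 3.2.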

Example 3.2

A direct digital synthesizer is to be designed to cover 1 kHz to 1 MHz with \(E_{q}\leq -45\) dB and \(E_{s}\leq -60\) dB. According to (3.140), the use of 8-bit words in the sine table is adequate for the required quantization noise level. With m = 8, the table contains \(2^{8}=256\) distinct words. According to (3.139), n = 8 is adequate for the required \(E_{s}\), and hence the sine table has n + 2 = 10 input bits. If 2.5 ≤ q ≤ 4, then because \(f_{\max }/f_{\min }=10^{3}\), (3.137) yields ν = 12. Thus, a 12-bit phase accumulator is needed. Since \(2^{12}=4096\), we may choose N = 4000. If the frequency resolution and smallest frequency are to be \(f_{\min }=1\) kHz, then (3.134) indicates that \(f_{r}=4\) MHz is required. When the frequency \(f_{\min }\) is desired, the phase increments are so small that \(2^{\nu -n-2}=4\) increments occur before a new address is specified and a new value of sin θ is produced. Thus, the ν − n − 2 = 2 least-significant bits in the accumulator are not used for addressing the sine table. \(\Box \)

The direct digital synthesizer can be easily modified to produce a modulated output when high-speed digital data is available. For amplitude modulation, the table output is applied to a multiplier. Phase modulation may be implemented by adding the appropriate bits to the phase accumulator output. Frequency modulation entails modification of the accumulator input bits. For a quaternary modulation, the quadrature signals may be generated by separate sine and cosine tables.

A direct digital synthesizer can produce nearly instantaneous, phase-continuous frequency changes and a very fine frequency resolution despite its relatively small size, weight, and power requirements. A disadvantage is the limited maximum frequency, which restricts the bandwidth of the covered frequencies following a frequency translation of the synthesizer output. For this reason, direct digital synthesizers are sometimes used as components in hybrid synthesizers. Another disadvantage is the stringent requirement for the lowpass filter to suppress frequency spurs generated during changes in the synthesized frequency.

Indirect Frequency Synthesizers

An indirect frequency synthesizer uses voltage-controlled oscillators and feedback loops. Indirect synthesizers usually require less hardware than comparable direct ones, but need more time to switch from one frequency to another. Like digital synthesizers, indirect synthesizers inherently produce phase-continuous outputs after frequency changes. The principal components of a single-loop indirect synthesizer, which is similar in operation to a phase-locked loop, are depicted in Figure 3.19. The control bits, which determine the value of the modulus or divisor N, are supplied by a pattern generator. The input signal at frequency \(f_{1}\) may be provided by another synthesizer. Since the feedback loop forces the frequency of the divider output, \((f_{0}-f_{1})/N\), to closely approximate the reference frequency \(f_{r}\), the output of the voltage-controlled oscillator (VCO) is a sine wave with frequency
$$ \begin{aligned} f_{0}=Nf_{r}+f_{1} {} \end{aligned} $$
where N is a positive integer.
Fig. 3.19

Indirect frequency synthesizer with a single loop. DAC digital-to-analog converter, VCO voltage-controlled oscillator

Phase detectors in frequency-hopping synthesizers are usually digital devices that measure zero-crossing times rather than the phase differences measured when mixers are used. Digital phase detectors have an extended linear range, are less sensitive to input-level variations, and simplify the interface with a digital divider.

Equation (3.141) indicates that the frequency resolution of the single-loop synthesizer is \(f_{r}\). It is usually necessary to have a loop bandwidth on the order of \(f_{r}/10\) to ensure stable operation and the suppression of sidebands that are offset from \(f_{0}\) by \(f_{r}\). The switching time \(t_{s}\) required for changing frequencies is less than or equal to \(T_{sw}\), defined previously for frequency-hopping pulses, into which additional guard time may have been inserted. The switching time tends to be inversely proportional to the loop bandwidth and is roughly approximated by
$$ \begin{aligned} t_{s}=\frac{25}{f_{r}} {} \end{aligned} $$
which indicates that a fine resolution (small \(f_{r}\)) and a short switching time may not be simultaneously achievable with a single loop.

To decrease the switching time while maintaining the frequency resolution of a single loop, a coarse steering signal can be stored in a read-only memory (ROM), converted into analog form by a DAC, and applied to the VCO (as shown in Figure 3.19) immediately after a frequency change. The steering signal reduces the frequency step that must be acquired by the loop when a hop occurs. An alternative approach is to place a fixed divider with modulus M after the loop so that the divider output frequency is \(f_{0}=Nf_{r}/M+f_{1}/M\). By this means, \(f_{r}\) can be increased without sacrificing resolution, provided that the VCO output frequency, which equals \(Mf_{0}\), is not too large for the divider in the feedback loop. To limit the transmission of spurious frequencies, it may be desirable to inhibit the transmitter output during frequency transitions.

The switching time can be dramatically reduced by using two synthesizers that alternately produce the output frequency. One synthesizer produces the output frequency while the second is being tuned to the next frequency following a command from the pattern generator. If the hop duration exceeds the switching time of each synthesizer, then the second synthesizer begins producing the next frequency before a control switch routes its output to a modulator or a mixer.

A divider, which is a binary counter that produces a square-wave output, counts down by one unit every time its input crosses zero. If the modulus or divisor is the positive integer N, then after N zero crossings, the divider output crosses zero and changes state. The divider then resumes counting down from N. As a result, the output frequency is equal to the input frequency divided by N. Programmable dividers have limited operating speeds that impair their ability to accommodate a high-frequency VCO output. This problem is avoided by the down-conversion of the VCO output by the mixer shown in Figure 3.19, but spurious components are introduced. Since fixed dividers can operate at much higher speeds than programmable dividers, a fixed divider might be placed before the programmable divider in the feedback loop. However, if the fixed divider has a modulus \(N_{1}\), then the loop resolution becomes \(N_{1}f_{r}\); thus, this solution is usually unsatisfactory.

Figure 3.20 depicts a dual-modulus divider, which maintains a frequency resolution equal to \(f_{r}\) while allowing synthesizer operation at high frequencies. The principal component of the dual-modulus divider is the dual prescalar, which comprises two fixed dividers with divisors equal to the positive integers P and P + Q, respectively. Dividers 1 and 2 are programmable dividers that divide by the non-negative integers A and B, respectively, where B > A. These programmable dividers are only required to accommodate the frequency \(f_{in}/P\). The dual prescalar initially divides by the modulus P + Q. This modulus changes whenever a programmable divider reaches zero. After (P + Q)A input transitions, divider 1 reaches zero, and the modulus control causes the dual prescalar to divide by P. Divider 2 has counted down to B − A. After P(B − A) more input transitions, divider 2 reaches zero and causes an output transition. The two programmable dividers are then reset, and the dual prescalar reverts to division by P + Q. Thus, each output transition corresponds to A(P + Q) + P(B − A) = AQ + PB input transitions, which implies that the dual-modulus divider has a modulus
$$ \begin{aligned} N=AQ+PB\;,\;\;\;\;B>A \end{aligned} $$
and produces the output frequency \(f_{in}/N\).
Fig. 3.20

Dual-modulus divider
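The counting argument above can be verified by direct simulation. The sketch below is an idealized event-level model with illustrative names, not a hardware description; it counts input transitions per output transition and reproduces N = AQ + PB:

```python
def dual_modulus_count(P, Q, A, B):
    """Input transitions per output transition of a dual-modulus divider.
    The prescaler divides by P+Q until programmable divider 1 (loaded with A)
    reaches zero, then by P until divider 2 (loaded with B) reaches zero."""
    assert B > A >= 0
    transitions = 0
    d1, d2 = A, B
    modulus = P + Q if A > 0 else P   # with A = 0 the prescaler divides by P
    while d2 > 0:
        transitions += modulus        # one prescaler output transition
        if d1 > 0:
            d1 -= 1
            if d1 == 0:
                modulus = P           # modulus control switches the prescaler
        d2 -= 1
    return transitions

# A 10/11 divider (P = 10, Q = 1) gives the modulus 10B + A
print(dual_modulus_count(10, 1, A=3, B=12))  # 123
```

Changing A by one changes the modulus by one, which is what permits unit-step frequency increments with a high-speed prescaler.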

If Q = 1 and P = 10, then the dual-modulus divider is called a 10/11 divider, and
$$ \begin{aligned} N=10B+A\;,\;\;\;\;B>A {} \end{aligned} $$
which can be increased in unit steps by changing A in unit steps. Since B > A is required, a suitable range for A and minimum value of B are
$$ \begin{aligned} 0\leq A\leq9\text{ , }\ \ \ \ \ \ B_{\mathrm{min}}=10.\ \ {} \end{aligned} $$
The relations (3.141), (3.144), and (3.145) indicate that the range of a synthesized hopset is from \(f_{1}+100f_{r}\) to \(f_{1}+(10B_{\max }+9)f_{r}\). Therefore, a spectral band between \(f_{\min }\) and \(f_{\max }\) is covered by the hopset if
$$ \begin{aligned} f_{1}+100f_{r}\leq f_{min} {} \end{aligned} $$
$$ \begin{aligned} f_{1}+(10B_{\max}+9)f_{r}\geq f_{max}. {} \end{aligned} $$

Example 3.3

The Bluetooth communication system is used to establish wireless communications among portable electronic devices. The system has a hopset of 79 carrier frequencies, its hop rate is 1600 hops per second, its hop band is between 2400 and 2483.5 MHz, and the bandwidth of each frequency channel is 1 MHz. Consider a system in which the 79 carrier frequencies are spaced 1 MHz apart from 2402 to 2480 MHz. A 10/11 divider with \(f_{r}=1\) MHz provides the desired increment, which is equal to the frequency resolution. Equation (3.142) indicates that \(t_{s}=25\) μs, which means that 25 potential data symbols have to be omitted during each hop interval. Inequality (3.146) indicates that \(f_{1}=2300\) MHz is a suitable choice. Then (3.147) is satisfied by \(B_{\max }=18\). Therefore, dividers A and B require 4 and 5 control bits, respectively, to specify their potential values. If the control bits are stored in a ROM, then each ROM location contains 9 bits. The number of ROM addresses is at least 79, the number of frequencies in the hopset. Thus, a ROM input address requires 7 bits. \(\Box \)
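As a numerical check of Example 3.3 (a sketch with illustrative variable names), every carrier in the 2402-2480 MHz hopset is reachable with the chosen \(f_{1}\) and divider ranges:

```python
# Hopset frequencies from (3.141) and (3.144): f0 = (10B + A)*f_r + f1
f_r, f1 = 1, 2300                          # MHz
synthesizable = {f1 + (10 * B + A) * f_r
                 for B in range(10, 19)    # B_min = 10 ... B_max = 18
                 for A in range(0, 10)}    # 0 <= A <= 9
carriers = set(range(2402, 2481))          # 79 carriers, 1 MHz apart
print(len(carriers), carriers <= synthesizable)  # 79 True
```

The synthesizable set spans 2400 to 2489 MHz in 1 MHz steps, which comfortably contains the 79 Bluetooth carriers.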

Multiple Loops

A multiple-loop frequency synthesizer uses two or more single-loop synthesizers to obtain both fine frequency resolution and fast switching. A three-loop frequency synthesizer is shown in Figure 3.21. Loops A and B have the form of Figure 3.19, but loop A does not have a mixer and filter in its feedback. Loop C has the mixer and filter, but lacks the divider. The reference frequency \(f_{r}\) is chosen to ensure that the desired switching time is realized. The divisor M is chosen so that \(f_{r}/M\) equals the desired resolution. Loop A and the divider generate increments of \(f_{r}/M\), while loop B generates increments of \(f_{r}\). Loop C combines the outputs of loops A and B to produce the output frequency
Fig. 3.21

Indirect frequency synthesizer with three loops

$$ \begin{aligned} f_{0}=Bf_{r}+A\frac{f_{r}}{M}+f_{1} {} \end{aligned} $$
where B, A, and M are positive integers because they are produced by dividers. Loop C is preferable to a mixer and bandpass filter because the filter would have to suppress a closely spaced, unwanted component when \(Af_{r}/M\) and \(Bf_{r}\) were far apart.
To ensure that each output frequency is produced by unique values of A and B, it is required that \(A_{\max }=A_{\min }+M-1\). To prevent degradation in the switching time, it is required that \(A_{\min }>M\). Both requirements are met by choosing
$$ \begin{aligned} A_{min}=M+1\text{, }\ \ \ \ \ A_{max}=2M. {} \end{aligned} $$
According to (3.148), a range of frequencies from f min to f max is covered if
$$ \begin{aligned} B_{min}f_{r}+A_{min}\frac{f_{r}}{M}+f_{1}\leq f_{min} {} \end{aligned} $$
$$ \begin{aligned} B_{max}f_{r}+A_{max}\frac{f_{r}}{M}+f_{1}\geq f_{max}. {} \end{aligned} $$

Example 3.4

Consider the Bluetooth system of Example 3.3 but with the more stringent requirement that \(t_{s}=2.5\) μs, which only sacrifices three potential data symbols per hop interval. The single-loop synthesizer of Example 3.3 cannot provide this short switching time. The required switching time is provided by a three-loop synthesizer with \(f_{r}=10\) MHz. The resolution of 1 MHz is achieved by taking M = 10. Equation (3.149) indicates that \(A_{\min }=11\) and \(A_{\max }=20\). Inequalities (3.150) and (3.151) are satisfied if \(f_{1}=2300\) MHz, \(B_{\min }=9\), and \(B_{\max }=16\). The maximum frequencies that must be accommodated by the dividers in loops A and B are \(A_{\max }f_{r}=200\) MHz and \(B_{\max }f_{r}=160\) MHz, respectively. Dividers A and B require 5 control bits. \(\Box \)
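The three-loop design of Example 3.4 can be checked the same way (a sketch; names are illustrative), using (3.148) and the ranges given by (3.149):

```python
# Three-loop output (3.148): f0 = B*f_r + A*(f_r/M) + f1, all values in MHz
f_r, M, f1 = 10, 10, 2300
A_min, A_max = M + 1, 2 * M                      # from (3.149): 11 and 20
synthesizable = {B * f_r + A * (f_r // M) + f1   # f_r/M = 1 MHz resolution
                 for B in range(9, 17)           # B_min = 9 ... B_max = 16
                 for A in range(A_min, A_max + 1)}
carriers = set(range(2402, 2481))                # the 79 Bluetooth carriers
print(carriers <= synthesizable)  # True
```

Because A spans M consecutive values, the synthesizable frequencies form a contiguous 1 MHz grid from 2401 to 2480 MHz, and each frequency corresponds to a unique (A, B) pair.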

Fractional-N Synthesizer

A fractional-N synthesizer uses a single loop and auxiliary hardware to produce an average output frequency
$$ \begin{aligned} f_{0}=\left( B+\frac{A}{M}\right) f_{r},\ 0\leq A\leq M-1. {} \end{aligned} $$
Although the switching time is inversely proportional to \(f_{r}\), the resolution is \(f_{r}/M\), which can be made arbitrarily small in principle.
The synthesis method uses a dual-modulus divider with frequency-division modulus equal to either B or B + 1. As shown in Figure 3.22, the modulus is controlled by a sequence of bits \(\{b\left ( n\right ) \}\), where n is the discrete-time index. The sequence is generated at the clock rate \(f_{r}\), and the divider modulus is \(B+b\left ( n\right )\). The average value of the bit \(b\left ( n\right ) \) is
$$ \begin{aligned} \overline{b\left( n\right) }=\frac{A}{M} {} \end{aligned} $$
so that the average divider modulus is B + A/M. The phase detector compares the arrival times of the rising edges of the frequency-divider output with those of the reference signal and produces an output that is a function of the difference in arrival times. A charge pump, which uses two current sources to charge or discharge capacitors, draws proportionate charge into the lowpass filter, the output of which is the control signal of the VCO. Since the lower input to the phase detector has an average frequency of \(f_{r}\), the average output frequency can be maintained at \(f_{0}\).
Fig. 3.22

Fractional-N frequency synthesizer. CP charge pump, VCO voltage-controlled oscillator

A delta-sigma modulator encodes a higher-resolution digital signal into a lower-resolution one. When its input is a multiple-bit representation of A/M, the delta-sigma modulator generates the sequence \(\{b\left ( n\right ) \}\). If this sequence were periodic, then high-level fractional spurs, which are harmonic frequencies at integer multiples of \(f_{r}/M\), would appear at the VCO input, thereby frequency-modulating its output signal. Fractional spurs are greatly reduced by modulus randomization, which randomizes \(b\left ( n\right ) \) while maintaining (3.153). Modulus randomization is implemented by dithering or randomly changing the least significant bit of the input to the delta-sigma modulator. Quantization noise \(q\left ( n\right ) =b\left ( n\right ) -\overline {b\left ( n\right ) }\) is introduced into the loop because A/M is approximated at the delta-sigma modulator output by a bit \(b\left ( n\right )\). To limit the effect of the quantization noise, the modulus randomization is designed such that the quantization noise has a high-pass spectrum. A lowpass loop filter can then eliminate most of the noise.
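A first-order accumulator model (a simplification with illustrative names; practical designs use higher-order delta-sigma modulators for the noise shaping described above) shows how a bit stream with average A/M is generated:

```python
def modulus_control_bits(A, M, num_bits):
    """First-order sketch: an accumulator adds A modulo M each clock and
    emits b(n) = 1 on overflow, so the average of b(n) is A/M and the
    average divider modulus is B + A/M."""
    acc = 0
    bits = []
    for _ in range(num_bits):
        acc += A
        if acc >= M:          # overflow -> divide by B + 1 this cycle
            acc -= M
            bits.append(1)
        else:                 # no overflow -> divide by B
            bits.append(0)
    return bits

bits = modulus_control_bits(A=3, M=10, num_bits=1000)
print(sum(bits) / len(bits))  # 0.3
```

This first-order model produces exactly the periodic pattern that causes fractional spurs; modulus randomization breaks up the periodicity while preserving the average in (3.153).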

Example 3.5

Consider a fractional-N synthesizer for the Bluetooth system of Example 3.4 in which \(t_{s}=2.5\) μs. If the output of the fractional-N synthesizer is frequency-translated by 2300 MHz, then the synthesizer itself needs to cover 102 to 180 MHz. The switching time is achieved by taking \(f_{r}=10\) MHz. The resolution is achieved by taking M = 10. Equation (3.152) indicates that the required frequencies are covered by varying B from 10 to 18 and A from 0 to 9. The integers B and A require 5 and 4 control bits, respectively. \(\square \)

3.9 Problems


Consider FH/CPFSK with soft-decision decoding of repetition codes and large values of \(\mathbb {E}_{b}/I_{t0}\). Suppose that the number of repetitions n is not chosen to minimize the potential impact of partial-band interference. Then, the first line of (3.20) upper-bounds \(P_{b}\). Show that a nonbinary modulation with \(m=\log _{2}q\) bits per symbol gives better performance than binary modulation in the presence of worst-case partial-band interference if \(n>(m-1)\ln (2)/\ln (m)\).


Consider FH/CPFSK with soft-decision decoding of repetition codes. (a) Prove that \(f\left ( n\right ) =\left ( q/4\right ) \left ( 4n/e\gamma \right ) ^{n}\) is a strictly convex function over the compact set \(\left [ 1,\gamma /3\right ]\). (b) Find the stationary point that gives the global minimum of \(f\left ( n\right )\). (c) Prove (3.22).


The autocorrelations of the complex envelopes of practical signals have the form \(R_{l}(\tau )=f_{1}\left ( \tau /T_{s}\right )\). (a) Prove that the power spectral density of the complex envelope has the form \(S_{l} (f)=T_{s}f_{2}\left (fT_{s}\right )\). (b) Prove that if the bandwidth B of a frequency channel is determined by setting \(F_{ib}(B/2)=c\), then the required B is inversely proportional to \(T_{s}\).


For FH/CPM, use the procedures described in the text to verify (3.78) and (3.82).


(a) Verify (3.85)-(3.88). (b) Use the indefinite integral to verify (3.89) and (3.90).


Derive (3.112) by following the steps described in the text.


Approximate μ = J/M in (3.116) by a continuous variable over the interval \(\left [0,1\right ]\). What is the worst-case value of μ for binary orthogonal FH/CPFSK, noncoherent detection, hard decisions, and the AWGN channel in the presence of strong interference? What is the corresponding worst-case symbol error probability? Why does it not depend on the number of frequency channels?


Consider binary orthogonal FH/CPFSK, noncoherent detection, hard decisions, and the Rayleigh channel in the presence of strong interference. Show that interference spread uniformly over the entire hopping band hinders communications more than equal-power interference concentrated over part of the band.


This problem illustrates the importance of a channel code to a frequency-hopping system in the presence of worst-case partial-band interference. Consider binary orthogonal FH/CPFSK, noncoherent detection, hard decisions, and the AWGN channel. (a) Use the results of Problem 7 to calculate the required \(\mathbb {E}_{b}/I_{t0}\) to obtain a bit error rate \(P_{b}=10^{-5}\) when no channel code is used. (b) Calculate the required \(\mathbb {E}_{b}/I_{t0}\) for \(P_{b}=10^{-5}\) when a (23,12) Golay code is used. As a first step, use the first term in (1.22) to estimate the required symbol error probability. What is the coding gain?


It is desired to cover 198-200 MHz in 10-Hz increments using double-mix-divide modules. (a) What is the minimum number of modules required? (b) What is the range of acceptable reference frequencies? (c) Choose a reference frequency and \(f_{b}\). What are the frequencies of the required tones? (d) If an upconversion by 180 MHz follows the DMD modules, what is the range of acceptable reference frequencies? Is this system more practical?


It is desired to cover 100-100.99 MHz in 10-kHz increments with an indirect frequency synthesizer containing a single loop and a dual-modulus divider. Let \(f_{1}=0\) in Figure 3.19 and \(Q=1\) in Figure 3.20. (a) What is a suitable range of values for \(A\)? (b) What are a suitable value for \(P\) and a suitable range of values for \(B\) if it is required to minimize the highest frequency applied to the programmable dividers?
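Not the text's design answer, but the channel-count arithmetic can be sanity-checked: assuming the reference frequency equals the 10-kHz increment, the loop must realize 100 divide ratios, and a hypothetical prescaler modulus \(P=100\) (an illustrative choice) decomposes each as \(N=PB+A\):

```python
# illustrative feasibility check (assumed parameters, not the text's answer):
# with f_r = 10 kHz, the divide ratio N = f_out / f_r runs over 10000..10099,
# and a dual-modulus (P / P+1) prescaler realizes N = P*B + A with 0 <= A < P <= B
P = 100   # hypothetical prescaler modulus
for N in range(10000, 10100):
    B, A = divmod(N, P)   # here B = 100 and A = 0..99
    assert N == P * B + A and 0 <= A < P <= B
print("all 100 divide ratios realizable with P =", P)
```

The actual exercise additionally requires choosing \(P\) and the range of \(B\) to minimize the highest frequency seen by the programmable dividers, which this sketch does not address.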


It is desired to cover 198-200 MHz in 10-Hz increments with a switching time equal to 2.5 ms. An indirect frequency synthesizer with three loops in the form of Figure 3.21 is used. It is desired that \(B_{\max } \leq 10^{4}\) and that \(f_{1}\) is minimized. What are suitable values for the parameters \(f_{r}\), \(M\), \(A_{\min }\), \(A_{\max }\), \(B_{\min }\), \(B_{\max }\), and \(f_{1}\)?


Specify the design parameters of a fractional-N synthesizer that covers 198-200 MHz in 10-Hz increments with a switching time equal to 250 μs.



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Don Torrieri, Silver Spring, USA
