Abstract
In considering the usefulness and practicality of a cryptographic system, it is necessary to measure its resistance to various forms of attack. Such attacks include simple brute-force searches through the key or message space, somewhat faster searches via collision or meet-in-the-middle algorithms, and more sophisticated methods that are used to compute discrete logarithms, factor integers, and find short vectors in lattices.
Notes
- 1.
Sometimes the length of the search can be significantly shortened by matching pieces of keys taken from two or more lists. Such an attack is called a collision or meet-in-the-middle attack; see Sect. 5.4.
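The table-matching idea in this note can be sketched in code. This is a minimal illustration against a deliberately simplistic toy cipher (modular addition with 8-bit half-keys); the cipher, key sizes, and function names are illustrative assumptions, not from the text, but the structure — build one list of forward half-encryptions, then match backward half-decryptions against it — is the same one a real meet-in-the-middle attack uses.

```python
K = 2**8   # size of each half of the key space (toy value)
N = 2**16  # block size of the toy cipher

def E(k, m):          # toy encryption: shift the block by the key
    return (m + k) % N

def D(k, c):          # toy decryption: shift back
    return (c - k) % N

def meet_in_the_middle(m, c):
    """Find (k1, k2) with E(k2, E(k1, m)) == c by matching two lists.

    Cost is about 2*K table operations, instead of the K**2 steps that
    brute-forcing every key pair would require.
    """
    table = {E(k1, m): k1 for k1 in range(K)}   # list 1: forward halves
    for k2 in range(K):                          # list 2: backward halves
        mid = D(k2, c)
        if mid in table:                         # the two lists collide
            return table[mid], k2
    return None

# Demo: double-encrypt with a secret pair, then recover a matching pair.
m = 1234
c = E(77, E(55, m))
k1, k2 = meet_in_the_middle(m, c)
assert E(k2, E(k1, m)) == c
```

With modular addition any pair summing to the secret total works, so the attack returns the first valid pair it finds; against a real block cipher the match pins down far fewer candidates.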
- 2.
You may wonder why Alice and Bob, those intrepid exchangers of encrypted secret messages, are sitting down for a meal with their cryptographic adversary Eve. In the real world, this happens all the time, especially at cryptography conferences!
- 3.
The binomial theorem’s fame extends beyond mathematics. Moriarty, Sherlock Holmes’s arch enemy, “wrote a treatise upon the Binomial Theorem,” on the strength of which he won a mathematical professorship. And Major General Stanley, that very Model of a Modern Major General, proudly informs the Pirate King and his cutthroat band:
About Binomial Theorem I’m teeming with a lot o’ news—
With many cheerful facts about the square of the hypotenuse.
(The Pirates of Penzance, W.S. Gilbert and A. Sullivan 1879)
- 4.
This cipher is named after Blaise de Vigenère (1523–1596), whose 1586 book Traicté des Chiffres describes the known ciphers of his time. These include polyalphabetic ciphers such as the “Vigenère cipher,” which according to [63] Vigenère did not invent, and an ingenious autokey system (see Exercise 5.19), which he did.
- 5.
More typically one uses a key phrase consisting of several words, but for simplicity we use the term “keyword” to cover both single keywords and longer key phrases.
- 6.
Cryptography and the Art of Decryption.
- 7.
We were a little lucky in that every relation in Table 5.6 is correct. Sometimes there are erroneous relations, but it is not hard to eliminate them with some trial and error.
- 8.
David Copperfield, 1850, Charles Dickens.
- 9.
General (continuous) probability theory also deals with infinite sample spaces Ω, in which case only certain subsets of Ω are allowed to be events and are assigned probabilities. There are also further restrictions on the probability function \(\mathrm{Pr}:\varOmega \rightarrow \mathbb{R}\). For our study of cryptography in this book, it suffices to use discrete (finite) sample spaces.
- 10.
- 11.
- 12.
For an amusing commentary on long strings of heads, see Act I of Tom Stoppard’s Rosencrantz and Guildenstern Are Dead.
- 13.
A sequence \(a_1, a_2, a_3,\ldots\) is called a geometric progression if all of the ratios \(a_{n+1}/a_n\) are the same. Similarly, the sequence is an arithmetic progression if all of the differences \(a_{n+1} - a_n\) are the same.
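The two definitions translate directly into code; a small sketch (function names are illustrative):

```python
# A sequence is geometric if all consecutive ratios agree, and
# arithmetic if all consecutive differences agree.
from fractions import Fraction  # exact ratios, avoiding float comparison

def is_geometric(seq):
    ratios = [Fraction(seq[i + 1], seq[i]) for i in range(len(seq) - 1)]
    return len(set(ratios)) == 1

def is_arithmetic(seq):
    diffs = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
    return len(set(diffs)) == 1

assert is_geometric([3, 6, 12, 24])      # common ratio 2
assert is_arithmetic([5, 8, 11, 14])     # common difference 3
assert not is_geometric([1, 2, 3, 4])
```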
- 14.
Note that the expression Pr(X = x and Y = y) is really shorthand for the probability of the event
$$\displaystyle{\bigl \{\omega \in \varOmega: X(\omega ) = x\ \mbox{ and}\ Y (\omega ) = y\big\}.}$$If you find yourself becoming confused about probabilities expressed in terms of values of random variables, it often helps to write them out explicitly in terms of an event, i.e., as the probability of a certain subset of Ω.
- 15.
If you think that \(\frac{40} {365}\) is the right answer, think about the same situation with 366 people. The probability that someone shares your birthday cannot be \(\frac{366} {365}\), since that’s larger than 1.
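The correct probability is easy to check numerically. A short sketch (the function name is illustrative): assuming birthdays are uniform over 365 days, the chance that at least one of \(n\) other people shares your birthday is \(1 - (364/365)^n\), which stays below 1 no matter how large \(n\) gets, unlike the naive guess \(n/365\).

```python
# Probability that at least one of n other people shares your birthday,
# assuming uniform birthdays over 365 days.
def share_my_birthday(n):
    return 1 - (364 / 365) ** n

# The naive answer 366/365 exceeds 1; the true value is about 0.633.
assert share_my_birthday(366) < 1
assert abs(share_my_birthday(366) - 0.633) < 0.01
```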
- 16.
If this value of x happens to be negative and we want a positive solution, we can always use the fact that \(g^{N} = 1\) to replace it with \(x = y - z + N\).
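A small numerical sketch of this adjustment, using an assumed toy example (g = 2 modulo p = 11, where g has order N = 10; these values are illustrative, not from the text):

```python
# If x = y - z comes out negative, adding N (the order of g, so that
# g**N = 1 in the group) gives an equivalent non-negative exponent.
p, g, N = 11, 2, 10        # toy example: 2 has order 10 modulo 11
y, z = 3, 7
x = y - z                  # -4, negative
x_pos = y - z + N          # 6, a valid non-negative exponent
assert pow(g, x_pos, p) == pow(g, x % N, p)
```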
- 17.
For example, it would suffice that F have a continuous derivative.
- 18.
For most cryptographic applications, the prime p is chosen such that p − 1 has precisely one large prime factor, since otherwise, the Pohlig–Hellman algorithm (Theorem 2.31) may be applicable. And it is unlikely that d will be divisible by the large prime factor of p − 1.
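One can check this condition on a candidate p by factoring p − 1 directly. A minimal trial-division sketch (the safe-prime example p = 2039 and the helper name are illustrative assumptions, not from the text):

```python
# Trial-division factorization of n, returning {prime: exponent}.
def factor(n):
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

# For a "safe prime" p = 2q + 1 with q prime, p - 1 = 2 * q has exactly
# one large prime factor, so Pohlig-Hellman gives little advantage.
p = 2039                       # 2039 = 2 * 1019 + 1, both prime
assert factor(p - 1) == {2: 1, 1019: 1}
```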
- 19.
As is typical, we have omitted reference to the underlying sample spaces. To be completely explicit, we have three probability spaces with sample spaces \(\varOmega_M\), \(\varOmega_C\), and \(\varOmega_K\) and probability functions \(\mathrm{Pr}_M\), \(\mathrm{Pr}_C\), and \(\mathrm{Pr}_K\). Then M, K, and C are random variables
$$\displaystyle{M:\varOmega _{M} \rightarrow \mathcal{M},\qquad K:\varOmega _{K} \rightarrow \mathcal{K},\qquad C:\varOmega _{C} \rightarrow \mathcal{C}.}$$
Then by definition, the density function \(f_M\) is
$$\displaystyle{f_{M}(m) = \mathrm{Pr}(M = m) = \mathrm{Pr}_{M}{\bigl (\{\omega \in \varOmega _{M}: M(\omega ) = m\}\bigr )},}$$
and similarly for K and C.
- 20.
Although this notation is useful, it is important to remember that the domain of H is the set of random variables, not the set of n-tuples for some fixed value of n. Thus the domain of H is itself a set of functions.
- 21.
This convention makes sense, since we want H to be continuous in the \(p_i\)'s, and it is true that \(\lim _{p\rightarrow 0}p\log _{2}p = 0\).
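The limit is easy to confirm numerically; a small sketch:

```python
# Numerical illustration that p * log2(p) -> 0 as p -> 0, which is what
# justifies the convention 0 * log2(0) = 0 in the entropy formula.
import math

values = [p * math.log2(p) for p in (0.1, 0.01, 0.001, 0.0001)]
# the magnitudes shrink toward 0 as p shrinks
assert all(abs(v) > abs(w) for v, w in zip(values, values[1:]))
assert abs(values[-1]) < 0.002
```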
- 22.
It should be noted that when implementing a modern public key cipher, one generally combines the plaintext with some random bits and then performs some sort of invertible transformation so that the resulting secondary plaintext looks more like a string of random bits. See Sect. 8.6.
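The randomizing idea can be sketched with a toy invertible transformation. This is emphatically not the actual scheme of Sect. 8.6; the construction, sizes, and function names below are assumptions chosen only to illustrate mixing the plaintext with random bits in a way the recipient can undo:

```python
# Toy illustration: prepend random bytes r and mask the plaintext with a
# hash of r. The result looks random, yet the map is invertible.
import os
import hashlib

def randomize(m: bytes) -> bytes:
    r = os.urandom(16)                                  # fresh random bits
    mask = hashlib.sha256(r).digest()[:len(m)]          # assumes len(m) <= 32
    return r + bytes(a ^ b for a, b in zip(m, mask))    # invertible transform

def derandomize(x: bytes) -> bytes:
    r, masked = x[:16], x[16:]
    mask = hashlib.sha256(r).digest()[:len(masked)]
    return bytes(a ^ b for a, b in zip(masked, mask))

m = b"attack at dawn"
assert derandomize(randomize(m)) == m
```

Real public key implementations use vetted padding schemes rather than ad hoc constructions like this one.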
- 23.
To be rigorous, one should really define upper and lower densities using liminf and limsup, since it is not clear that the limit defining H(L) exists. We will not worry about such niceties here.
- 24.
This does not mean that one can remove 70 % of the letters and still have an intelligible message. What it means is that in principle, it is possible to take a long message that requires 4.7 bits to specify each letter and to compress it into a form that takes only 30 % as many bits.
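The 4.7 figure is just \(\log_2 26\), and the 30 % figure corresponds to roughly 1.4 bits of information per letter of English; a quick numerical check:

```python
import math

# Each of the 26 letters needs log2(26) ~ 4.70 bits to specify, but
# compressing to 30% of that leaves about 1.41 bits per letter.
bits_per_letter = math.log2(26)
assert abs(bits_per_letter - 4.70) < 0.01
assert abs(0.30 * bits_per_letter - 1.41) < 0.02
```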
- 25.
As mentioned in Sect. 2.1, the question of whether \(\mathcal{P} = \mathcal{N}\mathcal{P}\) is one of the $1,000,000 Millennium Prize problems.
- 26.
Spelt is an ancient type of wheat.
- 27.
A hekat is \(\frac{1} {30}\) of a cubic cubit, which is approximately 4.8 liters.
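As an arithmetic sanity check of the conversion (taking the Egyptian royal cubit to be roughly 0.524 m, an assumed value):

```python
# 1 hekat = (1/30) cubic cubit; with a royal cubit of about 0.524 m,
# that works out to roughly 4.8 liters.
cubit_m = 0.524                               # assumed cubit length
hekat_liters = (cubit_m ** 3) / 30 * 1000     # m^3 -> liters
assert abs(hekat_liters - 4.8) < 0.1
```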
References
M. Agrawal, N. Kayal, N. Saxena, PRIMES is in P. Ann. Math. (2) 160(2), 781–793 (2004)
M. Ajtai, C. Dwork, A public-key cryptosystem with worst-case/average-case equivalence, in STOC ’97, El Paso (ACM, New York, 1999), pp. 284–293 (electronic)
R.P. Brent, An improved Monte Carlo factorization algorithm. BIT 20(2), 176–184 (1980)
H. Cohen, A Course in Computational Algebraic Number Theory. Volume 138 of Graduate Texts in Mathematics (Springer, Berlin, 1993)
S.A. Cook, The complexity of theorem-proving procedures, in STOC ’71: Proceedings of the Third Annual ACM Symposium on Theory of Computing, Shaker Heights (ACM, New York, 1971), pp. 151–158
M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. A Series of Books in the Mathematical Sciences (W. H. Freeman, San Francisco, 1979)
G.R. Grimmett, D.R. Stirzaker, Probability and Random Processes, 3rd edn. (Oxford University Press, New York, 2001)
E.T. Jaynes, Information theory and statistical mechanics. Phys. Rev. (2) 106, 620–630 (1957)
D. Kahn, The Codebreakers: The Story of Secret Writing (Scribner Book, New York, 1996)
P.L. Montgomery, Speeding the Pollard and elliptic curve methods of factorization. Math. Comput. 48(177), 243–264 (1987)
NIST–DES, Data Encryption Standard (DES). FIPS Publication 46-3, National Institute of Standards and Technology, 1999. http://csrc.nist.gov/publications/fips/fips46-3/fips46-3.pdf
J.M. Pollard, Monte Carlo methods for index computation (mod p). Math. Comput. 32(143), 918–924 (1978)
E.L. Post, A variant of a recursively unsolvable problem. Bull. Am. Math. Soc. 52, 264–268 (1946)
S. Ross, A First Course in Probability, 9th edn. (Pearson, England, 2001)
C.E. Shannon, A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423, 623–656 (1948)
C.E. Shannon, Communication theory of secrecy systems. Bell Syst. Tech. J. 28, 656–715 (1949)
V. Shoup, Lower bounds for discrete logarithms and related problems, in Advances in Cryptology—EUROCRYPT ’97, Konstanz. Volume 1233 of Lecture Notes in Computer Science (Springer, Berlin, 1997), pp. 256–266
J. Talbot, D. Welsh, Complexity and Cryptography: An Introduction (Cambridge University Press, Cambridge, 2006)
E. Teske, Speeding up Pollard’s rho method for computing discrete logarithms, in Algorithmic Number Theory, Portland, 1998. Volume 1423 of Lecture Notes in Computer Science (Springer, Berlin, 1998), pp. 541–554
E. Teske, Square-root algorithms for the discrete logarithm problem (a survey), in Public-Key Cryptography and Computational Number Theory, Warsaw, 2000 (de Gruyter, Berlin, 2001), pp. 283–301
Copyright information
© 2014 Springer Science+Business Media New York
Cite this chapter
Hoffstein, J., Pipher, J., Silverman, J.H. (2014). Combinatorics, Probability, and Information Theory. In: An Introduction to Mathematical Cryptography. Undergraduate Texts in Mathematics. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-1711-2_5
Print ISBN: 978-1-4939-1710-5
Online ISBN: 978-1-4939-1711-2