
Cryptographic puzzles and DoS resilience, revisited

Published in Designs, Codes and Cryptography.

Abstract

Cryptographic puzzles (or client puzzles) are moderately difficult problems that can be solved by investing non-trivial amounts of computation and/or storage. Devising models for cryptographic puzzles has only recently started to receive attention from the cryptographic community as a first step toward rigorous models and proofs of security of applications that employ them (e.g. Denial-of-Service (DoS) resistance). Unfortunately, the subtle interaction between the complex scenarios for which cryptographic puzzles are intended and the typical difficulties of defining concrete security easily leads to flaws in definitions and proofs. Indeed, as a first contribution we exhibit shortcomings of the state-of-the-art definition of security of cryptographic puzzles and point out flaws in existing security proofs. The main contribution of this paper is a set of new security definitions for puzzle difficulty. We distinguish and formalize two distinct flavors of puzzle security, which we call optimality and fairness, and, in addition, properly define the relation between solving one puzzle versus solving multiple ones. We demonstrate the applicability of our notions by analyzing the security of two popular puzzle constructions. We also briefly investigate existing definitions for the related notion of security against DoS attacks. We demonstrate that the only rigorous security notion proposed to date is not sufficiently demanding (it allows one to prove secure protocols that are clearly not DoS resistant) and suggest an alternative definition. Our results are not only of theoretical interest: the better characterization of hardness for puzzles and DoS resilience allows us to establish formal bounds on the effectiveness of client puzzles which confirm previous empirical observations.
We also underline clear practical limitations on the effectiveness of puzzles against DoS attacks by providing simple rules of thumb that can be used to discard puzzles as a valid countermeasure in certain scenarios.




References

  1. Abadi M., Burrows M., Manasse M., Wobber T.: Moderately hard, memory-bound functions. ACM Trans. Internet Technol. 5, 299–327 (2005).


  2. Abliz M., Znati T.: A guided tour puzzle for denial of service prevention. In: Proceedings of the 2009 Annual Computer Security Applications Conference, ACSAC ’09, pp. 279–288. IEEE Computer Society, Washington, DC (2009).

  3. Aura T., Nikander P., Leiwo J.: DoS-resistant authentication with client puzzles. In: Revised Papers from the 8th International Workshop on Security Protocols, pp. 170–177. Springer, London (2001).

  4. Back A.: Hashcash–a denial of service counter-measure. Technical Report (2002).

  5. Bellare M., Impagliazzo R., Naor M.: Does parallel repetition lower the error in computationally sound protocols? In: Proceedings of 38th Annual Symposium on Foundations of Computer Science, pp. 374–383. IEEE, Los Alamitos (1997).

  6. Bellare M., Ristenpart T., Tessaro S.: Multi-instance security and its application to password-based cryptography. In: Advances in Cryptology, CRYPTO 2012, pp. 312–329. Springer, Heidelberg (2012).

  7. Bencsáth B., Vajda I., Buttyán L.: A game based analysis of the client puzzle approach to defend against DoS attacks. Proc. SoftCOM. 11, 763–767 (2003).


  8. Boyd C., Gonzalez-Nieto J., Kuppusamy L., Narasimhan H., Rangan C., Rangasamy J., Smith J., Stebila D., Varadarajan V.: An investigation into the detection and mitigation of denial of service (DoS) attacks: critical information infrastructure protection. In: Cryptographic Approaches to Denial-of-Service Resistance, p. 183. Springer, Heidelberg (2011).

  9. Chen L., Morrissey P., Smart N.P., Warinschi B.: Security notions and generic constructions for client puzzles. In: Proceedings of the 15th International Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology, ASIACRYPT ’09, pp. 505–523. Springer, Heidelberg (2009).

  10. Dean D., Stubblefield A.: Using client puzzles to protect TLS. In: Proceedings of the 10th conference on USENIX Security Symposium, SSYM’01, vol. 10, p. 1. USENIX Association, Berkeley (2001).

  11. Dwork C., Goldberg A., Naor M.: On memory-bound functions for fighting spam. In: Proceedings of the 23rd Annual International Cryptology Conference, pp. 426–444. Springer, New York (2003).

  12. Dwork C., Naor M.: Pricing via processing or combating junk mail. In: Proceedings of the 12th Annual International Cryptology Conference on Advances in Cryptology, pp. 139–147. Springer, London (1993).

  13. Fallah M.: A puzzle-based defense strategy against flooding attacks using game theory. IEEE Trans. Dependable Secur. Comput. 7(1), 5–19 (2010).


  14. Gao Y., Susilo W., Mu Y., Seberry J.: Efficient trapdoor-based client puzzle against DoS attacks. In: Network Security, pp. 229–249 (2010).

  15. Gao Y.: Efficient trapdoor-based client puzzle system against DoS attacks. Technical Report (2005).

  16. Grimaldi R.P.: Generating functions, Chap. 3.2. In: Rosen, K.H. (ed.) Handbook of Discrete and Combinatorial Mathematics. CRC, Boca Raton (1999).

  17. Groza B., Warinschi B.: Revisiting difficulty notions for client puzzles and DoS resilience. In: Gollmann, D., Freiling, F. (eds.) Information Security Conference (ISC), LNCS, vol. 7483, pp. 39–54. Springer, Heidelberg (2012).

  18. Jeckmans A.: Computational puzzles for spam reduction in SIP. Draft (2007).

  19. Jeckmans A.: Practical client puzzle from repeated squaring. Technical Report (2009).

  20. Jerschow Y.I., Mauve M.: Non-parallelizable and non-interactive client puzzles from modular square roots. In: Sixth International Conference on Availability, Reliability and Security, ARES 2011, pp. 135–142 (2011).

  21. Juels A., Brainard J.: Client puzzles: a cryptographic countermeasure against connection depletion attacks. In: Proceedings of NDSS ’99 (Networks and Distributed Security Systems), pp. 151–165 (1999).

  22. Karame G., Čapkun S.: Low-cost client puzzles based on modular exponentiation. In: Proceedings of the 15th European Conference on Research in Computer Security. ESORICS’10, pp. 679–697. Springer, New York (2010).

  23. Laurie B., Clayton R.: Proof-of-work proves not to work; version 0.2. In: Workshop on Economics and Information Security (2004).

  24. Liu D., Camp L.: Proof of work can work. In: Fifth Workshop on the Economics of Information Security (2006).

  25. Narasimhan H., Varadarajan V., Rangan C.: Game theoretic resistance to denial of service attacks using hidden difficulty puzzles. In: Information Security Practice and Experience, pp. 359–376 (2010).

  26. Rangasamy J., Stebila D., Boyd C., Nieto J.: An integrated approach to cryptographic mitigation of denial-of-service attacks. In: Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security, pp. 114–123. ACM, New York (2011).

  27. Rivest R., Shamir A., Wagner D.: Time-lock puzzles and timed-release crypto. Technical Report. MIT Press, Cambridge (1996).

  28. Stebila D., Kuppusamy L., Rangasamy J., Boyd C., Nieto J.G.: Stronger difficulty notions for client puzzles and denial-of-service-resistant protocols. In: Proceedings of the 11th International Conference on Topics in cryptology: CT-RSA 2011, CT-RSA’11, pp. 284–301. Springer, Heidelberg (2011).

  29. Suriadi S., Stebila D., Clark A., Liu H.: Defending web services against denial of service attacks using client puzzles. In: 2011 IEEE International Conference on Web Services (ICWS), pp. 25–32. IEEE, New York (2011).

  30. Tang Q., Jeckmans A.: On Non-parallelizable Deterministic Client Puzzle Scheme with Batch Verification Modes. Springer, Heidelberg (2010).

  31. Tritilanunt S., Boyd C., Foo E., Nieto J.M.G.: Toward non-parallelizable client puzzles. In: Proceedings of the 6th International Conference on Cryptology and Network Security, CANS’07, pp. 247–264. Springer, Heidelberg (2007).


Acknowledgments

We thank Douglas Stebila and the anonymous referees for their comments and feedback on our work. The first author was partially supported by National Research Grants CNCSIS UEFISCDI, Project Number PNII IDEI 940/2008–2011 and POSDRU/21/1.5/G/13798, inside POSDRU Romania 2007–2013.

Author information


Corresponding author

Correspondence to Bogdan Groza.

Additional information

Communicated by C. Boyd.

Appendices

A Puzzle properties

Some flavours of the notions we define have appeared in the literature, but they have never been formalized and previous work does not seem to make a clear distinction between them. For example, [2] informally introduces the notion of a computation guarantee, which requires that a malicious party cannot solve the puzzle significantly faster than honest clients; this is what we call optimality. Other papers [30] require that solving the puzzle be done via deterministic computation; this seems to be what we call fairness in solving. For completeness we now enumerate various puzzle properties that can be found in the literature. Most of them are orthogonal, although we are not aware whether constructions for every combination of them exist, or whether every combination is useful in some protocol.

  1. Non-parallelizability Prevents an adversary from using distributed computation to solve the puzzle. The first construction was provided by Rivest et al. in [27] in the context of time-release crypto. Later, non-parallelizable constructions were proposed by Tritilanunt et al. in [31], constructions based on repeated squaring were studied by Jeckmans [19] and by Karame and Čapkun [22], and a construction based on computing modular square roots is presented in [20].

  2. Batching A popular technique in the case of the RSA cryptosystem; the ability to verify several puzzles at once can save significant time on the verifier's side. This property is discussed in [30].

  3. Granularity As called by Tritilanunt et al. [31], or adjustability of difficulty by Abliz and Znati [2]. Refers to the pattern by which the difficulty level can be scaled: linearly, exponentially, etc. One may want full control over how the difficulty level is adjusted, but some constructions by default allow only an exponential growth of difficulty (this can easily be fixed in most situations).

  4. Non-interactiveness Allows puzzles to be constructed in the absence of the verifier. This property was initially used by Back in [4] to combat spam, as one cannot expect the recipient of an e-mail to be present at the time the e-mail is sent in order to produce a puzzle for the sender.

  5. Unforgeability Initially proposed by Chen et al. [9], this property prevents an adversary from forging puzzles. It was questioned by Stebila et al. [28] in the context of non-interactive puzzles, where it does not always make sense (as the solver is the one that builds the puzzles). Still, this property is vital for practical scenarios in which interaction between principals exists.

  6. State Some constructions require the server side to store information. Stateless puzzles may be desirable in order to avoid depleting the server's memory resources (a concern that is more pressing in constrained environments).

  7. Freshness Replaying puzzles to clients can cause resource exhaustion on the client side; freshness prevents this. This property is also called tamper-resistance by Abliz and Znati [2].

  8. Cost This can be refined along several lines, namely the cost on the server side (to generate) and on the client side (to solve). It can further be analyzed with respect to the unit of cost (CPU time, memory, bandwidth, etc.) and the quantity required by one step (one hash computation, one modular squaring, etc.). Cost is closely related to the difficulty of the puzzle discussed below, but difficulty is usually defined formally as an upper bound on the adversary's advantage in answering a puzzle, rather than as the cost of solving it.

  9. Strong puzzle difficulty Introduced by Stebila et al. in [28], this requires that an adversary cannot solve \(n\) puzzles with less effort than \(n\) times the effort of solving one puzzle. The same property is encountered in Abliz and Znati [2] as correlation-freeness, with the informal requirement that previous answers must not help the adversary solve a new puzzle more easily.

  10. Resilience to pre-computation attacks In some scenarios it is relevant whether an adversary can perform off-line computations before obtaining the puzzle itself. Lack of resilience to pre-computation attacks may allow the adversary to mount a directed attack against a particular principal despite the existence of a PoW protocol (performing off-line computations before a connection is actually requested is also immediately achievable in the case of non-interactive puzzles).

  11. Trapdoor Refers to whether there exists some private information that makes the puzzle easier to solve. This property, along with constructions and practical settings, is discussed by Gao in [15].

  12. Fairness Defined in [2] as the property that a puzzle should take the same amount of time to solve whatever the resources of the solver are. This is required in the context of DoS resistance in order to reduce the capability of a powerful attacker to that of a regular user. An example in this sense are the memory-bound functions of Abadi et al. [1], which rely on memory speed; in contrast to CPU speed, memory speed is more uniform across devices.

  13. Minimum interference Defined in [2] as a property requiring that a puzzle not interfere with the user's regular operations. If the puzzle takes too long, the user may avoid the protocol that employs it. Arguably, this relates more to good protocol engineering than to an intrinsic property of a puzzle.

  14. Uniqueness Refers to whether or not a puzzle has a unique solution. This property is trivial, but it is not underlined explicitly in related work. In some contexts it is not relevant, for example in most PoW protocols, while it is extremely relevant in others, for example in time-release crypto. In this latter context it is essential that a puzzle has a unique solution, since this solution has to be used as a key to decrypt some particular ciphertext.
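As a concrete illustration of non-parallelizability, the repeated-squaring construction of Rivest et al. [27] can be sketched as follows. This is a minimal sketch with toy parameters and function names of our own choosing, not the full time-lock scheme of [27]; real instances use large RSA moduli:

```python
def timelock_setup(p: int, q: int, t: int, seed: int):
    """The generator knows the factorization of n, so it can shortcut the
    t squarings by reducing the exponent 2^t modulo phi(n) (Euler's theorem)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    target = pow(seed, pow(2, t, phi), n)
    return n, target

def timelock_solve(n: int, t: int, seed: int) -> int:
    """Without the factorization, the solver performs t squarings one after
    another; no parallel shortcut is known, hence non-parallelizability."""
    x = seed % n
    for _ in range(t):
        x = (x * x) % n
    return x

p, q, t, seed = 1009, 1013, 5000, 7   # toy parameters of our choosing
n, target = timelock_setup(p, q, t, seed)
assert timelock_solve(n, t, seed) == target
```

Knowledge of \(\varphi(n)\) thus acts as a trapdoor: the generator evaluates the puzzle in one modular exponentiation, while the solver is forced through \(t\) inherently sequential squarings.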

B Proofs for the difficulty bounds (Theorems 1 and 2)

As a general procedure, both proofs for the difficulty bounds proceed in the same way: we first establish the bound of the solving algorithm and then bound the adversary advantage, which proves that the puzzles are optimal. As can easily be noted, the game played by the adversary is the concurrent solving game \(\mathsf{CS }\) and consequently, as also established by Proposition 1, the puzzles are difficulty preserving, given that the solving algorithm works sequentially (for completeness we prove that the solving time of \(n\) puzzles is \(n\) times the solving time of one puzzle for any fixed difficulty level \(d\)).

1.1 B.1 Proof of Theorem 1

Proof of the solving bound Suppose that \(\mathsf{Find }\) finishes at exactly the \(t\)th query and let \(t=t_1+t_2+\cdots +t_n\), where \(t_i\) denotes the number of queries made to \(\mathcal{H }\) to solve the \(i\)th puzzle. The probability to solve the \(i\)th puzzle at exactly the \(t_i\)th query is obviously \((1-\frac{1}{2^d})^{t_i-1}\cdot \frac{1}{2^d}\). Since solving each puzzle is an independent event, the probability to solve the puzzles in exactly \(t_1, t_2,\ldots ,t_n\) steps respectively is \(\prod _{i=1,n} (1-\frac{1}{2^d})^{t_i-1}\cdot \frac{1}{2^d} = (1-\frac{1}{2^d})^{t-n}\cdot \frac{1}{2^{nd}}\). But there are exactly \(\left( {\begin{array}{c}t-1\\ n-1\end{array}}\right) \) ways of writing \(t\) as an ordered sum of exactly \(n\) positive integers, from which the probability to solve the puzzles follows as: \( \zeta ^{ HT }_{k, d, n}(t) = \sum \nolimits _{i=n, t} \left( {\begin{array}{c}i-1\\ n-1\end{array}}\right) \cdot \frac{1}{2^{nd}} \cdot \left( 1-\frac{1}{2^d} \right) ^{i-n} \).
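The bound \(\zeta ^{ HT }_{k, d, n}(t)\) can be cross-checked numerically against a direct simulation of \(n\) independent geometric solving times. The sketch below (function names are ours) is a sanity check, not part of the proof:

```python
import math
import random

def zeta_ht(d: int, n: int, t: int) -> float:
    """Pr[Find solves n hash-trail puzzles of difficulty d within t queries],
    following the formula above."""
    p = 2.0 ** (-d)
    return sum(math.comb(i - 1, n - 1) * p ** n * (1 - p) ** (i - n)
               for i in range(n, t + 1))

def simulated(d: int, n: int, t: int, runs: int = 20000) -> float:
    """Monte Carlo estimate: each puzzle takes a Geometric(2^-d) number of queries."""
    p = 2.0 ** (-d)
    random.seed(0)
    hits = 0
    for _ in range(runs):
        # inverse-transform sampling of a geometric variable on {1, 2, ...}
        total = sum(math.ceil(math.log(random.random()) / math.log(1.0 - p))
                    for _ in range(n))
        hits += total <= t
    return hits / runs

d, n, t = 4, 3, 3 * 2 ** 4   # t equal to the expected solving time n * 2^d
assert abs(zeta_ht(d, n, t) - simulated(d, n, t)) < 0.03
```

For \(n=1\) the formula collapses to the familiar \(1-(1-2^{-d})^t\), which is a convenient spot check.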

Proof of the adversary advantage We prove the adversary advantage in the random oracle model. For this, challenger \(\mathcal{C }\) simulates \(\mathcal{H }\) by flipping coins and playing the following game \(\mathbf{G }_0\) with adversary \(\mathcal{A }\):

  (1) The challenger \(\mathcal{C }\) runs \(\mathsf{Setup }\) on input \(1^k\); it will then flip coins to answer the queries of the adversary \(\mathcal{A }\),

  (2) The adversary \(\mathcal{A }\) is allowed to ask \(\mathsf{OGenSolve }, \;\mathsf{OTest },\; \mathsf{ComputeHash }\), which \(\mathcal{C }\) answers as follows:

  • on \(\mathsf{OGenSolve }\), challenger \(\mathcal{C }\) picks \(r \in \{0,\, 1\}^k\), checks if \(r\) is present on its tape and stores it if not, then randomly chooses a solution \(\mathsf{sol }\) and returns the pair \(\{r, \,\mathsf{sol }\}\),

  • on \(\mathsf{OTest }\), challenger \(\mathcal{C }\) queries itself \(\mathsf{OGenSolve }\) but marks its answers and solutions as \(\{ (r^{\diamondsuit }_1,\, \mathsf{sol }^{\diamondsuit }_1),\; (r^{\diamondsuit }_2,\mathsf{sol }^{\diamondsuit }_2)\),..., \((r^{\diamondsuit }_n,\,\mathsf{sol }^{\diamondsuit }_{n})\}\) and returns just \(\{ r^{\diamondsuit }_1,\; r^{\diamondsuit }_2,\ldots , r^{\diamondsuit }_{n}\}\),

  • on \(\mathsf{ComputeHash }\), challenger \(\mathcal{C }\) simulates \(\mathcal{H }\) to the adversary \(\mathcal{A }\), that is, he receives \((r, \,\mathsf{sol })\) from adversary \(\mathcal{A }\), checks if \((r, \,\mathsf{sol })\) was not already queried and, if not, flips coins to get \(y\), stores the triple \((r, \,\mathsf{sol }, y)\) on its tape, then returns \(y\) to \(\mathcal{A }\),

  (3) At any point the adversary \(\mathcal{A }\) can stop the game by sending \(\mathcal{C }\) a set of pairs \(\{ (r^{\diamondsuit }_1,\, \mathsf{sol }^{\diamondsuit }_1),\; (r^{\diamondsuit }_2,\,\mathsf{sol }^{\diamondsuit }_2),\ldots , (r^{\diamondsuit }_n,\mathsf{sol }^{\diamondsuit }_{n})\}\),

  (4) When challenger \(\mathcal{C }\) receives \(\{ (r^{\diamondsuit }_1, \mathsf{sol }^{\diamondsuit }_1),\; (r^{\diamondsuit }_2,\mathsf{sol }^{\diamondsuit }_2),\ldots ,(r^{\diamondsuit }_n,\mathsf{sol }^{\diamondsuit }_{n})\}\) he checks that each of \(\{ r^{\diamondsuit }_1,\; r^{\diamondsuit }_2,\ldots , r^{\diamondsuit }_{n}\}\) is stored on its tape and, for each solution, that the last \(d\) bits of \(y\) in \(\{r, \,\mathsf{sol }, y\}\) are zero. If a triple \(\{r, \,\mathsf{sol }, y\}\) such that the last \(d\) bits of \(y\) are zero is not present on the tape, then challenger \(\mathcal{C }\) flips coins one more time to get a new \(y\) and accepts the solution if \(y\) ends in \(d\) zeros (note that these values are not stored on the tape). If all these checks hold then challenger \(\mathcal{C }\) outputs one, otherwise it outputs zero.

Remark 19

For correct simulation of \(\mathsf{OGenSolve }\) the length \(l\) of the correct answer should be chosen according to the probability distribution of the lengths for a particular difficulty level, i.e., \(\Pr [l]=(1-(1-2^{-d})^{2^l})(1-2^{-d})^{2^{l-1}}\).

Let \(\mathbf{G }_1\) be the same as \(\mathbf{G }_0\) with the following difference: on \(\mathsf{OGenSolve }\), challenger \(\mathcal{C }\) picks \(r \in \{0,\,1\}^k\), checks if \(r\) is present on its tape and aborts if so; otherwise it continues as in \(\mathbf{G }_0\) by storing the values then sending them to \(\mathcal{A }\). We have: \( \Bigl | \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_0\bigr ] - \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_1 \bigr ]\Bigr | \le \frac{q^2_{\mathsf{Gen }}}{2^{k+1}} \).

We now bound the adversary advantage in \(\mathbf{G }_1\). At the end of the game, challenger \(\mathcal{C }\) inspects his tape and sets \(t\) as the number of queries made to \(\mathsf{ComputeHash }\) that have an \(r^{\diamondsuit }_i,\; \forall i \in \{1, n\}\), as input. Let \(E_i\) denote the event that for \(i\) of the puzzles a pair \(\{r^{\diamondsuit },\; \mathsf{sol }^{\diamondsuit },\; y\}\) where \(y\) ends in \(d\) zeros is not present on the tape. Obviously, there are \(n+1\) possible outcomes of \(\mathbf{G }_1\): \(E_0,\; E_1,\ldots ,E_n\). In each \(E_i\), let \(\Pr \bigl [\mathcal{A } \text{ wins } E_i \bigr ]\) denote the probability that the adversary has the correct answers for \(n-i\) of the puzzles; winning additionally requires guessing the output for the remaining \(i\), which happens with probability \(2^{-id}\) since the adversary never queried \(\mathcal{H }\) to get a correct output for those. We have:

$$\begin{aligned} \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_1 \bigr ]&= \Pr \bigl [\mathcal{A } \text{ wins } E_0\bigr ] + \frac{1}{2^d} \cdot \Pr \bigl [\mathcal{A } \text{ wins } E_1\bigr ] + \frac{1}{2^{2d}} \cdot \Pr \bigl [\mathcal{A } \text{ wins } E_2\bigr ] \\&\quad +\cdots + \frac{1}{2^{nd}} \cdot \Pr \bigl [\mathcal{A } \text{ wins } E_n\bigr ] = \zeta ^{ HT }_{k, d, n}(t) + \frac{1}{2^d} \cdot \Pr \bigl [\mathcal{A } \text{ wins } E_1\bigr ] \\&\quad + \frac{1}{2^{2d}} \cdot \Pr \bigl [\mathcal{A } \text{ wins } E_2\bigr ] +\cdots + \frac{1}{2^{nd}} \cdot \Pr \bigl [\mathcal{A } \text{ wins } E_n\bigr ] \\&< \zeta ^{ HT }_{k, d, n}(t) + \frac{1}{2^d} + \frac{1}{2^{2d}} + \cdots + \frac{1}{2^{nd}} < \zeta ^{ HT }_{k, d, n}(t) + \frac{1}{2^d-1} \end{aligned}$$

By elementary calculations it follows that: \(\mathsf{Win }^{\mathsf{HashTrail }}_{\mathcal{A }, k, d , n}(q_{\mathsf{Gen }}, t) \le \Bigl | \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_0\bigr ] - \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_1 \bigr ]\Bigr | + \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_1\bigr ] \le \zeta ^{ HT }_{k, d, n}(t) + \frac{1}{2^d-1} + \frac{q^2_{\mathsf{Gen }}}{2^{k+1}}\). The puzzle is thus optimal, since \(\epsilon ^{ HT }_{k, d, n}(t) \le \zeta ^{ HT }_{k, d, n}(t) + \frac{1}{2^d-1} + \frac{q^2_{\mathsf{Gen }}}{2^{k+1}}\) and \( \frac{1}{2^d-1} + \frac{q^2_{\mathsf{Gen }}}{2^{k+1}}\) is negligible in \(d\) and \(k\), respectively. We now prove that the puzzle is difficulty preserving, which is trivial. For \(n=1\) it is easy to prove that \(\mathrm{t}_{\mathrm{avr}}(k,1,d) = 2^d\). This is straightforward since:

$$\begin{aligned} \mathrm{t}_{\mathrm{avr}}(k,1,d)&= \sum \limits _{i=1, \infty } i \cdot \frac{1}{2^{d}} \cdot \left( 1-\frac{1}{2^d} \right) ^{i-1} = \frac{1}{2^{d}} \cdot \sum \limits _{i=1, \infty } i \cdot \left( 1-\frac{1}{2^d} \right) ^{i-1}\\&= \frac{1}{2^{d}} \cdot \mathop {\lim }\limits _{i \rightarrow \infty } \frac{ i \cdot \left( 1 - \frac{1}{2^d} \right) ^{i-1} \cdot \left( - \frac{1}{2^d} \right) - \left( 1 - \frac{1}{2^d} \right) ^i + 1 }{\frac{1}{2^{2d}}} = 2^d \end{aligned}$$
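The convergence of this series to \(2^d\) can be checked numerically by truncating it; this is a quick sanity check of our own, not part of the proof:

```python
d = 4
p = 1.0 / 2 ** d
# truncated expectation of a Geometric(p) random variable on {1, 2, ...};
# the tail beyond 3000 terms is negligible for p = 1/16
mean = sum(i * p * (1 - p) ** (i - 1) for i in range(1, 3000))
assert abs(mean - 2 ** d) < 1e-9   # matches t_avr(k, 1, d) = 2^d
```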

We now want to show that \(n \cdot \mathrm{t}_{\mathrm{avr}}(k,1,d) =\mathrm{t}_{\mathrm{avr}}(k,n,d)\). By definition we have \(\zeta ^{ HT }_{k, d , n}(t) = \sum \nolimits _{i=n, t} \left( {\begin{array}{c}i-1\\ n-1\end{array}}\right) \cdot \frac{1}{2^{nd}} \cdot \left( 1-\frac{1}{2^d} \right) ^{i-n}\). Thus it follows:

$$\begin{aligned} \mathrm{t}_{\mathrm{avr}}(k,n,d) \!=\! \sum \limits _{i=n, \infty } i \cdot \left( \zeta ^{ HT }_{k, d , n}(t) - \zeta ^{ HT }_{k, d , n}(t-1)\right) \!=\! \sum \limits _{i=n, \infty } i \cdot \left( {\begin{array}{c}i-1\\ n-1\end{array}}\right) \cdot \frac{1}{2^{nd}} \cdot \left( 1-\frac{1}{2^d}\right) ^{i-n} \end{aligned}$$

Recall that \(\left( {\begin{array}{c}i\\ j\end{array}}\right) =\left( {\begin{array}{c}i-1\\ j-1\end{array}}\right) +\left( {\begin{array}{c}i-1\\ j\end{array}}\right) \) and write

$$\begin{aligned} \mathrm{t}_{\mathrm{avr}}(k,n,d)&= \sum \limits _{i=n, \infty } i \cdot \left[ \left( {\begin{array}{c}i-2\\ n-2\end{array}}\right) + \left( {\begin{array}{c}i-2\\ n-1\end{array}}\right) \right] \cdot \frac{1}{2^{nd}} \cdot \left( 1-\frac{1}{2^d}\right) ^{i-n} \\&= \frac{1}{2^{d}} \cdot \underbrace{\sum \limits _{i=n, \infty } i \cdot \left( {\begin{array}{c}i-2\\ n-2\end{array}}\right) \cdot \frac{1}{2^{(n-1)d}} \cdot \left( 1-\frac{1}{2^d} \right) ^{i-n}}_{ \mathrm{t}_{\mathrm{avr}}(k,n-1,d) + \underbrace{\sum \limits _{i=n, \infty } \left( {\begin{array}{c}i-2\\ n-2\end{array}}\right) \cdot \frac{1}{2^{(n-1)d}} \cdot \left( 1-\frac{1}{2^d} \right) ^{i-n}}_{\epsilon _{k, d , n-1}(\infty ) = 1}} \\&\quad + \left( 1-\frac{1}{2^d} \right) \cdot \underbrace{ \sum \limits _{i=n, \infty } i \cdot \left( {\begin{array}{c}i-2\\ n-1\end{array}}\right) \cdot \frac{1}{2^{nd}} \cdot \left( 1-\frac{1}{2^d}\right) ^{i-n-1}}_{ \mathrm{t}_{\mathrm{avr}}(k,n,d) + \underbrace{\sum \limits _{i=n, \infty } \left( {\begin{array}{c}i-2\\ n-2\end{array}}\right) \cdot \frac{1}{2^{(n-1)d}} \cdot \left( 1-\frac{1}{2^d} \right) ^{i-n}}_{\epsilon _{k, d , n}(\infty ) = 1}} \end{aligned}$$

Multiply with \(2^d\) to get \(\mathrm{t}_{\mathrm{avr}}(k,n,d) = \mathrm{t}_{\mathrm{avr}}(k,n-1,d) + 2^d\) from which by recurrence we have \(\mathrm{t}_{\mathrm{avr}}(k,n,d) = n \cdot \mathrm{t}_{\mathrm{avr}}(k,1,d)\) which completes the proof.

1.2 B.2 Proof of Theorem 2

Proof of the solving bound Algorithm \(\mathsf{Find }\) solves \(n\) instances in exactly \(t\) steps for any set \(\{t_1, t_2,\ldots , t_n\}\) such that \(t = t_1 + t_2 +\cdots + t_n\) and \(1 \le t_i \le 2^d\), where each \(t_i\) denotes the exact number of queries made to solve the \(i\)th puzzle. The number of such sets is given by the restricted compositions of the integer \(t\) into \(n\) parts; in general, \([z^i](z \cdot \frac{1-z^{2^d}}{1-z})^n\) counts the ways of writing \(i\) as an ordered sum of \(n\) terms, each at most \(2^d\). Each such composition has probability \(\frac{1}{2^{n d}}\), thus the bound for \(\mathsf{Find }\) follows:

$$\begin{aligned} \zeta ^{ HI }_{k, d, n}(t) = \sum \limits _{i=n, t} [z^i] \left( z \cdot \frac{1-z^{2^d}}{1-z} \right) ^n \cdot \frac{1}{2^{n d}} \end{aligned}$$
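The coefficient extraction \([z^i](z \cdot \frac{1-z^{2^d}}{1-z})^n\) is just a count of restricted compositions and can be computed by repeated polynomial multiplication. The sketch below (helper names are ours) also confirms, numerically, the observation used later in the proof that every coefficient divided by \(2^{nd}\) is at most \(2^{-d}\):

```python
def restricted_compositions(d: int, n: int):
    """Coefficients of (z + z^2 + ... + z^{2^d})^n: entry i counts the ordered
    ways of writing i as a sum of n parts, each between 1 and 2^d."""
    block = [0] + [1] * (2 ** d)          # the polynomial z + z^2 + ... + z^{2^d}
    poly = [1]
    for _ in range(n):
        out = [0] * (len(poly) + len(block) - 1)
        for a, ca in enumerate(poly):
            for b, cb in enumerate(block):
                out[a + b] += ca * cb
        poly = out
    return poly

def zeta_hi(d: int, n: int, t: int) -> float:
    """Pr[Find solves n hash-inversion puzzles within t queries]."""
    coeffs = restricted_compositions(d, n)
    top = min(t, len(coeffs) - 1)
    return sum(coeffs[n:top + 1]) / 2 ** (n * d)

d, n = 3, 2
# after the maximal n * 2^d queries, all n puzzles are certainly solved
assert abs(zeta_hi(d, n, n * 2 ** d) - 1.0) < 1e-12
# every normalized coefficient is at most 2^{-d}, as claimed in the proof
assert max(restricted_compositions(d, n)) / 2 ** (n * d) <= 2 ** (-d)
```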

Proof of the adversary advantage We prove the adversary advantage in the random oracle model. For this, challenger \(\mathcal{C }\) simulates \(\mathcal{H }\) by flipping coins and playing the following game \(\mathbf{G }_0\) with adversary \(\mathcal{A }\):

  (1) The challenger \(\mathcal{C }\) runs \(\mathsf{Setup }\) on input \(1^k\); it will then flip coins to answer the queries of the adversary \(\mathcal{A }\),

  (2) The adversary \(\mathcal{A }\) then starts to ask \(\mathsf{OGenSolve }, \;\mathsf{OTest },\; \mathsf{ComputeHash }\), which \(\mathcal{C }\) answers as follows:

  • on \(\mathsf{OGenSolve }\), challenger \(\mathcal{C }\) picks \(x \in \{0,\, 1\}^k\), flips coins to get \(y=\mathcal{H }(x)\), sets \(x^{\prime }\) as the first \(d\) bits of \(x\) and \(x^{\prime \prime }\) as the remaining bits, stores \(\{\mathsf{puz}=(x^{\prime \prime }, y), \mathsf{sol }=x^{\prime }\}\) on its tape and returns this value,

  • on \(\mathsf{OTest }\), challenger \(\mathcal{C }\) queries itself \(\mathsf{OGenSolve }\) but marks its answers and solutions as \(\{ (\mathsf{puz}^{\diamondsuit }_1,\, \mathsf{sol }^{\diamondsuit }_1),\; (\mathsf{puz}^{\diamondsuit }_2,\,\mathsf{sol }^{\diamondsuit }_2),\ldots , (\mathsf{puz}^{\diamondsuit }_n,\,\mathsf{sol }^{\diamondsuit }_{n})\}\) and returns just \(\{ \mathsf{puz}^{\diamondsuit }_1,\, \mathsf{puz}^{\diamondsuit }_2,\ldots , \mathsf{puz}^{\diamondsuit }_{n}\}\),

  • on \(\mathsf{ComputeHash }\), challenger \(\mathcal{C }\) simulates \(\mathcal{H }\) to the adversary \(\mathcal{A }\), that is, he receives \((\mathsf{sol }, x^{\prime \prime })\) from adversary \(\mathcal{A }\), inspects its tape to see if a triple \((\mathsf{sol }, x^{\prime \prime }, y)\) is present and returns \(y\) if so; otherwise it flips coins to get a \(y\), stores the triple \(\{\mathsf{sol }, x^{\prime \prime }, y\}\) on its tape, then returns \(y\) to \(\mathcal{A }\).

  (3) At any point the adversary \(\mathcal{A }\) can stop the game by sending \(\mathcal{C }\) a set of pairs \(\{ (\mathsf{puz}^{\diamondsuit }_1,\, \mathsf{sol }^{\diamondsuit }_1), \;(\mathsf{puz}^{\diamondsuit }_2,\,\mathsf{sol }^{\diamondsuit }_2),\ldots , (\mathsf{puz}^{\diamondsuit }_n,\,\mathsf{sol }^{\diamondsuit }_{n})\}\),

  (4) When challenger \(\mathcal{C }\) receives \(\{ (\mathsf{puz}^{\diamondsuit }_1,\, \mathsf{sol }^{\diamondsuit }_1),\; (\mathsf{puz}^{\diamondsuit }_2,\,\mathsf{sol }^{\diamondsuit }_2),\ldots , (\mathsf{puz}^{\diamondsuit }_n,\,\mathsf{sol }^{\diamondsuit }_{n})\}\) he checks that each of \(\{ \mathsf{puz}^{\diamondsuit }_1,\, \mathsf{puz}^{\diamondsuit }_2,\ldots , \mathsf{puz}^{\diamondsuit }_{n}\}\) is on its tape and, for each puzzle and solution, that a triple \((\mathsf{sol }, x^{\prime \prime }, y)\) is present on its tape. If these checks hold then challenger \(\mathcal{C }\) outputs one, otherwise it outputs zero.

Let \(\mathbf{G }_1\) be the same as \(\mathbf{G }_0\) with the following difference: on \(\mathsf{OGenSolve }\), challenger \(\mathcal{C }\) picks \(x \in \{0,\,1\}^k\), checks if \(x^{\prime \prime }\) is already present on its tape and aborts if so; otherwise it continues as in \(\mathbf{G }_0\) by storing the values then sending them to \(\mathcal{A }\). We have:

$$\begin{aligned} \Bigl | \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_0\bigr ] - \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_1 \bigr ]\Bigr | \le \frac{q^2_{\mathsf{Gen }}}{2^{k-d+1}}. \end{aligned}$$

We now bound the adversary advantage in \(\mathbf{G }_1\). At the end of \(\mathbf{G }_1\) challenger \(\mathcal{C }\) inspects his tape and sets \(t\) as the number of queries made to \(\mathsf{ComputeHash }\) that have a target puzzle as input.

Let \(E_i\) denote the event that for \(i\) of the puzzles the solution is not present in the tape records from \(\mathsf{ComputeHash }\). Obviously, there are \(n+1\) possible outcomes of \(\mathbf{G }_1\): \(E_0,\, E_1,\ldots ,E_n\). In each \(E_i\) it must be that the missing solutions were guessed and passed the verification of the challenger. Note that in each \(E_i\) the challenger has performed exactly \(i\) more queries to \(\mathcal{H }\), and the probability to get a correct solution in exactly \(t+i\) queries is \([z^{t+i}](z \cdot \frac{1-z^{2^d}}{1-z})^n \cdot \frac{1}{2^{n d}}\).

We make the following relevant observation:

$$\begin{aligned}{}[z^{i}] \left( z \cdot \frac{1-z^{2^d}}{1-z} \right) ^n \cdot \frac{1}{2^{n d}} \le \frac{1}{2^d}, \quad \forall i \in [n\ldots n\cdot 2^d] \end{aligned}$$

This is easy to prove. Note that for \(n=1\) it holds since each coefficient is exactly \(2^{-d}\). Now proceed by induction: assume the bound holds for \(n-1\) and prove that it holds for \(n\). We have:

$$\begin{aligned}{}[z^{i}]\left( z \cdot \frac{1-z^{2^d}}{1-z}\right) ^n \cdot \frac{1}{2^{n d}} \!=\! [z^{i}]\left( \left( z \cdot \frac{1-z^{2^d}}{1-z}\right) ^{n-1} \cdot \frac{1}{2^{(n-1) d}} \cdot \left( z+z^2\!+\!\cdots \!+\!z^{2^d} \right) \cdot \frac{1}{2^{d}}\right) \end{aligned}$$

Note that this last product has on the left-hand side the coefficients for \(n-1\), which are all at most \(2^{-d}\); due to the multiplication by the right-hand side, each resulting coefficient is a sum of at most \(2^d\) of these coefficients, multiplied by \(2^{-d}\), which again gives coefficients of at most \(2^{-d}\).

Thus we have:

$$\begin{aligned} \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_{1} \bigr ]&\le \zeta ^{ HI }_{k, d, n}(t) + [z^{t+1}] \left( z \cdot \frac{1-z^{2^d}}{1-z} \right) ^n \cdot \frac{1}{2^{n d}} + [z^{t+2}]\left( z \cdot \frac{1-z^{2^d}}{1-z}\right) ^n \cdot \frac{1}{2^{n d}} \\&\quad +\cdots + [z^{t+n}]\left( z \cdot \frac{1-z^{2^d}}{1-z}\right) ^n \cdot \frac{1}{2^{n d}} \\&\le \zeta ^{ HI }_{k, d, n}(t) + \frac{1}{2^{d}} + \frac{1}{2^d} + \frac{1}{2^{d}} +\cdots + \frac{1}{2^{d}} \le \zeta ^{ HI }_{k, d, n}(t) + \frac{n}{2^{d}} \end{aligned}$$

It follows that:

$$\begin{aligned} \mathsf{Win }^{\mathsf{HashInv }}_{\mathcal{A }, k, d , n}(q_{\mathsf{Gen }}, t)&= \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_0\bigr ] \\&= \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_0\bigr ] + \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_1\bigr ] - \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_1 \bigr ] \\&\le \Bigl | \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_0\bigr ] - \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_1 \bigr ]\Bigr | + \Pr \bigl [\mathcal{A } \text{ wins } \mathbf{G }_1\bigr ] \\&\le \zeta ^{ HI }_{k, d, n}(t) + \frac{n}{2^d} + \frac{q^2_{\mathsf{Gen }}}{2^{k-d+1}}. \end{aligned}$$

As previously, the puzzle is optimal since \(\epsilon ^{ HI }_{k, d, n}(t) \le \zeta ^{ HI }_{k, d, n}(t) + \frac{n}{2^d} + \frac{q^2_{\mathsf{Gen }}}{2^{k-d+1}}\) and the terms \(\frac{n}{2^d}\) and \(\frac{q^2_{\mathsf{Gen }}}{2^{k-d+1}}\) are negligible in \(d\) and \(k\), respectively.

To prove that the puzzle is difficulty preserving, note first that for one instance of the puzzle the average solving time is \(2^{ d -1}+1/2\). For \(n\) instances the average solving time is given by:

$$\begin{aligned} \mathrm{t}_{\mathrm{avr}}(k,n,d) = \sum \limits _{t=n,n\cdot 2^d}t \cdot \Big [\zeta ^{ HI }_{k, d, n}(t) - \zeta ^{ HI }_{k, d, n}(t-1) \Big ] \end{aligned}$$

Take the polynomial \(P(z)=z^n \cdot (1+z+z^2+\cdots +z^{2^d-1})^n\), differentiate it, and evaluate the derivative at \(z=1\). Notice that this gives exactly \(2^{nd} \cdot \mathrm{t}_{\mathrm{avr}}(k,n,d)\), and consequently \(\mathrm{t}_{\mathrm{avr}}(k,n,d)= n\cdot (2^{d-1}+1/2)\).
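This computation is easy to double-check numerically; the sketch below (Python, exact rational arithmetic, hypothetical helper name) recovers the same average directly from the coefficients of \(P(z)\):

```python
from fractions import Fraction

def avg_solving_time(d, n):
    # coefficients of P(z) = (z + z^2 + ... + z^(2^d))^n by repeated convolution
    base = [0] + [1] * (2 ** d)
    p = [1]
    for _ in range(n):
        new = [0] * (len(p) + len(base) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(base):
                new[i + j] += a * b
        p = new
    # E[T] = sum_t t * Pr[T = t] = P'(1) / 2^(n*d)
    return Fraction(sum(i * c for i, c in enumerate(p)), 2 ** (n * d))

for d in (1, 2, 3):
    for n in (1, 2, 3):
        assert avg_solving_time(d, n) == n * (2 ** (d - 1) + Fraction(1, 2))
```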

B.3 Proofs on the approximations of the bounds

The first bound requires us to compute:

$$\begin{aligned} \sum \limits _{i=n, t} \left( {\begin{array}{c}i-1\\ n-1\end{array}}\right) \cdot \frac{1}{2^{nd}} \cdot \left( 1-\frac{1}{2^d} \right) ^{i-n} = \sum \limits _{i=0, t-n} \left( {\begin{array}{c}i+n-1\\ n-1\end{array}}\right) \cdot \frac{1}{2^{nd}} \cdot \left( 1-\frac{1}{2^d} \right) ^{i}. \end{aligned}$$

By using the binomial identity \(\left( {\begin{array}{c}n\\ k\end{array}}\right) =\left( {\begin{array}{c}n\\ n-k\end{array}}\right) \) the term \(\left( {\begin{array}{c}i+n-1\\ n-1\end{array}}\right) \) can be rewritten as \(\left( {\begin{array}{c}i+n-1\\ i\end{array}}\right) \), and this is precisely the coefficient of \(x^i\) in the expansion of \((1+x+x^2+x^3+\cdots )^n\) (see [16], p. 208). While the series \((1+x+x^2+x^3+\cdots )^n\) ranges to infinity, the coefficient of \(x^i\) is determined only by the terms up to \(x^i\) and is thus already contained in \((1+x+x^2+\cdots +x^i)^n\). Now replace \(x\) with \(1-\frac{1}{2^d}\). Since all terms of the expansion are positive, each summand \(\left( {\begin{array}{c}i+n-1\\ n-1\end{array}}\right) x^i\) with \(i \le t-n\) is accounted for in the expansion of the truncated power of degree \(t-n\), and hence the whole sum is bounded by that power, i.e.,

$$\begin{aligned} \sum \limits _{i=0, t-n} \left( {\begin{array}{c}i+n-1\\ n-1\end{array}}\right) \cdot \left( 1-\frac{1}{2^d} \right) ^{i}&\le \left[ 1+\left( 1-\frac{1}{2^d} \right) ^1+\left( 1-\frac{1}{2^d} \right) ^2+\cdots +\left( 1-\frac{1}{2^d} \right) ^{t-n} \right] ^n \\&= \left[ \frac{ \left( 1- \frac{1}{2^d} \right) ^{t-n+1} - 1}{ -\frac{1}{2^d} } \right] ^n = \left[ 2^d - \frac{(2^{d}-1)^{t-n+1}}{2^{d(t-n)}} \right] ^n \end{aligned}$$

Multiplying the right-hand side by \(2^{-nd}\) we get:

$$\begin{aligned} \zeta ^{ HT }_{k, d, n}(t) = \sum \limits _{i=n, t} \left( {\begin{array}{c}i-1\\ n-1\end{array}}\right) \cdot \frac{1}{2^{nd}} \cdot \left( 1-\frac{1}{2^d} \right) ^{i-n} \le \left[ 1 - \left( 1-\frac{1}{2^{d}} \right) ^{t-n+1} \right] ^n \end{aligned}$$
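As a sanity check, this approximation (with exponent \(t-n+1\), the largest index of the shifted sum) can be verified with exact rational arithmetic; a small Python sketch with hypothetical helper names:

```python
from fractions import Fraction
from math import comb

def zeta_ht(d, n, t):
    # Pr[the n-th hash-trail success occurs within t queries], success prob 2^-d
    p = Fraction(1, 2 ** d)
    return sum(comb(i - 1, n - 1) * p ** n * (1 - p) ** (i - n)
               for i in range(n, t + 1))

def ht_bound(d, n, t):
    p = Fraction(1, 2 ** d)
    return (1 - (1 - p) ** (t - n + 1)) ** n

for d in (1, 2, 3):
    for n in (1, 2, 3):
        for t in range(n, 4 * 2 ** d):
            assert zeta_ht(d, n, t) <= ht_bound(d, n, t)
```

Note that for \(n=1\) the two sides coincide (both equal \(1-(1-2^{-d})^t\)), so the bound is tight there.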

To approximate the second difficulty bound we work with the coefficients of the polynomial \(\left( z \cdot \frac{1-z^{2^d}}{1-z}\right) ^n = z^n \left( z^0 + z^{1} +\cdots + z^{2^d-1} \right) ^n \). To compute the bound \(\zeta ^{ HI }_{k, d, n}(t)\) we are simply interested in the sum of the coefficients of \(\left( z^0 + z^{1} +\cdots + z^{2^d-1} \right) ^n\) up to the term in \(z^{t-n}\) (note that the multiplication by \(z^n\) shifts all these coefficients to the right by \(n\) positions, so we sum up to \(t-n\) rather than up to \(t\)). Each coefficient \([z^i] \left( z^0 + z^{1} +\cdots + z^{2^d-1} \right) ^n\) is at most the corresponding coefficient \(\left( {\begin{array}{c}i+n-1\\ n-1\end{array}}\right) \) of the untruncated series \((1+z+z^2+\cdots )^n\), and by the hockey-stick identity \(\sum \limits _{i=0, t-n} \left( {\begin{array}{c}i+n-1\\ n-1\end{array}}\right) = \left( {\begin{array}{c}t\\ n\end{array}}\right) \le \frac{t^n}{n!}\). Consequently, we can write:

$$\begin{aligned} \zeta ^{ HI }_{k, d, n}(t) = \sum \limits _{i=n, t} [z^i]\left( z \cdot \frac{1-z^{2^d}}{1-z}\right) ^n \cdot \frac{1}{2^{n d}} \le \left( {\begin{array}{c}t\\ n\end{array}}\right) \cdot \frac{1}{2^{n d}} \le \frac{1}{n!} \left( \frac{t}{2^{d}} \right) ^n \end{aligned}$$
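To cross-check the approximation of \(\zeta ^{ HI }\), the exact value can be computed by convolution and compared with the binomial sum \(\left( {\begin{array}{c}t\\ n\end{array}}\right) 2^{-nd}\) of the untruncated coefficients. A Python sketch (hypothetical helper names; the comparison holds with equality whenever \(t-n < 2^d\)):

```python
from fractions import Fraction
from math import comb

def zeta_hi(d, n, t):
    # exact Pr[n hash-inversion puzzles solved within t queries in total]:
    # sum of the first coefficients of (z + z^2 + ... + z^(2^d))^n over 2^(n*d)
    base = [0] + [1] * (2 ** d)
    p = [1]
    for _ in range(n):
        new = [0] * (len(p) + len(base) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(base):
                new[i + j] += a * b
        p = new
    return Fraction(sum(p[:t + 1]), 2 ** (n * d))

for d in (2, 3):
    for n in (1, 2, 3):
        for t in range(n, n * 2 ** d + 1):
            assert zeta_hi(d, n, t) <= Fraction(comb(t, n), 2 ** (n * d))
```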

C Proofs for limitations of practical schemes (Theorem 3)

Basic scheme Let \(\mathcal{R }(\lambda )\) denote the revenue function of the adversary, that is, the number of connections earned by the adversary given that he made \(\lambda \) requests to the server. In the case of PoW protocols, \(\mathcal{R }(\lambda )\) is bounded by the number of puzzles that the adversary correctly solved (not necessarily equal to \(\lambda \)). The number of requests from the adversary is upper bounded by \(\lambda _{\mathcal{A }}\), the maximum arrival rate of the adversary (limited by network parameters only), i.e., we have \(\lambda \in [0, \lambda _{\mathcal{A }}]\). Obviously, a DoS takes place if \(\mathcal{R }(\lambda ) > \theta _{ service }^{-1}\) since the server can handle at most \(\theta _{ service }^{-1}\) connections each second. It also holds that \(\mathcal{R }(\lambda )\le \lambda _{\mathcal{A }}\) since the adversary cannot get more connections than he requested. Clearly, while \(\lambda _{\mathcal{A }}\) is limited by network parameters only, \(\mathcal{R }(\lambda )\) is also limited by the number of puzzles he was able to solve.

Now let \(\mathcal{R }_{\max }\) denote the maximum number of connections that the adversary can get, given the limits of his computational resources. A misleading intuition is that the number of connections granted to the adversary is upper bounded by \(\mathcal{R }_{\max } = \dfrac{\pi _{\mathcal{A }}}{d_{\mathsf{init }}}\) (the maximum number of puzzles that he is able to solve at each instant of time). But by careful inspection of Definition 10, the difficulty bound includes the puzzle lifetime \(t_{ puz }\) and the correct bound is \(\mathcal{R }_{\max } = \dfrac{\pi _{\mathcal{A }} + t_{ puz }\pi _{\mathcal{A }}}{d_{\mathsf{init }}}\) (since all puzzles computed during \(t_{ puz }\) can be used as well to gain connections). But the puzzle lifetime \(t_{ puz }\) must be bigger than the time a client needs to solve the puzzle, i.e., \(t_{ puz }> d_{\mathsf{init }} \pi _{\mathcal{C }}^{-1}\), since otherwise clients are unable to solve the puzzles and cannot get connections anyway. Thus \(\mathcal{R }_{\max } > \dfrac{\pi _{\mathcal{A }}}{d_{\mathsf{init }}} + \dfrac{\pi _{\mathcal{A }}}{\pi _{\mathcal{C }}}\). It follows that for any number of requests from the adversary (up to the maximum number of connections that he can get due to the limitations of his computational resources) the revenue function is \(\mathcal{R }(\lambda ) = \lambda , \; \text{ if }\; \lambda \in \left[ 0, \dfrac{\pi _{\mathcal{A }}}{d_{\mathsf{init }}} + \dfrac{\pi _{\mathcal{A }}}{\pi _{\mathcal{C }}} \right] \). This means that the number of connections granted to the adversary drops as the difficulty of the puzzle increases, but it never drops below \(\dfrac{\pi _{\mathcal{A }}}{\pi _{\mathcal{C }}}\), since: \( \lim _{d_{\mathsf{init }} \rightarrow +\infty } \mathcal{R }(\lambda ) = \dfrac{\pi _{\mathcal{A }}}{\pi _{\mathcal{C }}} \).
Accordingly, the adversary can always get at least \(\pi _{\mathcal{A }} \cdot \pi _{\mathcal{C }}^{-1}\) connections, regardless of the puzzle difficulty level, and the DoS condition is met when \(\pi _{\mathcal{A }} \cdot \pi _{\mathcal{C }}^{-1} \ge \theta _{ service }^{-1}\). Obviously \(\pi _{\mathcal{A }} \cdot \pi _{\mathcal{C }}^{-1}\) is the minimum number of connections gained by the adversary, and this floor is approached as soon as \(d_{\mathsf{init }} > \pi _{\mathcal{A }}\).
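To illustrate the floor numerically, assume (purely hypothetically) \(\pi _{\mathcal{A }} = 10^9\) and \(\pi _{\mathcal{C }} = 10^7\) hash operations per second; the values below are illustrative stand-ins, not measurements from the paper:

```python
pi_A = 1e9  # hypothetical adversary rate (hashes per second)
pi_C = 1e7  # hypothetical honest-client rate (hashes per second)

def r_max_lower_bound(d_init):
    # R_max > pi_A / d_init + pi_A / pi_C: puzzles solvable per second,
    # including those precomputed during the minimal puzzle lifetime
    return pi_A / d_init + pi_A / pi_C

floor = pi_A / pi_C  # 100 connections per second, independent of difficulty
prev = float("inf")
for d_init in (1e6, 1e9, 1e12, 1e15):
    bound = r_max_lower_bound(d_init)
    assert floor < bound < prev  # drops with difficulty, never below the floor
    prev = bound
```

With these (hypothetical) rates, no difficulty setting pushes the adversary below 100 connections per second.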

Filtering scheme Having \(\lambda \in [0, \lambda _{\mathcal{A }}]\) we build an adversary which gains connections faster than the inverse of the service time. Following the previous observation that the puzzles solved by the adversary include those solved during the puzzle lifetime, we use the notation \(\widetilde{\pi }_{\mathcal{A }} = \pi _{\mathcal{A }} + \pi _{\mathcal{A }} \dfrac{ d _{\mathcal{C }} }{\pi _{\mathcal{C }}}\) to denote a more accurate bound on the computational resources of the adversary. If \(\lambda (\beta d _{\mathcal{C }} + (1-\beta ) d _{\mathcal{A }}) < \widetilde{\pi }_{\mathcal{A }}\) the total number of PoWs received by the adversary does not exceed his computational resources; consequently, he solves all PoWs that he receives. If this is not the case, then he solves all the \(\beta \lambda \) easier PoWs (received due to the false negative rate of the filter) and uses his remaining resources to solve \((\widetilde{\pi }_{\mathcal{A }} - \beta \lambda d _{\mathcal{C }})/ d _{\mathcal{A }}\) harder PoWs. This holds as long as \(\beta \lambda < \widetilde{\pi }_{\mathcal{A }}/ d _{\mathcal{C }}\); beyond this point he solves only the easier PoWs. The revenue function of our adversary is the following:

$$\begin{aligned} \mathcal{R }(\lambda ) = {\left\{ \begin{array}{ll} \lambda , &{} \text{ if }\quad \lambda < \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }} + (1-\beta ) d _{\mathcal{A }}} \\ \dfrac{\widetilde{\pi }_{\mathcal{A }} - \beta \lambda d _{\mathcal{C }}}{ d _{\mathcal{A }}} + \beta \lambda , &{} \text{ if }\quad \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }} + (1-\beta ) d _{\mathcal{A }}} \le \lambda < \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }}} \\ \dfrac{\widetilde{\pi }_{\mathcal{A }}}{ d _{\mathcal{C }}}, &{} \text{ if }\quad \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }}} \le \lambda \end{array}\right. } \end{aligned}$$

But \(\lambda \) is limited by \(\lambda _{\mathcal{A }}\). Since \(\lambda _{\mathcal{A }} > \theta _{ service }^{-1}\) (indeed, for a DoS attack to make sense, the adversary arrival rate must exceed the number of connections that can be handled by the server) and \(\pi _{\mathcal{A }} \cdot \pi _{\mathcal{C }}^{-1} \ge \theta _{ service }^{-1}\), in the first and third cases of \(\mathcal{R }(\lambda )\) the filter clearly cannot help. In the second case, even if the difficulty \( d _{\mathcal{A }}\) is increased, the adversary still solves at least \(\beta \lambda \) PoWs, so this does not help either when \(\beta \lambda > \theta _{ service }^{-1}\).
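The case analysis above can be written out as a piecewise function; the sketch below uses hypothetical stand-in values for \(\widetilde{\pi }_{\mathcal{A }}\), \(\beta \), \( d _{\mathcal{C }}\), \( d _{\mathcal{A }}\) and checks that the three branches agree at their boundaries:

```python
pi_t = 1e9   # adjusted adversary resources (tilde pi_A), hypothetical
beta = 0.1   # filter false-negative rate, hypothetical
d_C = 1e4    # difficulty served to presumed clients, hypothetical
d_A = 1e7    # difficulty served to presumed attackers, hypothetical

def revenue(lam):
    # adversary revenue under the filtering scheme: solve everything while
    # resources suffice, then all easy PoWs plus leftover budget on hard
    # ones, and finally easy PoWs only
    if lam < pi_t / (beta * d_C + (1 - beta) * d_A):
        return lam
    if lam < pi_t / (beta * d_C):
        return (pi_t - beta * lam * d_C) / d_A + beta * lam
    return pi_t / d_C

# the branches are continuous at the two thresholds
lam1 = pi_t / (beta * d_C + (1 - beta) * d_A)
lam2 = pi_t / (beta * d_C)
assert abs(revenue(lam1) - lam1) < 1e-6 * lam1
assert abs(revenue(lam2) - pi_t / d_C) < 1e-6 * (pi_t / d_C)
```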

Filtering scheme with hidden difficulty puzzles We use a similar adversary as previously, except that he cannot choose to solve only the easier PoWs (of the clients) since the difficulty is hidden. However, he can choose to invest only up to \( d _{\mathcal{C }}\) in each puzzle and abandon it if a solution is not found. This gives:

$$\begin{aligned} \mathcal{R }(\lambda ) = {\left\{ \begin{array}{ll} \lambda , &{} \text{ if }\quad \lambda < \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }} + (1-\beta ) d _{\mathcal{A }}} \\ \dfrac{\widetilde{\pi }_{\mathcal{A }} - \lambda d _{\mathcal{C }}}{ d _{\mathcal{A }} - d _{\mathcal{C }}} + \beta \lambda , &{} \text{ if }\quad \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }} + (1-\beta ) d _{\mathcal{A }}} \le \lambda < \dfrac{\widetilde{\pi }_{\mathcal{A }}}{ d _{\mathcal{C }}} \\ \beta \dfrac{\widetilde{\pi }_{\mathcal{A }}}{ d _{\mathcal{C }}}, &{} \text{ if }\quad \dfrac{\widetilde{\pi }_{\mathcal{A }}}{ d _{\mathcal{C }}} \le \lambda \end{array}\right. } \end{aligned}$$

As previously, the first and second branches cannot help. In the third branch, the adversary again gets more connections than the inverse of the service time if \(\beta \dfrac{\widetilde{\pi }_{\mathcal{A }}}{ d _{\mathcal{C }}} > \theta _{ service }^{-1}\).

Cascade scheme In this case, the difficulty of each client PoW is added to the difficulty of the initial PoW. This also modifies \(\widetilde{\pi }_{\mathcal{A }}\) to \(\widetilde{\pi }_{\mathcal{A }} = \pi _{\mathcal{A }} + \pi _{\mathcal{A }} \dfrac{ d _{\mathcal{C }} + d _{\mathsf{init }}}{\pi _{\mathcal{C }}}\). The revenue of the adversary follows as:

$$\begin{aligned} \mathcal{R }(\lambda ) \!=\! {\left\{ \begin{array}{ll} \lambda , &{} \text{ if }\quad \lambda < \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }} + (1-\beta ) d _{\mathcal{A }} + d _{\mathsf{init }}} \\ \dfrac{\widetilde{\pi }_{\mathcal{A }} - \lambda \beta ( d _{\mathcal{C }} + d _{\mathsf{init }}) }{ d _{\mathcal{A }}} + \beta \lambda , &{} \text{ if }\quad \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }} + (1-\beta ) d _{\mathcal{A }} + d _{\mathsf{init }}} \le \lambda \!<\! \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }} + d _{\mathsf{init }}} \\ \beta \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }} + d _{\mathsf{init }}}, &{} \text{ if }\quad \lambda > \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }} + d _{\mathsf{init }}} \end{array}\right. } \end{aligned}$$

On the first and second branches, the Cascade scheme fails to protect for the same reasons as previously. On the third branch we have:

$$\begin{aligned} \beta \dfrac{\widetilde{\pi }_{\mathcal{A }}}{\beta d _{\mathcal{C }} + d _{\mathsf{init }}} > \beta \dfrac{\widetilde{\pi }_{\mathcal{A }}}{ d _{\mathcal{C }} + d _{\mathsf{init }}} > \beta \dfrac{\pi _{\mathcal{A }} + \pi _{\mathcal{A }} \dfrac{ d _{\mathcal{C }} + d _{\mathsf{init }}}{\pi _{\mathcal{C }}}}{ d _{\mathcal{C }} + d _{\mathsf{init }}} > \beta \dfrac{\pi _{\mathcal{A }}}{\pi _{\mathcal{C }}} \end{aligned}$$

This again shows that if \( \beta \dfrac{\pi _{\mathcal{A }}}{\pi _{\mathcal{C }}} > \theta _{ service }^{-1}\) the scheme fails to protect.
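The chain of inequalities can be spot-checked numerically; the rates and difficulties below are illustrative stand-ins, not values from the paper:

```python
pi_A, pi_C = 1e9, 1e7   # hypothetical adversary / client rates (hashes per second)
beta = 0.1              # filter false-negative rate, hypothetical
d_C, d_init = 1e4, 1e5  # client and initial puzzle difficulties, hypothetical

pi_t = pi_A + pi_A * (d_C + d_init) / pi_C  # adjusted adversary resources
lhs = beta * pi_t / (beta * d_C + d_init)   # third-branch revenue
mid = beta * pi_t / (d_C + d_init)          # middle term of the chain
floor = beta * pi_A / pi_C                  # difficulty-independent floor

# the Cascade scheme cannot push the adversary's revenue below beta * pi_A / pi_C
assert lhs > mid > floor
```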

Cite this article

Groza, B., Warinschi, B. Cryptographic puzzles and DoS resilience, revisited. Des. Codes Cryptogr. 73, 177–207 (2014). https://doi.org/10.1007/s10623-013-9816-5
