Beyond Conventional Security in Sponge-Based Authenticated Encryption Modes
Abstract
The Sponge function is known to achieve \(2^{c/2}\) security, where c is its capacity. This bound was carried over to its keyed variants, such as SpongeWrap, to achieve a \(\min \{2^{c/2},2^\kappa \}\) security bound, with \(\kappa \) the key length. Similarly, many CAESAR competition submissions were designed to comply with the classical \(2^{c/2}\) security bound. We show that Sponge-based constructions for authenticated encryption can achieve the significantly higher bound of \(\min \{2^{b/2},2^c,2^\kappa \}\), with \(b>c\) the permutation size, by proving that the CAESAR submission NORX achieves this bound. The proof relies on rigorous computation of multicollision probabilities, which may be of independent interest. We additionally derive a generic attack based on multicollisions that matches the bound. We show how to apply the proof to five other Sponge-based CAESAR submissions: Ascon, CBEAM/STRIBOB, ICEPOLE, Keyak, and two out of the three PRIMATEs. A direct application of the result shows that the parameter choices of some of these submissions are overly conservative. Simple tweaks render the schemes considerably more efficient without sacrificing security. We finally consider the remaining one of the three PRIMATEs, APE, and derive a blockwise adaptive attack in the nonce-respecting setting with complexity \(2^{c/2}\), therewith demonstrating that the techniques cannot be applied to APE.
Keywords
Authenticated encryption · CAESAR · Ascon · CBEAM · ICEPOLE · Keyak · NORX · PRIMATEs · STRIBOB · Multicollisions
1 Introduction
Authenticated encryption schemes, cryptographic functions that aim to simultaneously provide data privacy and integrity, have gained renewed attention in light of the CAESAR competition [25]. A common approach to building such schemes is to design a block cipher mode of operation, as in CCM [95], OCB1–3 [55, 78, 79], EAX [14], GCM [57], COPA [5], OTR [63], AEZ [48], and SCT [72]. Nevertheless, a significant fraction of the CAESAR competition submissions use modes of operation for permutations.
Most of the permutation-based modes follow the basic Sponge design [16]: a state is maintained and regularly updated using a permutation. The state is divided into an outer part of r bits, through which the user enters or extracts data, and an inner part of c bits, which is out of the user’s control. The rate r determines how much plaintext can be processed per permutation call, which gives an estimate of the algorithm’s performance. Keccak, winner of the NIST SHA-3 competition and now standardized as SHA-3 [35], internally uses the Sponge construction. The Sponge design has also found adoption in the field of lightweight hash functions [24, 45].
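The outer/inner split described above can be sketched in a few lines of code. The following toy Sponge uses an invented 24-bit mixing function in place of a real permutation, with illustrative rate and capacity; all names and parameters here are our own and correspond to no actual submission.

```python
# Minimal Sponge sketch: b-bit state, outer rate r, inner capacity c.
# The "permutation" is a toy placeholder, NOT a cryptographic primitive.

R_BITS, C_BITS = 8, 16          # illustrative rate/capacity (r, c)
B_BITS = R_BITS + C_BITS        # state size b = r + c
MASK = (1 << B_BITS) - 1

def toy_perm(state: int) -> int:
    """Toy b-bit bijection standing in for the permutation p."""
    for _ in range(3):
        state = (state * 0x9E3779B1 + 0x7F4A7C15) & MASK  # odd multiplier: invertible
        state ^= state >> 7                               # xor-shift: invertible
    return state

def sponge_hash(message_blocks, out_blocks):
    """Absorb r-bit blocks into the outer part, then squeeze out_blocks r-bit blocks."""
    state = 0
    for m in message_blocks:                         # absorbing phase
        state ^= (m & ((1 << R_BITS) - 1)) << C_BITS  # XOR into the outer (leftmost r) bits
        state = toy_perm(state)
    out = []
    for _ in range(out_blocks):                      # squeezing phase
        out.append(state >> C_BITS)                  # only the outer r bits are released
        state = toy_perm(state)
    return out
```

Note how the inner c bits never leave the construction; an adversary only ever sees outer parts, which is exactly why inner collisions dominate the classical \(2^{c/2}\) analysis.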
Security of the Sponge construction as a hash function follows from the fact that the user can only affect the outer state, hence adversaries only succeed with significant probability if they make on the order of \(2^{c/2}\) permutation queries, as this many are needed to produce an inner state collision [16]. Bertoni et al. [17] proved tightness of this bound in the indifferentiability framework of Maurer et al. [56]. Keyed versions of the Sponge construction, such as KeyedSponge [20] and SpongeWrap [19], are proven up to a similar bound of \(2^{c-a}\) (pseudorandom function security for the former and privacy and authenticity for the latter), assuming a limit of \(2^a\) on online complexity, but are additionally restricted by the key size \(\kappa \) to \(2^\kappa \). The permutation-based CAESAR candidates are no exception and recommend parameters based on either the \(2^{c/2}\) or \(2^{c-a}\) bound, as shown in Table 1.
1.1 Beyond Conventional Security
Contrary to intuition, a wide range of permutation-based authenticated encryption schemes actually achieves significantly higher mode security: the privacy and authenticity bound on the total complexity can be improved from \(\min \{2^{c/2},2^\kappa \}\) to \(\min \{2^{(r+c)/2},2^c,2^\kappa \}\). Intuitively, the improvement demonstrates that, in the nonce-respecting setting, inner collisions are not relevant to the adversary; only full state collisions are. We remark that in the nonce-reuse scenario [37, 80] the privacy of the scheme can be broken [19], and for authenticity the old bounds hold at best.
The main proof in this work concerns NORX mode v1 and v2 [7, 8], but we demonstrate its applicability to the CAESAR submissions Ascon v1 and v1.1 [33, 34], CBEAM v1 [83, 84],^{1} ICEPOLE v1 and v2 [65, 66], Keyak v1 [22],^{2} two out of three PRIMATEs v1 and v1.02 [2, 3], and STRIBOB v1 and v2 [81, 85, 86].^{3} Additionally, we note that it directly applies to SpongeWrap [19] and DuplexWrap [22], upon which Keyak v1 is built.
Our results imply that the initial submissions of these CAESAR candidates were overly conservative in choosing their parameters, since reducing c would have led to the same bound. For instance, Ascon-128 could take \((c,r)=(128,192)\) instead of (256, 64), NORX64 (the proposed mode with 256-bit security) could increase its rate by 128 bits, and GIBBON-120 and HANUMAN-120 could increase their rate by a factor of 4, without affecting their mode security levels.
Table 1: Parameters and the achieved mode security levels of seven CAESAR submissions

Scheme          | Version    | b    | c   | r    | \(\kappa \) | \(\tau \) | Security
----------------|------------|------|-----|------|-------------|-----------|----------
Ascon           | v1 [33]    | 320  | 192 | 128  | 96          | 96        | 96
                |            | 320  | 256 | 64   | 128         | 128       | 128
                | v1.1 [34]  | 320  | 192 | 128  | 128         | 128       | 128
                |            | 320  | 256 | 64   | 128         | 128       | 128
CBEAM           | v1 [84]    | 256  | 190 | 66   | 128         | 64        | 128
ICEPOLE         |            | 1280 | 254 | 1026 | 128         | 128       | 128
                |            | 1280 | 318 | 962  | 256         | 128       | 256
Keyak           | v1 [22]    | 800  | 252 | 548  | 128..224    | 128       | 128..224
                |            | 1600 | 252 | 1348 | 128..224    | 128       | 128..224
NORX            | v1 [7]     | 512  | 192 | 320  | 128         | 128       | 128
                |            | 1024 | 384 | 640  | 256         | 256       | 256
                | v2 [8]     | 512  | 128 | 384  | 128         | 128       | 128
                |            | 1024 | 256 | 768  | 256         | 256       | 256
GIBBON/HANUMAN  |            | 200  | 159 | 41   | 80          | 80        | 80
                |            | 280  | 239 | 41   | 120         | 120       | 120
STRIBOB         |            | 512  | 254 | 258  | 192         | 128       | 192
1.2 Tightness of the Result
The earlier version of this article by Jovanovic et al. [53] had a security bound of the form \(\min \{2^{(r+c)/2},2^c/r,2^\kappa \}\): a security loss of a factor r, i.e., \(\log _2 r\) bits, relative to \(2^c\). This loss was, however, not justified by any existing attack; it arose as an artifact of naively bounding the probability of a multicollision occurring in the outer state, where multiple evaluations of the underlying primitive map to the same outer value.
In this article, we thoroughly analyze multicollisions and derive bounds on the size of multicollisions for various possible choices of r and c. Most importantly, we can conclude that if \(r\ll c\) or \(r\gg c\), multicollisions have no effect on the security. If \(r\approx c\), the security loss approaches \(\frac{1.4c}{\log _2 c - 2}\), as opposed to the factor r loss from [53]. We refer to Table 2 for a comprehensive description of the bound. Note that for all schemes in Table 1, \(r\ll c\) or \(r\gg c\).
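As a concrete illustration of the improvement (our own arithmetic, not a computation from the original analysis): for \(r\approx c=256\),
$$\begin{aligned} \frac{1.4c}{\log _2 c - 2}=\frac{1.4\cdot 256}{8-2}\approx 59.7, \end{aligned}$$
a multiplicative loss of less than \(2^6\), i.e., under 6 bits of security, whereas the factor-r loss of [53] would cost \(\log _2 256=8\) bits.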
The rigorous analysis of multicollisions relies on an application of Stirling’s approximation and the Lambert W function. It is not only applicable to Sponge-based modes. For example, there are quite a few cryptographic schemes that have been attacked using multicollisions, such as block-cipher-based hashing schemes [73], identification schemes [41], the JH hash function [58], the MDC-2 hash function [54], HMAC and ChopMD MAC [68], the LED block cipher [70], iterated Even–Mansour [32], and strengthened HMAC [88]. Multicollisions have also influenced various security upper bounds. Typical examples are the indifferentiability proof for the ChopMD construction [27], the collision resistance proof for the Lesamnta-LW hash function [46], and the indistinguishability proof for RMAC [52], where the bound is \(\mathcal {O}(2^n/n)\) due to the existence of n-collisions. The compression function proposed by Hirose et al. [47] has a similar type of bound. Finally, the recent line of research on the keyed Sponge and Duplex constructions [6, 18, 20, 26, 31, 38, 60, 69] strongly relies on “multiplicities.” Some of these security analyses can be improved using our rigorous analysis of multicollisions.
1.3 APE
One of the interesting questions triggered by the publication of [53] was regarding APE, the third of the PRIMATEs. In more detail, the schemes listed in Table 1 are proven to achieve a beyond-\(2^{c/2}\) security level against nonce-respecting adversaries, but the schemes are insecure against nonce-misusing adversaries. In contrast, APE is proven to achieve \(2^{c/2}\) security in the nonce-reuse scenario [4], and it is of interest to investigate what security guarantees APE offers against nonce-respecting adversaries. In this work, we include an analysis of APE in this setting and show that there exists a nonce-respecting blockwise adaptive adversary that can break the privacy with a total complexity of about \(2^{c/2}\). In other words, while APE is more robust against nonce-misusing adversaries up to common prefix, in the nonce-respecting setting the schemes listed in Table 1 achieve higher security. (We remark that the analysis in this work can be easily extended to the case of blockwise adaptive adversaries.)
1.4 Publication History and Subsequent Work
Compared to the earlier version of this article [53], the present version adds:
 (i)
a more rigorous analysis of multicollisions and the resulting improved security bound (Sect. 3),
 (ii)
the generic attack on Sponge-based authenticated encryption schemes demonstrating tightness of the bound (Sect. 5),
 (iii)
a proof that, unlike the schemes of Table 1, APE does not achieve beyond-\(2^{c/2}\) security in the nonce-respecting setting (Sect. 7).
In response to the observations made in [53], the designers of Ascon and NORX have reconsidered their parameter choices. The new parameter choices are also listed in Table 1 and testify to a significant security gain for Ascon v1.1 [34] without sacrificing efficiency, and a significant efficiency gain for NORX v2 [8] without sacrificing security. The adjustments will make the schemes faster and more competitive. Mihajloska et al. [61] recently generalized the analysis of [53] to the CAESAR submission \(\pi \)-Cipher [42, 43], which is structurally different from NORX in the way it maintains state: a so-called “common internal state” is used throughout the evaluation.
From a more general perspective, the work has triggered analysis in the direction of high-efficiency full-state keyed Duplexes [31, 60, 89]. The result of Mennink et al. [60] on the full-state keyed Duplex has triggered the designers of Keyak to perform a major revision of their scheme. In more detail, Keyak v2 [23] is built on top of the “Motorist” mode, an alternative to the full-state keyed Duplex that was analyzed by Daemen et al. [31]. We remark that the results on the full-state keyed Sponges and Duplexes are more general than the target design in this work. The most important difference between [31, 60] and our work is that we explicitly target nonce-based designs, and this allows for beyond-\(2^{c/2}\) security. The work has, to a certain extent, furthermore triggered the use of permutations for nonce-reuse-secure authenticated encryption schemes [29, 44, 59] beyond APE.
Parallel to the research on keyed Duplexes is the research on keyed Sponges, i.e., keyed versions of the Sponge that only aim for authenticity. Bertoni et al. [18] introduced the original keyed Sponge. Chang et al. [26] suggested putting the key in the inner part of the Sponge. Andreeva et al. [6] formalized and improved the analysis of the outer- and inner-keyed Sponges. The analysis was generalized to the full-state Sponge in [31, 38, 60, 69], following ideas that date back to the donkeySponge [21]. Beyond authentication (and encryption), keyed versions of the Sponge have found applications in reseedable pseudorandom sequence generation [18, 39].
1.5 Outline
We present our security model in Sect. 2. In Sect. 3, we perform an in-depth analysis of multicollisions with respect to Sponges. A security proof for NORX is derived in Sect. 4. Tightness of the bound is proven in Sect. 5. In Sect. 6, we show that the proof of NORX generalizes to other CAESAR submissions, as well as to SpongeWrap and DuplexWrap. We consider the security of APE against nonce-respecting adversaries in Sect. 7. The work is concluded in Sect. 8, where we also discuss possible generalizations to Artemia [1].
2 Security Model
For \(n\in \mathbb {N}\), let \(\mathsf {Perm}(n)\) denote the set of all permutations on n bits. When writing \(x\xleftarrow {{\scriptscriptstyle \$}}\mathcal {X}\) for some finite set \(\mathcal {X}\), we mean that x gets sampled uniformly at random from \(\mathcal {X}\). For \(x\in \{0,1\}^{n}\), and \(a,b\le n\), we denote by \([x]^a\) and \([x]_b\) the a leftmost and b rightmost bits of x, respectively. For tuples \((j,k),(j',k')\) we use lexicographical order: \((j,k)>(j',k')\) means that \(j>j'\), or \(j=j'\) and \(k>k'\).
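The slicing operators \([x]^a\) and \([x]_b\) can be mirrored on integers of a known bit width; the helper names below are ours, introduced purely for illustration.

```python
def left_bits(x: int, n: int, a: int) -> int:
    """[x]^a: the a leftmost bits of an n-bit string x."""
    return x >> (n - a)

def right_bits(x: int, b: int) -> int:
    """[x]_b: the b rightmost bits of x."""
    return x & ((1 << b) - 1)

# Example: x = 0b10110011 viewed as an 8-bit string.
x = 0b10110011
assert left_bits(x, 8, 3) == 0b101   # analogous to the outer part
assert right_bits(x, 4) == 0b0011    # analogous to the inner part
```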
We follow the convention in analyzing modes of operation for permutations by modeling the underlying permutations as being drawn uniformly at random from \(\mathsf {Perm}(b)\), where b is a parameter determined by the scheme.
An adversary \(\mathcal {A}\) is a probabilistic algorithm that has access to one or more oracles \(\mathcal {O}\), denoted \(\mathcal {A}^{\mathcal {O}}\). By \(\mathcal {A}^\mathcal {O}=1\) we denote the event that \(\mathcal {A}\), after interacting with \(\mathcal {O}\), outputs 1. We consider adversaries \(\mathcal {A}\) that have unbounded computational power and whose complexity is solely measured by the number of queries made to their oracles. These adversaries have query access to (i) the underlying idealized permutations, (ii) \(\mathcal {E}_K\) or its counterpart $, and possibly (iii) \(\mathcal {D}_K\). The key K is randomly drawn from \(\{0,1\}^{\kappa }\) at the beginning of the security experiment. The security definitions below follow [11, 37, 51, 77, 80].
2.1 Privacy
2.2 Integrity
3 Multi-Collisions
Consider the following game of balls and bins. Let \(R\ge 1\) be the number of bins and \(\sigma \) the number of balls. The \(\sigma \) balls are thrown uniformly at random into the R bins. By \(\mathsf {multcol}(R,\sigma ,\rho )\) we denote a \(\rho \)-collision, namely the event that there exists a bin that contains \(\rho \) or more balls after all \(\sigma \) balls are thrown.
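The balls-and-bins experiment can be simulated directly. The sketch below gives a Monte Carlo estimate of \(\mathbf {Pr}\left( \mathsf {multcol}(R,\sigma ,\rho )\right) \) under uniform throws; the function names, trial counts, and seeding are our own choices.

```python
import random

def has_rho_collision(R: int, sigma: int, rho: int, rng: random.Random) -> bool:
    """Throw sigma balls uniformly into R bins; report whether some bin receives >= rho balls."""
    counts = [0] * R
    for _ in range(sigma):
        b = rng.randrange(R)
        counts[b] += 1
        if counts[b] >= rho:
            return True
    return False

def estimate_multcol(R: int, sigma: int, rho: int, trials: int = 2000, seed: int = 1) -> float:
    """Monte Carlo estimate of Pr(multcol(R, sigma, rho))."""
    rng = random.Random(seed)
    hits = sum(has_rho_collision(R, sigma, rho, rng) for _ in range(trials))
    return hits / trials
```

For instance, with a single bin every throw lands in it, so `estimate_multcol(1, 5, 5)` is 1.0, while a \(\rho \) exceeding \(\sigma \) can never occur.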
Remark 1
An alternative approach to bounding the probability that \(\mathsf {multcol}(R,\sigma ,\rho )\) occurs is via the first and second moments, as done by Raab and Steger [74]. In detail, Raab and Steger demonstrate that \(\mathbf {Pr}\left( \mathsf {multcol}(R,\sigma ,\rho (R,\sigma ))\right) =o(1)\) for various parameter settings and choices of \(\rho \) as a function of R and \(\sigma \) [74, Theorem 1]. This approach, as well as the related approaches in the field of cryptography [10, 49], again does not fit our targeted upper bound.
3.1 Lambert W Function
3.2 Bounding Multi-Collision Probability
We will derive Sponge-oriented bounds for \(\rho \). In more detail, consider parameters b, r, c such that \(b=r+c\), write \(R=2^r\) and \(S=\min \{2^{b/2},2^c\}\). We will derive choices for \(\rho \) (depending on r and c) such that the probability of a multicollision of (1) is bounded by \(\sigma /S\).
Lemma 1
The proof of Lemma 1 is constructive, and the bounds for \(\rho \) are derived constructively rather than simply proven to hold. However, the reasoning is structurally different for the cases where \(r<c\) (cases (i)–(iv)) and for the cases where \(r\ge c\) (cases (v)–(vii)).
Proof of Lemma 1(i)–(iv)
 Case (i):
\(r \le c/5\). Since \(r\le c/5\), we have \(2^{(5r-c)/4}\le 1\). Therefore, the bound of (9) satisfies
$$\begin{aligned} \left( \frac{e}{\theta }\right) ^{\theta 2^{(c-r)/2}}\frac{2^{(5r-c)/4}}{\sqrt{\theta }}\frac{\sigma }{S} \le \left( \frac{e}{\theta }\right) ^{\theta 2^{(c-r)/2}}\frac{1}{\sqrt{e}}\frac{\sigma }{S}\le \left( \frac{e}{\theta }\right) ^{\theta 2^{(c-r)/2}}\frac{\sigma }{S}. \end{aligned}$$
(10)
We can choose the minimum \(\theta :=e=2.71\ldots \) so that \((e/\theta )^{\theta 2^{(c-r)/2}}=1\), which implies that (10) is upper bounded by \(\sigma /S\), as desired. The size of a multicollision is bounded by
$$\begin{aligned} \rho =\left\lceil e2^{(c-r)/2} \right\rceil . \end{aligned}$$
 Case (ii):
\(c/5 < r \le c-2\log _2 c\). If \(r>c/5\), then the factor \(2^{(5r-c)/4}\) in the bound (9) becomes larger than 1, and we need to somehow cancel this factor by increasing the value of \(\theta \). The factor \(\sqrt{\theta }\) is too small for this purpose, and hence we aim at the factor \((e/\theta )^{\theta 2^{(c-r)/2}}\). The following observation suggests that we need to increase the value of \(\theta \) by only a small amount, as long as \(r\le c-2\log _2 c\):
Claim
If \(r\le c-2\log _2 c\), then we have \(2^{(c-r)/2}\ge (5r-c)/4\).
Proof of claim
Direct computation yields \(2^{(c-r)/2}\ge 2^{\log _2 c}=c\ge (5r-c)/4\). \(\square \)
 Case (iii):
\(c-2\log _2 c < r \le c-2\log _2 c+7.2\). This is a technical case to bridge the gap between case (ii) and case (iv). The reason behind the constant 7.2 will become clear in the analysis of case (iv).
Claim
We have \(\varphi (\zeta )\le 0.495\le 1/2\).
Proof of claim
 Case (iv):
\(c-2\log _2 c+7.2< r < c\). The value of \(\theta \) needs to increase as r approaches c, and in general \(\theta \) cannot be bounded by a constant but is rather a function of r and c. The Lambert W function can handle such a case, yielding a fairly sharp bound.
Claim
Proof of claim
Proof of Lemma 1(v)–(vii)
 Case (v):
\(c \le r \le c + e\log _2 c - e\beta \). Consider bound (4). We have \(R=2^r\) and \(S=2^c\), and hence,
$$\begin{aligned} \mathbf {Pr}\left( \mathsf {multcol}(R,\sigma ,\rho )\right) \le \left( \frac{eS}{\rho R}\right) ^\rho \frac{R}{\sqrt{\rho }}\frac{\sigma }{S} =\left( \frac{e2^c}{\rho 2^r}\right) ^\rho \frac{2^r}{ \sqrt{\rho }}\frac{\sigma }{S} =\left( \frac{e}{\rho 2^{r-c}}\right) ^\rho \frac{2^r}{\sqrt{\rho }}\frac{\sigma }{S}. \end{aligned}$$
Put \(\varphi (\zeta ):=(e/\zeta 2^{r-c})^\zeta 2^r\), defined for real numbers \(\zeta \ge 2\). We see that \(\varphi (\zeta )\) is strictly decreasing, and at \(\zeta =2\) we have \(\varphi (2)=(e/2)^2 2^{2c-r}\), which is greater than 1 because \(2c\ge r\). So we would like to solve the equation \(\varphi (\zeta )=1\). Let \(\zeta _0\) be a solution of this equation. This means that \((\zeta _0 2^{r-c}/e)^{\zeta _0}=2^r\), which is equivalent to
$$\begin{aligned} \left( \frac{\zeta _0 2^{r-c}}{e}\right) ^{\zeta _0 2^{r-c}/e} =2^{r2^{r-c}/e}. \end{aligned}$$
(14)
To apply (7) to solve (14), set \(\xi =\zeta _0 2^{r-c}/e\) and \(d=2^{r2^{r-c}/e}\). We obtain
$$\begin{aligned} \frac{\zeta _0 2^{r-c}}{e}=e^{W(G)}, \end{aligned}$$
where \(G:=\ln 2^{r2^{r-c}/e}=r2^{r-c}(\ln 2)/e\). As \(r\ge c\ge 13\ge 11\), we have \(G\ge 11\cdot (\ln 2)/e=2.80\ldots \ge e\). Using (8),
$$\begin{aligned} \frac{\zeta _0 2^{r-c}}{e}=e^{W_{\mathrm {p}}(G)}\le \frac{(1+e^{-1})G}{\ln G}=\frac{(1+e^{-1})r2^{r-c}/e}{\log _2 r+r-c-\beta }, \end{aligned}$$
where \(\beta =\log _2 e+\log _2\log _2 e=1.97\ldots \). Since \(1+e^{-1}=1.36\ldots \), we can set
$$\begin{aligned} \zeta _0\le \rho :=\left\lceil \frac{1.4r}{\log _2 r+r-c-2}\right\rceil . \end{aligned}$$
(15)
 Case (vi):
\(c + e\log _2 c - e\beta< r < 2c\). Technically, the bound of case (v) is valid only for \(r\le 2c\). To obtain bounds for \(r\ge 2c\) we perform a different kind of analysis. We do not start with (4) but go back further to (3), and consider the simplified bound
$$\begin{aligned} \rho :=\left\lceil \frac{r}{r-c}\right\rceil . \end{aligned}$$
(16)
The intuition behind (16) is as follows. The “folklore” approach to obtaining a \(\rho \)-collision on r-bit values takes about \(2^{(\rho -1)r/\rho }\) trials. Suzuki et al. showed that even under this amount of trials, the probability of finding a \(\rho \)-collision is actually quite low, about \(1/\rho !\) [91, 92]. Inspired by this, we consider the equation \(2^c=2^{(\rho -1)r/\rho }\). Solving this equation for the variable \(\rho \) yields \(\rho =r/(r-c)\), as desired.
As we will show, the bound (16) “works” not only for \(r\ge 2c\) but for all \(r> c\). Moreover, it turns out that (16) is actually better than (15) for a large part of \(r\in (c,\,2c\,]\), except where \(r\approx c\). \(\square \)
Claim
Let \(r>c\). For \(\rho \) of (16), we have \(\mathbf {Pr}\left( \mathsf {multcol}(R,\sigma ,\rho )\right) \le \sigma /S\).
Proof of claim
Proof of claim
 Case (vii):
\(2c \le r\). In this case, we can use the reasoning of case (vi), with \(\rho =2\) by (16). \(\square \)
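For reference, the closed-form choices of \(\rho \) that appear explicitly in the proof above, cases (i), (v), and (vi)/(vii), can be collected into small helpers. This consolidation and the function names are ours; the remaining cases rely on intermediate claims not reproduced here.

```python
import math

def rho_case_i(r: int, c: int) -> int:
    """Case (i), r <= c/5: rho = ceil(e * 2^((c-r)/2))."""
    assert r <= c / 5
    return math.ceil(math.e * 2 ** ((c - r) / 2))

def rho_case_v(r: int, c: int) -> int:
    """Case (v), c <= r <= c + e*log2(c) - e*beta: rho = ceil(1.4r / (log2 r + r - c - 2))."""
    beta = math.log2(math.e) + math.log2(math.log2(math.e))  # beta = 1.97...
    assert c <= r <= c + math.e * math.log2(c) - math.e * beta
    return math.ceil(1.4 * r / (math.log2(r) + r - c - 2))

def rho_cases_vi_vii(r: int, c: int) -> int:
    """Cases (vi)/(vii), r > c: rho = ceil(r / (r - c)); equals 2 once r >= 2c."""
    assert r > c
    return math.ceil(r / (r - c))
```

For example, at the balanced point \(r=c=256\), `rho_case_v(256, 256)` evaluates to \(\lceil 358.4/6\rceil =60\), matching the \(\frac{1.4c}{\log _2 c-2}\) loss discussed in Sect. 1.2.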
4 NORX
We introduce NORX at a level required for the understanding of the security proof and refer to Aumasson et al. [7, 8] for the formal specification. Let p be a permutation on b bits. All b-bit state values are split into an outer part of r bits and an inner part of c bits. We denote the key size of NORX by \(\kappa \) bits, the nonce size by \(\nu \) bits, and the tag size by \(\tau \) bits. The header, message, and trailer can be of arbitrary length and are padded using \(10^*1\) padding to a length of a multiple of r bits. Throughout, we denote the r-bit header blocks by \(H_1,\ldots ,H_u\), message blocks by \(M_1,\ldots ,M_v\), ciphertext blocks by \(C_1,\ldots ,C_v\), and trailer blocks by \(T_1,\ldots ,T_w\).
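The \(10^*1\) padding step can be sketched as follows; this is a bit-level illustration with our own helper name, not NORX's word-oriented implementation.

```python
def pad_10star1(data_bits: list, r: int) -> list:
    """Pad a bit list with a 1, then 0s, then a final 1, to a multiple of r bits.

    At least two bits are always appended, so a full extra block is produced
    when fewer than two bits of room remain in the last block.
    """
    bits = list(data_bits) + [1]          # leading 1 of the padding
    while (len(bits) + 1) % r != 0:       # leave room for the trailing 1
        bits.append(0)
    bits.append(1)                        # trailing 1 of the padding
    assert len(bits) % r == 0
    return bits
```

For example, padding the three bits `[1, 0, 1]` to rate 8 yields `[1, 0, 1, 1, 0, 0, 0, 1]`: one block, with the message delimited unambiguously by the surrounding 1-bits.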
Unlike other permutationbased schemes, NORX allows for parallelism in the encryption part, which is described using a parameter \(D\in \{0,\ldots ,255\}\) corresponding to the number of parallel chains. Specifically, if \(D\in \{1,\ldots ,255\}\) NORX has D parallel chains, and if \(D=0\) it has v parallel chains, where v is the block length of M or C.
NORX consists of five proposed parameter configurations: NORX\(W\)-\(R\)-\(D\) for \((W,R,D)\in \{(64,4,1),(32,4,1),(64,6,1),(32,6,1),(64,4,4)\}\). The parameter R denotes the number of rounds of the underlying permutation p, and W denotes the word size, which we use to set \(r=10W\) and \(c=6W\). The default key and tag sizes are \(\kappa =\tau =4W\). The corresponding parameters for the two different choices of W, 64 and 32, are given in Table 1.
Although NORX starts with an initialization function \(\mathsf {init}\) which requires the parameters \((D, R, \tau )\) as input, as soon as our security experiment starts, we consider \((D,R,\tau )\) fixed and constant. Hence, we can view \(\mathsf {init}\) as a function that maps (K, N) to \((K\Vert N\Vert 0^{b\kappa \nu })\oplus \mathsf {const}\), where \(\mathsf {const}\) is irrelevant to the mode security analysis of NORX, and will be ignored in the remaining analysis.
The privacy of NORX is proven in Sect. 4.1 and the integrity in Sect. 4.2. In both proofs we consider an adversary that makes \(q_p\) permutation queries and \(q_\mathcal {E}\) encryption queries of total length \(\lambda _\mathcal {E}\). In the proof of integrity, the adversary can additionally make \(q_\mathcal {D}\) decryption queries of total length \(\lambda _\mathcal {D}\). To aid the analysis, we compute the number of permutation calls made via the \(q_\mathcal {E}\) encryption queries. The exact same computation holds for decryption queries with the parameters defined analogously.
4.1 Privacy of NORX
Theorem 1
Table 2: High-level security bounds of Theorem 1

Regime                 | Case  | Range                                              | Security                                       | Note
-----------------------|-------|----------------------------------------------------|------------------------------------------------|------
Essentially ideal      | (i)   | \(r \le c/5\)                                      | \(\min \{2^{b/2}/e,2^c,2^\kappa \}\)           |
Slightly worse         | (ii)  | \(c/5 < r \le c-2\log _2 c\)                       | \(\min \{2^{b/2}/3.4,2^c,2^\kappa \}\)         |
                       | (iii) | \(c-2\log _2 c < r \le c-2\log _2 c+7.2\)          | \(\min \{2^{b/2}/8.0,2^c,2^\kappa \}\)         |
Reaching \(\frac{1.4c}{\log _2 c-2}\) | (iv)  | \(c-2\log _2 c+7.2< r < c\)         | \(\min \{2^{b/2},2^c/\alpha ,2^\kappa \}\)     | \(\alpha =\frac{0.7(5r-c)}{2\log _2(5r-c)+r-c-8}\)
                       | (v)   | \(c \le r \le c + e\log _2 c - e\beta \)           | \(\min \{2^{b/2},2^c/\alpha ,2^\kappa \}\)     | \(\alpha =\frac{1.4r}{\log _2 r+r-c-2}\)
Gradually recovering   | (vi)  | \(c + e\log _2 c - e\beta< r < 2c\)                | \(\min \{2^{b/2},2^c/\alpha ,2^\kappa \}\)     | \(\alpha =\frac{r}{r-c}\)
Optimal                | (vii) | \(2c \le r\)                                       | \(\min \{2^{b/2},2^c/2,2^\kappa \}\)           |
The proof is based on the observation that NORX is indistinguishable from a random scheme as long as there are no collisions among the (direct and indirect) evaluations of p. Due to uniqueness of the nonce, state values from evaluations of \(\mathcal {E}_K\) collide with probability approximately \(1/2^b\). Regarding collisions between direct calls to p and calls via \(\mathcal {E}_K\): while these may happen with probability about \(1/2^c\), they turn out not to significantly influence the bound. The latter is demonstrated in part using the principle of multiplicities [18]: roughly stated, the multiplicity is the maximum number of state values with the same outer part. We use Lemma 1 to bound multiplicities. The formal security proof is more detailed. Furthermore, we remark that, at the cost of readability and simplicity of the proof, the bound could be improved by a constant factor.
Proof
The primitive \(f^\pm \) maintains an initially empty list \(\mathcal {F}\) of query/response tuples (x, y) where the set of domain and range values are denoted by \(\mathrm {dom}(\mathcal {F})\) and \(\mathrm {rng}(\mathcal {F})\), respectively. For a forward query f(x) with \(x\in \mathrm {dom}(\mathcal {F})\), the value in \(\{y \mid (x,y)\in \mathcal {F}\}\) which occurs lexicographically first is returned. For a new forward query f(x), the response y is randomly drawn from \(\{0,1\}^{b}\), then the tuple (x, y) is added to \(\mathcal {F}\). The description for \(f^{1}\) is similar. We let \(\mathsf {abort}\) denote the event that a new query f(x) results in a value y where y is already in \(\mathrm {rng}(\mathcal {F})\), or a new query \(f^{1}(y)\) results in a value x where x is already in \(\mathrm {dom}(\mathcal {F})\).
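The lazy sampling of \(f^\pm \) together with the \(\mathsf {abort}\) event can be sketched as follows. The toy 8-bit domain and the class name are our assumptions; when \(\mathsf {abort}\) is flagged, f has stopped being consistent with a permutation.

```python
import random

class LazyF:
    """Lazily sampled function f with inverse access.

    Mirrors the proof's bookkeeping: responses are drawn uniformly from the
    whole domain (not excluding prior values), and `abort` is flagged when a
    fresh forward response lands in the existing range, or a fresh inverse
    response lands in the existing domain.
    """

    def __init__(self, b_bits: int = 8, seed: int = 0):
        self.size = 1 << b_bits
        self.rng = random.Random(seed)
        self.fwd = {}       # x -> y  (first tuple recorded wins, as in the proof)
        self.inv = {}       # y -> x
        self.abort = False

    def f(self, x: int) -> int:
        if x in self.fwd:                       # repeated query: replay first answer
            return self.fwd[x]
        y = self.rng.randrange(self.size)       # fresh uniform response
        if y in self.inv:
            self.abort = True                   # range collision: abort event
        self.fwd[x] = y
        self.inv.setdefault(y, x)
        return y

    def f_inv(self, y: int) -> int:
        if y in self.inv:
            return self.inv[y]
        x = self.rng.randrange(self.size)
        if x in self.fwd:
            self.abort = True                   # domain collision: abort event
        self.inv[y] = x
        self.fwd.setdefault(x, y)
        return x
```

Until `abort` is flagged, the transcript is identically distributed to that of a lazily sampled random permutation, which is the pivot of the argument that follows.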
Lemma 2
The outputs of \((f^\pm ,\mathcal {E}_K)\) and \((f^\pm ,\$)\) are identically distributed until \(\mathsf {event}\) occurs.
Proof
The outputs of \(f^\pm \) are sampled independently and uniformly at random in \((f^\pm , \$)\). This holds in the real world as well, unless a query to \(f^\pm \) collides with an \(f^\pm \) query made via \(\mathcal {E}_K\). Therefore, until \(\mathsf {guess}\) occurs, the outputs of \(f^\pm \) are distributed identically in both worlds. Furthermore, \(f^\pm \)’s outputs are independent of the distinguisher’s query history, hence, assuming all past queries were identically distributed across worlds, a query to \(f^\pm \) will not change the fact that both worlds are identically distributed, until \(\mathsf {guess}\) occurs.
Let \(N_j\) be a new nonce used in the F-query \((N_j;H_j,M_j,T_j)\), with corresponding ciphertext and authentication tag \((C_j, A_j)\). Denote the query’s state values as in (22). Let u, v, and w denote the number of padded header blocks, padded message blocks, and padded trailer blocks, respectively.
Consider the \(j\hbox {th}\) query. By the definition of $, in the ideal world we have \((C_j,A_j)\xleftarrow {{\scriptscriptstyle \$}}\{0,1\}^{|M_j|+\tau }\). We will prove that \((C_j,A_j)\) is identically distributed in the real world, under the assumption that \(\mathsf {guess}\vee \mathsf {hit}\) has not yet occurred. Denote the message blocks of \(M_j\) by \(M_{j,k,\ell }\) for \(k=1,\ldots ,D\) and \(\ell =1,\ldots ,v_k\).
Looking at the reasoning of the proof of Lemma 2 above, we notice that if \(\mathsf {event}\) has not yet occurred, then each state value in an F-query is sampled independently and uniformly at random. In particular, once the adversary fixes the inputs to an F-query, each state value in that F-query is independent of the adversary’s input, and independent of each other. Furthermore, the inner parts of those state values are never released to the adversary, hence the adversary’s future queries are independent of the inner parts of the state values. Hence, we have the following result:
Corollary 1
Until \(\mathsf {event}\) occurs, the state values in an \(\mathcal {E}_K\) query are distributed independently and uniformly from each other and from the adversary’s input to that \(\mathcal {E}_K\) query. Furthermore, the inner parts of the state values in all \(\mathcal {E}_K\) queries are distributed independently and uniformly from each other and from all of the adversary’s oracleinputs, until \(\mathsf {event}\) occurs.
Lemma 3
\(\mathbf {Pr}\left( \mathcal {A}^{f^\pm ,\mathcal {E}_K} \text { sets } \mathsf {event}\right) \le \dfrac{q_p\sigma _\mathcal {E}+ \sigma _\mathcal {E}^2/2}{2^b} + \dfrac{\sigma _\mathcal {E}}{\min \{2^{b/2},2^c\}} + \dfrac{2\rho q_p}{2^c} + \dfrac{q_p+\sigma _\mathcal {E}}{2^\kappa }\), where \(\rho =\rho (r,c)\) is the function defined in Lemma 1.
Proof

Consider a primitive query \((x_i,y_i)\) for \(i\in \{1,\ldots ,q_p\}\), which may be a forward or an inverse query, and assume it has not been queried to \(f^\pm \) before. If it is a forward query \(x_i\), by \(\lnot \mathsf {multi}\) there are at most \(\rho \) state values s with \([x_i]^r=[s]^r\), and thus \(x_i=s\) with probability at most \(\rho /2^c\). Here, we remark that the inner part of s is unknown to the adversary and it guesses it with probability at most \(1/2^c\), as established by Corollary 1. A slightly more complicated reasoning applies for inverse queries. Denote the query by \(y_i\). By \(\lnot \mathsf {multi}\) there are at most \(\rho \) state values s with \([y_i]^r=[f(s)]^r\), hence, using Corollary 1 again, \(y_i = f(s)\) with probability at most \(\rho /2^c\). If \(y_i\) equals f(s) for any of these states, then \(x_i = s\), otherwise \(x_i = s\) with probability at most \(\sum _{j=1}^{j_i}\sigma _{\mathcal {E},j}/2^b\). Therefore, the probability that \(\mathsf {guess}\) is set via a direct query is at most \(\frac{q_p\rho }{2^c} + \sum _{i=1}^{q_p}\sum _{j=1}^{j_i}\frac{\sigma _{\mathcal {E},j}}{2^b}\);
 Next, consider the probability that the \(j\hbox {th}\) construction query sets \(\mathsf {guess}\), for \(j\in \{1,\ldots ,q_\mathcal {E}\}\). For simplicity, first consider \(D=1\), hence the message is processed in one lane and we can use state labeling \((s_{j,1},\ldots ,s_{j,\sigma _{\mathcal {E},j}})\). We range from \(s_{j,2}\) to \(s_{j,\sigma _{\mathcal {E},j}}\) (recall that \(s_{j,1}=s_j^{\text {init}}\) can be excluded) and consider the probability that this state sets \(\mathsf {guess}\) assuming it has not been set before. Let \(k\in \{2,\ldots ,\sigma _{\mathcal {E},j}\}\). The state value \(s_{j,k}\) equals \(f(s_{j,k-1})\oplus v\), where v is some value determined by the adversarial input prior to the evaluation of \(f(s_{j,k-1})\), including input from \((H_j,M_j,T_j)\) and constants serving as domain separators. By assumption, \(\mathsf {guess}\vee \mathsf {hit}\) has not been set before, and \(f(s_{j,k-1})\) is thus randomly drawn from \(\{0,1\}^{b}\). It hits any \(x_i\) (\(i\in \{1,\ldots ,i_j\}\)) with probability at most \(i_j/2^b\). Next, consider the general case \(D>1\). We return to the labeling of (22). A complication occurs for the branching states \(s_{j,1,0}^M,\ldots ,s_{j,D,0}^M\) and the merging state \(s_{j,0}^T\). Starting with the branching states, these are computed from \(s_{j,u}^H\) as
$$\begin{aligned} \left( \begin{array}{c} s_{j,1,0}^M \\ \vdots \\ s_{j,D,0}^M \end{array}\right) = f(s_{j,u}^H) \oplus \left( \begin{array}{c} v_1 \\ \vdots \\ v_D \end{array}\right) , \end{aligned}$$
where \(v_1,\ldots ,v_D\) are some distinct values determined by the adversarial input prior to the evaluation of the \(j\hbox {th}\) construction query. These are distinct by the XOR of the lane numbers \(id_1,\ldots ,id_D\). Any of these nodes equals \(x_i\) for \(i\in \{1,\ldots ,q_p\}\) with probability at most \(i_jD/2^b\). Finally, for the merging node \(s_{j,0}^T\) we can apply the same analysis, noting that it is derived from a sum of D new f-evaluations.
Concluding, the \(j\hbox {th}\) construction query sets \(\mathsf {guess}\) with probability at most \(i_j\sigma _{\mathcal {E},j}/2^b\) (we always have in total at most \(\sigma _{\mathcal {E},j}\) new state values). Summing over all \(q_\mathcal {E}\) construction queries, we get \(\sum _{j=1}^{q_\mathcal {E}}i_j\sigma _{\mathcal {E},j}/2^b\).
Event \({\varvec{\mathsf {hit}}}\). We again employ ideas of \(\mathsf {guess}\), and particularly that as long as \(\mathsf {guess}\vee \mathsf {hit}\) is not set, we can consider all new state values (except for the initial states) to be randomly drawn from a set of size \(2^b\). In particular, we can refrain from explicitly discussing the branching and merging nodes (the detailed analysis of \(\mathsf {guess}\) applies) and label the states as \((s_{j,1},\ldots ,s_{j,\sigma _{\mathcal {E},j}})\). Clearly, \(s_{j,1}\ne s_{j',1}\) for all \(j,j'\) by uniqueness of the nonce. Any state value \(s_{j,k}\) for \(k>1\) (at most \(\sigma _\mathcal {E}-q_\mathcal {E}\) in total) hits an initial state value \(s_{j',1}\) only if \([s_{j,k}]^\kappa =K\), which happens with probability at most \(\sigma _\mathcal {E}/2^\kappa \), assuming \(s_{j,k}\) is generated randomly. Finally, any two other states \(s_{j,k},s_{j',k'}\) for \(k,k'>1\) collide with probability at most \({\sigma _\mathcal {E}-q_\mathcal {E}\atopwithdelims ()2}/2^b\). Concluding, \(\mathbf {Pr}\left( \mathsf {hit}\mid \lnot (\mathsf {key}\vee \mathsf {multi})\right) \le {\sigma _\mathcal {E}\atopwithdelims ()2}/2^b + \sigma _\mathcal {E}/2^\kappa \).
Event \({\varvec{\mathsf {key}}}\). For \(i\in \{1,\ldots ,q_p\}\), the query sets \(\mathsf {key}(i)\) if \([x_i]^\kappa =K\), which happens with probability \(1/2^\kappa \) (assuming it did not happen in queries \(1,\ldots ,i-1\)). The adversary makes \(q_p\) attempts, and hence \(\mathbf {Pr}\left( \mathsf {key}\right) \le q_p/2^\kappa \).
4.2 Authenticity of NORX
Theorem 2
The bound is more complex than that of Theorem 1, but intuitively it implies that NORX offers integrity as long as it offers privacy and the number of forgery attempts \(\sigma _\mathcal {D}\) is limited, where the total complexity \(q_p+\sigma _\mathcal {E}+\sigma _\mathcal {D}\) should not exceed \(2^c/\sigma _\mathcal {D}\). See Table 1 for the security levels for the various parameter choices of NORX. Needless to say, the exact bound is more fine-grained.
Proof
We inherit terminology from Theorem 1. The state values corresponding to encryption and decryption queries will both be labeled (j, k), where j indicates the query and k the state value within the \(j\hbox {th}\) query. If needed we will add another parameter \(\delta \in \{\mathcal {D},\mathcal {E}\}\) to indicate that a state value \(s_{\delta ,j,k}\) is in the \(j\hbox {th}\) query to oracle \(\delta \), for \(\delta \in \{\mathcal {D},\mathcal {E}\}\) and \(j\in \{1,\ldots ,q_\delta \}\). Particularly, this means we will either label the state values as in (22) with a \(\delta \) appended to the subscript, or simply as \((s_{\delta ,j,1},\ldots ,s_{\delta ,j, \sigma _{\delta ,j}})\).
Lemma 4
\(\mathbf {Pr}\left( \mathcal {A}^{f^\pm ,\mathcal {E}_K,\mathcal {D}_K} \text { sets } \mathsf {event}\right) \le \dfrac{q_p\sigma _\mathcal {E}+ \sigma _\mathcal {E}^2/2}{2^b} + \dfrac{\sigma _\mathcal {E}}{\min \{2^{b/2},2^c\}} + \dfrac{2\rho q_p}{2^c} + \dfrac{q_p+\sigma _\mathcal {E}+\sigma _\mathcal {D}}{2^\kappa } + \dfrac{(q_p+\sigma _\mathcal {E})\sigma _\mathcal {D}+ \sigma _\mathcal {D}^2/2}{2^c}\), where \(\rho =\rho (r,c)\) is the function defined in Lemma 1.
Proof
Event \({\varvec{\mathcal {D}\mathsf {guess}}}\). Note that the adversary may freely choose the outer part in decryption queries and primitive queries. Indeed, the ciphertext values that \(\mathcal {A}\) chooses in decryption queries define the outer parts of the state values. Consequently, \(\mathcal {D}\mathsf {guess}\) gets set as soon as there is a primitive state and a decryption state whose inner parts are equal, and \(\mathbf {Pr}\left( \mathcal {D}\mathsf {guess}\mid \lnot (\mathsf {key}\vee \mathsf {multi})\right) \le q_p\sigma _\mathcal {D}/2^c\).
Event \({\varvec{\mathcal {D}\mathsf {hit}}}\). Consider the \(\bar{j}\)th decryption query \((N;H,C,T)\), and compare it with earlier queries \((\delta ,j)\). We distinguish the following cases:
 (1)
\(({{\varvec{N}}};{{\varvec{H}}},{{\varvec{C}}},{{\varvec{T}}}) =({{\varvec{N}}}_{{\varvec{\delta }},{{\varvec{j}}}}; {{\varvec{H}}}_{{\varvec{\delta }},{{\varvec{j}}}}, {{\varvec{C}}}_{{\varvec{\delta }},{{\varvec{j}}}}, {{\varvec{T}}}_{{\varvec{\delta }},{{\varvec{j}}}})\) but \({{\varvec{A}}}\ne {{\varvec{A}}}_{{\varvec{\delta }}, {{\varvec{j}}}}\). In this case the query renders no new states and \(\mathcal {D}\mathsf {hit}\) cannot be set by definition;
 (2)\(({{\varvec{N}}};{{\varvec{H}}},{{\varvec{C}}}) =({{\varvec{N}}}_{{\varvec{\delta }},{{\varvec{j}}}}; {{\varvec{H}}}_{{\varvec{\delta }},{{\varvec{j}}}}, {{\varvec{C}}}_{{\varvec{\delta }},{{\varvec{j}}}})\) but \({{\varvec{T}}}\ne {{\varvec{T}}}_{{\varvec{\delta }},{{\varvec{j}}}}\). Let \(\ell \in \{1,\ldots ,\min \{w,w_{\delta ,j}\},\infty \}\) be minimal such that \(T_\ell \ne T_{\delta ,j,\ell }\), where \(\ell =\infty \) means that T is a substring of \(T_{\delta ,j}\) (if \(w<w_{\delta ,j}\)) or vice versa (if \(w>w_{\delta ,j}\)). We make a further distinction between \(\ell =\infty \) and \(\ell <\infty \).
 (a)
\({\varvec{\ell }}={\varvec{\infty }}\). Note that \(s_{\min \{w,w_{\delta ,j}\}}^T=s_{\delta ,j,\min \{w,w_{\delta ,j}\}}^T\oplus \mathtt {04}\oplus \mathtt {08}\). If this input to f is old, it implies \(\mathsf {innerhit}(\delta ,j, \min \{w,w_{\delta ,j}\};\delta ',j',k';\mathtt {04}\oplus \mathtt {08})\) for some \((\delta ',j',k')\) older than the current query \((\mathcal {D},\bar{j},\min \{w,w_{\delta ,j}\})\), which is the case with probability at most \(1/2^c\) (for all possible index tuples). Otherwise, f generates a new value and a new state value s (\(s_{w+1}^T\) if \(w>w_{\delta ,j}\) or \(s^\text {tag}\) if \(w<w_{\delta ,j}\)), which sets \(\mathcal {D}\mathsf {hit}\) if it sets \(\mathsf {innerhit}\) with an older state \(s_{\delta ',j',k'}\) under \(\mathsf {const}=0\). This also happens with probability at most \(1/2^c\) for any \((\delta ',j',k')\). This procedure propagates to \(s^\text {tag}\). In total, the \(\bar{j}\)th decryption query sets \(\mathcal {D}\mathsf {hit}\) with probability at most \(\sum _{k=1}^{\sigma _{\mathcal {D},\bar{j}}}\frac{\sigma _\mathcal {E}+\sigma _{\mathcal {D},1} +\cdots +\sigma _{\mathcal {D},\bar{j}-1} + (k-1)}{2^c}\);
 (b)
\({\varvec{\ell }}<{\varvec{\infty }}\). In this case \(s_{\ell -1}^T=s_{\delta ,j,\ell -1}^T\) and \(s_\ell ^T =s_{\delta ,j,\ell }^T\oplus (T_\ell \Vert 0^c)\oplus (T_{\delta ,j,\ell } \Vert 0^c)\ne s_{\delta ,j,\ell }^T\).^{5} As before, \(s_\ell ^T\) is a new input to f, except if \(\mathsf {innerhit}(\delta ,j,\ell ;\delta ',j',k'; \mathtt {0})\) for some \((\delta ',j',k')\) older than the current query \((\mathcal {D},\bar{j},\ell )\). This is the case with probability at most \(1/2^c\) for all possible older queries. The procedure propagates to \(s^\text {tag}\) as before, and the same bound holds;
 (3)\(({{\varvec{N}}};{{\varvec{H}}}) =({{\varvec{N}}}_{{\varvec{\delta }}, {{\varvec{j}}}};{{\varvec{H}}}_{{\varvec{\delta }}, {{\varvec{j}}}})\) but \({{\varvec{C}}}\ne {{\varvec{C}}}_{{\varvec{\delta }},{{\varvec{j}}}}\). The analysis is similar, but a special treatment is required to deal with the merging phase. Consider the ciphertext C to be divided into blocks \(C_{k,\ell }\) for \(k=1,\ldots ,D\) and \(\ell =1,\ldots ,v_k\), and similarly for \(C_{\delta ,j}\). For \(k=1,\ldots ,D\), let \(\ell _k\in \{1,\ldots ,\min \{v_k,v_{\delta ,j,k}\},\infty \}\) be minimal such that \(C_{k,\ell _k}\ne C_{\delta ,j,k,\ell _k}\). Again, \(\ell _k=\infty \) means that \(C_k\) is a substring of \(C_{\delta ,j,k}\) (if \(v_k\le v_{\delta ,j,k}\)) or vice versa (if \(v_k\ge v_{\delta ,j,k}\)). We make a further distinction between whether or not \((\ell _1,\ldots ,\ell _D)=(\infty ,\ldots ,\infty )\).
 (a)
\(({\varvec{\ell }}_{\mathbf{1}},\ldots , {\varvec{\ell }}_{{\varvec{D}}})=({\varvec{\infty }}, \ldots ,{\varvec{\infty }})\). As \(C\ne C_{\delta ,j}\), there must be a k such that \(v_k\ne v_{\delta ,j,k}\) and thus that \(C_k\) is a strictly smaller substring of \(C_{\delta ,j,k}\) or vice versa. Consequently, \(s_{k,v_k}^C = s_{\delta ,j,k,v_k}^C\oplus \mathtt {02}\oplus \mathtt {20}\oplus id_k[\min \{v_k,v_{\delta ,j,k}\}=1]\) (or \(\oplus \;\mathtt {02}\oplus \mathtt {04}\) if \(D=1\) and there is no merging phase, or \(\oplus \;\mathtt {02}\oplus \mathtt {08}\) if there is furthermore no trailer). Then, this state is new to f except if \(\mathsf {innerhit}(\delta ,j,k,v_k;\delta ',j',k';\mathsf {const})\) is set for the \(\mathsf {const}\) described above. (We slightly misuse notation here in that \(v_k\) is input to \(\mathsf {innerhit}\).) This means that also \(s_0^T\) will be new except if it hits a certain older state, which happens with probability \(1/2^c\). The reasoning propagates up to \(s^\text {tag}\) as before, and the same bound holds;
 (b)
\(({\varvec{\ell }}_{\mathbf{1}},\ldots , {\varvec{\ell }}_{{\varvec{D}}})<({\varvec{\infty }}, \ldots ,{\varvec{\infty }})\). Let k be such that \(\ell _k<\infty \). Then, \(s_{k,\ell _k-1}^C=s_{\delta ,j,k,\ell _k-1}^C\) and \(s_{k,\ell _k}^C=C_{k,\ell _k}\Vert [s_{\delta ,j,k,\ell _k}^C]_c\ne s_{\delta ,j,k,\ell _k}^C\). The reasoning of case (2b) carries over for all future state values;
 (4)
\({{\varvec{N}}}={{\varvec{N}}}_{{\varvec{\delta }}, {{\varvec{j}}}}\) but \({{\varvec{H}}}\ne {{\varvec{H}}}_{{\varvec{\delta }},{{\varvec{j}}}}\). The analysis follows largely the same principles, albeit using \(\mathsf {const}\in \{\mathtt {0},\mathtt {01}\oplus \mathtt {02}, \mathtt {01}\oplus \mathtt {04}, \mathtt {01}\oplus \mathtt {08}, \mathtt {01}\oplus \mathtt {10}\}\);
 (5)
\({{\varvec{N}}}\ne {{\varvec{N}}}_{{\varvec{\delta }}, {{\varvec{j}}}}\). The nonce N is new (hence the query shares no prefix with any older query). There has not been an earlier state s satisfying \([s]^\kappa =K\) (by virtue of the analysis of \(\mathsf {hit}\) and \(\mathsf {key}\), and the first step of this event \(\mathcal {D}\mathsf {hit}\)). Therefore, \(s^{\text {init}}\) is new by construction, and a simplification of the above analysis applies.
5 Tightness of the Bound
We derive a generic attack on Sponge-based authenticated encryption schemes. The attack exploits multicollisions on the outer part of the internal state. Using the multicollision bounds of Suzuki et al. [91, 92], we demonstrate that the attack matches the proven security bound, meaning that the bounds of Sect. 4 are tight. To this end, we first describe our simplified target structure in Sect. 5.1. The attack is described in Sect. 5.2 and evaluated in Sect. 5.3.
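As a quick illustration of the multicollisions the attack relies on, the following sketch draws roughly \(2^r\) uniform r-bit outer parts and records the largest \(\rho \)-collision among them. The toy size \(r=8\) and the fixed seed are assumptions for illustration only.

```python
import random
from collections import Counter

random.seed(2024)
r, q = 8, 256                      # toy outer-part size; q = 2^r queries

# Model the outer parts of q fresh states as uniform r-bit values and
# record the largest multicollision (the rho of the attack).
counts = Counter(random.getrandbits(r) for _ in range(q))
rho = max(counts.values())
print(rho)                         # typically a small single-digit value
```

With \(q\approx 2^r\) queries, \(\rho \) grows only slowly in q, which is exactly why the offline phase below gains a factor \(\rho \) rather than more.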
5.1 Target Structure
As shown in Fig. 3, the b-bit state after the first permutation call is denoted \(s_1\). Its outer and inner part are denoted \([s_1]^r\) and \([s_1]_c\), respectively. Then, an r-bit message block \(M_1\) is XORed into \([s_1]^r\) and the first ciphertext block \(C_1 = [s_1]^r\oplus M_1\) is output. The state is evaluated using the permutation, and the resulting state is \(s_2\). Note that the values \(M_i\) and \(C_i\) reveal the outer part of state \(s_i\) as \([s_i]^r=M_i\oplus C_i\).
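The observation that \([s_i]^r=M_i\oplus C_i\) can be checked on a toy duplex-style construction. The SHA-256-based stand-in for the permutation and the byte-level rate/capacity sizes below are illustrative assumptions, not part of any actual scheme.

```python
import hashlib
import os

R, C = 4, 8                         # toy rate/capacity in bytes (illustrative)

def p(state: bytes) -> bytes:
    # Stand-in for the b-bit permutation (not invertible; irrelevant here).
    return hashlib.sha256(state).digest()[:R + C]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# State after the first permutation call; its inner part is secret.
s1 = p(os.urandom(R + C))

m1, m2 = os.urandom(R), os.urandom(R)
c1 = xor(s1[:R], m1)                # C_1 = [s_1]^r xor M_1
s2 = p(c1 + s1[R:])                 # outer part replaced by C_1, then permute
c2 = xor(s2[:R], m2)

# Known plaintext/ciphertext pairs reveal the outer parts:
assert xor(m1, c1) == s1[:R]
assert xor(m2, c2) == s2[:R]
```

The inner parts \([s_1]_c,[s_2]_c\) remain hidden; the attack of Sect. 5.2 targets exactly those.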
5.2 Distinguishing Attacks via Key Recovery
Let \(\rho \ge 2\). If \(2^\kappa \le 2^c/\rho \), a naive key recovery attack can be performed with complexity \(2^\kappa \); hence, we assume that \(2^\kappa >2^c/\rho \).
We first give an overview of the attack. Once a b-bit state in the structure of Fig. 3 is recovered, the secret key K can be recovered immediately by computing the inverse of the permutation. Our attack aims to recover the internal state \(s_1\) after the first permutation call. It consists of an online phase followed by an offline phase.
In the online phase, the adversary searches for a \(\rho \)-collision on the r-bit value \(C_1\). It makes a certain number of encryption oracle queries for different N and possibly different \(M_1\). Let q denote the total number of encryption queries needed. The online phase results in \(\rho \) pairs \((N,M_1)\) which produce the same \(C_1\) but different \([s_1]_c\). The adversary also stores the tag A for each pair.
In the offline phase, the adversary recovers an inner part \([s_1]_c\). Using the value \(C_1\), which is the same for all tuples, the value \([s_1]_c\) is exhaustively guessed. In a bit more detail, the adversary computes the authentication tag A from \(C_1\Vert [s_1]_c\) offline, and checks if there is a match with any stored tag. Because \(\rho \) tags are stored, the attack cost is about \(2^c/\rho \). Once \([s_1]_c\) is recovered, the adversary can compute \(p^{-1}((M_1 \oplus C_1)\Vert [s_1]_c)\) and recover K.
The formal description of the attack is given below. Here, we denote the data D for the kth block in the \(j\hbox {th}\) query by \(D_{j,k}\). We omit the second subscript for data whose block length is always 1, e.g., the nonce \(N_j\).
Online phase:
 1.
Choose q different pairs \((N_{i},M_{i,1})\) for \(i=1,2,\ldots ,q\);
 2.
Query \((N_{i},M_{i,1})\) for \(i=1,2,\ldots ,q\) and receive \((C_{i,1},A_{i,1}\Vert A_{i,2}\Vert \ldots )\);
 3.
Find a \(\rho \)-collision on \(C_{\cdot ,1}\);
 4.
Store the \(\rho \) triplets \((N_i,M_{i,1},A_{i,1}\Vert A_{i,2}\Vert \ldots )\) contributing to the \(\rho \)-collision. We denote the colliding value of \(C_{\cdot ,1}\) by \(\overline{C}\), which is also stored.
Offline phase:
 1.
Set the outer part of the state after the computation of \([s_{\cdot ,1}]^r \oplus M_{\cdot ,1}\) to \(\overline{C}\);
 2.
Make \(2^c/\rho \) guesses for \([s_{\cdot ,1}]_c\), denoted by \([s_{j,1}]_c\) for \(j=1,2,\ldots ,2^c/\rho \);
 3.
For each j, generate the tag \(A_{j,1}\Vert A_{j,2}\Vert \ldots \) with the state \(\overline{C}\Vert [s_{j,1}]_c\);
 4.
Check if \(A_{j,1}\Vert A_{j,2}\Vert \ldots \) matches one of the \(\rho \) values \(A_{i,1}\Vert A_{i,2}\Vert \ldots \) stored in the online phase. If so, assume that \([s_{j,1}]_c\) is the right value. Let \(i'\) and \(j'\) be matching indices;
 5.
Compute \(p^{-1}\bigl ( (M_{i',1} \oplus \overline{C}) \Vert [s_{j',1}]_c \bigr )\). If the resulting value matches nonce \(N_{i'}\), output the first \(\kappa \) bits of the state as the recovered key K.
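The steps above can be illustrated end to end on a toy instance with \(b=24\), \(r=8\), \(c=16\) and an 8-bit key. The multiply-xorshift permutation, the parameter sizes, and the single-block tag (taken as the full final state) below are illustrative stand-ins of our own choosing, not NORX itself; because the tag is the full state here, the false-positive filtering of step 5 always succeeds.

```python
MASK = (1 << 24) - 1
MUL = 0x9E3779                     # odd, hence invertible modulo 2^24
MUL_INV = pow(MUL, -1, 1 << 24)

def p(x):                          # toy invertible permutation on a 24-bit state
    for _ in range(3):
        x = (x * MUL) & MASK
        x ^= x >> 13
    return x

def p_inv(y):                      # exact inverse of p
    for _ in range(3):
        y ^= y >> 13               # xorshift by >= half the width undoes itself
        y = (y * MUL_INV) & MASK
    return y

def encrypt(K, N, M1):
    """Toy scheme: init = p(K || N || 0^8); one message block; tag = final state."""
    s1 = p((K << 16) | (N << 8))
    c1 = (s1 >> 16) ^ M1           # C1 = [s1]^r xor M1  (r = 8 bits)
    tag = p((c1 << 16) | (s1 & 0xFFFF))
    return c1, tag

K_SECRET = 0xA5

# Online phase: query all 2^8 nonces with M1 = 0 and bucket by C1.
buckets = {}
for N in range(256):
    c1, tag = encrypt(K_SECRET, N, 0)
    buckets.setdefault(c1, []).append((N, tag))
cbar, group = max(buckets.items(), key=lambda kv: len(kv[1]))  # rho-collision on C1
stored = {tag: N for N, tag in group}

# Offline phase: guess the 16-bit inner part; expected work about 2^c / rho.
recovered = None
for g in range(1 << 16):
    tag = p((cbar << 16) | g)      # recompute the tag from C1 || guess
    if tag in stored:              # p is a bijection, so a match fixes the state
        s1 = (cbar << 16) | g      # M1 = 0, hence [s1]^r = cbar
        init = p_inv(s1)           # = K || N || 0^8
        if (init >> 8) & 0xFF == stored[tag] and init & 0xFF == 0:
            recovered = init >> 16
            break
print(hex(recovered))              # -> 0xa5
```

Storing \(\rho =|\mathrm{group}|\) tags lets a single offline sweep over the \(2^{16}\) inner parts test all \(\rho \) stored states at once, which is where the \(2^c/\rho \) cost comes from.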
5.3 Attack Evaluation
In the online phase, the adversary does not strictly need to choose N and \(M_1\); a given list of q different tuples suffices. Thus, the attack is a known-plaintext attack. The data complexity is q one-block messages, and memory to store the q triples \((N_i,M_{i,1},A_{i,1}\Vert A_{i,2}\Vert \ldots )\) for \(i=1,\ldots ,q\) is required. A time complexity of at least q memory accesses is also required. Intuitively, all complexities in the online phase are q.
In the offline phase, because \(\rho \) candidates are stored in the online phase and \(2^c/\rho \) guesses are examined, one match is expected. If the internal state values match, the corresponding tag values also match; thus, the right guess is identified. Due to the assumption that the tag size is at least c bits, a match is likely to occur only for the right guess. In addition, false positives can be filtered out by another r bits via the match of N in the last step. Thus, with very high probability the key is successfully recovered. Regarding complexity, the dominant cost is the time complexity of \(2^c/\rho \) tag-generation evaluations.
Comparison of attack complexity and security bound

Parameters | \(\rho \) | q | Attack complexity \(2^r/\rho \) | Security bound \(2^c/\alpha \), \(\alpha =\frac{1.4r}{\log _2 r+r-c-2}\)
\(c=r=128\) | 18 | \(2^{123.806}\) | \(2^{123.830}\) | \(2^{122.837}\)
\(c=r=256\) | 30 | \(2^{251.057}\) | \(2^{251.093}\) | \(2^{250.100}\)
\(c=r=512\) | 51 | \(2^{506.272}\) | \(2^{506.327}\) | \(2^{505.322}\)
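The last two columns of the table can be recomputed directly from \(2^c/\rho \) and \(2^c/\alpha \). The sketch below is only a numerical sanity check under that reading of the formulas; tiny rounding differences against the table are possible.

```python
from math import log2

def offline_exponent(c: int, rho: int) -> float:
    # log2 of the offline attack cost 2^c / rho
    return c - log2(rho)

def bound_exponent(r: int, c: int) -> float:
    # log2 of the security bound 2^c / alpha,
    # with alpha = 1.4 r / (log2 r + r - c - 2)
    alpha = 1.4 * r / (log2(r) + r - c - 2)
    return c - log2(alpha)

for c, rho in [(128, 18), (256, 30), (512, 51)]:
    print(f"c=r={c}: attack 2^{offline_exponent(c, rho):.3f}, "
          f"bound 2^{bound_exponent(c, c):.3f}")
```

For \(c=r\) the term \(r-c\) vanishes, so \(\alpha \) reduces to \(1.4r/(\log _2 r - 2)\), which reproduces the rightmost column.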
\({{\varvec{c}}}<{{\varvec{r}}}\). It is common practice to enlarge the rate of Sponge-based authenticated encryption so that more data can be processed per permutation call. We demonstrate tightness of our attack for the case of \(c=256\) and \(r\in [257,768]\). Figure 1 depicts the evaluated attack complexity and our security bound for \(c=256\). For the sake of completeness, it also includes the \(2^c/r\) bound of the original ASIACRYPT 2014 article [53], which is lower by approximately a logarithmic factor \(\log _2 r\).
Note that the adversary needs to find a multicollision on r bits within only \(2^c\) trials. When the rate increases, and particularly when \(r>2c\), the adversary cannot even find an ordinary collision within \(2^c\) trials. In this case, the multicollision-based attack loses its effect, which is why our bound approaches \(2^c\) as r becomes large. The advantage of the attack comes from the number of generated multicollisions. Considering that the number of multicollisions can only take discrete values while our bound takes continuous values, our bound is not strictly tight.
\({{\varvec{c}}}>{{\varvec{r}}}\). Note that, for \(c>r\), the security bound of Theorem 1 is not dominated by \(2^c/\alpha \) but rather by \(2^{b/2}\), omitting constants (cf., Table 2). Tightness of the bound follows by a naive attack that aims to find collisions on the b-bit state.
5.4 Distinguishing Attacks Without Key Recovery
As later explained in Fig. 4, several practical designs use key K for the initialization as well as for the tag generation. Those schemes cannot be distinguished with a straightforward application of the above generic procedure, yet it is still possible to distinguish them by increasing the attack complexity only by 1 bit or so.
We focus on Ascon, GIBBON, and HANUMAN, in which K in the tag computation prevents the adversary from computing the tag A offline. This can be solved by extending the number of message blocks in each query. Instead of the tag \(A_{i,1}\Vert A_{i,2}\Vert \ldots \), the outer parts of the subsequent blocks \([s_{i,2}]^r\Vert [s_{i,3}]^r\Vert \ldots \) serve as a filter to identify the correct guess. If the number of filtered bits is much larger than c, a match suggests the correct guess with very high probability. Owing to the additional message blocks, the attack complexity increases by a bit or so, depending on how many message blocks are added.
In HANUMAN, K can be recovered from the internal state by inverting the permutation back to the initial value. In Ascon and GIBBON, on the other hand, K cannot be recovered, and the adversary can only mount distinguishing attacks.
6 Other CAESAR Submissions

NORX uses domain separation constants at all rounds, but this is not strictly necessary and other solutions exist. In the privacy and integrity proofs of NORX, and more specifically in the analysis of state collisions caused by a decryption query in Lemma 4, the domain separations are only needed at the transitions between variable-length inputs, such as header to message data or message to trailer data. This means that the proofs would equally hold if there were simpler transitions at these positions, as in Ascon. Alternatively, the domain separation can be done by using a different primitive, as in GIBBON and HANUMAN, or a slightly more elaborate padding, as in BLNK, ICEPOLE, and Keyak;

The extra permutation evaluations at the initialization and finalization of NORX are not strictly necessary: in the proof we consider the monotone event that no state collides, assuming no earlier state collision occurred. For instance, in the analysis of \(\mathcal {D}\mathsf {hit}\) in the proof of Lemma 4, we necessarily have a new input to p at some point, and consequently all subsequent inputs to p are new (except with some probability);

NORX starts by initializing the state with \(\mathsf {init}(K,N)=(K\Vert N\Vert 0^{b-\kappa -\nu })\oplus \mathsf {const}\) for some constant \(\mathsf {const}\) and then permuting this value. Placing the key and nonce at different positions of the state does not influence the security analysis. The proof would also work if, for instance, the header is preceded with \(K\Vert N\) or a properly padded version thereof and the starting state is \(0^b\);

In a similar fashion, there is no problem in defining the tag to be a different \(\tau \) bits of the final state; for instance, the rightmost \(\tau \) bits;

Key additions into the inner part after the first permutation are harmless for the mode security proof. Particularly, as long as these are done at fixed positions, these have the same effect as XORing a domain separation constant.
We remark that the attack of Sect. 5 carries over to CBEAM and STRIBOB, to ICEPOLE, and to a simplified version of Keyak v1 (with only one round of key absorption). It does not apply to Ascon, GIBBON, and HANUMAN, due to the additional XOR of the secret key at the end.
6.1 Ascon
Ascon is a submission by Dobraunig et al. [33, 34] and is depicted in Fig. 4a. It is originally defined based on two permutations \(p_1,p_2\) that differ in the number of underlying rounds. We discard this difference, considering Ascon with one permutation p.
Ascon initializes its state using \(\mathsf {init}\) that maps (K, N) to \((0^{b-\kappa -\nu }\Vert K\Vert N)\oplus \mathsf {const}\), where \(\mathsf {const}\) is determined by some design-specific parameters set prior to the security experiment. The header and message can be of arbitrary length and are padded to a length that is a multiple of r bits using \(10^*\)-padding. An XOR with 1 separates header processing from message processing. From the above observations, it is clear that the proofs of NORX directly carry over to Ascon.
6.2 ICEPOLE
ICEPOLE is a submission by Morawiecki et al. [65, 66] and is depicted in Fig. 4c. It is originally defined based on two permutations, \(p_1\) and \(p_2\), that differ in the number of underlying rounds. We discard this difference, considering ICEPOLE with one permutation p.
ICEPOLE initializes its state as NORX does, be it with a different constant. The header and message can be of arbitrary length and are padded as follows. Every block is first appended with a frame bit: \(\mathtt {0}\) for header blocks \(H_1,\ldots ,H_{u-1}\) and message block \(M_v\), and \(\mathtt {1}\) for header block \(H_u\) and message blocks \(M_1,\ldots ,M_{v-1}\). Then, the blocks are padded to a length that is a multiple of r bits using \(10^*\)-padding. In other words, every padded block of r bits contains at most \(r-2\) data bits. This form of domain separation using frame bits suffices for the proof to go through. One variant of ICEPOLE also allows for a secret message number \(M_\text {secret}\), which consists of one block and is encrypted prior to the processing of the header, similarly to the message. As this secret message number is of fixed length, no domain separation is required and the proof can easily be adapted. From the above observations, it is clear that the proofs of NORX directly carry over to ICEPOLE. Without going into detail, we note that the same analysis generalizes to the parallelized mode of ICEPOLE [65, 66].
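The frame-bit padding can be sketched at the bit level as follows. The exact block layout is an assumption for illustration; the invariant it demonstrates is that an r-bit padded block carries at most \(r-2\) data bits, since both the frame bit and the mandatory 1 of the \(10^*\)-padding must fit.

```python
def pad_block(data_bits, frame_bit, r):
    """Append the frame bit, then 10*-pad up to exactly r bits (illustrative)."""
    assert len(data_bits) <= r - 2, "a padded block holds at most r - 2 data bits"
    block = list(data_bits) + [frame_bit, 1]   # frame bit, then the mandatory 1
    block += [0] * (r - len(block))            # fill the remainder with zeros
    return block

# e.g., a non-final header block (frame bit 0) with 3 data bits and r = 8:
print(pad_block([1, 0, 1], 0, 8))  # -> [1, 0, 1, 0, 1, 0, 0, 0]
```

Because the frame bit sits at a fixed position inside every padded block, two queries that diverge in any block yield distinct permutation inputs, which is exactly what the domain-separation argument in the proof needs.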
6.3 Keyak
Keyak v1 is a submission by Bertoni et al. [22]. The basic mode for the serial case is depicted in Fig. 4d, yet due to its hybrid character it is slightly more general in nature. It is built on top of SpongeWrap [19]. We remark that the discussion does not apply to Keyak v2, which is built on top of the full-state keyed Duplex [31, 60].
6.4 BLNK (CBEAM and STRIBOB)
CBEAM and STRIBOB are submissions by Saarinen [81, 83, 84, 85, 86]. Minaud identified an attack on CBEAM [62], but we focus on the modes of operation. Both modes are based on the BLNK Sponge mode [82], which is depicted in Fig. 4b.
The BLNK mode initializes its state by \(0^b\), compresses K into the state (using one or two permutation calls, depending on \(\kappa \)), and does the same with N. Then, the mode is similar to SpongeWrap [19], though using a slightly more involved domain separation system similar to that of NORX. Due to the above observations, our proof readily generalizes to BLNK [82], and thus to CBEAM and STRIBOB.
6.5 PRIMATEs: GIBBON and HANUMAN
PRIMATEs is a submission by Andreeva et al. [2, 3], and consists of three algorithms: APE, GIBBON, and HANUMAN. The APE mode is the more robust one; it significantly differs from the other two, and from the other CAESAR submissions discussed in this work, in the way ciphertexts are derived, and because the mode is secure against nonce-misusing adversaries up to common prefix [4]. (See Sect. 7 for a discussion of APE.) We now focus on GIBBON and HANUMAN, which are depicted in Fig. 4e, f. GIBBON is based on three related permutations \(\mathbf{p}=(p_1,p_2,p_3)\), where the difference between \(p_2\) and \(p_3\) is used as domain separation of the header compression and message encryption phases (the difference of \(p_1\) from \((p_2,p_3)\) is irrelevant for the mode security analysis). Similarly, HANUMAN uses two related permutations \(\mathbf{p}=(p_1,p_2)\) for domain separation.^{6}
GIBBON and HANUMAN initialize their state using \(\mathsf {init}\) that maps (K, N) to \(0^{b-\kappa -\nu }\Vert K\Vert N\). The header and message can be of arbitrary length, and are padded to a length that is a multiple of r bits using \(10^*\)-padding. In case the true header (or message) happens to be a multiple of r bits long, the \(10^*\)-padding is considered to spill over into the capacity. From the above observations, it is clear that the proofs of NORX directly carry over to GIBBON and HANUMAN. A small difference appears due to the usage of two different permutations: we need to make two RP-RF switches for each world. Concretely, this means that the first term in Theorem 1 becomes \(\frac{5(q_p+\sigma _\mathcal {E})^2}{2^{b+1}}\) and the first term in Theorem 2 becomes \(\frac{3(q_p+\sigma _\mathcal {E}+\sigma _\mathcal {D})^2}{2^{b+1}}\).
7 PRIMATEs: APE
APE uses a key of size c bits, and the initialization \(\mathsf {init}\) places K into the inner part of the state. If a nonce N is present, APE prepends it to the header H, denoted \(N\Vert H\). The nonce is of fixed length, of suggested size 2r bits [2, 3]. The header and message can be of arbitrary length and are padded to a length that is a multiple of r bits using \(10^*\)-padding. In case the true header (or message) happens to be a multiple of r bits long, the \(10^*\)-padding is considered to spill over into the capacity. In case the message is not a multiple of r bits long, the last ciphertext block is derived slightly differently, and we refer to [2, 3] for details.
The scheme is designed and proven to be \(2^{c/2}\) secure against nonce-misusing adversaries up to common prefix [4]. We now consider the security of APE in the nonce-respecting setting, and present an adversary that breaks privacy with a complexity of about \(2^{c/2}\). We assume that the adversary can make blockwise queries to the scheme. In more detail, upon an authenticated encryption of \(M_1,\ldots ,M_v\), it only needs to input the \(j\hbox {th}\) message block after it receives the \((j-1)\hbox {th}\) ciphertext block, for \(j=2,\ldots ,v\).
Proposition 1
Proof
8 Conclusions
In this work we analyzed one of the Sponge-based authenticated encryption designs in detail, NORX, and proved that it achieves security of approximately \(\min \{2^{b/2},2^c,2^\kappa \}\), significantly improving upon the traditional bound of \(\min \{2^{c/2},2^\kappa \}\). Additionally, we showed that this proof straightforwardly generalizes to five other CAESAR modes: Ascon, BLNK (of CBEAM/STRIBOB), ICEPOLE, Keyak v1, and PRIMATEs. Our findings indicate an overly conservative parameter choice made by the designers, implying that some designs can improve speed by a factor of 4 at barely any security loss. It is expected that the security proofs also generalize to the modes of Artemia [1]. However, this mode is based on the JH hash function [96] and XORs data blocks into both the rate and the inner part. It does not use domain separations; rather, it encodes the lengths of the inputs into the padding at the end [9]. Therefore, a generalization of the proof of NORX to Artemia is not entirely straightforward.
The results in this work are derived in the ideal permutation model, where the underlying primitive is assumed to be ideal. We acknowledge that this model does not perfectly reflect the properties of the primitives. For instance, the designers of Ascon, NORX, and PRIMATEs state that non-random (but harmless) properties of the underlying permutations exist. Furthermore, it is important to realize that proofs of security for modes of operation in the ideal model do not have a direct connection with the security analysis performed on the permutations, as is the case with block cipher modes of operation. Nevertheless, we can use these proofs as heuristics to guide cryptanalysts to focus on the underlying permutations, rather than the modes themselves.
Footnotes
 1.
CBEAM was withdrawn after an attack by Minaud [62], but we focus on modes of operation.
 2.
Keyak v2 follows a different design approach.
 3.
Both CBEAM and STRIBOB use the BLNK Sponge mode [82].
 4.
For \(D=0\), the original specification dictates an additional \(10^{b-2}1\)-padding for every complete message block. This means that lanes \(1,\ldots ,v-1\) consist of two rounds. We do not take this padding into account, noting that it is unnecessary for the security analysis.
 5.
Note that if \((\delta ,j)\) were not unique, then we similarly have \(s_{\ell -1}^T=s_{\delta ',j',\ell -1}^T\) and \(s_\ell ^T=s_{\delta ',j',\ell }^T\oplus (T_\ell \Vert 0^c)\oplus (T_{\delta ',j',\ell }\Vert 0^c)\ne s_{\delta ',j',\ell }^T\) for all other queries \((\delta ',j')\) with the same prefix (possibly XORed with \(\mathtt {04}\oplus \mathtt {08}\)).
 6.
Vizár [94] pointed out an oversight in the domain separation of an earlier version of HANUMAN. In this work, we consider the latest version of HANUMAN, with fixed domain separation.
Notes
Acknowledgements
The authors would like to thank their codesigners of NORX and PRIMATEs and the designers of Ascon and Keyak for the discussions. In particular, we thank Samuel Neves for his useful comments. The authors furthermore thank the reviewers for their insightful comments. Bart Mennink is supported by a postdoctoral fellowship from the Netherlands Organisation for Scientific Research (NWO) under Veni grant 016.Veni.173.017.
References
1. J. Alizadeh, M. Aref, N. Bagheri, Artemia v1 (2014), submission to CAESAR competition
2. E. Andreeva, B. Bilgin, A. Bogdanov, A. Luykx, F. Mendel, B. Mennink, N. Mouha, Q. Wang, K. Yasuda, PRIMATEs v1 (2014), submission to CAESAR competition
3. E. Andreeva, B. Bilgin, A. Bogdanov, A. Luykx, F. Mendel, B. Mennink, N. Mouha, Q. Wang, K. Yasuda, PRIMATEs v1.1 (2016), submission to CAESAR competition
4. E. Andreeva, B. Bilgin, A. Bogdanov, A. Luykx, B. Mennink, N. Mouha, K. Yasuda, APE: authenticated permutation-based encryption for lightweight cryptography, in C. Cid, C. Rechberger (eds.) Fast Software Encryption—21st International Workshop, FSE 2014, London, UK, March 3–5, 2014. Revised Selected Papers. Lecture Notes in Computer Science, vol. 8540 (Springer, 2014), pp. 168–186
5. E. Andreeva, A. Bogdanov, A. Luykx, B. Mennink, E. Tischhauser, K. Yasuda, Parallelizable and authenticated online ciphers, in K. Sako, P. Sarkar (eds.) Advances in Cryptology—ASIACRYPT 2013—19th International Conference on the Theory and Application of Cryptology and Information Security, Bengaluru, India, December 1–5, 2013, Proceedings, Part I. Lecture Notes in Computer Science, vol. 8269 (Springer, 2013), pp. 424–443
6. E. Andreeva, J. Daemen, B. Mennink, G. Van Assche, Security of keyed sponge constructions using a modular proof approach, in G. Leander (ed.) Fast Software Encryption—22nd International Workshop, FSE 2015, Istanbul, Turkey, March 8–11, 2015, Revised Selected Papers. Lecture Notes in Computer Science, vol. 9054 (Springer, 2015), pp. 364–384
7. J. Aumasson, P. Jovanovic, S. Neves, NORX v1 (2014), submission to CAESAR competition
8. J. Aumasson, P. Jovanovic, S. Neves, NORX v2.0 (2015), submission to CAESAR competition
9. N. Bagheri, Padding of Artemia (2014), CAESAR mailing list
10. M. Bellare, V.T. Hoang, Identity-based format-preserving encryption, in B.M. Thuraisingham, D. Evans, T. Malkin, D. Xu (eds.) Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30–November 03, 2017 (ACM, 2017), pp. 1515–1532
11. M. Bellare, C. Namprempre, Authenticated encryption: relations among notions and analysis of the generic composition paradigm. J. Cryptol. 21(4), 469–491 (2008)
12. M. Bellare, P. Rogaway, Code-based game-playing proofs and the security of triple encryption. Cryptology ePrint Archive, Report 2004/331 (2004)
13. M. Bellare, P. Rogaway, The security of triple encryption and a framework for code-based game-playing proofs, in Vaudenay [93], pp. 409–426
14. M. Bellare, P. Rogaway, D. Wagner, The EAX mode of operation, in B.K. Roy, W. Meier (eds.) Fast Software Encryption, 11th International Workshop, FSE 2004, Delhi, India, February 5–7, 2004, Revised Papers. Lecture Notes in Computer Science, vol. 3017 (Springer, 2004), pp. 389–407
15. J. Benaloh (ed.), Topics in Cryptology—CT-RSA 2014—The Cryptographer's Track at the RSA Conference 2014, San Francisco, CA, USA, February 25–28, 2014, Proceedings. Lecture Notes in Computer Science, vol. 8366 (Springer, 2014)
16. G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, Sponge functions. ECRYPT Hash Function Workshop (2007)
17. G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, On the indifferentiability of the sponge construction, in N.P. Smart (ed.) Advances in Cryptology—EUROCRYPT 2008, 27th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Istanbul, Turkey, April 13–17, 2008. Proceedings. Lecture Notes in Computer Science, vol. 4965 (Springer, 2008), pp. 181–197
 17.G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, On the indifferentiability of the sponge construction, in N.P. Smart, (ed.) Advances in Cryptology—EUROCRYPT 2008, 27th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Istanbul, Turkey, April 13–17, 2008. Proceedings. Lecture Notes in Computer Science, vol. 4965 (Springer, 2008), pp. 181–197Google Scholar
 18.G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, Spongebased pseudorandom number generators, in S. Mangard, F. Standaert, (eds.) Cryptographic Hardware and Embedded Systems, CHES 2010, 12th International Workshop, Santa Barbara, CA, USA, August 17–20, 2010. Proceedings. Lecture Notes in Computer Science, vol. 6225 (Springer, 2010), pp. 33–47Google Scholar
 19.G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, Duplexing the sponge: Singlepass authenticated encryption and other applications, in A. Miri, S. Vaudenay, (eds.) Selected Areas in Cryptography—18th International Workshop, SAC 2011, Toronto, ON, Canada, August 11–12, 2011, Revised Selected Papers. Lecture Notes in Computer Science, vol. 7118 (Springer, 2011), pp. 320–337Google Scholar
 20.G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, On the security of the keyed sponge construction. Symmetric Key Encryption Workshop (2011)Google Scholar
 21.G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, Permutationbased encryption, authentication and authenticated encryption. Directions in Authenticated Ciphers (2012)Google Scholar
 22.G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, R. Van Keer, Keyak v1 (2014), submission to CAESAR competitionGoogle Scholar
 23.G. Bertoni, J. Daemen, M. Peeters, G. Van Assche, R. Van Keer, Keyak v2 (2015), submission to CAESAR competitionGoogle Scholar
 24.A. Bogdanov, M. Knezevic, G. Leander, D. Toz, K. Varici, I. Verbauwhede, spongent: A lightweight hash function, in B. Preneel, T. Takagi, (eds.) Cryptographic Hardware and Embedded Systems—CHES 2011—13th International Workshop, Nara, Japan, September 28–October 1, 2011. Proceedings. Lecture Notes in Computer Science, vol. 6917 (Springer, 2011), pp. 312–325Google Scholar
 25.CAESAR, Competition for Authenticated Encryption: Security, Applicability, and Robustness (2014). http://competitions.cr.yp.to/caesar.html
 26.D. Chang, M. Dworkin, S. Hong, J. Kelsey, M. Nandi, A Keyed Sponge Construction with Pseudorandomness in the Standard Model. NIST’s 3rd SHA3 Candidate Conference 2012 (2012)Google Scholar
 27.D. Chang, M. Nandi, Improved indifferentiability security analysis of chopmd hash function, in K. Nyberg, (ed.) Fast Software Encryption, 15th International Workshop, FSE 2008, Lausanne, Switzerland, February 10–13, 2008, Revised Selected Papers. Lecture Notes in Computer Science, vol. 5086 (Springer, 2008), pp. 429–443Google Scholar
 28.H. Chernoff, A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the sum of Observations. Ann. Math. Stat. 23(4), 493–507 (1952)MathSciNetCrossRefzbMATHGoogle Scholar
 29.B. Cogliati, R. Lampe, Y. Seurin, Tweaking evenmansour ciphers, in Gennaro and Robshaw [40], pp. 189–208Google Scholar
 30.R.M. Corless, G.H. Gonnet, D.E.G. Hare, D.J. Jeffrey, D.E. Knuth, On the Lambert \({W}\) function. Adv. Comput. Math. 5(1), 329–359 (1996)MathSciNetCrossRefzbMATHGoogle Scholar
 31.J. Daemen, B. Mennink, G. Van Assche, Fullstate keyed duplex with builtin multiuser support, in T. Takagi, T. Peyrin, (eds.) Advances in Cryptology—ASIACRYPT 2017—23rd International Conference on the Theory and Applications of Cryptology and Information Security, Hong Kong, China, December 3–7, 2017, Proceedings, Part II. Lecture Notes in Computer Science, vol. 10625 (Springer, 2017), pp. 606–637Google Scholar
32. I. Dinur, O. Dunkelman, N. Keller, A. Shamir, Cryptanalysis of iterated Even-Mansour schemes with two keys, in Sarkar and Iwata [87], pp. 439–457. http://dx.doi.org/10.1007/978-3-662-45611-8_23
33. C. Dobraunig, M. Eichlseder, F. Mendel, M. Schläffer, Ascon v1 (2014), submission to CAESAR competition
34. C. Dobraunig, M. Eichlseder, F. Mendel, M. Schläffer, Ascon v1.1 (2015), submission to CAESAR competition
35. FIPS 202, SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions (2015)
36. M. Fischlin, J. Coron (eds.), Advances in Cryptology—EUROCRYPT 2016—35th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Vienna, Austria, May 8–12, 2016, Proceedings, Part I. Lecture Notes in Computer Science, vol. 9665 (Springer, 2016)
37. E. Fleischmann, C. Forler, S. Lucks, McOE: a family of almost foolproof online authenticated encryption schemes, in A. Canteaut (ed.) Fast Software Encryption—19th International Workshop, FSE 2012, Washington, DC, USA, March 19–21, 2012, Revised Selected Papers. Lecture Notes in Computer Science, vol. 7549 (Springer, 2012), pp. 196–215
38. P. Gazi, K. Pietrzak, S. Tessaro, The exact PRF security of truncation: tight bounds for keyed sponges and truncated CBC, in Gennaro and Robshaw [40], pp. 368–387
39. P. Gazi, S. Tessaro, Provably robust sponge-based PRNGs and KDFs, in Fischlin and Coron [36], pp. 87–116
40. R. Gennaro, M. Robshaw (eds.), Advances in Cryptology—CRYPTO 2015—35th Annual Cryptology Conference, Santa Barbara, CA, USA, August 16–20, 2015, Proceedings, Part I. Lecture Notes in Computer Science, vol. 9215 (Springer, 2015)
41. M. Girault, J. Stern, On the length of cryptographic hash-values used in identification schemes, in Y. Desmedt (ed.) Advances in Cryptology—CRYPTO ’94, 14th Annual International Cryptology Conference, Santa Barbara, California, USA, August 21–25, 1994, Proceedings. Lecture Notes in Computer Science, vol. 839 (Springer, 1994), pp. 202–215
42. D. Gligoroski, H. Mihajloska, S. Samardjiska, H. Jacobsen, M. El-Hadedy, R. Jensen, \(\pi \)-Cipher v1 (2014), submission to CAESAR competition
43. D. Gligoroski, H. Mihajloska, S. Samardjiska, H. Jacobsen, M. El-Hadedy, R. Jensen, \(\pi \)-Cipher v2.0 (2015), submission to CAESAR competition
44. R. Granger, P. Jovanovic, B. Mennink, S. Neves, Improved masking for tweakable blockciphers with applications to authenticated encryption, in Fischlin and Coron [36], pp. 263–293
45. J. Guo, T. Peyrin, A. Poschmann, The PHOTON family of lightweight hash functions, in P. Rogaway (ed.) Advances in Cryptology—CRYPTO 2011—31st Annual Cryptology Conference, Santa Barbara, CA, USA, August 14–18, 2011, Proceedings. Lecture Notes in Computer Science, vol. 6841 (Springer, 2011), pp. 222–239
46. S. Hirose, K. Ideguchi, H. Kuwakado, T. Owada, B. Preneel, H. Yoshida, A lightweight 256-bit hash function for hardware and low-end devices: Lesamnta-LW, in K.H. Rhee, D. Nyang (eds.) Information Security and Cryptology—ICISC 2010—13th International Conference, Seoul, Korea, December 1–3, 2010, Revised Selected Papers. Lecture Notes in Computer Science, vol. 6829 (Springer, 2010), pp. 151–168
47. S. Hirose, H. Kuwakado, H. Yoshida, Compression functions using a dedicated blockcipher for lightweight hashing, in H. Kim (ed.) Information Security and Cryptology—ICISC 2011—14th International Conference, Seoul, Korea, November 30–December 2, 2011, Revised Selected Papers. Lecture Notes in Computer Science, vol. 7259 (Springer, 2011), pp. 346–364
48. V.T. Hoang, T. Krovetz, P. Rogaway, Robust authenticated-encryption AEZ and the problem that it solves, in E. Oswald, M. Fischlin (eds.) Advances in Cryptology—EUROCRYPT 2015—34th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Sofia, Bulgaria, April 26–30, 2015, Proceedings, Part I. Lecture Notes in Computer Science, vol. 9056 (Springer, 2015), pp. 15–44
49. V.T. Hoang, S. Tessaro, The multi-user security of double encryption, in J. Coron, J.B. Nielsen (eds.) Advances in Cryptology—EUROCRYPT 2017—36th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Paris, France, April 30–May 4, 2017, Proceedings, Part II. Lecture Notes in Computer Science, vol. 10211 (2017), pp. 381–411
50. A. Hoorfar, M. Hassani, Inequalities on the Lambert \(W\) function and hyperpower function. J. Inequal. Pure Appl. Math. 9(2) (2008)
51. T. Iwata, K. Ohashi, K. Minematsu, Breaking and repairing GCM security proofs, in R. Safavi-Naini, R. Canetti (eds.) Advances in Cryptology—CRYPTO 2012—32nd Annual Cryptology Conference, Santa Barbara, CA, USA, August 19–23, 2012, Proceedings. Lecture Notes in Computer Science, vol. 7417 (Springer, 2012), pp. 31–49
52. É. Jaulmes, A. Joux, F. Valette, On the security of randomized CBC-MAC beyond the birthday paradox limit: a new construction, in J. Daemen, V. Rijmen (eds.) Fast Software Encryption, 9th International Workshop, FSE 2002, Leuven, Belgium, February 4–6, 2002, Revised Papers. Lecture Notes in Computer Science, vol. 2365 (Springer, 2002), pp. 237–251
53. P. Jovanovic, A. Luykx, B. Mennink, Beyond \(2^{c/2}\) security in sponge-based authenticated encryption modes, in Sarkar and Iwata [87], pp. 85–104
54. L.R. Knudsen, F. Mendel, C. Rechberger, S.S. Thomsen, Cryptanalysis of MDC-2, in A. Joux (ed.) Advances in Cryptology—EUROCRYPT 2009, 28th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Cologne, Germany, April 26–30, 2009, Proceedings. Lecture Notes in Computer Science, vol. 5479 (Springer, 2009), pp. 106–120
55. T. Krovetz, P. Rogaway, The software performance of authenticated-encryption modes, in A. Joux (ed.) Fast Software Encryption—18th International Workshop, FSE 2011, Lyngby, Denmark, February 13–16, 2011, Revised Selected Papers. Lecture Notes in Computer Science, vol. 6733 (Springer, 2011), pp. 306–327
56. U.M. Maurer, R. Renner, C. Holenstein, Indifferentiability, impossibility results on reductions, and applications to the random oracle methodology, in M. Naor (ed.) Theory of Cryptography, First Theory of Cryptography Conference, TCC 2004, Cambridge, MA, USA, February 19–21, 2004, Proceedings. Lecture Notes in Computer Science, vol. 2951 (Springer, 2004), pp. 21–39
57. D.A. McGrew, J. Viega, The security and performance of the Galois/Counter Mode (GCM) of operation, in A. Canteaut, K. Viswanathan (eds.) Progress in Cryptology—INDOCRYPT 2004, 5th International Conference on Cryptology in India, Chennai, India, December 20–22, 2004, Proceedings. Lecture Notes in Computer Science, vol. 3348 (Springer, 2004), pp. 343–355
58. F. Mendel, S. Thomsen, An Observation on JH-512. Available online (2008)
59. B. Mennink, XPX: generalized tweakable Even-Mansour with improved security guarantees, in Robshaw and Katz [76], pp. 64–94
60. B. Mennink, R. Reyhanitabar, D. Vizár, Security of full-state keyed sponge and duplex: applications to authenticated encryption, in T. Iwata, J.H. Cheon (eds.) Advances in Cryptology—ASIACRYPT 2015—21st International Conference on the Theory and Application of Cryptology and Information Security, Auckland, New Zealand, November 29–December 3, 2015, Proceedings, Part II. Lecture Notes in Computer Science, vol. 9453 (Springer, 2015), pp. 465–489
61. H. Mihajloska, B. Mennink, D. Gligoroski, \(\pi \)-Cipher with Intermediate Tags (2016), available online
62. B. Minaud, Re: CBEAM Withdrawn as of today! (2014), CAESAR mailing list
63. K. Minematsu, Parallelizable rate-1 authenticated encryption from pseudorandom functions, in P.Q. Nguyen, E. Oswald (eds.) Advances in Cryptology—EUROCRYPT 2014—33rd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Copenhagen, Denmark, May 11–15, 2014, Proceedings. Lecture Notes in Computer Science, vol. 8441 (Springer, 2014), pp. 275–292
64. M. Mitzenmacher, E. Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis (Cambridge University Press, New York, 2005)
65. P. Morawiecki, K. Gaj, E. Homsirikamol, K. Matusiewicz, J. Pieprzyk, M. Rogawski, M. Srebrny, M. Wójcik, ICEPOLE v1 (2014), submission to CAESAR competition
66. P. Morawiecki, K. Gaj, E. Homsirikamol, K. Matusiewicz, J. Pieprzyk, M. Rogawski, M. Srebrny, M. Wójcik, ICEPOLE v2 (2015), submission to CAESAR competition
67. R. Motwani, P. Raghavan, Randomized Algorithms (Cambridge University Press, New York, 1995)
68. Y. Naito, Y. Sasaki, L. Wang, K. Yasuda, Generic state-recovery and forgery attacks on ChopMD-MAC and on NMAC/HMAC, in K. Sakiyama, M. Terada (eds.) Advances in Information and Computer Security—8th International Workshop on Security, IWSEC 2013, Okinawa, Japan, November 18–20, 2013, Proceedings. Lecture Notes in Computer Science, vol. 8231 (Springer, 2013), pp. 83–98
69. Y. Naito, K. Yasuda, New bounds for keyed sponges with extendable output: independence between capacity and message length, in T. Peyrin (ed.) Fast Software Encryption—23rd International Conference, FSE 2016, Bochum, Germany, March 20–23, 2016, Revised Selected Papers. Lecture Notes in Computer Science, vol. 9783 (Springer, 2016), pp. 3–22
70. I. Nikolic, L. Wang, S. Wu, Cryptanalysis of round-reduced LED, in S. Moriai (ed.) Fast Software Encryption—20th International Workshop, FSE 2013, Singapore, March 11–13, 2013, Revised Selected Papers. Lecture Notes in Computer Science, vol. 8424 (Springer, 2013), pp. 112–129
71. F.W.J. Olver, D.W. Lozier, R.F. Boisvert, C.W. Clark (eds.), NIST Handbook of Mathematical Functions (Cambridge University Press, New York, 2010)
72. T. Peyrin, Y. Seurin, Counter-in-tweak: authenticated encryption modes for tweakable block ciphers, in Robshaw and Katz [76], pp. 33–63
73. B. Preneel, R. Govaerts, J. Vandewalle, On the power of memory in the design of collision resistant hash functions, in J. Seberry, Y. Zheng (eds.) Advances in Cryptology—AUSCRYPT ’92, Workshop on the Theory and Application of Cryptographic Techniques, Gold Coast, Queensland, Australia, December 13–16, 1992, Proceedings. Lecture Notes in Computer Science, vol. 718 (Springer, 1992), pp. 105–121
74. M. Raab, A. Steger, “Balls into Bins”—a simple and tight analysis, in M. Luby, J.D.P. Rolim, M.J. Serna (eds.) Randomization and Approximation Techniques in Computer Science, Second International Workshop, RANDOM’98, Barcelona, Spain, October 8–10, 1998, Proceedings. Lecture Notes in Computer Science, vol. 1518 (Springer, 1998), pp. 159–170
75. R. Reyhanitabar, Do Sponge-based AE modes have beyond \(2^{c/2}\) “Security”? (2014), CAESAR mailing list
76. M. Robshaw, J. Katz (eds.), Advances in Cryptology—CRYPTO 2016—36th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 14–18, 2016, Proceedings, Part I. Lecture Notes in Computer Science, vol. 9814 (Springer, 2016)
77. P. Rogaway, Authenticated-encryption with associated-data, in V. Atluri (ed.) Proceedings of the 9th ACM Conference on Computer and Communications Security, CCS 2002, Washington, DC, USA, November 18–22, 2002 (ACM, 2002), pp. 98–107
78. P. Rogaway, Efficient instantiations of tweakable blockciphers and refinements to modes OCB and PMAC, in P.J. Lee (ed.) Advances in Cryptology—ASIACRYPT 2004, 10th International Conference on the Theory and Application of Cryptology and Information Security, Jeju Island, Korea, December 5–9, 2004, Proceedings. Lecture Notes in Computer Science, vol. 3329 (Springer, 2004), pp. 16–31
79. P. Rogaway, M. Bellare, J. Black, T. Krovetz, OCB: a block-cipher mode of operation for efficient authenticated encryption, in M.K. Reiter, P. Samarati (eds.) CCS 2001, Proceedings of the 8th ACM Conference on Computer and Communications Security, Philadelphia, Pennsylvania, USA, November 6–8, 2001 (ACM, 2001), pp. 196–205
80. P. Rogaway, T. Shrimpton, A provable-security treatment of the key-wrap problem, in Vaudenay [93], pp. 373–390
81. M.J.O. Saarinen, Authenticated Encryption from GOST R 34.11-2012 LPS Permutation, in CTCrypt 2014 (2014)
82. M.J.O. Saarinen, Beyond modes: building a secure record protocol from a cryptographic sponge permutation, in Benaloh [15], pp. 270–285
83. M.J.O. Saarinen, CBEAM: efficient authenticated encryption from feebly one-way \(\phi \) functions, in Benaloh [15], pp. 251–269
84. M.J.O. Saarinen, CBEAM r1 (2014), submission to CAESAR competition
85. M.J.O. Saarinen, STRIBOB r1 (2014), submission to CAESAR competition
86. M.J.O. Saarinen, B.B. Brumley, STRIBOB r2: “WHIRLBOB” (2015), submission to CAESAR competition
87. P. Sarkar, T. Iwata (eds.), Advances in Cryptology—ASIACRYPT 2014—20th International Conference on the Theory and Application of Cryptology and Information Security, Kaoshiung, Taiwan, R.O.C., December 7–11, 2014, Proceedings, Part I. Lecture Notes in Computer Science, vol. 8873 (Springer, 2014)
88. Y. Sasaki, L. Wang, Generic attacks on strengthened HMAC: n-bit secure HMAC requires key in all blocks, in M. Abdalla, R.D. Prisco (eds.) Security and Cryptography for Networks—9th International Conference, SCN 2014, Amalfi, Italy, September 3–5, 2014, Proceedings. Lecture Notes in Computer Science, vol. 8642 (Springer, 2014), pp. 324–339
89. Y. Sasaki, K. Yasuda, How to incorporate associated data in sponge-based authenticated encryption, in K. Nyberg (ed.) Topics in Cryptology—CT-RSA 2015, The Cryptographer’s Track at the RSA Conference 2015, San Francisco, CA, USA, April 20–24, 2015, Proceedings. Lecture Notes in Computer Science, vol. 9048 (Springer, 2015), pp. 353–370
90. Y. Sasaki, K. Yasuda, Directly Evaluating Multi-Collisions and Improving Security Bounds. Symmetric Cryptography, Dagstuhl Seminar 16021 (2016)
91. K. Suzuki, D. Tonien, K. Kurosawa, K. Toyota, Birthday paradox for multi-collisions, in M.S. Rhee, B. Lee (eds.) Information Security and Cryptology—ICISC 2006, 9th International Conference, Busan, Korea, November 30–December 1, 2006, Proceedings. Lecture Notes in Computer Science, vol. 4296 (Springer, 2006), pp. 29–40
92. K. Suzuki, D. Tonien, K. Kurosawa, K. Toyota, Birthday paradox for multi-collisions. IEICE Trans. 91-A(1), 39–45 (2008)
93. S. Vaudenay (ed.), Advances in Cryptology—EUROCRYPT 2006, 25th Annual International Conference on the Theory and Applications of Cryptographic Techniques, St. Petersburg, Russia, May 28–June 1, 2006, Proceedings. Lecture Notes in Computer Science, vol. 4004 (Springer, 2006)
94. D. Vizár, Ciphertext forgery on HANUMAN. Cryptology ePrint Archive, Report 2016/697 (2016)
95. D. Whiting, R. Housley, N. Ferguson, AES Encryption and Authentication Using CTR Mode and CBC-MAC. IEEE 802.11-02/001r2 (2002)
96. H. Wu, The Hash Function JH (2011), submission to NIST’s SHA-3 competition
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.