1 Introduction

Privacy-enhancing attribute-based credentials systems (aka PABCs, anonymous credentials, or pseudonym systems) allow for cryptographically strong user authentication while preserving the users’ privacy by giving users full control over the information they reveal. There are three types of parties in a PABC system. Issuers assign sets of attribute values to users by issuing credentials for these sets. Users can present (i.e., prove possession of) their credentials to verifiers by revealing a subset of the attributes from one or more credentials. The verifiers can then check the validity of such presentations using the issuers’ public keys, but they do not learn any information about the hidden attributes and cannot link different presentations by the same user. This basic functionality of a PABC system can be extended in a large number of ways, including pseudonyms, revocation of credentials, inspection, proving relations among attributes hidden in presented credentials, and key binding [2, 3].

The importance of privacy and data minimization has been emphasized, e.g., by the European Commission in the European privacy standards [4, 5] and by the US government in the National Strategy for Trusted Identities in Cyberspace (NSTIC) [6]. With IBM’s Identity Mixer based on CL-signatures [7–10] and Microsoft’s U-Prove based on Brands’ signatures [11, 12], practical solutions for PABCs exist and are currently deployed in several pilot projects [2, 13–15]. In fact, numerous anonymous credential schemes as well as special cases thereof such as group signatures, direct anonymous attestation, or identity escrow have been proposed, offering a large variety of different features [8, 10–12, 16–24].

Despite this large body of work, a unified definitional framework for the security properties of a full-fledged PABC system is still missing. Existing schemes either have targeted security definitions for specific use cases [8, 1820] or do not provide provable security guarantees at all [12, 21, 22]. One possible reason for the lack of a generic framework is that dedicated schemes for specific scenarios are often more efficient than generic solutions. However, defining, instantiating, and re-proving tailored variants of PABCs is hard and error-prone. Clearly, it is desirable to have a unified definitional approach providing security definitions for a full-fledged PABC system. It turns out that achieving such definitions is far from trivial as they quickly become very complex, in particular, if one allows for relations between hidden attributes when issuing and presenting credentials, as we shall discuss. Nevertheless, in this paper we take a major step towards such a unified framework for PABCs by formally defining the most relevant features, detached from specific instantiations or use cases. We further provide a generic construction of a PABC system based on a number of simpler building blocks such as blind signatures or revocation schemes, and a formal proof that this construction meets our security definitions. Finally, we give concrete instantiations of these components, with detailed security proofs provided in the full version.

Considered Features. Our definition of PABC systems comprises the richest set of features integrated into a holistic \({\mathsf{PABC}}\) scheme so far. It supports credentials with any fixed number of attributes, of which any subset can be revealed during presentation. A single presentation can reveal attributes from multiple credentials. Users can prove equality of attributes, potentially across different credentials, without revealing their exact values. Users have secret keys from which arbitrarily many scope-exclusive pseudonyms can be derived. That is, for a given secret key and scope, only one unique pseudonym can be derived. Thus, by reusing the same scope in multiple presentations, users can intentionally create linkability between presentations; using different scopes yields mutually unlinkable pseudonyms.
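The scope-exclusive pseudonym property can be illustrated with a toy discrete-log-style sketch (hypothetical parameters, not the paper's actual instantiation): deriving the pseudonym as \(H(\textit{scope})^{\textit{usk}}\) in a group makes it a deterministic function of the key and the scope, so reusing a scope yields the same pseudonym while different scopes are unlinkable.

```python
import hashlib

# Toy sketch of scope-exclusive pseudonyms (illustrative parameters only,
# NOT a secure instantiation): nym = H(scope)^usk mod p, so a fixed
# (usk, scope) pair always yields the same pseudonym, while different
# scopes give (computationally) unlinkable values.
P = 2**127 - 1  # hypothetical prime modulus standing in for real group parameters

def hash_to_group(scope: str) -> int:
    """Map a scope string to a group element (illustrative only)."""
    digest = hashlib.sha256(scope.encode()).digest()
    return int.from_bytes(digest, "big") % P

def nym_gen(usk: int, scope: str) -> int:
    """Derive the unique pseudonym for (usk, scope)."""
    return pow(hash_to_group(scope), usk, P)
```

Note how determinism gives intentional linkability: two presentations under the same scope expose the same `nym_gen` output, while changing the scope re-randomizes it.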

Credentials can optionally be bound to the users’ secret keys to prevent users from sharing their credentials. For a presentation that involves multiple credentials and/or a pseudonym, all credentials must be bound to, and the pseudonym derived from, the same user secret key. Issuers can revoke credentials, so that they can no longer be used. During issuance, some attribute values may be hidden from the issuer or “carried over” from existing credentials. This latter advanced issuance means that the issuer does not learn their values but is guaranteed that they are equal to an attribute in an existing credential.

Our Contributions. We give formal security definitions for a full PABC system that incorporates all of the features mentioned above. We provide a generic construction from lower-level building blocks that satisfies our definitions and we present secure instantiations of the building blocks.

In terms of security, informally, we expect presentations to be unforgeable, i.e., users can only present attributes from legitimately obtained and unrevoked credentials, and to be private, i.e., they do not reveal anything more than intended. For privacy, we distinguish weak privacy, where presentations of a credential cannot be linked to a specific issuance session, and the strictly stronger notion of simulatable privacy, where in addition presentations of the same credential cannot be linked to each other. This allows us to cover (slight variants of) the most prevalent schemes used in practice, U-Prove and Identity Mixer.

Formally defining these properties is far from trivial because of the complexity of our envisaged system. For example, users can obtain credentials on (i) revealed, (ii) blindly carried-over, and (iii) fully blind attributes. Each type comes with different security expectations that must be covered by a single definition. Carried-over attributes, for example, present a challenge when defining unforgeability: While the issuer never learns the attributes, the adversary must not be able to present a value that was not previously issued as part of a pre-existing credential. For privacy, one challenge is to formalize the exact information that users intend to reveal, as they might reveal the same and possibly identifying attributes in different presentations. Revocation gives the issuer the power to exclude certain credentials from being used, which must be modeled without cementing trivial linking attacks into the model that would render the definition moot.

As our definitions are rather complex, proving that a concrete scheme satisfies them from scratch can be a challenging and tedious task. Also, proofs for such monolithic definitions tend to be hard to verify. We thus further define building blocks, strongly inspired by existing work, and show how to generically compose them to build a secure PABC system. Our construction is efficient in the sense that its complexity is roughly the sum of the complexities of the building blocks. Additionally, this construction allows for simple changes of individual components (e.g., the underlying revocation scheme) without affecting any other building blocks and without having to reprove the security of the system. Finally, we give concrete instantiations for all our building blocks based on existing protocols.

Related Work. Our definitions are inspired by the work of Chase et al. [18, 25], who provide formal, property-based definitions of delegatable anonymous credential systems and give a generic construction from so-called P-signatures [26]. However, their work focuses on pseudonymous access control with delegation and lacks additional features such as attributes, revocation, and advanced issuance.

PABCs were first envisioned by Chaum [16, 27], and they have been a vivid area of research over the last decade. The currently most mature solutions are IBM’s Identity Mixer based on CL-signatures [7–10] and Microsoft’s U-Prove based on Brands’ signatures [11, 12]. A first formal definition [8] in the ideal-real world paradigm covered the basic functionalities without attributes and revocation. Their definition is stand-alone and does not allow composability, as honest parties never output any cryptographic values such as credentials or pseudonyms. This restriction makes it infeasible to use their schemes as a building block in a larger system. These drawbacks are shared by the definition of Garman et al. [19].

A recent MAC-based credential scheme [20] allows for multiple attributes per credential, but requires issuer and verifier to be the same entity. It does not cover pseudonyms, advanced issuance, or revocation and provides rather informal definitions only. Similarly, Hanser and Slamanig [28] do not consider any of these features nor blind issuance, i.e., the issuer always learns all the user’s attributes.

Baldimtsi and Lysyanskaya [29] define a blind signature scheme with attributes and claim that it yields an efficient linkable anonymous credential scheme, again without giving formal definitions. The scheme can be seen as a weakened version of our signature building block without unlinkability or extractability of hidden attributes – properties that are crucial for our PABC system.

Camenisch et al. [24] provide a UC-definition for anonymous credentials and a construction based on bilinear maps. However, their definition does not cover a full-blown credential system but just a primitive to issue and verify signatures on attributes, i.e., a primitive we call privacy-enhancing signatures in this paper.

Finally, existing definitions of attribute-based signatures, e.g., [30–32], differ substantially from those of our building block for privacy-enhancing attribute-based signatures (cf. Sect. ), as again they do not consider, e.g., blind issuance.

2 Notation

Algorithms and parties are denoted by sans-serif fonts, e.g., \(\mathsf{A},\mathsf{B}\). For deterministic (probabilistic) algorithms we write \(a\leftarrow \mathsf{A}(in)\) (\(a\xleftarrow{\$}\mathsf{A}(in)\)), if a is the output of \(\mathsf{A}\) on inputs in. For an interactive protocol \((\mathsf{A},\mathsf{B})\) let \((out_{\mathsf{A}},out_{\mathsf{B}})\leftarrow \langle \mathsf{A}(in_\mathsf{A}),\mathsf{B}(in_\mathsf{B})\rangle \) denote that, on private inputs \(in_\mathsf{A}\) to \(\mathsf{A}\) and \(in_\mathsf{B}\) to \(\mathsf{B}\), \(\mathsf{A}\) and \(\mathsf{B}\) obtain outputs \(out_{\mathsf{A}}\) and \(out_\mathsf{B}\), respectively. For a set \(\mathcal {S}\), \(s\xleftarrow{\$}\mathcal {S}\) denotes that s is drawn uniformly at random from \(\mathcal {S}\). We write \(\Pr [\mathcal {E}:\varOmega ]\) to denote the probability of event \(\mathcal {E}\) over the probability space \(\varOmega \). We write vectors as \(\vec {x}=(x_i)_{i=1}^k=(x_1,\dots ,x_k)\).

A function \(\nu :\mathbb {N}\rightarrow \mathbb {R}\) is negligible if for every k and all sufficiently large n we have \(\nu (n)<\frac{1}{n^k}\). Let \(\pm \{0,1\}^k:=[-2^k+1,2^k-1]\cap \mathbb {Z}\), and \([n]:=\{1,\dots ,n\}\).

Finally, \(\kappa \) is the main security parameter, and \(\varepsilon \) the empty string or list.
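The two interval shorthands above can be checked with a few lines of code (assumed semantics: \(\pm \{0,1\}^k=[-2^k+1,2^k-1]\cap \mathbb {Z}\) and \([n]=\{1,\dots ,n\}\), exactly as defined in this section):

```python
# Sanity check of the shorthand sets from this section:
# ±{0,1}^k = [-2^k+1, 2^k-1] ∩ Z  and  [n] = {1, ..., n}.

def pm_bits(k: int) -> range:
    """The symmetric integer interval ±{0,1}^k."""
    return range(-2**k + 1, 2**k)  # range end is exclusive

def idx(n: int) -> range:
    """The index set [n] = {1, ..., n}."""
    return range(1, n + 1)
```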

3 Privacy ABC Systems

A privacy-enhancing attribute-based credential system for an attribute space \(\mathcal {AS}\) is a set of algorithms \(\mathsf{SPGen}\), \(\mathsf{UKGen}\), \(\mathsf{IKGen}\), \(\mathsf{Present}\), \(\mathsf{Verify}\), \(\mathsf{ITGen}\), \(\mathsf{ITVf}\), and \(\mathsf{Revoke}\) and a protocol \(\langle \mathcal {U}.\mathsf{Issue},{\mathcal I}.\mathsf{Issue}\rangle \). Wherever possible (i.e., except for (advanced) issuance which is inherently interactive), we opted for non-interactive protocols. This is because rounds of interaction are an expensive resource in practice, and should thus be kept as few as possible.

Parties are grouped into issuers, users, and verifiers. The publicly available system parameters are generated using \(\mathsf{SPGen}\) by a trusted party (in practice, this might be implemented using multiparty techniques). Each issuer can issue and revoke credentials that certify a list of attribute values under his issuer public key. Users hold secret keys that can be used to derive pseudonyms that are unique for a given scope string, but are unlinkable across scopes. Using \(\mathsf{Present}\), users can create non-interactive presentation tokens from their credentials that reveal any subset of attributes from any subset of their credentials, or prove that certain attributes are equal without revealing them. Presentation tokens can be publicly verified using \(\mathsf{Verify}\), on input the token and the issuers’ public keys. To obtain a credential, a user generates an issuance token defining the attribute values of the new credential using \(\mathsf{ITGen}\). After the issuer has verified the issuance token using \(\mathsf{ITVf}\), the user and the issuer run \(\langle \mathcal {U}.\mathsf{Issue},{\mathcal I}.\mathsf{Issue}\rangle \), at the end of which the user obtains a credential. Issuance can be combined with a presentation of existing credentials to hide some of the attribute values from the issuer, or to prove that they are equal to attributes in credentials that the user already owns. Hence, issuance tokens can be seen as an extension of presentation tokens. Credentials can optionally be bound to a user’s secret key, meaning that knowledge of this key is required to prove possession of the credential. Now, if a pseudonym or multiple key-bound credentials are used in a presentation token, then all credentials and the pseudonym must be bound to the same key. Finally, an issuer can revoke a credential using the \(\mathsf{Revoke}\) algorithm. 
This algorithm outputs some public revocation information \(\textit{RI}\) that is published and should be used as input to the verification algorithm of presentation and issuance tokens.

Issuers and users agree on the parameters for issuance tokens including a revocation handle and the revealed attributes of the new credential (upon which the user has generated the issuance token) in a step preceding issuance. Issuers further verify the validity of these tokens before engaging in this protocol. There are no requirements on how revocation handles are chosen, but in practice they should be different for each credential an issuer issues.

3.1 Syntax

Before formalizing the security properties, we introduce the syntax of \({\mathsf{PABC}}\)s.

System parameter generation. The system parameters of a \({\mathsf{PABC}}\)-system are generated as \(\textit{spar}\xleftarrow{\$}\mathsf{SPGen}(1^\kappa )\). For simplicity we assume that the system parameters in particular contain an integer L specifying the maximum number of attributes that can be certified by one credential, as well as a description of the attribute space \(\mathcal {AS}\). For the rest of this document, we assume that all honest parties only accept attributes from \(\mathcal {AS}\) and abort otherwise.

The system parameters are input to all algorithms presented in the following. However, for notational convenience, we will sometimes not make this explicit.

User key generation. Each user generates a secret key as \(\textit{usk}\xleftarrow{\$}\mathsf{UKGen}(\textit{spar})\).

Issuer key generation. Each issuer generates a public/private issuer key pair and some initial revocation information as \((\textit{ipk},\textit{isk},\textit{RI})\xleftarrow{\$}\mathsf{IKGen}(\textit{spar})\). We assume that \(\textit{RI}\) also defines the set of all supported revocation handles for this issuer, and that honest parties only accept such revocation handles and abort otherwise.

Presentation. A user generates a pseudonym \(\textit{nym}\) and presentation token \(\textit{pt}\) as

$$ (\textit{nym}, \textit{pt})\xleftarrow{\$}\mathsf{Present}\Big (\textit{usk}, \textit{scope}, \big ( \textit{ipk}_i, \textit{RI}_i, \textit{cred}_i, (a_{i,j})_{j=1}^{n_i}, R_i\big )_{i=1}^{k}, E, \textit{M}\Big ), \text { where} $$

  • \(\textit{usk}\) is the user’s secret key, which can be \(\varepsilon \) if \(\textit{scope}=\varepsilon \) and none of the credentials \((\textit{cred}_i)_{i=1}^{k}\) is bound to a user secret key;

  • \(\textit{scope}\) is the scope of the generated pseudonym \(\textit{nym}\), where \(\textit{scope}=\varepsilon \) if no pseudonym is to be generated (in which case \(\textit{nym}=\varepsilon \));

  • \((\textit{cred}_i)_{i=1}^{k}\) are k user-owned credentials that are involved in this presentation;

  • \(\textit{ipk}_i\) and \(\textit{RI}_i\) are the public key and current revocation information of the issuer of \(\textit{cred}_i\);

  • \((a_{i,j})_{j=1}^{n_i}\) is the list of attribute values certified in \(\textit{cred}_i\), where each \(a_{i,j}\in \mathcal {AS}\);

  • \(R_i \subseteq [n_i]\) is the set of attribute indices for which the value is revealed;

  • E induces an equivalence relation on \(\{(i,j) : i\in [k] \;\wedge \; j\in [n_i]\}\), where \(((i,j), (i',j')) \in E\) means that \(\textit{pt}\) will prove that \(a_{i,j} = a_{i',j'}\) without revealing the actual attribute values. That is, E enables one to prove equality predicates;

  • \(\textit{M}\in \{0,1\}^*\) is a message to which the presentation token is to be bound. This might, e.g., be a nonce chosen by the verifier to prevent replay attacks in which a verifier uses a valid presentation token to impersonate a user.

If \(k=0\) and \(\textit{scope}\ne \varepsilon \), only a pseudonym \(\textit{nym}\) is generated while \(\textit{pt}=\varepsilon \).
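The equivalence relation E can be represented simply as a set of index pairs. The following hypothetical helper (names are my own, not part of the scheme) shows the check a user would perform before presenting: that every claimed equality \(a_{i,j}=a_{i',j'}\) actually holds on the attribute lists of the involved credentials.

```python
# Hypothetical helper: before creating a presentation token, a user can
# check that the claimed equality relation E actually holds on the
# attribute lists (a_{i,j}) of the involved credentials. Indices are
# 1-based, matching the notation in the text.

def equalities_hold(attrs, E):
    """attrs: list of per-credential attribute lists;
    E: set of ((i, j), (i2, j2)) pairs claiming a_{i,j} == a_{i2,j2}."""
    for (i, j), (i2, j2) in E:
        if attrs[i - 1][j - 1] != attrs[i2 - 1][j2 - 1]:
            return False
    return True
```

In a real token, the verifier never sees the hidden values; it only checks a proof that these equalities hold, which is exactly what this predicate captures on the user's side.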

Presentation verification. A verifier can check the validity of a pseudonym \(\textit{nym}\) and a presentation token \(\textit{pt}\):

$$ \mathsf{accept}/\mathsf{reject}\leftarrow \mathsf{Verify}\Big (\textit{nym}, \textit{pt}, \textit{scope}, \big ( \textit{ipk}_i, \textit{RI}_i, (a_{i,j})_{j \in R_i}\big )_{i=1}^{k}, E, \textit{M}\Big ), $$

where the inputs are as for presentation. For notational convenience from now on a term like \((a_{i,j})_{j \in R_i}\) implicitly also describes the set \(R_i\).

Issuance token generation. Before issuing a credential, a user generates an issuance token defining the attributes of the credentials to be issued, where (some of) the attributes and the secret key can be hidden from the issuer and can be blindly “carried over” from credentials that the user already possesses (so that the issuer is guaranteed that hidden attributes were vouched for by another issuer). Similarly to a presentation token we have:

$$ (\textit{nym}, \textit{pit}, \textit{sit})\xleftarrow{\$}\mathsf{ITGen}\Big (\textit{usk}, \textit{scope}, \textit{rh}, \big ( \textit{ipk}_i, \textit{RI}_i, \textit{cred}_i, (a_{i,j})_{j=1}^{n_i}, R_i\big )_{i=1}^{k+1}, E, \textit{M}\Big ), $$

where most of the inputs and outputs are as before, but

  • \(\textit{pit}\) and \(\textit{sit}\) are the public and secret parts of the issuance token,

  • \(\textit{cred}_{k+1} = \varepsilon \) is the new credential to be issued,

  • \(\textit{rh}\) is the revocation handle for \(\textit{cred}_{k+1}\) (possibly chosen by the issuer beforehand),

  • \(\textit{ipk}_{k+1}\) and \(\textit{RI}_{k+1}\) are the public key and current revocation information of the issuer of the new credential,

  • \((a_{k+1,j})_{j \in R_{k+1}}\) are the attributes of \(\textit{cred}_{k+1}\) that are revealed to the issuer,

  • \((a_{k+1,j})_{j \not \in R_{k+1}}\) are the attributes of \(\textit{cred}_{k+1}\) that remain hidden, and

  • \(((k+1,j),(i',j')) \in E\) means that the \(j^\text {th}\) attribute of the new credential will have the same value as the \({j'}^\text {th}\) attribute of the \({i'}^\text {th}\) credential.

Issuance token verification. The issuer verifies an issuance token as follows:

$$ \mathsf{accept}/\mathsf{reject}\leftarrow \mathsf{ITVf}\Big (\textit{nym}, \textit{pit}, \textit{scope}, \textit{rh}, \big ( \textit{ipk}_i, \textit{RI}_i, (a_{i,j})_{j \in R_i}\big )_{i=1}^{k+1}, E, \textit{M}\Big ). $$

For \(i\in [k]\) all inputs are as for \(\mathsf{Verify}\), but for \(i=k+1\) they refer to the new credential to be issued based on \(\textit{pit}\).

Issuance. Issuance of credentials is a protocol between a user and an issuer:

$$ (\textit{cred}, \textit{RI}')\leftarrow \langle \mathcal {U}.\mathsf{Issue}(\textit{sit},\textit{pit}),{\mathcal I}.\mathsf{Issue}(\textit{isk},\textit{pit},\textit{RI})\rangle . $$

The inputs are defined as before, and \(\textit{pit}\) has been verified by the issuer before. The user obtains a credential as an output, while the issuer receives the updated revocation information \(\textit{RI}'\).

Revocation. To revoke a credential with revocation handle \(\textit{rh}\), the issuer runs \(\textit{RI}'\xleftarrow{\$}\mathsf{Revoke}(\textit{isk},\textit{RI},\textit{rh})\) to generate the new revocation information \(\textit{RI}'\) based on the issuer’s secret key, the current revocation information, and the revocation handle to be revoked.
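The syntax above can be summarized as a structural sketch in code. All method and type names below (`sp_gen`, `Credential`, etc.) are my own renderings of the algorithms in this subsection; bodies are omitted, so this documents inputs and outputs only, not an implementation.

```python
# Structural sketch of the PABC syntax as Python type stubs. Names mirror
# the algorithms of Sect. 3.1; this is documentation, not an implementation.
from dataclasses import dataclass
from typing import Protocol, Sequence, Tuple

@dataclass
class Credential:
    ipk: bytes                 # issuer public key the credential is signed under
    rh: bytes                  # revocation handle
    attributes: Sequence[str]  # certified attribute values (a_1, ..., a_n)

class PABC(Protocol):
    def sp_gen(self, kappa: int) -> bytes: ...                        # SPGen -> spar
    def uk_gen(self, spar: bytes) -> bytes: ...                       # UKGen -> usk
    def ik_gen(self, spar: bytes) -> Tuple[bytes, bytes, bytes]: ...  # IKGen -> (ipk, isk, RI)
    def present(self, usk, scope, creds, E, M) -> Tuple[bytes, bytes]: ...   # -> (nym, pt)
    def verify(self, nym, pt, scope, disclosed, E, M) -> bool: ...
    def it_gen(self, usk, scope, rh, creds, E, M) -> Tuple[bytes, bytes, bytes]: ...  # -> (nym, pit, sit)
    def it_vf(self, nym, pit, scope, rh, disclosed, E, M) -> bool: ...
    def revoke(self, isk, RI, rh) -> bytes: ...                       # -> RI'
```

A concrete scheme would implement this interface; the generic construction in the paper does so by composing the lower-level building blocks.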

3.2 Oracles for Our Security Definitions

Our security definitions of \({\mathsf{PABC}}\) systems require a number of oracles, some of which are the same for different definitions. We therefore present them all in one place. The oracles are initialized with a set of honestly generated keys of \({n_\mathrm {I}}\) honest issuers \(\{(\textit{ipk}^*_i,\textit{isk}^*_i, \textit{RI}^*_i)\}_{i=1}^{{n_\mathrm {I}}}\) and \({n_\mathrm {U}}\) users with keys \(\{\textit{usk}^*_i\}_{i=1}^{{n_\mathrm {U}}}\), respectively. Let \(\textit{IK}^*= \{\textit{ipk}^*_i\}_{i=1}^{{n_\mathrm {I}}}\). The oracles maintain initially empty sets \(\mathcal {C}\), \(\mathcal {HP}\), \(\mathcal {IT}\), \(\mathcal {IRH}\), \(\mathcal {RRH}\), \(\mathcal {RI}\). Here, \(\mathcal {C}\) contains all credentials that honest users have obtained as instructed by the adversary, while \(\mathcal {HP}\) contains all presentation tokens generated by honest users. All public issuance tokens that the adversary used in successful issuance protocols with honest issuers are stored in \(\mathcal {IT}\). The set \(\mathcal {IRH}\) contains all issued revocation handles, i.e., the revocation handles of credentials issued by honest issuers, while \(\mathcal {RRH}\) contains the revoked handles per issuer. Finally, \(\mathcal {RI}\) contains the history of the valid revocation information of honest issuers at any point in time. Time is kept per issuer I through a counter \(\textit{epoch}^*_I\) that is initially 0 and increased at each issuance and revocation by I.
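The bookkeeping described above can be sketched as a small state machine per issuer (a minimal sketch with hypothetical names; no cryptography, just the set and epoch management the oracles perform):

```python
# Minimal sketch of the per-issuer bookkeeping done by the oracles:
# the epoch counter increases on every issuance and revocation, IRH
# records issued revocation handles, RRH records revoked handles with
# their epoch, and RI_hist keeps the history of revocation information.

class IssuerState:
    def __init__(self, ipk, RI0):
        self.ipk, self.RI, self.epoch = ipk, RI0, 0
        self.IRH, self.RRH, self.RI_hist = set(), [], [(RI0, 0)]

    def record_issue(self, rh, new_RI):
        self.epoch += 1
        self.RI = new_RI
        self.IRH.add(rh)
        self.RI_hist.append((new_RI, self.epoch))

    def record_revoke(self, rh, new_RI):
        if rh not in self.IRH:  # only previously issued handles can be revoked
            raise ValueError("unknown revocation handle")
        self.epoch += 1
        self.RI = new_RI
        self.RRH.append((rh, self.epoch))
        self.RI_hist.append((new_RI, self.epoch))
```

Keeping the full history `RI_hist` is what later lets the unforgeability game decide whether a token was valid with respect to some authentic, possibly outdated, revocation information.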

Honest Issuer Oracle \(\mathcal {O}^\mathtt{issuer}\). This oracle allows the adversary to obtain and revoke credentials from honest issuers. It provides the following interfaces:

  • On input \((\mathtt{issue}, \textit{nym}, \textit{pit}, \textit{scope}, \textit{rh}, ( \textit{ipk}_i, \textit{RI}_i, (a_{i,j})_{j \in R_i})_{i=1}^{k+1}, E, \textit{M})\) the oracle checks that \(\mathsf{ITVf}\) accepts the issuance token \(\textit{pit}\) and that the revocation information of all honest issuers is authentic, i.e., that a tuple \((\textit{ipk}_i,\textit{RI}_i, \cdot )\) exists in \(\mathcal {RI}\) for all honest issuers \(\textit{ipk}_i\). Further, it verifies that \(\textit{ipk}_{k+1}\) is the public key of an honest issuer \(\textit{ipk}^*_I\) with corresponding secret key \(\textit{isk}^*_I\) and current revocation information \(\textit{RI}_I^* = \textit{RI}_{k+1}\). If one of the checks fails, the oracle outputs \(\bot \) and aborts.

    The oracle then runs \({\mathcal I}.\mathsf{Issue}(\textit{isk}^*_I, \textit{pit}, \textit{RI}^*_I)\) in interaction with \(\mathsf{A}\) until the protocol outputs \(\textit{RI}_{k+1}\). It returns \(\textit{RI}_{k+1}\) to \(\mathsf{A}\), sets \(\textit{RI}^*_I \leftarrow \textit{RI}_{k+1}\), increases \(\textit{epoch}^*_I\), adds \((\textit{ipk}^*_I, \textit{RI}^*_I, \textit{epoch}^*_I)\) to \(\mathcal {RI}\). It adds \(\big ( \textit{nym}, \textit{pit}, \textit{scope}, ( \textit{ipk}_i, \textit{RI}_i, (a_{i,j})_{j \in R_i})_{i=1}^{k+1}, E, \textit{M}\big )\) to \(\mathcal {IT}\) and adds \((\textit{ipk}^*_I, \textit{rh})\) to the set \(\mathcal {IRH}\).

  • On input \((\mathtt{revoke}, I, \textit{rh})\) the oracle checks that the revocation handle \(\textit{rh}\) has been the input of a successful issuance protocol with an honest issuer with key \(\textit{ipk}^*_I\), or returns \(\bot \) otherwise. The oracle runs \(\textit{RI}^*_I \leftarrow \mathsf{Revoke}(\textit{isk}^*_I, \textit{RI}^*_I, \textit{rh})\), increases \(\textit{epoch}^*_I\), adds \((\textit{ipk}^*_I, \textit{rh},\textit{epoch}^*_I)\) to \(\mathcal {RRH}\), adds \((\textit{ipk}^*_I, \textit{RI}^*_I, \textit{epoch}^*_I)\) to \(\mathcal {RI}\), and returns \(\textit{RI}^*_I\) to the adversary.

Honest User Oracle \(\mathcal {O}^\mathtt{user}\). This oracle gives the adversary access to honest users, which he can trigger to obtain credentials and request presentation tokens on inputs of his choice. The adversary does not get the actual credentials, but only unique credential identifiers \(\textit{cid}\) by which he can refer to the credentials. It provides the following interfaces:

  • On input \((\mathtt{present}, U, \textit{scope}, ( \textit{ipk}_i, \textit{RI}_i, \textit{cid}_i, R_i)_{i=1}^{k}, E, \textit{M})\) the oracle checks if \(U \in [{n_\mathrm {U}}]\) and if, for all \(i\in [k]\), a tuple \((U,\textit{cid}_i,\textit{cred}_i,(a_{i,j})_{j=1}^{n_i}) \in \mathcal {C}\) exists. For all credentials from honest issuers, \(\mathcal {O}^\mathtt{user}\) verifies if \(\textit{RI}_i\) is authentic, i.e., if for all honest issuer public keys \(\textit{ipk}_i = \textit{ipk}^*_I\) there exists \((\textit{ipk}_i,\textit{RI}_i,\cdot ) \in \mathcal {RI}\). If any check fails, \(\mathcal {O}^\mathtt{user}\) returns \(\bot \), otherwise it computes \((\textit{nym}, \textit{pt})\leftarrow \mathsf{Present}(\textit{usk}^*_U, \textit{scope}, ( \textit{ipk}_i, \textit{RI}_i, \textit{cred}_i, (a_{i,j})_{j=1}^{n_i}, R_i)_{i=1}^{k}, E, \textit{M})\) and adds \( \big (\textit{nym},\textit{pt},\textit{scope}, ( \textit{ipk}_i, \textit{RI}_i, (a_{i,j})_{j \in R_i})_{i=1}^{k}, E, \textit{M}\big ) \) to \(\mathcal {HP}\). It returns \((\textit{nym}, \textit{pt})\) to \(\mathsf{A}\).

  • On input \((\mathtt{obtain}, U, \textit{scope}, \textit{rh}, ( \textit{ipk}_i, \textit{RI}_i, \textit{cid}_i, R_i)_{i=1}^{k+1}, E, \textit{M}, (a_{k+1,j})_{j=1}^{n_{k+1}})\) the oracle checks if \(U \in [{n_\mathrm {U}}]\) and, for all \(i\in [k]\), a tuple \((U,\textit{cid}_i,\textit{cred}_i,(a_{i,j})_{j=1}^{n_i}) \in \mathcal {C}\) exists. It further checks that the revocation information of honest issuers is authentic, i.e., that a tuple \((\textit{ipk}_i,\textit{RI}_i, \cdot ) \in \mathcal {RI}\) exists for all honest issuers \(\textit{ipk}_i \in \textit{IK}^*\). The oracle computes the issuance token \((\textit{nym}, \textit{pit}, \textit{sit})\leftarrow \mathsf{ITGen}(\textit{usk}^*_U, \textit{scope}, \textit{rh}, ( \textit{ipk}_i, \textit{RI}_i, \textit{cred}_i, (a_{i,j})_{j=1}^{n_i}, R_i)_{i=1}^{k+1}, E, \textit{M})\), where \(\textit{cred}_{k+1} = \varepsilon \). If \(\textit{ipk}_{k+1} \not \in \textit{IK}^*\), the oracle sends \((\textit{nym}, \textit{pit})\) to the adversary and runs \(\mathcal {U}.\mathsf{Issue}(\textit{sit},\textit{pit})\) in interaction with the adversary until it returns a credential \(\textit{cred}\) and stores \((U, \textit{cid}_{k+1}, \textit{cred}, (a_{k+1,j})_{j=1}^{n_{k+1}})\) in \(\mathcal {C}\). If \(\textit{ipk}_{k+1} = \textit{ipk}^*_I \in \textit{IK}^*\), the oracle runs \(\mathcal {U}.\mathsf{Issue}(\textit{sit},\textit{pit})\) internally against \({\mathcal I}.\mathsf{Issue}(\textit{isk}^*_I, \textit{pit})\) until they output a credential \(\textit{cred}\) and revocation information \(\textit{RI}'\), respectively. The oracle adds \((U, \textit{cid}_{k+1}, \textit{cred}, (a_{k+1,j})_{j=1}^{n_{k+1}})\) to \(\mathcal {C}\) and adds \((\textit{ipk}^*_I, \textit{rh})\) to \(\mathcal {IRH}\). It further increases \(\textit{epoch}^*_I\), sets \(\textit{RI}^*_I \leftarrow \textit{RI}'\), and adds \((\textit{ipk}^*_I, \textit{RI}^*_I, \textit{epoch}^*_I)\) to \(\mathcal {RI}\). Finally, the oracle chooses a fresh and unique credential identifier \(\textit{cid}_{k+1}\) for the new credential and outputs it to \(\mathsf{A}\) (note that \(\mathsf{A}\)’s choice of \(\textit{cid}_{k+1}\) is ignored).

3.3 Security Definitions for \({\mathsf{PABC}}\)s

We now formalize the security properties required from \({\mathsf{PABC}}\)-systems.

Correctness. The correctness requirements are what one would expect, i.e., whenever all parties are honest and run all algorithms correctly, none of the algorithms aborts or outputs \(\mathsf{reject}\). That is, (i) if all inputs are correct, \(\mathsf{Issue}\) always outputs a valid credential, (ii) holding valid credentials satisfying the specified relations allows a user to generate valid presentation- and issuance tokens, and (iii) issuers are able to revoke any credential of their choice.

Pseudonym Collision-Resistance. On input \(\textit{spar}\xleftarrow{\$}\mathsf{SPGen}(1^\kappa )\), no PPT adversary can come up with two user secret keys \(\textit{usk}_0\ne \textit{usk}_1\) and a scope \(\textit{scope}\) such that \(\mathsf{NymGen}(\textit{spar}, \textit{usk}_0,\textit{scope}) = \mathsf{NymGen}(\textit{spar}, \textit{usk}_1, \textit{scope})\) with non-negligible probability.

Furthermore, we require that a pseudonym is a deterministic function of the system parameters, user secret and the scope. That is, we require that for some deterministic function \(\mathsf{NymGen}\), and for all \(\textit{usk}, \textit{scope}, \textit{rh}, E, E', \textit{M}, \textit{M}'\) and all honest tuples \(C,C'\) of the form \((\textit{ipk}_i, \textit{RI}_i, \textit{cred}_i, (a_{i,j})_{j=1}^{n_i}, R_i)_{i=1}^k\) it holds that both \(\mathsf{Present}(\textit{usk}, \textit{scope}, C, E, \textit{M})\) and \(\mathsf{ITGen}(\textit{usk}, \textit{scope}, \textit{rh}, C', E', \textit{M}')\) output the pseudonym \(\textit{nym}= \mathsf{NymGen}(\textit{spar}, \textit{usk}, \textit{scope})\).

Unforgeability. In the unforgeability game, the adversary can request credentials from honest issuers, or trigger honest users to receive or present credentials, using the oracles from Sect. 3.2. After this interaction, the adversary outputs a number of presentation tokens and pseudonyms, i.e., a set \(\mathcal {FT}\) of tuples \((\textit{nym}, \textit{pt}, \textit{scope}, ( \textit{ipk}_i, \textit{RI}_i, (a_{i,j})_{j \in R_i})_{i=1}^{k}, E, \textit{M})\) as defined in the syntax of the \(\mathsf{Verify}\) algorithm. The adversary wins the game if at least one of the presentation tokens is a forgery or if at least one of the issuance tokens submitted to the honest issuer oracle was a forgery. Note that unforgeability of credentials is implied by this definition, as the adversary can always derive a forged presentation token from a forged credential.

Fig. 1. \(\mathsf{Forge_{\mathsf{A}}}(1^\kappa , {n_\mathrm {I}}, {n_\mathrm {U}})\)

Informally, a forgery is an issuance or presentation token for which the corresponding credentials were not issued to the adversary or are not supposed to be valid w.r.t. the revocation information stated in the token. Now, as the issuer does not see all attributes nor the user secret key of issued credentials, it is often not clear whether or not a given issued credential is one of the credentials corresponding to a token. However, if we assume that we knew all hidden values for each credential issued (including the user secret key), then we can efficiently test whether or not a given issuance or presentation token is a forgery. Thus, if there is an assignment for all the hidden values of the issued credentials such that all the issuance and presentation tokens presented by the adversary correspond to valid credentials, then there is no forgery among the tokens. Or, in other words, if there is no such assignment, then the adversary has produced a forgery and wins the game. Regarding the validity of credentials, the adversary also wins if he outputs a valid token for a credential that was already revoked with respect to the revocation information specified by the adversary (which may not necessarily be the latest published revocation information).

Definition 1

(Unforgeability). A PABC-scheme satisfies unforgeability, if for every PPT adversary \(\mathsf{A}\) and all \({n_\mathrm {U}}, {n_\mathrm {I}}\in \mathbb {N}\) there exists a negligible function \(\nu \) such that \(\Pr [\mathsf{Forge_{\mathsf{A}}} = 1]\le \nu ,\) where the experiment is described in Fig. 1, and the oracles \(\mathcal {O}^\mathtt{issuer}\) and \(\mathcal {O}^\mathtt{user}\) are as in Sect. 3.2.

We now define what consistency of the sets \(\mathcal {FT}\), \(\mathcal {IT}\), \(\mathcal {HP}\), \(\mathcal {IRH}\), \(\mathcal {RRH}\), and \(\mathcal {RI}\) means. First, the set \(\mathcal {FT}\) must only contain “fresh” and valid presentation tokens, meaning that each tuple \((\textit{nym}, \textit{pt}, \textit{scope}, (\textit{ipk}_i, \textit{RI}_i,(a_{i,j})_{j \in R_i})_{i=1}^{k}, E, \textit{M})\) in \(\mathcal {FT}\) must pass \(\mathsf{Verify}\), and that for all \(i\in [k]\) where \(\textit{ipk}_i \in \textit{IK}^*\) there exists a tuple \((\textit{ipk}_i, \textit{RI}_i, \cdot ) \in \mathcal {RI}\). If any tuple in \(\mathcal {FT}\) does not satisfy these conditions, then the adversary loses the game. Let \( USK \subset \{0,1\}^*\) be a hypothetical set containing the user secret keys that the adversary may have used throughout the game, and let \( CRED \) be a hypothetical set containing the credentials from honest issuers that the adversary may have collected during the game. In the latter set, we write \((\textit{usk}, \textit{ipk}^*_I, \textit{rh}, (\alpha _1, \ldots , \alpha _{n})) \in CRED \) if the adversary obtained a credential from issuer \(\textit{ipk}^*_I\) with revocation handle \(\textit{rh}\) and attribute values \((\alpha _1,\ldots , \alpha _{n})\) bound to user secret \(\textit{usk}\). Now, we consider the sets \(\mathcal {FT}\), \(\mathcal {IT}\), \(\mathcal {HP}\), \(\mathcal {IRH}\), \(\mathcal {RRH}\) and \(\mathcal {RI}\) to be consistent if there exist sets \( USK \) and \( CRED \) such that the following conditions hold:

  1. Each credential is the result of a successful issuance. For all revocation handles \(\textit{rh}\) and honest issuer public keys \(\textit{ipk}^*_I\), the number of tuples \((\cdot , \textit{ipk}^*_I, \textit{rh}, \cdot ) \in CRED \) is at most the number of tuples \((\textit{ipk}^*_I, \textit{rh}) \in \mathcal {IRH}\).

  2. All presentation or issuance tokens correspond to honestly obtained unrevoked credentials. For every tuple \(\big ( \textit{nym}, \textit{pt}, \textit{scope}, ( \textit{ipk}_i, \textit{RI}_i, \textit{epoch}_i, (a_{i,j})_{j \in R_i})_{i=1}^{k}, E, \textit{M}\big ) \in \mathcal {FT} \cup \mathcal {IT}\) there exists a \(\textit{usk}\in USK \) and a set of credentials \(\{\textit{cred}_i = (\textit{usk}_i, \textit{ipk}_i, \textit{rh}_i, (\alpha _{i,j})_{j=1}^{n_i}) : \textit{ipk}_i \in \textit{IK}^*\} \subseteq CRED \) such that:

    (a) \(\textit{usk}_i \in \{\textit{usk},\varepsilon \}\) (all key-bound credentials are bound to the same key),

    (b) \(\textit{nym}= \textit{scope}= \varepsilon \) or \(\textit{nym}= \mathsf{NymGen}(\textit{spar}, \textit{usk}, \textit{scope})\) (if there is a pseudonym, it is for \(\textit{usk}\) and \(\textit{scope}\)),

    (c) \(\alpha _{i,j} = a_{i,j}\) for all \(j \in R_i\) (the revealed attribute values are correct),

    (d) \(\alpha _{i,j} = \alpha _{i',j'}\) for \(((i,j),(i',j')) \in E\) with \(\textit{ipk}_{i'} \in \textit{IK}^*\) (the attributes satisfy the equality relations to other credentials from honest issuers), and

    (e) there exists \((\textit{ipk}_i, \textit{RI}_i, \textit{epoch}_i) \in \mathcal {RI}\) such that there exists no tuple \((\textit{ipk}_i, \textit{rh}_i, \textit{epoch}'_i) \in \mathcal {RRH}\) with \(\textit{epoch}'_i \le \textit{epoch}_i\) (the credentials were not revoked in the epoch where they were presented).

Thus, the adversary wins the game if there exist no sets \( USK \) and \( CRED \) that are consistent with all the information captured in \(\mathcal {FT}\), \(\mathcal {IT}\), \(\mathcal {HP}\), \(\mathcal {IRH}\), \(\mathcal {RRH}\), and \(\mathcal {RI}\).
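The counting condition in item 1 above can be made concrete with a small sketch. The data layouts and the function name below are hypothetical, chosen only to mirror the sets \( CRED \) and \(\mathcal{IRH}\); the existential quantification over \( USK \) and \( CRED \) in the full definition is of course not captured here.

```python
from collections import Counter

def issuance_counts_consistent(CRED, IRH):
    """Condition 1: for every (issuer key, revocation handle) pair, the
    number of credentials the adversary holds is at most the number of
    successful issuances recorded in IRH.

    CRED: iterable of (usk, ipk, rh, attrs) tuples (hypothetical layout)
    IRH:  iterable of (ipk, rh) tuples, one entry per successful issuance
    """
    issued = Counter(IRH)
    held = Counter((ipk, rh) for (_usk, ipk, rh, _attrs) in CRED)
    return all(held[key] <= issued[key] for key in held)

# Two issuances under handle "rh1"; the adversary holds two credentials.
CRED = [("u1", "I", "rh1", ("a",)), ("u2", "I", "rh1", ("b",))]
IRH = [("I", "rh1"), ("I", "rh1")]
print(issuance_counts_consistent(CRED, IRH))  # True
# A third credential under the same handle would witness a forgery.
print(issuance_counts_consistent(CRED + [("u3", "I", "rh1", ("c",))], IRH))  # False
```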

We had to make a number of design decisions for our definitions, which we explain in the following:

  • First, we do not require strong unforgeability, i.e., we do not consider a token a forgery if it contains the same elements as a \(\textit{pt}\) or \(\textit{pit}\) generated by an honest user. In practice, tokens will be bound to messages \(\textit{M}\) that uniquely identify the current session, and thus an adversary will neither be able to reuse a \(\textit{pt}\) or \(\textit{pit}\), nor be able to benefit from a weak forgery thereof.

  • Next, we do not require that adversarially generated issuance tokens are satisfied until the issuance protocol finishes. That is, we consider forgery of issuance tokens to be a problem only if they are later used in a successful issuance protocol. This is acceptable, as an invalid issuance token does not give an adversary any meaningful power if he cannot use it to obtain a credential. Requiring the stronger property that issuance tokens are unforgeable by themselves is possible, but would further increase the complexity of our definitions – without providing stronger guarantees in practice.

  • For blindly issued attributes we require that they satisfy E, but we do not forbid, e.g., an adversary from running the issuance protocol twice with the same issuance token such that the resulting credentials contain different values for the blinded attributes, as long as those values satisfy E. This is not a real-world problem: the adversary could just as well run multiple sessions for different issuance tokens, as the issuer will, by definition of blind attributes, not obtain any guarantees on those attributes beyond what E specifies.

  • Finally, we allow presentation tokens to be generated for epochs earlier than the one underlying a credential. This makes sense for offline verifiers who cannot update their revocation information continuously. However, our definition and construction could easily be modified to forbid such tokens.

Simulatable Privacy. For privacy, all issuance protocols and presentation tokens performed by honest users must be simulatable using only the public information that is explicitly revealed during the issuance or presentation. That is, the simulator is not given the credentials, values of hidden attributes, or even the index of the user that is supposed to perform the presentation or issuance, but must provide a view to \(\mathsf{A}\) that is indistinguishable from a real user.

Fig. 2.

\(\mathsf{Privacy_{\mathsf{A}}}(1^\kappa , {n_\mathrm {U}})\)

Formalizing this is not straightforward, however. It does not suffice to require two separate simulators that work for issuance and presentation, respectively, because pseudonyms and revocation introduce explicit dependencies across different issuance and presentation queries that must also be reflected in the simulation. Moreover, the simulator must not generate presentation tokens that could not have been generated in the real world, e.g., because the user does not have the required credentials. But as the simulator does not see any user indices or hidden attribute values, it cannot know which queries can be satisfied.

We therefore define a game where an adversary \(\mathsf{A}\) either runs in a real world with access to an honest user oracle performing the actual protocols, or runs in a simulated world, where oracle queries are first filtered by a filter \(\mathcal F\) and then responded to by a stateful simulator \(\mathsf{S}\). The filter’s role is to sanitize the queries from non-public information such as user indices, credential identifiers, etc., and to intercept queries that could not be satisfied in the real world. Note that the filter thereby enforces that the adversary can only obtain presentation tokens for valid inputs. This must be guaranteed by the credential system as well; otherwise, the adversary could distinguish between the two worlds.

Definition 2

(Privacy). A PABC system is private if there exist PPT algorithms \(\mathsf{S}_1,\mathsf{S}_2\) such that for every PPT adversary \(\mathsf{A}\) and every \({n_\mathrm {U}}\in \mathbb {N}\) there exists a negligible function \(\nu \) such that \( \Pr [\mathsf{Privacy_{\mathsf{A}}}(1^\kappa ,{n_\mathrm {U}}) = 1] \le 1/2 + \nu (\kappa )\) (cf. Fig. 2).

Here, \(\mathcal {O}^{\mathtt{user}}\) is as described in Sect. 3.2, while \(\mathcal F\) maintains initially empty lists C and P, a counter \(\textit{ctr}=0\), and internal state \(\mathsf{st}_{\mathsf{S}}=\tau \), and behaves as follows:

  • On input \((\mathtt{present}, U, \textit{scope}, ( \textit{ipk}_i,\textit{RI}_i, \textit{cid}_i, R_i)_{i=1}^{k}, E, \textit{M})\), the filter checks if \(U\in [{n_\mathrm {U}}]\) and if, for all \(i\in [k]\), a tuple \((U, \textit{cid}_i, \textit{ipk}_i, (a_{i,j})_{j=1}^{n_i}, \mathsf{rev}_i) \in C\) exists. Here, \(\mathsf{rev}_i\) is the code of an algorithm that on input \(\textit{RI}_i\) outputs a bit indicating whether \(\textit{cred}_i\) is to be considered revoked. \(\mathcal F\) checks that \(\mathsf{rev}_i(\textit{RI}_i)=0\) for all \(i\in [k]\) and that \(a_{i,j}=a_{i',j'}\) for all \(((i,j),(i',j')) \in E\). If a check fails, the filter returns \(\bot \). If \(\textit{scope}\ne \varepsilon \) and \((U, \textit{scope}, p) \not \in P\), then \(\mathcal F\) sets \(\textit{ctr}\leftarrow \textit{ctr}+1\), \(p \leftarrow \textit{ctr}\), and adds \((U, \textit{scope}, p)\) to P. It then executes the simulator \(\mathsf{S}_2\) on the sanitized query and its internal state \(\mathsf{st}_{\mathsf{S}}\). Finally, it returns the resulting \((\textit{nym}, \textit{pt})\) to \(\mathsf{A}\).

  • On input \((\mathtt{obtain}, U, \textit{scope}, \textit{rh}, ( \textit{ipk}_i, \textit{RI}_i, \textit{cid}_i, R_i)_{i=1}^{k+1}, E, \textit{M}, (a_{k+1,j})_{j=1}^{n_{k+1}})\), the filter checks if \(U\in [{n_\mathrm {U}}]\) and if, for all \(i\in [k]\), a tuple \((U, \textit{cid}_i, \textit{ipk}_i, (a_{i,j})_{j=1}^{n_i}, \mathsf{rev}_i) \in C\) exists. For all credentials, \(\mathcal F\) checks that \(\mathsf{rev}_i(\textit{RI}_i)=0\) for all \(i\in [k]\) and that \(a_{i,j}=a_{i',j'}\) for all \(((i,j),(i',j')) \in E\). If any of the checks fails, the filter returns \(\bot \). The filter then looks up the same value p as in \(\mathcal {O}^{\mathtt{present}}\). It then obtains \((\textit{nym}, \textit{pit})\) from \(\mathsf{S}_2\) and returns it to \(\mathsf{A}\). For the subsequent flows in the issuance protocol, \(\mathcal F\) answers each incoming message \(\textit{M}_\mathrm {in}\) from \(\mathsf{A}\) by running \(\mathsf{S}_2\). At the last flow, \(\mathsf{S}_2\) returns a tuple \((\mathsf{st}_\mathsf{S}, \textit{M}_\mathrm {out}, \textit{cid}, \mathsf{rev})\). If \(\textit{cid}\ne \bot \), \(\mathcal F\) adds \((U, \textit{cid}, \textit{ipk}_{k+1},(a_{k+1,j})_{j=1}^{n_{k+1}}, \mathsf{rev})\) to C and returns \(\textit{M}_\mathrm {out}\) to \(\mathsf{A}\).
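The checks that \(\mathcal F\) performs on a \(\mathtt{present}\) query can be sketched as follows. The dictionary layouts and the function name are hypothetical, indices are 0-based, and the \(U \in [n_\mathrm{U}]\) check is elided; what matters is that the output handed to \(\mathsf{S}_2\) contains only public data plus the anonymized pseudonym pointer p.

```python
def filter_present(C, P, ctr, query):
    """Sanitize a `present` query as the filter F would (simplified sketch).
    C: dict (U, cid) -> (ipk, attrs, rev)  -- credentials of honest users
    P: dict (U, scope) -> p                -- anonymized pseudonym pointers
    Returns (sanitized query for S2, updated ctr), or None if the query
    could not be satisfied in the real world.
    """
    U, scope, creds, E, M = query  # creds: list of (ipk, RI, cid, R)
    looked_up = []
    for (ipk, RI, cid, R) in creds:
        entry = C.get((U, cid))
        if entry is None or entry[0] != ipk:
            return None                      # user has no such credential
        _ipk, attrs, rev = entry
        if rev(RI):
            return None                      # credential revoked under RI
        looked_up.append(attrs)
    for ((i, j), (i2, j2)) in E:             # equality relations must hold
        if looked_up[i][j] != looked_up[i2][j2]:
            return None
    p = None
    if scope != "":                          # "" models the empty scope
        if (U, scope) not in P:
            ctr += 1
            P[(U, scope)] = ctr
        p = P[(U, scope)]
    # S2 sees only public data: scope, revealed attributes, E, M, and the
    # pointer p -- never the user index U or the credential identifiers.
    revealed = [{j: looked_up[i][j] for j in R}
                for i, (_ipk, _RI, _cid, R) in enumerate(creds)]
    return (scope, revealed, E, M, p), ctr
```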

Weak Privacy. The definition above ensures a very strong notion of privacy that is not satisfied by all existing PABC schemes, as some do not provide unlinkability across multiple presentation tokens derived from the same credential. For instance, for U-Prove [11, 12] an arbitrary number of presentations cannot be linked to a specific issuance session, but any two presentations of the same credential can be linked to each other. We thus also introduce a strictly weaker privacy notion called weak privacy. Informally, we give the simulator some more information so that it can generate “linkable” presentation tokens if the adversary requests multiple presentation tokens for some credential. This is done by giving \(\mathsf{S}_2\) “anonymized” pointers to credential identifiers as input; thus, the simulator is aware when the same credential is used in multiple presentation sessions and can prepare the simulated token accordingly. Due to the anonymization, the simulator still does not learn the connection between an issued credential and a presentation token, so untraceability (meaning presentation sessions cannot be linked to the actual issuance of the credential) still holds.

4 Building Blocks

We next introduce the syntax for the building blocks needed in our construction. We omit detailed security definitions of the building blocks here and refer to the full version [1], where we also present concrete instantiations of all components. Compared to \({\mathsf{PABC}}\)-systems, most of the security requirements presented in the following are relatively easy to formalize and prove for a specific instantiation. However, in Sect. 5 we will show that these properties are actually sufficient to obtain \({\mathsf{PABC}}\)-systems, by giving a generic construction for \({\mathsf{PABC}}\)-systems from these building blocks. We stress that the following definitions are heavily inspired by existing work, but have been adapted to facilitate the generic construction.

4.1 Global Setup

Global system parameters are parameters that are shared by all building blocks.

Global System Parameter Generation. The global system parameters \(\textit{spar}_\mathtt{g}\) are generated from the security parameter, where \(\mathcal {AS}\subseteq \pm \{0,1\}^\ell \) is the message space of the signature scheme, and the revocation and pseudonym systems support inputs from at least \(\pm \{0,1\}^{\ell }\). The integer L specifies the maximum number of attributes that can be signed by one signature. Furthermore, the global parameters contain a public master commitment key. Finally, \(\textit{spar}_\mathtt{g}'\) potentially specifies further parameters.

4.2 Commitment Schemes

A commitment scheme is a tuple of six algorithms \((\mathsf{SPGen}_\mathtt{c},\mathsf{ComKGen},\mathsf{Com},\mathsf{ComOpenVf},\mathsf{ComPf},\mathsf{ComProofVf})\), where \(\mathsf{SPGen}_\mathtt{c}\) generates commitment parameters \(\textit{spar}_\mathtt{c}\). Taking these as inputs, \(\mathsf{ComKGen}\) generates commitment keys \(\textit{ck}\), which can be used to compute a commitment/opening pair (c, o) for a message m using the commitment algorithm \(\mathsf{Com}\). This pair can be verified using \(\mathsf{ComOpenVf}\). Furthermore, we require that one can generate a non-interactive proof of knowledge \(\pi \) of the content m of a commitment c and the corresponding opening o, using \(\mathsf{ComPf}\). This proof can then be publicly verified using \(\mathsf{ComProofVf}\).
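As a toy illustration of part of this interface, a Pedersen-style commitment can be sketched as follows. The modulus, the random generator choice, and the omission of \(\mathsf{SPGen}_\mathtt{c}\), \(\mathsf{ComPf}\), and \(\mathsf{ComProofVf}\) are simplifications for exposition; this is not a secure instantiation.

```python
import secrets

# Toy Pedersen-style commitment over the squares of Z_p* (illustration
# only; real instantiations use vetted groups and proven parameters).
P = 2**127 - 1   # a Mersenne prime, used here as a toy modulus
Q = P - 1        # openings are sampled mod Q

def com_kgen():
    """ComKGen: sample two generators g, h. In a real scheme their mutual
    discrete log must be unknown; random choice suffices for a sketch."""
    while True:
        g = pow(secrets.randbelow(P - 3) + 2, 2, P)
        h = pow(secrets.randbelow(P - 3) + 2, 2, P)
        if g != 1 and h != 1 and g != h:
            return (g, h)

def com(ck, m):
    """Com: commit to integer m, returning the pair (commitment, opening)."""
    g, h = ck
    o = secrets.randbelow(Q)
    return (pow(g, m % Q, P) * pow(h, o, P)) % P, o

def com_open_vf(ck, c, m, o):
    """ComOpenVf: accept iff (c, o) opens to m."""
    g, h = ck
    return c == (pow(g, m % Q, P) * pow(h, o, P)) % P
```

The random opening o is what makes the commitment hiding; binding rests on the hardness of computing \(\log_g h\).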

4.3 Privacy-Enhancing Signatures

We next define the main building block: privacy-enhancing attribute-based signatures (\({\mathsf{PABS}} \)). Informally, parties are split into issuers signing attributes, users obtaining signatures, and verifiers checking whether users possess valid signatures on certain attributes. After setting up some \({\mathsf{PABS}} \)-specific system parameters, each issuer computes his signing/verification key pair, such that everybody can verify that the keys are well-formed. At issuance time, users can reveal certain attributes to the issuer and get the remaining attributes signed blindly. Having received a signature, a user can verify its correctness. Presentation is then done in a non-interactive manner: users compute signature presentation tokens, potentially revealing certain attributes, and verifiers can check these tokens.

System Parameter Generation. The signature system parameters are generated from the global system parameters, where \(\textit{spar}_\mathtt{s}'\) potentially specifies further parameters.

Similar to Sect. 3.1, we assume that the respective system parameters are input to all the other algorithms of the given scheme. However, for notational convenience, we will sometimes not make this explicit.

Key Generation. An issuer generates a key pair \((\textit{ipk}, \textit{isk})\). We assume that the issuer public key implicitly also defines a maximum number L of attributes a signature may contain.

Key Verification. An issuer public key \(\textit{ipk}\) can be verified for correctness with respect to \(\kappa \) as \( \mathsf{accept}/ \mathsf{reject}\leftarrow \mathsf{KeyVf}(\textit{ipk}). \)

Signature Issuance. Issuance of a signature is a protocol between a user and a signature issuer:

  • \({\vec {a}}=(a_i)_{i=1}^L\in \mathcal {AS}^L\) are the attributes to be signed,

  • R denotes the indices of the attributes that are revealed to the issuer,

  • \(c_j\) is a verified commitment to \(a_j\), \(o_j\) is the associated opening information, and \(\pi _j\) is a non-interactive proof of knowledge of the opening of \(c_j\). In particular, \(c_j\) might be re-used from a preceding signature presentation, allowing users to blindly carry over attributes into new credentials.

Signature Verification. The correctness of a signature can be verified using \(\mathsf{SigVf}\) on input a signature \(\textit{sig}\), attributes \({\vec {a}}\), and an issuer public key \(\textit{ipk}\):

$$\begin{aligned} \mathsf{accept}/ \mathsf{reject}\leftarrow \mathsf{SigVf}(\textit{sig},{\vec {a}},\textit{ipk}). \end{aligned}$$

Signature Presentation Token Generation. The user can compute a signature presentation token \(\textit{spt}\) that proves that he possesses a signature for a set of revealed attributes R and committed attributes \((c_j,o_j)_{j\in C}\) where \(C\cap R=\emptyset \). Furthermore, a signature presentation token can be bound to a specific message \(\textit{M}\) specifying, e.g., some context information or a random nonce, to prevent adversaries from re-using signature presentation tokens in subsequent sessions:

Signature Presentation Token Verification. A signature presentation token can be verified as

4.4 Revocation Schemes

Consider a set of users, e.g., employees, who are granted access to some online resource. This set will often change over time. While adding users would be possible with the features presented so far, revoking access for specific users would not. In the following, we thus define revocation for signature systems.

We chose a blacklisting approach rather than whitelisting, as whitelists require verifiers to update their local copy of the revocation information every time a new signature gets issued, which makes it harder to realize offline applications. For blacklists, different verifiers may obtain updates at different intervals, depending on their security policies. As a consequence, our generic construction also only supports blacklisting, while the interfaces and definitions from Sect. 3 would also support whitelisting or hybrid approaches.

After having set up system parameters, a revocation authority generates a secret revocation key, together with some public revocation key and revocation information. Using its secret key, the authority can revoke revocation handles by updating the revocation information accordingly. Proving that a revocation handle has not yet been revoked is again done non-interactively: a user can generate a token showing that some commitment contains an unrevoked handle. This token can later be publicly verified.

System Parameter Generation. On input the global system parameters, the revocation parameters \(\textit{spar}_\mathtt{r}\) are generated, where \(\mathcal {RS}\) specifies the set of supported revocation handles and \(\textit{spar}_\mathtt{r}'\) potentially specifies further parameters.

Revocation Setup. The revocation authority (in the definitions of Sect. 3 this role is taken by the issuer) runs the revocation setup to obtain a secret key \(\textit{rsk}\), an associated public key \(\textit{rpk}\), and public revocation information \(\textit{RI}\):

Attribute Revocation. A revocation handle \(\textit{rh}\) can be revoked by the revocation authority by updating the public revocation information \(\textit{RI}\) to incorporate \(\textit{rh}\):

Revocation Token Generation. A user can generate a token proving that a certain revocation handle, committed to in c, has not been revoked before:

In practice, a revocation presentation will always be tied to a signature presentation to prove that the signature presentation is valid. The value of c in the former will therefore be one of the commitments from the latter.

Revocation Token Verification. A revocation token is verified by:

$$\begin{aligned} \mathsf{accept}/\mathsf{reject}\leftarrow \mathsf{RevTokenVf}(\textit{rt},c,\textit{RI},\textit{rpk}). \end{aligned}$$
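A plain blacklist can serve as a mental model for this interface. The sketch below is hypothetical and deliberately non-private: it drops the commitment c and the zero-knowledge aspect of \(\mathsf{RevTokenGen}\), and models only the decisions the algorithms certify, including the epoch rule discussed in Sect. 3 (tokens for earlier epochs remain acceptable to offline verifiers).

```python
def rev_setup():
    """RevSetup (sketch): a plain blacklist needs no secret key material;
    RI starts as an empty blacklist at epoch 0."""
    RI = {"epoch": 0, "revoked": frozenset()}
    return None, None, RI   # (rsk, rpk, RI)

def revoke(RI, rh):
    """Revoke: add rh to the blacklist and advance the epoch."""
    return {"epoch": RI["epoch"] + 1,
            "revoked": RI["revoked"] | {rh}}

def rev_token_gen(rh, RI):
    """RevTokenGen (sketch): in the real scheme this is a zero-knowledge
    proof that the handle committed to in c is not blacklisted; here we
    only record the decision such a proof certifies."""
    if rh in RI["revoked"]:
        return None          # no token exists for a revoked handle
    return ("ok", RI["epoch"])

def rev_token_vf(rt, RI):
    """RevTokenVf: accept tokens generated for the verifier's current
    epoch or an earlier one (cf. the design decision in Sect. 3)."""
    return rt is not None and rt[0] == "ok" and rt[1] <= RI["epoch"]
```

Note that a token generated before a handle was revoked still verifies afterwards under this epoch rule, which is exactly the behavior the unforgeability definition permits and which could be forbidden by tightening \(\mathsf{RevTokenVf}\).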

4.5 Pseudonyms

Users can be known to different issuers and verifiers under unlinkable pseudonyms.

System Parameter Generation. The parameters of a pseudonym system are generated from the global system parameters, where \(\textit{spar}_\mathtt{p}'\) potentially specifies further pseudonym parameters.

User key generation. A user generates his secret key \(\textit{usk}\).

Pseudonym generation. A pseudonym \(\textit{nym}\) for given \(\textit{usk}\) and \(\textit{scope}\in \{0,1\}^*\) is computed deterministically as \( \textit{nym}\leftarrow \mathsf{NymGen}(\textit{usk},\textit{scope}). \)

Pseudonym presentation. On input a user’s secret key \(\textit{usk}\), a commitment c to \(\textit{usk}\) with opening information o, and a scope string \(\textit{scope}\in \{0,1\}^*\), the pseudonym presentation algorithm generates a pseudonym \(\textit{nym}\) with a proof \(\pi \):

Pseudonym verification. A pseudonym \(\textit{nym}\) and proof \(\pi \) are verified for a commitment c and scope \(\textit{scope}\) as \( \mathsf{accept}/\mathsf{reject}\leftarrow \mathsf{NymVf}(\textit{spar}_\mathtt{p}, c, \textit{scope}, \textit{nym}, \pi ). \)
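As a toy model of \(\mathsf{UKGen}\) and \(\mathsf{NymGen}\), one can hash the secret key together with the scope. This captures the two properties the text relies on, determinism per (usk, scope) pair and unlinkability across scopes, but a real scheme needs algebraic structure to support the accompanying proof \(\pi\); the function names below are illustrative.

```python
import hashlib
import secrets

def ukgen():
    """UKGen (sketch): a fresh random user secret key."""
    return secrets.token_bytes(32)

def nym_gen(usk, scope):
    """NymGen (sketch): deterministic, so the same (usk, scope) always
    yields the same pseudonym, while pseudonyms under different scopes
    cannot be linked without knowing usk."""
    return hashlib.sha256(b"nym|" + usk + b"|" + scope.encode()).hexdigest()
```

A verifier using scope "shop" thus always sees the same pseudonym for a given user, while a verifier using scope "bank" sees an unrelated value.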

5 Generic Construction of PABCs

We next present a generic construction of \({\mathsf{PABC}}\) systems from the building blocks introduced above. Our construction uses a global setup procedure, a commitment scheme, a \({\mathsf{PABS}} \) scheme, a revocation scheme, as well as a pseudonym scheme, where the syntax is as introduced in Sect. 4.

The idea underlying our construction is to use the pseudonym and revocation schemes unchanged to obtain the according properties for the \({\mathsf{PABC}}\)-system. Issuance and presentation are realized via the given \({\mathsf{PABS}} \)-scheme. However, instead of just signing the attributes, the issuer additionally signs the user secret key and the revocation handle whenever applicable. Similarly, whenever a user computes a presentation token for a set of credentials, it proves knowledge of the corresponding signature on the contained attributes, the revocation handle, and the user secret key, where the latter is always treated as an unrevealed attribute.

These independent components are linked together using commitments. For instance, to show that indeed the revocation handle contained in a credential was shown to be unrevoked, the same commitment/opening pair is used to generate revocation and signature presentation tokens.
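The "extra attributes" idea can be pictured with a tiny sketch; the helper names and the fixed slot positions are illustrative, not part of the construction.

```python
def signed_attribute_vector(usk, rh, attrs):
    """Vector actually signed in the generic construction (sketch): the
    user secret key and the revocation handle are prepended to the user's
    attributes as two additional signed attributes."""
    return [usk, rh, *attrs]

def shift_revealed(R):
    """Shift the user's revealed indices past the two prepended slots, so
    that usk (slot 0) and rh (slot 1) stay unrevealed by default."""
    return [j + 2 for j in R]
```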

5.1 Formal Description of the Construction

Let \(\mathrm {eq}\) be a function mapping an attribute index (i, j) to its equivalence class as induced by E, i.e., \(\mathrm {eq}(i,j) = \{(i,j)\} \cup \{(i',j') : ((i,j),(i',j')) \in E\}.\) Let \(\overline{E}\) be the set of all these equivalence classes, and \((a_{\overline{e}})_{\overline{e} \in \overline{E}}\) be the corresponding attribute values, i.e., \(\mathrm {eq}(i,j) = \overline{e} \Rightarrow a_{i,j} = a_{\overline{e}}\). Finally, let \(E_i = \{j: \exists (i',j') \text{ s.t. } ((i,j),(i',j')) \in E \}\).
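Under the assumption that the classes are to be closed transitively (so that chained equalities in E end up in one class), \(\mathrm{eq}\) and \(\overline{E}\) can be computed with a standard union-find sketch; the function name is hypothetical.

```python
def eq_classes(E, indices):
    """Union-find over attribute positions (i, j): eq maps each position
    to its equivalence class under E (closed transitively), and E_bar is
    the set of all classes."""
    parent = {x: x for x in indices}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in E:
        parent[find(a)] = find(b)
    groups = {}
    for x in indices:
        groups.setdefault(find(x), set()).add(x)
    eq = {x: frozenset(groups[find(x)]) for x in indices}
    return eq, set(eq.values())
```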

As some algorithm names from \({\mathsf{PABC}}\) schemes also appear in the building blocks, we stress that all algorithms called in the construction are those from the building blocks and never from the \({\mathsf{PABC}}\) scheme.

System parameter generation. This algorithm outputs \(\textit{spar}= (\textit{spar}_\mathtt{g}, \textit{spar}_\mathtt{s}, \textit{spar}_\mathtt{r}, \textit{spar}_\mathtt{p})\), where the different parts are generated using the system parameter algorithms of the building blocks. We assume that the algorithms of all building blocks take their respective parameters as implicit inputs.

User key generation. Users generate their secret keys using the user key generation algorithm of the pseudonym scheme (Sect. 4.5).

Issuer key generation. Issuers generate signature keys \((\textit{ipk}',\textit{isk}')\) and revocation keys \((\textit{rpk},\textit{rsk},\textit{RI}')\) using the respective key generation algorithms. The algorithm outputs \((\textit{ipk},\textit{isk},\textit{RI})=((\textit{ipk}',\textit{rpk}),(\textit{isk}',\textit{rsk}),\textit{RI}')\).

Presentation. On inputs \((\textit{usk}, \textit{scope}, (\textit{ipk}_i, \textit{RI}_i, \textit{cred}_i, (a_{i,j})_{j=1}^{n_i},R_i)_{i=1}^{k}, E, \textit{M})\), this algorithm outputs whatever \(\mathsf{AuxPresent}\) (cf. Fig. 3) outputs, where \(\textit{ipk}_i=(\textit{ipk}'_i, \textit{rpk}_i)\), \(\textit{cred}_i=(\textit{sig}_i, \textit{rh}_i)\), and \(\hat{\textit{M}}=\mathtt{pres}\Vert \textit{M}\).

Fig. 3.

\(\mathsf{AuxPresent}(\textit{usk}, \textit{scope}, ( \textit{ipk}_i, \textit{RI}_i, \textit{cred}_i, (a_{i,j})_{j=1}^{n_i}, R_i)_{i=1}^{k}, E, \hat{\textit{M}})\)

Fig. 4.

\(\mathsf{AuxVerify}(\textit{nym}, \textit{pt}, \textit{scope}, ( \textit{ipk}_i, \textit{RI}_i, (a_{i,j})_{j \in R_i})_{i=1}^{k}, E, \hat{\textit{M}})\)

Fig. 5.

\(\mathsf{ITGen}(\textit{usk}, \textit{scope}, \textit{rh}_{k+1}, (\textit{ipk}_i, \textit{RI}_i, \textit{cred}_i, (a_{i,j})_{j=1}^{n_i}, R_i)_{i=1}^{k+1}, E, \textit{M})\)

Presentation verification. On inputs \((\textit{nym}, \textit{pt}, \textit{scope}, ( \textit{ipk}_i, \textit{RI}_i, (a_{i,j})_{j \in R_i})_{i=1}^{k},E, \textit{M})\), \(\mathsf{Verify}\) outputs whatever \(\mathsf{AuxVerify}\) described in Fig. 4 outputs.

Issuance token generation. An issuance token for inputs \(( \textit{usk}, \textit{scope}, \textit{rh}_{k+1}, ( \textit{ipk}_i, \textit{RI}_i, \textit{cred}_i, (a_{i,j})_{j=1}^{n_i}, R_i)_{i=1}^{k+1}, E,\textit{M})\), is generated as specified in Fig. 5.

Issuance token verification. To verify \(\textit{pit}= (\textit{pt}, \textit{rh}_{k+1}, (c_{k+1,j}, \pi _{k+1,j})_{j \not \in R_{k+1}},\pi _\textit{usk})\), the verifier returns the output of:

$$\begin{aligned} \mathsf{AuxVerify} \big (\textit{nym}, \textit{pt}, \textit{scope}, ( \textit{ipk}_i, \textit{RI}_i, (a_{i,j})_{j \in R_i})_{i=1}^{k}, E, \mathtt{iss}\Vert \textit{M}\big ). \end{aligned}$$

Issuance. For issuance, the user and the issuer run:

$$\begin{aligned} \langle \mathcal {U}.\mathsf{Sign}(&\textit{ipk}'_{k+1}, ((c_{\mathrm {eq}(k+1,j)}, o_{\mathrm {eq}(k+1,j)})_{j \in E_{k+1}}, (c_j,o_j)_{j\notin R_{k+1}\cup E_{k+1}}, c_{\textit{usk}}, o_{\textit{usk}}), \\&((a_{k+1,j})_{j=1}^{n_{k+1}}, \textit{usk}, \textit{rh}_{k+1}));\\ {\mathcal I}.\mathsf{Sign}(&\textit{isk}', ((a_i)_{i\in R_{k+1}}, \textit{rh}_{k+1}), ((c_{\mathrm {eq}(k+1,j)},\pi _j)_{j \in E_{k+1}}, (c_j,\pi _j)_{j\notin R_{k+1}\cup E_{k+1}}, \\&(c_{\textit{usk}},\pi _\textit{usk})))\rangle , \end{aligned}$$

where they extract their inputs from \(\textit{sit}\), and \(\textit{isk}\) and \(\textit{pit}\), respectively. When the user’s protocol returns \(\textit{sig}\), the user outputs \(\textit{cred}= (\textit{sig}, \textit{rh}_{k+1})\).

Revocation. On input an issuer secret key \(\textit{isk}= (\textit{isk}', \textit{rsk})\), revocation information \(\textit{RI}\), and a revocation handle \(\textit{rh}\), the revocation algorithm returns the updated revocation information produced by the underlying revocation scheme.

The formal statement and proof of the following theorem are given in [1].

Theorem 1

(informal). Let the used building blocks satisfy all security properties introduced in Sect. 4. Then the PABC scheme resulting from the above construction is secure and simulatably private according to Sect. 3.3. Furthermore, if the PABS scheme is weakly user private, the resulting scheme is secure and weakly private.

6 Conclusion

We provided security definitions, a modular construction, and secure instantiations of a PABC system. Our framework encompasses a rich feature set including multi-attribute credentials, multi-credential presentations, key binding, pseudonyms, attribute equality proofs, revocation, and advanced credential issuance with carried-over attributes. Prior to our work, most of these features found provably secure instantiations in isolation, but their combination into a bigger PABC system was never proved secure, nor even defined formally.

Proving formal implications among existing definitions and ours might require substantial further research as for each related work, all definitions would first have to be reduced to the set of commonly considered features.

Finally, even though we think that our feature set is rich enough to cover a wide range of use cases (cf. Sect. 1), there are more features that could be added to our framework. Among these are inspection, where presentation tokens can be de-anonymized by trusted inspectors, and attribute predicates, which allow one to prove, for example, greater-than relations between attributes.