
1 Introduction

As a new and exciting commercial paradigm, cloud computing has attracted much attention from both industry and academia. Thanks to its advantages, enterprises and individuals can outsource and share their data via cloud servers instead of building and maintaining data centers of their own, and they or other authorized users can access the outsourced data anywhere and anytime [1]. Despite the many benefits of cloud computing, concerns about data security are probably the main obstacle hindering the wide adoption of cloud services. To address these concerns, enterprises and individuals encrypt their data before outsourcing it. Nevertheless, in many practical cloud applications the data is shared with potential users without knowing in advance who will receive it, so fine-grained access control over the data is desired. Attribute-Based Encryption (ABE) [13], and in particular its ciphertext-policy variant (CP-ABE), is a promising approach to protecting the confidentiality of sensitive data while enforcing fine-grained access control in the cloud. In a CP-ABE system, enterprises and individuals can specify access policies over the attributes that potential users possess, and data consumers whose attributes satisfy the specified access policy can decrypt successfully and access the outsourced data.

A Motivating Story. Consider a company that employs a cloud storage system to outsource its data after encrypting it under access policies. Each employee is assigned several attributes (such as “manager”, “engineer”, etc.), and those whose attributes satisfy the access policy on the outsourced data can decrypt the ciphertext and access the sensitive data stored in the cloud. As a versatile one-to-many encryption mechanism, CP-ABE fits this cloud storage scenario well. Suppose, however, that an employee of the company’s competitor, who is not authorized, manages to access the sensitive data stored in the cloud; the company will then suffer a severe financial loss. Who leaked the decryption key to him? In addition, if an employee of the company named Bob is traced as the traitor (i.e., the one who leaked the decryption key) but claims to be innocent and framed by the system, how do we judge whether Bob is indeed innocent? Does Bob have an opportunity to argue for himself?

The problems described above are the main obstacles to deploying CP-ABE in cloud storage services. In a CP-ABE system, a user’s decryption key is issued by a trusted authority according to the attributes the user possesses. The authority is able to generate and (re-)distribute decryption keys for any user without any risk of being caught and confronted in a court of law, so the security of a CP-ABE system relies heavily on trusting the authority. This is the key escrow problem of CP-ABE. One approach to reducing this trust is to employ multiple authorities [8, 16, 19]. However, this approach inevitably incurs additional communication and infrastructure costs, and the problem of collusion among collaborating authorities remains. A better option is to adopt the accountable-authority approach to mitigate the key escrow problem in CP-ABE. The problem just described is the key abuse problem of the authority. There is another kind of key abuse problem: the key abuse problem of users. In a CP-ABE system, decryption keys are defined over sets of attributes shared by multiple users, so misbehaving users may illegally share their decryption keys with others for profit without being detected. This is the malicious key delegation problem, and it is necessary to trace the malicious users who leak their decryption keys. Moreover, if a user is traced as malicious (for leaking a decryption key) but claims to be innocent and framed by the system, it is necessary to enable an auditor to judge whether the user is indeed innocent or was framed by the system.

1.1 Our Contribution

In this paper, we address the key abuse and auditing problems of CP-ABE and solve them affirmatively by proposing an accountable authority CP-ABE system with white-box traceability and public auditing. To the best of our knowledge, this is the first CP-ABE scheme that simultaneously supports the following properties: traceability of malicious users, accountable authority, almost no storage for tracing, public auditing and high expressiveness (i.e., supporting access policies expressed in any monotone access structure). We also prove that the new system is fully secure in the standard model.

We solve the obstacles to implementing CP-ABE in the cloud storage scenario as follows:

  1. Traceability of malicious users. Any user who leaks his/her decryption key to others for profit can be traced.

  2. Accountable authority. The semi-trusted authority can be caught if it illegally generates and distributes legitimate keys to unauthorized users.

  3. Public auditing. We provide an auditor to judge whether a suspected user (accused of leaking his/her decryption key) is guilty or was framed by the authority. In addition, the auditability of our system is public; that is, anyone can run the \({{\mathbf {\mathtt{{{Audit}}}}}}\) algorithm to make a judgement, with no additional secret needed.

  4. Almost no storage for tracing. We use a Paillier-style encryption as an extractable commitment when tracing malicious users, and we do not need to maintain an identity table of users for tracing as in [21]. As a result, we need almost no storage for tracing.

Table 1 gives the comparison between our work and some other related work.

Table 1. Comparison with other related work

1.2 Our Technique

In this subsection, we briefly introduce the main idea we utilize to realize the properties of traceability of malicious users, accountable authority and public auditing before giving the full details in Sect. 4.

To trace malicious users who may leak their decryption keys to others for profit, we use a Paillier-style encryption as an extractable commitment to achieve white-box traceability. Specifically, we use a Paillier-style extractable commitment to commit to a user’s identity when the user queries for his decryption key. The commitment is then inserted into the user’s decryption key as a component necessary for successful decryption. Due to the hiding and binding properties of the Paillier-style extractable commitment, the user neither learns what is inserted into his decryption key nor can he change the identity embedded in it. The \({{\mathbf {\mathtt{{{Trace}}}}}}\) algorithm uses a trapdoor for the commitment to recover the identity of the user from his decryption key. Note that prior to tracing, the decryption key must pass a key sanity check to verify that it is well-formed. Thanks to the Paillier-style extractable commitment, we do not have to maintain the identity table used in [21]; as a result, we need almost no storage for tracing.

To achieve an accountable authority, the main idea is to let the user’s decryption key be jointly determined by the authority and the user himself, so that the authority does not have complete control over the decryption key. A user obtains his decryption key sk, corresponding to his attributes and identity, from the authority through a secure key generation protocol. The protocol allows the user to obtain a decryption key sk for his attributes and identity without letting the authority know which key he obtained. Now if the authority (re-)distributes a decryption key \(\tilde{sk}\) (corresponding to the user’s attributes and identity) for malicious usage, then with all but negligible probability it will differ from the key sk that the user obtained. Hence the key pair \((sk,\tilde{sk})\) is a cryptographic proof of the authority’s malicious behavior.

Furthermore, the difference between the user’s decryption key sk and the decryption key \(\tilde{sk}\) (re-)distributed by the authority allows the auditor to judge publicly whether the accused user is guilty or was framed by the system. Note that the auditor is assumed to be fair and credible.

1.3 Related Work

Attribute-Based Encryption (ABE), first introduced by Sahai and Waters as fuzzy identity-based encryption [27], generalizes the notion of Identity-Based Encryption (IBE) [6, 28]. Goyal et al. [13] formalized two complementary forms of ABE: Key-Policy Attribute-Based Encryption (KP-ABE) and Ciphertext-Policy Attribute-Based Encryption (CP-ABE). In a CP-ABE system, every user’s decryption key is associated with a set of attributes she/he possesses, and every ciphertext is associated with an access policy defined over attributes; KP-ABE is the reverse, in that every ciphertext is associated with a set of attributes and every user’s decryption key is associated with an access policy. ABE (especially CP-ABE) is envisioned as a highly promising public key primitive for implementing scalable and fine-grained access control over encrypted data, and it has attracted much attention in the research community. A series of ABE (including CP-ABE and KP-ABE) systems have been proposed [4, 11, 14, 15, 20–22, 24–26, 29], aiming at better efficiency, expressiveness or security.

Li et al. first introduced the notion of accountable CP-ABE [18] to prevent illegal key sharing among colluding users. A user-accountable multi-authority CP-ABE scheme, which only supports AND gates with wildcards, was then proposed in [17]. White-box [21] and black-box [20] traceable CP-ABE systems supporting policies expressed in any monotone access structure were later proposed by Liu et al. Recently, Ning et al. [22] proposed a practical large-universe CP-ABE system with white-box traceability, and Deng et al. [9] provided a tracing mechanism for CP-ABE to find leaked access credentials in cloud storage systems. Unfortunately, these works either support only less expressive access policies, do not consider the misbehavior of the authority, or do not address the auditing issue.

1.4 Organization

Section 2 introduces the background, including the notation, access policies, linear secret-sharing schemes, composite order bilinear groups, the complexity assumptions and the zero-knowledge proof of knowledge of discrete log. Section 3 gives the formal definition of accountable authority CP-ABE with white-box traceability and public auditing (AAT-CP-ABE) and its security model. Section 4 presents the construction of our AAT-CP-ABE system as well as its security proof. Finally, Sect. 5 concludes and outlines future work.

2 Background

2.1 Notation

We define \([l]=\{1,2,...,l\}\) for \(l\in \mathbb {N}\). We denote by \(s \overset{R}{\leftarrow } S\) the fact that s is picked uniformly at random from the finite set S. By PPT we denote probabilistic polynomial time. We denote by \((v_1,v_2,...,v_n)\) a row vector and by \((v_1,v_2,...,v_n)^\bot \) a column vector. By \(v_i\) we denote the i-th element of a vector \({\varvec{v}}\), and by \(M{\varvec{v}}\) we denote the product of a matrix M with a vector \({\varvec{v}}\). We denote by \(\mathbb {Z}^{l\times n}_N\) the set of matrices of size \(l\times n\) with elements in \(\mathbb {Z}_N\); the set of column vectors of length n (i.e. \(\mathbb {Z}^{n\times 1}_N\)) and the set of row vectors of length n (i.e. \(\mathbb {Z}^{1\times n}_N\)) are two special cases.

2.2 Access Policy

Definition 1

(Access Structure [2]): Let S be the attribute universe. A collection \(\mathbb {A} \subseteq 2^S\) of non-empty sets of attributes is an access structure on S. A collection \(\mathbb {A} \subseteq 2^S\) is called monotone if for all sets B and C: if \(B \in \mathbb {A}\) and \(B \subseteq C\), then \(C \in \mathbb {A}\). The sets in \(\mathbb {A}\) are called the authorized sets, and the sets not in \(\mathbb {A}\) are called the unauthorized sets.

In CP-ABE, if a user of the system possesses an authorized set of attributes, then he can decrypt the ciphertext; otherwise, the set he possesses is unauthorized and he cannot learn any information from the ciphertext. In our construction, we restrict our attention to monotone access structures.

2.3 Linear Secret-Sharing Schemes

Definition 2

(Linear Secret-Sharing Schemes (LSSS) [2, 22]). Let S denote the attribute universe and p denote a prime. A secret-sharing scheme \(\prod \) with domain of secrets \(\mathbb {Z}_p\) realizing an access structure on S is called linear (over \(\mathbb {Z}_p\)) if

  1. The shares of a secret \(s\in \mathbb {Z}_p\) for each attribute form a vector over \(\mathbb {Z}_p\).

  2. For each access structure \(\mathbb {A}\) on S, there exists a matrix M with l rows and n columns, called the share-generating matrix for \(\prod \). For \(i=1,...,l\), a function \(\rho \) labels row i of M with the attribute \(\rho (i)\) from the attribute universe S. Consider the column vector \({\varvec{v}}=(s,r_2,...,r_n)^\bot \), where \(s\in \mathbb {Z}_p\) is the secret to be shared and \(r_2,...,r_n \in \mathbb {Z}_p\) are chosen at random. Then \(M{\varvec{v}} \in \mathbb {Z}^{l\times 1}_p\) is the vector of l shares of the secret s according to \(\prod \). The share \((M{\varvec{v}})_j\) “belongs” to attribute \(\rho (j)\), where \(j\in [l]\).

As shown in [2], every linear secret-sharing scheme according to the above definition also enjoys the linear reconstruction property, defined as follows. Suppose that \(\prod \) is an LSSS for the access structure \(\mathbb {A}\), let \(S' \in \mathbb {A}\) be an authorized set, and let \(I \subseteq [l]\) be defined as \(I=\{i\in [l] : \rho (i)\in S' \}\). Then there exist constants \(\{\omega _i \in \mathbb {Z}_p\}_{i\in I}\) such that for any valid shares \(\{\lambda _i=(M{\varvec{v}})_i\}_{i\in I}\) of a secret s according to \(\prod \), we have \(\sum _{i\in I}\omega _i\lambda _i=s\). Moreover, it is shown in [2] that these constants \(\{\omega _i\}_{i\in I}\) can be found in time polynomial in the size of the share-generating matrix M. On the other hand, for any unauthorized set \(S''\), no such constants \(\{\omega _i\}\) exist.

Note that if we encode the access structure as a monotonic Boolean formula over attributes, there exists a generic algorithm that generates the corresponding LSSS policy in polynomial time [2].

In our construction, an LSSS matrix \((M,\rho )\) will be used to express the access policy associated with a ciphertext.
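To make the share generation and the linear reconstruction concrete, here is a minimal Python sketch over an illustrative prime modulus, using the textbook share-generating matrix for the policy (A AND B) OR C. The attribute names, the modulus and the helper functions are our own illustrative choices and are not part of the scheme.

```python
import random

p = 2**61 - 1  # illustrative prime modulus standing in for Z_p

# Share-generating matrix M for the policy (A AND B) OR C.
M = [[1, 1],    # row 0, labeled by rho with attribute A
     [0, -1],   # row 1, labeled by rho with attribute B
     [1, 0]]    # row 2, labeled by rho with attribute C

def share(secret):
    # v = (s, r_2, ..., r_n); the l shares are the entries of M*v
    v = [secret] + [random.randrange(p) for _ in range(len(M[0]) - 1)]
    return [sum(M[i][k] * v[k] for k in range(len(v))) % p for i in range(len(M))]

def reconstruct(shares, rows, omega):
    # linear reconstruction: sum_i omega_i * lambda_i = s for an authorized set
    return sum(omega[i] * shares[i] for i in rows) % p

s = 123456789
lam = share(s)
# Authorized set {A, B}: omega = (1, 1), since 1*(1,1) + 1*(0,-1) = (1,0)
assert reconstruct(lam, [0, 1], {0: 1, 1: 1}) == s
# Authorized set {C}: omega = (1), since row 2 already equals (1,0)
assert reconstruct(lam, [2], {2: 1}) == s
```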

2.4 Composite Order Bilinear Groups

Composite order bilinear groups, first introduced in [7], are widely used in IBE and ABE systems. We let \(\mathcal {G}\) denote a group generator, which takes a security parameter \(\lambda \) as input and outputs a description of a bilinear group G. We define the output of \(\mathcal {G}\) as \((p_1,p_2,p_3,G,G_T,e)\), where \(p_1,p_2,p_3\) are distinct primes, G and \(G_T\) are cyclic groups of order \(N=p_1p_2p_3\), and \(e:G^2\rightarrow G_T\) is a map such that:

  1. Bilinearity: \(\forall u,v \in G\) and \(a,b \in \mathbb {Z}_N\), we have \(e(u^a,v^b)=e(u,v)^{ab}\).

  2. Non-degeneracy: \(\exists g\in G\) such that e(g, g) has order N in \(G_T\).

We assume that group operations in G and \(G_T\), as well as the bilinear map e, are computable in polynomial time with respect to \(\lambda \). We refer to G as the source group and \(G_T\) as the target group, and assume the group descriptions of G and \(G_T\) include a generator of each group. Let \(G_{p_1}\), \(G_{p_2}\), and \(G_{p_3}\) be the subgroups of order \(p_1\), \(p_2\), and \(p_3\) in G, respectively. Note that these subgroups are “orthogonal” to each other under the bilinear map e: for any \(u_i \in G_{p_i}\) and \(u_j \in G_{p_j}\) with \(i\ne j\), \(e(u_i,u_j)=1\). Any element \(E_N \in G\) can be (uniquely) expressed as \(g_1^{r_1}g_2^{r_2}g_3^{r_3}\) for some values \(r_1,r_2,r_3 \in \mathbb {Z}_N\), where \(g_1,g_2,g_3\) are generators of \(G_{p_1},G_{p_2},G_{p_3}\), respectively. We will refer to \(g_1^{r_1},g_2^{r_2},g_3^{r_3}\) as the “\(G_{p_1}\) part of \(E_N\)”, the “\(G_{p_2}\) part of \(E_N\)” and the “\(G_{p_3}\) part of \(E_N\)”, respectively. Let \(G_{p_1p_2}\) denote the subgroup of order \(p_1p_2\) in G. Similarly, any element \(E_{p_1p_2} \in G_{p_1p_2}\) can be expressed as the product of an element of \(G_{p_1}\) and an element of \(G_{p_2}\).
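For intuition, the orthogonality property can be verified directly (a short justification we add for completeness): writing \(u_i=(g^{N/p_i})^{\alpha }\) and \(u_j=(g^{N/p_j})^{\beta }\) for a generator g of G, we have

$$\begin{aligned} e(u_i,u_j)=e(g,g)^{\alpha \beta \frac{N}{p_i}\frac{N}{p_j}}=\left( e(g,g)^{N}\right) ^{\alpha \beta \frac{N}{p_ip_j}}=1, \end{aligned}$$

since for \(i\ne j\) the product \(p_ip_j\) divides N, so \(N/(p_ip_j)\) is an integer and \(e(g,g)^N=1\).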

2.5 Complexity Assumptions

Assumption 1

(Subgroup Decision Problem for 3 Primes): [14] Given a group generator \(\mathcal {G}\), define the following distribution:

$$\begin{aligned} \begin{array}{c} \mathbb {G}=(N=p_1p_2p_3,G,G_T,e) \overset{R}{\leftarrow } \mathcal {G},\\ g \overset{R}{\leftarrow } G_{p_1},X_{3} \overset{R}{\leftarrow } G_{p_3},\\ D=(\mathbb {G},g,X_{3}),\\ T_1 \overset{R}{\leftarrow } G_{p_1p_2},T_2 \overset{R}{\leftarrow } G_{p_1}. \end{array} \end{aligned}$$

The advantage of an algorithm \(\mathcal {A}\) in breaking this assumption is defined to be: \(Adv1_{\mathcal {G},\mathcal {A}}(\lambda )=|Pr[\mathcal {A}(D,T_1)=1]-Pr[\mathcal {A}(D,T_2)=1]|\).

Definition 3

We say that \(\mathcal {G}\) satisfies Assumption 1 if \(Adv1_{\mathcal {G},\mathcal {A}}(\lambda )\) is a negligible function of \(\lambda \) for any polynomial time algorithm \(\mathcal {A}\).

Assumption 2

[14] Given a group generator \(\mathcal {G}\), define the following distribution:

$$\begin{aligned} \begin{array}{c} \mathbb {G}=(N=p_1p_2p_3,G,G_T,e) \overset{R}{\leftarrow } \mathcal {G},\\ g,X_1 \overset{R}{\leftarrow } G_{p_1},X_{2},Y_2 \overset{R}{\leftarrow } G_{p_2},X_{3},Y_3 \overset{R}{\leftarrow } G_{p_3}\\ D=(\mathbb {G},g,X_{1}X_{2},X_{3},Y_{2}Y_{3}),\\ T_1 \overset{R}{\leftarrow } G,T_2 \overset{R}{\leftarrow } G_{p_1p_3}. \end{array} \end{aligned}$$

The advantage of an algorithm \(\mathcal {A}\) in breaking this assumption is defined to be: \(Adv2_{\mathcal {G},\mathcal {A}}(\lambda )=|Pr[\mathcal {A}(D,T_1)=1]-Pr[\mathcal {A}(D,T_2)=1]|\).

Definition 4

We say that \(\mathcal {G}\) satisfies Assumption 2 if \(Adv2_{\mathcal {G},\mathcal {A}}(\lambda )\) is a negligible function of \(\lambda \) for any polynomial time algorithm \(\mathcal {A}\).

Assumption 3

[14] Given a group generator \(\mathcal {G}\), define the following distribution:

$$\begin{aligned} \begin{array}{c} \mathbb {G}=(N=p_1p_2p_3,G,G_T,e) \overset{R}{\leftarrow } \mathcal {G},\alpha ,s\overset{R}{\leftarrow } \mathbb {Z}_N ,\\ g \overset{R}{\leftarrow } G_{p_1},X_{2},Y_2,Z_2 \overset{R}{\leftarrow } G_{p_2},X_{3} \overset{R}{\leftarrow } G_{p_3}\\ D=(\mathbb {G},g,g^{\alpha }X_{2},X_{3},g^sY_{2},Z_2),\\ T_1=e(g,g)^{\alpha s},T_2 \overset{R}{\leftarrow } G_{T}. \end{array} \end{aligned}$$

The advantage of an algorithm \(\mathcal {A}\) in breaking this assumption is defined to be: \(Adv3_{\mathcal {G},\mathcal {A}}(\lambda )=|Pr[\mathcal {A}(D,T_1)=1]-Pr[\mathcal {A}(D,T_2)=1]|\).

Definition 5

We say that \(\mathcal {G}\) satisfies Assumption 3 if \(Adv3_{\mathcal {G},\mathcal {A}}(\lambda )\) is a negligible function of \(\lambda \) for any polynomial time algorithm \(\mathcal {A}\).

Assumption 4

(l-SDH assumption [5, 10]): Let \(\mathbb {G}\) be a bilinear group of prime order p and g be a generator of \(\mathbb {G}\). The l-Strong Diffie-Hellman (l-SDH) problem in \(\mathbb {G}\) is defined as follows: given an \((l+1)\)-tuple \((g,g^x,g^{x^2},...,g^{x^l})\) as input, output a pair \((c,g^{1/(c+x)})\in \mathbb {Z}_p\times \mathbb {G}\). An algorithm \(\mathcal {A}\) has advantage \(\epsilon \) in solving l-SDH in \(\mathbb {G}\) if \(Pr[\mathcal {A}(g,g^x,g^{x^2},...,g^{x^l})=(c,g^{1/(c+x)})] \ge \epsilon \), where the probability is over the random choice of x in \(\mathbb {Z}^*_p\) and the random bits consumed by \(\mathcal {A}\).

Definition 6

We say that the \((l,t,\epsilon )\)-SDH assumption holds in \(\mathbb {G}\) if no t-time algorithm has advantage at least \(\epsilon \) in solving the l-SDH problem in \(\mathbb {G}\).

2.6 Zero-Knowledge Proof of Knowledge of Discrete Log

Informally, a zero-knowledge proof of knowledge (ZK-POK) of discrete log protocol enables a prover to convince a verifier that it knows the discrete log t of a given group element T, without revealing anything about t.

A ZK-POK protocol has two distinct properties: the zero-knowledge property and the proof of knowledge property. The property of zero-knowledge implies that there exists a simulator S which is able to simulate the view of a verifier in the protocol without being given the witness as input. The proof of knowledge property implies there exists a knowledge-extractor Ext which interacts with the prover and extracts the witness using rewinding techniques [10]. We refer the reader to [3] for more details about ZK-POK.
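For concreteness, the classic Schnorr protocol is the standard instantiation of a ZK-POK of discrete log and is the kind of protocol invoked in our key generation protocol in Sect. 4. The following Python sketch runs one honest execution over a toy prime-order subgroup; the parameters p, q, g and the variable names are illustrative assumptions, not values from our scheme.

```python
import random

# Toy Schnorr ZK-POK of the discrete log t of T = g^t (one honest run).
# q divides p-1 and g has order q in Z_p^*; real deployments use cryptographic sizes.
p, q, g = 1019, 509, 4   # 1019 = 2*509 + 1; 4 has order 509 mod 1019

t = random.randrange(1, q)          # prover's witness
T = pow(g, t, p)                    # public value whose discrete log is proven

# Move 1 (prover): commit to fresh randomness k
k = random.randrange(1, q)
a = pow(g, k, p)
# Move 2 (verifier): send a random challenge c
c = random.randrange(1, q)
# Move 3 (prover): respond with z = k + c*t mod q
z = (k + c * t) % q

# Verifier accepts iff g^z == a * T^c (mod p)
assert pow(g, z, p) == (a * pow(T, c, p)) % p
```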

3 Accountable Authority CP-ABE with White-Box Traceability and Public Auditing

3.1 Definition

An Accountable Authority CP-ABE with White-Box Traceability and Public Auditing (AAT-CP-ABE) system is a CP-ABE system that can hold a misbehaving authority accountable, trace a malicious user by his/her decryption key, and judge whether a suspected user is indeed innocent or not. An AAT-CP-ABE system consists of the following seven algorithms:

  • \({{\mathbf {\mathtt{{{Setup}}}}}}(1^\lambda ,\mathcal {U}) \rightarrow (pp,msk)\): The algorithm takes as input a security parameter \(\lambda \in \mathbb {N}\) encoded in unary and the attribute universe description \(\mathcal {U}\). It outputs the public parameters pp and the master secret key msk.

  • \({{\mathbf {\mathtt{{{KeyGen}}}}}}(pp,msk,id,S) \rightarrow sk_{id,S}\): This is an interactive protocol between the authority AT and a user U. The public parameters pp and a set of attributes S for a user with identity id are the common input to AT and U. The master secret key msk is the private input to AT. Additionally, AT and U may use a sequence of random coin tosses as private input. At the end of the protocol, U is issued a secret key \(sk_{id,S}\) corresponding to id and S.

  • \({{\mathbf {\mathtt{{{Encrypt}}}}}}(pp,m,\mathbb {A}) \rightarrow ct\): The encryption algorithm takes as input the public parameters pp, a plaintext message m, and an access structure \(\mathbb {A}\) over the universe of attributes. It outputs the ciphertext ct.

  • \({{\mathbf {\mathtt{{{Decrypt}}}}}}(pp,sk_{id,S},ct) \rightarrow m\) or \(\perp \): The decryption algorithm takes as input the public parameters pp, a secret key \(sk_{id,S}\), and a ciphertext ct. If the set of attributes of the secret key satisfies the access structure of the ciphertext, the algorithm outputs the plaintext m. Otherwise, it outputs \(\perp \).

  • \({{\mathbf {\mathtt{{{KeySanityCheck}}}}}}(pp,sk) \rightarrow 1\) or 0: The key sanity check algorithm takes as input the public parameters pp and a secret key sk. If sk passes the key sanity check, it outputs 1. Otherwise, it outputs 0. The key sanity check is a deterministic algorithm [10, 12], which is used to guarantee the secret key to be well-formed in the decryption process.

  • \({{\mathbf {\mathtt{{{Trace}}}}}}(pp,msk,sk) \rightarrow id\) or \(\intercal \): The tracing algorithm takes as input the public parameters pp, the master secret key msk and a secret key sk. The algorithm first checks whether sk is well-formed so as to determine whether sk needs to be traced; a secret key sk is well-formed if \({{\mathbf {\mathtt{{{KeySanityCheck}}}}}}(pp,sk) \rightarrow 1\). If sk is well-formed, the algorithm extracts the identity id from sk and outputs the identity id with which sk is associated. Otherwise, it outputs a special symbol \(\intercal \) indicating that sk does not need to be traced.

  • \({{\mathbf {\mathtt{{{Audit}}}}}}(pp,sk_{id},sk^*_{id}) \rightarrow guilty\) or innocent. This is an interactive protocol between a user U and a public auditor PA. It judges whether a user is guilty or innocent.

3.2 Security

An AAT-CP-ABE system is deemed secure if the following three requirements are satisfied. First, it must satisfy the standard semantic security notion for CP-ABE systems: ciphertext indistinguishability under chosen plaintext attacks (IND-CPA). Second, it must be intractable for the authority to create a decryption key such that the \({{\mathbf {\mathtt{{{Trace}}}}}}\) algorithm points to a user and the \({{\mathbf {\mathtt{{{Audit}}}}}}\) algorithm declares that user guilty. Finally, it must be infeasible for a user to create a decryption key such that the \({{\mathbf {\mathtt{{{Audit}}}}}}\) algorithm declares the user innocent. To capture these three requirements, we define the following three games, respectively.

The IND-CPA game. The IND-CPA game for an AAT-CP-ABE system is similar to that of a CP-ABE system [15], except that every key query is accompanied by an explicit identity. The game proceeds as follows:

  • Setup: The challenger runs the \({{\mathbf {\mathtt{{{Setup}}}}}} (1^\lambda ,\mathcal {U})\) algorithm and sends the public parameters pp to the attacker.

  • Query Phase 1: In this phase the attacker can adaptively query the challenger for secret keys corresponding to the pairs \((id_1,S_1),(id_2,S_2),..., (id_{Q_1},S_{Q_1})\). For each \((id_i,S_i)\), the challenger calls \({{\mathbf {\mathtt{{{KeyGen}}}}}}(pp,msk,id_i,S_i) \rightarrow sk_{id_i,S_i}\) and sends \(sk_{id_i,S_i}\) to the attacker.

  • Challenge: The attacker declares two equal-length messages \(m_0\) and \(m_1\) and an access structure \(\mathbb {A}^*\). Note that this access structure cannot be satisfied by any of the queried attribute sets \(S_1,...,S_{Q_1}\). The challenger flips a random coin \(\delta \in \{0,1\}\) and calls \({{\mathbf {\mathtt{{{Encrypt}}}}}}(pp,m_{\delta },\mathbb {A}^*) \rightarrow ct\). It sends ct to the attacker.

  • Query Phase 2: The attacker adaptively queries the challenger for secret keys corresponding to the pairs \((id_{Q_1+1},S_{Q_1+1}),...,(id_Q,S_Q)\), with the added restriction that none of the \(S_i\) satisfies \(\mathbb {A^*}\). For each \((id_i,S_i)\), the challenger calls \({{\mathbf {\mathtt{{{KeyGen}}}}}}(pp,msk,id_i,S_i) \rightarrow sk_{id_i,S_i}\) and sends \(sk_{id_i,S_i}\) to the attacker.

  • Guess: The attacker outputs a guess \(\delta '\in \{0,1\}\) for \(\delta \).

An attacker’s advantage in this game is defined to be \(Adv=|\Pr [\delta '=\delta ]-1/2|\).

Definition 7

An AAT-CP-ABE system is fully secure if all probabilistic polynomial time (PPT) attackers have at most a negligible advantage in the above game.

The DishonestAuthority Game. The DishonestAuthority game for the AAT-CP-ABE system is defined as follows. The intuition behind this game is that an adversarial authority attempts to create a decryption key which will frame a user. It is described by a game between a challenger and an attacker.

  • Setup: The attacker (acting as a malicious authority) generates public parameters pp and sends pp together with a user’s identity-attribute pair (id, S) to the challenger. The challenger runs a sanity check on pp and (id, S), and aborts if the check fails.

  • Key Generation: The attacker and the challenger engage in the key generation protocol \({{\mathbf {\mathtt{{{KeyGen}}}}}}\) to generate a decryption key \(sk_{id}\) corresponding to the user’s id and S. The challenger obtains the decryption key \(sk_{id}\) and runs a sanity check on it to ensure that it is well-formed; it aborts if the check fails.

  • Output: The attacker outputs a decryption key \(sk^*_{id}\) and succeeds if \({{\mathbf {\mathtt{{{Trace}}}}}}(pp,msk,sk^*_{id}) \rightarrow id\), and \({{\mathbf {\mathtt{{{Audit}}}}}}(pp,sk_{id},sk^*_{id}) \rightarrow guilty\).

The attacker’s advantage in this game is defined to be \(Adv=|\Pr [\mathcal {A} ~succeeds]|\) where the probability is taken over the random coins of \({{\mathbf {\mathtt{{{Trace}}}}}}\), \({{\mathbf {\mathtt{{{Audit}}}}}}\), the attacker and the challenger.

Definition 8

An AAT-CP-ABE system is DishonestAuthority secure if all PPT attackers have at most a negligible advantage in the above security game.

The DishonestUser Game. The DishonestUser game for the AAT-CP-ABE system is defined as follows. The intuition behind this game is that a malicious user attempts to create a new decryption key that frames the authority. It is described by a game between a challenger and an attacker.

  • Setup: The challenger runs the \({{\mathbf {\mathtt{{{Setup}}}}}} (1^\lambda ,\mathcal {U})\) algorithm and sends the public parameters pp to the attacker.

  • Key Query: The attacker submits pairs \((id_1,S_1),...,(id_q,S_q)\) to request the corresponding decryption keys. For each \((id_i,S_i)\), the challenger calls \({{\mathbf {\mathtt{{{KeyGen}}}}}}(pp,msk,id_i,S_i) \rightarrow sk_{id_i,S_i}\) and returns \(sk_{id_i,S_i}\) to the attacker.

  • Key Forgery: The attacker outputs a decryption key \(sk_*\). The attacker wins the game if either (i) \({{\mathbf {\mathtt{{{Trace}}}}}}(pp,msk,sk_*)\ne \intercal \) and \({{\mathbf {\mathtt{{{Trace}}}}}}(pp,msk,sk_*) \notin \{id_1,...,id_q\}\), or (ii) \({{\mathbf {\mathtt{{{Trace}}}}}}(pp,msk,sk_*)= id_i\) for some \(i\in [q]\) and \({{\mathbf {\mathtt{{{Audit}}}}}}(pp,sk_{id_i},sk_*) \rightarrow innocent\).

An attacker’s advantage in this game is defined to be \(Adv=|\Pr [\mathcal {A}~succeeds]|\) where the probability is taken over the random coins of \({{\mathbf {\mathtt{{{Trace}}}}}}\), \({{\mathbf {\mathtt{{{Audit}}}}}}\), the attacker and the challenger.

Definition 9

An AAT-CP-ABE system is fully traceable if all PPT attackers have at most a negligible advantage in the above security game.

The Key Sanity Check Game. Following [23], the Key Sanity Check game for the AAT-CP-ABE system is played between an attacker and a simulator. On input a security parameter \(1^{\lambda }\) (\(\lambda \in \mathbb {N}\)), the simulator invokes an attacker \(\mathcal {A}\) on \(1^{\lambda }\). \(\mathcal {A}\) returns the public parameters pp, a ciphertext ct and two different secret keys \(sk_{id,S}\) and \(\tilde{sk}_{id,S}\) corresponding to the same set of attributes S for a user with identity id. \(\mathcal {A}\) wins the game if

  (1) \({{\mathbf {\mathtt{{{KeySanityCheck}}}}}}(pp,sk_{id,S}) \rightarrow 1\).

  (2) \({{\mathbf {\mathtt{{{KeySanityCheck}}}}}}(pp,\tilde{sk}_{id,S}) \rightarrow 1\).

  (3) \({{\mathbf {\mathtt{{{Decrypt}}}}}}(pp,sk_{id,S},ct) \ne \perp \).

  (4) \({{\mathbf {\mathtt{{{Decrypt}}}}}}(pp,\tilde{sk}_{id,S},ct) \ne \perp \).

  (5) \({{\mathbf {\mathtt{{{Decrypt}}}}}}(pp,sk_{id,S},ct) \ne {{\mathbf {\mathtt{{{Decrypt}}}}}}(pp,\tilde{sk}_{id,S},ct)\).

\(\mathcal {A}\)’s advantage in the above game is defined as \(\Pr [\mathcal {A}~wins]\). It is easy to see that the intuition behind “Key Sanity Check” is captured by combining the notion formalized in the above game with the related algorithms (\({{\mathbf {\mathtt{{{KeySanityCheck}}}}}}\) and \({{\mathbf {\mathtt{{{Decrypt}}}}}}\)) defined in this section [23].

4 Our System

4.1 Construction

  • \({{\mathbf {\mathtt{{{Setup}}}}}}(\lambda , \mathcal {U}) \rightarrow (pp,msk)\): The algorithm calls the group generator \(\mathcal {G}\) with \(\lambda \) as input and obtains a bilinear group G of order \(N=p_1p_2p_3\) (three distinct primes), the subgroups \(G_{p_i}\) of order \(p_i\) in G, and generators \(g,g_3\) of the subgroups \(G_{p_1},G_{p_3}\), respectively. It then chooses exponents \(\alpha ,a,\kappa ,\mu \in \mathbb {Z}_N\) and a group element \(v \in G_{p_1}\) at random. For each attribute \(i \in \mathcal {U}\), the algorithm chooses a random value \(u_i \in \mathbb {Z}_N\). The algorithm also chooses two random primes p and q such that \(p\ne q\), \(|p|=|q|\) and \(gcd(pq,(p-1)(q-1))=1\), and then sets \(n=pq\), \(\pi =lcm(p-1,q-1)\), \(Q=\pi ^{-1}~mod~n\) and \(g_1=(1+n)\). The public parameters are set to \(pp = (N,n,g_1,v,g,g^a,g^{\kappa },g^{\mu },e(g,g)^\alpha , \{\mathcal {U}_i=g^{u_i}\}_{i \in \mathcal {U}})\), and the master secret key is set to \(msk = (p,q,\alpha ,g_3)\).

  • \({{\mathbf {\mathtt{{{KeyGen}}}}}}(pp,msk,id,S) \rightarrow sk_{id,S}\): The authority AT and a user U (with identity id) interact in the key generation protocol as follows.

    1. U first chooses \(t \in \mathbb {Z}_N\) at random and computes \(R_U=g^t\). Next, it sends \(g^t\), the identity id and a set of attributes S to AT. It then runs with AT an interactive ZK-POK of the discrete log of \(R_U\) with respect to g.

    2. AT first checks whether the ZK-POK is valid. If the check fails, AT aborts the interaction. Otherwise, it chooses a random \(c \in \mathbb {Z}_N\), a random \(r \in \mathbb {Z}^*_n\) and random elements \(R,R_0,R'_0,\{R_i\}_{i\in S} \in G_{p_3}\). Then it computes the primary secret key \(sk_{pri}\) as follows:

      $$\langle S,~\overline{K}=g^{\frac{\alpha }{a+\overline{T}}}(g^t)^{\frac{\kappa }{a+\overline{T}}}v^{c}R,~\overline{T}= g_1^{id}r^n~mod~n^2,$$
      $$~\overline{L}=g^cR_0,~\overline{L'}=g^{ac}R'_0, ~\{\overline{K_i}=\mathcal {U}_i^{(a+\overline{T})c}R_i\}_{i \in S} \rangle . $$

      And it sends \((c,sk_{pri})\) to U.

    3. U checks whether the following equalities hold:

      (1) \(e(\overline{L'},g)=e(\overline{L},g^a)=e(g^a,(g)^c)\).

      (2) \(e(\overline{K},g^ag^{\overline{T}})=e(g,g)^\alpha e(\overline{L'}(\overline{L})^{\overline{T}},v)e(R_U,g^{\kappa })\).

      (3) \(\exists x\in S~s.t.~e(\mathcal {U}_x,\overline{L'}(\overline{L})^{\overline{T}})=e(\overline{K_x},g)\).

      If not, U aborts the interaction. Otherwise, U computes \(t_{id}=\frac{c}{t}\) and sets his decryption key \(sk_{id,S}\) as follows:

      $$\langle S,~K=\overline{K}(g^{\mu })^{t_{id}},~T=\overline{T},~L=\overline{L}, ~L'=\overline{L'},~R_U,~t_{id},~\{K_i=\overline{K_i}\}_{i \in S} \rangle .$$
  • \({{\mathbf {\mathtt{{{Encrypt}}}}}}(pp,m,(A,\rho )) \rightarrow ct\): The algorithm takes as input the public parameters pp, a plaintext message m and the access structure encoded as an LSSS policy \((A,\rho )\). The algorithm chooses \(\overrightarrow{y}=(s,y_2,...,y_n)^\bot \in \mathbb {Z}^{n\times 1}_N\) at random, where s is the random secret to be shared according to Subsect. 2.3. Then, for each row \(A_j\) of A, it chooses \(r_j \in \mathbb {Z}_N\) at random. The ciphertext ct is set as follows:

    $$ \langle C=m \cdot e(g,g)^{\alpha s}, C_0=g^s, C_1=(g^a)^s, C_2=(g^{\kappa })^s,C_3=(g^{\mu })^s,$$
    $$ \{C_{j,1}=v^{A_j \overrightarrow{y}}\mathcal {U}_{\rho (j)}^{-r_j}, C_{j,2}=g^{r_j} \}_{j\in [l]}, (A,\rho )\rangle .$$
  • \({{\mathbf {\mathtt{{{Decrypt}}}}}}(pp,sk_{id,S},ct) \rightarrow m\) or \(\perp \): The algorithm first parses \(sk_{id,S}\) as \((S,K,T,L,L',R_U,t_{id},\{K_i\}_{i \in S})\) and ct as \((C,C_0,C_1,C_2,C_3,\{C_{j,1},C_{j,2}\}_{j\in [l]},(A,\rho ))\). The algorithm outputs \(\perp \) if the attribute set S does not satisfy the access structure \((A,\rho )\) of ct. Otherwise, it computes constants \(\omega _j \in \mathbb {Z}_N\) such that \(\sum _{\rho (j) \in S}\omega _jA_j=(1,0,...,0)\). It then computes:

    $$ D=e((C_0)^TC_1,K)(e(C_2,R_U)e(C_3,(g^Tg^a)^{t_{id}}))^{-1}$$
    $$E=\varPi _{\rho (j) \in S}(e(C_{j,1},(L)^TL')e(C_{j,2},K_{\rho (j)}))^{\omega _j}$$
    $$F=D/E=e(g,g)^{\alpha s},m=C/F $$
  • \({{\mathbf {\mathtt{{{KeySanityCheck}}}}}}(pp,sk) \rightarrow 1\) or 0: The algorithm takes as input the public parameters pp and a secret key sk. The secret key sk passes the key sanity check if

    (1) sk is in the form of \((S,K,T,L,L',R_U,t_{id},\{K_i\}_{i \in S})\), with \(T \in \mathbb {Z}^*_{n^2}\) and \(K,L,L',R_U,\{K_i\}_{i \in S} \in G\).

    (2) \(e(L',g)=e(L,g^a)\).

    (3) \(e(K,g^ag^{T})=e(g,g)^\alpha e(L'(L)^{T},v)e(R_U,g^{\kappa })e((g^ag^T)^{t_{id}},g^{\mu })\).

    (4) \(\exists x\in S~s.t.~e(\mathcal {U}_x,L'(L)^{T})=e(K_x,g)\).

    If sk passes the key sanity check, the algorithm outputs 1. Otherwise, it outputs 0.

  • \({{\mathbf {\mathtt{{{Trace}}}}}}(pp,msk,sk) \rightarrow id\) or \(\intercal \): If \({{\mathbf {\mathtt{{{KeySanityCheck}}}}}}(pp,sk) \rightarrow 0\), the algorithm outputs \(\intercal \). Otherwise, sk is a well-formed decryption key, and the algorithm extracts the identity id from \(T=g_1^{id}r^n~mod~n^2\) in sk as follows: recall that \(Q=\pi ^{-1}~mod~n\) and observe that \(T^{\pi Q}=g_1^{id\cdot \pi Q}\cdot r^{n\cdot \pi Q}=g_1^{id}=1+id \cdot n~mod~n^2\). Thus it recovers \(id=\frac{((T)^{\pi Q}~mod~n^2)-1}{n}~mod~n\) and outputs the identity id (see the illustrative sketch at the end of this subsection).

  • \({{\mathbf {\mathtt{{{Audit}}}}}}(pp,sk_{id},sk^*_{id}) \rightarrow guilty\) or innocent: Suppose a user U (with identity id and decryption key \(sk_{id}\)) is identified by the system as a malicious user (through the traced key \(sk^*_{id}\)), but claims to be innocent and framed by the system. U interacts with the public auditor PA in the following protocol.

    (1) U sends its decryption key \(sk_{id}\) to PA. If \({{\mathbf {\mathtt{{{KeySanityCheck}}}}}}(pp,sk_{id}) \rightarrow 0\), PA aborts. Otherwise, go to (2).

    (2) PA tests whether the equality \(t_{id}=t^*_{id}\) holds, where \(t^*_{id}\) is the corresponding component of the traced key \(sk^*_{id}\). If not, it outputs innocent, indicating that U is innocent and was framed by the system. Otherwise, it outputs guilty, indicating that U is malicious and \(sk^*_{id}\) was leaked by U.
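To illustrate the arithmetic behind \({{\mathbf {\mathtt{{{Setup}}}}}}\)'s Paillier-style parameters and the identity extraction in \({{\mathbf {\mathtt{{{Trace}}}}}}\), here is a minimal Python sketch: it generates (n, \(\pi \), Q), commits to a toy identity as \(T=g_1^{id}r^n~mod~n^2\) (the value embedded in the decryption key), and recovers the identity with the trapdoor. The toy primes and helper names are illustrative assumptions only; a real deployment uses cryptographically large primes.

```python
import random
from math import gcd

def setup_paillier_trapdoor(p, q):
    # Setup picks primes p != q with |p| = |q| and gcd(pq, (p-1)(q-1)) = 1,
    # then n = pq, pi = lcm(p-1, q-1), Q = pi^{-1} mod n, g_1 = 1 + n.
    assert p != q and gcd(p * q, (p - 1) * (q - 1)) == 1
    n = p * q
    pi = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    Q = pow(pi, -1, n)                             # modular inverse (Python 3.8+)
    return n, pi, Q

def commit_identity(identity, n):
    # T = (1+n)^id * r^n mod n^2 with r random in Z*_n; assumes id < n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    n2 = n * n
    return (pow(1 + n, identity, n2) * pow(r, n, n2)) % n2

def trace_identity(T, n, pi, Q):
    # T^{pi*Q} = 1 + id*n (mod n^2), hence id = ((T^{pi*Q} mod n^2) - 1)/n mod n
    n2 = n * n
    return ((pow(T, pi * Q, n2) - 1) // n) % n

# Toy primes for illustration only.
n, pi, Q = setup_paillier_trapdoor(1009, 1013)
T = commit_identity(42, n)
assert trace_identity(T, n, pi, Q) == 42
```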

4.2 IND-CPA Security

Since our construction of an accountable authority traceable CP-ABE system is based on the CP-ABE system in [14], for simplicity we reduce the IND-CPA security proof of our construction to that of the system in [14]. We denote by \(\varSigma _{cpabe}\) and \(\varSigma _{aatcpabe}\) the CP-ABE system in [14] and our system, respectively.

The security model of \(\varSigma _{cpabe}\) in [14] is almost the same as the IND-CPA security model of our system \(\varSigma _{aatcpabe}\) in Subsect. 3.2, except that in our model every key query is accompanied by an identity and the decryption key is jointly determined by the user and the authority.

Lemma 1

[14] If Assumptions 1,2,3 hold, then the CP-ABE system \(\varSigma _{cpabe}\) in [14] is secure.

IND-CPA Security of our AAT-CP-ABE system:

Lemma 2

[14] If the CP-ABE system \(\varSigma _{cpabe}\) in [14] is secure, then our AAT-CP-ABE system \(\varSigma _{aatcpabe}\) is secure in the IND-CPA security game of Subsect. 3.2.

Due to space, we refer the reader to Appendix A for the proof of this lemma.

Theorem 1

If Assumptions 1,2,3 hold, then our AAT-CP-ABE system \(\varSigma _{aatcpabe}\) is secure.

Proof. It follows directly from Lemmas 1 and  2.

4.3 DishonestAuthority Security

Theorem 2

If computing discrete log is hard in \(G_{p_1}\), the advantage of an adversary in the DishonestAuthority game is negligible for our AAT-CP-ABE system.

Due to space, we refer the reader to Appendix B for the proof of this theorem.

4.4 DishonestUser Security

In this subsection, we prove the DishonestUser security of our AAT-CP-ABE system based on the q-SDH assumption and Assumption 2. We adopt a similar method to that of [5] and [21].

Theorem 3

If the q-SDH assumption and Assumption 2 hold, then our AAT-CP-ABE system is DishonestUser secure provided that \(q'<q\).

Due to space, we refer the reader to Appendix C for the proof of this theorem.

4.5 Key Sanity Check Proof

In this subsection, we will give the key sanity check proof of our AAT-CP-ABE system. We use the proof method from [23].

Theorem 4

The advantage of an attacker in the key sanity check game (in Subsect. 3.2) is negligible for our AAT-CP-ABE system.

Due to space, we refer the reader to Appendix D for the proof of this theorem.

5 Conclusion and Future Work

In this work, we have addressed two practical problems concerning key abuse of CP-ABE in the cloud, and have presented an accountable authority CP-ABE system supporting white-box traceability and public auditing. Specifically, the proposed system can trace malicious users who illegally share their keys, and the illegal key (re-)distribution misbehavior of the semi-trusted authority can be caught and prosecuted. Furthermore, we have provided an auditor to judge whether a suspected user is guilty or was framed by the authority. As far as we know, this is the first CP-ABE system that simultaneously supports white-box traceability, accountable authority and public auditing. We have also proved that the new system is fully secure in the standard model.

Note that there exists a stronger notion of traceability called black-box traceability. In the black-box scenario, a malicious user could hide the decryption algorithm by tweaking it, as well as the decryption key; in this case, the white-box traceable system proposed in this paper fails, since neither the decryption key nor the decryption algorithm is well-formed. In future work, we will focus on constructing an accountable authority CP-ABE system that supports black-box traceability and public auditing.