1 Introduction

The research community has been working to enhance strong authentication techniques so that they respect users' privacy. More specifically, efforts have been dedicated to designing schemes that provide data minimization, unlinkability, and untraceability during an authentication session. In this regard, Privacy-preserving Attribute-based Credentials (Privacy-ABCs), also known as anonymous credentials, have been the focus of several recent research projects such as Prime, PrimeLife, FutureID, and ABC4Trust. Among the different flavours of Privacy-ABCs, IBM's Idemix and Microsoft's U-Prove are the most prominent. Privacy-ABCs are cryptographically proven to be unlinkable and untraceable: service providers cannot tell whether two tokens were generated by the same user, and issuers cannot trace tokens back to the issuance phase and the person behind them, unless the disclosed attributes contain identifying information.

Privacy-ABCs come with new capabilities, such as selective disclosure of attributes, predicates over attributes, and proofs of set membership, which users have largely never experienced in previous authentication schemes. For instance, using predicates over attributes, a user is able to prove facts such as "less than" or "greater than" about an attribute without disclosing the attribute value itself. Taking the example of our experiment, a service provider can ask a German user to prove that her postal code is in the range of 59999 to 61000. The service provider thereby learns that the user lives in the city of Frankfurt am Main, but not in which district of the city, which could otherwise leak information about her financial status. Some scholars [8] reported that users lack an appropriate mental model for such technologies even regarding simpler features, such as combining individual attributes from different credentials in a single authentication. Consequently, we argue in this paper that without additional support regarding the semantics of their actions, users' perception of security and privacy risk may not be appropriately influenced by such privacy-enhancing technology.
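To make the semantics of such a predicate proof concrete, the following minimal Java sketch shows what a verifier learns from a range predicate: a single boolean, never the attribute value itself. All class and method names are hypothetical illustrations, not the Idemix or U-Prove API, and the sketch deliberately omits the actual zero-knowledge cryptography.

```java
// Conceptual sketch only: real Privacy-ABC range proofs are zero-knowledge
// protocols; this merely illustrates what the verifier learns in the end.
public class RangePredicateDemo {

    /** A predicate of the form lower <= attribute <= upper. */
    record RangePredicate(long lower, long upper) {
        boolean holdsFor(long attributeValue) {
            return attributeValue >= lower && attributeValue <= upper;
        }
    }

    public static void main(String[] args) {
        long postalCode = 60311;  // stays on the user's side, never disclosed
        RangePredicate policy = new RangePredicate(59999, 61000);

        // The service provider receives only this one bit, not the value:
        boolean proofOutcome = policy.holdsFor(postalCode);
        System.out.println("Lives in Frankfurt am Main: " + proofOutcome);
    }
}
```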

In this paper, we demonstrate that users' perception of security and privacy risks changes when they are supported with additional information regarding the semantics of their proofs during authentication with Privacy-ABCs. Our findings are based on the results of an empirical experiment with 80 users. We compared the perceived security and privacy risk of two groups of participants who received different treatments while practising authentication with Privacy-ABCs. For this experiment, we first implemented a platform on which users could practise authentication with Privacy-ABCs. We then used the platform to conduct the experiment and afterwards measured the participants' perceived security and privacy risk using part of a systematically developed measurement instrument (cf. Appendix A).

The rest of this paper is organized as follows: we review related research in Sect. 2, explain the details of our experiment in Sect. 3, present the results and their implications in Sect. 4, and conclude in Sect. 5.

2 Related Work

To the best of our knowledge, there have not been many studies in the literature concerning the human aspects of Privacy-ABCs. Wästlund et al. [8] were the first to report on the challenges of designing user-friendly interfaces that convey the privacy benefits of Privacy-ABCs to users. They observed that users were still unfamiliar with the new and rather complex concept of Privacy-ABCs, since no obvious real-world analogies existed that could help them build correct mental models. Benenson et al. [2, 3] investigated one of the ABC4Trust trials using the Technology Acceptance Model (TAM) [4]. They discovered significant negative correlations of Perceived Risk with Intention to Use, Perceived Usefulness for the primary and secondary goals, Perceived Ease of Use, and Trust. In the same study, they found Perceived Risk to depend on Perceived Anonymity.

An expert survey was conducted in [7] to predict the factors influencing the adoption of Privacy-ABCs. The results indicate that complexity for the user is among the most important factors. In our study, we try to reduce this complexity by providing additional support in the form of a semantic analysis of the presentation policies (the artefacts describing the attributes and proofs requested by service providers) and observe the effect on perceived security and privacy risk, because perceived risk is reported to be particularly important for the adoption of e-services [6]. The method of influencing perceived risk was chosen based on recommendations from warning theory, which suggests that more specific information about hazards and consequences can reduce uncertainty and enable people to make better-informed cost-benefit trade-off decisions regarding the need to comply [5]. Bal [1] extended this design theory to the field of information privacy warning design by experimentally investigating the effects of explicitness in privacy warnings on individuals' perceived risk and the perceived trustworthiness of smartphone apps, and observed significant effects.

Table 1. Measurement instrument for security and privacy risk of using ID+ to authenticate towards Politiks.eu

3 Experiment Design

The experiment was designed to evaluate the effect of an additional semantic analysis of the presentation policy on the privacy risk perceived by end users. We first implemented the software components necessary to enable authentication with Privacy-ABCs. We called our mock-up Privacy-ABC identity card "ID+" and let the users try it on our experiment portal, "Politiks.eu". We also adjusted the questionnaire we had developed for perceived security and privacy risk to reflect the experiment environment (the details of the questionnaire development are presented in Appendix A). The questionnaire used a 7-point Likert scale ranging from Strongly Disagree to Strongly Agree; Table 1 presents the final version for our experiment. We then conducted the experiment within the student network of the Goethe University Frankfurt in October and November 2015. In the following sections, we explain the details of our process.

Fig. 1. User tasks list

3.1 Experiment Platform Setup

A precondition for our experiment was a platform on which scenarios for authenticating with Privacy-ABCs could be tried out. We decided to develop a mock-up prototype that presents the workflow of authenticating with Privacy-ABCs with a friendlier interface than the ABC4Trust reference implementation and better integration into the web browser. We designed the User Agent as a Firefox plugin and integrated it into the browser. We added a button labelled "ID" to the toolbar of the Firefox browser which, when clicked, showed the user's identity credential if the smartcard was connected to the computer. The authentication in the experiment was emulated: the smartcard was employed to give the feeling of a real process, but the users' attributes were actually stored in the browser configuration. A small Java application ran in the background to check the status of the smartcard, allowing the browser plugin to query the status via a RESTful web-service call. The plugin attached specific JavaScript code to the HTML content of the web page when the experiment portal URL was opened. This JavaScript code made it possible to communicate with the plugin in order to invoke the GUI for authentication with Privacy-ABCs. When a button on the web page triggered login with Privacy-ABCs, the message was communicated to the plugin and, if the smartcard was present, the GUI popped up as a small window next to the "ID" button. The window guided the user through the steps of authentication, and upon completion the user was redirected to the requested page.
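A minimal sketch of such a background helper, assuming the standard JDK HTTP server and javax.smartcardio APIs, is shown below; the port number and endpoint path are hypothetical, not necessarily those used in our prototype.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import javax.smartcardio.CardTerminal;
import javax.smartcardio.TerminalFactory;

/** Background helper: polls the card reader, answers the plugin's REST query. */
public class CardStatusService {

    /** Returns true if any attached reader currently holds a card. */
    static boolean cardPresent() {
        try {
            for (CardTerminal t : TerminalFactory.getDefault().terminals().list()) {
                if (t.isCardPresent()) return true;
            }
        } catch (Exception e) {
            // No reader attached or PC/SC unavailable: treat as "no card".
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8901), 0);
        server.createContext("/cardstatus", exchange -> {
            byte[] body = ("{\"cardPresent\": " + cardPresent() + "}")
                    .getBytes(StandardCharsets.UTF_8);
            // Allow the injected page script to query us cross-origin.
            exchange.getResponseHeaders().set("Access-Control-Allow-Origin", "*");
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```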

Fig. 2. Experiment process

3.2 Conducting the Experiment

The experiment was conducted within the student community of the Goethe University Frankfurt. The only restriction was that participants had to be between 18 and 34 years old. The participants were randomly assigned to one of two groups, the "control group" and the "experiment group". All participants received a brief introduction to ID+ and its privacy-enhancing features. Afterwards, each participant was given a smartcard and asked to open Firefox and browse to the experiment portal, "http://politiks.eu". In order to stress the need for privacy, we chose political discussion as the main content of the portal. Two forums were set up: one about the mayoral election in the city of Frankfurt and one about legalizing drugs. Each forum required the user to authenticate with her ID+ in order to access the discussion. The process of authenticating with ID+ for the two groups is shown in Fig. 2. Upon clicking "Login with ID+", the respective GUI popped up to guide the participant through the authentication process. The Frankfurt mayoral election forum asked users to prove that "Your postal code is between 59999 and 61000", and the forum on legalizing drugs requested a proof that "Your birth date is before 01.01.1997". The former policy semantically means that the participant lives in the Frankfurt am Main area, as the postal code follows the 60xxx format, and therefore ensures that she is a stakeholder. The latter proves that the participant was older than 18 at the time of the experiment and consequently allowed to discuss drugs. A semantic analysis of the given access policy was presented to the participants of the "experiment group" but not to the "control group". This additional step was the only difference in the process between the two groups; it was introduced as a transparency mechanism that could influence the users' perceived privacy risk. The participants worked through a task list (presented in Fig. 1) to interact with the portal. In the end, each participant was asked to fill in the questionnaire that we developed to measure their perceived security and privacy risk with regard to the use of ID+.
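The following sketch illustrates how such a semantic-analysis step can map a presentation policy to a plain-language explanation for the user; the policy identifiers and the wording are hypothetical, not the exact text shown in the study.

```java
/** Maps a presentation policy to a plain-language statement of what it reveals. */
public class PolicySemantics {

    static String explain(String policyId) {
        switch (policyId) {
            case "postal-code-59999-61000":
                return "You prove only that you live in the Frankfurt am Main "
                     + "area (postal code 60xxx); your exact postal code and "
                     + "district remain hidden.";
            case "birth-date-before-1997-01-01":
                return "You prove only that you are 18 or older; your exact "
                     + "birth date remains hidden.";
            default:
                return "No semantic analysis available for this policy.";
        }
    }

    public static void main(String[] args) {
        System.out.println(explain("postal-code-59999-61000"));
        System.out.println(explain("birth-date-before-1997-01-01"));
    }
}
```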

4 Results and Implications

In total, 80 participants took part in the experiment, 35 female and 45 male. All participants were between 18 and 34 years old. Regarding education level, 13 had no university degree yet, 42 held a Bachelor's degree, and 25 had a Master's degree or above. We statistically analysed the questionnaire results using IBM SPSS. Perceived security and privacy risk is a complex construct with various facets. Within the items measuring perceived security and privacy risk, we covered the following aspects: Linkability, Traceability, Anonymity, Security, Control, Collection, Impersonation, and Unwanted Authorization.

The responses to each of the questions are shown in Fig. 3. The x-axis represents the answers (1 = Strongly Disagree, 2 = Disagree, 3 = Somewhat Disagree, 4 = Neither Agree nor Disagree, 5 = Somewhat Agree, 6 = Agree, 7 = Strongly Agree).

Fig. 3. Participants' answers to the security and privacy risk questions (first column: control group; second column: experiment group)

Table 2. Rotated component matrix (rotation method: Varimax with Kaiser normalization)

Comparing the descriptive statistics for the answers of the control group and the experiment group, both groups Somewhat Disagreed with the risk of Linkability (\(m_{c}=2.85, \sigma _{c}=1.96, m_{e}=2.95, \sigma _{e}=1.78\)). With regard to Traceability, both groups had on average a neutral perception (\(m_{c}=3.98, \sigma _{c}=1.72, m_{e}=3.75, \sigma _{e}=1.46\)). The results concerning Anonymity were almost the same for both groups and lay between Somewhat Agree and Agree (\(m_{c}=5.60, \sigma _{c}=1.67, m_{e}=5.50, \sigma _{e}=1.41\)). For Security, the experiment group showed slightly stronger disagreement than the control group, but both were around Somewhat Disagree (\(m_{c}=3.38, \sigma _{c}=1.72, m_{e}=2.88, \sigma _{e}=1.44\)). The results also indicate a slight difference in the perception of Unwanted Authorization, but the average of both groups was close to Somewhat Disagree (\(m_{c}=2.93, \sigma _{c}=1.46, m_{e}=2.75, \sigma _{e}=1.13\)). The perception of the control group was on average between Somewhat Disagree and Neutral towards Impersonation, while the experiment group's perception was between Disagree and Somewhat Disagree (\(m_{c}=3.38, \sigma _{c}=1.78, m_{e}=2.40, \sigma _{e}=1.28\)). Similar results were observed for Collection (\(m_{c}=3.48, \sigma _{c}=1.57, m_{e}=2.50, \sigma _{e}=1.47\)) and Control (\(m_{c}=3.40, \sigma _{c}=1.53, m_{e}=2.40, \sigma _{e}=1.43\)).

We performed a Principal Component Analysis (PCA) with Varimax rotation and Kaiser normalization on the security and privacy risk items to investigate whether all items loaded on a single "total security and privacy risk" component. A rule of thumb for PCA is to have at least 10 participants per variable; our 80 participants met this requirement for our eight variables. As shown in Table 2, the eight items loaded on two components (which we named C1 and C2). Consequently, we calculated C1 as the un-weighted average of Security, Unwanted Authorization, Impersonation, Collection, and Control, and C2 as the un-weighted average of Linkability, Traceability, and Anonymity. Bartlett's test indicated significant correlations between the items, and the Kaiser-Meyer-Olkin (KMO) measure verified the sampling adequacy for the analysis (KMO = .789). Regarding reliability, Cronbach's \(\alpha \) was 0.82 for C1 and 0.60 for C2.
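For reference, the reliability coefficient can be computed from the raw item scores with the standard Cronbach's alpha formula, \(\alpha = \frac{k}{k-1}\left(1 - \frac{\sum _{i}\sigma ^2_{i}}{\sigma ^2_{X}}\right)\), where \(k\) is the number of items, \(\sigma ^2_{i}\) the variance of item \(i\), and \(\sigma ^2_{X}\) the variance of the summed score. The sketch below implements this on toy data, not on our actual responses.

```java
/** Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)). */
public class CronbachAlpha {

    /** Sample variance (n - 1 denominator). */
    static double variance(double[] xs) {
        double mean = 0;
        for (double x : xs) mean += x;
        mean /= xs.length;
        double ss = 0;
        for (double x : xs) ss += (x - mean) * (x - mean);
        return ss / (xs.length - 1);
    }

    /** scores[participant][item], e.g. 7-point Likert answers. */
    static double alpha(double[][] scores) {
        int n = scores.length, k = scores[0].length;
        double itemVarSum = 0;
        for (int j = 0; j < k; j++) {
            double[] item = new double[n];
            for (int i = 0; i < n; i++) item[i] = scores[i][j];
            itemVarSum += variance(item);
        }
        double[] totals = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < k; j++) totals[i] += scores[i][j];
        return (k / (k - 1.0)) * (1 - itemVarSum / variance(totals));
    }

    public static void main(String[] args) {
        double[][] toy = { {2,3,2,3,2}, {5,6,5,6,5}, {3,3,4,3,3}, {6,7,6,6,7} };
        System.out.printf("alpha = %.2f%n", alpha(toy));  // toy data only
    }
}
```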

Table 3. Independent-samples t-test

After identifying the components, we compared the mean values of C1 and C2 between the control group and the experiment group using an independent-samples t-test. As reported in Table 3, the results show a statistically significant difference in C1 between the two groups (p-value \(\leqslant \) 0.005), which means that the probability of the observed difference in means occurring by chance is less than or equal to 0.5%. The participants of the experiment group thus perceived less risk (mean difference = .725) concerning the dimensions of security and privacy covered by C1. Intuitively, the experiment group received additional explicit information about the consequences of delivering the proofs requested by the portal, which in particular made them perceive better control over their attributes and the collection thereof.
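For illustration, the sketch below computes the equal-variances ("Student") form of the independent-samples t statistic, one of the two variants SPSS reports; the group arrays are toy values, not the study's measurements.

```java
import java.util.Arrays;

/** Student's t statistic for two independent samples (pooled variance). */
public class IndependentSamplesTTest {

    static double mean(double[] xs) {
        return Arrays.stream(xs).average().orElse(Double.NaN);
    }

    /** Sample variance (n - 1 denominator). */
    static double variance(double[] xs) {
        double m = mean(xs);
        return Arrays.stream(xs).map(x -> (x - m) * (x - m)).sum()
                / (xs.length - 1);
    }

    static double studentT(double[] a, double[] b) {
        int n1 = a.length, n2 = b.length;
        double pooledVar = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b))
                / (n1 + n2 - 2);
        return (mean(a) - mean(b)) / Math.sqrt(pooledVar * (1.0 / n1 + 1.0 / n2));
    }

    public static void main(String[] args) {
        double[] control    = {3.4, 3.8, 2.6, 4.0, 3.2};  // toy C1 scores
        double[] experiment = {2.4, 2.8, 2.2, 3.0, 2.6};
        System.out.printf("t = %.3f%n", studentT(control, experiment));
    }
}
```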

5 Conclusion

In this work, we designed and conducted an empirical experiment with Privacy-ABCs in order to demonstrate the effect of additional support for users with regard to the semantics of Privacy-ABC proofs. Privacy-ABCs enable new privacy features such as minimal disclosure, predicates over attributes, and set membership proofs. However, users are not very familiar with these concepts and have difficulty building a mental model of such advanced operations. We argued that additional information explaining the facts and semantics of the Privacy-ABC proofs required during an authentication process influences users' perceived security and privacy risk. We verified our hypothesis through our experiment, in which we examined 80 participants in two groups and measured their perceived security and privacy risk using our systematically developed measurement instrument. Our results show that the group who received additional information about the semantics of the proofs differed in a statistically significant way in some aspects of their perceived security and privacy risk. Despite our methodological rigour, the experiment was conducted through the student network of the Goethe University Frankfurt and the age of the participants was limited to 18–34 years; consequently, the results may not generalize to users who differ significantly from our sample group.