Next to the systematic literature review (see Part I), this research comprises an empirical study of 22 cases of digital tools that have been or are still being used as instruments for citizen involvement in democratic processes. A large part of the cases was requested by the Panel for the Future of Science and Technology at the European Parliament, which commissioned this research. The remaining cases were selected on the basis of four criteria: (1) diversity of tools, (2) diversity of institutional contexts and scales (local, national, European and some international),Footnote 1 (3) geographical diversity and (4) different types of citizen involvement. The combination of these criteria provides a broad perspective on the kinds of tools that could be used to strengthen participatory democracy at the EU level. Of course, we do not claim that this set of case studies is representative of all uses of digital tools as discussed on the basis of our literature review. It remains a selection that, had space permitted, could have been extended to correspond even more closely to our conceptual framework and to the full arsenal of digital practices in political participation.

1 Evaluation Framework

The description of the 22 cases is based on an evaluation framework for assessing the digital tools. The key elements of the framework were selected in line with the project’s central aim: to identify and analyse best practices with digital tools for participatory and direct democracy at different political and governmental levels (local, national, European) that can in the future be used at the EU level to encourage citizen engagement and counteract the European democratic deficit.

In view of the current crisis of representative democracy, citizens’ disengagement from democratic processes and their distance from EU institutions, the restoration and enhancement of democratic legitimacy at the European level is needed. Therefore, we put legitimacy and its key dimensions (Schmidt 2013) centre stage in the evaluation framework and use it as the basis for differentiating further, more specific evaluation aspects. In this we follow the Council of Europe in its recommendation on e-democracy as referred to in the Introduction: “E-democracy, as the support and enhancement of democracy, democratic institutions and democratic processes by means of ICT, is above all about democracy. Its main objective is the electronic support of democracy” (Council of Europe 2009: 1).

In order to investigate how digital tools can contribute to stronger connections between EU citizens and EU politics, we distinguish between five types of citizen involvement: (1) monitoring, (2) formal agenda setting (invited space, i.e. initiated by government), (3) informal agenda setting (invented space, i.e. initiated by citizens), (4) non-binding decision-making and (5) binding decision-making (see Table 5.1) (Kersting 2014). In combination with the focus of the research on democratic legitimacy, this leads to an evaluation model along the lines of the input, throughput and output legitimacy of political decision-making processes (Schmidt 2013; Scharpf 1999).
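Purely as an illustration, and not as part of any existing tool, this typology and the three legitimacy dimensions can be written down as a small coding scheme. The following is a minimal Python sketch with hypothetical identifiers of our own:

```python
from enum import Enum

class Involvement(Enum):
    """Five types of citizen involvement (Kersting 2014)."""
    MONITORING = 1                # observing and scrutinising government
    FORMAL_AGENDA_SETTING = 2     # invited space: initiated by government
    INFORMAL_AGENDA_SETTING = 3   # invented space: initiated by citizens
    NON_BINDING_DECISION = 4      # consultative, non-binding outcomes
    BINDING_DECISION = 5          # outcomes with direct binding effect

class Legitimacy(Enum):
    """Dimensions of democratic legitimacy (Scharpf 1999; Schmidt 2013)."""
    INPUT = "responsiveness to citizen concerns through participation"
    THROUGHPUT = "inclusiveness and openness of the process"
    OUTPUT = "effectiveness of policy outcomes for the people"
```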

Table 5.1 Overview of case studies

Fritz W. Scharpf (1999) divided democratic legitimisation into input legitimacy, judged in terms of the EU’s responsiveness to citizen concerns as a result of participation by the people, and output legitimacy, judged in terms of the effectiveness of the EU’s policy outcomes for the people. Vivien Schmidt (2013) has added a third criterion to this theorisation of democratic legitimacy for the evaluation of EU governance processes: throughput legitimacy, which judges those processes in terms of their inclusiveness and openness to consultation with the people.

The distinction between the three criteria for democratic legitimacy helps to understand the particular relevance of the democratic deficit in times of the recent and current EU crisis. Owing to their transnational character, EU institutions can hardly root their legitimisation in strong channels of information from citizens (input legitimacy) and consultation with citizens (throughput legitimacy), and must therefore rely on legitimising their policies through the quality of their output, that is, decisions and regulations that are in the best interest of, and thus supported by, the citizenry (output legitimacy). That the means of the EU institutions are restricted in the latter respect as well has a special bearing in times of crisis. The missing input legitimacy becomes the more problematic the weaker output legitimacy gets, as illustrated by the apparent difficulty of establishing consensus on, for example, a joint European policy to address the refugee crisis. In a situation where strong decisions have to be taken at the EU level (beyond national interests), input and also throughput legitimacy are urgently needed.

The three types of legitimacy pose different demands on digital tools for citizen involvement. In the following paragraphs we will address these different demands.

Regarding input legitimacy, the use of digital tools will be assessed in terms of how it enhances the voice of citizens in the political decision-making process. “Voice” concerns the way in which affected citizens are able to influence the political agenda (Manin 1987). To what extent are citizens enabled to express their wishes and interests in political decision-making? How can citizens get an issue onto the political agenda? Is there equal opportunity for citizens to voice their concerns? Are citizens sufficiently supported in their efforts to make their voices heard in the process (i.e. interaction support)? Is the tool user-friendly (i.e. tool usability)?

Regarding throughput legitimacy, an evaluation will be made of how digital tools contribute to the quality of the deliberation process, in terms of an inclusive dialogue and a careful consideration of options (Cohen 1989). Relevant questions are: to what extent do the views of the citizens expressed through the digital tool represent the views of the general population (i.e. representation)? How is the diversity of views within the population (including minority views) reflected in the process? Are the different policy options carefully considered in the deliberation process? Do citizens have access to all the relevant information about the decision-making process to which the results of the digital citizen involvement should contribute?

Concerning output legitimacy, responsiveness to the arguments and proposals of citizens (Cohen 1989) and effectiveness (Scharpf 1999) will be evaluated, along with the accountability of the decisions made. To what extent do the tools substantially contribute to the political decisions made (i.e. democratic impact)? How do the digital tools contribute to feedback? Is information provided about the decision-making process and its outcomes (i.e. accountability)?
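Taken together, the questions above form a checklist per legitimacy dimension. The sketch below condenses them into a hypothetical Python structure of our own; the actual instrument is the template summarised in Table 5.2:

```python
# Hypothetical condensation of the evaluation questions per legitimacy
# dimension; the actual instrument is the template in Table 5.2.
EVALUATION_FRAMEWORK = {
    "input": [
        "voice: can citizens express their wishes and interests?",
        "agenda access: can citizens put an issue on the political agenda?",
        "equal opportunity for citizens to voice their concerns",
        "interaction support for participants",
        "tool usability",
    ],
    "throughput": [
        "representation: do expressed views mirror the general population?",
        "reflection of the diversity of views, including minority views",
        "careful consideration of the different policy options",
        "access to all relevant information on the decision-making process",
    ],
    "output": [
        "democratic impact: substantive contribution to political decisions",
        "responsiveness and feedback on citizens' arguments and proposals",
        "accountability: information on the process and its outcomes",
    ],
}
```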

The cases are described on the basis of the questions of the evaluation framework listed in Table 5.2. Each case description has at least four sections: an introductory section (i.e. a short description of the digital tool), one on the participants, one on the participatory process and one on the results of the digital tool.

Table 5.2 Evaluation framework for assessing digital tools

2 Data Collection

Each individual case was studied thoroughly. All aspects of the evaluation framework are covered in a structured template that formed the empirical checklist for the case studies. Empirical data on all these aspects came from different data sources and methods of data collection, namely:

  • (grey) literature research

  • standardised online questionnaire

  • semi-structured interviews

Methodological triangulation is thus key to our data collection strategy: we used more than one method and source to gather data on the 22 cases, in order to cross-check our data and to obtain construct validity (an effective methodology and a valid operationalisation) (Fielding and Warnes 2009). The elementary data for the case studies came from the (grey) literature about each case. In addition, two respondents per case were interviewed: (1) a professional involved in the case and (2) an expert who has scientifically studied and/or reflected on the case.

The interviews took place in two steps. First, the interviewees were asked to complete a standardised online questionnaire to evaluate the digital tool. A separate questionnaire was created for the e-voting cases, because not all questions were applicable there. The draft questionnaires were pre-tested in a pilot, and feedback was received from two external experts. This led to several adjustments to the questionnaire.

Second, the respondents were interviewed face-to-face, by telephone or via Skype; these follow-up interviews with open questions took no more than one hour. The individual questionnaire responses of the professionals and experts guided these semi-structured interviews. The open questions addressed, in a more qualitative way, the motivations behind the respondents’ evaluation scores. Moreover, they focused on a better understanding of the success factors, risks, challenges and EU suitability of the specific digital tool. In addition, unsolved issues within the case study (inconsistencies in the data, or aspects on which no information could be found in the literature) were discussed with the respondents. The interviewees were able to comment on the transcript of the interview as well as on the draft case study.

The data collection was conducted from 2016 until February 2017. In a few cases, later developments (2017–2018) are also addressed in the case descriptions.

3 Qualitative Comparative Analysis (QCA)

To analyse the case descriptions based on the findings of the desk research, the questionnaire and the interviews, the technique of Qualitative Comparative Analysis (QCA) was used.Footnote 2 QCA is a technique for the systematic comparison of different case studies. It intends to integrate qualitative case-oriented and quantitative variable-oriented approaches (Ragin 1987) and aims at “meeting the need to gather in-depth insight into different cases and to capture their complexity, while still attempting to produce some form of generalization” (Rihoux and Ragin 2009, xvii).

Our research has an intermediate-N design comprising 22 cases: a sample too large to focus on in-depth analysis only, yet too small to allow for conventional regression analysis, which makes QCA an appropriate technique (cf. Gerrits and Verweij 2015). It is particularly in such intermediate-N research designs that QCA acknowledges internal case complexity on the one hand while enabling cross-case comparison on the other (Rihoux and Ragin 2009, xvii).
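In practice, QCA analyses are run with dedicated software (for instance the R package QCA or the fsQCA program). Purely to illustrate the underlying idea of a crisp-set truth table, the following is a minimal Python sketch using pandas, with invented case data and hypothetical condition names of our own:

```python
import pandas as pd

# Invented, dichotomised example data: three conditions and one outcome
# per case (1 = present, 0 = absent). All names are illustrative only.
cases = pd.DataFrame(
    {
        "invited_space":     [1, 1, 0, 0, 1, 0],
        "binding_outcome":   [0, 1, 0, 1, 1, 0],
        "high_usability":    [1, 1, 1, 0, 0, 0],
        "democratic_impact": [1, 1, 0, 0, 1, 0],  # the outcome to explain
    },
    index=[f"case_{i}" for i in range(1, 7)],
)

conditions = ["invited_space", "binding_outcome", "high_usability"]

# Truth table: one row per observed configuration of conditions, with the
# number of cases and the consistency of the outcome in that configuration.
truth_table = (
    cases.groupby(conditions)["democratic_impact"]
    .agg(n_cases="size", consistency="mean")
    .reset_index()
)

# A configuration is treated as sufficient for the outcome if its
# consistency meets a chosen threshold (0.8 is a common convention).
truth_table["sufficient"] = truth_table["consistency"] >= 0.8
print(truth_table)
```

The Boolean minimisation of the sufficient configurations, which yields the solution formulas, is omitted here; dedicated QCA software performs that step.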