Manipulability in a group activity selection problem
Abstract
We consider the aspect of strategic manipulation in a group activity selection problem. Given a set of activities in which they might participate, the agents have preferences both over the activities themselves and over the number of participants in each activity; the goal is to assign agents to activities on the basis of their preferences. In this paper, we study the possibility of strategic manipulation when solutions are provided in such a setting, for the solution concepts of maximum individual rationality, core stability, and Pareto optimality respectively. For three different preference extensions (Gärdenfors extension, maxi–min extension and maxi–max extension) we analyze strategic manipulability with respect to the number of activities available. In general, the considered solution concepts turn out to be prone to strategic manipulation; in some natural special cases, however, such an aggregation is strategyproof.
1 Introduction
We investigate the aspect of strategic manipulability in a group activity selection problem considered in Darmann et al. (2018) and Darmann (2018) respectively. In this setting, there is a set of agents and a set of activities to which the agents should be assigned, where each agent can take part in at most one activity. The agents’ preferences depend on the activity itself and the number of participants in that activity. As particular examples consider the organizer of a workshop who plans to set up social activities for the free afternoon, or a company that wants to provide free sports classes for its employees (Skowron et al. 2015) in order to raise their overall satisfaction. Since these activities take place simultaneously, each agent can take part in at most one of them; of course, the choice of abstaining from any activity, i.e., doing nothing, should be a valid option as well. It is plausible to assume that the preferences of the agents do not depend on the activity alone but also on the number of agents taking part in the respective activity, since, e.g., a table tennis tournament with 40 players and only one table will not be desired even by a passionate table tennis player. A natural goal of the organizer now would be to find a reasonable assignment of agents to activities without forcing an agent to participate when she is not willing to. Another example would be a company that has several possible projects to which some of its employees should be assigned instead of performing their common working tasks, as a bonus in the form of a variation from their usual work or in order to let them gain additional experience in project work.
As an efficiency consideration, the company is interested in assigning the employees in a way such that they bear a reasonable level of motivation in working on the projects; in particular, the company does not want employees to be so poorly motivated for their assigned project and corresponding team size that they would rather opt out (and perform their usual working tasks in the company instead).
In this paper, we consider the group activity selection problem with ordinal preferences (\({\textsf {o}}\hbox {}{\textsf {GASP}}\)), in which the agents’ preferences are strict orders over pairs “(activity, group size)” including the possibility “do nothing” to which we refer as the void activity. The goal, of course, would be to assign agents to activities in a reasonable manner. As indicated above, a main requirement is that the assignment should be individually rational, meaning that no agent should be forced to take part in an alternative she deems unacceptable, i.e., would rather prefer doing nothing to. The purpose of this paper is to study the aspect of strategyproofness involved when such assignments are provided.
Our contribution and relation to the literature Our focus is laid on the main solution concepts studied in the group activity selection problem: maximum individually rational, core stable, and Pareto optimal assignments respectively. In natural special preference domains we analyze the strategyproofness of the respective single-valued aggregation functions and possibly multi-valued aggregation correspondences with respect to the number of activities involved.
Darmann et al. (2018) introduce the general group activity selection problem \({\textsf {GASP}}\), where the agents’ preferences are weak orders over the pairs “(activity, group size)”. There, the problems of finding a stable assignment, for stability notions such as Nash and core stability, and, above all, finding a maximum individually rational assignment—that is, an individually rational assignment maximizing the number of agents assigned to a nonvoid activity—are studied from a computational viewpoint in the approval-based variant \({\textsf {a}}\)\({\textsf {GASP}}\). Darmann (2018) considers the problem of finding Pareto optimal and stable solutions in the strict preference setting of \({\textsf {o}}\)\({\textsf {GASP}}\), for different stability notions including the one of core stability. In both works the focus is laid on the special cases of increasing and decreasing preferences. Loosely speaking, with increasing preferences an agent would like as many other agents as possible to join the same activity; in the decreasing preferences case, an agent would like to share the same activity with as few other agents as possible. In this paper we analyze the aspect of manipulability involved in providing maximum individually rational, core stable, and Pareto optimal assignments. Typically, these assignments are not unique; our main interest hence lies in aggregation correspondences which output the set of all maximum individually rational, core stable, and Pareto optimal assignments respectively. We particularly focus on the special cases of increasing, decreasing, and—more generally—single-peaked preferences.
Our results show that such an aggregation is, unfortunately, susceptible to strategic manipulation already for a small number of activities involved: while for the cases of one and two activities some robustness results can be achieved, for three activities all of the considered solution concepts allow for strategic manipulation with respect to each of the preference extensions considered. In addition, it turns out that the negative results generalize to an impossibility result for any aggregation process that respects individual rationality when the mild condition of unanimity is imposed (which is satisfied by basically any reasonable aggregation), for restricted instances of \({\textsf {o}}\hbox {}{\textsf {GASP}}\) already (see Sect. 5.4).
Whether the aggregation of individual preferences into a group solution is susceptible to strategic manipulation is one of the central questions in social choice theory. Such an aggregation function (which outputs a single outcome) or aggregation correspondence (which outputs a set of outcomes) is strategyproof, and hence not manipulable, if no agent can be better off by misrepresenting her true preferences. In the classical framework, strategyproofness both of aggregation functions [see, for instance, Barberà (2010), and the seminal papers by Gibbard (1973) and Satterthwaite (1975)] and of aggregation correspondences [see, e.g., Barberà et al. (2001), Brandt and Brill (2011) and Brandt and Geist (2014)] has been well-studied. Clearly, comparing different assignments, an agent will prefer one which yields the best alternative for her. Comparing sets of assignments, however, is less obvious. Instead of asking the agents to give a ranking over all possible sets of outcomes (which would require ranking an exponential number of possibilities), the typical assumption is that the preferences over the single alternatives can be extended to binary relations over sets of alternatives. Of course, such a preference extension can be performed in various ways [see, e.g., Barberà et al. (2004) and Barberà (2010)]. We consider the well-known Gärdenfors extension (Gärdenfors 1976) and include the natural maxi–max extension and maxi–min extension (see Moretti and Tsoukiàs 2012) in our analysis: in an optimistic mindset, one might hope for the most-preferred among the possible alternatives; having a pessimistic view, one might be worried about the least-preferred among the alternatives.
Strategyproofness of coalition formation rules has been studied in several papers, often axiomatically motivated. Rodríguez-Álvarez (2009) characterizes single-lapping rules over the domain of additively representable or separable preferences by four axioms including strategyproofness. Pápai (2004) uses strategyproofness as one of the characterizing axioms of these rules in general domains. Further literature on strategyproofness of coalition formation games, in connection with core stability, includes the works of Alcalde and Revilla (2004), Cechlárová and Romero-Medina (2001), and Sönmez (1999).
Closely related problems have been considered in the works of Lee and Shoham (2015) and Long (2018). Both in the anonymous stable invitation problem (ASIP) of Lee and Shoham (2015) and in the group selection problem of Long (2018), the objective is to determine a single subgroup of agents in a reasonable way given the agents’ preferences over group size (including 0). ASIP can be understood as the group activity selection problem with a single^{1} activity; Lee and Shoham (2015) provide the impossibility result that a strategyproof mechanism for ASIP that always outputs an individually rational and envy-free solution cannot exist. The group selection problem of Long (2018) might also be interpreted as the group activity selection problem with a single activity under a certain domain restriction. Long (2018) assumes the agents’ preferences to be strict and single-peaked, where their notion of single-peakedness implies that each agent’s set of group sizes preferred over the outside option is a (possibly empty) integer interval starting from one. In contrast, the notion of single-peakedness for the group activity selection problem used in this paper does not yield the analogous implication. However, Long (2018) proposes two aggregation functions (rules) that output a Pareto optimal and individually rational assignment, and provides an axiomatic justification for each of them. We add to these results by showing that, in the terminology used in our paper, a strategyproof aggregation function outputting a maximum individually rational assignment cannot exist in the group activity selection problem with only one activity when all agents’ preferences are decreasing (and therefore single-peaked), while for the increasing case each such aggregation function is strategyproof.
In addition, imposing a rather mild and natural condition—which is satisfied by all of the considered aggregation correspondences including the one restricted to Pareto optimal assignments—we provide a general impossibility result for aggregation functions and correspondences (w.r.t. Gärdenfors and maxi–min extension) for the case of two activities and increasing preferences.
Note that our work differs from Lee and Shoham (2015) and Long (2018) in several respects. Firstly, we mainly focus on aggregation correspondences instead of single-valued aggregation functions. Secondly, with maximum individual rationality and core stability we consider different solution concepts. Thirdly, we also take into account the case of more than just one activity. In particular, for the maxi–max, maxi–min and Gärdenfors extensions we provide the link between strategic manipulability and the number of available activities in the considered group activity selection problem.
Further related work includes those of Jackson and Nicolò (2004) and Massó and Nicolò (2008). In their setting, agents have preferences over both alternatives and group size, and the goal is to determine a single alternative together with a group of agents who jointly use the alternative. Massó and Nicolò (2008) assume gregarious preferences, i.e., for each alternative the agents want additional agents to join the group; the focus is laid on efficient and both internally and externally stable allocations. Jackson and Nicolò (2004) assume that for each group size the agents’ preferences over the alternatives are single-peaked; they consider the domains of pure congestion and pure cost-sharing as special cases (these domains translate to the cases of decreasing and increasing preferences respectively in our framework). They provide the result that, even in the domain of pure congestion, only dictatorship is compatible with strategyproofness, Pareto efficiency and the property of outsider independence.
Finally, the group activity selection problem is also related to anonymous and non-anonymous hedonic games. Note that in non-anonymous hedonic games, agents have preferences over the possible coalitions they could be part of; in the anonymous variant these preferences only depend on the size of the coalition. In contrast, in \({\textsf {o}}\hbox {}{\textsf {GASP}}\) (and in \({\textsf {GASP}}\) in general), the agents’ preferences depend both on the considered activity and the size of the group of agents participating in that activity. However, the setting of \({\textsf {GASP}}\) (and hence \({\textsf {o}}\hbox {}{\textsf {GASP}}\)) can, in a somewhat bulky and artificial way, be embedded in the general hedonic game framework (see Darmann et al. 2018 for details). In particular, the model considered in this work allows for a much more compact representation and has some natural special cases to which we turn our attention.
The paper is organized as follows. In Sect. 2 we present the model of \({\textsf {o}}\hbox {}{\textsf {GASP}}\) and some basic definitions. The concepts of strategyproofness and the preference extensions involved are presented in Sect. 3. In Sect. 4 we discuss manipulability in the case of a single activity. Section 5 considers the aspect of strategic manipulation involved when there are at least two activities, and ends with a general impossibility result for any aggregation function/correspondence satisfying unanimity. Section 6 provides an outlook towards future research questions and concludes the paper.
2 Formal model
We begin with the model considered in this work and some basic definitions (see also Darmann 2018, Darmann et al. 2018, and the survey by Darmann and Lang 2017).
Given a set of agents \(N=\{1,\ldots ,n\}\), and a set of activities \(A=A^{*}\cup \{a_{\emptyset }\}\), where \(A^{*}=\{a_{1},\ldots ,a_{m}\}\) and activity \(a_{\emptyset }\) is the void activity, the set of alternatives \(W\) is given by \(W=W^{*}\cup \{a_{\emptyset }\}\), with \(W^{*}=A^{*}\times \{1,\ldots ,n\}\); alternative \((a,k)\in W^{*}\) is interpreted as “activity a with k participants”. The vote \(\succ _{i}\) of an agent \(i\in N\) is a strict order over W. A preference profile \(P=(\succ _{1},\ldots ,\succ _{n})\) over W consists of n votes (one for each agent). The set of all preference profiles over W is denoted by \(\mathcal {P}(N,A)\). We refer to the set \(S_{i}:=\{(a,k)\in W^{*}\mid (a,k)\succ _{i}a_{\emptyset }\}\) as the induced approval vote of agent i, and say that agent i approves of all alternatives in \(S_{i}\).
An instance of the group activity selection problem with ordinal preferences (\({\textsf {o}}\hbox {}{\textsf {GASP}}\)) consists of a triple (N, A, P). An assignment for an instance (N, A, P) of \({\textsf {o}}\hbox {}{\textsf {GASP}}\) is a mapping \(\pi :N\rightarrow A\). We set \(\pi ^{a}:=\{i\in N\mid \pi (i)=a\}\) for \(a\in A\), and \(\pi _{i}:=\{i^{'}\in N\mid \pi (i^{'})=\pi (i)\}\) for \(i\in N\).
Abusing notation, we say that assignment \(\pi \) assigns agent i to alternative (a, k) if \(\pi (i)=a\) with \(\pi ^{a}=k\). Also, we identify the void activity \(a_{\emptyset }\) representing the outside option “do nothing” with the alternative \((a_{\emptyset },k)\), for any \(k\in \{1,\ldots ,n\}\). For the sake of readability, in this paper we omit the repeated use of the term “nonvoid” when referring to the number of activities in an instance (N, A, P) of \({\textsf {o}}\hbox {}{\textsf {GASP}}\); that is, with abuse of notation we refer to \(A^{*}\) as the number of activities in the instance.
As the main requirement considered for an assignment, no agent should be assigned to an alternative she deems unacceptable. Formally, an assignment \(\pi :N\rightarrow A\) is said to be individually rational if for every \(a\in A^{*}\) and every agent \(i\in \pi ^{a}\) it holds that \((a,\pi ^{a})\in S_{i}\).
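For illustration, individual rationality can be checked directly from the induced approval votes. The following Python sketch uses an encoding of our own choosing (not part of the model): a vote is a ranked list of alternatives in which the string "void" stands for \(a_{\emptyset }\), an assignment is a dictionary mapping agents to activities, and alternatives omitted from a vote count as disapproved.

```python
def approved(pref):
    """Alternatives an agent ranks strictly above the void activity."""
    return set(pref[:pref.index("void")])

def is_individually_rational(assign, prefs):
    """Every agent at a nonvoid activity must approve (activity, group size)."""
    sizes = {}
    for act in assign.values():
        if act != "void":
            sizes[act] = sizes.get(act, 0) + 1
    return all(
        act == "void" or (act, sizes[act]) in approved(prefs[i])
        for i, act in assign.items()
    )

# Two agents who approve of doing activity "a" together but not alone:
prefs = {1: [("a", 2), "void", ("a", 1)],
         2: [("a", 2), "void", ("a", 1)]}
print(is_individually_rational({1: "a", 2: "a"}, prefs))     # True
print(is_individually_rational({1: "a", 2: "void"}, prefs))  # False
```

The second call fails because agent 1 would be assigned to (a, 1), which she ranks below the void activity.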
Clearly, in any instance (N, A, P) of \({\textsf {o}}\hbox {}{\textsf {GASP}}\) the trivial assignment \(\pi _{\emptyset }\) which assigns each agent to \(a_{\emptyset }\) is individually rational. As a consequence, an individually rational assignment always exists. A natural goal of a benevolent central authority, however, might be to maximize the number of agents assigned to a nonvoid activity. Let \(\#(\pi )=|\{i\in N\mid \pi (i)\ne a_{\emptyset }\}|\) denote the total number of agents assigned to nonvoid activities under assignment \(\pi \).
Definition 1
Given an instance (N, A, P) of \({\textsf {o}}\hbox {}{\textsf {GASP}}\), an assignment \(\pi \) is said to be maximum individually rational if \(\pi \) is individually rational and \(\#(\pi )\ge \#(\pi ^{'})\) for every individually rational assignment \(\pi ^{'}\).
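Since the number of assignments is finite, maximum individually rational assignments can in principle be found by exhaustive search. The following Python sketch is an illustrative brute force (exponential in the number of agents, and not an algorithm from the paper) under an encoding of our own: votes are ranked lists with the string "void" marking \(a_{\emptyset }\), and omitted alternatives count as disapproved.

```python
from itertools import product

def approved(pref):
    """Alternatives ranked strictly above the void activity."""
    return set(pref[:pref.index("void")])

def is_ir(assign, prefs):
    """Individual rationality: each nonvoid agent approves her (activity, size)."""
    sizes = {}
    for act in assign.values():
        if act != "void":
            sizes[act] = sizes.get(act, 0) + 1
    return all(act == "void" or (act, sizes[act]) in approved(prefs[i])
               for i, act in assign.items())

def maximum_ir_assignments(agents, activities, prefs):
    """All individually rational assignments maximizing nonvoid participation."""
    best, best_size = [], -1
    for choice in product(activities + ["void"], repeat=len(agents)):
        assign = dict(zip(agents, choice))
        if not is_ir(assign, prefs):
            continue
        size = sum(1 for a in choice if a != "void")
        if size > best_size:
            best, best_size = [assign], size
        elif size == best_size:
            best.append(assign)
    return best

# Agent 1 only approves of "a" in a pair; agent 2 also approves of "a" alone.
prefs = {1: [("a", 2), "void"], 2: [("a", 2), ("a", 1), "void"]}
print(maximum_ir_assignments([1, 2], ["a"], prefs))  # [{1: 'a', 2: 'a'}]
```

Note that assigning only agent 2 to a is individually rational but not maximum, so it is discarded by the size comparison.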
While the above concept of maximum individual rationality is certainly appealing, it does not take into account the possible desire of a group of agents to deviate from the assignment in favor of a different alternative. The wellknown concept of the core is concerned with stability against such group deviations. In particular, an assignment is core stable if no subgroup of agents wants to deviate from the assignment in order to join some other activity (see also Darmann 2018).
Definition 2
Given an instance (N, A, P) of \({\textsf {o}}\hbox {}{\textsf {GASP}}\), an assignment \(\pi \) is core stable (or in the core) if \(\pi \) is individually rational and there are no \(E\subseteq N\) and \(a\in A^{*}\) with \(\pi ^{a}\subset E\) such that \((a,E)\succ _{i}(\pi (i),\pi _{i})\) for all \(i\in E\).
Requirement \(\pi ^{a}\subset E\) in the above definition represents the intuition that—and hence covers scenarios in which—a deviating group of agents cannot prevent agents from participating in their assigned activity; therefore, the group requires cooperation of the agents assigned to that activity.^{2}
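Definition 2 can likewise be tested by brute force: for each activity a, try every strict superset E of \(\pi ^{a}\) and check whether all members of E strictly prefer (a, |E|) to their current alternative. A hedged Python sketch under an illustrative encoding of our own (votes as ranked lists with "void" for \(a_{\emptyset }\); omitted alternatives count as disapproved):

```python
from itertools import combinations

def current_alt(i, assign):
    """The alternative agent i receives under `assign`."""
    act = assign[i]
    if act == "void":
        return "void"
    return (act, sum(1 for a in assign.values() if a == act))

def value(pref, alt):
    """Rank in the vote (smaller = better); unlisted counts as disapproved."""
    return pref.index(alt) if alt in pref else len(pref)

def blocking_coalition(assign, agents, activities, prefs):
    """Search for E and a with pi^a strictly contained in E such that every
    member of E strictly prefers (a, |E|) to her current alternative."""
    for a in activities:
        members = {i for i in agents if assign[i] == a}
        outsiders = [i for i in agents if assign[i] != a]
        for k in range(1, len(outsiders) + 1):  # k >= 1 keeps the superset strict
            for extra in combinations(outsiders, k):
                E = members | set(extra)
                alt = (a, len(E))
                if all(value(prefs[i], alt) <
                       value(prefs[i], current_alt(i, assign)) for i in E):
                    return E, a
    return None

# Both agents approve of "a" in a pair; the trivial assignment is blocked:
prefs = {1: [("a", 2), "void"], 2: [("a", 2), "void"]}
print(blocking_coalition({1: "void", 2: "void"}, [1, 2], ["a"], prefs))
```

An assignment is core stable exactly if this search returns no witness (and the assignment is individually rational).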
Finally, we will consider Pareto optimal assignments, i.e., individually rational assignments for which there is no other assignment in which an agent is better off while no agent changes for the worse.
Definition 3
Given an instance (N, A, P) of \({\textsf {o}}\hbox {}{\textsf {GASP}}\), an assignment \(\pi \) Pareto-dominates assignment \(\pi ^{'}\) if \((\pi (i),\pi _{i})\succ _{i}(\pi ^{'}(i),\pi _{i}^{'})\) for at least one \(i\in N\) and there is no \(i\in N\) with \((\pi ^{'}(i),\pi _{i}^{'})\succ _{i}(\pi (i),\pi _{i})\). Assignment \(\pi \) is Pareto optimal if it is individually rational and there is no assignment \(\pi ^{'}\) which Pareto-dominates \(\pi \).
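Pareto dominance as in Definition 3 amounts to a pairwise comparison of the alternatives the two assignments give each agent; a minimal Python sketch, again under an illustrative list-based encoding of votes (names are ours, not the paper's):

```python
def current_alt(i, assign):
    """The alternative agent i receives under `assign`."""
    act = assign[i]
    if act == "void":
        return "void"
    return (act, sum(1 for a in assign.values() if a == act))

def value(pref, alt):
    """Rank in the vote (smaller = better); unlisted counts as disapproved."""
    return pref.index(alt) if alt in pref else len(pref)

def pareto_dominates(pi, rho, agents, prefs):
    """pi Pareto-dominates rho: somebody strictly gains, nobody loses."""
    gains = [value(prefs[i], current_alt(i, pi)) -
             value(prefs[i], current_alt(i, rho)) for i in agents]
    return any(g < 0 for g in gains) and all(g <= 0 for g in gains)

# Sending both agents to "a" dominates the trivial assignment here:
prefs = {1: [("a", 2), "void"], 2: [("a", 2), "void"]}
print(pareto_dominates({1: "a", 2: "a"}, {1: "void", 2: "void"}, [1, 2], prefs))
```

Pareto optimality of an individually rational assignment can then be verified by checking that no other assignment dominates it.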
Example 1
Observe that there are instances of \({\textsf {o}}\hbox {}{\textsf {GASP}}\) which do not admit a core stable assignment (see Darmann 2018), while maximum individually rational assignments and Pareto optimal assignments always exist.
We will consider natural special cases of the agents’ preferences. Informally speaking, an agent has increasing preferences with respect to an activity a if she prefers to participate in a together with as many other agents as possible; an agent has decreasing preferences with respect to an activity a if she wishes to share a with as few other agents as possible. Both increasing and decreasing preferences are special cases of single-peaked preferences, in which agent i has a conception \(p_{i}(a)\) of the ideal group size in activity a; for any group size \(j<p_{i}(a)\) the agent prefers \(j+1\) agents participating in a to j, and for any group size \(j>p_{i}(a)\) the agent prefers \(j-1\) agents participating in a to j.
Definition 4
Given an instance (N, A, P) of \({\textsf {o}}\hbox {}{\textsf {GASP}}\) and an activity \(a\in A^{*}\), the preferences \(\succ _{i}\) of agent \(i\in N\) are, with respect to a, said to be

increasing if for each \(1<j\le n\) we have \((a,j)\succ _{i}(a,j-1)\);

decreasing if for each \(1<j\le n\) we have \((a,j-1)\succ _{i}(a,j)\);

single-peaked if there is a \(p_{i}(a)\in \{1,\ldots ,n\}\) such that \(1<j\le p_{i}(a)\) implies \((a,j-1)\prec _{i}(a,j)\) and \(p_{i}(a)<j\le n\) implies \((a,j-1)\succ _{i}(a,j)\).
We say that an agent has single-peaked (respectively, increasing/decreasing) preferences if her preferences are single-peaked (resp., increasing/decreasing) with respect to each activity \(a\in A^{*}\). In the instance considered in Example 1 each agent has single-peaked and, in particular, increasing preferences. In what follows, since we are interested in individually rational assignments only, for the sake of brevity alternatives ranked below \(a_{\emptyset }\) typically will be omitted in the description of specific profiles. In addition, throughout this paper we assume that each agent approves of at least one alternative (otherwise, the agent has to be assigned to the void activity in any individually rational assignment), and, in order to exclude trivial instances, that \(n\ge 2\) holds.
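These preference shapes are easy to test when an agent's full strict order over \((a,1),\ldots ,(a,n)\) is available (recall that specific profiles later in the paper omit disapproved alternatives, so the sketch below applies to complete votes only). A Python sketch with illustrative names of our own:

```python
def ranks(pref, a, n):
    """Positions of (a, 1), ..., (a, n) in a full strict ranking (smaller = better)."""
    return [pref.index((a, j)) for j in range(1, n + 1)]

def is_increasing(pref, a, n):
    r = ranks(pref, a, n)
    return all(r[j] < r[j - 1] for j in range(1, n))  # (a, j+1) beats (a, j)

def is_decreasing(pref, a, n):
    r = ranks(pref, a, n)
    return all(r[j - 1] < r[j] for j in range(1, n))  # (a, j) beats (a, j+1)

def is_single_peaked(pref, a, n):
    r = ranks(pref, a, n)
    p = r.index(min(r))  # 0-based position of the peak p_i(a)
    return (all(r[j] > r[j + 1] for j in range(p)) and
            all(r[j] < r[j + 1] for j in range(p, n - 1)))

# Peak at group size 2: (a,2) beats (a,3) beats (a,1); single-peaked, not increasing.
pref = [("a", 2), ("a", 3), ("a", 1), "void"]
print(is_single_peaked(pref, "a", 3), is_increasing(pref, "a", 3))  # True False
```

Increasing preferences correspond to the special case \(p_{i}(a)=n\) and decreasing preferences to \(p_{i}(a)=1\), which the checks above reflect.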
3 Strategyproofness and preference extensions
We study strategyproofness in connection with maximum individually rational, core stable, and Pareto optimal assignments respectively. Particular focus is laid on aggregation correspondences which output the set of all maximum individually rational assignments (respectively, core stable/Pareto optimal assignments) for a given preference profile. In contrast, aggregation functions output exactly one specific assignment for each preference profile.
Given a set N of agents and a set A of activities, where \(\mathcal {P}(N,A)\) denotes the set of all preference profiles over the set of alternatives, let \(\alpha (N,A):=\{\pi \mid \pi :N\rightarrow A\}\) denote the set of all assignments. A function \({f}:\mathcal {P}(N,A)\rightarrow \alpha (N,A)\) is called an aggregation function. A mapping \({C}:\mathcal {P}(N,A)\rightarrow 2^{\alpha (N,A)}{\setminus }\emptyset \) is called an aggregation correspondence.
With abuse of notation, we say that an aggregation correspondence is individually rational if it outputs individually rational assignments only; i.e., aggregation correspondence C is individually rational if for each preference profile P and each \(\pi \in C(P)\), \(\pi \) is an individually rational assignment in the respective instance of \({\textsf {o}}\hbox {}{\textsf {GASP}}\). We study three particular members of the family of individually rational aggregation correspondences, defined below.
The aggregation correspondence \({C}_{\text {mir}}\) such that, for each instance \(\mathcal {I}=(N,A,P)\) of \({\textsf {o}}\hbox {}{\textsf {GASP}}\), \({C}_{\text {mir}}(P)\) corresponds to the set of all maximum individually rational assignments in \(\mathcal {I}\), is called mir-aggregation correspondence. Analogously, the po-aggregation correspondence \({C}_{\mathrm{po}}\) outputs, for a given preference profile P, the set of Pareto optimal assignments of the respective instance of \({\textsf {o}}\hbox {}{\textsf {GASP}}\). An aggregation function \({f}\) is called mir-aggregation function (resp. po-aggregation function) if, for each instance \(\mathcal {I}=(N,A,P)\), \({f}(P)\) is a single maximum individually rational (Pareto optimal) assignment in \(\mathcal {I}\).
The aggregation correspondence \({C}_{\mathrm{cs}}\) such that, for each instance \(\mathcal {I}=(N,A,P)\) of \({\textsf {o}}\hbox {}{\textsf {GASP}}\), \({C}_{\mathrm{cs}}(P)\) is the core of \(\mathcal {I}\) if the core is nonempty, and \(\{\pi _{\emptyset }\}\) otherwise, is called cs-aggregation correspondence. An aggregation function \({f}\) is called cs-aggregation function if, for each instance \(\mathcal {I}=(N,A,P)\), \({f}(P)\) is in the core of \(\mathcal {I}\) if the core is nonempty, and \({f}(P)=\pi _{\emptyset }\) otherwise. Recall that in an instance of \({\textsf {o}}\hbox {}{\textsf {GASP}}\) a core stable assignment might not exist; in such a case, the cs-aggregation function (correspondence) outputs (the singleton set made up of) the trivial assignment.^{3} Observe that any mir-/po-/cs-aggregation function is a special case of an ir-aggregation function which, for each given preference profile, outputs a single individually rational assignment in the respective instance.
Let us now turn to strategyproofness of aggregation functions and aggregation correspondences.
Definition 5
Given a preference profile \(P=(\succ _{1},\ldots ,\succ _{n})\), an aggregation function \({f}\) is manipulable at P by agent \(i\in N\) if there is a preference profile \(P^{'}\) with \(P_{N{\setminus }\{i\}}=P^{'}_{N{\setminus }\{i\}}\) such that, with \(\pi ={f}(P)\) and \(\pi ^{'}={f}(P^{'})\), we have \((\pi ^{'}(i),\pi ^{'}_{i})\succ _{i}(\pi (i),\pi _{i})\). An aggregation function is strategyproof if there is no preference profile at which it is manipulable.
Definition 6
Given a preference extension \(\varepsilon \) and a preference profile P, an aggregation correspondence \({C}\) is \(\varepsilon \)-manipulable at P by agent \(i\in N\) if there is a preference profile \(P^{'}\) with \(P_{N{\setminus }\{i\}}=P^{'}_{N{\setminus }\{i\}}\) such that \({C}(P^{'})\succ _{i}^{\varepsilon }{C}(P)\) holds. C is \(\varepsilon \)-strategyproof if there is no preference profile at which it is \(\varepsilon \)-manipulable.
Thus, if an aggregation correspondence is strategyproof then, given the sincere preferences of the other agents, no agent has an incentive to misreport her preferences. In what follows, we apply particular representatives of preference extensions to our setting. These are the intuitive maxi–max and maxi–min extensions (see Moretti and Tsoukiàs 2012), and the well-known Gärdenfors extension (Gärdenfors 1976).
Let (N, A, P) be an instance of \({\textsf {o}}\hbox {}{\textsf {GASP}}\). For \(X\in 2^{\alpha (N,A)}{\setminus }\emptyset \), an assignment \(\pi \in X\) is a max-assignment for i in X, if \((\pi (i),\pi _{i})\succ _{i}(\tilde{\pi }(i),\tilde{\pi }_{i})\) holds for all \(\tilde{\pi }\in X\) with \((\pi (i),\pi _{i})\not =(\tilde{\pi }(i),\tilde{\pi }_{i})\); we denote the set of i’s max-assignments in X by \(\max _{i}X\). Analogously, \(\pi \in X\) is a min-assignment for i in X, if \((\tilde{\pi }(i),\tilde{\pi }_{i})\succ _{i}(\pi (i),\pi _{i})\) holds for all \(\tilde{\pi }\in X\) with \((\pi (i),\pi _{i})\not =(\tilde{\pi }(i),\tilde{\pi }_{i})\); the set of i’s min-assignments in X is denoted by \(\min _{i}X\).
Observe that, given agent i and set X of assignments, a max-assignment for i in X is not necessarily unique, but—due to the strict preferences of the agent over the alternatives—the alternative to which i is assigned must be the same under all max-assignments for i in X. That is, for all max-assignments \(\pi ,\tilde{\pi }\) for i in X, we have \((\pi (i),\pi _{i})=(\tilde{\pi }(i),\tilde{\pi }_{i})\). Analogously, for each agent i and set X of assignments, the alternative to which i is assigned must be the same under all min-assignments for i in X.
In the maxi–max extension, an agent considers a set X of assignments better than a set Y of assignments, if she prefers the best alternative assigned to her by an assignment in X to the best alternative assigned to her by an assignment in Y. Similarly, in the maxi–min extension, an agent considers a set X of assignments better than a set Y of assignments, if she prefers the worst alternative assigned to her by an assignment in X to the worst alternative assigned to her by an assignment in Y.
Definition 7
The maxi–max extension is defined by: for \(i\in N\) and \(X,Y\in 2^{\alpha (N,A)}{\setminus }\emptyset \), \(X\succ _{i}^{max}Y\) iff for all \(x\in \max _{i}X\) and \(y\in \max _{i}Y\) it holds that \((x(i),x_{i})\succ _{i}(y(i),y_{i})\).
Analogously, the maxi–min extension is defined by: for \(i\in N\) and \(X,Y\in 2^{\alpha (N,A)}{\setminus }\emptyset \), \(X\succ _{i}^{min}Y\) iff for all \(x\in \min _{i}X\) and \(y\in \min _{i}Y\) it holds that \((x(i),x_{i})\succ _{i}(y(i),y_{i})\).
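Since, by the observation above, all max-assignments (min-assignments) for i give i the same alternative, both extensions reduce to comparing i's best (worst) alternative across each set. A Python sketch under an illustrative encoding of our own (votes as ranked lists with "void" for \(a_{\emptyset }\), assignments as dictionaries):

```python
def current_alt(i, assign):
    """The alternative agent i receives under `assign`."""
    act = assign[i]
    if act == "void":
        return "void"
    return (act, sum(1 for a in assign.values() if a == act))

def value(pref, alt):
    """Rank in the vote (smaller = more preferred)."""
    return pref.index(alt)

def prefers_maximax(i, pref, X, Y):
    """X beats Y for i in the maxi-max sense: i's best over X beats her best over Y."""
    return (min(value(pref, current_alt(i, x)) for x in X) <
            min(value(pref, current_alt(i, y)) for y in Y))

def prefers_maximin(i, pref, X, Y):
    """X beats Y for i in the maxi-min sense: i's worst over X beats her worst over Y."""
    return (max(value(pref, current_alt(i, x)) for x in X) <
            max(value(pref, current_alt(i, y)) for y in Y))

# A singleton set versus a set that adds a worse outcome for agent 1:
pref1 = [("a", 1), "void"]
X = [{1: "a"}]
Y = [{1: "a"}, {1: "void"}]
print(prefers_maximax(1, pref1, X, Y), prefers_maximin(1, pref1, X, Y))  # False True
```

The example shows how the two extensions can disagree: the optimistic comparison sees no difference (both sets contain the best outcome), while the pessimistic one strictly prefers the smaller set.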
In our setting, according to the Gärdenfors extension (Gärdenfors 1976) an agent considers X better than Y if one of the following holds: (1) X can be “created” from Y by adding (removing) assignments, and the agent considers each added (remaining) assignment at least as good as each of the original (removed) assignments, with strict preference in at least one case; (2) otherwise, the agent considers each assignment in \(X{\setminus } Y\) at least as good as each assignment in \(Y{\setminus } X\), with strict preference in at least one case.
Definition 8
For \(i\in N\) and \(X,Y\in 2^{\alpha (N,A)}{\setminus }\emptyset \), the Gärdenfors extension is defined by: \(X\succ _{i}^{\mathcal {G}}Y\) iff one of the following conditions holds:
 1.
\(X\subset Y\), and there are \(x\in X\) and \(y\in Y{\setminus } X\) with \((x(i),x_{i})\succ _{i}(y(i),y_{i})\), and there is no \(x\in X\) and \(y\in Y{\setminus } X\) with \((y(i),y_{i})\succ _{i}(x(i),x_{i})\).
 2.
\(Y\subset X\), and there are \(x\in X{\setminus } Y\) and \(y\in Y\) with \((x(i),x_{i})\succ _{i}(y(i),y_{i})\), and there is no \(x\in X{\setminus } Y\) and \(y\in Y\) with \((y(i),y_{i})\succ _{i}(x(i),x_{i})\).
 3.
neither \(X\subset Y\) nor \(Y\subset X\) nor \(X=Y\), and there are \(x\in X{\setminus } Y\) and \(y\in Y{\setminus } X\) with \((x(i),x_{i})\succ _{i}(y(i),y_{i})\), but there is no \(x\in X{\setminus } Y\) and \(y\in Y{\setminus } X\) with \((y(i),y_{i})\succ _{i}(x(i),x_{i})\).
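The three cases of Definition 8 can be checked mechanically by comparing, for agent i, the alternatives arising from the appropriate set differences. The following Python sketch uses an illustrative encoding of our own (votes as ranked lists with "void" for \(a_{\emptyset }\), assignments as dictionaries, frozensets to make assignments comparable):

```python
def current_alt(i, assign):
    """The alternative agent i receives under `assign`."""
    act = assign[i]
    if act == "void":
        return "void"
    return (act, sum(1 for a in assign.values() if a == act))

def value(pref, alt):
    """Rank in the vote (smaller = more preferred)."""
    return pref.index(alt)

def gardenfors_prefers(i, pref, X, Y):
    """X beats Y for i per the three cases of Definition 8."""
    fx = {frozenset(x.items()) for x in X}  # hashable copies of assignments
    fy = {frozenset(y.items()) for y in Y}
    if fx == fy:
        return False
    vx = [value(pref, current_alt(i, dict(s))) for s in fx - fy]
    vy = [value(pref, current_alt(i, dict(s))) for s in fy - fx]
    if fx < fy:    # case 1: X subset of Y -- compare all of X with the removed Y \ X
        vx = [value(pref, current_alt(i, dict(s))) for s in fx]
    elif fy < fx:  # case 2: Y subset of X -- compare the added X \ Y with all of Y
        vy = [value(pref, current_alt(i, dict(s))) for s in fy]
    strictly_better = any(a < b for a in vx for b in vy)
    never_worse = not any(b < a for a in vx for b in vy)
    return strictly_better and never_worse

# Dropping an assignment the agent dislikes is a Gardenfors improvement:
pref1 = [("a", 1), "void"]
X = [{1: "a"}]
Y = [{1: "a"}, {1: "void"}]
print(gardenfors_prefers(1, pref1, X, Y))  # True
```

Here case 1 applies: X is a proper subset of Y and the removed assignment gives agent 1 the void activity, which she ranks below (a, 1).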
Let us begin our study with two basic observations for the mir-aggregation correspondence \({C}_{\text {mir}}\).
3.1 Aggregation correspondence \({C}_{\text {mir}}\): basic observations
In what follows, for instance \(\mathcal {I}=(N,A,P)\) and preference profile \(P^{'}\)\(=(\succ ^{'}_{1},\ldots ,\succ ^{'}_{n})\) with \(P_{N{\setminus }\{i\}}=P^{'}_{N{\setminus }\{i\}}\), let \(S^{'}_{i}\) denote the approval set of agent i with respect to \(\succ ^{'}_{i}\); also, let \(\mathcal {I}^{'}=(N,A,P^{'})\).
The first observation states that—for each of the preference extensions \(\varepsilon \) considered—we can w.l.o.g. assume that an agent i who manipulates \({C}_{\text {mir}}\) at profile P does so by reducing her approval set.
Lemma 1
Let \(\mathcal {I}=(N,A,P)\) be an instance of \({\textsf {o}}\hbox {}{\textsf {GASP}}\) and \(\varepsilon \in \,\){maxi–min, maxi–max, Gärdenfors}. If \({C}_{\text {mir}}\) is \(\varepsilon \)-manipulable at P by agent i then there is an instance \(\mathcal {I}^{'}=(N,A,P^{'})\) with \(P_{N{\setminus }\{i\}}=P^{'}_{N{\setminus }\{i\}}\) and \(S^{'}_{i}\subset S_{i}\) such that \({C}_{\text {mir}}(P^{'})\succ _{i}^{\varepsilon }{C}_{\text {mir}}(P)\) holds.
Proof
Assume that \({C}_{\text {mir}}\) is \(\varepsilon \)-manipulable at profile P by agent i, let \(P^{'}\) be the respective manipulated preference profile and \(\mathcal {I}^{'}=(N,A,P^{'})\). Assume that i ranks some disapproved alternatives above \(a_{\emptyset }\) in \(\succ _{i}^{'}\), i.e., has \((a,k)\succ _{i}^{'}a_{\emptyset }\) for some \((a,k)\in W^{*}{\setminus } S_{i}\). For any such alternative (a, k), note that there is no individually rational assignment in \(\mathcal {I}\) that assigns i to a such that a total of k agents is assigned to a, whereas there might be maximum individually rational assignments in \(\mathcal {I}^{'}\) that do so. By the choice of \(\varepsilon \) and by \({C}_{\text {mir}}(P^{'})\succ _{i}^{\varepsilon }{C}_{\text {mir}}(P)\), it follows that, given \(\mathcal {I}^{'}\), removing all such \((a,k)\in W^{*}{\setminus } S_{i}\) from \(S_{i}^{'}\) (ceteris paribus) results in an instance \(\mathcal {I}^{''}=(N,A,P^{''})\) with \({C}_{\text {mir}}(P^{''})\succ _{i}^{\varepsilon }{C}_{\text {mir}}(P)\). Thus, for the mir-aggregation correspondence we can assume that \(S_{i}^{'}\subset S_{i}\) holds. \(\square \)
Observe that Lemma 1 immediately implies \(\#(\pi ^{'})\le \#(\pi )\) for \(\pi ^{'}\in {C}_{\text {mir}}(P^{'})\), \(\pi \in {C}_{\text {mir}}(P)\). In addition, we have either \({C}_{\text {mir}}(P^{'})\subseteq {C}_{\text {mir}}(P)\) (if \(\#(\pi ^{'})=\#(\pi )\)) or \({C}_{\text {mir}}(P^{'})\cap {C}_{\text {mir}}(P)=\emptyset \) (if \(\#(\pi ^{'})<\#(\pi )\)). As a consequence, for \(C_{\text {mir}}\) we can note that (with \(X={C}_{\text {mir}}(P^{'})\), \(Y={C}_{\text {mir}}(P)\)) the second condition of Gärdenfors manipulability is redundant.
As a second observation, it follows that for \({C}_{\text {mir}}\) the concepts of Gärdenfors strategyproofness and maxi–min strategyproofness coincide (Lemma 2 below). In general—and, in particular, for \({C}_{\mathrm{cs}}\) and \({C}_{\mathrm{po}}\)—this is not the case; we provide an example for \({C}_{\mathrm{cs}}\) and refer to Theorems 14 and 15 for \({C}_{\mathrm{po}}\).
Example 2
Let \(N=\{1,2,3\}\) and \(A^{*}=\{a,b\}\), with P given by \(\succ _{1}:(a,2)\succ _{1}(b,2)\succ _{1}a_{\emptyset }\), and \(\succ _{i}:(b,2)\succ _{i}(a,2)\succ _{i}a_{\emptyset }\) for \(i\in \{2,3\}\). In (N, A, P), the unique core stable assignment is assignment \(\pi \) with \(\pi (1)=a_{\emptyset }\), \(\pi (2)=\pi (3)=b\). Observe that \({C}_{\mathrm{cs}}\) is not maxi–min manipulable at profile P: Clearly, agents 2 and 3 cannot maxi–min manipulate since \(\pi \) assigns them to their topranked alternative. Also, agent 1 cannot maxi–min manipulate at P since \(\pi \) is core stable in any manipulated profile \(P^{'}\) with \(P^{'}_{\{2,3\}}=P_{\{2,3\}}\) unless agent 1 approves of (a, 1) in \(P^{'}\); this, however, would result in an assignment \(\pi ^{'}\) which is core stable in \(P^{'}\) with \(a_{\emptyset }\succ _{1}(a,1)\), which would hence make agent 1 worse off.
On the other hand, \({C}_{\mathrm{cs}}\) is Gärdenfors manipulable at profile P by agent 1. Let \(P^{'}\) be given by \(P^{'}_{\{2,3\}}=P_{\{2,3\}}\) and \(\succ ^{'}_{1}:(b,2)\succ ^{'}_{1}a_{\emptyset }\). Then, \({C}_{\mathrm{cs}}(P^{'})=\{\pi ,\lambda ,\mu \}\), with \(\lambda (1)=\lambda (2)=b\), \(\lambda (3)=a_{\emptyset }\), and \(\mu (1)=\mu (3)=b\), \(\mu (2)=a_{\emptyset }\). Thus, \({C}_{\mathrm{cs}}(P^{'})\supset {C}_{\mathrm{cs}}(P)\). With \({C}_{\mathrm{cs}}(P^{'}){\setminus }{C}_{\mathrm{cs}}(P)=\{\lambda ,\mu \}\) and \((b,2)\succ _{1}a_{\emptyset }\) it follows that \({C}_{\mathrm{cs}}(P^{'})\succ _{1}^{\mathcal {G}}{C}_{\mathrm{cs}}(P)\) holds.
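The core stability reasoning in examples like this one can be verified by brute force. The following sketch is illustrative only; the helper names and the preference encoding are our assumptions, not part of the formal model. Here prefs[i] ranks all alternatives (a, k) and the void alternative 'void', best first, with disapproved alternatives placed below 'void'; a blocking coalition is a nonempty set E containing all current participants of an activity a whose members all strictly prefer (a, |E|).

```python
from itertools import combinations

def core_stable(assign, prefs, agents, acts):
    """Brute-force core-stability check: the assignment must be individually
    rational, and no nonempty coalition E containing all current participants
    of some activity a may strictly prefer (a, |E|) to its assigned
    alternatives. assign maps each agent to an activity or None (void)."""
    def rank(i, alt):
        return prefs[i].index(alt)
    def current(i):
        a = assign[i]
        return 'void' if a is None else (a, sum(1 for j in agents if assign[j] == a))
    # individual rationality: no agent ranked below the void alternative
    if any(rank(i, current(i)) > rank(i, 'void') for i in agents):
        return False
    for a in acts:
        members = {i for i in agents if assign[i] == a}
        for r in range(1, len(agents) + 1):
            for E in map(set, combinations(agents, r)):
                if members <= E and E != members and \
                   all(rank(i, (a, len(E))) < rank(i, current(i)) for i in E):
                    return False
    return True

# Example 2: disapproved alternatives are ranked below 'void'
tail = [('a', 1), ('a', 3), ('b', 1), ('b', 3)]
prefs = {1: [('a', 2), ('b', 2), 'void'] + tail,
         2: [('b', 2), ('a', 2), 'void'] + tail,
         3: [('b', 2), ('a', 2), 'void'] + tail}
print(core_stable({1: None, 2: 'b', 3: 'b'}, prefs, [1, 2, 3], ['a', 'b']))  # True
```

Applied to Example 2, the check confirms that \(\pi \) is core stable, while, e.g., additionally assigning agent 1 to a fails individual rationality.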
Lemma 2
Given instance \(\mathcal {I}=(N,A,P)\) of \({\textsf {o}}\hbox {-}{\textsf {GASP}}\), \({C}_{\text {mir}}\) is Gärdenfors manipulable at P if and only if \({C}_{\text {mir}}\) is maxi–min manipulable at P.
Proof
“\(\Rightarrow \)”: Assume \({C}_{\text {mir}}\) is Gärdenfors manipulable at profile P by agent i; let \(\mathcal {I}^{'}=(N,A,P^{'})\) be the respective manipulated instance. With Lemma 1 and the subsequent remarks we can assume that we either have \({C}_{\text {mir}}(P^{'})\subseteq {C}_{\text {mir}}(P)\) or \({C}_{\text {mir}}(P^{'})\cap {C}_{\text {mir}}(P)=\emptyset \).
Case I: \({C}_{\text {mir}}(P^{'})\subseteq {C}_{\text {mir}}(P)\). Clearly, manipulability yields \({C}_{\text {mir}}(P^{'})\subset {C}_{\text {mir}}(P)\). Also, Gärdenfors manipulability implies (i) \((v(i),v_{i})\succ _{i}(w(i),w_{i})\) or (ii) \((v(i),v_{i})=(w(i),w_{i})\) for \(v\in \min _{i}{C}_{\text {mir}}(P^{'})\), \(w\in \max _{i}{C}_{\text {mir}}(P){\setminus }{C}_{\text {mir}}(P^{'})\). Therewith \(v(i)\not =a_{\emptyset }\) follows, since otherwise (i) contradicts the individual rationality of w, and (ii) implies that \(\pi (i)=a_{\emptyset }\) holds for all \(\pi \in {C}_{\text {mir}}(P){\setminus }{C}_{\text {mir}}(P^{'})\), which, in turn, would imply \({C}_{\text {mir}}(P){\setminus }{C}_{\text {mir}}(P^{'})=\emptyset \) (and thus contradict \({C}_{\text {mir}}(P^{'})\subset {C}_{\text {mir}}(P)\)), because any individually rational assignment in \(\mathcal {I}\) that assigns i to \(a_{\emptyset }\) is also individually rational in \(\mathcal {I}^{'}\).
In addition, due to Gärdenfors manipulability we have \((x(i),x_{i})\succ _{i}(y(i),y_{i})\) for \(x\in \max _{i}{C}_{\text {mir}}(P^{'})\), \(y\in \min _{i}({C}_{\text {mir}}(P){\setminus }{C}_{\text {mir}}(P^{'}))\). Thus, for the profile \(P^{''}\) which results from \(P^{'}\) by agent i disapproving of all alternatives ranked below \((x(i),x_{i})\), we get \({C}_{\text {mir}}(P^{''})\subseteq {C}_{\text {mir}}(P^{'})\). In particular, for all \(z\in {C}_{\text {mir}}(P^{''})\) we have \((z(i),z_{i})=(x(i),x_{i})\), because any assignment \(\pi ^{''}\in {C}_{\text {mir}}(P^{''})\) is also contained in \({C}_{\text {mir}}(P^{'})\). Therewith, \({C}_{\text {mir}}\) is maxi–min manipulable at profile P by agent i.
Case II: \({C}_{\text {mir}}(P^{'})\cap {C}_{\text {mir}}(P)=\emptyset \). Gärdenfors manipulability implies (i) \((v(i),v_{i})\succ _{i}(w(i),w_{i})\) or (ii) \((v(i),v_{i})=(w(i),w_{i})\) for \(v\in \min _{i}{C}_{\text {mir}}(P^{'})\), \(w\in \max _{i}{C}_{\text {mir}}(P)\). Analogously to Case I, \(\pi ^{'}(i)\not =a_{\emptyset }\) for all \(\pi ^{'}\in {C}_{\text {mir}}(P^{'})\) follows [here, (ii) contradicts \({C}_{\text {mir}}(P')\cap {C}_{\text {mir}}(P)=\emptyset \)]. In addition, as above, the instance \(\mathcal {I}''\) defined in Case I yields maxi–min manipulability of \({C}_{\text {mir}}\).
“\(\Leftarrow \)”: Assume \({C}_{\text {mir}}\) is maxi–min manipulable at profile P by agent i, and let \(P'\) be the respective manipulated profile. Case I: For all \(\pi ,\lambda \in {C}_{\text {mir}}(P)\) we have \((\pi (i),\pi _{i})=(\lambda (i),\lambda _{i})\). Then, for \(\pi \in {C}_{\text {mir}}(P)\), maxi–min manipulability implies that for \(\pi '\in \min _{i}{C}_{\text {mir}}(P')\) we have \((\pi '(i),\pi '_{i})\succ _{i}(\pi (i),\pi _{i})\), and thus \((\mu (i),\mu _{i})\succ _{i}(\pi (i),\pi _{i})\) for all \(\mu \in {C}_{\text {mir}}(P')\). Clearly, this implies Gärdenfors manipulability.
Case II: There are \(\pi ,\lambda \in {C}_{\text {mir}}(P)\) with \((\pi (i),\pi _{i})\not =(\lambda (i),\lambda _{i})\). Now, construct profile \(P^{'''}=(\succ _{1}^{'''},\ldots ,\succ _{n}^{'''})\) from profile P by agent i disapproving of all alternatives ranked below \((w(i),w_{i})\), where \(w\in \max _{i}{C}_{\text {mir}}(P)\). Then, \({C}_{\text {mir}}(P''')=\max _{i}{C}_{\text {mir}}(P)\) follows. But this implies \({C}_{\text {mir}}(P''')\succ _{i}^{\mathcal {G}}{C}_{\text {mir}}(P)\), i.e., \({C}_{\text {mir}}\) is Gärdenfors manipulable by agent i at profile P.\(\square \)
For the preference extensions introduced above, we now turn to the manipulability of the considered solution concepts with respect to the number of activities involved.
4 Manipulability in \({\textsf {o}}\hbox {-}{\textsf {GASP}}\) with a single activity
In the case of only one activity a, the size of a maximum individually rational assignment is given by the maximum number k for which (a, k) is approved by at least k agents; i.e., for \(\pi \in {C}_{\text {mir}}(P)\) we have \(\#(\pi )=\max \{k\in \mathbb {N}:|\{i\in N\text { with }(a,k)\in S_{i}\}|\ge k\}\).
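This computation can be sketched directly by brute force. The helper below is illustrative and not part of the paper's formalism; approvals[i] stands for the set of group sizes k with \((a,k)\in S_{i}\).

```python
def max_ir_size(approvals):
    """Size #(pi) of a maximum individually rational assignment for a single
    activity a: the largest k such that at least k agents approve (a, k).
    approvals[i] is the set of group sizes k with (a, k) in S_i."""
    n = len(approvals)
    return max((k for k in range(1, n + 1)
                if sum(1 for s in approvals.values() if k in s) >= k),
               default=0)

# Example 3: both agents approve (a, 1) and (a, 2)
print(max_ir_size({1: {1, 2}, 2: {1, 2}}))  # 2
```

With the approval sets of Example 3, the maximum size is 2: both agents approve of (a, 2).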
Assume that all agents have increasing preferences. Then, since each agent approves of at least one alternative (which is different from \(a_{\emptyset }\)), each agent approves of (a, n) and hence \(\#(\pi )=n\) for \(\pi \in {C}_{\text {mir}}(P)\) holds. In particular, the unique maximum individually rational assignment is to assign each agent to a. Observe that this assignment is also the unique core stable and the unique Pareto optimal assignment: the whole set N of agents would prefer to jointly participate in a over any other assignment. Thus, by the nature of increasing preferences—each agent prefers (a, n) over any other alternative—no agent has an incentive to manipulate.
In the remainder of this section, we will hence consider the aspect of strategic manipulation involved in maximum individually rational, core stable, and Pareto optimal assignments respectively in the single-activity case, with focus on the special cases of single-peaked and decreasing preferences.
4.1 Maximum individually rational assignments
Considering maximum individually rational assignments in the case of a single activity, when all agents have decreasing preferences it turns out that the mir-aggregation correspondence \({C}_{\text {mir}}\) is maxi–min and Gärdenfors strategyproof (but maxi–max manipulable). On the negative side, that strategyproofness result for the mir-aggregation correspondence \({C}_{\text {mir}}\) cannot be extended from decreasing to single-peaked preferences; in particular, in the latter case \({C}_{\text {mir}}\) is \(\varepsilon \)-manipulable for each preference extension \(\varepsilon \) (see Theorem 2). We begin with a short example which will be useful for the proofs of Theorem 1 and Proposition 1.
Example 3
Instance \(\mathcal {I}=(N,A,P)\) of \({\textsf {o}}\hbox {-}{\textsf {GASP}}\) is given by \(N=\{1,2\}\) and \(A^{*}=\{a\}\), and \(\succ _{i}:(a,1)\succ _{i}(a,2)\succ _{i}a_{\emptyset }\) for \(i\in N\); hence, \(S_{1}=S_{2}=\{(a,1),(a,2)\}\). The unique maximum individually rational assignment in \(\mathcal {I}\) is assignment \(\pi \) with \(\pi (1)=\pi (2)=a\).
In instance \(\mathcal {I}'=(N,A,P')\), agent 1 reports \(\succ _{1}':(a,1)\succ _{1}^{'}a_{\emptyset }\), while \(\succ _{2}^{'}=\succ _{2}\). Then, \({C}_{\text {mir}}(P')=\{\lambda ,\mu \}\), where \(\lambda (1)=a\), \(\lambda (2)=a_{\emptyset }\) and \(\mu (1)=a_{\emptyset }\), \(\mu (2)=a\).
Theorem 1
When all agents have decreasing preferences and \(A^{*}\) consists of one activity, \({C}_{\text {mir}}\) is
1. maxi–min and Gärdenfors strategyproof, and
2. maxi–max manipulable.
Proof
Maxi–max manipulability Consider the instances \(\mathcal {I},\mathcal {I}'\) of Example 3. Comparing the max-assignments of the instances yields \((\lambda (1),1)\succ _{1}(\pi (1),2)\) because \((a,1)\succ _{1}(a,2)\) holds. Hence, \({C}_{\text {mir}}\) is maxi–max manipulable by agent 1 at profile P.
Maxi–min strategyproofness Assume that \({C}_{\text {mir}}\) is maxi–min manipulable, i.e., there are instances \(\mathcal {I}=(N,A,P)\), \(\mathcal {I}'=(N,A,P')\) with \(A^{*}=\{a\}\) such that for some \(i\in N\) we have \(P'_{N{\setminus }\{i\}}=P_{N{\setminus }\{i\}}\), agent i’s true preferences are \(\succ _{i}\), but she is better off with misreporting her true preferences in terms of \(\succ _{i}^{'}\). By Lemma 1 we can assume that agent i misreports by ranking some approved alternatives below \(a_{\emptyset }\), i.e., removing alternatives from her approval set. Let \(\pi \in \min _{i}{C}_{\text {mir}}(P)\). We distinguish two cases.
Case I: \(\pi (i)=a_{\emptyset }\). Clearly, due to \(P'_{N{\setminus }\{i\}}=P_{N{\setminus }\{i\}}\) it follows that \(\pi \) is individually rational, and hence maximum individually rational, also in instance \(\mathcal {I}'\). Since \(a_{\emptyset }\) is the worst possible alternative for agent i in any individually rational assignment, we thus cannot get \({C}_{\text {mir}}(P')\succ _{i}^{min}{C}_{\text {mir}}(P)\).
Case II: \(\pi (i)=a\). Let \(k:=\#(\pi )\). Since \(\pi \) is individually rational, \((a,k)\in S_{i}\) holds. If \((a,k)\in S'_{i}\) it follows that \(\pi \in {C}_{\text {mir}}(P')\) and hence \({C}_{\text {mir}}(P')\succ _{i}^{min}{C}_{\text {mir}}(P)\) cannot hold. Otherwise, let \(\ell :=\#(\pi ')\) for a maximum individually rational assignment \(\pi '\) in instance \(\mathcal {I}'\). By assumption, each agent approves of at least one alternative (see also end of Sect. 2); thus \(\ell \ge 1\) must hold. If \(\ell =k\), by \((a,k)\not \in S'_{i}\) we can conclude \(\pi '(i)=a_{\emptyset }\); therefore, \({C}_{\text {mir}}(P)\succ _{i}^{min}{C}_{\text {mir}}(P')\) holds. If \(\ell \not =k\), by Lemma 1 we can conclude \(\ell <k\). Now, in the case \((a,\ell )\not \in S'_{i}\) clearly \(\pi '(i)=a_{\emptyset }\) follows, again implying \({C}_{\text {mir}}(P)\succ _{i}^{min}{C}_{\text {mir}}(P')\). Recall that by decreasing preferences all \(j\in N\) with \((a,k)\in S_{j}\) have \((a,\ell )\in S_{j}\); thus, if \((a,\ell )\in S_{i}'\) then more than \(\ell \) agents approve of \((a,\ell )\) in \(\mathcal {I}'\). Hence, in \(\mathcal {I}'\) there is a maximum individually rational assignment \(\lambda \) with \(\lambda (i)=a_{\emptyset }\); again, this implies \({C}_{\text {mir}}(P)\succ _{i}^{min}{C}_{\text {mir}}(P')\). Either way, we get a contradiction to maxi–min manipulability.
Finally, with Lemma 2, Gärdenfors strategyproofness follows from maxi–min strategyproofness. \(\square \)
Unfortunately, the above strategyproofness result does not generalize to single-peaked preferences; in particular, in that more general domain strategyproofness of \({C}_{\text {mir}}\) cannot be achieved for any preference extension \(\varepsilon \).
Example 4
Let \(\mathcal {I}=(N,A,P)\) be the instance with \(N=\{1,2\}\) and \(A^{*}=\{a\}\), with \(\succ _{1}:(a,1)\succ _{1}(a,2)\succ _{1}a_{\emptyset }\) and \(\succ _{2}:(a,2)\succ _{2}a_{\emptyset }\succ _{2}(a,1)\). Note that the agents’ preferences are single-peaked. The only maximum individually rational assignment in instance \(\mathcal {I}\) is \(\pi \) with \(\pi (1)=\pi (2)=a\).
Let, in instance \(\mathcal {I}'=(N,A,P')\), \(P'\) be given by \(\succ _{2}'=\succ _{2}\) and \(\succ _{1}':(a,1)\succ _{1}^{'}a_{\emptyset }\succ _{1}^{'}(a,2)\). Thus, in \(\mathcal {I}'\) the only maximum individually rational assignment is \(\lambda \) with \(\lambda (1)=a\), \(\lambda (2)=a_{\emptyset }\).
Theorem 2
When all agents have single-peaked preferences and \(A^{*}\) consists of one activity, \({C}_{\text {mir}}\) is \(\varepsilon \)-manipulable for every preference extension \(\varepsilon \).
Proof
Consider instances \(\mathcal {I},\mathcal {I}'\) of Example 4. Agent 1’s true preference yields \((a,1)\succ _{1}(a,2)\); hence agent 1 is better off under \(\lambda \) than under \(\pi \). Therewith, for each preference extension \(\varepsilon \), \(\{\lambda \}\succ _{1}^{\varepsilon }\{\pi \}\) holds by definition. Hence, due to \({C}_{\text {mir}}(P)=\{\pi \}\) and \({C}_{\text {mir}}(P')=\{\lambda \}\) it follows that \({C}_{\text {mir}}\) is \(\varepsilon \)-manipulable for each preference extension \(\varepsilon \). \(\square \)
From the proof of the above theorem we can also conclude that any mir-aggregation function is manipulable over the domain of single-peaked preferences. This, however, holds even for the case of decreasing preferences, as the following proposition shows.
Proposition 1
When all agents have decreasing preferences and \(A^{*}\) consists of one activity, every mir-aggregation function is manipulable.
Proof
Assume, for the sake of contradiction, that \({f}\) is a strategyproof mir-aggregation function. Consider instance \(\mathcal {I}=(N,A,P)\) of Example 3, together with the assignments \(\lambda \) and \(\mu \) defined there, and write \(\mathcal {I}^{(j)}=(N,A,P^{(j)})\) for the instances induced by the profiles \(P^{(j)}\) below.
1. Consider \(P^{(1)}\) with \(\succ _{1}^{(1)}=\succ _{1}\) and \(\succ _{2}^{(1)}:(a,1)\succ _{2}^{(1)}a_{\emptyset }\). In \(\mathcal {I}^{(1)}\) there are two maximum individually rational assignments: \(\lambda \) and \(\mu \). Assume that we have \({f}(P^{(1)})=\mu \). Then, at profile P agent 2 can manipulate by ranking (a, 2) below \(a_{\emptyset }\), i.e., removing (a, 2) from \(S_{2}\), since she prefers (a, 1) to (a, 2). Therefore, we must have \({f}(P^{(1)})=\lambda \).
2. Let \(P^{(2)}\) be given by \(\succ _{1}^{(2)}:(a,1)\succ _{1}^{(2)}a_{\emptyset }\) and \(\succ _{2}^{(2)}=\succ _{2}\). Analogously to 1., \({f}(P^{(2)})=\lambda \) implies that at profile P agent 1 can manipulate by ranking (a, 2) below \(a_{\emptyset }\). Therefore, we must have \({f}(P^{(2)})=\mu \).
Finally, let \(P^{(3)}\) be given by \(\succ _{i}^{(3)}:(a,1)\succ _{i}^{(3)}a_{\emptyset }\) for each \(i\in N\); then \({C}_{\text {mir}}(P^{(3)})=\{\lambda ,\mu \}\). Case I: \({f}(P^{(3)})=\lambda \). Then at \(P^{(3)}\) agent 2 can manipulate by reporting \(\succ _{2}\) instead of \(\succ _{2}^{(3)}\), i.e., “creating” \(P^{(2)}\): this guarantees that she is the only agent assigned to a (see 2.), which she prefers to \(a_{\emptyset }\) in \(\succ _{2}^{(3)}\).
Case II: \({f}(P^{(3)})=\mu \). At \(P^{(3)}\), agent 1 is able to manipulate by reporting \(\succ _{1}\) instead of \(\succ _{1}^{(3)}\), i.e., “creating” \(P^{(1)}\). In this way, agent 1 is the only agent assigned to a (see 1.), which she prefers to \(a_{\emptyset }\) in \(\succ _{1}^{(3)}\).
Thus, either choice of \({f}(P^{(3)})\) admits an opportunity to manipulate. Therefore, there is no strategyproof mir-aggregation function in the case of one activity and decreasing preferences. \(\square \)
4.2 Core stable and Pareto optimal assignments
In the single-activity case, it turns out that core stable assignments are less prone to strategic manipulation than maximum individually rational assignments. Providing a general result, Theorem 3 states that the cs-aggregation correspondence \({C}_{\mathrm{cs}}\) is always maxi–max strategyproof. In addition, for decreasing preferences, \({C}_{\mathrm{cs}}\) is strategyproof for each of the considered preference extensions (Theorem 4). On the negative side, only maxi–max strategyproofness is provided when the preferences of the agents are single-peaked (see Theorem 5). Also, observe that in the single-activity case, an assignment is core stable if and only if it is Pareto optimal (see Lemma 3 below), which allows us to restrict our attention to core stable assignments in the remainder of this section.
Lemma 3
In an instance of \({\textsf {o}}\hbox {-}{\textsf {GASP}}\) with a single non-void activity, an assignment is core stable if and only if it is Pareto optimal.
Proof
In instance (N, A, P) of \({\textsf {o}}\hbox {-}{\textsf {GASP}}\) let \(A^{*}=\{a\}\). Assume \(\pi \) is core stable but not Pareto optimal. Then, there is an assignment \(\mu \) such that at least one agent is better off and no agent is worse off under \(\mu \) than under \(\pi \). Since there is only one non-void activity, this means that \(\pi ^{a}\subseteq \mu ^{a}\) holds, because otherwise \(\mu \) assigns an agent of \(\pi ^{a}\) to \(a_{\emptyset }\), making that agent worse off. In particular, \(\pi ^{a}\subset \mu ^{a}\) must follow by \(\mu \not =\pi \). Observe that this means \((a,|\mu ^{a}|)\succ _{i}(a,|\pi ^{a}|)\) for all \(i\in \mu ^{a}\). This, however, implies that \(\pi \) is not core stable, which contradicts our assumption.
On the other hand, assume that an assignment \(\pi \) is Pareto optimal. If it is not core stable, then there is an assignment \(\mu \) with \(\pi ^{a}\subset \mu ^{a}\) such that \((a,|\mu ^{a}|)\succ _{i}(a,|\pi ^{a}|)\) for all \(i\in \mu ^{a}\). This contradicts the Pareto optimality of \(\pi \).\(\square \)
Theorem 3
When \(A^{*}\) consists of one activity, the cs-aggregation correspondence \({C}_{\mathrm{cs}}\) is maxi–max strategyproof.
Proof
Theorem 4
When all agents have decreasing preferences and \(A^{*}\) consists of one activity, \({C}_{\mathrm{cs}}\) is
1. maxi–min strategyproof,
2. maxi–max strategyproof, and
3. Gärdenfors strategyproof.
Proof
Maxi–max strategyproofness follows from Theorem 3.
Maxi–min/Gärdenfors strategyproofness Consider an instance \(\mathcal {I}=(N,A,P)\) with \(A^{*}=\{a\}\) and each agent having decreasing preferences. Note that in any assignment, each agent assigned to a objects to other agents joining a because of decreasing preferences. Thus, the set \({C}_{\mathrm{cs}}(P)\) of core stable assignments is the set of nontrivial individually rational assignments; i.e., the set of assignments that assign exactly k agents approving of (a, k) to a, for any choice of \(k\ge 1\).
Assume some agent i tries to manipulate and let the resulting instance be denoted by \(\mathcal {I}'=(N,A,P')\). We first show that \({C}_{\mathrm{cs}}(P')\subseteq {C}_{\mathrm{cs}}(P)\) follows.
Assume there is an assignment \(\pi '\in {C}_{\mathrm{cs}}(P'){\setminus }{C}_{\mathrm{cs}}(P)\). Suppose \(\pi '(i)=a_{\emptyset }\). Then \(\pi '\) is also individually rational in instance \(\mathcal {I}\) because of \(P_{N{\setminus }\{i\}}=P'_{N{\setminus }\{i\}}\). Also, \(\pi '(j)=a\) must hold for at least some \(j\in N\) since otherwise \(\pi '\) is not core stable in instance \(\mathcal {I}'\) by decreasing preferences. Hence, \(\pi '\) is a nontrivial individually rational and hence core stable assignment in instance \(\mathcal {I}\), which contradicts our assumption.
Suppose \(\pi '(i)=a\). Then \(\pi '\) must fail individual rationality in \(\mathcal {I}\), since otherwise it would be core stable in \(\mathcal {I}\). However, this implies \(a_{\emptyset }\succ _{i}(a,\#(\pi '))\). It is not difficult to verify that this contradicts both maxi–min and Gärdenfors manipulability. Therewith, \({C}_{\mathrm{cs}}(P')\subseteq {C}_{\mathrm{cs}}(P)\) holds.
The case \({C}_{\mathrm{cs}}(P')={C}_{\mathrm{cs}}(P)\) is trivial. Assume \({C}_{\mathrm{cs}}(P')\subset {C}_{\mathrm{cs}}(P)\). By decreasing preferences, for each agent the top-ranked alternative is (a, 1). Thus, there is a core stable assignment in \(\mathcal {I}'\) that assigns an agent \(j\in N{\setminus }\{i\}\) alone to a and each other agent to \(a_{\emptyset }\). Agent i is hence assigned to \(a_{\emptyset }\) in a min-assignment for i in \({C}_{\mathrm{cs}}(P')\), which contradicts maxi–min manipulability.
For the Gärdenfors extension, consider the set \({C}_{\mathrm{cs}}(P){\setminus }{C}_{\mathrm{cs}}(P')\). Note that each assignment in \({C}_{\mathrm{cs}}(P)\) that assigns i to \(a_{\emptyset }\) must also be in \({C}_{\mathrm{cs}}(P')\) due to the decreasing preferences of the agents assigned to a. Thus, for each \(\mu \in {C}_{\mathrm{cs}}(P){\setminus }{C}_{\mathrm{cs}}(P')\), \(\mu (i)=a\) holds. \({C}_{\mathrm{cs}}(P')\succ _{i}^{\mathcal {G}}{C}_{\mathrm{cs}}(P)\) hence would require that \(a_{\emptyset }\succ _{i}(a,|\mu ^{a}|)\) holds, which contradicts the individual rationality of \(\mu \). Therewith, \({C}_{\mathrm{cs}}\) is Gärdenfors strategyproof.\(\square \)
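The characterization used at the beginning of this proof (for decreasing preferences and a single activity, the core stable assignments are exactly the nontrivial individually rational ones) can be made concrete by enumeration. The following sketch is illustrative only, with approvals[i] again denoting the set of sizes k for which agent i approves (a, k):

```python
from itertools import combinations

def core_stable_sets(approvals):
    """Single activity a, decreasing preferences: a core stable assignment
    assigns exactly k agents who all approve (a, k) to a, for some k >= 1.
    Returns the participant sets of all core stable assignments."""
    agents = list(approvals)
    result = []
    for k in range(1, len(agents) + 1):
        approving = [i for i in agents if k in approvals[i]]
        result.extend(set(c) for c in combinations(approving, k))
    return result

# Both agents approve (a, 1) and (a, 2): {1}, {2}, and {1, 2} are core stable
print(core_stable_sets({1: {1, 2}, 2: {1, 2}}))
```

Each returned set corresponds to one nontrivial individually rational, and hence core stable, assignment.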
Theorem 5
When all agents have single-peaked preferences and \(A^{*}\) consists of one activity, \({C}_{\mathrm{cs}}\) is
1. maxi–max strategyproof,
2. maxi–min manipulable, and
3. Gärdenfors manipulable.
Proof
Maxi–max strategyproofness follows from Theorem 3. Consider instances \(\mathcal {I},\mathcal {I}'\) of Example 4. In \(\mathcal {I}\), there are two core stable assignments \(\pi \) and \(\lambda \), given by \(\pi (1)=\pi (2)=a\) and \(\lambda (1)=a\), \(\lambda (2)=a_{\emptyset }\). In instance \(\mathcal {I}'\), the assignment \(\lambda \) is the unique core stable assignment. Therewith, \({C}_{\mathrm{cs}}\) is both maxi–min and Gärdenfors manipulable by agent 1 at profile P. \(\square \)
5 Manipulability in \({\textsf {o}}\hbox {-}{\textsf {GASP}}\) with at least two activities
In what follows, we consider the scenario in which at least two activities are involved in the ordinal group activity selection problem. Our results show that again the mir-aggregation correspondence \({C}_{\text {mir}}\) is less robust to manipulation than the cs-aggregation correspondence \({C}_{\mathrm{cs}}\) and the po-aggregation correspondence \({C}_{\mathrm{po}}\). However, all three considered aggregation correspondences turn out to be prone to strategic manipulation. Even worse, it turns out that each aggregation correspondence that satisfies a natural unanimity property (and hence basically any reasonable individually rational aggregation correspondence) is prone to strategic manipulation (see Sect. 5.4).
5.1 Maximum individually rational assignments
In this section we show that \({C}_{\text {mir}}\) is manipulable for each of the considered preference extensions when there are two activities, in both the special cases of decreasing and increasing preferences. In the latter case this negative result holds for each possible preference extension.
Theorem 6
Even when all agents have increasing preferences and \(A^{*}\) consists of two activities, \({C}_{\text {mir}}\) is \(\varepsilon \)-manipulable for every preference extension \(\varepsilon \).
Proof
Consider the instance (N, A, P) with \(N=\{1,2\}\), \(A^{*}=\{a,b\}\), \(\succ _{1}:(a,2)\succ _{1}(a,1)\succ _{1}(b,2)\succ _{1}a_{\emptyset }\succ _{1}(b,1)\), and \(\succ _{2}:(b,2)\succ _{2}a_{\emptyset }\succ _{2}(b,1)\succ _{2}(a,2)\succ _{2}(a,1)\). The only maximum individually rational assignment is \(\pi \) with \(\pi (1)=\pi (2)=b\). However, in instance \((N,A,P')\) with \(\succ _{1}^{'}:(a,2)\succ _{1}^{'}(a,1)\succ _{1}^{'}a_{\emptyset }\succ _{1}^{'}(b,2)\succ _{1}^{'}(b,1)\) and \(\succ _{2}^{'}=\succ _{2}\), the only maximum individually rational assignment is \(\pi '\) with \(\pi '(1)=a\) and \(\pi '(2)=a_{\emptyset }\). By \((a,1)\succ _{1}(b,2)\), we get that \({C}_{\text {mir}}\) is \(\varepsilon \)-manipulable for every preference extension \(\varepsilon \). \(\square \)
The above proof immediately yields the following proposition.
Proposition 2
Even when all agents have increasing preferences and \(A^{*}\) consists of two activities, every mir-aggregation function is manipulable.
Consider the following example, to which we will refer in the proofs of Theorems 7 and 9.
Example 5
Instance \(\mathcal {I}=(N,A,P)\) of \({\textsf {o}}\hbox {-}{\textsf {GASP}}\) with decreasing preferences is given by \(N=\{1,2\}\), \(A^{*}=\{a,b\}\), \(\succ _{1}:(b,1)\succ _{1}(a,1)\succ _{1}a_{\emptyset }\succ _{1}(b,2)\succ _{1}(a,2)\) and \(\succ _{2}:(a,1)\succ _{2}(b,1)\succ _{2}a_{\emptyset }\succ _{2}(a,2)\succ _{2}(b,2)\).
Let agent 1 report \(\succ _{1}':(b,1)\succ _{1}^{'}a_{\emptyset }\succ _{1}^{'}(a,1)\succ _{1}^{'}(a,2)\succ _{1}^{'}(b,2)\) instead of \(\succ _{1}\) (ceteris paribus).
Consider the assignments \(\pi ,\lambda \) defined by \(\pi (1)=b\), \(\pi (2)=a\) and \(\lambda (1)=a\), \(\lambda (2)=b\). Observe that \({C}_{\text {mir}}(P)={C}_{\mathrm{cs}}(P)=\{\pi ,\lambda \}\). Also, note that we have \({C}_{\text {mir}}(P')={C}_{\mathrm{cs}}(P')=\{\pi \}\). Thus, both \({C}_{\text {mir}}\) and \({C}_{\mathrm{cs}}\) are maxi–min and Gärdenfors manipulable at profile P by agent 1.
Theorem 7
Even when all agents have decreasing preferences and \(A^{*}\) consists of two activities, \({C}_{\text {mir}}\) is
1. maxi–max manipulable, and
2. maxi–min and Gärdenfors manipulable.
5.2 Core stable assignments
As a general positive result, we can show that \({C}_{\mathrm{cs}}\) is maxi–max strategyproof when all agents have decreasing preferences (irrespective of the number of activities available). Unfortunately, for the maxi–min extension and the Gärdenfors extension an analogous result does not hold.
Theorem 8
When all agents have decreasing preferences, then the cs-aggregation correspondence \({C}_{\mathrm{cs}}\) is maxi–max strategyproof.
Proof
It suffices to show that for each agent i there is a core stable assignment that, for her top-ranked alternative \((a_{i},1)\), assigns i alone to \(a_{i}\). Fix agent i. Consider assignment \(\pi \) which assigns agent i alone to \(a_{i}\), and assigns the remaining agents to activities sequentially according to some fixed order s over these agents as follows. As long as there are unassigned agents, activities to which no agent has been assigned yet and corresponding alternatives approved by some of these agents, \(\pi \) assigns the first (w.r.t. s) of these agents, say j, to \(b_{j}\), where \((b_{j},1)\) denotes j’s top-ranked approved alternative among the still available activities.
The resulting assignment \(\pi \) is core stable by construction: no agent assigned to an activity wants an additional agent to join; at the same time, no agent assigned to the void activity approves of an alternative to whose corresponding activity no agent has been assigned.\(\square \)
Observe that the above proof implies that the case of decreasing preferences admits a (single-valued) strategyproof cs-aggregation function. Consider the serial dictatorship aggregation function \(f^{\text {sd}}\) which, according to some predefined order, lets the agents sequentially pick their best-ranked approved alternative (a, k) among the yet unused activities \(a\in A^{*}\) and assigns a total of k agents including the picking agent to a (or assigns her to \(a_{\emptyset }\) if no such alternative exists). In the case of decreasing preferences, this means the agents in turn assign themselves alone to a for their best-ranked available alternative (a, 1) (or to \(a_{\emptyset }\) if no such alternative is available). Also, observe that for decreasing preferences the resulting assignment is both core stable and Pareto optimal. We hence get the following proposition.
Proposition 3
When all agents have decreasing preferences, \(f^{\text {sd}}\) is a strategyproof cs- and po-aggregation function.
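For decreasing preferences, \(f^{\text {sd}}\) reduces to the simple picking procedure described above. A minimal sketch follows; the function name and the input encoding (a ranked list of approved alternatives per agent, best first) are our assumptions.

```python
def serial_dictatorship(order, prefs, activities):
    """f^sd for decreasing preferences: following the predefined order, each
    agent assigns herself alone to the activity a of her best-ranked approved
    alternative (a, 1) among the still unused activities, or to the void
    activity (represented by None) if no such alternative is left."""
    free = set(activities)
    assignment = {}
    for i in order:
        pick = next((a for (a, k) in prefs[i] if k == 1 and a in free), None)
        assignment[i] = pick
        if pick is not None:
            free.remove(pick)
    return assignment

# Two agents, both preferring to do activity b alone, then a alone
print(serial_dictatorship(
    [1, 2],
    {1: [('b', 1), ('a', 1)], 2: [('b', 1), ('a', 1)]},
    ['a', 'b']))  # {1: 'b', 2: 'a'}
```

Agent 1 picks b first; agent 2 then takes her best-ranked alternative among the remaining activities, namely (a, 1).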
Theorem 9
When all agents have decreasing preferences and \(A^{*}\) consists of at least two activities, \({C}_{\mathrm{cs}}\) is
1. maxi–max strategyproof,
2. maxi–min manipulable, and
3. Gärdenfors manipulable.
Proof
Maxi–max strategyproofness is implied by Theorem 8, while maxi–min and Gärdenfors manipulability follow from Example 5.\(\square \)
In the remainder of this section, we first show that \({C}_{\mathrm{cs}}\) is Gärdenfors and maxi–min manipulable already for two activities and increasing preferences (see Example 6 below); on the positive side, in this case \({C}_{\mathrm{cs}}\) is maxi–max strategyproof (Theorem 10). This result, however, establishes a boundary for maxi–max strategyproofness of \({C}_{\mathrm{cs}}\): maxi–max strategyproofness can be achieved neither in the two-activity case with single-peaked preferences (Theorem 11) nor in the case of increasing preferences and three activities (Theorem 12).
Example 6
Theorem 10
When all agents have increasing preferences and \(A^{*}\) consists of two activities, \({C}_{\mathrm{cs}}\) is
1. maxi–max strategyproof,
2. maxi–min manipulable, and
3. Gärdenfors manipulable.
Proof
Maxi–min/Gärdenfors manipulability follows from Example 6.
Case 1: In \(\mathcal {I}\), there is no set of agents containing \(\pi ^{a}\) that wants to deviate to a. Since \(\pi \) is not core stable in \(\mathcal {I}\), there hence must be a set of agents containing \(\pi ^{b}\) that wants to deviate to b; i.e., there is a nonempty set \(D\supset \pi ^{b}\) such that \((b,|D|)\succ _{j}(\pi (j),\pi _{j})\) holds for each \(j\in D\). Note that \(i\in D\) must hold by core stability of \(\pi \) in instance \(\mathcal {I}'\). In the remainder of Case 1 we consider the original instance \(\mathcal {I}\). Starting from \(\pi \), the idea is to construct another assignment which augments the set of agents assigned to b and from which again no set of agents wants to deviate to a. Stepwise application of this idea finally leads to an assignment from which no set of agents wants to deviate at all but which makes agent i better off than under \(\pi \); this contradicts the choice of \(\pi \) and hence maxi–max manipulability.
Next, we show that in assignment \(\mu \) there is no set of agents including \(\mu ^{a}\) that wishes to deviate to a. For the sake of contradiction, assume that, in assignment \(\mu \), there is a set \(R\supset \mu ^{a}\) of agents that prefers \((a,|R|)\) over the alternative assigned under \(\mu \). Observe that \(R\subseteq \pi ^{a}\) cannot hold: each \(j\in \pi ^{a}\cap \mu ^{b}\) prefers \((b,|\mu ^{b}|)\) over (a, k) and thus over \((a,|R|)\); hence \(R\subseteq \pi ^{a}\) implies \(R\subseteq (\pi ^{a}{\setminus }\mu ^{b})\), which is ruled out by the choice of F.
Therefore, R must contain some agents of the set \(\pi ^{b}\cup \pi ^{a_{\emptyset }}\). Observe that \(|R|=|R\cap \pi ^{a}|+|R\cap \pi ^{b}|+|R\cap \pi ^{a_{\emptyset }}|\) holds because the sets \(\pi ^{a},\pi ^{b},\pi ^{a_{\emptyset }}\) form a partition of the set N of agents. Recall that \(k\ge |R\cap \pi ^{a}|\) holds and no agent of \(\pi ^{b}\cup \pi ^{a_{\emptyset }}\) is better off with \(\pi \) than with \(\mu \). Hence, the fact that \(j\in \pi ^{b}\cup \pi ^{a_{\emptyset }}\) prefers \((a,|R|)\) over \((\mu (j),\mu _{j})\) implies that j prefers \((a,k+|R\cap \pi ^{b}|+|R\cap \pi ^{a_{\emptyset }}|)\) over \((\pi (j),\pi _{j})\), which, with the deviating set of agents \(E=\pi ^{a}\cup (R\cap (\pi ^{b}\cup \pi ^{a_{\emptyset }}))\), contradicts our assumption in Case 1.
Thus, also in assignment \(\mu \) there is no coalition that wishes to deviate to a in instance \(\mathcal {I}\). Recall that agent i is assigned to b under \(\mu \) and we have \((b,|\mu ^{b}|)\succ _{i}(a,k)\). If \(\mu \) is core stable in \(\mathcal {I}\), we thus have a contradiction to maxi–max manipulability. Hence, again there must be a coalition that wants to deviate to b. Arguing for assignment \(\mu \) in an analogous manner as for assignment \(\pi \), we end up with an assignment \(\gamma \) with \(\gamma ^{b}\supset \mu ^{b}\) and \(\gamma ^{a}\subseteq \mu ^{a}\) from which no coalition of agents wishes to deviate to a. By increasing preferences and \(i\in \mu ^{b}\), for agent i we get \((b,|\gamma ^{b}|)\succ _{i}(b,|\mu ^{b}|)\). Since the number of agents assigned to b strictly grows in each step, repeating this argument we finally must end up with an assignment under which no coalition of agents wishes to deviate at all. Thus, in instance \(\mathcal {I}\) there is a core stable assignment \(\eta \) which assigns agent i to activity b such that \((b,|\eta ^{b}|)\succ _{i}(a,k)\) holds. This contradicts maxi–max manipulability.
Case 2: In \(\mathcal {I}\), there is a set of agents containing \(\pi ^{a}\) that wants to deviate to a. In this case, starting from \(\pi \) we construct an assignment \({\rho }\) with \({\rho }^a \supset \pi ^a\) and \({\rho }^b \subseteq \pi ^b\) from which no set of agents wants to deviate to a. Under \({\rho }\), some set of agents might wish to deviate to b, but that set has to contain i; making use of Case 1 then concludes the proof.
Consider assignment \(\pi \). In this case, in instance \(\mathcal {I}\) there is a nonempty set \(E\supset \pi ^{a}\) such that \((a,|E|)\succ _{j}(\pi (j),\pi _{j})\) holds for each \(j\in E\). We proceed as follows. Construct assignment \(\delta \) by assigning each agent of E to a; from the remaining agents (i.e., the agents of \(N{\setminus } E\)) assign the largest set H of agents to b that satisfies \((b,|H|)\in S_{h}\) for each \(h\in H\). If under \(\delta \) there is no set of agents including \(\delta ^a\) that wishes to deviate to a, set \({\rho }=\delta \). Otherwise, we stepwise derive our desired assignment \({\rho }\): in the next step we repeat the above procedure for assignment \(\delta \) instead of \(\pi \), and so on, until we end up with an assignment \({\rho }\) under which no set of agents (including \({\rho }^a\)) wishes to deviate to a. This requires less than n such steps due to increasing preferences and the fact that in each step the number of agents assigned to a is strictly increasing. From the latter fact and by construction we immediately get \({\rho }^a \supset \pi ^a\), which in turn implies \({\rho }^b \subset (\pi ^b \cup \pi ^{a_{\emptyset }})\). We now show that \({\rho }^b \subseteq \pi ^b\) holds as well. In order to do so, it is sufficient to show that \({\rho }^b\cap \pi ^{a_{\emptyset }}=\emptyset \) holds. Assume the opposite, that is, the set \({\rho }^b\cap \pi ^{a_{\emptyset }}\) is nonempty. Then, each agent of \({\rho }^b\cap \pi ^{a_{\emptyset }}\) approves of \((b,|{\rho }^b|)\). Recall that \({\rho }^b = ({\rho }^b\cap \pi ^{b}) \cup ({\rho }^b\cap \pi ^{a_{\emptyset }})\) holds, and thus \(|{\rho }^b|\le \ell + |{\rho }^b\cap \pi ^{a_{\emptyset }}|\) is satisfied. Therefore, due to increasing preferences each \(j \in {\rho }^b\cap \pi ^{a_{\emptyset }}\) approves of \((b, \ell + |{\rho }^b\cap \pi ^{a_{\emptyset }}|)\), violating (3).
Now, assume that there is a set D of agents including \({\rho }^b\) that prefers \((b,|D|)\) over the alternative assigned under \({\rho }\). In what follows, we show that D has to contain agent i. Assume \(D\subseteq \pi ^b\). Since D is a superset of \({\rho }^b\), by the choice of H in constructing \({\rho }\), D must contain at least one agent assigned to a under \({\rho }\). Now, in the above procedure to construct \({\rho }\), consider the last assignment \(\gamma \) in which all agents of D were jointly assigned to b. Such an assignment must exist due to our assumption \(D\subseteq \pi ^b\). In the assignment constructed from \(\gamma \), some of the agents of D must have been assigned to a; since the number of agents assigned to a increases during the construction of \({\rho }\), these agents prefer \((a,|{\rho }^a|)\) over \((b,|D|)\) due to increasing preferences. This, however, contradicts our assumption that all agents of D are better off joining b. Thus, any set D of agents that, under \({\rho }\), wants to deviate to b has to contain at least one agent of \(\pi ^a \cup \pi ^{a_{\emptyset }}\). Let \(D'=D\cap (\pi ^a \cup \pi ^{a_{\emptyset }})\). Then, \(|D'|+\ell \ge |D|\) holds due to \(D=D' \cup (D\cap \pi ^b)\). Recall that by increasing preferences each agent of \(\pi ^a\) is better off under \({\rho }\) than under \(\pi \). Also, no agent of \(\pi ^{a_{\emptyset }}\) is made worse off under \({\rho }\). Hence, each agent of \(D'\) prefers \((b,|D|)\) and thus \((b,|D'|+\ell )\) over the alternative assigned by \(\pi \). As a consequence, D has to contain agent i, because otherwise \(\pi \) is not core stable in \(\mathcal {I}'\). Then, since there is no set of agents under \({\rho }\) that wishes to deviate to a, we can apply Case 1 (starting with \({\rho }\) instead of \(\pi \)) and again obtain a contradiction to maxi–max manipulability.\(\square \)
Unfortunately, the above maxi–max strategyproofness result for two activities and increasing preferences cannot be generalized to the domain of single-peaked preferences, as we show in Theorem 11 below.
Theorem 11
When all agents have single-peaked preferences and \(A^{*}\) consists of two activities, then the cs-aggregation correspondence \({C}_{\mathrm{cs}}\) is maxi–max manipulable.
Proof
Preference profile P used in the proof of Theorem 11 (each agent's ranking, best to worst):

Agent 1: \((a,1)\succ _{1}(b,2)\succ _{1}a_{\emptyset }\succ _{1}(a,2)\succ _{1}(a,3)\succ _{1}(b,1)\succ _{1}(b,3)\)
Agent 2: \((b,2)\succ _{2}(a,2)\succ _{2}a_{\emptyset }\succ _{2}(b,1)\succ _{2}(b,3)\succ _{2}(a,1)\succ _{2}(a,3)\)
Agent 3: \((a,2)\succ _{3}a_{\emptyset }\succ _{3}(a,1)\succ _{3}(a,3)\succ _{3}(b,3)\succ _{3}(b,2)\succ _{3}(b,1)\)
Let \(\mathcal {I}=(N,A,P)\) with \(N=\{1,2,3\}\) and \(A^{*}=\{a,b\}\), where profile P is given as above; note that the preferences of each agent are single-peaked. The only core stable assignment in \(\mathcal {I}\) is \(\pi \) with \(\pi (1)=a\), \(\pi (2)=\pi (3)=a_{\emptyset }\) (see below for details); i.e., \({C}_{\mathrm{cs}}(P)=\{\pi \}\). Now, let agent 2 misreport her preferences by ranking (a, 2) as her top-ranked alternative, resulting in profile \(P'\) with \(P'_{N{\setminus }\{2\}}=P_{N{\setminus }\{2\}}\). Then we have \({C}_{\mathrm{cs}}(P')=\{\pi ,\mu \}\), where \(\mu (1)=a_{\emptyset }\) and \(\mu (2)=\mu (3)=a\). Since, according to her truthful preferences, agent 2 prefers (a, 2) over \(a_{\emptyset }\), the correspondence \({C}_{\mathrm{cs}}\) is maxi–max manipulable at P by agent 2.

Finally, for the sake of completeness we provide the details for the above sets of core stable assignments. In \(\mathcal {I}\), for identifying the set of core stable assignments, let us consider all individually rational assignments. The trivial assignment is not core stable since, e.g., agent 1 would like to deviate to a. Assignment \(\gamma \) with \(\gamma (1)=\gamma (2)=b\), \(\gamma (3)=a_{\emptyset }\) is not core stable because agent 1 prefers (a, 1) over (b, 2). Assignment \(\mu \) with \(\mu (1)=a_{\emptyset }\), \(\mu (2)=\mu (3)=a\) is not core stable either, because agents 1 and 2 prefer (b, 2) over their assigned alternative and hence want to deviate to b. Finally, assignment \(\pi \) with \(\pi (1)=a\), \(\pi (2)=\pi (3)=a_{\emptyset }\) is core stable because agent 1 is the only agent assigned to a and (a, 1) is the agent's top-ranked alternative; thus, (1) agent 1 would be worse off with a set of agents joining a, and (2) no set of agents can deviate to b, because the latter would require also agent 1 to deviate to b. Considering \(P'\) instead of P, the set of individually rational assignments remains unchanged. Exactly as argued above it follows that \(\pi \) is core stable, whereas \(\gamma \) and the trivial assignment are not core stable. On the other hand, given \(P'\) instead of P, assignment \(\mu \) is core stable, because (1) (a, 2) is the top-ranked alternative of agents 2 and 3 in \(P'\), and (2) agent 2 does not want to deviate to b. \(\square \)
As the maxi–max strategyproofness result for \({C}_{\mathrm{cs}}\) stated in Theorem 10 cannot be generalized to the more general domain of single-peaked preferences, it is natural to ask whether, restricting attention to increasing preferences, the result still holds when the number of activities increases. However, this is not the case, as the following theorem shows.
Theorem 12
When all agents have increasing preferences and \(A^{*}\) consists of three activities, then the cs-aggregation correspondence \({C}_{\mathrm{cs}}\) is maxi–max manipulable.
Preference profiles P and \(P'\) used in the proof of Theorem 12 (each agent's ranking, best to worst; alternatives below \(a_{\emptyset }\) are omitted):

Profile P:
Agent 1: \((c,4)\succ _{1}(c,3)\succ _{1}(b,4)\succ _{1}(b,3)\succ _{1}a_{\emptyset }\)
Agent 2: \((c,4)\succ _{2}(a,4)\succ _{2}(a,3)\succ _{2}(a,2)\succ _{2}(c,3)\succ _{2}a_{\emptyset }\)
Agent 3: \((c,4)\succ _{3}(c,3)\succ _{3}(b,4)\succ _{3}(b,3)\succ _{3}a_{\emptyset }\)
Agent 4: \((b,4)\succ _{4}(b,3)\succ _{4}(a,4)\succ _{4}(a,3)\succ _{4}(a,2)\succ _{4}(c,4)\succ _{4}a_{\emptyset }\)

Profile \(P'\) (agents 1, 2, 3 as in P):
Agent 4: \((a,4)\succ _{4}(a,3)\succ _{4}(a,2)\succ _{4}(b,4)\succ _{4}(b,3)\succ _{4}(c,4)\succ _{4}a_{\emptyset }\)
Proof
Let \(A^{*}=\{a,b,c\}\), \(N=\{1,2,3,4\}\), and let the (truncated) profiles P and \(P'\) be given as in Table 2. Let \(\mathcal {I}=(N,A,P)\). Note that the preferences of each agent are increasing.\(^{4}\) The only core stable assignment in \(\mathcal {I}\) is \(\pi \) with \(\pi (i)=c\) for all \(i\in N\) (see below for details); i.e., \({C}_{\mathrm{cs}}(P)=\{\pi \}\). Observe that \(P'_{N{\setminus }\{4\}}=P_{N{\setminus }\{4\}}\). In addition, we have \({C}_{\mathrm{cs}}(P')=\{\pi ,\mu \}\) with \(\mu (2)=\mu (4)=a\) and \(\mu (1)=\mu (3)=a_{\emptyset }\). Hence, \({C}_{\mathrm{cs}}\) is maxi–max manipulable at profile P by agent 4.
The details concerning the sets of core stable assignments are as follows. In both instances \(\mathcal {I}=(N,A,P)\) and \(\mathcal {I}'=(N,A,P')\), there are four nontrivial individually rational assignments: \(\pi \) with \(\pi (i)=c\) for all \(i\in N\); \(\mu \) with \(\mu (2)=\mu (4)=a\) and \(\mu (1)=\mu (3)=a_{\emptyset }\); \(\lambda \) with \(\lambda (1)=\lambda (3)=\lambda (4)=b\) and \(\lambda (2)=a_{\emptyset }\); and \(\gamma \) with \(\gamma (1)=\gamma (2)=\gamma (3)=c\) and \(\gamma (4)=a_{\emptyset }\). Consider instance \(\mathcal {I}\) with profile P. In that instance, \(\mu \) is not core stable because agents 1, 3, 4 want to deviate to b. Assignment \(\lambda \) is not core stable either since the agents 1, 2, 3 want to deviate to c. In addition, \(\gamma \) is not core stable because all four agents would prefer (c, 4) over the assigned alternative. For the same reason, the trivial assignment is not core stable. In contrast, \(\pi \) is core stable because, due to the agents’ preferences, it would require at least two agents for a deviation but under \(\pi \) the agents 1, 2, 3—and hence three out of four agents—are assigned to their topranked alternative (c, 4). Thus, we have \({C}_{\mathrm{cs}}(P)=\{\pi \}\).
On the other hand, consider profile \(P'\). With the same argumentation as above it follows that in \(\mathcal {I}'\) none of \(\lambda \), \(\gamma \) or the trivial assignment is core stable. Also analogously to above it follows that \(\pi \) is core stable in \(\mathcal {I}'\). In addition, it turns out that now \(\mu \) is core stable as well. In order to verify this, observe that agents 1, 3 do not approve of (a, k) for any choice of k, hence there is no group of agents that wishes to deviate to a. Consider a possible deviation to c. Agents 1, 2, 3 prefer (c, 4) over the assigned alternative, but agent 4 does not; furthermore, agents 1 and 3 prefer (c, 3) over \(a_{\emptyset }\), but agent 2, the only remaining agent approving of (c, 3), prefers the assigned alternative (a, 2) over (c, 3). Hence, no deviation to c would make all members of the deviating group of agents better off. Finally, consider a possible deviation to b. Since (b, 4) is not approved by 4 agents, we can restrict attention to (b, 3). Whereas agents 1, 3 want to deviate to b, the only remaining agent approving of (b, 3), agent 4, prefers (a, 2) over (b, 3). Therefore, we can rule out such a deviation as well, and \(\mu \) is core stable in \(\mathcal {I}'\). Hence, \({C}_{\mathrm{cs}}(P')=\{\pi ,\mu \}\). \(\square \)
5.3 Pareto optimal assignments
Consider an agent \(i\in N\), and let (a, k) be the best-ranked (w.r.t. \(\succ _{i}\)) among the alternatives \((b,\ell )\) that i approves of and that are approved by at least \(\ell \) agents. Then there is always a Pareto optimal assignment which assigns i to a such that exactly k agents are assigned to a (see Darmann 2018). Thus, it immediately follows that \({C}_{\mathrm{po}}\) is maxi–max strategyproof.
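As a small illustration of this observation, the following sketch computes, for an agent i, the best-ranked approved alternative (a, k) that is approved by at least k agents overall; the profile and the name `best_feasible` are hypothetical, not from the paper:

```python
# profile[i] lists agent i's approved alternatives (activity, size), best first.
profile = {
    1: [('a', 2), ('b', 1)],
    2: [('b', 2), ('a', 2)],
    3: [('b', 1)],
}

def best_feasible(profile, i):
    """Agent i's best-ranked approved alternative (a, k) approved by at least
    k agents overall; by the observation above, some Pareto optimal assignment
    realizes exactly this alternative for i."""
    for act, k in profile[i]:
        if sum((act, k) in profile[j] for j in profile) >= k:
            return (act, k)
    return None  # i approves no realizable alternative and can stay at a_void
```

For instance, agent 2's top choice (b, 2) is approved only by herself, so her guaranteed alternative is (a, 2), which is approved by agents 1 and 2.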
Theorem 13
The po-aggregation correspondence \({C}_{\mathrm{po}}\) is maxi–max strategyproof.
In the case of decreasing preferences, \({C}_{\mathrm{po}}\) also turns out to be maxi–min strategyproof; on the negative side, \({C}_{\mathrm{po}}\) is Gärdenfors manipulable even if restricted to instances with only two activities.
Theorem 14
When all agents have decreasing preferences, then the po-aggregation correspondence \({C}_{\mathrm{po}}\) is maxi–min strategyproof.
Proof
Given an instance \(\mathcal {I}=(N,A,P)\) of \({\textsf {o}}\hbox {-}{\textsf {GASP}}\), assume some agent \(i\in N\) tries to maxi–min manipulate at P. Let \(P'\) be the respective profile with \(P'_{N{\setminus }\{i\}}=P_{N{\setminus }\{i\}}\), and let \(\mathcal {I}'=(N,A,P')\). Assume \({C}_{\mathrm{po}}\) is maxi–min manipulable at P by agent i. Let \(\pi \in \min _{i}{C}_{\mathrm{po}}(P)\). Consider assignment \(\tilde{\pi }\) with \(\tilde{\pi }(i)=a_{\emptyset }\) and \(\tilde{\pi }(j)=\pi (j)\) for all \(j\in N{\setminus }\{i\}\). Observe that in instance \(\mathcal {I}'\), assignment \(\tilde{\pi }\) is individually rational because the agents have decreasing preferences and for each activity \(a\in A^{*}\) we have \(|\tilde{\pi }^{a}|\le |\pi ^{a}|\). Due to \(\tilde{\pi }(i)=a_{\emptyset }\), assignment \(\tilde{\pi }\) cannot be Pareto optimal in \(\mathcal {I}'\), since this would contradict our assumption that \({C}_{\mathrm{po}}\) is maxi–min manipulable at P by agent i. Thus, in \(\mathcal {I}'\) there is a Pareto optimal assignment \(\pi ^{*}\) which Pareto-dominates \(\tilde{\pi }\). If \((\pi ^{*}(i),\pi ^{*}_{i})\succ _{i}(\pi (i),\pi _{i})\), then in \(\mathcal {I}\) assignment \(\pi ^{*}\) is individually rational and also Pareto-dominates \(\pi \); this contradicts the Pareto optimality of \(\pi \). Hence, either \((\pi ^{*}(i),\pi ^{*}_{i})=(\pi (i),\pi _{i})\) or \((\pi (i),\pi _{i})\succ _{i}(\pi ^{*}(i),\pi ^{*}_{i})\) holds; both cases contradict maxi–min manipulability. \(\square \)
Theorem 15
When all agents have decreasing preferences and \(A^{*}\) consists of two activities, then the po-aggregation correspondence \({C}_{\mathrm{po}}\) is Gärdenfors manipulable.
Preference profiles P and \(P'\) used in the proof of Theorem 15 (each agent's ranking, best to worst; alternatives below \(a_{\emptyset }\) are omitted):

Profile P:
Agent 1: \((a,1)\succ _{1}(b,1)\succ _{1}(b,2)\succ _{1}(a,2)\succ _{1}a_{\emptyset }\)
Agent 2: \((b,1)\succ _{2}(a,1)\succ _{2}(a,2)\succ _{2}(b,2)\succ _{2}a_{\emptyset }\)
Agent 3: \((b,1)\succ _{3}(a,1)\succ _{3}a_{\emptyset }\)

Profile \(P'\) (agents 2 and 3 as in P):
Agent 1: \((b,1)\succ _{1}(a,1)\succ _{1}(a,2)\succ _{1}(b,2)\succ _{1}a_{\emptyset }\)
Proof
Let \(\mathcal {I}=(N,A,P)\) with \(N=\{1,2,3\}\), \(A^{*}=\{a,b\}\) and the profile P be given as displayed on the lefthand side of Table 3. Let agent 1 misreport her preferences as stated in profile \(P'\) (also displayed in Table 3). In \(\mathcal {I}\), there are six Pareto optimal assignments: \({C}_{\mathrm{po}}(P)=\{\pi ,\lambda ,\mu ,\delta ,\rho ,\omega \}\), where \(\pi (1)=a\), \(\pi (2)=b\), \(\pi (3)=a_{\emptyset }\); \(\lambda (1)=a\), \(\lambda (2)=a_{\emptyset }\), \(\lambda (3)=b\); \(\mu (1)=b\), \(\mu (2)=b\), \(\mu (3)=a\); \(\delta (1)=a\), \(\delta (2)=a\), \(\delta (3)=b\); \(\rho (1)=a_{\emptyset }\), \(\rho (2)=a\), \(\rho (3)=b\); \(\omega (1)=a_{\emptyset }\), \(\omega (2)=b\), \(\omega (3)=a\). Observe that all of these assignments but \(\mu \) are also Pareto optimal in \(\mathcal {I}'=(N,A,P')\). Therefore, \({C}_{\mathrm{po}}(P){\setminus }{C}_{\mathrm{po}}(P')=\{\mu \}\).
On the other hand, \({C}_{\mathrm{po}}(P')=({C}_{\mathrm{po}}(P){\setminus }\{\mu \})\cup \{\alpha ,\beta \}\), where \(\alpha (1)=b\), \(\alpha (2)=a\), \(\alpha (3)=a_{\emptyset }\) and \(\beta (1)=b\), \(\beta (2)=a_{\emptyset }\), \(\beta (3)=a\). Thus, \({C}_{\mathrm{po}}(P'){\setminus }{C}_{\mathrm{po}}(P)=\{\alpha ,\beta \}\). Observe that both under \(\alpha \) and \(\beta \) agent 1 is the only agent assigned to b, whereas \(\mu \) assigns both agents 1 and 2 to b. By the decreasing preferences of agent 1, however, we have \((b,1)\succ _{1}(b,2)\). Hence, \({C}_{\mathrm{po}}\) is Gärdenfors manipulable by agent 1 at profile P. \(\square \)
For increasing (and thus for single-peaked) preferences and two activities it turns out that \({C}_{\mathrm{po}}\) is maxi–min and Gärdenfors manipulable (Theorem 16 below summarizes the results for increasing preferences); more generally, for that case, in the subsequent section we provide a manipulability result for each individually rational aggregation correspondence satisfying the mild condition of unanimity.
Theorem 16

When all agents have increasing preferences and \(A^{*}\) consists of two activities, then the po-aggregation correspondence \({C}_{\mathrm{po}}\) is

(1) maxi–max strategyproof,

(2) maxi–min manipulable, and

(3) Gärdenfors manipulable.
5.4 A general impossibility result
Unfortunately, it turns out that any individually rational aggregation correspondence is susceptible to strategic manipulation already under a very mild and natural condition. This condition states that, if there is a unique activity a such that all agents prefer the whole group of agents taking part in activity a over each other alternative, then the aggregation correspondence should guarantee, provided there is no other nontrivial individually rational assignment, that each agent is assigned to a. This property is in the spirit of the unanimity property for social choice functions [see, e.g., Moulin (1983) and Özyurt and Sanver (2008)] in the setting of \({\textsf {o}}\hbox {-}{\textsf {GASP}}\).
Definition 9
An aggregation correspondence \({C}\) satisfies unanimity if, for any instance (N, A, P) of \({\textsf {o}}\hbox {-}{\textsf {GASP}}\) such that for some \(a\in A^{*}\) (1) each agent's top-ranked alternative is (a, n), and (2) \(\pi (i)=a\) for all \(i\in N\) is the only nontrivial individually rational assignment, \({C}(P)=\{\pi \}\) holds.
Observe that \({C}_{\text {mir}}\), \({C}_{\mathrm{cs}}\), and \({C}_{\mathrm{po}}\) satisfy the unanimity condition.
Theorem 17
Even when \(A^{*}\) consists of only two activities and each agent has increasing preferences, any individually rational aggregation correspondence that satisfies unanimity is maxi–min and Gärdenfors manipulable.
Proof

Consider instance \(\mathcal {I}=(N,A,P)\) with \(N=\{1,2\}\), \(A^{*}=\{a,b\}\), and profile P given by \(\succ _{1}:(a,2)\succ _{1}(b,2)\succ _{1}a_{\emptyset }\) and \(\succ _{2}:(b,2)\succ _{2}(a,2)\succ _{2}a_{\emptyset }\); observe that the preferences of both agents are increasing. Let \(\pi \) denote the assignment with \(\pi (1)=\pi (2)=a\), let \(\mu \) denote the assignment with \(\mu (1)=\mu (2)=b\), and let \(\pi _{\emptyset }\) denote the trivial assignment. Let \({C}\) be an individually rational aggregation correspondence that satisfies unanimity, and consider the following profiles:

preference profile \(P^{(1)}\) with \(\succ _{1}^{(1)}=\succ _{1}\) and \(\succ _{2}^{(1)}:(b,2)\succ _{2}^{(1)}a_{\emptyset }\), and profile \(P^{(2)}\) with \(\succ _{i}^{(2)}:(b,2)\succ _{i}^{(2)}a_{\emptyset }\) for \(i \in \{1,2\}\);

preference profile \(P^{(3)}\) with \(\succ _{2}^{(3)}=\succ _{2}\) and \(\succ _{1}^{(3)}:(a,2)\succ _{1}^{(3)}a_{\emptyset }\), and profile \(P^{(4)}\) with \(\succ _{i}^{(4)}:(a,2)\succ _{i}^{(4)}a_{\emptyset }\) for \(i \in \{1,2\}\).

By unanimity, \({C}(P^{(2)})=\{\mu \}\) holds. We may assume that \({C}(P^{(1)})=\{\mu \}\) holds as well, because otherwise agent 1 can maxi–min and Gärdenfors manipulate at \(P^{(1)}\) by reporting \(\succ _{1}^{(2)}\).
Also, \({C}(P^{(4)})=\{\pi \}\) holds due to unanimity. Analogously to the above, we hence may assume that \({C}(P^{(3)})=\{\pi \}\) holds, because otherwise agent 2 could maxi–min and Gärdenfors manipulate at \(P^{(3)}\).
Now, consider instance \(\mathcal {I}\). Assume \(\pi \in {C}(P)\) or \(\pi _{\emptyset } \in {C}(P)\). Then, agent 2 can maxi–min and Gärdenfors manipulate at profile P (resulting in the above profile \(P^{(1)}\)), because \({C}(P^{(1)})=\{\mu \}\) holds by assumption and (b, 2) is agent 2's top-ranked alternative according to \(\succ _2\). Hence, we may assume \({C}(P)=\{\mu \}\). But then agent 1 can Gärdenfors and maxi–min manipulate at profile P by dropping (b, 2) from her approval set, resulting in profile \(P^{(3)}\) with \({C}(P^{(3)})=\{\pi \}\). \(\square \)
The above theorem immediately implies the following proposition for ir-aggregation functions.
Proposition 4
Even when \(A^{*}\) consists of only two activities and each agent has increasing preferences, any ir-aggregation function that satisfies unanimity is manipulable.
With respect to ir-aggregation functions, Proposition 4 rules out a general strategyproofness result for increasing preferences even for the case of only two activities. Observe that the serial dictatorship function \(f^{\text {sd}}\) also satisfies unanimity. Thus, in contrast to the case of increasing preferences (Proposition 4), by Proposition 3 the domain of decreasing preferences allows for a strategyproof cs- and po-aggregation function (respecting unanimity), irrespective of the number of activities involved.
6 Conclusion and outlook
We have analyzed strategic manipulation in the group activity selection problem when the minimum requirement of individual rationality is to be respected. Even when restricted to instances with increasing preferences and two activities, it turns out that a strategyproof aggregation correspondence (w.r.t. the maxi–min and Gärdenfors extensions) or a strategyproof aggregation function meeting the mild condition of unanimity does not exist. Observe that basically all reasonable aggregation processes satisfy unanimity, including the main solution concepts considered in the group activity selection problem (maximum individually rational, core stable, and Pareto optimal assignments), to which we paid particular attention. Thus, a strategyproofness result (w.r.t. the maxi–min and Gärdenfors extensions) in that domain is ruled out for basically all reasonable aggregation processes respecting individual rationality.
The latter also applies to single-valued aggregation functions. Concerning the particular ir-aggregation functions considered, we have shown that in the decreasing preference case a strategyproof cs- and po-aggregation function always exists, while a strategyproof mir-aggregation function is ruled out even in restricted instances with only one activity.
Overview of the results for the correspondences \({C}_{\text {mir}}\) and \({C}_{\mathrm{cs}}\) w.r.t. different preference extensions (sp = strategyproof, man = manipulable):

Preferences, #activities | \({C}_{\text {mir}}\): maxi–min/Gärd. | maxi–max | \({C}_{\mathrm{cs}}\): maxi–min | Gärd. | maxi–max
Decreasing, 1 | sp | man (Theorem 1) | sp | sp | sp (Theorem 4)
Decreasing, \(\ge 2\) | man | man (Theorem 7) | man | man | sp
Increasing, 1 | sp | sp | sp | sp | sp
Increasing, 2 | man | man | man | man | sp
Increasing, \(>2\) | man | man (Theorem 6) | man | man | man (Theorem 12)
Single-peaked, 1 | man | man (Theorem 2) | man | man | sp (Theorem 5)
Single-peaked, 2 | man | man (Theorem 6) | man | man | man
Overview of the results for the correspondence \({C}_{\mathrm{po}}\) w.r.t. different preference extensions
There are different approaches for further work on strategic manipulation in the group activity selection problem. For instance, other preference extensions could be adapted to the setting and analyzed with respect to manipulability. Of course, also a characterization of preference extensions and corresponding domains for which strategyproofness is guaranteed would be of interest.
Alternatively, one could hope for positive results at the price of dropping the requirement of individual rationality. This is, for instance, the case in serial dictatorship where, essentially, according to some predefined order each not-yet-assigned agent sequentially picks her top-ranked alternative (a, k) among the yet unused activities a with at least k unassigned agents remaining, and assigns herself and \(k-1\) other unassigned agents to a, irrespective of whether or not they approve of (a, k). An interesting approach for future research in this respect could be to find meaningful aggregation functions and correspondences, as well as corresponding domains, for which strategyproofness can be guaranteed.
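A minimal sketch of such a serial dictatorship, under the assumptions of this illustration: each agent reports only her approved alternatives (best first), and a dictator with no feasible approved alternative stays at the void activity.

```python
# Sketch of the serial dictatorship described above. The k-1 co-assigned
# agents are simply the next ones in the predefined order; their approval is
# deliberately NOT checked, which is why individual rationality is sacrificed.
def serial_dictatorship(agents, activities, profile):
    assignment = {i: 'void' for i in agents}
    unassigned = list(agents)       # predefined picking order
    unused = list(activities)
    while unassigned:
        dictator = unassigned[0]
        # Top-ranked approved alternative (a, k) with a unused and
        # at least k unassigned agents remaining (including the dictator).
        pick = next(((a, k) for (a, k) in profile[dictator]
                     if a in unused and k <= len(unassigned)), None)
        if pick is None:
            unassigned.pop(0)       # dictator stays at the void activity
            continue
        act, k = pick
        for j in unassigned[:k]:    # dictator plus k-1 further agents
            assignment[j] = act
        unassigned = unassigned[k:]
        unused.remove(act)
    return assignment

profile = {1: [('a', 2)], 2: [('b', 1)], 3: [('b', 2), ('a', 2)]}
```

In this run agent 1 picks (a, 2) and drags agent 2 along, although agent 2 only approves (b, 1); agent 3 then finds no feasible alternative and stays at the void activity.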
Finally, also with respect to practical applications it might be reasonable to consider alternative, less strict notions of strategyproofness.
Footnotes
 1.
Throughout this paper, the number of activities actually refers to the number of activities different from the void activity (see also Sect. 2).
 2.
The appealing concept of core stability hence is a reasonable solution concept as well in scenarios where agents do not have some kind of “hierarchical power” over others.
This condition can also be understood as a distribution of property rights taking as reference the initial assignment. Also, observe that in our model we are not concerned with initial endowments.
 3.
We point out, however, that our results also hold when restricted to instances which always admit a core stable assignment.
 4.
In the description of the profile the alternatives ranked below \(a_{\emptyset }\) are omitted for the sake of brevity. In fact, it is easy to complete the profile such that each agent's preferences are indeed increasing.
Notes
Acknowledgements
Open access funding provided by University of Graz. The author would like to thank Jérôme Lang, Daniel Eckert and Christian Klamler for fruitful discussions. In addition, the author is very grateful to three anonymous reviewers for helping to improve the paper with their valuable comments. This work was supported by the University of Graz project “VotingSelectionDecision”.
References
 Alcalde J, Revilla P (2004) Researching with whom? Stability and manipulation. J Math Econ 40(8):869–887
 Barberà S (2010) Strategy-proof social choice. In: Arrow K, Sen A, Suzumura K (eds) Handbook of social choice and welfare, vol 2, chapter 25. Elsevier, Amsterdam, pp 731–832
 Barberà S, Dutta B, Sen A (2001) Strategy-proof social choice correspondences. J Econ Theory 101(2):374–394
 Barberà S, Bossert W, Pattanaik PK (2004) Ranking sets of objects. In: Barberà S, Hammond PJ, Seidl C (eds) Handbook of utility theory. Springer, Berlin, pp 893–977
 Brandt F, Brill M (2011) Necessary and sufficient conditions for the strategyproofness of irresolute social choice functions. In: Proceedings of the 13th conference on theoretical aspects of rationality and knowledge, TARK XIII, pp 136–142
 Brandt F, Geist C (2014) Finding strategyproof social choice functions via SAT solving. In: Proceedings of the 2014 international conference on autonomous agents and multiagent systems, AAMAS ’14, pp 1193–1200
 Cechlárová K, Romero-Medina A (2001) Stability in coalition formation games. Int J Game Theory 29(4):487–494
 Darmann A (2018) Stable and Pareto optimal group activity selection problem from ordinal preferences. Int J Game Theory. https://doi.org/10.1007/s00182-018-0612-3
 Darmann A, Lang J (2017) Group activity selection problems. In: Endriss U (ed) Trends in computational social choice. AI Access, pp 385–410
 Darmann A, Elkind E, Kurz S, Lang J, Schauer J, Woeginger G (2018) Group activity selection problem with approval preferences. Int J Game Theory 47(3):767–796
 Gärdenfors P (1976) Manipulation of social choice functions. J Econ Theory 13(2):217–228
 Gibbard A (1973) Manipulation of voting schemes: a general result. Econometrica 41(4):587–601
 Jackson M, Nicolò A (2004) The strategy-proof provision of public goods under congestion and crowding preferences. J Econ Theory 115(2):278–308
 Lee H, Shoham Y (2015) Stable invitations. In: Proceedings of the twenty-ninth AAAI conference on artificial intelligence, AAAI’15, pp 965–971
 Long Y (2018) Strategy-proof group selection under single-peaked preferences over group size. Econ Theory. https://doi.org/10.1007/s00199-018-1135-7
 Massó J, Nicolò A (2008) Efficient and stable collective choices under gregarious preferences. Games Econ Behav 64(2):591–611
 Moretti S, Tsoukiàs A (2012) Ranking sets of possibly interacting objects using Shapley extensions. In: Proceedings of the thirteenth international conference on principles of knowledge representation and reasoning, KR’12. AAAI Press
 Moulin H (1983) The strategy of social choice. Advanced textbooks in economics. North-Holland, Amsterdam
 Özyurt S, Sanver R (2008) Strategy-proof resolute social choice correspondences. Soc Choice Welf 30(1):89–101
 Pápai S (2004) Unique stability in simple coalition formation games. Games Econ Behav 48(2):337–354
 Rodríguez-Álvarez C (2009) Strategy-proof coalition formation. Int J Game Theory 38(3):431–452
 Satterthwaite M (1975) Strategy-proofness and Arrow's conditions: existence and correspondence theorems for voting procedures and social welfare functions. J Econ Theory 10(2):187–217
 Skowron P, Faliszewski P, Slinko A (2015) Achieving fully proportional representation: approximability results. Artif Intell 222:67–103
 Sönmez T (1999) Strategy-proofness and essentially single-valued cores. Econometrica 67(3):677–689
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.