Size Versus Truthfulness in the House Allocation Problem
Abstract
We study the House Allocation problem (also known as the Assignment problem), i.e., the problem of allocating a set of objects among a set of agents, where each agent has ordinal preferences (possibly involving ties) over a subset of the objects. We focus on truthful mechanisms without monetary transfers for finding large Pareto optimal matchings. It is straightforward to show that no deterministic truthful mechanism can approximate a maximum cardinality Pareto optimal matching with ratio better than 2. We thus consider randomised mechanisms. We give a natural and explicit extension of the classical Random Serial Dictatorship Mechanism (RSDM) specifically for the House Allocation problem where preference lists can include ties. We thus obtain a universally truthful randomised mechanism for finding a Pareto optimal matching and show that it achieves an approximation ratio of \(\frac{e}{e-1}\). The same bound holds even when agents have priorities (weights) and our goal is to find a maximum weight (as opposed to maximum cardinality) Pareto optimal matching. On the other hand we give a lower bound of \(\frac{18}{13}\) on the approximation ratio of any universally truthful Pareto optimal mechanism in settings with strict preferences. By using a characterisation result of Bade, we show that any randomised mechanism that is a symmetrisation of a truthful, non-bossy and Pareto optimal mechanism has an improved lower bound of \(\frac{e}{e-1}\). Since our new mechanism is a symmetrisation of RSDM for strict preferences, it follows that this lower bound is tight. We moreover interpret our problem in terms of the classical secretary problem and prove that our mechanism provides the best randomised strategy for the administrator who interviews the applicants.
Keywords
House allocation problem · Assignment problem · Pareto optimal matching · Randomised mechanisms · Truthfulness

1 Introduction
We study the problem of allocating a set of indivisible objects among a set of agents. Each agent has private ordinal preferences over a subset of objects (those they find acceptable), and each agent may be allocated at most one object. This problem has been studied by both economists and computer scientists. When monetary transfers are not permitted, the problem is referred to in the literature as the House Allocation problem (henceforth abbreviated by HA) [1, 20, 38] or the Assignment problem [9, 19]. In this paper we opt for the term House Allocation problem. Most of the work in the literature assumes that the agents have strict preferences over their acceptable objects. It often happens, though, that an agent is indifferent between two or more objects. Here we let agents express indifference, and hence preferences may involve ties unless explicitly stated otherwise.
It is often desired that as many objects as possible become allocated among the agents—i.e., an allocation of maximum size is picked, hence making as many agents happy as possible. Usually, depending on the application of the problem, we are required to consider some other optimality criteria, sometimes instead of, and sometimes in addition to, maximising the size of the allocation. Several optimality criteria have been considered in the HA setting, and perhaps the most studied such concept is Pareto optimality (see, e.g., [1, 2, 12, 15, 33]), sometimes referred to as Pareto efficiency. Economists, in particular, regard Pareto optimality as the most fundamental requirement for any “reasonable” solution to a noncooperative game. Roughly speaking, an allocation \(\mu \) is Pareto optimal if there does not exist another allocation \(\mu '\) in which no agent is worse off and at least one agent is better off. In this work we are mainly concerned with Pareto optimal allocations of maximum size, but will also consider weighted generalisations.
The related Housing Market problem (HM) [29, 30, 34] is the variant of HA in which there is an initial endowment, i.e., each agent owns a unique object initially (in this case the numbers of agents and objects are usually defined to be equal). In this setting, the most widely studied solution concept is that of the core, which is an allocation of agents to objects satisfying the property that no coalition C of agents can improve (i.e., every agent in C is better off) by exchanging their own resources (i.e., the objects they brought to the market). In the case of strict preferences, the core is always non-empty [30], unique, and indeed Pareto optimal. When preferences may include ties, the notion of core that we defined is sometimes referred to as the weak core. In this case a core allocation need not be Pareto optimal. Jaramillo and Manjunath [21], Plaxton [27], and Saban and Sethuraman [32] provide polynomial-time algorithms for finding a core allocation that does additionally satisfy Pareto optimality. Our problem differs from HM in that there is no initial endowment, and hence our focus is on Pareto optimal matchings rather than outcomes in the core.
For strictly-ordered preference lists, Abraham et al. [2] gave a characterisation of Pareto optimal matchings that led to an O(m) algorithm for checking an arbitrary matching for Pareto optimality, where m is the total length of the agents’ preference lists. This characterisation was extended to the case that preference lists may include ties by Manlove [24, Sect. 6.2.2.1], also leading to an O(m) algorithm for checking a matching for Pareto optimality. For strictly-ordered lists, a maximum cardinality Pareto optimal matching can be found in \(O(\sqrt{n_1}m)\) time, where \(n_1\) is the number of agents [2]. The fastest algorithm currently known for this problem when preference lists may include ties is based on minimum cost maximum cardinality matching and has complexity \(O(\sqrt{n}m\log n)\) (see, e.g., [24, Sect. 6.2.2.1]), where n is the total number of agents and objects.
As stated earlier, agents’ preferences are private knowledge. Hence, unless they find it in their own best interests, agents may not reveal their preferences truthfully. An allocation mechanism is truthful if it gives agents no incentive to misrepresent their preferences. Perhaps unsurprisingly, a mechanism based on constructing a maximum cardinality Pareto optimal allocation is manipulable by agents misrepresenting their preferences (Theorem 2.3 in Sect. 2). Hence, we need to make a compromise and weaken at least one of these requirements. In this work, we relax our quest for finding a maximum cardinality Pareto optimal allocation by trading off the size of a Pareto optimal allocation against truthfulness; more specifically, we seek truthful Pareto optimal mechanisms that provide good approximations to the maximum size.
Under strict preferences, Pareto optimal matchings can be computed by a classical algorithm called the Serial Dictatorship Mechanism (SDM) (see, e.g., [1]), also referred to as the Priority Mechanism (see, e.g., [9]). SDM is a straightforward greedy algorithm that takes each agent in turn and allocates to him the most preferred available object on his preference list. Precisely due to this greedy approach, SDM is truthful. Furthermore, SDM is guaranteed to find a Pareto optimal allocation that has size at least half that of a maximum one, merely because any Pareto optimal allocation has size at least half that of a maximum one (see, e.g., [2]). Hence, at least in the case of strict preferences, we are guaranteed an approximation ratio of 2. Can we do better? It turns out that if we stay in the realm of deterministic mechanisms, a 2-approximation is the best we can hope for (Theorem 2.3, Sect. 2).
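SDM's greedy pass can be sketched in a few lines. The encoding of each agent's strict preferences as an ordered list, and the fixed serving order, are illustrative assumptions rather than the paper's notation:

```python
# A minimal sketch of the Serial Dictatorship Mechanism (SDM) for strict
# preferences: agents are served in a fixed order, and each takes his
# most-preferred object that is still available.

def serial_dictatorship(preferences):
    """preferences[i] is agent i's strictly ordered list of acceptable objects."""
    available = {o for prefs in preferences for o in prefs}
    matching = {}
    for i, prefs in enumerate(preferences):   # agents served in index order
        for o in prefs:                       # scan from most preferred down
            if o in available:
                matching[i] = o
                available.remove(o)
                break
    return matching

# Both agents prefer o1 to o2; the agent served first takes o1.
print(serial_dictatorship([["o1", "o2"], ["o1", "o2"]]))
# {0: 'o1', 1: 'o2'}
```

Truthfulness is immediate from the greedy structure: at his turn, an agent's report only determines which of the remaining objects he receives, so reporting truthfully gets him the best available one.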
Hence we turn to randomised mechanisms in order to achieve a better approximation ratio. The obvious candidate to consider is the Random Serial Dictatorship Mechanism (RSDM) (see, e.g., [1]), also known as the Random Priority mechanism (see, e.g., [9]), that is defined for HA instances with strict preferences. RSDM randomly generates an ordering of the agents and then proceeds by running SDM relative to this ordering.
When indifference is allowed, finding a Pareto optimal allocation is not as straightforward as for strict preferences. For example, one may consider breaking the ties randomly and then applying SDM. This approach, unfortunately, may produce an allocation that is not Pareto optimal. To see this, consider a setting with two agents, \(1\) and \(2\), and two objects, \(o_1\) and \(o_2\). Assume that \(1\) finds both objects acceptable and is indifferent between them, and that \(2\) finds only \(o_1\) acceptable. The only Pareto optimal matching for this setting is the one in which \(1\) and \(2\) are assigned \(o_2\) and \(o_1\) respectively. Assume that \(1\) is served first and that, as both objects are equally acceptable to him, he is assigned \(o_1\) (after an arbitrary tie-breaking). Therefore when \(2\)’s turn arrives, there is no object remaining that he finds acceptable, and he is hence left unmatched, resulting in a matching that is not Pareto optimal.
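The failure of arbitrary tie-breaking can be reproduced in a short sketch of the example above. The encoding of preference lists as lists of indifference classes (sets), and the tie-breaking key, are illustrative assumptions:

```python
# Breaking ties first and then running serial dictatorship can miss the
# unique Pareto optimal matching: here agent 0's tie is broken towards o1,
# which leaves agent 1 (who accepts only o1) unmatched.

def sdm_with_tiebreak(preferences, tiebreak):
    """preferences[i] is a list of indifference classes (sets of objects);
    tiebreak orders objects within a class before the greedy pass."""
    available = {o for prefs in preferences for cls in prefs for o in cls}
    matching = {}
    for i, prefs in enumerate(preferences):
        for cls in prefs:
            taken = next((o for o in sorted(cls, key=tiebreak)
                          if o in available), None)
            if taken is not None:
                matching[i] = taken
                available.remove(taken)
                break
    return matching

prefs = [[{"o1", "o2"}],   # agent 0: indifferent between o1 and o2
         [{"o1"}]]         # agent 1: only o1 acceptable

print(sdm_with_tiebreak(prefs, tiebreak=lambda o: o))
# {0: 'o1'} -- agent 1 is left unmatched, so the matching is not Pareto optimal
```

Had the tie been broken the other way, both agents would be matched, which is exactly why a correct extension of SDM must keep ties open and revisit provisional assignments, as SDMT1 does.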
Few works in the literature have considered extensions of SDM to the case where agents’ preferences may include ties. However, Bogomolnaia and Moulin [10] and Svensson [35] provide an implicit extension of SDM (in the former case for dichotomous preferences)^{1} but do not give an explicit description of an algorithm. Aziz et al. [4] provide an explicit extension for a more general class of problems, including HA. Pareto optimal matchings in HA can also be found by reducing to the HM setting [21], which involves creating dummy objects as endowments for the agents. This allows one of the aforementioned algorithms for HM [21, 27, 32] to be utilised to find a Pareto optimal matching in the core. However, the reduction increases the instance size, and in particular the number of agents \(n_1\) and the maximum length of a tie in any agent’s preference list. Consequently, even the fastest truthful Pareto optimal mechanism for HM, that of Saban and Sethuraman [32], has time complexity no better than \(O(n_1^3)\) in the worst case.
Our contribution in this paper is as follows.
 1.
By extending RSDM to the case of preference lists with ties, we give a universally truthful randomised mechanism^{2} for finding a Pareto optimal matching that has an approximation ratio of \(\frac{e}{e-1}\) with respect to the size of a maximum cardinality Pareto optimal matching.
 2.
We give a lower bound of \(\frac{18}{13}\) on the approximation ratio of any universally truthful Pareto optimal mechanism in settings with strict preferences. By using a characterisation result of Bade [7], we show that any randomised mechanism that is a symmetrisation^{3} of a truthful, non-bossy^{4} and Pareto optimal mechanism has an improved lower bound of \(\frac{e}{e-1}\). Since our new mechanism is a symmetrisation of RSDM for strict preferences, it follows that this lower bound is tight.
 3.
We extend RSDM to the setting where agents have priorities (weights) and our goal is to find a maximum weight (as opposed to maximum cardinality) Pareto optimal matching. Our mechanism is universally truthful and guarantees an \(\frac{e}{e-1}\)-approximation with respect to the weight of a maximum weight Pareto optimal matching.
 4.
We finally observe that our problem has an “online” or sequential flavour similar to secretary problems.^{5} Given this interpretation, we prove that our mechanism uses the best random strategy of interviewing the applicants in the sense that any other strategy would lead to an approximation ratio worse than \(\frac{e}{e-1}\) (see also below under related work).
Related Work This work can be placed in the context of designing truthful approximate mechanisms for problems in the absence of monetary transfer [28]. Bogomolnaia and Moulin [9] designed a randomised weakly truthful and envy-free mechanism, called the Probabilistic Serial mechanism (PS), for HA with complete lists. Very recently the same authors considered the same approximation problem as ours but in the context of envy-free rather than truthful mechanisms, and for strict preference lists and unweighted agents [11]. They showed that PS has an approximation ratio of \(\frac{e}{e-1}\), which is tight for any envy-free mechanism. Bhalgat et al. [8] investigated the social welfare of PS and RSDM. Tight deterministic truthful mechanisms for weighted matching markets were proposed by Dughmi and Ghosh [18] and they also presented an \(O(\log n)\)-approximate random truthful mechanism for the Generalised Assignment Problem (GAP) by reducing, with logarithmic loss in the approximation, to the solution for the value-invariant GAP. In subsequent work Che et al. [14] provided an O(1)-approximation mechanism for GAP. Aziz et al. [5] studied notions of fairness involving the stochastic dominance relation in the context of HA, and presented various complexity results for problems involving checking whether a fair assignment exists. Chakrabarty and Swamy [13] proposed rank approximation as a measure of the quality of an outcome and introduced the concept of lex-truthfulness as a notion of truthfulness for randomised mechanisms in HA.
RSDM is related to online bipartite matching algorithms. The connection was observed by Bhalgat et al. [8], who noted the similarity between RSDM and the RANKING algorithm of Karp et al. [22]. Karp et al. [22] proved that the expected size of the matching given by their RANKING algorithm is at least \(\frac{e-1}{e}\) times the optimal size. Bhalgat et al. [8] observed that RSDM will essentially behave the same way as RANKING for instances of HA where the agents’ preference lists relate to the order in which the objects arrive. Hence for this family of instances an approximation ratio of \(\frac{e}{e-1}\) holds for RSDM.
The weighted version of our problem is related to two widely-studied online settings, known in the literature as the online vertex-weighted bipartite matching problem [3] and secretary problems [6]. In our problem the administrator holds all the objects (they can be thought of as available positions), and all agents with unknown preference lists are applicants for these objects. Each applicant also has a private weight, which can be thought of as their quality (reflecting the fact that some of an agent’s skills may not be evident from their CV, for example). However we assume that they cannot overstate their weights (skills), because they might be checked and punished. This is similar to the classical assumption of no overbidding (e.g., in sponsored search auctions). Applicants are interviewed one-by-one in a random order. When an applicant arrives he chooses his most-preferred available object and the decision as to whether it is allocated to him is made immediately, and cannot be changed in the future.
Our weighted agents correspond to weighted vertices in the vertex-weighted bipartite matching context, but our objects do not arrive online as in the setting of [3]. However, if the preference ordering of each agent in our setting, over his acceptable objects, coincides with the arrival order of the objects in [3], then the two problems are the same. In the transversal matroid secretary problem, see, e.g., [17], objects are known in advance as in our setting, weighted agents arrive in a (uniform) random order, and the goal is to match them to previously unmatched objects. The administrator’s goal is to find a (random) arrival order of agents that maximises the ratio between the total weight of matched agents and the maximum weight of a matching if all the agents' preference lists are known in advance. We show that even if the weights of all agents are the same, our algorithm uses the best possible random strategy; no other such strategy leads to a better than \(\frac{e}{e-1}\)-approximate matching.
Organisation of the Paper The remainder of the paper is organised as follows. In Sect. 2 we define notation and terminology used in this paper, and show the straightforward lower bound for the approximation ratio of deterministic truthful mechanisms. SDMT1 and SDMT2 are presented in Sects. 3 and 4 respectively, and in the latter section it is proved that the two mechanisms are essentially equivalent. The approximation ratio of \(\frac{e}{e-1}\) for the randomised version of the two mechanisms is established in Sect. 5, whilst Sect. 6 contains our lower bound results. Finally, some concluding remarks are given in Sect. 7.
2 Definitions and Preliminary Observations
Let \(N=\{1,2,\ldots ,n_1\}\) be a set of \(n_1\) agents and \(O=\{o_1,o_2,\ldots ,o_{n_2}\}\) be a set of \(n_2\) objects. Let \(n=n_1+n_2\). Let [i] denote the set \(\{1,2,\ldots ,i\}\). We assume that each agent \(i\in N\) finds a subset of objects acceptable and has a preference ordering, not necessarily strict, over these objects. We write \(o_t \succ _{i} o_s\) to denote that agent \(i\) strictly prefers object \(o_t\) to object \(o_s\), and write \(o_t \simeq _{i} o_s\) to denote that \(i\) is indifferent between \(o_t\) and \(o_s\). We use \(o_t \succeq _{i} o_s\) to denote that agent \(i\) either strictly prefers \(o_t\) to \(o_s\) or is indifferent between them, and say that \(i\) weakly prefers \(o_t\) to \(o_s\). In some cases a weight \(w_{i}\) is associated with each agent \(i\), representing the priority or importance of the agent. Weights need not be distinct. Let \(W= (w_1, w_2, \ldots , w_{n_1})\). To simplify definitions, we assume that all agents are assigned weight equal to 1 if we are in an unweighted setting.
We assume that the indifference relation is transitive. This implies that each agent essentially divides his acceptable objects into different bins or indifference classes such that he is indifferent between the objects in the same indifference class and has a strict preference ordering over these indifference classes. For each agent \(i\), let \(C^{i}_k\), \(1 \le k \le n_2\), denote the kth indifference class, or tie, of agent \(i\). We also assume that if there exists \(l\in [n_2]\), where \(C^{i}_l = \emptyset \), then \(C^{i}_q = \emptyset \), \(\forall q, l \le q\le n_2\). We let \(L(i) = (C^{i}_1 \succ _{i} C^{i}_2 \succ _{i}\cdots \succ _{i} C^{i}_{n_2})\) and call \(L(i)\) the preference list of agent \(i\). We abuse notation and write \(o\in L(i)\) if \(o\) appears in preference list \(L(i)\), i.e., if agent \(i\) finds object \(o\) acceptable. We say that agent \(i\) ranks object \(o\) in kth position if \(o\in C^{i}_k\). We denote by \(rank(i,o)\) the rank of object \(o\) in agent \(i\)’s preference list and let \(rank(i,o)=n_2+1\) if \(o\) is not acceptable to \(i\). Therefore \(o_t \succ _{i} o_s\) if and only if \(rank(i,o_t) < rank(i,o_s)\), and \(o_t \simeq _{i} o_s\) if and only if \(rank(i,o_t) = rank(i,o_s)\).
Let \(L=(L(1),L(2),\ldots ,L(n_1))\) denote the joint preference list profile of all agents. We write \(L(-i)\) to denote the joint preference list profile of all agents except agent \(i\); i.e., \(L(-i) = (L(1),\ldots ,L(i-1),L(i+1),\ldots ,L(n_1))\). Let \(\mathcal {L}\) denote the set of all possible joint preference list profiles. An instance of HA is denoted by \(I=(N, O, L, W)\). We drop \(W\) and write \(I=(N, O, L)\) if we are dealing with an instance where agents are not assigned weights, or equivalently if they all have the same weight. Let \(\mathcal {I}\) denote the set of all possible instances of HA.
A matching \(\mu \) is a subset of \(N\times O\) such that each agent and object appears in at most one pair of \(\mu \). If \((i,o) \in \mu \), agent \(i\) and object \(o\) are said to be matched together, and \(o\) is the partner of \(i\) and vice versa. If \((i,o)\in \mu \) for some \(o\), we say that \(i\) is matched, and unmatched otherwise. The definitions of matched and unmatched for an object are analogous. If agent \(i\) is matched, \(\mu (i)\) denotes the object matched to \(i\). Similarly if object \(o\) is matched, \(\mu ^{-1}(o)\) denotes the agent matched to \(o\). In what follows, we will refer to the underlying graph of \(I\), which is the undirected graph \(G_0=(V,E)\) where \(V=N\cup O\) and \(E=\{(i,o) : i\in N, o\in L(i)\}\). We also use \(\mu \) to denote a matching (in the standard graph-theoretic sense) in \(G_0\). The size of a matching \(\mu \) is equal to the number of agents matched under \(\mu \). In the presence of weights, the weight of a matching is equal to the sum of the weights of the matched agents.
For two given matchings \(\mu _1, \mu _2\), we will frequently use \(\mu _1 \oplus \mu _2\) to denote the symmetric difference with respect to their sets of edges. An alternating path in \(G_0\), given a matching \(\mu _1\), is a path that consists of edges that alternately belong to \(\mu _1\) and do not belong to \(\mu _1\). An augmenting path in \(G_0\) is an alternating path where the first and the last vertices on the path are unmatched in \(\mu _1\). To augment along an augmenting path, given matching \(\mu _1\), means that a new matching \(\mu _2\) is created by removing edges on the path that belong to \(\mu _1\) and adding edges on the path that do not belong to \(\mu _1\).
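Since matchings and paths are simply sets of edges, augmenting along an augmenting path is exactly the symmetric difference of the two edge sets. A minimal sketch, with illustrative vertex names:

```python
# Augmenting along an augmenting path via the symmetric difference:
# matched edges on the path are removed, unmatched ones are added,
# so the matching grows by one edge.

def augment(matching, path_edges):
    """Return matching XOR path_edges (both are sets of (agent, object) pairs)."""
    return matching ^ path_edges

# mu1 matches agent a1 to o1; the path <a2, o1, a1, o2> alternates between
# edges not in mu1 and edges in mu1, and starts/ends at unmatched vertices.
mu1 = {("a1", "o1")}
path = {("a2", "o1"), ("a1", "o1"), ("a1", "o2")}
mu2 = augment(mu1, path)
print(sorted(mu2))   # [('a1', 'o2'), ('a2', 'o1')] -- both agents now matched
```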
A matching \(\mu \) is Pareto optimal if there is no other matching under which some agent is better off while none is worse off. Formally, \(\mu \) is Pareto optimal if there is no other matching \(\mu '\) such that (1) \(\mu '(i) \succeq _{i} \mu (i)\) for all \(i\in N\), and (2) \(\mu '(i') \succ _{i'} \mu (i')\) for some agent \(i' \in N\). Manlove [24, Sect. 6.2.2.1] gave a characterisation of Pareto optimal matchings in instances of HA (potentially with ties) in terms of a number of graphtheoretic structures, which we will now define.
An alternating path coalition w.r.t. \(\mu \) comprises a sequence \(P=\left\langle i_0,i_1,\ldots , i_{r-1}, o_k\right\rangle \), for some \(r\ge 1\), where \(i_j\) is a matched agent (\(0\le j \le r-1\)) and \(o_k\) is an unmatched object. If \(r=1\) then \(i_0\) strictly prefers \(o_k\) to \(\mu (i_0)\). Otherwise, if \(r\ge 2\), \(i_0\) strictly prefers \(\mu (i_1)\) to \(\mu (i_0)\), \(i_j\) weakly prefers \(\mu (i_{j+1})\) to \(\mu (i_j)\) (\(1\le j \le r-2\)), and \(i_{r-1}\) weakly prefers \(o_k\) to \(\mu (i_{r-1})\).
An augmenting path coalition w.r.t. \(\mu \) comprises a sequence \(P=\left\langle i_0,i_1,\ldots , i_{r-1}, o_k\right\rangle \), for some \(r\ge 1\), where \(i_0\) is an unmatched agent and \(o_k\) is an unmatched object. If \(r=1\) then \(i_0\) finds \(o_k\) acceptable. Otherwise, if \(r\ge 2\), \(i_j\) is a matched agent (\(1\le j \le r-1\)), \(i_0\) finds \(\mu (i_1)\) acceptable, \(i_j\) weakly prefers \(\mu (i_{j+1})\) to \(\mu (i_j)\) (\(1\le j \le r-2\)), and \(i_{r-1}\) weakly prefers \(o_k\) to \(\mu (i_{r-1})\).
A cyclic coalition w.r.t. \(\mu \) comprises a sequence of applicants \(P=\left\langle i_0,i_1,\ldots , i_{r-1} \right\rangle \), for some \(r\ge 2\), all matched in \(\mu \), such that \(i_j\) weakly prefers \(\mu (i_{j+1})\) to \(\mu (i_j)\) for each j (\(0\le j \le r-1\)), and \(i_j\) strictly prefers \(\mu (i_{j+1})\) to \(\mu (i_j)\) for some j (\(0\le j \le r-1\)) (all subscripts are taken modulo r when reasoning about cyclic coalitions).
Proposition 2.1
([24]) Given an instance \(I\) of HA and a matching \(\mu \) in \(I\), \(\mu \) is Pareto optimal if and only if \(\mu \) admits no alternating path coalition, no augmenting path coalition, and no cyclic coalition.
Let \(\mathcal {M}\) denote the set of all possible matchings. A deterministic mechanism \(\phi \) maps an instance of HA to a matching, i.e., \(\phi : \mathcal {I}\rightarrow \mathcal {M}\). Let \(R: \mathcal {M}\rightarrow [0,1]\) denote a distribution over possible matchings (which we also call a random matching); i.e., \(\sum _{\mu \in \mathcal {M}}R(\mu ) = 1\). A randomised mechanism \(\phi \) is a mapping from \(\mathcal {I}\) to a distribution over possible matchings, i.e., \(\phi :\mathcal {I}\rightarrow Rand(\mathcal {M})\), where \(Rand(\mathcal {M})\) is the set of all random matchings. A deterministic mechanism is Pareto optimal if it always returns a Pareto optimal matching. A randomised mechanism is Pareto optimal if it always returns a distribution over Pareto optimal matchings.
Agents’ preferences are private knowledge and an agent may prefer not to reveal his preferences truthfully if it is not in his best interests, for a given mechanism. A deterministic mechanism \(\phi \) is dominant strategy truthful (or truthful) if agents always find it in their best interests to declare their preferences truthfully, no matter what other agents declare, i.e., for every joint preference list profile L, for every agent \(i\), and for every possible declared preference list \(L'(i)\) for \(i\), \(\phi (L(i),L(-i))\succeq _{i}\phi (L'(i),L(-i))\). A randomised mechanism \(\phi \) is universally truthful if it is a probability distribution over deterministic truthful mechanisms.
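On a tiny instance the dominant-strategy condition can be verified by brute force. The sketch below checks serial dictatorship with a fixed agent order against every possible strict misreport; the two-object instance and all helper names are illustrative assumptions:

```python
# Brute-force check of truthfulness: for each agent, enumerate every strict
# declared list and verify that lying never yields a strictly better object
# (evaluated by the agent's TRUE preferences).
from itertools import permutations, chain, combinations

OBJECTS = ["o1", "o2"]

def sdm(declared):
    """Serial dictatorship over declared strict lists, agents served in order."""
    available, matching = set(OBJECTS), {}
    for i, prefs in enumerate(declared):
        for o in prefs:
            if o in available:
                matching[i] = o
                available.remove(o)
                break
    return matching

def rank(true_prefs, obj):
    """Position in the true list; unmatched or unacceptable ranks worst."""
    return true_prefs.index(obj) if obj in true_prefs else len(OBJECTS) + 1

def all_strict_lists():
    subsets = chain.from_iterable(
        combinations(OBJECTS, r) for r in range(len(OBJECTS) + 1))
    return [list(p) for s in subsets for p in permutations(s)]

truth = [["o1", "o2"], ["o1", "o2"]]          # both agents prefer o1 to o2
for i in range(len(truth)):
    honest = sdm(truth).get(i)
    for lie in all_strict_lists():
        declared = list(truth)
        declared[i] = lie
        lied = sdm(declared).get(i)
        assert rank(truth[i], honest) <= rank(truth[i], lied)
print("no profitable misreport found on this instance")
```

This is of course only a spot check on one instance, not a proof; the general argument is the greedy one sketched in Sect. 1.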
Denote by \(w(\phi (I))\) the (expected) weight of the (random) matching generated by mechanism \(\phi \) on instance \(I\in \mathcal {I}\), and by \(w(I)\) the weight of a maximum weight matching in \(I\). The approximation ratio of \(\phi \) is then defined as \(\max _{I\in \mathcal {I}}\frac{w(I)}{w(\phi (I))}\). Note that a maximum weight matching has the same weight as a maximum weight Pareto optimal matching, as the following proposition shows.
Proposition 2.2
Given an instance \(I\) of HA, a maximum weight matching has the same weight as a maximum weight Pareto optimal matching.
Proof
We provide a procedure for transforming a maximum weight matching \(\mu \) in \(I\) to a Pareto optimal matching \(\mu '\) in \(I\) with the same weight.
Let \(G_0\) be the underlying graph for \(I\) and let \(G'_0\) be the subgraph of \(G_0\) induced by \(N'\cup O\), where \(N'\) is the set of agents who are matched in \(\mu \). Define the cost of each edge \((i,o_j)\) in \(G'_0\) to be \(rank(i,o_j)\). Find a maximum cardinality matching \(\mu '\) of minimum cost in \(G'_0\). It is easy to see that \(\mu \) and \(\mu '\) are of the same cardinality and have the same weight, as they each match all agents in \(N'\). It remains to show that \(\mu '\) is Pareto optimal in \(I\).
If \(\mu '\) is not Pareto optimal in \(I\) then by Proposition 2.1, \(\mu '\) admits a coalition C that is either an alternating path coalition, or an augmenting path coalition, or a cyclic coalition. If C is an augmenting path coalition then \(\mu '\oplus C\) has larger weight than \(\mu \), a contradiction as \(\mu \) is a maximum weight matching. Hence C is an alternating path coalition or a cyclic coalition. In either case let \(\mu ''=\mu '\oplus C\). Then \(|\mu ''|=|\mu '|\) but the cost of \(\mu ''\) is less than the cost of \(\mu '\), a contradiction as \(\mu '\) is a maximum cardinality minimum cost matching in \(G'_0\). Hence \(\mu '\) is Pareto optimal in \(I\). \(\square \)
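The statement of Proposition 2.2 can be sanity-checked by exhaustive enumeration on a tiny instance. The encoding and the instance below are illustrative assumptions, and Pareto optimality is tested here by comparing against all matchings rather than via Proposition 2.1:

```python
# Exhaustively check, on a small instance with ties, that the maximum
# matching weight is attained by some Pareto optimal matching.
from itertools import product

# ranks[i][o]: lower is better; objects absent from the dict are unacceptable.
ranks = [{"o1": 1, "o2": 1},      # agent 0: indifferent between o1 and o2
         {"o1": 1}]               # agent 1: only o1 acceptable
weights = [1, 1]
objects = ["o1", "o2"]
WORST = len(objects) + 1          # rank assigned to being unmatched

def all_matchings():
    """Every assignment of distinct acceptable objects (or None) to agents."""
    for assign in product([None] + objects, repeat=len(ranks)):
        used = [o for o in assign if o is not None]
        if len(used) == len(set(used)) and all(
                o is None or o in ranks[i] for i, o in enumerate(assign)):
            yield assign

def rank_of(i, o):
    return ranks[i][o] if o is not None else WORST

def dominates(mu2, mu1):
    """mu2 Pareto-dominates mu1: nobody worse off, someone strictly better."""
    n = len(ranks)
    no_worse = all(rank_of(i, mu2[i]) <= rank_of(i, mu1[i]) for i in range(n))
    better = any(rank_of(i, mu2[i]) < rank_of(i, mu1[i]) for i in range(n))
    return no_worse and better

def weight(mu):
    return sum(weights[i] for i, o in enumerate(mu) if o is not None)

matchings = list(all_matchings())
pareto = [m for m in matchings if not any(dominates(m2, m) for m2 in matchings)]
print(max(map(weight, matchings)), max(map(weight, pareto)))  # 2 2
```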
We now give a straightforward lower bound for the approximation ratio of any deterministic truthful mechanism for HA with strict preferences.
Theorem 2.3
No deterministic truthful mechanism for HA can achieve approximation ratio better than 2. The result holds even for strict preferences.
Proof
Consider an HA instance \(I\) with two agents, \(1\) and \(2\), and two objects, \(o_1\) and \(o_2\). Assume that both agents have weight 1 and strictly prefer \(o_1\) to \(o_2\). Then \(I\) admits two matchings of size (weight) 2. Assume, for a contradiction, that there exists a truthful mechanism \(\phi \) with approximation ratio strictly smaller than 2. Then in \(I\), \(\phi \) must pick one of the two matchings of size 2. Assume, without loss of generality, that \(\phi \) picks \(\mu =\{(1,o_2),(2,o_1)\}\). Now, assume that agent \(1\) misrepresents his acceptable objects and declares \(o_1\) as the only object acceptable to him. Let \(I'\) denote the instance of HA so obtained. As \(\phi \) is truthful, when executed on \(I'\) it must not assign \(o_1\) to \(1\), or else \(1\) finds it in his best interests to misrepresent his preferences as he would strictly prefer his allocated object in \(I'\) to his allocated object in \(I\). Hence \(\phi \) must return a matching of size at most 1 (by assigning an object to agent \(2\)) when applied to \(I'\). However, \(I'\) admits a matching of size 2, namely \(\mu ' = \{(1,o_1),(2,o_2)\}\). Therefore the approximation ratio of \(\phi \) is at least 2, a contradiction. \(\square \)
Corollary 2.4
No deterministic truthful Pareto optimal mechanism for HA can achieve approximation ratio better than 2. The result holds even for strict preferences.
As mentioned in Sect. 1, the upper bound of 2 is achievable via SDM for HA with strict preferences [2]. If weights and ties exist, simply ordering the agents in decreasing order of their weights and running SDMT1 (see Algorithm 1 in Sect. 3) or SDMT2 (see Algorithm 2 in Sect. 4) gives a deterministic truthful and Pareto optimal mechanism with approximation ratio 2 (Theorem 3.6 in Sect. 3). This resolves the problem for deterministic mechanisms and motivates looking into relaxing our requirements. In the following sections we look for randomised truthful mechanisms that construct ‘large’ weight Pareto optimal matchings.
3 First Truthful Mechanism: SDMT1
3.1 Introduction
When preferences are strict, SDM produces a Pareto optimal matching. However when indifference is allowed, finding an arbitrary Pareto optimal matching is not as straightforward as in the case of strict preferences, as illustrated via an example in Sect. 1.
In Sect. 3.2 we introduce SDMT1, Serial Dictatorship Mechanism with Ties, a mechanism that generalises SDM to the case where agents’ preferences may involve ties. Then in Sect. 3.3, we show that SDMT1 is truthful and is guaranteed to produce a Pareto optimal matching. We further show that SDMT1 is capable of generating any given Pareto optimal matching.
3.2 Mechanism SDMT1
Let \(I=(N,O,L)\) be an instance of HA, and let a fixed order \(\sigma \) of the agents be given. Assume, w.l.o.g., that \(\sigma (i) = i\) for all agents \(i\in N\). The formal description of SDMT1 is given in Algorithm 1; an informal description follows.
Notice that, at any stage of the mechanism, an edge \((i,o)\) belongs to E if and only if either agent \(i\) is matched in \(\mu \) and \(o\simeq _{i} \mu (i)\), or SDMT1 is at phase \(i\) and examining the indifference class to which \(o\) belongs. Therefore, it is fairly straightforward to observe the following.
Observation 3.1
At the end of phase \(i\) of SDMT1, if agent \(i\) is assigned no object then he will be assigned no object when SDMT1 terminates. Otherwise, if \(i\) is provisionally assigned an object \(o\), then he will be allocated an object that he ranks the same as \(o\) in the final matching.
3.3 Properties of SDMT1
Before proceeding to prove our main claim, namely that SDMT1 is truthful and produces a Pareto optimal matching, let us discuss a relevant concept that is both interesting in its own right and useful in the proofs that follow. In practice agents may have priorities and the mechanism designer may wish to ensure that the agents with higher priorities are served before satisfying those with lower priorities. Roth et al. [31] studied this concept under the term priority matchings in the case where each agent’s preference list is one single tie. This work was motivated by the kidney exchange problem in which patients are assigned priorities based on various criteria; e.g., children and hardtomatch patients have higher priorities. Prior to Roth et al. [31], Svensson [35] studied a similar concept under the name queue allocation in a setting similar to ours. We formally define this concept using the terminology strong priority matching, reflecting both the definition in [31] and the fact that preference lists are more general than single ties.
In general, assume that we are given an ordering of the agents \(\sigma =i_{1},\ldots ,i_{n_1}\). However, recall that in this section we are assuming, without loss of generality, that \(i_j=j\), i.e., \(\sigma =1,2,\dots ,n_1\). For each matching \(\mu \), the signature of \(\mu \) w.r.t. \(\sigma \), denoted by \(\rho (\mu ,\sigma )\), is a vector \(\left\langle \rho _{1},\ldots ,\rho _{n_1}\right\rangle \) where for each \({i} \in [n_1]\), \(\rho _i= rank(i,\mu (i))\) if \(i\) is matched under \(\mu \), and \(\rho _i= n_2+1\) otherwise. A matching \(\mu \) is a strong priority matching (SPM) w.r.t. \(\sigma \) if \(\rho (\mu ,\sigma )\) is lexicographically minimum, taken over all matchings \(\mu \). That is, (i) the highest priority agent \(1\) has one of his first-choice objects (assuming \(L(1)\ne \emptyset \)); (ii) subject to (i), there is no matching \(\mu '\) such that \(\mu '(2)\succ _2\mu (2)\), where \(2\) is the agent with the second-highest priority; (iii) subject to (i) and (ii), there is no matching \(\mu ''\) such that \(\mu ''(3)\succ _3\mu (3)\), where \(3\) is the agent with the third-highest priority, etc. It is easy to see that a given HA instance may admit more than one SPM w.r.t. \(\sigma \), but all of them have the same signature. When \(\sigma \) is fixed and known, we simply say that \(\mu \) is an SPM.
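The signature and its lexicographic comparison can be sketched directly; the rank encoding is an illustrative assumption, and the instance is the two-agent example from Sect. 1:

```python
# Signature of a matching w.r.t. the order sigma = 1, 2, ..., n1: agent i
# contributes rank(i, mu(i)) if matched and n2 + 1 otherwise. A strong
# priority matching lexicographically minimises this vector.

N2 = 2  # number of objects

def signature(matching, ranks):
    """ranks[i][o] is agent i's (1-based) rank of object o."""
    return tuple(ranks[i].get(matching.get(i), N2 + 1)
                 for i in range(len(ranks)))

ranks = [{"o1": 1, "o2": 1},   # agent 0: o1 and o2 tied in first place
         {"o1": 1}]            # agent 1: only o1 acceptable

mu_good = {0: "o2", 1: "o1"}   # the unique Pareto optimal matching
mu_bad = {0: "o1"}             # agent 1 left unmatched

print(signature(mu_good, ranks), signature(mu_bad, ranks))  # (1, 1) (1, 3)
assert signature(mu_good, ranks) < signature(mu_bad, ranks)  # tuples compare lexicographically
```

Note how Python's tuple comparison realises exactly the lexicographic order used in the definition of an SPM.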
Theorem 3.2
The matching produced by SDMT1 is a strong priority matching w.r.t. \(\sigma \).
Proof
Let \(\mu _k\) denote the matching at the end of phase k (hence \(\mu _{n_1} = \mu \)). Assume, for a contradiction, that the claim does not hold. Hence \(\mu \) is not an SPM in \(I\). Let \(\mu ^*\) be an SPM in \(I\). Let \(i\) be the first agent in \(\sigma \) (i.e., the lowest-indexed agent) who strictly prefers his partner under \(\mu ^*\) to his partner under \(\mu \), i.e., \(\mu ^*(i)\succ _i\mu (i)\) and \(\mu ^*(j) \simeq _{j} \mu (j)\), \(\forall j < i\) (we denote this fact by D1). Therefore, in phase \(i\) of SDMT1 no augmenting path has been found starting from \((i,o)\), for any object \(o\) such that \(o\succeq _i\mu ^*(i)\) (we denote this fact by D2). Also, it follows from D1 and Observation 3.1 that \(\mu ^*(j) \simeq _j \mu _{i-1}(j)\), \(\forall j < i\) (we denote this fact by D3).
Let \(G^*\) denote the graph G in phase \(i\) during the examination of the indifference class to which \(\mu ^*(i)\) belongs. By D2, \(G^*\) must admit no augmenting path w.r.t. \(\mu _{i-1}\). We show, however, that \(G^*\) admits an augmenting path starting from \(i\). To see this note that, by D1 and D3, and by the construction of edges E, the edges \((j,\mu ^*(j))\) belong to \(G^*\) for all \(j<i\). If \(\mu ^*(i)\) is unmatched in \(\mu _{i-1}\) then \((i,\mu ^*(i))\) constitutes an augmenting path of length 1 in \(G^*\). Otherwise, let \({j_1}\) denote the partner of \(\mu ^*(i)\) under \(\mu _{i-1}\) (note that \(j_1 < i\)). It follows from D1 and D3, and the construction of E, that \({j_1}\) is matched under \(\mu ^*\). If \(\mu ^*({j_1})\) is unmatched under \(\mu _{i-1}\) then we have found an augmenting path of length 3. Otherwise, let \({j_2}\) denote the partner of \(\mu ^*({j_1})\) under \(\mu _{i-1}\) (note that \(j_2 < i\)). The same argument we used for \({j_1}\) can be used for \({j_2}\), resulting in either the discovery of an augmenting path of length 5 or reaching a new agent. We can repeatedly use this argument and each time we either find an augmenting path (and stop) or visit an agent that appears in \(\sigma \) before \(i\). As each agent is assigned at most one object in every matching, and vice versa, the agents \({j_r}\) that we encounter on our search for an augmenting path are all distinct. Therefore, since there are a finite number of agents and objects, we are bound to reach an object \(o\) that is unmatched under \(\mu _{i-1}\), hence exposing an augmenting path in \(G^*\), a contradiction. \(\square \)
Corollary 3.3
The matching produced by SDMT1 is a Pareto optimal matching.
Proof
By Theorem 3.2, SDMT1 produces an SPM. It follows from Theorems 1 and 2 in [35] that any SPM is a Pareto optimal matching. Hence, the matching produced by SDMT1 is a Pareto optimal matching. \(\square \)
SDMT1 is truthful, no matter which augmenting path is selected in each phase of the mechanism, as the next result shows. The proof idea is as follows. Note that when an agent’s turn arrives, SDMT1 assigns him an object from what the algorithm identifies as his “best possible indifference class”; i.e., the topmost indifference class from which he can be assigned an object without harming any previously arrived agent. Then as soon as he is assigned an object, by Observation 3.1, he is guaranteed to be allocated the same object, or one that he equally values, when the algorithm terminates. Hence, as long as we can show that the algorithm correctly identifies these “best possible indifference classes”, it is straightforward to see that no agent can benefit from misreporting. The proof of the next theorem formalises this argument.
Theorem 3.4
The mechanism SDMT1 is truthful.
Proof
Assume, for a contradiction, that the claim does not hold. Let \(i\) be the first agent in \(\sigma \) (i.e., the lowest-indexed agent) who benefits from misrepresenting his preferences and reporting \(L'(i)\) instead of \(L(i)\). Let \(L' = (L'(i),L_{-i})\).
Let \(\mu \) denote the matching returned by SDMT1 on instance \(I=(N,O,L)\), i.e., the instance in which agent \(i\) reports truthfully, and let \(\mu ^*\) denote the matching returned on instance \(I'=(N,O,L')\). Then in I, \(\mu ^*(i)\succ _i\mu (i)\) and \(\mu (j) \succeq _j \mu ^*(j)\), \(\forall j < i\).
By Theorem 3.2, \(\mu \) is an SPM in \(I\), and \(\mu ^*\) is an SPM in \(I'\). Suppose that in \(I\), \(\mu (j) \succ _{j} \mu ^*(j)\), for some \(j < i\). Let k be the smallest integer such that \(\mu (k) \succ _k \mu ^*(k)\) in \(I\). As \(k<i\), for each j (\(1\le j\le k\)), agent j has the same preference list in \(I\) and \(I'\), by construction of \(L'\). But then the restriction of \(\mu \) to agents \(1,\ldots ,k\) is a matching in \(I'\) whose signature agrees with that of \(\mu ^*\) in the first \(k-1\) coordinates and is strictly smaller in the kth, so \(\mu ^*\) cannot be an SPM in \(I'\) after all, a contradiction.
It follows that in I, \(\mu ^*(i)\succ _i\mu (i)\) and \(\mu (j) \simeq _j \mu ^*(j)\), \(\forall j < i\). Since \(i\) strictly prefers \(\mu ^*(i)\) to \(\mu (i)\) under his true preferences, \(\mu ^*(i)\in L(i)\), and hence \(\mu ^*\) is also a matching in \(I\); its signature is then lexicographically smaller than that of \(\mu \), contradicting the fact that \(\mu \) is an SPM in \(I\). \(\square \)
We now show a bound on the time complexity of SDMT1. Let \(\gamma \) denote the size of the largest indifference class for a given instance \(I\).
Theorem 3.5
SDMT1 terminates in time \(O(n_1^2\gamma + m)\).
Proof
For each agent \(i\) matched under \(\mu \), let \(\ell _{i}\) denote the length of the indifference class to which \(\mu (i)\) belongs, and let \(|L(i)|\) denote the length of agent \(i\)’s preference list, \(\forall i\in N\). Searching for an augmenting path in a graph \(G=(V,E)\) can be done in time \(O(|E|)\) using Breadth-First Search (BFS). Hence the search for an augmenting path in each phase \(i\) can be done in time \(O(\ell _1+\ell _2+\cdots +\ell _{i-1}+|L(i)|)\). Therefore SDMT1 terminates in time \(O((n_1-1)\ell _{1}+ (n_1-2)\ell _{2}+\cdots +\ell _{n_1-1} + \sum _{i\in N}|L(i)|)\). However, \(\ell _{i} \le \gamma \), \(\forall i\in N\), therefore \((n_1-1)\ell _{1}+ (n_1-2)\ell _{2}+\cdots +\ell _{n_1-1} + \sum _{i\in N}|L(i)|\le n_1^2\gamma + m\), where m is the number of acceptable (agent, object) pairs. Hence SDMT1 terminates in time \(O(n_1^2\gamma +m)\). \(\square \)
As noted in Sect. 1, in the strict preferences case, any Pareto optimal matching is at least half the size of a maximum size such matching. The same is true in the general case with indifferences, since any Pareto optimal matching is a maximal matching in the underlying bipartite graph \(G_0\) for \(I\), and any maximal matching in \(G_0\) is at least half the size of a maximum matching in \(G_0\) [23]. Hence SDMT1 obviously achieves approximation ratio 2 when we are concerned with the cardinality of the matching. We next show that, when agents are assigned arbitrary weights, SDMT1 achieves the same approximation ratio (relative to a maximum weight Pareto optimal matching) if the agents are ordered in \(\sigma \) in nonincreasing order of their weights.
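The factor-2 claim in the preceding paragraph can be sanity-checked computationally. The sketch below is our own code, not the paper's: a greedy maximal matching is compared against Kuhn's augmenting-path algorithm (standing in for any maximum-matching routine) on small random bipartite instances.

```python
import random

def max_matching(adj):
    # Kuhn's augmenting-path algorithm; adj maps each agent to a list
    # of acceptable objects. Returns the size of a maximum matching.
    match = {}                                # object -> agent
    def try_augment(i, seen):
        for o in adj[i]:
            if o in seen:
                continue
            seen.add(o)
            if o not in match or try_augment(match[o], seen):
                match[o] = i
                return True
        return False
    return sum(try_augment(i, set()) for i in adj)

def greedy_maximal(adj):
    # Each agent grabs his first still-free acceptable object;
    # the result is a maximal matching.
    used, size = set(), 0
    for i, objs in adj.items():
        for o in objs:
            if o not in used:
                used.add(o)
                size += 1
                break
    return size

# Any maximal matching has at least half the size of a maximum one.
random.seed(0)
for _ in range(200):
    adj = {i: random.sample(range(5), random.randint(0, 3)) for i in range(5)}
    assert 2 * greedy_maximal(adj) >= max_matching(adj)
```

The greedy pass here is only one way of producing a maximal matching; the same inequality holds for the matching output by any Pareto optimal mechanism, which is what the text uses.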
Theorem 3.6
SDMT1 achieves an approximation ratio of 2 relative to the weight of a maximum weight Pareto optimal matching, if the agents are ordered in \(\sigma \) in nonincreasing order of their weights.
Proof
Given an HA instance \(I\), let \(\mu \) be the matching produced by SDMT1 and let \(\mu '\) be a maximum weight Pareto optimal matching in \(I\). List the agents matched under each of these matchings in nonincreasing order of weight. Let \(i_1,\ldots , i_k\) denote such an order under \(\mu \), and let \(i'_1,\ldots ,i'_l\) denote such an order under \(\mu '\).
Take any agent \(i'_r\) who is matched under \(\mu '\), say to object \(o\), but not matched under \(\mu \) (if no such agent exists then \(\mu \) is itself a maximum weight Pareto optimal matching). Note that, as \(\mu \) is Pareto optimal, \(o\) must be matched under \(\mu \), for otherwise \(\mu \cup \{(i'_r,o)\}\) Pareto dominates \(\mu \). As SDMT1 generates an SPM w.r.t. \(\sigma \) (Theorem 3.2) and the agents are listed in nonincreasing order of weight under \(\sigma \), it follows that \(o\) must be allocated in \(\mu \) to an agent \(i_s\) who has at least as large a weight as \(i'_r\) (for otherwise \((\mu \setminus \{(i_s,o)\})\cup \{(i'_r,o)\}\) has a lexicographically smaller signature than \(\mu \), a contradiction).
We claim that \(i_s\) must be matched under \(\mu '\) as well, as otherwise \((\mu '\setminus \{(i'_r,o)\}){\cup }\{(i_s,o)\}\) has a higher weight than \(\mu '\), a contradiction. (Recall that a maximum weight Pareto optimal matching must be a maximum weight matching as well by Proposition 2.2.) Hence we have established that, for each agent \(i'_r\) matched under \(\mu '\) but not matched under \(\mu \), there exists a unique agent \(i_s\), with weight at least as large as that of \(i'_r\), who is matched under \(\mu \). Thus if \(N_1\) is the set of agents matched in \(\mu '\) and \(N_2\) is the set of agents matched in \(\mu \), it follows that \(wt(N_2)\ge wt(N_1\backslash N_2)\), where \(wt(N')\) is the sum of the weights of the agents in \(N'\), for \(N'\subseteq N\). Also \(wt(N_2) = wt(N_2\backslash N_1)+wt(N_2\cap N_1)\ge wt(N_2\cap N_1) = wt(N_1)-wt(N_1\backslash N_2)\ge wt(N_1)-wt(N_2)\), so \(2\,wt(N_2)\ge wt(N_1)\), hence the result. \(\square \)
It is known (see, e.g., [2]) that, in the case of strict preferences, not only can we find a Pareto optimal matching using SDM, but we can also generate all Pareto optimal matchings by executing SDM on all possible permutations of the agents. In other words, given any Pareto optimal matching \(\mu \), there exists an order of the agents such that executing SDM on that order returns \(\mu \). A similar characterisation of Pareto optimal matchings holds in the case of preferences with ties. This is stated by the following result, whose proof is given in the Appendix.
Theorem 3.7
Any Pareto optimal matching can be generated by some execution of SDMT1.
4 Randomised Mechanism with Weights and Ties
In this section we will analyse our mechanism for the weighted version of our problem. Our algorithm in the next section is truthful with respect to agents’ preferences and weights (under the no-overbidding assumption; see also the beginning of Sect. 6) and provides an \(\frac{e}{e-1}\)-approximate Pareto optimal matching. We will show in Sect. 6 that, even if the weights of all agents are the same, our algorithm uses the best possible random strategy – no other such strategy leads to a better than \(\frac{e}{e-1}\)-approximate matching.
4.1 Second Truthful Mechanism: SDMT2
The approximation ratio analysis of the randomised version of SDMT1 is complex, because it requires additional information that is not maintained by SDMT1. For the sake of the analysis, we introduce a variant of SDMT1, called SDMT2. After introducing some terminology we present SDMT2, and then establish the equivalence between SDMT1 and SDMT2. Pareto optimality and truthfulness of SDMT2 will then follow from this equivalence and these same two properties of SDMT1. We will prove that the randomised version of SDMT2 is \(\frac{e}{e-1}\)-approximate. By the equivalence of the two algorithms, the randomised version of SDMT1 has the same approximation ratio.
Let \(H=\left\{ a\in L(i) \mid \text {there is a (directed) path from}\, i \, \text {to a labelled agent in}\,\, TG(i,a,S)\right\} .\) Note that, as labelled agents do not point to any agents, no intermediate agent on a directed path is labelled. Note that H may be empty, and it can be found, for instance,^{6} by breadth-first search (BFS). If \(H\ne \emptyset \), let \(\ell \) be the highest indifference class of i with \(H\cap C^i_{\ell }\ne \emptyset \). Define \(\max TG(i,L(i),S)\) to be the highest order object in \(H\cap C^i_{\ell }\) (e.g., in Fig. 1, \(\max TG(7,\{d,e,g,h\},[6])=g\)). We also explicitly define \(\max TG(i,L(i),S) =\emptyset \) if \(H = \emptyset \). If \(\max TG(i,L(i),S)\ne \emptyset \), then there is a path from i to a labelled agent in TG(i, a, S), which can be found by BFS. Suppose the path is \((i_0,i_1,i_2,\ldots ,i_k)\), where \(i_0=i\) and only \(i_k\) is labelled. Now denote by \({ Trading}(i,a,S)\) the procedure that allocates the object owned by \(i_{s+1}\) to \(i_s\), for \(s=0,1,\ldots ,k-1\). Note that \(i_k\) may own more than one object for which \(i_{k-1}\) has pointed to \(i_k\). In this case, the highest order object among such objects is allocated to \(i_{k-1}\). After trading, if \(i_k\) still owns more than one object, keep \(i_k\) labelled; otherwise, unlabel \(i_k\). In Fig. 1, considering procedure \(\textit{Trading}(7,g,[6])\), we note that there are two paths from agent 7 to a labelled agent: (7, 5, 1) and (7, 5, 4, 2). Procedure \(\textit{Trading}(7,g,[6])\) can use either of these two paths. If \(\textit{Trading}(7,g,[6])\) uses the first path, then it allocates g to agent 7 and b to agent 5, since \(b\succ c\), and keeps agent 1 labelled. If procedure \(\textit{Trading}(7,g,[6])\) uses the second path, then it allocates g to agent 7, e to agent 5, and f to agent 4, since \(f\succ p\), and changes agent 2 to unlabelled.
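The path search underlying \(\textit{Trading}(i,a,S)\) is a plain BFS that stops at the first labelled agent reached, after which objects are shifted one step backwards along the path. A minimal sketch, with the trading graph abstracted to adjacency lists over agents and all identifiers ours:

```python
from collections import deque

def find_trading_path(graph, start, labelled):
    # BFS from `start` to the first labelled agent reached.
    # graph: agent -> list of agents he points to (labelled agents
    # point to nobody, matching the text). Returns the path as a list
    # (start, ..., labelled agent), or None if H is empty.
    parent = {start: None}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if u in labelled:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in graph.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

# The Fig. 1 example: two paths from agent 7 to a labelled agent,
# (7, 5, 1) and (7, 5, 4, 2); labelled agents are 1 and 2.
graph = {7: [5], 5: [1, 4], 4: [2]}
path = find_trading_path(graph, 7, {1, 2})
```

With this adjacency structure, BFS happens to return the shorter path (7, 5, 1) first; as the text notes, the trading procedure is free to use either path.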
Observation 4.1
For each agent \(i\), after \(i\)’s turn in the “for loop” of SDMT2, if \(i\) is allocated no object, then he will be allocated no object when SDMT2 terminates. Otherwise, if \(i\) is provisionally allocated some objects in his turn, then in the final matching he will be allocated an object in the same indifference class as his initially allocated objects.
Observation 4.2
For each agent \(i\), after \(i\)’s turn, if \(i\) is allocated an object \(o\in C^{i}_j\), then all the objects in \(\cup _{k=1}^j C^{i}_k\) have been allocated, either to i or to some agent prior to i. Once an object is allocated, it remains allocated until the end of the “for loop”.
Now we establish the equivalence of SDMT1 and SDMT2.
Theorem 4.3
Given the same input, SDMT1 and SDMT2 match the same set of agents. Furthermore, for each matched agent \(i\), the object allocated to \(i\) in SDMT1 is in the same indifference class of i as the object allocated to him in SDMT2. This equivalence between SDMT1 and SDMT2 holds for any fixed common order \(\succ \) of the objects used in SDMT2, and it is also independent of how SDMT2 finds the directed paths from agent i to a labelled agent in the trading graph \(TG(i,L(i),[i-1])\).
Proof
We prove, by induction on \(i\), that the following two properties hold at the end of each step \(i\) of the two mechanisms:
 1.
for each agent \(k \le i-1\), the objects allocated to k by SDMT1 and SDMT2 are in the same indifference class of k (if one of them allocates no object to k, so does the other),
 2.
for each \(\ell \le n_2\), and \(a\in C^i_{\ell }\), there is an augmenting path starting from (i, a) in SDMT1 if and only if a is unallocated in SDMT2 or there is a path from i to a labelled agent in \(TG(i,a,[i-1])\) in SDMT2.
For the proof of the induction step, suppose properties 1 and 2 are true for all the steps \(k \le i-1\); we now prove that they are true for step i. For property 1, by the inductive hypothesis, property 1 holds for any \(k \le i-2\). Since property 2 holds for agent \(i-1\) by the inductive hypothesis, the objects allocated to agent \(i-1\) in SDMT1 and SDMT2 will be in the same indifference class; thus, property 1 holds for step i. Now property 2 will be proved true for agent i, for each \(\ell \le n_2\), and \(a\in C^i_{\ell }\):
For the \(\Rightarrow \) direction, suppose there is an augmenting path starting from (i, a) in SDMT1; if a is unallocated in SDMT2 we are done, so suppose a has been allocated previously in SDMT2. Suppose the new matching generated in SDMT1 due to the augmenting path is \(\{(k,\mu (k)): k\le i\}\), where \(\mu (i)=a\). By property 1 of the inductive hypothesis and Observation 4.2, all the objects in \(\{\mu (k): k\le i\}\) have been allocated to some agents \(k \le i-1\) in SDMT2. For an object b, we use \(\nu ^{-1}(b)\) to denote the agent to whom b is allocated in SDMT2. Now consider the following path in \(TG(i,a,[i-1])\): let \(i_1=\nu ^{-1}(\mu (i))\), and if \(i_1\) is labelled then we are done; otherwise, let \(i_2=\nu ^{-1}(\mu (i_1))\). If \(i_2\) is labelled, then we are done; otherwise continue this process. Eventually this process reaches a labelled agent among the agents in \([i-1]\). This is true because of the pigeonhole principle: i objects from \(\{\mu (k): k \le i\}\) are allocated in SDMT2 to \(i-1\) agents in \([i-1]\), so some agent owns more than one object, and such an agent is labelled.
For the \(\Leftarrow \) direction, suppose now that a is unallocated or there is a path from i to a labelled agent in \(TG(i,a,[i-1])\) in SDMT2. Suppose a is allocated and there is a path from i to a labelled agent in \(TG(i,a,[i-1])\). Then by \(\textit{Trading}(i,a,[i-1])\), we can make all the agents \(k\le i\) allocated at least one object, with i allocated a. This defines an allocation of (sets of) objects to agents \(k\le i\) in SDMT2. Let us now select any matching using this allocation, e.g., \(M=\{(k,\nu (k)): k\le i\}\), where \(\nu (i)=a\) (we can also select such a matching if a is unallocated in SDMT2). For instance, matching \(\nu \) can assign the highest order object to each agent \(k\le i-1\) from the current set of objects allocated to k, and assign object a to agent i. Suppose the matching generated after step \(i-1\) in SDMT1 is \(M'=\{(k,\mu (k)): k\le i-1\}\). By property 1 of the inductive hypothesis, we know \(\mu (k)\) and \(\nu (k)\) are in the same indifference class of agent k, for any \(k\in [i-1]\). Now consider \(M \oplus M'\), which consists of alternating paths and cycles. The connected component of \(M \oplus M'\) that contains \((i,\nu (i))\) must be an odd length alternating path in \(M \oplus M'\) w.r.t. \(M'\), implying an augmenting path starting from (i, a) in SDMT1. The argument showing that the connected component that contains \((i,\nu (i))\) must be an odd length alternating path is as follows. If \(\nu (i)=a\) is unallocated in SDMT1, then \((i,\nu (i))\) is an odd length alternating path. Otherwise, let \(i_1=\mu ^{-1}(\nu (i))\), and consider whether \(\nu (i_1)\) is allocated or not in SDMT1. If not, we get an odd length path \((i,\nu (i),i_1,\nu (i_1))\). Otherwise continue the search: let \(i_2=\mu ^{-1}(\nu (i_1))\), and consider whether \(\nu (i_2)\) is allocated or not in SDMT1. If not, we get an odd length path \((i,\nu (i),i_1,\nu (i_1),i_2,\nu (i_2))\), and so on. 
Finally, we will get an odd length alternating path starting from \((i,\nu (i))=(i,a)\) w.r.t. \(M'\), which is indeed an augmenting path starting from (i, a) in SDMT1. This concludes the proof of the induction step. \(\square \)
It is easy to see that both SDMT1 and SDMT2 reduce to SDM if all agents have strict preferences over objects.
Theorem 4.4
SDMT2 is truthful, Pareto optimal, and terminates in \(O(n_1^2\gamma +m)\) running time.
Proof
The first two properties follow from the equivalence between SDMT1 and SDMT2 (Theorem 4.3) and the Pareto optimality (Corollary 3.3) and truthfulness (Theorem 3.4) of SDMT1. It remains to establish the running time of SDMT2.
By the previous analysis given in the proof of Theorem 3.5, in each iteration i of the “for loop”, the running time is \(O(|L(i)|+(i-1)\gamma )\), where \(|L(i)|\) is the length of agent \(i\)’s preference list. Summing over \(i\in [n_1]\), we obtain that the running time of SDMT2 is \(O(m+n_1^2\gamma )\). \(\square \)
4.2 Randomised Mechanism
When preferences are strict, Algorithm 3 reduces to a variant of RSDM that has been used in weighted online bipartite matching with approximation ratio \(\frac{e}{e-1}\) (see [3] and [16]). Our analysis of Algorithm 3 is a nontrivial extension of the primal-dual analysis from [16] to the case where agents’ preferences may involve ties. Before analysing the approximation ratio, we will argue about the universal truthfulness of Algorithm 3 when agents’ preferences are private and they in addition have weights.
If the weights are public, Algorithm 3 is universally truthful and Pareto optimal. This is because it chooses a random order of the agents, given the weights, and then runs SDMT2 according to this order. It follows by inspection of SDMT2 that, if the order of the other agents is given, an agent can only benefit (weakly) from appearing earlier in this order. Then it is not difficult to see that if the weights are private, and under the assumption that no agent is allowed to bid over his private weight (the so-called no-overbidding assumption – see the beginning of Sect. 6), Algorithm 3 is still universally truthful in the sense that no agent will lie about his preferences or weight.
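The randomisation step can be sketched as follows. We assume here, for illustration only, that each \(Y_i\) is drawn uniformly from \([0,1]\) and that \(g(y)=e^{y-1}\), the perturbation function standard in the online-matching literature; the paper fixes \(g\) elsewhere, so treat both as assumptions. Agents are then processed in nonincreasing order of the perturbed weight \(w_i(1-g(Y_i))\), consistent with the comparison used later in the proof of the Dominance Lemma.

```python
import math
import random

def g(y):
    # Illustrative assumption: g(y) = e^{y-1}, strictly increasing on [0, 1].
    return math.exp(y - 1)

def random_order(weights, rng=random):
    # Draw Y_i uniformly from [0, 1] for each agent and sort the agents
    # in nonincreasing order of the perturbed weight w_i * (1 - g(Y_i)).
    # SDMT2 would then be run on the resulting order.
    ys = {i: rng.random() for i in weights}
    order = sorted(weights, key=lambda i: -weights[i] * (1 - g(ys[i])))
    return order, ys

random.seed(42)
order, ys = random_order({1: 3.0, 2: 1.0, 3: 2.0})
```

Note that overbidding a weight can only move an agent earlier in the order, which is exactly what the no-overbidding assumption rules out.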
Theorem 4.5
Algorithm 3 is universally truthful, even if the weights and preference lists of the agents are their private knowledge, assuming that no agent can overbid his weight.
Proof
Algorithm 3 is a distribution over deterministic mechanisms due to the selection of the random variables \(Y_i\). For each deterministic mechanism (i.e., SDMT2 when \(Y_i\), \(i\in N\), is fixed), we prove that it is truthful with respect to weights and preference lists. Let us denote by \(\phi \) the mechanism of SDMT2 when \(Y_i\), \(i\in N\), is fixed. It is not difficult to see that for any (W, L), \(w'_i\le w_i\) and \(L'(i)\), \(i\in N\), we have \(\phi _i(W,L)\succeq _i\phi _i((w'_i,w_{-i}),L)\succeq _i\phi _i((w'_i,w_{-i}),(L'(i),L_{-i}))\). The first relation in this chain follows from the fact that the order of i when i bids \(w_i\) is at least as good as his order when he bids \(w'_i\). The second relation in this chain follows from the truthfulness of SDMT2 when weights are public. \(\square \)
5 Analysis of the Approximation Ratio
To gain some high-level intuition behind our extension from strict preferences to preferences with ties, we highlight here the similarities and differences between our problem and that of online bipartite matching. Our problem with strict preferences and without weights is closely related to online bipartite matching.^{7} If each agent in our problem ranks his desired objects in the order that precisely follows the arrival order of objects in the online bipartite matching, the two problems are equivalent. Therefore, we extend the analysis of this particular setting, where each agent’s preference list is a sublist of a global preference list, to the general case where agents’ preferences are not constrained and may involve ties, and furthermore agents may have weights.
Lemma 5.1
Suppose that:
 1.
for some universal constant F, \(P\ge F\cdot D\), always, and
 2.
the expectation of the randomised dual variables forms a feasible dual solution, that is, \(\mathbb {E}(\alpha _i)\) and \(\mathbb {E}(\beta _a)\) are dual feasible.
Then \(\mathbb {E}(P)\ge F\cdot \)OPT.
Proof
Since \(P\ge F\cdot D\) always, taking expectations, \(\mathbb {E}(P)\ge F\cdot \mathbb {E}(D)\). The cost of the dual solution obtained by taking expectations of the dual random variables is \(\mathbb {E}(D)\), and this solution is dual feasible; therefore, by weak LP duality, \(\mathbb {E}(D)\ge \) OPT. Hence, \(\mathbb {E}(P)\ge F\cdot \)OPT. \(\square \)
Hence, to satisfy the conditions of Lemma 5.1, we need to show that the expectation of the dual solution \(\{\alpha _i(\vec {Y}), i\in N, \beta _a(\vec {Y}), a\in O\}\) is feasible for the dual LP, implying that the approximation ratio of Algorithm 3 is at most \(\frac{1}{F}=\frac{e}{e-1}\). The main technical difficulty lies in proving the dominance lemma and the monotonicity lemma (see Lemmas 5.4 and 5.6). To prove these two lemmas, for any fixed agent i, and any fixed object \(a \in O\), we define a threshold, denoted by \(\theta =\theta ^i_a\), of the random variable \(Y_i\), which specifies whether agent i will get matched (see Lemma 5.4). This threshold will depend on the other agents’ values \(Y_{-i}\). For an agent with strict preferences, such a threshold is the same as that defined in the online bipartite matching problem. However, in the presence of ties, the same definition of the threshold does not work. We show how to define such a threshold for our algorithm.
 1.
If a is unmatched in \(ALG_{-i}\), let \(\theta =1\).
 2.
Otherwise, suppose that a is matched in \(ALG_{-i}\) to some agent \(i'\). Then consider the allocations just after the “for loop” in SDMT2 within \(ALG_{-i}\) has terminated.
If \(i'\) is labelled, set \(\theta =1\).
 3.
Otherwise, suppose \(a\in C^{i'}_j\) and construct the trading graph \(TG(i',C^{i'}_j\backslash \{a\},[n_1]\backslash \{i\})\) from all the objects in \(C^{i'}_j\) other than a (note that \(\sigma ([n_1-1])=[n_1]\backslash \{i\}\)). Recall that graph \(TG(i',C^{i'}_j\backslash \{a\},[n_1]\backslash \{i\})\) contains directed paths to all agents who can potentially provide an object for \(i'\) to trade without affecting any other agent.
If there is a path in \(TG(i',C^{i'}_j\backslash \{a\},[n_1]\backslash \{i\})\) from \(i'\) to a labelled agent, set \(\theta =1\).
 4.
Otherwise, define
\(i''= \arg \min _{\ell } \{w_{\ell }(1-g(Y_{\ell })) : \text { there is a path from}\,\, i' \,\, \text {to}\,\, \ell \,\, \text {in} \,\, TG(i',C^{i'}_j\backslash \{a\},[n_1]\backslash \{i\}) \}\)
Note: If the index \(\ell \) with minimum value of \(w_{\ell }(1-g(Y_{\ell }))\) is not unique, we take for \(i''\) the largest such index. Also, observe that either \(i'=i''\) or agent \(i'\) is before \(i''\) with respect to order \(\sigma \).
If \(w_i(1-g(y))=w_{i''}(1-g(Y_{i''}))\) has a solution \(y \in [0,1]\), define \(\theta \) to be this solution.
(g(y) is strictly increasing, so if there is a solution, it is unique.)
 5.
Otherwise define \(\theta \) to be 0.
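Case 4 above asks for the solution \(y\in [0,1]\) of \(w_i(1-g(y))=w_{i''}(1-g(Y_{i''}))\). Under the illustrative assumption \(g(y)=e^{y-1}\) (not fixed in this excerpt), the equation has the closed form \(y = 1 + \ln (1 - c/w_i)\), where \(c\) denotes the right-hand side, and case 5 (\(\theta =0\)) is triggered exactly when this value falls outside \([0,1]\). A sketch, with the function name our own:

```python
import math

def theta_case4(w_i, c):
    # Solve w_i * (1 - g(y)) = c for y in [0, 1], assuming g(y) = e^{y-1}.
    # A solution in [0, 1] exists iff 0 <= c <= w_i * (1 - 1/e);
    # otherwise case 5 applies and theta = 0.
    if c < 0 or c > w_i * (1 - math.exp(-1)):
        return 0.0
    return 1.0 + math.log(1.0 - c / w_i)

# Since g is strictly increasing, the solution, when it exists, is unique:
# e.g. with w_i = 2 and c = 2 * (1 - g(0.5)), the threshold is y = 0.5.
t = theta_case4(2.0, 2.0 * (1 - math.exp(-0.5)))
```

Note that \(c=w_{i''}(1-g(Y_{i''}))\ge 0\) whenever \(Y_{i''}\in [0,1]\), so under this assumption only the upper bound can fail.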
In our further analysis, we will need the following notion of a frozen agent or object.
Definition 5.2
We say an agent (respectively, an object) is frozen if the allocation of this agent (respectively, object) remains the same until the termination of SDMT2. We also say a trading graph is frozen if all of its agents are frozen.
Observation 5.3
Assume that agent \(i\) is unmatched in his turn in the “for loop” of ALG. Suppose \(\tau (u)=i\), which means that \(i\) selects his object in the uth iteration of the “for loop”. Then at the end of the kth iteration of the “for loop”, for every \(k\ge u\), there is no path from \(i\) to a labelled agent in \(TG(i,L(i),\tau ([k]))\), meaning that this graph is frozen.
Proof
By SDMT2, we know that \({O}_u\cap L(\tau (u))={O}_u\cap L(i)=\emptyset \), which means that all the objects in L(i) have been allocated to agents \(\tau ([u-1])\). Since \(\tau (u)\) is unmatched, there is no path from \(\tau (u)\) to any labelled agent in \(TG(\tau (u),L(\tau (u)),\tau ([u-1]))\). Let \(S\subseteq \tau ([u-1])\) be the set of agents reachable from \(\tau (u)\) in \(TG(\tau (u),L(\tau (u)),\tau ([u-1]))\). Clearly, each agent in S is unlabelled. In fact, every agent in S is frozen. Therefore, any path through i after the uth iteration will reach an unlabelled agent. \(\square \)
The following two properties (dominance and monotonicity) are well known for agents with strict preference orderings. We generalise them to agents with indifferences. The difficulty of proving both the dominance and monotonicity lemmas (Lemmas 5.4 and 5.6) lies in case 4 (in the definition of the threshold \(\theta \)). This is our main technical contribution as compared to the analysis in [16].
Recall that \(\tau \) (\(\sigma \), resp.) is the order of agents under the execution of ALG (\(ALG_{-i}\), resp.). We first discuss the intuition behind case 4 in the context of the Dominance Lemma (Lemma 5.4). Note that in this case there is a path from \(i'\) to \(i''\) in \(TG(i',C^{i'}_j\backslash \{a\},[n_1]\backslash \{i\})\) and agent \(i''\) is unlabelled. We will prove the Dominance Lemma by contradiction, using the following two main steps. Indeed, let us assume towards a contradiction (see the statement of Lemma 5.4) that \(Y_i < \theta \) and i is not matched in ALG. Then the outcome of ALG is the same as that of \(ALG_{-i}\) for all the other agents (except agent i). Suppose \(\sigma (u)=i''\) in \(ALG_{-i}\); then \(\tau (u+1)=i''\) in ALG under case 4. Based on the fact that the outcomes of ALG and \(ALG_{-i}\) are the same (for all agents except agent i), first, we prove that either \(i'\) is labelled or there is a path, let us call it \(P_1\), from \(i'\) to a labelled agent in \(TG(i',C^{i'}_j,\tau ([u]))\) at the end of the uth iteration of the “for loop” in ALG. Secondly, due to the above property, we argue that there is a path, let us call it \(P_2\), from i to a labelled agent in \(TG(i,a,\tau ([u]))\) at the end of the uth iteration of the “for loop” in ALG, contradicting Observation 5.3; thus i will be matched. Path \(P_2\) is constructed by the concatenation of arc \((i, i')\) and path \(P_1\), or the concatenation of arc \((i,i''')\), for some \(i'''\) on path \(P_1\), and the rest of path \(P_1\). The existence of \(P_1\) is proved by a careful analysis of the structure of frozen subgraphs of the trading graph as the algorithm proceeds; the details can be found in the proof of Lemma 5.4.
Lemma 5.4
(Dominance Lemma) Given \(Y_{-i}\), i gets matched (to some object) if \(Y_i<\theta \).
Proof

Case 1 (Corresponding to case 1 in the definition of threshold \(\theta \).) If a is unmatched in \(ALG_{-i}\), then \(\theta =1\). Suppose agent i is unmatched in ALG; then procedure ALG is the same as \(ALG_{-i}\) for all the other agents except i. But then a is always available to agent i, meaning a will be matched to agent i by the process of SDMT2, a contradiction.

Case 2 If a is matched to \(i'\) in \(ALG_{-i}\):

Case 2(i) (Corresponding to cases 2 and 3 in the definition of threshold \(\theta \).) If \(i'\) is labelled or if there is a path from \(i'\) to a labelled agent in \(TG(i',C^{i'}_j\backslash \{a\}, [n_1]\backslash \{i\})\), and i is unmatched, then there is a path from i to a labelled agent in \(TG(i,a,[n_1])\). In this case, by \(\textit{Trading}(i,a,[n_1])\), we obtain a Pareto improvement, contradicting the fact that SDMT2 is Pareto optimal.

Case 2(ii) (Corresponding to cases 5 and 4 in the definition of threshold \(\theta \).) The case \(\theta =0\) is trivial, so we consider the case that \(w_i(1-g(y))=w_{i''}(1-g(Y_{i''}))\) has a solution. Suppose that \(\sigma (u)=i''\) in \(ALG_{-i}\). If \(Y_i<\theta \), we know that \(w_i(1-g(Y_i))>w_{i''}(1-g(Y_{i''}))\), meaning that agent i is prior to agent \(i''\) in ALG. Then \(\tau (u+1)=i''\) in ALG. If i is unmatched in ALG, then procedure ALG is the same as \(ALG_{-i}\) for all the other agents except i. Suppose \(i''\) is allocated an object b in ALG. If \(i''=i'\), then \(b=a\), and if in addition \(a\in O_{u+1}\), this means a is always available to all the agents prior to \(\tau (u+1)=i''=i'\). Therefore, a will be available to i when i initially selects objects, implying that i must be allocated some object in his turn, leading to a contradiction. The case \(a\notin O_{u+1}\) is analysed similarly to the case \(i''\ne i'\), so we consider the case that \(i''\ne i'\). Since there is a path from \(i'\) to \(i''\) after the “for loop” in ALG terminates, i is still unmatched because of our assumption towards a contradiction. Suppose that in this path \(\tau (k)\) points to \(\tau (u+1) = i''\), for some \(k\le u\); then b is available to \(\tau (k)\) or b has been allocated before the kth iteration of the “for loop” in ALG. Since finally \(\tau (k)\) gets an object in the same indifference class as b by Observation 4.1, before the \((u+1)\)st iteration of the “for loop” in ALG, b has been allocated by Observation 4.2. Hence, in the \((u+1)\)st iteration of the “for loop” in ALG, \(\tau (u+1)\) gets object b through the trading graph.
Observation 5.5 The trading graph \(TG(i',C^{i'}_j\backslash \{a\},[n_1])\) after the “for loop” in ALG terminates is exactly the same as \(TG(i',C^{i'}_j\backslash \{a\},\tau ([u+1]))\) at the end of the \((u+1)\)st iteration. 
This observation follows from the fact that, otherwise, some agent \(\tau (\ell )\), where \(\ell >u+1\), would be reachable from \(i'\) by the process of SDMT2, contradicting the definition of \(i''\); note that we used here the largest-index tie-breaking rule. Therefore, at the end of the uth iteration of the “for loop” of ALG, suppose that B is the set of objects allocated to \(i'\). Then we have the following three cases (note that ALG is the same as \(ALG_{-i}\) for all the other agents except i):

Case 2(ii)1: \(i'\) is labelled. Then \(a\in B\): otherwise, if \(a \not \in B\), then in the \((u+1)\)st “for loop” iteration of ALG, a will not be allocated to \(i'\) at the end of this \((u+1)\)st iteration by the process of SDMT2. Thus \(a \in B\), and since the trading graph \(TG(i',C^{i'}_j\backslash \{a\},[n_1])\) after the “for loop” in ALG terminates is exactly the same as \(TG(i',C^{i'}_j\backslash \{a\},\tau ([u+1]))\) at the end of the \((u+1)\)st iteration, it follows that \(i'\) will not be matched to a at the end of the “for loop” of ALG, a contradiction.

Case 2(ii)2 \(i'\) is unlabelled and \(B=\{a\}\). Then there is a path from \(i'\) to a labelled agent in \(TG(i',C^{i'}_j\backslash \{a\},\tau ([u]))\). Otherwise, all the agents reachable from \(i'\) are frozen after the uth iteration of the “for loop”. This means that the allocations of those agents are fixed, because all the objects in their indifference class have been allocated by Observation 4.2. Thus, \(TG(i',C^{i'}_j\backslash \{a\},\tau ([u]))\) should be the same as \(TG(i',C^{i'}_j\backslash \{a\},\tau ([u+1]))\). However, since \(\tau (u+1)\) is reachable from \(i'\) in \(TG(i',C^{i'}_j\backslash \{a\},\tau ([u+1]))\), while \(\tau (u+1)\) does not appear in \(TG(i',C^{i'}_j\backslash \{a\},\tau ([u]))\), we reach a contradiction.

Case 2(ii)3 \(i'\) is unlabelled and \(B=\{c\}\), where \(c\ne a\). Then there is a path from \(i'\) to a labelled agent in \(TG(i',\{a\},\tau ([u]))\). Otherwise, a and the agent matched to a are frozen at the end of the uth “for loop” iteration in ALG. This means that a will not be matched to \(i'\) at the end of the “for loop” of ALG, a contradiction. As a result, in each of the above three cases, there is a path from i to a labelled agent in \(TG(i,a,\tau ([u]))\) at the end of the uth “for loop” iteration in ALG. Namely, for case 2(ii)1, i points to \(i'\), which is labelled in \(TG(i,a,\tau ([u]))\); for case 2(ii)2, i points to \(i'\) in \(TG(i,a,\tau ([u]))\) and there is a path from \(i'\) to a labelled agent in \(TG(i',C^{i'}_j\backslash \{a\},\tau ([u]))\subseteq TG(i,a,\tau ([u]))\). Finally, for case 2(ii)3, suppose a is assigned to \(i'''\) at the end of the uth “for loop” iteration; then there is a path from \(i'''\) to a labelled agent in \(TG(i,a,\tau ([u]))\) and i points to \(i'''\) in \(TG(i,a,\tau ([u]))\). This contradicts Observation 5.3. Hence, i must be matched to some object.\(\square \)
Let \(\beta ^{s}_a=\beta _a((s,Y_{-i}))\), where ALG denotes the execution of Algorithm 3 on the original instance, \(Y_{-i}\) is fixed and \(Y_i=s\). Note that \(\beta ^{\theta }_a=w_i(1-g(\theta ))/F\). This last equality is easy to check in cases 1, 2, 3 and 5 of the definition of threshold \(\theta \). In case 4, we note that \(w_i(1-g(\theta ))=w_{i''}(1-g(Y_{i''}))\) for some agent \(i'' \not = i\). Because \(\beta ^{\theta }_a\) is the value of the dual variable for object a when ALG is run with \(Y_i=\theta \), case 4 means that \(\beta ^{\theta }_a=w_i(1-g(\theta ))/F\), even though object a might not necessarily be assigned to agent i (agent i will, however, be assigned some object).
We will now turn our attention to proving the monotonicity lemma.
Lemma 5.6
(Monotonicity Lemma) Given \(Y_{-i}\), for all choices of \(Y_i\), \(\beta ^{Y_i}_a\ge \beta ^{\theta }_a\).

Firstly, we prove in Claim 5.7 below that no agent, except i, is allocated a better object in ALG compared to \(ALG_{-i}\). The argument is by contradiction: suppose there exists an agent \(i'''\) who receives a better object in ALG than in \(ALG_{-i}\); then i must be inserted before \(i'''\). Consequently, there exists an agent s prior to \(i'''\) who gets a worse object in ALG than in \(ALG_{-i}\). Based on this fact, and using an alternating-path argument, we prove that there exists a path from s to \(i'''\) in s’s trading graph constructed from a higher indifference class of s (than the indifference class from which s is allocated in ALG) after \(i'''\) is allocated in ALG. This yields a contradiction, since this path cannot exist: the graph for that higher indifference class is frozen.

Secondly, we prove in Claim 5.8 below that if \(i'\) gets a worse object in ALG compared to \(ALG_{-i}\), then a must be allocated to an agent prior to \(i'\), who is in turn prior to \(i''\). The reason is as follows: by Observation 4.2, a must be allocated and frozen before \(i'\) is considered in ALG. Then, if \(i'\) gets an object in ALG in the same indifference class as a, we prove that there exists an agent \(s^*\) prior to \(i''\), with \(\tau (u^*)=s^*\), such that there is a path from \(s^*\) to \(i'\) in \(TG(s^*,C^{s^*}_{j^*},\tau ([u^*]))\) at the end of the \(u^*\)th “for loop” iteration of ALG. Here, \(C^{s^*}_{j^*}\) is the indifference class from which \(s^*\) is allocated an object in \(ALG_{-i}\). As a consequence, by Observation 4.2, since a is allocated to an agent prior to \(s^*\) and all the agents reachable from \(s^*\) in \(TG(s^*,C^{s^*}_{j^*},\tau ([u^*]))\) are frozen, a will finally be allocated to an agent prior to \(s^*\) in ALG. This means that a is allocated to an agent prior to \(i''\).
Proof

Case 1 (Corresponding to case 1 in the definition of threshold \(\theta \).) If a is unmatched in \(ALG_{-i}\), or if a is matched to \(i'\) in \(ALG_{-i}\) and \(i'\) is labelled, or a is matched to \(i'\) in \(ALG_{-i}\) and there is a path from \(i'\) to a labelled agent in \(TG(i',C^{i'}_j\backslash \{a\}, [n_1])\), then \(\theta =1\) and \(\beta ^{\theta }_a=w_i(1-g(\theta ))/F=0\), so \(\beta ^{Y_i}_a\ge \beta ^{\theta }_a=0\).

Case 2 (Corresponding to cases 2 and 3 in the definition of threshold \(\theta \).) Here a is matched to \(i'\) in \(ALG_{-i}\) and there is no path from \(i'\) to a labelled agent in \(TG(i',C^{i'}_j\backslash \{a\}, [n_1]\backslash \{i\})\). Suppose \(\sigma (u)=i''\) in \(ALG_{-i}\). Notice that \(\tau ([u+1]) = \sigma ([u])\). Then by Observation 5.5, the trading graph \(TG(i',C^{i'}_j\backslash \{a\}, \sigma ([u]))\) at the end of the uth “for loop” iteration is the same as \(TG(i',C^{i'}_j\backslash \{a\}, [n_1]\backslash \{i\})\) at the termination of the “for loop” in \(ALG_{-i}\). Otherwise, \(TG(i',C^{i'}_j\backslash \{a\},\sigma ([u]))\) is not frozen after the uth “for loop” iteration of \(ALG_{-i}\), meaning that there is a path from \(i'\) to a labelled agent in \(TG(i',C^{i'}_j\backslash \{a\},\sigma ([u]))\). Therefore, either \(i'\) will reach an agent inferior to \(i''\) or a labelled agent in \(TG(i',C^{i'}_j\backslash \{a\},[n_1]\backslash \{i\})\) by SDMT2. This contradicts the definition of \(i''\).

Case 3 (Corresponding to case 5 in the definition of threshold \(\theta \).) Suppose that the equation \(w_i(1-g(y))=w_{i''}(1-g(Y_{i''}))\) does not have a solution, which means that \(\theta =0\) and \(w_i(1-g(Y_i))/F<w_{i''}(1-g(Y_{i''}))/F\) for any \(Y_i\in [0,1]\). This shows that the process is the same for agents prior to agent \(i''\) until the end of the uth “for loop” iteration in ALG and \(ALG_{-i}\). Since there is no path from \(i'\) to a labelled agent in \(TG(i',C^{i'}_j\backslash \{a\}, \sigma ([u]))\), the agents reachable from \(i'\) are frozen. Hence, a will finally still be allocated to \(i'\) in ALG, implying \(\beta ^{Y_i}_a=w_{i'}(1-g(Y_{i'}))/F\ge w_{i''}(1-g(Y_{i''}))/F>\beta ^{\theta }_a=w_i(1-g(0))/F\).

Case 4 (Corresponding to case 4 in the definition of threshold \(\theta \).) Now consider the remaining case, in which the equation \(w_i(1-g(y))=w_{i''}(1-g(Y_{i''}))\) has a solution; then \(\beta ^{\theta }_a=w_i(1-g(\theta ))/F=w_{i''}(1-g(Y_{i''}))/F\). Consider the following three cases:

Case (4i) If \(Y_i> \theta \), this means \(w_i(1-g(Y_i))/F< w_{i''}(1-g(Y_{i''}))/F\), and the analysis of this case is the same as above (the case that the equation \(w_i(1-g(y))=w_{i''}(1-g(Y_{i''}))\) does not have a solution), since i will select objects after \(i''\). Thus, we have \(\beta ^{Y_i}_a=w_{i'}(1-g(Y_{i'}))/F\ge \beta ^{\theta }_a\).

Case (4ii) If \(Y_i<\theta \), then \(w_i(1-g(Y_i))/F> w_{i''}(1-g(Y_{i''}))/F\), which means that i is prior to \(i''\) in ALG. We have the following claim:
Claim 5.7
No agent can get a better object in ALG than in \(ALG_{-i}\) after inserting i into some position from 1 to u.
Proof
Suppose, towards a contradiction, that there exists an agent getting a better object, and let k be the smallest position at which such an agent is placed in ALG. Then i must be inserted in a position before k (otherwise, the process is the same for the first k agents in \(ALG_{-i}\) and ALG, so agent \(\tau (k)\) cannot get a better object). Let \(i'''=\tau (k)\) and suppose that \(i'''\) gets an object b in ALG and object c in \(ALG_{-i}\), where \(b\succ _{i'''}c\). Observe that \(\sigma (k-1)=i'''\) in \(ALG_{-i}\). Suppose that \(b\in C^{i'''}_j\) and consider the trading graph \(TG(i''',C^{i'''}_j,\sigma ([k-1]))\) at the end of the \((k-1)\)st “for loop” iteration of \(ALG_{-i}\).
Let S be the set of agents reachable from \(i'''\) in \(TG(i''',C^{i'''}_j,\sigma ([k-1]))\) at the end of the \((k-1)\)st “for loop” iteration of \(ALG_{-i}\). Note that any agent in S is prior to \(i'''\), and any agent in S is allocated exactly one object and frozen in \(ALG_{-i}\). Since in ALG b is allocated to \(i'''\), in the kth “for loop” iteration of ALG \(i'''\) will be allocated some object in \(C^{i'''}_j\). This means that some agent in S will get a worse object compared to the allocation in \(ALG_{-i}\).
The reason is as follows: no agent can get a better object, by the definition of k. If all the agents in S remain the same in ALG compared with \(ALG_{-i}\) (i.e., get objects in the same indifference class in ALG as in \(ALG_{-i}\)), then the only possible allocation to S in ALG is to reallocate all the objects matched to S in \(ALG_{-i}\) to S again, such that each agent gets exactly one object. Indeed, if there is some extra object e allocated to an agent from S in ALG, then e must be allocated to some agent j in \(ALG_{-i}\). Since e is allocated to some agent in S in ALG, j can be reached by some agent in S in \(ALG_{-i}\). Thus \(j\in S\), which leads to a contradiction. Now, all the objects in \(C^{i'''}_j\) have been allocated to agents in S. In ALG, we would then need to allocate these \(|S|\) objects to the \(|S|+1\) agents of \(S\cup \{i'''\}\), because some object owned by S in \(ALG_{-i}\) will be allocated to agent \(i'''\). This is not possible, which gives a contradiction.
Let s be an agent in S who gets a worse object and such that there is a path from s to \(i'''\) in the trading graph \(TG(s,d,\tau ([k]))\) at the end of the kth “for loop” iteration in ALG, where d is the object allocated to s in \(ALG_{-i}\). (Such an agent must exist: it can be found by the following procedure. Suppose \(d_1\simeq _{s_1} b\), owned by \(s_1\) in \(ALG_{-i}\), is allocated to \(i'''\) in ALG at the end of the kth “for loop” iteration. If \(s_1\) gets a worse object in ALG compared to \(ALG_{-i}\), then \(s_1\) is the agent we are looking for. Otherwise, \(s_1\) is allocated an object \(d_2\) owned by \(s_2 \in S\) in \(ALG_{-i}\) at the end of the kth “for loop” iteration of ALG. If \(s_2\) gets a worse object, then \(s_2\) is the agent we are looking for. Otherwise, we continue with this procedure. By finiteness of the set S and by the fact that the agents in S own \(|S|\) objects in \(ALG_{-i}\), these objects will be allocated to agents in \(S\cup \{i'''\}\) in ALG, and one of these objects will be allocated to \(i'''\). Thus, we can find such an agent. The path from s to \(i'''\) in the trading graph \(TG(s,d,\tau ([k]))\) at the end of the kth “for loop” iteration in ALG is just the reverse of the path traced by this procedure.) Suppose \(\tau (\ell )=s\) in ALG and \(d\in C^s_{h}\). Consider the \(\ell \)th “for loop” iteration in ALG: all the agents reachable from s in \(TG(s,C^s_h,\sigma ([\ell ]))\) are frozen and prior to agent s, since s does not obtain any object in the indifference class \(C^s_h\).
This contradicts the fact that there is a path from s to \(i'''\) (who is inferior to s) in the trading graph \(TG(s,d,\tau ([k]))\) at the end of the kth “for loop” iteration in ALG. \(\square \)
Claim 5.8
Object a must be allocated to \(i''\) or to an agent prior to \(i''\); that is, we must have \(\beta ^{Y_i}_a\ge w_{i''}(1-g(Y_{i''}))/F=\beta ^{\theta }_a\).
Proof

Case (1) If \(i'\) gets worse, meaning that he gets a worse object in ALG than a in \(ALG_{-i}\), then \(\tau (u_1+1)=i'\) in ALG (i is inserted back prior to \(i'\)). Thus, all the agents reachable from \(i'\) in \(TG(i',a,\tau ([u_1+1]))\) are frozen and the agent who owns a will finally get a. This agent is prior to \(i'\), giving \(\beta ^{Y_i}_a\ge w_{i'}(1-g(Y_{i'}))/F\ge w_{i''}(1-g(Y_{i''}))/F=\beta ^{\theta }_a\).

Case (2) If \(i'\) gets a in ALG, then we are done. Otherwise, suppose \(i'\) gets an object \(a'\simeq _{i'} a\), with \(a', a \in C^{i'}_j\), in ALG. Denote by \(S^*\) the set of agents reachable from \(i'\) in \(TG(i',C^{i'}_j\backslash \{a\},\sigma ([n_1-1]))\) at the end of the “for loop” of \(ALG_{-i}\) (note that \(\sigma ([n_1-1])=[n_1]\backslash \{i\}\)). If no one in \(S^*\) gets a worse object in ALG than in \(ALG_{-i}\), then a must be allocated to some agent in \(S^*\). The reason is similar to the above argument: all agents in \(S^*\) get exactly one object; if a is not allocated within \(S^*\) and no one in \(S^*\) gets worse, there must be an extra object b allocated to some agent j in \(S^*\); no matter to whom b is allocated in \(ALG_{-i}\), there is a path from j to this agent, hence this agent belongs to \(S^*\), a contradiction. Note that, by the definition of \(i''\), for any \(s\in S^*\), \(\sigma ^{-1}(s)>\sigma ^{-1}(i'')\) (where \(\sigma ^{-1}(s)\) denotes the position of s in \(\sigma \), i.e., in \(ALG_{-i}\)) implies that \(w_{s}(1-g(Y_{s}))\ge w_{i''}(1-g(Y_{i''}))\). Therefore \(\beta ^{Y_i}_a\ge w_{i''}(1-g(Y_{i''}))/F=\beta ^{\theta }_a\).
Suppose finally that \((*)\) is not true; then all the agents in the set \(U^*\) of agents reachable from \(s^*\) in \(TG(s^*,d^*,\tau ([u^*]))\) have been frozen. \(U^*\) will remain the same until the end of ALG and \(i'\notin U^*\). However, by the definition of \(S^*\) and since \(s^* \in S^*\), there is a path from \(s^*\) to \(i'\) in \(TG(s^*,d^*,[n_1])\) at the end of ALG, meaning \(i'\in U^*\), a contradiction. \(\square \)
 Case (4iii)

If \(Y_i= \theta \), this means that \(w_i(1-g(Y_i))/F= w_{i''}(1-g(Y_{i''}))/F\). If \(i''<i\), the case is the same as the case \(Y_i>\theta \). Otherwise, it falls into the case \(Y_i<\theta \).
To summarise, for all choices of \(Y_i\), \(\beta ^{Y_i}_a\ge \beta ^{\theta }_a\). \(\square \)
Lemma 5.9
([16]) \(\forall (i,a)\in E\), \(\mathbb {E}_{\vec {Y}}(\alpha _i(\vec {Y})+\beta _a(\vec {Y}))\ge w_i\).
From Lemmas 5.1 and 5.9, we have the following theorem.
Theorem 5.10
Algorithm 3 achieves an approximation ratio of \(\frac{e}{e-1}\) for weighted agents with indifferences.
6 Online Interpretation and Lower Bounds
We first provide an interpretation of the weighted version of our problem with an “online” flavour. An administrator holds all the objects, and all agents, with unknown preference lists, are applicants for these objects. We assume that weights are private information of each agent, but that agents cannot overstate their weights, a so-called no-overbidding assumption. Applicants are interviewed one by one in a random order. A decision about each particular applicant must be made immediately after the interview. During the interview, the applicant selects his favourite object among the available remaining objects, if his preference list contains such an object, and must be allocated (matched to) that object, because we consider only truthful mechanisms.^{8} This applicant will not be interviewed again. The administrator knows the number of matched applicants interviewed so far, but is unaware of whether as yet unseen applicants will be matched or not. Our goal is to find the optimal strategy, that is, a (random) arrival order of agents that maximises the ratio between the total weight of the matched agents and the maximum weight of a matching when all the agents’ preference lists are known in advance.
We will now describe the required preliminaries that will be used in the remainder of this section to prove the lower bounds.
Preliminaries We will use Yao’s min-max principle, see [25, Proposition 2.5 (page 35)] and [37], to obtain a nontrivial lower bound for universally truthful and Pareto optimal mechanisms, and another lower bound for an “online” version of our problem.
Let \(\mathcal {S}\) denote the set of all the permutations of the agents’ preference lists of the triangle instance, and consider the random instance \(S_{uni}\) given by the uniform distribution over \(\mathcal {S}\). Observe that the output of any serial dictatorship mechanism (a deterministic, truthful and Pareto optimal mechanism, defined by a specific fixed order of the agents) running on \(\mathcal {S}\) is the same, irrespective of the fixed order. Hence, for any serial dictatorship mechanism (SDM), \(\mathbb {E}_{uni}(r(SDM,S_{uni}))\) is equal to the inverse of the approximation ratio of RSDM, which is just SDM with the order of agents chosen uniformly at random, when run on the triangle instance.
Online Lower Bound We now apply these preliminaries to the online version of our problem. Recall that applicants in this online problem are truthful due to the truthfulness of the serial dictatorship mechanism. The strategy of the administrator is a random order in which the applicants are interviewed. More precisely, let \(\Pi \) denote the set of all the permutations of applicants and \(P(\Pi )\) be the set of probability distributions on \(\Pi \). Let \(\Pi _p\) be a random order of applicants, where the order is selected according to the distribution \(p\in {P(\Pi )}\) on \(\Pi \); then the strategy set of the administrator is \(\{\Pi _p {\,\, : \,\,} p\in {P(\Pi )}\}\). We will show that the best strategy for the administrator is to select the applicants’ order uniformly at random.
Theorem 6.1
The best strategy for the administrator in the online problem is to select the applicants’ order uniformly at random. Thus, any randomised strategy other than the one used in Algorithm 3 would lead to an approximation guarantee worse than \(\frac{e}{e-1}\).
Proof
This proof is similar to the classical proof from [22]; in particular, it uses the same class of instances. Let \(\mathbb {E}_{p,q}(r(\Pi _p,I_q))\) be the inverse of the approximation ratio when the random order is \(\Pi _p\) and the random instance is \(I_q\), and let \(\Pi _{uni}\) denote the uniform order. By the approximation ratio of RSDM, for any \(I\), \(\mathbb {E}_{uni}(r(\Pi _{uni},I))\ge \frac{e-1}{e}\). Now, for an upper bound on \(\mathbb {E}_{p,q}(r(\Pi _p,I_q))\), by Yao’s principle [25, Proposition 2.5], \(\max _{T\in \Pi }\mathbb {E}_{q}(r(T,I_q))\ge \min _{I\in \mathcal {I}}\mathbb {E}_{p}(r(T_p,I)).\) Recall that \(S_{uni}\) is the uniform distribution over \(\mathcal {S}\). It thus suffices to upper bound \(\max _{T\in \Pi }\mathbb {E}_{q}(r(T,S_{uni}))\), which is in fact equal to the inverse of the approximation ratio obtained by running RSDM on the triangle instance, namely \(\frac{e-1}{e}\). The argument is as follows. Suppose object \(o_k\) is allocated by RSDM with probability \(p_k\le 1\) on the triangle instance. Then, because there are \(n_1-k+1\) agents with \(o_k\) in their preference lists, each such agent obtains \(o_k\) with equal probability \(\frac{p_k}{n_1-k+1}\). Therefore, agent i is allocated an object with probability \(\sum _{j=1}^i\frac{p_j}{n_1-j+1}\), which is at most \(\min \{1,\sum _{j=1}^i\frac{1}{n_1-j+1}\}\). Summing over all the agents, a simple calculation shows that the expected number of matched agents is at most \(n_1(1-\frac{1}{e})\), for large enough \(n_1\). Hence, the approximation ratio is tight. \(\square \)
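The calculation at the end of this proof can be checked numerically. The following sketch (illustrative only, not part of the proof; the function name is ours) evaluates the upper bound \(\sum _i \min \{1,\sum _{j=1}^i\frac{1}{n_1-j+1}\}\) and compares it with \(n_1(1-\frac{1}{e})\):

```python
import math

# Illustrative check: sum_i min(1, sum_{j=1}^i 1/(n1 - j + 1))
# is close to n1 * (1 - 1/e) for large n1.
def expected_matched_upper_bound(n1):
    total = 0.0
    partial = 0.0  # running value of sum_{j=1}^i 1/(n1 - j + 1)
    for i in range(1, n1 + 1):
        partial += 1.0 / (n1 - i + 1)
        total += min(1.0, partial)
    return total

n1 = 10000
print(expected_matched_upper_bound(n1) / n1)  # close to 1 - 1/e ≈ 0.6321
```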
Lower Bound for Randomised Mechanisms If we could prove that the output of any deterministic truthful and Pareto optimal mechanism running on \(\mathcal {S}\) is the same as that of SDM, then \(\max _{T\in \mathcal {T}}\mathbb {E}_{q}(r(T,I))=1-\frac{1}{e}\). To show our lower bound it suffices to show that the sum of the sizes of all the matchings returned by any deterministic truthful and Pareto optimal mechanism executed on \(\mathcal {S}\) is at most that returned by any SDM executed on \(\mathcal {S}\). We use \(\#^{\phi }(\mathcal {S})\) to denote the sum of the sizes of all the matchings returned by mechanism \(\phi \) when executed on \(\mathcal {S}\). We want to prove that \(\#^{\phi }(\mathcal {S})\le \#^{SDM}(\mathcal {S})\), for any \(n_1\) and \(n_2\) and for any universally truthful and Pareto optimal mechanism \(\phi \). We can prove this inequality for \(n_1=n_2=3\), which gives us the lower bound of \(\frac{18}{13}\) for any universally truthful and Pareto optimal mechanism.
Theorem 6.2
For any deterministic truthful and Pareto optimal mechanism \(\phi \), \(\#^{\phi }(\mathcal {S})\le 13\), when \(n_1=3\). Thus, any universally truthful and Pareto optimal mechanism for this problem has an approximation ratio of at least \(\frac{18}{13}\).
Proof
Suppose the agents are 1, 2, 3 and the objects are a, b, c. We use the notation \(\left( \begin{array}{ccc} \underline{a}&{} b &{} c\\ a &{} \underline{b} &{} \\ a&{} b&{} \underline{c} \end{array}\right) \) to denote assignments that allocate a to agent 1, b to agent 2 and c to agent 3, where row i (\(i=1,2,3\)) denotes agent i’s preference list and the preference ordering is the increasing order of the column indices. If no objects are underlined, then this notation denotes the input of the mechanism. Note that in this setting, \(\mathcal {S}=\left\{ \left( \begin{array}{ccc} a&{} &{} \\ a &{} b &{} \\ a&{} b&{} c \end{array}\right) \right. \), \(\left( \begin{array}{ccc} a&{} &{} \\ a &{} b &{}c \\ a&{} b&{} \end{array}\right) \), \(\left( \begin{array}{ccc} a&{} b &{} \\ a &{} &{} \\ a&{} b&{}c \end{array}\right) \), \(\left( \begin{array}{ccc} a&{} b &{} \\ a &{} b &{}c \\ a&{} &{} \end{array}\right) \), \(\left( \begin{array}{ccc} a&{} b &{} c\\ a &{} &{} \\ a&{} b&{} \end{array}\right) \), \(\left. \left( \begin{array}{ccc} a&{} b&{}c \\ a &{} b &{} \\ a&{} &{} \end{array}\right) \right\} \). We would like to show that for any deterministic truthful and Pareto optimal mechanism \(\phi \), \(\#^{\phi }(\mathcal {S})\le 13\). Without loss of generality, suppose \(\left( \begin{array}{ccc} \underline{a}&{} b &{} c\\ a &{} \underline{b} &{} c\\ a&{} b&{} \underline{c} \end{array}\right) \), and we will consider the following two cases:
Case (i) If \(\left( \begin{array}{ccc} a&{} b &{} \underline{c}\\ a &{} \underline{b} &{} \\ \underline{a}&{} b&{} c \end{array}\right) \), then we will show that \(\left( \begin{array}{ccc} \underline{a}&{} b &{} \\ a &{} \underline{b} &{} c\\ a&{} b&{} \underline{c} \end{array}\right) \). (Observe that the first agent must get a, because otherwise we have a contradiction with truthfulness by \(\left( \begin{array}{ccc} \underline{a}&{} b &{} c\\ a &{} \underline{b} &{} c\\ a&{} b&{} \underline{c} \end{array}\right) \).)
Now, if \(\left( \begin{array}{ccc} \underline{a}&{} b &{} \\ a &{} \underline{b} &{} c\\ a&{} b&{} \underline{c} \end{array}\right) \) would not hold then \(\left( \begin{array}{ccc} \underline{a}&{} b &{} \\ a &{}b &{} \underline{c}\\ a&{} \underline{b}&{}c \end{array}\right) \). Then we obtain \(\left( \begin{array}{ccc} a&{} b &{} \\ a &{} b &{} \\ \underline{a}&{} b&{} c \end{array}\right) \). The reason is as follows: \(\left( \begin{array}{ccc} a&{} b &{} \underline{c}\\ a &{} \underline{b} &{} \\ \underline{a}&{} b&{} c \end{array}\right) \) implies that the first agent in the input \(\left( \begin{array}{ccc} a&{} b &{} \\ a &{} b &{} \\ a &{} b&{} c \end{array}\right) \) cannot get any object by truthfulness. Similarly, from \(\left( \begin{array}{ccc} \underline{a}&{} b &{} \\ a &{}b &{} \underline{c}\\ a&{} \underline{b}&{}c \end{array}\right) \), the second agent in \(\left( \begin{array}{ccc} a&{} b &{} \\ a &{} b &{} \\ a &{} b&{} c \end{array}\right) \) cannot get any object by truthfulness. Thus we have that \(\left( \begin{array}{ccc} a&{} b &{} \\ a &{} b &{} \\ \underline{a}&{} b&{} c \end{array}\right) \), which is a contradiction to Pareto optimality.
 Case (ii)

If \(\left( \begin{array}{ccc} \underline{a}&{} b &{} c\\ a &{} \underline{b} &{} \\ a&{} b&{} \underline{c} \end{array}\right) \), we consider the following two cases:
 Case (iia)

If \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} b &{} \underline{c}\\ a&{} \underline{b}&{} c \end{array}\right) \), then \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} b &{} \\ a&{} \underline{b}&{} c \end{array}\right) \), and we conclude that \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{} c\\ a&{} b&{} \end{array}\right) \), otherwise suppose \(\left( \begin{array}{ccc} a&{} \underline{b} &{}c\\ \underline{a} &{}b &{} c\\ a&{} b&{} \end{array}\right) \) (since \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{}\underline{b} &{} c\\ a&{} b&{}\underline{c} \end{array}\right) \)), then b is allocated to agent 1 in \(\left( \begin{array}{ccc} a&{} b &{}\\ a &{}b &{} c\\ a&{} b&{} \end{array}\right) \). From \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} b &{} \underline{c}\\ a&{} \underline{b}&{} c \end{array}\right) \), we know b is allocated to agent 3 in \(\left( \begin{array}{ccc} a&{} b &{}\\ a &{}b &{} c\\ a&{} b&{} \end{array}\right) \), a contradiction. Hence, \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{} c\\ a&{} b&{} \end{array}\right) \), then we know \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{} \\ a&{} b&{} \end{array}\right) \) (from \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{} c\\ a&{} b&{} \end{array}\right) \) and \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{} \\ a&{} b&{}\underline{c} \end{array}\right) \) ), and \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} b &{} \underline{c}\\ a&{} \underline{b}&{} \end{array}\right) \) (From \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{} c\\ a&{} b&{} \end{array}\right) \) and \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} b &{} \underline{c}\\ a&{} \underline{b}&{} c \end{array}\right) \)). 
Now the matching sizes of the assignments for \(\left( \begin{array}{ccc} a&{} b &{}c\\ a &{} &{}\\ a&{} b&{} \end{array}\right) \) and \(\left( \begin{array}{ccc} a&{} b &{}c\\ a &{} b &{}\\ a&{} &{} \end{array}\right) \) are both 2 (from \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{} \\ a&{} b&{} \end{array}\right) \)). The matching size of the assignment for \(\left( \begin{array}{ccc} a&{} b &{}\\ a &{} b &{}c\\ a&{} &{} \end{array}\right) \) is 2, since \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} b &{} \underline{c}\\ a&{} \underline{b}&{} \end{array}\right) \). The matching size of the assignment for \(\left( \begin{array}{ccc} a&{} b &{}\\ a &{} &{}\\ a&{} b&{}c \end{array}\right) \) is 2, following from \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} b &{} \\ a&{} \underline{b}&{} c \end{array}\right) \). Consider the assignment for \(\left( \begin{array}{ccc} a&{} &{}\\ a &{} b &{}c\\ a&{} b&{}c \end{array}\right) \): no matter what the assignment is, at most one of the assignments for \(\left( \begin{array}{ccc} a&{} &{}\\ a &{} b &{}\\ a&{} b&{}c \end{array}\right) \) and \(\left( \begin{array}{ccc} a&{} &{}\\ a &{} b &{}c\\ a&{} b&{} \end{array}\right) \) has matching size 3. Denote the mechanism in this case by \(\phi _2\); then \(\#^{\phi _2}(\mathcal {S})\le 13\).
 Case (iib)

If \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} \underline{b} &{}c\\ a&{} b&{}\underline{c} \end{array}\right) \), recall that \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \) and \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{}c\\ a&{} b&{}\underline{c} \end{array}\right) \), consider the following two cases:
 Case (iib1)

If \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{}c\\ a&{} b&{} \end{array}\right) \), then \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{}\\ a&{} b&{} \end{array}\right) \) since \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \). We know the matching sizes of assignment of \(\left( \begin{array}{ccc} a&{} b &{}c\\ a &{} b &{}\\ a&{} &{} \end{array}\right) \) and \(\left( \begin{array}{ccc} a&{} b &{}c\\ a &{} &{}\\ a&{} b&{} \end{array}\right) \) are both 2. Since \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \) due to \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \) and \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{}c\\ a&{} b&{} \end{array}\right) \), the matching size of assignment of \(\left( \begin{array}{ccc} a&{} b &{}\\ a &{} &{}\\ a&{} b&{}c \end{array}\right) \) is 2. Since \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} \underline{b} &{}c\\ a&{} b&{} \end{array}\right) \) due to \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{}c\\ a&{} b&{} \end{array}\right) \) and \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} \underline{b} &{}c\\ a&{} b&{}\underline{c} \end{array}\right) \), then the matching size of assignment of \(\left( \begin{array}{ccc} a&{} b &{}\\ a &{} b &{}c\\ a&{} &{} \end{array}\right) \) is 2. 
Similarly to the above argument, consider the assignment for \(\left( \begin{array}{ccc} a&{} &{}\\ a &{} b &{}c\\ a&{} b&{}c \end{array}\right) \): no matter what the assignment is, at most one of the assignments for \(\left( \begin{array}{ccc} a&{} &{}\\ a &{} b &{}\\ a&{} b&{}c \end{array}\right) \) and \(\left( \begin{array}{ccc} a&{} &{}\\ a &{} b &{}c\\ a&{} b&{} \end{array}\right) \) has matching size 3. Denote the mechanism in this case by \(\phi _3\); then \(\#^{\phi _3}(\mathcal {S})\le 13\).
 Case (iib2)

If \(\left( \begin{array}{ccc} a&{} \underline{b} &{}c\\ \underline{a} &{} b &{}c\\ a&{} b&{} \end{array}\right) \), recall that we have \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} \underline{b} &{}c\\ a&{} b&{}\underline{c} \end{array}\right) \), \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \) and \(\left( \begin{array}{ccc} \underline{a}&{} b &{}c\\ a &{} \underline{b} &{}c\\ a&{} b&{}\underline{c} \end{array}\right) \). From \(\left( \begin{array}{ccc} a&{} \underline{b} &{}c\\ \underline{a} &{} b &{}c\\ a&{} b&{} \end{array}\right) \) and \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} \underline{b} &{}c\\ a&{} b&{}\underline{c} \end{array}\right) \), we get \(\left( \begin{array}{ccc} a&{} \underline{b} &{}\\ \underline{a} &{} b &{}c\\ a&{} b&{} \end{array}\right) \), then the matching sizes of assignment of \(\left( \begin{array}{ccc} a&{} &{}\\ a &{} b &{}c\\ a&{} b&{} \end{array}\right) \) and \(\left( \begin{array}{ccc} a&{} b &{}\\ a &{} b &{}c\\ a&{} &{} \end{array}\right) \) are both 2. From \(\left( \begin{array}{ccc} \underline{a}&{}b &{}c\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \) and \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} \underline{b} &{}c\\ a&{} b&{}\underline{c} \end{array}\right) \), we get \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \), then the matching size of assignment of \(\left( \begin{array}{ccc} a&{} b &{}\\ a &{} &{}\\ a&{} b&{}c \end{array}\right) \) is 2. 
From \(\left( \begin{array}{ccc} \underline{a}&{}b &{}c\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \) and \(\left( \begin{array}{ccc} a&{} \underline{b} &{}c\\ \underline{a} &{}b &{}c\\ a&{} b&{} \end{array}\right) \), we get \(\left( \begin{array}{ccc} a&{} \underline{b} &{}c\\ \underline{a} &{}b &{}\\ a&{} b&{} \end{array}\right) \), so the matching size of the assignment for \(\left( \begin{array}{ccc} a&{} b &{}c\\ a &{}b &{}\\ a&{} &{} \end{array}\right) \) is 2. From \(\left( \begin{array}{ccc} a&{} \underline{b} &{}c\\ \underline{a} &{}b &{}\\ a&{} b&{} \end{array}\right) \) and \(\left( \begin{array}{ccc} \underline{a}&{} b &{}\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \), it follows that \(\left( \begin{array}{ccc} a &{} \underline{b} &{}\\ \underline{a}&{} b &{}\\ a&{} b&{} \end{array}\right) \), and we conclude that \(\left( \begin{array}{ccc} \underline{a}&{} &{}\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \) is not true. Otherwise, from \(\left( \begin{array}{ccc} \underline{a}&{} &{}\\ a &{} \underline{b} &{}\\ a&{} b&{}\underline{c} \end{array}\right) \) and \(\left( \begin{array}{ccc} a &{} \underline{b} &{}\\ \underline{a}&{} b &{}\\ a&{} b&{} \end{array}\right) \), it follows that \(\left( \begin{array}{ccc} a&{} &{}\\ \underline{a} &{} b &{}\\ a&{} b&{} \end{array}\right) \), which contradicts the Pareto optimality of the mechanism. Hence, the matching size of the assignment for \(\left( \begin{array}{ccc} a&{} &{}\\ a &{}b &{}\\ a&{} b&{}c \end{array}\right) \) is 2. It is easy to see that the matching size of the assignment for \(\left( \begin{array}{ccc} a&{}b &{}c\\ a &{} &{}\\ a&{} b&{} \end{array}\right) \) is at most 3. Denote the mechanism in this case by \(\phi _4\); then \(\#^{\phi _4}(\mathcal {S})\le 13\). \(\square \)
Note that Theorem 6.2 shows that \(\min _{I\in \mathcal {I}}\mathbb {E}_{p}(r(T_p,I))\le \max _{T\in \mathcal {T}}\mathbb {E}_{q}(r(T,S_{uni}))\le \frac{13}{18}\) for any \(p\in \mathrm {P}\), where \(S_{uni}\) denotes the uniform distribution over \(\mathcal {S}\). Hence, the approximation ratio is at least \(\frac{18}{13}\).
Lower Bound for Nonbossy Mechanisms In this subsection we consider only the unweighted HA problem with strict preferences. Thus an instance of HA is just \(I = (N,O,L)\), where \(L = (L(1),\ldots ,L(n_1))\) is the joint list of (strict) preferences of the agents.
We first define the concept of nonbossiness for a deterministic mechanism (see, e.g., [26]). A deterministic mechanism \(\phi \) is nonbossy if for any agent \(i \in N\), any joint preference list profile \(L\), and any preference list \(L'(i)\) of agent i, if \(\phi _i(L(i),L(-i)) = \phi _i(L'(i),L(-i))\) then \(\phi (L(i),L(-i)) = \phi (L'(i),L(-i))\). Roughly speaking, nonbossiness ensures that no agent can, by reporting a different preference list, change the allocation of other agents without changing his own allocation.
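To make the definition concrete, the following is an illustrative brute-force check of nonbossiness on a small universe (our own sketch, not from the paper; the names `sdm` and `is_nonbossy` are ours). It enumerates every joint profile and every unilateral misreport, and verifies the condition above for the serial dictatorship mechanism with strict preferences:

```python
import itertools

def sdm(prefs, objects):
    """Serial dictatorship: agents pick, in index order, their most
    preferred object that is still available."""
    available, matching = set(objects), {}
    for i, pref in enumerate(prefs):
        for o in pref:
            if o in available:
                matching[i] = o
                available.remove(o)
                break
    return matching

def is_nonbossy(phi, n_agents, objects, all_lists):
    """Check: whenever agent i's own allocation is unchanged after a
    unilateral misreport, the whole matching must be unchanged."""
    for L in itertools.product(all_lists, repeat=n_agents):
        base = phi(list(L), objects)
        for i in range(n_agents):
            for dev in all_lists:
                L2 = list(L)
                L2[i] = dev  # agent i misreports dev instead of L[i]
                out = phi(L2, objects)
                if out.get(i) == base.get(i) and out != base:
                    return False
    return True
```

On any such small universe the check confirms the well-known fact that SDM with strict preferences is nonbossy: an agent whose own allocation is unchanged leaves every later dictator facing the same set of available objects.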
Bade [7] showed (Theorem 1 in [7]) that any mechanism that is truthful, Pareto optimal and nonbossy is s-equivalent to SDM, in the sense that, when the order of agents is generated uniformly at random, SDM returns the same distribution over matchings as any truthful, Pareto optimal and nonbossy mechanism.
We will now briefly introduce the notions from [7] required to formally use the result of Bade [7]. Let \(\phi : \mathcal {I}\rightarrow \mathcal {M}\) be any deterministic mechanism for HA and \(\sigma \) be any order (permutation) of the agents. We define a permuted mechanism \(\sigma \odot \phi : \mathcal {I}\rightarrow \mathcal {M}\) via \((\sigma \odot \phi )_i(L) = \phi _{\sigma ^{-1}(i)}(L(\sigma (1)),\ldots ,L(\sigma (n_1)))\) for any agent \(i \in N\). Intuitively, the permutation \(\sigma \) assigns each agent in N to a role in the mechanism, such that agent \(\sigma (i)\) under \(\sigma \odot \phi \) assumes the role that agent i plays under \(\phi \).
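The definition of \(\sigma \odot \phi \) can be sketched directly in code (an illustrative sketch of ours, with SDM as the underlying mechanism \(\phi \); the function names are not from the paper). Preferences are permuted before running \(\phi \), and role j of the inner run is mapped back to the real agent \(\sigma (j)\):

```python
def sdm(prefs, objects):
    """Serial dictatorship in the fixed order 0, 1, ..., n-1."""
    available, matching = set(objects), {}
    for i, pref in enumerate(prefs):
        for o in pref:
            if o in available:
                matching[i] = o
                available.remove(o)
                break
    return matching

def permuted(phi, sigma, prefs, objects):
    """(sigma ⊙ phi)_i(L) = phi_{sigma^{-1}(i)}(L(sigma(0)), ..., L(sigma(n-1))).
    Run phi on the permuted profile, then map role j back to agent sigma[j]."""
    inner = phi([prefs[sigma[j]] for j in range(len(prefs))], objects)
    return {sigma[j]: o for j, o in inner.items()}
```

For the identity permutation, \(\sigma \odot \phi \) coincides with \(\phi \); for any other \(\sigma \), the same matching is computed with the dictator roles reassigned.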
We say that two deterministic mechanisms \(\phi \) and \(\phi '\) are s-equivalent if \(Rand(\phi ) = Rand(\phi ')\). The main result of Bade [7] can now be stated as follows.
Theorem 6.3
(Theorem 1 in [7]) Any (deterministic) truthful, Pareto optimal and nonbossy mechanism for HA is s-equivalent to the serial dictatorship mechanism (SDM).
Using this theorem we can now prove the following tight lower bound.
Theorem 6.4
No randomised mechanism that is a symmetrisation of any truthful, nonbossy and Pareto optimal mechanism can achieve an approximation ratio better than \(\frac{e}{e-1}\).
Proof
Let \(\phi \) be any deterministic truthful, nonbossy and Pareto optimal mechanism for the HA problem with strict preferences. By Theorem 6.3, the randomised mechanism \(Rand(\phi )\) is equivalent to RSDM. In the proof of Theorem 6.1, we showed that the expected approximation ratio of RSDM cannot be better than \(\frac{e}{e-1}\) on the triangle instances of HA for large enough \(n_1\). This concludes the argument. \(\square \)
Note that our mechanism Random SDMT-2 (see Algorithm 3) with strict preference lists and weights is a symmetrisation of the truthful, nonbossy and Pareto optimal mechanism SDMT-2.
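Symmetrisation of SDM can be illustrated with a small sketch (our own code, exponential in the number of agents; `rsdm_expected_size` is an illustrative name, not from the paper) that computes the expected matching size of \(Rand(SDM)\), i.e. RSDM, by averaging the SDM matching size over all agent orders:

```python
import itertools
from fractions import Fraction
from math import factorial

def rsdm_expected_size(prefs):
    """Symmetrisation of SDM: draw the agent order uniformly at random,
    i.e. average the SDM matching size over all n! orders (exact, via Fraction)."""
    n = len(prefs)
    total = Fraction(0)
    for order in itertools.permutations(range(n)):
        available = {o for p in prefs for o in p}
        size = 0
        for i in order:  # agents pick in this order
            for o in prefs[i]:
                if o in available:
                    available.remove(o)
                    size += 1
                    break
        total += size
    return total / factorial(n)
```

For instance, on the two-agent instance with lists (a) and (a, b), the maximum Pareto optimal matching has size 2 while RSDM achieves 3/2 in expectation, a ratio of 4/3; on the triangle instances used in the proof of Theorem 6.1 the ratio tends to \(\frac{e}{e-1}\).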
7 Conclusion
Whilst this paper has focused on Pareto optimality in the HA context, stronger forms of optimality are possible. For example, minimum cost (or maximum utility), rank-maximal and popular matchings can also be studied in the HA context, and a matching of each of these types is Pareto optimal (see, e.g., [24, Sect. 1.5] for definitions). As Pareto optimality is a unifying feature of all of these other forms of optimality, we chose to concentrate on this concept in our search for randomised truthful mechanisms that can provide good approximations to maximum matchings with desirable properties. Note that the lower bound on the performance of deterministic truthful mechanisms that produce Pareto optimal matchings extends to those producing matchings that satisfy these stronger optimality criteria. It will thus be the focus of future work to consider the performance of randomised truthful mechanisms for these problems.
As far as lower bounds for randomised Pareto optimal mechanisms are concerned, we proved a lower bound of \(\frac{18}{13}\) for any universally truthful Pareto optimal mechanism. Moreover, we obtained a tight lower bound for the class of symmetrisations of truthful, Pareto optimal and nonbossy mechanisms using a characterisation due to Bade [7]. We believe that the existence of a lower bound of \(\frac{e}{e-1}\) for any universally truthful Pareto optimal mechanism is an interesting open question, and our lower bound of \(\frac{18}{13}\) is a useful step towards resolving this.
Footnotes
 1.
An agent’s preference list is dichotomous if it comprises a single tie containing all acceptable objects.
 2.
A randomised mechanism is universally truthful if it is a probability distribution over truthful deterministic mechanisms. This is the strongest known notion of truthfulness for randomised mechanisms.
 3.
This technical term is defined formally in Sect. 6; intuitively, a symmetrisation of a deterministic mechanism is a randomised mechanism that chooses uniformly at random a permutation of the agents, reassigns the agents’ roles according to this random permutation, and then executes the deterministic mechanism with these new assignments of the roles.
 4.
A deterministic mechanism in settings with strict preferences is nonbossy if no agent can misreport his preferences in such a way that his allocation is not changed but the allocation of some other agent is changed.
 5.
In the secretary problem, an administrator wishes to hire the best secretary out of n rankable applicants for a position. The applicants are interviewed one by one in random order, and a decision about each particular applicant must be made immediately after the interview. Once rejected, an applicant cannot be recalled. During the interview, the administrator can rank the applicant among all applicants interviewed so far, but is unaware of the quality of the yet unseen applicants. The question is to find the optimal strategy maximising the probability of selecting the best applicant.
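The classical optimal strategy for this problem is the cutoff rule: reject roughly the first n/e applicants, then hire the first applicant who beats everyone seen so far; its success probability tends to 1/e. A small Monte-Carlo sketch (our own illustrative code, not from the paper) estimates this probability:

```python
import random

def secretary_success_rate(n, trials=20000, seed=1):
    """Monte-Carlo estimate of the classical cutoff strategy: reject the
    first ~n/e applicants, then hire the first applicant better than
    everyone seen so far.  Rank 0 denotes the best applicant."""
    rng = random.Random(seed)
    cutoff = round(n / 2.718281828459045)  # observation phase of length ~n/e
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))
        rng.shuffle(ranks)  # random interview order
        best_seen = min(ranks[:cutoff], default=n)
        # first applicant after the cutoff beating everyone seen so far
        hired = next((r for r in ranks[cutoff:] if r < best_seen), None)
        wins += (hired == 0)
    return wins / trials
```

For moderate n the estimate hovers near 1/e ≈ 0.368, matching the known asymptotic optimum of the cutoff rule.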
 6.
Here, only reachability matters, that is, the existence of such a directed path in TG(i, a, S) from agent i to a labelled agent.
 7.
In the online bipartite matching problem [8], the vertices of one side of the bipartition (think of them as agents) are given and fixed, while the vertices of the other side (think of them as objects) arrive in an adversarial order. When an object arrives, its incident edges are revealed; these indicate the set of agents that desire this object. The algorithm must immediately match the object to one of the unmatched agents desiring it (or choose to throw it away). In the end, the size of the obtained matching is compared with that of an optimum matching in the realised graph.
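A minimal sketch of this model (our own illustrative code, not from the paper) using the simplest online rule, which matches each arriving object greedily to an arbitrary free agent desiring it:

```python
def greedy_online_matching(arrivals):
    """Each arriving object comes with the set of agents desiring it and
    must immediately be matched to some unmatched such agent or discarded.
    This greedy rule picks the lowest-indexed free agent."""
    agent_of = {}          # object -> agent
    used = set()           # agents already matched
    for obj, desiring in arrivals:
        free = sorted(a for a in desiring if a not in used)
        if free:
            agent_of[obj] = free[0]
            used.add(free[0])
    return agent_of
```

On the arrival sequence x (desired by agents 0 and 1) followed by y (desired only by agent 0), greedy matches x to agent 0 and must discard y, achieving size 1 against an offline optimum of 2; this illustrates why deterministic greedy is only 1/2-competitive in the worst case, whereas the randomised RANKING algorithm achieves the optimal ratio of 1 − 1/e.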
 8.
We can extend this setting to the case where the administrator can decide whether to let the applicant select his favourite object or to reject the applicant, meaning that the applicant gets nothing. In this more general problem, it is not difficult to prove that, for any fixed order of the applicants, the strategy in which the administrator rejects no applicant maximises the number of matched applicants. Therefore, this more general problem reduces to the setting where the administrator lets each applicant select his favourite object, and hence our lower bound from Sect. 6 also applies to this setting.
Notes
Acknowledgements
We would like to convey our sincere gratitude to an anonymous reviewer for a very careful reading of this paper, and for valuable remarks that have helped to improve the presentation. We would also like to thank the anonymous reviewers of earlier versions of this paper for useful comments. Finally, we would like to thank Anna Bogomolnaia and Hervé Moulin for helpful discussions concerning the results in this paper.
References
 1. Abdulkadiroǧlu, A., Sönmez, T.: Random serial dictatorship and the core from random endowments in house allocation problems. Econometrica 66(3), 689–701 (1998)
 2. Abraham, D.J., Cechlárová, K., Manlove, D.F., Mehlhorn, K.: Pareto optimality in house allocation problems. In: Proceedings of ISAAC ’04: the 15th Annual International Symposium on Algorithms and Computation, vol. 3341 of Lecture Notes in Computer Science, pp. 3–15. Springer (2004)
 3. Aggarwal, G., Goel, G., Karande, C., Mehta, A.: Online vertex-weighted bipartite matching and single-bid budgeted allocations. In: Proceedings of SODA ’11: the 22nd ACM-SIAM Symposium on Discrete Algorithms, pp. 1253–1264 (2011)
 4. Aziz, H., Brandt, F., Harrenstein, P.: Pareto optimality in coalition formation. Games Econ. Behav. 82, 562–581 (2013)
 5. Aziz, H., Gaspers, S., Mackenzie, S., Walsh, T.: Fair assignment of indivisible objects under ordinal preferences. Artif. Intell. 227, 71–92 (2015)
 6. Babaioff, M., Immorlica, N., Kempe, D., Kleinberg, R.: Online auctions and generalized secretary problems. SIGecom Exch. 7(2), 7 (2008)
 7. Bade, S.: Random serial dictatorship: the one and only. Math. Oper. Res. (2019, to appear)
 8. Bhalgat, A., Chakrabarty, D., Khanna, S.: Social welfare in one-sided matching markets without money. In: Proceedings of APPROX+RANDOM ’11: the 14th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems and the 15th International Workshop on Randomization and Computation, vol. 6845 of Lecture Notes in Computer Science, pp. 87–98. Springer (2011)
 9. Bogomolnaia, A., Moulin, H.: A new solution to the random assignment problem. J. Econ. Theory 100(2), 295–328 (2001)
 10. Bogomolnaia, A., Moulin, H.: Random matching under dichotomous preferences. Econometrica 72(1), 257–279 (2004)
 11. Bogomolnaia, A., Moulin, H.: Size versus fairness in the assignment problem. Games Econ. Behav. 90, 119–127 (2015)
 12. Cechlárová, K., Eirinakis, P., Fleiner, T., Magos, D., Mourtos, I., Potpinková, E.: Pareto optimality in many-to-many matching problems. Discrete Optim. 14, 160–169 (2014)
 13. Chakrabarty, D., Swamy, C.: Welfare maximization and truthfulness in mechanism design with ordinal preferences. In: Proceedings of ITCS ’14: the 5th ACM Innovations in Theoretical Computer Science Conference, pp. 105–120 (2014)
 14. Chen, N., Gravin, N., Lu, P.: Truthful generalized assignments via stable matching. Math. Oper. Res. 39(3), 722–736 (2014)
 15. Chen, N., Ghosh, A.: Algorithms for Pareto stable assignment. In: Conitzer, V., Rothe, J. (eds.) Proceedings of COMSOC ’10: the 3rd International Workshop on Computational Social Choice, pp. 343–354. Düsseldorf University Press (2010)
 16. Devanur, N.R., Jain, K., Kleinberg, R.: Randomized primal-dual analysis of RANKING for online bipartite matching. In: Proceedings of SODA ’13: the 24th ACM-SIAM Symposium on Discrete Algorithms, pp. 101–107 (2013)
 17. Dimitrov, N.B., Plaxton, C.G.: Competitive weighted matching in transversal matroids. Algorithmica 62(1–2), 333–348 (2012)
 18. Dughmi, S., Ghosh, A.: Truthful assignment without money. In: Proceedings of EC ’10: the 11th ACM Conference on Electronic Commerce, pp. 325–334 (2010)
 19. Gärdenfors, P.: Assignment problem based on ordinal preferences. Manag. Sci. 20(3), 331–340 (1973)
 20. Hylland, A., Zeckhauser, R.: The efficient allocation of individuals to positions. J. Polit. Econ. 87(2), 293–314 (1979)
 21. Jaramillo, P., Manjunath, V.: The difference indifference makes in strategy-proof allocation of objects. J. Econ. Theory 147(5), 1913–1946 (2012)
 22. Karp, R.M., Vazirani, U.V., Vazirani, V.V.: An optimal algorithm for on-line bipartite matching. In: Proceedings of STOC ’90: the 22nd ACM Symposium on Theory of Computing, pp. 352–358 (1990)
 23. Korte, B., Hausmann, D.: An analysis of the greedy heuristic for independence systems. In: Annals of Discrete Mathematics, vol. 2, pp. 65–74. North-Holland (1978)
 24. Manlove, D.F.: Algorithmics of Matching Under Preferences. World Scientific, Singapore (2013)
 25. Motwani, R., Raghavan, P.: Randomized Algorithms. Cambridge University Press, Cambridge (1995)
 26. Pápai, S.: Strategyproof assignment by hierarchical exchange. Econometrica 68(6), 1403–1433 (2000)
 27. Plaxton, C.G.: A simple family of top trading cycles mechanisms for housing markets with indifferences. In: Proceedings of the 24th International Conference on Game Theory, Stony Brook, New York, USA (2013). Available from https://www.cs.utexas.edu/users/plaxton/pubs/2013/icgt.pdf
 28. Procaccia, A.D., Tennenholtz, M.: Approximate mechanism design without money. ACM Trans. Econ. Comput. 1(4), 18 (2013)
 29. Roth, A.E.: Incentive compatibility in a market with indivisible goods. Econ. Lett. 9, 127–132 (1982)
 30. Roth, A.E., Postlewaite, A.: Weak versus strong domination in a market with indivisible goods. J. Math. Econ. 4, 131–137 (1977)
 31. Roth, A.E., Sönmez, T., Ünver, M.U.: Pairwise kidney exchange. J. Econ. Theory 125(2), 151–188 (2005)
 32. Saban, D., Sethuraman, J.: House allocation with indifferences: a generalization and a unified view. In: Proceedings of EC ’13: the 14th ACM Conference on Electronic Commerce, pp. 803–820 (2013)
 33. Saban, D., Sethuraman, J.: The complexity of computing the random priority allocation matrix. Math. Oper. Res. 40(4), 797–1088 (2015)
 34. Shapley, L., Scarf, H.: On cores and indivisibility. J. Math. Econ. 1, 23–37 (1974)
 35. Svensson, L.G.: Queue allocation of indivisible goods. Soc. Choice Welf. 11(4), 323–330 (1994)
 36. Uno, T.: Algorithms for enumerating all perfect, maximum and maximal matchings in bipartite graphs. In: Proceedings of ISAAC ’97: the 8th International Symposium on Algorithms and Computation, vol. 1350, pp. 92–101. Springer (1997)
 37. Yao, A.C.-C.: Probabilistic computations: toward a unified measure of complexity (extended abstract). In: Proceedings of FOCS ’77: the 18th Annual IEEE Symposium on Foundations of Computer Science, pp. 222–227 (1977)
 38. Zhou, L.: On a conjecture by Gale about one-sided matching problems. J. Econ. Theory 52(1), 123–135 (1990)
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.