1 Introduction

An online algorithm solves a computational problem piecewise. Its input arrives as a sequence, and each piece of the sequence is called a request. The algorithm is required to compute a partial output after each time step, before the next request arrives; this partial output therefore depends only on the requests seen so far and cannot be changed later. The quality of the solutions computed by online algorithms is commonly measured by competitive analysis.

1.1 Different Models for Online Algorithms Used in This Paper

Competitive analysis was introduced in 1985 by Sleator and Tarjan [17] and is a powerful tool to measure the performance of online algorithms. For an overview on competitive analysis and online algorithms, we refer to the standard literature [3, 8, 10, 13].

Let us consider an online maximization problem \(\mathtt {p} \) and a deterministic algorithm \(\mathtt {A} \) that solves it. Let \(\mathtt {I} \) be an adversarial instance of the problem \(\mathtt {p} \) presented piecewise to \(\mathtt {A} \). We denote by \(\mathrm {Cost} _{\mathtt {p}}(\mathtt {A},\mathtt {I})\) the cost of the solution computed by \(\mathtt {A} \) on the instance \(\mathtt {I} \) of \(\mathtt {p} \). We measure the performance of \(\mathtt {A} \) on \(\mathtt {I} \) by comparing its cost to that of an algorithm performing optimally on \(\mathtt {I} \); we call such an algorithm \(\mathtt {Opt} \) and denote the cost of its solution by \(\mathrm {Cost} _{\mathtt {p}}(\mathtt {Opt},\mathtt {I})\). The strict competitive ratio of an algorithm \(\mathtt {A} \) on the instance \(\mathtt {I} \) is then

$$\begin{aligned} \mathbf {r} _{\mathtt {p}}(\mathtt {A},\mathtt {I})=\frac{\mathrm {Cost} _{\mathtt {p}}(\mathtt {Opt},\mathtt {I})}{\mathrm {Cost} _{\mathtt {p}}(\mathtt {A},\mathtt {I})}. \end{aligned}$$

Observe that, as \(\mathtt {p} \) is a maximization problem, the cost of \(\mathtt {A} \) is never larger than the cost of \(\mathtt {Opt} \) on any instance. This means that \(\mathbf {r} _{\mathtt {p}}(\mathtt {A},\mathtt {I})\ge 1\).

Now, the competitive ratio of an algorithm \(\mathtt {A} \) is defined in a worst-case manner over all possible instances \(\mathtt {I} \in \mathcal {I} \) as

$$\begin{aligned} \mathbf {r} _{\mathtt {p}}(\mathtt {A})=\sup _{\mathtt {I} \in \mathcal {I}}(\mathbf {r} _{\mathtt {p}}(\mathtt {A},\mathtt {I})). \end{aligned}$$

Finally, we define the competitive ratio of a problem \(\mathtt {p} \) as the best ratio achievable by any deterministic online algorithm, i.e.,

$$\begin{aligned} \mathbf {r} _{\mathtt {p}}=\inf _{\mathtt {A} \in \mathcal {A}}(\mathbf {r} _{\mathtt {p}}(\mathtt {A})) \end{aligned}$$

where \(\mathcal {A} \) is the set of deterministic online algorithms solving problem \(\mathtt {p} \).
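To make these definitions concrete, the following minimal Python sketch (ours, purely illustrative) evaluates the three quantities above on a finite table of solution values; the names cost and opt and the toy numbers are assumptions, not part of any instance from the paper.

def strict_ratio(cost, opt, A, I):
    return opt[I] / cost[A][I]                                   # r_p(A, I) = Cost_p(Opt, I) / Cost_p(A, I)

def ratio_of_algorithm(cost, opt, A):
    return max(strict_ratio(cost, opt, A, I) for I in opt)       # sup over the instances

def ratio_of_problem(cost, opt):
    return min(ratio_of_algorithm(cost, opt, A) for A in cost)   # inf over the algorithms

cost = {"A1": {"I1": 3, "I2": 4}, "A2": {"I1": 4, "I2": 3}}      # toy cost table
opt = {"I1": 4, "I2": 4}
print(ratio_of_problem(cost, opt))                               # 4/3: each toy algorithm is 4/3-competitive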

Let \(\mathtt {RA} \) be a randomized online algorithm that solves \(\mathtt {p} \). It can be modeled as a set \(\mathtt {RA} =\{\mathtt {A} _1,\mathtt {A} _2,\ldots ,\mathtt {A} _{k}\}\) of deterministic algorithms, one of which is picked uniformly at random. The competitive ratio of a randomized algorithm \(\mathtt {RA} \) on the instance \(\mathtt {I} \) is defined as

$$\begin{aligned} \bar{\mathbf {r}}_{\mathtt {p}}(\mathtt {RA} {,}\mathtt {I})=\frac{1}{\mathbb {E} (\frac{1}{\mathbf {r} _{\mathtt {p}}(\mathtt {RA},\mathtt {I})})}, \end{aligned}$$

and the competitive ratio of \(\mathtt {RA} \) on \(\mathtt {p} \) and the competitive ratio of the problem \(\mathtt {p} \) in the randomized model are defined analogously as

$$\begin{aligned} \bar{\mathbf {r}}_{\mathtt {p}}(\mathtt {RA})=\sup _{\mathtt {I} \in \mathcal {I}}\{\bar{\mathbf {r}}_{\mathtt {p}}(\mathtt {RA},\mathtt {I})\} \end{aligned}$$

and

$$\begin{aligned} \bar{\mathbf {r}}_{\mathtt {p}}=\inf _{\mathtt {RA} \in \mathcal {RA}}\{\bar{\mathbf {r}}_{\mathtt {p}}(\mathtt {RA})\}, \end{aligned}$$

respectively.

The definition of the competitive ratio for a randomized algorithm may look a little odd at first. The reason for this definition is that the competitive ratio is typically defined for minimization problems rather than for maximization problems. In the case of minimization, the competitive ratio is defined as \(\mathbf {r} _{\mathtt {p}} (\mathtt {A},\mathtt {I})=\mathrm {Cost} _{\mathtt {p}}(\mathtt {A},\mathtt {I})/\mathrm {Cost} _{\mathtt {p}}(\mathtt {Opt},\mathtt {I})\), i.e., the inverse of the competitive ratio for maximization. Taking \(1/\mathbb {E}(1/\mathbf {r})\) makes randomization behave in the same way for minimization and maximization problems.

The randomized adversary model we use here was introduced by Hromkovič (see [18]) and further investigated by Burjons et al. [4] and Dobrev et al. [7].

In this model, the adversary has access to s random bits and uses them to pick uniformly at random an instance from a set \(\mathtt {RI} =\{\mathtt {I} _1,\mathtt {I} _2,\ldots ,\mathtt {I} _{2^s}\}\) of instances of the problem \(\mathtt {p} \). The online algorithm \(\mathtt {A}\) is deterministic and knows in advance, by means of an oracle, all the information about the adversary except the outcome of its random bits. That is, the algorithm knows the full set of instances \(\mathtt {RI}\), but not which of them will be chosen by the adversary. The competitive ratio of \(\mathtt {A} \) with a randomized adversary is defined as

$$\begin{aligned} \mathbf {r} ^*_{\mathtt {p}}(\mathtt {A},\mathtt {RI})=\frac{1}{\mathbb {E} (\frac{1}{\mathbf {r} _{\mathtt {p}}(\mathtt {A},\mathtt {RI})})}=\frac{2^s}{\sum _i\frac{1}{\mathbf {r} _{\mathtt {p}}(\mathtt {A},\mathtt {I} _i)}}. \end{aligned}$$

Analogously to the previous models, the competitive ratio of a specific algorithm \(\mathtt {A} \) over the class \(\mathcal {RI} _s\) of instance sets using no more than s random bits, and the competitive ratio of the problem \(\mathtt {p} \) over the same class of instance sets, can be defined as

$$\begin{aligned} \mathbf {r} ^*_{\mathtt {p}}(\mathtt {A},s)=\sup _{\mathtt {RI} \in \mathcal {RI} _s}\{\mathbf {r} ^*_{\mathtt {p}}(\mathtt {A},\mathtt {RI})\} \end{aligned}$$

and

$$\begin{aligned} \mathbf {r} ^*_{\mathtt {p}}(s)=\inf _{\mathtt {A} \in \mathcal {A}}\{\mathbf {r} ^*_{\mathtt {p}}(\mathtt {A},s)\}, \end{aligned}$$

respectively.
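All of the randomized notions above have the form \(1/\mathbb {E}(1/\mathbf {r})\), the reciprocal of the expected inverse ratio. The following small Python helper (ours, illustrative only) computes this quantity from a list of inverse ratios, regardless of whether the expectation is over the deterministic algorithms \(\mathtt {A} _1,\ldots ,\mathtt {A} _k\) picked uniformly at random or over the \(2^s\) equiprobable adversarial instances.

def reciprocal_of_mean_inverse(inverse_ratios):
    """1/E(1/r): this is bar-r_p(RA, I) when averaging over the deterministic
    algorithms A_1,...,A_k, and r*_p(A, RI) when averaging over the 2^s
    equiprobable adversarial instances I_1,...,I_{2^s}."""
    rhos = list(inverse_ratios)
    return len(rhos) / sum(rhos)

# e.g. an algorithm that matches 3/4 of the optimum on one instance of a one-bit
# adversary and all of it on the other has r* = 2 / (3/4 + 1) = 8/7:
print(reciprocal_of_mean_inverse([3 / 4, 1]))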

Using the introduced notation, Yao’s principle for online algorithms reads as follows:

Theorem 1

(Yao [19]). Let \(\mathtt {p} \) be an online maximization problem. Then, for any \(s\in \mathbb {N}\),

$$\begin{aligned} \mathbf {r} ^*_{\mathtt {p}}(s)\le \bar{\mathbf {r}}_{\mathtt {p}}. \end{aligned}$$

This theorem states that the competitive ratio of a problem \(\mathtt {p} \) in the randomized-algorithm model is bounded from below by its competitive ratio in the randomized-adversary model, for any finite number s of random bits used by the adversary to select among a finite set of instances of finite length.

In order to simplify the expressions and the formulae throughout the paper, let us define the inverse competitive ratio \(\rho \) as

$$\begin{aligned} \rho _\mathtt {p} (\mathtt {A},\mathtt {I})=\frac{1}{\mathbf {r} _\mathtt {p} (\mathtt {A},\mathtt {I})}. \end{aligned}$$

It is easy to check that, with this parameter,

$$\begin{aligned} \rho ^*_{\mathtt {p}}(\mathtt {A},\mathtt {RI})=\mathbb {E} (\rho _{\mathtt {p}}(\mathtt {A},\mathtt {RI}))=\frac{\sum _i\rho _{\mathtt {p}}(\mathtt {A},\mathtt {I} _i)}{2^s}. \end{aligned}$$

1.2 Online Matching in Regular Graphs with Randomized Adversary

In the online bipartite matching problem, we are given a bipartite graph \(G=(U\cup W,E)\). The online algorithm processes the input as follows. The vertices in U are known in advance; the vertices in W arrive one at a time, each together with its list of adjacent vertices, and the decision of whether to add one of the arriving edges to the matching must be made exactly at the time the vertex arrives. The goal is to obtain a matching as close as possible to an optimum one.

First results on this problem are due to the seminal paper by Karp, Vazirani, and Vazirani [12]. Since then, several variants of the problem have been studied, some of them motivated by the following applications: adwords [14, 16], matching in metric space [15], weighted matching in metric spaces [11], matching on the real line [9].

The competitive ratio of deterministic algorithms for the general online problem is 2 [12]: any algorithm that matches an arriving vertex whenever possible constructs a maximal matching, whose size is at least half of the maximum one, and, given any deterministic algorithm, it is easy to construct an input that forces it to find a matching of size no more than half of the optimum. It is also proven in [12] that choosing an edge uniformly at random every time a vertex arrives does not provide a significant improvement: the expected size of the matching is bounded by \(\frac{n}{2}+O(\log n)\).

Several randomized algorithms for online bipartite matching have been proposed. The first one is the algorithm RANKING [12]. It initially ranks the known vertices uniformly at random and then, each time a new vertex arrives, matches it to the highest-ranked available vertex adjacent to it. Its competitive ratio is proven to be \(1+\frac{1}{e-1}\). In [2], the authors revisit the analysis and give a simpler proof.
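For illustration, here is a minimal Python sketch of the RANKING strategy. It is not the authors' code; the instance encoding (a list of arriving vertices together with their U-neighbours) and all identifiers are ours.

import random

def ranking(offline, arrivals):
    """RANKING, sketched: fix a uniformly random order (rank) on the known
    vertices of U and match each arriving vertex to its highest-ranked free neighbour."""
    rank = {u: r for r, u in enumerate(random.sample(offline, len(offline)))}
    matching, used = {}, set()
    for w, neighbours in arrivals:                 # arrivals: list of (w, adjacent U-vertices)
        free = [u for u in neighbours if u not in used]
        if free:
            u = min(free, key=lambda v: rank[v])   # highest rank = smallest index
            matching[w] = u
            used.add(u)
    return matching

# toy usage on a 2-regular instance (our encoding)
U = ["u1", "u2", "u3", "u4"]
arrivals = [("w1", ["u1", "u2"]), ("w2", ["u3", "u4"]),
            ("w3", ["u1", "u3"]), ("w4", ["u2", "u4"])]
print(ranking(U, arrivals))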

The online matching problem for the case of regular bipartite graphs has recently been studied in the deterministic model [1, 16] and also in the advice complexity model [1].

In this paper, we deal with the online bipartite matching problem in regular graphs under the randomized adversary model.

Recall that any regular bipartite graph \(G=(U\cup W,E)\) has a perfect matching (see [6, Chap. 2]), and therefore an optimum matching contains \(n=|U|=|W|\) edges.

Throughout this paper, when counting the size of a matching, we may refer to a matched vertex \(u\in U\), meaning a matched edge incident to u.

2 Online Bipartite Matchings in 2-Regular Bipartite Graphs

In this section, we determine the competitive ratio for the online matching problem for the case of 2-regular graphs with any number of random bits used by the adversary.

Proposition 1

Let \(\mu _2\) be the problem of finding a matching in a 2-regular bipartite graph. Then, for any number of random bits s, \(\mathbf {r} ^*_{\mu _2}(s) \ge 8/7\).

Proof

Take first \(s=1\) and consider the set \(\mathcal {RI} \) consisting of the following two bipartite graphs on eight vertices (see Fig. 1), \(G_1=(U\cup W, E_1)\) and \(G_2=(U\cup W, E_2)\), where \(U=\{u_1,u_2,u_3,u_4\}\), \(W=\{w_1,w_2,w_3,w_4\}\),

$$\begin{aligned} E_1=\{u_1w_1,u_1w_3,u_2w_1,u_2w_4,u_3w_2,u_3w_3,u_4w_2,u_4w_4\} \end{aligned}$$

and

$$\begin{aligned} E_2=\{u_1w_1,u_1w_3,u_2w_1,u_2w_4,u_3w_2,u_3w_4,u_4w_2,u_4w_3\}. \end{aligned}$$

The adversary selects at random one of the two graphs and presents the vertices of W in the order \(w_1\), \(w_2\), \(w_3\), \(w_4\). Notice that, before presenting vertex \(w_3\), both instances are indistinguishable.

Fig. 1. The two adversarial instances in the set \(\mathcal {RI} \) in the lower bound for \(s=1\), with the two first edges selected.

It is easy to check that, for any choice of the algorithm in the first two rounds, there will always be exactly one round, either in \(G_1\) or in \(G_2\), in which no edge can be selected (we may assume that the algorithm always selects an available edge, since skipping one can never lead to a larger matching here). Therefore, the competitive ratio of the algorithm is 4/3 on one of the instances and 1 on the other. This proves that, for any online algorithm \(\mathtt {A} \),

$$\begin{aligned} \mathbf {r} _{\mu _2}^*(\mathtt {A},s=1)\ge \mathbf {r} _{\mu _2}^*(\mathtt {A},\mathcal {RI})\ge \frac{2}{3/4+1}=\frac{8}{7}, \end{aligned}$$

and, since \(\mathtt {A} \) is arbitrary, \(\mathbf {r} _{\mu _2}^*(s=1)\ge \frac{8}{7}\), as claimed.

For a general number of random bits \(s\in \mathbb {N}\), let \(\mathcal {RI} '\) be the set of size \(2^s\) formed by \(2^{s-1}\) copies of \(G_1\) and \(2^{s-1}\) copies of \(G_2\), from which the adversary chooses the instance to be presented. The induced distribution over the two graphs is the same as for \(s=1\), so the same argument yields

$$\begin{aligned} \mathbf {r} _{\mu _2}^*(\mathtt {A},s)\ge \mathbf {r} _{\mu _2}^*(\mathtt {A},\mathcal {RI} ')\ge \frac{8}{7} \end{aligned}$$

for every online algorithm \(\mathtt {A} \), and hence \(\mathbf {r} _{\mu _2}^*(s)\ge \frac{8}{7}\), which finishes the proof.   \(\square \)
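The case analysis behind Proposition 1 is small enough to be checked mechanically. The Python sketch below (ours, illustrative; the encoding of \(G_1\) and \(G_2\) as lists of neighbourhoods is an assumption) enumerates the shared choices for \(w_1\) and \(w_2\), lets the algorithm finish each instance as well as possible, and confirms that no deterministic algorithm achieves an expected inverse ratio above 7/8.

from itertools import product

# neighbourhoods in U of w1, w2, w3, w4, in arrival order, for the two instances of Fig. 1
G1 = [("u1", "u2"), ("u3", "u4"), ("u1", "u3"), ("u2", "u4")]
G2 = [("u1", "u2"), ("u3", "u4"), ("u1", "u4"), ("u2", "u3")]

def best_completion(graph, matched, t):
    """Largest matching reachable from round t on, over all greedy continuations
    (from round 3 on, the algorithm may know which instance it is facing)."""
    if t == len(graph):
        return len(matched)
    free = [u for u in graph[t] if u not in matched]
    if not free:                                    # this w cannot be matched
        return best_completion(graph, matched, t + 1)
    return max(best_completion(graph, matched | {u}, t + 1) for u in free)

best_rho = 0.0
# Both instances look identical until w3 arrives, so a deterministic algorithm makes the
# same choices for w1 and w2 in both; skipping an available edge never helps here.
for c1, c2 in product(G1[0], G1[1]):
    sizes = [best_completion(G, {c1, c2}, 2) for G in (G1, G2)]
    best_rho = max(best_rho, sum(s / 4 for s in sizes) / 2)   # expected inverse ratio
print(best_rho, 7 / 8)                                        # both are 0.875, i.e. r* = 8/7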

Proposition 2

The competitive ratio for the bipartite matching problem for 2-regular graphs, \(\mu _2\), in the randomized-algorithm model satisfies \(\bar{\mathbf {r}}_{\mu _2}\le \frac{8}{7}\).

Proof

Let us consider an instance \(\mathtt {I} (G)\) presented by the adversary, with \(G=(U\cup W, E)\). Every new vertex \(w\in W\) presented by the adversary appears together with two edges \(u_1w\) and \(u_2w\), and the algorithm selects one of them for the matching whenever this is possible. We can assume that the algorithm is greedy, since not choosing an edge when one is available never produces a better solution than choosing a wrong edge (a wrongly chosen edge causes at most one missed edge).

To make the proof clearer, we transform the problem as follows: The instance \(\mathtt {I} (G)\) will be represented by \(\mathtt {I} '(G')\) with \(G'=(U,E')\). Every new vertex \(w\in W\) appearing in the original problem together with edges \(u_1w,u_2w\in E(G)\) will be represented by a new edge \(u_1u_2\in E'(G')\). It is clear that, if G is a 2-regular bipartite graph, \(G'\) will be a set of disjoint cycles.

The algorithm choosing one of the edges \(u_1w,u_2w\in E(G)\) when the vertex \(w \in G\) appears translates into assigning a direction to the edge \(u_1u_2\in E'(G')\): the new arc points towards \(u_1\in U(G')\) if the selected edge in the original problem is \(u_1w\in E(G)\) (see Fig. 2). The algorithm has to assign a direction to every new edge in such a way that the resulting directed graph maintains an in-degree of at most 1 at every vertex.

Fig. 2. Translation of the matching problem for \(\delta =2\).

Notice that, if the adversary shows a new edge incident to an already shown path, the algorithm can always assign to that edge the direction of the path and no error will be produced. Therefore, we can assume that the adversary will always present disjoint edges as long as it is possible, and this will be the worst-case adversary.

We can further assume (for the sake of simplicity in the analysis) that the adversary will consist of two phases: In the first one, a set of disjoint edges will be presented. In the second phase, the edges presented will be incident to vertices that have already appeared in the first phase, closing cycles of even length, say 2m.

Any randomized algorithm \(\mathtt {RA}\) chooses the directions of the edges of the first phase at random. In the second phase, no real decision has to be taken: if an edge \(u_1u_2\) appears in the second phase, either both \(u_1\) and \(u_2\) already have in-degree 1, and no direction can be assigned (producing an error), or at least one of them has in-degree 0, and the edge can be directed towards that vertex.

Therefore, all mistakes are produced by the random choices of the first phase. The problem can thus be translated into the following game: given a cycle \(C_m\), the algorithm chooses the direction of each edge of the cycle at random, and the number of errors is the number of vertices with in-degree 2. This number of errors equals the number of edges missing from the matching in the corresponding cycle of the original bipartite graph, which has 2m vertices in each vertex set.

We claim that the expected number of vertices with in-degree 2 in a digraph obtained by orienting the edges of the cycle \(C_m\) uniformly at random is m/4: Indeed, let \(v_1,\ldots ,v_m\) be the vertices of \(C_m\) and let N be the number of vertices with in-degree 2 in a random orientation of its edges. Then \(N=\sum _{i=1}^m I_i\), where \(I_i\) is the indicator random variable of the event that \(v_i\) has in-degree 2. Clearly, \(\mathbb {E}(I_i)=\mathbb {P}(I_i=1)=1/4\). Hence, \(\mathbb {E}(N)=\sum _{i=1}^m \mathbb {E}(I_i)=m/4\), as claimed.

Translating this result back into the original matching problem, the expected number of edges missing from the matching is m/4 out of the 2m possible ones, and therefore \(\bar{\rho }_{\mu _2}(\mathtt {RA})=\frac{2m-m/4}{2m}=\frac{7}{8}\). Finally, \(\bar{\mathbf {r}}_{\mu _2}\le \frac{1}{\bar{\rho }_{\mu _2}(\mathtt {RA})}=\frac{8}{7}\).      \(\square \)
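As a quick sanity check of the m/4 claim used in the proof, the following Monte Carlo sketch (ours, illustrative only) orients the edges of \(C_m\) uniformly at random and counts the vertices of in-degree 2.

import random

def avg_indegree2(m, trials=100_000):
    """Estimate the expected number of vertices of in-degree 2 in a uniformly
    random orientation of the cycle C_m."""
    total = 0
    for _ in range(trials):
        # d[i] = 1 if the edge {v_i, v_{i+1}} is oriented towards v_{i+1}, else 0
        d = [random.randint(0, 1) for _ in range(m)]
        # v_i has in-degree 2 iff its two incident edges both point towards it
        total += sum(1 for i in range(m) if d[i - 1] == 1 and d[i] == 0)
    return total / trials

print(avg_indegree2(12), 12 / 4)   # the estimate is close to m/4 = 3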

As a consequence of Propositions 1 and 2 and Theorem 1, we conclude:

Theorem 2

The competitive ratio for the bipartite matching problem \(\mu _2\) for 2-regular graphs with a randomized adversary using any number of random bits s is given by \(\mathbf {r} _{\mu _2}^*(s)=8/7\).

3 Adversary with One Random Bit for Other Values of \(\delta \)

3.1 Lower Bounds on the Competitive Ratio

Proposition 3

Let \(\mu _3\) be the problem of finding a matching in a 3-regular bipartite graph with a randomized adversary. Then \(\mathbf {r} _{\mu _3}^*(s=1)\ge 18/17\).

Proof

We will prove that there exists an adversary using one random bit for which, for any online algorithm, \(\mathbf {r} _{\mu _3}^*(\mathtt {A},s=1)\ge 18/17\). In order to define such an adversary, let us use a result borrowed from the theory of combinatorial designs. A balanced incomplete block design (BIBD for short) is a pair \((V, \mathcal {B})\) where V is a v-set of points, and \(\mathcal {B}\) is a collection of b k-subsets of V (blocks) such that each element of V is contained in exactly r blocks and any 2-subset of V is contained in exactly \(\lambda \) blocks. The numbers \(v\), \(b\), \(r\), \(k\), and \(\lambda \) are the parameters of the BIBD. A parallel class or resolution class in a design is a set of blocks that partition the point set. A BIBD is said to be resolvable if its blocks can be partitioned into parallel classes. The notation RBIBD\((v, k, \lambda )\) is commonly used for a resolvable balanced incomplete block design (see [5] for more information on resolvable designs).

Let us consider the resolvable design RBIBD(9, 3, 1) given by

$$\begin{aligned} \begin{array}{cccc} I &{} II &{} III &{} IV\\ \{1,2,3\} &{} \{1,4,7\} &{} \{1,5,9\} &{} \{1,6,8\}\\ \{4,5,6\} &{} \{2,5,8\} &{} \{2,6,7\}&{} \{2,4,9\}\\ \{7,8,9\} &{} \{3,6,9\} &{} \{3,4,8\}&{} \{3,5,7\}. \end{array} \end{aligned}$$

Observe that each column is a parallel class, i.e., a partition of the set \(\{1, \dots , 9\}\).

Let the adversary present one instance out of two possible bipartite graphs \(G_0=(U\cup W, E_0)\) and \(G_1=(U\cup W, E_1)\), with \(U=\{1, \dots , 9\}\) and \(W=\{w_1,\dots ,w_9\}\). The edge sets \(E_0\) and \(E_1\) are given by the blocks in the design shown above as follows: Let the blocks in the parallel classes I and II be the neighborhoods of the vertices \(w_1, \dots , w_6\), respectively, in both graphs, and let the neighborhoods of the vertices \(w_7, w_8\), and \(w_9\) be the blocks in class III in \(G_0\), and the blocks in class IV in \(G_1\). Let the adversary present the vertices \(w_1, \dots , w_9\) in increasing order.

In order to analyze the performance of an arbitrary online algorithm against this adversary, let us consider two phases:

  • Phase I. During this phase, vertices \(w_1, \dots , w_6\) are presented. We may assume that, at the end of this phase, 6 edges incident to different vertices in U have been selected for the matching; otherwise the algorithm misses at least one edge in both instances and its competitive ratio is at least \(9/8>18/17\). Notice that both instances are indistinguishable at this point.

  • Phase II. During this phase, the adversary presents the vertices \(w_7, w_8\), and \(w_9\). At this point, the algorithm knows which of the two instances was selected by the adversary, and it should select three edges incident to the three vertices in U not used during the first phase. Let \(U_m\) be the set of those three unmatched vertices.

Let us show that it is not possible to match all vertices of \(U_m\) during the second phase in both instances. Recall that \(U_m\) cannot be a block of classes I or II (if it were, no edge would have been selected from that block during Phase I). Moreover, since the blocks of a parallel class are pairwise disjoint, at most one pair of vertices of \(U_m\) can lie in a common block of class I, and at most one pair in a common block of class II; hence at least one pair of vertices of \(U_m\) does not lie in a common block of classes I or II. Because of the properties of block designs, every pair of elements belongs to exactly \(\lambda \) blocks (in our case \(\lambda =1\)), and therefore this pair belongs to the same block either in class III or in class IV. By the pigeonhole principle, it is not possible to match all three vertices of \(U_m\) in the instance corresponding to the class in which the pair shares a block. Therefore, at least one edge is missed in one of the instances, while the other instance is matched completely at best, and hence \(\mathbf {r} _{\mu _3}^*(\mathtt {A},s=1)\ge \frac{2}{8/9+1}=\frac{18}{17}\).   \(\square \)

Notice that this adversary is a generalization of the adversary shown for \(\delta =2\) (the adversary in the proof for \(\delta =2\) is actually an RBIBD(4, 2, 1)).
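The adversary of Proposition 3 is also small enough to be verified exhaustively. The sketch below (ours, illustrative; the encoding of the design and of the instances is an assumption) enumerates the possible deterministic behaviours during Phase I, completes Phase II optimally in each instance, and confirms that no algorithm achieves an expected inverse ratio above 17/18.

from itertools import product

# the four parallel classes of the RBIBD(9,3,1) used in the proof
CLASSES = {
    "I":   [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}],
    "II":  [{1, 4, 7}, {2, 5, 8}, {3, 6, 9}],
    "III": [{1, 5, 9}, {2, 6, 7}, {3, 4, 8}],
    "IV":  [{1, 6, 8}, {2, 4, 9}, {3, 5, 7}],
}
PHASE1 = CLASSES["I"] + CLASSES["II"]          # neighbourhoods of w1..w6 (identical in G_0 and G_1)
INSTANCES = [PHASE1 + CLASSES["III"], PHASE1 + CLASSES["IV"]]   # G_0 and G_1

def best_completion(blocks, matched, t):
    """Largest matching reachable by any greedy continuation from round t on."""
    if t == len(blocks):
        return len(matched)
    free = blocks[t] - matched
    if not free:
        return best_completion(blocks, matched, t + 1)
    return max(best_completion(blocks, matched | {u}, t + 1) for u in free)

best_rho = 0.0
# During Phase I both instances are indistinguishable, so a deterministic algorithm is
# described there by one intended choice per block (a colliding choice becomes a miss).
for picks in product(*[sorted(b) for b in PHASE1]):
    matched = set()
    for block, u in zip(PHASE1, picks):
        if u not in matched:
            matched.add(u)
    sizes = [best_completion(G, set(matched), 6) for G in INSTANCES]
    best_rho = max(best_rho, sum(s / 9 for s in sizes) / 2)
print(best_rho, 17 / 18)   # both are 0.9444..., i.e. r*(s=1) >= 18/17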

The next proposition shows a generalization of the adversary for \(\delta =2\) to adversaries for any even degree.

Proposition 4

Let \(\mu _{2\delta }\) be the problem of finding a matching in a \(2\delta \)-regular bipartite graph. Then \(\mathbf {r} ^*_{\mu _{2\delta }}(s=1)\ge \frac{8\delta }{8\delta -1}\).

Proof

Let us consider the graphs \(G_1\) and \(G_2\) used in the case \(\delta =2\) (Fig. 1) and let us construct the \(\delta \)-fold blow-up \(G_i'= G_i \times \{1,\ldots ,\delta \}\), i.e., \(V(G_i')=\{(v,j)\mid v\in G_i \ \mathrm {and}\ j\in \{1,\dots ,\delta \}\}\), and \((v_a,j_1)\) is adjacent to \((v_b,j_2)\) in \(G_i'\) whenever \(v_av_b\) is an edge in \(G_i\). The two instances are given by presenting the vertices of the graphs \(G_1'\) and \(G_2'\) in the order

$$\begin{aligned} (w_1,1),\dots , (w_1,\delta ), (w_2,1),\dots , (w_2,\delta ), (w_3,1),\dots , (w_3,\delta ), (w_4,1),\dots , (w_4,\delta )\,. \end{aligned}$$

It is a simple exercise to check that the claimed bound holds by using the same reasoning as in the proof of Proposition 1.   \(\square \)
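The blow-up construction used in the proof can be written down directly. The snippet below (ours, with our vertex encoding) builds the neighbourhoods of the vertices \((w,j)\) and checks that the blow-up of a 2-regular graph is \(2\delta \)-regular.

def blow_up(neigh, delta):
    """delta-fold blow-up of Proposition 4: every vertex v becomes (v,1),...,(v,delta)
    and every edge becomes a complete bipartite graph between the two groups of copies."""
    return {(w, j): [(u, i) for u in neigh[w] for i in range(1, delta + 1)]
            for w in neigh for j in range(1, delta + 1)}

# neighbourhoods of w1..w4 in G_1 of Fig. 1 (our encoding)
G1 = {"w1": ["u1", "u2"], "w2": ["u3", "u4"], "w3": ["u1", "u3"], "w4": ["u2", "u4"]}
B = blow_up(G1, delta=3)
print(len(B[("w1", 1)]))   # 6 = 2*delta: each blown-up vertex has degree 2*delta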

3.2 Upper Bounds on the Competitive Ratio

Proposition 5

Let \(\mu _\delta \) be the problem of finding a matching in a \(\delta \)-regular bipartite graph with a randomized adversary using a single random bit. Then \(\mathbf {r} ^*_{\mu _\delta }(s=1)\le \frac{4\delta }{3\delta +1}\).

Proof

We prove that, for any adversary using one random bit, there exists an online algorithm \(\mathtt {A} \) solving \(\mu _\delta \) for a \(\delta \)-regular bipartite graph whose expected inverse competitive ratio satisfies \(\rho ^*_{\mu _\delta }(\mathtt {A},\mathtt {RI})\ge \frac{3\delta +1}{4\delta }\).

Since the adversary uses one random bit, there are two possible instances \(I_0, I_1\), each presented with probability \(\frac{1}{2}\). Let the algorithm start selecting edges for the matching assuming the instance is one of them, say \(I_0\); more precisely, let it follow a perfect matching of \(I_0\). If the presented instance is indeed \(I_0\), the algorithm achieves a perfect matching. If the presented instance is \(I_1\), assume that the algorithm realizes its guess was wrong after m vertices have been presented. Since no difference from \(I_0\) was detected before, the algorithm has selected m edges for the matching up to this point.

Fig. 3. Counting the number of edges when an error is detected.

Let \(G_1=(U\cup W,E)\) be the graph presented by the adversary in instance \(I_1\). Let \(A_0 \subset W\) be the subset of vertices in W that have been presented up to time step m, and let \(B_0\subset U\) be the subset of vertices in U that have been chosen for the matching; \(A_1\subset W\) and \(B_1\subset U\) contain the remaining vertices of the graph. Let \(e_b\) be the number of edges that have already appeared between the sets \(A_0\) and \(B_1\). This situation is illustrated in Fig. 3.

Let us point out the following facts:

  • \(e_b \le (n-m)(\delta -1)\), since every vertex of \(B_1\) has degree \(\delta \) and at least one of its edges is not incident to \(A_0\): otherwise its partner in the perfect matching of \(I_0\) followed by the algorithm would already have been presented, and the vertex would have been matched.

  • \(e_b \le (\delta -1)m\), since the number of edges going from \(A_0\) to \(B_0\) is at least m.

  • The number of edges going from \(A_1\) to \(B_0\) is also \(e_b\).

At this point, the algorithm knows exactly which graph is being presented by the adversary, and it can therefore proceed as in an offline model (having discovered that the instance is not \(I_0\), it must be \(I_1\)).

Let h be the number of edges missing from the matching constructed by the algorithm. It is easy to see that \(h \le e_b/\delta \): Take the subgraph induced by the vertex sets \(A_1\) and \(B_1\) and add \(e_b\) edges so that it becomes \(\delta \)-regular. Being a \(\delta \)-regular bipartite (multi)graph, it can be edge-colored with \(\delta \) colors, and this coloring is equivalent to a partition of the edge set into \(\delta \) perfect matchings. Therefore, at least one of those matchings contains at most \(e_b/\delta \) of the added edges, and the algorithm can match all but at most \(e_b/\delta \) of the remaining vertices.

Thus, \(h \le \frac{e_b}{\delta }\le \frac{\delta -1}{\delta }\min \{m, n-m\}\). The right-hand side is largest when \(m=\frac{n}{2}\), where it equals \(\frac{\delta -1}{\delta }\cdot \frac{n}{2}\), and therefore the number of edges in the matching is at least \(n-h\ge \frac{\delta +1}{2\delta }\,n\).

Finally, the expected inverse competitive ratio satisfies \(\rho \ge \frac{1+\frac{\delta +1}{2\delta }}{2}= \frac{3\delta +1}{4\delta }\).   \(\square \)
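The strategy analyzed in the proof can be sketched in Python as follows; the encoding and helper names are ours, and the matching routine is a plain augmenting-path search, not part of the paper. The algorithm guesses \(I_0\), follows one of its perfect matchings, and switches to an offline completion as soon as an arriving neighbourhood differs from the guess.

from typing import Dict, List, Set

Graph = Dict[str, List[str]]          # arriving w-vertex -> its neighbours in U

def max_matching(ws: List[str], adj: Graph, banned: Set[str]) -> Dict[str, str]:
    """Maximum matching of the vertices in ws into the U-vertices not in banned
    (Kuhn's augmenting-path algorithm; fine for small illustrative instances)."""
    match_u: Dict[str, str] = {}      # u -> w
    def augment(w: str, seen: Set[str]) -> bool:
        for u in adj[w]:
            if u in banned or u in seen:
                continue
            seen.add(u)
            if u not in match_u or augment(match_u[u], seen):
                match_u[u] = w
                return True
        return False
    for w in ws:
        augment(w, set())
    return {w: u for u, w in match_u.items()}

def one_bit_strategy(I0: Graph, presented: Graph, order: List[str]) -> Dict[str, str]:
    """Follow a perfect matching of the guessed instance I0; as soon as an arriving
    neighbourhood differs from I0, the instance is known and the rest is matched offline."""
    plan = max_matching(order, I0, set())        # perfect matching of the guess I0
    matching: Dict[str, str] = {}
    used: Set[str] = set()
    for t, w in enumerate(order):
        if set(presented[w]) != set(I0[w]):      # wrong guess detected at step t
            matching.update(max_matching(order[t:], presented, used))
            return matching
        matching[w] = plan[w]
        used.add(plan[w])
    return matching                              # the guess was right: perfect matching

# toy usage with the instances of Fig. 1 (dict encoding is ours)
I0 = {"w1": ["u1", "u2"], "w2": ["u3", "u4"], "w3": ["u1", "u3"], "w4": ["u2", "u4"]}
I1 = {"w1": ["u1", "u2"], "w2": ["u3", "u4"], "w3": ["u1", "u4"], "w4": ["u2", "u3"]}
print(len(one_bit_strategy(I0, I1, ["w1", "w2", "w3", "w4"])))   # 3 of the 4 vertices matched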

4 Adversaries with a Large Number of Random Bits

Let us consider the set \(\mathtt {RI} _{k\delta }\) formed by the instances \(\mathtt {I} _i(G_i)\) in which \(G_i=(U\cup W,E)\) is a \(\delta \)-regular bipartite graph with \(n=|U|=|W|= k\delta \). For any instance \(\mathtt {I} _i(G_i)\), let the adversary present the vertices \(w\in W\) in \(\delta \) phases. In phase i, \(1\le i\le \delta \), a subset of k vertices \(F_i\), \(F_i\subset W\), is presented in such a way that \(\varGamma (w)\cap \varGamma (w')=\emptyset \), for any two distinct vertices \(w,w'\in F_i\), where \(\varGamma (v)\) stands for the set of neighbors of vertex v.

In this section, we want to compute the competitive ratio \(\mathbf {r} ^*_{\mathtt {RI} _{k\delta }}\) for the randomized adversary given by the set \(\mathtt {RI} _{k\delta }\). Notice that, with such an instance set, all deterministic greedy algorithms have the same expected behavior, since at each time step all available edges look the same to the algorithm and any deterministic choice has the same probability of being a good one.

Theorem 3

The competitive ratio for the randomized adversary given by the set \(\mathtt {RI} _{k\delta }\) for large values of k is \(\mathbf {r} ^*_{\mathtt {RI} _{k\delta }} = \frac{\delta }{\delta -g(\delta )}\) with \(g(\delta )=\sum _{i=1}^\delta \left( \frac{i-1}{\delta }\right) ^\delta \).

Proof

Let \(G=(U\cup W,E)\) be the graph presented by the adversary. Let us recall that G is a \(\delta \)-regular bipartite graph with \(n=|U|=|W|= k\delta \). Let the adversary present the vertices \(w\in W\) in \(\delta \) phases. In phase i, \(1\le i\le \delta \), a subset of k vertices \(F_i\), \(F_i\subset W\), is presented in such a way that \(\varGamma (w)\cap \varGamma (w')=\emptyset \), for any two distinct vertices \(w,w'\in F_i\).

For every \(w\in F_i\), the algorithm tries to match w with a vertex of \(\varGamma (w)\). From the point of view of the algorithm the vertices of \(F_i\) are selected at random, and each vertex \(w\in F_i\) with at least one available neighbor in \(\varGamma (w)\) will produce a new matched vertex in U that will not be used in the following phases.

Let us say that a vertex of U is marked if it has already been used by the algorithm to construct the matching. In phase i the algorithm will mark \(k-N_i\) vertices of U, with \(N_i\) being the random variable that counts the number of vertices \(w\in F_i\) such that all the vertices of \(\varGamma (w)\) are already marked. Notice that \(N_1=0\), since no vertex of U is marked before the first phase, so all k vertices of \(F_1\) are matched. Moreover, since the total number of vertices of U that have been marked previous to phase i is \(\sum _{j=1}^{i-1}(k-N_j)\), \(1\le i\le \delta \), we have

$$\begin{aligned} N_i\le \left\lfloor \frac{\sum _{j=1}^{i-1} (k-N_j)}{\delta }\right\rfloor . \end{aligned}$$
(1)

Moreover, \(N_i=\sum _{w\in F_i} I_w\), where \(I_w\) is the indicator random variable of the event that all the vertices of \(\varGamma (w)\) have been previously marked. If \(M_i=\sum _{j=1}^{i-1} N_j\), \(1\le i\le \delta \), then

$$\begin{aligned} \mathbb {P}\left( I_w=1 \mid M_i=m_i\right) =\frac{\left( {\begin{array}{c}(i-1)k-m_i\\ \delta \end{array}}\right) }{\left( {\begin{array}{c}k\delta \\ \delta \end{array}}\right) }. \end{aligned}$$
(2)

(For \(i=1\) one has \(M_1=0\) and \(\mathbb {P}\left( I_w=1\right) =0\).) Therefore, the expected number of edges missed during phase i, conditioned on the event that \(m_i\) edges were missed during the previous phases 1 to \(i-1\), is given by

$$\begin{aligned}&\mathbb {E}\left( N_i \mid M_i=m_i\right) = \sum _{w\in F_i}\mathbb {E}\left( I_w \mid M_i=m_i\right) \nonumber \\&\quad = \sum _{w\in F_i}\mathbb {P}\left( I_w=1 \mid M_i=m_i\right) = k\ \frac{\left( {\begin{array}{c}(i-1)k-m_i\\ \delta \end{array}}\right) }{\left( {\begin{array}{c}k\delta \\ \delta \end{array}}\right) }. \end{aligned}$$
(3)

Notice that, for \(2\le i\le \delta \), the binomial coefficient \(\left( {\begin{array}{c}(i-1)k-m_i\\ \delta \end{array}}\right) \) will give a positive contribution only if \(m_i\le (i-1)k-\delta \), and that, for large values of k, the probability \(\mathbb {P}\left( I_w=1 \mid M_i=m_i\right) \) is asymptotically \(\left( (i-1)/\delta \right) ^\delta \), because

$$\begin{aligned} \frac{\left( {\begin{array}{c}(i-1)k-m_i\\ \delta \end{array}}\right) }{\left( {\begin{array}{c}k\delta \\ \delta \end{array}}\right) }=\prod _{j=1}^\delta \frac{(i-1)k-m_i-j+1}{k\delta -j+1}= \left( \frac{i-1}{\delta }\right) ^\delta \cdot \prod _{j=1}^\delta \frac{ 1-\frac{m_i+j-1}{(i-1)k} }{ 1-\frac{j-1}{k\delta } }. \end{aligned}$$
(4)

When the \(\delta \) phases of the algorithm have been accomplished, the total number of matched vertices is \(\sum _{i=1}^\delta (k-N_i)\) and the inverse competitive ratio can be expressed as

$$\begin{aligned} \rho =\frac{\sum _{i=1}^\delta (k-N_i)}{k\delta }=1-\frac{1}{k\delta }\sum _{i=1}^\delta N_i. \end{aligned}$$

The inverse competitive ratio \(\rho \) is a random variable and we are interested in estimating its expected value

$$\begin{aligned} \mathbb {E}(\rho )=1-\frac{1}{k\delta }\sum _{i=1}^\delta \mathbb {E}(N_i). \end{aligned}$$

A more detailed analysis of expressions (2) to (4) allows us to write (for \(2\le i\le \delta \))

$$ \left( 1-\frac{m_i+\delta -1}{(i-1)k}\right) ^\delta \le \frac{\mathbb {P}\left( I_w=1 \mid M_i=m_i\right) }{((i-1)/\delta )^\delta } \le \left( \frac{1-m_i/((i-1)k)}{1-(\delta -1)/(k\delta )}\right) ^\delta $$

and

$$ \left( 1-\frac{m_i+\delta -1}{(i-1)k}\right) ^\delta \le \frac{\mathbb {E}\left( N_i \mid M_i=m_i\right) }{k((i-1)/\delta )^\delta } \le \left( \frac{1-m_i/((i-1)k)}{1-(\delta -1)/(k\delta )}\right) ^\delta . $$

Hence, the expression

$$\begin{aligned} \frac{1}{k\delta }\sum _{i=1}^{\delta }\mathbb {E}\left( N_i \mid M_1=0, M_2=m_2,\ldots M_\delta =m_\delta \right) =\frac{1}{k\delta }\sum _{i=1}^{\delta }\mathbb {E}\left( N_i \mid M_i=m_i\right) \end{aligned}$$

is bounded from below by

$$\begin{aligned} a(k,\delta )=\frac{1}{\delta }\sum _{i=2}^\delta \left( \frac{i-1}{\delta }\right) ^\delta \left( 1-\frac{m_i+\delta -1}{(i-1)k}\right) ^\delta \end{aligned}$$

and bounded from above by

$$\begin{aligned} b(k,\delta )=\frac{1}{\delta }\sum _{i=2}^\delta \left( \frac{i-1}{\delta }\right) ^\delta \left( \frac{1-m_i/((i-1)k)}{1-(\delta -1)/(k\delta )}\right) ^\delta . \end{aligned}$$

The expected inverse competitive ratio can be calculated as

$$ \mathbb {E}\left( \rho \right) =1-\frac{1}{k\delta }\sum _{i=1}^{\delta }\mathbb {E}\left( N_i\right) =1-\mathbb {E}\left( \frac{1}{k\delta }\sum _{i=1}^{\delta }\mathbb {E}\left( N_i \mid M_i\right) \right) . $$

Therefore,

$$\begin{aligned} l(k,\delta )\le \mathbb {E}\left( \rho \right) \le u(k,\delta ) \end{aligned}$$

where

$$\begin{aligned} l(k,\delta )=1-\frac{1}{\delta }\sum _{i=2}^\delta \left( \frac{i-1}{\delta }\right) ^\delta \mathbb {E}\left( \left( \frac{1-M_i/((i-1)k)}{1-(\delta -1)/(k\delta )}\right) ^\delta \right) \end{aligned}$$

and

$$\begin{aligned} u(k,\delta )=1-\frac{1}{\delta }\sum _{i=2}^\delta \left( \frac{i-1}{\delta }\right) ^\delta \mathbb {E}\left( \left( 1-\frac{M_i+\delta -1}{(i-1)k}\right) ^\delta \right) . \end{aligned}$$

From (1), we deduce that

$$\begin{aligned} M_i=\sum _{j=1}^{i-1}N_j\le \sum _{j=1}^{i-1}\frac{(j-1)k}{\delta }=\frac{(i-1)(i-2)}{2\delta }. \end{aligned}$$

Therefore, \(u(k,\delta )\le u'(k,\delta )\) where

$$\begin{aligned} u'(k,\delta )= 1-\frac{1}{\delta }\sum _{i=2}^\delta \left( \frac{i-1}{\delta }\right) ^\delta \left( 1-\frac{(i-1)(i-2)/{2\delta }+\delta -1}{(i-1)k}\right) ^\delta \end{aligned}$$

and

$$\begin{aligned} u'(k,\delta )\longrightarrow 1-\frac{1}{\delta }\sum _{i=1}^\delta \left( \frac{i-1}{\delta }\right) ^\delta \text { as } k\longrightarrow \infty . \end{aligned}$$

Similarly, since \(M_i\ge 0\), we have \(l(k,\delta )\ge l'(k,\delta )\) where

$$\begin{aligned} l'(k,\delta )= 1-\frac{1}{\delta }\sum _{i=2}^\delta \left( \frac{i-1}{\delta }\right) ^\delta \left( \frac{1}{1-(\delta -1)/(k\delta )}\right) ^\delta , \end{aligned}$$

and, also,

$$\begin{aligned} l'(k,\delta )\longrightarrow 1-\frac{1}{\delta }\sum _{i=1}^\delta \left( \frac{i-1}{\delta }\right) ^\delta \text { as }k\longrightarrow \infty . \end{aligned}$$

Finally, since \(l'(k,\delta )\le \mathbb {E}\left( \rho \right) \le u'(k,\delta )\), we conclude that

$$\begin{aligned} \lim _{k \rightarrow \infty } \mathbb {E}\left( \rho \right) = 1-\frac{g(\delta )}{\delta } \end{aligned}$$

with

$$\begin{aligned} g(\delta )=\sum _{i=1}^\delta \left( \frac{i-1}{\delta }\right) ^\delta . \end{aligned}$$

The value for the competitive ratio \(\mathbf {r} ^*_{\mathtt {RI} _{k\delta }}\) is obtained by taking into account that \(\mathbf {r} ^*_{\mathtt {RI} _{k\delta }}= \frac{1}{\mathbb {E}\left( \rho \right) }\).   \(\square \)

Theorem 3 gives the competitive ratio for the online bipartite matching problem for regular graphs against a specific randomized adversary choosing among \({\text {O}}\left( (n!)^2\right) \) instances. It therefore provides a lower bound on the competitive ratio of the matching problem for regular bipartite graphs in the randomized adversary model, provided the adversary has enough random bits to select among these instances.

It can be shown that g(m) is a monotonically increasing function and that

$$\begin{aligned} \lim _{m\rightarrow \infty } g(m)=\lim _{m\rightarrow \infty }\sum _{i=1}^m \left( \frac{i-1}{m}\right) ^m=\frac{1}{e-1}. \end{aligned}$$
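The following short Python snippet (ours) simply evaluates the expressions above: \(g(\delta )\), the ratio \(\frac{\delta }{\delta -g(\delta )}\) of Theorem 3, and the limit \(\frac{1}{e-1}\). Note that for \(\delta =2\) the ratio equals 8/7, consistent with Theorem 2.

import math

def g(m):
    return sum(((i - 1) / m) ** m for i in range(1, m + 1))

for delta in (2, 3, 4, 5, 10, 50):
    print(delta, round(g(delta), 4), round(delta / (delta - g(delta)), 4))
print(1 / (math.e - 1))   # limit of g(m) as m grows: about 0.5820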

5 Conclusions

We have studied the online bipartite matching problem for regular graphs with a randomized adversary. We have determined the competitive ratio for the case \(\delta =2\) for any number of random bits. This turns out to be an interesting example of a problem in which one random bit used by the adversary makes the problem as hard as for many random bits. It is also an example in which the competitive ratio for the randomized adversary model is equal to the ratio for the randomized algorithm model. We also gave upper and lower bounds for other values of the degree for adversaries using one random bit.

Finally, we studied the case of adversaries with a large number of random bits and found the competitive ratio for a given class of instances. We were not able to find an upper bound on the competitive ratio in the general case, but we are convinced that the exact value of the competitive ratio should not be far from the given result.