1 Introduction

Restricted Boltzmann machines (RBMs, Smolensky 1986; Hinton 2002) are undirected graphical models describing stochastic neural networks. They have attracted much attention recently as building blocks of deep belief networks (Hinton and Salakhutdinov 2006). Learning an RBM corresponds to maximizing the likelihood of the parameters given data. Training large RBMs by steepest ascent on the log-likelihood gradient is in general computationally intractable, because the gradient involves averages over an exponential number of terms. Therefore, the computationally demanding part of the gradient is approximated by Markov chain Monte Carlo (MCMC, see, e.g., Neal 1993) methods usually based on Gibbs sampling (e.g., Hinton 2002; Tieleman and Hinton 2009; Desjardins et al. 2010). The higher the mixing rate of the Markov chain, the fewer sampling steps are usually required for a proper MCMC approximation. For RBM learning algorithms it has been shown that the bias of the approximation increases with increasing absolute values of the model parameters (Bengio and Delalleau 2009; Fischer and Igel 2011) and that this can indeed lead to severe distortions of the learning process (Fischer and Igel 2010). Thus, increasing the mixing rate of the Markov chains in RBM training is highly desirable.

In this paper, we propose to employ a Metropolis-type transition operator for RBMs that maximizes the probability of state changes in the framework of periodic sampling and can lead to a faster mixing Markov chain. This operator is related to the Metropolized Gibbs sampler introduced by Liu (1996) and the flip-the-spin operator with Metropolis acceptance rule used in Ising models (see related methods in Sect. 3) and is, thus, referred to as flip-the-state operator. In contrast to these methods, our main theoretical result is that the proposed operator is also guaranteed to lead to an ergodic and thus properly converging Markov chain when using a periodic updating scheme (i.e., a deterministic scanning policy). It can replace Gibbs sampling in existing RBM learning algorithms without introducing computational overhead.

After a brief overview of RBM training and Gibbs sampling in Sect. 2, Sect. 3 introduces the flip-the-state transition operator and shows that the induced Markov chain converges to the RBM distribution. In Sect. 4 we empirically analyze the mixing behavior of the proposed operator compared to Gibbs sampling by looking at the second largest eigenvalue modulus (SLEM), the autocorrelation time, and the frequency of class changes in sample sequences. While the SLEM describes the speed of convergence to the equilibrium distribution, the autocorrelation time concerns the variance of an estimate when averaging over several successive samples of the Markov chain. The class changes quantify mixing between modes in our test problems. Furthermore, the effects of the proposed sampling procedure on learning in RBMs are studied. We discuss the results and conclude in Sects. 5 and 6.

2 Background

An RBM is an undirected graphical model with a bipartite structure (Smolensky 1986; Hinton 2002) consisting of one layer of m visible variables \(\mathbf{V}=(V_1,\ldots,V_m)\) and one layer of n hidden variables \(\mathbf{H}=(H_1,\ldots,H_n)\) taking values \((\mathbf{v},\mathbf{h})\in\Omega:=\{0,1\}^{m+n}\). The modeled joint distribution is \(p(\mathbf{v},\mathbf{h})={e^{-\mathcal {E}(\mathbf{v}, \mathbf{h})}}/\sum_{ \mathbf{v}, \mathbf{h}}e^{-\mathcal {E}(\mathbf{v},\mathbf{h})} \) with energy \(\mathcal {E}\) given by \(\mathcal {E}(\mathbf{v},\mathbf{h})= -\sum_{i=1}^{n}\sum_{j=1}^{m} w_{ij} h_{i} v_{j} - \sum_{j=1}^{m} b_{j} v_{j} - \sum_{i=1}^{n} c_{i} h_{i}\) with weights \(w_{ij}\) and biases \(b_j\) and \(c_i\) for \(i\in\{1,\ldots,n\}\) and \(j\in\{1,\ldots,m\}\), jointly denoted as \(\boldsymbol{\theta}\). By \(\mathbf{v}_{-i}\) and \(\mathbf{h}_{-i}\) we denote the vectors of the states of all visible and hidden variables, respectively, except the ith one.
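For concreteness, the following sketch (Python/NumPy, our own illustration rather than code from the paper) evaluates the energy and the factorized conditional distributions that follow from the bipartite structure; W is assumed to be the n×m weight matrix, b and c the visible and hidden bias vectors:

```python
import numpy as np

def energy(W, b, c, v, h):
    # E(v, h) = -h^T W v - b^T v - c^T h
    return -h @ W @ v - b @ v - c @ h

def p_h_given_v(W, c, v):
    # p(H_i = 1 | v) = sigmoid(sum_j w_ij v_j + c_i); factorizes over i
    return 1.0 / (1.0 + np.exp(-(W @ v + c)))

def p_v_given_h(W, b, h):
    # p(V_j = 1 | h) = sigmoid(sum_i w_ij h_i + b_j); factorizes over j
    return 1.0 / (1.0 + np.exp(-(W.T @ h + b)))
```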

Typical RBM training algorithms perform steepest ascent on approximations of the log-likelihood gradient. One of the most popular is Contrastive Divergence (CD, Hinton 2002), which approximates the gradient of the log-likelihood by \(- \!\sum_{{\mathbf{h}}} p(\mathbf{h}|\mathbf{v}^{(0)}) \frac{\partial \mathcal {E}(\mathbf{v}^{(0)},\mathbf{h})}{\partial\boldsymbol{\theta}} + \!\sum_{{\mathbf{h}}} p(\mathbf{h}|\mathbf{v}^{(k)}) \frac{\partial \mathcal {E}(\mathbf{v}^{(k)}, \mathbf{h})}{\partial\boldsymbol{\theta}}\), where \(\mathbf{v}^{(k)}\) is a sample obtained after k steps of Gibbs sampling starting from a training example \(\mathbf{v}^{(0)}\).
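As an illustration, a minimal sketch of the CD-k approximation of the weight gradient for a single training example (biases omitted); the shape conventions and function names are our assumptions, not code from the original work:

```python
import numpy as np

def cd_k_weight_gradient(W, b, c, v0, k, rng):
    # W has shape (n, m); v0 is a binary training example of length m
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    vk = v0.copy()
    for _ in range(k):  # k steps of block Gibbs sampling starting from the training example
        h = (rng.random(c.shape) < sigmoid(W @ vk + c)).astype(float)
        vk = (rng.random(b.shape) < sigmoid(W.T @ h + b)).astype(float)
    # positive phase minus negative phase; the sums over h in the formula above
    # reduce to the conditional means p(H_i = 1 | v)
    return np.outer(sigmoid(W @ v0 + c), v0) - np.outer(sigmoid(W @ vk + c), vk)
```

The resulting matrix, scaled by the learning rate, would be added to W in one gradient-ascent step.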

Several variants of CD have been proposed. For example, in Persistent Contrastive Divergence (PCD, Tieleman 2008) and its refinement Fast PCD (Tieleman and Hinton 2009) the Gibbs chain is not initialized by a training example but maintains its current value between approximation steps. Parallel Tempering (PT, also known as replica exchange Monte Carlo sampling) has also been applied to RBMs (Cho et al. 2010; Desjardins et al. 2010; Salakhutdinov 2010). It introduces supplementary Gibbs chains that sample from more and more smoothed variants of the true probability distribution and allows samples to swap between chains. This leads to faster mixing, but introduces computational overhead.

In general, a homogeneous Markov chain on a finite state space Ω with N elements can be described by an N×N transition probability matrix \(A=(a_{\mathbf{x},\mathbf{y}})_{\mathbf{x},\mathbf{y}\in\Omega}\), where \(a_{\mathbf{x},\mathbf{y}}\) is the probability that the Markov chain being in state x changes its state to y in the next time step. We denote the one-step transition probability \(a_{\mathbf{x},\mathbf{y}}\) by \(A(\mathbf{x},\mathbf{y})\) and the n-step transition probability (the corresponding entry of the matrix \(A^n\)) by \(A^n(\mathbf{x},\mathbf{y})\). The transition matrices are also referred to as transition operators. We write p for the N-dimensional probability vector corresponding to some distribution p over Ω.

When performing periodic Gibbs sampling in RBMs, we visit all hidden and all visible variables alternately in a block-wise fashion and update them according to their conditional probability given the state of the other layer (i.e., \(p(h_i|\mathbf{v})\), i=1,…,n and \(p(v_j|\mathbf{h})\), j=1,…,m, respectively). Thus, the Gibbs transition operator G can be decomposed into two operators \(G_h\) and \(G_v\) (with \(G=G_h G_v\)) changing only the state of the hidden layer or the visible layer, respectively. The two operators can be further decomposed into a set of basic transition operators \(G_k\), k=1,…,(n+m), each updating just a single variable based on the conditional probabilities. An example of such a transition of a single variable based on these probabilities is depicted in the transition diagram in Fig. 1(a).
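A minimal sketch of one step of this block-wise Gibbs operator (first \(G_h\), then \(G_v\)), under the same shape conventions as in the sketch above:

```python
import numpy as np

def gibbs_step(W, b, c, v, rng):
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # G_h: resample all hidden units given the current visible layer
    h = (rng.random(c.shape) < sigmoid(W @ v + c)).astype(float)
    # G_v: resample all visible units given the new hidden layer
    v = (rng.random(b.shape) < sigmoid(W.T @ h + b)).astype(float)
    return v, h
```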

Fig. 1
figure 1

Transition diagrams for a single variable \(V_i\): (a) updated by Gibbs sampling, (b) updated by the flip-the-state transition operator (here \(p(V_i=0|\mathbf{h})<p(V_i=1|\mathbf{h})\))

3 The flip-the-state transition operator

In order to increase the mixing rate of the Markov chain, it seems desirable to change the basic transition operator of the Gibbs sampler in such a way that each single variable tends to change its state rather than sticking to the same state. This can be done by making the sampling probability of a single neuron dependent on its current state. Transferring this idea to the transition graph shown in Fig. 1, this means that we wish to decrease the probabilities associated with the self-loops and increase the transition probabilities between different states as much as possible. Of course, we have to ensure that the resulting Markov chain remains ergodic and still has the RBM distribution p as equilibrium distribution.

The transition probabilities are maximized by scaling the probability for a single variable to change from the less probable to the more probable state up to one (making this transition deterministic), while scaling the probability of the reverse transition by the same factor. In the case of two states with exactly the same conditional probability (not relevant in practice, but important for the theoretical analysis), we use the transition probabilities of Gibbs sampling to avoid a non-ergodic Markov chain.

These considerations can be formalized by first defining a variable \(v^{*}_{i}\) that indicates what the most probable state of the random variable V i is or if both states are equally probable:

$$ v^*_i = \begin{cases} 1 ,& \mbox{if}\ p(V_i =1| \mathbf{h}) > p(V_i =0| \mathbf{h}) \\ 0 ,& \mbox{if}\ p(V_i =1| \mathbf{h}) < p(V_i =0| \mathbf{h}) \\ -1 , &\mbox{if}\ p(V_i =1| \mathbf{h}) = p(V_i =0| \mathbf{h}) . \end{cases} $$
(1)

Now we define the flip-the-state transition operator T as follows:

Definition 1

For i=1,…,m, let the basic transition operator \(T_i\) for the visible unit \(V_i\) be defined through its transition probabilities: \(T_i((\mathbf{v},\mathbf{h}),(\mathbf{v}',\mathbf{h}'))=0\) if (v,h) and (v′,h′) differ in a variable other than \(V_i\), and as

$$ T_i \bigl( (\mathbf{v}, \mathbf{h}), \bigl(\mathbf {v'}, \mathbf {h'}\bigr) \bigr) = \begin{cases} \frac{p(v_i'|\mathbf{h})}{p(v_i|\mathbf{h})} , &\mbox{if}\ v_i^* = v_i \neq v_i' \\ 1-\frac{p(1-v_i'|\mathbf{h})}{p(v_i|\mathbf{h})} , &\mbox{if}\ v_i^* = v_i = v_i' \\ 1 , &\mbox{if}\ v_i \neq v_i' = v_i^* \\ 0 , &\mbox{if}\ v_i = v_i' \neq v_i^* \\ \frac{1}{2}, &\mbox{if}\ v_i^* = -1 \end{cases} $$
(2)

otherwise. The transition matrix containing the transition probabilities of the visible layer is given by \(T_v=\prod_i T_i\). The transition matrix for the hidden layer \(T_h\) is defined analogously, and the flip-the-state transition operator is given by \(T=T_h T_v\).
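A sketch of the base transition \(T_i\) for a single binary unit, written in terms of p1 = p(V_i = 1 | h); the function name and interface are our own, but the case distinction follows Eq. (2):

```python
def flip_the_state_update(p1, current, rng):
    # current is the present state of V_i (0 or 1); rng is, e.g., numpy.random.default_rng()
    if p1 == 0.5:                       # tie: fall back to the Gibbs transition (probability 1/2)
        return int(rng.random() < 0.5)
    more_probable = int(p1 > 0.5)       # this is v_i^* from Eq. (1)
    if current != more_probable:        # less probable state: move to v_i^* deterministically
        return more_probable
    p_current = p1 if current == 1 else 1.0 - p1
    # in the more probable state: flip with probability p(other state | h) / p(current state | h)
    return 1 - current if rng.random() < (1.0 - p_current) / p_current else current
```

Applying this update to all visible units in turn realizes \(T_v\); doing the same for the hidden units realizes \(T_h\).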

3.1 Activation function & computational complexity

An RBM corresponds to a stochastic, recurrent neural network with activation function \(\sigma(x)=1/(1+e^{-x})\). Similarly, the transition probabilities defined in (2) can be interpreted as resulting from an activation function depending not only on the weighted input to a neuron and its bias but also on the neuron’s current state \(V_j\) (or analogously \(H_i\)):

$$ \sigma'(x) = \begin{cases} \min\lbrace e^{x}, 1 \rbrace&\mbox{if}\ V_j = 0 \\ \max\lbrace1 - e^{-x}, 0 \rbrace&\mbox{if } V_j = 1. \end{cases} $$
(3)

Corresponding graphs are shown in Fig. 2.
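The state-dependent activation of Eq. (3) can be written down directly; in this sketch (our naming conventions) x denotes the weighted input plus bias of the unit, and the returned value is the probability that the unit takes state 1 after the update:

```python
import numpy as np

def sigma_prime(x, current_state):
    # Eq. (3): probability of ending in state 1, depending on the current state
    if current_state == 0:
        return min(np.exp(x), 1.0)
    return max(1.0 - np.exp(-x), 0.0)
```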

Fig. 2
figure 2

Activation function for Gibbs sampling (black) and for transitions based on T when the current state is 0 (red, dashed) or 1 (blue, dotted) (Color figure online)

The differences in computational complexity between the activation functions σ and σ′ can be neglected. On the one hand, the transition operator described here requires a switch based on the current state of the neuron; on the other hand, it saves the computationally expensive call to the random number generator in deterministic transitions. Furthermore, in the asymptotic case, the running time of a sampling step is dominated by the matrix multiplications, while the number of activation function evaluations in one step grows only linearly with the number of neurons.

If the absolute value of the sum of the weighted inputs and the bias is large (i.e., the conditional probability of one of the two states is extremely high), the transition probabilities between states under Gibbs sampling are already almost deterministic. Thus, the difference between G and T decreases in this case. This is illustrated in Fig. 2.

3.2 Related work

Both G and T are (local) Metropolis algorithms (Neal 1993). A Metropolis algorithm proposes states with a proposal distribution and accepts them in a way which ensures detailed balance. In this view, Gibbs sampling corresponds to using the proposal distribution “flip current state” and the Boltzmann acceptance probability \(\frac{p(\mathbf{x}')}{p(\mathbf{x}) + p(\mathbf{x}')}\), where x and x′ denote the current and the proposed state, respectively. This proposal distribution has also been used with the Metropolis acceptance probability \(\min ( 1, \frac{p(\mathbf{x}')}{p(\mathbf{x})} )\) for sampling from Ising models. The differences between the two acceptance functions are discussed, for example, by Neal (1993). He comes to the conclusion that “the issues still remain unclear, though it appears that common opinion favours using the Metropolis acceptance function in most circumstances” (p. 69).

The work by Peskun (1973) and Liu (1996) shows that the Metropolis acceptance function is optimal with respect to the asymptotic variance of the Monte Carlo estimate of the quantity of interest. However, this result only holds if the variables to be updated are picked randomly in each step of the (local) algorithm. Thus, it is not applicable to the typical RBM training scenario, where block-wise sampling in a predefined order is used. In this scenario, it can indeed happen that the flip-the-state proposal combined with the Metropolis acceptance function leads to non-ergodic chains, as shown by the counterexamples given by Neal (1993, p. 69).

The transition operator T also uses the Metropolis acceptance probability, but the proposal distribution differs from the one used in Ising models in one detail, namely that it selects a state at random if the conditional probabilities of both states are equal. This is important from a theoretical point of view, because it ensures ergodicity as proven in the next section. This is the reason why our method does not suffer from the problems mentioned above.

Furthermore, Breuleux et al. (2011) discuss an idea similar to the one underlying our transition operator as a theoretical framework for understanding fast mixing: the probability of changing states is increased by defining a new transition matrix \(A'\) based on an existing transition matrix \(A\) via \(A'=(A-\lambda I)(1-\lambda)^{-1}\), where \(\lambda\le\min_{\mathbf{x}\in\Omega}A(\mathbf{x},\mathbf{x})\) and \(I\) is the identity matrix. Our method corresponds to applying this kind of transformation not to the whole transition matrix, but to the transition probabilities of a single binary variable (i.e., to the base transition operator). This makes the method not only computationally feasible in practice, but also more effective, because it allows us to redistribute more probability mass (the redistribution is not limited by \(\min_{\mathbf{x}\in\Omega}A(\mathbf{x},\mathbf{x})\)), so that more than one entry of the new transition matrix is 0.
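To make the correspondence concrete, the following small numerical example (our illustration, not taken from either paper) applies the transformation of Breuleux et al. (2011) to the 2×2 Gibbs base transition matrix of a single binary variable with p = p(V_i = 1 | h) = 0.8; the result matches the flip-the-state probabilities of Eq. (2):

```python
import numpy as np

p = 0.8                                  # p(V_i = 1 | h); state 1 is the more probable state
A = np.array([[1 - p, p],                # Gibbs base transition: the new state is drawn from
              [1 - p, p]])               # p(. | h) independently of the old state
lam = min(np.diag(A))                    # largest admissible lambda for this 2x2 matrix (= 0.2)
A_prime = (A - lam * np.eye(2)) / (1 - lam)
print(A_prime)                           # [[0. 1.], [0.25 0.75]]: exactly the transitions of Eq. (2)
```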

3.3 Properties of the transition operator

To prove that a Markov chain based on the suggested transition operator T converges to the probability distribution p defined by the RBM, it has to be shown that p is invariant with respect to T and that the Markov chain is irreducible and aperiodic.

As stated above, the described transition operator belongs to the class of local Metropolis algorithms. This implies that detailed balance holds for all the base transition operators (see, e.g., Neal 1993). If p is invariant w.r.t. the basic transition operators it is also invariant w.r.t. the concatenated transition matrix T.

However, there is no general proof of ergodicity of Metropolis algorithms if neither the proposal distribution nor the acceptance distribution is strictly positive and the base transitions are applied deterministically in a fixed order. Therefore, irreducibility and aperiodicity still remain to be proven (see, e.g., Neal 1993, p. 56).

To show irreducibility, we need some definitions and a lemma first. For a fixed hidden state h let us define v max(h) as the visible state that maximizes the probability of the whole state,

$$ \mathbf{v}_{\max}(\mathbf{h}) := \arg\max_{\mathbf{v}} p(\mathbf{v},\mathbf{h}) , $$
(4)

and analogously

$$ \mathbf{h}_{\max}(\mathbf{v}) := \arg\max_{\mathbf{h}} p(\mathbf{v},\mathbf{h}) . $$
(5)

To make the argmax unique, ties are broken by taking the greater state according to some arbitrary predefined strict total order ≺.

Furthermore, let \(\mathcal{M}\) be the set of states for which the probability cannot be increased by changing either only the hidden or only the visible states:

$$ \mathcal{M} = \bigl\lbrace (\mathbf{v},\mathbf{h} ) \in\varOmega \,\big|\, (\mathbf{v},\mathbf{h} ) = \bigl(\mathbf{v}_{\max}(\mathbf{h}), \mathbf{h} \bigr) = \bigl( \mathbf{v}, \mathbf{h}_{\max }(\mathbf{v}) \bigr) \bigr\rbrace. $$
(6)

Note that \(\mathcal{M}\) is not the empty set, since it contains at least the most probable state \(\arg\max_{(\mathbf{v},\mathbf{h})} p(\mathbf{v},\mathbf{h})\). Now we have:

Lemma 1

From every state (v,h)∈Ω one can reach (v max(h),h) by applying the visible transition operator T v once and (v,h max(v)) in one step of T h . It is possible to reach every state (v,h)∈Ω in one step of T v from (v max(h),h) and in one step of T h from (v,h max(v)).

Proof

From the definition of v max(h) and the independence of the conditional probabilities of the visible variables given the state of the hidden layer it follows:

$$ p \bigl( \mathbf{v}_{\max}(\mathbf{h})|\mathbf{h} \bigr) = \max_{v_1,\ldots,v_m} \prod_i p(v_i | \mathbf{h}). $$
(7)

Thus, in \(\mathbf{v}_{\max}(\mathbf{h})\) every single visible variable is in the state with the higher conditional probability (i.e., in \(v_{i}^{*}\)) or both states are equally probable (in which case \(v_{i}^{*}=-1\)). From the definition of the base transitions (2) it follows that \(T_i ( (\mathbf{v},\mathbf{h}), (v_{\max}(\mathbf{h})_i,\mathbf{v}_{-i},\mathbf{h}) ) > 0\) and \(T_i ( (\mathbf{v}_{\max}(\mathbf{h}),\mathbf{h}), (v_i,\mathbf{v}_{\max}(\mathbf{h})_{-i},\mathbf{h}) ) > 0\). So we get for all (v,h)∈Ω:

$$\begin{aligned} T_v \bigl( (\mathbf{v},\mathbf{h}), \bigl(\mathbf{v}_{\max}(\mathbf{h}), \mathbf{h} \bigr) \bigr) =& \prod_i T_i \bigl( (\mathbf{v},\mathbf{h}), \bigl(v_{\max}(\mathbf{h})_i,\mathbf{v}_{-i},\mathbf{h}\bigr) \bigr) > 0 \end{aligned}$$
(8)
$$\begin{aligned} T_v \bigl(\bigl(\mathbf{v}_{\max}(\mathbf{h}),\mathbf{h}\bigr), (\mathbf{v},\mathbf{h}) \bigr) =& \prod_i T_i \bigl( \bigl(\mathbf{v}_{\max}(\mathbf{h}),\mathbf{h}\bigr), \bigl(v_i, \mathbf{v}_{\max}(\mathbf{h})_{-i},\mathbf{h} \bigr) \bigr)> 0. \end{aligned}$$
(9)

This holds equivalently for the hidden transition operator T h and (v,h max(v)). For all (v,h)∈Ω:

$$\begin{aligned} T_h \bigl( (\mathbf{v},\mathbf{h}), \bigl(\mathbf{v}, \mathbf{h}_{\max}( \mathbf{v})\bigr) \bigr) =& \prod_i T_i \bigl( (\mathbf{v},\mathbf{h}), \bigl(\mathbf{v}, h_{\max}(\mathbf{v})_i,\mathbf{h}_{-i}\bigr) \bigr) > 0 \end{aligned}$$
(10)
$$\begin{aligned} T_h \bigl(\bigl(\mathbf{v}, \mathbf{h}_{\max}(\mathbf{v})\bigr), (\mathbf{v},\mathbf{h}) \bigr) =& \prod_i T_i \bigl(\bigl(\mathbf{v}, \mathbf{h}_{\max}(\mathbf{v})\bigr), \bigl(\mathbf{v},h_i,\mathbf{h}_{\max }(\mathbf{v})_{-i}\bigr) \bigr)> 0. \end{aligned}$$
(11)

 □

Now we prove the irreducibility:

Theorem 1

The Markov chain induced by T is irreducible:

$$ \forall(\mathbf{v}, \mathbf{h}), \bigl(\mathbf{v}', \mathbf{h}' \bigr) \in\varOmega: \exists n > 0:\quad T^n \bigl((\mathbf{v}, \mathbf{h}),\bigl( \mathbf{v}', \mathbf{h}'\bigr) \bigr) > 0. $$
(12)

Proof

The proof is divided into three steps showing:

  (i) From every state (v,h)∈Ω one can reach an element of \(\mathcal{M}\) in a finite number of transitions, i.e., \(\forall(\mathbf{v},\mathbf{h}) \in\varOmega\ \exists(\mathbf{v}^{*},\mathbf{h}^{*}) \in\mathcal{M}\) and \(n \in\mathbb{N}\) with \(T^{n} ( (\mathbf{v},\mathbf{h}), (\mathbf{v}^{*},\mathbf{h}^{*}) ) > 0\).

  (ii) For every state (v,h)∈Ω there exists a state \((\mathbf{v}^{*},\mathbf{h}^{*}) \in\mathcal{M}\) from which it is possible to reach (v,h) in a finite number of transitions, i.e., \(\forall(\mathbf{v},\mathbf{h}) \in\varOmega\ \exists(\mathbf{v}^{*},\mathbf{h}^{*}) \in \mathcal{M}\) and \(n \in\mathbb{N}\) with \(T^n ( (\mathbf{v}^{*},\mathbf{h}^{*}),(\mathbf{v},\mathbf{h}) ) > 0\).

  (iii) Any transition between two arbitrary elements of \(\mathcal{M}\) is possible, i.e., \(\forall(\mathbf{v}^{*},\mathbf{h}^{*}), (\mathbf{v}^{**},\mathbf{h}^{**})\in \mathcal{M}: T ( (\mathbf{v}^{*},\mathbf{h}^{*}),(\mathbf{v}^{**},\mathbf{h}^{**}) ) > 0\).

Step (i): Let us define a sequence \(((\mathbf{v}_{k}, \mathbf{h}_{k}) )_{k \in\mathbb{N}}\) with v 0:=v, h 0:=h and h k :=h max(v k−1) and v k :=v max(h k ) for k>0. From the definition of v max and h max it follows that (v k−1,h k−1)≠(v k ,h k ) unless \((\mathbf{v}_{k-1},\mathbf{h}_{k-1}) \in\mathcal{M}\) and that no state in \(\varOmega\setminus\mathcal{M}\) is visited twice. The latter follows from the fact that in the sequence two successive states (v k ,h k ) and (v k+1,h k+1) from \(\varOmega\setminus\mathcal{M}\) have either increasing probabilities or (v k ,h k )≺(v k+1,h k+1). Since Ω is a finite set, such a sequence must reach a state \((\mathbf{v}_{n}, \mathbf{h}_{n}) = (\mathbf{v}_{n+i}, \mathbf{h}_{n+i}) \in\mathcal{M}, i \in\mathbb{N}\) after a finite number of steps n.

Finally, this sequence can be produced by T since from Eq. (8) and Eq. (10) it follows that ∀k>0:

$$\begin{aligned} &T \bigl((\mathbf{v}_{k-1},\mathbf{h}_{k-1}) ,(\mathbf{v}_k, \mathbf{h}_k) \bigr) \\ &\quad=T_h \bigl((\mathbf{v}_{k-1},\mathbf{h}_{k-1}) , (\mathbf{v}_{k-1},\mathbf{h}_k) \bigr)\cdot T_v \bigl((\mathbf{v}_{k-1},\mathbf{h}_k), (\mathbf{v}_k,\mathbf{h}_k) \bigr)> 0. \end{aligned}$$
(13)

Hence, one can get from (v k−1,h k−1) to (v k ,h k ) in one step of the transition operator T.

Step (ii): We now consider a similar sequence \(((\mathbf{v}_{k}, \mathbf{h}_{k}) )_{k \in\mathbb{N}}\) with v 0:=v, h 0:=h and v k :=v max(h k−1) and h k :=h max(v k ), for k>0. Again, there exists \(n \in\mathbb{N} \), so that \((\mathbf{v}_{n},\mathbf{h}_{n}) = (\mathbf{v}_{n+i}, \mathbf{h}_{n+i}) \in\mathcal{M}, i \in\mathbb{N}\). From equations (9) and (11) it follows that ∀k>0:

$$\begin{aligned} &T \bigl((\mathbf{v}_{k},\mathbf{h}_{k}) ,(\mathbf{v}_{k-1}, \mathbf{h}_{k-1}) \bigr) \\ &\quad=T_h \bigl((\mathbf{v}_{k},\mathbf{h}_{k}) ,(\mathbf{v}_{k},\mathbf{h}_{k-1}) \bigr) \cdot T_v \bigl((\mathbf{v}_{k},\mathbf{h}_{k-1}),(\mathbf{v}_{k-1},\mathbf{h}_{k-1}) \bigr) > 0. \end{aligned}$$
(14)

That is, one can get from (v k+1,h k+1) to (v k ,h k ) in one step of the transition operator T and follow the sequence backwards from \((\mathbf{v}_{n},\mathbf{h}_{n}) \in\mathcal{M}\) to (v,h).

Step (iii): From equations (8)–(11) it follows directly that a transition between two arbitrary points in \(\mathcal{M}\) is always possible. □

Showing aperiodicity is straightforward:

Theorem 2

The Markov chain induced by T is aperiodic.

Proof

For every state \((\mathbf{v}^*,\mathbf{h}^*)\) in the nonempty set \(\mathcal{M}\) it holds that

$$ T \bigl(\bigl(\mathbf{v}^*,\mathbf{h}^*\bigr),\bigl(\mathbf{v}^*,\mathbf{h}^*\bigr) \bigr)>0 , $$
(15)

so the state is aperiodic. This means that the whole Markov chain is aperiodic, since it is irreducible (see, e.g., Brémaud 1999). □

Together with the invariance of p discussed above, Theorems 1 and 2 show that the Markov chain induced by the operator T has p as its equilibrium distribution, i.e., the Markov chain is ergodic with stationary distribution p.

4 Experiments

First, we experimentally compare the mixing behavior of the flip-the-state method with Gibbs sampling by analyzing T and G for random RBMs. Then, we study the effects of replacing G by T in different RBM learning algorithms applied to benchmark problems. After that, the operators are used to sample sequences from trained RBMs. The autocorrelation times and the number of class changes reflecting mode changes are compared. Training and sampling the RBMs was implemented using the open-source machine learning library Shark (Igel et al. 2008).

4.1 Analysis of the convergence rate

The convergence speed of an ergodic, homogeneous Markov chain with finite state space is governed by the second largest eigenvalue modulus (SLEM). This is a direct consequence of the Perron-Frobenius theorem. Note that the SLEM computation considers absolute values, in contrast to the statements by Liu (1996) referring to the signed eigenvalues. We calculated the SLEM for transition matrices of Gibbs sampling and the new transition operator for small, randomly generated RBMs by solving the eigenvector equation of the resulting transition matrices G and T. To handle the computational complexity we had to restrict our considerations to RBMs with only 2, 3, and 4 visible and hidden neurons, respectively. The weights of these RBMs were drawn randomly and uniformly from [−c;c], with c∈{1,…,10}, and bias parameters were set to zero. For each value of c we generated 100 RBMs and compared the SLEMs of G and T.
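Given the full transition matrix of such a small RBM (a row-stochastic matrix of size \(2^{m+n}\times 2^{m+n}\), computed elsewhere), the SLEM itself is straightforward to obtain; a minimal sketch:

```python
import numpy as np

def slem(A):
    # second largest eigenvalue modulus of a row-stochastic transition matrix A
    moduli = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
    return moduli[1]          # moduli[0] is the Perron eigenvalue 1
```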

4.2 Log-likelihood evolution during training

We study the evolution of the exact log-likelihood (which is tractable if either the number of hidden or the number of visible units is small enough) during gradient-based training of RBMs using CD, PCD, or PT based on samples produced by Gibbs sampling or by the flip-the-state transition operator.

We used three benchmark problems taken from the literature. Desjardins et al. (2010) consider a parametrized artificial problem, referred to as Artificial Modes in the following, for studying mixing properties. The inputs are 4×4 binary images. The observations are distributed around four equally likely basic modes, from which samples are generated by flipping pixels. The probability of flipping a pixel is given by the parameter p mut, controlling the “effective distance between each mode” (Desjardins et al. 2010). In our experiments, p mut was either 0.01 or 0.1. Furthermore, we used a 4×4 pixel version of Bars and Stripes (MacKay 2002) and finally the MNIST data set of handwritten digits.

In the small toy problems (Artificial Modes and Bars and Stripes) the number of hidden units was set to be the same as the number of visible units, i.e., n=16. For MNIST the number of hidden units was set to 10. The RBMs were initialized with weights and biases drawn from a Gaussian distribution with zero mean and standard deviation 0.01.

The models were trained on all benchmark problems using gradient ascent based on the gradient approximation of either CD or PCD with k sampling steps (which we refer to as CD k or PCD k ) or PT. Note that Contrastive Divergence learning with k=1 does not seem to be a reasonable scenario for applying the new operator. The performance of PT depends on the number t of tempered chains and on the number of sampling steps k carried out in each tempered chain before swapping samples between chains. We call PT with t temperatures and k sampling steps t-PT k . The inverse temperatures were distributed uniformly between 0 and 1. Samples for each learning method were obtained by either G or T.

We performed mini-batch learning with a batch size of 100 training examples in the case of MNIST and Artificial Modes and batch learning for Bars and Stripes. The number of samples used for the gradient approximation was set to be equal to the number of training examples in a (mini) batch. We tested different learning rates η∈{0.01,0.05,0.1} and used neither weight decay nor a momentum parameter. All experiments were run for a length of 20000 update steps and repeated 25 times. We calculated the log-likelihood every 100th step of training. In the following, all reported log-likelihood values are averaged over the training examples.

4.3 Autocorrelation analysis

To measure the mixing properties of the operators on larger RBMs, we performed an autocorrelation analysis.

We estimated the autocorrelation function

$$ R(\Delta t) = \frac{\mathbb{E}[\mathcal{E}(\mathbf{V}_{k}, \mathbf{H}_{k})\mathcal{E}(\mathbf{V}_{k+\Delta t}, \mathbf{H}_{k+\Delta t})]}{\sigma _{\mathcal{E}(\mathbf{V}_{k}, \mathbf{H}_{k})}\sigma_{\mathcal{E}(\mathbf{V}_{k+\Delta t}, \mathbf{H}_{k+\Delta t})}}. $$
(16)

The random variables V k and H k are the states of the visible and hidden variables after running the chain for k steps. The expectation \(\mathbb{E}\) is over all k>0, and the standard deviation of a random variable X is denoted by σ X . The autocorrelation function is always defined with respect to a specific function on the state space. Here the energy function \(\mathcal{E}\) is a natural choice.

The autocorrelation time is linked to the asymptotic variance of an estimator based on averaging over consecutive samples from a Markov chain. It is defined as

$$ \tau= \sum_{\Delta t=-\infty}^{\infty} R(\Delta t) . $$
(17)

An estimator based on lτ consecutive samples from a Markov chain has approximately the same variance as an estimator based on l independent samples (see, e.g., Neal 1993). In this sense, τ consecutive samples are equivalent to one independent sample.
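A simple way to estimate τ from a recorded energy series is to truncate the sum in Eq. (17); this sketch (our illustration, using a fixed cut-off instead of the AR-model fit described below) centers the series, which corresponds to the usual convention that R(Δt) decays to zero, and assumes max_lag is much smaller than the series length:

```python
import numpy as np

def autocorrelation_time(energies, max_lag=1000):
    e = np.asarray(energies) - np.mean(energies)
    var = np.var(e)
    tau = 1.0                                   # the Delta t = 0 term of the sum
    for dt in range(1, max_lag):
        r = np.mean(e[:-dt] * e[dt:]) / var     # empirical R(Delta t)
        tau += 2.0 * r                          # R(Delta t) = R(-Delta t)
    return tau
```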

For the autocorrelation experiments we trained 25 RBMs on each of the previously mentioned benchmark problems with 20-PT10. In addition to the RBMs with 10 hidden units we trained 24 RBMs with 500 hidden neurons on MNIST for 2000 parameter updates. To estimate the autocorrelations we sampled these RBMs for one million steps using G and T, respectively. We followed the recommendations by Thompson (2010) and, in addition to calculating and plotting the autocorrelations directly, fitted AR models to the time series to estimate the autocorrelation time using the software package SamplerCompare (Thompson 2011).

4.4 Frequency of class changes

To assess the ability of the two operators to mix between different modes, we observed the class changes in sample sequences, similar to the experiments by Bengio et al. (2013). We trained 25 RBMs with CD-5 on Artificial Modes with p mut=0.01 and p mut=0.1. After training, we sampled from the RBMs using either T or G as transition operator and analyzed how often subsequent samples belong to different classes. We considered four classes, each defined by one of the four basic modes used to generate the dataset. A sample belongs to the same class as the mode to which it has the smallest Hamming distance. Ambiguous samples, which could not be assigned to a single class because they were equally close to at least two of the modes, were discarded. In one experimental setting, all trained RBMs were initialized 1000 times with samples drawn randomly from the training distribution (representing the starting distribution of CD learning), and the number of sampling steps before the first class change was measured. In a second setting, for each RBM one chain was started with all visible units set to one and run for 10000 steps. Afterwards, the number of class changes was counted.
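The bookkeeping for this analysis can be sketched as follows (a hypothetical helper under our own conventions, with modes holding the four basic 4×4 patterns as binary vectors of shape (4, 16)):

```python
import numpy as np

def count_class_changes(samples, modes):
    changes, last = 0, None
    for v in samples:
        d = np.sum(v != modes, axis=1)      # Hamming distance to each basic mode
        if np.sum(d == d.min()) > 1:        # ambiguous sample: discard
            continue
        current = int(np.argmin(d))
        if last is not None and current != last:
            changes += 1
        last = current
    return changes
```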

5 Results and discussion

5.1 Analysis of the convergence rate

The upper plot in Fig. 3 shows the fraction of RBMs (out of 100) for which the corresponding transition operator T has a smaller SLEM than the Gibbs operator G (and therefore T promises a faster mixing Markov chain than G) as a function of the value of c, which bounds the absolute values of the weights. If all the weights are equal to zero, Gibbs sampling is always better, but the larger the weights get, the more often T has a better mixing rate. This effect is more pronounced the more neurons the RBM has, which suggests that the results of our analysis can be transferred to real-world RBMs.

Fig. 3
figure 3

The upper figure compares the mixing rates for 2×2 RBMs (black), 3×3 RBMs (red, dashed) and 4×4 RBMs (blue, dotted). The lower figure depicts the learning curves for CD5 on Bars and Stripes with learning rate 0.05 using G (black) or T (red, dashed). The inset shows the difference between the two and is positive if the red curve is higher. The dashed horizontal line indicates the maximum possible value of the average log-likelihood (Color figure online)

In the hypothetical case that all variables are independent (corresponding to an RBM where all weights are zero), Gibbs sampling is optimal and converges in a single step. With the flip-the-state operator, however, the probability of a neuron being in a certain state would oscillate and converge exponentially with factor \(\frac{1- p(v_{i}^{*})}{p(v_{i}^{*})}\) (i.e., the SLEM of the base transition matrix in this case) to the equilibrium distribution. As the variables become more and more dependent, the behavior of Gibbs sampling is no longer optimal and the Gibbs chain converges more slowly than the Markov chain induced by T. Figure 3 directly supports our claim that in this relevant scenario changing states more frequently by the flip-the-state method can improve mixing.

5.2 Log-likelihood evolution during training

To summarize all trials of one experiment into a single value we calculated the maximum log-likelihood value reached during each run and finally calculated the median over all runs. The resulting maximum log-likelihood values for different experimental settings for learning the Bars and Stripes and the MNIST data set with CD and PT are shown in Table 1. Similar results were found for PCD and for experiments on Artificial Modes, see appendix. For most experimental settings, the RBMs reach statistically significantly higher likelihood values during training with the new transition operator (Wilcoxon signed-rank test, p<0.05).

Table 1 Median maximum log-likelihood values on Bars and Stripes (top) and MNIST (bottom). Significant differences are marked with a star

If we examine the evolution of the likelihood values over time (as shown, e.g., in the lower plot of Fig. 3) more closely, we see that the proposed transition operator is better at the end of training, but Gibbs sampling is actually slightly better at the beginning, when the weights are still close to their small initialization. Learning curves as in Fig. 3 also show that if divergence occurs with Gibbs sampling (Fischer and Igel 2010), it is slowed down, but not completely avoided, with the new transition operator.

It is not surprising that Gibbs sampling mixes better at the beginning of the training, because the variables are almost independent when the weights are still close to their initial values near zero. Still, the results confirm that the proposed transition operator mixes better in the difficult phase of RBM training and that the faster mixing helps reaching better learning results.

The results suggest that it may be reasonable to mix the two operators. Either one could start with G and switch to T as the weights grow larger, or one could softly blend between the basic operators and consider \(\mathbf{T}_{i}^{\alpha} = \alpha \mathbf{T}_{i} + (1-\alpha) \mathbf{G}_{i}\), α∈[0,1].
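Such a blend is easy to realize by choosing between the two base updates at random; a sketch using the flip_the_state_update helper from the sketch in Sect. 3 (the Gibbs branch simply resamples from the conditional):

```python
def blended_update(p1, current, alpha, rng):
    # T_i^alpha = alpha * T_i + (1 - alpha) * G_i as a mixture of the two base kernels
    if rng.random() < alpha:
        return flip_the_state_update(p1, current, rng)   # flip-the-state branch (Sect. 3)
    return int(rng.random() < p1)                        # Gibbs branch: sample from p(V_i = 1 | h)
```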

5.3 Autocorrelation analysis

The autocorrelation analysis revealed that sampling with the flip-the-state operator leads to shorter autocorrelation times on the considered benchmark problems, see Table 2 and Fig. 4. For example, an RBM trained on MNIST with 500 hidden neurons needed on average 17.28 % fewer sampling steps to achieve the same variance of the estimate if T is used instead of G, without overhead in computation time or implementation complexity. The results with n=500 demonstrate that our previous findings carry over to larger RBMs.

Fig. 4
figure 4

Autocorrelation function R(Δt) for RBMs with 500 hidden neurons trained on MNIST based on 24 trials, sampled for \(10^6\) steps each. The dotted line corresponds to T and the solid one to G (Color figure online)

Table 2 Mean autocorrelation times τ G and τ T for single Markov chains using the Gibbs sampler and the flip-the-state operator. The last column shows the gain defined as \(1- \frac{\tau_{\mathbf{T}}}{\tau _{\mathbf{G}}}\)

5.4 Frequency of class changes

The numbers of class changes observed in sequences of 10000 samples produced by G and T, starting with all visible units set to one, are given in Table 3. Table 4 shows the number of samples before the first class change when initializing the Markov chain with samples randomly drawn from the training distribution. Markov chains based on T led to more and faster class changes than chains using Gibbs sampling. As the modes in the training set get more distinct (comparing p mut=0.1 to p mut=0.01), class changes get less frequent and more sampling steps are needed before a class change occurs. Nevertheless, T is superior to G even in this setting.

Table 3 Frequencies of class changes for the Gibbs sampler and the flip-the-state operator in sequences of 10000 samples (medians and quantiles over samples from 25 RBMs)
Table 4 Number of samples before the first class change when starting a Markov chain with samples from the training distribution (medians and quantiles over samples from 25 RBMs)

6 Conclusions

We proposed the flip-the-state transition operator for MCMC-based training of RBMs and proved that it induces a converging Markov chain. Large weights lead to slow mixing Gibbs chains that can severely harm RBM training. In this scenario, the proposed flip-the-state method increases the mixing rate compared to Gibbs sampling. The way of sampling is generally applicable in the sense that it can be employed in every learning method for binary RBMs relying on Gibbs sampling, for example Contrastive Divergence learning and its variants as well as Parallel Tempering. As empirically shown, the better mixing indeed leads to better learning results in practice. As the flip-the-state sampling does not introduce computational overhead, we see no reason to stick to standard Gibbs sampling.