1 Introduction

Research on mobile robots with visual perception is very active and mainly focuses on SLAM [1], visual odometry (VO) [2], visual servoing [3], and so on. In particular, visual servoing is a popular topic that includes visual stabilization [5, 6] and visual tracking [7, 8]. In this paper, we address the problem of eye-in-hand visual stabilization for mobile robots, i.e., driving the mobile robot from an initial pose to a desired one using real-time image feedback [9, 10].

There are several challenges in the visual stabilization of mobile robots. The nonholonomic constraint is a key difficulty: by Brockett's well-known necessary condition [11], no continuous time-invariant feedback can stabilize a mobile robot. For a monocular camera-robot system, the image depth is unknown, which further complicates the controller design [12, 13]. In addition, the limited camera field of view [14], uncalibrated camera extrinsic and intrinsic parameters [15, 16], and state and input saturation [17] are also critical issues in the study of visual servoing for mobile robots. In particular, control input saturation is a very realistic problem from a practical point of view: the actuators of a mobile robot have maximum output values, so the control input can never exceed this maximum level. If the control design does not account for the input saturation constraint, the visual servoing process may fail whenever the computed ideal input exceeds the actuator output limit. It is therefore necessary to ensure that the designed control input always satisfies the input saturation constraint during the whole visual servoing process.

Several works are related to the input saturation control of mobile robots. In [18], a framework for saturated stabilization and tracking control of wheeled mobile robots is proposed based on passivity. In [19], Huang et al. address global tracking and stabilization for mobile robots with input saturation at the torque level; the bounds of the control torques are expressed as functions of the design parameters and reference trajectories, so that proper parameters can be chosen to keep the control inputs within the saturation level. In [20], Chen presents a robust stabilization controller for a class of nonholonomic mobile robots with torque saturation limits, using finite-time theory and a backstepping-like method. The work in [21] designs a switching controller to handle input saturation of mobile robots, and [22] also applies a switch function in the saturated control design. The above methods are important references for the stabilization of mobile robots under saturation. However, to the best of our knowledge, no existing approach deals with the visual servoing of mobile robots subject to input saturation.

In this paper, a visual servoing approach for eye-in-hand mobile robots is proposed under velocity input saturation. Specifically, the vision-based system model of the eye-in-hand wheeled mobile robot is first established. Then, a class of continuous and bounded functions is defined. Next, based on Lyapunov techniques and the properties of these functions, a visual servoing controller is designed for mobile robots under velocity input saturation. The convergence to zero of the closed-loop system states is proven using LaSalle's invariance principle. Although the image depth of the feature points is unknown, no parameter updating law is needed, as shown in the rigorous stability proof. Thus, the main contribution of this paper is extending the stabilization method in [18] to eye-in-hand visual servoing of mobile robots in the presence of both unknown depth and velocity input saturation.

The rest of the paper is organized as follows. The vision-based system model is established in Sect. 2. The saturated velocity controller design and the closed-loop stability analysis are demonstrated in Sect. 3. Simulation results are provided in Sect. 4 to validate the effectiveness of the proposed controller. Finally, the conclusion is summarized in Sect. 5.

2 System Model Development

Figure 1 is the top view of the eye-in-hand wheeled mobile robot system. For simplicity of analysis, the mobile robot frame \(\mathcal {F}_C\) coincides with the camera frame. The \(x^c\) axis is along the camera optical axis and the \({z}^c\) axis is outwardly perpendicular to the paper. \(\mathcal {P}\) is a static point within the camera field of view. We assume that the mobile robot moves on the \({x}^c {y}^c\) plane and there is no movement along the \(z^c\) direction.

Fig. 1.
figure 1

Current and desired poses of the eye-in-hand mobile robot system

2.1 System Modeling in 3D Space

Assuming that the current pose of the mobile robot is \(\mathcal {F}_C\) and the desired one is \(\mathcal {F}_R\) as Fig. 1 shows, the purpose of the control design is to drive the mobile robot to \(\mathcal {F}_R\) from its initial pose \(\mathcal {F}_C\). The coordinate of \(\mathcal {P}\) is represented as \({\pmb P}_C=[x_C(t)~~y_C(t)~~z_C(t)]^\mathrm{T}\) in \(\mathcal {F}_C\). Then, \({\pmb P}_C\) will change with the movement of the mobile robot as a result of the motion of \(\mathcal {F}_C\). The dynamics of \({\pmb P}_C\) can be formulated as \(\dot{\pmb P}_C = - {\pmb V} - {\pmb W} \times \pmb P_C,\) where \(\pmb V=[v(t)~~0~~0]^\mathrm{T}\) and \(\pmb W = [0~~0~~w(t)]^\mathrm{T}\), with \(v(t)\in \mathbb {R}\) and \(w(t)\in \mathbb {R}\) denoting the linear velocity and angular velocity of the mobile robot, respectively. Thus, it is obtained that

$$\begin{aligned} {\dot{x}_C} = wy_C - v,~ {\dot{y}_C} = - wx_C. \end{aligned}$$
(1)

Then, rewriting (1) into the matrix form, we have

$$\begin{aligned} \left[ {\begin{array}{c} {\dot{x}_C}\\ {\dot{y}_C} \end{array}} \right] =&w\left[ {\begin{array}{cc} 0&{}1\\ { - 1}&{}0 \end{array}} \right] \left[ {\begin{array}{c} {x_C}\\ {y_C} \end{array}} \right] - \left[ {\begin{array}{cc} 1&{}0\\ 0&{}1 \end{array}} \right] \left[ {\begin{array}{c} v\\ 0 \end{array}} \right] . \end{aligned}$$
(2)

Besides, \(\theta (t) \in (-\pi ,~\pi ]\) in Fig. 1 denotes the rotational angle between \(\mathcal {F}_C\) and \(\mathcal {F}_R\). Based on the knowledge of kinematics, we have

$$\begin{aligned} \dot{\theta }= w. \end{aligned}$$
(3)

Since point \(\mathcal {P}\) can be chosen arbitrarily within the field of the camera view, without loss of generality, we can just choose the origin of \(\mathcal {F}_R\) as the static point. Therefore, we have

$$\begin{aligned} x_C(t)=t_x(t),~y_C(t)=t_y(t). \end{aligned}$$
(4)

Substituting (4) into (2), it is obtained that

$$\begin{aligned} \left[ {\begin{array}{c} {\dot{t}}_x\\ {\dot{t}}_y \end{array}} \right] =&w\left[ {\begin{array}{cc} 0&{}1\\ { - 1}&{}0 \end{array}} \right] \left[ {\begin{array}{c} {t_x}\\ {t_y} \end{array}} \right] - \left[ {\begin{array}{cc} 1&{}0\\ 0&{}1 \end{array}} \right] \left[ {\begin{array}{c} v\\ 0 \end{array}} \right] . \end{aligned}$$
(5)

In the system model (5), the translation \(t_x(t),~t_y(t)\) between the current pose and the desired one cannot be fully reconstructed from 2-D images of an unknown 3-D scene. To design the visual servoing controller, it is necessary to transform the models (3) and (5) into a form containing completely measurable state variables using pose estimation techniques. The next subsection discusses this problem in detail.

2.2 Vision-Based System Model

In visual servoing of mobile robots, real-time images of feature points are fed back so that the relative pose errors of the mobile robot in 3D space can be computed using pose estimation techniques. The control input of the mobile robot is then calculated from those pose errors, and with a proper control input the mobile robot is gradually regulated to the desired pose. In this paper, the homography matrix decomposition method is applied for pose estimation [4, 12]. Then, the rotation angle \(e_{\theta }(t)\) and the scaled translation \(e_x(t)\in \mathbb {R},~e_y(t)\in \mathbb {R}\) between \(\mathcal {F}_R\) and \(\mathcal {F}_C\) can be obtained as

$$\begin{aligned} e_{\theta }=\theta ,~~e_x = \frac{t_x}{d^*},~e_y = \frac{t_y}{d^*}, \end{aligned}$$
(6)

where \(d^*\in \mathbb {R}^{+}\) is an unknown positive constant denoting the distance between the origin of \(\mathcal {F}_R\) and the reference plane. Substituting (6) into (3) and (5), the vision-based system model of the mobile robot is established as

$$\begin{aligned} {\dot{e}}_{\theta } = w, \end{aligned}$$
(7)
$$\begin{aligned} \left[ {\begin{array}{c} {\dot{e}}_x\\ {\dot{e}}_y \end{array}} \right] =&w\left[ {\begin{array}{cc} 0&{}1\\ { - 1}&{}0 \end{array}} \right] \left[ {\begin{array}{c} {e_x}\\ {e_y} \end{array}} \right] - \frac{1}{d^*}\left[ {\begin{array}{cc} 1&{}0\\ 0&{}1 \end{array}} \right] \left[ {\begin{array}{c} v\\ 0 \end{array}} \right] . \end{aligned}$$
(8)

In the above model (7) and (8), \(e_{\theta }(t),~e_x(t)\) and \(e_y(t)\) are completely measurable, while \(d^*\) is unknown. Rewriting (7) and (8) in a simpler form, we have

$$\begin{aligned} {{\dot{e}}_{\theta }}&= w,\end{aligned}$$
(9)
$$\begin{aligned} {{\dot{e}}_y}&= - w{e_x},\end{aligned}$$
(10)
$$\begin{aligned} {{\dot{e}}_x}&= w{e_y} - \frac{1}{{{d^*}}}v. \end{aligned}$$
(11)
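As a quick sanity check, the open-loop model (9)–(11) can be sketched numerically. The following minimal Python function is illustrative only; in particular, the depth argument `d_star` stands for \(d^*\), which is unknown in practice and is fixed to an assumed value here purely for simulation.

```python
import numpy as np

def open_loop_dynamics(state, v, w, d_star):
    """Right-hand side of the vision-based model (9)-(11).

    state = (e_theta, e_y, e_x); d_star stands for the depth d*,
    which is unknown in practice and assumed here only for simulation.
    """
    e_theta, e_y, e_x = state
    return np.array([w,                      # (9):  de_theta/dt = w
                     -w * e_x,               # (10): de_y/dt = -w e_x
                     w * e_y - v / d_star])  # (11): de_x/dt = w e_y - v/d*

# With zero inputs the error states do not change.
rates = open_loop_dynamics((0.3, 0.2, -0.1), v=0.0, w=0.0, d_star=2.0)
```

Note that only the \(\dot{e}_x\) channel depends on the unknown depth, which is why the controller designed below can avoid estimating it.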

In addition, the actual linear and angular velocities of the mobile robot are subject to the following saturation constraints:

$$\begin{aligned} |v(t)|\le v_{max},~ |w(t)|\le w_{max} \end{aligned}$$
(12)

where \(v_{max},w_{max}\) are two positive constants denoting the known saturation levels of the linear and angular velocities, respectively. The goal is to design a linear velocity controller v(t) and an angular velocity controller w(t) under the constraints (12) to drive the mobile robot from the initial pose to the desired one.

3 Saturated Velocity Control Design

This section shows that, by applying a class of continuous and bounded functions, an eye-in-hand visual stabilization controller for mobile robots under velocity input saturation can be obtained in the presence of unknown depth information. The asymptotic convergence to zero of the closed-loop system states is proven using Lyapunov techniques and LaSalle's invariance principle.

Firstly, a set of continuous and bounded functions is defined as follows:

$$\begin{aligned} \begin{aligned} \varPhi _r =\left\{ \phi : \mathbb {R}\rightarrow \mathbb {R}~|~{\phi }~\text {is continuous},~ -r\le \phi (x)\le r \right. \\ \left. \forall x\in \mathbb {R},~ x\phi (x)>0~\text {for all}~x\ne 0 \right\} \end{aligned} \end{aligned}$$
(13)

Specific examples of the function \(\phi (x)\) in \(\varPhi _r\) include

$$\begin{aligned} \phi (x)=\frac{2rx}{1+x^2},~\phi (x)=r\tanh (x). \end{aligned}$$
(14)

Generally, \(\phi (x)=r\tanh (x)\) is the most commonly used choice for saturated controller design in previous works [18, 20, 23].
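Both candidates in (14) can be checked numerically for membership in \(\varPhi _r\): each is continuous, stays within \([-r, r]\), and satisfies \(x\phi (x)>0\) for \(x\ne 0\). A small Python sketch (the bound `r = 0.8` is an arbitrary example value):

```python
import numpy as np

r = 0.8  # example saturation bound

def phi_rational(x):
    # phi(x) = 2 r x / (1 + x^2); its extrema are +/- r, attained at x = +/- 1
    return 2.0 * r * x / (1.0 + x * x)

def phi_tanh(x):
    # phi(x) = r * tanh(x); bounded by r and strictly sign-preserving
    return r * np.tanh(x)

xs = np.linspace(-10.0, 10.0, 2001)
for phi in (phi_rational, phi_tanh):
    vals = phi(xs)
    assert np.all(np.abs(vals) <= r + 1e-12)       # boundedness: |phi| <= r
    nz = xs != 0.0
    assert np.all(xs[nz] * phi(xs[nz]) > 0.0)      # sign condition
```

The sign condition \(x\phi (x)>0\) is exactly what makes the Lyapunov derivative in the next subsection negative away from the origin.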

Based on this, the linear velocity controller can be designed for the \((e_x, e_y)\)-subsystem. Define a Lyapunov candidate function

$$\begin{aligned} V=\frac{1}{2}d^*{e_x}^2+\frac{1}{2}d^*{e_y}^2. \end{aligned}$$
(15)

Taking the time derivative of V and substituting (10) and (11) into it, we have

$$\begin{aligned} \dot{V}=-e_x v. \end{aligned}$$
(16)

Motivated by the properties of the functions in (13), the controller v(t) is defined as

$$\begin{aligned} v(t)=k_1\tanh (e_x), \end{aligned}$$
(17)

where \(k_1\) is a positive constant and \(k_1\le v_{max}\), such that

$$\begin{aligned} \dot{V}=-k_1e_x\tanh (e_x)\le 0. \end{aligned}$$
(18)

Thus, \(e_x\) and \(e_y\) are bounded. Moreover, \(|v(t)|\le k_1\le v_{max}\), so the linear velocity constraint in (12) is always satisfied.
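The two properties just stated, \(\dot{V}\le 0\) from (18) and \(|v|\le k_1\), can be verified numerically. The sketch below uses an arbitrary example gain satisfying \(k_1\le v_{max}\):

```python
import numpy as np

k1 = 1.2  # example gain, with k1 <= v_max assumed

def v_control(e_x):
    """Saturated linear velocity controller (17)."""
    return k1 * np.tanh(e_x)

def V_dot(e_x):
    """dV/dt = -e_x * v, i.e. (16) with (17) substituted, giving (18)."""
    return -e_x * v_control(e_x)

e_x_samples = np.linspace(-5.0, 5.0, 1001)
assert np.all(V_dot(e_x_samples) <= 0.0)              # Lyapunov decrease
assert np.all(np.abs(v_control(e_x_samples)) <= k1)   # |v| <= k1
```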

For the \(e_{\theta }\)-subsystem, the angular controller w(t) can be designed as

$$\begin{aligned} w(t)=-k_2\tanh (e_\theta )+k_3\tanh (e_y)\sin t \end{aligned}$$
(19)

where \(k_2, k_3>0\). To respect the saturation constraint on the angular velocity, the gains should also satisfy \(k_2+k_3\le w_{max}\) and \(k_2>k_3\), which guarantees \(|w(t)|\le k_2+k_3\le w_{max}\).
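The bound \(|w|\le k_2+k_3\) follows from the triangle inequality, since each \(\tanh\) factor and \(\sin t\) are bounded by one. A brief numerical check (gain and saturation values are arbitrary examples satisfying the stated conditions):

```python
import numpy as np

k2, k3 = 0.7, 0.2   # example gains with k2 > k3
w_max = 1.0         # assumed saturation level, so that k2 + k3 <= w_max

def w_control(e_theta, e_y, t):
    """Saturated angular velocity controller (19)."""
    return -k2 * np.tanh(e_theta) + k3 * np.tanh(e_y) * np.sin(t)

# Sample the controller over random states and times; the triangle
# inequality gives |w| <= k2 + k3 regardless of the arguments.
rng = np.random.default_rng(0)
samples = rng.uniform(-10.0, 10.0, size=(1000, 3))
w_vals = np.array([w_control(a, b, t) for a, b, t in samples])
assert np.all(np.abs(w_vals) <= k2 + k3)
```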

Substituting (19) into (9), we have

$$\begin{aligned} {\dot{e}}_\theta =-k_2\tanh (e_\theta )+k_3\tanh (e_y)\sin t. \end{aligned}$$
(20)

In the closed-loop angular subsystem (20), the first term represents an asymptotically stable system and the last term can be seen as an additive disturbance. Therefore, if \(\lim \limits _{t \rightarrow \infty }{e_y}=0\), it follows that \(\lim \limits _{t \rightarrow \infty }{e_\theta }=0\).

Next, the asymptotic convergence analysis of the closed-loop system states is given.

Theorem 1

Consider the open-loop system (9)–(11) with the control laws (17) and (19). The system states uniformly asymptotically converge to zero, and the inputs satisfy the saturation constraints (12).

Proof:

By LaSalle's invariance principle, any bounded trajectory converges to the largest invariant set E. If E contains only the equilibrium point \(x=0\), then \(x=0\) is asymptotically stable.

Then, substituting (17) and (19) into the open-loop model (9)–(11), it is obtained that

$$\begin{aligned} {\dot{e}}_\theta =&-k_2\tanh (e_\theta )+k_3\tanh (e_y)\sin t, \nonumber \\ {\dot{e}}_y=&[k_2\tanh (e_\theta )-k_3\tanh (e_y)\sin t]e_x, \nonumber \\ {\dot{e}}_x=&[-k_2\tanh (e_\theta )+k_3\tanh (e_y)\sin t]e_y-\frac{1}{d^*}k_1\tanh (e_x). \end{aligned}$$
(21)

For the \((e_x, e_y)\)-subsystem, let \(\dot{V}=0\). Then, we have

$$\begin{aligned} -k_1e_x\tanh (e_x)=0. \end{aligned}$$
(22)

Thus, it is obtained that

$$\begin{aligned} e_x=0. \end{aligned}$$
(23)

We claim that

$$\begin{aligned} E=\left\{ (t,e_x,e_y,e_\theta )\in S^1 \times \mathbb {R}^3~|~e_x=e_y=0 \right\} . \end{aligned}$$
(24)

The above claim can be proven by contradiction. Suppose that \(E=\left\{ (t,e_x,e_y,e_\theta )\in S^1 \times \mathbb {R}^3~|~e_x=e_y=0 \right\} \) is not the largest invariant set. Then there exists a trajectory \((t, e_x(t), e_y(t), e_\theta (t))\) with \(e_x(t)=0~\forall t \ge 0\) but \(e_y(t) \ne 0\) for each t in some open subset I of \([0, \infty )\). Hence, \(e_x \equiv 0\) and \(\dot{e}_x \equiv 0\). From the second equation of (21), \({\dot{e}}_y\equiv 0\), so \(e_y\) is a nonzero constant \(e_y^*\). Moreover, the third equation of (21) yields

$$\begin{aligned} {\dot{e}}_x-{\dot{e}}_\theta e_y=-\frac{1}{d^*}k_1\tanh (e_x)\equiv 0. \end{aligned}$$
(25)

Because \(\dot{e}_x \equiv 0\), it follows that \({\dot{e}}_\theta e_y={\dot{e}}_\theta e_y^*\equiv 0\), which implies \({\dot{e}}_\theta \equiv 0\) and thus \(e_\theta \equiv e_\theta ^*\) for some constant \(e_\theta ^*\). Substituting \(e_y=e_y^*,~e_\theta =e_\theta ^*\) into (21), we have

$$\begin{aligned} -k_2\tanh (e_\theta ^*)+k_3\tanh (e_y^*)\sin t\equiv 0. \end{aligned}$$
(26)

Since \(e_y^*\ne 0\), the left-hand side of (26) is time-varying and cannot be identically zero, which leads to a contradiction. Consequently, the set \(E=\left\{ (t,e_x,e_y,e_\theta )\in S^1 \times \mathbb {R}^3~|~e_x=e_y=0 \right\} \) is the largest invariant set. That is,

$$\begin{aligned} \lim \limits _{t \rightarrow \infty }{e_x}=0,~\lim \limits _{t \rightarrow \infty }{e_y}=0. \end{aligned}$$
(27)

Combining (20), it can be concluded that

$$\begin{aligned} \lim \limits _{t \rightarrow \infty }{e_\theta }=0. \end{aligned}$$
(28)

Thus, the proof of the asymptotic stability of the closed-loop system is completed. It is concluded that the mobile robot can be asymptotically stabilized to the desired pose under the controllers (17) and (19), and the control inputs always satisfy the input saturation constraints (Fig. 2).    \(\blacksquare \)

Fig. 2.
figure 2

The scene of visual servoing for mobile robots

Remark:

In the vision-based open-loop system (9)–(11), although the image depth \(d^*\) is unknown, the proposed saturated velocity controller is independent of it and only contains measurable system states, as shown in the control design process and the stability analysis. Thus, a parameter updating law for \(d^*\) is unnecessary and the complexity of the saturated controller is greatly reduced.
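The behavior established by Theorem 1 can be illustrated by integrating the closed-loop system (21) numerically. The sketch below uses a simple forward-Euler loop; the gains, the depth `d_star`, and the initial error are illustrative assumptions (the controller itself never uses `d_star`, which only enters the simulated plant):

```python
import numpy as np

# Illustrative values: gains satisfy k1 <= v_max, k2 + k3 <= w_max, k2 > k3;
# d_star stands for the unknown depth d* and is assumed only for simulation.
k1, k2, k3 = 1.0, 0.7, 0.2
d_star = 2.0

def closed_loop(state, t):
    """Right-hand side of the closed-loop system (21), plus the inputs."""
    e_theta, e_y, e_x = state
    w = -k2 * np.tanh(e_theta) + k3 * np.tanh(e_y) * np.sin(t)
    v = k1 * np.tanh(e_x)
    deriv = np.array([w, -w * e_x, w * e_y - v / d_star])
    return deriv, v, w

def lyapunov(state):
    """V = 0.5 d* (e_x^2 + e_y^2), as in (15)."""
    _, e_y, e_x = state
    return 0.5 * d_star * (e_x ** 2 + e_y ** 2)

# Forward-Euler integration from an illustrative initial error.
dt, T = 0.002, 100.0
state = np.array([0.5, 0.4, -1.5])   # (e_theta, e_y, e_x)
V0 = lyapunov(state)
max_v = max_w = 0.0
t = 0.0
while t < T:
    deriv, v, w = closed_loop(state, t)
    max_v, max_w = max(max_v, abs(v)), max(max_w, abs(w))
    state = state + dt * deriv
    t += dt
```

Along the run, the Lyapunov function decreases as predicted by (18), and the recorded input extrema confirm that the saturation bounds \(|v|\le k_1\) and \(|w|\le k_2+k_3\) are never violated.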

Fig. 3.
figure 3

States of the vision based closed-loop system

Fig. 4.
figure 4

Linear and angular input velocities of the mobile robot

4 Simulation Results

In this section, simulation results are provided to validate the effectiveness of the proposed eye-in-hand visual servoing controller under the saturation constraints. A MATLAB/Simulink model is used to simulate the visual servoing process of the nonholonomic mobile robot. A monocular camera model is adopted and the image size is assumed to be \(960 \times 540\) pixels. The linear and angular velocity saturation levels are set as \(v_{max}=1.5\) m/s and \(w_{max}=1\) rad/s, respectively. The initial pose of the mobile robot is (−1.5 m, 0.37 m, 31\(^{\circ }\)) and the desired one is (0, 0, 0). The control parameters are chosen as \(k_1=1.2, k_2=0.7, k_3=0.2\).
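Before running the simulation, the chosen parameters can be checked directly against the design conditions from Sect. 3, as in the following short sketch:

```python
# Saturation levels and gains from the simulation setup above.
v_max, w_max = 1.5, 1.0
k1, k2, k3 = 1.2, 0.7, 0.2

# Conditions required by the saturated control design in Sect. 3.
assert k1 <= v_max          # ensures |v| = k1*|tanh(e_x)| <= v_max
assert k2 + k3 <= w_max     # ensures |w| <= k2 + k3 <= w_max
assert k2 > k3              # required gain ordering
```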

Figure 3 shows the evolution of the closed-loop system states. It can be seen that the scaled translation errors \(e_x\), \(e_y\) and the rotation error \(e_\theta \) all asymptotically converge to zero, which implies that the mobile robot is successfully driven from the initial pose to the desired one. The velocity control inputs are illustrated in Fig. 4; both the linear and the angular velocity inputs of the mobile robot satisfy the saturation constraints. Figure 5 displays the trajectories of four feature points in the image space, where the stars and the circular points denote the desired and initial images, respectively. The four points move along their trajectories with the movement of the mobile robot and finally coincide with the desired image, so the mobile robot achieves the pose regulation. Figure 6 shows the path of the mobile robot during the visual servoing process, which intuitively demonstrates that the mobile robot is stabilized to the desired pose.

Fig. 5.
figure 5

The trajectories of the feature points in the image space

Fig. 6.
figure 6

The path of the mobile robot

In summary, the simulation results verify that the proposed controller (17), (19) is effective for the visual stabilization of mobile robots with the velocity input saturation.

5 Conclusion

In this paper, a saturated eye-in-hand visual servoing controller is proposed for nonholonomic mobile robots with unknown image depth. A class of continuous and bounded functions is applied in the velocity controller design, and the asymptotic stability of the closed-loop system is proven using Lyapunov techniques and LaSalle's invariance principle. Simulation results show the good performance of the controller. In future work, we will take the dynamics of the mobile robot into consideration and design a saturated visual servoing controller at the acceleration level.