Abstract
This paper addresses the eye-in-hand visual servoing problem of mobile robots with velocity input saturation. A class of continuous and bounded functions is applied for the saturated visual servoing control design. The asymptotic convergence of the pose errors to zero is proven using Lyapunov techniques and LaSalle's invariance principle. Simulation results show that the proposed controller stabilizes the mobile robot to the desired pose under the velocity saturation constraints.
B. Li—This work is supported in part by the National Natural Science Foundation of China under Grants 61573195 and 61603271.
1 Introduction
The research on mobile robots with visual perception is very active and mainly focuses on simultaneous localization and mapping (SLAM) [1], visual odometry (VO) [2], visual servoing [3], and so on. In particular, visual servoing is a hot topic and includes visual stabilization [5, 6] and visual tracking [7, 8]. In this paper, we address the problem of eye-in-hand visual stabilization for mobile robots, i.e., driving the mobile robot from an initial pose to a desired one using real-time image feedback [9, 10].
There are several challenges in visual stabilization for mobile robots. The nonholonomic constraint is a key difficulty: according to the famous Brockett's necessary condition [11], there is no continuous, time-invariant control law that stabilizes a mobile robot. For the monocular camera-robot system, the image depth is unknown, which complicates the controller design [12, 13]. In addition, the limited camera field of view [14], uncalibrated camera extrinsic and intrinsic parameters [15, 16], and state and input saturation [17] are also critical issues in the study of visual servoing for mobile robots. In particular, control input saturation is a quite realistic problem from a practical point of view, because the actuators of a mobile robot always have maximum output values, so the control input cannot exceed this maximum level. If the control design takes no consideration of the input saturation constraint, the visual servoing process may fail when the computed ideal input exceeds the actuator output limit. Therefore, it is necessary to ensure that the designed control input always satisfies the saturation constraint during the whole visual servoing process.
Several works are related to the input saturation control of mobile robots. In [18], a framework of saturated stabilization and tracking control for wheeled mobile robots is proposed based on passivity. In [19], Huang et al. address global tracking and stabilization of mobile robots with input saturation at the torque level. Specifically, the bounds of the control torques can be expressed as functions of the design parameters and reference trajectories, so proper parameters can be chosen to keep the control inputs within the saturation level. In [20], Chen presents a robust stabilization controller for a class of nonholonomic mobile robots with torque saturation limits, using finite-time theory and a backstepping-like method. The work in [21] designs a switching controller to solve the input saturation problem of mobile robots, and the work in [22] also applies a switching function for saturated control design. The above-mentioned methods are important references for solving the stabilization problem of mobile robots. However, no approach has been proposed to deal with the visual servoing of mobile robots with input saturation.
In this paper, a visual servoing approach for eye-in-hand mobile robots with velocity input saturation is proposed. Specifically, the vision-based system model for eye-in-hand wheeled mobile robots is first established. Then, a class of continuous and bounded functions is defined. Next, based on Lyapunov techniques and the properties of those functions, the visual servoing controller is designed for mobile robots under velocity input saturation. The convergence to zero of the closed-loop system states is proven using LaSalle's invariance principle. Although the image depth of the feature points is unknown, there is no need to design a parameter updating law, as shown in the rigorous stability proof. Thus, the main contribution of this paper is extending the stabilization method in [18] to the problem of eye-in-hand visual servoing for mobile robots in the presence of both unknown depth and velocity input saturation.
The rest of the paper is organized as follows. The vision-based system model is established in Sect. 2. The saturated velocity controller design and the closed-loop stability analysis are demonstrated in Sect. 3. Simulation results are provided in Sect. 4 to validate the effectiveness of the proposed controller. Finally, the conclusion is summarized in Sect. 5.
2 System Model Development
Figure 1 is the top view of the eye-in-hand wheeled mobile robot system. The mobile robot frame \(\mathcal {F}_C\) coincides with the camera frame for simplicity of analysis. The \(x^c\) axis is along the camera optical axis and the \({z}^c\) axis is outwardly perpendicular to the paper. \(\mathcal {P}\) is a static point within the camera field of view. We assume that the mobile robot moves on the \({x}^c {y}^c\) plane and there is no movement along the \(z^c\) direction.
2.1 System Modeling in 3D Space
Assuming that the current pose of the mobile robot is \(\mathcal {F}_C\) and the desired one is \(\mathcal {F}_R\) as Fig. 1 shows, the purpose of the control design is to drive the mobile robot to \(\mathcal {F}_R\) from its initial pose \(\mathcal {F}_C\). The coordinate of \(\mathcal {P}\) is represented as \({\pmb P}_C=[x_C(t)~~y_C(t)~~z_C(t)]^\mathrm{T}\) in \(\mathcal {F}_C\). Then, \({\pmb P}_C\) will change with the movement of the mobile robot as a result of the motion of \(\mathcal {F}_C\). The dynamics of \({\pmb P}_C\) can be formulated as \(\dot{\pmb P}_C = - {\pmb V} - {\pmb W} \times \pmb P_C,\) where \(\pmb V=[v(t)~~0~~0]^\mathrm{T}\) and \(\pmb W = [0~~0~~w(t)]^\mathrm{T}\), with \(v(t)\in \mathbb {R}\) and \(w(t)\in \mathbb {R}\) denoting the linear velocity and angular velocity of the mobile robot, respectively. Thus, it is obtained that
\[\dot{x}_C = -v + w\,y_C, \quad \dot{y}_C = -w\,x_C, \quad \dot{z}_C = 0. \tag{1}\]
Then, rewriting (1) into matrix form, we have
\[\begin{bmatrix} \dot{x}_C \\ \dot{y}_C \end{bmatrix} = \begin{bmatrix} -1 & y_C \\ 0 & -x_C \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix}. \tag{2}\]
Besides, \(\theta (t) \in (-\pi ,~\pi ]\) in Fig. 1 denotes the rotational angle between \(\mathcal {F}_C\) and \(\mathcal {F}_R\). Based on the kinematics, we have
\[\dot{\theta } = -w. \tag{3}\]
Since the point \(\mathcal {P}\) can be chosen arbitrarily within the camera field of view, without loss of generality, we choose the origin of \(\mathcal {F}_R\) as the static point. Therefore, we have
\[{\pmb P}_C = [\,t_x(t)~~t_y(t)~~0\,]^\mathrm{T}, \tag{4}\]
where \(t_x(t)\) and \(t_y(t)\) denote the translation between the current pose and the desired one.
Substituting (4) into (2), it is obtained that
\[\dot{t}_x = -v + w\,t_y, \quad \dot{t}_y = -w\,t_x. \tag{5}\]
In the system model (5), the translation \(t_x(t),~t_y(t)\) between the current pose and the desired one can not be fully reconstructed through 2-D images with an unknown 3-D scene model. To be able to design the visual servoing controller, it is necessary to transform the model (3) and (5) into a form containing completely measurable state variables using pose estimation techniques. The next subsection will discuss this problem in detail.
2.2 Vision-Based System Model
In the visual servoing of mobile robots, real-time images of feature points are fed back so that the relative pose errors of the mobile robot in 3D space can be computed using pose estimation techniques. Then, the control input of the mobile robot is calculated from those pose errors. With a proper control input, the mobile robot gradually regulates to the desired pose. In this paper, the homography matrix decomposition method is applied for pose estimation [4, 12]. Then, the rotation angle \(e_{\theta }(t)\) and the scaled translation \(e_x(t)\in \mathbb {R},~e_y(t)\in \mathbb {R}\) between \(\mathcal {F}_R\) and \(\mathcal {F}_C\) can be obtained as
\[e_\theta = \theta , \quad e_x = \frac{t_x}{d^*}, \quad e_y = \frac{t_y}{d^*}, \tag{6}\]
where \(d^*\in \mathbb {R}^{+}\) is an unknown positive constant denoting the distance between the origin of \(\mathcal {F}_R\) and the reference plane. Substituting (6) into (3) and (5), the vision-based system model of the mobile robot is established as
\[\dot{e}_\theta = -w, \tag{7}\]
\[\dot{e}_x = -\frac{v}{d^*} + w\,e_y, \quad \dot{e}_y = -w\,e_x. \tag{8}\]
In the above model (7) and (8), \(e_{\theta }(t),~e_x(t)\) and \(e_y(t)\) are completely measurable, while \(d^*\) is unknown. Rewriting (7) and (8) in a simpler form, we have
\[\dot{e}_\theta = -w, \tag{9}\]
\[\dot{e}_x = -\frac{v}{d^*} + w\,e_y, \tag{10}\]
\[\dot{e}_y = -w\,e_x. \tag{11}\]
In addition, the actual linear and angular velocities of the mobile robot are subject to the following saturation constraints:
\[|v(t)| \le v_{max}, \quad |w(t)| \le w_{max}, \tag{12}\]
where \(v_{max},w_{max}\) are two positive constants denoting the known saturation levels of the linear and angular velocities, respectively. The goal is to design a linear velocity controller v(t) and an angular velocity controller w(t) under the constraints (12) to drive the mobile robot from the initial pose to the desired one.
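Before turning to the control design, the open-loop error model can be summarized in a short numerical sketch. This is only an illustration: the error dynamics \(\dot{e}_\theta = -w\), \(\dot{e}_x = -v/d^* + w e_y\), \(\dot{e}_y = -w e_x\) used below are a reconstruction consistent with the derivation in this section, and the Euler integrator, step size, and test values are our assumptions.

```python
def open_loop_step(state, v, w, d_star, dt=0.01):
    """One Euler step of the assumed vision-based error model:
    d(e_theta)/dt = -w,  d(e_x)/dt = -v/d_star + w*e_y,  d(e_y)/dt = -w*e_x."""
    e_theta, e_x, e_y = state
    return (e_theta + dt * (-w),
            e_x + dt * (-v / d_star + w * e_y),
            e_y + dt * (-w * e_x))

# With zero input the pose errors stay constant: the feature point is static
# and the robot does not move.
s = (0.5, -1.2, 0.3)
assert open_loop_step(s, v=0.0, w=0.0, d_star=2.0) == s
```

Iterating `open_loop_step` with nonzero inputs propagates the pose errors forward in time, so a discrete-time simulation of any velocity controller can be built on this step function.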
3 Saturated Velocity Control Design
The purpose of this section is to show that, by applying a class of continuous and bounded functions, an eye-in-hand visual stabilization controller for mobile robots under velocity input saturation can be obtained in the presence of unknown depth information. The asymptotic convergence to zero of the closed-loop system states is proven using Lyapunov techniques and LaSalle's invariance principle.
Firstly, a set of continuous and bounded functions is defined as follows:
\[\varPhi _r = \left\{ \phi : \mathbb {R} \rightarrow \mathbb {R} ~\middle|~ \phi \text{ is continuous},~ \phi (0)=0,~ |\phi (x)| \le r,~ x\phi (x) > 0~\forall x \ne 0 \right\}. \tag{13}\]
Specific examples of the function \(\phi (x)\) in \(\varPhi _r\) include
\[\phi (x) = r\tanh (x), \tag{14}\]
\[\phi (x) = \frac{2r}{\pi }\arctan (x). \tag{15}\]
Generally, \(\phi (x)=r\tanh (x)\) is the most commonly used choice for saturated controller design in previous works [18, 20, 23].
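As a side illustration (not part of the original development), the defining properties of such bounded functions can be checked numerically; the helper names and the arctan example below are our choices:

```python
import math

def phi_tanh(x, r=1.0):
    """phi(x) = r*tanh(x): continuous, odd, |phi(x)| <= r."""
    return r * math.tanh(x)

def phi_atan(x, r=1.0):
    """phi(x) = (2r/pi)*arctan(x): another bounded, sign-preserving example."""
    return (2.0 * r / math.pi) * math.atan(x)

# Verify boundedness and the sign condition x*phi(x) > 0 for x != 0.
for x in (-100.0, -1.0, -0.1, 0.1, 1.0, 100.0):
    for phi in (phi_tanh, phi_atan):
        assert abs(phi(x)) <= 1.0    # |phi(x)| <= r with r = 1
        assert x * phi(x) > 0.0      # phi(x) has the same sign as x
assert phi_tanh(0.0) == 0.0 and phi_atan(0.0) == 0.0
```

Any such function saturates the control channel it is applied to, which is exactly what the bounded-input design below relies on.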
Based on this, the linear velocity controller can be designed for the \((e_x, e_y)\)-subsystem. Define a Lyapunov candidate function
\[V = \frac{1}{2}\left(e_x^2 + e_y^2\right). \tag{16}\]
Taking the time derivative of V and substituting (10) and (11) into it, we have
\[\dot{V} = e_x\dot{e}_x + e_y\dot{e}_y = -\frac{v}{d^*}\,e_x.\]
Inspired by the properties of the function \(\phi (x)\) in (13), the controller v(t) is defined as
\[v(t) = k_1\,\phi (e_x), \quad \phi \in \varPhi _1, \tag{17}\]
where \(k_1\) is a positive constant and \(k_1\le v_{max}\), such that
\[\dot{V} = -\frac{k_1}{d^*}\,e_x\,\phi (e_x) \le 0. \tag{18}\]
Thus, it is concluded that \(e_x\) and \(e_y\) are bounded. Besides, we know that \(|v(t)|\le k_1\le v_{max}\).
For the \(e_{\theta }\)-subsystem, the angular velocity controller w(t) can be designed as
\[w(t) = k_2\,\phi (e_\theta ) + k_3\,\phi (e_y), \quad \phi \in \varPhi _1, \tag{19}\]
where \(k_2, k_3>0\). Considering the saturation constraint of the angular velocity, it should also satisfy that \(k_2+k_3\le w_{max}\) and \(k_2>k_3\).
Substituting (19) into (9), we have
\[\dot{e}_\theta + k_2\,\phi (e_\theta ) = -k_3\,\phi (e_y). \tag{20}\]
For the closed-loop angular subsystem given in (20), the two terms on the left-hand side represent an asymptotically stable system and the right-hand side can be seen as an additive disturbance. Therefore, if \(\lim \limits _{t \rightarrow \infty }{e_y}=0\), it can be obtained that \(\lim \limits _{t \rightarrow \infty }{e_\theta }=0\).
Next, the asymptotical convergence analysis of the closed-loop system states will be given.
Theorem 1
Considering the open-loop system (9)–(11) with the control laws (17) and (19), the system states uniformly asymptotically converge to zero under the input saturation (12).
Proof:
Following LaSalle's invariance principle, any bounded trajectory converges to the largest invariant set E. If E contains only the equilibrium point \(x=0\), then \(x=0\) is asymptotically stable.
Then, substituting (17) and (19) into the open-loop model (9)–(11), it is obtained that
\[\dot{e}_\theta = -k_2\,\phi (e_\theta ) - k_3\,\phi (e_y), \quad \dot{e}_x = -\frac{k_1}{d^*}\,\phi (e_x) + w\,e_y, \quad \dot{e}_y = -w\,e_x. \tag{21}\]
For the \((e_x, e_y)\)-subsystem, let \(\dot{V}=0\). Then, we have
\[\frac{k_1}{d^*}\,e_x\,\phi (e_x) = 0.\]
Thus, it is obtained that
\[e_x = 0.\]
We claim that
\[E=\left\{ (t,e_x,e_y,e_\theta )\in \mathbb {R}^{+} \times \mathbb {R}^{3}~\middle|~e_x=e_y=0 \right\}\]
is the largest invariant set.
The above conclusion can be proven by contradiction. If \(E=\left\{ (t,e_x,e_y,e_\theta )\in \mathbb {R}^{+} \times \mathbb {R}^{3}~|~e_x=e_y=0 \right\} \) is not the largest invariant set, then there exists a trajectory \((t, e_x(t), e_y(t), e_\theta (t))\) such that \(e_x(t)=0~\forall t \ge 0\) but \(e_y(t) \ne 0\) for each t in an open subset I of \([0, \infty )\). Thus, it is obtained that \(e_x \equiv 0, \dot{e}_x \equiv 0\). From (21), it is seen that \({\dot{e}}_y=0\) and it is concluded that \(e_y\) is a nonzero constant \(e_y^*.\) For the \((e_\theta , e_y)\)-subsystem, we have
\[\dot{e}_\theta \,e_y = -w\,e_y = -\dot{e}_x.\]
Because \(\dot{e}_x \equiv 0\), it can be obtained that \({\dot{e}}_\theta e_y={\dot{e}}_\theta e_y^*\equiv 0.\) Hence, it implies that \({\dot{e}}_\theta \equiv 0\) and thus \(e_\theta ={e_\theta }^*.\) Substituting \(e_y=e_y^*,~e_\theta =e_\theta ^*\) into (21), we have
Obviously, the above equation cannot hold, which leads to a contradiction. Consequently, the set \(E=\left\{ (t,e_x,e_y,e_\theta )\in \mathbb {R}^{+} \times \mathbb {R}^{3}~|~e_x=e_y=0 \right\} \) is the largest invariant set. That is,
\[\lim _{t \rightarrow \infty }e_x(t) = \lim _{t \rightarrow \infty }e_y(t) = 0.\]
Combining (20), it can be concluded that
\[\lim _{t \rightarrow \infty }e_\theta (t) = 0.\]
Thus, the proof of asymptotic stability of the closed-loop system is completed. It is concluded that the mobile robot can be asymptotically stabilized to the desired pose under the controls (17) and (19), and the control inputs always satisfy the input saturation constraints (Fig. 2). \(\blacksquare \)
Remark:
In the vision based open-loop system (9)–(11), though the image depth \(d^*\) is unknown, the proposed saturated velocity controller is irrelevant to it and only contains measurable system states, as shown in the control design process and stability analysis. Thus, the design of parameter updating law for \(d^*\) can be avoided and the complexity of the saturated controller can be greatly reduced.
4 Simulation Results
In this section, simulation results are provided to validate the effectiveness of the proposed eye-in-hand visual servoing controller under the saturation constraints. We use a MATLAB/Simulink model to simulate the visual servoing process of the nonholonomic mobile robot. A monocular camera model is adopted and the image size is \(960 \times 540\) pixels. The linear and angular velocity saturation levels are set to \(v_{max}=1.5\) m/s and \(w_{max}=1\) rad/s, respectively. The initial pose of the mobile robot is (−1.5 m, 0.37 m, 31\(^{\circ }\)) and the desired one is (0, 0, 0). The control parameters are given as \(k_1=1.2, k_2=0.7, k_3=0.2\).
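The simulation setup above can also be reproduced in a minimal script. This is a sketch under explicit assumptions: the error dynamics and the controller forms \(v = k_1\tanh (e_x)\), \(w = k_2\tanh (e_\theta ) + k_3\tanh (e_y)\) are our reconstruction, and the depth \(d^* = 2\) m and initial error values are illustrative; only the gains and saturation levels follow the values given above.

```python
import math

# Gains and saturation levels from the simulation setup; the rest is assumed.
k1, k2, k3 = 1.2, 0.7, 0.2
v_max, w_max = 1.5, 1.0
d_star = 2.0                           # unknown depth: used only by the plant
e_theta, e_x, e_y = 0.54, -0.75, 0.19  # illustrative initial errors

dt, T = 0.001, 30.0
v_hist, w_hist = [], []
V0 = 0.5 * (e_x**2 + e_y**2)           # Lyapunov function at t = 0
for _ in range(int(T / dt)):
    v = k1 * math.tanh(e_x)                            # |v| <= k1 <= v_max
    w = k2 * math.tanh(e_theta) + k3 * math.tanh(e_y)  # |w| <= k2+k3 <= w_max
    v_hist.append(v)
    w_hist.append(w)
    # Euler integration of the assumed closed-loop error dynamics.
    e_theta += dt * (-w)
    e_x += dt * (-v / d_star + w * e_y)
    e_y += dt * (-w * e_x)

assert max(abs(u) for u in v_hist) <= v_max   # linear velocity saturated
assert max(abs(u) for u in w_hist) <= w_max   # angular velocity saturated
assert 0.5 * (e_x**2 + e_y**2) < V0           # translation errors decreased
```

The assertions mirror the claims of Sect. 3: the velocity inputs never exceed the saturation levels, and the Lyapunov function decreases along the closed-loop trajectory.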
Figure 3 shows the evolution of the closed-loop system states. It can be seen that the scaled translation errors \(e_x\), \(e_y\) and the rotation error \(e_\theta \) all asymptotically converge to zero, which implies that the mobile robot is successfully driven from the initial pose to the desired one. The velocity control inputs are illustrated in Fig. 4: both the linear and the angular velocity inputs of the mobile robot satisfy the saturation constraints. Figure 5 displays the trajectories of four feature points in the image space, where the stars and the circular points denote the desired and initial images, respectively. The four points move along their trajectories with the movement of the mobile robot and finally coincide with the desired image, so the mobile robot achieves pose regulation. Figure 6 shows the path of the mobile robot during the visual servoing process, which intuitively demonstrates that the mobile robot is stabilized to the desired pose.
In summary, the simulation results verify that the proposed controller (17), (19) is effective for the visual stabilization of mobile robots with the velocity input saturation.
5 Conclusion
In this paper, a saturated eye-in-hand visual servoing controller is proposed for nonholonomic mobile robots with unknown image depth. A class of continuous and bounded functions is applied for the velocity controller design. The asymptotic stability of the closed-loop system is proven using Lyapunov techniques and LaSalle’s invariance principle. Simulation results are provided to show the good performance of the controller. In the future, we will take the dynamics of mobile robots into consideration and try to design the saturated visual servoing controller at acceleration level.
References
Fuentes-Pacheco, J., Ruiz-Ascencio, J., Rendón-Mancha, J.M.: Visual simultaneous localization and mapping: a survey. Artif. Intell. Rev. 43(1), 55–81 (2015)
Liu, Y., Xiong, R., Wang, Y., Huang, H., Xie, X., Liu, X., Zhang, G.: Stereo visual-inertial odometry with multiple Kalman filters ensemble. IEEE Trans. Ind. Electron. 63(10), 6205–6216 (2016)
Mariottini, G.L., Oriolo, G., Prattichizzo, D.: Image-based visual servoing for nonholonomic mobile robots using epipolar geometry. IEEE Trans. Robot. 23(1), 87–100 (2007)
Zhang, X., Fang, Y., Liu, X.: Motion-estimation-based visual servoing of nonholonomic mobile robots. IEEE Trans. Robot. 27(6), 1167–1175 (2011)
Fattahi, M., Vasegh, N., Momeni, H.: Stabilization of a class of nonlinear discrete time systems with time varying delay. Control Sci. Inf. Eng. 10(8), 181–208 (2014)
Zhang, X., Fang, Y., Sun, N.: Visual servoing of mobile robots for posture stabilization: from theory to experiments. Int. J. Robust and Nonlinear Control 25(1), 1–15 (2015)
Chen, X., Jia, Y., Matsuno, F.: Tracking control for differential-drive mobile robots with diamond-shaped input constraints. IEEE Trans. Control Syst. Technol. 22(5), 1999–2006 (2014)
Wang, K., Liu, Y., Li, L.: Visual servoing trajectory tracking of nonholonomic mobile robots without direct position measurement. IEEE Trans. Robot. 30(4), 1026–1035 (2014)
Chaumette, F., Hutchinson, S.: Visual servo control. Part I: basic approaches. IEEE Robot. Autom. Mag. 13(4), 82–90 (2006)
Li, B., Fang, Y., Hu, G., Zhang, X.: Model-free unified tracking and regulation visual servoing of wheeled mobile robots. IEEE Trans. Control Syst. Technol. 24(4), 1328–1339 (2016)
Brockett, R.: The early days of geometric nonlinear control. Automatica 50(9), 2203–2224 (2014)
Fang, Y., Dixon, W., Dawson, D., Chawda, P.: Homography-based visual servo regulation of mobile robots. IEEE Trans. Syst. Man Cybern. Part B-Cybern. 35(5), 1041–1050 (2005)
Siradjuddin, I., Tundung, S., Indah, A.: A real-time model based visual servoing application for a differential drive mobile robot using beaglebone black embedded system (IRIS). In: IEEE International Symposium on Robotics & Intelligent Sensors, pp. 186–192 (2015)
Fang, Y., Liu, X., Zhang, X.: Adaptive active visual servoing of nonholonomic mobile robots. IEEE Trans. Ind. Electron. 20(1), 241–248 (2012)
Li, B., Fang, Y., Zhang, X.: Visual servo regulation of wheeled mobile robots with an uncalibrated onboard camera. IEEE Trans. Mechatron. 21(5), 2330–2342 (2016)
Zhang, X., Fang, Y., Li, B., Wang, J.: Visual servoing of nonholonomic mobile robots with uncalibrated camera-to-robot parameters. IEEE Trans. Ind. Electron. 64(1), 390–400 (2017)
Ke, F., Li, Z., Xiao, H., Zhang, X.: Visual servoing of constrained mobile robots based on model predictive control. IEEE Trans. Syst. Man Cybern. Syst. 47, 1428–1438 (2016)
Jiang, Z., Lefeber, E., Nijmeijer, H.: Saturated stabilization and tracking of a nonholonomic mobile robot. Syst. Control Lett. 42(5), 327–332 (2001)
Huang, J., Wen, C., Wang, W., Jiang, Z.: Adaptive stabilization and tracking control of a nonholonomic mobile robot with input saturation and disturbance. Syst. Control Lett. 62(3), 234–241 (2013)
Chen, H.: Robust stabilization for a class of dynamic feedback uncertain nonholonomic mobile robots with input saturation. Int. J. Control Autom. Syst. 12(6), 1216–1224 (2014)
Chen, H., Zhang, J.: Semiglobal saturated practical stabilization for nonholonomic mobile robots with uncertain parameters and angle measurement disturbance. In: IEEE Control and Decision Conference, pp. 3731–3736 (2013)
Izumi, K., Tanaka, H., Tsujimura, T.: Nonholonomic control considering with input saturation for a mobile robot. In: Conference of the Society of Instrument and Control Engineers of Japan, pp. 1173–1172 (2016)
Su, Y., Zheng, C.: Global asymptotic stabilization and tracking of wheeled mobile robots with actuator saturation. In: IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 345–350 (2010)
© 2017 Springer International Publishing AG

Wang, R., Zhang, X., Fang, Y., Li, B. (2017). Visual Servoing of Mobile Robots with Input Saturation at Kinematic Level. In: Zhao, Y., Kong, X., Taubman, D. (eds.) Image and Graphics. ICIG 2017. Lecture Notes in Computer Science, vol. 10666. Springer, Cham. https://doi.org/10.1007/978-3-319-71607-7_38

Print ISBN: 978-3-319-71606-0. Online ISBN: 978-3-319-71607-7.