Unified Passivity-Based Visual Control for Moving Object Tracking

  • Chapter
Machine Vision and Navigation

Abstract

In this chapter, a unified passivity-based visual servoing control structure is presented for a vision system mounted on the robot. The controller is suitable for robotic arms, mobile robots, and mobile manipulators. The proposed control law enables the robot to track a moving target in its workspace. Exploiting the passivity properties of the control system and assuming exact knowledge of the target velocity, the asymptotic convergence of the control errors to zero is proved. A robustness analysis based on L 2-gain performance then shows that the control errors remain ultimately bounded even when the estimate of the target velocity contains bounded errors. Numerical simulations and experimental results illustrate the performance of the algorithm on a robotic manipulator, a mobile robot, and a mobile manipulator.


Abbreviations

DOF: Degree of freedom

References

  1. Basaca-Preciado, L. C., Sergiyenko, O. Y., Rodríguez-Quinonez, J. C., Garcia, X., Tyrsa, V. V., Rivas-Lopez, M., Hernandez-Balbuena, D., Mercorelli, P., Podrygalo, M., Gurko, A., Tabakova, I., & Starostenko, O. (2014). Optical 3D laser measurement system for navigation of autonomous mobile robot. Optics and Lasers in Engineering, 54, 159–169.

  2. Toibero, J. M., Roberti, F., & Carelli, R. (2009). Stable contour-following control of wheeled mobile robots. Robotica, 27(1), 1–12.

  3. Toibero, J. M., Roberti, F., Carelli, R., & Fiorini, P. (2011). Switching control approach for stable navigation of mobile robots in unknown environments. Robotics and Computer Integrated Manufacturing, 27(3), 558–568.

  4. Rodríguez-Quiñonez, J. C., Sergiyenko, O., Flores-Fuentes, W., Rivas-Lopez, M., Hernandez-Balbuena, D., Rascón, R., & Mercorelli, P. (2017). Improve a 3D distance measurement accuracy in stereo vision systems using optimization methods’ approach. Opto-Electronics Review, 25(1), 24–32.

  5. Toibero, J. M., Soria, C., Roberti, F., Carelli, R., & Fiorini, P. (2009). Switching visual servoing approach for stable corridor navigation. In Proceedings of the International Conference on Advanced Robotics, Munich, Germany, 22–26 June 2009.

  6. Traslosheros, A., Sebastián, J. M., Torrijos, J., Carelli, R., & Roberti, F. (2014). Using a 3DOF parallel robot and a spherical bat to hit a Ping-Pong ball. International Journal of Advanced Robotic Systems, 11(5). https://doi.org/10.5772/58526.

  7. Weiss, L. E., Sanderson, A., & Neuman, P. (1987). Dynamic sensor-based control of robots with visual feedback. IEEE Journal of Robotics and Automation, 3(9), 404–417.

  8. Carelli, R., De la Cruz, C., & Roberti, F. (2006). Centralized formation control of non-holonomic mobile robots. Latin American Applied Research, 36(2), 63–69.

  9. Gong, Z., Tao, B., Yang, H., Yin, Z., & Ding, H. (2018). An uncalibrated visual servo method based on projective homography. IEEE Transactions on Automation Science and Engineering, 15(2), 806–817.

  10. López-Nicolás, G., Guerrero, J. J., & Sagüés, C. (2010). Visual control of vehicles using two-view geometry. Mechatronics, 20(2), 315–325.

  11. Carelli, R., Kelly, R., Nasisi, O. H., Soria, C., & Mut, V. (2006). Control based on perspective lines of a nonholonomic mobile robot with camera-on-board. International Journal of Control, 79, 362–371.

  12. Wang, H., Guo, D., Xu, H., Chen, W., Liu, T., & Leang, K. K. (2017). Eye-in-hand tracking control of a free-floating space manipulator. IEEE Transactions on Aerospace and Electronic Systems, 53(4), 1855–1865.

  13. Carelli, R., Santos-Victor, J., Roberti, F., & Tosetti, S. (2006). Direct visual tracking control of remote cellular robots. Robotics and Autonomous Systems, 54(10), 805–814.

  14. Taryudi, & Wang, M. S. (2017). 3D object pose estimation using stereo vision for object manipulation system. In Proceedings of the IEEE International Conference on Applied System Innovation, Sapporo, Japan, 13–17 May 2017.

  15. López-Nicolás, G., Guerrero, J. J., & Sagüés, C. (2010). Visual control through the trifocal tensor for nonholonomic robots. Robotics and Autonomous Systems, 58(2), 216–226.

  16. Andaluz, V., Carelli, R., Salinas, L., Toibero, J. M., & Roberti, F. (2012). Visual control with adaptive dynamical compensation for 3D target tracking by mobile manipulators. Mechatronics, 22(4), 491–502.

  17. Roberti, F., Toibero, J. M., Soria, C., Vassallo, R., & Carelli, R. (2009). Hybrid collaborative stereo vision system for mobile robots formation. International Journal of Advanced Robotic Systems, 6(4), 257–266.

  18. Zhang, K., Chen, J., Li, Y., & Gao, Y. (2018). Unified visual servoing tracking and regulation of wheeled mobile robots with an uncalibrated camera. IEEE/ASME Transactions on Mechatronics, 23(4), 1728–1739.

  19. Fujita, M., Kawai, H., & Spong, M. W. (2007). Passivity-based dynamic visual feedback control for three dimensional target tracking: Stability and L2-gain performance analysis. IEEE Transactions on Control Systems Technology, 15(1), 40–52.

  20. Kawai, H., Toshiyuki, M., & Fujita, M. (2006). Image-based dynamic visual feedback control via passivity approach. In Proceedings of the IEEE International Conference on Control Applications, Munich, Germany, 4–6 October 2006.

  21. Murao, T., Kawai, H., & Fujita, M. (2005). Passivity-based dynamic visual feedback control with a movable camera. In Proceedings of the 44th IEEE International Conference on Decision and Control, Sevilla, Spain, 12–15 December 2005.

  22. Soria, C., Roberti, F., Carelli, R., & Sebastián, J. M. (2008). Control servo-visual de un robot manipulador planar basado en pasividad [Passivity-based visual servo control of a planar manipulator robot]. Revista Iberoamericana de Automática e Informática Industrial, 5(4), 54–61.

  23. Martins, F., Sarcinelli, M., Freire Bastos, T., & Carelli, R. (2008). Dynamic modeling and trajectory tracking control for unicycle-like mobile robots. In Proceedings of the 3rd International Symposium on Multibody Systems and Mechatronics, San Juan, Argentina, 8–12 April 2008.

  24. Andaluz, V., Roberti, F., Salinas, L., Toibero, J. M., & Carelli, R. (2015). Passivity-based visual feedback control with dynamic compensation of mobile manipulators: Stability and L2-gain performance analysis. Robotics and Autonomous Systems, 66, 64–74.

  25. Morales, B., Roberti, F., Toibero, J. M., & Carelli, R. (2012). Passivity based visual servoing of mobile robots with dynamics compensation. Mechatronics, 22(4), 481–490.

  26. El-Hawwary, M. I., & Maggiore, M. (2008). Global path following for the unicycle and other results. In Proceedings of the American Control Conference, Seattle, Washington, 11–13 June 2008.

  27. Lee, D. (2007). Passivity-based switching control for stabilization of wheeled mobile robots. In Proceedings of the Robotics: Science and Systems, Atlanta, Georgia, 27–30 June 2007.

  28. Arcak, M. (2007). Passivity as a design tool for group coordination. IEEE Transactions on Automatic Control, 52(8), 1380–1390.

  29. Igarashi, Y., Hatanaka, T., Fujita, M., & Spong, M. W. (2007). Passivity-based 3D attitude coordination: Convergence and connectivity. In Proceedings of the IEEE Conference on Decision and Control, New Orleans, LA, 10–11 December 2007.

  30. Ihle, I., Arcak, M., & Fossen, T. (2007). Passivity-based designs for synchronized path-following. Automatica, 43(9), 1508–1518.

  31. Spong, M. W., Holm, J. K., & Lee, D. J. (2007). Passivity-based control of biped locomotion. IEEE Robotics & Automation Magazine, 14(2), 30–40.

  32. Fujita, M., Hatanaka, T., Kobayashi, N., Ibuki, T., & Spong, M. (2009). Visual motion observer-based pose synchronization: A passivity approach. In Proceedings of the IEEE International Conference on Decision and Control, Shanghai, China, 16–18 December 2009.

  33. Kawai, H., Murao, T., & Fujita, M. (2011). Passivity-based visual motion observer with panoramic camera for pose control. Journal of Intelligent & Robotic Systems, 64(3–4), 561–583.

  34. Hu, Y. M., & Guo, B. H. (2004). Modeling and motion planning of a three-link wheeled mobile manipulator. In Proceedings of the International Conference on Control, Automation, Robotics and Vision, Kunming, China, 6–9 December 2004.

  35. Andaluz, V., Roberti, F., Toibero, J. M., & Carelli, R. (2012). Adaptive unified motion control of mobile manipulators. Control Engineering Practice, 20(12), 1337–1352.

  36. Hutchinson, S., Hager, G., & Corke, P. (1996). A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5), 651–670.

  37. Hill, D., & Moylan, P. (1976). Stability results for nonlinear feedback systems. Automatica, 13, 373–382.

  38. Byrnes, C. I., Isidori, A., & Willems, J. C. (1991). Passivity, feedback equivalence, and the global stabilization of minimum phase nonlinear systems. IEEE Transactions on Automatic Control, 36(11), 1228–1240.

  39. Ortega, R., Loria, A., Kelly, R., & Praly, L. (1995). On passivity based output feedback global stabilization of Euler-Lagrange systems. International Journal of Robust and Nonlinear Control, 5, 313–324.

  40. Vidyasagar, M. (1979). New passivity-type criteria for large-scale interconnected systems. IEEE Transactions on Automatic Control, 24, 575–579.

  41. Vidyasagar, M. (1978). Nonlinear systems analysis. Englewood Cliffs, NJ: Prentice Hall International Editions.

  42. Bayle, B., & Fourquet, J. Y. (2001). Manipulability analysis for mobile manipulators. In Proceedings of the IEEE International Conference on Robotics and Automation, Seoul, Korea, May 2001.

  43. Kalata, P. (1984). The tracking index: A generalized parameter for α-β and α-β-γ target trackers. IEEE Transactions on Aerospace and Electronic Systems, 20(2), 174–182.

  44. Van der Schaft, A. (2000). L2 gain and passivity techniques in nonlinear control. London: Springer.

  45. Das, A. K., Fierro, R., Kumar, V., Ostrowski, J. P., Spletzer, J., & Taylor, C. J. (2002). A vision-based formation control framework. IEEE Transactions on Robotics and Automation, 18(5), 813–825.

  46. Zulli, R., Fierro, R., Conte, G., & Lewis, F. L. (1995). Motion planning and control for nonholonomic mobile robots. In Proceedings of the IEEE International Symposium on Intelligent Control, CA, 27–29 August 1995.

Author information

Corresponding author

Correspondence to Flavio Roberti.

Appendices

Appendix 1

1.1 Mobile Robot Model

This chapter considers a unicycle-like robot, consisting of two independently driven wheels mounted on the same axle and a castor wheel, as Fig. 12.37 shows. If the robot is modeled as a point mass located at the point C at the middle of the wheels’ axle, the kinematic model that represents the pose of the robot in the plane is

$$ {\displaystyle \begin{array}{c}\dot{x}=u\cos \phi \\ {}\dot{y}=u\sin \phi \\ {}\dot{\phi}=\omega \end{array}} $$
(12.41)

where (x, y) is the position of the robot, ϕ its orientation, u its linear velocity, and ω its angular velocity.

Fig. 12.37 Geometric description of the mobile robot

This type of robot has a non-holonomic constraint given by

$$ \dot{y}\cos \phi -\dot{x}\sin \phi =0 $$
(12.42)

This constraint states that the robot cannot move laterally; it can only move in the direction perpendicular to the wheels’ axle.
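As an illustrative sketch (added here, not part of the original chapter), the kinematic model (12.41) can be integrated numerically with Euler steps; the hypothetical velocity values below also let us check that the non-holonomic constraint (12.42) holds along the trajectory.

```python
import math

def simulate_unicycle(u, omega, x=0.0, y=0.0, phi=0.0, dt=1e-3, steps=2000):
    """Euler-integrate the unicycle kinematics (12.41)."""
    for _ in range(steps):
        xdot = u * math.cos(phi)   # x' = u cos(phi)
        ydot = u * math.sin(phi)   # y' = u sin(phi)
        # Non-holonomic constraint (12.42): y' cos(phi) - x' sin(phi) = 0
        assert abs(ydot * math.cos(phi) - xdot * math.sin(phi)) < 1e-12
        x += xdot * dt
        y += ydot * dt
        phi += omega * dt          # phi' = omega
    return x, y, phi

# Hypothetical constant velocities: the heading integrates to phi = omega * t.
x, y, phi = simulate_unicycle(u=0.5, omega=0.2)
```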

1.2 Feature Selection

Without loss of generality for the control laws proposed in this chapter, a cylindrical object is selected, defining the vector of image features as \( \boldsymbol{\upxi} ={\left[\begin{array}{cc}{\xi}_1& {\xi}_2\end{array}\right]}^{\mathrm{T}}={\left[\begin{array}{cc}{x}_{\mathrm{m}}& {d}_{\mathrm{m}}\end{array}\right]}^{\mathrm{T}} \), where x m is the projection on the image plane of the x-coordinate of the cylinder’s middle point, and d m is the projection on the image plane of the actual width D of the cylinder [25]. This feature definition is depicted in Fig. 12.38. According to the camera projection model (12.4), the image features are straightforwardly obtained as

$$ {x}_{\mathrm{m}}=f\frac{x_{\mathrm{Tmc}}}{z_{\mathrm{Tmc}}};\kern1em {d}_{\mathrm{m}}=f\frac{D}{z_{\mathrm{Tmc}}} $$
(12.43)
Fig. 12.38 Image features
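As a minimal numeric sketch (added here; the focal length and object width values are hypothetical), the projection (12.43) maps the target's camera-frame coordinates to the two image features:

```python
def image_features(f, D, x_tmc, z_tmc):
    """Pinhole projection (12.43): xm = f * xTmc / zTmc, dm = f * D / zTmc."""
    xm = f * x_tmc / z_tmc   # projected midpoint x-coordinate
    dm = f * D / z_tmc       # projected width of the cylinder
    return xm, dm

# Hypothetical values: f = 500 px, D = 0.1 m, target at (0.5, 2.0) m.
xm, dm = image_features(f=500.0, D=0.1, x_tmc=0.5, z_tmc=2.0)
assert abs(xm - 125.0) < 1e-9
assert abs(dm - 25.0) < 1e-9
```

Note how dm depends only on the depth z Tmc, which is what makes it useful as a range-like feature.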

Now, the problem consists of obtaining the vision system model. This model must describe the time variation of the image features \( \dot{\boldsymbol{\upxi}} \) as a function of both the robot motion \( {\left[\begin{array}{cc}u& \omega \end{array}\right]}^{\mathrm{T}} \) and the target velocity v T. For this purpose, the pose of the target on the X mc − Z mc plane with respect to the vision system can be written as a function of the distance d and the angle φ, defined as depicted in Fig. 12.39, as follows:

$$ {\displaystyle \begin{array}{c}{x}_{\mathrm{Tmc}}=d\sin \varphi \\ {}{z}_{\mathrm{Tmc}}=d\cos \varphi \end{array}} $$
(12.44)
Fig. 12.39 Relative posture between the target and the robot

Substituting (12.44) into (12.43), the vector of image features is expressed as a function of the relative pose between the target and the camera

$$ \boldsymbol{\upxi} ={\left[\begin{array}{cc}f\tan \varphi & f\frac{D}{d\cos \varphi}\end{array}\right]}^{\mathrm{T}} $$
(12.45)

and the time derivative of (12.45) is

$$ \dot{\boldsymbol{\upxi}}=\frac{\partial \left({\xi}_1,{\xi}_2\right)}{\partial \left(\varphi, d\right)}{\left[\begin{array}{cc}\dot{\varphi}& \dot{d}\end{array}\right]}^{\mathrm{T}}={\mathbf{J}}_1{\left[\begin{array}{cc}\dot{\varphi}& \dot{d}\end{array}\right]}^{\mathrm{T}} $$
(12.46)

with

$$ {\mathbf{J}}_1=\left[\begin{array}{cc}f{\sec}^2\varphi & 0\\ {}\frac{fD}{d}\sec \left(\varphi \right)\tan \left(\varphi \right)& -\frac{fD}{d^2}\sec \left(\varphi \right)\end{array}\right] $$
(12.47)

The variation of the relative position between the robot and the target is due to both the robot motion and the target motion: \( {\left[\begin{array}{cc}\dot{\varphi}& \dot{d}\end{array}\right]}^{\mathrm{T}}={\left[\begin{array}{cc}\dot{\varphi}& \dot{d}\end{array}\right]}_{\mathrm{R}}^{\mathrm{T}}+{\left[\begin{array}{cc}\dot{\varphi}& \dot{d}\end{array}\right]}_{\mathrm{T}}^{\mathrm{T}} \). Now, from the kinematic model of the mobile robot in polar coordinates (see Fig. 12.39), and first considering a static target, the time variation of the relative posture between the target and the robot (the time variation of d and φ) is obtained as a function of the linear and angular velocities of the robot \( \boldsymbol{\upmu} ={\left[\begin{array}{cc}u& \omega \end{array}\right]}^{\mathrm{T}} \) as follows:

$$ {\left[\begin{array}{c}\dot{\varphi}\\ {}\dot{d}\end{array}\right]}_{\mathrm{R}}=\left[\begin{array}{cc}\frac{\sin \left(\varphi \right)}{d}& 1\\ {}-\cos \left(\varphi \right)& 0\end{array}\right]\left[\begin{array}{c}u\\ {}\omega \end{array}\right]={\mathbf{J}}_2\boldsymbol{\upmu} $$
(12.48)

Now consider a static robot, that is, constant values of x, y, and ϕ. Taking the time derivative of (12.44), the target velocity \( {\mathbf{v}}_{\mathrm{T}}={\left[\begin{array}{cc}{\dot{x}}_{\mathrm{T}\mathrm{mc}}& {\dot{z}}_{\mathrm{T}\mathrm{mc}}\end{array}\right]}^{\mathrm{T}}=\mathbf{A}{\left[\begin{array}{cc}\dot{\varphi}& \dot{d}\end{array}\right]}_{\mathrm{T}}^{\mathrm{T}} \) is expressed as

$$ {\mathbf{v}}_{\mathrm{T}}=\left[\begin{array}{cc}d\cos \left(\varphi \right)& \sin \left(\varphi \right)\\ {}-d\sin \left(\varphi \right)& \cos \left(\varphi \right)\end{array}\right]{\left[\begin{array}{c}\dot{\varphi}\\ {}\dot{d}\end{array}\right]}_{\mathrm{T}} $$
(12.49)

Then, since A is invertible, the following expression is obtained

$$ {\displaystyle \begin{array}{c}{\left[\begin{array}{c}\dot{\varphi}\\ {}\dot{d}\end{array}\right]}_{\mathrm{T}}=\left[\begin{array}{cc}\frac{1}{d}\cos \left(\varphi \right)& -\frac{1}{d}\sin \left(\varphi \right)\\ {}\sin \left(\varphi \right)& \cos \left(\varphi \right)\end{array}\right]{\mathbf{v}}_{\mathrm{T}}\\ {}{\left[\begin{array}{c}\dot{\varphi}\\ {}\dot{d}\end{array}\right]}_{\mathrm{T}}={\mathbf{J}}_0{\mathbf{v}}_{\mathrm{T}}\end{array}} $$
(12.50)

Finally, by introducing the motions of both the robot (12.48) and the target (12.50) into (12.46), the model of the vision system is obtained

$$ \dot{\boldsymbol{\upxi}}={\mathbf{J}}_1\left({\mathbf{J}}_2\boldsymbol{\upmu} +{\mathbf{J}}_0{\mathbf{v}}_{\mathrm{T}}\right) $$
(12.51)

Defining

$$ {\displaystyle \begin{array}{c}\mathbf{J}={\mathbf{J}}_1{\mathbf{J}}_2\\ {}{\mathbf{J}}_{\mathrm{T}}={\mathbf{J}}_1{\mathbf{J}}_0\end{array}} $$
(12.52)

a compact form for the vision system model is obtained

$$ \dot{\boldsymbol{\upxi}}=\mathbf{J}\boldsymbol{\upmu } +{\mathbf{J}}_{\mathrm{T}}{\mathbf{v}}_{\mathrm{T}} $$
(12.53)

where

$$ \mathbf{J}=\left[\begin{array}{cc}\frac{x_{\mathrm{m}}{d}_{\mathrm{m}}}{fD}& \frac{f^2+{x}_{\mathrm{m}}^2}{f}\\ {}\frac{d_{\mathrm{m}}^2}{fD}& \frac{x_{\mathrm{m}}{d}_{\mathrm{m}}}{f}\end{array}\right] $$
(12.54)
$$ {\mathbf{J}}_{\mathrm{T}}=\left[\begin{array}{cc}\frac{d_{\mathrm{m}}}{D}& -\frac{x_{\mathrm{m}}{d}_{\mathrm{m}}}{fD}\\ {}0& -\frac{d_{\mathrm{m}}^2}{fD}\end{array}\right] $$
(12.55)
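As a sanity check (a sketch added here, with hypothetical numeric values; it is not part of the chapter), one can verify that the closed-form Jacobians (12.54) and (12.55), written in terms of the image features, agree with the products J = J1 J2 and J T = J1 J0 built from (12.47), (12.48), and (12.50):

```python
import math

def jacobians(f, D, d, phi):
    """J1 (12.47), J2 (12.48), J0 (12.50) as 2x2 nested lists."""
    s, c, t = math.sin(phi), math.cos(phi), math.tan(phi)
    sec = 1.0 / c
    J1 = [[f * sec**2, 0.0],
          [f * D / d * sec * t, -f * D / d**2 * sec]]
    J2 = [[s / d, 1.0],
          [-c, 0.0]]
    J0 = [[c / d, -s / d],
          [s, c]]
    return J1, J2, J0

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical parameters: f = 500 px, D = 0.1 m, d = 2 m, phi = 0.3 rad.
f, D, d, phi = 500.0, 0.1, 2.0, 0.3
J1, J2, J0 = jacobians(f, D, d, phi)
xm = f * math.tan(phi)                 # image features via (12.45)
dm = f * D / (d * math.cos(phi))

J_closed = [[xm * dm / (f * D), (f**2 + xm**2) / f],   # (12.54)
            [dm**2 / (f * D), xm * dm / f]]
JT_closed = [[dm / D, -xm * dm / (f * D)],             # (12.55)
             [0.0, -dm**2 / (f * D)]]

J, JT = matmul(J1, J2), matmul(J1, J0)
for i in range(2):
    for j in range(2):
        assert abs(J[i][j] - J_closed[i][j]) < 1e-9
        assert abs(JT[i][j] - JT_closed[i][j]) < 1e-9
```

The zero in the (2, 1) entry of (12.55) comes from the exact cancellation sec φ tan φ cos φ − sec φ sin φ = 0, which the numeric check confirms.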

Appendix 2

The formal definitions used in this work concerning the passivity of operators on function spaces follow [44].

Definition 1

L p signal spaces.

For all p ∈ [1, ∞), L p signal spaces are defined as

$$ {L}_{\mathrm{p}}=\left\{f:{\mathrm{\mathbb{R}}}_{+}\to \mathrm{\mathbb{R}}/{\int}_0^{\infty }{\left|f(t)\right|}^p dt<\infty \right\}. $$

L p is a Banach space with respect to the norm

$$ {\left\Vert f\right\Vert}_p={\left({\int}_0^{\infty }{\left|f(t)\right|}^p dt\right)}^{\frac{1}{p}}. $$

Definition 2

L ∞ signal spaces.

L ∞ signal spaces are defined as

$$ {L}_{\infty }=\left\{f:{\mathrm{\mathbb{R}}}_{+}\to \mathrm{\mathbb{R}}/\underset{t\in {\mathrm{\mathbb{R}}}_{+}}{\sup}\kern0.5em \left|f(t)\right|<\infty \right\}. $$

L ∞ is a Banach space with respect to the norm

$$ {\left\Vert f\right\Vert}_{\infty }=\underset{t\in {\mathrm{\mathbb{R}}}_{+}}{\sup}\kern0.5em \left|f(t)\right|. $$

Definition 3

Truncated function.

Let f : ℝ+ → ℝ. Then, for each T ∈ ℝ+, the function f T : ℝ+ → ℝ is defined by

$$ {f}_{\mathrm{T}}(t)=\left\{\begin{array}{ll}f(t)& 0\le t<T\\ {}0& \kern1.75em t\ge T\end{array}\right. $$

Definition 4

Extended L p signal spaces.

Extended L p spaces are defined as

$$ {L}_{\mathrm{p}\mathrm{e}}=\left\{f/{f}_{\mathrm{T}}\in {L}_{\mathrm{p}}\kern1em \forall T<\infty \right\}. $$

Definition 5

Given g, h ∈ L 2e, the inner product ⟨•, •⟩T and the truncated norm ‖•‖2,T on the set L 2e are defined as

$$ {\displaystyle \begin{array}{c}{\left\langle g,h\right\rangle}_{\mathrm{T}}={\int}_0^{\mathrm{T}}g(t)h(t)\ dt\kern1em \forall T\in \left[0,\infty \right)\\ {}{\left\Vert g\right\Vert}_{2,\mathrm{T}}={\left\langle g,g\right\rangle}_{\mathrm{T}}^{\frac{1}{2}}={\left({\int}_0^{\mathrm{T}}g(t)g(t)\ dt\right)}^{\frac{1}{2}}\end{array}} $$

Definition 6

Let G : L 2e → L 2e be an input-output mapping. Then, G is passive if there exists a constant β such that

$$ {\left\langle \mathbf{Gx},\mathbf{x}\right\rangle}_{\mathrm{T}}\ge \beta \kern1em \forall \mathbf{x}\in {L}_{2e}\kern1em \forall T\in \left[0,\infty \right) $$

Definition 7

Let G : L 2e → L 2e be an input-output mapping. Then, G is strictly input passive if there exist constants β ∈ ℝ and δ > 0 such that

$$ {\left\langle \mathbf{Gx},\mathbf{x}\right\rangle}_{\mathrm{T}}\ge \beta +\delta\ {\left\Vert \mathbf{x}\right\Vert}_{2,\mathrm{T}}^2\kern1em \forall \mathbf{x}\in {L}_{2e} $$

Definition 8

Let G : L 2e → L 2e be an input-output mapping. Then, G is strictly output passive if there exist constants β ∈ ℝ and δ > 0 such that

$$ {\left\langle \mathbf{Gx},\mathbf{x}\right\rangle}_{\mathrm{T}}\ge \beta +\delta\ {\left\Vert \mathbf{Gx}\right\Vert}_{2,\mathrm{T}}^2\kern1em \forall \mathbf{x}\in {L}_{2e} $$
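To make Definitions 5–8 concrete, the following sketch (an illustration added here; the example system is not taken from the chapter) numerically checks strict output passivity of the first-order system ẏ = −y + x with y(0) = 0. Its storage function V = y²/2 satisfies V̇ = −y² + yx, so ⟨Gx, x⟩T = V(T) + ‖Gx‖²₂,T ≥ ‖Gx‖²₂,T, i.e., Definition 8 holds with β = 0 and δ = 1 (the small tolerance below absorbs Euler discretization error):

```python
import math

def check_strict_output_passivity(x_func, T=10.0, dt=1e-4):
    """Evaluate <Gx, x>_T and ||Gx||_{2,T}^2 (Def. 5) for ydot = -y + x, y(0) = 0."""
    y, inner, norm_sq, t = 0.0, 0.0, 0.0, 0.0
    while t < T:
        x = x_func(t)
        inner += y * x * dt    # truncated inner product <Gx, x>_T
        norm_sq += y * y * dt  # truncated squared norm ||Gx||_{2,T}^2
        y += (-y + x) * dt     # Euler step of the system
        t += dt
    return inner, norm_sq

# Hypothetical input signal x(t) = sin(2t).
inner, norm_sq = check_strict_output_passivity(lambda t: math.sin(2.0 * t))
# Strict output passivity (Def. 8) with beta = 0, delta = 1:
assert inner >= norm_sq - 1e-3
```

The same loop with `norm_sq` accumulating `x * x * dt` instead would test strict input passivity (Definition 7), which this system does not satisfy for every input.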

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Roberti, F., Toibero, J.M., Sarapura, J.A., Andaluz, V., Carelli, R., Sebastián, J.M. (2020). Unified Passivity-Based Visual Control for Moving Object Tracking. In: Sergiyenko, O., Flores-Fuentes, W., Mercorelli, P. (eds) Machine Vision and Navigation. Springer, Cham. https://doi.org/10.1007/978-3-030-22587-2_12

  • DOI: https://doi.org/10.1007/978-3-030-22587-2_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-22586-5

  • Online ISBN: 978-3-030-22587-2
