
Control and Machine Intelligence for System Autonomy

Published in: Journal of Intelligent & Robotic Systems

Abstract

Autonomous systems evolve from control systems by adding functionalities that increase the level of system autonomy. It is very important to research in the field that autonomy be well defined, and so in the present paper a precise, useful definition of autonomy is introduced and discussed. Autonomy is defined as the ability of the system to attain a set of goals under a set of uncertainties; this leads to the notion of degrees or levels of autonomy. The quest for autonomy in engineered systems throughout the centuries is noted, connections to research work of 30 years ago are made, and a hierarchical functional architecture for autonomous systems, together with the needed functionalities, is outlined. Adaptation and learning, which are among the most important functions in achieving high levels of autonomy, are then highlighted, and recent research contributions are briefly discussed.


References

  1. Antsaklis, P.: Control systems and the quest for autonomy, Editorial. IEEE Trans. Autom. Control 62(3), 1013–1016 (2017)


  2. Antsaklis, P.J.: Defining intelligent control. IEEE Control Systems Society Report of the Task Force on Intelligent Control. IEEE Control. Syst. 14(3), 4–5, 58–66 (1994)


  3. Antsaklis, P.J.: On intelligent control: report of the IEEE CSS task force on intelligent control. Technical Report of the Interdisciplinary Studies of Intelligent Systems Group. University of Notre Dame 94, 001 (1994)


  4. Antsaklis, P.J.: Intelligent learning control. Introduction to Special Issue, IEEE Control. Syst. 15(3), 5–7 (1995)


  5. Antsaklis, P.J.: Intelligent control. Wiley Encyclopedia of Electrical and Electronics Engineering (1999)

  6. Antsaklis, P.J.: The quest for autonomy revisited. Technical Report of the Interdisciplinary Studies of Intelligent Systems Group, University of Notre Dame 11, 004 (2011)


  7. Antsaklis, P.J., Passino, K.: Autonomous control systems: Architecture and concepts for future space vehicles. Final Report, Contract 957856, Jet Propulsion Laboratory (1987)

  8. Antsaklis, P.J., Passino, K.M.: Introduction to intelligent control systems with high degrees of autonomy. Kluwer Academic Publishers (1993)

  9. Antsaklis, P.J., Passino, K.M., Wang, S.: Towards intelligent autonomous control systems: architecture and fundamental issues. J. Intell. Robot. Syst. 1(4), 315–342 (1989)


  10. Antsaklis, P.J., Passino, K.M., Wang, S.: An introduction to autonomous control systems. IEEE Control. Syst. 11(4), 5–13 (1991)


  11. Åström, K.J., Wittenmark, B.: Adaptive control. Courier Corporation (2013)

  12. Åström, K.J., Albertos, P., Blanke, M., Isidori, A., Schaufelberger, W., Sanz, R.: Control of complex systems. Springer, Berlin (2011)


  13. Barto, A.G., Bradtke, S.J., Singh, S.P.: Learning to act using real-time dynamic programming. Artif. Intell. 72(1-2), 81–138 (1995)


  14. Bertsekas, D.P.: Dynamic programming and optimal control, vol. I. Athena Scientific, Belmont (1995)


  15. Benard, N., Pons-Prat, J., Periaux, J., Bugeda, G., Bonnet, J.P., Moreau, E.: Multi-input genetic algorithm for experimental optimization of the reattachment downstream of a backward-facing-step with surface plasma actuator. In: 46th AIAA Plasmadynamics and lasers conference, pp. 2957–2980 (2015)

  16. Bertsekas, D.P., Tsitsiklis, J.N.: Neuro-dynamic programming: an overview. In: Proceedings of the 34th IEEE Conference on Decision and Control, IEEE, vol. 1, pp. 560–564 (1995)

  17. Bukkems, B., Kostic, D., De Jager, B., Steinbuch, M.: Learning-based identification and iterative learning control of direct-drive robots. IEEE Trans. Control Syst. Technol. 13(4), 537–549 (2005)


  18. Chi, R., Liu, X., Zhang, R., Hou, Z., Huang, B.: Constrained data-driven optimal iterative learning control. J. Process. Control 55, 10–29 (2017)


  19. Chowdhary, G.V., Johnson, E.N.: Theory and flight-test validation of a concurrent-learning adaptive controller. J. Guid. Control. Dyn. 34(2), 592–607 (2011)


  20. Dai, S.L., Wang, C., Wang, M.: Dynamic learning from adaptive neural network control of a class of nonaffine nonlinear systems. IEEE Transactions on Neural Networks and Learning Systems 25(1), 111–123 (2014)


  21. Doya, K.: Reinforcement learning in continuous time and space. Neural Comput. 12(1), 219–245 (2000)


  22. Dracopoulos, D.C.: Genetic algorithms and genetic programming for control. In: Evolutionary algorithms in engineering applications, pp. 329–343. Springer (1997)

  23. Feng, L., Zhang, K., Chai, Y., Yang, Z., Xu, S.: Observer-based fault estimators using iterative learning scheme for linear time-delay systems with intermittent faults. Asian J. Control 19(6), 1991–2008 (2017)


  24. Foroutan, S.A., Salmasi, F.R.: Detection of false data injection attacks against state estimation in smart grids based on a mixture Gaussian distribution learning method. IET Cyber-Physical Systems: Theory & Applications 2(4), 161–171 (2017)


  25. Fu, K.S.: Learning control systems–review and outlook. IEEE Trans. Autom. Control 15(2), 210–221 (1970)


  26. Goebel, G., Allgöwer, F.: Semi-explicit MPC based on subspace clustering. Automatica 83, 309–316 (2017)


  27. Hein, D., Hentschel, A., Runkler, T., Udluft, S.: Particle swarm optimization for generating interpretable fuzzy reinforcement learning policies. Eng. Appl. Artif. Intel. 65, 87–98 (2017)


  28. Hu, J., Zhou, M., Li, X., Xu, Z.: Online model regression for nonlinear time-varying manufacturing systems. Automatica 78, 163–173 (2017)


  29. Kamalapurkar, R., Reish, B., Chowdhary, G., Dixon, W.E.: Concurrent learning for parameter estimation using dynamic state-derivative estimators. IEEE Trans. Autom. Control 62(7), 3594–3601 (2017)


  30. Kiumarsi, B., Lewis, F.L., Jiang, Z.P.: H∞ control of linear discrete-time systems: Off-policy reinforcement learning. Automatica 78, 144–152 (2017)


  31. Kiumarsi, B., Vamvoudakis, K.G., Modares, H., Lewis, F.L.: Optimal and autonomous control using reinforcement learning: A survey. IEEE Transactions on Neural Networks and Learning Systems (2017)

  32. Kokar, M.: Machine learning in a dynamic world. In: Proceedings of IEEE international symposium on intelligent control, pp. 500–507. IEEE (1988)

  33. Lagoudakis, M.G., Parr, R., Littman, M.L.: Least-squares methods in reinforcement learning for control. In: Hellenic conference on artificial intelligence, pp. 249–260. Springer (2002)

  34. Lee, C., Kim, J., Babcock, D., Goodman, R.: Application of neural networks to turbulence control for drag reduction. Phys. Fluids 9(6), 1740–1747 (1997)


  35. Lewis, F.L., Vrabie, D., Syrmos, V.L.: Optimal control. Wiley, Hoboken (2012)


  36. Lewis, F.L., Vrabie, D., Vamvoudakis, K.G.: Reinforcement learning and feedback control: using natural decision methods to design optimal adaptive controllers. IEEE Control. Syst. 32(6), 76–105 (2012)


  37. Michalewicz, Z., Janikow, C.Z., Krawczyk, J.B.: A modified genetic algorithm for optimal control problems. Computers & Mathematics with Applications 23(12), 83–94 (1992)


  38. Michalski, R.S., Carbonell, J.G., Mitchell, T.M.: Machine learning: an artificial intelligence approach. Springer, Berlin (2013)


  39. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015)


  40. Mohri, M., Rostamizadeh, A., Talwalkar, A.: Foundations of machine learning. MIT Press, Cambridge (2012)


  41. Nageshrao, S.P., Lopes, G.A., Jeltsema, D., Babuška, R.: Port-Hamiltonian systems in adaptive and learning control: a survey. IEEE Trans. Autom. Control 61(5), 1223–1238 (2016)


  42. Nedić, A., Olshevsky, A., Uribe, C.A.: Fast convergence rates for distributed non-Bayesian learning. IEEE Trans. Autom. Control 62(11), 5538–5553 (2017)


  43. Plaat, A., Kosters, W., van den Herik, J.: Computers and games. Springer, Berlin (2017)


  44. Sklansky, J.: Learning systems for automatic control. IEEE Trans. Autom. Control 11(1), 6–19 (1966)


  45. Sutton, R.S., Barto, A.G.: Reinforcement learning: an introduction. MIT Press, Cambridge (1998)


  46. Sutton, R.S., Barto, A.G., Williams, R.J.: Reinforcement learning is direct adaptive optimal control. IEEE Control. Syst. 12(2), 19–22 (1992)


  47. Tsypkin, Y.: Self-learning–what is it? IEEE Trans. Autom. Control 13(6), 608–612 (1968)


  48. Vrabie, D., Lewis, F.: Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems. Neural Netw. 22(3), 237–246 (2009)


  49. Vrabie, D., Vamvoudakis, K.G., Lewis, F.L.: Optimal adaptive control and differential games by reinforcement learning principles, vol. 2. IET (2013)

  50. Werbos, P.J.: Neural networks for control and system identification. In: Proceedings of the 28th IEEE conference on decision and control, pp. 260–265. IEEE (1989)

  51. Kraft, L.G., Campagna, D.: A summary comparison of CMAC neural network and traditional adaptive control systems. In: Miller, W.T., Sutton, R.S., Werbos, P.J. (eds.) Neural Networks for Control. MIT Press, Cambridge (1990)

  52. Xie, J., Wan, Y., Mills, K., Filliben, J.J., Lewis, F.: A scalable sampling method to high-dimensional uncertainties for optimal and reinforcement learning-based controls. IEEE Control Systems Letters 1(1), 98–103 (2017)


  53. Yang, C., Teng, T., Xu, B., Li, Z., Na, J., Su, C.Y.: Global adaptive tracking control of robot manipulators using neural networks with finite-time learning convergence. Int. J. Control. Autom. Syst. 15(4), 1916–1924 (2017)


  54. Yang, X., Ruan, X.: Reinforced gradient-type iterative learning control for discrete linear time-invariant systems with parameter uncertainties and external noises. IMA J. Math. Control. Inf. 34(4), 1117–1133 (2016)


  55. Yang, X., He, H., Liu, D., Zhu, Y.: Adaptive dynamic programming for robust neural control of unknown continuous-time non-linear systems. IET Control Theory Appl. 11(14), 2307–2316 (2017)



Author information

Corresponding author

Correspondence to Panos J. Antsaklis.


Appendix: From the CSS Task Force on Intelligent Control Report [3]


In May 1993, a task force was created at the invitation of the Technical Committee on Intelligent Control of the IEEE Control Systems Society to look into the area of Intelligent Control and define what is meant by the term. Its findings are aimed mainly at serving the needs of the Control Systems Society; hence the task force did not attempt to address the issue of intelligence in its generality, but instead concentrated on deriving working characterizations of Intelligent Control. Many of the findings, however, may apply to other disciplines as well.

The charge to the task force was to characterize intelligent control systems, to be able to recognize them and distinguish them from conventional control systems; to clarify the role of control in intelligent systems and to help identify problems where intelligent control methods appear to be the only viable avenues.

In accomplishing these goals, the emphasis was on working definitions and useful characterizations rather than aphorisms. It was accepted early on that more than one definition of an intelligent system may be necessary, depending on the view taken and the problems addressed.

In the following a brief description of the findings is given. The relation to autonomy is that intelligent control methods are used to achieve high levels of autonomy in systems.

A.1 Intelligence and Intelligent Control

It is appropriate to comment briefly on the meaning of the word intelligent in “intelligent control.” Note that a precise definition of “intelligence” has eluded mankind for thousands of years. More recently, the issue has been addressed by disciplines such as psychology, philosophy and biology and, of course, by Artificial Intelligence (AI); note that AI is defined here as the study of mental faculties through the use of computational models. No consensus has emerged as to what constitutes intelligence, and the controversy surrounding the widely used IQ tests also points to the fact that we are far from having understood these issues.

In this Appendix, we discuss several characterizations of intelligent systems that appear to be useful when attempting to address complex control problems. Intelligent systems can be seen as machines that emulate human mental faculties, such as adaptation and learning, planning under large uncertainty and coping with large amounts of data, in order to effectively control complex processes; this is the justification for the use of the term intelligent in intelligent control, since these mental faculties are considered important attributes of human intelligence. An alternative term discussed in this article is “autonomous (intelligent) control”; it emphasizes the fact that an intelligent controller typically aims to attain higher degrees of autonomy in accomplishing, and even setting, control goals, rather than stressing the (intelligent) methodology that achieves those goals. We should keep in mind that “intelligent control” is only a name that appears to be useful today (this comment was made over 20 years ago and has proven correct). In the same way that the “modern control” of the 1960s has become “conventional (or traditional) control” as it entered the mainstream, what is called intelligent control today may be called simply “control” in the not-so-distant future (which is exactly what has happened). More important than the terminology are the concepts and the methodology, and whether or not the control area and intelligent control will be able to meet the ever-increasing control needs of our technological society.

A.2 Defining Intelligent Control Systems

Intelligent systems can be characterized in a number of ways and along a number of dimensions. There are certain attributes of intelligent systems that are of particular interest in the control of systems [2, 3]. We begin with a general characterization of intelligent systems: An intelligent system has the ability to act appropriately in an uncertain environment, where an appropriate action is one that increases the probability of success, and success is the achievement of behavioral sub-goals that support the system’s ultimate goal. In order for a man-made intelligent system to act appropriately, it may emulate functions of living creatures and, ultimately, human mental faculties.

An intelligent system can be characterized along a number of dimensions, and there are degrees or levels of intelligence that can be measured along each of them. At a minimum, intelligence requires the ability to sense the environment, to make decisions and to control action. Higher levels of intelligence may include the ability to recognize objects and events, to represent knowledge in a world model, and to reason about and plan for the future. In advanced forms, intelligence provides the capacity to perceive and understand, to choose wisely, and to act successfully under a large variety of circumstances so as to survive and prosper in a complex and often hostile environment. Intelligence can be observed to grow and evolve, both through growth in computational power and through accumulation of knowledge of how to sense, decide and act in a complex and changing world.

The above characterization of an intelligent system is rather general; according to it, a great number of systems can be considered intelligent. In fact, by this definition even a thermostat may be considered an intelligent system, although one of a low level of intelligence. It is common, however, to call a system intelligent only when it has a rather high level of intelligence.

There exist a number of alternative but related definitions of intelligent systems that emphasize systems with high degrees of intelligence. For example, the following definition emphasizes the fact that the system in question processes information, and it focuses on man-made systems and intelligent machines: Machine intelligence is the process of analyzing, organizing and converting data into knowledge, where (machine) knowledge is defined to be the structured information acquired and applied to remove ignorance or uncertainty about a specific task pertaining to the intelligent machine. This definition relates to Saridis’ principle of increasing precision with decreasing intelligence.

Next, an intelligent system can be characterized by its ability to assign sub-goals and control actions dynamically, in an internal or autonomous fashion: Many adaptive or learning control systems can be thought of as designing a control law to meet well-defined control objectives. This activity represents the system’s attempt to organize or order its “knowledge” of its own dynamical behavior so as to meet a control objective. The organization of knowledge can be seen as one important attribute of intelligence; if this organization is done autonomously by the system, then intelligence becomes a property of the system rather than of the system’s designer. This implies that systems which autonomously (self-)organize controllers with respect to an internally realized organizational principle are intelligent control systems.

A procedural characterization of intelligent systems is given next: Intelligence is a property of the system which emerges when the procedures of focusing attention, combinatorial search, and generalization are applied to the input information in order to produce the output. Once a string of the above procedures is defined, the other levels of resolution of the structure of intelligence grow as a result of the recursion; having only a one-level structure leads to the rudimentary intelligence implicit in a thermostat, or to a variable-structure sliding-mode controller.
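The notion of a system autonomously organizing “knowledge” of its own dynamical behavior can be made concrete with a minimal reinforcement-learning sketch. This is a toy example for illustration only, not from the report: the five-state chain, the rewards and the learning gains are all invented, and tabular Q-learning stands in for the general idea.

```python
import random

# Toy illustration: an agent on a 5-state chain (goal at state 4) organizes
# "knowledge" of its own dynamics into a Q-table purely from interaction,
# without ever being given a model of the chain.
random.seed(0)

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # move left, move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2       # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection over the current Q estimates
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # One-step temporal-difference (Q-learning) update
        target = r + (0.0 if s2 == GOAL else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# The learned greedy policy prefers "right" (index 1) in every non-goal state.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)]
print(policy)
```

After training, the greedy policy selects the goal-directed action in every state; the Q-table is the self-organized “knowledge” of the system’s behavior, built by the system itself rather than supplied by its designer.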

A.3 Control and Intelligent Systems

The concepts of intelligence and control are closely related, and the term “intelligent control” has a unique and distinguishable meaning. An intelligent system must define and use goals. Control is then required to move the system toward these goals and to define such goals; consequently, any intelligent system will be a control system. Conversely, intelligence is necessary to provide desirable functioning of systems under changing conditions, and it is necessary to achieve a high degree of autonomous behavior in a control system. Since control is an essential part of any intelligent system, the term “intelligent control systems” is sometimes used in the engineering literature instead of “intelligent systems” or “intelligent machines”; the term simply stresses the control aspect of the intelligent system.

One more alternative characterization of intelligent (control) systems follows. According to this view, a control system consists of data structures or objects (the plant models and the control goals) and processing units or methods (the control laws): An intelligent control system is designed so that it can autonomously achieve a high-level goal while its components, control goals, plant models and control laws are not completely defined, either because they were not known at design time or because they changed unexpectedly.

A.4 Characteristics or Dimensions of Intelligent Systems

There are several essential properties present in different degrees in intelligent systems. One can perceive them as intelligent system characteristics or dimensions along which different degrees or levels of intelligence can be measured. Below we discuss three such characteristics that appear to be rather fundamental in intelligent control systems.

Adaptation and Learning

The ability to adapt to changing conditions is necessary in an intelligent system. Although adaptation does not necessarily require the ability to learn, for systems to be able to adapt to a wide variety of unexpected changes, learning is essential. So, the ability to learn is an important characteristic of (highly) intelligent systems.

Autonomy and Intelligence

Autonomy in setting and achieving goals is an important characteristic of intelligent control systems. When a system has the ability to act appropriately in an uncertain environment for extended periods of time without external intervention it is considered to be highly autonomous. There are degrees of autonomy; an adaptive control system can be considered as a system of higher autonomy than a control system with fixed controllers, as it can cope with greater uncertainty than a fixed feedback controller. Although for low autonomy no intelligence (or “low” intelligence) is necessary, for high degrees of autonomy, intelligence in the system (or “high” degrees of intelligence) is essential.
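The point that an adaptive controller copes with greater uncertainty than a fixed one, and so attains a higher degree of autonomy, can be sketched with a toy example. The scalar plant, the true and nominal parameter values, and the normalized-gradient update below are invented for illustration and are not taken from the paper.

```python
# Toy illustration of degrees of autonomy: a scalar plant x[k+1] = a*x[k] + u[k]
# whose true parameter a is unknown. A fixed controller designed for a wrong
# nominal value fails, while an adaptive controller that re-estimates a succeeds.
a_true, a_nom = 1.8, 0.5          # true (unstable) and assumed plant parameters

# Fixed controller: u = -a_nom * x (deadbeat only if a_nom were correct)
x = 1.0
for _ in range(20):
    x = a_true * x + (-a_nom * x)  # closed loop: x[k+1] = (a_true - a_nom) x[k]
x_fixed = x

# Adaptive controller: u = -a_hat * x, with a normalized-gradient update of a_hat
x, a_hat = 1.0, a_nom
for _ in range(20):
    u = -a_hat * x
    x_next = a_true * x + u
    e = x_next - (a_hat * x + u)          # prediction error of the internal model
    a_hat += e * x / (1.0 + x * x)        # normalized gradient step
    x = x_next
x_adaptive = x

print(abs(x_fixed), abs(x_adaptive))
```

With the wrong nominal gain the closed loop is x[k+1] = 1.3 x[k] and the state diverges; the adaptive loop shrinks both the state and the parameter error, illustrating how adaptation buys autonomy with respect to plant uncertainty.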

Structures and Hierarchies

In order to cope with complexity, an intelligent system must have an appropriate functional architecture or structure for efficient analysis and evaluation of control strategies. This structure should provide a mechanism to build levels of abstraction (resolution, granularity), or at least some form of partial ordering, so as to reduce complexity. An approach to the study of intelligent machines involving entropy (due to Saridis) emphasizes such efficient computational structures. Hierarchies (which may be approximate, localized or combined in heterarchies) that are able to adapt may serve as primary vehicles for such structures to cope with complexity. The term “hierarchies” refers to functional hierarchies, or to hierarchies of range and resolution along spatial or temporal dimensions, and does not necessarily imply hierarchical hardware; some of these structures may be hardwired in part. To cope with changing circumstances, the ability to learn is essential, so that these structures can adapt to significant, unanticipated changes.
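A deliberately minimal sketch of such a functional hierarchy (all names and numbers below are invented for illustration): a higher level issues subgoals at coarse resolution, while a lower level regulates the plant at fine resolution.

```python
# Minimal two-level functional hierarchy (invented toy example):
# the planner works at coarse resolution (a sequence of setpoints), while the
# regulator works at fine resolution (step-by-step control of x[k+1] = x[k] + u[k]).
waypoints = [1.0, 2.0, 3.0]   # subgoals issued by the higher (planning) level
kp, tol = 0.5, 0.05           # low-level proportional gain and switching tolerance

x, goal_idx = 0.0, 0
for _ in range(200):
    ref = waypoints[goal_idx]
    # Lower level: proportional regulator with actuator saturation
    u = max(-0.5, min(0.5, kp * (ref - x)))
    x = x + u
    # Higher level: advance to the next subgoal once the current one is reached
    if abs(x - ref) < tol and goal_idx < len(waypoints) - 1:
        goal_idx += 1

print(round(x, 3))
```

Each level sees the problem at its own resolution: the planner never reasons about individual control steps, and the regulator never reasons about the overall mission, which is exactly the complexity reduction the functional hierarchy is meant to provide.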

In view of the above, a working characterization of intelligent systems (or of (highly) intelligent (control) systems or machines) that captures the essential characteristics present in any such system is: An intelligent system must be highly adaptable to significant unanticipated changes, and so learning is essential. It must exhibit a high degree of autonomy in dealing with changes. It must be able to deal with significant complexity, and this leads to certain types of functional architectures, such as hierarchies.


About this article


Cite this article

Antsaklis, P.J., Rahnama, A. Control and Machine Intelligence for System Autonomy. J Intell Robot Syst 91, 23–34 (2018). https://doi.org/10.1007/s10846-018-0832-6

