Abstract
One fruitful way to approach the study of general control systems, or indeed systems in general, is to regard these systems as producing specific responses, or outputs, when confronted with various permissible environmental situations, or inputs. Until rather recently, this approach has dominated the engineering and scientific literature on control theory. In the present chapter, some of the important concepts from this literature are briefly discussed, and in the next chapter it is shown how these concepts can be applied directly to the study of certain biological control mechanisms.
Notes to Chapter 8
If this equation is to represent a physical system, then generally one can assume a_0 ≠ 0. For, in the absence of external forces (i.e. for F(t) = 0), one wishes the system to remain at rest (i.e. y(t) = 0 must be a solution of the equation). If a_0 = 0, then the system can maintain a constant state of activity y(t) = K ≠ 0 in the absence of external forces.
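This condition can be checked directly: for a constant trial solution y(t) = K, all derivatives vanish and only the a_0 y term survives. A minimal sketch (illustrative second-order coefficients; the function name is ours):

```python
def residual_for_constant_state(a0, a1, a2, K):
    """Residual of the homogeneous equation a2*y'' + a1*y' + a0*y = 0
    for the constant trial solution y(t) = K (so y' = y'' = 0)."""
    return a2 * 0.0 + a1 * 0.0 + a0 * K

# With a0 = 0, every constant K solves the homogeneous equation,
# so the system can rest at any activity level:
assert residual_for_constant_state(0.0, 3.0, 1.0, K=5.0) == 0.0

# With a0 != 0, only K = 0 is an equilibrium:
assert residual_for_constant_state(2.0, 3.0, 1.0, K=5.0) != 0.0
assert residual_for_constant_state(2.0, 3.0, 1.0, K=0.0) == 0.0
```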
There are numerous texts on the Laplace transform, ranging from purely mathematical studies to those geared explicitly for engineering applications. The reader might consult D. V. Widder, The Laplace Transform, Princeton University Press, Princeton, 1941, or R. V. Churchill, Operational Mathematics (2nd edn), McGraw-Hill, New York, 1958.
As may be recalled from Section 6.3, asymptotic stability in the sense of Lyapunov is concerned with the null solution y(t) = 0 of (8.1) when F(t) = 0 (where, according to the discussion of the previous note, a_0 ≠ 0). Now the characteristic polynomial a_n s^n + a_{n-1} s^{n-1} + … + a_1 s + a_0 can be written in the form a_n (s − s_1)(s − s_2) … (s − s_n), where the s_i represent the roots of the equation a_n s^n + … + a_1 s + a_0 = 0. For the moment one may assume these roots to be all distinct. It is easy to verify that the general solution of (8.1), when F(t) = 0, is given by y(t) = A_1 e^{s_1 t} + A_2 e^{s_2 t} + … + A_n e^{s_n t} (for a derivation of this solution, see e.g. Forsyth, loc. cit., p. 66), where the A_i are arbitrary constants. If an impulse is applied to the system at time t = 0 and then removed (thus specifying the initial condition y(0) = K), the system will return to equilibrium if, and only if, the general solution y(t) approaches 0 as t → ∞. In other words, each exponential term of the general solution must decay to 0. In general, the roots s_i are complex; suppose that s_i = a + bi, with a, b real numbers; then e^{s_i t} = e^{at} e^{ibt}. Now e^{ibt} represents a harmonic oscillation, and in particular |e^{ibt}| = 1 for all t. Thus the asymptotic behaviour of the term e^{s_i t} depends entirely on the real part a of the root s_i. If a > 0, then e^{at} tends to infinity as t increases, and stability is impossible. If a = 0, then e^{s_i t} will oscillate indefinitely, and asymptotic stability will, in general, be impossible. Finally, if a < 0, then e^{at} decays to 0 as t increases. Hence asymptotic stability of the trivial solution will obtain only if the real part of each of the roots s_i is negative.
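The criterion in this note — every root of the characteristic polynomial must have negative real part — can be tested numerically. A minimal sketch using NumPy (the helper name is ours; the coefficient convention follows `numpy.roots`, highest degree first):

```python
import numpy as np

def is_asymptotically_stable(coeffs):
    """coeffs = [a_n, ..., a_1, a_0]. The trivial solution is asymptotically
    stable iff every root of the characteristic polynomial has Re(s) < 0."""
    roots = np.roots(coeffs)
    return bool(np.all(roots.real < 0))

# y'' + 3y' + 2y = 0 has roots s = -1, -2: both decay, hence stable.
assert is_asymptotically_stable([1, 3, 2])

# y'' - y = 0 has roots s = +1, -1: the e^{t} term grows, hence unstable.
assert not is_asymptotically_stable([1, 0, -1])

# y'' + y = 0 has roots s = +i, -i: pure oscillation, so the solution
# never decays and asymptotic stability fails.
assert not is_asymptotically_stable([1, 0, 1])
```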
A discussion of the relation of the lags and the gain of a linear system to the stability of the system is not difficult, but since it is only alluded to in the discussion of the pupillary servomechanism in Chapter 9 (Section 9.2), it seems preferable to refer the reader to the engineering literature at this point. An excellent treatment may be found, for example, in W. R. Ahrendt and J. F. Taplin, Automatic Feedback Control, McGraw-Hill, New York, 1951.
This result embodies one of the most important properties of linear differential equations and the systems which these equations describe. The essential fact in the above analysis is simply that, if y_1(t), y_2(t) are distinct solutions of a linear differential equation, then any linear combination Ay_1(t) + By_2(t) is again a solution. In terms of the systems described by such an equation, this means that the response of the system to successive inputs is just the algebraic sum of the responses of the system to each of the inputs separately (taking account, of course, of the phasing of the inputs). This fact is called the Principle of Superposition, and the convolution integral (8.11) is nothing but a general statement of that principle. It is in this sense that the integral relation (8.11) and the differential equation (8.1) are equivalent statements, and characterize the same system. However, it is not always possible to pass backwards from the integral expression (8.11) to a corresponding differential equation. The assertion that every continuous function (actually, every integrable function) may be regarded as a limit of a sequence of step functions may be found, e.g., in F. Riesz and B. v. Sz. Nagy, Functional Analysis, Ungar, New York, 1956, p. 30 et seq.
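The Principle of Superposition can be illustrated by discretizing the convolution integral (8.11). A minimal sketch (the first-order impulse response e^{−t} and the particular inputs are illustrative choices of ours, not taken from the text):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)  # impulse response of a hypothetical first-order linear system

def response(F):
    """Discrete approximation of the convolution integral (8.11):
    y(t) = integral of h(t - tau) F(tau) d(tau)."""
    return np.convolve(F, h)[:len(t)] * dt

F1 = np.sin(2 * np.pi * t)        # harmonic input
F2 = np.where(t > 1.0, 1.0, 0.0)  # step input switched on at t = 1

# Superposition: the response to the sum of the inputs equals the
# algebraic sum of the responses to each input taken separately.
assert np.allclose(response(F1 + F2), response(F1) + response(F2))
```

The same check fails for any nonlinear system, which is precisely why the convolution representation (8.11) is available only in the linear case.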
See Section 10.4, and especially the reference in Note 3 thereto, for a discussion of how such an inversion may be performed (at least partially) within a restricted class of control systems.
A most interesting text along these lines is H. F. Olson, Dynamical Analogies (2nd edn), van Nostrand, New York, 1958.
© 1967 Springer Science+Business Media New York
Rosen, R. (1967). Input-Output Systems. In: Optimality Principles in Biology. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-6419-9_8
Print ISBN: 978-1-4899-6207-2
Online ISBN: 978-1-4899-6419-9