Laws of Information Which Govern Systems
Information theory was created for the purpose of studying the communication of messages from one point to another, and since its appearance [14], its focus has remained on the question, “How can the constraint between the two variables X (message sent) and Y (message received) be measured and maximized?” Although the theory was generalized to N dimensions [10], [2], and its relation to the analysis of variance noted [9], not much use seems to have been made of the result, perhaps in part because the descriptors “N-dimensional Information Theory” or “Uncertainty Analysis” did not adequately represent what can actually be seen as the analysis of constraints in multivariable systems. In any statistically analyzable system of several variables interacting in a lively way, some variables (or sets of them) exert effects on others. These effects are reflected statistically as non-independence of the variables involved, and it is this deviation from independence which we indicate by the term “constraint.” We prefer this term to the term “dependence” because the latter suggests dependence of X on Y, while the former is neutral as to direction. To the extent that the variables are not independent, they are “in communication” with one another, and information theory can be used to analyze the non-independence. In addition, the fluctuation of values taken by any variable can be viewed as a message it sends, a flow of information about itself to all other parts of the system which are “listening.” The view of systems as networks of information transfer leads to quantitative conclusions about system behavior and structure which are somewhat novel and of wide applicability.
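The constraint between two variables described above is what Shannon's theory measures as the transmission (mutual information) T(X:Y) = H(X) + H(Y) − H(X,Y), which is zero exactly when X and Y are independent. A minimal sketch of that measure is below; the function names `entropy` and `transmission` are illustrative, not from the paper.

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def transmission(joint):
    """T(X:Y) = H(X) + H(Y) - H(X,Y): the constraint (deviation from
    independence) between X and Y, in bits.
    `joint` maps (x, y) value pairs to their joint probabilities."""
    px, py = {}, {}
    for (x, y), p in joint.items():          # marginalize the joint table
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px.values()) + entropy(py.values()) - entropy(joint.values())

# X and Y perfectly correlated: constraint equals H(X) = 1 bit.
dependent = {(0, 0): 0.5, (1, 1): 0.5}

# X and Y independent fair coins: constraint is 0 bits.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
```

Here `transmission(dependent)` is 1.0 and `transmission(independent)` is 0.0, matching the text's identification of constraint with deviation from independence.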
Keywords: Transmission Rate; Channel Capacity; Output Rate; Deterministic System; Conditional Entropy
- 1. W. R. Ashby, Introduction to Cybernetics. London: Chapman and Hall, 1956.
- 2. W. R. Ashby, “Measuring the internal information exchange in a system,” Cybernetica, vol. 8, pp. 5–22, 1965.
- 3. W. R. Ashby, “Two tables of identities governing information flows within large systems,” Commun. Amer. Soc. Cybern., vol. 1, no. 2, pp. 2–7, 1969.
- 4. W. R. Ashby, “Information flows within co-ordinated systems,” in Progress of Cybernetics: Proc. 1st Int. Cong. Cybernetics, J. Rose, Ed. London: Gordon and Breach, 1970, pp. 57–64.
- 5. R. C. Conant, “The information transfer required in regulatory processes,” IEEE Trans. Syst. Sci. Cybern., vol. SSC-5, no. 4, pp. 334–338, 1969.
- 6. R. C. Conant, “Detecting subsystems of a complex system,” IEEE Trans. Syst., Man, Cybern., vol. SMC-2, no. 4, pp. 550–553, 1972.
- 7. R. C. Conant, “Information flows in hierarchical systems,” Int. J. Gen. Syst., vol. 1, no. 1, pp. 9–18, 1974.
- 11. G. A. Miller, “On the bias of information estimates,” in Information Theory in Psychology, H. Quastler, Ed. Glencoe, IL: The Free Press, 1955, pp. 95–100.
- 12. M. S. Rogers and B. F. Green, “Moments of sample information when the alternatives are equally likely,” in Information Theory in Psychology, H. Quastler, Ed. Glencoe, IL: The Free Press, 1955, pp. 101–107.
- 13. H. Simon, “The architecture of complexity,” in Proc. Amer. Phil. Soc., vol. 106, pp. 467–482, 1962. Reprinted in Gen. Syst., vol. 10, pp. 63–76, 1965.
- 14. C. E. Shannon and W. Weaver, The Mathematical Theory of Communication. Urbana, IL: Univ. Illinois, 1949.
- 15. C. E. Shannon, “Prediction and entropy of printed English,” Bell Syst. Tech. J., vol. 30, pp. 50–64, 1951.