Abstract
The focus in the previous chapter was on the representation of finite groups by matrices and linear transformations (over the field of real or complex numbers). The related theory is very rich in content, and stands on its own as an area of study in mathematics. One could very justifiably ask at this point: What connections could group representation theory have with signal processing? The answer lies in relating the theory to what has been said in Chapters 1 and 4 about signals and systems.
Recall that, for an abelian group, all its irreducible representations on the complex number field are scalar functions, i.e., functions from the group to complex numbers under multiplication. For nonabelian groups, however, the irreps cannot all be scalars, simply because multiplication of numbers is necessarily commutative. The next possible option then is to consider matrices or, at a more abstract level, linear transformations on a vector space, as representations, and thus bypass the commutativity constraint. This is indeed what representation theory does. It deals with the nature and structure of such representations.
A high point of this theory is the result that every representation of a group is a direct sum of its irreducible representations. Equivalently, it means that the underlying representation space can be decomposed into a direct sum of subspaces of relatively small dimensions such that (a) each of these subspaces is an invariant subspace under each of the linear transformations representing the group and (b) each one of the subspaces is as small as possible. Representation theory supplies the algebraic tools for constructing such decompositions.
As discussed in Chapter 4, in a wide variety of signal processing problems, our starting point consists of a vector space of signals, together with a class of systems (processors) whose symmetries are characterized by a group of linear transformations on this space. A common objective in these cases is to identify a suitable decomposition of the signal space into subspaces that can be separately handled by the processing systems. In more precise terms, this amounts to the objective set out in Section ?? on page ??, which I reproduce here for convenience:
Given a vector space \(V\) and a group \(P\) of linear transformations on \(V\), identify a decomposition \(V = V_1 \oplus V_2 \oplus \cdots \oplus V_k\) such that the subspaces \(V_i\), \(i = 1,\ldots,k\), are as small as possible, and are each invariant under every member of \(P\). ♠
It is in this context that the reduction techniques of group representation theory become directly relevant in signal processing. We first consider the case in which signals are functions on a group.
6.1 Signals as Functions on Groups
Consider the real line, \(\mathbb{R}\), as the domain of functions that represent continuous-time signals. A crucial operation on functions in this context is that of translation, or shift, in time, as discussed in Section ??. Thus, for a function \(f : \mathbb{R} \rightarrow \mathbb{R}\), we talk of its translates, or shifted versions, \(f_{\tau}\), defined by \(f_{\tau}(t) = f(t - \tau)\). Furthermore, scaling of signals in time is of no primary concern in the theory of LTI systems, i.e., for a function \(f : \mathbb{R} \rightarrow \mathbb{R}\), we are not interested here in the functions \(g\), \(g(t) := f(\tau t)\), for \(\tau \in \mathbb{R}\).
On the whole, we can say that we are concerned here with \(\mathbb{R}\) not merely as a set but as one with additional structure. Admittedly, we are not interested here in the full structure of the real line, which is that of a field, with addition and multiplication as its two binary operations. We do, however, take into account the fact that, with respect to the operation of addition, it has the structure of a group. In the specific case of continuous-time signals, it is the group \((\mathbb{R},+)\). In the discrete-time case, it is the group \((\mathbb{Z},+)\). In the discrete finite case, one of the choices is the group \({\mathbb{Z}}_{n}\) under addition modulo n. When we talk of a translate of a function, we implicitly assume such a group structure for the index set on which the function is defined. This is the motivation for thinking of signals very generally as functions on a group, and of systems for processing such signals as linear transformations with symmetries characterized by an isomorphic group of translation operators (see Note 1).
Going back to Remark 6, let the signal space V be a vector space of functions from a group G to \(\mathbb{C}\) (or \(\mathbb{R}\)), and let P be the group of associated translation operators isomorphic to G. Then the subspaces \(V_i\) are the smallest possible subspaces of V, each of which is invariant under every member of P. A signal f in V is then uniquely decomposable into components belonging to the subspaces \(V_i\), and the action of every member of P on the components can be determined separately. These components are the so-called harmonics (or spectral components) of the signal f.
Remark 7
In the special case when the group G, and consequently the associated isomorphic group P of translation operators, is abelian (commutative), the subspaces \(V_i\) are all of dimension one and their basis vectors coincide with the common eigenvectors of the members of P. Thus, if the group is \(\mathbb{R}\) under addition (as in the case of continuous-time signals), then the complex exponentials emerge as the harmonic components of signals. For electrical engineers, the term “harmonics” typically means sinusoids or complex exponentials. Within the group-theoretic framework, the term has acquired a much broader interpretation. The spirit is nonetheless the same throughout. ♠
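For the finite cyclic group \({\mathbb{Z}}_{n}\), this remark can be checked directly on a computer. The following sketch (a numerical illustration in Python/NumPy, not part of the original text) builds the cyclic-shift operator on \({\mathbb{Z}}_{n}\) and verifies that each sampled complex exponential spans a one-dimensional invariant subspace, with a unimodular scalar as its eigenvalue:

```python
import numpy as np

n = 8
# Cyclic-shift (translation) operator on Z_n: (T f)(t) = f(t - 1 mod n).
T = np.roll(np.eye(n), 1, axis=0)

t = np.arange(n)
for k in range(n):
    # Sampled complex exponential e_k(t) = exp(2*pi*i*k*t/n).
    e_k = np.exp(2j * np.pi * k * t / n)
    # T e_k = exp(-2*pi*i*k/n) e_k: each e_k is a common eigenvector
    # of all the translation operators, i.e., a harmonic component.
    assert np.allclose(T @ e_k, np.exp(-2j * np.pi * k / n) * e_k)
```

Every power of T acts on each \(e_k\) by a scalar, which is exactly the one-dimensional invariant-subspace decomposition described in the remark.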
Very broadly, the subject of harmonic analysis on groups (also known as Fourier analysis on groups) deals with such decompositions of functions on groups (commutative as well as noncommutative), and their applications in mathematics and engineering. For an introduction to the subject, the paper by Gross [7] may be a good starting point. Edwards [4] and DeVito [3] should provide further insights. For engineering applications in digital system design, see Stanković [15] and Karpovsky [10]. Applications in the area of image understanding are discussed in Kanatani [9].
6.2 Symmetries of Linear Equations
Another area of application of representation theory is in the solution of algebraic and differential equations with symmetries. The collection of papers in Allgower [1] should give an idea of the kind of work that has of late been going on in this direction. One particular line of such work has to do with symmetry-based block-diagonalization of a system of linear equations. To illustrate the basic idea, I give next a very simple example related to filter design.
Example 6.2.1
Consider the LC 2-port analog filter shown in Figure 6.1. It is the circuit of an insertion-loss low-pass filter of order 5, working between an input source \(V_s\) and an output load resistance \(R_L\). A common property of such filters of odd orders is that their circuits have physical symmetry with respect to the input and output ports. In this particular example, this is reflected in the fact that the end inductors are both of value \(L_1\) and the two capacitors are both of value \(C\).
The state equations of the circuit, with \(v_2\), \(v_4\), \(i_1\), \(i_3\), and \(i_5\) as the state variables, can be seen to be:
where coefficients a, b, c, and d are determined by the component values of the circuit. Let A denote the coefficient matrix of Eq. (6.1):
Based on symmetry arguments, we can see that the coefficient matrix A commutes with the following matrix P:
Note that P and I (the identity matrix of size 5) together form a group of order 2. Using the reduction procedure discussed in Chapter 5, we are then in a position to block-diagonalize the matrix A. The procedure leads us to the matrix α:
with the property that, under similarity transformation by it, the matrix \(\hat{A} = {\alpha }^{-1}A\alpha \) is in a block-diagonal form with two blocks, one of size 2 and another of size 3:
Now, define a new set of variables \(\hat{v}_2\), \(\hat{v}_4\), \(\hat{i}_1\), \(\hat{i}_3\), and \(\hat{i}_5\) by the equality
Then Eq. (6.1) takes the following equivalent form:
Owing to the block-diagonal form of the coefficient matrix, Eq. (6.6) separates out into two sets of uncoupled differential equations. This helps simplify the task of realizing the original LC filter as an equivalent active state-variable filter (see Note 2).
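The reduction used in this example can be mimicked numerically. In the sketch below (Python/NumPy; the coefficient matrix is a randomly generated stand-in with the right symmetry, not the actual filter matrix, and the orthonormal change of basis is one convenient choice, not necessarily the book's α), P exchanges \(v_2 \leftrightarrow v_4\) and \(i_1 \leftrightarrow i_5\) while fixing \(i_3\), and any A commuting with P splits into blocks of sizes 2 and 3:

```python
import numpy as np

rng = np.random.default_rng(0)

# State ordering (v2, v4, i1, i3, i5); the physical symmetry swaps
# v2 <-> v4 and i1 <-> i5, and fixes i3.
P = np.zeros((5, 5))
P[0, 1] = P[1, 0] = P[2, 4] = P[4, 2] = P[3, 3] = 1.0

# Any matrix of the form M + P M P commutes with P (since P @ P = I);
# use one as a stand-in for the coefficient matrix A.
M = rng.standard_normal((5, 5))
A = M + P @ M @ P
assert np.allclose(A @ P, P @ A)

# Change of basis: antisymmetric combinations (eigenvalue -1 of P,
# dimension 2) followed by symmetric ones (eigenvalue +1, dimension 3).
s = 1 / np.sqrt(2)
alpha = np.column_stack([
    [s, -s, 0, 0, 0],   # v2 - v4
    [0, 0, s, 0, -s],   # i1 - i5
    [s,  s, 0, 0, 0],   # v2 + v4
    [0, 0, s, 0,  s],   # i1 + i5
    [0, 0, 0, 1,  0],   # i3
])

A_hat = np.linalg.inv(alpha) @ A @ alpha
# The blocks coupling the 2- and 3-dimensional subspaces vanish.
assert np.allclose(A_hat[:2, 2:], 0)
assert np.allclose(A_hat[2:, :2], 0)
```

Since A commutes with P, it maps each eigenspace of P into itself, which is why the off-diagonal blocks of \(\hat{A}\) are zero regardless of the particular values in A.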
Remark 8
As in the example given in Section ??, here too the matrix α does not depend on the specific values of the parameters a, b, c, and d. It is entirely decided by the symmetries of the coefficient matrix A. What holds in this respect for a filter of order 5 also holds for all such insertion-loss filters of odd orders in general. ♠
6.3 Fast Discrete Signal Transforms
Yet another major connection of representation theory with signal processing is in the design of what are called fast algorithms for a discrete finite transform, such as the discrete Fourier transform (DFT).
Admittedly, the DFT has not come up for explicit mention in our discussions. It is, however, reasonable to assume that the reader is familiar with its wide-ranging applications in signal processing. The reader must also be familiar with the class of algorithms, collectively known as fast Fourier transform (FFT) algorithms, that are used in efficiently carrying out DFT computations. For someone interested in revising the basics, presented in the setting of linear algebra, a good introduction is given in Frazier [6, Chapter 2]. Although Frazier does not bring in group theory in his treatment, he does relate the DFT to the eigenvectors of translation-invariant linear transformations on finite-dimensional vector spaces. In that sense his approach does connect with group representation theory. A tutorial overview of a group-theoretic interpretation of the FFT can be found in Rockmore [14].
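As a reminder of what such algorithms accomplish, here is a minimal recursive radix-2 FFT sketch in Python/NumPy (illustrative only, not taken from the works cited; it assumes the input length is a power of two), checked against the direct \(O(n^2)\) DFT sum:

```python
import numpy as np

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    # Split into even- and odd-indexed subsequences and recurse:
    # this is the divide-and-conquer step that yields O(n log n) work.
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

# Check against the direct O(n^2) DFT sum X[k] = sum_t x[t] e^{-2*pi*i*k*t/n}.
x = np.random.default_rng(1).standard_normal(16)
n = len(x)
k, t = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
dft = np.exp(-2j * np.pi * k * t / n) @ x
assert np.allclose(fft(x), dft)
```

The group-theoretic reading of this split, as Rockmore [14] explains, is that the even/odd decomposition corresponds to a subgroup of index 2 in the cyclic group \({\mathbb{Z}}_{n}\).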
Egner and Püschel [5] may be consulted for more on the subject of symmetry-based fast algorithms. They address the very general issue of deriving fast algorithms for a very wide class of what they have called “discrete signal transforms,” of which the DFT and the discrete cosine transform (DCT) are two special kinds. A proper understanding of these algorithms and related ideas requires as a prerequisite a background in group theory. The contents of this book will, I hope, help the reader begin building up such a background.
In closing, I should like to add that several new avenues of exploiting symmetry considerations in signal processing are just beginning to open up. One such avenue is related to the presence of partial symmetries in signals and systems. As pointed out in Veblen and Whitehead [16, p. 32] and Lawson [11, p. 2], there are spaces whose groups of automorphisms reduce to the identity (see Note 3). Group theory as a tool for studying symmetries does not take us very far in such cases. As Lawson [11] argues, the theory of inverse semigroups, which deals with partial symmetries, may provide a more meaningful framework. This calls for fresh and substantive investigations.
One can also make a case for considering signals not as members of a vector space, but rather as subspaces of a vector space. (We do a similar thing when we talk of events as subsets in probability theory.) In that case the signal space has the structure of a non-distributive lattice. What about the role of symmetry in the context of signal spaces so conceived? Will all this be relevant in the modeling of some real-life situation? The classic paper by Birkhoff and von Neumann [2] seems to suggest that there may well be such relevance, even in the area of signal processing. But all that is part of another story!
Notes
- 1.
Translation operators, as explained in Section ??, may be visualized as generalizations of the familiar delay lines of circuit and system theory. You may recall the way they are used in a transversal filter for simulating an LTI system with a given impulse response.
- 2.
- 3.
The real number field \(\mathbb{R}\) serves as an example. This point is also discussed in Marquis [12, p. 35].
References
Eugene L. Allgower, Kurt Georg, and Rick Miranda, editors. Exploiting Symmetry in Applied and Numerical Analysis. American Mathematical Society, Providence, 1993.
G. Birkhoff and J. von Neumann. The logic of quantum mechanics. Annals of Mathematics, 37(4):823–843, 1936.
Carl L. DeVito. Harmonic Analysis: A Gentle Introduction. Jones & Bartlett, Boston, 2007.
R.E. Edwards. Fourier Series: A Modern Introduction, volume 1. Springer-Verlag, New York, 1979.
Sebastian Egner and Mark Püschel. Automatic generation of fast discrete signal transforms. IEEE Trans. on Signal Processing, 49(9):1992–2002, 2001.
Michael W. Frazier. Introduction to Wavelets Through Linear Algebra. Springer-Verlag, New York, 2000.
Kenneth I. Gross. On the evolution of noncommutative harmonic analysis. Amer. Math. Month., 85:525–548, 1978.
Fred H. Irons. Active Filters for Integrated-Circuit Applications. Artech House, Boston, 2005.
Kenichi Kanatani. Group-Theoretical Methods in Image Understanding. Springer-Verlag, London, 1990.
Mark G. Karpovsky, Radomir S. Stanković, and Jaakko T. Astola. Spectral Logic and Its Applications for the Design of Digital Devices. Wiley, New Jersey, 2008.
M.V. Lawson. Inverse Semigroups: The Theory of Partial Symmetries. World Scientific, Singapore, 1998.
Jean-Pierre Marquis. From a Geometrical Point of View: A Study of the History and Philosophy of Category Theory. Springer, Dordrecht, 2009.
S.A. Pactitis. Active Filters: Theory and Design. CRC Press, Boca Raton, 2008.
Daniel N. Rockmore. The FFT: an algorithm the whole family can use. Computing in Science and Engineering, 2(1):60–64, 2000.
Radomir S. Stanković, Claudio Moraga, and Jaakko T. Astola. Fourier Analysis on Finite Groups with Applications in Signal Processing and System Design. Wiley-Interscience, Hoboken, 2005.
Oswald Veblen and J.H.C. Whitehead. The Foundations of Differential Geometry. Cambridge University Press, Cambridge, 1932, 1953, 1960.
© 2010 Springer Science+Business Media B.V.
Sinha, V.P. (2010). Signal Processing and Representation Theory. In: Symmetries and Groups in Signal Processing. Signals and Communication Technology. Springer, Dordrecht. https://doi.org/10.1007/978-90-481-9434-6_6