Abstract
In the previous chapter, we discussed two new methods of performing Canonical Correlation Analysis (CCA) with artificial neural networks. In this chapter, we re-derive the learning rules from a probabilistic perspective, which then enables us, by use of a specific prior on the weights, to simplify the algorithm. We then derive CCA-type rules from Becker's models (see Appendix D), though with a very different methodology from that used in [7]. Finally, we derive a robust version of the above rules from probability theory and compare the convergence of the various rules on artificial data sets.
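To make the underlying optimisation concrete, the sketch below illustrates the generic CCA objective that all of these rules target: maximise the correlation E[y1 y2] between two linear projections y1 = w1·x1 and y2 = w2·x2, subject to unit-variance constraints. This is a minimal, hypothetical illustration using plain gradient ascent with an explicit rescaling step to enforce the constraints; it is not any of the specific network rules derived in the chapter (which enforce the constraints via adaptive Lagrange multipliers or negative feedback).

```python
import numpy as np

def cca_first_pair(X1, X2, eta=0.2, n_iter=300, seed=0):
    """Find the first pair of canonical weight vectors by gradient
    ascent on E[y1*y2], rescaling after each step so that each
    projection has unit sample variance (a simple retraction onto
    the CCA constraint surface)."""
    rng = np.random.default_rng(seed)
    n = X1.shape[0]
    w1 = rng.standard_normal(X1.shape[1])
    w2 = rng.standard_normal(X2.shape[1])
    for _ in range(n_iter):
        y1, y2 = X1 @ w1, X2 @ w2
        # Gradient of the sample estimate of E[y1*y2] w.r.t. each weight vector
        w1 = w1 + eta * X1.T @ y2 / n
        w1 /= np.sqrt(np.mean((X1 @ w1) ** 2))  # enforce var(y1) = 1
        w2 = w2 + eta * X2.T @ y1 / n
        w2 /= np.sqrt(np.mean((X2 @ w2) ** 2))  # enforce var(y2) = 1
    return w1, w2

# Artificial data in the spirit of the chapter's experiments: two data
# streams sharing a common latent signal z, plus independent noise.
rng = np.random.default_rng(0)
N = 2000
z = rng.standard_normal(N)
X1 = np.column_stack([z + 0.3 * rng.standard_normal(N), rng.standard_normal(N)])
X2 = np.column_stack([z + 0.3 * rng.standard_normal(N), rng.standard_normal(N)])

w1, w2 = cca_first_pair(X1, X2)
corr = np.corrcoef(X1 @ w1, X2 @ w2)[0, 1]
```

The neural rules in the chapter replace the explicit rescaling with online updates of Lagrange multipliers (and, in the robust variant, a heavier-tailed noise model), but the stationary points are the same canonical correlation directions.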
Copyright information
© 2005 Springer-Verlag London Limited
Cite this chapter
(2005). Alternative Derivations of CCA Networks. In: Hebbian Learning and Negative Feedback Networks. Advanced Information and Knowledge Processing. Springer, London. https://doi.org/10.1007/1-84628-118-0_10
DOI: https://doi.org/10.1007/1-84628-118-0_10
Publisher Name: Springer, London
Print ISBN: 978-1-85233-883-1
Online ISBN: 978-1-84628-118-1
eBook Packages: Computer Science (R0)