Abstract
Backwards calculation of derivatives – sometimes called the reverse mode, the full adjoint method, or backpropagation – has been developed and applied in many fields. This paper reviews several strands of history, advanced capabilities and types of application – particularly those which are crucial to the development of brain-like capabilities in intelligent control and artificial intelligence.
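To make the core idea concrete: reverse-mode differentiation records a computation forward, then sweeps backward through the graph, multiplying each node's adjoint by the local derivatives of the operations that produced it. The sketch below is illustrative only and not from the paper; the `Var` class and `backward` function are hypothetical names for a minimal scalar implementation.

```python
# Minimal sketch of reverse-mode differentiation ("backpropagation")
# on a scalar computation graph. Illustrative; not the paper's code.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(output):
    """Accumulate d(output)/d(node) into node.grad for every node."""
    # Topologically order the graph so each node's adjoint is complete
    # before it is propagated to its parents.
    order, seen = [], set()
    def visit(v):
        if v not in seen:
            seen.add(v)
            for parent, _ in v.parents:
                visit(parent)
            order.append(v)
    visit(output)

    output.grad = 1.0
    for v in reversed(order):          # the backward sweep
        for parent, local in v.parents:
            parent.grad += v.grad * local

# Example: f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x.
x, y = Var(3.0), Var(4.0)
f = x * y + x
backward(f)
print(x.grad, y.grad)  # → 5.0 3.0
```

One backward sweep yields the derivatives of the output with respect to all inputs at once, which is what makes the reverse mode efficient for the many-input, one-output objectives typical of neural network training.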
Copyright information
© 2006 Springer
Cite this paper
Werbos, P.J. (2006). Backwards Differentiation in AD and Neural Nets: Past Links and New Opportunities. In: Bücker, M., Corliss, G., Naumann, U., Hovland, P., Norris, B. (eds) Automatic Differentiation: Applications, Theory, and Implementations. Lecture Notes in Computational Science and Engineering, vol 50. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-28438-9_2
DOI: https://doi.org/10.1007/3-540-28438-9_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28403-1
Online ISBN: 978-3-540-28438-3
eBook Packages: Mathematics and Statistics (R0)