Uncertainty Analysis Based on Sensitivities Generated Using Automatic Differentiation

  • Jacob Barhen
  • David B. Reister
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2668)


Abstract

The objective is to determine confidence limits for the outputs of a mathematical model of a physical system that consists of many interacting computer codes. Each code has many modules that receive inputs, write outputs, and depend on parameters. Several of the outputs of the system of codes can be compared to sensor measurements. The outputs of the system are uncertain because the inputs and parameters of the system are uncertain. The method uses sensitivities to propagate uncertainties from inputs to outputs through the complex chain of modules. Furthermore, the method consistently combines sensor measurements with model outputs to simultaneously obtain best estimates for model parameters and reduce uncertainties in model outputs. The method was applied to a test case where ADIFOR2 was used to calculate sensitivities for the radiation transport code MODTRAN.
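The propagation scheme the abstract describes can be sketched in a few lines: with a Jacobian of sensitivities J, an input covariance C_x maps to an output covariance C_y = J C_x Jᵀ to first order, and a sensor measurement can then shrink C_y via a Kalman-style gain. The toy model, covariances, and finite-difference Jacobian below are illustrative assumptions (the paper obtains exact sensitivities with automatic differentiation via ADIFOR, not finite differences), not the authors' actual codes.

```python
import numpy as np

def model(x):
    # Hypothetical stand-in for a chain of code modules mapping
    # inputs/parameters x to outputs y.
    return np.array([x[0] * x[1], x[0] + x[1] ** 2])

def jacobian_fd(f, x, h=1e-6):
    # Forward-difference Jacobian; a real application would use
    # AD-generated derivative code instead.
    y0 = f(x)
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - y0) / h
    return J

x = np.array([2.0, 3.0])
Cx = np.diag([0.1, 0.2])            # assumed input/parameter covariance
J = jacobian_fd(model, x)
Cy = J @ Cx @ J.T                   # first-order ("sandwich") propagation

# Combine a sensor measurement of the outputs with the model prediction:
# the posterior covariance (I - K H) Cy is smaller than the prior Cy.
H = np.eye(2)                       # sensors observe outputs directly (assumed)
R = np.diag([0.05, 0.05])           # assumed sensor-noise covariance
K = Cy @ H.T @ np.linalg.inv(H @ Cy @ H.T + R)
Cy_post = (np.eye(2) - K @ H) @ Cy
```

The same update also yields the best-estimate shift in the parameters once a measurement residual is available; only the covariance reduction is shown here.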


Keywords: Covariance Matrix, Probability Density Function, Uncertainty Analysis, Sensor Measurement, Automatic Differentiation




References

  1. Myers, R., Response Surface Methodology, Edwards Bros. (1976).
  2. Kosko, B., Fuzzy Engineering, Prentice Hall (1996).
  3. Aminzadeh, F., J. Barhen, C. Glover, and N. Toomarian, “Estimation of Reservoir Parameters using a Hybrid Neural Network”, Jour. of Petroleum Science & Engineering, 24, 49–56 (1999).
  4. Bishop, M., Neural Networks for Pattern Recognition, Oxford University Press (1997).
  5. Dubrawski, A., “Stochastic Validation for Automated Tuning of Neural Network’s Hyper Parameters”, Robotics and Autonomous Systems, 21, 83–93 (1997).
  6. Barhen, J. et al., “Uncertainty Analysis of Time-Dependent Nonlinear Systems”, Nucl. Sci. Eng., 81, 23–44 (1982).
  7. Barhen, J., N. Toomarian, and S. Gulati, “Applications of Adjoint Operators to Neural Networks”, Appl. Math. Lett., 3(3), 13–18 (1990).
  8. Toomarian, N. and J. Barhen, “Neural Network Training by Integration of Adjoint Systems of Equations Forward in Time”, U.S. Patent No. 5,930,781 (July 27, 1999).
  9. Griewank, A., Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, Society for Industrial and Applied Mathematics, Philadelphia (2000).
  10. Corliss, G., Faure, C., Griewank, A., Hascoet, L., and Naumann, U., Automatic Differentiation of Algorithms: From Simulation to Optimization, Springer-Verlag, New York Berlin Heidelberg (2002).
  11. Tolsma, J. and P. Barton, “On Computational Differentiation”, Comp. Chem. Engin., 22(4/5), 475–490 (1998).
  12. Berk, A., L. S. Bernstein, G. P. Anderson, P. K. Acharya, D. C. Robertson, J. H. Chetwynd, and S. M. Adler-Golden, “MODTRAN Cloud and Multiple Scattering Upgrades with Application to AVIRIS”, Remote Sens. Environ., 65(3), 367–375 (1998).
  13. Bischof, C., P. Khademi, A. Mauer, and A. Carle, “ADIFOR 2.0: Automatic Differentiation of Fortran 77 Programs”, IEEE Computat. Sci. Eng., 3(3), 18–32 (1996).

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Jacob Barhen¹
  • David B. Reister¹

  1. Oak Ridge National Laboratory, Oak Ridge
