All of the standard inferences in RSM presented in previous chapters are based on point estimators, which have sampling, or experimental, variability. From a classical, or frequentist, point of view, every quantity computed from experimental data is subject to sampling variability and is therefore a random quantity itself. As Draper [48] pointed out, one should not expect precise conclusions from mathematical optimization techniques applied to data subject to large errors. This comment applies to every technique discussed previously: the steepest ascent/descent direction, the eigenvalues of the quadratic matrix, and the point estimators of the stationary or optimal points in quadratic (second-order) optimization, for both canonical and ridge analysis. It applies equally to more sophisticated mathematical programming techniques. The RSM literature has over-emphasized such mathematical techniques while neglecting the main statistical issue that random data raise: if the experiment is repeated and new models are fitted, the parameter estimates (or even the form of the response model) may change, and this will necessarily lead to a different optimal solution.
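The point can be made concrete with a small simulation, a minimal sketch not taken from the text: the assumed true first-order surface, the design, and the noise level below are all illustrative choices. Two replicates of the same experiment, fitted by least squares, generally yield different estimated steepest-ascent directions, even though the underlying surface is identical.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2^2 factorial design in coded units, plus two center runs
# (columns: intercept, x1, x2)
X = np.array([[1, -1, -1],
              [1,  1, -1],
              [1, -1,  1],
              [1,  1,  1],
              [1,  0,  0],
              [1,  0,  0]], dtype=float)

def steepest_ascent_direction(sigma):
    """Simulate one experiment from an assumed true first-order surface
    E[y] = 5 + 2*x1 + 1*x2 and return the estimated, unit-norm
    steepest-ascent direction b / ||b||."""
    true_beta = np.array([5.0, 2.0, 1.0])  # hypothetical true coefficients
    y = X @ true_beta + rng.normal(scale=sigma, size=X.shape[0])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    b = beta_hat[1:]                        # drop the intercept estimate
    return b / np.linalg.norm(b)

# Two replicated experiments on the same true surface
d1 = steepest_ascent_direction(sigma=2.0)
d2 = steepest_ascent_direction(sigma=2.0)
print(d1)
print(d2)
```

With nonzero error variance the two printed directions differ; following either one as if it were the true gradient direction ignores exactly the sampling variability this chapter addresses.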
© 2007 Springer Science+Business Media, LLC
(2007). Statistical Inference in First Order RSM Optimization. In: Process Optimization. International Series in Operations Research & Management Science, vol 105. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-71435-6_6
Publisher Name: Springer, Boston, MA
Print ISBN: 978-0-387-71434-9
Online ISBN: 978-0-387-71435-6