Measures of Dependence: Correlation and Regression

  • Lothar Sachs

Abstract

In many situations it is desirable to learn something about the dependence between two characteristics of an individual, a material, a product, or a process. In some cases it may be certain on theoretical grounds that the two characteristics are related; the problem is then to determine the type and degree of the relationship. As a first step, the value pairs (x_i, y_i) are plotted in a coordinate system, which gives a basic impression of the scatter and the shape of the point cloud.
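
As a minimal sketch of this first step (the data, variable names, and the use of NumPy and Matplotlib below are assumptions for illustration, not anything prescribed by the text), the following Python snippet plots the value pairs (x_i, y_i) as a point cloud, measures the degree of linear dependence with the product-moment correlation coefficient r, and indicates its type by fitting the least-squares line y = a + bx:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical paired observations (x_i, y_i) of two characteristics
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
    y = np.array([2.1, 2.9, 3.6, 4.8, 5.1, 6.3, 6.8, 8.2])

    # Degree of linear dependence: product-moment correlation coefficient r
    r = np.corrcoef(x, y)[0, 1]

    # Type of dependence, here taken as linear: least-squares line y = a + b*x
    b, a = np.polyfit(x, y, deg=1)  # polyfit returns slope first, then intercept

    print(f"r = {r:.3f};  fitted line: y = {a:.2f} + {b:.2f}*x")

    # Scatter plot of the point cloud together with the fitted line
    plt.scatter(x, y, label="value pairs (x_i, y_i)")
    plt.plot(x, a + b * x, label="least-squares line")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.legend()
    plt.show()

For these made-up data the printed r and the plot together give the basic impression of scatter and shape referred to above; a clearly curved point cloud would suggest a transformation or a higher-degree fit rather than the straight line.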

References

  1. Abbas, S.: Serial correlation coefficient. Bull. Inst. Statist. Res. Tr. 1 (1967), 65–76.
  2. Acton, F. S.: Analysis of Straight-Line Data. New York 1959.
  3. Anderson, R.L., and Houseman, E.E.: Tables of Orthogonal Polynomial Values Extended to N = 104. Res. Bull. 297, Agricultural Experiment Station, Ames, Iowa 1942 (Reprinted March 1963).
  4. Anderson, T.W.: An Introduction to Multivariate Statistical Analysis. New York 1958.
  5. —, Gupta, S.D., and Styan, G.P.H.: A Bibliography of Multivariate Statistical Analysis. (Oliver and Boyd, pp. 654) Edinburgh and London 1973.
  6. Bancroft, T.A.: Topics in Intermediate Statistical Methods. (Iowa State Univ. Press) Ames, Iowa 1968.
  7. Bartlett, M.S.: Fitting a straight line when both variables are subject to error. Biometrics 5 (1949), 207–212.
  8. Barton, D.E., and Casley, D.J.: A quick estimate of the regression coefficient. Biometrika 45 (1958), 431–435.
  9. Berkson, J.: Are there two regressions? J. Amer. Statist. Assoc. 45 (1950), 164–180 [vgl. auch 48 (1953), 94-103].
  10. Binder, A.: Considerations of the place of assumptions in correlational analysis. American Psychologist 14 (1959), 504–510.
  11. Blomqvist, N.: (1) On a measure of dependence between two random variables. Ann. Math. Statist. 21 (1950), 593–601. (2) Some tests based on dichotomization. Ann. Math. Statist. 22 (1951), 362-371.
  12. Brown, R.G.: Smoothing, Forecasting and Prediction of Discrete Time Series. (Prentice-Hall, pp. 468) London 1962.
  13. Carlson, F.D., Sobel, E., and Watson, G.S.: Linear relationships between variables affected by errors. Biometrics 22 (1966), 252–267.
  14. Cohen, J.: A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20 (1960), 37–46.
  15. Cole, La M.C.: On simplified computations. The American Statistician 13 (February 1959), 20.
  16. Cooley, W. W., and Lohnes, P. R.: Multivariate Data Analysis. (Wiley, pp. 400) London 1971.
  17. Cornfield, J.: Discriminant functions. Rev. Internat. Statist. Inst. 35 (1967), 142–153 (vgl. auch J. Amer. Statist. Assoc. 63 [1968], 1399-1412).
  18. Cowden, D.J., and Rucker, N.L.: Tables for Fitting an Exponential Trend by the Method of Least Squares. Techn. Paper 6, University of North Carolina, Chapel Hill 1965.
  19. Cureton, E.E.: Quick fits for the lines y = bx and y = a + bx when errors of observation are present in both variables. The American Statistician 20 (June 1966), 49.
  20. Daniel, C., and Wood, F.S. (with J.W. Gorman): Fitting Equations to Data. Computer Analysis of Multifactor Data for Scientists and Engineers. (Wiley-Interscience, pp. 342) New York 1971.
  21. Dempster, A.P.: Elements of Continuous Multivariate Analysis. (Addison-Wesley, pp. 400) Reading, Mass. 1968.
  22. Dietrich, G., und Stahl, H.: Matrizen und Determinanten und ihre Anwendung in Technik und Ökonomie. 2. Aufl. (Fachbuchverlag, 422 S.) Leipzig 1968 (3. Aufl. 1970).
  23. Draper, N.R., and Smith, H.: Applied Regression Analysis. New York 1966.
  24. Duncan, D.B.: Multiple comparison methods for comparing regression coefficients. Biometrics 26 (1970), 141–143 (vgl. auch Brown, 143 + 144).
  25. Ehrenberg, A.S.C.: Bivariate regression is useless. Applied Statistics 12 (1963), 161–179.
  26. Elandt, Regina C.: Exact and approximate power function of the non-parametric test of tendency. Ann. Math. Statist. 33 (1962), 471–481.
  27. Emerson, Ph. L.: Numerical construction of orthogonal polynomials for a general recurrence formula. Biometrics 24 (1968), 695–701.
  28. Enderlein, G.: Die Schätzung des Produktmoment-Korrelationsparameters mittels Rangkorrelation. Biometrische Zeitschr. 3 (1961), 199–212.
  29. Fels, E.: Inhärente Fehler in linearen Regressionsgleichungen und Schranken dafür. Ifo-Studien 8 (1962), 5–18.
  30. Ferguson, G.A.: Nonparametric Trend Analysis. Montreal 1965.
  31. Fisher, R.A.: Statistical Methods for Research Workers, 12th ed. Edinburgh 1954, pp. 197-204.
  32. Friedrich, H.: Nomographische Bestimmung und Beurteilung von Regressions- und Korrelationskoeffizienten. Biometrische Zeitschr. 12 (1970), 163–187.
  33. Geary, R.C.: Non-linear functional relationships between two variables when one is controlled. J. Amer. Statist. Assoc. 48 (1953), 94–103.
  34. Gebelein, H., und Ruhenstroth-Bauer, G.: Über den statistischen Vergleich einer Normalkurve und einer Prüfkurve. Die Naturwissenschaften 39 (1952), 457–461.
  35. Gibson, Wendy M., and Jowett, G.H.: “Three-group” regression analysis. Part I. Simple regression analysis. Part II. Multiple regression analysis. Applied Statistics 6 (1957), 114–122 and 189-197.
  36. Glasser, G. J., and Winter, R.F.: Critical values of the coefficient of rank correlation for testing the hypothesis of independence. Biometrika 48 (1961), 444–448.
  37. Gregg, I.V., Hossel, C.H., and Richardson, J.T.: Mathematical Trend Curves — An Aid to Forecasting. (I.C.I. Monograph No. 1) Edinburgh 1964.
  38. Griffin, H.D.: Graphic calculation of Kendall’s tau coefficient. Educ. Psychol. Msmt. 17 (1957), 281–285.
  39. Hahn, G.J.: Simultaneous prediction intervals for a regression model. Technometrics 14 (1972), 203–214.
  40. Heald, M.A.: Least squares made easy. Amer. J. Phys. 37 (1969), 655–662.
  41. Hotelling, H.: (1) The selection of variates for use in prediction with some comments on the general problem of nuisance parameters. Ann. Math. Statist. 11 (1940), 271–283 [vgl. auch O.J. Dunn and V. Clark: J. Amer. Statist. Assoc. 66 (1971), 904-908]. (2) New light on the correlation coefficient and its transforms. J. Roy. Statist. Soc. B 15 (1953), 193-232.
  42. Hiorns, R.W.: The Fitting of Growth and Allied Curves of the Asymptotic Regression Type by Steven’s Method. Tracts for Computers No. 28. Cambridge Univ. Press 1965.
  43. Hoerl jr., A. E.: Fitting Curves to Data. In J. H. Perry (Ed.): Chemical Business Handbook. (McGraw-Hill) London 1954, 20-55/20-77 (vgl. auch 20-16).
  44. Kendall, M.G.: (1) A new measure of rank correlation. Biometrika 30 (1938), 81–93. (2) A Course in Multivariate Analysis. London 1957. (3) Rank Correlation Methods, 3rd ed. London 1962, pp. 38-41 (4th ed. 1970). (4) Ronald Aylmer Fisher, 1890–1962. Biometrika 50 (1963), 1-15. (5) Time Series. (Griffin) London 1973.
  45. Kerrich, J.E.: Fitting the line y = ax when errors of observation are present in both variables. The American Statistician 20 (February 1966), 24.
  46. Koller, S.: (1) Statistische Auswertung der Versuchsergebnisse. In Hoppe-Seyler/Thierfelder’s Handb. d. physiologisch- und pathologisch-chemischen Analyse, 10. Aufl., Bd. II, S. 931-1036, Berlin-Göttingen-Heidelberg 1955, insbes. S. 1002-1004.
  47. —: (2) Typisierung korrelativer Zusammenhänge. Metrika 6 (1963), 65-75. (3) Systematik der statistischen Schlußfehler. Method. Inform. Med. 3 (1964), 113-117. (4) Graphische Tafeln zur Beurteilung statistischer Zahlen. 3. Aufl., Darmstadt 1953 (4. Aufl. 1969).
  48. Konijn, H.S.: On the power of certain tests for independence in bivariate populations. Ann. Math. Statist. 27 (1956), 300–323.
  49. Kramer, C.Y., and Jensen, D.R.: Fundamentals of multivariate analysis. Part I–IV. Journal of Quality Technology 1 (1969), 120–133, 189-204, 264-276, 2 (1970), 32-40 and 4 (1972), 177-180.
  50. Kres, H.: (1) Elemente der Multivariaten Analysis. (Springer) Heidelberg 1974. (2) Statistische Tafeln zur Multivariaten Analysis. (Springer) Heidelberg 1974.
  51. Krishnaiah, P.R. (Ed.): Multivariate Analysis and Multivariate Analysis II, III. (Academic Press, pp. 592, 696, and 450) New York and London 1966, 1969, and 1973.
  52. Kymn, K.O.: The distribution of the sample correlation coefficient under the null hypothesis. Econometrica 36 (1968), 187–189.
  53. Lees, Ruth W., and Lord, F.M.: (1) Nomograph for computing partial correlation coefficients. J. Amer. Statist. Assoc. 56 (1961), 995–997. (2) Corrigenda 57 (1962), 917 + 918.
  54. Lieberson, S.: Non-graphic computation of Kendall’s tau. The American Statistician 15 (October 1961), 20 + 21.
  55. Linder, A.: (1) Statistische Methoden für Naturwissenschaftler, Mediziner und Ingenieure. 3. Aufl., Basel 1960, S. 172. (2) Anschauliche Deutung und Begründung des Trennverfahrens. Method. Inform. Med. 2 (1963), 30–33. (3) Trennverfahren bei qualitativen Merkmalen. Metrika 6 (1963), 76-83.
  56. Lord, F.M.: Nomograph for computing multiple correlation coefficients. J. Amer. Statist. Assoc. 50 (1955), 1073–1077 [vgl. auch Biometrika 59 (1972), 175-189].
  57. Lubischew, A.A.: On the use of discriminatory functions in taxonomy. With editorial note and author’s note. Biometrics 18 (1962), 455–477.
  58. Ludwig, R.: Nomogramm zur Prüfung des Produkt-Moment-Korrelationskoeffizienten r. Biometrische Zeitschr. 7 (1965), 94–95.
  59. Madansky, A.: The fitting of straight lines when both variables are subject to error. J. Amer. Statist. Assoc. 54 (1959), 173–205 [vgl. auch 66 (1971), 587-589].
  60. Mandel, J.: (1) Fitting a straight line to certain types of cumulative data. J. Amer. Statist. Assoc. 52 (1957), 552–566. (2) Estimation of weighting factors in linear regression and analysis of variance. Technometrics 6 (1964), 1-25.
  61. Mandel, J., and Linning, F. J.: Study of accuracy in chemical analysis using linear calibration curves. Analyt. Chem. 29 (1957), 743–749.
  62. Meyer-Bahlburg, H.F.L.: Spearmans rho als punktbiserialer Korrelationskoeffizient. Biometrische Zeitschr. 11 (1969), 60–66.
  63. Miller, R. G.: Simultaneous Statistical Inference. (McGraw-Hill, pp. 272) New York 1966 (Chapter 5, pp. 189-210).
  64. Morrison, D. F.: Multivariate Statistical Methods. (McGraw-Hill, pp. 338) New York, London 1967.
  65. Olkin, I., and Pratt, J.W.: Unbiased estimation of certain correlation coefficients. Ann. Math. Statist. 29 (1958), 201–211.
  66. Olmstead, P.S., and Tukey, J.W.: A corner test of association. Ann. Math. Statist. 18 (1947), 495–513.
  67. Ostle, B.: Statistics in Research: Basic Concepts and Techniques for Research Workers. 2nd ed., Ames, Iowa 1963, Chapters 8 and 9.
  68. Pfanzagl, J.: Über die Parallelität von Zeitreihen. Metrika 6 (1963), 100–113.
  69. Plackett, R.L.: Principles of Regression Analysis. Oxford 1960.
  70. Porebski, O.R.: (1) On the interrelated nature of the multivariate statistics used in discriminatory analysis. Brit. J. Math. Stat. Psychol. 19 (1966), 197–214. (2) Discriminatory and canonical analysis of technical college data. Brit. J. Math. Stat. Psychol. 19 (1966), 213-236.
  71. Potthoff, R.F.: Some Scheffé-type tests for some Behrens-Fisher type regression problems. J. Amer. Statist. Assoc. 60 (1965), 1163–1190.
  72. Press, S.J.: Applied Multivariate Analysis. (Holt, Rinehart and Winston, pp. 521) New York 1972.
  73. Prince, B.M., and Tate, R.F.: The accuracy of maximum likelihood estimates of correlation for a biserial model. Psychometrika 31 (1966), 85–92.
  74. Puri, M.L., and Sen, P.K.: Nonparametric Methods in Multivariate Analysis. (Wiley, pp. 450) London 1971.
  75. Quenouille, M.H.: Rapid Statistical Calculations. London 1959.
  76. Radhakrishna, S.: Discrimination analysis in medicine. The Statistician 14 (1964), 147–167.
  77. Rao, C.R.: (1) Multivariate analysis: an indispensable aid in applied research (with an 81 reference bibliography). Sankhya 22 (1960), 317–338. (2) Linear Statistical Inference and Its Applications. New York 1965 (2nd ed. 1973). (3) Recent trends of research work in multivariate analysis. Biometrics 28 (1972), 3-22.
  78. Robson, D.S.: A simple method for constructing orthogonal polynomials when the independent variable is unequally spaced. Biometrics 15 (1959), 187–191.
  79. Roos, C.F.: Survey of economic forecasting techniques. Econometrica 23 (1955), 363–395.
  80. Roy, S.N.: Some Aspects of Multivariate Analysis. New York and Calcutta 1957.
  81. Sachs, L.: Statistische Methoden. Ein Soforthelfer. 2. neubearb. Aufl. (Springer, 105 S.) Berlin, Heidelberg, New York 1972, S. 91-93.
  82. Salzer, H.E., Richards, Ch.H., and Arsham, Isabelle: Table for the Solution of Cubic Equations. New York 1958.
  83. Samiuddin, M.: On a test for an assigned value of correlation in a bivariate normal distribution. Biometrika 57 (1970), 461–464.
  84. Saxena, H.C., and Surendran, P.U.: Statistical Inference. (Chand, pp. 396) Delhi, Bombay, Calcutta 1967 (Chapter 6, 258-342).
  85. Schaeffer, M.S., and Levitt, E.E.: Concerning Kendall’s tau, a nonparametric correlation coefficient. Psychol. Bull. 53 (1956), 338–346.
  86. Seal, H.: Multivariate Statistical Analysis for Biologists. London 1964.
  87. Searle, S.R.: Linear Models. (Wiley, pp. 532) New York 1971.
  88. Spearman, C.: (1) The proof and measurement of association between two things. Amer. J. Psychol. 15 (1904), 72–101. (2) The method “of right and wrong cases” (“constant stimuli”) without Gauss’ formulae. Brit. J. Psychol. 2 (1908), 227-242.
  89. Stammberger, A.: Ein Nomogramm zur Beurteilung von Korrelationskoeffizienten. Biometrische Zeitschr. 10 (1968), 80–83.
  90. Stilson, D.W., and Campbell, V.N.: A note on calculating tau and average tau and on the sampling distribution of average tau with a criterion ranking. J. Amer. Statist. Assoc. 57 (1962), 567–571.
  91. Student: Probable error of a correlation coefficient. Biometrika 6 (1908), 302–310.
  92. Tate, R.F.: (1) Correlation between a discrete and a continuous variable. Point-biserial correlation. Ann. Math. Statist. 25 (1954), 603–607. (2) The theory of correlation between two continuous variables when one is dichotomized. Biometrika 42 (1955), 205-216. (3) Applications of correlation models for biserial data. J. Amer. Statist. Assoc. 50 (1955), 1078-1095. (4) Conditional-normal regression models. J. Amer. Statist. Assoc. 61 (1966), 477-489.
  93. Thöni, H.: Die nomographische Bestimmung des logarithmischen Durchschnittes von Versuchsdaten und die graphische Ermittlung von Regressionswerten. Experientia 19 (1963), 1–4.
  94. Tukey, J.W.: Components in regression. Biometrics 7 (1951), 33–70.
  95. Waerden, B.L. van der: Mathematische Statistik. 2. Aufl. (Springer, 360 S.) Berlin 1965, S. 324.
  96. Wagner, G.: Zur Methodik des Vergleichs altersabhängiger Dermatosen. (Zugleich korrelationsstatistische Kritik am sogenannten “Status varicosus”.) Zschr. menschl. Vererb.-Konstit.-Lehre 53 (1955), 57–84.
  97. Walter, E.: Rangkorrelation und Quadrantenkorrelation. Züchter Sonderh. 6, Die Frühdiagnose in der Züchtung und Züchtungsforschung II (1963), 7-11.
  98. Weber, Erna: Grundriß der biologischen Statistik. 7. neubearb. Aufl. (Fischer, 706 S.) Stuttgart 1972, S. 550-578.
  99. Williams, E.J.: Regression Analysis. New York 1959.
  100. Yule, G. U., and Kendall, M. G.: Introduction to the Theory of Statistics. London 1965, pp. 264-266.

Factor Analysis

  1. Adam, J., und Enke, H.: Zur Anwendung der Faktorenanalyse als Trennverfahren. Biometr. Zeitschr. 12 (1970), 395–411.
  2. Browne, M.W.: A comparison of factor analytic techniques. Psychometrika 33 (1968), 267–334.
  3. Corballis, M.C., and Traub, R.E.: Longitudinal factor analysis. Psychometrika 35 (1970), 79–98 [vgl. auch 36 (1971), 243-249].
  4. Derflinger, G.: Neue Iterationsmethoden in der Faktorenanalyse. Biometrische Zeitschr. 10 (1968), 58–75.
  5. Gollob, H. F.: A statistical model which combines features of factor analytic and analysis of variance techniques. Psychometrika 33 (1968), 73–115.
  6. Harman, H.H.: Modern Factor Analysis. 2nd rev. ed. (Univ. of Chicago, pp. 474) Chicago 1967.
  7. Jöreskog, K.G.: A general approach to confirmatory maximum likelihood factor analysis. Psychometrika 34 (1969), 183–202 [vgl. auch 36 (1971), 409-426 u. 37 (1972), 243-260, 425-440 sowie Psychol. Bull. 75 (1971), 416-423].
  8. Lawley, D.N., and Maxwell, A.E.: Factor Analysis as a Statistical Method. 2nd ed. (Butterworths, pp. 153) London 1971 [vgl. auch Biometrika 60 (1973), 331–338].
  9. McDonald, R.P.: Three common factor models for groups of variables. Psychometrika 35 (1970), 111–128.
  10. Rummel, R.J.: Applied Factor Analysis. (Northwestern Univ. Press, pp. 617) Evanston, Ill. 1970.
  11. Sheth, J.N.: Using factor analysis to estimate parameters. J. Amer. Statist. Assoc. 64 (1969), 808–822.
  12. Überla, K.: Faktorenanalyse. Eine systematische Einführung in Theorie und Praxis für Psychologen, Mediziner, Wirtschafts- und Sozialwissenschaftler. 2. verb. Aufl. (Springer, 399 S.) Berlin-Heidelberg-New York 1971 (vgl. insbes. S. 355-363).

Multiple Regression Analysis

  1. Abt, K.: On the identification of the significant independent variables in linear models. Metrika 12 (1967), 1–15, 81-96.
  2. Anscombe, F. J.: Topics in the investigation of linear relations fitted by the method of least squares. With discussion. J. Roy. Statist. Soc. B 29 (1967), 1–52.
  3. Beale, E.M.L.: Note on procedures for variable selection in multiple regression. Technometrics 12 (1970), 909–914 [vgl. auch Biometrika 54 (1967), 357-366].
  4. Bliss, C.I.: Statistics in Biology. Vol. 2. (McGraw-Hill, pp. 639) New York 1970, Chapter 18.
  5. Cochran, W.G.: Some effects of errors of measurement on multiple correlation. J. Amer. Statist. Assoc. 65 (1970), 22–34.
  6. Cramer, E. M.: Significance tests and tests of models in multiple regression. The American Statistician 26 (Oct. 1972), 26–30 [vgl. auch 25 (Oct. 1971), 32-34, 25 (Dec. 1971), 37-39 und 26 (April 1972), 31-33].
  7. Darlington, R.B.: Multiple regression in psychological research and practice. Psychological Bulletin 69 (1968), 161–182 (vgl. auch 75 [1971], 430 + 431).
  8. Draper, N.R., and Smith, H.: Applied Regression Analysis. (Wiley, pp. 407) New York 1966.
  9. Dubois, P. H.: Multivariate Correlational Analysis. (Harper and Brothers, pp. 202) New York 1957.
  10. Enderlein, G.: Kriterien zur Wahl des Modellansatzes in der Regressionsanalyse mit dem Ziel der optimalen Vorhersage. Biometr. Zeitschr. 12 (1970), 285–308 [vgl. auch 13 (1971), 130-156].
  11. Enderlein, G., Reiher, W., und Trommer, R.: Mehrfache lineare Regression, polynomiale Regression und Nichtlinearitätstests. In: Regressionsanalyse und ihre Anwendungen in der Agrarwissenschaft. Vorträge des 2. Biometrischen Seminars der Deutschen Akademie der Landwirtschaftswissenschaften zu Berlin im März 1965. Tagungsberichte Nr. 87, Berlin 1967, S. 49-78.
  12. Folks, J.L., and Antle, C.E.: Straight line confidence regions for linear models. J. Amer. Statist. Assoc. 62 (1967), 1365–1374.
  13. Goldberger, A. S.: Topics in Regression Analysis. (Macmillan, pp. 144) New York 1968.
  14. Graybill, F.A., and Bowden, D.C.: Linear segment confidence bands for simple linear models. J. Amer. Statist. Assoc. 62 (1967), 403–408.
  15. Hahn, G.J., and Shapiro, S.S.: The use and misuse of multiple regression. Industrial Quality Control 23 (1966), 184–189.
  16. Hamaker, H.C.: On multiple regression analysis. Statistica Neerlandica 16 (1962), 31–56.
  17. Herne, H.: How to cook relationships. The Statistician 17 (1967), 357–370.
  18. Herzberg, P.A.: The Parameters of Cross-Validation. Psychometrika Monograph Supplement (No. 16) 34 (June 1969), 1–70.
  19. Hinchen, J.D.: Multiple regression with unbalanced data. J. Qual. Technol. 2 (1970), No. 1, 22–29.
  20. Hocking, R.R., and Leslie, R.N.: Selection of the best subset in regression analysis. Technometrics 9 (1967), 531–540 [vgl. auch 10 (1968), 432 + 433, 13 (1971), 403-408 u. 14 (1972), 967-970].
  21. Huang, D.S.: Regression and Econometric Methods. (Wiley, pp. 274) New York 1970.
  22. LaMotte, L. R., and Hocking, R. R.: Computational efficiency in the selection of regression variables. Technometrics 12 (1970), 83–93.
  23. Madansky, A.: The fitting of straight lines when both variables are subject to error. J. Amer. Statist. Assoc. 54 (1959), 173–205.
  24. Robinson, E.A.: Applied Regression Analysis. (Holden-Day, pp. 250) San Francisco 1969.
  25. Rutemiller, H.C., and Bowers, D.A.: Estimation in a heteroscedastic regression model. J. Amer. Statist. Assoc. 63 (1968), 552–557.
  26. Schatzoff, M., Tsao, R., and Fienberg, S.: Efficient calculation of all possible regressions. Technometrics 10 (1968), 769–779.
  27. Seber, G.A.F.: The Linear Hypothesis. A General Theory. (No. 19 of Griffin’s Statistical Monographs and Courses, Ch. Griffin, pp. 120) London 1966.
  28. Smillie, K.W.: An Introduction to Regression and Correlation. (Academic Press, pp. 168) New York 1966.
  29. Toro-Vizcarrondo, C., and Wallace, T. D.: A test of the mean square error criterion for restrictions in linear regression. J. Amer. Statist. Assoc. 63 (1968), 558–572.
  30. Ulmo, J.: Problèmes et programmes de regression. Revue de Statistique Appliquée 19 (1971), No. 1, 27–39.
  31. Väliaho, H.: A synthetic approach to stepwise regression analysis. Commentationes Physico-Mathematicae 34 (1969), 91–131 [ergänzt durch 41 (1971), 9-18 und 63-72].
  32. Weber, E.: Biometrische Bearbeitung multipler Regressionen unter besonderer Berücksichtigung der Auswahl, der Transformation und der Linearkombination von Variablen. Statistische Hefte 8 (1967), 228–251 und 9 (1968), 13-33.
  33. Wiezorke, B.: Auswahlverfahren in der Regressionsanalyse. Metrika 12 (1967), 68–79.
  34. Wiorkowski, J.J.: Estimation of the proportion of the variance explained by regression, when the number of parameters in the model may depend on the sample size. Technometrics 12 (1970), 915–919.

Copyright information

© Springer-Verlag Berlin Heidelberg 1974
