Part of the book series: Advances in Computer Vision and Pattern Recognition (ACVPR)

Abstract

Chapters 4 and 5 have discussed the use of decision forests in supervised tasks, i.e. when labeled training data are available. In contrast, this chapter discusses the use of forests in unlabeled scenarios. For instance, one important task is discovering the intrinsic nature and structure of large sets of unlabeled data. This task can be tackled via another probabilistic model, the density forest. Density forests are explained here as an instantiation of our abstract decision forest model, described in Chap. 3.
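
To make the abstract's recipe concrete, here is a minimal, hypothetical Python sketch of a density forest (not the authors' implementation). It assumes the general construction described in this book: each tree partitions the data with randomized axis-aligned splits chosen by an unsupervised information gain (based on the log-determinant of the Gaussian covariance fitted to each partition), each leaf stores a Gaussian weighted by the fraction of training points reaching it, and the forest density is the average of the per-tree densities. All names are invented for the example, and the per-leaf boundary normalization (the partition function treated in the chapter) is omitted for brevity.

    import numpy as np
    from scipy.stats import multivariate_normal

    def logdet_cov(X):
        # Log-determinant of the sample covariance: the data-dependent part
        # of the differential entropy of a Gaussian fitted to X.
        cov = np.atleast_2d(np.cov(X.T)) + 1e-6 * np.eye(X.shape[1])
        return np.linalg.slogdet(cov)[1]

    def build_tree(X, n_total, rng, depth=0, max_depth=4, min_samples=20):
        if depth == max_depth or len(X) < min_samples:
            cov = np.atleast_2d(np.cov(X.T)) + 1e-6 * np.eye(X.shape[1])
            return ("leaf", len(X) / n_total, X.mean(axis=0), cov)
        best = None
        for _ in range(10):                      # randomized node optimization
            d = rng.integers(X.shape[1])         # candidate axis-aligned split
            t = rng.uniform(X[:, d].min(), X[:, d].max())
            L, R = X[X[:, d] < t], X[X[:, d] >= t]
            if len(L) < 2 or len(R) < 2:
                continue
            # Unsupervised information gain: parent entropy term minus the
            # size-weighted entropy terms of the two children.
            gain = logdet_cov(X) - (len(L) * logdet_cov(L)
                                    + len(R) * logdet_cov(R)) / len(X)
            if best is None or gain > best[0]:
                best = (gain, d, t, L, R)
        if best is None:
            cov = np.atleast_2d(np.cov(X.T)) + 1e-6 * np.eye(X.shape[1])
            return ("leaf", len(X) / n_total, X.mean(axis=0), cov)
        _, d, t, L, R = best
        return ("split", d, t,
                build_tree(L, n_total, rng, depth + 1, max_depth, min_samples),
                build_tree(R, n_total, rng, depth + 1, max_depth, min_samples))

    def tree_density(node, x):
        while node[0] == "split":
            _, d, t, left, right = node
            node = left if x[d] < t else right
        _, pi, mean, cov = node                  # leaf: weighted Gaussian
        return pi * multivariate_normal.pdf(x, mean, cov)

    def forest_density(trees, x):
        # The forest estimate is the average of the per-tree densities.
        return float(np.mean([tree_density(t, x) for t in trees]))

    # Toy usage: a two-mode 2D distribution.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal([-2.0, 0.0], 0.5, (200, 2)),
                   rng.normal([2.0, 1.0], 0.8, (200, 2))])
    trees = [build_tree(X, len(X), np.random.default_rng(s)) for s in range(5)]
    print(forest_density(trees, np.array([2.0, 1.0])))   # near a mode: relatively high
    print(forest_density(trees, np.array([0.0, 5.0])))   # off the data: near zero

Averaging across several randomized trees smooths the blocky density estimate of any single tree, the same ensemble effect exploited by the classification and regression forests of Chaps. 4 and 5.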

Notes

  1. Better alternatives, perhaps incorporating priors, may be employed here.

  2. A split function is usually applied only to a small, selected subset of features ϕ(v), and can thus be computed efficiently, i.e. B is very small (see the sketch after these notes).

  3. No use is made of the ground truth density in this stage.

  4. See "consistent estimator" in Wikipedia for a definition of consistency; informally, an estimator is consistent if it converges in probability to the true quantity as the number of training samples grows.
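
As a concrete reading of note 2, the hypothetical sketch below shows why evaluating a split is cheap: the feature selector ϕ(v) reads only B of the (possibly very many) dimensions of v, so a linear test of the form h(v; θ) = [ϕ(v) · ψ > τ] costs O(B) regardless of the full dimensionality. The particular dimensions and parameters here are invented for illustration.

    import numpy as np

    def make_split(selected_dims, psi, tau):
        # h(v) = [ phi(v) . psi > tau ], where phi(v) reads only
        # B = len(selected_dims) of the dimensions of v. Evaluation is O(B).
        def h(v):
            phi_v = v[selected_dims]      # feature selector phi: B values only
            return phi_v @ psi > tau      # linear ("oriented hyperplane") test
        return h

    v = np.arange(1000.0)                 # a 1000-dimensional data point
    h = make_split(np.array([3, 17]), np.array([1.0, -0.5]), 0.0)   # B = 2
    print(h(v))                           # False: 3.0 * 1.0 + 17.0 * (-0.5) = -5.5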

Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Criminisi, A., Shotton, J. (2013). Density Forests. In: Criminisi, A., Shotton, J. (eds) Decision Forests for Computer Vision and Medical Image Analysis. Advances in Computer Vision and Pattern Recognition. Springer, London. https://doi.org/10.1007/978-1-4471-4929-3_6

  • DOI: https://doi.org/10.1007/978-1-4471-4929-3_6

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-4928-6

  • Online ISBN: 978-1-4471-4929-3

  • eBook Packages: Computer Science (R0)
