Twisting Features with Properties

  • Conference paper
Neural Nets WIRN Vietri-01

Part of the book series: Perspectives in Neural Computing


Abstract

We take three steps toward shifting probability from a descriptive tool for unpredictable events to a way of understanding them. At a very elementary level we state an operational definition of probability based solely on symmetry assumptions about the observed data. This definition nevertheless converges to Kolmogorov's within a special law-of-large-numbers regime, which represents a first way of twisting features observed in the data with properties expected in the next observations. Under this meaning of probability we fix a general sampling mechanism for generating random variables and extend our twisting device to computing probability distributions over population properties on the basis of the likelihood of the observed features. Here the core of randomness translates from the above symmetry assumptions into a generator of uniform random variables on the unit interval. To discover suitable features (classically defined as sufficient statistics), we refer directly to the notion of Kolmogorov complexity, and to the coding theorem in particular. This connects the features to the inner structure of the observed data through concise computer codes that describe them within a well-equipped computational framework.
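As a minimal illustration (our own sketch, not taken from the paper) of the sampling mechanism the abstract alludes to, the following Python snippet derives a non-uniform random variable from a uniform seed on the unit interval via the inverse-CDF transform, and checks that an empirical frequency converges toward the corresponding probability, in the law-of-large-numbers spirit described above. The exponential distribution and all names here are illustrative choices, not the authors' construction.

```python
import math
import random

def sample_exponential(rate, rng):
    # Inverse-transform sampling: map a uniform seed u in (0, 1)
    # through the inverse CDF F^{-1}(u) = -ln(1 - u) / rate.
    u = rng.random()
    return -math.log(1.0 - u) / rate

rng = random.Random(42)
rate = 2.0
n = 200_000
draws = [sample_exponential(rate, rng) for _ in range(n)]

# The empirical frequency of the event {X <= 1} approaches its
# probability F(1) = 1 - exp(-rate) as the sample size grows.
freq = sum(x <= 1.0 for x in draws) / n
prob = 1.0 - math.exp(-rate)
print(abs(freq - prob))  # small for large n
```

Any distribution with an invertible CDF can be generated the same way, which is why a unit-uniform generator can serve as the single randomness core.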

This new statistical framework allows us to recover and improve results on computational learning at both the subsymbolic and symbolic stages, sketching a single shell in which the full trip from sensory data to their conceptual management might occur.
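Since Kolmogorov complexity itself is uncomputable, a common computable stand-in for "concise computer codes describing the data" is the length of a compressed representation, which upper-bounds the true complexity. The sketch below (our illustration, not the paper's method) uses Python's zlib to show that regular data admits a much shorter description than irregular data.

```python
import hashlib
import zlib

def compressed_length(data: bytes) -> int:
    # Compressed size is a computable upper bound on the length of
    # a program describing the data; K(x) itself is uncomputable.
    return len(zlib.compress(data, 9))

# Highly regular sequence: a short code ("repeat 'ab' 500 times") suffices.
structured = b"ab" * 500

# Irregular sequence: SHA-256 digests are effectively incompressible.
random_ish = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))

print(compressed_length(structured) < compressed_length(random_ish))  # True
```

In this view, a good feature is one that captures the regularity the compressor exploits, tying the statistic to the inner structure of the observed data.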




Corresponding author

Correspondence to B. Apolloni.


Copyright information

© 2002 Springer-Verlag London Limited

About this paper

Cite this paper

Apolloni, B., Malchiodi, D., Zoppis, I., Gaito, S. (2002). Twisting Features with Properties. In: Tagliaferri, R., Marinaro, M. (eds) Neural Nets WIRN Vietri-01. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0219-9_33

  • DOI: https://doi.org/10.1007/978-1-4471-0219-9_33

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-85233-505-2

  • Online ISBN: 978-1-4471-0219-9

  • eBook Packages: Springer Book Archive
