
Cognitive Structural Realism, the Nature of Cognitive Models, and some Further Clarifications


Part of the book series: Studies in Brain and Mind (SIBM, volume 14)

Abstract

This chapter concludes the enterprise of this book. It briefly reviews some of the themes unfolded in the book. For example, it highlights the unificatory vocation of CSR, as a theory that seeks to reconcile structural realism with the cognitive models of science approach. This chapter also clarifies the ontological commitments of CSR. It asserts that CSR makes ontological commitments to embodied informational structures. These informational structures can be identified in terms of information processing in biological, cognitive systems. Owing to the embodied nature of these mechanisms and their reciprocal dynamical interactions with the environment, they can be assumed to be entwined with the causal structure of the world. Finally, the chapter offers a comprehensive entropy-based informational framework for characterising the informational structure of CSR.

Parts of this chapter are reprinted with the kind permission of Springer.


Notes

  1.

    Defined as \( H(X)=-\sum_{i=1}^{n}P\left({x}_i\right)\log P\left({x}_i\right) \), where the entropy H(X) of a collection of messages is expressed in terms of a probability distribution P over the set of messages.
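    As an illustrative sketch (not from the text), the quantity defined in this note can be computed directly from a probability distribution; the choice of log base 2, which gives the result in bits, is an assumption:

    ```python
    import math

    def shannon_entropy(probs):
        """Shannon entropy H(X) = -sum_i P(x_i) * log2 P(x_i), in bits.
        Terms with P(x_i) = 0 are skipped, by the convention 0 log 0 = 0."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A uniform distribution over four messages carries 2 bits of information.
    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # → 2.0
    ```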

  2.

    The paradox draws attention to a seemingly self-contradictory aspect of the probabilistic (weakly semantical) theories of information. It holds that:

    [i]t might perhaps, at first, seem strange that a self-contradictory sentence, hence one which no ideal receiver would accept, is regarded as carrying with it the most inclusive information. It should, however, be emphasized that semantic information is here not meant as implying truth. A false sentence which happens to say much is thereby highly informative in our sense. Whether the information it carries is true or false, scientifically valuable or not, and so forth, does not concern us. A self-contradictory sentence asserts too much; it is too informative to be true. (quoted from Floridi 2004, 197).

  3.

    Gibbs entropy is derived from the free energy F = −T ln Z, where F is the free energy, T is the fixed temperature, and Z is the partition function, \( Z=\sum_i {e}^{-{\epsilon}_i/T} \).
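    A minimal numerical sketch of this relation, in units where Boltzmann's constant is set to 1 (an assumption made for illustration):

    ```python
    import math

    def free_energy(energies, T):
        """Free energy F = -T ln Z, with partition function
        Z = sum_i exp(-eps_i / T), in units where k_B = 1."""
        Z = sum(math.exp(-e / T) for e in energies)
        return -T * math.log(Z)

    def gibbs_entropy(energies, T):
        """Gibbs entropy S = -sum_i p_i ln p_i over the Boltzmann
        weights p_i = exp(-eps_i / T) / Z."""
        Z = sum(math.exp(-e / T) for e in energies)
        probs = [math.exp(-e / T) / Z for e in energies]
        return -sum(p * math.log(p) for p in probs)

    # Two levels of equal energy: p_i = 1/2 each, so S = ln 2 and F = -T ln 2.
    print(gibbs_entropy([0.0, 0.0], T=1.0))
    ```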

  4.

    To be clear, it is possible to regiment PPT in terms of an entropy-based framework. To flesh out this claim, I argued that the sparse coding strategy of the brain (which is relational and difference-based) underpins PPT. It is worth mentioning that sparse coding could be assimilated into the entropy-based informational framework that is presented here. Lee and Yu, among others, presented an information-theoretic formulation of sparse coding to suggest that the entropy of a neuronal ensemble in a hypercolumn (which contains roughly 200,000 neurons tasked with analysing different aspects of the image in its visual window) can be used to quantify the strangeness of a particular event. Entropy is an information-theoretic measure that can capture the complexity or variability of signals. The entropy of a hypercolumn's ensemble response at a given time is the sum of the entropies of all its channels (Lee and Yu 2000, 836). The entropy can be calculated through the following equation:

    $$ H\left(u\left({R}_x,t\right)\right)=-\sum_{\vartheta, \sigma, \theta }p\left(u\left({R}_x,\vartheta, \sigma, \theta, t\right)\right){\log}_2p\left(u\left({R}_x,\vartheta, \sigma, \theta, t\right)\right) $$

    where u(Rx, t) refers to the responses of the cell channels within the visual window Rx of a hypercolumn at time t, and u(Rx, ϑ, σ, θ, t) denotes the response of a V1 complex cell channel of a particular scale σ and orientation θ at the given spatial location (see Lee and Yu 2000, 836). This adds up to the conclusion that sparse coding is the basic strategy that underlies the models of neural mechanisms, and that it admits of a clear information-theoretic interpretation.
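    To make the sum-of-channel-entropies reading concrete, here is a hedged sketch; the binning scheme, the channel representation as lists of response samples, and the additivity across channels are illustrative assumptions, not details from Lee and Yu:

    ```python
    import math
    from collections import Counter

    def channel_entropy(samples, bins=8, lo=0.0, hi=1.0):
        """Estimate one cell channel's response entropy (in bits) by
        discretising its response samples into equal-width bins over [lo, hi)."""
        width = (hi - lo) / bins
        counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
        n = len(samples)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def ensemble_entropy(channels):
        """Entropy of a hypercolumn's ensemble response, taken as the sum
        of the entropies of all its cell channels."""
        return sum(channel_entropy(s) for s in channels)

    # A channel with a constant response is maximally predictable: entropy 0.
    flat = [0.5] * 100
    # A channel whose responses spread evenly across the bins is highly variable.
    spread = [i / 100 for i in range(100)]
    print(ensemble_entropy([flat, spread]))
    ```

    On this reading, a sparse code keeps most channels near their baseline (low per-channel entropy) while a few channels carry the variability, which is what makes the ensemble entropy a usable measure of how surprising a stimulus is.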



Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Beni, M.D. (2019). Cognitive Structural Realism, the Nature of Cognitive Models, and some Further Clarifications. In: Cognitive Structural Realism. Studies in Brain and Mind, vol 14. Springer, Cham. https://doi.org/10.1007/978-3-030-05114-3_8
