
A Cybernetic Approach to Characterization of Complex Sensory Environments: Implications for Human Robot Interaction

  • Conference paper
Advances in Human Factors in Robots and Unmanned Systems (AHFE 2017)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 595)


Abstract

Humans increasingly interact and collaborate with robotic and intelligent agents, yet how to make these interactions as effective as possible remains an open question. Here, we argue that a consistent understanding of the environment on the part of both the human and the agent is critical to their interaction, and that basing this understanding solely on the objective features of sensory inputs may be inadequate. To that end, this paper presents a novel approach to a more integrated characterization of the sensory environment, one that encompasses both objective and subjective features of sensory inputs. We propose that an approach to signal and behavioral estimation consistent with the control- and communication-theoretic perspective of Cybernetics could inform human-robot interaction (HRI) applications. Specifically, we offer a potential path toward quantifying similarity among stimulus events that can support consistent understandings of the environment and, when applied to HRI, enhance human-agent communication.
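The abstract's notion of quantifying similarity among stimulus events from both objective and subjective features might be sketched as a weighted blend of per-feature-set similarities. The sketch below is a hypothetical illustration only, not the authors' method: the function names, the use of cosine similarity, the example feature vectors, and the `w_subjective` weighting are all assumptions introduced here for concreteness.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (sqrt(sum(x * x for x in u)) * sqrt(sum(y * y for y in v)))

def event_similarity(obj_a, obj_b, subj_a, subj_b, w_subjective=0.5):
    """Blend objective-feature and subjective-feature similarity.

    obj_*  : objective measurements (e.g., acoustic features)
    subj_* : subjective ratings (e.g., pleasantness, familiarity)
    w_subjective is a free parameter weighting the subjective component.
    """
    s_obj = cosine(obj_a, obj_b)
    s_subj = cosine(subj_a, subj_b)
    return (1.0 - w_subjective) * s_obj + w_subjective * s_subj

# Two events that are acoustically close (objective vectors similar)
# but rated quite differently (subjective vectors diverge):
score = event_similarity([0.9, 0.1, 0.4], [0.8, 0.2, 0.5],
                         [0.2, 0.9], [0.7, 0.3])
print(round(score, 3))
```

In such a scheme, `w_subjective` would express how much a given HRI application values alignment of subjective experience over agreement on raw sensor measurements; two events with identical objective signatures can still score as dissimilar when their subjective profiles diverge.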



Author information

Correspondence to Kelly Dickerson.


Copyright information

© 2018 Springer International Publishing AG (outside the USA)

About this paper

Cite this paper

Dickerson, K., Gaston, J., Oie, K.S. (2018). A Cybernetic Approach to Characterization of Complex Sensory Environments: Implications for Human Robot Interaction. In: Chen, J. (eds) Advances in Human Factors in Robots and Unmanned Systems. AHFE 2017. Advances in Intelligent Systems and Computing, vol 595. Springer, Cham. https://doi.org/10.1007/978-3-319-60384-1_2


  • DOI: https://doi.org/10.1007/978-3-319-60384-1_2


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-60383-4

  • Online ISBN: 978-3-319-60384-1

  • eBook Packages: Engineering (R0)
