ImPACT-TRC Thin Serpentine Robot Platform for Urban Search and Rescue

  • Masashi Konyo
  • Yuichi Ambe
  • Hikaru Nagano
  • Yu Yamauchi
  • Satoshi Tadokoro
  • Yoshiaki Bando
  • Katsutoshi Itoyama
  • Hiroshi G. Okuno
  • Takayuki Okatani
  • Kanta Shimizu
  • Eisuke Ito
Chapter
Part of the Springer Tracts in Advanced Robotics book series (STAR, volume 128)

Abstract

The Active Scope Camera (ASC) is a self-propelled serpentine robot that uses a ciliary vibration drive mechanism for inspection tasks in narrow spaces, but it still lacks the mobility and sensing capabilities needed for search and rescue activities. The ImPACT-TRC program aims to improve the mobility of the ASC drastically by applying a new air-jet actuation system that floats the ASC in the air, and to enhance its searching ability by integrating multiple sensing systems such as vision, auditory, and tactile functions. This paper presents an overview of the air-floating-type Active Scope Camera, integrated with these multiple sensory functions, as a thin serpentine robot platform.

Acknowledgements

This work was supported by the Impulsing Paradigm Change through Disruptive Technologies (ImPACT) Tough Robotics Challenge program of the Japan Science and Technology Agency (JST).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Masashi Konyo (1) (Email author)
  • Yuichi Ambe (1)
  • Hikaru Nagano (1)
  • Yu Yamauchi (1)
  • Satoshi Tadokoro (1)
  • Yoshiaki Bando (2)
  • Katsutoshi Itoyama (3)
  • Hiroshi G. Okuno (4)
  • Takayuki Okatani (1)
  • Kanta Shimizu (1)
  • Eisuke Ito (1)

  1. Tohoku University, Sendai-shi, Miyagi, Japan
  2. National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, Japan
  3. Tokyo Institute of Technology, Meguro-ku, Tokyo, Japan
  4. Waseda University, Shinjuku-ku, Tokyo, Japan
