You Can Write Numbers Accurately on Your Hand with Smart Acoustic Sensing

Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 234)

Abstract

Although the smartwatch has attracted much attention in recent years, its small screen and inconvenient interaction mode limit its prevalence. Writing numbers by hand naturally extends the input interface of a smartwatch. In this work, we design a passive acoustic sensing scheme in which the smartwatch records the ambient sound produced while writing. First, we use the wavelet transform to mitigate surrounding noise and construct time-frequency images for learning-based processing. We then apply a CNN (Convolutional Neural Network) model for number recognition, consisting of three convolutional layers and three max-pooling layers. The number recognition accuracy is above 95% for a single well-trained person and around 92% when 7 to 9 persons are involved.
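
The following is a minimal sketch of the pipeline described above (wavelet denoising, time-frequency image, three-conv/three-max-pool CNN). It assumes a mono recording and uses PyWavelets, SciPy, and PyTorch; the wavelet family, spectrogram window, channel widths, and threshold rule are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch only: parameters and layer sizes are assumptions, not the paper's exact setup.
import numpy as np
import pywt
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Suppress ambient noise by soft-thresholding wavelet detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest-scale coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

def time_frequency_image(signal, fs=44100):
    """Turn the denoised waveform into a log-magnitude time-frequency image."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=512, noverlap=256)
    return np.log1p(sxx)

class NumberCNN(nn.Module):
    """Three convolutional layers and three max-pooling layers, then a 10-way classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):  # x: (batch, 1, freq_bins, time_frames)
        return self.classifier(self.features(x))
```

As a usage example, a denoised writing sound would be converted to an image with time_frequency_image, stacked into a (batch, 1, freq, time) tensor, and fed to NumberCNN to predict the written digit.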

Keywords

Smartwatch · Wavelet transformation · CNN

Notes

Acknowledgement

This research is partially supported by the National Key Research and Development Plan under Grant 2017YFB0801702; NSFC under Grants 61772546, 61632010, 61232018, 61371118, 61402009, 61672038, and 61520106007; the China National Funds for Distinguished Young Scientists under Grant 61625205; the Key Research Program of Frontier Sciences, CAS, under Grant QYZDY-SSW-JSC002; and the NSF of Jiangsu for Distinguished Young Scientists under Grant BK20150030.

Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018

Authors and Affiliations

  1. College of Communications Engineering, PLA Army Engineering University, Nanjing, China
  2. School of Computer Science and Technology, University of Science and Technology of China, Hefei, China