High-performance internet file system based on multi-download for convergence computing in mobile communication systems

  • Youngjun Moon
  • Seokhoon Kim
  • Intae Ryoo


This paper proposes the Enhanced Internet File System (EIFS), whose structure consists of a data manager, a disk cache, and a multi-download module, to improve on the performance of the Common Internet File System (CIFS), and presents an evaluation of its performance. The data manager manages data by enlarging the page unit used by the local file system from 4 KB to 1 MB. The multi-download module improves transfer speed by dividing a file into 1-MB sections and downloading them simultaneously over multiple sessions. The disk cache uses internal memory as a cache for data received from the network; data is stored in the cloud in the form of a virtual image, and only the file pages requested by the application are cached. EIFS thus speeds up the download of a single file by processing two or three sessions concurrently. Because EIFS supports VFS and conforms to the POSIX standard, it can be used on multiple systems. Its performance was evaluated by measuring download speed and application execution time in a mobile network environment. The results show that both EIFS 2 (two sessions) and EIFS 3 (three sessions) download files faster than CIFS, and that the proposed technique outperforms the CIFS file system in terms of both download speed and application execution time.
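The core multi-download idea in the abstract can be illustrated with a minimal Python sketch: split a file into fixed-size sections (1 MB in EIFS), fetch the sections concurrently over several worker sessions, and reassemble them in order. This is only an illustration of the general technique, not the paper's implementation; `fetch_section` is a hypothetical stand-in for one CIFS session's ranged read, and all names here are illustrative.

```python
# Sketch of sectioned multi-session download (assumptions noted above):
# the file is split into 1-MB (offset, length) ranges, each range is
# fetched by a worker "session", and the sections are joined in order.
from concurrent.futures import ThreadPoolExecutor

SECTION_SIZE = 1 * 1024 * 1024  # 1-MB sections, as in EIFS


def split_ranges(file_size, section_size=SECTION_SIZE):
    """Return (offset, length) pairs covering the whole file."""
    return [(off, min(section_size, file_size - off))
            for off in range(0, file_size, section_size)]


def fetch_section(source, offset, length):
    # Hypothetical stand-in for one session's ranged read from the server;
    # here it simply slices an in-memory byte string.
    return source[offset:offset + length]


def multi_download(source, sessions=3):
    """Fetch `source` in sections using `sessions` concurrent workers."""
    ranges = split_ranges(len(source))
    with ThreadPoolExecutor(max_workers=sessions) as pool:
        # map() yields results in submission order, so the sections can
        # be concatenated directly to reconstruct the original file.
        parts = pool.map(lambda r: fetch_section(source, *r), ranges)
    return b"".join(parts)
```

The sketch uses a thread pool because the workload is I/O-bound; the ordered `map()` keeps reassembly trivial, at the cost of buffering every section before joining.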


Keywords: CIFS, NFS, Network file system, Multi-download, Disk cache



This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B03032777), and this work was supported by the Soonchunhyang University Research Fund.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Engineering, Kyung Hee University, Seoul, Korea
  2. Department of Computer Software Engineering, Soonchunhyang University, Asan, Korea
