
Unraveling the Challenges for the Application of Fog Computing in Different Realms: A Multifaceted Study

  • John Paul Martin
  • A. Kandasamy
  • K. Chandrasekaran
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 771)

Abstract

Fog computing is an emerging paradigm that distributes data and computation across intermediate layers between the cloud and the edge. Cloud computing was introduced to support the increasing computing requirements of users. Later, it was observed that end users experienced delays when uploading large amounts of data to the cloud for processing. Such a centralized approach did not provide a good user experience. To overcome this limitation, processing capability was incorporated into devices at the edge, which led to the rise of edge computing. This paradigm suffered because edge devices have limited computing resources and storage. Relying on edge devices alone was not sufficient. Thus, a paradigm was needed that avoids both the delay in uploading to the cloud and the resource availability constraints at the edge. This is where fog computing came into existence. The paradigm involves the establishment of fog nodes at different levels between the edge and the cloud. Fog nodes can be different entities, such as personal computers (PCs). There are different realms where fog computing may be applied, such as vehicular networks and the Internet of Things. In all realms, resource management decisions will vary based on environmental conditions. This chapter attempts to classify the various approaches for managing resources in the fog environment based on their application realm, and to identify future research directions.

1 Introduction

The latest techniques in pervasive and ubiquitous computing aim to make the world totally connected. The constant availability of computing devices empowers mobile users by providing them access to a range of cost-effective and high-performance services in different environments, including smart homes, smart cities and smart vehicles. These smart devices have the potential to interconnect distinct physical and business worlds, and their usage in real-time complex applications helps to generate efficient, comfortable, simple solutions [18]. The number of devices connected to the Internet is estimated to increase to 50 billion by 2020 [16]. A large number of these devices generate huge volumes of heterogeneous data, but the limited computing and processing capabilities of these edge devices make them unsuitable for processing these data and deriving useful results [13]. To overcome these issues, smart devices collaborate with cloud computing [11]. Use of the cloud provides efficient processing power and unlimited storage for a heterogeneous volume of data [23]. Even though much effort has been focused on efficient integration of cloud and Internet of Things (IoT), the rapid explosion in the number of services and edge devices makes the traditional cloud, where data and processing are handled by a few centralized servers [3], insufficient for all applications. There are many IoT applications that require low latency (real-time applications), mobility support and location-aware processing [20]. Given existing network configurations and low bandwidth, relying on the cloud environment alone to carry out analytics operations has proved inadequate, especially for applications that are not delay-tolerant [2]. Researchers therefore aim to carry out processing activities nearer the end user, where data are generated, rather than in distant cloud data centers.
The technology that incorporates data processing capabilities at the edge of the network, rather than holding that processing power solely in a cloud or a central data warehouse, is called edge computing [26]. The major drawback of edge computing is the lack of scalability [1]. The limitations of edge computing and cloud computing for real-time applications led researchers to propose a new computing model called fog computing, which can overcome these issues and handle all scenarios efficiently. Fog computing brings the computation closer to the user devices rather than deploying it on the user devices alone. Because fog computing is an emerging technology, there are still ambiguities as to what can constitute a fog node, which scenarios can be enhanced by introducing fog nodes, how resource management challenges vary across domains, and so on. This chapter investigates the concepts used in fog computing and the challenges faced while implementing it in different application scenarios. The main contributions of the chapter are as follows:
  • Identifying the expected capabilities of the fog computing system and analyzing the domains that can be enhanced with the application of fog computing.

  • Identifying the limitations of the existing models and requirements of applications, which all point to the need for a fog computing system.

  • Providing a detailed review of the orchestration issues in the fog computing environment across different application domains.

  • Analyzing the challenges involved and future research directions in fog computing.

The remainder of the chapter is organized as follows. Section 2 summarizes the definition and architecture of fog computing and characteristics of fog nodes. Section 3 surveys the major orchestration issues in fog computing in different application domains. Section 4 presents the major challenges and possible research directions, and Sect. 5 concludes the chapter.
Fig. 1

Fog computing reference architecture

2 Fog Computing

Requirements for real-time applications have led to the concept of bringing computation closer to user devices rather than onto user devices, a concept widely known as fog computing [4, 7]. Fog provides decentralized processing power and distributed intelligence for fast processing of the heterogeneous, high-volume data generated by various real-time applications. Processing real-time data involves making complex decisions, such as choosing the right processing location at the right time [8]. Figure 1 gives a model architecture of the fog environment. Various types of edge devices are present closest to users. Fog nodes are generally placed at a one-hop distance from edge devices. Streaming data collected by edge devices are transferred to the fog nodes, rather than transported in bulk to the cloud, which can lead to a significant reduction in latency. Fog nodes receive and process these data and make decisions that are communicated back to the edge devices. Data required for future analysis are transported from the fog node to the cloud, which provides persistent storage mechanisms.

Fog nodes, or fog servers, are placed at different levels between the cloud and the edge of the network to enable efficient data storage, processing and analysis of data, and they reduce latency by limiting the amount of data transported to the cloud. Location, number and hierarchy of fog nodes are all decided on a case-by-case basis. The research in this field is still in its infancy, which is evident from the article by Tordera et al. [21], where the authors attempt to reach a consensus on what can be considered a fog node. However, the fog computing concept can be adopted by different application domains to better serve user requests. In the next section, we review a subset of applications that may be made more efficient by employing fog computing techniques.

3 Applicability of Fog in Different Domains and Challenges Involved

Fog computing plays a significant role in IoT. However, in recent research works, the applicability of fog computing in other networking systems (mobile networks, content distribution networks, radio access networks, vehicular networks, etc.) has also been highlighted. Orchestration and resource management issues in different scenarios depend on context. Our aim is to categorize works in the literature and obtain an overview of the management challenges in fog computing across the various domains. The proposed taxonomy for classifying the research works is illustrated in Fig. 2.
Fig. 2

Taxonomy for application domains of fog computing

3.1 Internet of Things

Wen et al. [24] explored the challenges involved in creating an orchestrator for IoT-based fog environments. The uncertain working environment of IoT applications may trigger internal transformations and corresponding dynamic changes in workflow components, so a dynamic orchestrator capable of delivering efficient services is required. They proposed a framework based on a parallel genetic algorithm in Spark, with experiments conducted on Amazon Web Services servers. Wang et al. [22] proposed a resource allocation framework for big multimedia data generated by IoT devices at the cloud, edge and fog. They also proposed a multimedia communication paradigm for the application layer based on premium prioritization for big volumes of data in wireless multimedia communications. Their framework can incorporate the diversity of multimedia data at the application layer. They analyzed dependence in the spatial, frequency and temporal domains, and investigated how resource allocation strategies ensure energy efficiency in a quality-of-service (QoS)-driven environment.
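The orchestration approach in [24] builds on a parallel genetic algorithm. As a rough illustration of the underlying idea (not the authors' actual Spark-based implementation), a minimal genetic algorithm can search for a low-latency placement of workflow components onto fog nodes; the latency matrix, population size and operators below are illustrative assumptions.

```python
import random

def ga_placement(latency, pop_size=30, gens=60, seed=7):
    """Tiny GA: place each of T tasks on one of N fog nodes so that the
    total cost sum(latency[task][node]) is minimized. An individual is a
    list mapping task index -> node index."""
    random.seed(seed)
    tasks, nodes = len(latency), len(latency[0])

    def cost(ind):
        return sum(latency[t][ind[t]] for t in range(tasks))

    # Random initial population.
    pop = [[random.randrange(nodes) for _ in range(tasks)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                     # elitism: keep the best half
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, tasks)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:          # occasional mutation
                child[random.randrange(tasks)] = random.randrange(nodes)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)
```

With a latency matrix where each task has one clearly cheap node, the GA quickly converges to the per-task optimum.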

3.2 Vehicular Networks

Chen et al. [5] explored the feasibility of fog computing in the area of vehicular networks and proposed two dynamic data scheduling algorithms for fog-enabled vehicular networks. They proposed an efficient three-layer vehicular cloud network (VCN) architecture for real-time exchange of data through the network, with resource management controlled in a road-side cloud layer. The first scheduling algorithm is based on the classical Join the Shortest Queue (JSQ) policy, and the second is a fully dynamic algorithm based on predicting the time required for each incoming task. The performance of the proposed algorithms is evaluated by modeling case scenarios through a compositional approach called Performance Evaluation Process Algebra (PEPA).
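The first scheduler in [5] builds on the classical Join the Shortest Queue policy. A minimal sketch of JSQ dispatch, with illustrative task names and node counts rather than the authors' vehicular setup:

```python
def jsq_dispatch(tasks, num_nodes):
    """Assign each incoming task to the fog node whose queue is currently
    shortest. Returns one queue (list of task ids) per node."""
    queues = [[] for _ in range(num_nodes)]
    for task in tasks:
        # Pick the node currently holding the fewest queued tasks.
        target = min(range(num_nodes), key=lambda i: len(queues[i]))
        queues[target].append(task)
    return queues

queues = jsq_dispatch(["t1", "t2", "t3", "t4", "t5"], 3)
# queue lengths end up balanced: [2, 2, 1]
```

JSQ assumes queue lengths are a good proxy for waiting time; the second algorithm in [5] drops that assumption by predicting per-task service times instead.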

3.3 Cyber-Physical Systems

Gu et al. [12] analyzed the integration of fog computing in medical cyber-physical systems (MCPS). The major limitations of cloud computing in cyber-physical systems are the instability and high delay of the links between cloud providers and end devices. The authors claim that integrating fog computing in MCPS improves QoS and other non-functional properties (NFPs). They proposed a cost-efficient solution to the resource management problem that maintains the required QoS. The cost minimization problem is formulated as a mixed integer non-linear programming (MINLP) problem with constraints covering base station association, task distribution, virtual machine (VM) placement and QoS. Zeng et al. [25] explored the features of fog computing that support software-defined embedded systems and developed a model for embedded systems backed by a fog computing environment. They also proposed an effective resource management and task scheduling mechanism for such an environment, with the objective of minimizing task completion time; this problem is likewise treated as an MINLP problem. They formulated constraints for storage and task completeness and proposed a three-stage heuristic algorithm for solving the computationally complex MINLP problem. The algorithm uses the concept of “partition and join”: the first two stages minimize I/O time and computation time independently, and joint optimization is performed in the last stage.
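The actual solution in [25] is a three-stage heuristic for an MINLP formulation. As a much-simplified illustration of the completion-time objective only, a greedy earliest-finish-time placement rule can be sketched as follows; the task sizes and node speeds are illustrative assumptions, not the paper's model.

```python
def earliest_finish_placement(task_sizes, node_speeds):
    """Greedily assign each task to the fog node where it would finish
    earliest, given each node's processing speed and current backlog.
    Returns (assignment list, makespan)."""
    finish = [0.0] * len(node_speeds)   # current finish time per node
    assignment = []
    for size in task_sizes:
        # Finish time if this task were appended to each node's queue.
        candidate = [finish[i] + size / node_speeds[i]
                     for i in range(len(node_speeds))]
        best = min(range(len(node_speeds)), key=candidate.__getitem__)
        finish[best] = candidate[best]
        assignment.append(best)
    return assignment, max(finish)
```

For tasks of size [4, 2, 2] on nodes with speeds [2, 1], the rule places the large task on the fast node and balances the rest, giving a makespan of 3.0.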

3.4 Mobile Network/Radio Access Network

Dao et al. [6] proposed an adaptive resource balancing scheme for improving serviceability in fog radio access networks. They used a back-pressure algorithm to balance resource block utilization through service migration among remote radio heads (RRHs) on a time-varying network topology. Peng et al. [19] surveyed recent developments in socially aware mobile networking in fog radio access networks (F-RANs). Their main focus was radio resource allocation and performance analysis in the F-RAN environment. The presence of local caching and adaptive mode selection makes F-RANs more complex and advanced; the authors examined how these two features affect spectral efficiency, energy efficiency and latency.
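Back-pressure scheduling, the basis of the scheme in [6], shifts load along the links with the largest load differential. A toy single-step sketch, using illustrative load values rather than real resource-block utilization figures:

```python
def backpressure_step(load, links, capacity=1):
    """One back-pressure iteration: for each link, shift up to `capacity`
    units of load from the heavier endpoint toward the lighter one."""
    load = dict(load)  # do not mutate the caller's dict
    # Serve links in decreasing order of initial pressure (load differential).
    for a, b in sorted(links, key=lambda l: -abs(load[l[0]] - load[l[1]])):
        diff = load[a] - load[b]
        if abs(diff) >= 2:  # migrate only when the differential exceeds one unit
            src, dst = (a, b) if diff > 0 else (b, a)
            moved = min(capacity, abs(diff) // 2)
            load[src] -= moved
            load[dst] += moved
    return load
```

Repeated over a time-varying topology, each step conserves total load while flattening the differences between neighboring nodes, which is the balancing effect [6] exploits for service migration among RRHs.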

3.5 Content Distribution Network

A content distribution network (CDN) is composed of distributed proxy servers that provide content to end users, ensuring high performance and availability. Zhu et al. [27] proposed a method for optimizing web performance by taking advantage of the fog computing architecture: fog server caches hold recently accessed details, and this content is used to improve performance. Do et al. [10] explored the use of fog computing in content delivery systems. Using fog nodes at the edge of the network for streaming services reduces latency and improves QoS. They proposed a distributed solution for joint resource allocation in a distributed fog environment that also minimizes the carbon footprint. The problem is formulated as a general convex optimization problem, and the proposed solution is built on proximal algorithms rather than conventional methods.
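Proximal algorithms, on which [10] builds, minimize objectives of the form smooth term plus non-smooth term by alternating a gradient step with a proximal operator. A minimal scalar example (unrelated to the paper's actual carbon-footprint model, with illustrative parameters) minimizes 0.5(x - a)^2 + lam*|x| via the soft-thresholding proximal operator:

```python
def soft_threshold(x, t):
    """Proximal operator of t*|x| (soft-thresholding)."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def proximal_gradient(a, lam, step=0.5, iters=200):
    """Minimize 0.5*(x - a)**2 + lam*|x| by proximal gradient descent
    (the ISTA iteration): gradient step on the smooth part, then prox."""
    x = 0.0
    for _ in range(iters):
        grad = x - a                                # gradient of 0.5*(x - a)**2
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

The iteration converges to the closed-form solution soft_threshold(a, lam); for a = 3 and lam = 1 that is x = 2, and for |a| below lam the minimizer is exactly 0.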
Table 1

Summary of the methodologies adopted by existing research works for management of the fog environment

| Research work | Authors | Objective | Methodology | Limitations |
|---|---|---|---|---|
| [25] | Deze Zeng et al. | Resource management and task scheduling in fog-enabled embedded systems | Formulated as a mixed-integer non-linear programming problem and solved | Memory management not considered |
| [12] | Lin Gu et al. | Resource management in cyber-physical systems supported by fog computing | Two-phase heuristic algorithm for solving the resource management problem | Only limited QoS parameters considered |
| [9] | Ruilong Deng et al. | Optimal workload allocation among cloud and fog computing environments | Primal problem divided into three sub-problems and solved | Only power consumption and delay considered for allocation |
| [22] | Wei Wang et al. | Resource allocation framework for big multimedia data generated by IoT devices | Framework based on premium prioritization for big volumes of data | Feasibility in a real-time scenario not considered |
| [5] | Xiao Chen et al. | Dynamic data scheduling algorithms for fog-enabled vehicular networks | Three-layer vehicular cloud network architecture for real-time data exchange | Performance evaluated only through a formal approach |
| [24] | Zhenyu Wen et al. | Creation of an orchestrator in an IoT-based fog environment | Solution based on parallel genetic algorithms | Does not consider a massive-scale system |
| [17] | Lina Ni et al. | Resource allocation and scheduling strategy for fog computing environments | Resource allocation strategy based on a priced timed Petri net (PTPN) | Not all performance metrics considered |
| [14] | Jingtao et al. | Efficient method to share resources in a fog cluster | Steiner tree from graph theory used to develop a caching method | Does not perform well on topologies with fewer fog nodes |
| [27] | Zhu et al. | Web optimization using fog computing | Recently accessed content in the fog server cache used to improve performance | Only a proof-of-concept system developed |
| [6] | N.N. Dao et al. | Resource balancing scheme for improving serviceability in fog radio access networks | Back-pressure algorithm used to implement the scheme | Number of service migrations under a changing network not considered |

3.6 Conventional Fog Environments

Deng et al. [9] proposed a framework for optimal workload allocation among cloud and fog computing environments. Minimal power consumption and service delay are the two major constraints considered when formulating the primal problem. They developed an approximate approach for solving the primal problem by dividing it into three sub-problems of corresponding subsystems. Ni et al. [17] proposed a resource allocation and scheduling strategy for fog computing environments. The objectives of the proposed schemes were to maximize resource utilization, fulfill all QoS requirements and maximize profit for both providers and users. Their dynamic resource allocation strategy is based on a priced timed Petri net (PTPN), by which the user is allowed to pick resources that satisfy requirements from a pool of pre-allocated resources, considering the price cost and time cost of completing a task. Jingtao et al. [14] proposed an efficient method for sharing resources in a fog cluster to provide local service to mobile users in a cost-efficient manner. The fog cluster is composed of many function-specific servers. They applied the Steiner tree from graph theory to develop this caching method in a fog environment: a graph G is composed in which vertices represent the servers and edges represent the connections between them, and the Steiner tree is used to find a minimum-cost subgraph of G spanning the relevant servers. A summary of the research works considered in this section is provided in Table 1.
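The Steiner-tree idea in [14] can be illustrated with the classical 2-approximation: compute shortest-path distances between the terminal servers (the metric closure of G) and take a minimum spanning tree over those distances. The graph weights and node names below are illustrative, not taken from the paper.

```python
import heapq

def shortest_paths(graph, src):
    """Dijkstra: distances from src in an undirected weighted graph
    given as {node: {neighbor: weight}}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def steiner_cost_2approx(graph, terminals):
    """2-approximate Steiner tree cost: Prim's MST over the terminals,
    using metric-closure (shortest-path) distances as edge weights."""
    closure = {t: shortest_paths(graph, t) for t in terminals}
    in_tree = {terminals[0]}
    cost = 0
    while len(in_tree) < len(terminals):
        w, t = min((closure[u][v], v)
                   for u in in_tree
                   for v in terminals if v not in in_tree)
        cost += w
        in_tree.add(t)
    return cost
```

On a small example where connecting servers a, c and d is cheapest via the intermediate (Steiner) node b, the approximation returns the optimal cost.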

4 Discussion

Fog computing has a wide range of applicability across different domains. Using fog computing in various domains makes it necessary to deal with different aspects of constrained environments. The dynamic changes inherent in the runtime environment impose an additional challenge for fog management and add to the complexity of coordination among fog resources. Although a few research works attempt to solve many of the management issues in the fog environment, many issues remain unresolved, offering great opportunity for future researchers. We list a few such challenges in the next subsection. An analysis of the proportion of research contributions to the different application domains of fog is shown in Fig. 3.
Fig. 3

Application domains harnessing the power of fog computing systems

4.1 Challenges and Future Research Directions

4.1.1 Limited Capacity and Heterogeneous Nature of Resources

Fog nodes have limited capacity and are heterogeneous in nature. Unlike cloud computing resources, this places an additional burden on scheduling and management methods. Fog nodes differ in Random Access Memory (RAM) capacity, Central Processing Unit (CPU) performance, storage space and network bandwidth. Fog management systems should be able to handle all this customized hardware along with personalized user requests. Some applications may run only on hardware with specific characteristics, and fog nodes often have limited capacity. Developing a fog management component that meets all user needs in such a complex environment presents many challenges.

4.1.2 Security and Integrity Issues

End devices transfer collected data to the fog nodes for processing and make decisions based on the results obtained from the fog. Intruders may insert themselves into these transmissions in the form of fake networking devices or fog nodes, intercepting the confidential data of real-time applications. Attacks aimed at any one of the nodes pose data integrity problems [15]. Malicious code in the nodes is usually identified using signature-based detection techniques, but because of low computing power, executing such detection methods is not feasible in IoT environments. Ensuring security and integrity for data in fog environments is still an open challenge.

4.1.3 Fault Diagnosis and Tolerance

Fog resources are geographically distributed in nature. Scaling these systems increases complexity and the likelihood of failures. Minor errors that were not identifiable at a small scale may surface, and failures can cause a significant reduction in the performance and reliability of scaled-up systems. Proper mechanisms are needed to absorb these failures without affecting the performance of the system. Developers should incorporate redundancy techniques and policies for handling failures with minimal impact on performance.

4.1.4 Cost Models for Fog Computing Environment

Recent developments in fog computing allow multiple providers to deliver computational resources on demand as a service. Similar to the cloud pay-as-you-go model, these services should have a charge associated with them. Research on cost models for the fog computing environment is still in its infancy. Creating models that users can apply to find optimal providers, and that providers can use to achieve their profit margins, is of significant importance.

4.1.5 Development of Interactive Interfaces for Fog Nodes

Fog computing systems are distributed in nature and consist of a wide variety of nodes. Fog computing allows dynamic reallocation of tasks among fog nodes and between the cloud and the edge. Efficient management of the resources requires flexible interfaces from fog to cloud, fog to edge and among fog nodes. Efficient communication through the interface allows a collaborative effort to jointly support execution of an application. Such interfaces that can enable management tasks must be designed and implemented.

4.1.6 Programming Models

The heterogeneous and widely dispersed nature of the resources in the fog environment demands the right abstraction so that programmers need not handle these complex issues. Flexible application program interfaces (APIs) and programming models are needed that hide the complexity of infrastructure and allow users to run their applications seamlessly.

5 Conclusion

Fog computing is an emerging trend that has received increasing interest from industry and academia alike. Fog can be perceived as an extension of cloud services toward the edge of the network; it brings cloud services to end users and improves QoS. The fog consists of a group of near-end-user devices called fog nodes. Fog nodes, or fog servers, are placed at different levels between the cloud and the edge of the network to enable efficient storage, processing and analysis of data, and they reduce latency by limiting the amount of data transported to the cloud. We compared different analytic platforms, including cloud, edge and fog, and investigated the challenges in deploying fog computing environments in different application scenarios.

References

  1. Ahmed, A., and E. Ahmed. 2016. A survey on mobile edge computing. In IEEE 10th International Conference on Intelligent Systems and Control (ISCO), 1–8.
  2. Arkian, H.R., A. Diyanat, and A. Pourkhalili. 2017. MIST: Fog-based data analytics scheme with cost-efficient resource provisioning for IoT crowdsensing applications. Journal of Network and Computer Applications 82: 152–165.
  3. Barcelo, M., A. Correa, J. Llorca, A.M. Tulino, J.L. Vicario, and A. Morell. 2016. IoT-cloud service optimization in next generation smart environments. IEEE Journal on Selected Areas in Communications 34 (12): 4077–4090.
  4. Byers, C.C. 2015. Fog computing: Distributing data and intelligence for resiliency and scale necessary for IoT, the Internet of Things.
  5. Chen, X., and L. Wang. 2017. Exploring fog computing-based adaptive vehicular data scheduling policies through a compositional formal method: PEPA. IEEE Communications Letters 21 (4): 745–748.
  6. Dao, N.N., J. Lee, D.N. Vu, J. Paek, J. Kim, S. Cho, K.S. Chung, and C. Keum. 2017. Adaptive resource balancing for serviceability maximization in fog radio access networks. IEEE Access 5: 14548–14559.
  7. Dastjerdi, A.V., and R. Buyya. 2016. Fog computing: Helping the internet of things realize its potential. Computer 49 (8): 112–116.
  8. Datta, S.K., C. Bonnet, and J. Haerri. 2015. Fog computing architecture to enable consumer centric internet of things services. In IEEE International Symposium on Consumer Electronics (ISCE), 1–2.
  9. Deng, R., R. Lu, C. Lai, T.H. Luan, and H. Liang. 2016. Optimal workload allocation in fog-cloud computing toward balanced delay and power consumption. IEEE Internet of Things Journal 3 (6): 1171–1181.
  10. Do, C.T., N.H. Tran, C. Pham, M.G.R. Alam, J.H. Son, and C.S. Hong. 2015. A proximal algorithm for joint resource allocation and minimizing carbon footprint in geo-distributed fog computing. In IEEE International Conference on Information Networking (ICOIN), 324–329.
  11. Fernando, N., S.W. Loke, and W. Rahayu. 2013. Mobile cloud computing: A survey. Future Generation Computer Systems 29 (1): 84–106.
  12. Gu, L., D. Zeng, S. Guo, A. Barnawi, and Y. Xiang. 2017. Cost efficient resource management in fog computing supported medical cyber-physical system. IEEE Transactions on Emerging Topics in Computing 5 (1): 108–119.
  13. He, Z., Z. Cai, J. Yu, X. Wang, Y. Sun, and Y. Li. 2017. Cost-efficient strategies for restraining rumor spreading in mobile social networks. IEEE Transactions on Vehicular Technology 66 (3): 2789–2800.
  14. Jingtao, S., L. Fuhong, Z. Xianwei, and L. Xing. 2015. Steiner tree based optimal resource caching scheme in fog computing. China Communications 12 (8): 161–168.
  15. Khan, S., S. Parkinson, and Y. Qin. 2017. Fog computing security: A review of current applications and security solutions. Journal of Cloud Computing 6 (1): 19.
  16. Networking, C.V. 2017. Cisco global cloud index: Forecast and methodology, 2015–2020. White paper.
  17. Ni, L., J. Zhang, C. Jiang, C. Yan, and K. Yu. 2017. Resource allocation strategy in fog computing based on priced timed petri nets. IEEE Internet of Things Journal.
  18. Ning, H., H. Liu, J. Ma, L.T. Yang, and R. Huang. 2016. Cybermatics: Cyber-physical-social-thinking hyperspace based science and technology. Future Generation Computer Systems 56: 504–522.
  19. Peng, M., and K. Zhang. 2016. Recent advances in fog radio access networks: Performance analysis and radio resource allocation. IEEE Access 4: 5003–5009.
  20. Qiu, T., R. Qiao, and D. Wu. 2017. EABS: An event-aware backpressure scheduling scheme for emergency internet of things. IEEE Transactions on Mobile Computing.
  21. Tordera, E.M., X. Masip-Bruin, J. García-Almiñana, A. Jukan, G.J. Ren, and J. Zhu. 2017. Do we all really know what a fog node is? Current trends towards an open definition. Computer Communications.
  22. Wang, W., Q. Wang, and K. Sohraby. 2017. Multimedia sensing as a service (MSaaS): Exploring resource saving potentials of at cloud-edge IoT and fogs. IEEE Internet of Things Journal 4 (2): 487–495.
  23. Weinhardt, C., A. Anandasivam, B. Blau, N. Borissov, T. Meinl, W. Michalk, and J. Stößer. 2009. Cloud computing: A classification, business models, and research directions. Business and Information Systems Engineering 1 (5): 391–399.
  24. Wen, Z., R. Yang, P. Garraghan, T. Lin, J. Xu, and M. Rovatsos. 2017. Fog orchestration for internet of things services. IEEE Internet Computing 21 (2): 16–24.
  25. Zeng, D., L. Gu, S. Guo, Z. Cheng, and S. Yu. 2016. Joint optimization of task scheduling and image placement in fog computing supported software-defined embedded system. IEEE Transactions on Computers 65 (12): 3702–3712.
  26. Zhang, Y., D. Niyato, and P. Wang. 2015. Offloading in mobile cloudlet systems with intermittent connectivity. IEEE Transactions on Mobile Computing 14 (12): 2516–2529.
  27. Zhu, J., D.S. Chan, M.S. Prabhu, P. Natarajan, H. Hu, and F. Bonomi. 2013. Improving web sites performance using edge servers in fog computing architecture. In IEEE 7th International Symposium on Service Oriented System Engineering (SOSE), 320–323.

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • John Paul Martin
    • 1
  • A. Kandasamy
    • 1
  • K. Chandrasekaran
    • 2
  1. 1.Department of Mathematical and Computational SciencesNational Institute of TechnologyMangaloreIndia
  2. 2.Department of Computer Science and EngineeringNational Institute of TechnologyMangaloreIndia
