1 Introduction

Nowadays, the Internet of Things (IoT) has come to pervade all aspects of our lives. As a result, objects are connected to the network and talk to each other. Around 18 billion connected IoT objects are forecast by 2022 [1]. A variety of applications (e.g., smart city, smart manufacturing, and video surveillance) are provided in an IoT network. In this ecosystem, most service providers use cloud data centers to process the huge volume of data produced by objects and extract value from it. However, this imposes a high load on the network and degrades the performance of the cloud. Moreover, because of the distance between the cloud and the source of data, the opportunity to act on data in real time is lost [2]. To solve these problems, a new distributed computing paradigm named fog computing [3] has emerged, which aims at filling the gap between the cloud and end-devices. Fog computing enables network objects to cooperate and make their resources available in order to reach a common goal, i.e., providing services.

Since fog computing is aimed especially at the IoT, we consider an IoT network consisting of three layers: cloud, fog, and end-devices. The fog layer includes nodes such as routers, servers, and even mobile devices distributed between the end users and the cloud. In this context, we define the well-known resource allocation problem as follows: given the IoT network and the IoT requests along with their requirements, find a mapping between requests and network nodes. The selected nodes should execute the tasks so as to achieve each request's goal while satisfying network performance and quality-of-service constraints. Resource allocation is one of the most important challenges in the IoT context.
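As an illustration of this problem statement, the following minimal Python sketch (all class names and attributes are hypothetical and chosen only for illustration) expresses resource allocation as a mapping from requests to nodes in which every assigned pair satisfies the request's requirements:

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass(frozen=True)
class IoTRequest:
    name: str
    cpu: float       # required processing power
    memory: float    # required memory

@dataclass(frozen=True)
class NodeInfo:
    name: str
    cpu: float       # available processing power
    memory: float    # available memory

def satisfies(node: NodeInfo, req: IoTRequest) -> bool:
    """A node may host a request only if it meets all of its requirements."""
    return node.cpu >= req.cpu and node.memory >= req.memory

# Resource allocation is then the problem of finding a mapping
# {request name -> node name (or None)} in which every assigned pair satisfies
# the requirements, while optimizing network performance and quality of service.
Mapping = Dict[str, Optional[str]]
```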

Fog computing is able to locally allocate edge devices (comprising end devices such as mobile phones, edge devices such as routers, and edge servers) to IoT requests and prevent the transmission of huge amounts of raw data to the core network (including the core routers, regional servers, and cloud centers). To do this, it is important to consider some key features of IoT objects:

  • Limited node resources (e.g., electrical energy, memory, and processing power).

  • Network heterogeneity, in terms of both node capabilities and request requirements.

  • Dynamic behavior of IoT networks; connections among nodes are created dynamically.

  • A huge number of nodes deployed over an extensive area; network topology changes quickly.

All these features result in a dynamic network in which all nodes need to interoperate in order to allocate the available resources in a distributed way. Most decisions should be taken autonomously, avoiding centralized solutions. Therefore, resource management should be addressed continuously to dynamically adapt the system to changes in IoT request requirements and network topology. This is why we need “context awareness”. Context awareness exploits all available context information in order to make better decisions about a constrained pool of network resources, and can therefore improve the performance of resource management algorithms for IoT ecosystems.

The term “context awareness” refers to the ability of computing systems to acquire and reason about context information and subsequently adapt the corresponding applications accordingly [4]. In addition, context awareness is a foundation of all self-x properties, including self-configuration, self-organization, self-optimization, self-healing, etc. [5, 6]. As a result, the IoT network is able to exploit resources in an efficient and self-organizing way. Semantic technologies use formal semantics to facilitate context awareness and reasoning in the IoT. They enrich raw data and link data across real-life domains. Ontologies provide a sophisticated semantic mechanism for resource modeling and allow representing hardware and software, physical and virtual resources, along with the relationships between them, at a variety of granularities [7].

This paper focuses on the aforementioned challenges in IoT resource allocation, and we propose to use context information to allocate IoT resources in an optimized way. To this end, this paper suggests using ontologies to model the IoT network and requests. We then leverage semantic rules and query engines to draw inferences and find a suitable mapping between IoT requests and resources.

Applications of our proposal include, but are not limited to:

  • Autonomic and self-organizing IoT networks (e.g., automated manufacturing).

  • Power/workload management in the IoT ecosystem.

  • Monitoring the resources in the IoT network, detecting the likelihood of node failures, and applying policies for remediation.

  • Context-aware strategies for the future based on long-term reasoning (extracting trends and patterns).

This paper is organized as follows. In Sect. 2, we look at some related work using ontologies in the IoT domain. Section 3 describes the proposed approach, based on the unification of the IoT and cloud ontologies and leveraging it for optimized resource allocation. Finally, Sect. 4 provides an overview of the benefits of the proposed approach and some indications for future research.

2 Related Work

In the context of this paper, we review cloud computing ontologies, IoT ontologies, and resource management in the IoT.

Zhang et al. [8] propose a cloud computing ontology called CoCoOn to discover infrastructure services suitable for users' needs. The CoCoOn ontology defines a set of properties to describe infrastructure services. The authors implement a recommendation system based on the CoCoOn ontology in which SQL queries are used to interrogate the ontology and discover services. Rekik et al. [9] propose CloudO, a comprehensive cloud service description ontology that plays a central role in the discovery and composition of cloud services. The proposed ontology spans functional and non-functional aspects of cloud services at the three layers of the cloud model, namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It helps users discover and select appropriate cloud services through user queries.

The IoT-O ontology [10] intends to cover two sets of requirements in an IoT ecosystem: conceptual and functional. The conceptual requirements are based on the description of devices, data, services, and their lifecycle, while the functional requirements follow best practices defined by the semantic community. The ontology provides the concepts needed to represent a device and its functionality. It reuses existing ontologies such as IoT-Lifecycle [11] and SSN [12] to define concepts related to the IoT domain, such as duty cycles and sensing capabilities.

Moustafa et al. [13] propose Continuum, a model of a context-aware middleware that can dynamically discover the environment and adapt applications to new contextual conditions. They address the issue of environments changing due to mobility through a monitoring service capable of reasoning at runtime. Koorapati et al. [7] consider an ecosystem consisting of IoT, Software-Defined Data Center (SDDC), and cloud. The authors present a resource modeling framework based on semantic technologies and show how these technologies address some of the key challenges in managing such an ecosystem. Delicato et al. [14] describe the challenges related to resource management in the IoT considering different numbers of tiers: cloud only, IoT only, and three tiers composed of cloud, IoT, and edge nodes. The authors highlight the advantages of ontologies for resource modeling and provide useful insight into OpenIoT [15], a middleware framework whose semantic-based resource management architecture enables managing the whole lifecycle of IoT applications and services infrastructure.

To the best of our knowledge, this is the first research paper that uses ontologies to discover, model, select, and allocate resources in an IoT ecosystem consisting of IoT, fog, and cloud layers.

3 Proposed Approach

An IoT ecosystem with three layers is illustrated in Fig. 1: the bottom layer encompasses the things (IoT devices/nodes/smart objects), the top layer includes the cloud nodes, and an optional middle layer consists of fog nodes. IoT requests (originating from any device) are received by the closest fog node, in terms of the distance between the IoT requester and the fog nodes. Upon receiving a request, the fog node locally computes an assignment between its received requests and the ecosystem's resources, i.e., resource allocation. According to this assignment, the IoT tasks are distributed and deployed on the selected resources and served to users with the required quality of service.
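For illustration only, the snippet below sketches one simple interpretation of "closest", namely plain geometric distance between the requester and the fog nodes; the coordinates and node names are hypothetical, and in practice network latency or hop count could be used instead:

```python
import math
from typing import Dict, Tuple

def closest_fog_node(requester: Tuple[float, float],
                     fog_nodes: Dict[str, Tuple[float, float]]) -> str:
    """Pick the fog node whose (x, y) position is nearest to the requester."""
    return min(fog_nodes, key=lambda name: math.dist(requester, fog_nodes[name]))

# Example: a request issued at (2.0, 3.0) is routed to the nearest fog node.
fog = {"fog-a": (0.0, 0.0), "fog-b": (2.5, 2.5), "fog-c": (10.0, 1.0)}
print(closest_fog_node((2.0, 3.0), fog))   # -> "fog-b"
```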

Fig. 1. IoT ecosystem comprising three layers: cloud, fog, and end-device layer [2]

The lack of unification among heterogeneous cloud/fog service descriptions makes resource discovery and selection very complex tasks for IoT users. To alleviate this complexity, it is necessary to have a unified service model integrating service descriptions obtained from heterogeneous sources. This paper proposes combining two ontologies, the IoT ontology (IoT-O) [10] and the cloud ontology (CloudO) [9], in order to model the IoT ecosystem using a unified ontology named IoT-Fog-Cloud.

IoT-O is a modular IoT ontology aimed at describing connected devices and their relationship with their environment. It is composed of several modules, including sensing, acting, service, lifecycle, and energy modules. CloudO, on the other hand, describes the concepts, features, and relations of different services (and their classification) in the cloud computing paradigm. Therefore, we can use IoT-O to model a variety of heterogeneous devices, such as sensors and actuators, along with their attributes, and we leverage CloudO to represent cloud and fog services. Since a fog node is essentially a small cloud (with limited capabilities) close to the users that can provide a variety of cloud services, we model it as a cloud node in the resulting ontology. As a result, the proposed ontology enables us to model the different resources and requests in the IoT ecosystem. In the following, we show how IoT-Fog-Cloud can be leveraged to manage IoT resources efficiently.
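As a rough illustration of what the unified IoT-Fog-Cloud ontology could look like, the following minimal sketch uses the owlready2 Python library; the IRI, class, and property names are placeholders introduced here for illustration and are not the identifiers actually used by IoT-O or CloudO:

```python
# Minimal sketch of a unified IoT-Fog-Cloud ontology using owlready2.
# All IRIs, classes, and properties below are illustrative placeholders.
from owlready2 import get_ontology, Thing, ObjectProperty, DataProperty

onto = get_ontology("http://example.org/iot-fog-cloud.owl")

with onto:
    class Node(Thing): pass            # any resource in the ecosystem
    class CloudNode(Node): pass        # CloudO-style service provider
    class FogNode(CloudNode): pass     # a fog node modeled as a small cloud
    class EndDevice(Node): pass        # IoT-O-style connected device
    class Request(Thing): pass         # an IoT request to be hosted

    class hostedOn(ObjectProperty):    # maps a request to its selected host
        domain = [Request]
        range = [Node]

    class hasFreeCPU(DataProperty):    # remaining processing capacity of a node
        domain = [Node]
        range = [float]

# In practice, the real IoT-O and CloudO ontologies would be imported
# (e.g., via onto.imported_ontologies) instead of redefining their concepts.
onto.save(file="iot-fog-cloud.owl")
```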

The main activities of a typical workflow for allocating resources in an IoT network are illustrated in Fig. 2 and consist of resource discovery, resource modeling, resource selection, and resource allocation. The figure depicts a context life cycle [16] and the corresponding resource allocation tasks related to each phase of the cycle. Other resource management activities, such as resource monitoring, resource estimation, and resource remediation, can also benefit from ontologies; however, these steps are not the focus of this research.

Fig. 2. Context life cycle and activities involved in resource allocation for the IoT ecosystem.

In the first step, resource discovery, nodes acquire and share concepts to obtain essential information about the network. To do this, all nodes locally exchange their information with their neighbors to adapt to environmental changes. Also, upon receiving an IoT request, the receiving node can extract the requirements of the request. The second step, resource modeling, represents the resources and requests in our IoT ecosystem. Using the IoT-Fog-Cloud ontology, we can define the entities, properties, and relationships that make up the resources and requests in the IoT ecosystem. The first two steps result in the following models (a short sketch after the list illustrates how such models could be populated as ontology individuals).

  • IoT resources: cloud data centers, fog nodes, end-devices, and the links between them, along with their characteristics, such as processing/storage capacity, load, energy, mobility status, sensing capabilities, and link bandwidth/propagation delay (dynamic network aspects), as well as domain/application (e.g., smart manufacturing, agriculture, health care) and peak time (static network aspects).

  • IoT requests: requests along with their characteristics, such as the requester and their mobility status, demands (processing power, storage, network), type (real-time or batch), priority, security level, and deadline.
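As a rough illustration of how the discovered information could populate these models, the snippet below continues the earlier hypothetical owlready2 sketch and creates individuals for a fog node and a request, attaching some of their characteristics:

```python
# Continues the earlier owlready2 sketch; names and values are illustrative.
from owlready2 import DataProperty

with onto:
    class requiresCPU(DataProperty):   # processing power demanded by a request
        domain = [Request]
        range = [float]

fog1 = FogNode("fog1")
fog1.hasFreeCPU = [8.0]                # capacity left after the current load

req1 = Request("req1")
req1.requiresCPU = [2.0]               # e.g., a real-time video analytics task
```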

The third step, resource selection, considers the modeled requests and resources and uses reasoning techniques to decide on suitable resources to host the IoT requests with respect to quality of service. To do this, we use the Semantic Web Rule Language (SWRL) [17], a language for expressing semantic rules and logic. The rules infer new knowledge about the network and the requests from our existing OWL knowledge base, and each node can consult the structured ontology through SWRL to find optimized hosts for its received requests; an example of such a rule is sketched below. Finally, in the resource allocation step, nodes disseminate the concepts resulting from resource selection to their neighbors (e.g., the nodes' remaining capacity given the newly deployed tasks).
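As a concrete example, the hedged sketch below expresses one such selection rule using owlready2's SWRL support and its bundled Pellet reasoner, continuing the earlier hypothetical ontology; a real rule set would also cover bandwidth, sensing capabilities, deadlines, and so on:

```python
# A hypothetical SWRL selection rule, continuing the earlier owlready2 sketch:
# a node with enough free CPU for a request is inferred to be a candidate host.
from owlready2 import Imp, ObjectProperty, sync_reasoner_pellet

with onto:
    class canHost(ObjectProperty):
        domain = [Node]
        range = [Request]

    rule = Imp()
    rule.set_as_rule(
        "Node(?n), hasFreeCPU(?n, ?f), "
        "Request(?r), requiresCPU(?r, ?c), "
        "greaterThan(?f, ?c) "
        "-> canHost(?n, ?r)"
    )

# Running a SWRL-aware reasoner (Pellet) materializes canHost assertions,
# which a fog node can then query to pick hosts for its received requests.
sync_reasoner_pellet(infer_property_values=True, infer_data_property_values=True)
print(fog1.canHost)   # expected to contain req1, since 8.0 > 2.0
```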

A typical resource allocation procedure from each node's point of view in our proposal is shown in Fig. 3. To make the procedure clearer and to show how the proposed solution addresses the aforementioned challenges in IoT resource management, consider a resource allocation scenario: a mobile node sends an IoT request to a fog node. The fog node extracts the requirements and properties of the request. The fog node already has a global view of the network through the IoT-Fog-Cloud ontology. By consulting the ontology and using SWRL rules, the fog node identifies the best node to host the request; in doing so, the matching between the request and the host is checked, e.g., the required sensing capabilities and bandwidth. After that, the fog node forwards the request to the selected host, and the network nodes update their information about the underlying network to reflect this hosting. In this procedure, the nodes constantly share their information and keep an up-to-date image of the network. As a result, by updating the ontology, the dynamic behavior of the network is taken into account; in addition, by keeping the properties of the nodes and their resources and services, heterogeneity is addressed.

Fig. 3. Resource allocation workflow for each fog node in the IoT ecosystem
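To make the per-node procedure of Fig. 3 concrete, the following simplified sketch in plain Python mirrors that workflow; the ontology/SWRL consultation is abstracted behind a placeholder capacity check, and all field names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class FogNodeAgent:
    """One fog node's view of the resource allocation procedure (cf. Fig. 3)."""
    name: str
    network_view: Dict[str, dict] = field(default_factory=dict)  # node -> properties

    def handle_request(self, request: dict) -> Optional[str]:
        requirements = self.extract_requirements(request)   # 1. parse the request
        host = self.select_host(requirements)                # 2. consult ontology + rules
        if host is not None:
            self.forward(request, host)                      # 3. deploy on the selected host
            self.update_and_share(host, requirements)        # 4. refresh and share the view
        return host

    def extract_requirements(self, request: dict) -> dict:
        return {"cpu": request.get("cpu", 0.0),
                "bandwidth": request.get("bandwidth", 0.0)}

    def select_host(self, req: dict) -> Optional[str]:
        # Placeholder for consulting the IoT-Fog-Cloud ontology through SWRL;
        # a plain capacity check stands in for the inferred hosting relation.
        candidates = [n for n, p in self.network_view.items()
                      if p.get("cpu", 0.0) >= req["cpu"]
                      and p.get("bandwidth", 0.0) >= req["bandwidth"]]
        # Best fit: prefer the candidate with the least spare CPU.
        return min(candidates, key=lambda n: self.network_view[n]["cpu"], default=None)

    def forward(self, request: dict, host: str) -> None:
        print(f"{self.name}: forwarding request to {host}")

    def update_and_share(self, host: str, req: dict) -> None:
        self.network_view[host]["cpu"] -= req["cpu"]   # keep the local image up to date
        # ...and then disseminate the updated view to neighboring nodes.
```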

4 Conclusion

In this paper, we introduced IoT resource allocation in a three-layer IoT ecosystem composed of cloud, fog, and end-devices. We also showed how semantic technologies can address some of the key challenges in managing resources in such an ecosystem. In addition, we discussed where the unification of the IoT and cloud ontologies helps across the lifecycle of managing resources and allocating them smartly.

In the future, this unified ontology can support smart resource management applications that predict the future status of the network (e.g., a node failing because it is hitting a threshold, or moving out of the coverage of a particular server due to mobility) and of the requests (e.g., an increase in the required sensing frequency, or a change of priority or security level), leading to a self-organized and self-healing system. Moreover, the same unified ontology, together with SWRL rules, can be used to develop smart application deployment algorithms that propose optimized deployments in terms of energy saving, load balancing, and secure deployment, simply by expressing how the IoT ecosystem is supposed to be and then checking whether a candidate deployment place meets those preferences.

We plan to adapt the unified ontology and the inference rules so that resource-constrained fog nodes can quickly infer the required results and act on time on the basis of the received data. We also plan to use the cloud's capability to extract long-term patterns from the IoT ontology, feed the results back to the fog nodes, and study the resulting behavior.