
Journal of Optics, Volume 48, Issue 4, pp 539–548

A controllable deflection routing and wavelength assignment algorithm in OBS networks

  • Philani Khumalo
  • Bakhe Nleya
  • Andrew Mutsvangwa
Open Access
Research Article

Abstract

Heterogeneous IoT-enabled networks generally accommodate both jitter-tolerant and jitter-intolerant traffic. Optical burst-switched (OBS) backbone networks handle the resultant volumes of such traffic by transmitting it in huge chunks called bursts. Because of the lack of, or limited, buffering capabilities within the core network, contention as well as congestion may frequently occur, thus affecting the overall supportable quality of service (QoS). Both contention and congestion are characterized by frequent burst losses, especially when traffic levels surge. Congestion is normally resolved by deflecting contending bursts to other, less congested paths, even though this may lead to differential delays incurred by bursts as they traverse the network. This contributes to undesirable jitter that may ultimately compromise overall QoS. Noting that jitter is mostly caused by deflection routing, which itself is a result of poor routing and wavelength assignment, in this paper we propose a controllable deflection routing scheme that deflects bursts to alternate paths only after preset controller buffer thresholds are surpassed. In this way, bursts intended for a common destination are most likely to be routed on the same or a least-cost path end-to-end. We describe the scheme and compare its performance to other existing approaches. Both analytical and simulation results show that the proposed scheme lowers both congestion and jitter, thus improving throughput as well as avoiding congestion on deflection paths.

Keywords

Optical burst switching · Jitter · Deflection routing · Congestion

Introduction

In the OBS domain, the primary concerns are combating congestion as well as contention as bursts traverse the core network. In any given network, various types of congestion, e.g., nodal, CPU, or path congestion, may occur. Nodal congestion occurs when incident traffic overwhelms the serving node. CPU congestion is a result of too many computations jamming the main CPU scheduler. Path or link congestion is caused by excessive traffic attempting to traverse the same path. In the context of OBS networks, congestion can thus be caused by several factors, such as contention, uneven distribution of traffic leading to localized traffic overload, as well as improper provisioning of available resources, as in the case of routing and wavelength assignment (RWA). The presence of buffering capabilities at edge nodes makes it easy to combat edge congestion. Path congestion can be alleviated by dimensioning the available network resources, such as wavelengths and links, so that traffic is uniformly distributed throughout the network [1].

It is noted that contention will always occur at interior nodes when more than one data burst utilizing the same wavelength overlaps in time at the same output port. Because of the buffer-less nature of the network interior, different approaches are adopted to alleviate and combat contention. Primarily, contention resolution mechanisms can be implemented in the space, wavelength or time domain. In the wavelength domain, wavelength converters (WCs) may be used to resolve contention by translating one of the contending wavelengths to a different value, thereby improving the network's performance. In the time domain, contention resolution is effected by introducing fiber delay lines (FDLs) that temporarily delay one or more of the contending bursts until the output port becomes available. In the space domain, deflection routing is introduced to resolve contention occurrences by deflecting one of the contending bursts to an alternate port and route [2]. In this way, both congestion and contention are distributed over other routes rather than being concentrated on a single one, and in the process the network's general performance improves. Nevertheless, it should be noted that deflection routing also has several drawbacks, notably that it can accelerate contention as well as congestion on the deflection paths. Its performance is largely influenced by the general network topology and may not be effective where the number of candidate deflection paths is relatively small. Furthermore, it can also contribute to differential delays, or jitter, for successive bursts destined for the same receiver, as the deflected bursts might take either a longer or a shorter path than their non-deflected counterparts. It is thus imperative that deflection routing itself be controlled [3, 4, 5].
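
As an illustration of the three contention-resolution domains just outlined, the following sketch shows a core node attempting wavelength conversion first, then an FDL delay, and finally deflection before discarding a burst. The data layout and function names are hypothetical and are not drawn from any particular OBS implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the wavelength, time and space contention-resolution
# domains tried in order at a buffer-less core node. All structures are assumed.

@dataclass
class OutputPort:
    busy_wavelengths: set          # wavelengths already reserved on this port
    all_wavelengths: set           # wavelengths the port supports
    fdl_free: bool = True          # a single fibre delay line, if fitted
    deflection_links: list = field(default_factory=list)  # alternate outgoing links

def resolve_contention(burst_wavelength, port):
    # Wavelength domain: convert the burst to any free wavelength via a WC.
    free = port.all_wavelengths - port.busy_wavelengths
    if free:
        return ("converted", min(free))
    # Time domain: briefly hold the burst in an FDL until the port frees up.
    if port.fdl_free:
        return ("delayed", burst_wavelength)
    # Space domain: deflect the burst to an alternate outgoing link (deflection routing).
    if port.deflection_links:
        return ("deflected", burst_wavelength)
    return ("dropped", None)

# Example: all wavelengths busy and no FDL available, so the burst is deflected.
p = OutputPort(busy_wavelengths={0, 1}, all_wavelengths={0, 1},
               fdl_free=False, deflection_links=["l_1"])
print(resolve_contention(0, p))   # -> ('deflected', 0)
```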

It is on the basis of the weaknesses cited above that, in this paper, we propose a controllable deflection routing scheme coupled with a simple routing and wavelength assignment (RWA) algorithm to enhance overall network performance by minimizing both contention and congestion. The scheme attempts, as much as possible, to deflect either of the contending bursts to paths chosen so as to minimize performance measures such as delay and blocking. It aims at controlling deflection traffic by way of selective path routing upon congestion onset. It is backed by a very simplified distributed RWA approach that ensures minimal contention on the primary (original) chosen route(s). Notably, a distinct feature of the proposed scheme is that it allows the deflected bursts to traverse further via deflection routes optimized for improved performance in terms of delay and blocking. The candidate deflection routes are themselves dynamically classified according to the key QoS constraints (e.g., blocking and delay) they can support [6, 7].

Summarily, the contributions of this paper are:
  1. We propose and describe a controllable deflection routing (CDR) scheme coupled with a fairly simple RWA approach to ensure minimization of both delay and blocking on deflection paths. In the process, the deflected traffic does not compromise the QoS of any already existing connections on these paths.
  2. Of the available candidate deflection routes, we further propose a fast random least-cost algorithm for selecting the two possible routes that most closely satisfy the same delay and blocking constraints as the original path.
  3. A Markov-type queuing model comprising a common queue feeding two servers (representing the system) is analyzed. We provide expressions for computing the system states, as well as a heuristic formula for computing the bursts' waiting (delay) times in the system.

The rest of the paper is outlined as follows: In the next section, we elaborate on buffer-less OBS networks and contention in such networks. The proposed CDR algorithm is presented in more detail in section three. In section four, we model the controllable deflection routing queuing model. Section five presents both analytical and simulation results pertaining to the proposed scheme. Finally, conclusions are drawn in the last section.

Deflection routing in OBS networks

The OBS approach is rapidly becoming the backbone network solution for future-generation networks; this is attributed to its higher resource utilization, flexibility, as well as ultra-high bandwidth capacities at both the transmission and switching levels. At the ingress node, multiple data packets are assembled together to form a super-sized packet called a data burst. The core nodes in an OBS network are buffer-less; hence, the formed huge data bursts cannot be temporarily stored prior to switching. Rather, each data burst transmission is delayed by an offset time \(t_{\text{offset}}\) relative to its burst control packet (BCP), and the burst then follows the BCP without waiting for an acknowledgment confirming resource reservation. Thus, a burst may be lost at an intermediate node due to contention, i.e., when two or more data bursts contend for the same output port, overlapping in both time and wavelength. Burst losses due to contention are one of the key issues hindering the realization of optical burst-switched (OBS) backbone networks that can support guaranteed QoS. Contention can be resolved by way of deflection routing, in which the contending data burst(s) is deflected to alternate routes. These have to be carefully chosen so as not to degrade overall network performance. By nature, deflection routing assists in balancing the traffic traversing the entire network. Deflection multi-path routing and load balancing techniques can be effective in distributing the traffic over all links of the network, provided that all ingress nodes have adequate network state information, such as the traffic situation in the various parts of the network. Accordingly, to fully utilize the potential of deflection routing, each core node has to periodically receive information about the utilization of other links across the network. Otherwise, simply forwarding contending bursts to idle ports may in some situations even increase contention.

Figure 1 illustrates deflection routing in OBS networks. When a core node xi receives a BCP, it extracts the routing information contained in the BCP and uses it to pre-configure the desired output port before the actual data burst arrival. In this case, it has a \(t_{\text{offset}}\) time allowance to locate and pre-configure the port on the outgoing link \(l_{i} ,i = \overline{1,m}\). We consider a buffer-less network comprising a set of nodes \(N = \left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\}\). Data burst flows f1 and f2 arriving at node xi are destined for egress nodes xd1 and xd2, respectively. Using shortest path first, data bursts from both flows f1 and f2 should be forwarded to the intermediate node xj via link l0; in this case, both will contend for the output port of l0. If no contention resolution scheme is implemented, the data burst from f1 is forwarded to xj and ultimately to the destination xd1 via l0, while the other data burst is discarded. However, if deflection routing is implemented as the contention resolution scheme, the data burst from flow f2 is accommodated on an alternate deflection link \(l \in \Im \backslash \left\{ {l_{0} } \right\}\), where \(\Im\) denotes the set of all available outgoing links from xi. The available links must be carefully chosen such that the deflected data burst does not incur increased delays and blocking as it traverses further to its ultimate destination xd2. Another issue that needs particular attention is the interaction of deflection routing with the offset-based signaling scheme. When an intermediate node decides to deflect a burst, it has to check whether this will increase the overall length of the path. If so, the offset time of the burst has to be appropriately increased to make sure that the burst does not overtake its corresponding BCP and that all downstream nodes have enough time to process the BCP.
Fig. 1

Network with buffer-less interior nodes
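
The offset-time adjustment described above can be sketched as follows; the per-hop BCP processing time and the hop counts used here are assumed values for illustration only.

```python
# Sketch of the offset-time check just described: if deflection lengthens the
# remaining path, the residual offset must grow so that the burst never
# overtakes its BCP. The per-hop BCP processing time is an assumed constant.

PER_HOP_PROCESSING = 0.05e-3   # seconds of BCP processing headroom per hop (assumed)

def adjusted_offset(residual_offset, hops_left_primary, hops_left_deflected):
    """Offset time to use for a burst after deciding to deflect it."""
    extra_hops = hops_left_deflected - hops_left_primary
    if extra_hops <= 0:
        return residual_offset                      # deflection path is not longer
    # Each extra downstream node needs time to process the BCP before the burst.
    return residual_offset + extra_hops * PER_HOP_PROCESSING

# Example: deflection adds two hops, so the residual offset grows by 0.1 ms.
print(adjusted_offset(residual_offset=0.4e-3, hops_left_primary=3, hops_left_deflected=5))
```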

Figure 2 shows a flowchart that summarizes the traditional deflection routing (DR) contention resolution scheme. In a practical implementation, the ingress node incorporates a deflection routing information database (DRIB). The DRIB stores key management information at both the routing and optical layers of the network. The ingress node periodically dispatches special control packets for the purpose of acquiring the control information necessary for the entire OBS network to carry out operation, administration and maintenance (OAM) functions. These functions also aid the DRIB in furnishing precise information to assist in deflection route choices. These control packets are not associated individually with data bursts. Whenever the network status changes, the management database should be updated accordingly. In this case, associated OAM control packets are generated and dispatched on a dedicated control channel, normally referred to as an optical supervisory channel (OSC), that interconnects all network nodes. In that way, each core (intermediate) node is periodically updated on the general network status, performance in terms of burst loss rates due to contention and possibly congestion, as well as the remaining hop counts for each burst-mode connection to the intended egress node. As noted before, the BCPs are coupled individually with each data burst. Each BCP carries information regarding the number of remaining hops to be traversed by the burst, the residual offset time, as well as the burst length. This information is used to schedule the required resources for the burst at the next node ahead of its actual arrival. When it is determined that a burst is heading for contention with another burst, the DR contention resolution protocol is invoked, and it uses information extracted from the associated BCP as well as the DRIB to try to deflect the contending data burst appropriately. The affected intermediate node already has the relevant attributes of its input/output ports, including contention status and hop counts, from the OAM control packets. Furthermore, an intermediate node can also request an OAM control packet from the egress node when necessary. Ideally, updated assessments as well as measurements of burst contention are needed at all nodes in the network for the DR contention resolution algorithms to perform well. Figure 2 also shows the mechanism for signaling contention occurrences and updating the burst contention status and statistics. Each ingress node receives updates about the burst congestion status along the primary and alternate candidate routes. These updates are signaled in the form of NACK and ACK messages. In practice, NACKs from primary and alternate routes are distinguished from each other and treated separately.
Fig. 2

Contention and burst deflection
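
A minimal sketch of how a DRIB might record the ACK/NACK feedback described above is given below; the class and field names are illustrative assumptions rather than the actual structure used in the paper.

```python
from collections import defaultdict

# Assumed data layout for per-route DRIB statistics updated from OAM feedback.

class DRIB:
    """Per-ingress deflection routing information base (illustrative)."""
    def __init__(self):
        # route id -> running counters of acknowledged / lost bursts
        self.stats = defaultdict(lambda: {"ack": 0, "nack": 0})

    def update(self, route_id, ok):
        # NACKs from primary and alternate routes are kept separate, as noted above.
        self.stats[route_id]["ack" if ok else "nack"] += 1

    def loss_rate(self, route_id):
        s = self.stats[route_id]
        total = s["ack"] + s["nack"]
        return s["nack"] / total if total else 0.0

drib = DRIB()
drib.update("primary", ok=True)
drib.update("primary", ok=False)
drib.update("alt_1", ok=True)
print(drib.loss_rate("primary"))   # -> 0.5
```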

Proposed controllable deflection routing (CDR) scheme

We commence the section by describing the proposed scheme. Figure 3 depicts a generalized architecture of an OBS switch which comprises several input and output wavelength division multiplexed (WDM) link ports.
Fig. 3

Switch architecture with WCs

Wavelength light paths from input fibers are demultiplexed prior to switching to the desired output ports. In the event of contention, one of the contending data bursts is deflected to an alternate route. Periodic global re-optimization of candidate deflection routes, based on the most recently exchanged contention and congestion status updates from other nodes, is necessary.

In the event that the network management system reports contention as well as wavelength congestion, or its imminence, on the deflected route, the contending burst may be converted to any other available wavelength by a WC. The updating interval is carefully selected in accordance with the computing power capabilities of the node so as not to cause nodal computational congestion. As can be seen in Fig. 3, the switch fabric can only accommodate a limited number of both optical links and wavelengths.

The number of input/output switch pairs tallies with the number of shared WCs. A key feature of this switch architecture is that the choice as well as the usage of deflection paths is controlled: the switch will always strive to route bursts intended for a common destination on the same (originally intended) path. In the event that contention has occurred, and routing of both bursts is thus no longer possible on the original route, one of the contenders will be deflected to an alternate least-cost path. The scenario just described is further represented by the queuing model shown in Fig. 4.
Fig. 4

Queuing model

All arriving bursts are served according to a FCFS service discipline. The path server #1 queue represents the deflection path that offers minimal QoS degradation in terms of blocking and delay. A contending burst will be dispatched to the server #1 queue, representing the first-choice deflection path of the two, only if the controller buffer occupancy has exceeded a threshold q1. Similarly, path server #2 represents the second-choice deflection path, which will be utilized only when the controller buffer occupancy has exceeded q2. Otherwise, the original path is always preferred. Neither of the two deflection paths can be expected to consistently meet its QoS expectations; hence, in general, we define α as a given path's rate of exiting its QoS bounds and β as the rate at which it is restored to within bounds. These state transitions are shown in Fig. 4b.

In addition, effective RWA is also key to alleviating both contention and wavelength congestion.

We propose a simplified RWA method which evenly distributes the number of available wavelengths on all fibers as well as links. A network routing map (NRM) together with simplex signaling is assumed. Each node furnishes and advertises the following static information to the NRM:
  • Candidate routes as well as overall network resource states to all destinations, as illustrated by the example in Fig. 5a.
    Fig. 5

    Wavelength management: a example link state data structure, b example single wavelength’s occupancy state sequence, c example concatenated wavelength occupancy state

  • Sum of available links as well as fibers (wavelengths).

  • Each node also provides end-to-end link occupancy states for all possible links from it to all other destinations.

As an example, the individual fiber wavelength occupancy at each node is illustrated in Fig. 5b. All this information is dynamic; hence, it has to be updated on the NRM periodically at an interval ΔTupdate.
Each wavelength occupancy slot can be represented by O such that:
$$O = (t,st)$$
(1)
where t is the start time and st is the state of the slot.
A single wavelength’s occupancy state can be represented by a sequence vector of slots as follows:
$$O_{\lambda } (t) = \left[ {O_{1} ,O_{2} , \ldots ,O_{n} } \right].$$
(2)
The state occupancy of concatenated links (candidate light path) can be defined as:
$$O_{\text{L}} (t) = \left[ {O_{\lambda_{1}} (t) \oplus O_{\lambda_{2}} (t) \oplus \cdots \oplus O_{\lambda_{W}} (t)} \right]$$
(3)
where the operation ⊕ denotes a search algorithm for free wavelengths along the links.
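
The following sketch illustrates one way the search denoted by ⊕ could be realized, assuming that a wavelength is considered usable on the light path only if it is free on every concatenated link (wavelength continuity); the data layout is an assumption for illustration.

```python
# Sketch of the free-wavelength search across concatenated link occupancy states
# (the operation denoted by ⊕ in Eq. (3)). Occupancy encoding is assumed.

def free_wavelengths_on_path(link_states):
    """link_states: list of dicts {wavelength_index: 'free' | 'busy'}, one per link."""
    usable = None
    for occupancy in link_states:
        free_here = {w for w, st in occupancy.items() if st == "free"}
        # Keep only wavelengths free on every link seen so far.
        usable = free_here if usable is None else usable & free_here
    return usable or set()

# Two concatenated links: only wavelength 2 is free end-to-end.
l1 = {0: "busy", 1: "free", 2: "free"}
l2 = {0: "free", 1: "busy", 2: "free"}
print(free_wavelengths_on_path([l1, l2]))   # -> {2}
```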

We can formulate the key deflection routing problem primarily as a function of the node configuration, general network topology as well as a set of QoS-related attributes such as node and link resources [8, 9, 10, 11, 12, 13, 14, 15].

Let the physical network be denoted as G(N, L), where N is the set of nodes comprising it and L is the set of links interconnecting the nodes. Each link Li,j has a total of Wij wavelengths, each with capacity C. Each network node n \((n = \overline{1,N}\)) has \(P_{n}^{\text{in}} (t)\) input and \(P_{n}^{\text{out}} (t)\) output ports. We define a source (s) and destination (d) pair as well as an associated burst arrival rate \(\lambda_{i,j}^{sd} \in \varLambda\) at the switch queue. We also define \(\lambda_{s_{k} d_{k}}\) to represent the average flow of bursts belonging to class-k traffic. We thus can define:
$$x_{ij} = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {\quad {\text{if the deflection route includes link}}\;L_{i,j} } \hfill \\ {0,} \hfill & {\quad {\text{otherwise}}} \hfill \\ \end{array} } \right.\quad i,j = \overline{1,N} ,\quad i \ne j$$
(4)
Since one light path can be set up at each node, we thus have:
$$\sum\limits_{\varLambda \, ,j \in N} {x_{ij} } \le P_{i}^{\text{out}} (t),\quad \sum\limits_{\varLambda \, ,i \in N} {x_{ij} } \le P_{j}^{\text{in}} (t)$$
(5)
Thus, the traffic demand \(\lambda_{s_{k} d_{k}}\) deflected from node i to j is:
$$\lambda_{i,j}^{{s_{k} ,d_{k} }} \in \left\{ {0,\lambda_{{s_{k} d_{k} }} } \right\}\quad \forall_{i,j} \in N.$$
(6)
The aggregated one-way flow from node i to j associated with the k-th traffic demand is:
$$\lambda_{ij} = \sum\limits_{s,d} {\lambda_{ij}^{sd} } + \lambda_{{s_{k} d_{k} }} \quad \forall_{i,j} \in N.$$
(7)
Traffic from node i to j may not exceed the maximum capacity C; hence, we have:
$$\lambda_{ij} \le W_{i,j} C\quad \forall_{i,j} \in N.$$
(8)
If the same link Li,j is not associated with the k-th traffic-type flow, then the previous equation becomes:
$$\lambda_{ij}^{{s_{k} d_{k} }} \le x_{ij} \lambda_{{s_{k} d_{k} }} \quad \forall_{i,j} \in N.$$
(9)
Finally, at each node the flow conservation constraint becomes:
$$\sum\limits_{i} {x_{ij} - \sum\limits_{j} {x_{ji} } } = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {\quad i = s_{k} } \hfill \\ { - 1,} \hfill & {\quad i = d_{k} } \hfill \\ {0,} \hfill & {\quad {\text{otherwise}}} \hfill \\ \end{array} } \right.\quad \forall_{{s_{k} ,d_{k} ,i}} \in N.$$
(10)
Finally, if we let D = {Dij} represent the distance (delay) matrix between nodes i and j, we can summarize our key objective function as follows:
$${\text{Min}}\quad \gamma_{\text{d}} \sum\limits_{ij} {x_{ij} } D_{ij} + \gamma_{\text{b}} \left[ {\log \left[ {1 - \prod\nolimits_{i,j} {\left( {1 - x_{ij} b_{ij} } \right)} } \right]} \right]$$
(11)
where γd and γb are the delay and blocking weights, respectively. Collectively, they are designated as a deflection path link cost factor:
$$c = f(\gamma_{\text{d}} ,\gamma_{\text{b}} ).$$
(12)
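
A small sketch of how the objective of Eq. (11) and the cost factor of Eq. (12) could be evaluated for candidate deflection routes is given below; the weights and the per-link delay and blocking values are assumed purely for illustration.

```python
import math

# Sketch of the link-cost evaluation behind Eqs. (11)-(12): a candidate
# deflection route is scored by its total delay plus the log of its
# end-to-end blocking, weighted by gamma_d and gamma_b. Numbers are assumed.

def route_cost(route, gamma_d=1.0, gamma_b=1.0):
    """route: list of (delay_ij, blocking_ij) pairs for the links with x_ij = 1."""
    total_delay = sum(d for d, _ in route)
    # End-to-end blocking: 1 minus the product of per-link pass probabilities.
    e2e_blocking = 1.0 - math.prod(1.0 - b for _, b in route)
    return gamma_d * total_delay + gamma_b * math.log(e2e_blocking)

# Two assumed candidate deflection routes; the cheaper one would be preferred.
route_1 = [(2.0, 0.01), (3.0, 0.02)]   # (delay, blocking) per link
route_2 = [(1.5, 0.05), (2.5, 0.05)]
costs = {name: route_cost(r) for name, r in [("route_1", route_1), ("route_2", route_2)]}
print(costs, "->", min(costs, key=costs.get))
```
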
The key steps of the proposed CDR algorithm are summarized as follows:
  i. The ingress (source) node dispatches a burst control packet (BCP) requesting an end-to-end connection to a specified egress (destination) node.
  ii. The intermediate node processes the BCP together with those from other sources. If resources are available on the primary route (and it is contention free), the burst will be accepted.
  iii. However, if contention is detected, i.e., simultaneous requests for the same output ports and wavelengths by two or more BCPs, then the contention is resolved before the actual burst arrival in one of the following ways:
    (a) If the node is the sender, its BCP is discarded and retransmission is ordered at a later time.
    (b) The remaining bursts can either be assigned to the primary route, deflected to an alternate path, or, in the worst case, be discarded. This is done according to the rules in step iv.
  iv. The contending burst is:
    • assigned to the original path: there exist two or more contending bursts, all in transit; the node's controller is in state \(q < q_{1}^{ * }\), and there are enough free wavelengths to accommodate all the contending bursts. Their initial wavelengths will be shifted accordingly by the WCs.
    • deflected to path #1: the node's controller is in state \(q_{1}^{ * } \le q < q_{2}^{ * }\).
    • deflected to path #2: the node's controller is in state \(q_{2}^{ * } \le q \le \infty\).

Note that the threshold values \(q_{1}^{ * }\) and \(q_{2}^{ * }\) are set by taking into account the delay and blocking weights in Eq. (11).
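
A hedged sketch of decision step (iv) is given below; the threshold values, the free-wavelength test, and the fallback behavior when no option applies are illustrative assumptions rather than parameters prescribed by the scheme.

```python
# Sketch of CDR step (iv): keep the burst on the original path, deflect it to
# path #1 or #2, or drop it, depending on the controller queue occupancy q
# relative to the thresholds q1* and q2*. Threshold values are assumed.

Q1_STAR, Q2_STAR = 4, 8          # controller buffer thresholds (assumed)

def cdr_decision(q, free_wavelengths, contending_bursts):
    # Step iv, original path: below q1* and enough free wavelengths, so the
    # contending bursts are simply shifted to free wavelengths by the WCs.
    if q < Q1_STAR and free_wavelengths >= contending_bursts:
        return "original path (wavelengths shifted by WCs)"
    if Q1_STAR <= q < Q2_STAR:
        return "deflect to path #1"
    if q >= Q2_STAR:
        return "deflect to path #2"
    # Assumed fallback: below q1* but without enough free wavelengths.
    return "discard"

print(cdr_decision(q=2, free_wavelengths=3, contending_bursts=2))  # original path
print(cdr_decision(q=5, free_wavelengths=0, contending_bursts=2))  # deflect to path #1
```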

Queuing model analysis

In this section, we analyze the queuing model shown in Fig. 4. We recall that our objective is to minimize both jitter and blocking probability by routing bursts originating from a given source to a destination on a single path. To simplify the model, we assume a single dispatcher queue and K path servers, each with service rate \(\mu_{j} ,j = \overline{1,K}\). Bursts arrive at a rate λ. Each server j represents an onward path with its own fixed QoS bounds, i.e., jitter and blocking. When busy, the path exits this bound at a rate αj, and once exited, it is restored to the bound at a rate βj. The choice of deflection path depends on the fixed queue thresholds q1 and q2.

System states at any arbitrary time are [6]:
$$D_{j} (t) = \left\{ {\begin{array}{*{20}l} {0,} \hfill & {\quad {\text{original}}\,{\text{path}}\,{\text{in}}\,{\text{use}}\,{\text{or}}\,{\text{system}}\,{\text{idle}}} \hfill \\ {1,} \hfill & {\quad {\text{deflection}}\,{\text{route}}\,{\text{is}}\,{\text{busy}}} \hfill \\ {2,} \hfill & {\quad {\text{deflection}}\,{\text{route}}\,{\text{is}}\,{\text{busy}}\,{\text{failing}}\,{\text{to}}\,{\text{meet}}\,{\text{QoS}}} \hfill \\ \end{array} } \right.\quad j = \overline{1,K}$$
(13)
We can define a state space of the path servers as:
$$E_{\text{D}} = \left\{ {(d_{1} ,d_{2} )\left| \begin{aligned} d_{j} \in \left\langle {0,1,2} \right\rangle ,\quad 0 \le q \le q_{1} \hfill \\ d_{1} \in \left\langle {1,2} \right\rangle ,\;d_{2} \in \left\langle {0,1,2} \right\rangle ,\quad q_{1} \le q \le q_{2} - 1 \hfill \\ d_{1} \in \left\langle {1,2} \right\rangle ,\;d_{2} \in \left\langle {0,1,2} \right\rangle ,\;(d_{1} ,d_{2} ) \ne (2,0),\quad q_{2} \le q \le \infty \hfill \\ \end{aligned} \right.} \right\}$$
(14)
from which we can re-define a state space as well as a random process, respectively, as:
$$E = \left\{ {x = (q,d);\quad q \in {\mathbb{N}}_{0} ,\quad d = (d_{1} ,d_{2} ) \in E_{\text{D}} } \right\}.$$
(15)
Under stationary conditions, we also have:
$$\rho = \frac{\lambda }{{\sum\nolimits_{j = 1}^{K} {\beta_{j} \mu_{j} \left( {\alpha_{j} + \beta_{j} } \right)^{ - 1} } }} < 1$$
(16)
The utilization of each deflection path is:
$$U = 1 - \pi_{(0,0,0)}$$
(17)
where \(\pi_{(0,0,0)}\) denotes the stationary probability of the empty state (empty queue with both path servers idle).
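
As a quick illustration of the stationarity condition in Eq. (16), the following sketch computes ρ from assumed arrival, service, and QoS-bound transition rates.

```python
# Numeric check of Eq. (16): path j serves at rate mu_j but is within its QoS
# bound only a fraction beta_j/(alpha_j + beta_j) of the time, so lambda is
# compared against the sum of these effective rates. Values below are assumed.

def stationary_load(lam, mu, alpha, beta):
    """rho = lam / sum_j beta_j * mu_j / (alpha_j + beta_j); stable if rho < 1."""
    effective_rate = sum(b * m / (a + b) for m, a, b in zip(mu, alpha, beta))
    return lam / effective_rate

rho = stationary_load(lam=120.0,          # burst arrival rate (bursts/s)
                      mu=[90.0, 70.0],    # service rates of the two path servers
                      alpha=[0.2, 0.5],   # rates of exiting the QoS bound
                      beta=[1.0, 1.0])    # rates of returning to the bound
print(round(rho, 3), "stable" if rho < 1 else "unstable")
```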

Analysis and simulation

In both our numerical and simulation performance analyses, we assumed the following: the fixed data burst size L is 600 MB, and the BCP offset time is 0.4 ms. Each link has a capacity C = 6 GBps, and the burst generation rate λ is 120 bursts/s. The network updating interval is fixed throughout the simulation runs. When a connection request arrives at a node, a wavelength is assigned along the least-cost path:
$$c_{\text{o}} \le \frac{{\alpha_{1} c_{1,2} + \beta_{1} c_{1,1} }}{{\beta_{1} \mu_{1} }} \le \frac{{\alpha_{2} c_{2,2} + \beta_{2} c_{2,1} }}{{\beta_{2} \mu_{2} }}.$$
(18)
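
The ordering imposed by Eq. (18) can be checked as in the following sketch; here the terms c_{j,1} and c_{j,2} are interpreted as the in-bound and out-of-bound usage costs of deflection path j, and this interpretation, together with all numeric values, is an assumption for illustration.

```python
# Sketch of the path ordering in Eq. (18): the original path cost c_o must not
# exceed the weighted cost of deflection path #1, which in turn must not exceed
# that of deflection path #2. All values below are assumed.

def weighted_cost(alpha, beta, mu, c_in_bound, c_out_bound):
    return (alpha * c_out_bound + beta * c_in_bound) / (beta * mu)

c_o = 0.010                                      # assumed original-path cost
c_1 = weighted_cost(0.2, 1.0, 90.0, c_in_bound=1.0, c_out_bound=2.0)
c_2 = weighted_cost(0.5, 1.0, 70.0, c_in_bound=1.2, c_out_bound=2.5)

# A wavelength is assigned along the original (least-cost) path while this holds.
assert c_o <= c_1 <= c_2
print(round(c_o, 4), round(c_1, 4), round(c_2, 4))
```
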
The evaluation is carried out on a multi-node network using OMNeT++ (version 5.4).
In Fig. 6, node 0 is the source (s), while node 12 is the destination. Source routing using the random shortest path first algorithm is assumed. An edge node configuration is shown in Fig. 7.
Fig. 6

Network model

Fig. 7

Edge node configuration

We further make additional assumptions as follows:
  • At the source node, all bursts are categorized according to QoS constraints, e.g., blocking and delay.

  • The various links in the network vary in length. They are also bidirectional, with each fiber operating 16 wavelengths, 2 of which are dedicated to signaling purposes.

  • Besides the original path, only two other deflection paths are available between this node and the destination.

This means the original path is preferred over deflection path #1 and path #2, respectively, as it has the lowest cost.

We first compare the performance of the proposed CDR and WA scheme (prop CDR_prop WA) with regard to loss probability (PB).

In so doing, we compare it with:
  • CDR with random WA (prop CDR_rand WA), in which the wavelengths are randomly assigned.

  • shortest path first together with random WA (SPF_rand WA).

  • random path and random WA (rand_rand WA).

  • SPF and proposed WA (SPF_prop WA).

Figure 8 shows several plots of the PB as a function of varying traffic load. From this graph, it is observed that the proposed CDR as well as proposed wavelength assignment (prop CDR_prop WA) outperforms the rest of the schemes.
Fig. 8

End-to-end PB versus load

Random routing coupled with the proposed WA (rand_prop WA) also shows fairly good performance as it tends to distribute traffic among the available routes. It is generally concluded that a combination of CDR and the proposed WA will reduce end-to-end blocking probabilities.

Figure 9 shows how path blocking varies as a function of the aggregate number of wavelengths available on the path. The traffic load is maintained at 100%. An increase in the number of fibers per path results in reduced blocking. Noticeably, the traffic is evenly and uniformly spread across the fibers, which leads to reduced blocking. We also explore the effect of increasing the number of wavelengths on blocking. Once again, a Poisson arrival process is used, in which each fiber's traffic load is set to 100%. The simulation scenario is this time repeated with three randomly chosen sets of ingress and egress node pairs.
Fig. 9

Loss probability as a function of number of fibers per path

Further, by comparison, it can be observed from Fig. 10 that the proposed scheme performs considerably better as the number of wavelengths is increased, while at the same time the available resources are utilized uniformly and rationally.
Fig. 10

Loss probability as a function of number of wavelengths per fiber

We gradually increase the burst arrival rate from 0 to more than 100% so that the controller queue is always above the q2 threshold value. In doing so, it is noted that deflection does reduce blocking, even though it may propagate or trigger congestion/contention on the deflection routes. From Fig. 11, it is observed that, by comparison, the proposed scheme is outperformed by the SPF_prop WA scheme at very high loads. As expected, the number of deflections increases almost exponentially for all the schemes. It may thus be necessary to regulate the volumes of deflected traffic. Figure 12 plots the performance of the various schemes as a function of the total number of nodes traversed. The controlled scheme performs comparably better at high traffic volumes, as it regulates the actual numbers deflected, e.g., some bursts are discarded.
Fig. 11

Average number of deflected bursts versus network load

Fig. 12

End-to-end delays versus number of nodes

End-to-end delays in the overall network are plotted as a function of the total number of nodes in Fig. 12.

In this case, we compute the delays from the point of deflection. As seen from Fig. 12, both the proposed scheme and SPF_prop WA perform comparably. This is because, fundamentally, both opt for the shortest paths from the deflection point to the ultimate destination egress node.

Conclusion

In this paper, we proposed and described a controllable deflection routing (CDR)-based scheme that allows the deflection of bursts to alternate paths only after preset controller buffer thresholds are surpassed. The scheme is coupled with a proposed WA approach to significantly improve network performance, especially in terms of the delay and blocking probability QoS metrics. The proposed CDR scheme's performance is compared to other existing similar schemes or variants, such as the ones discussed in [8] and [9]. Both analytical and simulation evaluations were carried out. It is generally found that the proposed CDR and WA scheme significantly improves end-to-end blocking and minimizes the end-to-end differential delays caused by bursts originating from the same source having to follow different paths. In that way, jitter levels are minimized and their effects rendered negligible.


Acknowledgements

The work was supported by Durban University of Technology’s Research Office. Funding was provided by Durban University of Technology (Grant Nos. 00001, 00002).

References

  1. K. Hirata, T. Matsuda, T. Takine, Dynamic burst discarding scheme for deflection routing in optical burst switching networks. Opt. Switch. Netw. 4, 106–120 (2007)
  2. S. Haeri, L. Trajković, Intelligent deflection routing in buffer-less networks. IEEE Trans. Cybern. 45(2), 316–327 (2015)
  3. I. Ouveysi, F. Shu, W. Chen, G. Xiang, M. Zukerman, Topology and routing optimization for congestion minimization in optical wireless networks. Opt. Switch. Netw. 7(3), 95–107 (2010)
  4. F. Lezama, G. Casta, A. Sarmiento, B. Indayara, B. Martins, Differential evolution optimization applied to the routing and spectrum allocation problem in flexgrid optical networks. Photon Netw. Commun. 31(1), 129–146 (2016)
  5. K. Christodoulopoulos, E. Varvarigos, K. Vlachos, New burst assembly scheme based on the average packet delay and its performance for TCP traffic. Opt. Switch. Netw. 4(3), 200–212 (2007)
  6. D.V. Efrosinin, M.P. Farkhadov, N.V. Stepanova, A study of a controllable queueing system with unreliable heterogeneous servers. Autom. Remote Control 79(2), 265–285 (2018)
  7. A.I. Abd El-Rahman, S.I. Rabia, H.M.H. Shalaby, MAC layer performance enhancement using control packet buffering in optical burst-switched networks. J. Lightw. Technol. 30(11), 1578–1586 (2012)
  8. P. Sakthivel, P. Krishna, Multi-path routing and wavelength assignment (RWA) algorithm for WDM based optical networks. Int. J. Eng. Trends Technol. 10(7), 322–327 (2014)
  9. S. Li, M. Wang, E.W.M. Wong, V. Abramov, M. Zukerman, Bounds of the overflow priority classification for blocking probability approximation in OBS networks. J. Opt. Commun. Netw. 5(4), 378–393 (2013)
  10. S. Bregni, A. Caruso, A. Pattavina, Buffering-deflection tradeoffs in optical burst switching. Photon. Netw. Commun. 20(2), 193–200 (2010)
  11. E. Okly, N. Wada, S. Okamoto, N. Yamaka, K. Sato, Optical networking paradigm: past, recent trends and future directions. IEICE Trans. Commun. E100-B(9), 1564–1580 (2017)
  12. Y. Ito, Y. Mori, H. Hasegawa, K. Sato, Optical networking utilizing virtual direct links, in 42nd European Conference and Exhibition on Optical Communication (ECOC 2016), W.4.P1.SC6.3 (Düsseldorf, 2016)
  13. K. Sato, Optical networking and node technologies for creating cost effective bandwidth abundant networks, in Proceedings of the OECC/PS 2016 (The 21st Opto-Electronics and Communications Conference/International Conference on Photonics in Switching 2016), ThA1-2 (Niigata, 2016)
  14. Y. Uematsu, S. Kamamura, H. Date, H. Yamamoto, A. Fukuda, R. Hasashi, K. Koda, Future nationwide optical network architecture for higher availability and operability using transport SDN technologies. IEICE Trans. Commun. E101-B(2), 462–474 (2018)
  15. A. Misawa, S. Kataya, Resource management architecture of metro aggregation network for IoT traffic. IEICE Trans. Commun. E101-B(3), 620–627 (2018)

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Faculty of Engineering, Durban University of Technology, Durban, South Africa
