Abstract
Dynameq is a simulation-based dynamic traffic assignment (DTA) model. It employs an iterative solution method to find the user-optimal assignment of time-varying origin–destination demands to paths through a road network, where the path travel times – which depend on the assigned path flows – are time-varying and determined using a detailed traffic simulation model. Increasing congestion and the use of increasingly sophisticated measures to manage it – such as adaptive traffic control; reserved, reversible, and tolled lanes; and time-varying congestion pricing – have created a need for models that are more detailed and realistic than the static assignment models traditionally used in transportation planning. DTA models have begun to fill that need and have been successfully applied to real-world networks of significant size. This chapter provides a description of the assignment and simulation models that comprise the software, a discussion of fundamental concepts such as user equilibrium and stability, an introduction to calibration methodology for simulation-based DTA, and a brief description of a typical project.
9.1 Model Building Principles
9.1.1 Introduction
Dynameq, which stands for “dynamic equilibrium,” is a simulation-based dynamic traffic assignment (DTA) model. The computational model consists of two main components: a traffic flow simulation model and a routing model. These two modules are concerned with different aspects of driver behavior. The routing model imitates how drivers choose their routes through the network to their desired destinations. The traffic flow simulation concerns all other aspects of the driving process: decisions to accelerate and decelerate due to traffic lights, signage, and interactions with other vehicles, and the process of selecting a lane and executing a lane-change maneuver. The overall structure of the model is depicted in Fig. 9.1. As with all equilibrium approaches to the traffic assignment problem, the solution method is iterative, repeating the simulation and routing computations many times over until it converges to a satisfactory solution. This procedure is analogous to the learning process of drivers in the real world repeating the same trips, such as the morning or afternoon commute, over a sequence of days.
At the start of each iteration (or “day”), the routing model generates the time-dependent path input flows, based on the time-dependent path travel times generated by the traffic simulation on the previous iteration (or “day”). The traffic simulation, more generically referred to as a network loading model, loads the network by simulating the movements of individual vehicles, as defined by the path input flows, as they make their journeys through the network. Thus, the outputs of the routing model are the inputs to the traffic simulation, and vice versa. The simulation model simultaneously generates various measures that describe the evolution of traffic flows through the network, such as flow rates, speeds, and densities for individual links, lanes, turns, and nodes. On the first iteration, in the absence of a previous iteration to generate link travel times, the free-flow travel times are used as inputs to the routing model.
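The alternation between routing and network loading can be illustrated with a deliberately tiny example. The sketch below is not Dynameq's simulator: it replaces the traffic simulation with two invented linear travel time functions on two parallel routes and uses a simple averaging step, but it shows how repeated reassignment based on the previous iteration's travel times converges toward equal times on the used routes.

```python
# Toy illustration of the iterative equilibrium loop: two parallel routes
# and fixed demand. The linear travel time functions below are invented
# stand-ins for the traffic simulation (network loading) step.

def travel_times(flows):
    # "Network loading": travel time on each route as a function of its flow
    return (10 + 0.01 * flows[0], 15 + 0.005 * flows[1])

def solve_toy_equilibrium(demand=1000.0, iterations=200):
    flows = [demand, 0.0]                 # all-or-nothing initial solution
    for l in range(2, iterations + 2):
        t = travel_times(flows)
        # "Routing": send all demand to the route that was fastest last time
        target = [demand, 0.0] if t[0] <= t[1] else [0.0, demand]
        # Averaging step: move flows a fraction 1/l toward the new assignment
        flows = [f + (g - f) / l for f, g in zip(flows, target)]
    return flows, travel_times(flows)

flows, times = solve_toy_equilibrium()
# At convergence the two used routes have (nearly) equal travel times
```

In Dynameq the "network loading" line is the event-based traffic simulation and the reassignment operates on time-dependent path flows per departure interval, but the fixed-point structure of the iteration is the same.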
The simulation model is based on the efficient discrete-event (event-based) traffic flow simulation model of Mahut (2001). The model is not as detailed as conventional discrete-time (time-step) simulation models (microsimulation models), but is nevertheless based on the same underlying sub-models, namely car following, lane changing, and gap acceptance. The underlying design principle of the traffic flow simulation model in Dynameq is to provide an efficient trade-off between traffic flow fidelity (realism) and computation time. The low computation (CPU) times are particularly useful due to the iterative nature of the algorithm (see Fig. 9.1), which requires repeating the simulation many times over. The traffic flow simulation model is presented in detail in Section 9.2, Core Traffic Flow Models.
Mathematically, the DTA model is formulated as a time-discrete variational inequality and two solution methods are available. One is based on a straightforward adaptation of the method of successive averages (MSA ) and the other on a heuristic adaptation of a gradient-based method used in solving the static network equilibrium model in the space of path flows. These methods can be considered to be heuristic since the dependence of the travel times on the link flows is complex and not given by an analytical function. This is due to the complexity of the traffic simulation which carries out the network loading step in the algorithm. A realistic representation of the system requires that the network loading properly represent traffic delays, i.e., in a way that is consistent with traffic flow theory . The resulting assignment map is discontinuous and difficult to characterize analytically.
The time-discrete nature of the assignment model means that the time-dependent path input flows are defined over a sequence of short time intervals, during each of which the probability of any given path being used for a given origin–destination (O–D) pair remains constant. These time intervals are referred to as assignment intervals, or sometimes simply departure-time windows (or intervals).
The routing model in this approach functions simultaneously as the route-generation model. A maximal number of required paths (N) is provided exogenously, and at each of the first N iterations, a time-dependent shortest path (TDSP) algorithm is used to determine the shortest path for each O–D pair and each departure-time interval. This path is added to the existing path set before the route input flows are re-calculated, thus gradually building up the set of paths and simultaneously dispersing the traffic over a wider set of paths with each iteration. After iteration N, the path set generally remains fixed. The iterations continue until a stopping criterion is satisfied, indicating that the current assignment is sufficiently close to dynamic equilibrium conditions. The assignment methods and stopping criteria are presented in detail in Section 9.3, Dynamic Traffic Assignment.
9.1.2 Model Building Principles: Dynamic Traffic Assignment
The traffic assignment model in Dynameq is a pre-trip dynamic equilibrium model. “Pre-trip” refers to the fact that each simulated driver makes a single path choice before departing on his trip, and this path is followed to the destination without being reconsidered en route. “Equilibrium” refers to the fact that the path choices, or path demands (in vehicles or vehicles per hour), in the solution of the model result in path travel times that approximately satisfy dynamic user-equilibrium conditions. These conditions are a time-varying extension of the Wardrop (1952) user-equilibrium conditions for static assignment: for any given departure time, a driver cannot improve his travel time by unilaterally changing paths. Pre-trip equilibrium assignment models are appropriate for off-line planning applications, which can range from short-term operational planning (e.g., impacts of road maintenance projects) to long-term travel forecasting exercises.
Friesz et al. (1993) formulated a dynamic equilibrium assignment model as an infinite-dimensional variational inequality. The infinite dimensionality of the model arises from the fact that time is considered to be continuous. It is usual to consider a time-discrete formulation of the model, where time is subdivided into discrete intervals, each of which is considered an interval for the departure of trips. The solution of the time-discrete formulation of the equilibrium dynamic traffic assignment problem seeks to obtain, for any given departure-time window, flows that equalize the travel times on all used paths for every O–D pair.
The extension of the Wardrop user-equilibrium principle to the dynamic (time-varying) context is based on experienced travel time, rather than instantaneous travel time. Instantaneous travel time implies that the path travel time is evaluated by adding up the link travel times (which are time-varying in a dynamic model) for the links of the path based on their values at a given instant in time. Under this definition, a given path, for the duration of a single trip along that path, has many different travel times, depending on when the travel time is evaluated. For example, microscopic traffic simulation models typically use instantaneous travel times since the demand is assigned to paths during a single execution of the simulation model (one-pass assignment). By contrast, a path has only one experienced travel time for any given trip, which is an estimate of the average travel time actually experienced by a driver, for a given path and departure-time window. Using the experienced travel time is also more behaviorally sound when modeling habitual trips such as those during the morning and evening peak periods.
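The distinction can be made concrete with a small numerical example (all numbers invented): a three-link path on which congestion forms on the last link shortly after the vehicle departs.

```python
# Instantaneous vs. experienced travel time on a 3-link path.
# Link travel times vary with clock time t (minutes); values are invented.

def link_time(link, t):
    base = 5.0                  # free-flow time of every link
    if link == 2 and t >= 8:    # congestion forms on the last link at t = 8
        base += 6.0
    return base

def instantaneous_time(departure):
    # Every link time evaluated at the departure instant
    return sum(link_time(k, departure) for k in range(3))

def experienced_time(departure):
    # Each link time evaluated at the moment the vehicle actually enters it
    t = departure
    for k in range(3):
        t += link_time(k, t)
    return t - departure

# Departing at t = 0, the instantaneous measure sees 15 min, but the
# vehicle reaches the last link at t = 10, after congestion has formed,
# and experiences 21 min.
```

Once conditions are stationary (e.g., a departure well after congestion has formed), the two measures coincide; they differ exactly when conditions change during the trip.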
Since the experienced route travel times result from the interactions of the vehicles as they move through the network from their origins to their destinations, they cannot be known in advance, i.e., when the route choices are made. The path travel times are thus both an input to the route decisions and an output of them, and this kind of cyclical problem can only be solved properly with an iterative approach such as the one shown in Fig. 9.1. As mentioned above, the iterations of the model can be thought of as a sequence of days over which drivers adapt their route choices: on each day, before commencing the trip, the route choices are reconsidered based on the travel times experienced on the previous day.
This gives the iterative solution a predictive property, because
-
(a)
the routing decisions are based on an estimate of what traffic conditions will be along the route using the travel times of the previous iteration and
-
(b)
as the model converges to a solution, the link travel times change relatively little from one iteration to the next.
This predictive property cannot be captured using instantaneous path travel time, since the instantaneous travel time measure is always based on what the link travel times are at the time of the decision, e.g., at the trip departure time.
This is true even if the instantaneous travel time is re-evaluated several times during the trip and the driver is allowed to re-consider the route at intermediate points from which an alternative route to the destination is available. For this reason, models based on instantaneous travel time, and in particular those which do not employ an iterative algorithm (i.e., only run the simulation once), are referred to as reactive models, because drivers make their route choices progressively, in response to the evolving traffic congestion on the network.
In many situations, particularly congested ones, the reactive approach can yield a solution significantly different from the equilibrium solution, i.e., the one obtained with an iterative (predictive) approach. Some DTA models use a hybrid predictive/reactive approach, allowing drivers to react en route within an iterative solution method. In general, including any reactive component in the route choice decision can increase the instabilities of the model solution and increase the probability of deadlock (gridlock) occurring in congested conditions. For this reason, reactive en route path switching is not currently modeled in Dynameq.
9.1.3 Model Building Principles: Traffic Flow Simulation
The traffic flow simulation model in Dynameq moves individual vehicles on a detailed (lane-based) network using car-following, lane-changing, and gap acceptance models. This type of traffic model is commonly referred to as a microscopic traffic simulation (or micro-simulation). From a practical standpoint, a microscopic traffic flow simulator can be defined as a model which explicitly represents the movements and interactions of individual vehicles, and in which the primary outputs, such as link flows, travel times, and densities, are a direct result of these interactions. By contrast, macroscopic models represent traffic as a fluid and are based on hydrodynamic or gas-kinetic descriptions of traffic flow (Hoogendoorn and Bovy, 1999; Diakaki and Papageorgiou, 1996; Messmer, 2000a; Messmer, 2000b; Papageorgiou, 1990; Richards, 1956; Lighthill and Whitham, 1955). It should be mentioned that in recent years, practitioners have also used macroscopic to refer to static assignment models; the definition above is the traditional one from the traffic flow theory literature, and since it has no alternative name, it is maintained here, as is the term static assignment model.
A unique feature of the traffic simulation model in Dynameq is that it is solved using an event-based (discrete-event) algorithm, rather than the time-step (discrete-time) method typically employed in other traffic simulation packages, both commercial and academic (http://www.tss-bcn.com (Aimsun), accessed 12 Sep 2009; http://www.ptv.de (Vissim), accessed 12 Sep 2009; http://www.sias.com (Paramics), accessed 12 Sep 2009; http://sumo.sourceforge.net/ (SUMO), accessed 12 Sep 2009; http://web.mit.edu/its/products.html (MITSIMLab), accessed 12 Sep 2009; http://www.its.leeds.ac.uk/software/dracula (Dracula), accessed 12 Sep 2009; http://ops.fhwa.dot.gov/trafficanalysistools/corsim.htm (Corsim), accessed 12 Sep 2009; Van Aerde, 1999). Time-step and event-based models differ fundamentally in how they handle time: in a time-step model, time is the independent variable, while in an event-based model, time is a dependent variable. These two paradigms are the primary approaches to building simulation models in general.
The main advantage of an event-based approach is that it can be much more computationally efficient than a time-step model, and this is the primary motivation for adopting an event-based simulator in Dynameq. However, it is usually more challenging to build an efficient event-based model than a time-step model, particularly for complex systems, because it is critical to design the event-based model in such a way as to minimize the number of events. The use of relatively simple car-following and lane-changing models in Dynameq is directly tied to the fact that the model is solved with an event-based approach.
In recent years, the term mesoscopic has become quite widely used and generally refers to any model that falls somewhere between the macroscopic and microscopic definitions. Mesoscopic models arose from efforts to bring added realism to the macroscopic modeling paradigm. In practice, the term mesoscopic has also become somewhat synonymous with dynamic traffic assignment (DTA), since most DTA models employ some type of mesoscopic traffic modeling approach (Ben-Akiva et al., 1998; Ziliaskopoulos and Lee, 1997; Mahmassani et al., 2001; Leonard et al., 1989). Dynameq’s microscopic approach, although embedded within an iterative DTA model, is fundamentally different from this type of mesoscopic traffic flow model.
In effect, what all DTA models have in common is the type of problem they are trying to solve, more so than the specific ways in which they solve it. The term mesoscopic applies equally well to aspects of the problem, such as the typical network size – which falls somewhere between those that are commonly handled by microscopic simulators and static assignment models – and the required level of detail, or realism, of the modeled system.
The appropriate level of detail is determined by two competing factors:
-
the system being modeled has increasingly complex and sophisticated elements, from adaptive traffic signal control to variable message signs to high-occupancy toll (HOT) lanes, which require a relatively high-fidelity model to be properly represented and evaluated;
-
the scale of the network makes it impractical to have an exact, complete, and error-free representation of the physical system in the model;
The need to reconcile these two considerations is the main challenge in building a DTA model. Specifically, there is a need to avoid false degrees of precision by maintaining consistency between the level of detail (complexity) of the model components and the known precision of the input data. Moreover, there needs to be consistency in the precision that is assumed by the different components of the model, such as route choice, lane-selection, gap acceptance, and car following. For example,
-
it matters little if a car is braking at 2.0 or 2.5 m/s² if the car really should be in a different lane;
-
similarly, it matters little if the car is in the correct lane if it is not on the right route;
-
and ultimately, being on the right route is unimportant if the origin or destination of the trip is not correct, i.e., if the trip should not even have been made (modeled).
Understanding the relative importance of the various model components and their related input data, and seeking consistency among them, is what ultimately leads to an efficient model implementation that provides an optimal trade-off between the quality and usefulness of the results, and the computational burden and human effort required for collecting the input data and calibrating an application of the model.
9.2 Core Traffic Flow Models
As with all traffic simulators, the core of the model is the underlying car-following model and its solution method. The traffic flow simulator in Dynameq is based on the following simplified car-following model:

$$x_f \left( t \right) = \min \left\{ x_f \left( t - \varepsilon \right) + V\varepsilon ,\; x_l \left( t - R \right) - L \right\} \qquad (9.1)$$

where \(x(t)\) is the trajectory of a vehicle (position as a function of time), L is the effective vehicle length, R is the driver/vehicle response time, V is the free-flow speed, and ε is an arbitrarily short time interval. The subscripts f and l denote the trajectories of two vehicles in sequence, one following and the other leading, respectively. The first term inside the min operator represents the farthest position downstream that can be attained at time t based on the follower’s position at time \(t - \varepsilon\), as constrained by the maximum speed of the vehicle, V. The second term inside the min operator represents the farthest position downstream that can be attained based on the trajectory of the next vehicle downstream in the same lane, using a simple collision-avoidance rule (Mahut, 2001; Mahut, 1999; Newell, 2002).
This car-following model is referred to as a simplified car-following model, or lower order model, since it only defines the position of each vehicle in time, rather than vehicle speed or acceleration. Traditionally, and most commonly, car-following models define the acceleration \(a_f \left( t \right)\) as a function of the state variables of the follower and the leader at time \((t - R)\) (Brackstone and McDonald, 1999; Gabard, 1991). When these models are solved, i.e., using a discrete-time approach, the trajectory of each vehicle is characterized by constant acceleration over short time intervals, from which vehicle speed and position can be computed (with appropriate boundary conditions). In the simplified model used here, the trajectory is characterized by constant speed over short time intervals. In this form, the model can also be seen as a continuous-time analogy to cellular automata models used for traffic simulation (Nagel and Schreckenberg, 1992).
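A small numerical experiment shows the behavior of the min rule described above. The sketch evaluates it on a fine time grid purely for illustration (the actual model is solved event-based, without such a grid); the leader trajectory and all parameter values are invented.

```python
# Evaluating the simplified car-following rule on a time grid:
# x_f(t) = min(x_f(t - EPS) + V*EPS, x_l(t - R) - L)

EPS = 0.1    # evaluation step (s); stands in for the short interval epsilon
V = 15.0     # free-flow speed (m/s)
L = 6.0      # effective vehicle length (m)
R = 1.2      # driver/vehicle response time (s)

def leader_position(t):
    # Invented leader: drives at free-flow speed, then stops at x = 150 m
    return min(V * max(t, 0.0), 150.0)

def follower_trajectory(x0, horizon):
    x = [x0]
    for n in range(1, int(horizon / EPS) + 1):
        t = n * EPS
        free = x[-1] + V * EPS               # speed-limited term
        safe = leader_position(t - R) - L    # collision-avoidance term
        x.append(min(free, safe))
    return x

traj = follower_trajectory(-30.0, horizon=20.0)
# The follower comes to rest exactly L metres behind the stopped leader
```

The collision-avoidance term never produces backward motion here; it simply caps the follower's position once the leader's (delayed, length-shifted) trajectory becomes binding.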
This model is solved using an event-based method, which first requires converting the statement of the car-following relationship in eq. (9.1) from the form \(x(t)\) to \(t(x)\), which yields

$$t_f \left( x \right) = \max \left\{ t_f \left( x - \delta \right) + \frac{\delta }{V},\; t_l \left( x + L \right) + R \right\} \qquad (9.2)$$

where \(\delta = V\varepsilon\) is an arbitrarily short distance interval.
From this relationship one can derive the following expression, which calculates only the link entrance and exit times of each vehicle:

$$t_n \left( X_1 \right) = \max \left\{ t_n \left( 0 \right) + \frac{X_1 }{V_1 },\; t_{n - 1} \left( X_1 \right) + \frac{L}{V_2 } + R \right\} \qquad (9.3)$$
where \(X_1\) and \(X_2\) are the lengths of two sequential links, with speeds \(V_1\) and \(V_2\), respectively. The subscript n indicates vehicle numbering in sequential order, i.e., vehicles n and \(n - 1\) represent a follower and leader, respectively. The vehicle attributes represented by L and R are assumed to be identical over the entire traffic stream, and each vehicle adopts the link-specific free-flow speed when traversing a given link. The link lengths are assumed to be integer multiples of the vehicle length L.
This “link-based” solution (9.3) provides a very practical and computationally efficient way to model traffic on a single lane, i.e., to rigorously solve the car-following model (eq. (9.1)) over a linear sequence of links without actually calculating the position of each vehicle at each second or less (using a time-step solution). The ability to rigorously calculate longitudinal traffic dynamics over entire links has also been demonstrated for the kinematic wave model based on the two-segment linear (triangular) relationship between traffic flow and density (Newell, 1993), often called the fundamental diagram . Not surprisingly, the three-parameter car-following model shown here (eq. (9.1)) also yields the triangular flow–density relationship (Mahut, 2001; Mahut, 1999).
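The practical effect of the link-based solution can be sketched as follows. The snippet assumes, as an interpretation of the text above rather than the chapter's exact equation, that a vehicle's link exit time is the later of its free-flow arrival at the downstream end and the previous vehicle's exit plus the minimum headway \(L/V_2 + R\) imposed by the downstream link; all numbers are invented.

```python
# Link exit times under a minimum-headway interpretation of the
# link-based recurrence. Parameter values are illustrative.

L_VEH = 6.0            # effective vehicle length (m)
R = 1.2                # driver/vehicle response time (s)
X1, V1 = 300.0, 15.0   # current link: length (m) and free-flow speed (m/s)
V2 = 10.0              # free-flow speed of the downstream link (m/s)

def exit_times(entry_times):
    headway = L_VEH / V2 + R       # minimum exit headway (s)
    out = []
    for t_in in entry_times:
        free_flow = t_in + X1 / V1
        out.append(free_flow if not out else max(free_flow, out[-1] + headway))
    return out

# Five vehicles entering 1 s apart exit 1.8 s apart: the downstream link
# meters the flow to 3600 / (L_VEH / V2 + R) = 2000 veh/h.
times = exit_times([0.0, 1.0, 2.0, 3.0, 4.0])
```

The implied saturation flow 1/(L/V₂ + R) is exactly the capacity of the triangular flow–density relationship mentioned above, which is why only entrance and exit times need to be computed.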
A multi-lane version of the above relationship (eq. (9.3)) maintains the same property of calculating only the entrance and exit times of each vehicle and also captures the interactions between vehicles due to lane-changing maneuvers (Mahut, 2001). This multi-lane extension requires each driver to select his departure lane upon entering a link and computes the resulting delay effect of a single lane-change maneuver – across several lanes, if necessary – which occurs at the first position on the link at which the vehicle encounters a delay propagating from downstream, on any of the lanes spanned by the maneuver. The intent of the multi-lane model is to capture the reductions in effective (operational) capacity on links, such as freeway segments, where a significant amount of lane changing occurs, particularly due to mandatory lane changes that drivers must execute in order to remain on their intended paths. The model also employs a complex set of heuristics for modeling a driver’s lane-selection decisions (Florian et al., 2008). It has also been extended to allow vehicle length and driver response time to vary individually by vehicle (Florian et al., 2008). The model can thus be characterized as a continuous-space, continuous-time, discrete-flow (vehicle-based) model that employs a lane-based representation of the network.
The above solution (eq. (9.3)), or the multi-lane version of it (Mahut, 2001), provides the time at which a vehicle crosses the node between two sequential links, where the node in question joins only those two links. However, this relationship can be easily extended to handling nodes with multiple incoming and outgoing links, considering conflicts between vehicle trajectories and including an explicit representation of traffic signals. This primarily requires including an additional term inside the max operator in (eq. (9.3)) to include constraints based on conflicting vehicles that have crossed the node prior to the vehicle in question, and then applying this formula to all vehicles waiting to cross a given node in the network. A gap acceptance model is then applied to resolve conflicts between these vehicles as required (discussed below), and the next vehicle to cross the node is determined. This process allows the traffic flow component of the simulation model to be solved with an event list that is of the same size as the number of nodes in the network. A more complete description of the event-based algorithm is given in Mahut (2001).
For modeling gap acceptance behavior, i.e., the interaction between two vehicles with conflicting trajectories (e.g., at an uncontrolled intersection), Dynameq employs a two-parameter gap acceptance model. The decision of whether a lower priority vehicle will precede a higher priority vehicle at the conflict point is based on two quantities, which are as follows:
-
the available gap (g): the time difference between the arrival of the two vehicles to the conflict point (higher priority vehicle minus lower priority vehicle);
-
the relative waiting time (w): the difference between the time spent waiting at the stop line for an available gap (lower priority vehicle minus higher priority vehicle);
These two quantities are used in conjunction with the following two parameters:
-
the critical gap (G): the value of available gap at which there is a 50% chance of the lower priority vehicle preceding the higher priority vehicle;
-
the critical wait (W): the value of relative waiting time at which there is a 50% chance of the lower priority vehicle preceding the higher priority vehicle;
The probability that the lower priority vehicle precedes the higher priority vehicle, called the precedence probability (P), is then computed as follows:

$$P = \max \left\{ \min \left( \max \left( \frac{g - G/2}{G},\,0 \right),\,1 \right),\; \min \left( \max \left( \frac{w - W/2}{W},\,0 \right),\,1 \right) \right\} \qquad (9.4)$$
This model considers the effects of available gap and waiting time independently by taking the maximum of two linear density functions. Each function increases from zero to unity over the range (x/2, 3x/2), where x represents a model parameter (G or W). The waiting time term takes into account the effect of a driver’s impatience when he is unable to enter a conflicting traffic stream. Practically speaking, it ensures a minimum flow rate on the lower priority turning movement under heavily congested conditions in which no gaps of reasonable size are available.
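The description above fully determines the shape of the precedence probability; the sketch below implements it directly, with illustrative values for G and W.

```python
# Two-parameter gap acceptance: P is the larger of two linear ramps,
# each rising from 0 to 1 over (x/2, 3x/2), so P = 0.5 exactly at the
# critical gap G or the critical wait W. The G and W values are
# illustrative, not calibrated.

def ramp(value, x):
    # Linear function: 0 below x/2, 1 above 3x/2, linear in between
    return min(max((value - x / 2.0) / x, 0.0), 1.0)

def precedence_probability(g, w, G=4.0, W=30.0):
    return max(ramp(g, G), ramp(w, W))

# With no usable gap, a sufficiently long relative wait still yields a
# positive precedence probability, which is what guarantees a minimum
# flow rate on the low-priority movement under heavy congestion.
```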
As described above, the simulation model is solved using an event-based algorithm, which is a fundamentally different approach from time-step models. In a time-step model, the state variables of all vehicles are updated at the end of each discrete-time interval (typically between 0.1 and 1.0 s), based only on the known state variables at the previous time-step. In addition to the potential efficiency in computation that can be achieved with an event-based approach – albeit with a reduction in the complexity of the car-following and lane-changing models – some other basic differences between event-based and time-step models should be briefly mentioned.
The results generated by a time-step model depend on the selected size of the time-step, and thus changing the time-step will inevitably change the results. If the time-step is small enough, then making it smaller should not have a significant impact on the results, but how small is small enough depends on the model and is not always easy to determine. The conventional approach is to make the time-step equal to the driver response time, which as a result has to be common across all vehicles (for smaller time-steps, all driver response times must be multiples of the time-step used). The appropriate size of the time-step, needed to properly apply a gap acceptance model and avoid undesirable model properties, is an ongoing topic of research and discussion (Chevallier and Leclerq, 2009).
Event-based models, by contrast, do not employ a time-step and thus produce a single set of results for any given set of inputs (and for a given random seed, of course, as all microsimulators include stochastic components which require the use of quasi-random number generation). Specifically, the lack of a time-step in an event-based model ensures that for the given inputs, the correctness of the outputs is not subject to the selection of an appropriate time-step size. Moreover, driver response time can be real-valued and drawn randomly from an appropriate distribution. This allows computations that involve quantities on the order of the response time to be solved rigorously and to properly capture the impact of the assumed response time distribution on gap acceptance behavior.
9.3 Dynamic Traffic Assignment
9.3.1 Mathematical Model
In this section the time-discrete formulation of the equilibrium dynamic traffic assignment model is stated.
The path choices are modeled as decision variables governed by a user-optimal principle where each driver seeks to minimize his experienced path travel time. All drivers have perfect access to information, which consists of the travel times on all paths (used and unused). The solution algorithm takes the form of an iterative procedure designed to converge to these conditions.
The solution approach adopted for solving the dynamic network equilibrium model, eqs. (9.5), (9.6), and (9.7), is based on a temporal discretization into periods \(\tau = 1,2,...,\left| {\frac{{T_d }}{{\Delta t}}} \right|\), where \(\Delta t\) is the chosen duration of a departure-time interval. This results in a time-discrete model.
The mathematical statement of a time-discrete version of the dynamic equilibrium problem is in the space of path flows \(h_k^\tau\), for all paths k belonging to the set \(K_i\) for an origin–destination pair \(i \in I\), at departure interval τ. The time-varying demands are denoted \(g_i^\tau ,\; i \in I\), for all τ. The path flow rates in the feasible region Ω satisfy the conservation of flow and non-negativity constraints

$$\sum\limits_{k \in K_i } h_k^\tau = g_i^\tau ,\quad i \in I,\ {\textrm{all}}\ \tau \qquad (9.5)$$

$$h_k^\tau \ge 0,\quad k \in K_i ,\ i \in I,\ {\textrm{all}}\ \tau \qquad (9.6)$$
and a temporal version of Wardrop’s (Wardrop, 1952) user-optimal route choice results in the model

$$h_k^\tau > 0 \;\Rightarrow\; s_k^\tau = u_i^\tau ,\quad k \in K_i ,\ i \in I,\ {\textrm{all}}\ \tau \qquad (9.7)$$

where \(s_k^\tau\) denotes the experienced travel time on path k for departure interval τ and \(u_i^\tau = \min_{k \in K_i } s_k^\tau\) is the corresponding shortest path travel time,
which can be shown to be equivalent to solving the discrete variational inequality (Friesz et al., 1993)

$$\sum\limits_\tau \sum\limits_{k \in K} s_k^\tau \left( h^{*} \right)\left( h_k^\tau - h_k^{\tau *} \right) \ge 0,\quad \forall h \in \Omega \qquad (9.8)$$
where \(K = \bigcup\limits_{i \in I} K_i\) and \(h^\tau\) is the vector of path flows \((h_k^\tau )\) for all k and τ.
The existence and uniqueness of a solution to this model depend on how the link and path travel times vary with the path input flows, and how the path input flows vary with the link and path travel times. Since these mappings are the output of a simulation model, not an analytical transformation, their properties are not easily verified, and no claims are made about the existence or uniqueness of a solution. The equilibrium principle is simply used as a guide to compute an approximate solution of the time-discrete variational inequality.
The next sections present an MSA-based solution algorithm to this problem, followed by an algorithm inspired by the projected gradient method. A heuristic method which allows the maximum step size to increase with departure time, which is applicable to both the MSA and the gradient-like algorithms, is presented afterward.
9.3.2 MSA-Based Algorithm
As mentioned above, and shown in Fig. 9.1, the solution algorithm consists of two main components other than the computation of the temporal shortest paths: a method to determine a new set of time-dependent path input flows, based on the experienced path travel times of the previous iteration, and a method to determine the actual link flows and travel times that result from a set of path inflow rates. The algorithm furthermore requires a set of initial path flows.
The path input flows \(h_k^\tau ,\;k \in K\) are determined by a variant of the method of successive averages (MSA ), which is applied to each O–D pair i and time interval 𝜏. An initial feasible solution is computed by assigning the demand for each time period to a set of successive shortest paths. Starting at the second iteration, and up to a pre-specified maximum number of iterations, N, the time-dependent link travel times after each loading are used to determine a new set of dynamic shortest paths that are added to the current set of paths.
For all iterations \(l,l \le N\), the volume assigned as input flow to each path in the set is \(\frac{{g\,_i^\tau }}{l}\), \(i \in I,\;all\;\tau\). Subsequently, for iterations \(l,l > N\), only the shortest among used paths is identified and the path input flow rates are redistributed over the known paths as described below.
If the flow of a particular path decreases below a small predetermined value then the path is dropped and its remaining flow is distributed proportionally to the other used paths. This heuristic approach is akin to the restricted simplicial decomposition algorithm of Lawphongpanich and Hearn (1984) for the solution of the static network equilibrium model with fixed demand. The stopping criteria are the maximum number of iterations, L, and a maximum average relative gap, denoted γ. The relative gap measure is discussed below, after the statement of the algorithm.
MSA Equilibrium DTA Algorithm

- Step 0 Initialization (iteration counter l = 1):
  Compute temporal shortest paths based on free-flow travel times.
  Load the demands (traffic simulation) to obtain an initial solution.
  Update the iteration counter: l = l + 1.
- Step 1 Reallocation of input flows to paths:
  - Step 1.1 If (l ≤ N):
    Compute a new dynamic shortest path.
    Assign to each path k the input flow
    $$h_k^{\tau ,l} = \frac{g_i^\tau }{l},\quad k \in K_i ,\;i \in I,\;\textrm{all } \tau$$ (9.9)
  - Step 1.2 If (l > N):
    Identify the shortest among the used paths.
    Redistribute the flows as follows:
    $$h_k^{\tau ,l} = \begin{cases} h_k^{\tau ,l - 1}\left(\dfrac{l-1}{l}\right) + \dfrac{g_i^\tau }{l}, & \textrm{if } s_k^{\tau ,l} = u_i^{\tau ,l};\; k \in K_i ,\;i \in I,\;\textrm{all } \tau \\[6pt] h_k^{\tau ,l - 1}\left(\dfrac{l-1}{l}\right), & \textrm{otherwise} \end{cases}$$ (9.10)
- Step 2 Stopping rule:
  If \(l \ge L\) or \(RGap \le \gamma\), STOP; otherwise, return to Step 1.
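The redistribution rule of Step 1.2 (Eq. 9.10), applied to a single O–D pair and departure-time interval, can be sketched as follows (an illustrative fragment; the dictionary-based layout is an assumption):

```python
def msa_redistribute(h_prev, costs, demand, l):
    """One MSA redistribution step (l > N) for a single O-D pair and
    departure interval: scale every used path's flow by (l-1)/l, then
    add a 1/l share of the demand to the shortest used path (Eq. 9.10).
    h_prev: path id -> previous input flow; costs: path id -> experienced
    travel time; demand: total O-D demand for the interval."""
    shortest = min(costs, key=costs.get)          # shortest among used paths
    h_new = {k: h * (l - 1) / l for k, h in h_prev.items()}
    h_new[shortest] += demand / l
    return h_new
```

Because the scaling factor and the added share sum to the original demand, total flow for the O–D pair is conserved at every iteration.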
While no formal convergence proof can be given for this algorithm, since the network loading map does not have an analytical form, a measure of gap, inspired by that used in static network equilibrium models, may be used to qualify a given solution. It is the difference between the total travel time actually experienced and the total travel time that would have been experienced if every vehicle had the travel time of the current shortest path (for each departure-time interval 𝜏).
Hence a relative gap for each departure-time interval 𝜏 and iteration l may be computed as
$$RGap^{\tau ,l} = \frac{\sum\limits_{i \in I} \sum\limits_{k \in K_i} h_k^{\tau ,l} s_k^{\tau ,l} - \sum\limits_{i \in I} g_i^\tau u_i^{\tau ,l}}{\sum\limits_{i \in I} g_i^\tau u_i^{\tau ,l}}$$ (9.11)
where \(u_i^{\tau ,l}\) are the lengths of the shortest paths at iteration l. A relative gap of zero would indicate a perfect dynamic user-equilibrium flow; clearly, this is an elusive goal for any simulation-based dynamic traffic assignment.
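The gap computation just described can be illustrated for a single departure-time interval (a sketch; the nested-dictionary layout keyed by O–D pair and path is an assumption):

```python
def relative_gap(h, s, u, g):
    """Relative gap for one departure interval: experienced total travel
    time versus the total time if every vehicle used its O-D pair's
    current shortest path.
    h[i][k]: path input flows, s[i][k]: experienced path times,
    u[i]: shortest-path time, g[i]: demand, for each O-D pair i."""
    experienced = sum(h[i][k] * s[i][k] for i in h for k in h[i])
    shortest = sum(g[i] * u[i] for i in g)
    return (experienced - shortest) / shortest
```

A value of zero means every used path's travel time equals the shortest-path time, i.e., a perfect user equilibrium for that interval.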
It is very important to note that this model, even though its general formulation is very similar to flow-based models, is in fact a simulation model that moves individual cars on the links of the network, as discussed in Section 9.2, Core Traffic Flow Models.
9.3.3 Gradient-Like Algorithm
The equilibration algorithms used in static equilibrium models that operate in the space of path flows provide some ideas that may be adapted heuristically for the solution of the dynamic equilibrium traffic assignment problem. These algorithms are adaptations of the classical convex simplex, projected gradient, and reduced gradient algorithms, implemented with a Jacobi or a Gauss–Seidel decomposition scheme. Selected references on the topic are Dafermos (1971), Leventhal et al. (1973), and Patriksson (1994).
In particular, it is very attractive to adapt the equivalent of the projected gradient algorithm, even though there is no formal objective function that can be identified and the model formulation is a time-discrete variational inequality. Since there is no objective function the step sizes used are those of the MSA method (or the modified MSA method described below) and are adapted to ensure that the path flows remain non-negative.
Before stating the mathematical model it is useful to review the general steps of the adaptation of the projected gradient method (Rosen, 1960; Luenberger, 1984) for the static network equilibrium problem. For each O–D pair the general steps of this algorithm, stated qualitatively, are the following:
1. compute the average cost of all used paths (by O–D pair);
2. reduce the flow of paths that have a larger cost than the average;
3. increase the flow on paths that have a smaller cost than the average;
4. keep only the paths with positive flow;
5. add a path if it is shorter than the current equilibrated solution.
The same basic idea is adapted for the equilibrium dynamic traffic assignment algorithm presented here. The only difference is that the method is used as a heuristic and is applied to each departure interval. In the static model, the step size is computed by minimizing the objective function over the paths that change flow. For dynamic assignment, the default step size is the MSA step size. However, it must also be constrained by the smallest step size that annuls the flow on any path (for a given O–D pair and departure-time interval).
To state the algorithm (for one O–D pair), the following notation is used. Let \(K^+\) be the set of paths with positive flow; \(s_k\) the cost (time) of path k and \(\bar s\) the average of the path costs; \(p_k = h_k / g_i\) the proportion of input flow assigned to path \(k \in K^+\); \(d_k\) the direction of change for each path and \(d_k^n\) the normalized direction; and \(\alpha_{\textrm{MSA}}\) the MSA step size.
The gradient-like algorithm modifies the flow changes using the following steps. First, compute the direction components \(d_k = \bar s - s_k\), \(k \in K^+\). The direction vector is then normalized, \(d_k^n = \frac{\bar s - s_k}{\sum_k |d_k|}\), in order to satisfy the flow conservation conditions. The largest step size \(\alpha_{\max}\) that would reduce the input proportion of some path to zero is \(\alpha_{\max} = \min\left[\left.\frac{p_k}{-d_k^n}\,\right|\,d_k^n < 0\right]\). The actual step size is then \(\alpha = \min\left(\alpha_{\textrm{MSA}},\;\alpha_{\max}\right)\), which is used to update the path proportions, \(p_k = p_k + \alpha d_k^n\); the new path input flows are \(g_i \cdot p_k\).
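The direction, normalization, and step-size computations above can be sketched for one O–D pair and departure interval (illustrative Python only, not Dynameq's implementation):

```python
def gradient_step(p, s, alpha_msa):
    """One gradient-like update of path proportions: move flow from
    paths costlier than the average toward cheaper ones, with the step
    capped so that no proportion becomes negative.
    p: path id -> input-flow proportion (sums to 1); s: path id -> cost."""
    s_bar = sum(s.values()) / len(s)                  # average path cost
    d = {k: s_bar - s[k] for k in s}                  # direction of change
    norm = sum(abs(v) for v in d.values())
    if norm == 0:                                     # already equilibrated
        return dict(p)
    dn = {k: v / norm for k, v in d.items()}          # normalized direction
    # largest step that keeps every proportion non-negative
    alpha_max = min(p[k] / -dn[k] for k in dn if dn[k] < 0)
    alpha = min(alpha_msa, alpha_max)
    return {k: p[k] + alpha * dn[k] for k in p}
```

Because the normalized directions sum to zero across paths, the updated proportions still sum to one, so total O–D demand is conserved.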
The full DTA algorithm based on these adapted projected-gradient steps is stated next.
Gradient-Like Equilibrium DTA Algorithm

- Step 0 Initialization (iteration counter l = 1):
  Compute temporal shortest paths based on free-flow travel times.
  Load the demands (traffic simulation) to obtain an initial solution.
  Update the iteration counter: l = l + 1.
- Step 1 Reallocation of input flows to paths:
  - Step 1.1 If (l ≤ N):
    Compute a new dynamic shortest path.
    Assign to each path k the input flow \(h_k^{\tau ,l} = \frac{g_i^\tau }{l}\).
  - Step 1.2 If (l > N):
    Compute the direction components
    $$d_k = \bar s - s_k ,\quad k \in K^+$$ (9.12)
    Normalize the vector
    $$d_k^n = \frac{\bar s - s_k }{\sum\limits_k \left| d_k \right|}$$ (9.13)
    Compute \(\alpha_{\max}\), the largest admissible value of α, and the actual step size:
    $$\alpha_{\max} = \min\left[\left.\frac{p_k }{-d_k^n }\,\right|\,d_k^n < 0\right];\quad \alpha = \min\left(\alpha_{\textrm{MSA}},\;\alpha_{\max}\right)$$ (9.14)
    Update the path proportions \(p_k = p_k + \alpha d_k^n\).
    Redistribute the flows as follows:
    $$h_k^{\tau ,l} = g_i^\tau \cdot p_k^{\tau ,l};\quad k \in K_i ,\;i \in I,\;\textrm{all } \tau$$ (9.15)
- Step 2 Stopping rule:
  If \(l \ge L\) or \(RGap \le \gamma\), STOP; otherwise, return to Step 1.
Once the path proportions are computed (with either algorithm), they are used by the vehicle generation process of the simulation model to generate discrete vehicles with individual (random) departure times. This is a stochastic process which, on average, will produce a number of vehicles for each path that corresponds to the product of the theoretical path input flow \(\left( {h_k ^{\tau ,l} } \right)\), which has units vehicles/h, and the duration of the departure-time interval. Clearly, the simulation model must load discrete vehicles onto the network, while the above product is real-valued. This is handled in the standard way by interpreting the theoretical path proportions \(\left( {p_k } \right)\) as path probabilities. If the O–D demand for each time interval is not defined as an integer number of vehicles, a standard matrix rounding technique is employed to convert the real-valued matrix to integers.
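The vehicle generation step can be sketched as follows (a simplified illustration of sampling a path and a departure time for each discrete vehicle; the function name, fixed seed, and uniform departure-time draw are assumptions, and a real implementation would first round the demand matrix to integers as described above):

```python
import random

def generate_vehicles(p, n_vehicles, interval_start_s, interval_len_s, rng=None):
    """Interpret path proportions p (path id -> probability, summing to 1)
    as path probabilities, and draw a path and a random departure time
    within the interval for each of n_vehicles discrete vehicles."""
    rng = rng or random.Random(0)                 # fixed seed for repeatability
    paths, weights = zip(*p.items())
    vehicles = []
    for _ in range(n_vehicles):
        path = rng.choices(paths, weights=weights)[0]
        depart = interval_start_s + rng.random() * interval_len_s
        vehicles.append((path, depart))
    return vehicles
```

On average, each path receives a number of vehicles equal to its proportion times the integer demand for the interval, matching the theoretical path input flow.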
9.3.4 Time-varying Step Size Adjustment
For the vast majority of the real-world applications of this model to date, the relationship between departure time and relative gap (after the algorithm stops) is monotonically non-decreasing, i.e., the assignment for a departure-time interval is further away from the equilibrium conditions than for the preceding interval. Another consistent trend is that later departure-time intervals require more iterations before converging to a stable value of relative gap. These observations are presented in more detail in Section 9.4, Calibration and Advanced Modeling Features.
One explanation for these phenomena is that the travel times of later-departing vehicles are affected by earlier-departing vehicles, and thus the convergence for a later-departing interval cannot be achieved until it has first been achieved for the prior interval. This inherent property of the model suggested the possibility that the higher values of relative gap in the later-departing intervals might be partially a result of the fact that the MSA step size is the same for all departure-time intervals at each iteration. To put it simply, by the time (in iterations) that a later interval finally starts to converge, the step size is so small that not enough flow is being moved away from the longer paths toward the shorter paths. Another reason for the increasing values of relative gap is that later-departing vehicles incur higher congestion. A positive correlation between congestion levels and relative gap (after the algorithm stops) has also been consistently observed across various networks.
These observations are the basis of a time-varying step size heuristic. The heuristic uses an integer reset parameter n and is first applied in Step 1.2 of the algorithms presented above. The first N iterations, as described by Step 1.1, remain unchanged. The modifications to Step 1.2 are as follows. Let the number of departure-time intervals be \(D = T_d / \Delta t\), and let \(\tau = 0, 1, 2, \ldots, D-1\) denote the departure-time intervals in increasing order. The first \(n \cdot D\) iterations of Step 1.2 form a transitory period during which the MSA step size, normally defined as \(\alpha_{\textrm{MSA}} = \frac{1}{l}\), is modified by adjusting the iteration number in the denominator in a way that varies with the departure interval τ. The modified step size, \(\alpha'^{\tau ,l}_{\textrm{MSA}}\), is calculated as
$$\alpha'^{\tau ,l}_{\textrm{MSA}} = \frac{1}{l - n \cdot \min\left(\tau ,\,\left\lfloor \frac{l - N}{n} \right\rfloor\right)}$$ (9.16)
This method calculates an index value \(\left\lfloor \frac{l - N}{n} \right\rfloor\), which increments by one every n iterations. At each iteration where this value is incremented, the denominator is decremented by n for all departure intervals \(\tau \ge \left\lfloor \frac{l - N}{n} \right\rfloor\), where \(\tau = 0\) denotes the first departure interval. After iteration \(l = N + n \cdot (D - 1)\), the step sizes are simply
$$\alpha'^{\tau ,l}_{\textrm{MSA}} = \frac{1}{l - n\tau }$$ (9.17)
That is, the inverse of the step size \(\alpha'^{\tau ,l}_{\textrm{MSA}}\) for each departure interval is n less than that of the previous interval. This pattern persists for all subsequent iterations until the algorithm stops. Figure 9.2 illustrates this heuristic rule as a graph of \(\alpha'^{\tau ,l}_{\textrm{MSA}}\) vs. iteration number for \(D = 6\), \(N = 10\), and \(n = 5\).
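Under this rule, with the index ⌊(l − N)/n⌋ capped at τ for each interval, the modified step size can be sketched as (an illustrative reading of the heuristic, not Dynameq's implementation):

```python
def modified_msa_step(l, tau, N, n):
    """Time-varying MSA step size: for iteration l and departure
    interval tau, the denominator is reduced by n for each full reset
    period elapsed since iteration N, up to a maximum reduction of n*tau."""
    if l <= N:
        return 1.0 / l                   # plain MSA during the first N iterations
    index = (l - N) // n                 # increments by one every n iterations
    return 1.0 / (l - n * min(tau, index))
```

With D = 6, N = 10, and n = 5 as in Fig. 9.2, interval τ = 3 at iteration 40 uses a denominator of 40 − 15 = 25, while the first interval (τ = 0) keeps the plain MSA denominator of 40.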
This heuristic, along with the gradient-like method, is presented in more detail along with numerical tests in Mahut et al. (2008).
9.4 Calibration and Advanced Modeling Features
This section provides information relevant to calibration of simulation-based DTA models, based on experience with applying Dynameq on real-world networks. The methodology is related to outputs, analysis tools, and model properties specific to the Dynameq traffic flow model and equilibration scheme.
Calibration refers to the process of adjusting model inputs in order to improve the fit of the model outputs to field observations. Inputs can be justifiably modified in this context for a number of reasons:
- For practical purposes, many model inputs are not explicitly measured in reality and are instead represented by default values. These default values are adjusted, only as found to be necessary, after comparing the model outputs to corresponding field observations. A typical example is the maximal flow rate (ideal saturation flow rate, or capacity) that can be sustained on a roadway, which can vary somewhat even between roads of the same functional category. In most cases it can be reasonably well approximated with default values; on occasion, it may be necessary to adjust this value, or parameters known to directly affect it (if it is not an explicit input).
- The inputs in question cannot be measured accurately and are thus only known with some degree of uncertainty. A typical example is the travel demand data underlying an application, in the form of a time-varying origin–destination matrix.
- Due to known limitations of the model, accurate inputs for the available parameters will not yield sufficiently accurate outputs in some instances. There are often rules of thumb for adjusting certain inputs in these situations. One example is the effective (operational) capacity of weaving sections on freeways, which can be relatively unstable and difficult to predict, even in reality, and thus difficult to model accurately. Another example is driver route choice behavior, which can follow subtle individual preferences that are difficult to represent directly with the model routing algorithms. For example, the model may make use of an off-ramp/on-ramp sequence in an attempt to bypass heavy congestion on a freeway, even though this behavior is not observed in the real world.
The actual process of calibrating a DTA model is dependent on the analysis tools that are available in the software package and requires a thorough understanding of the properties and characteristics of the model. For example, a model that is based on a microscopic traffic simulation, using a lane-based representation of the network, exhibits sensitivities to the inputs that are not captured using a less detailed approach. Thus the calibration process must be catered to the specific tool and take into consideration its strengths and limitations. The reader is referred to Chiu et al. (2010) for a good overview of DTA modeling concepts, including calibration and data-related issues.
Since the calibration approach is tied closely to the embedded analysis tools and software features in general, the discussion below includes some brief descriptions of advanced modeling features where relevant.
9.4.1 Calibration and Stability
The convergence measure associated with the solution of a DTA model – the relative gap measure mentioned above – provides critical information about how well the current assignment satisfies the equilibrium property, i.e., how equal the travel times are on alternative paths. A more general type of convergence measure (not used in Dynameq), which does not provide such information, is one which simply indicates how much the algorithm is actually changing the inputs (e.g., path demand flows) or outputs (e.g., link flows) from one iteration to the next. This kind of measure only gives an idea of how stable the current solution is, but does not quantify the solution with respect to the underlying equilibrium (user-optimal) objective. For this reason, such convergence measures can be deceiving: they indicate only that the algorithm is no longer improving the results, but do not indicate how well the final solution satisfies the desired objective of equilibrating travel times on alternative paths.
A typical plot of relative gaps against iteration number, for a sequence of departure-time windows, is shown in Fig. 9.3. The values of relative gap indicate how well the travel times on alternative paths are equilibrated, where zero would indicate a perfect equilibrium. A term that is sometimes used to refer to the property of an assignment (DTA solution) being in approximate equilibrium is consistency . This term is used in reference to the fact that at equilibrium, the path flows are consistent with the assumed behavioral mechanism of each driver attempting to minimize her own travel time (or generalized cost). The idea of consistency is exactly what is represented by convergence measures such as the relative gap. More general convergence measures, as mentioned in the previous paragraph, that do not quantify the solution in this way do not provide information about the consistency of the assignment.
The plot shows some basic properties that are common to most DTA applications:
- after a certain number of iterations, the relative gap for any given departure-time interval becomes relatively stable, after which it no longer appears to improve (i.e., does not tend to zero with increasing iterations);
- the relative gaps for a given departure-time interval begin to stabilize only after the previous interval begins to stabilize, in a sort of "domino effect";
- at any given iteration, and in particular in this stable region, the value of relative gap tends to increase with departure time, though in some cases (as in Fig. 9.3) the relative gaps will begin to decrease over the last few intervals;
- relative gap values, and in particular the stable values attained, tend to increase with increasing congestion in the network;
- the stable values of relative gap are generally some orders of magnitude higher than what is typically considered acceptable for static assignment models; this is due to the underlying cost dependence on flows, which is highly nonlinear and discontinuous, as well as the discrete nature of the traffic representation.
In this context, typical applications are networks of less than 10,000 links, with demand periods of not more than 3 h (a typical AM or PM peak period model).
Since DTA models, i.e., the solution algorithms used for solving these models, do not converge to perfect equilibrium on networks of any significant size, the practical stopping criterion for these models depends on identifying the stable solution. Generally, after reaching stable values of relative gap , the outputs of interest are no longer changing significantly from one iteration to the next. Moreover, the goodness-of-fit statistics (between model outputs and empirical data) should not change significantly if the model is stopped one iteration sooner or later. If this were not the case, i.e., if the DTA were not stopped at a stable solution or if the algorithm cannot produce a stable solution, the calibration exercise would be completely meaningless. As a general rule, smoothness in the relative gap plots prior to the last iteration strongly indicates this kind of stability.
The idea of stability is also used in conjunction with modeling in a rather different way, where it indicates how sensitive the (equilibrium) results are to small changes in the model inputs. For example, if closing one lane of traffic or changing the free-flow speed on a single network link drastically affects the congestion in the network, it may be said that the model results are somewhat unstable with regard to these inputs. This is a different notion of stability from that discussed above, in that it involves a comparison between two sets of DTA results, both of which are converged and stable (which is what allows them to be meaningfully compared in the first place) – but, each set of results is obtained from a slightly different version of the same network.
Although these two notions of stability seem to be quite disconnected, there is one important way in which they must be understood together. When comparing two models with slightly different inputs (e.g., when comparing alternative freeway improvement projects) significant differences in the outputs may be due to the fact that the models are not being run to equilibrium (i.e., to a stable solution ), rather than because the (equilibrium) model outputs are really that sensitive to the physical differences between the two scenarios. In this case, instability of the first kind (the stability of a given DTA run) is being mistakenly attributed to an instability of the second kind, i.e., the results being sensitive to the differences in network topology between the two scenarios.
These concepts are particularly relevant to DTA modeling due to the prevalent use of en route assignment (or en route path switching) embedded in virtually all traffic simulation models, and in some DTA models as well. The most extreme case is the reactive one-pass assignment, discussed in Section 9.1.2, Model Building Principles: Dynamic Traffic Assignment, in which vehicles are constantly changing paths in response to evolving traffic conditions, but with no notion of where congestion will be encountered further downstream on the path (other than using current traffic conditions as a proxy). Such models may exhibit unstable behavior, in the sense that small changes to the inputs can result in larger than expected changes to the outputs. In this case, the instability is due to the fact that the one-pass approach does not necessarily provide a good approximation of equilibrium conditions, which is akin to an iterative model not being run for enough iterations. Perhaps because one-pass models do not generally provide a measure such as relative gap (though in principle they could) indicating how well the experienced path travel times (or generalized costs) were ultimately balanced, the danger of instability in these models is often overlooked.
9.4.2 Calibration: Overview
The process of calibrating a DTA model can be broken down into two sequential analysis stages: qualitative and quantitative. The qualitative analysis stage is what typically starts after the very first model runs, when there may still be numerous errors in the input data to be found. In these situations it may be of little practical value to begin comparing the model outputs to empirical data, especially if the model is not converging to a stable solution. Once the model has been improved to a certain extent the quantitative analysis starts. Quantitative analysis is based on a direct comparison of model outputs and empirical data and investigating the outliers in order to further refine the model. In principle, fixing coding errors is not thought of as part of the calibration process; in practice, these two tasks are inseparable.
9.4.2.1 Qualitative Analysis
As discussed above, convergence measures indicate the quality of a DTA run, and should provide information about how well the path travel times (or generalized costs) are equilibrated. These measures are particularly important in the early stages of the calibration when they are most likely to indicate that the model results are unsatisfactory due to an unconverged or unstable solution.
As a general rule, errors in network and traffic signal coding are found to cause more congestion rather than less, due to the nature of congestion spillback: as a queue grows in space it engulfs vehicles that do not directly contribute to the original cause of the queue (their paths take them off the road before reaching the downstream bottleneck). In extreme cases, queues that are initially separate become connected as they grow, causing congestion to grow even faster and spread out in many directions, which can even lead to gridlock. Applications with gridlock typically exhibit unstable convergence, or even a complete failure to converge at all.
Under such circumstances, the DTA results are unsuitable for comparison to empirical data. There is a need to identify the key bottlenecks underlying the congestion and to correct the input errors that result in inaccurate capacity values, incorrect routing, or unrealistic demand. In general, the purpose of the qualitative analysis stage is primarily to achieve model results that exhibit a stable solution , are free of gridlock , and if possible, in which the overall congestion pattern at least resembles the observed conditions on the street.
A characteristic of this calibration stage is that correcting a single input value, e.g., adding a missing turn pocket, can dramatically alter the overall congestion patterns and quality of the convergence.
9.4.2.2 Quantitative Analysis
This stage of the calibration process is based on direct comparisons between model results and empirical observations, once the DTA is exhibiting a stable and relatively well equilibrated solution. Typically at this stage, the results are relatively stable and it can be relatively difficult to substantially change the general congestion pattern, i.e., the locations of the queues.
Various statistical measures may be used to quantify the goodness of fit between the DTA output and the observed data, but the actual process of improving the fit by adjusting input data is essentially a manual process based on intuition and modeling judgment, requiring a solid understanding of traffic phenomena and causes of congestion. For instance, understanding how changing a parameter such as link or movement capacity – e.g., by modifying signal timing parameters – affects link travel time, which in turn impacts path choice, is critical to carrying out a calibration exercise, as it makes it possible to predict how certain changes to the inputs should generally affect the outputs. Without this predictive insight into the behavior of the model, the process of calibration is little more than trial and error.
Moreover, a model (application) that is not in equilibrium will not necessarily exhibit the expected correlation between changes in travel times and the resulting changes in path choices. From a calibration standpoint, a model that is not in equilibrium is essentially a moving target: since the connection between path travel times and path choices is not reliable, changes to the inputs lead to unexpected and illogical changes in the outputs. This is exactly the problem of artificial instability discussed in Section 9.4.1, Calibration and Stability.
In its simplest form, the quantitative analysis stage consists of investigating one or more outliers at a time in order to determine how the model inputs need to change in order to better approximate the observed conditions (empirical data), without degrading the goodness of fit of the other observations. Generally speaking, the sources of error can be broken down into three categories:
- supply side: network and signal timing parameters
- routing: the assignment model is not capturing driver behavior
- demand: inaccurate values in the O–D matrix
The remaining sub-sections provide an overview of the general process of investigating outliers and drawing conclusions about the most likely sources of error through some simple examples.
9.4.3 Traffic Flow Calibration
Dynameq automatically collects a wide variety of measures, which can be visualized in various ways, in order to interpret simulation results for network links, turning movements, intersections (nodes), and individual lanes. Evaluation of model outputs typically starts with animating temporal link-based results on the network plot, which provides an overview of the overall traffic conditions and allows the key bottlenecks to be quickly identified. A snapshot of such a plot is shown in Fig. 9.4. This plot displays link flows as bar widths and level of congestion – represented by a measure called occupancy – by color, as indicated by the legend. Occupancy is a unitless measure that is a normalized value of link density (which is itself expressed in vehicles/km), and ranges from zero (no vehicles) to 100% (link entirely full of vehicles standing still). Starting from this high-level view, key locations such as bottlenecks can be identified and then investigated further by examining a variety of detailed measures.
The area inside the red oval in Fig. 9.4 indicates heavy congestion in the eastbound direction approaching the north-south freeway. Specifically, there are traffic counts for the link colored red (indicating very high occupancy, in the middle of the oval). This is the eastbound approach to a four-legged intersection, and thus has three exiting movements. The traffic counts at this intersection indicate that the traffic flow on the through movement is considerably lower in the model than observed in the field, while the left and right-turn movements correspond very well with the field data. A typical investigation might proceed as follows.
Figure 9.5 shows the time series plots of link outflow and occupancy measures, which in this case indicate a relatively constant outflow accompanied by increasing congestion (occupancy). The occupancy plot exhibits a sudden increase at around 16:30. The link occupancy is further analyzed by breaking it down to see the relative contributions to this value due to the vehicles destined for each of the three exiting movements from this link (referred to here as movement occupancy), as shown in Fig. 9.6. This plot is characterized by a rapid increase in the number of vehicles destined for the left-turn movement, with a jump at 16:30, followed by a significant increase in the number of vehicles destined for the through movement.
As these occupancy values are known to be reflective of congested conditions, it is quite likely that the increase in queued vehicles for the left turn may in fact be responsible for the increase in queued vehicles for the through movement: this typically occurs when a left-turn pocket overflows and begins to block a regular lane that services the through movement. It is interesting to note at this point the individual outflow values by turning movement, shown in Fig. 9.7: although the left turn movement is responsible for the majority of queued vehicles on the link, the outflow (expressed as a flow rate, in vehicles per hour) of this movement is a fraction of that of the through movement, indicating a major discrepancy in the demand/supply relationships of these two turning movements.
The situation is further investigated by observing flow rates exiting the link per lane, as shown in Fig. 9.8. The lanes are numbered from the outside edge to the inside, so that lane 3 represents the left-turn pocket. It can be seen that the flow on lane 3 is relatively constant but very low. For the other two lanes, which service the through and right-turn movements, the flows are essentially equal up until 16:30, at which time they diverge rapidly, with the flow on the middle lane dropping while the flow on the outside lane (lane 1) increases. Since this change in lane-based flow occurs simultaneously with the sharp increase in the number of vehicles queued (occupancy) for the left-turn movement (Fig. 9.6), the cause of the insufficient traffic volume on the through movement is now easy to see.
At 16:30, the left-turn pocket overflows and is blocking lane 2 (the middle lane). As a result, the outflow on lane 2 drops, and the through traffic entering this link begins changing lanes to get around this blockage; as a consequence the flow on lane 1 suddenly increases. Nevertheless, as the through-movement traffic is significantly lower than expected, it must be concluded that the expected flow rate cannot be attained if the left-turn pocket is regularly spilling back and blocking lane 2. Thus, either the demand for this left turn is too high, or the supply, primarily determined by the signal phase design and timing parameters, is too low.
As mentioned above, the field count for the left-turn movement is in agreement with the model. Under congested conditions, the count reflects the supply (capacity) of the movement, and not the demand, and thus says nothing about the correctness of the demand for this movement or the associated queueing. However, the empirical count validates the left-turn capacity in the model, thereby allowing a clear conclusion to be drawn: the demand for the left-turn movement must be too high.
This kind of analysis provides insights into the detailed workings of the traffic flow in the model, and often allows precise conclusions to be drawn about the causes of discrepancies between model outputs and field observations. In many cases, the conclusion may be that the supply is either insufficient or excessive, rather than that the demand is incorrect as in the example above. For such cases, Dynameq has link-based parameters and gap acceptance parameters that can be adjusted locally for the purpose of calibration .
Two link-based parameters, called the response time factor and the effective length factor, are specifically intended for calibration. These are scalar multipliers applied to each vehicle during its journey along the link, and can be used to obtain desired values of maximum flow (or capacity, in vehicles per hour) and maximum density (in vehicles per kilometer or mile). The free-flow speed of the traffic on the link is another user-defined parameter. Together, these three parameters allow the user to define the speed–flow–density relationship (or fundamental diagram) for each link in the network (this relationship was briefly mentioned in Section 9.2, Core Traffic Flow Models). As mentioned above, effective capacities under conditions of heavy lane changing or weaving can be difficult to capture without some adjustment to the default inputs: in this situation, the response time factor can be adjusted to account for the fact that drivers often carry out lane-changing maneuvers with lower headways than they typically use when traveling behind each other in a single lane.
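How these three parameters jointly determine a link's fundamental diagram can be sketched as follows. The snippet is an illustration in the spirit of Newell's simplified car-following theory, not Dynameq's actual implementation; the function name and the example parameter values are assumed for the illustration.

```python
def fundamental_diagram(v_free, eff_length, resp_time):
    """Triangular speed-flow-density relationship (illustrative sketch).

    v_free     -- free-flow speed (km/h)
    eff_length -- effective vehicle length (m), i.e. spacing at jam density
    resp_time  -- driver response time (s)
    Returns (capacity veh/h/lane, jam density veh/km/lane, wave speed km/h).
    """
    k_jam = 1000.0 / eff_length              # vehicles per km at standstill
    w = (eff_length / resp_time) * 3.6       # backward wave speed (km/h)
    # Capacity lies where the free-flow branch q = v_free * k meets the
    # congested branch q = w * (k_jam - k):
    k_crit = w * k_jam / (v_free + w)
    q_max = v_free * k_crit
    return q_max, k_jam, w

# Example: a 50 km/h arterial lane with assumed default-like parameters.
q, kj, w = fundamental_diagram(v_free=50.0, eff_length=7.0, resp_time=1.25)
```

Under this sketch, a response time factor below 1 (scaling `resp_time` down) raises the capacity, which matches the calibration use described above for heavy lane-changing situations; an effective length factor changes the jam density.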
As mentioned in Section 9.2, Core Traffic Flow Models, Dynameq employs a two-parameter gap acceptance model. These two parameters (critical gap and critical wait) may be adjusted, if desired, at the level of each pair of conflicting movements at an intersection. This makes it possible to account for local effects, such as grade and visibility, on gap acceptance behavior, e.g., how drivers merge at a freeway on-ramp. Default values for various standard situations, such as stop and yield signs, roundabouts, and signalized intersections, are provided by the software.
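The exact functional form of the two-parameter model is not reproduced here, but a hypothetical sketch illustrates how the two parameters could interact: the gap a driver demands starts at the critical gap and relaxes as the time spent waiting approaches the critical wait.

```python
def accepts_gap(gap, wait_time, critical_gap, critical_wait):
    """Hypothetical two-parameter gap acceptance rule (not Dynameq's
    documented formula). All times in seconds.

    The acceptance threshold starts at critical_gap and shrinks linearly
    with the time already spent waiting; once wait_time reaches
    critical_wait, any non-negative gap is taken.
    """
    frac = min(wait_time / critical_wait, 1.0)
    threshold = critical_gap * (1.0 - frac)
    return gap >= threshold
```

Adjusting the two parameters per conflicting-movement pair, as the text describes, would shift this curve locally, e.g. a shorter critical gap for a high-visibility merge.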
9.4.3.1 Advanced Modeling Features
A feature that can be helpful in calibration, though it is more generally used as a modeling tool for representing special situations, is referred to as time-varying attributes. This allows network properties (supply-side data) to change at pre-defined points in time. It can be used to represent various real-world situations, such as congestion pricing (tolls) that changes several times during a peak period, as is currently implemented in Stockholm (Sweden) and London (UK). It can also be used to approximate traffic control or management measures that are triggered by congestion levels reaching a certain threshold, since these conditions typically occur at about the same time every day. Examples of such measures include variable speed limits on freeways, and opening hard shoulders to regular traffic in order to increase capacity upstream of a major off-ramp. This feature is also useful when there is congestion spilling back into the area being modeled from outside the network: by measuring the typical flow rates in the field during the study period, the flow capacity of an exiting link (called a connector) can be set to follow a time-varying pattern of effective capacity, rather than simply representing the theoretical capacity under ideal conditions.
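Conceptually, a time-varying attribute amounts to a piecewise-constant schedule of values keyed by time. A minimal sketch (the lookup function, schedule format, and numbers are illustrative, not Dynameq's API):

```python
import bisect

def attribute_at(schedule, t):
    """Look up a piecewise-constant, time-varying attribute value.

    schedule -- list of (start_time, value) pairs sorted by start_time;
                the first entry covers the start of the study period.
    t        -- query time, in the same units as start_time.
    """
    times = [start for start, _ in schedule]
    i = bisect.bisect_right(times, t) - 1
    return schedule[max(i, 0)][1]

# Hypothetical connector whose effective capacity (veh/h) drops while
# congestion outside the modeled network spills back, then recovers:
capacity = [(15.0, 1800), (16.5, 1200), (18.0, 1600)]
```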
9.4.4 Route Choice Calibration
Dynameq offers several features specifically for the purposes of evaluating and calibrating route choice behavior, including path display, select link analysis, and generalized cost assignment. Although empirical data about route choice is generally not available for typical applications, inspection of routes can often provide valuable information for the calibration process, and can help to further clarify whether discrepancies between model outputs and empirical data are primarily due to demand-side or supply-side errors. The tool primarily used in addressing this question is select link analysis.
Recalling the example of the over-saturated left turn discussed above, and the conclusion that the cause was excessive demand rather than insufficient supply, the next step in the analysis is to determine whether the excess of vehicles for this movement is due to erroneous routing or to excessive demand specified in the O–D matrix. The first step in addressing this question is to execute a select link analysis on the turning movement (i.e., a select-turn analysis). This procedure identifies all paths which use this turn and provides various outputs associated with these paths, including the corresponding partial O–D matrix. These outputs also include select link simulation results, which are the same types of outputs used in the above example (Fig. 9.4), but counting only those vehicles on the paths that go through the turn in question.
Figure 9.9 shows a snapshot at a given time interval (roughly the middle of the simulation, in order to be fairly representative) of the select-turn link flows as bar widths, i.e., representing only those vehicles that use this particular left turn somewhere along their journeys. The plot shows that all of the traffic is destined to a single destination (situated to the north of the main arterial), about half of the demand comes from a single origin (situated to the south of the arterial just upstream) and the remainder comes from several origins further upstream along this arterial. If necessary, this information can be complemented with path displays showing the alternative paths adopted by other vehicles for the same O–D pairs as those identified by the select link analysis. The conclusion in this case is that the paths using the over-saturated left turn are reasonable and realistic, i.e., there are no preferable alternate paths that these vehicles should be using instead. This observation then leads directly to the conclusion that the excessive demand for this turning movement is attributable to the O–D matrix rather than to the assignment (route choice).
In a situation where the discrepancy is due primarily to the route choice itself, generalized cost can be used to “calibrate” the route choice model by considering factors in addition to travel time. Although no detailed examples are provided here of such an application, one recent case involved a calibration exercise for the network of Lausanne, Switzerland, which included a very old part of the city with narrow cobblestone streets. The initial travel-time-based DTA was clearly routing too much traffic through this area, and this was handled by adding perceived costs to these links to make them less attractive. The costs were adjusted manually until an acceptable fit was obtained in this area between the model outputs and empirical data, which consisted primarily of link-based traffic counts.
9.4.4.1 Advanced Modeling Features
Generalized cost assignment is a feature that has uses well beyond that of calibrating a model against empirical data. In some cases, cost formulas and weights for time, distance and direct monetary cost are established as a modeling standard, rather than using a pure travel time-based assignment. The use of tolls in an application clearly requires the use of a generalized cost assignment, and the implementation of time-varying tolls necessitates the use of a dynamic model. Time-varying tolls, and other new tolling mechanisms such as HOT (high-occupancy/toll) lanes with congestion-dependent pricing, are good examples of the general trend toward the use of increasingly complex traffic management mechanisms, which require the higher level of detail and realism offered by simulation-based DTA.
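A generalized cost of this kind can be sketched as a weighted sum of time, distance, and direct monetary cost, with the toll converted to equivalent minutes via a value of time. The weights and values below are placeholders for illustration, not recommended standards:

```python
def generalized_cost(travel_time_min, distance_km, toll,
                     w_dist=0.5, value_of_time=20.0):
    """Illustrative generalized link cost, in equivalent minutes.

    travel_time_min -- simulated travel time (minutes)
    distance_km     -- link length (km)
    toll            -- monetary charge (currency units); with time-varying
                       tolls this would change during the study period
    w_dist          -- perceived minutes per km (assumed weight)
    value_of_time   -- currency units per hour (assumed)
    """
    toll_minutes = toll / value_of_time * 60.0
    return travel_time_min + w_dist * distance_km + toll_minutes
```

Setting the toll term to vary by departure time is what makes a dynamic model necessary, as the text notes: a static assignment cannot distinguish vehicles that pass the tolled link before and after a price change.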
9.4.5 Calibration – Future Directions
The development of algorithms for calibrating DTA and traffic simulation models has been an important area of research for some time, though practical and robust (i.e., deployable for use by practitioners) methods for large-scale congested networks are not yet available. The simple example illustrated above demonstrates the underlying complexity of the problem: the data indicated an issue (too little flow) for a particular turning movement, but after analyzing the situation, the actual cause turned out to be related to another turning movement for which the empirical data corresponded perfectly well with the model outputs. Another situation that commonly arises, associated with a supply-side (network coding) rather than demand-side error, is a false congestion point that artificially increases travel time on one route, and thereby increases the traffic flow (via the equilibrium-seeking iterative algorithm) on alternative routes. The cause of this problem can be particularly challenging to identify if the empirical data does not cover the false bottleneck, but only the alternative routes. Despite these challenges, which apply both to manual calibration and the ongoing improvement of automated calibration tools, real-world applications of DTA are being successfully calibrated to reasonable thresholds of goodness-of-fit, and such applications are becoming increasingly common.
9.5 Selected Applications
This section presents an overview of a typical DTA application that was recently carried out using Dynameq. The modeled area is the entire Municipality of Ljubljana, Slovenia. The description below briefly presents the objectives of the modeling study and the results of the base year calibration work. Application of the calibrated model to the evaluation of various improvements and alternatives is currently underway. The section ends with a summary of software performance metrics for this and a few other recent projects.
The city of Ljubljana, including the surrounding suburbs, represents the highest level of urbanisation in Slovenia and is the country's most important central town. Ljubljana lies in the heart of the Central Slovenian Region, which has roughly half a million inhabitants, of whom 54% (268,000) live in the Municipality of Ljubljana. Of the 215,000 jobs in this region, 77% are in Ljubljana.
Ljubljana is also the main national traffic node, at the junction of the primary road and railway flows in the country. The road network has a traditional radial pattern encircled by a ring road, the area inside which is approximately 60 square kilometers (see Fig. 9.10). As with most densely populated urban areas, congestion is continuously increasing due to growing traffic demand and is expected to worsen despite the fact that the population is not expected to grow significantly in the future. Moreover, the current arrangement of public transport fails to provide a viable alternative to the private car for most trips, and the share of public transport trips is falling. The most popular, and cost-effective, means of mitigating congestion and increasing mobility focus on traffic management strategies and increased public transportation, rather than building new road capacity into the network.
Future traffic-related projects are based on two main strategies:
- implementing transit priority on the main radial arteries;
- increasing traffic capacity on the ring road.
Increasing vehicular capacity on the radial arteries is purposely being avoided. Because the radial pattern has the effect of focusing traffic through the city center, the congestion level in this area is very sensitive to the capacities of these arterial routes. Increasing the capacities of the latter could easily result in overloading the city center with vehicles, leading to even more congestion and less mobility.
Since many traffic management strategies cannot be modeled accurately using traditional static assignment (travel forecasting) models, it was decided to adopt a DTA approach for evaluating these policies. The main policies being considered include:
- reserved lanes for buses on city arterials;
- transit priority at intersections;
- park and ride locations;
- increasing capacity on the ring road (expanding to 6 lanes);
- reconstruction of many city roads;
- new ITS measures for improving capacity, safety, driving comfort and environmental impacts (the ring road is already under ITS management);
- road pricing for the central (downtown) area.
The modeled area is the entire Municipality of Ljubljana, covering approximately 274 km² and comprising the entire national and municipal road networks. This includes approximately 550 km of roadway (bi-directional): the model itself has 1280 km of directional roadway representing over 1500 lane-kilometers. Figure 9.10 shows the modeled network drawn in white overlaying a satellite image of the area. The model includes private vehicles, public and freight transport. Public passenger transport includes interurban, suburban and urban bus lines. The model was calibrated to the morning and afternoon peak periods using 2008 data.
Due to the high level of detail required for capturing traffic phenomena such as the effects of bus delays on overall congestion, and the effect of transit signal priority on bus travel times, it was decided to adopt a high-fidelity DTA model using a microsimulation approach for traffic flow modeling. Figure 9.11 shows a snapshot of the link flows and occupancies in the network at 4:30 p.m. Traffic is flowing relatively well on the ring road, as indicated by the wide bars and blue colors: dark blue (occupancy < 15%) indicates free-flowing conditions, while light blue (15% < occupancy < 30%) indicates locations that are essentially at capacity, but not yet congested. A number of critical bottlenecks, indicated by yellow, orange and red, can be seen along the radial arteries.
The calibration effort was supported by an extensive survey of traffic counts, comprising 564 link and turning movement count locations, including 84 on the highways (including the ring road) and 154 on the main city streets. Figures 9.12 and 9.13 show scatter-plots of the model results (y-axis) vs. traffic counts (x-axis) for the AM and PM peak periods, respectively, for the entire set of count locations. Figures 9.14 and 9.15 show the corresponding AM and PM plots, respectively, for the highway (including ring road) locations, while Figs. 9.16 and 9.17 show the AM and PM plots, respectively, for the main city roads. Regression analysis was used to produce a linear best-fit for each data set, as shown on the plots. Table 9.1 presents the R² statistics for the linear regression analysis, with values ranging from 0.94 to 0.96. These are considered to be very good results, particularly for a network of this size. Although the numeric values of the slopes are not reported, they can be seen from the plots to be generally just below or around 1, as expected.
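Statistics of the kind reported in Table 9.1 can be reproduced from the raw (count, model) pairs with ordinary least squares. A self-contained sketch, with made-up data standing in for the actual survey:

```python
def fit_and_r2(counts, model):
    """Least-squares line model ~ a + b * count, plus the R² of the fit."""
    n = len(counts)
    mx = sum(counts) / n
    my = sum(model) / n
    sxx = sum((x - mx) ** 2 for x in counts)
    sxy = sum((x - mx) * (y - my) for x, y in zip(counts, model))
    syy = sum((y - my) ** 2 for y in model)
    b = sxy / sxx                 # slope; near 1 indicates good agreement
    a = my - b * mx               # intercept
    r2 = sxy ** 2 / (sxx * syy)   # squared correlation coefficient
    return a, b, r2

# Hypothetical counts (veh/h) at four locations vs. model flows:
a, b, r2 = fit_and_r2([400, 850, 1200, 1600], [380, 900, 1150, 1550])
```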
Tables 9.2 and 9.3 show travel time comparisons for some of the main routes in the network for the AM and PM scenarios, respectively. These routes normally consist of a sequence of several network links, and the empirical data is collected by actually driving the routes several times during the peak period and taking the average measured travel time. The reported travel times are rounded to the nearest minute, while the percentage differences are computed based on the exact values. The goodness of fit of the travel time results was found to be excellent: the relative differences for the AM paths were between 5% and 8%, while the PM results had similar values, but with one path at 12%.
It should also be mentioned that this calibration was carried out without the use of matrix adjustment algorithms or techniques. Matrix adjustment algorithms, which automatically adjust the demand matrix in order to provide a better fit to a set of traffic counts, have been available for many years for static assignment models and can be used to pre-process the demand matrices for a DTA model as well. Their use poses some difficulties in the context of long-term planning studies for which future demand scenarios must be modeled, since future traffic counts are not available for adjusting those matrices. Avoiding matrix adjustment in these cases maintains a stronger linkage between the DTA results and the synthetic demand model.
Some basic software performance metrics for this project, as well as a few other recent projects, are presented in Table 9.4. These include the following:
- size of the network measured in number of links;
- duration of the modeled study period;
- total volume of vehicles summed over all classes;
- number of iterations used in calibrated base year DTA;
- average relative gap (final iteration) in calibrated base year DTA;
- average CPU time (minutes) per iteration in calibrated base year DTA;
- real-time speed up: study period duration divided by the portion of CPU time associated with traffic flow simulation (last iteration).
Metrics reported for projects that are currently underway or that were run only as test networks, and thus are not fully calibrated, do not include the number of iterations or relative gap. CPU times were obtained on a Dell OptiPlex 755 running the Windows Vista™ Business (32-bit) operating system, with a 3.0 GHz processor and 3.325 GB RAM. The exception is the San Francisco network, which was run under a Linux operating system on a 2.6 GHz processor.
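The relative gap and real-time speed-up metrics listed above can be made concrete with a short sketch. The relative-gap formula shown is one common definition for equilibrium assignment (excess of experienced travel time over time-dependent shortest paths); the precise expression used by Dynameq may differ in detail.

```python
def relative_gap(experienced_times, shortest_times):
    """One common relative-gap convergence metric: total experienced
    travel time in excess of the current time-dependent shortest paths,
    relative to total shortest-path time. Both lists hold per-vehicle
    times (same units) from the same iteration; 0 means all vehicles
    are on shortest paths, i.e. user equilibrium."""
    total_exp = sum(experienced_times)
    total_sp = sum(shortest_times)
    return (total_exp - total_sp) / total_sp

def realtime_speedup(study_period_min, simulation_cpu_min):
    """Study-period duration divided by simulation CPU time; values
    above 1 mean the simulation runs faster than real time."""
    return study_period_min / simulation_cpu_min
```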
DTA models are relatively new arrivals in traffic planning and engineering practice. The general experience with DTA has been that, due to the combination of the scale of these models and the relatively high sensitivities they exhibit compared to static assignment models traditionally used for travel forecasting, meeting conventionally accepted goodness-of-fit calibration criteria can sometimes be challenging. The overall quality of the calibration results presented in this section is considered to be excellent and is very promising for the continued use and adoption of simulation-based DTA for projects of similar size and scope.
References
Ben-Akiva M, Koutsopoulos HN, Mishalani R (1998) “DynaMIT: A simulation-based system for traffic prediction”. Paper presented at the DACCORD short term forecasting workshop, Delft, The Netherlands. See also http://web.mit.edu/its/products.html, accessed 12 September 2009
Brackstone M, McDonald M (1999) Car-following: a historical review. Trans Res 2F(4):181–196
Chevallier E, Leclercq L (2009) Do microscopic merging models reproduce observed priority sharing ratio in congestion? Trans Res 17C:328–336
Chiu Y-C, Bottom J, Mahut M, Paz A, Balakrishna R, Waller T, Hicks J (2010). DTA Primer, Network Modeling Committee (ADB30) of the Transportation Research Board, Washington, DC. http://www.nextrans.org/ADB30/UPLOAD/ssharma/dta_primer.pdf
Dafermos SC (1971) An extended traffic assignment model with application to two-way traffic. Trans Sci 5, 366–389
Diakaki C, Papageorgiou M (1996) Integrated modeling and control of corridor traffic networks using the METACOR modeling tool. Dynamic systems and simulation laboratory, Technical University of Crete. Internal Report No. 1996-8. Chania, Greece, p 41
Florian M, Mahut M, Tremblay N (2008) Application of a simulation-based dynamic traffic assignment model. Eur J Oper Res 189(3):1381–1392
Friesz T, Bernstein D, Smith T, Tobin R, Wie B (1993) A variational inequality formulation of the dynamic network user equilibrium problem. Oper Res 41:179–191
Gabard JF (1991) Car-following models. In: Papageorgiou M (ed) Concise encyclopedia of traffic and transportation systems, Pergamon Press, Oxford, pp 337–341
Hoogendoorn SP and Bovy PHL (1999) Macroscopic multiple user-class traffic flow modelling: a multilane generalisation using gas-kinetic theory. Proceedings of the 14th international symposium on transportation and traffic theory. Jerusalem, Israel, 20–23 July 1999
Lawphongpanich S, Hearn DW (1984). Simplicial decomposition of the asymmetric traffic assignment problem. Transport Res B 17:123–133
Leonard DR, Gower P, and Taylor NB (1989). CONTRAM: Structure of the model. Transport and Road Research Laboratory (TRRL) Research Report 178. Department of Transport, Crowthorne.
Leventhal T, Nemhauser G, Trotter L Jr (1973) A column generation algorithm for optimal traffic assignment. Transport Sci 7:168–176
Lighthill MJ, Whitham GB (1955) On kinematic waves I: flood movement in long rivers. II: a theory of traffic flow on long crowded roads. Proc R Soc Lond, A229:281–345
Luenberger D (1984) Linear and nonlinear programming, 2nd edn. Addison-Wesley, Inc., Reading, Massachusetts
Mahmassani HS, Abdelghany AF, Huynh N, Zhou X, Chiu Y-C, Abdelghany KF (2001). DYNASMART-P (version 0.926) User’s Guide. Technical Report STO67-85-PIII, Center for Transportation Research, University of Texas at Austin, Austin, USA
Mahut M (1999) Speed maximizing car-following models based on safe-stopping rules. Compendium of Papers CD-ROM, 78th Annual Meeting of the Transportation Research Board, January 10–14, 1999, Washington, DC
Mahut M (2001) Discrete flow model for dynamic network loading. Ph.D. thesis, Département d’informatique et de recherche opérationnelle, Université de Montréal
Mahut M, Florian M, Tremblay N (2008) Comparison of assignment methods for simulation-based dynamic-equilibrium traffic assignment. Compendium of Papers CD-ROM, 87th Annual Meeting of the Transportation Research Board, January, 2008, Washington, DC
Messmer A (2000a) METANET a simulation program for motorway networks (documentation). Dynamic Systems and Simulation Laboratory, Technical University of Crete, Chania, Greece
Messmer A (2000b). METANET-DTA an exact dynamic traffic assignment tool based on METANET. Dynamic Systems and Simulation Laboratory, Technical University of Crete, Chania, Greece, p 37
Nagel K, Schreckenberg M (1992) A cellular automaton model for freeway traffic. Journal de Physique I France, 2:2221–2229
Newell GF (1993) A simplified theory of kinematic waves in highway traffic. Part I: general theory. Transport Res B 27B(4):281–287
Newell GF (2002) A simplified car-following theory: a lower order model. Transport Res Part B 36B(3):195–205
Papageorgiou M (1990) Dynamic modelling, assignment and route guidance in traffic networks. Transport Res 24B(6):471–495
Patriksson M (1994) The traffic assignment problem: models and methods. Topics in Transportation, VSP BV, Utrecht, The Netherlands
Richards PI (1956) Shock waves on the highway. Oper Res 4:42–51
Rosen JB (1960) The gradient projection method for nonlinear programming, part I: linear constraints. J Soc Indus Appl Math 8:181–217
Van Aerde M (1999) INTEGRATION Release 2.20 for Windows: User’s Guide. MVA and Associates, Kingston, Canada
Wardrop JG (1952) Some theoretical aspects of road traffic research. In: Proceedings of the Institution of Civil Engineers, Part II, vol 1. pp 325–378
Ziliaskopoulos AK, Lee S (1997) A cell transmission based assignment-simulation model for integrated freeway/surface street systems. Transport Res Rec 1701:12–22
Acknowledgment
The authors would like to thank PNZ Consulting Designing Ltd., who are carrying out the Ljubljana project, for generously providing the related information presented above.
© 2010 Springer Science+Business Media, LLC
Mahut, M., Florian, M. (2010). Traffic Simulation with Dynameq. In: Barceló, J. (eds) Fundamentals of Traffic Simulation. International Series in Operations Research & Management Science, vol 145. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-6142-6_9
Print ISBN: 978-1-4419-6141-9
Online ISBN: 978-1-4419-6142-6