Flexible Computational Pipelines for Robust Abstraction-Based Control Synthesis
Abstract
Successfully synthesizing controllers for complex dynamical systems and specifications often requires leveraging domain knowledge as well as making difficult computational or mathematical tradeoffs. This paper presents a flexible and extensible framework for constructing robust control synthesis algorithms and applies this to the traditional abstraction-based control synthesis pipeline. It is grounded in the theory of relational interfaces and provides a principled methodology to seamlessly combine different techniques (such as dynamic precision grids, refining abstractions while synthesizing, or decomposed control predecessors) or create custom procedures to exploit an application’s intrinsic structural properties. A Dubins vehicle is used as a motivating example to showcase memory and runtime improvements.
Keywords
Control synthesis · Finite abstraction · Relational interface
1 Introduction
A control synthesizer’s high-level goal is to automatically construct control software that enables a closed-loop system to satisfy a desired specification. A vast and rich literature contains results that mathematically characterize solutions to different classes of problems and specifications, such as the Hamilton-Jacobi-Isaacs PDE for differential games [3], Lyapunov theory for stabilization [8], and fixed points for temporal logic specifications [11, 17]. While many control synthesis problems have elegant mathematical solutions, there is often a gap between a solution’s theoretical characterization and the algorithms used to compute it. What data structures are used to represent the dynamics and constraints? What operations should those data structures support? How should the control synthesis algorithm be structured? Implementing solutions to these questions can require substantial time. This gap is especially critical for computationally challenging problems, where it is often necessary to let the user rapidly identify and exploit structure through analysis or experimentation.
1.1 Bottlenecks in Abstraction-Based Control Synthesis
1. Abstracting the continuous state system into a finite automaton whose underlying transitions faithfully mimic the original dynamics [21, 23].
2. Synthesizing a discrete controller by leveraging data structures and symbolic reasoning algorithms to mitigate combinatorial state explosion.
3. Refining the discrete controller into a continuous one. Feasibility of this step is ensured through the abstraction step.
This pipeline appears in the tools PESSOA [12] and SCOTS [19], which can exhibit acute computational bottlenecks for high-dimensional and nonlinear system dynamics. A common method to mitigate these bottlenecks is to exploit a specific dynamical system’s topological and algebraic properties. In MASCOT [7] and CoSyMA [14], multi-scale grids and hierarchical models capture notions of state-space locality. One could also incrementally construct an abstraction of the system dynamics while performing the control synthesis step [10, 15], as implemented in the tools ROCS [9] and ARCS [4]. The abstraction overhead can also be reduced by representing systems as a collection of components composed in parallel [6, 13]. These techniques have been developed in isolation and were not previously interoperable.
1.2 Methodology
One may also add new composite operators to encode desirable heuristics that exploit structural properties in the system and specifications. The last three sections encode the techniques for abstraction-based control synthesis from Sect. 1.1 within the relational interfaces framework. By deliberately deconstructing those techniques, then reconstructing them within a compositional framework, it was possible to identify implicit or unnecessary assumptions, then generalize or remove them. It also makes the aforementioned techniques interoperable with one another as well as with future techniques.
Interfaces come equipped with a refinement partial order that formalizes when one interface abstracts another. This paper focuses on preserving the refinement relation and sufficient conditions to refine discrete controllers back to concrete ones. Additional guarantees regarding completeness, termination, precision, or decomposability can be encoded, but impose additional requirements on the control synthesis algorithm and are beyond the scope of this paper.
1.3 Contributions
To our knowledge, the application of relational interfaces to robust abstraction-based control synthesis is new. The framework’s building blocks consist of a collection of small, well-understood operators that are nonetheless powerful enough to express many prior techniques. Encoding these techniques as relational interface operations forced us to simplify, formalize, or remove implicit assumptions in existing tools. The framework also exhibits numerous desirable features.
1. It enables compositional tools for control synthesis by leveraging a theoretical foundation with compositionality built into it. This paper showcases a principled methodology to seamlessly combine the methods in Sect. 1.1, as well as construct new techniques.
2. It enables a declarative approach to control synthesis by enforcing a strict separation between the high-level algorithm and its low-level implementation. We rely on the availability of an underlying data structure to encode and manipulate predicates. Low-level predicate operations, while powerful, make it easy to inadvertently violate the refinement property. Conforming to the relational interface operations minimizes this danger.
This paper’s first half is domain agnostic and applicable to general robust control synthesis problems. The second half applies those insights to the finite abstraction approach to control synthesis. A smaller Dubins vehicle example is used to showcase and evaluate different techniques and their computational gains, compared to the unoptimized problem. In an extended version of this paper available at [1], a 6D lunar lander example leverages all techniques in this paper and introduces a few new ones.
1.4 Notation
Let \(=\) be an assertion that two objects are mathematically equivalent; as a special case ‘\(\equiv \)’ is used when those two objects are sets. In contrast, the operator ‘\(==\)’ checks whether two objects are equivalent, returning true if they are and false otherwise. A special instance of ‘\(==\)’ is logical equivalence ‘\(\Leftrightarrow \)’.
Variables are denoted by lowercase letters. Each variable \(v\) is associated with a domain of values \(\mathcal {D}(v)\) that is analogous to the variable’s type. A composite variable is a set of variables and is analogous to a bundle of wrapped wires. From a collection of variables \(v_1, \ldots , v_M\), a composite variable \(v\) can be constructed by taking the union \(v\equiv v_1 \cup \ldots \cup v_M\), with the domain \(\mathcal {D}(v) \equiv \prod _{i=1}^M \mathcal {D}(v_i)\). Note that the variables \(v_1, \ldots , v_M\) above may themselves be composite. As an example, if \(v\) is associated with an M-dimensional Euclidean space \(\mathbb {R}^M\), then it is a composite variable that can be broken apart into a collection of atomic variables \(v_1, \ldots , v_M\) where \(\mathcal {D}(v_i) \equiv \mathbb {R}\) for all \(i \in \{1,\ldots , M\}\). The technical results herein do not distinguish between composite and atomic variables.
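The construction above can be transcribed directly for finite domains. The sketch below is illustrative only: the variable names and domains are assumptions, not anything prescribed by the framework.

```python
from itertools import product

# Illustrative finite variable domains (assumed for this example).
domains = {"v1": {0, 1}, "v2": {"a", "b"}, "v3": {0, 1, 2}}

def composite(*names):
    """Form a composite variable: the union of the constituent variables,
    whose domain is the Cartesian product of the constituent domains."""
    variables = set(names)
    domain = set(product(*(domains[n] for n in sorted(variables))))
    return variables, domain

v, dom = composite("v1", "v2", "v3")
print(len(dom))  # 2 * 2 * 3 = 12 joint values
```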
Predicates are functions that map variable assignments to a Boolean value. Predicates that stand in for expressions/formulas are denoted with capital letters. Predicates P and Q are logically equivalent (denoted by \(P \Leftrightarrow Q\)) if and only if \(P \Rightarrow Q\) and \(Q \Rightarrow P\) are true for all variable assignments. The universal and existential quantifiers \(\forall \) and \(\exists \) eliminate variables and yield new predicates. Predicates \({\exists w}P\) and \({\forall w}P\) do not depend on \(w\). If \(w\) is a composite variable \(w\equiv w_1 \cup \ldots \cup w_N\) then \({\exists w}P\) is simply a shorthand for \({\exists w_1}\ldots {\exists w_N} P\).
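Over finite domains, quantifier elimination admits a small concrete sketch. The encoding below (predicates as sets of satisfying assignments, with an assumed two-variable Boolean domain) is a toy stand-in for the symbolic BDD operations used later in the paper.

```python
from itertools import product

# A predicate over finite domains, modeled as its set of satisfying
# assignments; each assignment is a frozenset of (variable, value) pairs.
DOMAINS = {"x": (0, 1), "y": (0, 1)}  # assumed example domains

def assignments(variables):
    """Enumerate every assignment over the given set of variables."""
    names = sorted(variables)
    for values in product(*(DOMAINS[n] for n in names)):
        yield frozenset(zip(names, values))

def exists(w, pred, variables):
    """Existential elimination: project each satisfying assignment onto the
    variables outside w; the result no longer depends on w."""
    keep = variables - set(w)
    return {frozenset(p for p in a if p[0] in keep) for a in pred}

def forall(w, pred, variables):
    """Universal elimination: a projected assignment survives only if every
    extension over w satisfies the predicate."""
    ext = list(assignments(set(w)))
    return {a for a in exists(w, pred, variables)
            if all(a | b in pred for b in ext)}

# P encodes (x OR y).
P = {a for a in assignments({"x", "y"}) if dict(a)["x"] or dict(a)["y"]}
print(len(exists({"y"}, P, {"x", "y"})))  # 2: both x = 0 and x = 1 remain
print(forall({"y"}, P, {"x", "y"}))       # only x = 1 survives
```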
2 Control Synthesis for a Motivating Example
The above iterations are not guaranteed to reach a fixed point in a finite number of iterations, except under certain technical conditions [21]. Figure 2 depicts an approximate region where the controller can force the Dubins vehicle to enter T. We showcase different improvements relative to a baseline script used to generate Fig. 2. A toolbox that adopts this paper’s framework is under active development and is open-sourced at [2]. It is written in Python 3.6 and uses the dd package as an interface to CUDD [20], a C/C++ library for constructing and manipulating binary decision diagrams (BDDs). All experiments were run on a single core of a 2013 MacBook Pro with a 2.4 GHz Intel Core i7 and 8 GB of RAM.
The following section uses relational interfaces to represent the controlled predecessor \({{\texttt {cpre}}}(\cdot )\) and iterations (2) and (3) as a computational pipeline. Subsequent sections show how modifying this pipeline leads to favorable theoretical properties and computational gains.
3 Relational Interfaces
Relational interfaces are predicates augmented with annotations about each variable’s role as an input or output^{1}. They abstract away a component’s internal implementation and only encode an inputoutput relation.
Definition 1
(Relational Interface [22]). An interface \(M(i,o)\) consists of a predicate M over a set of input variables \(i\) and output variables \(o\).
For an interface \(M(i, o)\), we call \((i, o)\) its input-output signature. An interface is a sink if it contains no outputs, i.e., has a signature of the form \((i, {\varnothing })\), and a source if it contains no inputs, i.e., \(({\varnothing }, o)\). Sink and source interfaces can be interpreted as sets, whereas input-output interfaces are relations. Interfaces encode relations through their predicates and can capture features such as nondeterministic outputs or blocking (i.e., disallowed, error) inputs. A system blocks for an input assignment if there does not exist a corresponding output assignment that satisfies the interface relation. Blocking is a critical property used to declare requirements; sink interfaces impose constraints by modeling constraint violations as blocking inputs. Outputs, on the other hand, exhibit nondeterminism, which is treated as adversarial. When one interface’s outputs are connected to another’s inputs, the outputs seek to cause blocking whenever possible.
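A blocking check can be made concrete with a toy finite encoding. The class and relation below are hypothetical illustrations (not the toolbox's data structures): the predicate is stored as the set of satisfying (input, output) assignment pairs.

```python
from dataclasses import dataclass

# Hypothetical finite encoding of a relational interface.
@dataclass(frozen=True)
class Interface:
    inputs: frozenset   # input variable names
    outputs: frozenset  # output variable names
    rel: frozenset      # {(input_assignment, output_assignment), ...}

def nb(F):
    """Sink of nonblocking inputs: every input assignment for which at
    least one output assignment satisfies the relation."""
    return Interface(F.inputs, frozenset(),
                     frozenset((i, frozenset()) for i, _ in F.rel))

def blocks(F, i):
    """An input blocks if no output assignment is related to it."""
    return (i, frozenset()) not in nb(F).rel

# Deterministic dynamics x+ = x XOR u: every input is nonblocking.
rel = frozenset((frozenset({("x", x), ("u", u)}), frozenset({("x+", x ^ u)}))
                for x in (0, 1) for u in (0, 1))
F = Interface(frozenset({"x", "u"}), frozenset({"x+"}), rel)
print(len(nb(F).rel))  # 4: no input of F blocks
```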
3.1 Atomic and Composite Operators
Operators manipulate interfaces by taking interfaces and variables as inputs and yielding another interface. We will show how the controlled predecessor \({{\texttt {cpre}}}(\cdot )\) in (1) can be constructed by composing the operators appearing in [22] and one additional operator. The first, output hiding, removes interface outputs.
Definition 2
(Output Hiding). Given an interface \(F(i, o)\) and an output variable \(w \in o\), output hiding \({\texttt {\textit{ohide}}}(w, F)\) yields an interface with signature \((i, o \setminus \{w\})\) and predicate \({\exists w} F\).
Existentially quantifying out \(w\) ensures that the input-output behavior over the unhidden variables is still consistent with potential assignments to \(w\). The operator \({\texttt {nb}}(\cdot )\) is a special variant of \({\texttt {\textit{ohide}}}(\cdot )\) that hides all outputs, yielding a sink encoding all nonblocking inputs to the original interface.
Definition 3
(Nonblocking Inputs Sink). Given an interface \(F(i,o)\), the nonblocking operation nb(F) yields a sink interface with signature \((i, {\varnothing })\) and predicate \({\texttt {nb}}(F) = {\exists o}F\). If \(F(i, {\varnothing })\) is a sink interface, then \({\texttt {nb}}(F) = F\). If \(F({\varnothing }, o)\) is a source interface, then \({\texttt {nb}}(F) = \bot \) if and only if \(F \Leftrightarrow \bot \); otherwise \({\texttt {nb}}(F) = \top \).
The interface composition operator takes multiple interfaces and “collapses” them into a single input-output interface. It can be viewed as a generalization of function composition in the special case where each interface encodes a total function (i.e., deterministic outputs and inputs that never block).
Definition 4
Interfaces \(F_1\) and \(F_2\) are composed in parallel if \(io_{21} \equiv {\varnothing }\) holds in addition to \(io_{12} \equiv {\varnothing }\). Equation (8) under parallel composition reduces to \(F_1 \wedge F_2\) (Lemma 6.4 in [22]), and \({\texttt {comp}}(\cdot )\) is commutative and associative. If \(io_{12} \not \equiv {\varnothing }\), then the interfaces are composed in series and the composition operator is only associative. Any acyclic interconnection can be composed into a single interface by systematically applying Definition 4’s binary composition operator. Nondeterministic outputs are interpreted to be adversarial. Series composition of interfaces has a built-in notion of robustness to account for \(F_1\)’s nondeterministic outputs and blocking inputs to \(F_2\) over the shared variables \(io_{12}\). The term \({\forall o_{12}}(F_1 \Rightarrow {\texttt {nb}}(F_2))\) in Eq. (8) is a predicate over the composition’s input set \(i_{12}\). It ensures that if a potential output of \(F_1\) may cause \(F_2\) to block, then \({\texttt {comp}}(F_1, F_2)\) must preemptively block.
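The robustness term can be sketched concretely. The snippet below uses a simplified encoding (interfaces as dicts from input assignment to allowed output set, and \(F_2\)'s inputs assumed to coincide exactly with \(F_1\)'s outputs, both simplifications for brevity): an input of the composition is blocked whenever some nondeterministic \(F_1\) output would block \(F_2\).

```python
# Interfaces as dicts: input assignment -> set of allowed output
# assignments; a missing key is a blocking input. Illustrative encoding.
def comp(F1, F2):
    """Series composition, assuming F2's inputs are exactly F1's outputs.
    The term forall o12 (F1 => nb(F2)) appears as the all(...) check:
    if any F1 output may block F2, the composition preemptively blocks."""
    composed = {}
    for i, outs1 in F1.items():
        if all(o in F2 for o in outs1):            # robustness check
            composed[i] = set().union(*(F2[o] for o in outs1))
    return composed

# F1: x -> {x, x+1 mod 3} is nondeterministic; F2 blocks on input y = 2.
F1 = {frozenset({("x", v)}): {frozenset({("y", v)}),
                              frozenset({("y", (v + 1) % 3)})}
      for v in (0, 1, 2)}
F2 = {frozenset({("y", v)}): {frozenset({("z", v * v)})} for v in (0, 1)}
C = comp(F1, F2)
print(sorted(dict(i)["x"] for i in C))  # [0]: x = 1, 2 may steer F1 to y = 2
```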
The final atomic operator is input hiding, which may only be applied to sinks. If the sink is viewed as a constraint, an input variable is “hidden” by an angelic environment that chooses an input assignment to satisfy the constraint. This operator is analogous to projecting a set into a lower-dimensional space.
Definition 5
(Input Hiding). Given a sink interface \(F(i, {\varnothing })\) and an input variable \(w \in i\), input hiding \({\texttt {ihide}}(w, F)\) yields a sink interface with signature \((i \setminus \{w\}, {\varnothing })\) and predicate \({\exists w} F\).
Unlike the composition and output hiding operators, this operator is not included in the standard theory of relational interfaces [22] and was added to encode a controller predecessor introduced subsequently in Eq. (10).
3.2 Constructing Control Synthesis Pipelines
The robust controlled predecessor (1) can be expressed through operator composition.
Proposition 1
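Since the formal statement of Proposition 1 is not reproduced here, the following only sketches the intended shape of the pipeline under the finite dict/set encoding used above, and should be read as an assumption-laden illustration: composing the dynamics \(F(x \cup u, x^+)\) with a sink \(Z\) over \(x^+\) yields a sink over \(x \cup u\) (the robustness term forces every successor into \(Z\)), and hiding \(u\) keeps the states where some control choice works.

```python
def comp_sink(F, Z):
    """Compose interface F (input assignment -> set of successor
    assignments) with sink Z (set of accepted successors). An input of F
    survives only if all of its nondeterministic successors lie in Z."""
    return {i for i, outs in F.items() if outs and all(o in Z for o in outs)}

def ihide(w, sink):
    """Hide variable w of a sink by projection (angelic choice of w)."""
    return {frozenset(p for p in a if p[0] != w) for a in sink}

def cpre(F, Z):
    """Controlled predecessor: states x where some u forces every x+ into Z."""
    return ihide("u", comp_sink(F, Z))

# Dynamics on states {0, 1, 2}: x+ = clip(x + u) with u in {-1, 0}.
F = {frozenset({("x", x), ("u", u)}):
         {frozenset({("x+", max(0, min(2, x + u)))})}
     for x in (0, 1, 2) for u in (-1, 0)}
Z = {frozenset({("x+", 0)})}                     # target sink: state 0
print(sorted(dict(a)["x"] for a in cpre(F, Z)))  # [0, 1]
```

State 2 is excluded because neither control choice reaches the target in one step.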
3.3 Modifying the Control Synthesis Pipeline

– Sect. 4: Coarsening interfaces reduces the computational complexity of a problem by throwing away fine-grained information. The synthesis result is conservative, but the degree of conservatism can be modified.
– Sect. 5: Refining interfaces decreases result conservatism. Refinement in combination with coarsening allows one to dynamically modulate the complexity of the problem as a function of multiple criteria, such as result granularity or minimizing computational resources.
– Sect. 6: If the dynamics or specifications are decomposable, then the control predecessor operator can be broken apart to reflect that decomposition.
These sections do more than simply reconstruct existing techniques in the language of relational interfaces. They uncover some implicit assumptions in existing tools and either remove them or make them explicit. Minimizing the number of assumptions ensures applicability to a diverse collection of systems and specifications and compatibility with future algorithmic modifications.
4 Interface Abstraction via Quantization
A key motivator behind abstraction-based control synthesis is that computing the game iterations from Eqs. (2) and (3) exactly is often intractable for high-dimensional nonlinear dynamics. Termination is also not guaranteed. Quantizing (or “abstracting”) continuous interfaces into a finite counterpart ensures that each predicate operation of the game terminates in finite time, but at the cost of the solution’s precision. Finer quantization incurs a smaller loss of precision but can cause the memory and computational requirements to store and manipulate the symbolic representation to exceed machine resources.
This section first introduces the notion of interface abstraction as a refinement relation. We define the notion of a quantizer and show how it is a simple generalization of many existing quantizers in the abstractionbased control literature. Finally, we show how one can inject these quantizers anywhere in the control synthesis pipeline to reduce computational bottlenecks.
4.1 Theory of Abstract Interfaces
While a controller synthesis algorithm can analyze a simpler model of the dynamics, the results have no meaning unless they can be extrapolated back to the original system dynamics. The following interface refinement condition formalizes a condition when this extrapolation can occur.
Definition 6
The refinement relation satisfies the reflexivity, transitivity, and antisymmetry properties required of a partial order [22] and is depicted in Fig. 4. This order has a bottom element \(\bot \), which is a universal abstraction. Conveniently, \(\bot \) signifies both Boolean false and the bottom of the partial order. This interface blocks for every potential input. In contrast, Boolean \(\top \) plays no special role in the partial order: while \(\top \) exhibits totally nondeterministic outputs, it also accepts every input. A blocking input is considered “worse” than nondeterministic outputs in the refinement order. The refinement relation \(\preceq \) encodes a direction of conservatism such that any reasoning done over the abstract models is sound and can be generalized to the concrete model.
Theorem 1
(Informal Substitutability Result [22]). For any input that is allowed for the abstract model, the output behaviors exhibited by an abstract model contains the output behaviors exhibited by the concrete model.
If a property on outputs has been established for an abstract interface, then it still holds if the abstract interface is replaced with the concrete one. Informally, the abstract interface is more conservative so if a property holds with the abstraction then it must also hold for the true system. All aforementioned interface operators preserve the properties of the refinement relation of Definition 6, in the sense that they are monotone with respect to the refinement partial order.
Theorem 2
(Composition Preserves Refinement [22]). Let \(\hat{A} \preceq A\) and \(\hat{B} \preceq B \). If the composition is well defined, then \({\texttt {comp}}(\hat{A}, \hat{B}) \preceq {\texttt {comp}}(A,B)\).
Theorem 3
(Output Hiding Preserves Refinement [22]). If \(A \preceq B\), then \({\texttt {\textit{ohide}}}(w,A) \preceq {\texttt {\textit{ohide}}}(w,B)\) for any variable w.
Theorem 4
(Input Hiding Preserves Refinement). If A, B are both sink interfaces and \(A \preceq B\), then \({\texttt {ihide}}(w, A) \preceq {\texttt {ihide}}(w, B)\) for any variable w.
Proofs for Theorems 2 and 3 are provided in [22]. Theorem 4’s proof is simple and is omitted. One can think of using interface composition and variable hiding to horizontally (with respect to the refinement order) navigate the space of all interfaces. The synthesis pipeline encodes one navigated path and monotonicity of these operators yields guarantees about the path’s end point. Composite operators such as \({{\texttt {cpre}}}(\cdot )\) chain together multiple incremental steps. Furthermore since the composition of monotone operators is itself a monotone operator, any composite constructed from these parts is also monotone. In contrast, the coarsening and refinement operators introduced later in Definitions 8 and 10 respectively are used to move vertically and construct abstractions. The “direction” of new composite operators can easily be established through simple reasoning about the cumulative directions of their constituent operators.
4.2 Dynamically Coarsening Interfaces
In practice, the sequence of interfaces \(Z_i\) generated during synthesis grows in complexity. This occurs even if the dynamics F and the target/safe sets have compact representations (i.e., fewer nodes if using BDDs). Coarsening F and \(Z_i\) combats this growth in complexity by effectively reducing the amount of information sent between iterations of the fixed point procedure.
Spatial discretization or coarsening is achieved by use of a quantizer interface that implicitly aggregates points in a space into a partition or cover.
Definition 7
A quantizer Q(i, o) is any interface that abstracts the identity interface \((i == o)\) associated with the signature (i, o).
Quantizers decrease the complexity of the system representation and make synthesis more computationally tractable. A coarsening operator abstracts an interface by connecting it in series with a quantizer. Coarsening reduces the number of nonblocking inputs and increases the output nondeterminism.
Definition 8
Figure 5 depicts how coarsening reduces the information required to encode a finite interface. It leverages a variable precision quantizer, whose implementation is described in the extended version at [1].
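As a concrete, simplified sketch: a uniform-grid quantizer abstracts the identity interface by mapping each point to its whole bin, and coarsening an interface's output is then just series composition with that quantizer. The grid bounds, bin count, and dynamics below are assumptions for illustration.

```python
def quantize(lo, hi, bins, x):
    """The [left, right) bin of a uniform grid containing x; this relation
    abstracts the identity, since x always lies inside its own bin."""
    width = (hi - lo) / bins
    k = min(bins - 1, max(0, int((x - lo) / width)))
    return (lo + k * width, lo + (k + 1) * width)

def coarsen_output(f, lo, hi, bins):
    """Series composition with a quantizer on the output side: the
    coarsened map returns a bin, increasing output nondeterminism."""
    return lambda x: quantize(lo, hi, bins, f(x))

f = lambda x: 0.5 * x + 0.1       # illustrative concrete dynamics
g = coarsen_output(f, 0.0, 1.0, 4)
print(g(0.6))  # f(0.6) = 0.4 lies in the bin [0.25, 0.5)
```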
The corollary below shows that quantizers can be seamlessly integrated into the synthesis pipeline while preserving the refinement order. It readily follows from Theorems 2, 3, and the quantizer definition.
Corollary 1
Input and output coarsening operations (13) and (14) are monotone operations with respect to the interface refinement order \(\preceq \).

– Downsampling with progress [7]: Initially use coarser system dynamics to rapidly identify a coarse reach basin. Finer dynamics are used to construct a more granular set whenever the coarse iteration “stalls”. In [7] only the \(Z_i\) are coarsened during synthesis; we enable the dynamics F to be coarsened as well.
– Greedy quantization: Selectively coarsen along certain dimensions by checking at runtime which dimension, when coarsened, would cause \(Z_i\) to shrink the least. This reward function can be leveraged in practice because coarsening is computationally cheaper than composition. For BDDs, the winning region can be coarsened until the number of nodes falls below a desired threshold. Figure 6 shows this heuristic being applied to reduce memory usage at the expense of answer fidelity. Reaching a fixed point is not guaranteed while quantizers may be inserted dynamically into the synthesis pipeline, but it is guaranteed once quantizers are always inserted at a fixed precision.
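The greedy heuristic above can be sketched minimally, with a set of grid cells standing in for a BDD and set cardinality standing in for node count (both simplifications made for this illustration):

```python
def coarsen_dim(Z, d):
    """Coarsen winning set Z along dimension d by merging cell pairs
    (2k, 2k+1); a merged cell is kept only when both halves were winning,
    so the coarsened region stays a sound under-approximation."""
    merged = set()
    for cell in Z:
        twin = cell[:d] + (cell[d] ^ 1,) + cell[d + 1:]
        if twin in Z:
            merged.add(cell[:d] + (cell[d] // 2,) + cell[d + 1:])
    return merged

def greedy_step(Z, dims):
    """Coarsen along the dimension whose merged set stays largest."""
    return max((coarsen_dim(Z, d) for d in range(dims)), key=len)

# Winning region on a 4x4 grid: every cell with x >= 1.
Z = {(x, y) for x in range(4) for y in range(4) if x >= 1}
print(len(coarsen_dim(Z, 0)), len(coarsen_dim(Z, 1)))  # 4 6
print(len(greedy_step(Z, 2)))  # 6: coarsening y loses no winning volume
```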
The most common quantizer in the literature never blocks and only increases nondeterminism (such quantizers are called “strict” in [18, 19]). If a quantizer is interpreted as a partition or cover, this requirement means that the union must equal the entire space. Definition 7 relaxes that requirement so the union may instead be a subset. It also hints at other variants, such as interfaces that do not increase output nondeterminism but instead block for more inputs.
5 Refining System Dynamics
Shared refinement [22] is an operation that takes two interfaces and merges them into a single interface. In contrast to coarsening, it makes interfaces more precise. Many tools construct system abstractions by starting from the universal abstraction \(\bot \), then iteratively refining it with a collection of smaller interfaces that represent input-output samples. This approach is especially useful if the canonical concrete system is a black-box function, Simulink model, or source code file. These representations do not readily support the predicate operations, nor can they be coarsened directly. We describe later how other tools implement a restrictive form of refinement that introduces unnecessary dependencies.
Interfaces can be successfully merged whenever they do not contain contradictory information. The shared refinability condition below formalizes when such a contradiction does not exist.
Definition 9
For any input that does not block for all interfaces, the corresponding output sets must have a nonempty intersection. If multiple interfaces are shared refinable, then they can be combined into a single interface that encapsulates all of their information.
Definition 10
The left term expands the set of accepted inputs. The right term signifies that if an input was accepted by multiple interfaces, the output must be consistent with each of them. The shared refinement operation reduces to disjunction for sink interfaces and to conjunction for source interfaces.
Shared refinement’s effect is to move up the refinement order by combining interfaces. Given a collection of shared refinable interfaces, the shared refinement operation yields their least upper bound with respect to the refinement partial order in Definition 6. A violation of (15) can be detected by checking whether the interfaces fed into \({\texttt {refine}}(\cdot )\) are abstractions of the resulting interface.
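A hedged sketch of shared refinement on the finite dict encoding (input assignment mapped to a nonempty set of allowed outputs; a missing key is a blocking input): accepted inputs are unioned, and on common inputs the outputs are intersected.

```python
def refinable(F1, F2):
    """Shared refinability: on common nonblocking inputs, the two output
    sets must intersect, i.e., the interfaces are not contradictory."""
    return all(F1[i] & F2[i] for i in F1.keys() & F2.keys())

def refine(F1, F2):
    """Union of accepted inputs; on shared inputs the output must be
    consistent with both interfaces, hence the intersection."""
    assert refinable(F1, F2), "contradictory interfaces"
    merged = {}
    for i in F1.keys() | F2.keys():
        if i in F1 and i in F2:
            merged[i] = F1[i] & F2[i]
        else:
            merged[i] = set(F1.get(i) or F2.get(i))
    return merged

F1 = {"a": {1, 2}}              # loose knowledge about input "a"
F2 = {"a": {2, 3}, "b": {5}}    # tightens "a" and adds input "b"
print(refine(F1, F2))  # "a" maps to {2}, "b" to {5}
```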
5.1 Constructing Finite Interfaces Through Shared Refinement
Smaller interfaces are constructed by sampling regions of the input space and constructing an input-output pair. In Fig. 7’s left half, a sink interface \(I(x \cup u, {\varnothing })\) acts as a filter. The source interface \(\hat{I}({\varnothing }, x \cup u)\) composed with \(F(x \cup u, x^+)\) prunes any information that is outside the relevant input region. The original interface refines any sampled interface. To make samples finite, interface inputs and outputs are coarsened. An individual sampled abstraction is not useful for synthesis on its own because it is restricted to a local portion of the interface input space. After sampling, many finite interfaces are merged through shared refinement. The assumption \(\hat{I}_i \Rightarrow {\texttt {nb}}(F)\) encodes that the dynamics will not raise an error when simulated and is often made implicitly. Figure 7’s right half depicts the sample, coarsen, and refine operations as methods to vertically traverse the interface refinement order.
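The sample-coarsen-refine loop above can be sketched as follows. This is an assumption-laden illustration: the map, grid, and sampling scheme are invented for the example, the sampled input cells are disjoint so shared refinement degenerates to a dict union, and random sampling only under-covers the true output set (a sound tool would bound it, e.g., with interval reachability).

```python
import random

def cell(width, x):
    """Index of the uniform grid cell containing x (nonnegative x assumed)."""
    return int(x / width)

def sample_interface(f, x_cell, width, n=64):
    """One sampled, coarsened interface: input cell -> set of output cells
    hit by sampled points of the black-box map f."""
    lo = x_cell * width
    outs = {cell(width, f(lo + random.random() * width)) for _ in range(n)}
    return {x_cell: outs}

f = lambda x: 0.5 * x           # stand-in for a black-box simulator
width = 0.25
abstraction = {}
for xc in range(4):             # sample each input cell, then merge
    abstraction.update(sample_interface(f, xc, width))
print(sorted(abstraction))      # [0, 1, 2, 3]: all four input cells covered
```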
6 Decomposed Control Predecessor
7 Conclusion
Tackling difficult control synthesis problems will require exploiting all available structure in a system with tools that can flexibly adapt to an individual problem’s idiosyncrasies. This paper lays a foundation for developing an extensible suite of interoperable techniques and demonstrates the potential computational gains in an application to controller synthesis with finite abstractions. Adhering to a simple yet powerful set of wellunderstood primitives also constitutes a disciplined methodology for algorithm development, which is especially necessary if one wants to develop concurrent or distributed algorithms for synthesis.
Footnotes
1. Relational interfaces closely resemble assume-guarantee contracts [16]; we opt to use relational interfaces because inputs and outputs play a more prominent role.
2. Disjunctions over sinks are required to encode \({\texttt {reach}}(\cdot )\). This will be enabled by the shared refinement operator defined in Definition 10.
References
1.
2.
3. Basar, T., Olsder, G.J.: Dynamic Noncooperative Game Theory, vol. 23. SIAM, Philadelphia (1999)
4. Bulancea, O.L., Nilsson, P., Ozay, N.: Non-uniform abstractions, refinement and controller synthesis with novel BDD encodings. CoRR abs/1804.04280 (2018)
5. Burch, J., Clarke, E., Long, D.: Symbolic model checking with partitioned transition relations (1991)
6. Gruber, F., Kim, E., Arcak, M.: Sparsity-aware finite abstraction. In: 2017 IEEE 56th Conference on Decision and Control (CDC), December 2017
7. Hsu, K., Majumdar, R., Mallik, K., Schmuck, A.K.: Multi-layered abstraction-based controller synthesis for continuous-time systems. In: Proceedings of the 21st International Conference on Hybrid Systems: Computation and Control (Part of CPS Week), HSCC 2018, pp. 120–129. ACM, New York (2018)
8. Khalil, H.K., Grizzle, J.W.: Nonlinear Systems, vol. 3. Prentice Hall, Upper Saddle River, New Jersey (2002)
9. Li, Y., Liu, J.: ROCS: a robustly complete control synthesis tool for nonlinear dynamical systems. In: Proceedings of the 21st International Conference on Hybrid Systems: Computation and Control (Part of CPS Week), HSCC 2018, pp. 130–135. ACM, New York (2018)
10. Liu, J.: Robust abstractions for control synthesis: completeness via robustness for linear-time properties. In: Proceedings of the 20th International Conference on Hybrid Systems: Computation and Control, HSCC 2017, pp. 101–110. ACM, New York (2017)
11. Majumdar, R.: Symbolic algorithms for verification and control. Ph.D. thesis, University of California, Berkeley (2003)
12. Mazo Jr., M., Davitian, A., Tabuada, P.: PESSOA: a tool for embedded controller synthesis. In: Touili, T., Cook, B., Jackson, P. (eds.) CAV 2010. LNCS, vol. 6174, pp. 566–569. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14295-6_49
13. Meyer, P.J., Girard, A., Witrant, E.: Compositional abstraction and safety synthesis using overlapping symbolic models. IEEE Trans. Autom. Control 63, 1835–1841 (2017)
14. Mouelhi, S., Girard, A., Gössler, G.: CoSyMA: a tool for controller synthesis using multi-scale abstractions. In: 16th International Conference on Hybrid Systems: Computation and Control, pp. 83–88. ACM (2013)
15. Nilsson, P., Ozay, N., Liu, J.: Augmented finite transition systems as abstractions for control synthesis. Discret. Event Dyn. Syst. 27(2), 301–340 (2017)
16. Nuzzo, P.: Compositional design of cyber-physical systems using contracts. Ph.D. thesis, EECS Department, University of California, Berkeley, August 2015
17. Piterman, N., Pnueli, A., Sa’ar, Y.: Synthesis of reactive(1) designs. In: Emerson, E.A., Namjoshi, K.S. (eds.) VMCAI 2006. LNCS, vol. 3855, pp. 364–380. Springer, Heidelberg (2005). https://doi.org/10.1007/11609773_24
18. Reißig, G., Weber, A., Rungger, M.: Feedback refinement relations for the synthesis of symbolic controllers. IEEE Trans. Autom. Control 62(4), 1781–1796 (2017)
19. Rungger, M., Zamani, M.: SCOTS: a tool for the synthesis of symbolic controllers. In: 19th International Conference on Hybrid Systems: Computation and Control, pp. 99–104. ACM (2016)
20. Somenzi, F.: CUDD: CU Decision Diagram Package, Version 3.0.0 (2015). http://vlsi.colorado.edu/~fabio/CUDD/
21. Tabuada, P.: Verification and Control of Hybrid Systems. Springer, New York (2009). https://doi.org/10.1007/978-1-4419-0224-5
22. Tripakis, S., Lickly, B., Henzinger, T.A., Lee, E.A.: A theory of synchronous relational interfaces. ACM Trans. Program. Lang. Syst. (TOPLAS) 33(4), 14 (2011)
23. Zamani, M., Pola, G., Mazo, M., Tabuada, P.: Symbolic models for nonlinear control systems without stability assumptions. IEEE Trans. Autom. Control 57(7), 1804–1809 (2012)
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.