Learning Transparent Data Automata

  • Normann Decker
  • Peter Habermehl
  • Martin Leucker
  • Daniel Thoma
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8489)


This paper studies the problem of learning data automata (DA), a recently introduced model for defining languages of data words, i.e., finite sequences of pairs of letters from a finite and an infinite alphabet, respectively. The model of DA is closely related to general Petri nets, for which no active learning algorithms have been introduced so far. This paper defines transparent data automata (tDA) as a strict subclass of deterministic DA. Yet, it is shown that every language accepted by DA can be obtained as the projection of the language of some tDA. The model of class memory automata (CMA) is known to be equally expressive as DA. However, deterministic DA are shown to be strictly less expressive than deterministic CMA. For the latter, and hence for tDA, equivalence is shown to be decidable. On these grounds, in the spirit of Angluin's L* algorithm, we develop an active learning algorithm for tDA. DA are incomparable to register automata and their variants, for which learning algorithms were given recently.
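As background for the learning algorithm developed "in the spirit of Angluin's L*", the following is a minimal sketch of the classic L* loop for plain DFAs over a finite alphabet (not for data automata, whose treatment is the subject of the paper). The `Teacher` class and its bounded-depth equivalence check are illustrative assumptions; a real teacher would decide equivalence exactly, as the paper shows is possible for deterministic CMA and tDA.

```python
from itertools import product

class Teacher:
    """Answers membership queries for a known regular target language and
    approximates equivalence queries by checking all words up to max_len."""
    def __init__(self, alphabet, member, max_len=6):
        self.alphabet, self.member, self.max_len = alphabet, member, max_len

    def equivalence_query(self, hypothesis):
        for n in range(self.max_len + 1):
            for w in product(self.alphabet, repeat=n):
                word = ''.join(w)
                if hypothesis(word) != self.member(word):
                    return word          # counterexample found
        return None                      # no difference up to max_len

def lstar(teacher):
    A, member = teacher.alphabet, teacher.member
    S, E = [''], ['']                    # access strings (rows), experiments (columns)

    def row(s):
        return tuple(member(s + e) for e in E)

    while True:
        changed = True
        while changed:                   # make the observation table closed and consistent
            changed = False
            for s, a in product(S, A):   # closed: each one-letter extension matches some row
                if row(s + a) not in {row(t) for t in S}:
                    S.append(s + a)
                    changed = True
            for s1, s2 in product(S, S):  # consistent: equal rows extend equally
                if s1 != s2 and row(s1) == row(s2):
                    for a, e in product(A, E):
                        if member(s1 + a + e) != member(s2 + a + e) and a + e not in E:
                            E.append(a + e)
                            changed = True

        rep = {row(s): s for s in S}     # one representative prefix per state

        def hypothesis(word, rep=rep):
            cur = ''
            for a in word:
                cur = rep[row(cur + a)]  # follow transitions via table rows
            return member(cur)           # the empty experiment gives acceptance

        cex = teacher.equivalence_query(hypothesis)
        if cex is None:
            return hypothesis
        for i in range(1, len(cex) + 1):  # add all prefixes of the counterexample
            if cex[:i] not in S:
                S.append(cex[:i])

# Example target: words over {a, b} with an even number of a's.
teacher = Teacher('ab', lambda w: w.count('a') % 2 == 0)
learned = lstar(teacher)
```

For tDA, the paper replaces this DFA-oriented table with structures suited to data words; the sketch only shows the query-driven refinement loop that both settings share.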




References

  1. Angluin, D.: Learning regular sets from queries and counterexamples. Inf. Comput. 75(2), 87–106 (1987)
  2. Bojanczyk, M., David, C., Muscholl, A., Schwentick, T., Segoufin, L.: Two-variable logic on data words. ACM Trans. Comput. Log. 12(4), 27 (2011)
  3. Björklund, H., Schwentick, T.: On notions of regularity for data languages. Theor. Comput. Sci. 411(4-5), 702–715 (2010)
  4. Grinchtein, O., Leucker, M., Piterman, N.: Inferring network invariants automatically. In: Furbach, U., Shankar, N. (eds.) IJCAR 2006. LNCS (LNAI), vol. 4130, pp. 483–497. Springer, Heidelberg (2006)
  5. Leucker, M., Neider, D.: Learning minimal deterministic automata from inexperienced teachers. In: Margaria, T., Steffen, B. (eds.) ISoLA 2012, Part I. LNCS, vol. 7609, pp. 524–538. Springer, Heidelberg (2012)
  6. Kaminski, M., Francez, N.: Finite-memory automata. Theor. Comput. Sci. 134(2), 329–363 (1994)
  7. Jonsson, B.: Learning of automata models extended with data. In: Bernardo, M., Issarny, V. (eds.) SFM 2011. LNCS, vol. 6659, pp. 327–349. Springer, Heidelberg (2011)
  8. Howar, F., Steffen, B., Jonsson, B., Cassel, S.: Inferring canonical register automata. In: Kuncak, V., Rybalchenko, A. (eds.) VMCAI 2012. LNCS, vol. 7148, pp. 251–266. Springer, Heidelberg (2012)
  9. Bollig, B., Habermehl, P., Leucker, M., Monmege, B.: A fresh approach to learning register automata. In: Béal, M.-P., Carton, O. (eds.) DLT 2013. LNCS, vol. 7907, pp. 118–130. Springer, Heidelberg (2013)
  10. Esparza, J., Leucker, M., Schlund, M.: Learning workflow Petri nets. Fundam. Inform. 113(3-4), 205–228 (2011)
  11. Biermann, A.W., Feldman, J.A.: On the synthesis of finite-state machines from samples of their behaviour. IEEE Transactions on Computers 21, 592–597 (1972)

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Normann Decker (1)
  • Peter Habermehl (2)
  • Martin Leucker (1)
  • Daniel Thoma (1)

  1. ISP, University of Lübeck, Germany
  2. Univ Paris Diderot, Sorbonne Paris Cité, LIAFA, CNRS, France
