
Metabolic computing

  • Vincenzo Manca
Review Paper

Abstract

The paper reviews some aspects of MP grammars, a particular type of P systems (M stands for Metabolic), consisting of multiset rewriting rules, which were introduced in the context of Membrane Computing for modeling biological dynamics. The main features of MP theory are recalled, such as the control mechanisms based on regulation functions, MP graphs, the representation of oscillatory dynamics, regression algorithms, and MP modeling. Finally, the computational universality of MP grammars is proved by means of Minsky's register machines.

Keywords

Discrete dynamics · Multiset grammars · MP regression algorithms · Metabolic computing

1 Introduction

MP systems are discrete dynamical systems introduced in the context of membrane computing [1] and investigated for more than 20 years. Preliminary results were developed from the end of the 1990s [2, 3, 4, 5, 6, 7]. Related approaches to MP theory and first investigations on MP regression (see later on) were developed in [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. Applications of MP systems in modeling biological systems, theoretical foundations, and efficient MP regression algorithms were investigated in [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30].

MP systems are essentially multiset rewriting rules equipped with functions that define the quantities of transformed elements. The attribute MP derives from P systems (multiset rewriting rules distributed in compartments), introduced by Păun [31, 32, 33, 34], with M standing for the focus on metabolic processes. MP regression is a peculiar aspect of MP theory: it provides methods that determine MP grammars able to generate the time series of an observed dynamics. MP regression algorithms rely on algebraic manipulation and least-squares evaluation, on statistical methods, or on genetic algorithms, and provide very accurate solutions [1, 20, 24, 26, 35, 36, 37, 38, 39, 40, 41, 42, 43]. Public software platforms based on MP theory are available, which provide examples and documentation [44, 45].

In the present paper, we mainly consider MP grammars for their computational aspect, which provides a natural notion of distributed computation where the program is encoded by a graph expressing the transformations and the regulations of a finite number of multiset rewriting rules.

2 MP grammars

An MP grammar G is a discrete dynamical system based on a set X of variables and a state space constituted by the assignments of values to the variables of X. Let \(\mathbb {N}\) be the set of natural numbers. Assuming the variables are given in some order, if X is a finite set of \(n \in \mathbb {N}\) variables, the set of possible states of G coincides with the set \(\mathbb {R}^n\) of real vectors of dimension n. A dynamics function \(\delta _G\) associated with G provides a next-state function, which changes the variable values according to an increase–decrease variation specified by all the rules (if a variable does not occur in a rule, its value remains unchanged). Namely, a reading of “MP” is the basic Minus–Plus mechanism of the rules of an MP grammar. A multiset over X is a function assigning a natural number, called multiplicity, to every \(x \in X\). The formal definition of MP grammar follows.

Definition 1

An MP grammar G is given by a structure [24]:
$$\begin{aligned} G = (X, R, \Phi ), \end{aligned}$$
where:
1.

X is a finite set of real variables. The values of the variables determine the current state;

2.

R is a finite set of rules. Each rule \(r \in R\) is expressed by \(\alpha _r \rightarrow \beta _r\) with \(\alpha _r ,\beta _r\) multisets over X, where \(\alpha _r(x)\) and \(\beta _r(y)\) denote the multiplicities of x and y in \(\alpha _r\) and in \(\beta _r\), respectively;

3.

\(\Phi = \{\varphi _r \; | \; r \in R\}\) is the set of regulators, or flux functions. Functions \(\varphi _r\) assume real values depending on the current values of some variables of X. In this way, a regulator \(\varphi _r\) associates with any state \(s \in \mathbb {R}^n\) a value \(u = \varphi _r(s)\), called “flux”, according to which any variable x occurring in \(\alpha _r\) is decreased by the value \(u \cdot \alpha _r(x)\), and any variable y occurring in \(\beta _r\) is increased by the value \(u \cdot \beta _r(y)\) (in both cases the flux is multiplied by the variable multiplicity).

The whole variation \(\Delta _G(s)_x\) of any variable \(x \in X\) of G is given by summing the variations due to all the rules where x occurs:
$$\begin{aligned} \Delta _G(s)_x = \sum _{r \in R} (\beta _r(x) - \alpha _r(x))\varphi _r(s), \end{aligned}$$
and the state variation, due to the application of rules to all variables, is given by
$$\begin{aligned} \Delta _G(s) = (\Delta _G(s)_{x \in X})^T \end{aligned}$$
(superscript T denotes transposition, so that \(\Delta _G(s)\) is viewed as a column vector).
The dynamics \(\delta _G\) of G is given by (subscript G is omitted when understood)
$$\begin{aligned} \delta _G(s) = s + \Delta _G(s). \end{aligned}$$
When an initial state \(s_0\) is given, an MP grammar G, starting from \(s_0\), generates the time series of states \(\left( \delta ^i(s_0) \right) _{i \ge 0}\), by iteratively applying the dynamics function \(\delta\). \(\square\)

When variables are equipped with measurement units (related to their interpretation) and with the time duration of each step, the MP grammar is more properly called an MP system.

The dynamics of an MP grammar can be naturally expressed by a system of (first-order) recurrent equations, synthetically represented in matrix notation (see [24] for details).

In fact, the rules define the following matrix, called the rule stoichiometric matrix:
$$\begin{aligned} \mathbb {A}= (\beta _r(x) - \alpha _r(x))_{x \in X, r \in R}. \end{aligned}$$
The fluxes are given by the vector U(s) (superscript T stands for transposition):
$$\begin{aligned} U(s)= (\varphi _r(s)_{r \in R})^T, \end{aligned}$$
and the vector of variable variations \(\Delta (s)\) is given by
$$\begin{aligned} \Delta (s) = (\Delta _x(s)_{x \in X})^T. \end{aligned}$$
Then the system of variable variations can be expressed by (\(\times\) is the usual matrix product)
$$\begin{aligned} \Delta (s)=\mathbb {A} \times U(s). \end{aligned}$$
This formulation of MP grammar dynamics, introduced in [36], is called EMA (Equational Metabolic Algorithm) and allows us to generate a sequence of states from any given initial state.
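To make the EMA update concrete, the following minimal Python sketch (an illustration of ours, not code from the MP software platforms [44, 45]) iterates \(\delta(s) = s + \mathbb{A} \times U(s)\), given a stoichiometric matrix and a list of regulator functions.

```python
import numpy as np

def ema_dynamics(A, regulators, s0, steps):
    """Iterate the EMA update s_{i+1} = s_i + A @ U(s_i),
    where U(s) is the vector of fluxes (phi_r(s))_{r in R}."""
    s = np.asarray(s0, dtype=float)
    trajectory = [s.copy()]
    for _ in range(steps):
        U = np.array([phi(s) for phi in regulators])  # flux vector U(s)
        s = s + A @ U                                 # Delta(s) = A x U(s)
        trajectory.append(s.copy())
    return np.array(trajectory)
```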
A natural way of expressing MP grammars by means of graphs, called MP graphs, was introduced in [19] (see Fig. 2). In an MP graph, rules (or reactions) are multi-edges connecting variable nodes (sources) to other variable nodes (targets), entering and exiting, respectively, from a rule node. Moreover, a “regulator” is associated with each rule: it goes from some variable nodes, called tuners, to the rule node. The regulator is a function that, applied to the values of its variable nodes, provides a value regulating the rule. Input and output nodes are considered in correspondence to rules whose left or right part consists of the empty multiset. The essence of MP graphs is the basic phenomenon depicted in Fig. 1, that is, the possibility of second-level edges connecting nodes with (first-level) edges.
Fig. 1

A simple graph with nodes and edges, enriched with a new kind of edge: meta-edges, connecting nodes to edges

Fig. 2

An example of MP graph. Empty circles denote variables (capital letters) and arrows denote transformations. These arrows are multi-arrows with a number of sources (possibly zero, and possibly with repetitions) and a number of targets (possibly zero, and possibly with repetitions). The identity of a multi-arrow is its body, consisting of a small full circle from which its branches exit. Small multi-arrows (tiny lines) are also present in the graph. Any small multi-arrow has a body consisting of a full small circle and a number of variables as sources (possibly zero, and possibly with repetitions), but exactly one target consisting of a transformation. Any small multi-arrow denotes a regulator (lower-case letter), that is, a function having as arguments values of variables and providing a real number expressing the flux of a rule with respect to a given state of the variables. Triangles (input and output) represent channels that acquire/expel quantities (“matter”) from/to the external environment

The MP graph of Fig. 2 represents the following rules.
  • \(\emptyset \rightarrow A : b\)

  • \(\emptyset \rightarrow B : d\)

  • \(\emptyset \rightarrow F : f\)

  • \(\emptyset \rightarrow L : k\)

  • \(H \rightarrow \emptyset : g\)

  • \(C \rightarrow \emptyset : a\)

  • \(P \rightarrow \emptyset : h\)

  • \(G \rightarrow \emptyset : s(C)\)

  • \(A B \rightarrow C D : e(E)\)

  • \(D \rightarrow G : c(C)\)

  • \(E F \rightarrow P : p(H)\)

  • \(L P \rightarrow H : q(P)\)

Example 2

The following MP grammar generates the Fibonacci sequence, as values of variable x, from the initial state \(x=1, y=0\) (\(\emptyset\) denotes the empty multiset).
$$\begin{aligned} \emptyset&\rightarrow y : x \end{aligned}$$
(1)
$$\begin{aligned} y&\rightarrow x : y \end{aligned}$$
(2)
\((x=1, y=0) \Rightarrow (x=1 , y=1) \Rightarrow (x=2 , y=1) \Rightarrow (x=3 , y=2) \Rightarrow (x=5 , y=3) \ldots\)
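With the ema_dynamics sketch given above, Example 2 can be checked directly: the columns of the stoichiometric matrix correspond to rules (1) and (2), and the regulators are just the current values of x and y.

```python
import numpy as np

# Variables ordered as (x, y); columns are rules (1) and (2) of Example 2.
A = np.array([[0.0,  1.0],    # x: +1 per unit flux of rule (2)
              [1.0, -1.0]])   # y: +1 from rule (1), -1 from rule (2)
regulators = [lambda s: s[0],  # phi_1(s) = x
              lambda s: s[1]]  # phi_2(s) = y

traj = ema_dynamics(A, regulators, s0=[1.0, 0.0], steps=8)
print([int(v) for v in traj[:, 0]])  # [1, 1, 2, 3, 5, 8, 13, 21, 34]
```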

MP grammars have an intrinsic versatility in describing oscillatory phenomena, which are crucial in all processes of life [24, 27]. Here, we give some simple but interesting examples.

The schema of MP grammars given in Example 3 [24] has an input rule \(r_1\) and an output rule \(r_3\), incrementing variable x and decrementing variable y, respectively. Both rules are regulated by the same variable that they change (autocatalysis), while the rule \(r_2\) from x to y is regulated by both variables (bicatalysis). An MP grammar of this type provides a simple model for the predator–prey dynamics first modeled in differential terms by Lotka and Volterra [46]. The model represents the growth of two populations x and y, preys and predators, respectively. Preys grow by eating nutrients from the environment (according to some reproduction rate) and die by predation. Predators grow by eating preys and die out (according to some death rate). If predators increase, then preys decrease; consequently, predators have less food available and decrease. Symmetrically, a decrease of predators implies a consequent increase of preys.

Example 3

The following grammar provides a predator–prey dynamics.
$$\begin{aligned}&\emptyset \rightarrow x \;\; : \qquad 0.061\cdot x + 0.931 \nonumber \\&x \rightarrow y \;\; : \qquad 0.067\cdot x + 0.15\cdot y\nonumber \\&y \rightarrow \emptyset \;\; : \qquad 0.154\cdot y + 0.403. \end{aligned}$$
(3)
Matrix \(\mathbb {A}\) below is the stoichiometric matrix of the MP grammar of Example 3.
$$\begin{aligned} \mathbb {A} = \left( \begin{array}{ccc} 1 & -1 & 0 \\ 0 & 1 & -1 \\ \end{array} \right) \end{aligned}$$
(4)
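The grammar of Example 3 can be run with the same ema_dynamics sketch; the regulators transcribe the fluxes of Eq. (3), while the initial state below is an arbitrary choice of ours (the text does not fix one).

```python
import numpy as np

# Variables (x, y) = (preys, predators); columns are the three rules of Eq. (3).
A = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
regulators = [lambda s: 0.061 * s[0] + 0.931,        # input rule for x
              lambda s: 0.067 * s[0] + 0.15 * s[1],  # x -> y
              lambda s: 0.154 * s[1] + 0.403]        # output rule for y

traj = ema_dynamics(A, regulators, s0=[10.0, 10.0], steps=500)  # oscillatory dynamics
```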
Fig. 3

The MP graph of a prey–predator input–output grammar

Example 4

The following grammar has the same stoichiometry as the prey–predator grammar. It was obtained using MP regression (see the next subsection) and reproduces sine and cosine dynamics using linear regulators (with initial state \(x=1\), \(y=0\)). It was proved in [47] that this grammar can be deduced from the classical analytical and geometric definitions of sine and cosine. In other words, MP regression is able to discover the logic of the circular functions.
$$\begin{aligned} \begin{array}{ll} r_1: \emptyset \rightarrow x & \qquad : \;\;\; k_1\cdot x \\ r_2: x \rightarrow y & \qquad : \;\;\; k_2\cdot (x + y)\\ r_3: y \rightarrow \emptyset & \qquad : \;\;\; k_3\cdot y\\ \end{array} \end{aligned}$$
(5)
where
$$\begin{aligned} k_1 & = 0.000999499833375\\ k_2 & = 0.000999999833333\\ k_3 & = 0.001000499833291 \end{aligned}$$
The coefficients are truncated at the 15th decimal digit, according to the accuracy of the computer representation. The sine/cosine dynamics is obtained with an absolute error of order \(10^{-14}\).
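A quick numerical check of Example 4 (our own sketch; we take the implicit time step to be 0.001, consistent with \(k_2 \approx \sin(0.001)\)):

```python
import numpy as np

k1, k2, k3 = 0.000999499833375, 0.000999999833333, 0.001000499833291
A = np.array([[1.0, -1.0,  0.0],   # same stoichiometry as the prey-predator grammar
              [0.0,  1.0, -1.0]])
regulators = [lambda s: k1 * s[0],
              lambda s: k2 * (s[0] + s[1]),
              lambda s: k3 * s[1]]

traj = ema_dynamics(A, regulators, s0=[1.0, 0.0], steps=10_000)
t = 0.001 * np.arange(len(traj))
err = max(np.max(np.abs(traj[:, 0] - np.cos(t))),
          np.max(np.abs(traj[:, 1] - np.sin(t))))
print(err)  # tiny: the text reports an absolute error of order 1e-14
```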

Even if the intuition underlying MP grammars was initially linked to metabolism, these grammars can be applied to any type of dynamics where some variables change in time. In this sense, metabolic processes are only special cases of a general dynamical framework. Ordinary differential equations (ODE) are the most popular mathematical model for dynamical processes. Therefore, a natural question is: “Why use formalisms other than ODE?”.

The answer to this question is related to the fact that, very often, we observe a dynamics but are completely ignorant of the forces acting on the variables in their changes. In this case, it is difficult to formulate, in differential terms, the rules acting on the observed system. The only data on which a mathematical model can be based are the time series of values that the variables assume at some given instants. The search for a mathematical model able to reproduce the observed data, within some approximation, is a case of an “inverse problem”. MP regression algorithms provide systematic methods for solving such inverse problems. We now shortly recall how MP grammars can be very useful in this regard.

Dynamical inverse problems (DIP) were at the beginning of modern science, which aimed at discovering the motion laws of the planets around the sun, that is, at deducing the laws underlying planetary motions from their observed trajectories. The approach we outline here is similar, because we are interested in inferring a possible (approximate) internal logic regulating how variables change when they are cooperatively organized in a given system. Very often, in complex systems with poor information about the internal causes acting on them, this is the most realistic possibility of investigation. In the context of MP theory, a DIP can be formulated in the following way. Given a time series \((s_i)_{i=0, 1, \ldots t}\) of observed states (equally spaced in time), find an MP grammar able to generate the same series (within a given approximation threshold). This means solving, in the best way, the following equation, where the MP grammar G is the unknown:
$$\begin{aligned} \left( \delta ^i_G(s_0) \right) _{i=0, 1, \ldots t}= (s_i)_{i=0, 1, \ldots t} . \end{aligned}$$
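To fix ideas, in the special case where the stoichiometric matrix \(\mathbb{A}\) is known and the regulators are assumed to be linear in the state, the equation above becomes linear in the unknown flux coefficients and can be attacked by ordinary least squares, as in the following sketch (an illustration of ours; the actual MP regression algorithms, based on the log-gain principle, stepwise regression, and genetic strategies [24, 38], are considerably more refined).

```python
import numpy as np

def fit_linear_regulators(A, states):
    """Least-squares fit of linear regulators phi_r(s) = c_r . (s, 1),
    given the stoichiometric matrix A and a time series of observed
    states (one state per row). Returns one coefficient row per rule."""
    S = np.asarray(states, dtype=float)
    D = S[1:] - S[:-1]                                 # observed variations
    X = np.hstack([S[:-1], np.ones((len(S) - 1, 1))])  # regressors (s_i, 1)
    T, p = X.shape
    n_vars, n_rules = A.shape
    # Delta_i = A @ C @ (s_i, 1): linear in the entries of C.
    G = np.einsum('xr,ik->ixrk', A, X).reshape(T * n_vars, n_rules * p)
    c, *_ = np.linalg.lstsq(G, D.reshape(-1), rcond=None)
    return c.reshape(n_rules, p)
```

Since different flux assignments can produce the same variable variations, the solution is in general not unique (lstsq returns the minimum-norm one); this underdetermination is precisely what principles such as the log-gain [38] are designed to resolve.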
General and specific cases of DIP were intensively investigated in the context of MP theory in the last 10 years (see [24] for a detailed account, and [27, 28, 29, 43, 44, 45, 48] for new developments and applications to biological modeling). Table 1 reports some models obtained by means of MP regression. Moreover, many complex oscillators and chaotic dynamics were generated by means of suitable regulatory schemata, and MP regression was able to correctly discover the parameters underlying some given chaotic time series.
Table 1

Models obtained by MP regression

Belousov–Zhabotinsky, Prigogine's brusselator (BZ) ([49, 50])

Lotka–Volterra, predator–prey dynamics (LV) ([46, 51, 52])

Susceptible–infected–recovered epidemics (SIR) ([50, 53])

Early amphibian mitotic cycle (AMC) ([39, 54, 55])

Drosophila circadian rhythms (DCR) ([50])

Non-photochemical quenching in photosynthesis (NPQ) ([56])

Minimal diabetes mellitus (MDM) ([28, 40])

Bi-catalytic synthetic oscillator ([36])

Synthetic oscillators ([27, 37, 48])

Chaotic dynamics ([27, 48])

Gene expression dynamics ([28, 29])

3 Input–output, positive, and reactive MP grammars

Two MP grammars are dynamically equivalent, with respect to a set of variables, when these variables change according to the same dynamics.

An MP grammar is called input–output if its rules have the empty multiset as left or right member [24]. The following theorem holds.

Theorem 5

Any MP grammar is dynamically equivalent to an input–output MP grammar. Moreover, any system of (first-order) recurrent equations can be expressed in terms of an MP grammar with input–output rules.

Proof

In fact, any rule \(\alpha \rightarrow \beta : \varphi\) with \(\alpha \not = \emptyset\) and \(\beta \not = \emptyset\) can be transformed into the set of rules \(x \rightarrow \emptyset : \varphi\) (output rules), for every occurrence of \(x\) in \(\alpha\), and \(\emptyset \rightarrow y: \varphi\) (input rules), for every occurrence of \(y\) in \(\beta\). Of course, applying \(\alpha \rightarrow \beta : \varphi\) is equivalent to applying all the corresponding input–output rules.

Conversely, let us consider a system of n recurrent equations (\(\times\) is the scalar product of vectors):
$$\begin{aligned} x^1_{i+1} & = a_{1,1}x^1_{i} + a_{1,2}x^2_{i} + \cdots + a_{1,n}x^n_{i}\\ x^2_{i+1} & = a_{2,1}x^1_{i} + a_{2,2}x^2_{i} + \cdots + a_{2,n}x^n_{i}\\ \vdots \\ x^n_{i+1} & = a_{n,1}x^1_{i} + a_{n,2}x^2_{i} + \cdots + a_{n,n}x^n_{i}, \end{aligned}$$
where values of variables at step \(i+1\) depend on values of variables at step i, which can be expressed by a vector equation:
$$\begin{aligned} \Delta _i(s) = \left( \begin{array}{c} P_1 \times s - Q_1 \times s \\ P_2 \times s - Q_2 \times s \\ \vdots \\ P_n \times s - Q_n \times s \\ \end{array} \right) \end{aligned}$$
where \(s = (x^1_i, \ldots, x^n_i)\), \(\Delta _i(s) = (x^1_{i+1} - x^1_{i}, \ldots, x^n_{i+1} - x^n_{i})\), \(P_i = ((a_{i,1})^+, (a_{i,2})^+, \ldots, (a_{i,n})^+)\), \(Q_i = ((-a_{i,1})^+, (-a_{i,2})^+, \ldots, (-a_{i,n})^+)\), for \(i = 1, 2, \ldots , n\), by setting, for any real value a, \((a)^+=a\) if \(a>0\) and \((a)^+=0\) otherwise.

Then, we can consider n variables \(x^1, \ldots , x^n\) with rules \(\emptyset \rightarrow x^i: a_{i,j} \cdot x^j\) for all \(i,j=1, 2, \ldots, n\) such that \((a_{i,j})^+ \not = 0\), and rules \(x^i \rightarrow \emptyset : -a_{i,j} \cdot x^j\) for all \(i,j=1, 2, \ldots, n\) such that \((-a_{i,j})^+ \not = 0\). Of course, these MP rules provide the same dynamics computed by the original system of recurrent equations. \(\square\)
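The rule transformation in the first part of the proof is mechanical, as the following sketch of ours shows (multisets are represented as dictionaries from variables to multiplicities; keeping the multiplicity in the singleton multiset preserves the variation \(u \cdot \alpha_r(x)\) of each variable).

```python
def to_input_output(rules):
    """Split each rule (alpha, beta, phi) into output rules x -> 0 and
    input rules 0 -> y, all sharing the regulator phi.
    Multisets are dicts mapping variables to multiplicities."""
    io_rules = []
    for alpha, beta, phi in rules:
        for x, m in alpha.items():
            io_rules.append(({x: m}, {}, phi))  # output rule, consumes m*phi of x
        for y, m in beta.items():
            io_rules.append(({}, {y: m}, phi))  # input rule, produces m*phi of y
    return io_rules

# e.g.  AB -> CD : e  becomes  A -> 0 : e,  B -> 0 : e,  0 -> C : e,  0 -> D : e
```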

Figure 3 shows the MP graph of an input–output MP grammar equivalent to the one given in Fig. 4. In the input–output grammar, \(\varphi _2 = \varphi _3\) is equal to \(\varphi _2\) of the grammar of Fig. 4, while \(\varphi _4\) is the same as \(\varphi _3\) of that grammar.
Fig. 4

The MP graph of the prey–predator grammar

An external variable (called parameter in [24]) is a variable of an input rule without flux. This means that, when a time series of values is given, the dynamics of the grammar can be computed by taking the values of the series as the values of the external variable. An MP grammar with external variables is not a generator of time series, but a function transforming the time series associated with its external variables into the time series of its internal (i.e., non-external) variables.

An MP grammar is non-cooperative when each of its rules has at most one left variable, and it is monic when, moreover, its left variables have multiplicity at most one.

Lemma 6

For any MP grammar there exists a monic MP grammar that is dynamically equivalent to it.

Proof

In fact, let us consider a rule with two left variables (the extension to more variables is trivial): \(x y \rightarrow z : \varphi\). Then, we can equivalently split this rule into two rules that change the variables in the same manner: \(x \rightarrow z : \varphi\) and \(y \rightarrow \emptyset : \varphi\). Let us apply this transformation to any rule with more than one left variable. Analogously, a rule such as \(2x \rightarrow z : \varphi\) equivalently transforms into the two rules \(x \rightarrow z : \varphi\) and \(x \rightarrow \emptyset : \varphi\). In this way, we get a monic MP grammar that is dynamically equivalent to the original one.\(\square\)

An MP grammar is positive when, starting from a state where all variables are positive, variables and fluxes remain positive in all the following states. Given an MP grammar G, a positive grammar \(G'\), called the positively controlled grammar associated with G, is defined in the following manner. The grammar \(G'\) has the same variables and the same rules as G. Moreover, a regulator \(\varphi '\) is defined in \(G'\) in correspondence to each regulator \(\varphi\) of G in the following way:

let s(x) be the value of variable x in the state s,

let \(\varphi ^+(s) = \max \{\varphi (s), 0\}\),

let \(\Phi ^-(x)\) be the set of regulators of the rules decreasing the variable x.

Then, regulators \(\varphi '\) are defined, for every \(\varphi \in \Phi ^-(x)\), by
$$\begin{aligned} \varphi '(s) = 0 \quad \mathbf{if } \quad \sum _{\varphi \in \Phi ^-(x)}\varphi ^+(s) > s(x) \end{aligned}$$
(6)
$$\begin{aligned} \varphi '(s) = \varphi ^+(s) \quad \mathbf{otherwise }. \end{aligned}$$
(7)
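A sketch of this control as printed in Eqs. (6)–(7), for monic grammars (so each rule consumes at most one variable); rules are given as pairs of a consumed variable (or None for input rules) and a regulator, and states as dictionaries.

```python
def controlled_fluxes(rules, s):
    """Positively controlled fluxes according to Eqs. (6)-(7):
    all fluxes consuming x are zeroed when their total demand exceeds s(x).
    `rules` is a list of (consumed_var_or_None, phi); `s` maps variables to values."""
    plus = [max(phi(s), 0.0) for _, phi in rules]        # phi^+(s)
    demand = {}                                          # sum over Phi^-(x)
    for (x, _), u in zip(rules, plus):
        if x is not None:
            demand[x] = demand.get(x, 0.0) + u
    return [0.0 if (x is not None and demand[x] > s[x])  # Eq. (6)
            else u                                       # Eq. (7)
            for (x, _), u in zip(rules, plus)]
```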
A class of positive MP grammars, called reactive MP grammars, can be defined by means of reaction functions and inertia functions. Namely, restricting to the case of monic grammars, this means that, in any state s, for each rule r of type \(x \rightarrow y\), the regulator is given by
$$\begin{aligned} \varphi _r(s)= f_r(s)/ \left( \sum _{l \in R^-(x)}f_l(s) + h_x(s)\right), \end{aligned}$$
where \(f_l\) are the reaction functions of rules competing with rule r for variable x, that is, l belongs to the set \(R^-(x)\) of rules consuming x, \(f_r\) is the reaction function of rule r, and \(h_x\) is the inertia function of x (for input rules, regulators coincide with their reaction functions).
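In code, again for the monic case, a reactive regulator is a one-liner (f_r, the competing reaction functions, and the inertia h_x are placeholders to be supplied):

```python
def reactive_flux(f_r, consumers_of_x, h_x, s):
    """phi_r(s) = f_r(s) / (sum of the reaction functions of all rules
    consuming x, including r itself, plus the inertia of x)."""
    return f_r(s) / (sum(f(s) for f in consumers_of_x) + h_x(s))
```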

Theorem 7

For any positive MP grammar, there exists a dynamically equivalent reactive MP grammar, and vice versa (a proof is given in [24]).

In Fig. 5, an MP graph is given that represents a grammar for computing the square root of any integer number greater than 1, using the well-known identity \(\sum _{i=0}^{n-1} (2i+1) = n^2\). In Table 2, the sequence of the states of the computation with input 7 is given. In Fig. 6, the MP grammar is represented by its topological structure. In Figs. 7 and 8, the two parts of the MP graph of Fig. 5 are separated: Fig. 7 represents the transformations, while Fig. 8 represents the regulators. Finally, in Table 3 the grammar of Fig. 5 is expressed by assignments and positivity conditions.
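The arithmetic core of the grammar can be stated independently of its MP encoding: subtract the successive odd numbers 1, 3, 5, … from the input while possible; the number of completed subtractions is the integer square root, and what is left is the remainder. A minimal sketch of this identity (of the arithmetic only, not of the MP graph itself):

```python
def isqrt_by_odds(n):
    """Integer square root via the identity sum_{i=0}^{k-1} (2i+1) = k^2:
    subtract 1, 3, 5, ... from n while possible."""
    root, odd = 0, 1
    while n >= odd:
        n -= odd
        odd += 2
        root += 1
    return root, n  # (greatest k with k^2 <= n, remainder n - k^2)

print(isqrt_by_odds(7))  # (2, 3), matching the final Y and A of Table 2
```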
Fig. 5

The MP graph of a positive MP grammar computing the square root of any integer number greater than 1. The values inside circles are the initial values of variables (where no value is indicated, value 0 is assumed)

Fig. 6

A topological computation paradigm expressed by an MP graph. The grammar of Fig. 5 is given in a purely graphical form, because the regulators, expressed by expressions in Fig. 5, are here replaced by arrows. All that is needed in the computation is expressed by matter localizations, that is, membranes and the passage of matter between them, which, in turn, are regulated by the current distribution of matter in the membranes

Fig. 7

The transformation component of the MP graph in Fig. 5

Table 2

The computation of \(\sqrt{7}\)

A  B  C  D  Y  W  Z

7  0  1  1  0  0  0

6  1  1  1  0  0  0

3  2  1  3  0  0  0

3  3  1  0  0  0  0

3  1  0  0  3  1  0

3  2  0  0  2  0  1

Numerical rows are the variable states. The input is the value given to variable A. When variable Z has value 1, the computation halts (the rules are not active, that is, no transformation happens). The final value of variable Y is the square root, that is, the greatest integer whose square does not exceed the input, while the final value of A is the remainder

Fig. 8

The regulation component of the MP graph in Fig. 5

Let us consider again the MP grammar of Fig. 5. When the input value is assigned to the variable A, the computation goes on while \(Z=0\) and stops when \(Z=1\) (the value of Y is the result and the final value of A is the remainder). The imperative program given in Table 3 provides the same sequence of states as the MP graph of Fig. 5 (relative to the variables present in that graph).
Table 3

An imperative program expressing the MP grammar of Fig. 5

INPUT A;

\(C := 1; \; D := 1; \; B := 0; \; Y := 0; \; W := 0; \; Z := 0\);

WHILE \(Z=0\) DO

\(A1 := A; \; B1 := B; \; C1 := C; \; D1 := D; \; Y1 := Y; \; W1 := W; \; Z1 := Z\);

\(B := B + C1\);

IF \(A1 \ge 2B1+C1\) THEN \(A := A - 2B1 - C1\) AND \(D := 2B1+C1\);

IF \(C1 \ge C1+D1\) THEN \(C := C - C1 - D1\) AND \(W := W + C1 + D1\);

IF \(B1 \ge B1+D1+Y1\) THEN \(B := B - B1 - D1 + Y1\) AND \(Y := Y + B1 + D1 + Y1\);

IF \(Y1 \ge W1\) THEN \(Y := Y - W1\) AND \(Z := Z + W1\);

END-DO;

OUTPUT Y, A;

HALT.

The example considered above highlights a deep relationship between MP computations and classical imperative programming, by relating both of them to a representation of computation in terms of “matter” flowing among locations, with fluxes determined by the content of these locations. In the next section, we develop this intuition in general terms.

3.1 MP computability

In this section, we follow the papers [30, 57, 58], by showing that the class of positively controlled MP grammars is computationally universal and that computational universality can be obtained by means of regulators having a simple form.

We consider a well-known kind of universal register machine, given in [59]. A register machine M of this type is given by
$$\begin{aligned} M = (\mathbb {R}, \mathbb {I}, \mathbb {O}, \mathbb {P}) \end{aligned}$$
where \(\mathbb {R}\) is a set \(\{R_1, \ldots, R_n\}\) of registers, \(\mathbb {I} \subseteq \mathbb {R}\) is the set of input registers, and \(\mathbb {O} \subseteq \mathbb {R}\) is the set of output registers. \(\mathbb {P}\) is a program, that is, a set of instructions \(\{I_1, \ldots , I_m\}\). An instruction can be: 1) a pair (label : operation); 2) a triple (label : test, relabel), where label and relabel are integers; 3) a pair (label : Halt) (end of computation). The operations are of two types:
  • Increment of register R: Inc(R),

  • Decrement of register R: Dec(R).

The application of the first one replaces the content of register R with its successor, while the application of the second one replaces the content of R by its predecessor, or leaves the content unchanged if it is zero. The integer label is the content of a special register \(R_0\), called the machine accumulator. The content L of \(R_0\) is replaced by \(L+1\) when an Inc or Dec instruction is applied.
As for the second type of instructions, which we now abbreviate by
$$\begin{aligned} L : Jnz(R, N), \end{aligned}$$
if L is the content of the accumulator \(R_0\), then it is replaced by N if the content of R is different from 0, and by \(L+1\) otherwise.

As for the third type of instructions, when the accumulator has the label of a Halt instruction, the computation of the register machine stops.

Any computation of M is relative to some positive integers given as contents of the input registers (all the other registers are assumed to have zero content). The computation realized by the machine M is a sequence of applications of instructions. The application of any instruction is a change of values in the registers (\(R_0\) included). At any step, the instruction whose label is in the accumulator is applied. The initial integer put in the accumulator determines the first instruction that will be applied. When the computation halts, the results of the computation are the numbers contained in the output registers.

For example, the following program, when the computation halts, gives in register \(R_1\) the sum of the initial contents of registers \(R_1\) and \(R_2\).
1 : \(Inc(R_1)\)

2 : \(Dec(R_2)\)

3 : \(Jnz(R_2, 1)\)

4 : Halt
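A register machine of this kind is straightforward to simulate; the following sketch (an illustration of ours, with Jnz advancing the accumulator to L+1 when the test fails) runs the addition program above.

```python
def run_register_machine(program, registers, acc=1):
    """Simulate a Minsky-style register machine. `program` maps labels to
    instructions ('inc', R), ('dec', R), ('jnz', R, label), or ('halt',)."""
    while True:
        instr = program[acc]
        if instr[0] == 'halt':
            return registers
        if instr[0] == 'inc':
            registers[instr[1]] += 1
        elif instr[0] == 'dec':
            registers[instr[1]] = max(0, registers[instr[1]] - 1)
        elif instr[0] == 'jnz' and registers[instr[1]] != 0:
            acc = instr[2]
            continue
        acc += 1  # default: go to the next instruction

# 1: Inc(R1); 2: Dec(R2); 3: Jnz(R2, 1); 4: Halt
program = {1: ('inc', 'R1'), 2: ('dec', 'R2'), 3: ('jnz', 'R2', 1), 4: ('halt',)}
print(run_register_machine(program, {'R1': 3, 'R2': 4}))  # {'R1': 7, 'R2': 0}
```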

     

Theorem 8

For any register machine M, there exists a monic positive MP grammar \(G_{M}\) equivalent to M.

Proof

Given a register machine M with m instructions, we consider an MP grammar \(G_M\) with a variable \(I_h\) for each instruction \(I_h\) of M (h is the instruction label), plus a further variable H, and a variable for each register \(R_j\) of M (register variables are denoted in the same way as the registers of M). All register variables are initialized with the same values that the registers have in M, and all instruction variables are initialized to zero. If M has the program consisting of instructions \(I_1, I_2, \ldots , I_m\) and 1 is the initial content of the accumulator, then the set of rules \(R_M\) of \(G_M\) is produced according to the following procedure.

From register machines to MP positive grammars
1. \(R_M := \{ \emptyset \rightarrow I_{1} : 1\}\)

2. for any instruction I of M do

3. begin

4. if \(I = h : Halt\) then add to \(R_M\) the rule \(I_{h} \rightarrow H : I_{h}\)

5. if \(I = h : Inc(R_j)\) then add to \(R_M\) the rules \(\emptyset \rightarrow R_j : I_h\) and \(I_h \rightarrow I_{h+1} : I_h\)

6. if \(I = h : Dec(R_j)\) then add to \(R_M\) the rules \(R_j \rightarrow \emptyset : I_h\) and \(I_h \rightarrow I_{h+1} : I_h\)

7. if \(I = h : Jnz(R_j, k)\) then add to \(R_M\) the rules specified below.

8. end
The method above provides the natural translation of Halt, Inc, and Dec instructions. The translation of the \(h: Jnz(R_j, k)\) instruction is easy if register \(R_j\) contains either 0 or 1. In fact, in this case it is translated by the two following rules (exponent \(+\) is according to Eq. (7)):
1. \(I_h \rightarrow I_ k : R_j\)

2. \(I_h \rightarrow I_{h+1} : (R_j + I_h)^+\)
Namely, if \(R_j = 0\), the first rule does not change its variables and the second rule produces \(I_{h+1}= 1\). Otherwise, if \(R_j = 1\), the first rule is active and the second one is blocked by the control of positivity, because its flux is 2 while \(I_{h}= 1\).
In the case that \(R_j\) contains a value greater than 1, the translation is realized in a longer way, by introducing auxiliary variables \(H_j, L_h, F_k, F_{h+1}\), and by adding the following MP rules:
1. \(R_j \rightarrow H_j : I_h\)

2. \(I_h \rightarrow L_h : I_h\)

3. \(L_h \rightarrow F_k : H_j\)

4. \(L_h \rightarrow F_{h+1} : (L_h + H_j)^+\)

5. \(H_j \rightarrow R_j : F_k\)

6. \(H_j \rightarrow R_j : F_{h+1}\)

7. \(F_k \rightarrow I_k : F_k\)

8. \(F_{h+1} \rightarrow I_{h +1}: F_{h+1}\)
According to these rules, if \(R_j>0\), the value 1 from \(R_j\) and the value 1 from \(I_h\) are moved to the copy variables \(H_j\) and \(L_h\), respectively. Therefore, the same strategy as in the 0/1 translation above can be applied to these copy variables, by means of rules (3) and (4). Then, the original value of register \(R_j\) is restored by means of rules (5) and (6) and, according to the test on \(R_j\), the value 1 is assigned to the right instruction variable by means of rules (7) and (8). \(\square\)

Theorem 9

For any register machine M, there exists a monic positive MP grammar \(G_M\) (dynamically) equivalent to M where regulators are single variables.

We can improve the translation of the previous theorem by means of MP rules that are regulated by single variables. In fact, rule (4) is replaced by other rules where no sum of variables appears. To this end, an auxiliary variable \(K_h\) is introduced, and the rules above are replaced in \(G_M\) by the following ones:
1. \(R_j \rightarrow H_j : I_h\)

2. \(I_h \rightarrow L_h : I_h\)

3. \(L_h \rightarrow F_k : H_j\)

4. \(H_j \rightarrow K_h: H_j\)

5. \(L_h \rightarrow K_{h} : L_h\)

6. \(L_h \rightarrow F_{h+1} : K_h\)

7. \(H_j \rightarrow R_j : F_k\)

8. \(H_j \rightarrow R_j : F_{h+1}\)

9. \(F_k \rightarrow I_k : F_k\)

10. \(F_{h+1} \rightarrow I_{h +1}: F_{h+1}\).
These rules change the register variables in the same way as they change in the machine M, and \(G_M\) ends by leaving the variables in the halting configuration of M. Moreover, the MP grammar \(G_M\) is positive because, when more than one rule consumes a variable, only one of them has a positive flux. \(\Box\)

4 Conclusions

MP grammars were introduced for modeling metabolic processes. However, they proved to be a natural formalism for expressing a great number of systems. Moreover, MP grammars have a feature that is very important from the applicative viewpoint: efficient and reliable algorithms for solving dynamical inverse problems, that is, for finding grammars that provide a given sequence of observed states for some given variables. This means that, in terms of MP grammars, we can discover a logic that underlies a given time series. MP regression theory concerns a wide class of algorithms discovering MP grammars that solve dynamical inverse problems, based on initial flux estimation and recurrent solution of linear equations, the log-gain principle, least-squares methods, linear statistical regression, and evolutionary genetic strategies. Complex oscillatory dynamics and chaotic processes are also naturally obtained by means of MP grammars [27, 48].

The structure of MP grammars is expressed by MP graphs, which represent in a very natural way the transformation component of such grammars together with their regulation component. In the visualization of these graphs, the main novelty of MP grammars easily emerges. In fact, the node–edge structure of usual graphs is enriched, in MP graphs, by a further level consisting of meta-edges connecting nodes to edges. As we showed, this feature allows MP grammars to be a universal computational formalism, with no centralized programs.

A concluding remark points out an important aspect related to the meta-level of MP graphs. In fact, they can be extended for expressing a crucial mechanism of real neural circuits, discovered in the memory consolidation of Aplysia (a marine invertebrate) [60]. Namely, a meta-level mechanism of some neural synaptic connections was discovered, according to which special neural connections are activated for modulating synaptic connections. In this way, new synapses can be developed by some modulation schemata. In MP terms, this kind of synapse modulation corresponds to a special type of meta-edge. If our regulators are equipped with the capability of introducing new edges, then MP graphs become meta-graphs that express “plastic” structures, and the letters of the MP prefix can be read as shorthand for meta-plastic graphs. An extension of classical neural networks with the capability of modulations that realize graph structure modifications could be an interesting novelty in the formalization of neural plasticity, on which many complex behaviors are surely based.

References

  1. Manca, V., Bianco, L., & Fontana, F. (2005). Evolution and oscillation in P systems: Applications to biological phenomena. In Gh. Păun (Ed.), Lecture Notes in Computer Science (WMC 2004) (Vol. 3365, pp. 63–84).
  2. Manca, V. (1998). Logical string rewriting and molecular computing. In Proceedings of the International Symposium on Mathematical Foundations of Computer Science (pp. 23–28). Brno, Czech Republic.
  3. Manca, V. (1998). String rewriting and metabolism: A logical perspective. In Gh. Păun (Ed.), Computing with bio-molecules, Springer series in discrete mathematics and theoretical computer science (Vol. 7, pp. 36–60). Singapore: Springer.
  4. Manca, V., & Martino, M. D. (1999). From string rewriting to logical metabolic systems. In Gh. Păun & A. Salomaa (Eds.), Grammatical models of multiagent systems (Vol. 8, pp. 297–315). Washington, D.C.: Gordon and Breach Science Publishers.
  5. Manca, V. (2000). Monoidal systems and membrane systems. In Pre-proceedings of the Workshop on Multiset Processing (pp. 176–190). Auckland: University of Auckland, Centre for Discrete Mathematics.
  6. Manca, V. (2001). Membrane algorithms for propositional satisfiability. In Gh. Păun (Ed.), Pre-proceedings of the International Workshop on Membrane Computing (pp. 181–192). Tarragona: University Rovira i Virgili.
  7. Manca, V. (2001). Monoidals for molecules and membranes. Romanian Journal of Information Science and Technology, 4(1–2), 155–170.
  8. Manca, V. (2002). DNA and membrane algorithms for SAT. Fundamenta Informaticae, 49, 171–175.
  9. Manca, V., & Bernardini, F. (2003). P systems with boundary rules. In Gh. Păun (Ed.), Lecture Notes in Computer Science (WMC 2002) (Vol. 2597, pp. 107–118).
  10. Franco, G., & Manca, V. (2003). A membrane system for selective leukocyte recruitment. In Gh. Păun (Ed.), Lecture Notes in Computer Science (WMC 2003) (Vol. 2933, pp. 181–190).
  11. Bianco, L., Fontana, F., & Manca, V. (2005). Metabolic algorithm with time-varying reaction maps. In Gh. Păun (Ed.), Membrane computing brainstorming 2005 (pp. 43–62). Sevilla: Fenix Editora.
  12. Bianco, L., & Manca, V. (2005). Encoding–decoding transitional systems for classes of P systems. In Gh. Păun (Ed.), Membrane computing, Lecture Notes in Computer Science (Vol. 3850, pp. 134–143). Berlin: Springer.
  13. Bianco, L., Fontana, F., & Manca, V. (2006). Computation of biochemical dynamics using MP systems. In E. Bartocci, P. Lio, & N. Paoletti (Eds.), Computational methods in systems biology (pp. 40–45). Berlin: Springer.
  14. Manca, V., Franco, G., Lampis, S., & Vallini, G. (2008). The phenomenon of sampling and growing in bio-populations. In DNA 14: Proceedings of the 14th International Meeting on DNA Computing (pp. 187–188). Opava: Silesian University.
  15. Pagliarini, R., Franco, G., & Manca, V. (2009). An algorithm for initial fluxes of metabolic P systems. International Journal of Computers Communications and Control, 4, 263–272.
  16. Castellini, A., Franco, G., & Manca, V. (2009). Toward a representation of hybrid functional Petri nets by MP systems. In Proceedings in Information and Communications Technology (PICT) (Vol. 1, pp. 28–37). Berlin: Springer.
  17. Castellini, A., Franco, G., & Manca, V. (2010). Hybrid functional Petri nets as MP systems. Natural Computing, 9, 61–81.
  18. Franco, G., & Manca, V. (2011). On synthesizing a replicating metabolic system by membrane systems. ERCIM News, 85, 21–22.
  19. Manca, V., & Bianco, L. (2008). Biological networks in metabolic P systems. BioSystems, 372, 165–182.
  20. Manca, V. (2009). Fundamentals of metabolic P systems. In G. Rozenberg & A. Salomaa (Eds.), The Oxford handbook of membrane computing (pp. 475–498). Oxford: Oxford University Press.
  21. Manca, V. (2010). From P to MP systems. In Gh. Păun (Ed.), Membrane computing, WMC 2009, Lecture Notes in Computer Science (Vol. 5957, pp. 74–94). Berlin: Springer.
  22. Manca, V. (2010). Metabolic P systems. Scholarpedia, 5(3), 9273. http://www.scholarpedia.org/.
  23. Manca, V. (2013). An outline of MP modeling framework. In Gh. Păun (Ed.), Membrane computing, Lecture Notes in Computer Science (Vol. 7762, pp. 47–55). Berlin: Springer.
  24. Manca, V. (2013). Infobiotics: Information in biotic systems. Berlin: Springer.
  25. Manca, V., Castellini, A., Franco, G., Marchetti, L., & Pagliarini, R. (2013). Metabolic P systems: A discrete model for biological dynamics. Chinese Journal of Electronics, 22, 717–723.
  26. Marchetti, L., & Manca, V. (2012). A methodology based on MP theory for gene expression analysis. In Gh. Păun (Ed.), Membrane computing, Lecture Notes in Computer Science (Vol. 7184, pp. 300–313). Berlin: Springer.
  27. Manca, V. (2013). Algorithmic models of biochemical dynamics: MP grammars synthetizing complex oscillators. International Journal of Nanotechnology and Molecular Computation, 3, 24–37.
  28. Marchetti, L., Manca, V., Pagliarini, R., & Bollig-Fischer, A. (2014). MP modelling for systems biology: Two case studies. In P. Frisco, M. Gheorghe, & M. J. Pérez-Jiménez (Eds.), Applications of membrane computing in systems and synthetic biology (pp. 223–245). Berlin: Springer.
  29. Bollig-Fischer, A., Marchetti, L., Mitrea, C., Wu, J., Kruger, A., Manca, V., et al. (2014). Modeling time-dependent transcription effects of HER2 oncogene and discovery of a role for E2F2 in breast cancer cell-matrix adhesion. Bioinformatics, 30, 3036–3043.
  30. Manca, V. (2016). Grammars for discrete dynamics. In A. Holzinger (Ed.), Machine learning for health informatics, Lecture Notes in Computer Science (Vol. 9605, pp. 37–58). Berlin: Springer.
  31. Păun, Gh. (2002). Membrane computing: An introduction. Berlin: Springer.
  32. Ciobanu, G., Pérez-Jiménez, M., & Păun, Gh. (2006). Applications of membrane computing. Berlin: Springer.
  33. Păun, Gh., Rozenberg, G., & Salomaa, A. (2010). The Oxford handbook of membrane computing. Oxford: Oxford University Press.
  34. Frisco, P., Gheorghe, M., & Pérez-Jiménez, M. J. (Eds.). (2014). Applications of membrane computing in systems and synthetic biology. Berlin: Springer.
  35. Fontana, F., & Manca, V. (2007). Discrete solutions to differential equations by metabolic P systems. Theoretical Computer Science, 372, 165–182.
  36. Manca, V. (2008). The metabolic algorithm for P systems: Principles and applications. Theoretical Computer Science, 404, 142–155.
  37. Manca, V., & Marchetti, L. (2010). Metabolic approximation of real periodical functions. Journal of Logic and Algebraic Programming, 79, 1–11.
  38. Manca, V., & Marchetti, L. (2011). Log-gain stoichiometric stepwise regression for MP systems. International Journal of Foundations of Computer Science, 22, 97–106.
  39. Manca, V., & Marchetti, L. (2010). Goldbeter's mitotic oscillator entirely modeled by MP systems. In Gh. Păun (Ed.), Membrane computing, Lecture Notes in Computer Science (Vol. 6501, pp. 273–284). Berlin: Springer.
  40. Manca, V., Marchetti, L., & Pagliarini, R. (2011). MP modelling of glucose-insulin interactions in the intravenous glucose tolerance test. International Journal of Natural Computing Research, 3, 13–24.
  41. Manca, V., & Marchetti, L. (2012). Solving dynamical inverse problems by means of metabolic P systems. BioSystems, 109, 78–86.
  42. Manca, V., & Marchetti, L. (2013). An algebraic formulation of inverse problems in MP dynamics. International Journal of Computer Mathematics, 90, 845–856.
  43. Castellini, A., Zucchelli, M., Busato, M., & Manca, V. (2013). From time series to biological network regulations. Molecular BioSystems, 9(1), 225–233.
  44. Castellini, A., Paltrinieri, D., & Manca, V. (2015). MP-GeneticSynth: Inferring biological network regulations from time series. Bioinformatics, 31, 785–787.
  45. Marchetti, L., & Manca, V. (2015). MpTheory Java library: A multi-platform Java library for systems biology based on the metabolic P theory. Bioinformatics, 31, 1328–1330.
  46. Fontana, F., & Manca, V. (2008). Predator-prey dynamics in P systems ruled by metabolic algorithm. BioSystems, 91, 545–557.
  47. Manca, V., & Marchetti, L. (2014). Recurrent solutions to dynamics inverse problems: A validation of MP regression. Journal of Applied and Computational Mathematics, 3, 1–8.
  48. Manca, V., Marchetti, L., & Zelinka, I. (2014). On the inference of deterministic chaos: Evolutionary algorithm and metabolic P system approaches. In 2014 IEEE Congress on Evolutionary Computation (CEC) (pp. 1483–1488). Beijing.
  49. Hilborn, R. C. (2000). Chaos and nonlinear dynamics. Oxford: Oxford University Press.
  50. Bianco, L., Fontana, F., Franco, G., & Manca, V. (2006). P systems for biological dynamics. In G. Ciobanu, Gh. Păun, & M. J. Pérez-Jiménez (Eds.), Applications of P systems (Vol. 3, pp. 5–23). Berlin: Springer.
  51. Lotka, A. J. (1920). Analytical note on certain rhythmic relations in organic systems. Proceedings of the National Academy of Sciences, 6, 410–415.
  52. Volterra, V. (1926). Fluctuations in the abundance of a species considered mathematically. Nature, 118, 558–560.
  53. Lambert, J. D. (1973). Computational methods in ordinary differential equations. Hoboken: Wiley.
  54. Goldbeter, A. (1996). Biochemical oscillations and cellular rhythms: The molecular bases of periodic and chaotic behaviour. Cambridge: Cambridge University Press.
  55. Goldbeter, A. (2002). Computational approaches to cellular rhythms. Nature, 420, 238–245.
  56. Manca, V., Pagliarini, R., & Zorzan, S. (2009). A photosynthetic process modelled by a metabolic P system. Natural Computing, 8, 847–864.
  57. Manca, V., & Lombardo, R. (2012). Computing with multi-membranes. In Gh. Păun (Ed.), Membrane computing, CMC12, Lecture Notes in Computer Science (Vol. 7184). Berlin: Springer.
  58. Guraldelli-Gracini, R. H., & Manca, V. (2015). Automatic translation of MP+V systems to register machines. In Gh. Păun (Ed.), Membrane computing, Lecture Notes in Computer Science (Vol. 9504). Berlin: Springer.
  59. Minsky, M. L. (1967). Computation: Finite and infinite machines. Upper Saddle River: Prentice Hall.
  60. Kandel, E. R. (2006). In search of memory: The emergence of a new science of mind. New York: W. W. Norton and Company.

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

Department of Computer Science and Center for BioMedical Computing, University of Verona, Verona, Italy
