Polarization: a new communication protocol in networks of bioinspired processors
Abstract
This work is a survey of the most recent results regarding the computational power of networks of bioinspired processors whose communication is based on a new protocol called polarization. In the former models, the communication amongst processors is based on filters defined by some random-context conditions, namely the presence of some symbols and the absence of other symbols. In the new protocol discussed here, a polarization (negative, neutral, or positive) is associated with each node, while the polarization of data navigating through the network is computed dynamically by means of a valuation function. Consequently, the protocol of communication amongst processors is naturally based on the compatibility between their polarization and the polarization of the data. We consider here three types of bioinspired processors: evolutionary processors, splicing processors, and multiset processors. A quantitative generalization of polarization (evaluation sets) is also presented. We recall results regarding the computational power of these networks considered as accepting devices. Furthermore, a solution to an intractable problem, namely the 0/1 Knapsack problem, based on networks of splicing processors with evaluation sets considered as problem-solving devices, is also recalled. Finally, we discuss some open problems and possible directions for further research in this area.
Keywords
Evolutionary processor Splicing processor Multiset processor Polarization Evaluation set Computational power
1 Introduction
Computational models inspired by different biological phenomena have turned out to be theoretically able to efficiently solve intractable problems. The main computational features of these models are abstracted from the way in which nature evolves. These computational models have appeared in the last two decades and have been vividly investigated from a formal point of view, but some important features that could make them more attractive and useful to practitioners have not yet been explored. Most of the models have a nondeterministic behavior in the computational sense; that is, at each computational step, the next configuration is nondeterministically chosen from the set of all the possible configurations that may follow. This might be the reason why there is a gap between theoretical results and practical aspects of these models. For a survey of several classes of bioinspired computational models, the reader is referred to [61].
Along these lines, networks of bioinspired processors form a class of highly parallel and distributed computing models inspired by and abstracted from different biological phenomena. Networks of bioinspired processors resemble other models of computation with similar or different origins: evolutionary systems inspired by the evolution of cell populations [18], tissue-like P systems [46] in the membrane computing area [54], networks of parallel language processors as a formal language generating device [16], flow-based programming as a well-known programming paradigm [48], distributed computing using mobile programs [29], the Connection Machine, viewed as a network of microprocessors processing 1 bit per unit time in the shape of a hypercube [32], etc.
Networks of bioinspired processors may be informally described as a graph whose vertices are processors running operations on data structured as strings, pictures, and multisets. Two main types of string processors have been considered so far: evolutionary processors and splicing processors. An evolutionary processor [15] is designed to carry out very simple operations on strings. These operations, which could be interpreted as formal operations abstracted from the gene mutations in DNA molecules, consist of deleting or inserting a symbol, and substituting one symbol by another. Moreover, each evolutionary processor is specialized in just one of the three mentioned operations. A splicing processor [40] performs an operation called splicing that is inspired by the recombination of DNA molecules under the effect of different types of enzymes [30]. The splicing operation may be informally explained as follows: two DNA sequences (represented by strings) are cut at specific sites (defined by means of splicing rules), and the prefix of one sequence is pasted to the suffix of the other and vice versa. A multiset processor [13] may be viewed as an evolutionary processor working on data, where the linear structure does not matter anymore. The corresponding operations performed by multiset processors are: increasing or decreasing the multiplicity of some symbol and replacing an occurrence of some symbol by another one. Picture processors [12] may transform a picture, given in the form of a two-dimensional array of symbols, by changing its frontier as follows: (i) a symbol of a row or column is replaced by another symbol, (ii) a row or column is deleted, (iii) a row or column is masked or unmasked, and (iv) a row or column is circularly shifted.
 (i)
all the nodes simultaneously send out the data which they contain after an evolutionary step to all adjacent nodes;
 (ii)
all the nodes simultaneously handle all the arriving data.
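To make the splicing operation informally described above concrete, here is a minimal Python sketch. The function name `splice` and the encoding of a splicing rule as a quadruple \((u_1,u_2,u_3,u_4)\) of context strings are our assumptions for illustration, not the exact formalism of [40]:

```python
def splice(x, y, rule):
    """Apply one splicing rule (u1, u2, u3, u4) to the strings x and y:
    x is cut between an occurrence of u1 and u2, y between an occurrence
    of u3 and u4, and the resulting prefixes and suffixes are exchanged."""
    u1, u2, u3, u4 = rule
    results = set()
    for i in range(len(x) + 1):
        if x[:i].endswith(u1) and x[i:].startswith(u2):
            for j in range(len(y) + 1):
                if y[:j].endswith(u3) and y[j:].startswith(u4):
                    # paste the prefix of one string to the suffix of the other
                    results.add(x[:i] + y[j:])
                    results.add(y[:j] + x[i:])
    return results
```

For instance, splicing "aabb" and "ccdd" with the rule ("a", "b", "c", "d") cuts each string at its unique site and exchanges the suffixes, producing "aadd" and "ccbb".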
Networks of evolutionary processors (NEP) have been widely investigated from a theoretical point of view: NEPs as language generating devices, accepting devices, and problem solvers [14, 15, 39], characterizations of the complexity classes NP, P, and PSPACE based on accepting NEPs [38], universal NEPs and descriptional complexity results in [40, 41], etc. A very early survey may be found in [42].
As far as the three variants (i)–(iii) described above for NEP are concerned, it turned out that all of them are equivalent from the computational power point of view. The equivalence of variants (i) and (iii) is a direct consequence of the fact that both of them can simulate Turing machines. Thus, the smallest NEP of type (i) that can simulate a Turing machine needs 7 nodes [4], while NEPs of type (iii) with only 16 nodes are able to simulate Turing machines [36]. However, no direct simulation of one model by another had been reported until [11], where direct simulations between the variants (i) and (iii) are presented. It is worth noting that both simulations are time-efficient; that is, each computational step in one model is simulated in a constant number of computational steps in the other. This result is somewhat surprising, as the possibility of controlling the computation in the variant (iii) seems to be weaker. It is particularly useful when one wants to translate a solution from one model into the other, whereas a translation via a Turing machine squares the time complexity of the new solution. The investigation in [10] is along the same lines as [11] and extends it with new time-efficient simulations between the variants (i) and (iii) on one hand and the variant (ii) on the other hand. Furthermore, characterizations of the class NP are proposed in [37] (NEPs of type (i) of size 10) and [36] (NEPs of type (iii) of size 16).
The corresponding model based on the splicing operation, namely the network of splicing processors (NSP), was introduced in [40]. The NSP model resembles some features of the test tube distributed systems based on splicing introduced in [17] and further investigated in [51]. The differences between the models considered in [17] and [40] are precisely described in [40]. In [40], one also mentions the differences between NSP and the time-varying distributed H systems, another generative model based on splicing introduced in [53] and further investigated in [43, 50, 52]. In all these models, the splicing operation is applied to an arbitrary pair of strings. A restricted version of NSPs, in which the pair of strings to which splicing is applied is formed by an auxiliary string and an arbitrary nonauxiliary string, was introduced in [40], where it was proved that this computing model is computationally complete. A characterization of the complexity class NP as the class of languages accepted by restricted NSPs in polynomial time was proposed. Furthermore, a similar characterization was proposed for the class PSPACE as the class of languages accepted by restricted NSPs with at most polynomial length of the strings used in the derivation. In [39], it was proved that NSPs (unrestricted, this time) of constant size accept all recursively enumerable languages and can solve all problems in NP in polynomial time; in addition, a universality result for NSPs was proposed. In both cases, the number of nodes needed was 7. In [35], it is shown that computational completeness can be achieved by NSPs of two nodes. In the same paper, a more involved construction, showing that NSPs of size 3 can simulate the computations of a nondeterministic Turing machine in parallel, is presented.
Several software implementations of NEPs have been reported in the literature (see [21, 22, 23]). Other software simulators for the NEP model using massively parallel platforms for multicore desktop computers, clusters of computers, and cloud resources have been reported in [26, 27]. These solutions have encountered difficulties when trying to implement filters, because of the challenge of orchestrating communication and computation across the available resources. One idea was to consider a thread-safe model of processors, such that each rule and filter is associated with a thread, the threads for filters being more complicated than those for rules. It is worth mentioning that the threads do not necessarily synchronize their steps (evolutionary or communication), but the itineraries of data through the theoretical model do not interfere with each other. In this setting, a processor is the parent of a set of threads, which use all objects from that processor in a mutual exclusion region. When a processor starts to run, it starts, in cascade, the rule threads and the filter threads.
Another difficulty for all of these simulators is that they were developed only for the NEP model, and extending them to new models is not a trivial task. Recently, in [27], it was shown that massively distributed platforms designed for big data scenarios are potential candidates for the development of ultra-scalable simulators able to execute NEP models and their variants. In particular, a new framework, named NPEPE, was introduced to deploy NEP solutions on these computing platforms by developing an engine that uses Apache Giraph on top of the Hadoop platform. The results of some experiments with NPEPE suggest its suitability for deploying NEP solutions to hard computational problems. Furthermore, this work also suggests that other variants of NEP might be adapted to these ultra-scalable platforms to minimize the growth of the data being processed. Such a reduction of the resources devoted to the filtering process could become a clear advantage when deploying hardware/software solutions on top of massively distributed computational platforms.
There are several reasons for introducing a communication protocol based on a new condition, namely the polarization. One reason is to replace the filter-based communication among processors by another protocol that seems easier to implement. More importantly, the syntactic conditions discussed above, which may capture the presence or absence of certain elements in a cell, cannot capture phenomena like the equilibrium potential of an electrochemical reaction in a cell (electrochemical polarization) or the deviation of the concentration in a cell from its equilibrium value (concentration polarization). If the existence of some elements in a cell is important, the concentration of these elements is without any doubt also very important. Inspired by these phenomena, the new condition is called polarization.
Paper [1] considers a new variant of NEP in which the old filtering process is replaced by a process regulated by polarization, and discusses the potential of this variant for solving hard computational problems. A slightly more general variant of the networks of polarized evolutionary processors considered in [1], called NPEP, was introduced in [2]. All nodes of an NPEP are “polarized” in the sense that an element of the set \(\{-,0,+\}\) is associated with each node of the network, such that each node can be seen as having a negative, neutral, or positive polarization. To define a strategy of communication following an analogy to the electrical charge, we need a procedure to compute the polarization of data. As seen above, the polarization of a node is defined in advance and fixed; in its turn, the polarization of strings is computed by means of a valuation mapping. This function associates an integer value with each string, depending on the integer values assigned to its symbols. Then, only the sign of the value associated with a string is set as its polarization. Thus, string migration amongst adjacent nodes, which might be viewed as a simulation of the communication channel between cells, depends on both the string polarization and the node polarization, which, for simplicity reasons, have to be the same.
It is clear that the linear structure of a string does not matter when computing its valuation. Therefore, one may consider that data are organized less strictly; that is, one stores only the number of occurrences of each symbol. This is a multiset of symbols; processors acting on multisets of symbols and networks of polarized multiset processors (NPMP) have been considered in [13].
Networks of polarized splicing processors (NPSP) have been introduced in [8] and further investigated in [9]. A quantitative generalization of these networks, called networks of splicing processors with evaluation sets (NSPES), has been introduced in [28]. In an NSPES, unlike in all the aforementioned cases, the valuation mapping returns the exact value computed for a string. The new model refines the communication protocol based on polarization discussed above, in which each polarization may be viewed as one of the intervals of integers \((-\infty ,0)\), \(\{0\}\), and \((0,\infty )\), to a more complex polarization based on more intervals. The new model tries to resemble the biological concept of the concentration gradient in a solution. Now, the strategy of communication between two nodes follows the compatibility between their accepting values with respect to some predefined evaluation sets and the values of the data computed by a valuation mapping. More precisely, the values of data have to be in the set of accepting values with respect to some evaluation set of symbols. This new communication protocol might be interpreted as the movement of molecules or particles along a concentration gradient between two areas.
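As an illustration of this refined protocol, the membership test behind the communication step might be sketched as follows. The encoding of an accepting set as a list of closed integer intervals and all names are our assumptions for illustration:

```python
import math

def can_enter(value, accepting_intervals):
    """A datum whose computed valuation is `value` may enter a node iff
    the value lies in one of the node's accepting intervals, encoded
    here as closed pairs (lo, hi)."""
    return any(lo <= value <= hi for lo, hi in accepting_intervals)

# The three classical polarizations are recovered as the special case of
# the intervals (-inf, 0), {0}, and (0, +inf), written here as closed
# intervals over the integers:
NEGATIVE = [(-math.inf, -1)]
NEUTRAL = [(0, 0)]
POSITIVE = [(1, math.inf)]
```

With more than three intervals, the same test expresses the finer, gradient-like protocol of NSPES.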
The paper is organized as follows. After a section that presents the basic notions and concepts, we recall the results regarding the effect of polarization in networks of evolutionary processors. The same is then done for networks of multiset processors and finally for networks of splicing processors. The last section is devoted to a discussion about open problems and directions for further research.
2 Preliminaries
We assume that the reader is familiar with the basic notions of the formal language theory. In the sequel, we summarize the main concepts and notations used in this work; for all unexplained notions, the reader is referred to [62].
An alphabet is a finite and nonempty set of symbols. The cardinality of a finite set A is written card(A). Any finite sequence of symbols from an alphabet V is called a string over V. The set of all strings over V is denoted by \(V^*\) and the empty string is denoted by \(\lambda \). The length of a string x is denoted by \(|x|\), while alph(x) denotes the minimal alphabet W such that \(x\in W^*\). Furthermore, \(|x|_a\) denotes the number of occurrences of the symbol a in x.
A homomorphism from the monoid \(V^*\) into the monoid (group) of additive integers \(\mathbb {Z}\) is called a valuation of \(V^*\) in \(\mathbb {Z}\). The absolute value of an integer k is denoted by \(|k|\). Although the absolute value of an integer and the length of a string are denoted in the same way, this cannot cause any confusion, as the arguments are understood from the context.
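For illustration, a valuation and the sign that the polarized models later extract from it can be sketched as follows; the dictionary-based encoding of \(\varphi \) and the function names are ours:

```python
def valuation(w, phi):
    """The valuation of a string is its homomorphic image in (Z, +):
    the sum of the integer values that phi assigns to its symbols."""
    return sum(phi[a] for a in w)

def sign(k):
    """Sign of an integer: -1, 0, or +1."""
    return (k > 0) - (k < 0)
```

Since addition is commutative, `valuation` depends only on the number of occurrences of each symbol, not on their order; this is the observation exploited later for multiset processors.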
A nondeterministic Turing machine is a construct \(M=(Q,\)V, U, \(\delta ,\)\(q_0,\)B, F), where Q is a finite set of states, V is the input alphabet, U is the tape alphabet, \(V \subset U\), \(q_0\) is the initial state, \(B \in U \setminus V\) is the “blank” symbol, \(F \subseteq Q\) is the set of final states, and \(\delta \) is the transition function, \( \delta : (Q\setminus F)\times U \rightarrow 2^{Q\times (U\setminus \{B\})\times \{R,L\}}\). There are many variants of Turing machines; the one considered here can be described intuitively in the following way. It has only one semi-infinite tape (bounded to the left) storing strings over the alphabet \(U\setminus \{B\}\) from the beginning of the tape. The blank symbol B occupies all the tape cells that do not store a symbol from \(U\setminus \{B\}\). The tape head can read a symbol from the tape and write a symbol different from B. Initially, a string over the input alphabet V is stored on the tape. A computational step (or move) of M can be described as follows: the symbol stored on the current cell of the tape is read, the current state may be changed or not, a symbol from \(U\setminus \{B\}\) is written on the current cell, and the tape head is moved one cell either to the left (provided that the cell scanned was not the leftmost one) or to the right. A computation of M is a possibly infinite sequence of moves. The input string is accepted iff the machine reaches a final state. The language accepted by M is the language of all strings accepted by M. A Turing machine is deterministic if, for every state and symbol, the machine can make at most one move.
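The accepting behavior of the nondeterministic machine just described can be sketched as a breadth-first search over configurations; the dictionary encoding of \(\delta \), the `max_steps` bound (the machine itself need not halt), and all names are our assumptions:

```python
from collections import deque

def accepts(delta, q0, final, blank, w, max_steps=10_000):
    """Breadth-first exploration of the configurations of a
    nondeterministic Turing machine with one semi-infinite tape.
    delta maps (state, symbol) to a set of (state, symbol, move)
    triples, with move in {'L', 'R'}. Returns True iff a final state
    is reachable within max_steps explored configurations."""
    start = (q0, tuple(w) if w else (blank,), 0)
    frontier, seen = deque([start]), {start}
    explored = 0
    while frontier and explored < max_steps:
        state, tape, pos = frontier.popleft()
        explored += 1
        if state in final:
            return True
        for nstate, sym, move in delta.get((state, tape[pos]), ()):
            ntape = tape[:pos] + (sym,) + tape[pos + 1:]
            npos = pos + 1 if move == 'R' else pos - 1
            if npos < 0:
                continue              # the head cannot leave the tape on the left
            if npos == len(ntape):
                ntape += (blank,)     # extend the semi-infinite tape on demand
            conf = (nstate, ntape, npos)
            if conf not in seen:
                seen.add(conf)
                frontier.append(conf)
    return False
```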
We now recall the definition of 2-tag systems following [60]. It is worth noting that this definition is slightly different from those in [47, 59], but equivalent to them. A 2-tag system is a pair \(T=(V,\mu )\), where V is a finite alphabet of symbols that contains a special halting symbol H, and \(\mu :V\setminus \{H\} \rightarrow V^+\) with \(|\mu (x)|\ge 2\) or \(\mu (x)=H\). Furthermore, \(\mu (x)=H\) for just one \(x\in V\setminus \{H\}\). A string over V is said to be a halting string if it contains H or is shorter than 2. The tag operation \(t_T\) is defined for each nonhalting string w in the following way: if x is the leftmost symbol of w, then \(t_T(w)\) returns the string obtained by deleting the leftmost 2 symbols of w and appending \(\mu (x)\) to the obtained string. A computation of a 2-tag system as above is an arbitrary iteration of the tag operation described above. A computation is not considered to exist unless a halting string is produced in finitely many iterations. It is known, see [60], that 2-tag systems are computationally complete.
The time complexity of the finite computation \(w_0=w,w_1=t_T(w_0)\), \(w_2=t_T(w_1),\ldots ,w_p=t_T(w_{p-1})=\alpha H\), with \(w_i\in (V\setminus \{H\})^+\), \(|w_i|\ge 2\) for all \(0\le i\le p-1\), and \(\alpha \in (V\setminus \{H\})^*\), of T on \(w\in V^*\) is denoted by \(Time_T(w)\) and equals p, that is, the number of steps required for the 2-tag system to produce a halting string starting from the string w. The time complexity of T is the partial function from \({\mathbb N}\) to \({\mathbb N}\), \(Time_T(n)=\max \{Time_T(w)\mid w\in V^*, |w|=n\}\).
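The tag operation and the step count \(Time_T(w)\) can be sketched as follows; the `max_steps` guard is our addition (a 2-tag system need not halt), and the halting symbol is assumed to be the literal "H" as in the definition above:

```python
def run_2tag(mu, w, max_steps=10_000):
    """Iterate the tag operation of the 2-tag system (V, mu) on w:
    while w is not a halting string (i.e. it contains no H and has
    length at least 2), delete its two leftmost symbols and append
    mu(x), where x was the leftmost symbol. Returns the halting
    string together with the number of steps, Time_T(w)."""
    steps = 0
    while len(w) >= 2 and "H" not in w:
        w = w[2:] + mu[w[0]]
        steps += 1
        if steps >= max_steps:
            raise RuntimeError("no halting string produced")
    return w, steps
```

For instance, with \(\mu (a)=bb\) and \(\mu (b)=H\), the input "aa" evolves as aa → bb → H in two steps.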
It is worth mentioning that most of the small universal Turing machines have been obtained via simulations of 2-tag systems (see [60] and, more recently, [67] together with the references therein).
3 Polarization in networks of evolutionary processors
 If \(\sigma \equiv a\rightarrow b\in Sub_V\), then \(\sigma ^*(w)=\left\{ \begin{array}{ll} \{ubv:\ w=uav\},\\ \{w\}, \text{ otherwise. } \end{array}\right. \)$$\begin{aligned} \sigma ^l(w)=\left\{ \begin{array}{ll} \{bu:\ w=au\},\\ \{w\}, \text{ otherwise. } \end{array}\right. \quad \quad \sigma ^r(w)=\left\{ \begin{array}{ll} \{ub:\ w=ua\},\\ \{w\}, \text{ otherwise. } \end{array}\right. \end{aligned}$$
 If \(\sigma \equiv a\rightarrow \lambda \in Del_V\), then \(\sigma ^*(w)=\left\{ \begin{array}{ll} \{uv:\ w=uav\},\\ \{w\}, \text{ otherwise. } \end{array}\right. \)$$\begin{aligned} \begin{array}{llc} \sigma ^l(w)=\left\{ \begin{array}{ll} \{v:\ w=av\},\\ \{w\}, \text{ otherwise } \end{array}\right. &{} \qquad &{} \sigma ^r(w)=\left\{ \begin{array}{ll} \{u:\ w=ua\},\\ \{w\}, \text{ otherwise } \end{array}\right. \end{array} \end{aligned}$$

If \(\sigma \equiv \lambda \rightarrow a\in Ins_V\), then \(\sigma ^*(w)=\{uav:\ w=uv\}, \ \sigma ^l(w)=\{aw\}, \ \sigma ^r(w)=\{wa\}.\)
As we do not give the proofs here, it is worth mentioning that all the results surveyed here have been obtained with deletion and insertion rules acting in the modes l and r only, and substitution rules acting in the mode \(*\) only. In other words, we have noticed that deletions and insertions applied anywhere in the string, and substitutions applied at the ends of the string, can be ignored without any effect on the presented results.
Given \(\alpha \in \{*,l,r\}\), we extend the action of an evolutionary rule \(\sigma \) on a string to a set of strings \(L\subseteq V^*\) by \(\displaystyle \sigma ^\alpha (L)=\bigcup _{w\in L} \sigma ^\alpha (w)\). Given a finite set of rules M, we define the \(\alpha \)-action of M on the string w and on the language L by \(\displaystyle M^{\alpha }(w)=\bigcup _{\sigma \in M} \sigma ^{\alpha }(w)\ \text{ and } \ M^{\alpha }(L)=\bigcup _{w\in L}M^{\alpha }(w),\) respectively.
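The rule actions defined above, together with the \(\alpha \)-action of a rule set on a language, might be sketched as follows; the tuple encoding of rules and the function names are ours:

```python
def apply_rule(rule, mode, w):
    """rule is ('sub', a, b), ('del', a), or ('ins', a); mode is '*'
    (anywhere), 'l' (left end), or 'r' (right end). Returns the set of
    strings produced by the rule's action on w; for substitution and
    deletion, {w} is returned when the rule is not applicable, as in
    the definitions above."""
    kind, a = rule[0], rule[1]
    if kind == 'sub':
        b = rule[2]
        if mode == '*':
            out = {w[:i] + b + w[i + 1:] for i, c in enumerate(w) if c == a}
        elif mode == 'l':
            out = {b + w[1:]} if w.startswith(a) else set()
        else:
            out = {w[:-1] + b} if w.endswith(a) else set()
        return out or {w}
    if kind == 'del':
        if mode == '*':
            out = {w[:i] + w[i + 1:] for i, c in enumerate(w) if c == a}
        elif mode == 'l':
            out = {w[1:]} if w.startswith(a) else set()
        else:
            out = {w[:-1]} if w.endswith(a) else set()
        return out or {w}
    # insertion (lambda -> a) is always applicable
    if mode == '*':
        return {w[:i] + a + w[i:] for i in range(len(w) + 1)}
    return {a + w} if mode == 'l' else {w + a}

def m_action(M, alpha, L):
    """The alpha-action of a set of rules M on a set of strings L."""
    out = set()
    for w in L:
        for r in M:
            out |= apply_rule(r, alpha, w)
    return out
```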

M is a set of evolutionary rules, namely substitution, deletion, or insertion rules, over the alphabet V. Formally: \((M\subseteq Sub_V)\) or \((M\subseteq Del_V)\) or \((M\subseteq Ins_V)\).

\(\alpha \in \{*,l,r\}\) gives the action mode of the rules of the node.

\(\pi \in \{-,+,0\}\) is the polarization of the node (negatively or positively charged, or neutral, respectively).

V and U are the input and network alphabet, respectively, \(V\subseteq U\).

\(G=(X_G,E_G)\) is an undirected graph without loops with the set of vertices \(X_G\) and the set of edges \(E_G\). G is called the underlying graph of the network.

\(\mathcal {R}:X_G\longrightarrow EP_U\) is a mapping which associates with each node \(x\in X_G\) the polarized evolutionary processor \(\mathcal {R}(x)=(M_x,\alpha _x, \pi _x)\).

\(\varphi \) is a valuation of \(U^*\) in \(\mathbb {Z}\).

\(\underline{In}, \underline{Out} \in X_G\) are the input and the output nodes of \(\varGamma \), respectively.
A short discussion is in order here. Obviously, the evolutionary processor described here is just a mathematical concept similar to some extent to that of an evolutionary algorithm, both being inspired from evolution in nature. The evolutionary operations defined for an evolutionary processor might be interpreted as mutations in an evolutionary algorithm, while the filtering process during the communication step in an NEP might be viewed as a selection process. Recombination, which appears in an evolutionary algorithm, is missing here, but it was asserted that evolutionary and functional relationships between genes can be captured by taking only local mutations into consideration [63].
Let \(\varGamma \) be an NPEP; the computation of \(\varGamma \) on the input string \(w\in V^*\) is a sequence of configurations \(C_0^{(w)},C_1^{(w)},C_2^{(w)},\dots \), where \(C_0^{(w)}\) is the initial configuration of \(\varGamma \) on w, \(C_{2i}^{(w)}\Longrightarrow C_{2i+1}^{(w)}\) and \(C_{2i+1}^{(w)}\vdash C_{2i+2}^{(w)}\), for all \(i\ge 0\). Note that the configurations change by alternating evolutionary (\(\Longrightarrow \)) and communication (\(\vdash \)) steps.
A computation as above halts, if there exists a configuration in which the set of strings existing in the output node \(\underline{Out}\) is nonempty. Given an NPEP \(\varGamma \) and an input string w, we say that \(\varGamma \) accepts w if the computation of \(\varGamma \) on w halts.
Parameters of the nodes of \(\varGamma \)
Node  M  \(\pi \)
\(\underline{In}\)  \(\bigcup _{i=1}^n \{a\rightarrow r_i,a\rightarrow b_i, a\rightarrow g_i\}\)  \(+\)
X  \(\{T_k\rightarrow T'_{k-1}\mid 1\le k\le n(n-1)/2\}\cup \{T_0\rightarrow T''_0\}\)  0
(i, j)  \(\{e_k\rightarrow e'_k\mid e_k=\{i,j\}\}\)  \(+\)
\((i,j,z), z\in \{r,b,g\}\)  \(\{z_i\rightarrow z'_i\}\)  −
\((i,j,z,y), z\in \{r,b,g\}, y\in \{r,b,g\}, z\ne y\)  \(\{y_j\rightarrow y''_j\}\)  0
Z  \(\{T'_k\rightarrow T_k\mid 0\le k\le n(n-1)/2\}\ \cup \{z'_i\rightarrow z_i,g''_j\rightarrow g_j\}\cup \{e'_k\rightarrow \bar{e}_k\mid e_k=\{i,j\}\}\)  \(+\)
\(\underline{Out}\)  \(\emptyset \)  −
We now recall some results from [5] and [6] concerning the computational power of these networks.
Theorem 1
For every 2-tag system \(T=(V,\mu )\), there exists an NPEP \(\varGamma \) of size 15 such that \(L(\varGamma )=\{w\mid T\,halts\,on\,w\}\).
Although 2-tag systems efficiently simulate deterministic Turing machines via cyclic tag systems (see [66]), the previous result does not allow us to say much about NPEPs accepting all recursively enumerable languages in a computationally efficient way. The following statement shows that all recursively enumerable languages can be efficiently (from the time complexity point of view) accepted by NPEPs simulating arbitrary Turing machines.
Theorem 2
Let f be a function from \(\mathbb {N}\) to \(\mathbb {N}\).
For any recursively enumerable language L, accepted in \({\mathcal {O}}(f(n))\) time by a Turing machine with tape alphabet U, there exists an NPEP of size \(10\cdot card(U)\) accepting L in \({\mathcal {O}}(f(n))\) time.
A natural problem arises: is it possible to simulate arbitrary Turing machines by NPEPs of constant size? If this is the case, is such a simulation still time-efficient? The next result gives an affirmative answer to the first question; however, a price in terms of time complexity must be paid for this simulation.
Theorem 3
Let f be a function from \(\mathbb {N}\) to \(\mathbb {N}\).
For any recursively enumerable language L, accepted in \({\mathcal {O}}(f(n))\) time by a Turing machine with tape alphabet U, there exists an NPEP of size 39 accepting L in \({\mathcal {O}}((f(n)\cdot card(U))^2)\) time.
Conversely, NPEP can be simulated by Turing machines.
Theorem 4
Let f be a function from \(\mathbb {N}\) to \(\mathbb {N}\).
For every language L accepted by an NPEP, with input alphabet V and valuation mapping \(\varphi \), in \({\mathcal {O}}(f(n))\) time, there exists a Turing machine that accepts L in \({\mathcal {O}}(f(n)(Kf(n)+Kn))\) time, where \(K=\max \{|\varphi (a)|\mid a\in V\}\).
Let \(\varGamma =(V,U,G,\mathcal {R},\varphi ,\underline{In},\underline{Out})\) be an NPEP. If the valuation mapping \(\varphi \) takes values in the set \(\{-1,0,1\}\) only, the network is said to be with elementary polarization of symbols. This restriction appears to be natural in the sense that each symbol now has a polarization. This does not mean that a valuation mapping associating an arbitrary integer value with every symbol is artificial; it suffices to imagine that each such integer represents the number of subatomic particles (electrons and protons). Rather surprisingly, in [58] as well as [57], it is shown that the computational power of these networks does not diminish; moreover, universality can be reached with networks of constant size. However, the price paid is an increase in time complexity.
More precisely, the following statement holds.
Theorem 5
For every recursively enumerable language L, there exists an NPEP with elementary polarization of symbols of size 35 accepting L.
Along the same lines, it is worth mentioning [20], which presents an even more restricted NPEP model that is computationally complete. The price paid is that the size of the networks is not constant anymore. The main result in [20] is as follows:
Theorem 6
Every recursively enumerable language is accepted by an NPEP satisfying, simultaneously, the following restrictions:

The polarization of all nodes is restricted to\(\{0,+\}\).

Each node has only one rule.

The valuation mapping takes only the two values \(\{0,1\}\).
4 Polarization in networks of multiset processors
As we have seen above, when computing the polarization of a string, the linear structure of the string does not play any role. All anagrams of a given string have the same polarization. Therefore, instead of a string, one can consider its commutative closure represented as a multiset of symbols. This is our approach in this section taken from [13].
A formalism based on multiset rewriting, called constraint multiset grammar, was introduced in [31] and further investigated in [45], with motivations related to a high-level specification of visual languages. Constraint multiset grammars may be considered as a bridge between the usual string rewriting grammars and constraint logic programs. Another formalism based on multiset rewriting, called abstract rewriting multiset system, with motivations related to the population development in artificial cell systems, was introduced in [64, 65]. A Chomsky-like hierarchy of formal grammars rewriting multisets was proposed in [34]. It is worth noting that the multisets are processed in a sequential, nondistributed way, in all the aforementioned models. The situation is different in membrane systems [55], where multisets are processed in parallel.
A finite multiset over a finite set A is a mapping \(\sigma :A\longrightarrow \mathbb {N}\); \(\sigma (a)\) expresses the number of copies of \(a\in A\) in the multiset \(\sigma \). The empty multiset over a set A is denoted by \(\varepsilon _A\); that is, \(\varepsilon _A(a)=0\) for all \(a\in A\). The set of all multisets over A is denoted by \(A^\#\). A subset of \(A^\#\) is called macroset over A. We use the same notation for the empty set and empty macroset, namely \(\emptyset \). In what follows, a multiset containing the elements \(b_1,b_2,\dots ,b_r\), any of them possibly with repetitions, will be denoted by \(\langle b_1,b_2,\dots , b_r\rangle \). Each multiset \(\sigma \) over a set A of cardinality n may also be viewed as an array of size n with nonnegative entries.
The weight of a multiset \(\sigma \) as above is \(\parallel \sigma \parallel =\displaystyle {\sum _{a\in A} \sigma (a)}\), and \(\parallel \sigma \parallel _B=\displaystyle {\sum _{b\in B}\sigma (b)}\) for any subset B of A.

the addition multiset \(\sigma +\tau \) with \((\sigma +\tau )(a)=\sigma (a)+\tau (a)\) for all \(a\in A\);

the difference multiset \(\sigma -\tau \) with \((\sigma -\tau )(a)=\max (\sigma (a)-\tau (a),0)\) for each \(a\in A\);

the scalar multiplication multiset \(c\sigma \), with \(c\in \mathbb {N}\), where \((c\sigma )(a)=c\sigma (a)\) for all \(a\in A\).
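These multiset operations map directly onto Python's `collections.Counter`, which can serve as a sketch of the data handled by multiset processors; the wrapper names are ours:

```python
from collections import Counter

def weight(sigma, B=None):
    """||sigma|| (or ||sigma||_B): the total number of occurrences,
    optionally restricted to a subset B of the alphabet."""
    return sum(n for a, n in sigma.items() if B is None or a in B)

def add(sigma, tau):
    """Addition multiset: (sigma + tau)(a) = sigma(a) + tau(a)."""
    return Counter(sigma) + Counter(tau)

def diff(sigma, tau):
    """Difference multiset: max(sigma(a) - tau(a), 0) for each a;
    Counter subtraction clips at 0, exactly as in the definition."""
    return Counter(sigma) - Counter(tau)

def scale(c, sigma):
    """Scalar multiplication: (c * sigma)(a) = c * sigma(a)."""
    return Counter({a: c * n for a, n in sigma.items()})
```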
We recall from [19] the definition of a multiset Turing machine. Before giving the formal definition, we informally explain what a multiset Turing machine is. Such a machine has a nonstructured memory storing a multiset of symbols, a read–write head which can access the memory and pick up a symbol (that may be the empty symbol) and put back at most one symbol (that may also be the empty one). Reading (picking up) and writing (putting back) a symbol different from the empty symbol are meant as decreasing and increasing the number of copies of that symbol by 1, respectively. The computation of such a machine can be described in the following way: it starts in the initial state with a given multiset in the memory. A computational step consists of reading a symbol, changing the current state, and writing a symbol to the memory. The initial multiset is accepted when a final state is reached with an empty memory; otherwise, it is rejected. Note that multiset Turing machines resemble some variants of register machines defined in [47].
 1.
\(a,b\in U\setminus \{\flat \} \text{ and } \tau (a)\ge {1},\rho (a)=\tau (a)-1, \rho (b)=\tau (b)+1, \rho (c)=\tau (c) \text{ for } \text{ all } c\in U,c\notin \{a,b\}\).
 2.
\(a=\flat , b\ne \flat , \text{ and } \rho (b)=\tau (b)+1, \rho (c)=\tau (c) \text{ for } \text{ all } c\in U,c\ne b\).
 3.
\(a\ne \flat , b=\flat , \text{ and } \tau (a)\ge 1,\rho (a)=\tau (a)-1,\rho (c)=\tau (c)\), for all \(c\in U,c\ne a\).
In a detection step, checking the absence of a symbol, the memory content changes from \(\tau \) to \(\rho \) according to one of the following cases:
 1.
\((s,b)\in {f}(q,\overline{a}) \text{ for } \text{ some } a,b\in {U}\setminus \{\flat \}, \text{ s.t. } \tau (a)=0, \rho (b)=\tau (b)+1, \text{ and } \rho (c)=\tau (c) \text{ for } \text{ all } c\in {U},c\ne b\).
 2.
\((s,\flat )\in f(q,\overline{a}) \text{ for } \text{ some } a\in {U}, \text{ s.t. } \tau (a)=0, \rho (c)=\tau (c) \text{ for } \text{ all } c\in U\).
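As an illustration only (the formal machinery is in [19] and [13]), one step of a multiset Turing machine with detection can be sketched as follows, with the memory as a dictionary of counts and `BLANK` standing for the empty symbol \(\flat \) (the identifier names are ours):

```python
# Hypothetical sketch of one read/write step of a multiset Turing machine
# with detection, following the cases above.
BLANK = '#'  # stands for the empty symbol (name chosen for illustration)

def step(memory, a, b):
    """Pick up symbol a and put back symbol b; either may be BLANK."""
    rho = dict(memory)
    if a != BLANK:                      # reading a real symbol removes a copy
        assert rho.get(a, 0) >= 1, "cannot read an absent symbol"
        rho[a] -= 1
    if b != BLANK:                      # writing a real symbol adds a copy
        rho[b] = rho.get(b, 0) + 1
    return rho

def detect(memory, a):
    """Detection is enabled only when no copy of a is present."""
    return memory.get(a, 0) == 0
```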
The following result is proved in [13].
Proposition 1
A macroset is accepted by an MTMD if and only if it is accepted by a GMTMD.
It is important to note that the time complexity of a GMTMD simulating an MTMD may be significantly higher than that of the MTMD.
We further recall the rules used by multiset processors: increment rules (\(a\oplus \)), decrement rules (\(a\ominus \)), and substitution rules (\(a\rightarrow b\)). For a multiset \(\sigma \) over A, the result \(r(\sigma )\) of applying a rule r is defined as follows.
 If \(r\equiv a\oplus \in {Inc_A}\), then \(r(\sigma )\) is the multiset over A defined by the following:$$\begin{aligned} (r(\sigma ))(b)=\left\{ \begin{array}{ll} \sigma (b), \text{ if } b\ne a,\\ \sigma (b)+1, \text{ if } b=a. \end{array}\right. \end{aligned}$$
 If \(r\equiv a\ominus \in Dec_A\), then \(r(\sigma )\) is the multiset over A defined by the following:$$ \begin{aligned} (r(\sigma ))(b)=\left\{ \begin{array}{ll} \sigma (b), \text{ if } b\ne a \text{ or } (b=a) \& (\sigma (a)=0),\\ \sigma (b)-1, \text{ otherwise. } \end{array}\right. \end{aligned}$$
 If \(r\equiv a\rightarrow b \in Sub_A\), then \(r(\sigma )\) is the multiset over A defined by the following:$$ \begin{aligned} (r(\sigma ))(c)=\left\{ \begin{array}{lll} \sigma (c), \text{ if } c\ne {a},c\ne {b},\\ \sigma (a)-1, \text{ if } (c=a) \& (\sigma (a)>0),\\ \sigma (a), \text{ if } (c=a) \& (\sigma (a)=0),\\ \sigma (b)+1, \text{ if } (c=b) \& (\sigma (a)>0),\\ \sigma (b), \text{ if } (c=b) \& (\sigma (a)=0). \end{array}\right. \end{aligned}$$
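A minimal sketch of these three rule types on dictionary-based multisets (function names are illustrative, not from the cited papers):

```python
# Illustrative implementations of the increment, decrement, and substitution
# rules defined above, on dict-based multisets (symbol -> count).

def increment(sigma, a):
    """r = a(+): always adds one copy of a."""
    rho = dict(sigma)
    rho[a] = rho.get(a, 0) + 1
    return rho

def decrement(sigma, a):
    """r = a(-): removes one copy of a, with no effect when sigma(a) = 0."""
    rho = dict(sigma)
    if rho.get(a, 0) > 0:
        rho[a] -= 1
    return rho

def substitute(sigma, a, b):
    """r = a -> b: trades one copy of a for one copy of b, if a is present."""
    rho = dict(sigma)
    if rho.get(a, 0) > 0:
        rho[a] -= 1
        rho[b] = rho.get(b, 0) + 1
    return rho
```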

M is a set of substitution, increment or decrement rules over A. Formally: \((M\subseteq Sub_A)\) or \((M\subseteq Inc_A)\) or \((M\subseteq Dec_A)\).

\(p\in \{-,+,0\}\) is the polarization of the node (negatively or positively charged, or neutral, respectively).

A is the input set.

B is the working set, \(A\subseteq B\).

G, \(\mathcal {R}:X_G\longrightarrow PMP_B\), \(\varphi \), \(\underline{In}\), and \(\underline{Out}\) have the same roles as in an NPEP.
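The polarization-based communication protocol can be sketched as follows, under the assumption (consistent with the description above) that the polarity of a multiset is the sign of its total value under the valuation \(\varphi \):

```python
# Hedged sketch of polarization-based communication: phi assigns an integer
# to each symbol, the polarity of a multiset is the sign of its total value,
# and a multiset may enter a node exactly when its polarity matches the
# node's polarization (-1, 0, or +1).

def polarity(sigma, phi):
    """Sign of the valuation of a multiset: -1, 0, or +1."""
    value = sum(phi[a] * n for a, n in sigma.items())
    return (value > 0) - (value < 0)

def can_enter(sigma, phi, node_polarization):
    """node_polarization is one of -1, 0, +1."""
    return polarity(sigma, phi) == node_polarization
```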
Theorem 7
A macroset accepted by a GMTMD can be accepted by an NPMP.
The proof of the previous theorem shows that the structure of the underlying graph of the network depends only on the working set of the multiset Turing machine. More precisely, the following holds.
Corollary 1
 1.
Each macroset accepted by a GMTMD with working set U can be accepted by an NPMP of size \(2\cdot card(U)+17\).
 2.
Each macroset accepted by a GMTM with working set U can be accepted by an NPMP of constant size 16.
By modifying the construction in the proof of Theorem 7, one can prove:
Corollary 2
 1.
Each macroset accepted by an MTMD with working set U can be accepted by an NPMP of size \(3\cdot card(U)+16\).
 2.
Each macroset accepted by an MTM with working set U can be accepted by an NPMP of size \(card(U)+15\).
At first sight, the first statement of the last corollary seems superfluous, since the first statement of the previous corollary gives a better bound; recall, however, that the time complexity of a GMTMD simulating an MTMD may be significantly higher than that of the MTMD. Therefore, although the size of the NPMP simulating an MTMD directly is greater than the size of an NPMP simulating a GMTMD which, in its turn, simulates the MTMD, the direct simulation is time complexity preserving.
On the other hand, the next statement was proved.
Theorem 8
Each macroset accepted by an NPMP can be accepted by a GMTMD and, hence, by an MTMD.
Although one can easily find relations between NPMP and classical Turing machines, as well as between these Turing machines and the multiset ones, it remains an open problem whether or not networks of polarized multiset processors can be directly simulated by other types of multiset Turing machines.
5 Polarization in networks of splicing processors

S is a finite set of splicing rules over V.

A is a finite set of auxiliary strings over V.

\(\pi \in \{-,+,0\}\) is the polarization of the node (negatively or positively charged, or neutral, respectively).

U is the network alphabet and \(V\subseteq U\) is the input alphabet.

\(\langle ,\rangle \in U\setminus V\) are two special symbols.

\(\mathcal {N}\) is a mapping which associates with each node \(x\in X_G\) the splicing processor over U, \(\mathcal {N}(x)=(S_x,A_x,\pi _x)\).

G, \(\varphi \), \(\underline{In}\), and \(\underline{Out}\) are defined as in the previous models.
Given an NPSP \(\varGamma \) and an input string w, we say that \(\varGamma \) accepts w if the computation of \(\varGamma \) on w halts. The language accepted by an NPSP \(\varGamma \) consists of all the strings accepted by \(\varGamma \) and is denoted by \(L(\varGamma )\).
We now return to the computational power of these networks. Thus, deterministic Turing machines can be simulated by NPSPs as follows.
Theorem 9
 1.
All recursively enumerable languages are accepted by NPSPs of size 2.
 2.
Every language accepted by a deterministic Turing machine in \(\mathcal {O}(f(n))\) time, for some function \(f\) from \(\mathbb {N}\) to \(\mathbb {N}\), is accepted by an NPSP of size 2 in \(\mathcal {O}(f(n))\) time.
Furthermore, we have:
Corollary 3
The class of polynomially recognizable languages is included in the class of languages accepted by NPSPs of size 2 in polynomial time.
It is clear that every nondeterministic Turing machine can eventually be simulated by an NPSP with two nodes, by composing the above construction with the simulation of a nondeterministic Turing machine by a deterministic one. However, it is known that the latter simulation increases the time complexity of the deterministic machine.
 (i)
\(a^{k-1}_x\triangle \), if \(\varphi (x)>0\), where \(a_x\) are new symbols and \(\varphi (a_x)=\varphi (\triangle )=1\).
 (ii)
\(a^{k-1}_x\nabla \), if \(\varphi (x)<0\), where \(a_x\) are new symbols and \(\varphi (a_x)=\varphi (\nabla )=-1\).
Theorem 10
Let \(f\) be a function from \(\mathbb {N}\) to \(\mathbb {N}\).
Every language accepted by a nondeterministic Turing machine in f(n) time is accepted by an NPSP of size 4 having a valuation in the set \(\{-1,0,1\}\) in time \(\mathcal {O}(f(n))\).
We do not know whether three components are sufficient for simulating nondeterministic Turing machines preserving the time complexity. However, as we shall see in the next statement, networks with three components are able to simulate efficiently another computationally complete model, namely the tag system. Again, such a simulation may follow from Theorem 9 and the simulation of 2-tag systems by a deterministic Turing machine, but the resulting simulation is not time-efficient.
Theorem 11
Let \(f\) be a function from \(\mathbb {N}\) to \(\mathbb {N}\).
For every 2-tag system \(T=(V,\mu )\), there exists an NPSP \(\varGamma \) of size 3, such that if T halts after at most f(k) steps on an input string w with \(|w|=k\), then \(\varGamma \) halts on w in \(\mathcal {O}(f(k))\) time.
Clearly, every NPSP can be simulated by a Turing machine. However, the simulation might be very inefficient. In all constructions presented so far, one of the two strings in each splicing step was always an auxiliary string. If we restrict ourselves to NPSPs with this property, namely that one of the two strings in each splicing step is an auxiliary string, we can construct a Turing machine able to simulate such a network efficiently.
Theorem 12
Every language accepted by an NPSP \(\varGamma \), such that one of the two strings in each splicing step is an auxiliary string, is accepted by a Turing machine M. If \(Time_{\varGamma }(n)\in \mathcal {O}(f(n))\), for some function \(f\) from \(\mathbb {N}\) to \(\mathbb {N}\), then M accepts any input string of length n in \(\mathcal {O}((f(n)+n)(f(n)+n+K))\) time, where K is the maximal absolute value of the valuations of symbols in the working alphabet of \(\varGamma \).
This result raises a few open problems that will be discussed in the final section.
6 A generalization of NPSP
It is worth mentioning that the bioinspired features of the polarized network models considered so far in this note have only been treated from a qualitative perspective. However, there are plenty of situations, including biological phenomena, where quantitative aspects play an important role. Starting from this premise, [28] introduces a new model of NPSP that takes quantitative features into account. The new model, called Network of Splicing Processors With Evaluation Sets (NSPES), refines the protocol based on polarization described above. Clearly, the polarization described above may actually be viewed as a value in one of the following intervals of integers: \((-\infty ,0)\), \(\{0\}\), and \((0,\infty )\) for negative, neutral, and positive polarization, respectively. In the new model, some predefined evaluation sets, defined with respect to some subsets of symbols, are associated with each node, while the valuation mapping returns the exact value of a string. The communication amongst nodes is now regulated by the compatibility between the values of data and the evaluation sets associated with nodes. This new communication protocol might be interpreted as the movement of molecules or particles along a concentration gradient between two areas.
In [28], a time-efficient solution to an NP-hard optimization problem, namely the well-known 0/1 Knapsack problem, based on NSPES is proposed. Rather surprisingly, the solution is not only computationally efficient (the running time is linear in the input size and logarithmic in the numerical values of the instance), but also very succinct: the underlying network contains only four nodes, and its topology may be chosen as a chain, ring, or complete graph. Furthermore, neither the evaluation sets nor the valuation mapping depends on the numerical values of the instance.

M is a finite set of splicing rules over V.

\(\varDelta \) is a finite set of auxiliary strings over V.

\(S \subseteq 2^V\) is the class of evaluation sets.

\(\alpha \) is a set of mutually disjoint intervals of \(\mathbb {Z}\); these are the compatibility values of the node.

V and U are the input and network alphabets, respectively.

\(\rho \) is a mapping that associates with each subset of U an interval, possibly empty, of \(\mathbb {Z}\), such that \(\rho (X)\cap \rho (Y)=\emptyset \) for any \(X\ne Y\) subsets of U.

\(G=(X_G,E_G)\) is an undirected graph with the set of vertices \(X_G\) and the set of edges \(E_G\). G is called the underlying graph of the network.

\(R: X_G\rightarrow SPES_U\) is a mapping that associates with each node \(x \in X_G\) the splicing processor with evaluation sets R(x) over U, with \(R(x) = (M_x, \varDelta _x,S_x, \alpha _x)\).

\(\varphi \) is a valuation of \(U^*\) in \(\mathbb {Z}\), as it was defined in the previous section.

\(\underline{In}, \underline{Out}\in X_G\) are the input and the output nodes of \(\varGamma \), respectively.
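Simplifying the formal definition above, the compatibility check that regulates communication can be sketched as follows: extend \(\varphi \) additively from symbols to strings and test whether the value of a string falls in one of a node's mutually disjoint compatibility intervals (representing intervals as integer pairs is our assumption, made for illustration):

```python
# Simplified sketch of NSPES communication: a string is communicated to a
# node exactly when its value under phi falls in one of the node's mutually
# disjoint compatibility intervals alpha.

def value(w, phi):
    """phi extended additively from symbols to strings."""
    return sum(phi[c] for c in w)

def compatible(w, phi, alpha):
    """alpha: list of (lo, hi) integer intervals, pairwise disjoint."""
    v = value(w, phi)
    return any(lo <= v <= hi for lo, hi in alpha)
```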
The acceptance of an input string is defined as usual. Moreover, if we want to use \(\varGamma \) as a problem solver, then it must halt on every input; in this case, the input string is decided by \(\varGamma \). We discuss below the potential of a given NSPES to solve complex problems. Optimization problems arise in many areas and various domains. An optimization problem requires finding solutions that are optimal with respect to some goal. These problems are motivated by the efficient allocation of limited resources to meet desired objectives. Following [28], an optimization problem \(\mathcal{P}\), to be solved by an NSPES algorithm, is formally defined by the following components:

I is the set of all instances of \(\mathcal{P}\);

for any instance \(x \in I\), s(x) is the set of feasible solutions;

\(m:(I\times s(I))\rightarrow \mathbb {R}\), where m(x, y) denotes the measure of a feasible solution y of x;

g is the goal function, and is either \(\min \) or \(\max \).
 f is the solution of \(\mathcal{P}\); that is, for any instance x, f(x) is the set of optimal solutions y, such that$$\begin{aligned} m(x, y) = g \{ m(x, y') \mid y' \in s(x) \}. \end{aligned}$$
As an example, in [28], a well-known optimization problem, namely the 0/1 Knapsack problem, is solved by an NSPES algorithm. Informally, the 0/1 Knapsack problem can be stated as follows. Given a set of objects, each with a weight and a value, determine a collection of objects, so that the total weight is less than or equal to a given limit and the total value is as large as possible. Formally:
Given a number \(K \in \mathbb {N}\) and two n-tuples of positive integers \(W =(w_1,w_2,\dots ,w_n)\) and \(P=(p_1,p_2,\dots ,p_n)\), determine a subset T of \(\{1,2,\dots ,n\}\) that maximizes \({\sum\nolimits_{i\in T} p_i}\) subject to \( {\sum\nolimits_{i\in T} w_i}\le K\).
Clearly, the 0/1 Knapsack problem is a (combinatorial) optimization problem. We denote by \(KS_{0/1}(n,K,W,P)\) an arbitrary instance of the 0/1 Knapsack problem as above. It is known that the 0/1 Knapsack problem is NP-hard [25], while pseudo-polynomial time solutions based on dynamic programming are known. Parallel algorithms for the 0/1 Knapsack problem are discussed in [3], while an overview of several types of solutions to this problem is presented in [33].
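For comparison with the NSPES solution below, the standard pseudo-polynomial dynamic program mentioned above can be sketched as:

```python
# Classic O(n*K) dynamic program for KS_{0/1}(n, K, W, P):
# dp[k] holds the best total value achievable with total weight at most k.

def knapsack_01(K, W, P):
    dp = [0] * (K + 1)
    for w, p in zip(W, P):
        # Iterate weights in descending order so each object is used at most once.
        for k in range(K, w - 1, -1):
            dp[k] = max(dp[k], dp[k - w] + p)
    return dp[K]
```

The running time is polynomial in n and K, but K may be exponential in the size of the instance's binary encoding, which is why the problem remains NP-hard.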
Theorem 13
Every instance \(KS_{0/1}(n,K,W,P)\) of the 0/1 Knapsack problem, with \(W =(w_1,w_2,\dots ,w_n)\) and \(P=(p_1,p_2,\dots ,p_n)\), can be solved by NSPESs in \(O(n+\log T)\) time, where \(T=\displaystyle {\sum _{i=1}^n p_i}\).
The informal idea for solving this problem is actually a general one for optimization problems: in the first step, the NSPES generates the candidates for the solution, while, in the second step, the best candidates are extracted by iteratively removing the locally non-optimal ones. The strategy for extracting the best candidates is that every couple of computational steps (splicing and communication) selects, from the previous pool of possible solutions, a cluster of solutions for which the value difference of each pair is bounded by half of the previous bound.
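The halving strategy can be illustrated by the following sketch, which mirrors (but does not reproduce) the construction in [28]: each round halves the admissible value range, so after \(O(\log T)\) rounds all surviving candidates share the maximal value:

```python
# Hedged sketch of the extraction phase: candidate values are bounded by T,
# and each round keeps only the candidates in the upper half of the current
# value range, a bisection that terminates after O(log T) rounds.

def extract_best(values, T):
    lo, hi = 0, T                      # current admissible value range
    pool = list(values)
    while lo < hi:
        mid = (lo + hi + 1) // 2       # halve the admissible difference
        upper = [v for v in pool if v >= mid]
        if upper:                      # some candidate survives in the upper half
            pool, lo = upper, mid
        else:
            hi = mid - 1
    return pool                        # all survivors have the maximal value
```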
This solution suggests that the NSPES model is more suitable for addressing complex problems where quantitative conditions play a relevant role.
7 Concluding remarks

Is it possible to simulate 2-tag systems with NPEPs of size smaller than 15? (see Theorem 1)

Is it possible to simulate an arbitrary Turing machine with NPEP of constant size without increasing the simulation time? (see Theorems 2 and 3).

Is it possible to decrease the simulation time of a Turing machine by an NPEP reported in Theorem 3, such that the NPEP is still of constant size?
As we have seen in Corollaries 1 and 2, every generalized multiset Turing machine with detection can be simulated by a network of polarized multiset processors whose size is linear in the cardinality of the working set of the Turing machine. Rather surprisingly, the size of the network becomes constant if the Turing machine is without detection. The simulation of (non-generalized) multiset Turing machines, with or without detection, requires networks whose size is linear in the cardinality of the working set of the Turing machine.
It is worth mentioning that all these simulations are also time complexity preserving; that is, the number of processing and communication steps of the simulating network is a linear function of the number of steps of the simulated Turing machine. Is it possible to extend the result concerning the simulation of multiset Turing machines without detection (NPMP of constant size) to the other types of multiset Turing machines? It also remains an open problem whether or not networks of polarized multiset processors can be simulated by other types of multiset Turing machines.

Find direct simulations between NEPs and NPEPs. Are any of these simulations time-efficient?

Discuss polarized networks where languages are not accepted but generated. Generating NEPs are a well-investigated topic.

Investigate the ability of NPEPs to solve NP-hard problems. Solutions to two NP-complete problems, namely the “3-colorability problem” and the “Common Algorithmic Problem”, are proposed in [2].
What can be said about simulations of NSPs by NPSPs and vice versa? For a better understanding of these models, not only simulations between networks with the same type of processors would be of interest, but also simulations between networks with different types of processors.
An attractive direction of research would be to investigate whether NSPESs are more suitable for modeling different aspects of biological phenomena. On the other hand, in our view, the model deserves a systematic study of the limits of software and hardware implementation to solve hard problems.
It is worth noting that the nondeterministic way of applying the rules in networks of bioinspired processors is captured by assuming that data appear in a sufficiently large number of copies. However, this means a need for a huge memory able to store these data. What possibilities would exist for coping with this drawback of the model? One possibility would be to introduce probabilities with the aim of decreasing the exponential expansion of the number of strings/multisets/pictures stored during the computational steps. Such a decrease is achieved at the cost of certainty: the final result is reached with some error probability, in a similar way as in the case of randomized algorithms. Thus, during the computation, those strings with a very low probability, or those whose estimated chance of arriving in the output node is very small, are ignored. A first attempt in this respect has been made in [7]. Finally, networks of processors based on other bio-operations from molecular computing reported in the literature would also be of interest.
Last but not least, how could polarization be implemented in networks of picture processors? A first attempt in this direction has been made in [56] as well as [57].
Acknowledgements
We thank the reviewers for their comments and suggestions that improved the presentation. This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, project number POC P37257.
References
 1. Alarcón P, Arroyo F, Mitrana V. Networks of polarized evolutionary processors as problem solvers. Advances in knowledge-based and intelligent information and engineering systems, frontiers in artificial intelligence and applications. Amsterdam: IOS; 2012. p. 807–15.
 2. Alarcón P, Arroyo F, Mitrana V. Networks of polarized evolutionary processors. Inf Sci. 2014;265:189–97.
 3. Alexandrov V, Megson G. Parallel algorithms for knapsack type problems. Singapore: World Scientific; 1999.
 4. Alhazov A, Csuhaj-Varjú E, Martín-Vide C, Rogozhin Y. On the size of computationally complete hybrid networks of evolutionary processors. Theor Comput Sci. 2009;410:3188–97.
 5. Arroyo F, Gómez-Canaval S, Mitrana V, Popescu S. Networks of polarized evolutionary processors are computationally complete. Language and automata theory and applications (LATA 2014). LNCS, vol. 8370. Berlin: Springer; 2014. p. 101–12.
 6. Arroyo F, Gómez-Canaval S, Mitrana V, Popescu S. On the computational power of networks of polarized evolutionary processors. Inf Comput. 2017;253:371–80.
 7. Arroyo F, Gómez-Canaval S, Mitrana V, Păun M, Sánchez-Couso JR. Towards probabilistic networks of polarized evolutionary processors. In: The 16th annual meeting of international conference on high performance computing & simulation (HPCS 2018) (accepted). 2018.
 8. Bordihn H, Mitrana V, Păun A, Păun M. Networks of polarized splicing processors. Theory and practice of natural computing, TPNC 2017. LNCS, vol. 10687. Berlin: Springer; 2017. p. 165–77.
 9. Bordihn H, Mitrana V, Negru MC, Păun A, Păun M. Small networks of polarized splicing processors are universal. Nat Comput. 2018;17(4):799–809.
 10. Bottoni P, Labella A, Manea F, Mitrana V, Petre I, Sempere JM. Complexity-preserving simulations among three variants of accepting networks of evolutionary processors. Nat Comput. 2011;10:429–45.
 11. Bottoni P, Labella A, Manea F, Mitrana V, Sempere JM. Filter position in networks of evolutionary processors does not matter: a direct proof. Proc. 15th international meeting on DNA computing and molecular programming. LNCS, vol. 5877. Berlin: Springer; 2009. p. 1–11.
 12. Bottoni P, Labella A, Mitrana V. Accepting networks of evolutionary picture processors. Fundam Inform. 2014;131:337–49.
 13. Bottoni P, Labella A, Mitrana V. Networks of polarized multiset processors. J Comput Syst Sci. 2017;85:93–103.
 14. Campos M, Sempere JM. Accepting networks of genetic processors are computationally complete. Theor Comput Sci. 2012;456:18–29.
 15. Castellanos J, Martín-Vide C, Mitrana V, Sempere JM. Networks of evolutionary processors. Acta Inform. 2003;39:517–29.
 16. Csuhaj-Varjú E, Salomaa A. Networks of parallel language processors. New trends in formal languages. LNCS, vol. 1218. Berlin: Springer; 1997. p. 299–318.
 17. Csuhaj-Varjú E, Kari L, Păun G. Test tube distributed systems based on splicing. Comput AI. 1996;15:211–32.
 18. Csuhaj-Varjú E, Mitrana V. Evolutionary systems: a language generating device inspired by evolving communities of cells. Acta Inform. 2000;36:913–26.
 19. Csuhaj-Varjú E, Martín-Vide C, Mitrana V. Multiset automata. Multiset processing: mathematical, computer science, and molecular computing points of view. LNCS, vol. 2235. Berlin: Springer; 2001. p. 69–83.
 20. Dassow J. On networks of polarized evolutionary processors. In: Gheorghe M, Petre I, Pérez-Jiménez M, Rozenberg G, Salomaa A, editors. Multidisciplinary creativity, homage to Gheorghe Păun on his 65th birthday. Spandugino; 2015. p. 228–38.
 21. del Rosal E, Nuñez R, Ortega A. MapReduce: simplified data processing on large clusters. Int J Comput Commun Control. 2008;3:480–5.
 22. Diaz MA, de Mingo LF, Gómez Blas N. Networks of evolutionary processors: Java implementation of a threaded processor. Int J Inf Theor Appl. 2008;15:37–43.
 23. Diaz MA, de Mingo LF, Gómez Blas N, Castellanos J. Implementation of massive parallel networks of evolutionary processors (MPNEP): 3-colorability problem. Nature inspired cooperative strategies for optimization (NICSO 2007). Studies in computational intelligence, vol. 129. Berlin: Springer; 2008. p. 399–408.
 24. Drăgoi C, Manea F, Mitrana V. Accepting networks of evolutionary processors with filtered connections. J Univ Comput Sci. 2007;13:1598–614.
 25. Garey M, Johnson D. Computers and intractability: a guide to the theory of NP-completeness. New York: W. H. Freeman and Company; 1979.
 26. Gómez-Canaval S, Ortega A, Orgaz P. Distributed simulation of NEPs based on-demand cloud elastic computation. Advances in computational intelligence. LNCS, vol. 9094. Berlin: Springer; 2015. p. 40–54.
 27. Gómez-Canaval S, Ordozgoiti B, Mozo A. NPEPE: massive natural computing engine for optimally solving NP-complete problems in Big Data scenarios. Communications in computer and information science, vol. 539. Berlin: Springer; 2015. p. 207–17.
 28. Gómez-Canaval S, Mitrana V, Sánchez-Couso JR. Networks of splicing processors with evaluation sets as optimization problems solvers. Inf Sci. 2016;369:457–66.
 29. Gray R, Kotz D, Nog S, Rus D, Cybenko G. Mobile agents: the next generation in distributed computing. Proceedings of the 2nd AIZU international symposium on parallel algorithms/architecture synthesis, PAS’97. Washington, DC: IEEE Computer Society; 1997. p. 8–24.
 30. Head T, Păun G, Pixton D. Language theory and molecular genetics: generative mechanisms suggested by DNA recombination. Handb Form Lang. 1996;2:295–360.
 31. Helm R, Marriott K, Odersky M. Building visual language parsers. Proceedings CHI ’91. New York: ACM; 1991. p. 105–12.
 32. Hillis DW. The connection machine. Cambridge: MIT; 1985.
 33. Kellerer H, Pferschy U, Pisinger D. Knapsack problems. Berlin: Springer; 2004.
 34. Kudlek M, Martín-Vide C, Păun G. Toward a formal macroset theory. Multiset processing: mathematical, computer science, and molecular computing points of view. Berlin: Springer; 2001. p. 123–33.
 35. Loos R, Manea F, Mitrana V. On small, reduced, and fast universal accepting networks of splicing processors. Theor Comput Sci. 2009;410:406–16.
 36. Loos R, Manea F, Mitrana V. Small universal accepting networks of evolutionary processors with filtered connections. Proc Desc Complex Form Syst Workshop. EPTCS. 2009;3:173–83.
 37. Loos R, Manea F, Mitrana V. Small universal accepting networks of evolutionary processors. Acta Inform. 2010;47:133–46.
 38. Manea F, Margenstern M, Mitrana V, Pérez-Jiménez MJ. A new characterization of NP, P, and PSPACE with accepting hybrid networks of evolutionary processors. Theory Comput Syst. 2010;46:174–92.
 39. Manea F, Martín-Vide C, Mitrana V. All NP-problems can be solved in polynomial time by accepting networks of splicing processors of constant size. DNA computing. LNCS, vol. 4287. Berlin: Springer; 2006. p. 47–57.
 40. Manea F, Martín-Vide C, Mitrana V. Accepting networks of splicing processors: complexity results. Theor Comput Sci. 2007;371:72–82.
 41. Manea F, Martín-Vide C, Mitrana V. On the size complexity of universal accepting hybrid networks of evolutionary processors. Math Struct Comput Sci. 2007;17:753–71.
 42. Manea F, Martín-Vide C, Mitrana V. Accepting networks of evolutionary word and picture processors: a survey. Scientific applications of language methods. Mathematics, computing, language, and life: frontiers in mathematical linguistics and language theory, vol. 2. Singapore: World Scientific; 2010. p. 523–60.
 43. Margenstern M, Rogozhin Y. Time-varying distributed H systems of degree 1 generate all recursively enumerable languages. Words, semigroups, and transductions. Singapore: World Scientific; 2001. p. 329–40.
 44. Margenstern M, Mitrana V, Pérez-Jiménez MJ. Accepting hybrid networks of evolutionary processors. Proc. 10th international workshop on DNA computing. LNCS, vol. 3384. Berlin: Springer; 2004. p. 235–46.
 45. Marriott K. Constraint multiset grammars. Proc. VL’94. Washington, DC: IEEE Computer Society; 1994. p. 118–25.
 46. Martín-Vide C, Pazos J, Păun G, Rodríguez-Patón A. A new class of symbolic abstract neural nets: tissue P systems. Proc. 8th annual international conference, COCOON 2002. LNCS, vol. 2387. Berlin: Springer; 2002. p. 290–9.
 47. Minsky ML. Size and structure of universal Turing machines using tag systems. Recursive function theory. Symp Pure Math. 1966;429:74–86.
 48. Morrison JP. Flow-based programming: a new approach to application development. 2nd ed. Unionville, Ontario: J.P. Enterprises Ltd; 2010.
 49. Păun G. On the splicing operation. Discrete Appl Math. 1996;70:57–79.
 50. Păun A. On time-varying H systems. Bull EATCS. 1999;67:157–64.
 51. Păun G. Distributed architectures in DNA computing based on splicing: limiting the size of components. Unconventional models of computation. Berlin: Springer; 1998. p. 323–35.
 52. Păun G. Regular extended H systems are computationally universal. J Autom Lang Comb. 1996;1:27–36.
 53. Păun G. DNA computing: distributed splicing systems. Structures in logic and computer science. LNCS, vol. 1261. Berlin: Springer; 1997. p. 351–70.
 54. Păun G. Membrane computing: an introduction. Berlin: Springer; 2002.
 55. Păun G, Rozenberg G, Salomaa A. The Oxford handbook of membrane computing. Oxford: Oxford University Press; 2010.
 56. Popescu S. Networks of polarized picture processors. RomJIST. 2016;18:3–17.
 57. Popescu S. Bioinspired computing models. PhD thesis, University of Bucharest; 2015.
 58. Popescu S. Networks of polarized evolutionary processors with elementary polarization of symbols. Eighth workshop on non-classical models of automata and applications, NCMA 2016. Wien: Österreichische Computer Gesellschaft; 2016. p. 275–85.
 59. Post EL. Formal reductions of the general combinatorial decision problem. Am J Math. 1943;65:197–215.
 60. Rogozhin Y. Small universal Turing machines. Theor Comput Sci. 1996;168:215–40.
 61. Rozenberg G, Bäck T, Kok J, editors. Handbook of natural computing. Berlin: Springer; 2012.
 62. Rozenberg G, Salomaa A, editors. Handbook of formal languages. Berlin: Springer; 1997.
 63. Sankoff D. Gene order comparisons for phylogenetic inference: evolution of the mitochondrial genome. Proc Natl Acad Sci USA. 1992;89:6575–9.
 64. Suzuki Y, Tanaka H. Symbolic chemical system based on abstract rewriting system and its behavior pattern. Artif Life Robot. 1997;1:211–9.
 65. Suzuki Y, Tanaka H. Chemical evolution among artificial protocells. Proc. artificial life VII. Cambridge: MIT; 2000. p. 54–63.
 66. Woods D, Neary T. On the time complexity of 2-tag systems and small universal Turing machines. 47th annual IEEE symposium on foundations of computer science, FOCS ’06. Washington, DC: IEEE Computer Society; 2006. p. 439–48.
 67. Woods D, Neary T. The complexity of small universal Turing machines: a survey. Theor Comput Sci. 2009;410:443–50.