
Journal of Membrane Computing

Volume 1, Issue 2, pp 127–143

Polarization: a new communication protocol in networks of bio-inspired processors

  • Victor Mitrana
Review Paper

Abstract

This work is a survey of the most recent results regarding the computational power of the networks of bio-inspired processors whose communication is based on a new protocol called polarization. In the former models, the communication amongst processors is based on filters defined by some random-context conditions, namely the presence of some symbols and the absence of other symbols. In the new protocol discussed here, a polarization (negative, neutral, and positive) is associated with each node, while the polarization of data navigating through the network is computed in a dynamical way by means of a valuation function. Consequently, the protocol of communication amongst processors is naturally based on the compatibility between their polarization and the polarization of the data. We consider here three types of bio-inspired processors: evolutionary processors, splicing processors, and multiset processors. A quantitative generalization of polarization (evaluation sets) is also presented. We recall results regarding the computational power of these networks considered as accepting devices. Furthermore, a solution to an intractable problem, namely the 0/1 Knapsack problem, based on the networks of splicing processors with evaluation sets considered as problem-solving devices, is also recalled. Finally, we discuss some open problems and possible directions for further research in this area.

Keywords

Evolutionary processor · Splicing processor · Multiset processor · Polarization · Evaluation set · Computational power

1 Introduction

Computational models inspired by different biological phenomena have turned out to be theoretically able to solve intractable problems efficiently. The main computational features of these models are abstracted from the way in which nature evolves. These computational models have appeared in the last two decades and have been intensively investigated from a formal point of view, but some important features that could make them more attractive and useful to practitioners have not yet been explored. Most of the models have a nondeterministic behavior in the computational sense; that is, at each computational step, the next configuration is nondeterministically chosen from the set of all the possible configurations that may follow. This might be the reason why there is a gap between theoretical results and practical aspects of these models. For a survey of several classes of bio-inspired computational models, the reader is referred to [61].

Along these lines, networks of bio-inspired processors form a class of highly parallel and distributed computing models inspired by and abstracted from different biological phenomena. Networks of bio-inspired processors resemble other models of computation with similar or different origins: evolutionary systems inspired by the evolution of cell populations [18], tissue-like P systems [46] in the membrane computing area [54], networks of parallel language processors as a formal language generating device [16], flow-based programming as a well-known programming paradigm [48], distributed computing using mobile programs [29], the Connection Machine, viewed as a network of microprocessors processing 1 bit per unit time in the shape of a hypercube [32], etc.

A network of bio-inspired processors may be informally described as a graph whose vertices are processors running operations on data structured as strings, pictures, and multisets. Two main types of string processors have been considered so far: evolutionary processors and splicing processors. An evolutionary processor [15] is designed to carry out very simple operations on strings. These operations, which could be interpreted as formal operations abstracted from gene mutations in DNA molecules, consist of deleting or inserting a symbol, and substituting one symbol by another. Moreover, each evolutionary processor is specialized in just one of the three mentioned operations. A splicing processor [40] performs an operation called splicing that is inspired by the recombination of DNA molecules under the effect of different types of enzymes [30]. The splicing operation may be informally explained as follows: two DNA sequences (represented by strings) are cut at specific sites (defined by means of splicing rules), and the prefix of one sequence is pasted to the suffix of the other and vice versa. A multiset processor [13] may be viewed as an evolutionary processor working on data where the linear structure does not matter anymore. The corresponding operations performed by multiset processors are: increasing or decreasing the multiplicity of some symbol and replacing an occurrence of some symbol by another one. Picture processors [12] may transform a picture, given in the form of a two-dimensional array of symbols, by changing its frontier as follows: (i) a symbol of a row or column is replaced by another symbol, (ii) a row or column is deleted, (iii) a row or column is masked or unmasked, and (iv) a row or column is circularly shifted.
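To make the informal description of splicing concrete, here is a minimal Python sketch; the function name and the representation of a splicing rule as a quadruple of context strings are illustrative assumptions, not notation from the surveyed papers:

```python
# Illustrative sketch of the splicing operation: x and y are cut at the
# sites u1u2 and u3u4, and the resulting prefixes/suffixes are recombined
# crosswise. The names and the rule representation are assumptions.
def splice(x, y, rule):
    u1, u2, u3, u4 = rule
    i = x.find(u1 + u2)           # cut site in the first string
    j = y.find(u3 + u4)           # cut site in the second string
    if i < 0 or j < 0:
        return None               # the rule is not applicable
    # cut between u1|u2 and u3|u4, then paste the prefix of x to the
    # suffix of y and vice versa
    return (x[:i + len(u1)] + y[j + len(u3):],
            y[:j + len(u3)] + x[i + len(u1):])

splice("aabcc", "ddbee", ("a", "b", "d", "b"))  # ('aabee', 'ddbcc')
```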

The computation in a network of bio-inspired processors consists of a sequence of processing and communication steps, which alternate with each other until a predefined condition is satisfied. In each processing step, all processors simultaneously apply their rules to the data existing in the nodes hosting them. Furthermore, the data in each string or picture processing node are organized in the form of multisets of strings or pictures (each string/picture may appear in an arbitrarily large number of copies), and all copies are processed in parallel, so that all the possible events that can take place do actually take place. In each communication step, two actions are done according to some strategies that will be explained below:
  • (i) all the nodes simultaneously send out the data which they contain after an evolutionary step to all adjacent nodes;

  • (ii) all the nodes simultaneously handle all the arriving data.
The communication strategies considered so far are based on filters defined mainly by two types of conditions: syntactical conditions (random-context conditions, membership in regular languages, and semi-conditional conditions) and semantical conditions, of which polarization is just a very simple case. In the case of filters, there are three variants: (i) each node has an input and an output filter that could be different [44], (ii) the two filters in the previous case coincide [10], and (iii) the filters of two adjacent nodes, as in the previous case, collapse onto the edge between them, such that each edge, which acts as a bidirectional channel between the nodes, has a unique filter [24].

Networks of evolutionary processors (NEP) have been widely investigated from a theoretical point of view: NEPs as language generating devices, accepting devices, and problem solvers [14, 15, 39], characterizations of the complexity classes NP, P, and PSPACE based on accepting NEPs [38], universal NEPs and descriptional complexity results in [40, 41], etc. A very early survey may be found in [42].

As far as the three variants (i)–(iii) described above for NEPs are concerned, it turned out that all of them are equivalent from the computational power point of view. Their equivalence is a direct consequence of the fact that all of them can simulate Turing machines. Thus, the smallest NEP of type (i) that can simulate a Turing machine needs 7 nodes [4], while NEPs of type (iii) with only 16 nodes are able to simulate Turing machines [36]. However, no direct simulation of one model by another had been reported until [11], where direct simulations between the variants (i) and (iii) are presented. It is worth noting that both simulations are time-efficient; that is, each computational step in one model is simulated in a constant number of computational steps in the other. In some sense, this result is surprising, as the possibility of controlling the computation in the variant (iii) seems to be weaker. Time efficiency is particularly useful when one wants to translate a solution from one model into the other, whereas a translation via a Turing machine squares the time complexity of the new solution. The investigation in [10] is along the same lines as [11] and extends it with new time-efficient simulations between the variants (i) and (iii) on one hand and the variant (ii) on the other hand. Furthermore, characterizations of the class NP are proposed in [37] (NEPs of type (i) of size 10) and [36] (NEPs of type (iii) of size 16).

The corresponding model based on the splicing operation, namely the network of splicing processors (NSP), was introduced in [40]. The NSP model resembles some features of the test tube distributed systems based on splicing introduced in [17] and further investigated in [51]. The differences between the models considered in [17] and [40] are precisely described in [40]. In [40], one also mentions the differences between NSP and the time-varying distributed H systems, another generative model based on splicing introduced in [53] and further investigated in [43, 50, 52]. In all these models, the splicing operation is applied to an arbitrary pair of strings. A restricted version of NSPs, such that the pair of strings to which splicing is applied is formed by an auxiliary string and an arbitrary nonauxiliary string, was introduced in [40], where it was proved that this computing model is computationally complete. A characterization of the complexity class NP as the class of languages accepted by restricted NSPs in polynomial time was proposed. Furthermore, a similar characterization was proposed for the class PSPACE as the class of languages accepted by restricted NSPs with at most polynomial length of the strings used in the derivation. In [39], it was proved that NSPs (unrestricted, this time) of constant size accept all recursively enumerable languages and can solve all problems in NP in polynomial time; in addition, a universality result for NSPs was proposed. In both cases, the number of nodes needed was 7. In [35], it is shown that computational completeness can be achieved by NSPs of two nodes. In the same paper, a more involved construction, showing that NSPs of size 3 can simulate the computations of a nondeterministic Turing machine in parallel, is presented.

Several software implementations of NEPs have been reported in the literature (see [21, 22, 23]). Other software simulators for the NEP model using massively parallel platforms for multi-core desktop computers, clusters of computers, and cloud resources have been reported in [26, 27]. These solutions have encountered difficulties when trying to implement filters because of the challenge of orchestrating communication and computation across the available resources. One idea was to consider a thread-safe model of processors, such that each rule and filter is associated with a thread, the threads for filters being more complicated than those for rules. It is worth mentioning that the threads do not necessarily synchronize their steps (evolutionary or communication), but the itineraries of data through the theoretical model do not interfere with each other. In this setting, a processor is the parent of a set of threads, which use all objects from that processor in a mutual exclusion region. When a processor starts to run, it starts the rule threads and filter threads in cascade.

Another difficulty with all of these simulators is that they were developed only for the NEP model, and extending them to new models is not a trivial task. Recently, in [27], it was shown that massively distributed platforms for big data scenarios are potential candidates for the development of ultra-scalable simulators able to execute NEP models and their variants. In particular, a new framework, named NPEPE, was introduced to deploy NEP solutions on these computing platforms by developing an engine that uses Apache Giraph on top of the Hadoop platform. The results of some experiments with NPEPE suggest its suitability for deploying NEP solutions to hard computational problems. Furthermore, this work also suggests that other variants of NEP might be amenable to adaptation to these ultra-scalable platforms so as to minimize the growth of the processed data. This resource reduction in the filtering process could become a clear advantage when hardware/software solutions are deployed on top of massively distributed computational platforms.

Introducing a communication protocol based on a new condition, namely polarization, has several motivations. One reason is to replace the filter-based communication among processors by another protocol that seems easier to implement. More importantly, the syntactic conditions discussed above, which may capture the presence or absence of certain elements in a cell, cannot capture phenomena like the equilibrium potential of an electrochemical reaction in a cell (electrochemical polarization) or the deviation of the concentration of a cell from its equilibrium value (concentration polarization). If the existence of some elements in a cell is important, the concentration of these elements is without any doubt also very important. Inspired by these phenomena, the new condition is called polarization.

Paper [1] considers a new variant of NEP in which the old filtering process is replaced by a process regulated by polarization and discusses the potential of this variant for solving hard computational problems. A slightly more general variant of the networks of polarized evolutionary processors considered in [1], called NPEP, was introduced in [2]. All nodes of an NPEP are “polarized” in the sense that an element of the set \(\{-,0,+\}\) is associated with each node of the network, such that each node can be seen as having a negative, neutral, or positive polarization. To define a communication strategy following the analogy to electrical charge, we need a procedure to compute the polarization of data. As seen above, the polarization of a node is defined in advance and fixed; in its turn, the polarization of a string is computed by means of a valuation mapping. This function associates an integer value with each string depending on the integer values assigned to its symbols. Then, only the sign of the value associated with a string is taken as its polarization. Thus, string migration amongst adjacent nodes, which might be viewed as a simulation of the communication channel between cells, depends both on the string polarization and the node polarization, which, for simplicity, have to be the same.

It is clear that the linear structure of a string does not matter when computing its valuation. Therefore, one may consider data that are organized less strictly; that is, one stores only the number of occurrences of each symbol. This is a multiset of symbols; processors acting on multisets of symbols and networks of polarized multiset processors (NPMP) have been considered in [13].

Networks of polarized splicing processors (NPSP) have been introduced in [8] and further investigated in [9]. A quantitative generalization of these networks, called networks of splicing processors with evaluation sets (NSPES), has been introduced in [28]. In an NSPES, unlike in all the aforementioned cases, the valuation mapping returns the exact value computed for a string. The new model refines the communication protocol based on polarization discussed above, in which each polarization may be viewed as one of the intervals of integers \((-\infty ,0)\), \(\{0\}\), and \((0,\infty )\), to a more complex polarization based on more intervals. The new model tries to resemble the biological concept of the concentration gradient in a solution. Now, the strategy of communication between two nodes follows the compatibility between their accepting values with respect to some predefined evaluation sets and the values of the data computed by a valuation mapping. More precisely, the values of data have to be in the set of accepting values with respect to some evaluation set of symbols. This new communication protocol might be interpreted as the movement of molecules or particles along a concentration gradient between two areas.

The paper is organized as follows. After a section that presents the basic notions and concepts, we recall the results regarding the effect of polarization in networks of evolutionary processors. The same is then done for networks of multiset processors and finally for networks of splicing processors. The last section is devoted to a discussion about open problems and directions for further research.

2 Preliminaries

We assume that the reader is familiar with the basic notions of the formal language theory. In the sequel, we summarize the main concepts and notations used in this work; for all unexplained notions, the reader is referred to [62].

An alphabet is a finite and nonempty set of symbols. The cardinality of a finite set A is written card(A). Any finite sequence of symbols from an alphabet V is called a string over V. The set of all strings over V is denoted by \(V^*\) and the empty string is denoted by \(\lambda \). The length of a string x is denoted by |x|, while alph(x) denotes the minimal alphabet W, such that \(x\in W^*\). Furthermore, \(|x|_a\) denotes the number of occurrences of the symbol a in x.

A homomorphism from the monoid \(V^*\) into the monoid (group) of additive integers \(\mathbb {Z}\) is called valuation of \(V^*\) in \(\mathbb {Z}\). The absolute value of an integer k is denoted by |k|. Although the absolute value of an integer and the length of a string are denoted in the same way, this cannot cause any confusion as the arguments are understood from the context.
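Being a homomorphism into \((\mathbb {Z},+)\), a valuation is fully determined by the integer values assigned to the individual symbols. A minimal sketch (the symbol values are chosen only for illustration):

```python
# A valuation of V* in Z: the value of a string is the sum of the values
# of its symbols (the empty string gets 0, as a homomorphism requires).
def valuation(w, phi):
    return sum(phi[a] for a in w)

phi = {"a": 1, "b": -2, "c": 0}      # assumed symbol values
valuation("aab", phi)                # 1 + 1 - 2 = 0
```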

A nondeterministic Turing machine is a construct \(M=(Q,V,U,\delta ,q_0,B,F)\), where Q is a finite set of states, V is the input alphabet, U is the tape alphabet, \(V \subset U\), \(q_0\) is the initial state, \(B \in U \setminus V\) is the “blank” symbol, \(F \subseteq Q\) is the set of final states, and \(\delta \) is the transition function, \( \delta : (Q\setminus F)\times U \rightarrow 2^{Q\times (U\setminus \{B\})\times \{R,L\}}\). There are many variants of Turing machines; the one considered here can be described intuitively in the following way. It has only one semi-infinite tape (bounded to the left) storing strings over the alphabet \(U\setminus \{B\}\) from the beginning of the tape. The blank symbol B occupies all the tape cells that do not store a symbol from \(U\setminus \{B\}\). The tape head can read a symbol from the tape and write a symbol different from B. Initially, a string over the input alphabet V is stored on the tape. A computational step (or move) of M can be described as follows: the symbol stored in the current cell of the tape is read, the current state may be changed or not, a symbol from \(U\setminus \{B\}\) is written in the current cell, and the tape head is moved one cell either to the left (provided that the scanned cell was not the leftmost one) or to the right. A computation of M is a sequence, possibly infinite, of moves. The input string is accepted iff the machine reaches a final state. The language accepted by M is the set of all strings accepted by M. A Turing machine is deterministic if, for every state and symbol, the machine can make at most one move.

We now recall the definition of 2-tag systems following [60]. It is worth noting that this definition is slightly different from those in [47, 59], but equivalent to them. A 2-tag system is a pair \(T=(V,\mu )\), where V is a finite alphabet of symbols that contains a special halting symbol H, and \(\mu :V\setminus \{H\} \rightarrow V^+\) with \(|\mu (x)|\ge 2\) or \(\mu (x)=H\). Furthermore, \(\mu (x)=H\) for just one \(x\in V\setminus \{H\}\). A string over V is said to be a halting string if it contains H or is shorter than 2. The tag operation \(t_T\) is defined for each nonhalting string w in the following way: if x is the leftmost symbol of w, then \(t_T(w)\) returns the string obtained by deleting the leftmost 2 symbols of w and appending \(\mu (x)\) to the obtained string. A computation of a 2-tag system as above is an arbitrary iteration of the tag operation described above. A computation is not considered to exist unless a halting string is produced in finitely many iterations. It is known, see [60], that 2-tag systems are computationally complete.

The time complexity of the finite computation \(w_0=w,w_1=t_T(w_0)\), \(w_2=t_T(w_1),\ldots ,w_p=t_T(w_{p-1} )=\alpha H\), with \(w_i\in (V\setminus \{H\})^+,|w_i|\ge 2\) for all \(0\le i\le p-1\), and \(\alpha \in (V\setminus \{H\})^*\), of T on \(w\in V^*\) is denoted by \(Time_T(w)\) and equals p, that is, the number of steps required for the 2-tag system to produce a halting string starting from the string w. The time complexity of T is the partial function from \({\mathbb N}\) to \({\mathbb N}\), \(Time_T(n)=max\{Time_T(w)|w\in V^*,|w|=n\}\).
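The tag operation and \(Time_T\) can be sketched directly (an illustration only; the halting symbol is written H and \(\mu \) is represented as a plain dictionary):

```python
# One application of the tag operation t_T: delete the leftmost 2 symbols
# and append mu(x), where x is the leftmost symbol of w.
def tag_step(w, mu):
    return w[2:] + mu[w[0]]

# Iterate t_T until a halting string (one that contains H or is shorter
# than 2) appears; returns the number of steps and the halting string.
def time_T(w, mu, halt="H"):
    steps = 0
    while len(w) >= 2 and halt not in w:
        w = tag_step(w, mu)
        steps += 1
    return steps, w

mu = {"a": "bb", "b": "H"}    # an assumed 2-tag system over V = {a, b, H}
time_T("aa", mu)              # (2, 'H')
```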

It is worth mentioning that most of the small universal Turing machines have been obtained via simulations of 2-tag systems (see [60] and, more recently, [67], together with the references therein).

3 Polarization in networks of evolutionary processors

Let \(a\rightarrow b\) be a rewriting rule with \(a,b\in V\cup \{\lambda \}\) and \(ab\ne \lambda \); if \(\mid ab\mid =2\), then the rule is a substitution rule; if \(a\ne \lambda \) and \(b=\lambda \), then the rule is a deletion rule; if \(a=\lambda \) and \(b\ne \lambda \), then the rule is an insertion rule. We denote by \(Sub_V\), \(Del_V\), and \(Ins_V\), the set of all substitution, deletion, and insertion rules over an alphabet V, respectively. Let \(\sigma \) be a rule as above and w be a string over V; we define the following actions of \(\sigma \) on w, according to the modes of modifying DNA strands by means of enzymes (exonuclease, endonuclease, and polymerase), which is the source of inspiration:
  • If \(\sigma \equiv a\rightarrow b\in Sub_V\), then \(\sigma ^*(w)=\left\{ \begin{array}{ll} \{ubv:\ w=uav\},\\ \{w\}, \text{ otherwise. } \end{array}\right. \)
    $$\begin{aligned} \sigma ^l(w)=\left\{ \begin{array}{ll} \{bu:\ w=au\},\\ \{w\}, \text{ otherwise. } \end{array}\right. \quad \quad \sigma ^r(w)=\left\{ \begin{array}{ll} \{ub:\ w=ua\},\\ \{w\}, \text{ otherwise. } \end{array}\right. \end{aligned}$$
  • If \(\sigma \equiv a\rightarrow \lambda \in Del_V\), then \(\sigma ^*(w)=\left\{ \begin{array}{ll} \{uv:\ w=uav\},\\ \{w\}, \text{ otherwise. } \end{array}\right. \)
    $$\begin{aligned} \begin{array}{llc} \sigma ^l(w)=\left\{ \begin{array}{ll} \{v:\ w=av\},\\ \{w\}, \text{ otherwise } \end{array}\right. &{} \qquad &{} \sigma ^r(w)=\left\{ \begin{array}{ll} \{u:\ w=ua\},\\ \{w\}, \text{ otherwise } \end{array}\right. \end{array} \end{aligned}$$
  • If \(\sigma \equiv \lambda \rightarrow a\in Ins_V\), then \(\sigma ^*(w)=\{uav:\ w=uv\}, \ \sigma ^l(w)=\{aw\}, \ \sigma ^r(w)=\{wa\}.\)

The way of applying a rule to a string is specified by \(\alpha \in \{*,l,r\}\): anywhere in the string (\(\alpha =*\)), at the left end (\(\alpha =l\)), or at the right end (\(\alpha =r\)) of the string, respectively. It is important to note that, if a rule can be applied to more than one position in the string, then its application returns the set of all strings that can be obtained by applying the rule to every position where it may be applied.

As we do not give the proofs here, it is worth mentioning that all the results surveyed here have been obtained with deletion and insertion rules acting in the modes l and r only, and substitution rules acting in the mode \(*\) only. In other words, deletions and insertions applied anywhere in the string and substitutions applied at the ends of the string can be ignored without any effect on the presented results.

Given \(\alpha \in \{*,l,r\}\), we extend the action of an evolutionary rule \(\sigma \) on a string to a set of strings \(L\subseteq V^*\) by \(\displaystyle \sigma ^\alpha (L)=\bigcup _{w\in L} \sigma ^\alpha (w)\). Given a finite set of rules M, we define the \(\alpha \)-action of M on the string w and the language L by \(\displaystyle M^{\alpha }(w)=\bigcup _{\sigma \in M} \sigma ^{\alpha }(w)\ \text{ and } \ M^{\alpha }(L)=\bigcup _{w\in L}M^{\alpha }(w),\) respectively.
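As an illustration, the \(\alpha \)-action of a single rule \(a\rightarrow b\) can be sketched as follows (\(\lambda \) is written as the empty Python string ""). This is a direct transcription of the case analysis above, not code from the surveyed papers:

```python
# alpha-action of an evolutionary rule a -> b on a string w, for
# alpha in {'*', 'l', 'r'}; an inapplicable substitution or deletion
# returns {w}, as in the formal definition.
def action(a, b, alpha, w):
    if a and b:               # substitution rule
        if alpha == "*":
            out = {w[:i] + b + w[i+1:] for i in range(len(w)) if w[i] == a}
        elif alpha == "l":
            out = {b + w[1:]} if w.startswith(a) else set()
        else:                 # alpha == "r"
            out = {w[:-1] + b} if w.endswith(a) else set()
        return out or {w}
    if a:                     # deletion rule (b is the empty string)
        if alpha == "*":
            out = {w[:i] + w[i+1:] for i in range(len(w)) if w[i] == a}
        elif alpha == "l":
            out = {w[1:]} if w.startswith(a) else set()
        else:
            out = {w[:-1]} if w.endswith(a) else set()
        return out or {w}
    # insertion rule (a is the empty string): always applicable
    if alpha == "*":
        return {w[:i] + b + w[i:] for i in range(len(w) + 1)}
    return {b + w} if alpha == "l" else {w + b}
```

For instance, `action("a", "b", "*", "aca")` returns the set {"bca", "acb"}, one string per position where the rule applies.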

A polarized evolutionary processor over V is a triple \((M,\alpha ,\pi )\), where:
  • M is a set of evolutionary rules, namely substitution, deletion, or insertion rules, over the alphabet V. Formally: \((M\subseteq Sub_V)\) or \((M\subseteq Del_V)\) or \((M\subseteq Ins_V)\).

  • \(\alpha \in \{*,l,r\}\) gives the action mode of the rules of the node.

  • \(\pi \in \{-,+,0\}\) is the polarization of the node (negatively or positively charged, or neutral, respectively).

Note that a processor can perform just one evolutionary operation. We denote the set of polarized evolutionary processors over V by \(EP_V\).
A network of polarized evolutionary processors (NPEP for short) is a 7-tuple \(\varGamma =(V,U,G,\mathcal {R},\varphi ,\underline{In},\underline{Out}),\) where:
  • V and U are the input and network alphabet, respectively, \(V\subseteq U\).

  • \(G=(X_G,E_G)\) is an undirected graph without loops with the set of vertices \(X_G\) and the set of edges \(E_G\). G is called the underlying graph of the network.

  • \(\mathcal {R}:X_G\longrightarrow EP_U\) is a mapping which associates with each node \(x\in X_G\) the polarized evolutionary processor \(\mathcal {R}(x)=(M_x,\alpha _x, \pi _x)\).

  • \(\varphi \) is a valuation of \(U^*\) in \(\mathbb {Z}\).

  • \(\underline{In}, \underline{Out} \in X_G\) are the input and the output nodes of \(\varGamma \), respectively.

We say that \(card(X_G)\) is the size of \(\varGamma \). A configuration of an NPEP \(\varGamma \) as above is a function \(C: X_G\longrightarrow 2^{U^*}\) which associates a set of strings with every node of the graph. For every string \(w \in V^*\), the initial configuration of \(\varGamma \) on w is defined by \(C_0^{(w)}(\underline{In})=\{w\}\) and \(C_0^{(w)}(x)=\emptyset \) for all \(x\in X_G\setminus \{\underline{In}\}\).
A configuration is changed either by an evolutionary step or by a communication step. In an evolutionary step, each component C(x) of the configuration C is changed in accordance with the set of evolutionary rules \(M_x\) associated with the node x. Formally, we say that the configuration \(C'\) is obtained in one evolutionary step from the configuration C, written as \(C\Longrightarrow C'\), iff
$$\begin{aligned} C'(x)=M_x^{\alpha _x}(C(x)) \text{ for } \text{ all } x\in X_G. \end{aligned}$$
Before defining a communication step, we want to fix what we mean by the polarization of a string. Every string has a value computed by the valuation mapping; by taking the sign of this value, we get something which looks like a polarity. Therefore, although a string does not have a real polarization, we speak of the polarity of a string for brevity.
In a communication step, each node processor \(x\in X_G\) sends copies of all its strings to all the node processors connected to x, keeping a local copy only of the strings having the same polarity as x, and receives a copy of each string sent by any node processor connected to x, provided that it has the same polarity as x. Note that, for simplicity reasons, a string is considered to migrate to a node with the same polarity and not to an opposite one. Formally, we say that the configuration \(C'\) is obtained in one communication step from configuration C, written as \(C\vdash C'\), iff
$$\begin{aligned} C'(x) & = (C(x)\setminus \{w\in C(x)\mid sign(\varphi (w))\ne \pi _x\})\ \cup \\&\bigcup _{\{x,y\}\in E_G} (\{w\in C(y)\mid sign(\varphi (w))=\pi _x\}), \end{aligned}$$
for all \(x\in X_G.\) Here, sign(m) is the sign function which returns \(+,0,-\), provided that m is a positive integer, is 0, or is a negative integer, respectively. Note that all strings with a different polarity than that of x are discarded. Furthermore, each expelled string from a node x that cannot enter any node connected to x (no such node has the same polarity as the string has) is lost.
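A communication step can be sketched as follows, mirroring the formula above (the node and variable names are assumptions; the valuation is taken as the symbol-value sum):

```python
# Sign of an integer, returned as '+', '0' or '-'.
def sign(m):
    return "+" if m > 0 else "-" if m < 0 else "0"

# One communication step: each node keeps only its strings whose polarity
# matches its own polarization, and collects from every neighbour the
# strings whose polarity matches; all other strings are discarded, and a
# string that matches no neighbour is lost.
def communicate(config, edges, pi, phi):
    val = lambda w: sum(phi[c] for c in w)
    new = {}
    for x in config:
        kept = {w for w in config[x] if sign(val(w)) == pi[x]}
        received = {w for (u, v) in edges
                    for (src, dst) in ((u, v), (v, u)) if dst == x
                    for w in config[src] if sign(val(w)) == pi[x]}
        new[x] = kept | received
    return new
```

For example, with adjacent nodes A (\(\pi =+\)) and B (\(\pi =0\)) and symbol values \(\varphi (a)=1\), \(\varphi (b)=-1\), the string ab (value 0) migrates from A to B, while the string b in B is lost, since no node accepts negative polarity.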

A short discussion is in order here. Obviously, the evolutionary processor described here is just a mathematical concept similar to some extent to that of an evolutionary algorithm, both being inspired from evolution in nature. The evolutionary operations defined for an evolutionary processor might be interpreted as mutations in an evolutionary algorithm, while the filtering process during the communication step in an NEP might be viewed as a selection process. Recombination, which appears in an evolutionary algorithm, is missing here, but it was asserted that evolutionary and functional relationships between genes can be captured by taking only local mutations into consideration [63].

Let \(\varGamma \) be an NPEP; the computation of \(\varGamma \) on the input string \(w\in V^*\) is a sequence of configurations \(C_0^{(w)},C_1^{(w)},C_2^{(w)},\dots \), where \(C_0^{(w)}\) is the initial configuration of \(\varGamma \) on w, \(C_{2i}^{(w)}\Longrightarrow C_{2i+1}^{(w)}\) and \(C_{2i+1}^{(w)}\vdash C_{2i+2}^{(w)}\), for all \(i\ge 0\). Note that the configurations are changed by alternating steps.

A computation as above halts if there exists a configuration in which the set of strings existing in the output node \(\underline{Out}\) is nonempty. Given an NPEP \(\varGamma \) and an input string w, we say that \(\varGamma \) accepts w if the computation of \(\varGamma \) on w halts.

Let \(\varGamma \) be an NPEP with the input alphabet V; the time complexity of the finite computation \(C_0^{(x)}, C_1^{(x)}, C_2^{(x)},\dots , C_m^{(x)}\) of \(\varGamma \) on \(x\in V^*\) is denoted by \(Time_{\varGamma }(x)\) and equals m. The time complexity of \(\varGamma \) is the function from \(\mathbb {N}\) to \(\mathbb {N}\),
$$\begin{aligned} Time_{\varGamma }(n)=\text{ sup }\{Time_{\varGamma }(x)\mid |x|=n\}. \end{aligned}$$
For a better understanding of the model, we give an example, namely the network used in [2] for solving the “3-colorability” problem. This problem asks whether the vertices of a connected undirected graph can be colored using three colors (say, red, blue, and green) in such a way that no two vertices connected by an edge have the same color. Let \(Y=(B,Q)\) be a graph with set of vertices \(B=\{v_1,v_2,\dots ,v_n\}\) and set of edges \(Q=\{e_1,e_2,\dots ,e_m\}\), where each \(e_k\) is given in the form \(e_k=\{v_i,v_j\}\), for some \(1\le i\ne j\le n\). We construct the NPEP \(\varGamma \) that decides whether the graph Y can be colored with three colors, namely red, blue, and green, as follows:
$$\begin{aligned} \varGamma =(V,U,G,\mathcal {R},\varphi ,\underline{In},\underline{Out}), \end{aligned}$$
with
$$\begin{aligned} V & = \{T_0,T_1,T_2,\dots , T_{n(n-1)/2}\}\cup \{e_1,e_2,\dots , e_{n(n-1)/2}\}\cup \{a\},\\ U & = V\cup \{T'_0,T'_1,T'_2,\dots , T'_{n(n-1)/2}\}\cup \{e'_1,e'_2,\dots , e'_{n(n-1)/2}\}\cup \\&\{\bar{e}_1,\bar{e}_2,\dots , \bar{e}_{n(n-1)/2}\}\cup \{r_i,b_i,g_i\mid 1\le i\le n\}\cup \{T''_0,F\}. \end{aligned}$$
We now define the valuation mapping \(\varphi \):
$$\begin{aligned} \begin{array}{lllclclc} \varphi (T_k)=0, 0\le k\le n(n-1)/2, &{} &{} \varphi (T'_j)=1, 0\le j\le n(n-1)/2,\\ \varphi (z_i)=0, 1\le i\le n, z\in \{r,b,g\}, &{} &{} \varphi (e_j)=0, 1\le j\le n(n-1)/2,\\ \varphi (z'_i)=1, 1\le i\le n, z\in \{r,b,g\}, &{} &{} \varphi (e'_j)=-2, 1\le j\le n(n-1)/2,\\ \varphi (z''_i)=3, 1\le i\le n, z\in \{r,b,g\}, &{} &{} \varphi (\bar{e}_j)=0, 1\le j\le n(n-1)/2,\\ \varphi (a)=1, &{} &{} \varphi (F)=1,\\ \varphi (T''_0)=-1. &{} \end{array} \end{aligned}$$
We first give the shape of the underlying graph G in Fig. 1.
Fig. 1 General shape of the underlying graph G
Each box labeled by [ij], \(1\le i<j\le n\), encapsulates a subgraph associated with the edge \(\{i,j\}\) which is described in Fig. 2.
Fig. 2 Subgraph associated with each edge \(\{i,j\}\)
We now define the nodes of our network in Table 1:
Table 1 Parameters of the nodes of \(\varGamma \)

| Node | \(M\) | \(\alpha \) |
| --- | --- | --- |
| \(\underline{In}\) | \(\bigcup _{i=1}^n \{a\rightarrow r_i,a\rightarrow b_i, a\rightarrow g_i\}\) | \(+\) |
| \(X\) | \(\{T_k\rightarrow T'_{k-1}\mid 1\le k\le n(n-1)/2\}\cup \{T_0\rightarrow T''_0\}\) | 0 |
| \((ij)\) | \(\{e_k\rightarrow e'_k\mid e_k=\{i,j\}\}\) | \(+\) |
| \((i,j,z), z\in \{r,b,g\}\) | \(\{z_i\rightarrow z'_i\}\) | |
| \((i,j,z,y), z,y\in \{r,b,g\}, z\ne y\) | \(\{y_j\rightarrow y''_j\}\) | 0 |
| \(Z\) | \(\{T'_k\rightarrow T_k\mid 0\le k\le n(n-1)/2\}\cup \{z'_i\rightarrow z_i,g''_j\rightarrow g_j\}\cup \{e'_k\rightarrow \bar{e}_k\mid e_k=\{i,j\}\}\) | \(+\) |
| \(\underline{Out}\) | \(\emptyset \) | |
The computation starts with a string of the form:
$$\begin{aligned} w=T_me_1e_2\dots e_m a^n, \end{aligned}$$
which encodes a graph with n nodes (the number of a's equals the number of nodes) and m edges (the edges are encoded by the symbols \(e_k\)). In the input node \(\underline{In}\), the input string stays until all occurrences of a are replaced by \(r_i,b_i,g_i\), for some \(1\le i\le n\). Replacing an occurrence of a by \(r_i\) means that the node \(v_i\) is colored in red. Therefore, after replacing all occurrences of a, the input node contains strings encoding all possible colorings of the graph. The rough idea is that the strings encoding all possible colorings arrive at node X, where they are distributed to each subnetwork associated with an edge. Each such subnetwork blocks all strings encoding colorings that are not correct with respect to its edge, while those encoding correct colorings are passed to another such subnetwork. The strings that pass all these subnetworks encode correct colorings and, finally, arrive in the output node.
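The input string above is easy to construct mechanically. A small sketch, with the symbols rendered as space-separated tokens purely for readability (an assumption, not the paper's notation):

```python
# Build the NPEP input string w = T_m e_1 e_2 ... e_m a^n for a graph
# with n vertices and the given list of edges.
def encode_graph(n, edges):
    m = len(edges)
    tokens = [f"T{m}"] + [f"e{k}" for k in range(1, m + 1)] + ["a"] * n
    return " ".join(tokens)

# Triangle: 3 vertices, edges {1,2}, {1,3}, {2,3}
print(encode_graph(3, [(1, 2), (1, 3), (2, 3)]))   # 'T3 e1 e2 e3 a a a'
```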

We now recall some results from [5] and [6] concerning the computational power of these networks.

Theorem 1

For every 2-tag system \(T=(V,\mu )\), there exists an NPEP \(\varGamma \) of size 15, such that \(L(\varGamma )=\{w\mid T\,halts\,on\,w\}\).

Although 2-tag systems efficiently simulate deterministic Turing machines via cyclic tag systems (see [66]), the previous result does not say much about whether NPEPs can accept all recursively enumerable languages in a computationally efficient way. The following statement shows that all recursively enumerable languages can be efficiently (from the time complexity point of view) accepted by NPEPs by simulating arbitrary Turing machines.

Theorem 2

Let f be a function from \(\mathbb {N}\) to \(\mathbb {N}\).

For any recursively enumerable language L, accepted in \({\mathcal {O}}(f(n))\) time by a Turing machine with tape alphabet U, there exists an NPEP of size \(10\cdot card(U)\) accepting L in \({\mathcal {O}}(f(n))\) time.

A natural problem arises: is it possible to simulate arbitrary Turing machines by NPEPs of constant size? If this is the case, is such a simulation still time-efficient? The next result gives an affirmative answer to the first question; however, a price in terms of time complexity must be paid for this simulation.

Theorem 3

Let f be a function from \(\mathbb {N}\) to \(\mathbb {N}\).

For any recursively enumerable language L, accepted in \({\mathcal {O}}(f(n))\) time by a Turing machine with tape alphabet U, there exists an NPEP of size 39 accepting L in \({\mathcal {O}}((f(n)\cdot card(U))^2)\) time.

Conversely, NPEP can be simulated by Turing machines.

Theorem 4

Let f be a function from \(\mathbb {N}\) to \(\mathbb {N}\).

For every language L accepted by an NPEP, with the input alphabet V and the valuation mapping \(\varphi \), in \({\mathcal {O}}(f(n))\) time, there exists a Turing machine that accepts L in \({\mathcal {O}}(f(n)(Kf(n)+Kn))\) time, where \(K=\max \{|\varphi (a)|\mid a\in V\}\).

Let \(\varGamma =(V,U,G,\mathcal {R},\varphi ,\underline{In},\underline{Out})\) be an NPEP. If the valuation mapping \(\varphi \) takes values in the set \(\{-1,0,1\}\) only, the network is said to be with elementary polarization of symbols. This restriction appears natural in the sense that each symbol now has a polarization. This does not mean that a valuation mapping associating an arbitrary integer value with every symbol is artificial; it suffices to imagine that each such integer represents the number of subatomic particles (electrons and protons). Rather surprisingly, it is shown in [58] as well as [57] that the computational power of these networks does not diminish; even more, universality can be reached with networks of constant size. However, the price paid is an increase in time complexity.

More precisely, the following statement holds.

Theorem 5

For every recursively enumerable language L, there exists an NPEP with elementary polarization of symbols of size 35 accepting L.

Along the same lines, it is worth mentioning [20], which presents an even more restricted NPEP that is computationally complete. The price paid is that the size of the networks is not constant anymore. The main result in [20] is as follows:

Theorem 6

For every recursively enumerable language L, there exists an NPEP \(\varGamma \) with elementary polarization of symbols accepting L. Furthermore, the following holds:
  • The polarization of all nodes is restricted to \(\{0,+\}\).

  • Each node has only one rule.

  • The valuation mapping takes only the two values 0 and 1.

4 Polarization in networks of multiset processors

As we have seen above, when computing the polarization of a string, the linear structure of the string does not play any role: all anagrams of a given string have the same polarization. Therefore, instead of a string, one can consider its commutative closure represented as a multiset of symbols. This is the approach taken in this section, following [13].

A formalism based on multiset rewriting, called constraint multiset grammar, was introduced in [31] and further investigated in [45], with motivations related to a high-level specification of visual languages. Constraint multiset grammars may be considered as a bridge between the usual string rewriting grammars and constraint logic programs. Another formalism based on multiset rewriting, called abstract rewriting multiset system, with motivations related to the population development in artificial cell systems, was introduced in  [64, 65]. A Chomsky-like hierarchy of formal grammars rewriting multisets was proposed in [34]. It is worth noting that the multisets are processed in a sequential, nondistributed way, in all the aforementioned models. The situation is different in membrane systems [55], where multisets are processed in parallel.

A finite multiset over a finite set A is a mapping \(\sigma :A\longrightarrow \mathbb {N}\); \(\sigma (a)\) expresses the number of copies of \(a\in A\) in the multiset \(\sigma \). The empty multiset over a set A is denoted by \(\varepsilon _A\); that is, \(\varepsilon _A(a)=0\) for all \(a\in A\). The set of all multisets over A is denoted by \(A^\#\). A subset of \(A^\#\) is called macroset over A. We use the same notation for the empty set and empty macroset, namely \(\emptyset \). In what follows, a multiset containing the elements \(b_1,b_2,\dots ,b_r\), any of them possibly with repetitions, will be denoted by \(\langle b_1,b_2,\dots , b_r\rangle \). Each multiset \(\sigma \) over a set A of cardinality n may also be viewed as an array of size n with nonnegative entries.

The weight of a multiset \(\sigma \) as above is \(\parallel \sigma \parallel =\displaystyle {\sum _{a\in A} \sigma (a)}\), and \(\parallel \sigma \parallel _B=\displaystyle {\sum _{b\in B}\sigma (b)}\) for any subset B of A.

Normally, we use lower case Greek letters for multisets and capital Greek letters for macrosets. For two multisets \(\sigma ,\tau \) over the set A, we define the following:
  • the addition multiset \(\sigma +\tau \) with \((\sigma +\tau )(a)=\sigma (a)+\tau (a)\) for all \(a\in A\);

  • the difference multiset \(\sigma -\tau \) with \((\sigma -\tau )(a)=\max (\sigma (a)-\tau (a),0)\) for each \(a\in A\);

  • scalar multiplication multiset \(c\sigma \), with \(c\in \mathbb {N}\), with \((c\sigma )(a)=c\sigma (a)\) for all \(a\in A\).

A valuation mapping \(\varphi :A\longrightarrow \mathbb {Z}\) is extended to a multiset \(\sigma \) over A by \(\varphi (\sigma )=\displaystyle {\sum _{a\in A}\varphi (a)\cdot \sigma (a)}.\)
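The weight and valuation definitions above translate directly to code. A minimal sketch with multisets modelled as `Counter`s (the valuation `phi` is a toy assumption):

```python
from collections import Counter

# Weight ||sigma|| (optionally restricted to a subset B) and valuation
# phi(sigma) = sum over a of phi(a) * sigma(a) of a finite multiset.
def weight(sigma, B=None):
    return sum(m for a, m in sigma.items() if B is None or a in B)

def valuation(sigma, phi):
    return sum(phi[a] * m for a, m in sigma.items())

sigma = Counter({"a": 2, "b": 1})           # the multiset <a, a, b>
print(weight(sigma))                        # 3
print(weight(sigma, {"a"}))                 # 2
print(valuation(sigma, {"a": 1, "b": -2}))  # 2*1 + 1*(-2) = 0
```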

We recall from [19] the definition of a multiset Turing machine. Before giving the formal definition, we informally explain what a multiset Turing machine is. Such a machine has a nonstructured memory storing a multiset of symbols, a read–write head which can access the memory and pick up a symbol (that may be the empty symbol) and put back at most one symbol (that may also be the empty one). Reading (picking up) and writing (putting back) a symbol different from the empty symbol are meant as decreasing and increasing the number of copies of that symbol by 1, respectively. The computation of such a machine can be described in the following way: it starts in the initial state with a given multiset in the memory. A computational step consists of reading a symbol, changing the current state, and writing a symbol to the memory. The initial multiset is accepted when a final state is reached with an empty memory; otherwise, it is rejected. Note that multiset Turing machines resemble some variants of register machines defined in [47].

Formally, a multiset Turing machine (shortly, MTM) is a construct:
$$\begin{aligned} M=(Q,V,U,f,q_0,\flat ,F), \end{aligned}$$
where Q and \(F\subseteq Q\) are the finite sets of states and final states, respectively, V and U are the input and the bag alphabets, respectively, \(V\subset U\), \(q_0\) is the initial state, \(\flat \) is the empty symbol in \(U\setminus V\), and f is the transition mapping from \(Q\times U\) into the set of all subsets of \(Q\times U\). Furthermore, there are no \(q,s\in Q\), such that \((s,\flat )\in f(q,\flat )\). A configuration is a pair \((q,\tau )\), where q is the current state and \(\tau \) is the content of the bag (a multiset over U). The derivation operation, denoted by \(\models \), is defined as follows:
$$\begin{aligned} (q,\tau )\models (s,\rho ) \text{ iff } (s,b)\in {f}(q,a) \text{ for } \text{ some } a,b\in {U}, \end{aligned}$$
such that one of the following three conditions holds:
  1. \(a,b\in U\setminus \{\flat \} \text{ and } \tau (a)\ge {1},\rho (a)=\tau (a)-1, \rho (b)=\tau (b)+1, \rho (c)=\tau (c) \text{ for } \text{ all } c\in U,c\notin \{a,b\}\).

  2. \(a=\flat , b\ne \flat , \text{ and } \rho (b)=\tau (b)+1, \rho (c)=\tau (c) \text{ for } \text{ all } c\in U,c\ne b\).

  3. \(a\ne \flat , b=\flat , \text{ and } \tau (a)\ge 1,\rho (a)=\tau (a)-1,\rho (c)=\tau (c)\), for all \(c\in U,c\ne a\).
The reflexive and transitive closure of the derivation operation is denoted by \(\models ^*\). The macroset accepted by M is defined by the following:
$$\begin{aligned} \mathcal{M}(M)=\{\tau \in V^\#\mid (q_0,\tau )\models ^* (q,\varepsilon _U) \text{ for } \text{ some } q\in F\}. \end{aligned}$$
The next variant is a slight extension of the multiset Turing machine, also introduced in [19]. This machine, called multiset Turing machine with detection (MTMD, for short) is able to make a move not only when some symbol is present in its bag but also when some symbol does not appear in the bag. More precisely, an MTMD is a construct \(M=(Q,V,U,f,q_0,\flat ,F)\), where all parameters are defined as an MTM, except for the transition mapping, which is defined on \(Q\times (U\cup (\overline{U\setminus \{\flat \}}))\) into the set of subsets of \(Q\times U\). By \(\overline{X}\), we denote the alphabet containing all barred copies of the symbols in X. The definition of the relation \(\models \) becomes:
$$\begin{aligned} (q,\tau )\models (s,\rho ) \end{aligned}$$
if and only if the following two conditions are added to the three conditions mentioned above:
  1. \((s,b)\in {f}(q,\overline{a}) \text{ for } \text{ some } a,b\in {U}\setminus \{\flat \}, \text{ s.t. } \tau (a)=0, \rho (b)=\tau (b)+1, \text{ and } \rho (c)=\tau (c) \text{ for } \text{ all } c\in {U},c\ne b\).

  2. \((s,\flat )\in f(q,\overline{a}) \text{ for } \text{ some } a\in {U}, \text{ s.t. } \tau (a)=0, \rho (c)=\tau (c) \text{ for } \text{ all } c\in U\).
The macroset accepted by an MTMD is defined in the same way as the one accepted by an MTM.
Two other variants of multiset Turing machines, in which the halting condition is relaxed in the sense that the emptiness of the bag is no longer required, are proposed in [13]. A generalized multiset Turing machine (with detection) is an MTM (MTMD) \(M=(Q,V,U,f,q_0,\flat ,F)\) with the single difference that the transition mapping f is defined from \((Q\setminus F)\times U\) into the set of all subsets of \(Q\times U\). The relation \(\models \) is the same as that defined for an MTM (MTMD), while the macroset accepted by M is defined by the following:
$$\begin{aligned} \mathcal{M}(M)=\{\tau \in V^\#\mid (q_0,\tau )\models ^* (q,\beta ) \text{ for } \text{ some } q\in F, \text{ and } \beta \in V^\#\}. \end{aligned}$$
We use GMTM (GMTMD) for generalized multiset Turing machines (with detection). The time complexity of all these multiset Turing machines is defined similarly to that of a classic Turing machine, the size of the input multiset being its weight.

The following result is proved in [13].

Proposition 1

A macroset is accepted by an MTMD if and only if it is accepted by a GMTMD.

It is important to note that the time complexity of a GMTMD simulating an MTMD may be significantly higher than that of the MTMD.

Let A be a finite set and \(a,b\in A\); we say that a rule \(a\oplus \) is an increment rule and a rule \(a\ominus \) is a decrement rule, while \(a\rightarrow b\) is a substitution rule. The sets of all increment, decrement, and substitution rules over a set A are denoted by \(Inc_A\), \(Dec_A\), and \(Sub_A\), respectively. Given a rule r as above and a multiset \(\sigma \) over A, we define the following actions of r on \(\sigma \):
  • If \(r\equiv a\oplus \in {Inc_A}\), then \(r(\sigma )\) is the multiset over A defined by the following:
    $$\begin{aligned} (r(\sigma ))(b)=\left\{ \begin{array}{ll} \sigma (b), \text{ if } b\ne a,\\ \sigma (b)+1, \text{ if } b=a. \end{array}\right. \end{aligned}$$
  • If \(r\equiv a\ominus \in Dec_A\), then \(r(\sigma )\) is the multiset over A defined by the following:
    $$ \begin{aligned} (r(\sigma ))(b)=\left\{ \begin{array}{ll} \sigma (b), \text{ if } b\ne a \text{ or } (b=a) \& (\sigma (a)=0),\\ \sigma (b)-1, \text{ otherwise. } \end{array}\right. \end{aligned}$$
  • If \(r\equiv a\rightarrow b \in Sub_A\), then \(r(\sigma )\) is the multiset over A defined by the following:
    $$ \begin{aligned} (r(\sigma ))(c)=\left\{ \begin{array}{lll} \sigma (c), \text{ if } c\ne {a},c\ne {b},\\ \sigma (a)-1, \text{ if } (c=a) \& (\sigma (a)>0),\\ \sigma (a), \text{ if } (c=a) \& (\sigma (a)=0),\\ \sigma (b)+1, \text{ if } (c=b) \& (\sigma (a)>0),\\ \sigma (b), \text{ if } (c=b) \& (\sigma (a)=0). \end{array}\right. \end{aligned}$$
For every rule r and \(\varPi \subseteq A^\#\), we define the action of r on \(\varPi \) by \(r(\varPi )=\{r(\pi )\mid \pi \in \varPi \}\). Given a finite set of rules M, we define the action of M on the multiset \(\pi \) and the macroset \(\varPi \) by the following, respectively:
$$\begin{aligned} M(\pi )=\{r(\pi )\mid r\in M\}\ \text{ and } \ M(\varPi )=\bigcup _{\pi \in \varPi }M(\pi ). \end{aligned}$$
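The three rule actions have a direct implementation. A sketch with multisets as `Counter`s, mirroring the case analysis above (decrement and substitution leave the multiset unchanged when the symbol is absent):

```python
from collections import Counter

def increment(sigma, a):                 # rule a(+)
    tau = Counter(sigma)
    tau[a] += 1
    return +tau

def decrement(sigma, a):                 # rule a(-): no effect if a is absent
    tau = Counter(sigma)
    if tau[a] > 0:
        tau[a] -= 1
    return +tau                          # unary + drops zero counts

def substitute(sigma, a, b):             # rule a -> b: no effect if a is absent
    tau = Counter(sigma)
    if tau[a] > 0:
        tau[a] -= 1
        tau[b] += 1
    return +tau

sigma = Counter({"a": 1})
print(substitute(sigma, "a", "b"))       # Counter({'b': 1})
print(decrement(sigma, "c"))             # Counter({'a': 1}), no effect
```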
A polarized multiset processor over A is a pair (Mp), where:
  • M is a set of substitution, increment or decrement rules over A. Formally: \((M\subseteq Sub_A)\) or \((M\subseteq Inc_A)\) or \((M\subseteq Dec_A)\).

  • \(p\in \{-,+,0\}\) is the polarization of the node (negatively or positively charged, or neutral, respectively).

As one can see, a processor is “specialized” in one operation, only. We denote the set of polarized multiset processors over A by \(PMP_A\).
A network of polarized multiset processors (NPMP for short) is a 7-tuple \(\varGamma =(A,B,G,\mathcal {R},\varphi ,\underline{In},\underline{Out}),\) where:
  • A is the input set.

  • B is the working set, \(A\subseteq B\).

  • G, \(\mathcal {R}:X_G\longrightarrow PMP_B\), \(\varphi \), \(\underline{In}\), and \(\underline{Out}\) have the same roles as in an NPEP.

The computation in an NPMP is defined in the same way as in an NPEP. The macroset accepted by \(\varGamma \) is
$$\begin{aligned} \mathcal{M}_a(\varGamma )= \{\pi \in A^\#\mid \text{ the } \text{ computation } \text{ of } \varGamma \text{ on } \pi \text{ halts}\}. \end{aligned}$$
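The communication step inherited from NPEP can be sketched as follows: each multiset is sent to every adjacent node whose polarization equals the sign of the multiset's valuation. This is a simplified toy network (nodes, edges, and `phi` are assumptions for illustration), and it simply broadcasts to compatible neighbours without modelling whether a copy also stays in place:

```python
from collections import Counter

def sign(n):
    return (n > 0) - (n < 0)

def valuation(sigma, phi):
    return sum(phi[a] * m for a, m in sigma.items())

def communicate(contents, edges, polar, phi):
    """contents: node -> list of multisets; returns the next distribution."""
    nxt = {x: [] for x in contents}
    for x, multisets in contents.items():
        for sigma in multisets:
            s = sign(valuation(sigma, phi))
            for y in edges[x]:
                if polar[y] == s:        # polarization compatibility
                    nxt[y].append(sigma)
    return nxt

phi = {"a": 1, "b": -1}                  # assumed toy valuation
edges = {"In": ["P", "N"], "P": ["In"], "N": ["In"]}
polar = {"In": 0, "P": 1, "N": -1}
contents = {"In": [Counter({"a": 2, "b": 1}), Counter({"b": 2})],
            "P": [], "N": []}
nxt = communicate(contents, edges, polar, phi)
print(nxt["P"])   # the multiset with valuation +1 moved to the + node
print(nxt["N"])   # the multiset with valuation -2 moved to the - node
```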
By considering a normal form for GMTMD, the next result is proved in [13].

Theorem 7

A macroset accepted by a GMTMD can be accepted by an NPMP.

The proof of the previous theorem shows that the structure of the underlying graph of the network depends only on the working set of the multiset Turing machine. More precisely, the following holds.

Corollary 1

  1. Each macroset accepted by a GMTMD with working set U can be accepted by an NPMP of size \(2\cdot card(U)+17\).

  2. Each macroset accepted by a GMTM with working set U can be accepted by an NPMP of constant size 16.

By modifying the construction in the proof of Theorem 7, one can prove:

Corollary 2

  1. Each macroset accepted by an MTMD with working set U can be accepted by an NPMP of size \(3\cdot card(U)+16\).

  2. Each macroset accepted by an MTM with working set U can be accepted by an NPMP of size \(card(U)+15\).

The first statement of the last corollary may seem pointless, since the first statement of the previous corollary gives a better bound; recall, however, that the time complexity of a GMTMD simulating an MTMD may be significantly higher than that of the MTMD. Therefore, although the size of the NPMP directly simulating an MTMD is greater than that of an NPMP simulating a GMTMD which, in its turn, simulates the MTMD, the direct simulation preserves the time complexity.

On the other hand, the next statement was proved.

Theorem 8

Each macroset accepted by an NPMP can be accepted by a GMTMD and, hence, by an MTMD.

Although one can easily find relations between NPMP and classical Turing machines, as well as between these Turing machines and the multiset ones, it remains an open problem whether or not networks of polarized multiset processors can be directly simulated by other types of multiset Turing machines.

5 Polarization in networks of splicing processors

We start this section with the formal definition of the splicing operation following [49]. A splicing rule over a finite alphabet V is a quadruple of strings of the form \([(u_{1},u_{2});(v_{1},v_{2})]\), such that \(u_{1}\), \(u_{2}\), \(v_{1}\), and \(v_{2}\) are in \(V^{*}\). For a splicing rule \(r = [(u_{1},u_{2});(v_{1},v_{2})]\) and for \(x,y,z\in V^{*}\), we say that r produces z from x and y (denoted by \((x,y)\vdash _{r}z\)) if there exist some \(x_{1}, x_{2}, y_{1}, y_{2}\in V^{*}\), such that \(x=x_{1}u_{1}u_{2}x_{2}\), \(y=y_{1}v_{1}v_{2}y_{2}\), and \(z=x_{1}u_{1}v_{2}y_2\). For a language L over V and a set of splicing rules R, we define the following:
$$\begin{aligned} \sigma _{R}(L)=\{z\in V^{*}\mid \exists u, v \in L, \exists r \in R \text{ such } \text{ that } (u, v) \vdash _r z\}. \end{aligned}$$
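The splicing operation and \(\sigma _{R}(L)\) can be sketched directly from the definition: for a rule \([(u_1,u_2);(v_1,v_2)]\), cut x after a factor \(u_1u_2\), cut y after \(v_1v_2\), and recombine as \(x_1u_1v_2y_2\), at every pair of occurrences of the two sites:

```python
def splice(x, y, rule):
    (u1, u2), (v1, v2) = rule
    site_x, site_y = u1 + u2, v1 + v2
    results = set()
    i = x.find(site_x)
    while i != -1:
        j = y.find(site_y)
        while j != -1:
            # z = x1 u1 v2 y2, with x = x1 u1 u2 x2 and y = y1 v1 v2 y2
            results.add(x[:i] + u1 + v2 + y[j + len(site_y):])
            j = y.find(site_y, j + 1)
        i = x.find(site_x, i + 1)
    return results

def sigma(L, rules):
    """sigma_R(L): everything producible by some rule from some pair in L."""
    out = set()
    for r in rules:
        for x in L:
            for y in L:
                out |= splice(x, y, r)
    return out

rule = (("a", "b"), ("c", "d"))
print(splice("xxab", "ccdy", rule))     # {'xxady'}: x1='xx', u1='a', v2='d', y2='y'
print(sigma({"xxab", "ccdy"}, [rule]))  # {'xxady'}
```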
A short discussion is in order here. As one can see, the splicing rule defined above is a 1-splicing rule in the sense of [30]. However, in the rest of the paper, we do not make any difference between the two strings that a splicing rule is applied to; therefore, we may say that the rules are actually 2-splicing rules.
We define now the polarized splicing processors and networks with nodes of this type following [9]. A polarized splicing processor over V is a triple \((S,A,\pi )\), where
  • S is a finite set of splicing rules over V.

  • A is a finite set of auxiliary strings over V.

  • \(\pi \in \{-,+,0\}\) is the polarization of the node (negatively or positively charged, or neutral, respectively).

A network of polarized splicing processors (NPSP for short) is a construct:
$$\begin{aligned} \varGamma =(V,U,\langle ,\rangle ,G,\mathcal {N},\varphi ,\underline{In},\underline{Out}), \end{aligned}$$
where
  • U is the network alphabet and \(V\subseteq U\) is the input alphabet.

  • \(\langle ,\rangle \in U\setminus V\) are two special symbols.

  • \(\mathcal {N}\) is a mapping which associates with each node \(x\in X_G\) the splicing processor over U, \(\mathcal {N}(x)=(S_x,A_x,\pi _x)\).

  • G, \(\varphi \), \(\underline{In}\), and \(\underline{Out}\) are defined as in the previous models.

By convention, the auxiliary strings do not appear in any configuration. Configuration \(C'\) is obtained in one splicing step from the configuration C, written as \(C\Rightarrow C'\), iff for all \(x\in X_G\):
$$\begin{aligned} C'(x)=\sigma _{S_x}(C(x)\cup A_x). \end{aligned}$$
The communication step is defined exactly as in the previous models.

Given an NPSP \(\varGamma \) and an input string w, we say that \(\varGamma \) accepts w if the computation of \(\varGamma \) on w halts. The language accepted by an NPSP \(\varGamma \) consists of all the strings accepted by \(\varGamma \) and is denoted by \(L(\varGamma )\).

We discuss a simple example of an NPSP that can be used for solving the same "3-colorability" problem. Let \(Y=(B,Q)\) be a graph with set of vertices \(B=\{v_1,v_2,\dots ,v_n\}\) and set of edges \(Q=\{e_1,e_2,\dots ,e_m\}\), where each \(e_k\) is given in the form \(e_k=\{v_i,v_j\}\), for some \(1\le i\ne j\le n\). We construct the NPSP \(\varGamma \) that decides whether the graph Y can be colored with three colors, namely red, blue, and green, as follows. The underlying graph of \(\varGamma \) has only two nodes \(\underline{In}\) and \(\underline{Out}\), such that the polarity of \(\underline{In}\) is 0 and that of \(\underline{Out}\) is \(+\). Let us consider the input string \(a^n\) with a null polarity in the node \(\underline{In}\). Here it will be transformed into \(Z_1Z_2\dots Z_nC\), where \(Z_i\in \{b_j,g_j,r_j\mid 1\le j\le n\}\), for each \(1\le i\le n\). This can be done with the following splicing rules:
$$\begin{aligned}&[(<,a);(<C,\#)], \text{ and }<C\# \text{ is } \text{ an } \text{ axiom },\\&[(\dag _1,Ca);(X,Y)], \dag _1\in \{<\}\cup \{b_j,g_j,r_j\mid 1\le j\le n\} \text{ and } XY \text{ is } \text{ an } \text{ axiom },\\&[(XCa,\dag _2);(X\dag _3C,\#)], \dag _2\in \{a,>\}, \dag _3\in \{b_j,g_j,r_j\mid 1\le j\le n\},\\&\qquad \text{ and } \text{ all } X\dag _3C\# \text{ are } \text{ axioms },\\&[(\dag _1,Y);(X,\dag _3C\dag _2)]. \end{aligned}$$
The rough idea is that C scans the input string from left to right and replaces each occurrence of a by a symbol in \(\{b_j,g_j,r_j\mid 1\le j\le n\}\). The strings obtained in every step have a null polarity. In what follows, we do not formally give the splicing rules anymore, but informally explain the rest of the computation. When C reaches the end of the string, that is, its next symbol is >, it is replaced by the symbol \(E_1\). Now, in the same way as above, \(E_1\) scans the string from right to left and checks whether the colors assigned to the endpoints of the edge \(e_1\) are different. If the string does not encode a correct coloring with respect to this edge, it is transformed into a string with a negative polarization and is lost. When \(E_1\) reaches the left end of the string, it is replaced by \(E_2\) and the process resumes by scanning the string from left to right, and so on, until the scanning process with the symbol \(E_m\) is finished. It is worth mentioning that all the strings obtained during these processes have a null polarity, and hence, they remain in \(\underline{In}\). When \(E_m\) has successfully finished its scan, it is transformed into a symbol that changes the polarity of the string into a positive one, so that the string migrates to the output node, ending the computation. With these explanations, the reader may write the formal rules and the axioms, and define the valuation mapping.

We now return to the computational power of these networks. Thus, deterministic Turing machines can be simulated by NPSP as follows.

Theorem 9

  1. All recursively enumerable languages are accepted by NPSPs of size 2.

  2. Every language accepted by a deterministic Turing machine in \(\mathcal {O}(f(n))\) time, for some function f from \(\mathbb {N}\) to \(\mathbb {N}\), is accepted by an NPSP of size 2 in \(\mathcal {O}(f(n))\) time.

Furthermore, we have:

Corollary 3

The class of polynomially recognizable languages is included in the class of languages accepted by NPSPs of size 2 in polynomial time.

It is clear that every nondeterministic Turing machine can be simulated by an NPSP with two nodes, by composing the above construction with the simulation of a nondeterministic Turing machine by a deterministic one. However, it is known that the latter simulation increases the time complexity of the deterministic machine.

In [9], a more involved construction of an NPSP with four nodes that can simulate, in parallel, the computations of nondeterministic Turing machines and preserves the time complexity is presented. The NPSPs constructed in the proof of Theorem 9 have a valuation mapping with values in \(\{-1,0,1\}\), actually in \(\{0,1\}\), only. It is natural to ask whether the aforementioned result from [9] is still valid if the valuation mapping of the NPSPs is restricted to take values in the set \(\{-1,0,1\}\). The answer is positive, and the modification is rather simple, as explained in [9]. Informally, each occurrence of a symbol x in the splicing rules and auxiliary strings, such that \(\varphi (x)=k\), \(k\notin \{-1,0,1\}\), is replaced by a string which is defined as follows:
  (i) \(a^{k-1}_x\triangle \), if \(\varphi (x)>0\), where \(a_x\) are new symbols and \(\varphi (a_x)=\varphi (\triangle )=1\).

  (ii) \(a^{-k-1}_x\nabla \), if \(\varphi (x)<0\), where \(a_x\) are new symbols and \(\varphi (a_x)=\varphi (\nabla )=-1\).
Therefore, we can state:

Theorem 10

Let f be a function from \(\mathbb {N}\) to \(\mathbb {N}\).

Every language accepted by a nondeterministic Turing machine in f(n) time is accepted by an NPSP of size 4 having a valuation in the set \(\{-1,0,1\}\) in time \(\mathcal {O}(f(n))\).

We do not know whether three components are sufficient for simulating nondeterministic Turing machines while preserving the time complexity. However, as we shall see in the next statement, networks with three components are able to simulate efficiently another computationally complete model, namely the 2-tag system. Again, such a simulation may follow from Theorem 9 and the simulation of 2-tag systems by deterministic Turing machines, but that simulation is not time-efficient.

Theorem 11

Let f be a function from \(\mathbb {N}\) to \(\mathbb {N}\).

For every 2-tag system \(T=(V,\mu )\), there exists an NPSP \(\varGamma \) of size 3, such that if T halts after at most f(k) steps for an input string w with \(|w|=k\), then \(\varGamma \) halts on w in \(\mathcal {O}(f(k))\) time.

Clearly, every NPSP can be simulated by a Turing machine; however, the simulation might be very inefficient. In all constructions presented so far, one of the two strings in each splicing step was always an auxiliary string. For NPSPs with this property, we can construct a Turing machine able to simulate the network in an efficient way.

Theorem 12

Every language accepted by an NPSP \(\varGamma \), such that one of the two strings in each splicing step is an auxiliary string, is accepted by a Turing machine M. If \(Time_{\varGamma }(n)\in \mathcal {O}(f(n))\), for some function f from \(\mathbb {N}\) to \(\mathbb {N}\), then M accepts any input string of length n in \(\mathcal {O}((f(n)+n)(f(n)+n+K))\) time, where K is the maximal absolute value of the valuations of symbols in the working alphabet of \(\varGamma \).

This result raises a few open problems that will be discussed in the final section.

6 A generalization of NPSP

It is worth mentioning that the bio-inspired features of the models considered so far in this note have only been treated from a qualitative perspective. However, there are plenty of situations, including biological phenomena, where quantitative aspects play an important role. Starting from this premise, [28] introduces a new model of NPSP that takes quantitative features into account. The new model, called Network of Splicing Processors With Evaluation Sets (NSPES), refines the protocol based on polarization described above. A polarization as described above may actually be viewed as a value in one of the following intervals of integers: \((-\infty ,0)\), \(\{0\}\), and \((0,\infty )\) for negative, neutral, and positive polarization, respectively. In the new model, some predefined evaluation sets, defined with respect to some subsets of symbols, are associated with each node, and the valuation mapping returns the exact value of a string. The communication amongst nodes is now regulated by the compatibility between the values of data and the evaluation sets associated with nodes. This new communication protocol might be interpreted as the movement of molecules or particles along a concentration gradient between two areas.

In [28], a time-efficient solution to an NP-hard optimization problem, namely the well-known 0 / 1 Knapsack problem, based on NSPES is proposed. Rather surprisingly, the solution is not only computationally efficient (directly proportional to the input size and logarithmically proportional to the numerical values of the instance), but also very succinct (the underlying network contains four nodes only; its topology may be chosen as a chain, ring, or complete graph). Furthermore, neither the evaluation sets nor the valuation mapping depends on the numerical values of the instance.

Let U be a finite alphabet and w be a string over U, such that \(w=a_1a_2\cdots a_k\), \(a_i\in U\), \(i\in [k]\). We define the projection function \(\pi : U^* \times 2^U \rightarrow U^*\) by
$$\begin{aligned} \pi (w,S)=a'_1 a'_2\cdots a'_k \text{ where } a'_i = \left\{ \begin{array}{ll} a_i &{} \ \text{ if } a_i \in S,\\ \lambda &{} \ \text{ otherwise }. \end{array} \right. \end{aligned}$$
This function is particularly useful when we are interested in calculating the “concentration” of some symbols in a string, more specifically those included in S, ignoring the other symbols.
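The projection and the "concentration" it enables are straightforward to sketch (the valuation `phi` below is an assumed toy mapping, not taken from [28]):

```python
# pi(w, S): erase every symbol of w that is not in S, then value the rest.
def project(word, S):
    return "".join(a for a in word if a in S)

def valuation(word, phi):
    return sum(phi[a] for a in word)

phi = {"a": 1, "b": 5, "c": -2}
w = "abac"
print(project(w, {"a", "c"}))                   # 'aac'
print(valuation(project(w, {"a", "c"}), phi))   # 1 + 1 - 2 = 0
```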
A splicing processor with evaluation sets over an alphabet V is a 4-tuple \((M,\varDelta , S, \alpha )\), where:
  • M is a finite set of splicing rules over V.

  • \(\varDelta \) is a finite set of auxiliary strings over V.

  • \(S \subseteq 2^V\) is the class of evaluation sets.

  • \(\alpha \) is a set of mutually disjoint intervals of \(\mathbb {Z}\); these are the compatibility values of the node.

It is worth mentioning that \(\alpha \) may be viewed as a generalization of the polarity of a node defined in [2], where the polarity of every node is one of \(-,0,+\), which can be viewed as the intervals \((-\infty ,0)\), \(\{0\}\), and \((0,\infty )\), respectively. We denote the set of splicing processors with evaluation sets over V by \(SPES_V\).
A network of splicing processors with evaluation sets (NSPES) is an 8-tuple:
$$\begin{aligned} \varGamma =(V,U,\rho ,G,R,\varphi ,\underline{In},\underline{Out}), \end{aligned}$$
where:
  • V and U are the input and network alphabets, respectively.

  • \(\rho \) is a mapping that associates with each subset of U an interval, possibly empty, of \(\mathbb {Z}\), such that \(\rho (X)\cap \rho (Y)=\emptyset \) for any distinct subsets X and Y of U.

  • \(G=(X_G,E_G)\) is an undirected graph with the set of vertices \(X_G\) and the set of edges \(E_G\). G is called the underlying graph of the network.

  • \(R: X_G\rightarrow SPES_U\) is a mapping that associates with each node \(x \in X_G\) the splicing processor with evaluation sets R(x) over U, with \(R(x) = (M_x, \varDelta _x,S_x, \alpha _x)\).

  • \(\varphi \) is a valuation \(\varphi :U^* \rightarrow \mathbb {Z}\), as defined in the previous section.

  • \(\underline{In}, \underline{Out}\in X_G\) are the input and the output nodes of \(\varGamma \), respectively.

While the splicing step is the same as that in an NPSP, the communication step is redefined as follows: \(C\vdash C'\) iff
$$\begin{aligned} C'(x)= (C(x) \backslash H_x)\cup \bigcup _{\{x,y\} \in E_G} T_y, \end{aligned}$$
(1)
where
$$\begin{aligned} H_x & = \big\{ w \in C(x) \mid \forall X\in S_x((\varphi (\pi (w,X))\notin \bigcup _{Y\in \alpha _x} Y) \vee \\&((\exists Y\in \alpha _x \text{ s.t. } \varphi (\pi (w,X))\in Y) \rightarrow (Y\ne \rho (X))))\big\},\\ T_y & = \{ w \in C(y) \mid \exists X\in S_y \text{ s.t. } \varphi (\pi (w,X))\in \rho (X) \text{ and } \\&\rho (X) \in \alpha _x \}. \end{aligned}$$
Some explanations are necessary. If some string from a node x has its valuation with respect to some \(X\in S_x\) in \(\rho (X)\) and \(\rho (X)\in \alpha _x\), then one copy of the string remains in node x (we say that the string is compatible with node x); otherwise, it is discarded. If \(\rho (X)\in \alpha _y\) for some node y, then one copy of the string enters node y (the string is compatible with node y), provided that x and y are adjacent.
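The compatibility test underlying \(H_x\) and \(T_y\) can be sketched as follows. Intervals of \(\mathbb {Z}\) are encoded as inclusive (low, high) pairs with None standing for an infinite bound; the valuation phi and the mapping rho below are toy stand-ins chosen by us for illustration, not taken from [28]:

```python
def in_interval(v, iv):
    """Test whether integer v lies in the interval iv = (low, high);
    None means the corresponding bound is infinite."""
    low, high = iv
    return (low is None or v >= low) and (high is None or v <= high)

def projection(w, S):
    """pi(w, S): erase from w every symbol outside S."""
    return "".join(a for a in w if a in S)

def compatible(w, eval_sets, compat, rho, phi):
    """A string w is compatible with a node iff, for some evaluation
    set X, phi(pi(w, X)) falls in rho(X) and rho(X) is among the
    node's compatibility intervals (the condition defining T_y;
    strings failing it at their own node form H_x and are lost)."""
    return any(
        in_interval(phi(projection(w, X)), rho[frozenset(X)])
        and rho[frozenset(X)] in compat
        for X in eval_sets
    )

# Toy valuation: +1 per 'a', -1 per 'b', 0 for any other symbol.
phi = lambda w: w.count("a") - w.count("b")
rho = {frozenset({"a", "b"}): (1, None)}   # rho({a,b}) = positive integers
print(compatible("aab", [{"a", "b"}], [(1, None)], rho, phi))  # -> True
```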

The acceptance of an input string is defined as usual. Moreover, if we want to use \(\varGamma \) as a problem solver, then it must halt on every input; in this case, the input string is decided by \(\varGamma \). We discuss below the potential of a given NSPES to solve complex problems. Optimization problems arise in many areas and various domains; they require finding solutions that are optimal with respect to some goal, and are motivated by the efficient allocation of limited resources to meet desired objectives. We now formally define what an NSPES algorithm for optimization problems is, following [28].

Formally, an optimization problem \(\mathcal{P}\) is a quintuple \((I,s,m,g,f)\), where
  • I is the set of all instances of \(\mathcal{P}\);

  • for any instance \(x \in I\), s(x) is the set of feasible solutions;

  • \(m:(I\times s(I))\rightarrow \mathbb {R}\), where \(m(x,y)\) denotes the measure of a feasible solution y of x;

  • g is the goal function, and is either \(\min \) or \(\max \).

  • f is the solution mapping of \(\mathcal{P}\); that is, for any instance x, f(x) is the set of optimal solutions y, such that
    $$\begin{aligned} m(x, y) = g \{ m(x, y') \mid y' \in s(x) \}. \end{aligned}$$
An optimization problem \(\mathcal{P}\) as above is said to be solved in time O(t(n)) by NSPESs if there exists a family \(\mathcal{G}\) of NSPESs, which can be constructed by an effective procedure, such that, for each instance x of size O(n) of the problem \(\mathcal{P}\), encoded by a string w, one can effectively construct an NSPES \(\varGamma (x) \in \mathcal{G}\), such that the computation of \(\varGamma (x)\) on w halts in time O(t(n)) with the output node containing the strings encoding all optimal solutions in f(x). This effective construction is called an O(t(n)) time NSPES algorithm for the considered problem.

As an example, in [28], a well-known optimization problem, namely the 0 / 1 Knapsack problem, is solved by an NSPES algorithm. Informally, the 0 / 1 Knapsack problem can be stated as follows. Given a set of objects, each with a weight and a value, determine a collection of objects, so that the total weight is less than or equal to a given limit and the total value is as large as possible. Formally,

Given a number \(K \in \mathbb {N}\) and two n-tuples of positive integers \(W =(w_1,w_2,\dots ,w_n)\) and \(P=(p_1,p_2,\dots ,p_n)\), determine a subset T of \(\{1,2,\dots ,n\}\) that maximizes \({\sum\nolimits_{i\in T} p_i}\) subject to \( {\sum\nolimits_{i\in T} w_i}\le K\).

Clearly, the 0 / 1 Knapsack problem is a (combinatorial) optimization problem. We denote by \(KS_{0/1}(n,K,W,P)\) an arbitrary instance of the 0 / 1 Knapsack problem as above. It is known that the 0 / 1 Knapsack problem is NP-hard [25], while there are known pseudo-polynomial time solutions based on dynamic programming. Parallel algorithms for the 0 / 1 Knapsack problem are discussed in [3], while an overview of several types of solutions to this problem is presented in [33].
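The pseudo-polynomial dynamic-programming solution mentioned above can be sketched in a few lines; it runs in O(nK) time, which is exponential in the bit length of K:

```python
def knapsack_01(K, W, P):
    """Classical O(n*K) dynamic program for the 0/1 Knapsack problem:
    best[c] holds the maximum total value achievable with total
    weight at most c."""
    best = [0] * (K + 1)
    for w, p in zip(W, P):
        # Traverse capacities downwards so each object is used at most once.
        for c in range(K, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best[K]

# K = 10, weights (5, 4, 6, 3), values (10, 40, 30, 50):
# the optimum picks objects 2 and 4 (weight 7, value 90).
print(knapsack_01(10, (5, 4, 6, 3), (10, 40, 30, 50)))  # -> 90
```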

Theorem 13

Every instance\(KS_{0/1}(n,K,W,P)\)of the 0 / 1 Knapsack problem, with\(W =(w_1,w_2,\dots ,w_n)\)and\(P=(p_1,p_2,\dots ,p_n),\)can be solved by NSPESs in\(O(n+\log T)\)time, where\(T=\displaystyle {\sum _{i=1}^n p_i}\).

The informal idea for solving this problem is actually a general one for optimization problems: in the first phase, the NSPES generates the candidate solutions, while, in the second phase, the best candidates are extracted by iteratively removing the locally nonoptimal ones. The strategy for extracting the best candidates is that every couple of computational steps (splicing and communication) selects, from the previous pool of possible solutions, a cluster of solutions in which the value difference of any pair is bounded by half of the previous bound.
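The halving idea behind the extraction phase can be illustrated abstractly by a binary search over the value range: each round keeps only the candidates in the upper half of the current range, so the optimum is isolated after about \(\log _2 T\) rounds. This is a toy sketch of the idea, not the network construction of [28]:

```python
def extract_best(values):
    """Iteratively keep the candidates whose value lies in the upper
    half of the current range; after about log2(max value) rounds
    only the optimal candidates survive."""
    lo, hi = 0, max(values)
    rounds = 0
    while lo < hi:
        mid = (lo + hi + 1) // 2
        upper = [v for v in values if v >= mid]
        if upper:
            values, lo = upper, mid
        else:
            hi = mid - 1
        rounds += 1
    return set(values), rounds

best, rounds = extract_best([3, 7, 7, 2, 5])
print(best)  # -> {7}
```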

This solution suggests that the NSPES model is more suitable to address complex problems where quantitative conditions have a relevant role.

7 Concluding remarks

We start this final section with several problems that are attractive in our opinion. Some of them regard the size complexity of the networks used in some constructions; for instance:
  • Is it possible to simulate 2-tag systems with NPEP of a size smaller than 15? (see Theorem 1)

  • Is it possible to simulate an arbitrary Turing machine with NPEP of constant size without increasing the simulation time? (see Theorems 2 and 3).

  • Is it possible to decrease the simulation time of a Turing machine by an NPEP reported in Theorem 3, such that the NPEP is still of constant size?

Along the same lines, the working time of the NPEP constructed in Theorem 5 is rather high. Is it possible to simulate an arbitrary Turing machine with an NPEP with elementary polarization of symbols within a time similar to that reported in Theorem 3? If we look at Theorem 4, the simulation time is directly proportional to a numerical value given as input. This does not matter if the input NPEP is with elementary polarization of symbols. Is it possible to design a simulation working in a time that does not depend at all on that value or, if it does, whose time grows logarithmically with respect to that value?

As we have seen in Corollaries 1 and 2, every generalized multiset Turing machine with detection can be simulated by a network of polarized multiset processors whose size is linearly proportional to the working set of the Turing machine. Rather surprisingly, the size of the network becomes constant if the Turing machine is without detection. The simulation of multiset Turing machines (with or without detection) requires networks with a size that is linearly proportional to the working set of the Turing machine.

It is worth mentioning that all these simulations are also time complexity preserving, that is, the number of processing and communication steps of the simulating network is a linear function of the number of steps of the simulated Turing machine. Is it possible to extend the result concerning the simulation of multiset Turing machines without detection (NPMP of constant size) to the other types of multiset Turing machines? It also remains an open problem whether or not networks of polarized multiset processors can be simulated by other types of multiset Turing machines.

Several other directions of research concerning the networks of evolutionary processors may be of interest. We mention here some that we find appealing:
  • Find direct simulations between NEPs and NPEPs. Is any of these simulations time-efficient?

  • Discuss polarized networks where languages are not accepted but generated. Generating NEPs is a well-investigated topic.

  • Investigate the ability of NPEP to solve NP-hard problems. Solutions to two NP-complete problems, namely the “3-colorability problem” and the “Common Algorithmic Problem” are proposed in [2].

As far as the picture of NPSP is concerned, Theorem 8 completely solves the case of simulating deterministic Turing machines with NPSPs. However, one open problem concerns the possibility of simulating a nondeterministic Turing machine with an NPSP of size 3 while preserving the time complexity. If this is not possible, is there a simulation more efficient than the one via a deterministic Turing machine? Similarly to Theorem 4, is it possible to remove the parameter K in Theorem 12, or to replace it by \(\log K\)? It is clear that a Turing machine can simulate an arbitrary NPSP; however, it appears that such a machine should store on its tape all the strings obtained during the computation of the network. This means that the machine requires huge space and time resources, making the simulation extremely inefficient. Is there another option?

What can be said about simulating NSPs with NPSPs and vice versa? For a better understanding of these models, not only simulations between networks with the same type of processors would be of interest, but also simulations between networks with different types of processors.

An attractive direction of research would be to investigate whether NSPESs are more suitable for modeling different aspects of biological phenomena. On the other hand, in our view, the model deserves a systematic study of the limits of software and hardware implementation to solve hard problems.

It is worth noting that the nondeterministic way of applying the rules in networks of bio-inspired processors is captured by assuming that data appear in a sufficiently large number of copies. However, this means a need for a huge memory able to store these data. What possibilities would exist for coping with this drawback of the model? One possibility would be to introduce probabilities with the aim of decreasing the exponential expansion of the number of strings/multisets/pictures stored during the computational steps. Such a decrease comes with a loss of certainty: the final result is reached with some error probability, in a similar way to randomized algorithms. Thus, during the computation, those strings with a very low probability, or those whose estimated chance of arriving in the output node is very small, would be ignored. A first attempt in this respect has been made in [7]. Furthermore, networks of processors based on other bio-operations reported in the molecular computing literature would also be of interest.

Last but not least, how could polarization be implemented in networks of picture processors? First attempts in this direction have been made in [56] and [57].

Acknowledgements

We thank the reviewers for their comments and suggestions that improved the presentation. This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, Project number POC P-37-257.

References

  1. Alarcón P, Arroyo F, Mitrana V. Networks of polarized evolutionary processors as problem solvers. Advances in knowledge-based and intelligent information and engineering systems, frontiers in artificial intelligence and applications. Amsterdam: IOS; 2012. p. 807–15.
  2. Alarcón P, Arroyo F, Mitrana V. Networks of polarized evolutionary processors. Inf Sci. 2014;265:189–97.
  3. Alexandrov V, Megson G. Parallel algorithms for knapsack type problems. Singapore: World Scientific; 1999.
  4. Alhazov A, Csuhaj-Varjú E, Martín-Vide C, Rogozhin Y. On the size of computationally complete hybrid networks of evolutionary processors. Theor Comput Sci. 2009;410:3188–97.
  5. Arroyo F, Gómez-Canaval S, Mitrana V, Popescu S. Networks of polarized evolutionary processors are computationally complete. Language and automata theory and applications (LATA 2014). LNCS, vol. 8370. Berlin: Springer; 2014. p. 101–12.
  6. Arroyo F, Gómez-Canaval S, Mitrana V, Popescu S. On the computational power of networks of polarized evolutionary processors. Inf Comput. 2017;253:371–80.
  7. Arroyo F, Gómez-Canaval S, Mitrana V, Păun M, Sánchez-Couso JR. Towards probabilistic networks of polarized evolutionary processors. In: The 16th annual meeting of the international conference on high performance computing & simulation (HPCS 2018) (accepted). 2018.
  8. Bordihn H, Mitrana V, Păun A, Păun M. Networks of polarized splicing processors. Theory and practice of natural computing, TPNC 2017. LNCS 10687. Berlin: Springer; 2017. p. 165–77.
  9. Bordihn H, Mitrana V, Negru MC, Păun A, Păun M. Small networks of polarized splicing processors are universal. Nat Comput. 2018;17(4):799–809.
  10. Bottoni P, Labella A, Manea F, Mitrana V, Petre I, Sempere JM. Complexity-preserving simulations among three variants of accepting networks of evolutionary processors. Nat Comput. 2011;10:429–45.
  11. Bottoni P, Labella A, Manea F, Mitrana V, Sempere JM. Filter position in networks of evolutionary processors does not matter: a direct proof. Proc 15th international meeting on DNA computing and molecular programming. LNCS, vol. 5877. Berlin: Springer; 2009. p. 1–11.
  12. Bottoni P, Labella A, Mitrana V. Accepting networks of evolutionary picture processors. Fundam Inform. 2014;131:337–49.
  13. Bottoni P, Labella A, Mitrana V. Networks of polarized multiset processors. J Comput Syst Sci. 2017;85:93–103.
  14. Campos M, Sempere JM. Accepting networks of genetic processors are computationally complete. Theor Comput Sci. 2012;456:18–29.
  15. Castellanos J, Martín-Vide C, Mitrana V, Sempere JM. Networks of evolutionary processors. Acta Inform. 2003;39:517–29.
  16. Csuhaj-Varjú E, Salomaa A. Networks of parallel language processors. New trends in formal languages. LNCS 1218. Berlin: Springer; 1997. p. 299–318.
  17. Csuhaj-Varjú E, Kari L, Păun G. Test tube distributed systems based on splicing. Comput AI. 1996;15:211–32.
  18. Csuhaj-Varjú E, Mitrana V. Evolutionary systems: a language generating device inspired by evolving communities of cells. Acta Inform. 2000;36:913–26.
  19. Csuhaj-Varjú E, Martín-Vide C, Mitrana V. Multiset automata. Multiset processing: mathematical, computer science, and molecular computing points of view. LNCS 2235. Berlin: Springer; 2001. p. 69–83.
  20. Dassow J. On networks of polarized evolutionary processors. In: Gheorghe M, Petre I, Perez-Jimenez M, Rozenberg G, Salomaa A, editors. Multidisciplinary creativity, homage to Gheorghe Păun on his 65th birthday. Spandugino; 2015. p. 228–38.
  21. del Rosal E, Nuñez R, Ortega A. MapReduce: simplified data processing on large clusters. Int J Comput Commun Control. 2008;3:480–5.
  22. Diaz MA, de Mingo LF, Gómez Blas N. Networks of evolutionary processors: Java implementation of a threaded processor. Int J Inf Theor Appl. 2008;15:37–43.
  23. Diaz MA, de Mingo LF, Gómez Blas N, Castellanos J. Implementation of massive parallel networks of evolutionary processors (MPNEP): 3-colorability problem. Nature inspired cooperative strategies for optimization (NICSO 2007). Studies in computational intelligence, vol. 129. Berlin: Springer; 2008. p. 399–408.
  24. Drăgoi C, Manea F, Mitrana V. Accepting networks of evolutionary processors with filtered connections. J Univ Comput Sci. 2007;13:1598–614.
  25. Garey M, Johnson D. Computers and intractability: a guide to the theory of NP-completeness. New York: W. H. Freeman and Company; 1979.
  26. Gómez-Canaval S, Ortega A, Orgaz P. Distributed simulation of NEPs based on-demand cloud elastic computation. Advances in computational intelligence. LNCS 9094. Berlin: Springer; 2015. p. 40–54.
  27. Gómez-Canaval S, Ordozgoiti B, Mozo A. NPEPE: massive natural computing engine for optimally solving NP-complete problems in Big Data scenarios. Communications in computer and information science, vol. 539. Berlin: Springer; 2015. p. 207–17.
  28. Gómez-Canaval S, Mitrana V, Sánchez-Couso JR. Networks of splicing processors with evaluation sets as optimization problems solvers. Inf Sci. 2016;369:457–66.
  29. Gray R, Kotz D, Nog S, Rus D, Cybenko G. Mobile agents: the next generation in distributed computing. Proceedings of the 2nd AIZU international symposium on parallel algorithms/architecture synthesis, PAS'97. Washington, DC: IEEE Computer Society; 1997. p. 8–24.
  30. Head T, Păun G, Pixton D. Language theory and molecular genetics: generative mechanisms suggested by DNA recombination. Handb Form Lang. 1996;2:295–360.
  31. Helm R, Marriott K, Odersky M. Building visual language parsers. Proceedings CHI '91. New York: ACM; 1991. p. 105–12.
  32. Hillis DW. The connection machine. Cambridge: MIT; 1979.
  33. Kellerer H, Pferschy U, Pisinger D. Knapsack problems. Berlin: Springer; 2004.
  34. Kudlek M, Martín-Vide C, Păun G. Toward a formal macroset theory. Multiset processing: mathematical, computer science, and molecular computing points of view. Berlin: Springer; 2001. p. 123–33.
  35. Loos R, Manea F, Mitrana V. On small, reduced, and fast universal accepting networks of splicing processors. Theor Comput Sci. 2009;410:406–16.
  36. Loos R, Manea F, Mitrana V. Small universal accepting networks of evolutionary processors with filtered connections. Proc Desc Complex Form Syst Workshop. EPTCS. 2009;3:173–83.
  37. Loos R, Manea F, Mitrana V. Small universal accepting networks of evolutionary processors. Acta Inform. 2010;47:133–46.
  38. Manea F, Margenstern M, Mitrana V, Pérez-Jiménez MJ. A new characterization of NP, P, and PSPACE with accepting hybrid networks of evolutionary processors. Theory Comput Syst. 2010;46:174–92.
  39. Manea F, Martín-Vide C, Mitrana V. All NP-problems can be solved in polynomial time by accepting networks of splicing processors of constant size. DNA computing. LNCS, vol. 4287. Berlin: Springer; 2006. p. 47–57.
  40. Manea F, Martín-Vide C, Mitrana V. Accepting networks of splicing processors: complexity results. Theor Comput Sci. 2007;371:72–82.
  41. Manea F, Martín-Vide C, Mitrana V. On the size complexity of universal accepting hybrid networks of evolutionary processors. Math Struct Comput Sci. 2007;17:753–71.
  42. Manea F, Martín-Vide C, Mitrana V. Accepting networks of evolutionary word and picture processors: a survey. Scientific applications of language methods. Mathematics, computing, language, and life: frontiers in mathematical linguistics and language theory, vol. 2. Singapore: World Scientific; 2010. p. 523–60.
  43. Margenstern M, Rogozhin Y. Time-varying distributed H systems of degree 1 generate all recursively enumerable languages. Words, semigroups, and transductions. Singapore: World Scientific; 2001. p. 329–40.
  44. Margenstern M, Mitrana V, Pérez-Jiménez MJ. Accepting hybrid networks of evolutionary processors. Proc 10th international workshop on DNA computing. LNCS, vol. 3384. Berlin: Springer; 2004. p. 235–46.
  45. Marriott K. Constraint multiset grammars. Proc VL'94. Washington, DC: IEEE Computer Society; 1994. p. 118–25.
  46. Martín-Vide C, Pazos J, Păun G, Rodríguez-Patón A. A new class of symbolic abstract neural nets: tissue P systems. Proc 8th annual international conference, COCOON 2002. LNCS, vol. 2387. Berlin: Springer; 2002. p. 290–9.
  47. Minsky ML. Size and structure of universal Turing machines using tag systems. Recursive function theory. Symp Pure Math. 1966;429:74–86.
  48. Morrison JP. Flow-based programming: a new approach to application development. 2nd ed. Unionville: J.P. Enterprises Ltd; 2010.
  49. Păun G. On the splicing operation. Discrete Appl Math. 1996;70:57–79.
  50. Păun A. On time-varying H systems. Bull EATCS. 1999;67:157–64.
  51. Păun G. Distributed architectures in DNA computing based on splicing: limiting the size of components. Unconventional models of computation. Berlin: Springer; 1998. p. 323–35.
  52. Păun G. Regular extended H systems are computationally universal. J Autom Lang Comb. 1996;1:27–36.
  53. Păun G. DNA computing; distributed splicing systems. Structures in logic and computer science. LNCS 1261. Berlin: Springer; 1997. p. 351–70.
  54. Păun G. Membrane computing: an introduction. Berlin: Springer; 2002.
  55. Păun G, Rozenberg G, Salomaa A. The Oxford handbook of membrane computing. Oxford: Oxford University Press; 2010.
  56. Popescu S. Networks of polarized picture processors. RomJIST. 2016;18:3–17.
  57. Popescu S. Bio-inspired computing models. PhD thesis, University of Bucharest; 2015.
  58. Popescu S. Networks of polarized evolutionary processors with elementary polarization of symbols. Eighth workshop on non-classical models of automata and applications, NCMA 2016. Wien: Österreichische Computer Gesellschaft; 2016. p. 275–85.
  59. Post EL. Formal reductions of the general combinatorial decision problem. Am J Math. 1943;65:197–215.
  60. Rogozhin Y. Small universal Turing machines. Theor Comput Sci. 1996;168:215–40.
  61. Rozenberg G, Bäck T, Kok J, editors. Handbook of natural computing. Berlin: Springer; 2012.
  62. Rozenberg G, Salomaa A, editors. Handbook of formal languages. Berlin: Springer; 1997.
  63. Sankoff D. Gene order comparisons for phylogenetic inference: evolution of the mitochondrial genome. Proc Natl Acad Sci USA. 1992;89:6575–9.
  64. Suzuki Y, Tanaka H. Symbolic chemical system based on abstract rewriting system and its behavior pattern. Artif Life Robot. 1997;1:211–9.
  65. Suzuki Y, Tanaka H. Chemical evolution among artificial proto-cells. Proc artificial life VII. Cambridge: MIT; 2000. p. 54–63.
  66. Woods D, Neary T. On the time complexity of 2-tag systems and small universal Turing machines. 47th annual IEEE symposium on foundations of computer science, FOCS '06. Washington, DC: IEEE Computer Society; 2006. p. 439–48.
  67. Woods D, Neary T. The complexity of small universal Turing machines: a survey. Theor Comput Sci. 2009;410:443–50.

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Faculty of Mathematics and Computer Science, University of Bucharest, Bucharest, Romania
  2. Bioinformatics Department, National Institute for R&D for Biological Sciences, Bucharest, Romania
