
Can We Recognize an Innovation? Perspective from an Evolving Network Model


Part of the book series: The Frontiers Collection ((FRONTCOLL))

Abstract

“Innovations” are central to the evolution of societies and the evolution of life. But what constitutes an innovation? We can often agree after the event, when its consequences and impact over a long term are known, whether something was an innovation, and whether it was a “big” innovation or a “minor” one. But can we recognize an innovation “on the fly” as it appears? Successful entrepreneurs often can. Is it possible to formalize that intuition? We discuss this question in the setting of a mathematical model of evolving networks. The model exhibits self-organization, growth, stasis, and collapse of a complex system with many interacting components, reminiscent of real-world phenomena. A notion of “innovation” is formulated in terms of graph-theoretic constructs and other dynamical variables of the model. A new node in the graph gives rise to an innovation, provided it links up “appropriately” with existing nodes; in this view innovation necessarily depends upon the existing context. We show that innovations, as defined by us, play a major role in the birth, growth, and destruction of organizational structures. Furthermore, innovations can be categorized in terms of their graph-theoretic structure as they appear. Different structural classes of innovation have potentially different qualitative consequences for the future evolution of the system, some minor and some major. Possible general lessons from this specific model are briefly discussed.


Notes

  1. See Derivation of Equation (5.1) in Appendix A.

  2. See Time Scales for Appearance and Growth of the Dominant ACS in Appendix A.

  3. See The Attractor of Equation (5.1) in Appendix A.

  4. See Dominant ACS of a Graph in Appendix A.

  5. See Graph-Theoretic Properties of ACSs in Appendix A.

  6. For related discussion of discontinuous transitions in other systems, see [22–24].

  7. Most of the time the deleted node (being the one with the least relative population) is outside the dominant ACS of \(C_0\) or in its periphery. Thus, in most cases the core is unchanged by the deletion: \(Q_{\textrm{i}} = Q_0\). However, sometimes the deleted node belongs to \(Q_0\); in that case \(Q_{\textrm{i}} \neq Q_0\), and in most such cases \(Q_{\textrm{i}}\) is a proper subset of \(Q_0\). In very few (but important) cases, \(Q_{\textrm{i}} \cap Q_0 = \emptyset\) (the empty set). In these latter cases, the deleted node is a keystone node [16]; its removal results in a “core shift”.

References

  1. J.A. Schumpeter, The Theory of Economic Development (Harvard University Press, Cambridge, MA, 1934)

  2. E.M. Rogers, The Diffusion of Innovations, 4th edn. (Free Press, New York, NY, 1995)

  3. P.G. Falkowski, Science 311, 1724 (2006)

  4. S. Bornholdt, H.G. Schuster (eds.), Handbook of Graphs and Networks (Wiley-VCH, Weinheim, 2003)

  5. S. Jain, S. Krishna, Phys. Rev. Lett. 81, 5684 (1998)

  6. F. Dyson, Origins of Life (Cambridge University Press, Cambridge, 1985)

  7. S.A. Kauffman, The Origins of Order (Oxford University Press, Oxford, 1993)

  8. R.J. Bagley, J.D. Farmer, W. Fontana, in Artificial Life II, ed. by C.G. Langton, C. Taylor, J.D. Farmer, S. Rasmussen (Addison-Wesley, Redwood City, CA, 1991), pp. 141–158

  9. W. Fontana, L. Buss, Bull. Math. Biol. 56, 1 (1994)

  10. P. Bak, K. Sneppen, Phys. Rev. Lett. 71, 4083 (1993)

  11. S.A. Kauffman, J. Cybernetics 1, 71 (1971)

  12. S. Jain, S. Krishna, Proc. Natl. Acad. Sci. USA 99, 2055 (2002)

  13. S. Krishna, Ph.D. Thesis (2003), arXiv:nlin/0403050v1 [nlin.AO]

  14. S. Jain, S. Krishna, Proc. Natl. Acad. Sci. USA 98, 543 (2001)

  15. LEDA, The Library of Efficient Data Types and Algorithms (presently distributed by Algorithmic Solutions Software GmbH; http://www.algorithmic-solutions.com)

  16. S. Jain, S. Krishna, Phys. Rev. E 65, 026103 (2002)

  17. S. Jain, S. Krishna, Comput. Phys. Commun. 121–122, 116 (1999)

  18. S. Jain, S. Krishna, in Handbook of Graphs and Networks, ed. by S. Bornholdt, H.G. Schuster (Wiley-VCH, Weinheim, 2003), pp. 355–395

  19. M. Eigen, Naturwissenschaften 58, 465 (1971)

  20. O.E. Rössler, Z. Naturforsch. 26b, 741 (1971)

  21. E. Seneta, Non-negative Matrices (George Allen and Unwin, London, 1973)

  22. J. Padgett, in Networks and Markets, ed. by J.E. Rauch, A. Casella (Russel Sage, New York, NY, 2001), pp. 211–257

  23. M.D. Cohen, R.L. Riolo, R. Axelrod, Rationality Soc. 13, 5 (2001)

  24. J.M. Carlson, J. Doyle, Phys. Rev. E 60, 1412 (1999)

  25. H. Gutfreund, Kinetics for the Life Sciences (Cambridge University Press, Cambridge, 1995)


Acknowledgments

S.J. thanks John Padgett for discussions.

Author information

Correspondence to Sanjay Jain.

Appendices

Appendix A: Definitions and Proofs

In this Appendix we collect some useful facts about the model. These and other properties can be found in [5, 13, 17, 18].

1.1 Derivation of Equation (5.1)

Let \(i\in \{1,\ldots ,s\}\) denote a chemical (or molecular) species in a well-stirred chemical reactor. Molecules can react with one another in various ways; we focus on only one aspect of their interactions: catalysis. The catalytic interactions can be described by a directed graph with s nodes. The nodes represent the s species, and a link from node j to node i means that species j is a catalyst for the production of species i. In terms of the adjacency matrix \(C=(c_{ij})\) of this graph, \(c_{ij}\) is set to unity if j is a catalyst of i and to zero otherwise. The operational meaning of catalysis is as follows:

Each species i has an associated nonnegative population \(y_i\) in the reactor that changes with time. Let species j catalyze the ligation of reactants A and B to form species i, \(A+B\stackrel{j}{\rightarrow }i\). Assuming that the rate of this catalyzed reaction is given by the Michaelis–Menten theory of enzyme catalysis, \(\dot{y}_{i}=V_{\textrm{max}}ab\frac{y_{j}}{K_{\textrm{M}}+y_{j}}\) [25], where a, b are the reactant concentrations, and \(V_{\textrm{max}}\) and \(K_{\textrm{M}}\) are constants that characterize the reaction. If the Michaelis constant \(K_{\textrm{M}}\) is very large, this can be approximated as \(\dot{y}_{i}\propto y_{j}ab\). Combining the rates of the spontaneous and catalyzed reactions and including a dilution flux ϕ, the rate of growth of species i is \(\dot{y}_{i}=k(1+\nu y_{j})ab-\phi y_{i}\), where k is the rate constant for the spontaneous reaction and ν is the catalytic efficiency. Assuming the catalyzed reaction is much faster than the spontaneous one, and that the reactant concentrations are nonzero and fixed, the rate equation becomes \(\dot{y}_{i}=Ky_{j}-\phi y_{i}\), where K is a constant. In general, because species i can have multiple catalysts, \(\dot{y}_{i}=\sum _{j=1}^{s}K_{ij}y_{j}-\phi y_{i}\), with \(K_{ij}\sim c_{ij}\). We make the further idealization \(K_{ij}=c_{ij}\), giving

$$\dot{y}_{i}=\sum _{j=1}^{s}c_{ij}y_{j}-\phi y_{i}\;.$$
(5.2)

The relative population of species i is by definition \(x_{i}\equiv y_{i}/\sum\nolimits _{j=1}^{s}y_{j}\). As \(0\leq x_{i}\leq 1\) and \(\sum\nolimits _{i=1}^{s}x_{i}=1\), \({\textbf {x}}\equiv (x_{1},\ldots ,x_{s})^{T}\in J\). Taking the time derivative of \(x_i\) and using (5.2), it is easy to see that \(\dot{x}_{i}\) is given by (5.1). Note that the ϕ term, present in (5.2), cancels out and is absent in (5.1).

1.2 The Attractor of Equation (5.1)

A graph described by an adjacency matrix C has an eigenvalue \(\lambda_1(C)\) that is real, non-negative, and greater than or equal to the modulus of every other eigenvalue. This follows from the Perron–Frobenius theorem for non-negative matrices [21]; \(\lambda_1(C)\) is called the Perron–Frobenius eigenvalue of C.

The attractor X of (5.1) is an eigenvector of C with eigenvalue \(\lambda_1(C)\). Since (5.1) does not depend on ϕ, we can set \(\phi = 0\) in (5.2) without loss of generality for studying the attractors of (5.1). For fixed C the general solution of (5.2) is \(\textbf{y}(t) = e^{Ct}\textbf{y}(0)\), where y denotes the s-dimensional column vector of populations. It is evident that if \(\textbf{y}^{\lambda} \equiv (y_1^\lambda, \ldots, y_s^\lambda)^T\) is a right eigenvector of C with eigenvalue λ, then \(\textbf{x}^{\lambda} \equiv \textbf{y}^{\lambda}/\sum_{i=1}^s y_i^\lambda\) is a fixed point of (5.1). Let \(\lambda_1\) denote the eigenvalue of C that has the largest real part; it is clear that \(\textbf{x}^{\lambda_1}\) is an attractor of (5.1). By the Perron–Frobenius theorem for nonnegative matrices [21], \(\lambda_1\) is real and \(\geq 0\), and there exists an eigenvector \(\textbf{x}^{\lambda_1}\) with all components \(\geq 0\). If \(\lambda_1\) is nondegenerate, \(\textbf{x}^{\lambda_1} = (X_1, \ldots, X_s)\) is the unique asymptotically stable attractor of (5.1).
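This convergence is easy to check numerically. The following sketch (assuming NumPy is available; the four-node graph is our own illustrative choice, not one from the chapter) integrates the normalized population dynamics forward in time and compares the result with the eigenvector for the eigenvalue of largest real part:

```python
import numpy as np

# Toy catalytic graph: a 3-cycle 0 -> 1 -> 2 -> 0 that also feeds node 3.
# c_ij = 1 means species j catalyzes species i, as in Appendix A.
C = np.array([
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
], dtype=float)

# Euler-integrate y' = C y (phi = 0) while renormalizing; the relative
# populations x then evolve under Eq. (5.1) regardless of phi.
x = np.ones(4) / 4
dt = 0.01
for _ in range(20_000):
    x = x + dt * (C @ x)
    x /= x.sum()

# Compare with the eigenvector for the eigenvalue of largest real part.
vals, vecs = np.linalg.eig(C)
pf = np.abs(np.real(vecs[:, np.argmax(vals.real)]))
pf /= pf.sum()
# For this graph both equal [0.25, 0.25, 0.25, 0.25].
assert np.allclose(x, pf, atol=1e-6)
```

The complex eigenvalues of the cycle have negative real parts, so their contributions decay and the relative populations settle on the Perron–Frobenius eigenvector.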

1.3 The Attractor of Equation (5.1) When There Are No Cycles

For a graph with no cycles, only the nodes at the ends of the longest paths have nonzero relative populations in the attractor; all other nodes have \(X_i = 0\).

Consider a graph consisting only of a linear chain of \(r+1\) nodes, with r links pointing from node 1 to node 2, node 2 to node 3, etc. Node 1 (which has no incoming link) has a constant population \(y_1\), because the right-hand side of (5.2) vanishes for i = 1 (taking \(\phi =0\)). For node 2, we get \(\dot{y}_{2}=y_{1}\), hence \(y_{2}(t)=y_{2}(0)+y_{1}t\sim t\) for large t. Similarly, it can be seen that \(y_k\) grows as \(t^{k-1}\). In general, for a graph with no cycles, \(y_{i}\sim t^{r}\) for large t (when \(\phi =0\)), where r is the length of the longest path terminating at node i. Thus, nodes with the largest r dominate for sufficiently large t. Because the dynamics (5.1) does not depend upon the choice of ϕ, \(X_{i}=0\) for all i except the nodes at which the longest paths in the graph terminate.
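The polynomial growth along a chain can be verified directly (a sketch assuming NumPy; the three-node chain is an illustrative choice):

```python
import numpy as np

# Chain 1 -> 2 -> 3 (indices 0, 1, 2): acyclic, so all eigenvalues vanish.
C = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
assert np.allclose(np.linalg.eigvals(C), 0)

# Integrate y' = C y with phi = 0: y_1 stays constant, y_2 ~ t, y_3 ~ t^2.
y = np.ones(3)
dt, steps = 0.001, 100_000        # integrate up to t = 100
for _ in range(steps):
    y = y + dt * (C @ y)
x = y / y.sum()                   # relative populations

# The node at the end of the longest path dominates.
assert np.argmax(x) == 2 and x[2] > 0.95
```

At t = 100 the populations are roughly 1, t, and \(t^2/2\), so the chain's last node already holds about 98% of the relative population.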

1.4 Graph-Theoretic Properties of ACSs

  i. An ACS must contain a closed walk.

  ii. If a graph C has no closed walk, then \(\lambda_1(C)=0\).

  iii. If a graph C has a closed walk, then \(\lambda_1(C)\geq 1\). Consequently:

  iv. If a graph C has no ACS, then \(\lambda_1(C)=0\).

  v. If a graph C has an ACS, then \(\lambda_1(C)\geq 1\).

  i. Let A be the adjacency matrix of a graph that is an ACS. Then by definition, every row of A has at least one nonzero entry. Construct A′ by removing, from each row of A, all nonzero entries except one, chosen arbitrarily. Thus A′ has exactly one nonzero entry in each row. Clearly the column vector \({\textbf {x}}=(1,1,\ldots ,1)^{T}\) is an eigenvector of A′ with eigenvalue 1, hence \(\lambda _{1}(A')\ge 1\). Proposition (iii) therefore implies that A′ contains a closed walk. Because A′ was obtained from A only by removing links, A must also contain a closed walk.

  ii. If a graph has no closed walk, then all walks are of finite length. Let r denote the length of the longest walk of the graph. If C is the adjacency matrix of a graph, then \((C^{k})_{ij}\) equals the number of distinct walks of length k from node j to node i. Clearly \(C^{m}=0\) for \(m>r\), so all eigenvalues of \(C^m\) are zero. If \(\lambda_i\) are the eigenvalues of C, then \(\lambda _{i}^{k}\) are the eigenvalues of \(C^k\). Hence all eigenvalues of C are zero, which implies \(\lambda _{1}=0\). This proof was supplied by V. S. Borkar.

  iii. If a graph has a closed walk, then there is some node i with at least one closed walk to itself, i.e., \((C^{k})_{ii}\geq 1\) for infinitely many values of k. Because the trace of a matrix equals the sum of its eigenvalues, \(\sum _{i=1}^{s}(C^{k})_{ii}=\sum _{i=1}^{s}\lambda _{i}^{k}\), where \(\lambda_i\) are the eigenvalues of C. Thus \(\sum _{i=1}^{s}\lambda _{i}^{k}\ge 1\) for infinitely many values of k. This is only possible if one of the eigenvalues has modulus ≥ 1. By the Perron–Frobenius theorem, \(\lambda_1\) is the eigenvalue with the largest modulus, hence \(\lambda _{1}\geq 1\). This proof was supplied by R. Hariharan.

  Propositions (iv) and (v) follow from the above.
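Propositions (ii) and (iii) are easy to spot-check numerically on random digraphs (a sketch assuming NumPy; the size, link probability, and trial count are arbitrary). For an s-node graph, \(C^s \neq 0\) exactly when the graph contains a closed walk:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 8
checked = 0
for _ in range(500):
    C = (rng.random((s, s)) < 0.15).astype(float)
    np.fill_diagonal(C, 0)                       # disallow one-cycles, as in the model
    lam1 = np.max(np.abs(np.linalg.eigvals(C)))  # Perron-Frobenius eigenvalue
    has_closed_walk = bool(np.linalg.matrix_power(C, s).any())
    if has_closed_walk:
        assert lam1 > 1 - 1e-6     # proposition (iii): lambda_1 >= 1
    else:
        # proposition (ii): lambda_1 = 0; generous tolerance because the
        # numerical eigenvalues of a nilpotent matrix can be of order eps^(1/s)
        assert lam1 < 0.1
    checked += 1
```

The nilpotency test works because any walk longer than s − 1 in an s-node graph must revisit a node, so \(C^s\) is nonzero if and only if some closed walk exists.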

1.5 Dominant ACS of a Graph

If a graph has (one or more) ACSs, i.e., \(\lambda_1 \geq 1\), then the subgraph corresponding to the set of nodes i for which \(X_i > 0\) is an ACS.

Renumber the nodes of the graph so that \(X_i>0\) only for \(i=1, \ldots, k\), and let C be the adjacency matrix of this graph. Since X is an eigenvector of C with eigenvalue \(\lambda_1\), we have \(\sum_{j=1}^s c_{ij}X_j=\lambda_1 X_i \Rightarrow \sum_{j=1}^k c_{ij}X_j=\lambda_1 X_i\). Since \(X_i>0\) only for \(i=1, \ldots, k\), it follows that for each \(i \in \{1,\ldots,k\}\) there exists a j such that \(c_{ij}>0\). Hence the \(k \times k\) submatrix \(C' \equiv (c_{ij})\), \(i,j=1,\ldots,k\), has at least one nonzero entry in each row. Thus each node of the subgraph corresponding to this submatrix has an incoming link from one of the other nodes in the subgraph. Hence the subgraph is an ACS. We call this subgraph the dominant ACS of the graph.
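In code, the dominant ACS can be read off as the support of the Perron–Frobenius eigenvector (a sketch assuming NumPy; the example graph is our own illustrative choice):

```python
import numpy as np

# Two-cycle 0 <-> 1 (the core) feeding node 2; node 3 receives no links.
C = np.zeros((4, 4))
C[0, 1] = C[1, 0] = 1.0      # the cycle: an ACS with lambda_1 = 1
C[2, 0] = 1.0                # periphery node, catalyzed by node 0

vals, vecs = np.linalg.eig(C)
X = np.abs(np.real(vecs[:, np.argmax(vals.real)]))
X /= X.sum()
support = [i for i in range(4) if X[i] > 1e-9]   # nodes with X_i > 0

# Every node of the support has an incoming link from within the support,
# so the subgraph on the support is an ACS; here it is nodes 0, 1, 2.
assert support == [0, 1, 2]
assert all(any(C[i, j] for j in support) for i in support)
```

Note that the dominant ACS includes the periphery node 2 but not the disconnected node 3, matching the proposition above.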

1.6 Time Scales for Appearance and Growth of the Dominant ACS

The probability for an ACS to be formed at some graph update in a graph that has no cycles can be closely approximated by the probability of a two-cycle (the simplest ACS when one-cycles are disallowed) forming by chance, which is \(p^2 s\) (the probability that, in the row and column corresponding to the replaced node in C, some matrix element and its transpose are both assigned unity). Thus the average time of appearance of an ACS is \(\tau _{a}=1/(p^{2}s)\), and the distribution of times of appearance is \(P(n_{a})=p^{2}s(1-p^{2}s)^{n_{a}-1}\). This approximation is better for small p.

Assuming that the formation of a second ACS by a new node is rare enough to neglect, and that the dominant ACS grows by adding a single node at a time, one can estimate the time required for it to span the entire graph. Let the dominant ACS consist of \(s_{1}(n)\) nodes at time n. The probability that the new node gets an incoming link from the dominant ACS, and hence joins it, is \(ps_1\). Thus in \({\Delta}n\) graph updates the dominant ACS grows, on average, by \({\Delta}s_{1}=ps_{1}{\Delta}n\) nodes. Therefore \(s_{1}(n)=s_{1}(n_{a})\exp ((n-n_{a})/\tau _{g})\), where \(\tau _{g}=1/p\), \(n_a\) is the time of appearance of the first ACS, and \(s_{1}(n_{a})\) is the size of the first ACS. Thus \(s_1\) is expected to grow exponentially with characteristic timescale \(\tau _{g}=1/p\). The time taken from the appearance of the ACS to its spanning is \(\tau _{g}\ln (s/s_{1}(n_{a}))\).
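A small Monte Carlo run illustrates the waiting-time estimate \(\tau_a = 1/(p^2 s)\) (a sketch assuming NumPy; the values of p, s, and the trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
p, s, trials = 0.05, 20, 4000
times = []
for _ in range(trials):
    n = 0
    while True:
        n += 1
        # Links between the replaced node and the s existing nodes.
        incoming = rng.random(s) < p
        outgoing = rng.random(s) < p
        if np.any(incoming & outgoing):   # a 2-cycle forms: the simplest ACS
            break
    times.append(n)

mean_n = np.mean(times)
tau_a = 1.0 / (p**2 * s)                  # predicted mean waiting time, = 20 here
assert abs(mean_n - tau_a) / tau_a < 0.1
```

The exact per-update probability is \(1-(1-p^2)^s\), which for small p is close to \(p^2 s\), so the simulated mean sits within a few percent of the prediction.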

Appendix B: Graph-Theoretic Classification of Innovations

In the main text we defined an innovation to be the new structure created by the addition of a new node, when the new node has a nonzero population in the new attractor. Here, we present a graph-theoretic hierarchical classification of innovations (see Fig. 5.3). At the bottom of this hierarchy we recover the six categories of innovations described in the main text.

Some notation follows. We need to distinguish between two graphs: one just before the new node is inserted and one just after. We denote them by \(C_{\textrm{i}}\) and \(C_{\textrm{f}}\), respectively, and their cores by \(Q_{\textrm{i}}\) and \(Q_{\textrm{f}}\). Note that a graph update event consists of two parts: the deletion of a node and the addition of one. \(C_{\textrm{i}}\) is the graph after the node is deleted and before the new node is inserted. The graph before the deletion will be denoted \(C_0\), and \(Q_0\) will denote its core (see note 7). If a graph has no ACS, its core is the empty set.

The links of the new node may be such that new cycles arise in the graph (that were absent in C i but are present in C f). In this case the new node is part of a new irreducible subgraph that has arisen in the graph. N will denote the maximal irreducible subgraph which includes the new node. If the new node does not create new cycles, \(N = \emptyset\). If \(N \neq \emptyset\), then N will either be disjoint from Q f or will include Q f (it cannot partially overlap with Q f because of its maximal character). The structure of N and its relationship with the core before and after the addition determines the nature of the innovation. With this notation all innovations can be grouped into two classes:

  A. Innovations that do not create new cycles, \(N=\emptyset\). This implies \(Q_{\textrm{f}}=Q_{\textrm{i}}\), because no new irreducible structure has appeared and therefore the core of the graph, if it exists, is unchanged.

  B. Innovations that do create new cycles, \(N\ne \emptyset\). This implies \(Q_{\textrm{f}}\ne \emptyset\), because if a new irreducible structure is created then the new graph has at least one ACS and therefore a nonempty core.

Class A can be further decomposed into two classes:

  A1. \(Q_{\textrm{i}}=Q_{\textrm{f}}=\emptyset\). In other words, the graph has no cycles either before or after the innovation. This corresponds to the short-lived innovations discussed in Sect. 5.6.1 (Fig. 5.2b, c).

  A2. \(Q_{\textrm{i}}=Q_{\textrm{f}} \ne \emptyset\). In other words, the graph had an ACS before the innovation, and its core was not modified by the innovation. This corresponds to the incremental innovations discussed in Sect. 5.6.3 (Fig. 5.2f, g).

Class B of innovations can also be divided into two subclasses:

  B1. \(N\ne Q_{\textrm{f}}\). If the new irreducible structure is not the core of the new graph, then N must be disjoint from \(Q_{\textrm{f}}\). This can only be the case if the old core has not been modified by the innovation. Therefore \(N\ne Q_{\textrm{f}}\) necessarily implies \(Q_{\textrm{f}}=Q_{\textrm{i}}\). This corresponds to the dormant innovations discussed in Sect. 5.6.6 (Fig. 5.2o, p).

  B2. \(N=Q_{\textrm{f}}\), i.e., the innovation becomes the new core after the graph update. This is the situation where the core is transformed by the innovation. The “core-transforming theorem” [12, 13, 18] states that an innovation of type B2 occurs whenever either of the following conditions holds:

    (a) \(\lambda_{1}(N)>\lambda _{1}(Q_{\textrm{i}})\), or

    (b) \(\lambda_{1}(N)=\lambda_{1}(Q_{\textrm{i}})\) and N is downstream of \(Q_{\textrm{i}}\).

Class B2 can be subdivided as follows:

  B21. \(Q_{\textrm{i}}\ne \emptyset\), i.e., the graph contained an ACS before the innovation. In this case an existing core is modified by the innovation.

  B22. \(Q_{\textrm{i}}=\emptyset\), i.e., the graph had no ACS before the innovation. Thus this kind of innovation creates an ACS in the graph. It corresponds to the birth of an organization discussed in Sect. 5.6.2 (Fig. 5.2d, e).

Finally, class B21 can be subdivided:

  B211. \(Q_{\textrm{i}}\subset Q_{\textrm{f}}\). When the new core contains the old core as a subset, we get an innovation that causes the growth of the core, discussed in Sect. 5.6.4 (Fig. 5.2l, m).

  B212. \(Q_{\textrm{i}}\) and \(Q_{\textrm{f}}\) are disjoint (note that \(Q_{\textrm{i}}\) and \(Q_{\textrm{f}}\) cannot partially overlap, else together they would form one big irreducible set, which would then be the core of the new graph, and \(Q_{\textrm{i}}\) would be a subset of \(Q_{\textrm{f}}\)). This is an innovation in which a core-shift is caused by a takeover by a new competitor, discussed in Sect. 5.6.5 (Fig. 5.2h, i).

Note that each branching above is into mutually exclusive and exhaustive classes. This classification is completely general and applicable to all runs of the system. Figure 5.3 shows the hierarchy obtained using this classification.
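Because the branches are mutually exclusive and exhaustive, the classification can be written directly as a decision procedure. The sketch below assumes the cores and N are given as node sets; the function name and string labels are our own, not the chapter's:

```python
def classify_innovation(Q_i, Q_f, N):
    """Classify an innovation from the cores before (Q_i) and after (Q_f)
    the graph update and the maximal new irreducible subgraph N containing
    the new node (empty if the new node creates no new cycles)."""
    Q_i, Q_f, N = set(Q_i), set(Q_f), set(N)
    if not N:                            # class A: no new cycles
        return "A1: short-lived" if not Q_f else "A2: incremental"
    if N != Q_f:                         # class B1: new cycles outside the core
        return "B1: dormant"
    if not Q_i:                          # class B22: no ACS existed before
        return "B22: birth of an organization"
    if Q_i < Q_f:                        # class B211: old core inside new core
        return "B211: growth of the core"
    return "B212: core-shift (takeover by a new competitor)"
```

For example, `classify_innovation({1, 2}, {5, 6}, {5, 6})` falls through to the final branch, since the old and new cores are disjoint.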


Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Jain, S., Krishna, S. (2011). Can We Recognize an Innovation? Perspective from an Evolving Network Model. In: Meyer-Ortmanns, H., Thurner, S. (eds) Principles of Evolution. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-18137-5_5

  • DOI: https://doi.org/10.1007/978-3-642-18137-5_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-18136-8

  • Online ISBN: 978-3-642-18137-5