
Auto-Contractive Maps, H Function, and the Maximally Regular Graph: A New Methodology for Data Mining


Abstract

Data mining can be described as the process of discovering knowledge in a large dataset and depicting it in a human-understandable structure. It involves the disciplines of artificial intelligence, machine learning, database systems, and, of course, mathematics. A specialized data mining tool called the auto-contractive map (AutoCM) is defined and illustrated. After the AutoCM has discovered the relationships contained in the dataset, they are depicted in the form of a minimum spanning tree, and an example is given to illustrate how to interpret the tree. A measure of the degree to which the tree is hub-oriented is developed and called the hubness index. Finally, a new index to measure the relevance and contribution of any node within a graph generated from a dataset is defined and called the delta H function.


Notes

  1. The “B” operator was invented and implemented by M. Buscema in 1998 at the Semeion Research Center. It is presented in this chapter for the first time.



Appendices

15.1.1 Appendix A: AutoCM Convergence

Giovanni Pieri

The first step is to state the AutoCM convergence condition. Convergence means that in the long run (as n grows), the connections no longer vary, that is,

$$ \Delta v_{i(n)} = 0; \qquad \Delta w_{i,j(n)} = 0. $$

A sufficient condition for convergence is:

$$ \lim_{n \to \infty} v_{i(n)} = C. $$
(15.8)

In fact, when \( v_{i(n)} = C \), then \( \Delta v_{i(n)} = 0 \) (Eq. 15.2) and \( m_{i(n)}^{[h]} = 0 \) (Eq. 15.1) and, consequently, \( \Delta w_{i,j(n)} = 0 \) (Eq. 15.6).

The second step is to demonstrate that this sufficient condition holds. To this end, we can rewrite Eqs. (15.2) and (15.3) as follows:

$$ \Delta v_{i(n)} = \left( m_{i(n)}^{[s]} - m_{i(n)}^{[s]} \cdot \left( 1 - \frac{v_{i(n)}}{C} \right) \right) \cdot \left( 1 - \frac{v_{i(n)}}{C} \right) = m_{i(n)}^{[s]} \cdot \frac{v_{i(n)}}{C} \cdot \left( 1 - \frac{v_{i(n)}}{C} \right); $$
(15.2a)
$$ v_{i(n+1)} = v_{i(n)} + m_{i(n)}^{[s]} \cdot \frac{v_{i(n)}}{C} \cdot \left( 1 - \frac{v_{i(n)}}{C} \right). $$
(15.3a)

For the sake of clarity, we set

$$ v_{i(n+1)} = v_{n+1}, \qquad \frac{v_{i(n)}}{C} = y, \qquad m_{i(n)}^{[s]} = m, $$

obtaining a simplified version of Eq. (15.3a):

$$ v_{n+1} = Cy + m \cdot y \cdot (1 - y) = y(C + m) - m y^2. $$
(15.3b)

Note that in Eq. (15.3b), while C is a true constant, remaining unchanged during training, m is a variable bounded both above and below; in fact, \( 0 \leqslant m \leqslant 1 \). This property will be exploited to demonstrate Eq. (15.8).

A graphical representation of Eq. (15.3b) helps to make its properties clear. The general form of Eq. (15.3b) is a parabola passing through two fixed points that do not depend on m: the origin, where both \( v_{n+1} \) and y are zero, and the point of coordinates y = 1, \( v_{n+1} = C \). Between the two points, the function may have a maximum. This happens when C < 1 and m is close to 1: the lower C and the higher m, the more pronounced the maximum. Otherwise (C ≥ 1), the maximum lies outside the interval, at y ≥ 1.

Fig. A.1 Equation (15.3b) in the case of C = 0.8

Fig. A.2 Equation (15.3b) in the case of C = 1

Fig. A.3 Equation (15.3b) in the case of C = 1.5

Three cases of Eq. (15.3b) are represented in Figs. A.1, A.2, and A.3, obtained for C = 0.8, C = 1, and C = 1.5, respectively. In each case, various values of m give rise to different curves.

To illustrate the use of the above diagrams, let us assume for a moment a constant value of m: the curve corresponding to that value represents all the possible values of \( v_{n+1} \). If C ≥ 1 and y < 1, the following properties are readily seen:

  (a) \( v_{n+1} \) is always less than C.

  (b) \( v_{n+1} \) is always larger than \( v_n \).

  (c) As n grows, y grows as well, and so \( v_{n+1} \) keeps increasing.

It is also readily seen that these properties hold even for a variable m, that is, for m a function of n.

If property (c) is assumed to be equivalent to saying that, for any positive ε, there exists at least one \( v_n \) such that

$$ v_n > C - \varepsilon, $$

then it can easily be demonstrated that the sufficient condition (15.8) for convergence holds (if the above inequality is satisfied, the definition of the limit is also satisfied, and therefore \( \lim_{n \to \infty} v_{i(n)} = C \)).

The equivalence cannot be assumed to be unconditionally true, but only to be a reasonable conjecture; therefore, any demonstration based on it must be considered not completely sound.

The same holds for the condition \( v_{i(n)}/C = 1 - \varepsilon \), which is in turn another form of property (c) discussed above.

In particular, it is clear that the sufficient condition \( \lim_{n \to \infty} v_{i(n)} = C \) holds neither when the parabolic curve has a maximum at y < 1 nor when the initial value \( v_0 \) is larger than C.
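To make this behaviour concrete, here is a minimal numerical sketch (ours, not part of the chapter) that iterates the recurrence of Eq. (15.3a). For simplicity, m is held constant, whereas during AutoCM training it varies with n; the only property the argument relies on is \( 0 \leqslant m \leqslant 1 \).

```python
def iterate_v(C, v0, m, steps):
    """Iterate Eq. (15.3a): v_{n+1} = v_n + m * (v_n / C) * (1 - v_n / C)."""
    v = v0
    for _ in range(steps):
        v = v + m * (v / C) * (1.0 - v / C)
    return v

# With C >= 1 and v0 < C, the iterates increase monotonically toward C.
for C in (1.0, 1.5):
    for m in (0.3, 0.9):
        print(f"C={C}, m={m}: v_200={iterate_v(C, 0.1, m, 200):.6f}")
```

Starting below C, the iterates approach C from below, as the argument above predicts; with \( v_0 > C \) or a pronounced maximum inside the interval, this simple picture no longer applies.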

15.1.2 Appendix B: Operator “B”

Riccardo Petritoli (see Note 1)

Let \( \bar{W} \) be the space of the square matrices whose dimension satisfies these relations:

$$ W \in \bar{W}, \quad n_{\dim} = \dim (W), \quad n_{\dim} \in \left\{ x \mid \exists\, y, z \in N^{+},\ x = y \cdot z \right\}. $$

We define:

  (a) The set D of the ordered pairs \( (n_R, n_C) \) with \( n_R, n_C \in N^{+} \), \( n_R \cdot n_C = n_{\dim} \)

  (b) The operator \( B(n_R, n_C)\colon W \to W' \)

    $$ w_{i,j} = w'_{k,l}, \quad i, j, k, l \in \{ 1, \ldots, n_{\dim} \} $$

    with:

    $$ k = \left[ (i - 1)/n_R \right] \cdot n_R + 1 + (j - 1)/n_C $$
    $$ l = \left[ (i - 1) \bmod n_R \right] \cdot n_C + 1 + (j - 1) \bmod n_C $$

    where:

    “/” is the integer division defined in \( N \)

    “mod” is the modulo operation (also defined in \( N \))

Notes

  1. From definition (b), it follows that the B operator performs a simple change of element positions in the matrix (a sort of “block transpose”).

    Consider this example:

    Let \( \bar{W} \) be a matrix space with \( n_{\dim} = 12 \), and take the B operator with \( n_R = 3 \), \( n_C = 4 \).

    The matrix \( W \) is:

    $$ W = \begin{pmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,12} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,12} \\ \vdots & \vdots & \ddots & \vdots \\ w_{12,1} & w_{12,2} & \cdots & w_{12,12} \end{pmatrix}. $$

    Using the operator \( B(3, 4) \), the result is a matrix of twelve 3 × 4 blocks, arranged in four block-rows of three; block n contains row n of \( W \) reshaped row-wise:

    $$ W' = \begin{pmatrix} R_1 & R_2 & R_3 \\ R_4 & R_5 & R_6 \\ R_7 & R_8 & R_9 \\ R_{10} & R_{11} & R_{12} \end{pmatrix}, \qquad R_n = \begin{pmatrix} w_{n,1} & w_{n,2} & w_{n,3} & w_{n,4} \\ w_{n,5} & w_{n,6} & w_{n,7} & w_{n,8} \\ w_{n,9} & w_{n,10} & w_{n,11} & w_{n,12} \end{pmatrix}. $$

    From the previous example, we can extract a simple algorithm for the B operator.

    Let the starting matrix be divided into 3 × 4 blocks:

    $$ W = \begin{pmatrix} A_1 & B_1 & C_1 \\ A_2 & B_2 & C_2 \\ A_3 & B_3 & C_3 \\ A_4 & B_4 & C_4 \end{pmatrix}. $$

    From every ordered set \( (A_n, B_n, C_n) \), we obtain a 3 × 12 block \( D_n \); the resulting 4 blocks form the final matrix:

    $$ W' = \begin{pmatrix} D_1 \\ D_2 \\ D_3 \\ D_4 \end{pmatrix}. $$

    This is the procedure:

    Step 1: Consider block A:

    $$ A = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} \end{pmatrix}. $$

    Let the block be “vectorized,” obtaining \( V_A \):

    $$ V_A = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} & a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} & a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} \end{pmatrix}. $$

    Step 2: Repeat step 1 for blocks B and C, obtaining vectors \( V_B \) and \( V_C \).

    Step 3: Stack the row vectors \( V_A, V_B, V_C \) in a column, obtaining D:

    $$ D = \begin{pmatrix} V_A \\ V_B \\ V_C \end{pmatrix}. $$
  2. The operator B is a bijective function: \( B(n_R, n_C)\colon W \leftrightarrow W' \). In fact, the inverse operator \( B^{-1}(n_R, n_C) \) exists and is equal to \( B(n_R, n_C) \): \( W \xrightarrow{B} W' \xrightarrow{B} W \). (Note that the transpose operator also has this feature.)

  3. The B operator is linear. In fact, \( B(a \cdot W' + b \cdot W'') = a \cdot B(W') + b \cdot B(W'') \).

  4. It follows from definition (a) that, for every \( n_{\dim} \), the set D always contains at least two elements: \( (1, n_{\dim}) \) and \( (n_{\dim}, 1) \). In these cases, we have:

    $$ B(1, n_{\dim}) \equiv I \quad (\text{identity operator}) $$
    $$ B(n_{\dim}, 1) \equiv T \quad (\text{transpose operator}) $$
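As a cross-check of the definition and of notes 2 and 4, here is a small NumPy sketch (ours, not part of the chapter) that applies the index formulas of definition (b) directly:

```python
import numpy as np

def b_operator(W, n_r, n_c):
    """Apply B(n_R, n_C): w_{i,j} -> w'_{k,l}, using 1-based indices as in the text."""
    n_dim = W.shape[0]
    assert W.shape == (n_dim, n_dim) and n_r * n_c == n_dim
    Wp = np.empty_like(W)
    for i in range(1, n_dim + 1):
        for j in range(1, n_dim + 1):
            k = ((i - 1) // n_r) * n_r + 1 + (j - 1) // n_c
            l = ((i - 1) % n_r) * n_c + 1 + (j - 1) % n_c
            Wp[k - 1, l - 1] = W[i - 1, j - 1]
    return Wp

W = np.arange(144.0).reshape(12, 12)
assert np.array_equal(b_operator(b_operator(W, 3, 4), 3, 4), W)  # note 2: B is its own inverse
assert np.array_equal(b_operator(W, 1, 12), W)                   # note 4: B(1, n_dim) = identity
assert np.array_equal(b_operator(W, 12, 1), W.T)                 # note 4: B(n_dim, 1) = transpose
```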

15.1.3 Appendix C: The Concept of Hubness

R. Petritoli and M. Buscema

15.1.3.1 C.1 Definition of Hubness

Pruning Algorithm

Consider a graph with N nodes and A links. We use the following algorithm (pruning algorithm):

  1. Detect all the nodes in the graph with the minimum gradient, that is, all the nodes with the smallest number of links.

  2. “Set free” all the detected nodes by erasing their links.

  3. Apply steps 1 and 2 until all the nodes of the graph are free (complete disconnection of the graph).

We define the pruning cycle number as the number of iterations needed to disconnect the graph completely; we indicate this value with M.

Example 1

Consider the following graph (N = 8 and A = 11):

Let us apply the pruning algorithm (the step-by-step pruning is shown in Tables 18 and 19).

The resulting number of pruning cycles M is 4.
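The pruning algorithm translates directly into code. The following Python sketch (ours, not part of the chapter) works on an edge list; since the figure of Example 1 is not reproduced here, the 8-node, 11-link graph below is our own reconstruction, chosen to be consistent with the pruning table reported in Example 2 (other graphs with the same table exist).

```python
def prune(edges):
    """Run the pruning algorithm; return the rows (K, G, L, N_d) of the pruning table."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    table = []
    while adj:
        g = min(len(nbrs) for nbrs in adj.values())          # minimum gradient
        selected = [u for u, nbrs in adj.items() if len(nbrs) == g]
        erased = {frozenset((u, v)) for u in selected for v in adj[u]}
        for e in erased:                                     # erase the links
            u, v = tuple(e)
            adj[u].discard(v)
            adj[v].discard(u)
        freed = [u for u, nbrs in adj.items() if not nbrs]   # nodes released this cycle
        for u in freed:
            del adj[u]
        table.append((len(table) + 1, g, len(erased), len(freed)))
    return table

# Hypothetical 8-node, 11-link graph consistent with Examples 1 and 2
edges = [(1, 2), (1, 3), (1, 5), (1, 7), (2, 4), (2, 5), (2, 8),
         (3, 4), (3, 6), (4, 6), (5, 7)]
for row in prune(edges):
    print(row)  # (1, 1, 1, 1), (2, 2, 4, 2), (3, 2, 5, 3), (4, 1, 1, 2): M = 4
```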

Pruning Table

In order to keep track of the evolution of all the variables during the pruning process, we introduce the pruning table:

$$ \left| \begin{array}{cccc} K & G & L & N_d \\ 1 & g_1 & l_1 & n_1 \\ \cdots & \cdots & \cdots & \cdots \\ M & g_M & l_M & n_M \end{array} \right|. $$

Every row corresponds to a single pruning cycle and is composed of the following variables:

  • K: progressive number identifying the jth pruning cycle

  • G: pruning gradient of the jth cycle

  • L: number of links erased in the jth cycle

  • \( N_d \): number of nodes released in the jth cycle

Example 2

Considering the graph of the previous example, we have the following pruning table:

$$ \left| \begin{array}{cccc} K & G & L & N_d \\ 1 & 1 & 1 & 1 \\ 2 & 2 & 4 & 2 \\ 3 & 2 & 5 & 3 \\ 4 & 1 & 1 & 2 \end{array} \right|. $$

We also define two variables derived from the pruning table: P and \( S_{TG} \). To do so, we need a preliminary operation: the partition of the gradients.

We have the sequence of gradients from the pruning:

$$ G = g_1, g_2, \ldots, g_M. $$

Let us split the gradients into classes using the following rules:

  • A class contains at least one element of the sequence.

  • Two adjoining elements of the sequence with equal values belong to the same class.

We call P the number of emerging classes and \( S_{TG} \) the common value of a class (i.e., the value of each element belonging to the class).

Example 3

Consider a sequence of gradients with M = 10:

G = 2, 1, 1, 1, 2, 4, 3, 2, 6, 6.

The partition will be:

C1 = {2}, C2 = {1,1,1}, C3 = {2}, C4 = {4}, C5 = {3}, C6 = {2}, C7 = {6,6}.

The resulting number of classes is 7; the sequence \( S_{TG} \) is 2, 1, 2, 4, 3, 2, 6.

Example 4

Consider the graph of Examples 1 and 2; we have:

G = 1, 2, 2, 1.

The partition will be:

C1 = {1}, C2 = {2,2}, C3 = {1}.

The resulting number of classes is 3; the sequence \( S_{TG} \) is 1, 2, 1.

The μ and φ Parameters

We introduce two variables that will be used for the definition of hubness:

$$ \mu = \frac{1}{M}\sum_{i=1}^{M} N_{d_i} = \frac{N}{M}; $$
$$ \phi = \frac{1}{P}\sum_{j=1}^{P} S_{TG\,j}. $$

Hubness of a Graph

Definition of hubness:

$$ H_0 = \frac{\mu \cdot \phi - 1}{A}. $$

Example 5

Considering the graph of Examples 1 and 2, we have:

$$ \mu = \frac{1}{M}\sum_{i=1}^{M} N_{d_i} = \frac{N}{M} = \frac{8}{4} = 2, $$
$$ \phi = \frac{1}{P}\sum_{j=1}^{P} S_{TG\,j} = \frac{1}{3}(1 + 2 + 1) = \frac{4}{3}, $$
$$ H_0 = \frac{\mu \cdot \phi - 1}{A} = \frac{2 \cdot \frac{4}{3} - 1}{11} = \frac{\frac{8 - 3}{3}}{11} = \frac{5}{33} = 0.\overline{15}. $$
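The class partition and the two parameters are just as direct to compute. The sketch below (ours, not part of the chapter) derives μ, φ, and \( H_0 \) from a pruning table and reproduces Example 5; fed the output of the prune() sketch above, it covers the whole path from graph to hubness.

```python
def hubness(table, links):
    """Compute (mu, phi, H0) from pruning-table rows (K, G, L, N_d) and the link count A."""
    M = len(table)
    N = sum(n_d for _, _, _, n_d in table)       # every node is released exactly once
    mu = N / M
    grads = [g for _, g, _, _ in table]
    s_tg = [g for i, g in enumerate(grads) if i == 0 or g != grads[i - 1]]  # class partition
    phi = sum(s_tg) / len(s_tg)
    return mu, phi, (mu * phi - 1) / links

table = [(1, 1, 1, 1), (2, 2, 4, 2), (3, 2, 5, 3), (4, 1, 1, 2)]  # Example 2
print(hubness(table, links=11))  # (2.0, 1.333..., 0.1515...) -> H0 = 5/33
```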

15.1.3.2 C.2 Remarkable Cases

15.1.3.2.1 C.2.1 Case No 1: The Chain
  • Case 1.1: Chain with x nodes

    If x is even:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & 2 & 2 \\ 2 & 1 & 2 & 2 \\ \vdots & \vdots & \vdots & \vdots \\ {\frac{x}{2}} & 1 & 1 & 2 \\ \end{array} } \right| $$
    $$ {\phi^{{[C]}}} = 1\quad {\mu^{{[C]}}} = 2 $$
    $$ {N^{{[C]}}} = x $$
    $$ H_0^{{[C]}} = \frac{{{\mu^{{[C]}}} \cdot {\phi^{{[C]}}} - 1}}{{{N^{{[C]}}} - 1}} = \frac{{2 - 1}}{{x - 1}} = \frac{1}{{x - 1}}. $$

    If x is odd:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & 2 & 2 \\ 2 & 1 & 2 & 2 \\ \vdots & \vdots & \vdots & \vdots \\ {\frac{{(x - 1)}}{2}} & 1 & 2 & 3 \\ \end{array} } \right| $$
    $$ {\phi^{{[C]}}} = 1\quad {\mu^{{[C]}}} = \frac{{2x}}{{x - 1}} $$
    $$ {N^{{[C]}}} = x $$
    $$ H_0^{{[C]}} = \frac{{{\mu^{{[C]}}} \cdot {\phi^{{[C]}}} - 1}}{{{N^{{[C]}}} - 1}} = \frac{{x + 1}}{{{{(x - 1)}^2}}} = \frac{1}{{x - 1}} \cdot \frac{{x + 1}}{{x - 1}}. $$

The value of the hubness depends on the level of connectivity of the graph, that is, on the possibility of reaching any node from any other node using the shortest path (the smallest number of links). In this sense, the presence of hubs (nodes with a high number of links) increases the global connectivity of the graph.

In the case of the chain, the connectivity of the graph is very low: to reach one end from the other, we need to use all the links of the chain. The longer the chain, the more the compactness of the graph decreases. The consequence is that the hubness decreases as 1/x as the number of nodes x increases.

15.1.3.2.2 C.2.2 Case No 2: The Star
  • Case 2.1: Star with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & {x - 1} & x \\ \end{array} } \right| $$
    $$ {\phi^{{[S]}}} = 1\quad {\mu^{{[S]}}} = x $$
    $$ {N^{{[S]}}} = x $$
    $$ H_0^{{[S]}} = \frac{{{\mu^{{[S]}}} \cdot {\phi^{{[S]}}} - 1}}{{{N^{{[S]}}} - 1}} = \frac{{x - 1}}{{x - 1}} = 1. $$

Compared with the chain, the star is the opposite case: each node can reach any other node with at most 2 links (i.e., crossing only one node). This level of connectivity holds steady as the number of nodes of the star increases, so the hubness is equal to 1 regardless of the number x of nodes.
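As a numerical spot-check of Cases 1.1 and 2.1, the sketch below (ours; it assumes the prune() and hubness() helpers defined above are in scope) compares the algorithmic \( H_0 \) with the closed forms \( 1/(x-1) \) and 1:

```python
def chain_edges(x):
    return [(i, i + 1) for i in range(1, x)]          # chain with x nodes, x - 1 links

def star_edges(x):
    return [(1, i) for i in range(2, x + 1)]          # node 1 is the hub; x nodes in total

for x in (4, 6, 8):                                   # even-length chains
    _, _, h0 = hubness(prune(chain_edges(x)), links=x - 1)
    assert abs(h0 - 1 / (x - 1)) < 1e-12              # Case 1.1
    _, _, h0 = hubness(prune(star_edges(x)), links=x - 1)
    assert abs(h0 - 1.0) < 1e-12                      # Case 2.1
```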

  • Case 2.2: Star with x nodes and 1 tail with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & 1 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ x & 1 & 1 & 1 \\ {x + 1} & 1 & {x - 1} & x \\ \end{array} } \right| $$
    $$ \phi ^{\prime} = 1\quad \mu ^{\prime} = \frac{{2x}}{{x + 1}} $$
    $$ N^{\prime} = 2x $$
    $$ A^{\prime} = 2x - 1 $$
    $$ {H^{\prime}_0} = \frac{{\mu ^{\prime} \cdot \phi ^{\prime} - 1}}{{2x - 1}} = \frac{{x - 1}}{{2{x^2} + x - 1}} $$
    $$ {H^{\prime}_0}(3) = \frac{{3 - 1}}{{18 + 3 - 1}} = \frac{1}{{10}} $$
    $$ \mathop{{\lim }}\limits_{{x \to \infty }} {H^{\prime}_0} = \mathop{{\lim }}\limits_{{x \to \infty }} \,\,\,\frac{{x - 1}}{{2{x^2} + x - 1}} = 0. $$

A tail may dramatically collapse the compactness of the whole graph; increasing the number of branches of the star and the number of nodes of the chain at the same time, the hubness goes to 0: the connectivity of the graph is lost.

The following cases show this feature further:

  • Case 2.3: Star with x nodes and 2 tails with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & 2 & 2 \\ \vdots & \vdots & \vdots & \vdots \\ x & 1 & 2 & 2 \\ {x + 1} & 1 & {x - 1} & x \\ \end{array} } \right| $$
    $$ \phi ^{\prime ^ \prime} = 1\quad \mu ^{\prime ^\prime} = \frac{{2x + x}}{{x + 1}} = \frac{{3x}}{{x + 1}} $$
    $$ N'' = 3x $$
    $$ A'' = 3x - 1 $$
    $$ {H''_0} = \frac{{\mu '' \cdot \phi '' - 1}}{{3x - 1}} = \frac{{2x - 1}}{{3{x^2} + 2x - 1}} $$
    $$ H''_0(3) = \frac{6 - 1}{27 + 6 - 1} = \frac{5}{32} $$
    $$ \mathop{{\lim }}\limits_{{x \to \infty }} {H''_0} = \mathop{{\lim }}\limits_{{x \to \infty }} \,\,\frac{{2x - 1}}{{3{x^2} + 2x - 1}} = 0. $$
  • Case 2.4: Star with x nodes and x tails with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & x & x \\ \vdots & \vdots & \vdots & \vdots \\ x & 1 & x & x \\ {x + 1} & 1 & {x - 1} & x \\ \end{array} } \right| $$
    $$ {\phi^{*}} = 1\quad {\mu^{*}} = \frac{{{x^2} + x}}{{x + 1}} = x $$
    $$ {N^{*}} = {x^2} + x $$
    $$ {A^{*}} = {x^2} + x - 1 $$
    $$ H_0^{*} = \frac{{{\mu^{*}} \cdot {\phi^{*}} - 1}}{{{x^2} + x - 1}} = \frac{{x - 1}}{{{x^2} + x - 1}} $$
    $$ H_0^{*}(3) = \frac{{3 - 1}}{{9 + 3 - 1}} = \frac{2}{{11}} $$
    $$ \mathop{{\lim }}\limits_{{x \to \infty }} H_0^{*} = \mathop{{\lim }}\limits_{{x \to \infty }} \,\,\,\frac{{x - 1}}{{{x^2} + x - 1}} = 0. $$
15.1.3.2.3 C.2.3 Case No 3: The Tree
  • Case 3.1: Tree with x nodes and y pruning steps (x ≥ 2; y ≤ x):

    $$ H_0^{{[A]}} = \frac{{{\mu^{{[A]}}} \cdot {\phi^{{[A]}}} - 1}}{{{N^{{[A]}}} - 1}} = \frac{{\frac{x}{y} - 1}}{{x - 1}}. $$

    If y = 1 (star case):

    $$ H_0^{{[A1]}} = \frac{{\frac{x}{y} - 1}}{{x - 1}} = \frac{{x - 1}}{{x - 1}} = 1. $$

    If y = 2:

    \( H_0^{{[A2]}} = \frac{{\frac{x}{y} - 1}}{{x - 1}} = \frac{{\frac{x}{2} - 1}}{{x - 1}} \); therefore, for x = 2, 3, 4, …:

    \( H_0^{{[A2]}} = 0,\,\,\,\frac{1}{4},\,\,\,\frac{1}{3},\,\,\,\frac{3}{8},\,\,\, \cdots \to \,\,\,\frac{1}{2}, \) that is, \( \forall x,\,\,\,H_0^{{[A2]}} < \,\,\,\frac{1}{2} \)

    (note: x = 2 and y = 2 is impossible).

    If y = x − 1:

    \( H_0^{{\left[ {A\left( {x - 1} \right)} \right]}} = \frac{{\frac{x}{{x - 1}} - 1}}{{x - 1}} = \frac{{x - x + 1}}{{{{\left( {x - 1} \right)}^2}}} = \frac{1}{{{{\left( {x - 1} \right)}^2}}} \), then with x = 2, 3, 4, …:

    $$ H_0^{{\left[ {A\left( {x - 1} \right)} \right]}} = 1,\frac{1}{4},\frac{1}{9},\frac{1}{{16}}, \cdots \to 0 $$

    (note: x = 2 and y = 1 is the case of the star with two tails [case 7]).

    If y = x (impossible):

    $$ H_0^{{\left[ {Ax} \right]}} = \frac{{\frac{x}{x} - 1}}{{x - 1}} = \frac{{1 - 1}}{{x - 1}} = 0. $$

    Since for x > 2 and y > 1:

    $$ \frac{x}{y} - 1 \leqslant \frac{x}{2} - 1,\quad {\text{that}}\,{\text{is}}\quad \frac{{\frac{x}{y} - 1}}{{x - 1}} \leqslant \frac{{\frac{x}{2} - 1}}{{x - 1}} $$

    We have for x ≥ 2, 1 < y < x:

    $$ H_0^{{\left[ A \right]}} \leqslant H_0^{{\left[ {A2} \right]}} < \frac{1}{2}. $$

This result highlights that the hubness of a tree is usually very small (< ½; the star [\( H_0 = 1 \)] is an exception). In fact, the lack of closed loops decreases the level of connectivity of the graph: there is only one path between any two nodes (there are no “shortcuts”!). As the number of nodes increases, the compactness of the graph decreases and the hubness goes to 0.

15.1.3.2.4 C.2.4 Case No 4: The Complete Regular Graph
  • Case 4.1: Complete regular graph with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & {x - 1} & {\frac{{{x^2} - x}}{2}} & x \\ \end{array} } \right| $$
    $$ {\phi^{{\left[ {\text{GRC}} \right]}}} = x - 1\quad {\mu^{{\left[ {\text{GRC}} \right]}}} = x $$
    $$ {N^{{\left[ {\text{GRC}} \right]}}} = x $$
    $$ {A^{{\left[ {\text{GRC}} \right]}}} = \frac{{{x^2} - x}}{2} $$
    $$ \begin{array}{llllllll} H_0^{{\left[ {\text{GRC}} \right]}} & = \frac{{{\mu^{{\left[ {\text{GRC}} \right]}}} \cdot {\phi^{{\left[ {\text{GRC}} \right]}}} - 1}}{{\frac{{{x^2} - x}}{2}}}\\ & = 2 \cdot \frac{{x \cdot (x - 1) - 1}}{{{x^2} - x}} = 2 \cdot \frac{{{x^2} - x - 1}}{{{x^2} - x}} = 2 - \frac{2}{{{x^2} - x}}.\end{array} $$

In a complete regular graph, each node is directly linked to every other node; the compactness is the maximum possible, and the hubness (for x ≥ 3) is greater than 1.5 and goes to 2 as the number of nodes increases. Note that the hubness is an extensive variable: it depends not only on the connectivity but also on the dimension (number of nodes); of two complete regular graphs (both of maximum connectivity), the one with more nodes has the higher hubness.
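The closed form of Case 4.1 can be spot-checked the same way (again assuming the prune() and hubness() sketches above are in scope):

```python
def complete_edges(x):
    return [(i, j) for i in range(1, x + 1) for j in range(i + 1, x + 1)]

for x in (3, 5, 10):
    a = x * (x - 1) // 2                              # number of links in K_x
    _, _, h0 = hubness(prune(complete_edges(x)), links=a)
    assert abs(h0 - (2 - 2 / (x * x - x))) < 1e-12    # Case 4.1
```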

  • Case 4.2: Complete regular graph with x nodes and 1 tail with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & 1 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ x & 1 & 1 & 1 \\ {x + 1} & {x - 1} & {\frac{{{x^2} - x}}{2}} & x \\ \end{array} } \right| $$
    $$ \phi ^{\prime} = \frac{x}{2}\quad \mu ^{\prime} = \frac{{2x}}{{x + 1}} $$
    $$ N^{\prime} = 2x $$
    $$ A^{\prime} = \frac{{{x^2} - x}}{2} + x = \frac{{{x^2} + x}}{2} $$
    $$ H'_0 = \frac{\mu' \cdot \phi' - 1}{\frac{x^2 + x}{2}} = \frac{2}{x} \cdot \frac{x^2 - x - 1}{x^2 + 2x + 1} $$
    $$ \lim_{x \to \infty} H'_0 = \lim_{x \to \infty} \frac{2}{x} \cdot \frac{x^2 - x - 1}{x^2 + 2x + 1} = 0. $$

As in Case 2.2, the presence of nodes with a gradient lower than the maximum one makes the hubness collapse below unity. This “hypersensitivity” is examined in the next cases.

  • Case 4.3: Complete regular graph with x nodes and 2 tails with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & 2 & 2 \\ \vdots & \vdots & \vdots & \vdots \\ x & 1 & 2 & 2 \\ {x + 1} & {x - 1} & {\frac{{{x^2} - x}}{2}} & x \\ \end{array} } \right| $$
    $$ \phi '' = \frac{x}{2}\quad \mu '' = \frac{{2x + x}}{{x + 1}} = \frac{{3x}}{{x + 1}} $$
    $$ N'' = 3x $$
    $$ A'' = \frac{{{x^2} - x}}{2} + 2x = \frac{{{x^2} + 3x}}{2} $$
    $$ {H''_0} = \frac{{\mu '' \cdot \phi '' - 1}}{{\frac{{{x^2} + 3x}}{2}}} = \frac{1}{x} \cdot \frac{{3{x^2} - 2x - 2}}{{{x^2} + 4x + 3}} $$
    $$ \mathop{{\lim }}\limits_{{x \to \infty }} {H''_0} = \mathop{{\lim }}\limits_{{x \to \infty }} \,\,\frac{1}{x} \cdot \frac{{3{x^2} - 2x - 2}}{{{x^2} + 4x + 3}} = 0. $$
  • Case 4.4: Complete regular graph with x nodes and x tails with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & x & x \\ \vdots & \vdots & \vdots & \vdots \\ x & 1 & x & x \\ {x + 1} & {x - 1} & {\frac{{{x^2} - x}}{2}} & x \\ \end{array} } \right| $$
    $$ {\phi^{*}} = \frac{x}{2}\quad {\mu^{*}} = \frac{{{x^2} + x}}{{x + 1}} = x $$
    $$ {N^{*}} = {x^2} + x $$
    $$ {A^{*}} = \frac{{{x^2} - x}}{2} + {x^2} = \frac{{3{x^2} - x}}{2} $$
    $$ H_0^{*} = \frac{{{\mu^{*}} \cdot {\phi^{*}} - 1}}{{\frac{{3{x^2} - x}}{2}}} = \frac{{{x^2} - 2}}{{3{x^2} - x}} $$
    $$ \mathop{{\lim }}\limits_{{x \to \infty }} H_0^{*} = \mathop{{\lim }}\limits_{{x \to \infty }} \,\,\,\frac{{{x^2} - 2}}{{3{x^2} - x}} = \frac{1}{3}. $$
  • Case 4.5: Complete regular graph with x nodes and y tails with x nodes (1 < y ≤ x):

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & y & y \\ \vdots & \vdots & \vdots & \vdots \\ x & 1 & y & y \\ {x + 1} & {x - 1} & {\frac{{{x^2} - x}}{2}} & x \\ \end{array} } \right| $$
    $$ \phi ''' = \frac{x}{2} $$
    $$ \mu ''' = \frac{{xy + x}}{{x + 1}} = \frac{{x(y + 1)}}{{x + 1}} $$
    $$ N''^{\prime} = x + xy = x(y + 1) $$
    $$ A''' = \frac{{{x^2} - x}}{2} + xy = \frac{{{x^2} + 2xy - x}}{2} $$
    $$ \begin{array}{lllllllll} {H'''_0} & = \frac{{\frac{{x(y + 1)}}{{x + 1}} \cdot \frac{x}{2} - 1}}{{\frac{{{x^2} + 2xy - x}}{2}}} = \frac{1}{2} \cdot \frac{1}{{x + 1}}\left( {x - \frac{{{x^3} - 3{x^2} + 4x + 4}}{{{x^2} + 2xy - x}}} \right)\\ & = \frac{1}{2} \cdot \frac{x}{{x + 1}}\left( {1 - \frac{{{x^3} - 3{x^2} + 4x + 4}}{{{x^3} + 2{x^2}y - {x^2}}}} \right).\end{array} $$

    If y ≠ x:

    $$ \mathop{{\lim }}\limits_{{x \to \infty }}{H'''_0} = \mathop{{\lim }}\limits_{{x \to\infty }} \,\,\,\frac{1}{2} \cdot \frac{x}{{x + 1}}\left( {1 -\frac{{{x^3} - 3{x^2} + 4x + 4}}{{{x^3} + 2{x^2}y - {x^2}}}} \right)= 0. $$

    If y = x:

    $$ \begin{array}{llllllll} \mathop{{\lim }}\limits_{{x \to \infty}} {{H'''}_0} & = \mathop{{\lim}}\limits_{{x \to \infty }} \frac{1}{2} \cdot \frac{x}{{x +1}}\left( {1 - \frac{{{x^3} - 3{x^2} + 4x + 4}}{{{x^3} + 2{x^2}y -{x^2}}}} \right)\\[3pt] & = \mathop{{\lim }}\limits_{{x \to \infty}} \frac{1}{2} \cdot \frac{x}{{x + 1}}\left( {1 - \frac{{{x^3} -3{x^2} + 4x + 4}}{{{x^3} + 2{x^3} - {x^2}}}} \right) \\[3pt] & =\mathop{{\lim }}\limits_{{x \to \infty }} \frac{1}{2} \cdot\frac{x}{{x + 1}}\left( {1 - \frac{{{x^3} - 3{x^2} + 4x +4}}{{3{x^3} - {x^2}}}} \right)\\[3pt] & = \frac{1}{2} \cdot \left({1 - \frac{1}{3}} \right) = \frac{1}{2} \cdot \left( {\frac{2}{3}}\right) = \frac{1}{3}. \end{array} $$
15.1.3.2.5 C.2.5 Case No 5: The Closed Star
  • Case 5.1: Closed star with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 3 & {2x - 2} & x \\ \end{array} } \right| $$
    $$ {\phi^{{\left[ {\text{SC}} \right]}}} = 3\quad {\mu^{{\left[ {\text{SC}} \right]}}} = x $$
    $$ {N^{{\left[ {\text{SC}} \right]}}} = x $$
    $$ {A^{{\left[ {\text{SC}} \right]}}} = 2(x - 1) $$
    $$ H_0^{{\left[ {\text{SC}} \right]}} = \frac{{{\mu^{{\left[ {\text{SC}} \right]}}} \cdot {\phi^{{\left[ {\text{SC}} \right]}}} - 1}}{{2(x - 1)}} = \frac{{3x - 1}}{{2(x - 1)}} $$
    $$ H_0^{{{\text{[SC]}}}}(5) = \frac{{15 - 1}}{{2(5 - 1)}} = \frac{7}{4} $$
    $$ \mathop{{\lim }}\limits_{{x \to \infty }} H_0^{{{\text{[SC]}}}} = \mathop{{\lim }}\limits_{{x \to \infty }} \frac{{3x - 1}}{{2x - 2}} = \frac{3}{2}. $$

The closed star is an example of a non-regular graph with hubness greater than 1. In fact, the star, which already has very high connectivity, is upgraded by the connections between the “spokes” of the wheel, increasing the space of possible paths (and possible “shortcuts”). This increased compactness explains the high levels of hubness mentioned above.

The following cases show once more how easily the value of the hubness decreases when weakly connected nodes are added.

  • Case 5.2: Closed star with x nodes and 1 tail with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & 1 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ x & 1 & 1 & 1 \\ {x + 1} & 3 & {2x - 2} & x \\ \end{array} } \right| $$
    $$ \phi ^{\prime} = 2\quad \mu ^{\prime} = \frac{{2x}}{{x + 1}} $$
    $$ N^{\prime} = 2x $$
    $$ A^{\prime} = 3x - 2 $$
    $$ {H^{\prime}_0} = \frac{{\mu ^{\prime} \cdot \phi ^{\prime} - 1}}{{3x - 2}} = \frac{{3x - 1}}{{3{x^2} + x - 2}} $$
    $$ {H^{\prime}_0}(5) = \frac{{15 - 1}}{{75 + 5 - 2}} = \frac{7}{{39}} $$
    $$ \mathop{{\lim }}\limits_{{x \to \infty }} {H^{\prime}_0} = \mathop{{\lim }}\limits_{{x \to \infty }} \,\,\,\frac{{3x - 1}}{{3{x^2} + x - 2}} = 0. $$
  • Case 5.3: Closed star with x nodes and 2 tails with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & 2 & 2 \\ \vdots & \vdots & \vdots & \vdots \\ x & 1 & 2 & 2 \\ {x + 1} & 3 & {2x - 2} & x \\ \end{array} } \right| $$
    $$ \phi '' = 2\quad \mu '' = \frac{{2x + x}}{{x + 1}} = \frac{{3x}}{{x + 1}} $$
    $$ N'' = 3x $$
    $$ A'' = 4x - 2 $$
    $$ H''_0 = \frac{\mu'' \cdot \phi'' - 1}{4x - 2} = \frac{5x - 1}{4x^2 + 2x - 2} $$
    $$ {H''_0}(5) = \frac{{25 - 1}}{{100 + 10 - 2}} = \frac{2}{9} $$
    $$ \mathop{{\lim }}\limits_{{x \to \infty }} {H''_0} = \mathop{{\lim }}\limits_{{x \to \infty }} \,\,\frac{{5x - 1}}{{4{x^2} + 2x - 2}} = 0. $$
  • Case 5.4: Closed star with x nodes and x tails with x nodes:

    $$ \left| {\begin{array}{*{20}{c}} M & G & L & N \\ 1 & 1 & x & x \\ \vdots & \vdots & \vdots & \vdots \\ x & 1 & x & x \\ {x + 1} & 3 & {2x - 2} & x \\ \end{array} } \right| $$
    $$ {\phi^{*}} = 2\quad {\mu^{*}} = \frac{{{x^2} + x}}{{x + 1}} = x $$
    $$ {N^{*}} = {x^2} + x $$
    $$ {A^{*}} = {x^2} + 2x - 2 $$
    $$ H_0^{*} = \frac{{{\mu^{*}} \cdot {\phi^{*}} - 1}}{{{x^2} + 2x - 2}} = \frac{{2x - 1}}{{{x^2} + 2x - 2}} $$
    $$ H_0^{*}(5) = \frac{{10 - 1}}{{25 + 10 - 2}} = \frac{3}{{11}} $$
    $$ \mathop{{\lim }}\limits_{{x \to \infty }} H_0^{*} = \mathop{{\lim }}\limits_{{x \to \infty }} \,\,\,\frac{{2x - 1}}{{{x^2} + 2x - 2}} = 0. $$
  • Case 5.5: Closed star with x nodes and maximum gradient

    • Construction procedure of the graph with gradient x − 2 (only with x odd):

      • Take a complete regular graph with dimension x − 1.

      • Erase one link in each pair of nodes (x − 1 is even, since x is odd).

      • Add a new node and link it to the others.

        We have:

        $$ {\mu^{{\left[ {\text{SC}} \right]}}} = x $$
        $$ {\phi^{{\left[ {\text{SC}} \right]}}} = x - 2 $$
        $$ {N^{{\left[ {\text{SC}} \right]}}} = x $$
        $$ \begin{array}{lllllll}{A^{{\left[ {\text{SC}} \right]}}} & =\frac{{{{(x - 1)}^2} - (x - 1)}}{2} - \frac{{x - 1}}{2} + (x - 1)\\& = \frac{{{{(x - 1)}^2} - 2(x - 1) + 2(x - 1)}}{2} =\frac{{{{(x -1)}^2}}}{2} \end{array}$$
        $$ \begin{array}{llllllll} H_0^{{\left[ {\text{SC}} \right]}} &=\frac{{{\mu^{{\left[ {\text{SC}} \right]}}} \cdot {\phi^{{\left[{\text{SC}} \right]}}} - 1}}{{\frac{{{{(x - 1)}^2}}}{2}}}\\ & = 2\cdot \frac{{x(x - 2) - 1}}{{{{(x - 1)}^2}}} = 2 \cdot \frac{{{x^2}- 2x - 1}}{{{{(x - 1)}^2}}} = 2 \cdot \frac{{{x^2} - 2x - 1}}{{{x^2}- 2x + 1}}\end{array} $$
        $$ \mathop{{\lim }}\limits_{{x \to \infty }} H_0^{{\left[ {\text{SC}} \right]}} = \mathop{{\lim }}\limits_{{x \to \infty }} 2 \cdot \frac{{{x^2} - 2x - 1}}{{{x^2} - 2x + 1}} = 2. $$
  • Construction procedure of the graph with gradient x − 3:

    • Take a complete regular graph with dimension x − 1.

    • Erase two links in each node (this can be done with x even as well as with x odd).

    • Add a new node and link it to the others.

      We have:

      $$ {\mu^{{\left[ {\text{SC}} \right]}}} = x $$
      $$ {\phi^{{\left[ {\text{SC}} \right]}}} = x - 3 $$
      $$ {N^{{\left[ {\text{SC}} \right]}}} = x $$
      $$ \begin{array}{lllllll} {A^{{\left[ {\text{SC}} \right]}}} & =\frac{{{{(x - 1)}^2} - (x - 1)}}{2} - (x - 1) + (x - 1)\\ & =\frac{{{{(x - 1)}^2} - (x - 1)}}{2} = \frac{{(x - 1)(x - 2)}}{2}\end{array} $$
      $$ H_0^{\left[ \text{SC} \right]} = \frac{\mu^{\left[ \text{SC} \right]} \cdot \phi^{\left[ \text{SC} \right]} - 1}{\frac{(x - 1)(x - 2)}{2}} = 2 \cdot \frac{x(x - 3) - 1}{(x - 1)(x - 2)} = 2 \cdot \frac{x^2 - 3x - 1}{x^2 - 3x + 2} $$
      $$ \mathop{{\lim }}\limits_{{x \to \infty }} H_0^{{\left[ {\text{SC}} \right]}} = \mathop{{\lim }}\limits_{{x \to \infty }} 2 \cdot \frac{{{x^2} - 3x - 1}}{{{x^2} - 3x + 2}} = 2. $$
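Finally, Case 5.1 can be spot-checked numerically as well (once more assuming the prune() and hubness() helpers from the sketches above are in scope):

```python
def closed_star_edges(x):
    """Wheel with x nodes: hub 0 plus a cycle on the rim nodes 1..x-1."""
    rim = list(range(1, x))
    spokes = [(0, i) for i in rim]
    ring = [(rim[i], rim[(i + 1) % len(rim)]) for i in range(len(rim))]
    return spokes + ring

for x in (5, 7, 9):
    a = 2 * (x - 1)                                        # links of the closed star
    _, _, h0 = hubness(prune(closed_star_edges(x)), links=a)
    assert abs(h0 - (3 * x - 1) / (2 * (x - 1))) < 1e-12   # Case 5.1
```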

© 2013 Springer Science+Business Media Dordrecht

Buscema, M. (2013). Auto-Contractive Maps, H Function, and the Maximally Regular Graph: A New Methodology for Data Mining. In: Buscema, M., & Tastle, W. (Eds.), Intelligent Data Mining in Law Enforcement Analytics. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-4914-6_15