A probabilistic coevolutionary biclustering algorithm for discovering coherent patterns in gene expression dataset
Abstract
Background
Biclustering has been utilized to find functionally important patterns in biological problems. Here, a bicluster is a submatrix, consisting of a subset of rows and a subset of columns of a matrix, that contains homogeneous patterns. The problem of finding biclusters remains challenging because of the computational complexity of capturing patterns across two-dimensional features.
Results
We propose a Probabilistic COevolutionary Biclustering Algorithm (PCOBA) that can cluster the rows and columns of a matrix simultaneously by dynamically adapting multiple species and adopting probabilistic learning. In biclustering problems, a coevolutionary search is suitable because it can optimize the interdependent subcomponents formed by rows and columns. Furthermore, acquiring statistical information on the two populations through probabilistic learning can improve the ability of the search to approach the optimum. We evaluated the performance of PCOBA on synthetic datasets and yeast expression profiles. The results demonstrate that PCOBA outperforms previous evolutionary computation methods as well as other biclustering methods.
Conclusions
Our approach to searching for particular biological patterns could be valuable for systematically understanding functional relationships between genes and other biological components at a genome-wide level.
Keywords
Synthetic dataset, Gene expression dataset, Volume term, Conventional genetic algorithm, Mean squared residue
Background
Since many kinds of biological data can be represented as a two-dimensional matrix, it is important to find the hidden structure contained within such a matrix. Here, the hidden structure means the clusters embedded in subspaces of a high-dimensional dataset [1]. The problem of finding these structures can be solved using biclustering, which is also known as co-clustering or block clustering [2, 3, 4, 5]. A bicluster is a submatrix that consists of a subset of the rows (e.g., genes) and a subset of the columns (e.g., conditions) of the matrix. The purpose of biclustering is to find submatrices whose elements are homogeneous across rows, columns, or both. Biclustering has been applied to diverse areas such as frequent itemset mining, information retrieval, and gene expression analysis [4, 6].
Biclustering has been intensively studied in molecular biology research, as the expression levels of thousands of genes can be measured experimentally using microarrays [7]. DNA microarray data are represented as a matrix of expression levels of genes under different conditions, corresponding to a set of rows and a set of columns. Here, the conditions usually include environments, diseases, and tissues. A biclustering algorithm tries to find a subset of genes exhibiting similar behavior under multiple conditions. The biclustering problem is known to be an NP-hard combinatorial problem [2].
Biclustering problems are more complex than one-way clustering problems because of the coupled landscapes of their search space. Biclustering problems reflect the issues encountered in evolving interdependent subcomponents, which are central to coevolutionary learning. In biclustering, the rows and columns of a matrix can be regarded as interdependent subcomponents. If a biclustering algorithm permits interaction between these subcomponents, it can search a coupled landscape efficiently. For example, Potter and De Jong demonstrated the problem-solving capability of cooperative coevolutionary systems [8, 9], and a subsequent study by Zaritsky and Sipper presented good results for the Shortest Common Superstring (SCS) problem using a cooperative coevolutionary algorithm [10].
Here, we propose a Probabilistic COevolutionary Biclustering Algorithm (PCOBA) to find functional groups of genes and corresponding conditions from microarray datasets. It is based on the concept of coevolutionary learning and probabilistic searching. The most distinctive idea of PCOBA is that it decomposes the entire search space into subcomponents to discover hidden patterns in the matrix. In this algorithm, two populations, corresponding to a subset of rows and a subset of columns, are maintained. Coevolutionary learning evolves the two different populations within the context of each other [11, 12, 13]. PCOBA guides these populations towards the minimum of the objective function representing the quality of the biclusters through cooperation between two populations.
When applied to synthetic datasets and yeast microarray data, the results demonstrate that incorporating probabilistic search into PCOBA improves its ability to find biclusters. The resulting patterns are well enriched for known annotations consistent with biological knowledge. Our approach to searching for important biological patterns could be used to uncover relationships between genes and other biological components at a genome-wide level.
Methods
Biclustering of microarray data
In gene expression data, a bicluster is defined as a subset of the genes and a subset of the conditions. Let G = {g_1, g_2, ..., g_N} be a set of genes and C = {c_1, c_2, ..., c_M} be a set of conditions, such as different tissue samples. The data can be represented as an N × M matrix of real values, denoted E. Each entry e_ij in E indicates the expression level of gene g_i under condition c_j.
Let I be the set of row indices belonging to a row cluster and J be the set of column indices belonging to a column cluster, where I ⊆ {1,...,N} and J ⊆ {1,...,M}. A bicluster is then a submatrix B = (I, J), with |I| ≤ N and |J| ≤ M, where I and J indicate the selected genes (rows) and conditions (columns), respectively. The volume of a bicluster (I, J) is defined as the number of entries e_ij with i ∈ I and j ∈ J.
The mean squared residue of a bicluster (I, J) is defined, following Cheng and Church [2], as

H_IJ = (1 / (|I||J|)) Σ_{i∈I, j∈J} (e_ij − e_iJ − e_Ij + e_IJ)²,

where e_iJ indicates the mean of the entries in row i whose column indices are in J, e_Ij indicates the mean of the entries in column j whose row indices are in I, and e_IJ is the mean of all the entries in the submatrix defined by I and J.
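In code, the mean squared residue of a candidate submatrix follows directly from these row, column, and overall means. The sketch below is a minimal NumPy version; the function and variable names are ours, not from the paper:

```python
import numpy as np

def mean_squared_residue(E, rows, cols):
    """Mean squared residue H_IJ of the submatrix of E indexed by
    row set I (rows) and column set J (cols)."""
    B = E[np.ix_(rows, cols)]
    e_iJ = B.mean(axis=1, keepdims=True)  # row means e_iJ
    e_Ij = B.mean(axis=0, keepdims=True)  # column means e_Ij
    e_IJ = B.mean()                       # overall mean e_IJ
    residues = B - e_iJ - e_Ij + e_IJ
    return float((residues ** 2).mean())

# A perfectly additive (coherent) bicluster has zero residue.
E = np.add.outer([0.0, 10.0, 20.0], [1.0, 2.0, 3.0])
print(mean_squared_residue(E, [0, 1, 2], [0, 1, 2]))  # 0.0
```

Additive patterns (each entry a row effect plus a column effect) score exactly zero, which is why the residue rewards coherent expression behavior rather than identical values.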
By adding this term to the objective function, it is possible to detect fluctuations in gene expression levels under particular conditions or samples.
To find a bicluster, we define an objective function to be minimized that incorporates the following characteristics:

Minimizing the mean squared residue, H_IJ. If the mean squared residue of a bicluster is lower than a parameter value δ, the bicluster is called a δ-bicluster.

Maximizing the variance, which is associated with highly coherent biclusters.

Maximizing the volume, i.e., a large number of genes and conditions.
Probabilistic coevolutionary biclustering
Various attempts have been made to find biclusters in microarray data [2, 14, 15, 16], and several evolutionary algorithms for biclustering have been proposed. Bleuler et al. introduced an evolutionary algorithm coupled with earlier biclustering algorithms [17]. Mitra et al. proposed a multi-objective evolutionary biclustering algorithm incorporating local search strategies [18]. They demonstrated that evolutionary algorithms can successfully improve the quality of biclusters. The search strategy of our algorithm differs from those using conventional operators: it utilizes the global statistical information of two cooperative populations, making its search for biclusters more effective. The key idea is that the algorithm coevolves two populations, one for a gene set and one for a condition set, with each adapting cooperatively to the other.
Coevolutionary optimization
The population of the gene set, Pop_G, and that of the condition set, Pop_C, consist of {x_1, x_2, ..., x_μ} and {y_1, y_2, ..., y_ν}, respectively. Each individual x_i is encoded as a binary string, (x_i^1, x_i^2, ..., x_i^N) ∈ {0, 1}^N, that indicates which genes from the set {g_1, g_2, ..., g_N} are present.
In addition, y_j for a given set of conditions is encoded in the same way as x_i. Therefore, the total search space is Ω = {0, 1}^N × {0, 1}^M. A bicluster (I, J) is given by the indices with value 1 in a pair (x_i, y_j), for i = 1,...,N and j = 1,...,M.
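Under this encoding, a candidate bicluster is recovered by collecting the positions set to 1 in a pair of binary strings. A small illustrative sketch (the function name is ours):

```python
def decode_bicluster(x, y):
    """Map a (gene-string, condition-string) pair to index sets (I, J)."""
    I = [i for i, bit in enumerate(x) if bit == 1]
    J = [j for j, bit in enumerate(y) if bit == 1]
    return I, J

x = [1, 0, 1, 1, 0]     # genes g1, g3, g4 selected
y = [0, 1, 1]           # conditions c2, c3 selected
I, J = decode_bicluster(x, y)
print(I, J)             # [0, 2, 3] [1, 2]
print(len(I) * len(J))  # volume of the bicluster: 6
```

The volume defined earlier is simply the product of the two index-set sizes.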
Fitness evaluation
The score function is designed to measure the quality of a bicluster [19]. The minimum score denotes the best quality: a low mean squared residue, high variance, and large volume. Such a bicluster captures expression patterns shared by many genes across many different conditions.
If H_IJ is greater than δ, then RES reflects the mean squared residue; otherwise, it is set to a constant. Here, δ is predefined by the user. When RES is a constant, the fitness can concentrate more on the variance and volume terms.
Here, w_b is a parameter controlling the weight of the variance term relative to the other terms.
Here, w_v is a control parameter that sets the importance of the volume term. The terms w_g and w_c are weight parameters that balance the contributions of the genes and the conditions.
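The display equations defining this score appear to have been lost in extraction; the sketch below is our plausible reconstruction from the textual description (low residue, high variance, large weighted volume, with RES clamped below δ). The exact functional form in [19] may differ, and the default weights merely echo the parameter table:

```python
import numpy as np

def score(E, rows, cols, delta=20.0, w_b=0.5, w_v=10.0, w_g=0.9, w_c=0.1):
    """Hypothetical bicluster score to MINIMIZE. Rewards low mean
    squared residue, high variance, and large (weighted) volume."""
    B = E[np.ix_(rows, cols)]
    residues = B - B.mean(1, keepdims=True) - B.mean(0, keepdims=True) + B.mean()
    H = float((residues ** 2).mean())
    RES = H if H > delta else delta            # constant below the threshold
    VAR = float(((B - B.mean()) ** 2).mean())  # variance of the submatrix
    N, M = E.shape
    volume_term = w_g * len(rows) / N + w_c * len(cols) / M
    return RES - w_b * VAR - w_v * volume_term

# A large, perfectly coherent bicluster scores lower (better) than a small one.
E = np.add.outer(np.arange(10) * 10.0, np.arange(6) * 1.0)
print(score(E, list(range(10)), list(range(6))) < score(E, [0, 1], [0, 1]))  # True
```

Subtracting the variance and volume terms means the minimization simultaneously drives residue down and variance and volume up, as the bullet list above requires.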
The minimum score determines the fitness of each individual when it is combined with individuals from the other population. In terms of coevolution, individuals adapt cooperatively to the other population.
It may not be necessary to calculate the scores between all x and y pairs when evaluating fitness. If the algorithm calculated the scores of all pairs to select the best collaborator, the evaluation cost would be high. To reduce this cost, we applied the following strategy: the algorithm randomly selects R individuals (R ≤ μ) for each y_j and calculates only their scores. Thus, the total number of evaluations is reduced to R·ν per generation. Since this strategy can affect performance, an appropriate value of R (≥ 10% of the population size) should be chosen carefully.
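The collaborator-sampling strategy above can be sketched as follows: each condition individual is paired with R randomly chosen gene individuals, and its fitness is the best (minimum) score among those pairings. All names and the toy score are placeholders of ours:

```python
import random

def evaluate_population(pop_G, pop_C, score, R, rng=random):
    """Fitness of each y in pop_C: the best (minimum) score over R random
    collaborators x drawn from pop_G, instead of all mu pairings."""
    fitness = []
    for y in pop_C:
        collaborators = rng.sample(pop_G, R)
        fitness.append(min(score(x, y) for x in collaborators))
    return fitness

# Toy example: the "score" counts mismatched bits between x and y.
pop_G = [[0, 0], [0, 1], [1, 1]]
pop_C = [[0, 0], [1, 1]]
random.seed(0)
fit = evaluate_population(pop_G, pop_C,
                          lambda x, y: sum(a != b for a, b in zip(x, y)), R=2)
print(fit)
```

With R equal to the full gene-population size this reduces to exhaustive pairing; smaller R trades evaluation cost against the risk of missing the best collaborator, which is why the text advises choosing R carefully.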
Probabilistic update of a population
The next population is generated by sampling from a probability distribution and by a mutation operator. While the probabilistic update of the populations utilizes statistical information from the previous generation, the mutation operator exploits location information in the solution space. Combining an EDA with a conventional operator [20, 21] can improve performance with regard to the optimality and convergence of conventional genetic algorithms.
Each probability p^k is updated toward the frequency f^k of ones at position k among the selected best individuals, p^k ← (1 − α)·p^k + α·f^k (and analogously with β for the condition population), where α ∈ (0, 1) and β ∈ (0, 1) are the parameters controlling the updates. This updating rule is similar to that of the population-based incremental learning (PBIL) algorithm [22]. In each generation, two sets of best individuals, S_g and S_c, are selected based on fitness, and each probability is updated from the fraction of ones in the selected individuals. This probabilistic model for generating the next population is relatively simple.
We applied an additional mutation operator to generate offspring because it helps increase the diversity of the population. The number of individuals selected for mutation differed from S_g and S_c, and was set to maintain sufficient selection pressure. Thus, half of the population was generated from the probability distribution, and the other half by the mutation operator.
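A minimal sketch of this PBIL-style update and the half-sampled, half-mutated generation step, under our assumptions about details the paper leaves implicit (e.g. independent bitwise mutation):

```python
import numpy as np

def pbil_update(p, selected, alpha):
    """Move probability vector p toward the frequency of ones at each
    position among the selected best individuals."""
    freq = np.asarray(selected).mean(axis=0)
    return (1.0 - alpha) * p + alpha * freq

def next_population(p, pop, size, mutation_rate, rng):
    """Half the offspring sampled from p, half produced by bitwise
    mutation of individuals drawn from the current population."""
    half = size // 2
    sampled = (rng.random((half, p.size)) < p).astype(int)
    parents = pop[rng.integers(0, len(pop), size - half)]
    flips = rng.random(parents.shape) < mutation_rate
    mutated = np.where(flips, 1 - parents, parents)
    return np.vstack([sampled, mutated])

rng = np.random.default_rng(1)
p = np.full(8, 0.5)
selected = rng.integers(0, 2, (10, 8))  # stand-in for S_g, the best individuals
p = pbil_update(p, selected, alpha=0.2)
pop = next_population(p, selected, size=20, mutation_rate=0.05, rng=rng)
print(pop.shape)  # (20, 8)
```

In PCOBA two such vectors would be maintained, one of length N for Pop_G (updated with α) and one of length M for Pop_C (updated with β).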
Other evolutionary algorithms
Here, we describe the three evolutionary algorithms used for comparison.
Genetic algorithm (GA)
The genotype of a bicluster is a single concatenated bit string, (x_i^1, x_i^2, ..., x_i^N, y_i^1, y_i^2, ..., y_i^M). Reproduction and mutation are used as the genetic operators. A crossover operator was not applied in this study, since crossover tends to form biclusters with a high volume, which hinders finding good solutions. In reproduction, individuals were selected using proportional selection. The population size was 100, and the mutation rate was set to 0.05.
Coevolutionary genetic algorithm (CGA)
Unlike in the conventional genetic algorithm, the genotype of a bicluster is not a single bit string; it is separated into two parts. The genetic operators are the same as in the genetic algorithm, and the method of coevolution is the same as in PCOBA.
Estimation of the distribution algorithm (EDA)
The encoding of individuals is the same as in the genetic algorithm. However, the next population is generated from a probability vector based on the PBIL algorithm together with mutation, as in PCOBA. The probability vector is (p_g^1, p_g^2, ..., p_g^N, p_c^1, p_c^2, ..., p_c^M).
Results
Experimental data preparation and parameter setting
We performed experiments on both synthetic datasets and a yeast gene expression dataset to evaluate the performance of PCOBA. The synthetic datasets, E_a, E_b, and E_c, are noisy matrices resembling gene expression data, with homogeneous block structures (submatrices coupling genes and conditions) embedded in them. The matrices were filled with random values ranging from 0 to 500, and then a fixed number of clusters were embedded. First, we examined whether PCOBA could find a single homogeneous block structure in E_a, which embeds only one bicluster. E_a is a noisy matrix of 100 rows × 20 columns with a single 16 × 9 structure.
Furthermore, we studied whether PCOBA was able to find multiple homogeneous block structures in E_b, which embeds multiple biclusters. Although the volumes of these datasets are relatively small, finding biclusters can be difficult when a block is very homogeneous. Therefore, we designed E_b to embed more homogeneous blocks: it contains three different structures (16 rows × 9 columns, 10 rows × 5 columns, and 10 rows × 10 columns) in a noisy 100 × 20 matrix. The residue scores of these structures were below δ = 20, where δ is the residue-score threshold; a lower score indicates higher-quality biclusters.
E_c was used to examine the ability to find biclusters in a higher-dimensional dataset. Real datasets, such as gene expression data, are large matrices; in general, as the dimension of a matrix grows, the volume of its biclusters increases and the matrix contains more biclusters. We designed E_c with these conditions in mind: it is a 1,500 × 30 matrix containing three 100 × 15 structures, all with residue scores below δ = 300.
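Synthetic matrices of this kind can be generated along the following lines: a uniform noise matrix over [0, 500] with a homogeneous block embedded at random rows and columns. The sizes mirror E_a; using a constant-valued block (which has mean squared residue exactly zero) is our simplification of "homogeneous":

```python
import numpy as np

def make_synthetic(n_rows=100, n_cols=20, block=(16, 9), seed=0):
    """Noisy matrix with values in [0, 500] and one embedded
    homogeneous (here: constant, zero-residue) block."""
    rng = np.random.default_rng(seed)
    E = rng.uniform(0, 500, size=(n_rows, n_cols))
    rows = rng.choice(n_rows, block[0], replace=False)
    cols = rng.choice(n_cols, block[1], replace=False)
    E[np.ix_(rows, cols)] = 250.0  # embed the coherent block
    return E, sorted(rows), sorted(cols)

E, I, J = make_synthetic()
print(E.shape, len(I), len(J))  # (100, 20) 16 9
```

Embedding several blocks of different sizes in the same matrix, as for E_b and E_c, is a straightforward extension of this loop.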
The real dataset consists of gene expression profiles from yeast microarrays. Typically, a microarray experiment assesses the expression of a large number of genes under various conditions, which may be a time series during a biological process or a collection of different tissue samples, e.g., normal versus tumor tissues. The performance of our algorithm was measured on the cell cycle expression data of the yeast Saccharomyces cerevisiae obtained from Tavazoie et al. [23]. The matrix contains the expression levels of 2,884 genes (rows) under 17 conditions (columns). Missing values were replaced by random numbers sampled from a uniform distribution between 0 and 600.
Parameter setting of PCOBA
Parameter  Description  Synthetic dataset (values for E_c in parentheses)  Real dataset

μ  Population size for genes  100 (1,000)  1,000
ν  Population size for conditions  50  100
MaxGen  Maximum number of generations  100 (200)  500
δ  Cutoff of the residue score  20 (300)  250
w_b  Weight of the variance term  0.5  0.5
w_v  Weight of the volume term  10 (30)  30
w_g, w_c  Balance between genes and conditions  0.9, 0.1 (0.8, 0.2)  0.8, 0.2
α, β  Update rates of the probabilities  0.2, 0.2  0.2, 0.2
S_g, S_c  Number of best individuals (genes, conditions)  20, 10 (200, 10)  200, 20
Searching biclusters using the PCOBA
Comparison with other evolutionary algorithms
In this section, we compare the performance of PCOBA with other evolutionary algorithms. The purpose of this comparison was to analyze the effects of coevolution and of estimation of distribution, and the potential synergy of the two strategies.
We applied four algorithms, the Genetic Algorithm (GA), the Coevolutionary Genetic Algorithm (CGA) [11], the Estimation of Distribution Algorithm (EDA) [24], and the proposed PCOBA, to the synthetic datasets. For a fair comparison, the number of evaluations was the same for all algorithms. For the E_a and E_b datasets, the runs were terminated after the following numbers of evaluations: for GA and EDA, a population of 100 × 1,000 generations; for CGA and PCOBA, a population of 100 × 10 selected gene individuals × 100 generations, where the 10 selected gene individuals correspond to the R value used to reduce the evaluation cost (see Methods). For the E_c dataset, the number of evaluations was set to a population of 1,000 × 1,000 generations for GA and EDA, and a population of 1,000 × 10 selected gene individuals × 100 generations for CGA and PCOBA.
Comparison of the performance of PCOBA and other evolutionary algorithms.
Datasets  Algorithms  Avg. Fitness  Avg. Residue  Avg. Variance  Avg. Volume 

E _{ a }  GA  11.96 ± 16.32  203.51 ± 323.67  19745 ± 9587.70  105.28 ± 54.28 
CGA  3.90 ± 6.99  36.63 ± 140.32  21220 ± 7202  72.39 ± 20.11  
EDA  5.80 ± 11.14  81.84 ± 220.84  23527 ± 6719.4  127.48 ± 21.64  
PCOBA  1.88 ± 0.06  0.05 ± 0.00  26254 ± 833.22  104.90 ± 8.49  
E _{ b }  GA  5.59 ± 10.16  76.67 ± 201.51  18570 ± 7496.3  107.17 ± 38.87 
CGA  3.05 ± 5.02  20.03 ± 100.81  22489 ± 6876.7  75.49 ± 18.99  
EDA  5.12 ± 8.28  67.63 ± 163.60  20862 ± 6834.7  112.36 ± 44.52  
PCOBA  2.03 ± 1.35  2.74 ± 26.88  25199 ± 3295.9  99.66 ± 16.92  
E _{ c }  GA  2.21 ± 0.02  262.63 ± 9.05  3807.20 ± 1068  470.96 ± 18.90 
CGA  2.20 ± 0.03  263.09 ± 7.55  3229.40 ± 1160.4  443.00 ± 19.07  
EDA  2.22 ± 0.05  263.94 ± 6.96  2359.70 ± 228.74  450.83 ± 50.57  
PCOBA  1.94 ± 0.05  265.01 ± 4.63  2473.50 ± 176.1  562.63 ± 47.43 
Real datasets such as gene expression data usually have large dimensions and contain multiple homogeneous blocks, which makes it difficult to obtain good solutions. Thus, E_c was used to evaluate performance with respect to scalability in dataset size. All the algorithms found scores below δ, and the average scores of GA, CGA, and EDA differed little. However, PCOBA achieved a substantially higher value for the volume term.
Comparison with other biclustering algorithms
We compared performance with the Cheng and Church (CC) and Order-Preserving Submatrix (OPSM) biclustering algorithms using the cell cycle expression data of the yeast Saccharomyces cerevisiae. The CC algorithm, proposed by Cheng and Church [2], employs a relaxed "greedy" search heuristic; we set its parameter δ to the same value as ours. OPSM was introduced by Ben-Dor et al. [25] and was designed to discover biclusters exhibiting coherent behavior in the columns; it thus focuses on the relative order of the columns.
Performance of PCOBA and other biclustering algorithms.
PCOBA  CC  OPSM  

Avg. Residue  219.15 ± 1.14  221.40 ± 8.99  447.72 ± 88.36 
Avg. Variance  412.11 ± 17.62  404.67 ± 134.26  1224.89 ± 415.95 
Avg. Volume  1321.30 ± 102.82  1369.18 ± 366.90  1365.40 ± 1642.85 
Avg. Num. (Genes)  92.40 ± 1.64  98.54 ± 21.89  265.10 ± 412.22 
Avg. Num. (Conditions)  14.30 ± 0.48  12.18 ± 2.37  8.50 ± 3.02 
Functional analysis of the discovered clusters by PCOBA
To validate the discovered biclusters, we analyzed the functional correlations between clustered genes using Protein Interaction Network Analysis (PINA) [26] for the yeast dataset. We present two biclusters of particular biological significance. Table S1 (Additional File 1) lists the two identified biclusters with their most enriched GO biological process terms and KEGG pathways (p-value < 0.01). In particular, 'cell cycle' is assigned as an enriched pathway in Cluster I, whose members are highly connected by protein interactions. 'Metabolic process'-related terms are enriched in Cluster II; methionine metabolism is known to be associated with cell cycle progression [27]. These properties confirm the biological relevance of the identified biclusters.
Conclusions
We have proposed a biclustering algorithm (PCOBA) that can cluster the rows and columns of a two-dimensional matrix simultaneously, based on coevolutionary search. PCOBA can be considered a synergistic optimization technique that combines a coevolutionary search with a population-based probabilistic search. In particular, it is a novel algorithm that can extract highly correlated patterns over both variables of a two-way problem in matrix-form data. It offers an efficient procedure for discovering coherent patterns, since it decomposes the task through coevolutionary search and exploits accumulated global information in the complex problem of a large-scale matrix. The performance of PCOBA was tested on synthetic datasets, where it outperformed conventional evolutionary computation methods, including a genetic algorithm, a coevolutionary genetic algorithm, and an estimation of distribution algorithm. In addition, the results on yeast expression datasets showed that our method can produce biclusters of higher quality with regard to coherent patterns. Our method provides substantial guidance for developing algorithms that find hidden patterns in matrix-form datasets generated in various research fields, including biology.
Notes
Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 20120005643) and Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012R1A1A2002804).
This article has been published as part of BMC Bioinformatics Volume 13 Supplement 17, 2012: Eleventh International Conference on Bioinformatics (InCoB2012): Bioinformatics. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcbioinformatics/supplements/13/S17.
Supplementary material
References
1. Yang J, Wang W, Wang H, Yu P: δ-Clusters: capturing subspace correlation in a large data set. Proceedings of the 18th International Conference on Data Engineering (ICDE 2002). 2002, 517-528.
2. Cheng Y, Church G: Biclustering of expression data. Proceedings of the Eighth International Conference on Intelligent Systems for Molecular Biology (ISMB 2000). 2000, 93-103.
3. Gupta R, Rao N, Kumar V: Discovery of error-tolerant biclusters from noisy gene expression data. BMC Bioinformatics. 2011, 12 (Suppl 12): S1.
4. Liu J, Li Z, Hu X, Chen Y, Park E: Dynamic biclustering of microarray data by multi-objective immune optimization. BMC Genomics. 2011, 12 (Suppl 2): S11.
5. Smet R, Marchal K: An ensemble biclustering approach for querying gene expression compendia with experimental lists. Bioinformatics. 2011, 27 (14): 1948-1956.
6. Dhillon IS, Mallela S, Modha DS: Information-theoretic co-clustering. Proceedings of the 9th International Conference on Knowledge Discovery and Data Mining (KDD 2003). 2003, 89-98.
7. Madeira SC, Oliveira AL: Biclustering algorithms for biological data analysis: a survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2004, 1 (1): 24-45.
8. Potter MA, De Jong KA: A cooperative coevolutionary approach to function optimization. Proceedings of the Third Conference on Parallel Problem Solving from Nature (PPSN 1994). 1994, 249-257.
9. Potter MA, De Jong KA: Cooperative coevolution: an architecture for evolving coadapted subcomponents. Evolutionary Computation. 2000, 8: 1-29.
10. Zaritsky A, Sipper M: Coevolving solutions to the shortest common superstring problem. BioSystems. 2004, 76: 209-216.
11. Hillis DW: Co-evolving parasites improve simulated evolution as an optimization procedure. Physica D. 1990, 42: 228-234.
12. Axelrod R: The evolution of strategies in the iterated prisoner's dilemma. In Genetic Algorithms and Simulated Annealing. Edited by Davis L. 1987, 32-41.
13. Barricelli NA: Numerical testing of evolution theories, part I: theoretical introduction and basic tests. Acta Biotheoretica. 1962, 16: 69-98.
14. Yang J, Wang W, Wang H, Yu P: Enhanced biclustering on expression data. Proceedings of the Third IEEE Conference on Bioinformatics and Bioengineering (BIBE 2003). 2003, 321-327.
15. Wu CJ, Kasif S: GEMS: a web server for biclustering analysis of expression data. Nucleic Acids Research. 2005, 33: W596-W599.
16. Prelic A, Bleuler S, Zimmermann P, Wille A, Buhlmann P, Gruissem W, Hennig L, Thiele L, Zitzler E: A systematic comparison and evaluation of biclustering methods for gene expression data. Bioinformatics. 2006, 22 (9): 1122-1129.
17. Bleuler S, Prelić A, Zitzler E: An EA framework for biclustering of gene expression data. Proceedings of the Congress on Evolutionary Computation (CEC 2004). 2004, 166-173.
18. Mitra S, Banka H, Pal SK: A MOE framework for biclustering of microarray data. Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006). 2006, 1154-1157.
19. Divina F, Aguilar-Ruiz J: Biclustering of expression data with evolutionary computation. IEEE Transactions on Knowledge and Data Engineering. 2006, 18 (5): 590-602.
20. Pena JM, Robles V, Larranaga P, Herves V, Rosales F, Perez MS: GA-EDA: hybrid evolutionary algorithm using genetic and estimation of distribution algorithms. Proceedings of the 17th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems. 2004, 361-371.
21. Zhang Q, Sun J, Tsang E: An evolutionary algorithm with guided mutation for the maximum clique problem. IEEE Transactions on Evolutionary Computation. 2005, 9 (2): 192-200.
22. Baluja S: Population-based incremental learning: a method for integrating genetic search based function optimization and competitive learning. Technical Report CMU-CS-94-163, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA. 1994.
23. Tavazoie S, Hughes J, Campbell M, Cho R, Church G: Systematic determination of genetic network architecture. Nature Genetics. 1999, 22: 281-285.
24. Pelikan M, Goldberg DE, Lobo F: A survey of optimization by building and using probabilistic models. Computational Optimization and Applications. 2002, 21 (1): 5-20.
25. Ben-Dor A, Chor B, Karp R, Yakhini Z: Discovering local structure in gene expression data: the order-preserving submatrix problem. Journal of Computational Biology. 2003, 10: 373-384.
26. Cowley M, Pinese M, Kassahn K, Waddell N, Pearson J, Grimmond S, Biankin A, Hautaniemi S, Wu J: PINA v2.0: mining interactome modules. Nucleic Acids Research. 2012, 40: D862-D865.
27. Dummitt B, Micka WS, Chang YH: N-terminal methionine removal and methionine metabolism in Saccharomyces cerevisiae. Journal of Cellular Biochemistry. 2003, 89: 964-974.
Copyright information
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.