A Genetic Classifier Account for the Regulation of Expression

  • Tsvi Achler
  • Eyal Amir
Part of the Springer Optimization and Its Applications book series (SOIA, volume 38)


This work is motivated by our model of neural processing, which incorporates large numbers of reentrant top-down feedback connections for regulation. Such regulation is fundamental and can be found throughout biology. The purpose of this chapter is to broaden the model's application.

Genes perform essential life functions and are responsible for virtually every organic molecule an organism produces. Genes must closely regulate the amount of their products, because too little or too much production can be deleterious to the organism. Furthermore, they must respond efficiently, and in unison, to the environments the organism faces. Closely regulated networks can behave as robust classifiers that recognize and respond to their environment. Using simple examples, we demonstrate that such networks perform dynamic classification, determining the most efficient set of genes needed to replace consumed products.
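The dynamics described above can be sketched in a few lines. The following is a minimal illustration, not the chapter's actual model: it uses a shunt-style feedback update in the spirit of the authors' recurrent feedback networks, and the two-gene example, the function name, and the parameter choices are all hypothetical. Each gene's activation is repeatedly rescaled by how much product demand remains unexplained by the rest of the network, so the most efficient set of producers wins out.

```python
import numpy as np

def regulatory_feedback(W, x, steps=200, eps=1e-12):
    """Iterate a shunt-style regulatory feedback network (illustrative sketch).

    W[i, j] = 1 if gene j produces product i, else 0.
    x[i]    = demand (consumption) of product i.
    Returns the steady-state activation of each gene.
    """
    n_products, n_genes = W.shape
    fan_out = W.sum(axis=0)            # number of products each gene makes
    y = np.full(n_genes, 0.5)          # start all genes mildly active
    for _ in range(steps):
        feedback = W @ y + eps         # total production aimed at each product
        x_norm = x / feedback          # demand still unexplained, per product
        y = y * (W.T @ x_norm) / fan_out
    return y

# Hypothetical example: gene g1 makes product a only; g2 makes both a and b.
W = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Both products consumed: g2 alone covers the demand most efficiently.
print(regulatory_feedback(W, np.array([1.0, 1.0])))  # approx [0, 1]
# Only product a consumed: g1 wins; activating g2 would waste product b.
print(regulatory_feedback(W, np.array([1.0, 0.0])))  # approx [1, 0]
```

Note that no gene competes with another directly; competition emerges only through the shared feedback on jointly produced products, which is what lets the same fixed network select different efficient gene sets for different consumption patterns.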





I would like to thank Eyal Amir, Satwik Rajaram, Frances R. Wang, Tal Raveh, Cyrus Omar, and Robert Lee DeVille for helpful suggestions. This work was supported by the US National Geospatial Agency Grant HM1582-06-BAA-0001.



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Department of Computer Science, University of Illinois Urbana-Champaign, Urbana, USA
