Pooling spike neural network for fast rendering in global illumination

  • Joseph Constantin
  • Andre Bigand
  • Ibtissam Constantin
IWANN 2017: Learning algorithms with real-world applications


The generation of photo-realistic images is a major topic in computer graphics. By simulating the physical propagation of light, images that are indistinguishable from real photographs can be generated. However, this computation is very time-consuming: when the real behavior of light is simulated, an image can take hours to reach sufficient quality. This paper proposes a bio-inspired architecture with spiking neurons for fast rendering in global illumination. The objective is to find the number of paths required for each image so that it is perceived as identical to the visually converged image computed by the path tracing algorithm. The challenge is that the visually converged image is unknown, so the system must start from a very noisy image and converge toward a less noisy one. The architecture, with functional parts for sparse encoding, dynamic learning, and decoding, is built around a robust block-based convergence measure. Different pooling strategies are applied to separate noise from signal in a deep learning process. The learning algorithm selects the most pertinent images using clustering-based dynamic learning, and the system computes a learning parameter for each image based on its level of noise. The experiments are conducted on a global illumination data set containing a large number of images with different resolutions and noise levels, computed using diffuse and specular rendering. For the scenes with \(512\times 512\) resolution, 3232 different images are used for learning and 9696 for testing; for the scenes with \(800\times 800\) resolution, the training and testing data contain, respectively, 3760 and 6320 images. The result is a system composed of only two spike pattern association neurons that accurately predicts the quality of images with respect to human psycho-visual scores. The pooling spike neural network has been compared with the support vector machine and the fast relevance vector machine.
The obtained results show that the proposed method achieves promising accuracy, measured as the mean square error on each block of the scenes and the deviation of the perception models' actual thresholds from the desired human psycho-visual scores, while requiring fewer parameters.
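To illustrate the idea of a block-based convergence measure with pooling, the following is a minimal sketch, not the authors' actual method: it compares two successive renderings block by block and pools the per-block errors into a single stopping score. The block size, the pooling choices, and the `tol` threshold are illustrative assumptions; the paper learns the stopping decision with spiking neurons rather than a fixed threshold.

```python
import numpy as np

def block_mse(img_a, img_b, block=32):
    """Mean squared error over non-overlapping blocks of two grayscale images."""
    h, w = img_a.shape
    errs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = img_a[y:y + block, x:x + block].astype(float)
            b = img_b[y:y + block, x:x + block].astype(float)
            errs.append(np.mean((a - b) ** 2))
    return np.array(errs)

def converged(prev_img, curr_img, block=32, pool="max", tol=1.0):
    """Declare convergence when the pooled block error falls below tol.

    Max pooling is conservative (the noisiest block decides); mean pooling
    averages the noise over the whole image.
    """
    errs = block_mse(prev_img, curr_img, block)
    score = errs.max() if pool == "max" else errs.mean()
    return bool(score < tol)
```

In a progressive renderer, `converged` would be evaluated each time a new batch of paths has been traced, stopping path tracing for images whose pooled block error is already below the perceptual threshold.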


Keywords: Clustering-based dynamic learning · Global illumination · Sparse coding · Pooling spike neural network



This project has been funded with support from the Lebanese University under Grant Number 428/2015.

Compliance with ethical standards

Conflict of interest

This is to certify that all the authors have participated sufficiently in the work to take public responsibility for the content, including participation in the concept, design, analysis, writing, or revision of the manuscript. Furthermore, each author certifies that this material or similar material has not been and will not be submitted to or published in any other publication.



Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Laboratoire de Physique Appliquée, Faculté des Sciences 2, Université Libanaise, Jdeidet, Lebanon
  2. LISIC, ULCO, Calais Cedex, France
