
On the Co-absence of Input Terms in Higher Order Neuron Representation of Boolean Functions

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10262)

Abstract

Boolean functions (BFs) can be represented by polynomial functions when −1 and +1 are used to represent True and False, respectively. The coefficients of the representing polynomial can be obtained by exact interpolation given the truth table of the BF. A more parsimonious representation can be obtained with the so-called polynomial sign representation, where exact interpolation is relaxed so that only the sign of the polynomial needs to agree with the BF value of True or False. This corresponds exactly to the higher order neuron, or sigma-pi unit, model of biological neurons. It is of interest to know what minimal set of monomials, or input lines, suffices to represent a BF. In this study, we approach the problem by investigating the (small) subsets of monomials that cannot be absent as a whole from the representation of a given BF. With numerical investigations, we study low-dimensional BFs and introduce a graph representation to visually describe the behavior of two-element monomial subsets with respect to whether they cannot be jointly absent from any sign representation. Finally, we prove that for any n-variable BF, a three-element monomial set cannot be absent as a whole if and only if all pairs from that set have the same property. The results and the direction taken in this study may lead to more efficient algorithms for finding higher order neuron representations of Boolean functions with close-to-minimal input terms.
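
As an illustration of the exact interpolation step mentioned above (a minimal sketch under the True = −1, False = +1 convention of the abstract, not code from the paper), the coefficient of each monomial in the unique multilinear interpolant is the average of the BF value times that monomial over all inputs in {−1, +1}^n:

```python
# Minimal sketch: exact multilinear interpolation of a Boolean function given
# as a {-1, +1}-valued truth table over {-1, +1}^n (True = -1, False = +1, as
# in the abstract). Illustrative only; not the authors' code.
from itertools import combinations, product

def interpolate(f, n):
    """Return {monomial: coefficient}, keyed by the tuple of variable indices,
    for the unique multilinear polynomial that equals f on {-1, +1}^n."""
    inputs = list(product((-1, 1), repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            total = 0
            for x in inputs:
                mono = 1
                for i in subset:
                    mono *= x[i]                   # value of the monomial on input x
                total += f(x) * mono
            coeffs[subset] = total / len(inputs)   # average over all inputs
    return coeffs

# Example: 2-variable AND (True = -1): f is True only when both inputs are True.
AND = lambda x: -1 if x[0] == -1 and x[1] == -1 else 1
print(interpolate(AND, 2))
# {(): 0.5, (0,): 0.5, (1,): 0.5, (0, 1): -0.5}, i.e. 0.5 + 0.5*x1 + 0.5*x2 - 0.5*x1*x2
```

A sign representation then only requires the sign of such a polynomial to agree with the BF, which is what makes it possible to drop some monomials (input lines of the higher order neuron) altogether.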


Notes

  1. Example sign representation with \( \{x_1, x_2, x_1 x_3\} \) absent: \( f(x_1, x_2, x_3, x_4) = 689 - 689 x_1 x_2 + 689 x_3 + 1056 x_2 x_3 - 689 x_1 x_2 x_3 + 689 x_4 + 977 x_1 x_4 + 977 x_2 x_4 - 689 x_1 x_2 x_4 - 689 x_3 x_4 - 977 x_1 x_3 x_4 - 977 x_2 x_3 x_4 + 689 x_1 x_2 x_3 x_4 \).
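
The example can be checked mechanically: a valid sign representation must be nonzero on every input of {−1, +1}^4, and the omitted monomials x1, x2 and x1x3 indeed do not appear. A small verification sketch (not from the paper):

```python
# Verify the footnote's polynomial: it omits x1, x2 and x1*x3 by construction
# and must never evaluate to zero on {-1, +1}^4 to be a sign representation.
from itertools import product

def p(x1, x2, x3, x4):
    return (689 - 689*x1*x2 + 689*x3 + 1056*x2*x3 - 689*x1*x2*x3
            + 689*x4 + 977*x1*x4 + 977*x2*x4 - 689*x1*x2*x4
            - 689*x3*x4 - 977*x1*x3*x4 - 977*x2*x3*x4
            + 689*x1*x2*x3*x4)

values = [p(*x) for x in product((-1, 1), repeat=4)]
assert all(v != 0 for v in values)              # never vanishes, so signs are well defined
signs = [1 if v > 0 else -1 for v in values]    # the Boolean function being represented
print(signs)
```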


Author information

Corresponding author

Correspondence to Oytun Yapar.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Yapar, O., Oztop, E. (2017). On the Co-absence of Input Terms in Higher Order Neuron Representation of Boolean Functions. In: Cong, F., Leung, A., Wei, Q. (eds) Advances in Neural Networks - ISNN 2017. Lecture Notes in Computer Science, vol 10262. Springer, Cham. https://doi.org/10.1007/978-3-319-59081-3_43

  • DOI: https://doi.org/10.1007/978-3-319-59081-3_43

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59080-6

  • Online ISBN: 978-3-319-59081-3

  • eBook Packages: Computer Science, Computer Science (R0)
