On the Number of Bits to Encode the Outputs of Densely Deployed Sensors

  • David L. Neuhoff
  • S. Sandeep Pradhan

Suppose M sensors are densely deployed throughout some bounded geographical region in order to sample a stationary two-dimensional random field, such as temperature. Suppose also that each sensor encodes its measurements into bits in a lossy fashion for transmission to some collector or fusion center where the continuous-space field is reconstructed. We consider the following question. If the distortion in the reconstruction is required to be D or less, what happens to the total number of bits produced by the encoders as the sensors become more numerous and dense? Does the increasing number of sensors mean that the total number of bits increases without limit? Or does the increasing correlation between neighboring sensor values sufficiently mitigate the increasing number of sensors to permit the total number of bits to remain bounded as M increases?
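The question above can be probed numerically for a simplified stand-in: a one-dimensional, unit-variance Gauss-Markov (Ornstein-Uhlenbeck) field sampled at M equispaced points on [0, 1]. A joint (centralized) encoder lower-bounds the total rate of any distributed scheme, and for a Gaussian source that bound is computable by reverse water-filling over the eigenvalues of the sample covariance matrix. The sketch below is not from the chapter; the function name `sum_rate`, the kernel, and all parameter values are illustrative assumptions.

```python
import numpy as np

def sum_rate(M, D, corr_len=1.0):
    """Centralized rate (bits) to encode M equispaced samples of a unit-variance
    Ornstein-Uhlenbeck field on [0, 1] with average per-sample MSE at most D
    (requires 0 < D < 1, since each sample has unit variance)."""
    t = np.linspace(0.0, 1.0, M)
    K = np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)  # sample covariance
    lam = np.clip(np.linalg.eigvalsh(K), 0.0, None)          # eigenvalues >= 0
    # Reverse water-filling: bisect for the water level theta satisfying
    # sum_i min(theta, lam_i) = M * D.
    lo, hi = 0.0, lam.max()
    for _ in range(200):
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, lam).sum() < M * D:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    above = lam[lam > theta]                                 # coded components
    return 0.5 * np.sum(np.log2(above / theta))

if __name__ == "__main__":
    for M in (16, 32, 64, 128):
        print(M, round(sum_rate(M, D=0.05), 2))
```

Since the eigenvalues of the sample covariance approach M times the Karhunen-Loeve eigenvalues of the kernel, this centralized total should level off rather than grow linearly in M; the chapter's question is whether distributed (separate) encoders can achieve comparable boundedness.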


Keywords: Mean Square Error; Distortion Function; Scalar Quantization; Test Channel; Gaussian Source





Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • David L. Neuhoff (1)
  • S. Sandeep Pradhan (1)

  1. Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, USA
