# Separation Theorems And Partial Orderings For Sensor Network Problems

Michael C. Gastpar
Chapter

In this chapter, we discuss information-theoretic techniques for understanding sensor network performance. From an information-theoretic perspective, sensor network problems are typically joint source-channel coding problems: the goal is to recover an approximate version of the underlying source information (in contrast to, for example, standard channel coding problems, where the goal is to communicate bits at the smallest possible error probability). Hence, the overall encoding process maps a sequence of source observations into a suitable sequence of channel inputs in such a way that the decoder, upon observing a noisy version of that sequence, can estimate the source observations at the highest possible fidelity. Successful code constructions must exploit both the structure of the underlying source (and the mechanism by which the source is observed) and the structure of the communication channel. Designing codes that exploit both simultaneously should be expected to be a rather difficult task, and it is therefore somewhat surprising that Shannon [27] found a very elegant solution for the case of point-to-point communication (as long as both the source and the channel are stationary and ergodic, and cost and fidelity are assessed by per-letter criteria). The solution consists in separating the overall task into two: an optimal communication strategy can be designed in two parts, a source code, exploiting the structure of the source and the observation process, followed by a channel code, exploiting the structure of the communication channel. The two stages are connected by a universal interface, bits, that does not depend on the precise structure of either. For the purposes of this chapter, we will refer to such an architecture as separation-based.
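The two-stage architecture described above can be sketched in a toy simulation. This is a minimal illustration, not a construction from the chapter: the uniform scalar quantizer, the rate-1/5 repetition code, and the binary symmetric channel are all assumptions chosen for simplicity, standing in for the (generally far more sophisticated) source and channel codes. The point is only the interface: the source code produces bits, and the channel code operates on those bits without knowing anything about the source.

```python
import random

# Toy separation-based chain (all components are illustrative choices):
#   source sample -> [source code] -> bits -> [channel code] -> noisy
#   channel -> [channel decode] -> bits -> [source decode] -> estimate.

def source_encode(x, levels=16):
    """Source code: uniform scalar quantizer on [0, 1), emitting 4 bits."""
    idx = min(int(x * levels), levels - 1)
    return [(idx >> i) & 1 for i in reversed(range(4))]

def source_decode(bits, levels=16):
    """Reconstruct the midpoint of the quantization cell from 4 bits."""
    idx = sum(b << i for i, b in enumerate(reversed(bits)))
    return (idx + 0.5) / levels

def channel_encode(bits, r=5):
    """Channel code: rate-1/r repetition code (repeat each bit r times)."""
    return [b for b in bits for _ in range(r)]

def channel_decode(symbols, r=5):
    """Majority vote over each block of r received symbols."""
    return [int(sum(symbols[i:i + r]) > r // 2)
            for i in range(0, len(symbols), r)]

def bsc(symbols, p, rng):
    """Binary symmetric channel: flip each symbol with probability p."""
    return [s ^ (rng.random() < p) for s in symbols]

rng = random.Random(0)
x = 0.42
bits = source_encode(x)        # source code exploits the source structure
tx = channel_encode(bits)      # channel code exploits the channel structure
rx = bsc(tx, p=0.1, rng=rng)   # noisy observation of the channel inputs
x_hat = source_decode(channel_decode(rx))
# x_hat is within one quantization cell of x whenever the majority
# vote corrects all channel flips.
```

Note that the two stages communicate only through the bit sequence `bits`: swapping in a different channel (and matching channel code) leaves the source code untouched, which is precisely the appeal of the separation-based architecture in the point-to-point setting.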

## Keywords

Sensor Network · Network Code · Channel Code · Separation Theorem · Source Symbol

## References

1. R. Ahlswede. Multi-way communication channels. In Proc. IEEE Int. Symp. Info. Theory, Tsahkadsor, Armenian S.S.R., September 1971.
2. R. Ahlswede and T. S. Han. On source coding with side information via a multiple-access channel and related problems in multi-user information theory. IEEE Transactions on Information Theory, IT-29(3):396-412, May 1983.
3. J. Barros and S. Servetto. Reachback capacity with non-interfering nodes. In Proc. IEEE Int. Symp. Info. Theory, Yokohama, Japan, July 2003.
4. T. M. Cover, A. A. El Gamal, and M. Salehi. Multiple access channels with arbitrarily correlated sources. IEEE Transactions on Information Theory, 26(6):648-657, November 1980.
5. T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New York, 2nd edition, 2006.
6. I. Csiszár and J. Körner. Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic Press, New York, 1981.
7. M. Gastpar. To Code Or Not To Code. PhD thesis, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, 2002.
8. M. Gastpar. On capacity under receive and spatial spectrum-sharing constraints. IEEE Transactions on Information Theory, 53(2):471-487, February 2007.
9. M. Gastpar. Uncoded transmission is exactly optimal for a simple Gaussian sensor network. In Proc. 2007 Information Theory and Applications Workshop, San Diego, CA, February 2007.
10. M. Gastpar, B. Rimoldi, and M. Vetterli. To code or not to code. In Proc. IEEE Int. Symp. Info. Theory, page 236, Sorrento, Italy, June 2000.
11. M. Gastpar, B. Rimoldi, and M. Vetterli. To code, or not to code: Lossy source-channel communication revisited. IEEE Transactions on Information Theory, 49(5):1147-1158, May 2003.
12. M. Gastpar and M. Vetterli. On the capacity of wireless networks: The relay case. In Proc. IEEE Infocom 2002, volume 3, pages 1577-1586, New York, NY, June 2002.
13. M. Gastpar and M. Vetterli. Source-channel communication in sensor networks. In Leonidas J. Guibas and Feng Zhao, editors, 2nd International Workshop on Information Processing in Sensor Networks (IPSN'03), pages 162-177. Lecture Notes in Computer Science, Springer, New York, NY, April 2003.
14. M. Gastpar and M. Vetterli. Power, spatio-temporal bandwidth, and distortion in large sensor networks. IEEE Journal on Selected Areas in Communications (Special Issue on Self-Organizing Distributive Collaborative Sensor Networks), 23(4):745-754, April 2005.
15. M. Gastpar, M. Vetterli, and P. L. Dragotti. Sensing reality and communicating bits: A dangerous liaison. IEEE Signal Processing Magazine, 23(4):70-83, July 2006.
16. J. Körner and K. Marton. How to encode the modulo-two sum of binary sources. IEEE Transactions on Information Theory, IT-25(2):219-221, March 1979.
17. H. Liao. A coding theorem for multiple access communications. In Proc. IEEE Int. Symp. Info. Theory, Asilomar, CA, February 1972.
18. J. L. Massey. Causality, feedback and directed information. In Proc. 1990 Int. Symp. Info. Theory and Its Applications (ISITA'90), pages 303-305, Hawaii, U.S.A., November 1990.
19. N. Merhav and S. Shamai. On joint source-channel coding for the Wyner-Ziv source and the Gel'fand-Pinsker channel. IEEE Transactions on Information Theory, IT-49(11):2844-2855, November 2003.
20. B. Nazer and M. Gastpar. Reliable computation over multiple access channels. In Proc. 43rd Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, September 2005.
21. B. Nazer and M. Gastpar. Computing over multiple-access channels with connections to wireless network coding. In Proc. 2006 IEEE International Symposium on Information Theory (ISIT 2006), Seattle, WA, July 2006.
22. Y. Oohama. The rate-distortion function for the quadratic Gaussian CEO problem. IEEE Transactions on Information Theory, IT-44(3):1057-1070, May 1998.
23. L. H. Ozarow. The capacity of the white Gaussian multiple access channel with feedback. IEEE Transactions on Information Theory, IT-30(4):623-629, July 1984.
24. S. S. Pradhan, S. Choi, and K. Ramchandran. An achievable rate region for multiple access channels with correlated messages. In Proc. IEEE Int. Symp. Info. Theory, Chicago, IL, June-July 2004.
25. S. Ray, M. Effros, M. Médard, R. Koetter, T. Ho, D. Karger, and J. Abounadir. On separation, randomness and linearity for network codes over finite fields. March 2006. http://arxiv.org/abs/cs/0603022.
26. Z. Reznic, R. Zamir, and M. Feder. Joint source-channel coding of a Gaussian mixture source over a Gaussian broadcast channel. IEEE Transactions on Information Theory, IT-48(3):776-781, March 2002.
27. C. E. Shannon. A mathematical theory of communication. Bell Sys. Tech. Journal, 27:379-423, 623-656, 1948.
28. D. Slepian and J. K. Wolf. Noiseless coding of correlated information sources. IEEE Transactions on Information Theory, IT-19:471-480, 1973.
29. I. E. Telatar. Capacity of multi-antenna Gaussian channels. Bell Labs Technical Memorandum, June 1995. Also published in European Transactions on Telecommunications, 10(6):585-596, Nov.-Dec. 1999.