Capacity bounds for multiple access-cognitive interference channel

  • Mahtab Mirmohseni
  • Bahareh Akhbari
  • Mohammad Reza Aref
Open Access
Research
Part of the following topical collections:
  1. Ten Years of Cognitive Radio: State of the Art and Perspectives

Abstract

Motivated by the uplink scenario in cellular cognitive radio, this study considers a communication network in which a point-to-point channel with a cognitive transmitter and a Multiple Access Channel (MAC) with common information share the same medium and interfere with each other. A Multiple Access-Cognitive Interference Channel (MA-CIFC) is proposed with three transmitters and two receivers, and its capacity region in different interference regimes is investigated. First, inner bounds on the capacity region are derived for the general discrete memoryless case. Next, an outer bound on the capacity region is provided for the full parameter regime. Using the derived inner and outer bounds, the capacity region for a class of degraded MA-CIFC is characterized. Two sets of strong interference conditions are also derived under which the capacity regions are established. Then, an investigation of the Gaussian case is presented, and the capacity regions are derived in the weak and strong interference regimes. Some numerical examples are also provided.

Keywords

Cognitive interference channel · Multiple access channel · Strong interference · Weak interference · Capacity region

1. Introduction

Interference avoidance techniques have traditionally been used in wireless networks wherein multiple source-destination pairs share the same medium. However, the broadcast nature of wireless networks may enable cooperation among entities, which ensures higher rates and more reliable communication. On the other hand, due to the increasing number of wireless systems, spectrum resources have become scarce and expensive. The exponentially growing demand for wireless services, along with the rapid advancements in wireless technology, has led to cognitive radio technology, which aims to overcome the spectrum inefficiency problem by developing communication systems that can sense the environment and adapt to it [1].

In overlay cognitive networks, the cognitive user can transmit simultaneously with the non-cognitive users and compensate for the interference by cooperation in sending, i.e., relaying, the non-cognitive users' messages [1]. From an information theoretic point of view, the Cognitive Interference Channel (CIFC) was first introduced in [2] to model an overlay cognitive radio and refers to a two-user Interference Channel (IFC) in which the cognitive user (secondary user) has the ability to obtain the message being transmitted by the other user (primary user), either in a non-causal or in a causal manner. An achievable rate region for the non-causal CIFC was derived in [2] by combining Gel'fand-Pinsker (GP) binning [3] with a well-known simultaneous superposition coding scheme (rate splitting) applied to the IFC [4]. For the non-causal CIFC, where the cognitive user has non-causal full or partial knowledge of the primary user's transmitted message, several achievable rate regions and capacity results in some special cases have been established [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. More recently, a three-user cognitive radio network with one primary user and two cognitive users has been studied in [15, 16], where an achievable rate region is derived for this setup based on rate splitting and GP binning.

In the interference avoidance-based systems, i.e., when the communication medium is interference-free, uplink transmission is modeled with a Multiple Access Channel (MAC) whose capacity region has been fully characterized for independent transmitters [17, 18] as well as for the transmitters with common information [19]. Recently, taking the effects of interference into account in the uplink scenario, a MAC and an IFC have been merged into one setup by adding one more transmit-receive pair to the communication medium of a two-user MAC [20, 21], where the channel inputs at the transmitters are independent and there is no cognition or cooperation.

In this paper, we introduce the Multiple Access-Cognitive Interference Channel (MA-CIFC) by providing the transmitter of the point-to-point channel with cognition capabilities in the uplink-with-interference model. Moreover, the transmitters of the MAC have common information that enables cooperation among them. As shown in Figure 1, the proposed channel consists of three transmitters and two receivers: a two-user MAC with common information as the primary network, and a point-to-point channel with a cognitive transmitter that knows the messages being sent by all of the transmitters in a non-causal manner. A physical example of this channel is the coexistence of cognitive users with the licensed primary users in a cellular or satellite uplink transmission, where the cognitive radios exploit side information about the environment to maintain or improve the communication of primary users while also achieving some spectrum resources for their own communication. In this scenario, the primary non-cognitive users can be oblivious to, or aware of, the cognitive users [1]. When the non-cognitive user is oblivious to the cognitive user's presence, its receiver's decoding process is independent of the interference caused by the cognitive user's transmission. In fact, the primary receiver treats interference as noise. However, in the aware scenario, the decoding process at the primary receiver can be adapted to improve its own rate. For example, the primary receiver can decode the cognitive user's message and cancel the interference when the interfering signal is strong enough. If multi-antenna capability is available at the primary receiver, it can also reduce or increase the interfering signal by beam-steering, which results in the occurrence of the weak or strong interference regimes [1].
Figure 1

Graphic representation for MA-CIFC.

To analyze the capacity region of MA-CIFC, we first derive three inner bounds on the capacity region (achievable rate regions). The first two bounds assume an oblivious primary receiver, which does not decode the cognitive user's message but treats it as noise. Two different coding schemes are proposed based on superposition coding, GP binning, and the method of [6] in defining auxiliary Random Variables (RVs). Later, we show that these strategies are optimal for a degraded MA-CIFC and also in the Gaussian weak interference regime. In the third achievability scheme, we consider an aware primary receiver and obtain an inner bound on the capacity region by using superposition coding in the encoding part and allowing both receivers to decode all messages with simultaneous joint decoding in the decoding part. This strategy is capacity-achieving in the strong interference regime. Next, we provide a general outer bound on the capacity region and derive conditions under which the first achievability scheme achieves capacity for the degraded MA-CIFC. We continue the capacity results by deriving two sets of strong interference conditions, under which the third inner bound achieves capacity. Further, we compare these two sets of conditions and identify the weaker set. We also extend the strong interference results to a network with k primary users.

Moreover, we consider the Gaussian case and find capacity results for the Gaussian MA-CIFC in both the weak and strong interference regimes. We use the second derived inner bound to show that the capacity-achieving scheme in weak interference consists of Dirty Paper Coding (DPC) [22] at the cognitive transmitter and treating interference as noise at both receivers. We also provide some numerical examples.
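Since DPC is central to the weak-interference result, it may help to recall Costa's classical statement of it [22]; the formulation below is the standard one and is not specific to the MA-CIFC model:

```latex
% Costa's dirty paper coding (standard form).
% Channel: Y = X + S + Z, with Gaussian state S known non-causally at the
% encoder, noise Z ~ N(0, N) independent of S, and power constraint E[X^2] <= P.
\begin{align}
  C \;=\; \max_{p(u, x \mid s)} \big[\, I(U; Y) - I(U; S) \,\big]
    \;=\; \frac{1}{2} \log\!\left( 1 + \frac{P}{N} \right),
\end{align}
% achieved by the auxiliary RV U = X + \alpha S with \alpha = P / (P + N):
% the known interference S is pre-canceled and incurs no rate loss.
```

In other words, the encoder that knows the interference non-causally attains the same rate as if the interference were absent, which is exactly the role DPC plays at the cognitive transmitter in Section 6.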

The rest of the paper is organized as follows. Section 2 introduces the MA-CIFC model and the notation. Three inner bounds and an outer bound on the capacity region are derived in Section 3 and Section 4, respectively, for the discrete memoryless MA-CIFC. Section 5 presents the capacity results for the discrete memoryless MA-CIFC in three special cases. In Section 6, the Gaussian MA-CIFC is investigated. Finally, Section 7 concludes the paper.

2. Channel models and preliminaries

Throughout the paper, upper case letters (e.g., $X$) are used to denote RVs and lower case letters (e.g., $x$) show their realizations. The probability mass function (p.m.f) of a RV $X$ with alphabet set $\mathcal{X}$ is denoted by $p_X(x)$, where the subscript $X$ is occasionally omitted. $A_\epsilon^n(X, Y)$ specifies the set of $\epsilon$-strongly, jointly typical sequences of length $n$. The notation $X_i^j$ indicates a sequence of RVs $(X_i, X_{i+1}, \ldots, X_j)$, where $X^j$ is used instead of $X_1^j$ for brevity. $\mathcal{N}(0, \sigma^2)$ denotes a zero-mean normal distribution with variance $\sigma^2$.

Consider the MA-CIFC in Figure 2, which is denoted by $(\mathcal{X}_1 \times \mathcal{X}_2 \times \mathcal{X}_3,\ p(y_1^n, y_3^n \mid x_1^n, x_2^n, x_3^n),\ \mathcal{Y}_1 \times \mathcal{Y}_3)$, where $X_1 \in \mathcal{X}_1$, $X_2 \in \mathcal{X}_2$ and $X_3 \in \mathcal{X}_3$ are the channel inputs at Transmitter 1 (Tx1), Transmitter 2 (Tx2) and Transmitter 3 (Tx3), respectively; $Y_1 \in \mathcal{Y}_1$ and $Y_3 \in \mathcal{Y}_3$ are the channel outputs at Receiver 1 (Rx1) and Receiver 3 (Rx3), respectively; and $p(y_1^n, y_3^n \mid x_1^n, x_2^n, x_3^n)$ is the channel transition probability distribution. In $n$ channel uses, each Tx$j$ desires to send a message pair $(m_0, m_j)$ to Rx1, where $j \in \{1, 2\}$, and Tx3 desires to send a message $m_3$ to Rx3.
Figure 2

Multiple Access-Cognitive Interference Channel (MA-CIFC).

Definition 1: A $(2^{nR_0}, 2^{nR_1}, 2^{nR_2}, 2^{nR_3}, n)$ code for MA-CIFC consists of (i) four independent message sets $\mathcal{M}_j = \{1, \ldots, 2^{nR_j}\}$, where $j \in \{0, 1, 2, 3\}$; (ii) two encoding functions at the primary transmitters, $f_1: \mathcal{M}_0 \times \mathcal{M}_1 \to \mathcal{X}_1^n$ at Tx1 and $f_2: \mathcal{M}_0 \times \mathcal{M}_2 \to \mathcal{X}_2^n$ at Tx2; (iii) an encoding function at the cognitive transmitter, $f_3: \mathcal{M}_0 \times \mathcal{M}_1 \times \mathcal{M}_2 \times \mathcal{M}_3 \to \mathcal{X}_3^n$; and (iv) two decoding functions, $g_1: \mathcal{Y}_1^n \to \mathcal{M}_0 \times \mathcal{M}_1 \times \mathcal{M}_2$ at Rx1 and $g_3: \mathcal{Y}_3^n \to \mathcal{M}_3$ at Rx3. We assume that the channel is memoryless. Thus, the channel transition probability distribution is given by
$$p(y_1^n, y_3^n \mid x_1^n, x_2^n, x_3^n) = \prod_{i=1}^{n} p(y_{1,i}, y_{3,i} \mid x_{1,i}, x_{2,i}, x_{3,i}). \qquad (1)$$
The probability of error for this code is defined as
$$P_e = \frac{1}{2^{n(R_0+R_1+R_2+R_3)}} \sum_{m_0, m_1, m_2, m_3} p\big[\{g_3(Y_3^n) \neq m_3\} \cup \{g_1(Y_1^n) \neq (m_0, m_1, m_2)\} \,\big|\, (m_0, m_1, m_2, m_3)\ \text{sent}\big].$$

Definition 2: A rate quadruple $(R_0, R_1, R_2, R_3)$ is achievable if there exists a sequence of $(2^{nR_0}, 2^{nR_1}, 2^{nR_2}, 2^{nR_3}, n)$ codes with $P_e \to 0$ as $n \to \infty$. The capacity region $\mathcal{C}$ is the closure of the set of all achievable rates.

3. Inner bounds on the capacity region of discrete memoryless MA-CIFC

Now, we derive three achievable rate regions for the general setup. Theorems 1 and 2 assume an oblivious primary receiver (Rx1), which does not decode the cognitive user's message (m3) and treats it as noise. The decoding procedure at the cognitive receiver (Rx3) differs in these schemes. In Theorem 1, the cognitive receiver (Rx3) decodes the primary messages (m0, m1, m2), and all the transmitters use superposition coding. However, in Theorem 2, the cognitive receiver (Rx3) also treats the interference from the primary messages (m0, m1, m2) as noise, while the cognitive transmitter (Tx3) uses GP binning to precode its message for interference cancellation at Rx3. We also utilize the method of [6] in defining auxiliary RVs, which helps us to achieve the outer bound in special cases. In fact, we achieve the outer bound of Theorem 4 using the region of Theorem 1 for a class of degraded MA-CIFC in Section 5. The region of Theorem 2 is used in Section 6 to derive the capacity region in the weak interference regime. In the scheme of Theorem 3, we consider an aware primary receiver (Rx1) which decodes the cognitive user's message (m3). The cognitive receiver (Rx3) also decodes the primary messages (m0, m1, m2). Therefore, this region is obtained by using superposition coding in the encoding part and by allowing both receivers to decode all messages with simultaneous joint decoding in the decoding part. In Section 5, we show that this strategy is capacity-achieving in the strong interference regime. Proofs are provided in "Appendix A".

Theorem 1: The union of rate regions given by
$$R_3 \le I(X_3; Y_3 \mid T, U, X_1, V, X_2) \qquad (2)$$
$$R_1 \le I(U, X_1; Y_1 \mid T, V, X_2) \qquad (3)$$
$$R_2 \le I(V, X_2; Y_1 \mid T, U, X_1) \qquad (4)$$
$$R_1 + R_2 \le I(U, X_1, V, X_2; Y_1 \mid T) \qquad (5)$$
$$R_0 + R_1 + R_2 \le I(T, U, X_1, V, X_2; Y_1) \qquad (6)$$
$$R_1 + R_3 \le I(U, X_1, X_3; Y_3 \mid T, V, X_2) \qquad (7)$$
$$R_2 + R_3 \le I(V, X_2, X_3; Y_3 \mid T, U, X_1) \qquad (8)$$
$$R_1 + R_2 + R_3 \le I(U, X_1, V, X_2, X_3; Y_3 \mid T) \qquad (9)$$
$$R_0 + R_1 + R_2 + R_3 \le I(T, U, X_1, V, X_2, X_3; Y_3) \qquad (10)$$
is achievable for MA-CIFC, where the union is over all p.m.fs that factor as
$$p(t)\, p(u, x_1 \mid t)\, p(v, x_2 \mid t)\, p(x_3 \mid t, u, x_1, v, x_2). \qquad (11)$$
Theorem 2: The union of rate regions given by (3)-(6) and
$$R_3 \le I(W; Y_3) - I(W; T, U, X_1, V, X_2) \qquad (12)$$
is achievable for MA-CIFC, where the union is over all p.m.fs that factor as
$$p(t)\, p(u, x_1 \mid t)\, p(v, x_2 \mid t)\, p(w, x_3 \mid t, u, x_1, v, x_2). \qquad (13)$$
Theorem 3: The union of rate regions given by
$$R_3 \le I(X_3; Y_3 \mid X_1, X_2, T) \qquad (14)$$
$$R_1 + R_3 \le \min\{I(X_1, X_3; Y_1 \mid X_2, T),\ I(X_1, X_3; Y_3 \mid X_2, T)\} \qquad (15)$$
$$R_2 + R_3 \le \min\{I(X_2, X_3; Y_1 \mid X_1, T),\ I(X_2, X_3; Y_3 \mid X_1, T)\} \qquad (16)$$
$$R_0 + R_1 + R_2 + R_3 \le \min\{I(X_1, X_2, X_3; Y_1),\ I(X_1, X_2, X_3; Y_3)\} \qquad (17)$$
is achievable for MA-CIFC, where the union is over all p.m.fs that factor as
$$p(t)\, p(x_1 \mid t)\, p(x_2 \mid t)\, p(x_3 \mid x_1, x_2, t). \qquad (18)$$

Remark 1: We utilize the region of Theorem 1 in Section 5 to achieve capacity results for a class of degraded MA-CIFC, and the region of Theorem 2 to derive the results for the Gaussian case in Section 6. The region of Theorem 3 is also used to characterize the capacity region under strong interference conditions in Section 5.

4. An outer bound on the capacity region of discrete memoryless MA-CIFC

Here, we derive a general outer bound on the capacity region of MA-CIFC, which is used to obtain the capacity region for a class of degraded MA-CIFC in Section 5 and also to find capacity results for the Gaussian MA-CIFC in the weak interference regime in Section 6. Let $\mathcal{R}_{o1}$ denote the union of all rate quadruples $(R_0, R_1, R_2, R_3)$ satisfying (3)-(6) and
$$R_3 \le I(X_3; Y_3, Y_1 \mid T, U, X_1, V, X_2), \qquad (19)$$

where the union is over all p.m.fs that factor as (11).

Theorem 4: The capacity region of MA-CIFC satisfies $\mathcal{C} \subseteq \mathcal{R}_{o1}$.
Proof: Consider a $(2^{nR_0}, 2^{nR_1}, 2^{nR_2}, 2^{nR_3}, n)$ code with average error probability $P_e^n \to 0$. Define the following RVs for $i = 1, \ldots, n$:
$$T_i = (M_0, Y_1^{i-1}) \qquad (20)$$
$$U_i = (M_0, M_1, Y_1^{i-1}) = (M_1, T_i) \qquad (21)$$
$$V_i = (M_0, M_2, Y_1^{i-1}) = (M_2, T_i) \qquad (22)$$
Considering the encoding functions $f_1$ and $f_2$, defined in Definition 1, and the above definitions for the auxiliary RVs, we remark that $(X_{1,i}, U_i) \to T_i \to (X_{2,i}, V_i)$ forms a Markov chain. Thus, these choices of auxiliary RVs satisfy the p.m.f (11) of Theorem 4. Now, using Fano's inequality [23], we derive the bounds in Theorem 4. For the first bound, we have:
$$
\begin{aligned}
nR_3 = H(M_3) &\overset{(a)}{=} H(M_3 \mid M_0, M_1, M_2) \\
&= I(M_3; Y_3^n \mid M_0, M_1, M_2) + H(M_3 \mid Y_3^n, M_0, M_1, M_2) \\
&\overset{(b)}{\le} I(M_3; Y_3^n \mid M_0, M_1, M_2) + n\delta_{3n}
\end{aligned} \qquad (23)
$$
where (a) follows since messages are independent and (b) holds due to Fano's inequality and the fact that conditioning does not increase entropy. Hence,
$$
\begin{aligned}
nR_3 - n\delta_{3n} &\le I(M_3; Y_3^n \mid M_0, M_1, M_2) \\
&\overset{(a)}{\le} I(M_3, X_3^n; Y_3^n, Y_1^n \mid M_0, M_1, M_2, X_1^n, X_2^n) \\
&\overset{(b)}{=} \sum_{i=1}^{n} I(M_3, X_3^n; Y_{3,i}, Y_{1,i} \mid M_0, M_1, M_2, X_1^n, X_2^n, Y_3^{i-1}, Y_1^{i-1}) \\
&\overset{(c)}{\le} \sum_{i=1}^{n} \big[ H(Y_{3,i}, Y_{1,i} \mid M_0, M_1, M_2, X_{1,i}, X_{2,i}, Y_1^{i-1}) - H(Y_{3,i}, Y_{1,i} \mid M_0, M_1, M_2, Y_1^{i-1}, X_{1,i}, X_{2,i}, X_{3,i}) \big] \\
&\overset{(d)}{=} \sum_{i=1}^{n} I(X_{3,i}; Y_{3,i}, Y_{1,i} \mid T_i, U_i, X_{1,i}, V_i, X_{2,i})
\end{aligned} \qquad (24)
$$

where (a) is due to the encoding functions f1, f2 and f3, defined in Definition 1, and the non-negativity of mutual information, (b) is obtained from the chain rule, (c) follows from the memoryless property of the channel and the fact that conditioning does not increase entropy, and (d) is obtained from (20)-(22).

Now, applying Fano's inequality and the independence of the messages, we can bound R1 as:
$$
\begin{aligned}
nR_1 - n\delta_{1n} &\le I(M_1; Y_1^n \mid M_0, M_2) \\
&\overset{(a)}{=} \sum_{i=1}^{n} I(M_1, X_{1,i}; Y_{1,i} \mid M_0, M_2, X_{2,i}, Y_1^{i-1}) \\
&= \sum_{i=1}^{n} I(M_1, X_{1,i}, M_0, Y_1^{i-1}; Y_{1,i} \mid M_0, M_2, X_{2,i}, Y_1^{i-1}) \\
&\overset{(b)}{=} \sum_{i=1}^{n} I(U_i, X_{1,i}; Y_{1,i} \mid T_i, V_i, X_{2,i}),
\end{aligned} \qquad (25)
$$
where (a) follows from the chain rule and the encoding functions f1 and f2, and (b) from (20)-(22). Similarly, we can show that
$$nR_2 - n\delta_{2n} \le \sum_{i=1}^{n} I(V_i, X_{2,i}; Y_{1,i} \mid T_i, U_i, X_{1,i}). \qquad (26)$$
Next, based on similar arguments, we bound R1 + R2 as
$$
\begin{aligned}
n(R_1 + R_2) - n(\delta_{1n} + \delta_{2n}) &\le I(M_1, M_2; Y_1^n \mid M_0) \\
&= \sum_{i=1}^{n} I(M_1, X_{1,i}, M_2, X_{2,i}; Y_{1,i} \mid M_0, Y_1^{i-1}) \\
&= \sum_{i=1}^{n} I(M_1, X_{1,i}, M_2, X_{2,i}, Y_1^{i-1}; Y_{1,i} \mid M_0, Y_1^{i-1}) \\
&= \sum_{i=1}^{n} I(U_i, X_{1,i}, V_i, X_{2,i}; Y_{1,i} \mid T_i).
\end{aligned} \qquad (27)
$$
The last sum-rate bound can be derived as follows:
$$
\begin{aligned}
n(R_0 + R_1 + R_2) - n(\delta_{0n} + \delta_{1n} + \delta_{2n}) &\le I(M_0, M_1, M_2; Y_1^n) \\
&= \sum_{i=1}^{n} I(M_0, M_1, X_{1,i}, M_2, X_{2,i}; Y_{1,i} \mid Y_1^{i-1}) \\
&\overset{(a)}{\le} \sum_{i=1}^{n} \big[ H(Y_{1,i}) - H(Y_{1,i} \mid M_0, M_1, X_{1,i}, M_2, X_{2,i}, Y_1^{i-1}) \big] \\
&= \sum_{i=1}^{n} I(T_i, U_i, X_{1,i}, V_i, X_{2,i}; Y_{1,i})
\end{aligned} \qquad (28)
$$

where (a) follows since conditioning does not increase entropy. Using the standard time-sharing argument for (24)-(28) completes the proof.

5. Capacity results for discrete memoryless MA-CIFC

In this section, we characterize the capacity region of MA-CIFC under specific conditions. First, we consider a class of degraded MA-CIFC and derive conditions under which the inner bound in Theorem 1 achieves the outer bound of Theorem 4. Next, we investigate the strong interference regime by deriving two sets of strong interference conditions under which the region of Theorem 3 achieves capacity. We also compare these two sets of conditions and identify the weaker set. Finally, we extend the strong interference results to a network with k primary users.

A. Degraded MA-CIFC

Now, we characterize the capacity region for a class of MA-CIFC with a degraded primary receiver. We define MA-CIFC with a degraded primary receiver as a MA-CIFC where $Y_1$ and $X_3$ are independent given $(Y_3, X_1, X_2)$. More precisely, the following Markov chain holds conditioned on $(X_1, X_2)$:
$$X_3 \to Y_3 \to Y_1, \qquad (29)$$

or equivalently, $X_3 \to (X_1, X_2, Y_3) \to Y_1$ forms a Markov chain. This means that the primary receiver (Rx1) observes a degraded, or noisier, version of the cognitive user's signal (Tx3) compared with the cognitive receiver (Rx3).
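As a concrete instance of this degradedness (illustrative only, not taken from the text), consider a Gaussian channel in which Rx1 observes a further-noised version of Rx3's output:

```latex
% Hypothetical physically degraded Gaussian instance of (29):
\begin{align}
  Y_3 &= X_1 + X_2 + X_3 + Z_3, \\
  Y_1 &= Y_3 + Z',
\end{align}
% with Z' independent of (X_1, X_2, X_3, Z_3). Given (X_1, X_2, Y_3), the
% output Y_1 = Y_3 + Z' does not depend on X_3, so the Markov chain
% X_3 -> (X_1, X_2, Y_3) -> Y_1 holds by construction.
```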

Assume that the following conditions are satisfied for MA-CIFC over all p.m.fs that factor as (11):
$$I(U, X_1; Y_1 \mid T, V, X_2) \le I(U, X_1; Y_3 \mid T, V, X_2) \qquad (30)$$
$$I(V, X_2; Y_1 \mid T, U, X_1) \le I(V, X_2; Y_3 \mid T, U, X_1) \qquad (31)$$
$$I(U, X_1, V, X_2; Y_1 \mid T) \le I(U, X_1, V, X_2; Y_3 \mid T) \qquad (32)$$
$$I(T, U, X_1, V, X_2; Y_1) \le I(T, U, X_1, V, X_2; Y_3) \qquad (33)$$

Under these conditions, the cognitive receiver (Rx3) can decode the messages of the primary users with no rate penalty. If MA-CIFC with a degraded primary receiver satisfies conditions (30)-(33), the region of Theorem 1 coincides with $\mathcal{R}_{o1}$ and achieves capacity, as stated in the following theorem.

Theorem 5: The capacity region of MA-CIFC with a degraded primary receiver, defined in (29), satisfying (30)-(33) is given by the union of rate regions satisfying (2)-(6) over all joint p.m.fs (11).

Remark 2: The messages of the primary users (m0, m1, m2) can be decoded at Rx3 under conditions (30)-(33). Therefore, the Tx3-Rx3 pair achieves the rate in (2). Moreover, we can see that, due to the degradedness condition in (29), treating interference as noise at the primary receiver (Rx1) achieves capacity. We show in Section 6 that, in the Gaussian case, the capacity is achieved by using the region of Theorem 2 based on DPC (or GP binning), where the cognitive receiver (Rx3) does not decode the primary messages and conditions (30)-(33) are not necessary.

Proof: Achievability: The proof follows from the region of Theorem 1. Using the condition in (30), the sum of the bounds in (2) and (3) makes the bound in (7) redundant. Similarly, conditions (31)-(33), along with the bound in (2), make the bounds in (8)-(10) redundant and the region reduces to (2)-(6).

Converse: To prove the converse part, we evaluate $\mathcal{R}_{o1}$ of Theorem 4 with the degradedness condition in (29). It is noted that the p.m.f of Theorem 5 is the same as the one for $\mathcal{R}_{o1}$. Moreover, the bounds in (3)-(6) are equal for both regions. Hence, it is only necessary to show the bound in (2). Considering (19), we obtain:
$$
\begin{aligned}
R_3 &\le I(X_3; Y_3, Y_1 \mid T, U, X_1, V, X_2) \\
&= I(X_3; Y_3 \mid T, U, X_1, V, X_2) + I(X_3; Y_1 \mid T, U, X_1, V, X_2, Y_3) \\
&\overset{(a)}{=} I(X_3; Y_3 \mid T, U, X_1, V, X_2)
\end{aligned}
$$

where (a) is obtained by applying the degradedness condition in (29). This completes the proof.

B. Strong interference regime

Now, we derive two sets of strong interference conditions under which the region of Theorem 3 achieves capacity. First, assume that the following set of strong interference conditions, referred to as Set1, holds for all p.m.fs that factor as (18):
$$I(X_3; Y_3 \mid X_1, X_2, T) \le I(X_3; Y_1 \mid X_1, X_2, T) \qquad (34)$$
$$I(X_1, X_3; Y_1 \mid X_2, T) \le I(X_1, X_3; Y_3 \mid X_2, T) \qquad (35)$$
$$I(X_2, X_3; Y_1 \mid X_1, T) \le I(X_2, X_3; Y_3 \mid X_1, T) \qquad (36)$$
$$I(X_1, X_2, X_3; Y_1) \le I(X_1, X_2, X_3; Y_3). \qquad (37)$$

In fact, under these conditions, interfering signals at the receivers are strong enough that all messages can be decoded by both receivers. Condition (34) implies that the cognitive user's message (m3) can be decoded at Rx1, while conditions (35)-(37) guarantee the decoding of the primary messages (m0, m1, m2) along with m3 at Rx3 in a MAC fashion.

Theorem 6: The capacity region of MA-CIFC satisfying (34)-(37) is given by:
$$\mathcal{C}_1^{\mathrm{str}} = \bigcup_{p(t)\, p(x_1 \mid t)\, p(x_2 \mid t)\, p(x_3 \mid x_1, x_2, t)} \Big\{ (R_0, R_1, R_2, R_3) :\ R_0, R_1, R_2, R_3 \ge 0,$$
$$R_3 \le I(X_3; Y_3 \mid X_1, X_2, T) \qquad (38)$$
$$R_1 + R_3 \le I(X_1, X_3; Y_1 \mid X_2, T) \qquad (39)$$
$$R_2 + R_3 \le I(X_2, X_3; Y_1 \mid X_1, T) \qquad (40)$$
$$R_0 + R_1 + R_2 + R_3 \le I(X_1, X_2, X_3; Y_1) \Big\}. \qquad (41)$$

Remark 3: The message of the cognitive user ($m_3$) can be decoded at Rx1 under condition (34), and $(m_0, m_1, m_2)$ can be decoded at Rx3 under conditions (35)-(37). Hence, the bound in (38) gives the capacity of a point-to-point channel with message $m_3$ and side information $(X_1, X_2)$ at the receiver. Moreover, (38)-(41) with condition (34) give the capacity region of a three-user MAC with common information, where $R_1$ and $R_2$ are the common rates, $R_3$ is the private rate for Tx3, and the private rates for Tx1 and Tx2 are zero.

Remark 4: If we omit Tx2, i.e., $\mathcal{X}_2 = \emptyset$, and Tx2 has no message to transmit, i.e., $R_2 = 0$, the model reduces to a CIFC, and $\mathcal{C}_1^{\mathrm{str}}$ coincides with the capacity region of the strong interference channel with unidirectional cooperation (or CIFC), which was characterized in [8, Theorem 5]. It is noted that in this case, the common message can be ignored, i.e., $T = \emptyset$ and $R_0 = 0$.

Proof: Achievability: Considering (35)-(37), the proof follows from Theorem 3.

Converse: Consider a $(2^{nR_0}, 2^{nR_1}, 2^{nR_2}, 2^{nR_3}, n)$ code with an average error probability of $P_e^n \to 0$. Define the following RV for $i = 1, \ldots, n$:
$$T_i = M_0. \qquad (42)$$

It is noted that, due to the encoding functions $f_1$, $f_2$ and $f_3$, defined in Definition 1, the independence of the messages, and the above definition of $T^n$, the RVs satisfy the p.m.f (18) of Theorem 6. First, we provide a useful lemma needed in the proof of the converse part.

Lemma 1: If (34) holds for all distributions that factor as (18), then
$$I(X_3^n; Y_3^n \mid X_1^n, X_2^n, T^n, U) \le I(X_3^n; Y_1^n \mid X_1^n, X_2^n, T^n, U). \qquad (43)$$

Proof: The proof relies on the results in [24, Proposition 1] and [25, Lemma]. By redefining X2 = X3, Y2 = Y3, X1 = (X1, X2, T) in [8, Lemma 5], the proof follows.

Now, using Fano's inequality [23], we derive the bounds in Theorem 6. Starting from (23), we have:
$$
\begin{aligned}
nR_3 - n\delta_{3n} &\le I(M_3; Y_3^n \mid M_0, M_1, M_2) \\
&\overset{(a)}{=} I(M_3, X_3^n; Y_3^n \mid T^n, M_1, M_2, X_1^n, X_2^n) \\
&\overset{(b)}{\le} I(X_3^n; Y_3^n \mid T^n, X_1^n, X_2^n) \\
&\overset{(c)}{=} \sum_{i=1}^{n} I(X_3^n; Y_{3,i} \mid T^n, X_1^n, X_2^n, Y_3^{i-1}) \\
&\overset{(d)}{\le} \sum_{i=1}^{n} I(X_{3,i}; Y_{3,i} \mid X_{1,i}, X_{2,i}, T_i)
\end{aligned} \qquad (44)
$$

where (a) is due to (42) and the encoding functions $f_1$, $f_2$ and $f_3$, defined in Definition 1, (b) follows from two facts: conditioning does not increase entropy, and $(M_1, M_2, M_3) \to (X_1^n, X_2^n, X_3^n) \to Y_3^n$ forms a Markov chain, (c) is obtained from the chain rule, and (d) follows from the memoryless property of the channel and the fact that conditioning does not increase entropy.

Now, applying Fano's inequality and the independence of the messages, we can bound R1 + R3 as
$$
\begin{aligned}
n(R_1 + R_3) - n(\delta_{1n} + \delta_{3n}) &\le I(M_1; Y_1^n \mid M_0, M_2) + I(M_3; Y_3^n \mid M_0, M_1, M_2) \\
&\overset{(a)}{=} I(M_1, X_1^n; Y_1^n \mid M_0, M_2, X_2^n) + I(M_3, X_3^n; Y_3^n \mid M_0, M_1, M_2, X_1^n, X_2^n) \\
&\overset{(b)}{=} I(M_1, X_1^n; Y_1^n \mid T^n, M_2, X_2^n) + I(X_3^n; Y_3^n \mid T^n, M_1, M_2, X_1^n, X_2^n) \\
&\overset{(c)}{\le} I(M_1, X_1^n; Y_1^n \mid T^n, M_2, X_2^n) + I(X_3^n; Y_1^n \mid T^n, M_1, M_2, X_1^n, X_2^n) \\
&= I(M_1, X_1^n, X_3^n; Y_1^n \mid T^n, M_2, X_2^n) \\
&\overset{(d)}{=} \sum_{i=1}^{n} I(M_1, X_1^n, X_3^n; Y_{1,i} \mid T^n, M_2, X_2^n, Y_1^{i-1}) \\
&\overset{(e)}{\le} \sum_{i=1}^{n} I(X_{1,i}, X_{3,i}; Y_{1,i} \mid X_{2,i}, T_i)
\end{aligned} \qquad (45)
$$

where (a) follows from the encoding functions $f_1$, $f_2$ and $f_3$, (b) follows from (42) and the fact that $M_3 \to (X_1^n, X_2^n, X_3^n) \to Y_3^n$ forms a Markov chain, (c) is obtained from (43), (d) follows from the chain rule, and (e) follows from the memoryless property of the channel and the fact that conditioning does not increase entropy.

Applying similar steps, we can show that,
$$n(R_2 + R_3) - n(\delta_{2n} + \delta_{3n}) \le \sum_{i=1}^{n} I(X_{2,i}, X_{3,i}; Y_{1,i} \mid X_{1,i}, T_i). \qquad (46)$$
Finally, the sum-rate bound can be obtained as
$$
\begin{aligned}
n(R_0 + R_1 + R_2 + R_3) - n(\delta_{0n} + \delta_{1n} + \delta_{2n} + \delta_{3n}) &\le I(M_0, M_1, M_2; Y_1^n) + I(M_3; Y_3^n \mid M_0, M_1, M_2) \\
&= I(M_0, M_1, M_2, X_1^n, X_2^n; Y_1^n) + I(M_3, X_3^n; Y_3^n \mid M_0, M_1, M_2, X_1^n, X_2^n) \\
&\overset{(a)}{\le} I(T^n, M_1, M_2, X_1^n, X_2^n; Y_1^n) + I(X_3^n; Y_1^n \mid T^n, M_1, M_2, X_1^n, X_2^n) \\
&= I(T^n, M_1, M_2, X_1^n, X_2^n, X_3^n; Y_1^n) \\
&\overset{(b)}{=} I(T^n, X_1^n, X_2^n, X_3^n; Y_1^n) \\
&\overset{(c)}{\le} \sum_{i=1}^{n} I(X_{1,i}, X_{2,i}, X_{3,i}; Y_{1,i})
\end{aligned} \qquad (47)
$$

where (a) follows from steps (a)-(c) in (45), (b) is due to the fact that $(M_1, M_2) \to (X_1^n, X_2^n, X_3^n) \to Y_1^n$ forms a Markov chain, and (c) follows from the memoryless property of the channel and the fact that conditioning does not increase entropy. Using a standard time-sharing argument for (44)-(47) completes the proof.

Next, we derive the second set of strong interference conditions, called Set2, under which the region of Theorem 3 is the capacity region. For all p.m.fs that factor as (18), Set2 includes (34) and the following conditions:
$$I(X_1; Y_1 \mid X_2, T) \le I(X_1; Y_3 \mid X_2, T) \qquad (48)$$
$$I(X_2; Y_1 \mid X_1, T) \le I(X_2; Y_3 \mid X_1, T) \qquad (49)$$
$$I(X_1, X_2; Y_1) \le I(X_1, X_2; Y_3). \qquad (50)$$

Remark 5: Similar to Set1, under these conditions the interfering signals at the receivers are strong enough that all messages can be decoded by both receivers. The condition in (34) is common to the two sets; under it, the cognitive user's message (m3) can be decoded at Rx1. However, conditions (48)-(50) imply that the primary messages (m0, m1, m2) can be decoded at Rx3 in a MAC fashion, while under Set1 they can be decoded along with m3.

Theorem 7: The capacity region of MA-CIFC satisfying (34) and (48)-(50), referred to as $\mathcal{C}_2^{\mathrm{str}}$, is given by the union of rate regions satisfying (14)-(17) over all p.m.fs that factor as (18).

Proof: See "Appendix B".

Remark 6: Similar to Remark 4, by omitting Tx2 (i.e., $T = \mathcal{X}_2 = \emptyset$ and $R_0 = R_2 = 0$), the model reduces to a CIFC. Moreover, $\mathcal{C}_2^{\mathrm{str}}$ and Set2 reduce to the capacity region and strong interference conditions which have been derived in [13] for the non-causal CIFC.

Remark 7 (Comparison of two sets of conditions): In the strong interference conditions of Set1, the first condition in (34) is used in the converse part, while (35)-(37) are used to reduce the inner bound to C 1 str Open image in new window. However, all the conditions of Set2 are utilized to prove the converse part. Now, we compare the conditions in these two sets. We can write (35) as
$$I(X_1; Y_1 \mid X_2, T) + \underbrace{\big[ I(X_3; Y_1 \mid X_1, X_2, T) - I(X_3; Y_3 \mid X_1, X_2, T) \big]}_{I_{\mathrm{diff}}} \le I(X_1; Y_3 \mid X_2, T).$$

Considering (34), it can be seen that $I_{\mathrm{diff}} \ge 0$. Hence, condition (35) implies condition (48), but not vice versa. Similar conclusions can be drawn for the other conditions of these two sets. Therefore, Set1 implies Set2, and the conditions of Set2 are weaker compared with those of Set1.
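To make the "similar conclusions" explicit, the same chain-rule decomposition applied to (36) yields (49):

```latex
% Expanding both sides of (36) by the chain rule,
%   I(X_2, X_3; Y_1 | X_1, T) = I(X_2; Y_1 | X_1, T) + I(X_3; Y_1 | X_1, X_2, T),
%   I(X_2, X_3; Y_3 | X_1, T) = I(X_2; Y_3 | X_1, T) + I(X_3; Y_3 | X_1, X_2, T),
% and rearranging gives
\begin{align}
  I(X_2; Y_1 \mid X_1, T)
    + \underbrace{\big[ I(X_3; Y_1 \mid X_1, X_2, T)
    - I(X_3; Y_3 \mid X_1, X_2, T) \big]}_{\ge\, 0 \text{ by } (34)}
  \le I(X_2; Y_3 \mid X_1, T),
\end{align}
% so I(X_2; Y_1 | X_1, T) <= I(X_2; Y_3 | X_1, T), which is exactly (49).
```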

C. Multiple access-cognitive interference network (MA-CIFN)

Now, we extend the result of Theorem 6 to a network with $k+1$ transmitters and two receivers: a $k$-user MAC as the primary network and a point-to-point channel with a cognitive transmitter. We call it the Multiple Access-Cognitive Interference Network (MA-CIFN). Consider the MA-CIFN in Figure 3, denoted by $(\mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_k \times \mathcal{X}_{k+1},\ p(y_1^n, y_{k+1}^n \mid x_1^n, x_2^n, \ldots, x_k^n, x_{k+1}^n),\ \mathcal{Y}_1 \times \mathcal{Y}_{k+1})$, where $X_j \in \mathcal{X}_j$ is the channel input at Transmitter $j$ (Tx$j$), for $j \in \{1, \ldots, k+1\}$; $Y_1 \in \mathcal{Y}_1$ and $Y_{k+1} \in \mathcal{Y}_{k+1}$ are the channel outputs at the primary and cognitive receivers, respectively; and $p(y_1^n, y_{k+1}^n \mid x_1^n, x_2^n, \ldots, x_k^n, x_{k+1}^n)$ is the channel transition probability distribution. In $n$ channel uses, each Tx$j$ desires to send a message $m_j$ to the primary receiver, where $j \in \{1, \ldots, k\}$, and Tx$(k+1)$ desires to send a message $m_{k+1}$ to the cognitive receiver. We ignore the common information for brevity. Definitions 1 and 2 extend naturally to the MA-CIFN. We now state the result on the capacity region under strong interference conditions.
Figure 3

Graphic representation for the MA-CIFN.

Corollary 1: The capacity region of the MA-CIFN satisfying

$$I(X_{k+1}; Y_{k+1} \,|\, X([1:k])) \le I(X_{k+1}; Y_1 \,|\, X([1:k]))$$
(51)
$$I(X_{k+1}, X(S); Y_1 \,|\, X(S^c)) \le I(X_{k+1}, X(S); Y_{k+1} \,|\, X(S^c))$$
(52)

for all S ⊆ [1:k] and for every p(x_1)p(x_2)⋯p(x_k) p(x_{k+1}|x_1, x_2, ..., x_k) p(y_1, y_{k+1}|x_1, x_2, ..., x_k, x_{k+1}), is given by

$$\mathcal{C}_{net}^{str} = \bigcup_{p(x_1)p(x_2)\cdots p(x_k)\, p(x_{k+1}|x_1, x_2, \ldots, x_k)} \Big\{ (R_1, R_2, \ldots, R_k, R_{k+1}) : R_1, R_2, \ldots, R_k, R_{k+1} \ge 0,$$
$$R_{k+1} \le I(X_{k+1}; Y_{k+1} \,|\, X([1:k]))$$
(53)
$$R_{k+1} + \sum_{j \in S} R_j \le I(X_{k+1}, X(S); Y_1 \,|\, X(S^c)) \Big\}$$
(54)

for all S ⊆ [1:k], where X(S) is the ordered vector of X_j, j ∈ S, and S^c denotes the complement of the set S.

Proof: The proof follows the same lines as that of Theorem 6 and is omitted for the sake of brevity.

Remark 8: Under condition (51), the message of the cognitive user (m_{k+1}) can be decoded at the primary receiver (Y_1). Also, under condition (52), the cognitive receiver (Y_{k+1}) can decode m_j, j ∈ {1, ..., k}, in a MAC fashion. Therefore, the bound in (53) gives the capacity of a point-to-point channel with message m_{k+1} and side information X_j, j ∈ {1, ..., k}, at the cognitive receiver. Moreover, (53) and (54), together with condition (51), give the capacity region of a (k + 1)-user MAC with common information at the primary receiver.
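To illustrate the structure of (53)-(54), the following Python sketch enumerates the 2^k sum-rate constraints of the MA-CIFN region, one per subset S ⊆ [1:k]. The mutual-information evaluator `mi` is a hypothetical stand-in (not from the paper); only the subset bookkeeping is the point here.

```python
from itertools import chain, combinations

# Enumerate all subsets S of {1, ..., k}, including the empty set.
def subsets(k):
    idx = range(1, k + 1)
    return chain.from_iterable(combinations(idx, r) for r in range(k + 1))

# Build {S: bound on R_{k+1} + sum_{j in S} R_j}, one constraint per subset,
# mirroring (54); `mi(S)` stands for I(X_{k+1}, X(S); Y_1 | X(S^c)).
def sum_rate_constraints(k, mi):
    return {S: mi(set(S)) for S in subsets(k)}

# Toy stand-in for the mutual information term (illustrative only):
bounds = sum_rate_constraints(3, lambda S: 1.0 + 0.5 * len(S))
print(len(bounds))  # 8 constraints: one per subset S of {1, 2, 3}
```

For k = 3 this produces eight constraints; the empty subset recovers the bound (53) on R_{k+1} alone.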

6. Gaussian MA-CIFC

In this section, we consider the Gaussian MA-CIFC and characterize capacity results for the Gaussian case in the weak and strong interference regimes. For simplicity, when investigating these regions we assume that Tx1 and Tx2 have no common information; that is, R_0 = 0 and M_0 = ∅. The Gaussian MA-CIFC, depicted in Figure 4, at time i = 1, ..., n can be mathematically modeled as
Figure 4

Gaussian MA-CIFC.

$$Y_{1,i} = X_{1,i} + X_{2,i} + h_{31} X_{3,i} + Z_{1,i}$$
(55)
$$Y_{3,i} = h_{13} X_{1,i} + h_{23} X_{2,i} + X_{3,i} + Z_{3,i}$$
(56)

where h_31, h_13, and h_23 are known channel gains, and X_{1,i}, X_{2,i}, and X_{3,i} are the input signals with average power constraints

$$\frac{1}{n} \sum_{i=1}^{n} (x_{j,i})^2 \le P_j$$
(57)

for j ∈ {1, 2, 3}. Z_{1,i} and Z_{3,i} are independent and identically distributed (i.i.d.) zero-mean Gaussian noise components with unit power, i.e., $Z_{j,i} \sim \mathcal{N}(0, 1)$ for j ∈ {1, 3}.
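The channel model (55)-(57) can be sketched in a few lines of Python. The Monte Carlo draw below is only illustrative; i.i.d. zero-mean Gaussian codeword symbols are one valid choice meeting the average power constraint (57).

```python
import numpy as np

rng = np.random.default_rng(0)

def ma_cifc(x1, x2, x3, h31, h13, h23, rng):
    # One block of the Gaussian MA-CIFC with unit-power AWGN at both receivers.
    z1 = rng.standard_normal(x1.shape)
    z3 = rng.standard_normal(x1.shape)
    y1 = x1 + x2 + h31 * x3 + z1          # primary receiver, Eq. (55)
    y3 = h13 * x1 + h23 * x2 + x3 + z3    # cognitive receiver, Eq. (56)
    return y1, y3

# i.i.d. Gaussian inputs meeting the power constraints (57) on average
n, P1, P2, P3 = 100_000, 6.0, 6.0, 6.0
x1 = np.sqrt(P1) * rng.standard_normal(n)
x2 = np.sqrt(P2) * rng.standard_normal(n)
x3 = np.sqrt(P3) * rng.standard_normal(n)
y1, y3 = ma_cifc(x1, x2, x3, h31=1.5, h13=1.5, h23=1.5, rng=rng)
print(np.mean(x1**2), np.var(y1))  # empirical power ~ P1; Var(Y1) ~ P1 + P2 + h31^2*P3 + 1
```

With independent inputs, the empirical variance of Y_1 is close to P_1 + P_2 + h_31^2 P_3 + 1 = 26.5 for these parameters.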

A. Strong interference regime

Here, we extend the results of Theorem 6, i.e., $\mathcal{C}_1^{str}$ and Set1, to the Gaussian case. The strong interference conditions of Set1, i.e., (34)-(37), become for the above Gaussian model:

$$h_{31}^2 \ge 1$$
(58)
$$P_1(h_{13}^2 - 1) + 2\rho_1\sqrt{P_1 P_3}\,(h_{13} - h_{31}) \ge P_3(1 - \rho_2^2)(h_{31}^2 - 1)$$
(59)
$$P_2(h_{23}^2 - 1) + 2\rho_2\sqrt{P_2 P_3}\,(h_{23} - h_{31}) \ge P_3(1 - \rho_1^2)(h_{31}^2 - 1)$$
(60)
$$P_1(h_{13}^2 - 1) + P_2(h_{23}^2 - 1) + 2\rho_1\sqrt{P_1 P_3}\,(h_{13} - h_{31}) + 2\rho_2\sqrt{P_2 P_3}\,(h_{23} - h_{31}) \ge P_3(h_{31}^2 - 1)$$
(61)

where −1 ≤ ρ_u ≤ 1 is the correlation coefficient between X_u and X_3, i.e., $E(X_u X_3) = \rho_u \sqrt{P_u P_3}$ for u ∈ {1, 2}.

Theorem 8: For the Gaussian MA-CIFC satisfying conditions (58)-(61), the capacity region is given by

$$\mathcal{C}_1^{G} = \bigcup_{-1 \le \rho_1, \rho_2 \le 1 \,:\, \rho_1^2 + \rho_2^2 \le 1} \Big\{ (R_1, R_2, R_3) : R_1, R_2, R_3 \ge 0,$$
$$R_3 \le \theta\big(P_3(1 - \rho_1^2 - \rho_2^2)\big)$$
(62)
$$R_1 + R_3 \le \theta\big(P_1 + h_{31}^2 P_3 (1 - \rho_2^2) + 2 h_{31} \rho_1 \sqrt{P_1 P_3}\big)$$
(63)
$$R_2 + R_3 \le \theta\big(P_2 + h_{31}^2 P_3 (1 - \rho_1^2) + 2 h_{31} \rho_2 \sqrt{P_2 P_3}\big)$$
(64)
$$R_1 + R_2 + R_3 \le \theta\big(P_1 + P_2 + h_{31}^2 P_3 + 2 h_{31} \sqrt{P_3}\,(\rho_1 \sqrt{P_1} + \rho_2 \sqrt{P_2})\big) \Big\}$$
(65)

where, to simplify notation, we define

$$\theta(x) \triangleq \tfrac{1}{2} \log(1 + x).$$
(66)
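For a fixed (ρ1, ρ2), the bounds (62)-(65) are straightforward to evaluate numerically. A minimal sketch (rates in nats, i.e., using the natural logarithm in (66)):

```python
import numpy as np

def theta(x):
    # theta(x) = 1/2 * log(1 + x), as in (66); natural log, so rates are in nats
    return 0.5 * np.log1p(x)

def region_bounds(P1, P2, P3, h31, rho1, rho2):
    # Rate bounds (62)-(65) of Theorem 8 for a fixed pair (rho1, rho2)
    b_R3   = theta(P3 * (1 - rho1**2 - rho2**2))                                  # (62)
    b_R1R3 = theta(P1 + h31**2 * P3 * (1 - rho2**2) + 2*h31*rho1*np.sqrt(P1*P3))  # (63)
    b_R2R3 = theta(P2 + h31**2 * P3 * (1 - rho1**2) + 2*h31*rho2*np.sqrt(P2*P3))  # (64)
    b_sum  = theta(P1 + P2 + h31**2 * P3
                   + 2*h31*np.sqrt(P3) * (rho1*np.sqrt(P1) + rho2*np.sqrt(P2)))   # (65)
    return b_R3, b_R1R3, b_R2R3, b_sum

print(region_bounds(6, 6, 6, 1.5, 0.0, 0.0))  # no-cooperation corner (rho1 = rho2 = 0)
```

Increasing ρ1 = ρ2 tightens the bound (62) on R_3 while loosening the sum-rate bounds, which is the trade-off discussed below.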

Remark 9: Condition (58) implies that Tx3 causes strong interference at Rx1, which enables Rx1 to decode m3. Moreover, (59)-(61) provide strong interference conditions at Rx3, under which all messages can be decoded at Rx3 in a MAC fashion.

Proof: The achievability part follows from $\mathcal{C}_1^{str}$ in Theorem 6 by evaluating (38)-(41) with zero-mean jointly Gaussian channel inputs X1, X2, and X3; that is, $X_1 \sim \mathcal{N}(0, P_1)$, $X_2 \sim \mathcal{N}(0, P_2)$, and $X_3 \sim \mathcal{N}(0, P_3)$, where $E(X_1 X_2) = 0$, $E(X_1 X_3) = \rho_1\sqrt{P_1 P_3}$, and $E(X_2 X_3) = \rho_2\sqrt{P_2 P_3}$. The converse proof is based on reasoning similar to that in [26] and is provided in "Appendix C".

It is noted that, to numerically evaluate $\mathcal{C}_1^{G}$ using (62)-(65), the channel parameters P1, P2, P3, h31, h13, h23 must satisfy (58)-(61) for all −1 ≤ ρ1, ρ2 ≤ 1 with ρ1^2 + ρ2^2 ≤ 1. Here, we choose P1 = P2 = P3 = 6 and h31 = h13 = h23 = 1.5, which satisfy the strong interference conditions (58)-(61); hence, the regions below are derived under strong interference conditions.
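The following sketch checks (58)-(61) for these parameters over a grid of admissible (ρ1, ρ2) pairs. The inequality directions encode our reading of the Set1 conditions (a strong enough Tx3→Rx1 link and stronger observations at Rx3), matching (59)-(61) as stated above.

```python
import numpy as np

# Chosen parameters from the text
P1 = P2 = P3 = 6.0
h31 = h13 = h23 = 1.5

def conditions_hold(rho1, rho2):
    # (58): strong interference from Tx3 at Rx1
    c58 = h31**2 >= 1
    # (59)-(61): strong interference conditions at Rx3
    c59 = P1*(h13**2 - 1) + 2*rho1*np.sqrt(P1*P3)*(h13 - h31) >= P3*(1 - rho2**2)*(h31**2 - 1)
    c60 = P2*(h23**2 - 1) + 2*rho2*np.sqrt(P2*P3)*(h23 - h31) >= P3*(1 - rho1**2)*(h31**2 - 1)
    c61 = (P1*(h13**2 - 1) + P2*(h23**2 - 1)
           + 2*rho1*np.sqrt(P1*P3)*(h13 - h31)
           + 2*rho2*np.sqrt(P2*P3)*(h23 - h31)) >= P3*(h31**2 - 1)
    return c58 and c59 and c60 and c61

# The conditions must hold for ALL admissible correlation pairs
grid = np.linspace(-1, 1, 201)
ok = all(conditions_hold(r1, r2)
         for r1 in grid for r2 in grid if r1**2 + r2**2 <= 1)
print(ok)  # True: the parameters lie in the strong interference regime
```
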

Figure 5 shows the capacity region of the Gaussian MA-CIFC of Theorem 8 for P1 = P2 = P3 = 6 and h31 = h13 = h23 = 1.5, where ρ1 = ρ2 is fixed in each surface. The ρ1 = ρ2 = 0 region corresponds to the no-cooperation case, where the channel inputs are independent. It can be seen that as ρ1 = ρ2 increases, the bound on R3 becomes more restrictive while the sum-rate bounds become looser, because Tx3 dedicates part of its power to cooperation. That is, as Tx3 allocates more power to relaying m1 and m2 by increasing ρ1 = ρ2, R1 and R2 improve, while R3 degrades due to the smaller power allocated to transmitting m3. The capacity region of this channel is the union of all the regions obtained for different values of ρ1 and ρ2 satisfying ρ1^2 + ρ2^2 ≤ 1. This union is shown in Figure 6. To better illustrate the effect of cooperation, we let R2 = 0 in Figure 7. It is seen that by increasing ρ1 = ρ2, the bound on R1 + R3 becomes looser and R1 improves, while R3 decreases due to the larger power dedicated to cooperation.
Figure 5

The capacity region for the Gaussian MA-CIFC for fixed ρ 1 = ρ 2 under the strong interference conditions.

Figure 6

The capacity region for the Gaussian MA-CIFC under the strong interference conditions.

Figure 7

The capacity region for the Gaussian MA-CIFC under the strong interference conditions when R 2 = 0.

B. Weak interference regime

Now, we consider the Gaussian MA-CIFC with weak interference at the primary receiver (Rx1), meaning h31 ≤ 1. We remark that, since there is no cooperation between the receivers, the capacity region of this channel is the same as that of any channel with the same marginal outputs $p(y_1^n | x_1^n, x_2^n, x_3^n)$ and $p(y_3^n | x_1^n, x_2^n, x_3^n)$. Hence, we can state the following useful lemma.

Lemma 2: The capacity region of the Gaussian MA-CIFC defined by (55) and (56), when h31 ≤ 1, is the same as the capacity region of a Gaussian MA-CIFC with the following channel outputs:

$$\hat{Y}_{1,i} = X_{1,i} + X_{2,i} + h_{31} Y'_{3,i} + Z'_{1,i}$$
(67)
$$\hat{Y}_{3,i} = h_{13} X_{1,i} + h_{23} X_{2,i} + Y'_{3,i}$$
(68)

where $Y'_{3,i} = X_{3,i} + Z_{3,i}$ and $Z'_{1,i} \sim \mathcal{N}(0, 1 - h_{31}^2)$. Therefore, the degradedness condition in (29) holds for the Gaussian MA-CIFC when h31 ≤ 1.
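A quick way to see the equivalence claimed in Lemma 2: in (67), the total noise h31·Z3 + Z'1 has variance h31^2 + (1 − h31^2) = 1, matching Z1 in (55), so Ŷ1 has the same conditional law given the inputs as Y1. A small Monte Carlo check:

```python
import numpy as np

# With Z'1 ~ N(0, 1 - h31^2) and Z3 ~ N(0, 1), the combined noise
# h31*Z3 + Z'1 in (67) is again N(0, 1), as in the original output (55).
h31 = 0.55  # weak interference regime requires h31 <= 1
rng = np.random.default_rng(1)
z3  = rng.standard_normal(1_000_000)
z1p = np.sqrt(1 - h31**2) * rng.standard_normal(1_000_000)
print(np.var(h31 * z3 + z1p))  # ~ 1.0, matching the unit-power noise Z1
```
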

Proof: The proof follows from [6, Lemma 3.5].

Next, we use the inner bound of Theorem 2 and the outer bound of Theorem 4 to derive the capacity region, which shows that the capacity-achieving scheme in this case consists of DPC at the cognitive transmitter and treating interference as noise at both receivers.

Theorem 9: For the Gaussian MA-CIFC defined by (55) and (56), when h31 ≤ 1, the capacity region is given by

$$\mathcal{C}_2^{G} = \bigcup_{-1 \le \rho_1, \rho_2 \le 1 \,:\, \rho_1^2 + \rho_2^2 \le 1} \Big\{ (R_1, R_2, R_3) : R_1, R_2, R_3 \ge 0,$$
$$R_3 \le \theta\big(P_3(1 - \rho_1^2 - \rho_2^2)\big)$$
(69)
$$R_1 \le \theta\left( \frac{\big(\sqrt{P_1} + h_{31}\rho_1\sqrt{P_3}\big)^2}{h_{31}^2 P_3 (1 - \rho_1^2 - \rho_2^2) + 1} \right)$$
(70)
$$R_2 \le \theta\left( \frac{\big(\sqrt{P_2} + h_{31}\rho_2\sqrt{P_3}\big)^2}{h_{31}^2 P_3 (1 - \rho_1^2 - \rho_2^2) + 1} \right)$$
(71)
$$R_1 + R_2 \le \theta\left( \frac{\big(\sqrt{P_1} + h_{31}\rho_1\sqrt{P_3}\big)^2 + \big(\sqrt{P_2} + h_{31}\rho_2\sqrt{P_3}\big)^2}{h_{31}^2 P_3 (1 - \rho_1^2 - \rho_2^2) + 1} \right) \Big\}$$
(72)

where θ(·) is defined in (66).

Remark 10: By evaluating (2) with jointly Gaussian channel inputs, one can easily achieve (69). However, this results in the Gaussian counterparts of the bounds in (7)-(10), so some conditions, similar to those in (30)-(33), would be necessary to make these bounds redundant. However, we show that (12) also evaluates to (69) if we apply DPC with appropriate parameters. Hence, conditions (30)-(33) are unnecessary in the Gaussian case. This means that DPC completely mitigates the effect of interference on the Tx3-Rx3 pair and leaves the link between them interference-free for fixed values of ρ1, ρ2. Consequently, $\mathcal{C}_2^{G}$ is independent of h13 and h23.
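Consistent with Remark 10, the bounds (69)-(72) depend on the channel only through h31. A small evaluator (rates in nats):

```python
import numpy as np

def theta(x):
    return 0.5 * np.log1p(x)  # theta(x) = 1/2 log(1 + x), Eq. (66)

def weak_region_bounds(P1, P2, P3, h31, rho1, rho2):
    # Rate bounds (69)-(72) of Theorem 9 for fixed (rho1, rho2). Note that
    # h13 and h23 do not appear: DPC leaves the Tx3-Rx3 link interference-free,
    # and Rx1 treats the residual part of X3 as noise.
    noise = h31**2 * P3 * (1 - rho1**2 - rho2**2) + 1  # residual interference + AWGN at Rx1
    s1 = (np.sqrt(P1) + h31 * rho1 * np.sqrt(P3))**2   # coherent signal power for m1
    s2 = (np.sqrt(P2) + h31 * rho2 * np.sqrt(P3))**2   # coherent signal power for m2
    return (theta(P3 * (1 - rho1**2 - rho2**2)),       # (69): R3
            theta(s1 / noise),                         # (70): R1
            theta(s2 / noise),                         # (71): R2
            theta((s1 + s2) / noise))                  # (72): R1 + R2

print(weak_region_bounds(6, 6, 6, 0.55, 0.5, 0.5))
```

Raising ρ1 = ρ2 from 0 to 0.5 enlarges the R1 and R2 bounds and shrinks the R3 bound, which is exactly the cooperation trade-off shown in Figures 8-10 below.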

Remark 11: If we omit Tx2, the model reduces to a CIFC; by setting P2 = ρ2 = R2 = 0, $\mathcal{C}_2^{G}$ coincides with the capacity region of the Gaussian CIFC with weak interference, which was characterized in [6, Lemma 3.6].

Proof: The rate region of Theorem 2 extends to the discrete-time Gaussian memoryless case with continuous alphabets by standard arguments [23]. Hence, it is sufficient to evaluate (3)-(6) and (12) with an appropriate choice of input distribution to reach (69)-(72). Let R_0 = 0, M_0 = ∅, and T = ∅, since Tx1 and Tx2 have no common information. Also, let U and V be deterministic constants. We choose zero-mean jointly Gaussian channel inputs X1, X2, and X3; that is, $X_j \sim \mathcal{N}(0, P_j)$ for j ∈ {1, 2, 3}, where $E(X_1 X_2) = 0$, $E(X_1 X_3) = \rho_1\sqrt{P_1 P_3}$, and $E(X_2 X_3) = \rho_2\sqrt{P_2 P_3}$. Noting the p.m.f. (13), consider the following choice of input distribution for a given {−1 ≤ ρ1, ρ2 ≤ 1 : ρ1^2 + ρ2^2 ≤ 1}:

$$X_1 \sim \mathcal{N}(0, P_1), \quad X_2 \sim \mathcal{N}(0, P_2), \quad X'_3 \sim \mathcal{N}\big(0, (1 - \rho_1^2 - \rho_2^2) P_3\big)$$
$$X_3 = X'_3 + \rho_1\sqrt{\tfrac{P_3}{P_1}}\, X_1 + \rho_2\sqrt{\tfrac{P_3}{P_2}}\, X_2, \qquad W = X'_3 + \alpha_1 X_1 + \alpha_2 X_2$$
(73)

Then, (3)-(6) are easily evaluated to (70)-(72). In "Appendix D", we derive (69) by evaluating (12) with appropriate parameters. The converse proof follows by applying the bounds in the proof of Theorem 4 to the Gaussian case and utilizing the Entropy Power Inequality (EPI) [23, 27]. A detailed converse proof is provided in "Appendix D".
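The DPC step can be checked numerically. The sketch below uses Costa's MMSE scaling for the auxiliary W in (73), i.e., α_u = α(h_{u3} + a_u) with α = Q/(Q+1), Q = (1 − ρ1^2 − ρ2^2)P3, and a_u = ρ_u√(P3/P_u) (our reading of the "appropriate parameters"), and verifies that I(W; Y3) − I(W; X1, X2) equals θ(P3(1 − ρ1^2 − ρ2^2)), i.e., the bound (69), so the Tx3-Rx3 link is indeed interference-free.

```python
import numpy as np

# Example values; any P_j > 0, gains, and rho with rho1^2 + rho2^2 <= 1 work.
P1, P2, P3 = 6.0, 6.0, 6.0
h13, h23 = 1.5, 1.5
rho1, rho2 = 0.4, 0.3

Q = (1 - rho1**2 - rho2**2) * P3                    # power of the "fresh" part X3'
a1, a2 = rho1*np.sqrt(P3/P1), rho2*np.sqrt(P3/P2)   # cooperation coefficients in (73)
c1, c2 = h13 + a1, h23 + a2                         # effective interference gains at Rx3
alpha = Q / (Q + 1)                                 # Costa's MMSE scaling

# Second moments of the jointly Gaussian pair (W, Y3), where
# W = X3' + alpha*(c1*X1 + c2*X2) and Y3 = c1*X1 + c2*X2 + X3' + Z3:
s = c1**2 * P1 + c2**2 * P2                         # known-interference power
var_W  = Q + alpha**2 * s
var_Y3 = s + Q + 1
cov_WY = Q + alpha * s

I_W_Y3 = 0.5 * np.log(var_W * var_Y3 / (var_W * var_Y3 - cov_WY**2))
I_W_S  = 0.5 * np.log(var_W / Q)                    # I(W; X1, X2): Var(W | X1, X2) = Q

R3 = I_W_Y3 - I_W_S
print(R3, 0.5 * np.log(1 + Q))                      # the two values coincide
```

The equality is exact for this choice of α, which is Costa's classical result specialized to the known interference c1·X1 + c2·X2 at Tx3.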

Remark 12: According to Theorems 8 and 9, jointly Gaussian channel inputs X1, X2 and X3 are optimal for the Gaussian MA-CIFC under the strong and weak interference conditions, determined in the above theorems.

Figure 8 shows the capacity region of the Gaussian MA-CIFC of Theorem 9 for P1 = P2 = P3 = 6 and h31 = 0.55, where ρ1 = ρ2 is fixed in each surface. It is noted that the capacity region is independent of h13 and h23. The ρ1 = ρ2 = 0 region corresponds to the no-cooperation case, where the channel inputs are independent. We see that when Tx3 dedicates part of its power to cooperation, i.e., ρ1 = ρ2 = 0.5, the rates of the primary users (R1, R2) increase, while R3 decreases. The capacity region of this channel is the union of all the regions obtained for different values of ρ1 and ρ2 satisfying ρ1^2 + ρ2^2 ≤ 1, which is shown in Figure 9. Similar to Figure 7, we investigate the capacity region for R2 = 0 in Figure 10 in the weak interference regime. It is seen that when Tx3 dedicates more power to cooperation by increasing ρ1 = ρ2, R1 improves and R3 decreases.
Figure 8

The capacity region for the weak Gaussian MA-CIFC for fixed ρ 1 = ρ 2 .

Figure 9

The capacity region for the weak Gaussian MA-CIFC.

Figure 10

The capacity region for the weak Gaussian MA-CIFC when R 2 = 0.

7. Conclusion

We investigated a cognitive communication network in which a MAC with common information and a point-to-point channel share the same medium and interfere with each other. For this purpose, we introduced the Multiple Access-Cognitive Interference Channel (MA-CIFC) by merging a two-user MAC, as the primary network, with a cognitive transmitter-receiver pair in which the cognitive transmitter knows the messages sent by all of the transmitters in a non-causal manner. We analyzed the capacity region of the MA-CIFC by deriving inner and outer bounds on it. These bounds were proved to be tight in some special cases, so we determined the optimal strategy in those cases. Specifically, in the discrete memoryless case, we established the capacity regions for a class of degraded MA-CIFC and also under two sets of strong interference conditions. We also derived strong interference conditions for a network with k primary users. Further, we characterized the capacity region of the Gaussian MA-CIFC in the weak and strong interference regimes. We showed that DPC at the cognitive transmitter and treating interference as noise at the receivers, i.e., an oblivious primary receiver, are optimal in the weak interference regime. In contrast, when the interference is strong enough, the receivers have to decode all messages, which requires an aware primary receiver.

Appendix A Proofs of Theorems 1, 2 and 3

Outline of the proof for Theorem 1: We propose the following random coding scheme, which combines superposition coding with the technique of [6] for defining the auxiliary RVs U and V. The cognitive receiver (Rx3) decodes the interfering signals caused by the primary messages (m1, m2), while the primary receiver (Rx1) does not decode the interference from the cognitive user's message (m3) and treats it as noise.

Codebook Generation: Fix a joint p.m.f. as in (11). Generate $2^{nR_0}$ i.i.d. $t^n$ sequences according to $\prod_{i=1}^{n} p(t_i)$, and index them as $t^n(m_0)$, $m_0 \in [1, 2^{nR_0}]$. For each $t^n(m_0)$, generate $2^{nR_1}$ i.i.d. $(u^n, x_1^n)$ sequences, each with probability $\prod_{i=1}^{n} p(u_i, x_{1,i} | t_i)$, indexed as $(u^n(m_0, m_1), x_1^n(m_0, m_1))$, $m_1 \in [1, 2^{nR_1}]$. Similarly, for each $t^n(m_0)$, generate $2^{nR_2}$ i.i.d. $(v^n, x_2^n)$ sequences, each with probability $\prod_{i=1}^{n} p(v_i, x_{2,i} | t_i)$, indexed as $(v^n(m_0, m_2), x_2^n(m_0, m_2))$, $m_2 \in [1, 2^{nR_2}]$. For each $(t^n(m_0), u^n(m_0, m_1), x_1^n(m_0, m_1), v^n(m_0, m_2), x_2^n(m_0, m_2))$, generate $2^{nR_3}$ i.i.d. $x_3^n$ sequences according to $\prod_{i=1}^{n} p(x_{3,i} | t_i, u_i, x_{1,i}, v_i, x_{2,i})$, indexed as $x_3^n(m_0, m_1, m_2, m_3)$, $m_3 \in [1, 2^{nR_3}]$.
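The layered codebook construction can be mimicked in a toy setting. The sketch below uses a binary alphabet, illustrative rates, and a toy conditional law (none of these choices come from the paper); it only shows the superposition bookkeeping: cloud centers t^n(m0) and satellite codewords x1^n(m0, m1) drawn conditionally on them.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R0, R1 = 8, 0.25, 0.25            # block length and per-layer rates (toy values)
M0, M1 = 2**int(n * R0), 2**int(n * R1)

# Cloud centers t^n(m0), drawn i.i.d. ~ Bernoulli(1/2)
T = rng.integers(0, 2, size=(M0, n))

# For each t^n(m0), satellites x1^n(m0, m1) drawn i.i.d. from a toy
# conditional law p(x1 | t): here x1 = t XOR Bernoulli(0.1) noise.
noise = (rng.random((M0, M1, n)) < 0.1).astype(int)
X1 = T[:, None, :] ^ noise

print(T.shape, X1.shape)   # (4, 8) (4, 4, 8)
```

The same pattern nests one level deeper for the x3^n codewords, which are drawn conditionally on the whole tuple (t^n, u^n, x1^n, v^n, x2^n).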

Encoding: To transmit the messages (m0, m1, m2, m3), Txj sends $x_j^n(m_0, m_j)$ for j ∈ {1, 2}, and Tx3 sends $x_3^n(m_0, m_1, m_2, m_3)$.

Decoding:

Rx1: After receiving $y_1^n$, Rx1 looks for a unique triple $(\hat{m}_0, \hat{m}_1, \hat{m}_2)$ such that

$$\big(y_1^n, t^n(\hat{m}_0), u^n(\hat{m}_0, \hat{m}_1), x_1^n(\hat{m}_0, \hat{m}_1), v^n(\hat{m}_0, \hat{m}_2), x_2^n(\hat{m}_0, \hat{m}_2)\big) \in A_\epsilon^n(Y_1, T, U, X_1, V, X_2).$$

For large enough n, with arbitrarily high probability $(\hat{m}_0, \hat{m}_1, \hat{m}_2) = (m_0, m_1, m_2)$ if (3)-(6) hold.

Rx3: After receiving $y_3^n$, Rx3 finds a unique index $\hat{\hat{m}}_3$ and some triple $(\hat{\hat{m}}_0, \hat{\hat{m}}_1, \hat{\hat{m}}_2)$ such that

$$\big(y_3^n, x_3^n(\hat{\hat{m}}_0, \hat{\hat{m}}_1, \hat{\hat{m}}_2, \hat{\hat{m}}_3), t^n(\hat{\hat{m}}_0), u^n(\hat{\hat{m}}_0, \hat{\hat{m}}_1), x_1^n(\hat{\hat{m}}_0, \hat{\hat{m}}_1), v^n(\hat{\hat{m}}_0, \hat{\hat{m}}_2), x_2^n(\hat{\hat{m}}_0, \hat{\hat{m}}_2)\big) \in A_\epsilon^n(Y_3, X_3, T, U, X_1, V, X_2).$$

With arbitrarily high probability, $\hat{\hat{m}}_3 = m_3$ if n is large enough and (2), (7)-(10) hold. This completes the proof.

Outline of the proof for Theorem 2: In the encoding part, our proposed random coding scheme combines the methods of Theorem 1 with GP binning at the cognitive transmitter (Tx3), which is used to cancel the interference caused by m0, m1, m2 at Rx3. In the decoding part, both receivers decode only their intended messages, treating the interference as noise. Hence, unlike in Theorem 1, Rx3 decodes only its own message (m3), treating the other signals as noise.

Codebook Generation: Fix a joint p.m.f. as in (13). Generate the $t^n(m_0), u^n(m_0, m_1), x_1^n(m_0, m_1), v^n(m_0, m_2), x_2^n(m_0, m_2)$ codewords along the same lines as in the codebook generation part of Theorem 1. Then, generate $2^{n(R_3 + L)}$ i.i.d. $w^n$ sequences, indexed as $w^n(m_3, l)$, where $m_3 \in [1, 2^{nR_3}]$ and $l \in [1, 2^{nL}]$.

Encoding: To transmit the messages (m0, m1, m2, m3), Txj sends $x_j^n(m_0, m_j)$ for j ∈ {1, 2}. Tx3 (the cognitive transmitter) knows m0, m1, and m2 in addition to m3. Hence, knowing the codewords $t^n, u^n, x_1^n, v^n, x_2^n$, to send m3 it seeks an index l such that

$$\big(w^n(m_3, l), t^n(m_0), u^n(m_0, m_1), x_1^n(m_0, m_1), v^n(m_0, m_2), x_2^n(m_0, m_2)\big) \in A_\epsilon^n(W, T, U, X_1, V, X_2).$$

If there is more than one such index, Tx3 picks the smallest; if there is no such codeword, it declares an error. Using the covering lemma [27], it can be shown that such an index l exists with high probability if n is sufficiently large and

$$L \ge I(W; T, U, X_1, V, X_2).$$
(74)

Then, Tx3 sends $x_3^n$ generated according to $\prod_{i=1}^{n} p(x_{3,i} | t_i, u_i, x_{1,i}, v_i, x_{2,i})$.

Decoding: The decoding procedure at Rx1 is similar to that of Theorem 1, and the error probability at this receiver can be bounded if (3)-(6) hold.

Rx3: After receiving $y_3^n$, Rx3 finds a unique index $\hat{\hat{m}}_3$ for some index $\hat{\hat{l}}$ such that

$$\big(y_3^n, w^n(\hat{\hat{m}}_3, \hat{\hat{l}})\big) \in A_\epsilon^n(Y_3, W).$$

For large enough n, the probability of error can be made sufficiently small if

$$R_3 + L \le I(W; Y_3).$$
(75)

Combining (74) and (75) results in (12). This completes the proof.

Outline of the proof for Theorem 3: We propose the following random coding scheme, which contains superposition coding in the encoding part and simultaneous joint decoding in the decoding part. All messages are common to both receivers, i.e., both receivers decode (m0, m1, m2, m3).

Codebook Generation: Fix a joint p.m.f. as in (18). Generate $2^{nR_0}$ i.i.d. $t^n$ sequences according to $\prod_{i=1}^{n} p(t_i)$, and index them as $t^n(m_0)$, $m_0 \in [1, 2^{nR_0}]$. For j ∈ {1, 2} and each $t^n(m_0)$, generate $2^{nR_j}$ i.i.d. $x_j^n$ sequences, each with probability $\prod_{i=1}^{n} p(x_{j,i} | t_i)$, indexed as $x_j^n(m_0, m_j)$, $m_j \in [1, 2^{nR_j}]$. For each $(t^n(m_0), x_1^n(m_0, m_1), x_2^n(m_0, m_2))$, generate $2^{nR_3}$ i.i.d. $x_3^n$ sequences, each with probability $\prod_{i=1}^{n} p(x_{3,i} | t_i, x_{1,i}, x_{2,i})$, indexed as $x_3^n(m_0, m_1, m_2, m_3)$, $m_3 \in [1, 2^{nR_3}]$.

Encoding: To transmit the messages (m0, m1, m2, m3), Txj sends $x_j^n(m_0, m_j)$ for j ∈ {1, 2}, and Tx3 sends $x_3^n(m_0, m_1, m_2, m_3)$.

Decoding:

Rx1: After receiving $y_1^n$, Rx1 looks for a unique triple $(\hat{m}_0, \hat{m}_1, \hat{m}_2)$ and some $\hat{m}_3$ such that

$$\big(y_1^n, t^n(\hat{m}_0), x_1^n(\hat{m}_0, \hat{m}_1), x_2^n(\hat{m}_0, \hat{m}_2), x_3^n(\hat{m}_0, \hat{m}_1, \hat{m}_2, \hat{m}_3)\big) \in A_\epsilon^n(Y_1, T, X_1, X_2, X_3).$$

For large enough n, with arbitrarily high probability $(\hat{m}_0, \hat{m}_1, \hat{m}_2) = (m_0, m_1, m_2)$ if

$$R_1 + R_3 \le I(X_1, X_3; Y_1 | X_2, T)$$
(76)
$$R_2 + R_3 \le I(X_2, X_3; Y_1 | X_1, T)$$
(77)
$$R_0 + R_1 + R_2 + R_3 \le I(X_1, X_2, X_3; Y_1).$$
(78)
Rx3: Similarly, after receiving $y_3^n$, Rx3 finds a unique index $\hat{\hat{m}}_3$ and some triple $(\hat{\hat{m}}_0, \hat{\hat{m}}_1, \hat{\hat{m}}_2)$ such that

$$\big(y_3^n, t^n(\hat{\hat{m}}_0), x_1^n(\hat{\hat{m}}_0, \hat{\hat{m}}_1), x_2^n(\hat{\hat{m}}_0, \hat{\hat{m}}_2), x_3^n(\hat{\hat{m}}_0, \hat{\hat{m}}_1, \hat{\hat{m}}_2, \hat{\hat{m}}_3)\big) \in A_\epsilon^n(Y_3, T, X_1, X_2, X_3).$$

With arbitrarily high probability, $\hat{\hat{m}}_3 = m_3$ if n is large enough and

$$R_3 \le I(X_3; Y_3 | X_1, X_2, T)$$
(79)
R 1 + R 3 I ( X 1 , X 3 ;