Abstract

Efficient spectrum resource management in cognitive radio networks (CRNs) is a promising approach to improving the utilization of spectrum resources. In particular, power control and channel allocation are top priorities in spectrum resource management. Nevertheless, the joint design of power control and channel allocation is an NP-hard problem, and the research is still at a preliminary stage. In this paper, we propose a novel joint approach based on a long short-term memory deep Q network (LSTM-DQN). Our objective is to obtain the channel allocation schemes of the access points (APs) and the power control strategies of the secondary users (SUs). Specifically, the received signal strength information (RSSI) collected by the microcells is used as the input of the LSTM-DQN. In this way, the collected RSSI can be shared among users. After training is completed, the APs are capable of selecting channels with little interference, while the SUs may access the authorized channels in an underlay operation mode without any prior knowledge about the primary users (PUs). Experimental results show that channels are allocated to the APs with a lower probability of collision. Moreover, the SUs can adjust their power control strategies quickly to avoid harmful interference to the PUs when the environment parameters change randomly. Consequently, the overall performance of CRNs and the utilization of spectrum resources are improved significantly compared to existing popular solutions.

1. Introduction

Cognitive radio networks (CRNs), also known as cognitive wireless networks (CWNs), are formed when cognitive radio devices are organically connected through cognitive base stations. Spectrum resource management is one of the basic tasks of CRNs, which aims to achieve high utilization of the spectrum resource by dividing it into a group of channels or resource blocks and designing proper management strategies. Faced with the increasing demand for mobile data capacity, channel allocation and power control play a key role in spectrum resource management [1, 2].

Spectrum resource management determines the most suitable channels for secondary users (SUs) without affecting the communication of primary users (PUs), based on the analysis of available channels. Currently, optimization and game theory have been widely used in spectrum management. In [3], spectrum sharing was performed according to the interference temperature and the radio frequency (RF) power per unit of bandwidth measured at the receiving antenna. The optimal solution could be obtained by the particle swarm optimization (PSO) algorithm if the objective function was convex. In addition, simulated annealing (SA) was applied to prevent falling into suboptimal solutions. Three improved PSO algorithms, namely, binary PSO, sociocognitive PSO, and the derivation zero algorithm, were proposed in [4], and the throughput of SU links was compared under the interference constraints. The spectrum access algorithm proposed in [5] improved the throughput and spectrum sensing ability of the network system by formulating a Lagrange dual optimization problem and derived the optimal power allocation strategy and target detection probability. In the research of spectrum resource management based on game theory, the core idea is to obtain an equilibrium of the optimal distribution of spectrum resources among SUs. In [6], the double auction model from microeconomic theory was used in TV band transactions between TV broadcasting companies and wireless regional area network (WRAN) service providers. For WRAN service providers, the spectrum bidding and pricing problems were formulated as a noncooperative game, and the Nash equilibrium was obtained. Tehrani and Uysal [7] proposed a sealed-bid first-price auction model, aiming to maximize the revenue of the service provider and the satisfaction of the SUs under incomplete spectrum sensing conditions. Tan et al. [8] considered cooperative and noncooperative spectrum access schemes based on a threshold policy. Experimental results showed that, in the noncooperative case, the optimal scheme met the Nash equilibrium.

Existing work using optimal control or game theory often assumes that users in the wireless network have obtained the complete environmental state information. However, such information is difficult, if not impossible, to obtain in complex and dynamic scenarios, so in many cases, a solution has to be given based on partial environmental information. Inspired by emerging artificial intelligence, reinforcement learning and neural networks provide us with new tools to tackle challenges in CRNs [9–12]. Deep reinforcement learning (DRL) combines the model-free nature of reinforcement learning (RL) with the data-processing ability of deep learning (DL) in spectrum resource management. The potential advantages of applying DRL to spectrum resource management are threefold. First, the optimal solution for decision-making problems can be obtained through trial and error, and the cycle of manual spectrum planning is greatly reduced, so CRNs can learn and obtain efficient spectrum resource management solutions. Second, it is possible to simulate complex real-world scenarios that are difficult to model mathematically and constantly accumulate new experience to adapt to various extreme situations. Third, real-time effective monitoring of the dynamic environment, mining of potentially important data and information, and improvement of the performance of CRNs can be achieved. These advantages have motivated several research works [13–17]. For instance, Wan and Cohen [14] proposed a distributed dynamic spectrum access algorithm based on deep multiuser reinforcement learning, aiming at maximizing network utility in multichannel wireless networks. At each time slot, each SU mapped its current state into a spectrum access action by using the trained deep Q network (DQN). Experimental results showed that, in partially observable environments, SUs were able to learn good control strategies to ensure network performance without using online acknowledgement (ACK) signals. Liu et al. [16] adopted a multiagent DQN technology, which further optimized the learning process by combining the DQN algorithm with transfer learning so that SUs newly joining the network could obtain more experience and knowledge.

In spite of the aforementioned research work, spectrum resource management based on DRL is still in its infancy. Existing results revealed that the state information of the channels has a high degree of self-correlation [18, 19]. However, the correlated states may be separated from the current state by a considerable time interval. There is still a large gap in the study of this problem. Considering the distinctive network structure of long short-term memory, it is possible to exploit such self-correlation and make a better estimate of the state of the channels. Motivated by the limitations of the current state of the art and the joint design problem of channel allocation and power control for spectrum resource management, this paper proposes a long short-term memory deep Q network- (LSTM-DQN-) based joint channel allocation and power control algorithm, which helps to achieve spectrum utilization flexibility by sharing the received signal strength information (RSSI) among users. Additionally, we consider that PUs may have multiple alternative power control strategies rather than a single strategy and choose the appropriate one dynamically according to the changing environment. The evaluations show that adjacent access points (APs) access available channels without conflict, whereas SUs optimize their power control strategies to avoid harmful interference to PUs.

The remainder of this paper is organized as follows. Section 2 introduces the system model and formulates the problem to be solved. The implementation of the proposed algorithm is discussed in Section 3. Section 4 describes the simulation experiments and result analysis, and finally, the conclusion and future work are presented in Section 5.

2. Preliminaries

2.1. System Model

The channel allocation problem arises because a huge number of wireless devices access a limited spectrum space. In this problem, there is no one-to-one connection between channels and APs. The main challenges are adjacent channel interference (ACI) and co-channel interference (CCI). For the joint optimization of channel allocation and power control, it is necessary to consider not only the transmit power of primary and secondary users but also the selection of channels at different access points and their possible conflicts with each other.

The system model we focus on in this paper is shown in Figure 1. There are 5 APs deployed in the scenario, and each AP serves several primary and secondary users distributed randomly within its communication range. We allow overlapping between APs. For instance, the service ranges of AP1 and AP2 overlap with each other, and so do those of AP3 and AP4. In contrast, AP5 is independent of the others. Within the service range of each AP, the PUs always transmit data on their authorized channels, whereas SUs are only allowed to access channels without affecting the communication of the PUs. The base station in the middle is mainly responsible for the communication of the PUs. Meanwhile, microcells assist the SUs in controlling their transmit power. These microcells collect the RSSI of primary and secondary users, package the collected information into packets occupying a few bytes, and then send them to the SUs through a dedicated control channel. It is assumed that each PU adjusts its transmit power according to its own control strategy and always transmits data on its authorized channel. Both PUs and SUs are ignorant of the others' power control strategies. More specifically, PUs are never concerned about the existence of SUs. Therefore, SUs need to learn appropriate transmit power strategies by utilizing the RSSI so as to accomplish their own transmission tasks.

2.2. Problem Formulation

In the joint optimization of channel allocation and power control, the first thing to determine is whether different APs are allowed to select the same channel. In this paper, this is not allowed; i.e., we consider the conflict-free case. Based on this assumption, the transmit power and control strategies of primary and secondary users are then determined. Table 1 specifies the symbols used in this paper.

The set of APs is denoted as $\mathcal{N} = \{1, 2, \dots, N\}$, and the set of available channels is $\mathcal{M} = \{1, 2, \dots, M\}$. Each AP can only use one channel. The channel matrix is $\mathbf{C} = [c_{n,m}]_{N \times M}$, in which each element is defined by
$$c_{n,m} = \begin{cases} 1, & \text{if AP } n \text{ selects channel } m, \\ 0, & \text{otherwise}, \end{cases}$$
where $n \in \mathcal{N}$ and $m \in \mathcal{M}$.

Accordingly, we define $\mathbf{E} = [e_{n,n'}]_{N \times N}$ as the interference matrix, and each element is defined by the following formula:
$$e_{n,n'} = \begin{cases} 1, & \text{if the service ranges of APs } n \text{ and } n' \text{ overlap}, \\ 0, & \text{otherwise}. \end{cases}$$
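To make the conflict constraint concrete, the following minimal numpy sketch (the variable names and the 0-based topology encoding are our own) builds the overlap matrix for the five-AP topology of Figure 1 and checks whether a channel assignment is conflict-free.

import numpy as np

N_APS = 5

# Overlap (interference) matrix E for the topology of Figure 1 (0-based indices):
# AP1-AP2 overlap, AP3-AP4 overlap, AP5 is independent of the others.
E = np.zeros((N_APS, N_APS), dtype=int)
E[0, 1] = E[1, 0] = 1
E[2, 3] = E[3, 2] = 1

def is_conflict_free(channel_of_ap, overlap):
    # Return True if no two overlapping APs have selected the same channel.
    n = len(channel_of_ap)
    for p in range(n):
        for q in range(p + 1, n):
            if overlap[p, q] and channel_of_ap[p] == channel_of_ap[q]:
                return False
    return True

# Example assignment: APs 1-5 pick channels [0, 1, 2, 0, 0] -> conflict-free.
print(is_conflict_free(np.array([0, 1, 2, 0, 0]), E))  # True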

In order to measure the service quality, the SINR of the primary and secondary users needs to be defined. We assume that the users are able to communicate only if the relevant adjacent APs access the channel successfully. The SINR of PU $i$ in AP $p$ at time $t$ is written as follows:
$$\gamma_{p,i}^{\mathrm{PU}}(t) = \frac{g_{p,i}(t)\, P_{p,i}^{\mathrm{PU}}(t)}{\sigma^2 + I_{p,i}(t)},$$
where $g_{p,i}(t)$ is the channel gain, $P_{p,i}^{\mathrm{PU}}(t)$ is the transmit power of PU $i$, $\sigma^2$ is the noise power, and $I_{p,i}(t)$ is the aggregate interference received by PU $i$ from all other users transmitting on the same channel.

Similarly, the SINR of SU $j$ in AP $p$ at time $t$ is
$$\gamma_{p,j}^{\mathrm{SU}}(t) = \frac{g_{p,j}(t)\, P_{p,j}^{\mathrm{SU}}(t)}{\sigma^2 + I_{p,j}(t)},$$
where $P_{p,j}^{\mathrm{SU}}(t)$ is the transmit power of SU $j$ and $I_{p,j}(t)$ is the interference it receives from the PUs and the other SUs sharing the channel.

In multichannel scenarios, both the available channels and the channel gains change with time. Therefore, the problem becomes dynamic and thus more complicated. The throughput of a single SU $j$ in AP $p$ at time $t$ is
$$R_{p,j}(t) = B \log_2\bigl(1 + \gamma_{p,j}^{\mathrm{SU}}(t)\bigr),$$
where $B$ is the channel bandwidth.

The objective is to maximize the total throughput of all SUs, which is denoted as follows:
$$\max \sum_{p=1}^{N} \sum_{j} R_{p,j}(t), \quad \text{s.t.}\ \gamma_{p,i}^{\mathrm{PU}}(t) \ge \gamma_{\mathrm{th}}^{\mathrm{PU}},\ \gamma_{p,j}^{\mathrm{SU}}(t) \ge \gamma_{\mathrm{th}}^{\mathrm{SU}},$$
where $\gamma_{\mathrm{th}}^{\mathrm{PU}}$ and $\gamma_{\mathrm{th}}^{\mathrm{SU}}$ are the SINR thresholds of the primary and secondary users, respectively.
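As a concrete illustration of the SINR and throughput expressions above, the short Python sketch below (all numerical values are made-up placeholders, not taken from the paper) computes per-SU rates and the total SU throughput that the objective maximizes.

import numpy as np

def su_throughput(p_su, g_su, interference, noise=0.1, bandwidth=1.0):
    # Shannon throughput of each SU from its transmit power, channel gain,
    # co-channel interference, and noise power (mW).
    sinr = (g_su * p_su) / (noise + interference)
    return bandwidth * np.log2(1.0 + sinr)

# Hypothetical values for 2 SUs in one AP: transmit powers (mW), channel gains,
# and aggregate co-channel interference measured at each SU receiver (mW).
p_su = np.array([5.0, 8.0])
g_su = np.array([2e-2, 1e-2])
interference = np.array([0.05, 0.08])

rates = su_throughput(p_su, g_su, interference)
print(rates, rates.sum())  # per-SU rates and the objective value (total SU throughput)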

3. Deep Reinforcement Learning-Based Framework

Due to the widespread application of CRNs, the network structure is becoming more and more complex. It is difficult to establish a corresponding mathematical model to simulate a highly complex network environment. The model-free RL can effectively solve this problem. In recent years, DRL has shown excellent ability in dealing with complex problems and data operations. Therefore, this paper focuses on the application of DRL in spectrum resource management, especially the joint optimization of power control and channel allocation to improve the robustness and adaptability of CRNs.

3.1. Description of RL

Model-free learning is a class of RL methods in which the agent learns through continuous interaction with the environment. In general, RL formulates the problem as a Markov decision process (MDP). At every moment $t$, the agent observes the current state $s_t$ of the environment and then selects an action $a_t$. After the action is executed, the environment transitions with a certain probability to a new state $s_{t+1}$. Meanwhile, the environment feeds back a reward value $r_t$ to the agent. The schematic diagram is shown in Figure 2. In short, RL aims to find the best strategy by maximizing the cumulative reward over a limited number of steps [9].

To use RL to solve the joint design problem in CRNs, a tuple $(S, A, R)$ should be defined in advance, where $S$ represents the set of environmental states, $A$ is the set of SU actions, and $R$ denotes the reward obtained when taking an action in the current state.
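For readers less familiar with RL, the following minimal tabular Q-learning sketch (a generic illustration under our own simplifications, not the proposed LSTM-DQN) shows how the (state, action, reward) tuple drives the value update and the ε-greedy selection used later in Algorithm 1.

import random

def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # One Q-learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

def epsilon_greedy(Q, s, actions, eps=0.8):
    # Explore with probability eps; otherwise exploit the currently best-valued action.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))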

3.1.1. State Space

There are 5 APs deployed in the network environment, with several primary and secondary users around each AP. The SUs can only obtain incomplete environmental information at the APs to carry out their transmission tasks. Assuming that $L$ microcells are responsible for collecting the RSSI of primary and secondary users in the service area of each AP, a total of $5L$ microcells are distributed in the whole network environment. We adopt a discrete-time model. According to nonfree space propagation [20], the RSSI collected by the microcells in the area served by AP $p$ at time slot $t$ is denoted by the following equation:
$$\mathbf{o}_p(t) = \bigl[o_{p,1}(t), o_{p,2}(t), \dots, o_{p,L}(t)\bigr],$$
where $o_{p,l}(t)$ is defined by
$$o_{p,l}(t) = \sum_{k} g_{l,k}(t)\, P_k(t) + \sigma^2,$$
in which the sum runs over all primary and secondary users $k$ transmitting in the area, $g_{l,k}(t)$ is the channel gain from user $k$ to microcell $l$, and $P_k(t)$ is the transmit power of user $k$.

Therefore, the RSSI of these 5 APs is integrated and used as the input layer of the LSTM-DQN, namely,
$$\mathbf{s}(t) = \bigl[\mathbf{o}_1(t), \mathbf{o}_2(t), \dots, \mathbf{o}_5(t)\bigr].$$
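A minimal sketch of how the per-AP RSSI observations could be stacked into the network input $\mathbf{s}(t)$; the array shapes follow the setting of Section 4 (5 APs, L = 10 microcells each), while the random readings are placeholders.

import numpy as np

N_APS, L_MICROCELLS = 5, 10

def build_state(rssi_per_ap):
    # Concatenate the RSSI vectors o_p(t) of all APs into one observation s(t).
    assert len(rssi_per_ap) == N_APS
    return np.concatenate([np.asarray(o, dtype=np.float32) for o in rssi_per_ap])

# Example: random RSSI readings (mW) from the 10 microcells of each AP.
rssi_per_ap = [np.random.rand(L_MICROCELLS) for _ in range(N_APS)]
state = build_state(rssi_per_ap)
print(state.shape)  # (50,) -> the input size of the LSTM-DQN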

3.1.2. Action Space

We add the set of SU transmit power levels into the action space, and the action of all SUs in AP $p$ at time $t$ is
$$\mathbf{a}_p(t) = \bigl[P_{p,1}^{\mathrm{SU}}(t), P_{p,2}^{\mathrm{SU}}(t), \dots\bigr],$$
where $P_{p,j}^{\mathrm{SU}}(t)$ represents the transmit power of SU $j$ in AP $p$.

Therefore, the action of all APs in the whole network environment is
$$\mathbf{a}(t) = \bigl[\bigl(m_1(t), \mathbf{a}_1(t)\bigr), \dots, \bigl(m_5(t), \mathbf{a}_5(t)\bigr)\bigr],$$
where $m_p(t) \in \mathcal{M}$ is the channel selected by AP $p$.

3.1.3. Reward Function

For the problem of channel allocation and power control, it is first necessary to ensure that the channels are selected by the APs without conflict. Specifically, APs 1 and 2 choose different channels, APs 3 and 4 choose different channels, and AP 5 can choose any channel. Only after the APs successfully select their channels can the users perform data transmission. It should also be ensured that both primary and secondary users in each AP meet the service quality requirements, i.e., their SINRs do not fall below the thresholds. According to these constraint conditions, the reward at AP $p$ is defined by the following equation:
$$r_p(t) = \begin{cases} \sum_{j} R_{p,j}(t), & \text{if the constraints are satisfied}, \\ \text{a negative penalty}, & \text{otherwise}, \end{cases}$$
where the constraints are given as follows: $\gamma_{p,i}^{\mathrm{PU}}(t) \ge \gamma_{\mathrm{th}}^{\mathrm{PU}}$ and $\gamma_{p,j}^{\mathrm{SU}}(t) \ge \gamma_{\mathrm{th}}^{\mathrm{SU}}$ for all users in AP $p$, and AP $p$ accesses an available channel without conflicting with its overlapping neighbours.

The reward function of the whole network system is
$$r(t) = \frac{1}{N} \sum_{p=1}^{N} r_p(t),$$
which represents the mean value of the rewards obtained by all APs.
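A hedged sketch of the reward logic described above: an AP earns its total SU throughput only when its channel choice is conflict-free and all SINR constraints hold, and the system reward is the mean over APs. The penalty value and the exact form of the constraint check are our assumptions, since the paper does not list them explicitly.

import numpy as np

def ap_reward(su_rates, pu_sinr, su_sinr, conflict_free,
              pu_thresh=1.0, su_thresh=0.5, penalty=-1.0):
    # Reward of one AP: total SU throughput if all constraints hold, otherwise a penalty.
    # Thresholds must be given in the same units as the supplied SINR values.
    ok = (conflict_free
          and np.all(np.asarray(pu_sinr) >= pu_thresh)
          and np.all(np.asarray(su_sinr) >= su_thresh))
    return float(np.sum(su_rates)) if ok else penalty

def network_reward(ap_rewards):
    # System reward r(t): mean of the per-AP rewards.
    return float(np.mean(ap_rewards))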

3.2. Power Control Strategy of PUs

We consider that the PUs can adjust their transmit power according to a specified control strategy and always transmit data on the authorized channels. The typical power control strategy proposed in [21] is given by equation (14), where the updated transmit power is no less than the minimum value of the predefined discretized power range.

We also adopt the more intelligent strategy proposed in [22], given by equation (15), which relies on the predicted SINR of PU $i$ at time $t + 1$.

When a PU follows the intelligent control strategy of equation (15), based on the current SINR at time $t$ and the predicted SINR at time $t + 1$, it needs to adjust its own transmit power only once. Therefore, the advantage of this intelligent strategy is that it reduces the extra energy consumption caused by frequent power switching. At the same time, it takes the estimated SINR trend into account to determine whether the PU should adjust its transmit power, and thus has a certain spectrum prediction capability.

In order to cope with the complexity of the network environment, PUs may have multiple alternative power control strategies rather than a single one and choose the appropriate strategy according to the actual situation. Equation (14) is denoted as power control strategy 1 of the PU, and equation (15) as strategy 2. We will discuss and analyse these strategies in detail in the experiments in Section 4.
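The sketch below illustrates, under our own assumptions about the interface, how a PU could hold both strategies as callables and randomly re-draw one of them whenever the environment parameters are reset, matching the mixed-strategy setting evaluated in Section 4.

import random

class PrimaryUser:
    def __init__(self, strategy_1, strategy_2, init_power):
        # strategy_1 / strategy_2: callables mapping (current_power, sinr_info) -> new_power.
        self.strategies = (strategy_1, strategy_2)
        self.active = strategy_1
        self.power = init_power

    def on_environment_reset(self, mixed=True):
        # Randomly re-select strategy 1 or 2 when the environment parameters change;
        # in the single-strategy case, always keep strategy 2.
        self.active = random.choice(self.strategies) if mixed else self.strategies[1]

    def update_power(self, sinr_info):
        self.power = self.active(self.power, sinr_info)
        return self.power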

3.3. LSTM-DQN-Based Joint Channel Allocation and Power Control Algorithm

LSTM is a special kind of recurrent neural network (RNN) [23]. As shown in Figure 3, the LSTM unit mainly includes the forget stage, the selective memory stage, and the output stage, which are realized through the forget gate, the input gate, and the output gate, respectively. The core of LSTM is to control the cell state through these three interacting gates. It can retain important but implicit knowledge over long periods and discard unnecessary information. Therefore, it performs well in alleviating the vanishing and exploding gradient problems that arise when training on long sequences.

On the one hand, it has been verified that the state information of the channels has a high degree of self-correlation, which may span a considerably long time interval from the current state [24]. On the other hand, there is great potential to improve the probability of successfully accessing the channels owing to the unique network structure of LSTM, because LSTM can effectively capture valuable knowledge that is not obvious. To track this implicit correlation over a long period of time, we combine LSTM with DQN (as shown in Figure 4) to integrate the collected partial information and obtain better control strategies through offline learning. Once the training phase is completed, the users only need to communicate with the central unit to slightly adjust the weights of the neural network. At each moment, the APs select the available channels and the SUs choose the optimal transmit power according to the trained DQN. The specific algorithm is shown in Algorithm 1.

(1)Initialization: the capacity O of replay memory D; the transmit powers of the PUs and SUs; the channel interference matrix $\mathbf{E}$; the weights $\theta$ of the estimation LSTM-DQN $Q$ and the weights $\theta^-$ of the target LSTM-DQN $\hat{Q}$
(2)For episode = 1 to E do
(3) According to the initial state $s_1$, SUs randomly select actions with probability $\varepsilon$; otherwise, with probability $1-\varepsilon$, they choose the actions that maximize the estimated Q value
(4)For t = 1 to T do
(5)  The PUs update the transmit power according to their own power control strategies
(6)  SUs randomly select actions with probability $\varepsilon$; otherwise, they select the action $a_t = \arg\max_a Q(s_t, a; \theta)$
(7)  Obtain the reward $r_t$ and the next state $s_{t+1}$
(8)  Save the experience tuple $(s_t, a_t, r_t, s_{t+1})$ to memory D
(9)  If the number of samples stored in D reaches the training threshold then
(10)   Select a random minibatch of training samples from D
(11)   Calculate the target value $y_t = r_t + \gamma \max_{a'} \hat{Q}(s_{t+1}, a'; \theta^-)$
(12)   Use the gradient descent method to minimize the loss function $L(\theta) = \bigl(y_t - Q(s_t, a_t; \theta)\bigr)^2$ and update the parameters $\theta$
(13)  End If
(14)End For
(15) Reset environment parameters randomly
(16)End For
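A compact PyTorch-style sketch of the training loop of Algorithm 1 (experience replay, ε-greedy exploration, and a periodically refreshed target network). The environment interface (env.reset, env.step, env.sample_action), the hyperparameter values, and the network class are placeholders we assume for illustration; a network matching the architecture described in Section 4.1 is sketched there.

import random, collections
import numpy as np
import torch
import torch.nn.functional as F

def train(env, q_net, target_net, episodes=10, steps=1000, capacity=1000,
          batch_size=128, gamma=0.9, eps=0.8, lr=1e-3, start_training=500):
    memory = collections.deque(maxlen=capacity)            # replay memory D
    optimizer = torch.optim.Adam(q_net.parameters(), lr=lr)
    target_net.load_state_dict(q_net.state_dict())

    for episode in range(episodes):
        state = env.reset()                                # environment parameters reset randomly
        for t in range(steps):
            # epsilon-greedy action selection (lines 3 and 6 of Algorithm 1)
            if random.random() < eps:
                action = env.sample_action()
            else:
                with torch.no_grad():
                    q_values = q_net(torch.as_tensor(state, dtype=torch.float32).unsqueeze(0))
                    action = int(q_values.argmax())
            next_state, reward, _ = env.step(action)       # PUs update their power inside env.step
            memory.append((state, action, reward, next_state))
            state = next_state

            if len(memory) >= start_training:              # line 9: enough samples stored
                batch = random.sample(memory, batch_size)
                s, a, r, s2 = zip(*batch)
                s = torch.as_tensor(np.stack(s), dtype=torch.float32)
                a = torch.as_tensor(a, dtype=torch.int64)
                r = torch.as_tensor(r, dtype=torch.float32)
                s2 = torch.as_tensor(np.stack(s2), dtype=torch.float32)
                with torch.no_grad():                      # line 11: target value from the target network
                    y = r + gamma * target_net(s2).max(dim=1).values
                q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
                loss = F.mse_loss(q, y)                    # line 12: gradient descent on the loss
                optimizer.zero_grad(); loss.backward(); optimizer.step()
        target_net.load_state_dict(q_net.state_dict())     # refresh the target LSTM-DQN

In the actual setup, ε would decay linearly from 0.8 to 0 over the iterations, as described in Section 4.1; the fixed value above is kept only for brevity.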

4. Performance Evaluation

In this section, we evaluate the performance of our proposed algorithm through simulation-based experiments.

4.1. Experiment Settings

In our simulated scenarios, there is a circular area with a radius of 1,000 m. 3 available channels are provided for 5 APs. AP 1 overlaps with AP 2, and AP 3 overlaps with AP 4. AP 5 is independent of the others. There are 10 microcells in the service range of each AP, where 1 PU and 2 SUs contend for access to the spectrum resources. Thus, the whole network environment includes one base station, 50 microcells, 5 PUs, and 10 SUs. Specifically, the transmit power range of the PU is , and the transmit power range of the SU is . The white noise is 0.1 mW. The SINR thresholds for primary and secondary users are 1.0 dB and 0.5 dB, respectively. According to the path loss rule of nonfree space, the channel model is the 2-ray ground reflection model of wireless propagation, and the channel gain expression is
$$g = \frac{G_t G_r h_t^2 h_r^2}{d^4},$$
where the path loss index is 4, $G_t$ and $G_r$ are the gains of the transmitter and receiver, respectively, $h_t$ and $h_r$ are the heights of the transmit and receive antennas, respectively, and $d$ is the distance between transmitter and receiver [20]. In order to simulate the complex changes of the environment, the number of iterations is set to 40,000. Furthermore, the positions of the primary and secondary users in the environment as well as the channel gains are randomly initialized every 10,000 iterations.
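A small helper implementing the 2-ray ground reflection gain used above; the antenna gains and heights below are placeholder values, since the paper does not list them.

def two_ray_gain(d, g_t=1.0, g_r=1.0, h_t=10.0, h_r=1.5):
    # Channel gain of the 2-ray ground reflection model: g = G_t * G_r * h_t^2 * h_r^2 / d^4.
    return (g_t * g_r * (h_t ** 2) * (h_r ** 2)) / (d ** 4)

print(two_ray_gain(100.0))  # gain at a transmitter-receiver distance of 100 m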

The LSTM-DQN is constructed with 5 hidden layers. The first hidden layer is the LSTM layer, and the middle 4 hidden layers are fully connected layers with 256, 128, 128, and 256 neurons, respectively. The second, third, and fourth hidden layers adopt the ReLU activation function, and the fifth hidden layer uses the tanh activation function. In addition, the Adam algorithm is used to update the weights of the neural network. The size of the training batch is set to 128. The initial exploration probability of the ε-greedy algorithm is 0.8 and linearly decreases to 0 with the number of iterations. Moreover, the replay memory has a capacity of 1,000, and training does not start until at least 500 samples have been stored.
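A possible PyTorch realization of the described architecture: one LSTM layer followed by fully connected layers of 256, 128, 128, and 256 neurons with ReLU activations and a tanh fifth hidden layer, topped by a linear Q-value head. The LSTM hidden size and the number of output actions are assumptions, as the paper does not specify them.

import torch
import torch.nn as nn

class LSTMDQN(nn.Module):
    # LSTM layer + four fully connected hidden layers (256-128-128-256) + linear Q-value head.
    def __init__(self, input_size=50, lstm_hidden=64, n_actions=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size, lstm_hidden, batch_first=True)
        self.fc = nn.Sequential(
            nn.Linear(lstm_hidden, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.Tanh(),
        )
        self.head = nn.Linear(256, n_actions)   # one Q value per joint action

    def forward(self, x):
        if x.dim() == 2:                        # (batch, features) -> add a length-1 sequence dimension
            x = x.unsqueeze(1)
        out, _ = self.lstm(x)                   # (batch, seq, lstm_hidden)
        return self.head(self.fc(out[:, -1, :]))

Such a network can be passed directly as q_net and target_net to the training sketch given after Algorithm 1, e.g., q_net = LSTMDQN(); target_net = LSTMDQN().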

To account for the dynamics and complexity of the application environment, we consider that the PUs take different power control strategies. In one case, the PUs use only control strategy 2. In the other case, each time the environmental parameters are updated, each PU randomly chooses strategy 1 or strategy 2. The proposed joint algorithm based on LSTM-DQN will be compared with two benchmark algorithms: the original DQN-based algorithm and the priority memory combined with DQN- (PM-DQN-) based algorithm.

4.2. Simulation Results

Figure 5 shows the loss function of the different algorithms when the PUs adopt control strategy 2, and Figure 6 plots the loss function when the PUs employ mixed control strategies. It can be seen that all of the algorithms converge after iterative learning. Our LSTM-DQN algorithm shows a large instantaneous fluctuation when the environmental parameters change, although it remains slightly better than the DQN benchmark in this respect. On the other hand, the PM-DQN-based algorithm has less fluctuation. This is because the PM greatly accelerates the convergence of the loss function by cutting off the correlation between samples, whereas the LSTM needs to correlate past experience, so its loss function does not converge to the minimum value as quickly. Nevertheless, this is meaningful for the joint channel allocation and power control problem, which does not have the Markov property. We explain this from other aspects below.

Figures 7 and 8 compare the cumulative rewards when the PUs adopt a single control strategy and mixed control strategies, respectively. It can be seen from the results that the reward of the benchmark algorithm is always decreasing, whereas the cumulative rewards of our LSTM-DQN and the PM-DQN-based algorithm are relatively stable. Moreover, the reward of LSTM-DQN is higher. It is worth noting that the cumulative reward of LSTM-DQN is close to or slightly above the horizontal line of 0, which indicates that the channel allocation and power control scheme still has room for further improvement in future work.

Figures 9 and 10 evaluate the switching success rate. If a user is able to access the channel and successfully complete its transmission task within 20 switches, it is deemed a successful experience. It can be concluded from the simulation results that our LSTM-DQN achieves the highest success rate and adjusts its strategy rapidly when the environment parameters are updated randomly. Moreover, when the PUs adopt the mixed strategy, the proposed algorithm still shows excellent robustness and desirable generalization ability.

Figures 11 and 12 depict the comparison of handover steps. We observe that, regardless of the control strategies adopted by the PUs, the proposed algorithm finds the optimal strategy after an average of one handover. This helps reduce energy consumption and greatly improves the responsiveness of the users, which can react more quickly to real-time changes in the environment. Moreover, when the environmental parameters are updated, the proposed algorithm shows good anti-interference performance and generalization ability.

We then analyse the cumulative channel conflicts shown in Figures 13 and 14. When the PUs take the single control strategy, the proposed algorithm and the PM-DQN-based algorithm perform similarly. When the PUs employ the mixed strategy, the LSTM-DQN-based algorithm further reduces channel conflicts. This shows that the proposed algorithm has good potential for dealing with complex conditions.

5. Conclusion and Future Work

Aiming at the joint design problem of channel allocation and power control in CRNs, this paper proposed a novel algorithm based on LSTM-DQN. We analysed the feasibility and implementation process of the proposed algorithm. Through simulation-based experiments, the advantages of the LSTM-DQN-based algorithm were discussed and illustrated in terms of the loss function, reward function, success rate, handover steps, and cumulative channel conflicts. In particular, our proposed method outperformed the other two DQN-based competitors.

Our future work will involve using real data to verify the feasibility of the algorithm. Moreover, various factors of the environment, e.g., the mobility of users, can be taken into account so as to further study large-scale spectrum resource management problems.

Data Availability

The data used to support the findings of this study are currently under embargo while the research findings are commercialized. Requests for data, 12 months after publication of this article, will be considered by the corresponding author.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant no. 61971147), Special Funds from Central Finance to support the development of local universities (Grant nos. 400170044 and 400180004), Foundation of National & Local Joint Engineering Research Center of Intelligent Manufacturing Cyber-Physical Systems, and Guangdong Provincial Key Laboratory of Cyber-Physical Systems (Grant no. 008).