ARTIFICIAL INTELLIGENCE-BASED RADIO RESOURCE MANAGEMENT FOR WIRELESS NETWORKS
Cognitive radio is considered an effective solution to the problem of spectrum scarcity, since it allows secondary users (SUs) to opportunistically access licensed spectrum bands that are temporarily unused by the primary users (PUs). It has therefore attracted much attention from both the academic and industrial communities in recent years. In cognitive radio networks (CRNs), the SUs periodically sense the licensed channels for the presence of a PU and then transmit data on unoccupied channels.

In modern communication systems, the security of CRNs is critical, since legitimate communication in CRNs may be vulnerable to hidden eavesdroppers due to the open nature of the wireless medium. Furthermore, energy conservation is a primary concern in energy harvesting-powered CRNs, in which the SUs harvest energy from ambient sources such as solar power, wind power, and radio-frequency (RF) signals. Each energy-harvesting node must divide its limited energy among spectrum sensing, data processing, and data transmission.

To improve spectral efficiency and energy efficiency, network operators are deploying more and more small-cell radio networks with short-range, low-power base stations (BSs). Such deployments can enhance network coverage and capacity in highly populated areas; however, they also complicate resource allocation because of the stochastic behavior of mobile subscribers and the density of the networks. Hence, effectively managing scarce resources such as spectrum and energy is of critical importance in the design of energy harvesting-based wireless networks. Future wireless networks will become more intelligent with the assistance of artificial intelligence (AI) techniques such as machine learning (ML), optimization theory, game theory, and meta-heuristics.
Among these, reinforcement learning (RL) methods and deep neural networks (DNNs), two of the most important sub-fields of ML, are well known for their applications in wireless networks, where they have demonstrated clear advantages in network operation and optimization. It is therefore essential to employ these techniques in future mobile networks to ensure the long-term, maintenance-free operation of energy harvesting-based networks. In this dissertation, we study the application of AI techniques to efficient resource management and security improvement in energy harvesting-based wireless networks. We aim to find optimal resource management schemes that ensure long-term network performance.
In the first part of this dissertation, we investigate the problem of energy-efficient data communications in an energy-harvesting cognitive radio network, in which SUs harvest energy from solar power and opportunistically access a time-slotted primary channel for data transmission. However, legitimate communication can be vulnerable to external attacks carried out by hidden eavesdroppers. Therefore, we propose two energy-efficient data encryption schemes for an SU in CRNs to increase the security level under energy constraints. More specifically, based on the sensing result at the beginning of each time slot, the SU decides whether to stay silent to save energy or to transmit data to the destination. The SU also needs to choose an appropriate private-key data encryption method to maximize data security in the long run. In the first scheme, information about the environment (e.g., the activity of the PU and the model of harvested energy) is available to the SUs. Hence, we model the problem as a partially observable Markov decision process (POMDP) and solve it with a value iteration-based method. In the second scheme, the SU interacts with the environment through a sequential decision process. During this process, the SU decides its operation mode based on a reinforcement learning-based algorithm that maximizes its long-term data security.
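The value-iteration approach used in the first scheme can be illustrated with a minimal sketch. Everything in the model below is a hypothetical stand-in, not taken from the dissertation: a two-state Markov chain for PU activity, three SU actions (stay silent, transmit with light encryption, transmit with strong encryption), and illustrative reward and cost numbers. The SU maintains a belief that the channel is idle and runs value iteration over a discretized belief grid.

```python
import numpy as np

# Hypothetical parameters (illustrative only, not from the dissertation):
# two-state PU model: idle -> idle with prob p_ii, busy -> idle with prob p_bi
p_ii, p_bi = 0.9, 0.3
gamma = 0.95                      # discount factor
beliefs = np.linspace(0, 1, 101)  # discretized belief that the channel is idle

def reward(b, a):
    # a = 0: stay silent (save energy); a = 1/2: transmit with light/strong
    # encryption. Stronger encryption costs more energy but is more secure.
    if a == 0:
        return 0.0
    sec = [0.0, 0.5, 1.0][a]      # security value of the chosen encryption
    cost = [0.0, 0.2, 0.5][a]     # energy cost of transmitting with it
    return b * sec - cost         # security gain accrues only if truly idle

def next_belief(b):
    # one-step prediction of the idle probability under the PU Markov model
    return b * p_ii + (1 - b) * p_bi

V = np.zeros_like(beliefs)
for _ in range(200):              # value iteration until approximately converged
    nb = next_belief(beliefs)
    Vn = np.interp(nb, beliefs, V)        # interpolate V at predicted beliefs
    Q = np.stack([reward(beliefs, a) + gamma * Vn for a in range(3)])
    V = Q.max(axis=0)

policy = Q.argmax(axis=0)         # best action at each belief level
```

For brevity, the belief here evolves independently of the action; a full POMDP solver would also fold the sensing observation into the belief update.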
In the second part of this dissertation, we study an optimal power allocation policy for energy-efficient data transmissions in a wireless sensor network in the presence of a full-duplex (FD) eavesdropper. In this network, a sensor node (i.e., the source) powered by renewable energy wants to transmit data to a cluster head (i.e., the destination). The eavesdropper, with its FD capability, can opportunistically launch jamming attacks against the destination. We aim to find the optimal power allocation scheme for the source to maximize its long-term secrecy rate. We model the transmit power allocation problem as a Markov decision process (MDP) and investigate it in two scenarios. In the first scenario, assuming that the model of the harvested energy and of the eavesdropper's jamming activities is available to the system, we propose a POMDP-based method that solves the problem using value iteration-based dynamic programming. In the second scenario, we use a learning-based algorithm that lets the source find the optimal power allocation through interactions with the environment. We verify the effectiveness of the proposed schemes through numerical simulation results.
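As a rough illustration of the MDP formulation, the sketch below runs value iteration over a small state space of (battery level, jammer state). All numbers (battery size, power levels, channel gains, jamming and harvesting probabilities, the noise model) are hypothetical assumptions for the example, not values from the dissertation.

```python
import numpy as np
from itertools import product

# Hypothetical setup (illustrative numbers, not from the dissertation):
B_max = 5                           # battery capacity in energy units
powers = [0, 1, 2]                  # candidate transmit power levels
g_m, g_e = 2.0, 0.5                 # channel gains to destination / eavesdropper
p_jam = 0.3                         # prob. the FD eavesdropper jams a slot
p_harvest = 0.6                     # prob. one energy unit arrives per slot
gamma = 0.95                        # discount factor

def secrecy_rate(p, jammed):
    noise = 2.0 if jammed else 1.0  # jamming raises noise at the destination
    r = np.log2(1 + p * g_m / noise) - np.log2(1 + p * g_e)
    return max(r, 0.0)              # secrecy rate is nonnegative

states = list(product(range(B_max + 1), [0, 1]))   # (battery, jammer on/off)
V = {s: 0.0 for s in states}

for _ in range(300):                # value iteration
    Vn = {}
    for (b, j) in states:
        best = -np.inf
        for p in powers:
            if p > b:
                continue            # cannot spend more energy than stored
            r = secrecy_rate(p, j)
            ev = 0.0                # expected value of the next state
            for harv, ph in [(1, p_harvest), (0, 1 - p_harvest)]:
                b2 = min(b - p + harv, B_max)
                for j2, pj in [(1, p_jam), (0, 1 - p_jam)]:
                    ev += ph * pj * V[(b2, j2)]
            best = max(best, r + gamma * ev)
        Vn[(b, j)] = best
    V = Vn
```

The resulting values behave as the formulation suggests: a fuller battery is never worse, and a slot in which the eavesdropper jams is never better than one in which it stays silent.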
The third part of this dissertation presents reinforcement learning-based methods for efficient resource allocation and user scheduling in small-cell networks with energy harvesting. First, we investigate the problem of bandwidth allocation for an operation controller in hierarchical cellular networks consisting of several small-cell base stations (SBSs) powered by energy harvesters. We aim to find the optimal bandwidth allocation policy that enhances user satisfaction and energy efficiency within the constraints of energy harvesting and bandwidth sharing. However, the arrivals of harvested energy and traffic requests are unknown in advance, so the controller needs a learning algorithm to predict the system dynamics before making bandwidth allocation decisions. Therefore, we employ a natural actor-critic algorithm to help the controller effectively allocate bandwidth to the SBSs. Then, we introduce an actor-critic deep learning framework for efficient user association and bandwidth allocation in dense mobile networks with green base stations. The agent of the proposed algorithm learns about the evolution of the environment through trial-and-error interaction. In this framework, we use deep neural networks to approximate the policy and the value functions so that the algorithm scales to large problems. Simulation results show that the proposed methods improve network performance in the long run.
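A tabular actor-critic update of the kind described above can be sketched as follows. The toy environment (harvested-energy levels as states, bandwidth shares as actions, and a reward that peaks when the allocation matches the available energy) is purely illustrative, and a natural actor-critic would additionally precondition the policy gradient with the Fisher information matrix, which is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the SBS bandwidth controller (illustrative assumptions):
# states = harvested-energy levels, actions = bandwidth shares.
n_states, n_actions = 3, 3

def step(s, a):
    r = 1.0 - abs(s - a) * 0.5     # reward peaks when allocation matches energy
    s2 = rng.integers(n_states)    # energy arrivals evolve randomly
    return r, s2

theta = np.zeros((n_states, n_actions))  # actor: softmax policy parameters
V = np.zeros(n_states)                   # critic: state-value estimates
alpha_a, alpha_c, gamma = 0.05, 0.1, 0.9

s = 0
for _ in range(20000):
    # sample an action from the softmax policy for the current state
    probs = np.exp(theta[s] - theta[s].max())
    probs /= probs.sum()
    a = rng.choice(n_actions, p=probs)
    r, s2 = step(s, a)
    td = r + gamma * V[s2] - V[s]        # TD error as the advantage estimate
    V[s] += alpha_c * td                 # critic update
    grad = -probs
    grad[a] += 1.0                       # gradient of log softmax(a | s)
    theta[s] += alpha_a * td * grad      # actor update (policy gradient)
    s = s2

greedy = theta.argmax(axis=1)            # learned allocation per energy level
```

After training, the greedy policy matches each energy level to the corresponding bandwidth share, which is the optimum of this toy reward.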
Then, we consider the problem of resource sharing in wireless virtualized networks with energy harvesting, where several virtual network operators (VNOs) lease spectrum resources from a mobile network operator (MNO) to provide data services to their subscribers. We aim to find optimal spectrum leasing schemes based on deep reinforcement learning (DRL) algorithms that help the VNOs provide users with the best performance while minimizing leasing costs. Since the spectrum resources are limited, the VNOs must compete for them by announcing their requested spectrum sizes to the MNO. We investigate the spectrum competition problem in both regular virtualized networks and cognitive virtualized networks with energy-harvesting base stations. In the first scenario, each VNO leases spectrum only through a long-term contract with the MNO. In the second scenario, the VNOs can obtain spectrum resources via both spectrum sensing and leasing contracts. We formulate the resource leasing problem in both scenarios as a sequential decision-making process. We then develop a DRL algorithm, which combines DNNs and RL, that lets a VNO learn the optimal leasing policy by interacting with the environment. We compare the performance of the proposed methods with traditional learning and non-learning methods.
Finally, we summarize the main contributions of this dissertation and discuss future research directions regarding deep reinforcement learning and its applications in modern wireless networks.
- 도 빈 쾅