Keywords: Congestion control, Video streaming, Wireless networks, Deep reinforcement learning
Date: 2019-01-03
Type of Degree: PhD Dissertation
Department: Electrical and Computer Engineering
Abstract
With the rapid growth of wireless technologies such as 4G-LTE, cognitive radio networks, and 5G and beyond, data transport in wireless environments has increased dramatically and has taken the dominant role over wired networks. The advantages of wireless networks are obvious: easier deployment, ubiquitous access, and so on. However, traditional transport layer protocols such as the Transmission Control Protocol (TCP) are well known to perform poorly in wireless environments. This is because packets may be dropped due to transmission errors or broken connections, in addition to congestion-caused buffer overflow. Furthermore, the capacity variation of wireless links also affects TCP performance. One of the key applications of TCP today is video streaming, which has come to dominate mobile data traffic, accounting for over 60 percent in 2016 and expected to exceed 75 percent of mobile data by 2021. At the same time, rapidly growing overall mobile traffic (which has increased 18-fold since 2011) and the large number of mobile devices (429 million were added in 2016) make mobile video streaming a great challenge. The inherent instability of wireless link capacity makes the situation even worse. There is thus a compelling need to address the problem of how to achieve efficient and robust video delivery over emerging wireless networks, a problem that should be studied from both the wireless infrastructure and the user equipment perspectives.

In this dissertation, we first investigate how to enhance TCP performance in wireless environments, specifically in infrastructure-based cognitive radio networks (CRNs). We study the problem of robust congestion control in such networks and develop an active queue management (AQM) algorithm, termed MAQ, based on multiple model predictive control (MMPC). The goal is to stabilize the TCP queue at the base station (BS) under disturbances from the time-varying service capacity available to secondary users (SUs). The proposed MAQ scheme is validated with extensive simulation studies under various types of background traffic and system/network configurations, and it outperforms two benchmark schemes with considerable gains in all the scenarios considered.

We then investigate the dynamics of today's complex networking systems and propose a smart congestion control algorithm based on recent progress in artificial intelligence, namely deep reinforcement learning (DRL). To accommodate a wide range of network types and goals, we first propose a general DRL-based congestion control framework and, drawing on our understanding of the congestion control problem, define its states, actions, and reward. Recent advances in deep learning, such as convolutional neural networks (CNNs) and long short-term memory (LSTM), are also leveraged to address the challenges of the congestion control problem. Extensive NS3 simulation studies are conducted, which validate the superiority of this method in all scenarios.
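To make the framework concrete, the following is a minimal sketch of how such a DRL congestion-control agent could be structured. The state (a history of throughput, RTT, and loss-rate samples), the discrete congestion-window action set, the reward weights, and the network sizes are all illustrative assumptions for demonstration, not the exact definitions used in the dissertation.

    # Minimal sketch of a DRL congestion-control agent (illustrative only).
    # State, actions, and reward below are assumptions, not the dissertation's design.
    import torch
    import torch.nn as nn

    HISTORY = 10             # past measurement intervals fed to the agent
    STATE_DIM = 3            # (throughput, RTT, loss rate) per interval
    ACTIONS = [-10, 0, 10]   # cwnd change in packets: decrease / hold / increase

    class PolicyNet(nn.Module):
        """LSTM over the measurement history, then a linear policy head."""
        def __init__(self, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(STATE_DIM, hidden, batch_first=True)
            self.head = nn.Linear(hidden, len(ACTIONS))

        def forward(self, states):           # states: (batch, HISTORY, STATE_DIM)
            out, _ = self.lstm(states)
            return self.head(out[:, -1, :])  # logits over the discrete actions

    def reward(throughput, rtt, loss, a=1.0, b=0.5, c=10.0):
        # Hypothetical reward: favor throughput, penalize delay and loss.
        return a * throughput - b * rtt - c * loss

    policy = PolicyNet()
    history = torch.randn(1, HISTORY, STATE_DIM)  # stand-in for real measurements
    dist = torch.distributions.Categorical(logits=policy(history))
    print("cwnd adjustment:", ACTIONS[dist.sample().item()])

In a full training loop, the sampled action would be applied to the sender, the resulting measurements would form the next state, and the reward would drive a standard policy-gradient or Q-learning update.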
Finally, we study the problem of effective and robust video delivery over orthogonal frequency-division multiple access (OFDMA) wireless networks. Measurements of real network capacity and request interval times are presented to motivate the estimation of both quantities. We first formulate an offline cross-layer optimization problem. We then propose an online transformation and prove that it asymptotically converges to the offline solution. After analyzing the structure of the optimization problem, we propose a primal decomposition scheme, termed DORA, for the joint DASH bit rate adaptation and OFDMA resource allocation problem. Furthermore, we introduce stochastic model predictive control (SMPC) to achieve better robustness in bit rate adaptation and to account for the request interval time in resource allocation (DORA-RI). Simulations are conducted, and the efficacy and robustness of the proposed schemes are validated.
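As a rough illustration of the SMPC idea behind such a scheme, the sketch below performs one scenario-based bit rate decision: it samples candidate throughput traces, simulates playback buffer evolution over a short horizon, and picks the bitrate that maximizes an expected QoE surrogate. The bitrate ladder, segment duration, penalty weights, and Gaussian capacity model are hypothetical stand-ins, not the dissertation's formulation.

    # Toy scenario-based stochastic MPC step for DASH bit rate adaptation
    # (illustrative; ladder, penalties, and throughput model are assumptions).
    import random

    BITRATES = [0.35, 0.75, 1.2, 2.4, 4.8]  # Mbps ladder (hypothetical)
    SEG = 4.0                                # segment duration in seconds
    HORIZON = 5                              # look-ahead segments
    SCENARIOS = 50                           # sampled throughput traces

    def sample_throughput(mean, std):
        """Crude stochastic capacity model; a real system would fit measurements."""
        return [max(0.1, random.gauss(mean, std)) for _ in range(HORIZON)]

    def qoe(rates, buffer, caps, last_rate, rebuf_pen=8.0, switch_pen=1.0):
        """QoE surrogate: rate utility minus rebuffering and switching penalties."""
        score = 0.0
        for r, c in zip(rates, caps):
            download = SEG * r / c               # seconds to fetch the segment
            rebuf = max(0.0, download - buffer)  # stall time if the buffer drains
            buffer = max(0.0, buffer - download) + SEG
            score += r - rebuf_pen * rebuf - switch_pen * abs(r - last_rate)
            last_rate = r
        return score

    def smpc_step(buffer, last_rate, mean_cap, std_cap):
        # Pick the bitrate maximizing average QoE over sampled scenarios,
        # holding the rest of the horizon at the same rate (a simplification).
        best, best_score = BITRATES[0], float("-inf")
        for r in BITRATES:
            avg = sum(qoe([r] * HORIZON, buffer,
                          sample_throughput(mean_cap, std_cap), last_rate)
                      for _ in range(SCENARIOS)) / SCENARIOS
            if avg > best_score:
                best, best_score = r, avg
        return best

    print("next bitrate:", smpc_step(buffer=8.0, last_rate=1.2,
                                     mean_cap=3.0, std_cap=1.0))

Averaging the objective over sampled capacity traces, rather than optimizing against a single point forecast, is what gives the stochastic MPC formulation its robustness to capacity variation.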