
Author [dc.contributor.author]: Chen, Xianfu
Author [dc.contributor.author]: Wu, Jinsong
Author [dc.contributor.author]: Cai, Yueming
Author [dc.contributor.author]: Zhang, Honggang
Author [dc.contributor.author]: Chen, Tao
Accession date [dc.date.accessioned]: 2015-08-12T15:51:11Z
Available date [dc.date.available]: 2015-08-12T15:51:11Z
Publication date [dc.date.issued]: 2015
Citation [dc.identifier.citation]: IEEE Journal on Selected Areas in Communications, Vol. 33, No. 4, April 2015
Identifier [dc.identifier.issn]: 0733-8716
Identifier [dc.identifier.uri]: https://repositorio.uchile.cl/handle/2250/132644
General note [dc.description]: ISI publication article
Abstract [dc.description.abstract]: This paper first provides a brief survey of existing traffic offloading techniques in wireless networks. As a case study, we put forward an online reinforcement learning framework for the problem of traffic offloading in a stochastic heterogeneous cellular network (HCN), where the time-varying traffic in the network can be offloaded to nearby small cells. Our aim is to minimize the total discounted energy consumption of the HCN while maintaining the quality-of-service (QoS) experienced by mobile users. For each cell (i.e., a macro cell or a small cell), the energy consumption is determined by its system load, which is coupled with the system loads in other cells due to sharing of a common frequency band. We model the energy-aware traffic offloading problem in such HCNs as a discrete-time Markov decision process (DTMDP). Based on the traffic observations and the traffic offloading operations, the network controller gradually optimizes the traffic offloading strategy with no prior knowledge of the DTMDP statistics. Such a model-free learning framework is important, particularly when the state space is huge. To address the curse of dimensionality, we design a centralized Q-learning algorithm with compact state representation, named QC-learning. Moreover, a decentralized version of QC-learning is developed based on the fact that macro base stations (BSs) can independently manage the operations of local small-cell BSs by making use of the global network state information obtained from the network controller. Simulations are conducted to show the effectiveness of the derived centralized and decentralized QC-learning algorithms in balancing the tradeoff between energy saving and QoS satisfaction.
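The QC-learning approach described in the abstract combines standard Q-learning with a compact (feature-based) state representation. A minimal sketch of one such update rule is given below, assuming a linear approximation Q(s, a) = w_a · φ(s); the feature map, the toy single-state reward model, and all names here are illustrative assumptions for exposition, not the paper's actual algorithm or parameters.

```python
# Sketch of Q-learning with a linear (compact) state representation.
# The feature vector phi and the toy reward model are illustrative
# assumptions, not taken from the paper.

def q_value(w, phi, a):
    """Approximate Q(s, a) = w[a] . phi(s) via a per-action weight vector."""
    return sum(wi * xi for wi, xi in zip(w[a], phi))

def qc_update(w, phi, a, reward, phi_next, n_actions, alpha=0.1, gamma=0.9):
    """One temporal-difference step on action a's weight vector."""
    target = reward + gamma * max(q_value(w, phi_next, b) for b in range(n_actions))
    td_error = target - q_value(w, phi, a)
    for i, xi in enumerate(phi):
        w[a][i] += alpha * td_error * xi

# Toy single-state demo: offloading action 0 saves energy (reward 1),
# action 1 does not (reward 0); the fixed point is Q0 = 1/(1 - gamma) = 10.
w = [[0.0], [0.0]]   # one weight per action for a 1-dim feature vector
phi = [1.0]
for _ in range(5000):
    for a in (0, 1):
        qc_update(w, phi, a, 1.0 if a == 0 else 0.0, phi, n_actions=2)
```

With these illustrative settings the learned values converge toward Q(s, 0) = 10 and Q(s, 1) = 9, so the energy-saving action is correctly preferred; in the paper's setting the state features would instead summarize the network traffic loads.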
Sponsor [dc.description.sponsorship]: National Basic Research Program of China (973Green), Chinese Ministry of Education, Key Technologies R&D Program of China, France ANR
Language [dc.language.iso]: en
Publisher [dc.publisher]: IEEE
Type of license [dc.rights]: Atribución-NoComercial-SinDerivadas 3.0 Chile
Link to license [dc.rights.uri]: http://creativecommons.org/licenses/by-nc-nd/3.0/cl/
Keywords [dc.subject]: team Markov game
Keywords [dc.subject]: compact state representation
Keywords [dc.subject]: reinforcement learning
Keywords [dc.subject]: discrete-time Markov decision process
Keywords [dc.subject]: traffic load balancing
Keywords [dc.subject]: energy saving
Title [dc.title]: Energy-Efficiency Oriented Traffic Offloading in Wireless Networks: A Brief Survey and a Learning Approach for Heterogeneous Cellular Networks
Document type [dc.type]: Journal article


