Research on 5G cellular networks is ongoing to achieve unprecedented user-experienced bit rates, exploiting massive MIMO technology, machine learning for self-adapting audio/video streaming, and intelligent network policies for energy-harvesting base stations and devices.
The design of 5G systems is considered the next big challenge for the ICT community in the upcoming years. Besides increased bit rates and improved energy efficiency of the terminals and of the whole system, 5G aims at providing minimal latency for critical communication, seamless integration of IoT nodes, and support for massive Machine-to-Machine (M2M) communication, all without degrading the quality of experience of traditional services. Although the general requirements of 5G systems are progressively taking shape, also thanks to the industry-driven actions promoted by the H2020 framework of the EC, the technological issues raised by such a vision are still rather unclear. Nonetheless, general consensus has been reached on the importance of a few key approaches and technologies, including massive MIMO, millimeter-wave communication, machine learning for self-optimization of network parameters and policies, and energy-harvesting mechanisms for base stations and devices. While our interests extend over all such topics, our research activity is currently focused on a selected number of relevant challenges, as described below.
As the demand for higher data rates increases, one of the solutions available to operators is to reduce the size of the cells, thus increasing spectral efficiency through higher frequency reuse while reducing transmit power. Moreover, deploying small cells indoors may help improve wireless coverage where signal reception from the macro base station is difficult, and may contribute to offloading traffic from the macro cells when required. Small cells come in different flavors, with low-power femtocells typically used in residential and enterprise deployments, and higher-power picocells used for wider outdoor coverage or for filling macro-cell coverage holes. The concurrent operation of different classes of base stations is known as a Heterogeneous Network (HetNet), a configuration foreseen as the next generation of cellular network infrastructure. However, the coexistence of multiple types of access nodes, such as macro, pico, and femto base stations, raises new challenges because of the complex interference conditions created by node densification and self-deployed access. Our current research activity focuses on the use of context information to optimize resource utilization in HetNets. We started by studying the handover process, which in a HetNet scenario becomes particularly challenging due to the high variability of the coverage, transmit power, interference level, and traffic load of the cells. Current handover policies, which have served classical cellular networks well, reveal their limits in this new environment: they incur outage periods when the handover is delayed too long, or the ping-pong effect (i.e., rapid back-and-forth transitions between cells) when it is triggered too early.
To avoid these performance losses, the handover parameters need to be dynamically adapted to the context, i.e., the speed of the mobile users, the signal propagation coefficients for macro and femtocells, the location of the base stations, and so on. We are currently investigating these issues both mathematically and by using machine-learning techniques to infer the context parameters from the data available at the mobile user, i.e., its speed and direction and the power of the beaconing signals transmitted by the base stations. We have also developed a mathematical model that makes it possible to derive the performance of a non-causal optimal handover strategy, which will be used as a benchmark to assess the performance of the practical schemes we will propose.
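As a toy illustration of this kind of context adaptation, the following Python sketch adapts the hysteresis margin and time-to-trigger (TTT) of an A3-like handover event to the estimated user speed. All numerical values and speed thresholds here are purely illustrative assumptions, not standardized parameters nor the outcome of our model:

```python
def adapt_handover_params(speed_mps):
    """Return (hysteresis_db, ttt_ms) for a given estimated user speed.

    Intuition: fast users need a smaller hysteresis and shorter TTT to
    avoid outages near cell edges; slow users tolerate a longer TTT,
    which reduces the ping-pong effect. Values are illustrative only.
    """
    if speed_mps < 3:        # pedestrian
        return 3.0, 480
    elif speed_mps < 15:     # urban vehicular
        return 2.0, 160
    else:                    # highway
        return 1.0, 80


def should_handover(rsrp_serving_db, rsrp_target_db, above_margin_ms, speed_mps):
    """A3-style event check: trigger handover when the target cell has been
    stronger than the serving cell by the hysteresis margin for at least
    the time-to-trigger."""
    hyst, ttt = adapt_handover_params(speed_mps)
    return (rsrp_target_db - rsrp_serving_db > hyst) and (above_margin_ms >= ttt)
```

For example, a highway user (speed 30 m/s) whose target cell is 5 dB stronger for 200 ms would hand over immediately, while a pedestrian with a 2 dB advantage would not, avoiding a likely ping-pong transition.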
As telecommunication technology continues to evolve rapidly, fueling the growth of service coverage and capacity, new use cases and applications are being identified. Many of these new business areas (e.g., smart metering, in-car satellite navigation, e-health monitoring, smart cities) involve fully automated communication between devices, without human intervention. This new form of communication is generally referred to as Machine-to-Machine (M2M) communication, or Machine-Type Communication (MTC), while the devices involved are called Machine-Type Devices (MTDs), which include sensors, actuators, RF tags, smartphones, and so on. The M2M communication paradigm is expected to play a significant role in future networks, both because of the potentially huge number of MTDs that will be connected to cellular networks and because of the characteristics of machine-type traffic. Indeed, differently from traditional broadband services, M2M communication is expected to generate, in most cases, sporadic transmissions of short packets. While the data rate of a single M2M link is extremely low, the potentially huge number of MTDs that will gain connectivity through a single base station raises a number of issues related to signaling and control traffic, which may become the bottleneck of the system. As a matter of fact, today's standards for cellular networks are not designed to support massive MTD access, and would collapse under the weight of the signaling traffic. In addition, although transmissions from machine devices are in many cases delay tolerant (smart metering, telemetry), there is also an important class of applications that require ultra-low latency (e-health, vehicular communications). Furthermore, most MTDs are expected to be severely constrained in terms of computational and storage capabilities, and energy capacity.
This scenario, hence, raises a number of challenges that need to be addressed in the near future, including control overhead, energy efficiency, coverage extension, heterogeneous QoS support, robustness to malfunctioning devices, security, and scalability. The biggest challenge is to embed this type of traffic in the overall 5G architecture, so that M2M traffic can coexist with broadband data traffic. In this context, we are investigating the problem of managing massive access from a huge number of simple MTDs to a common, powerful base station capable of performing advanced reception processes, such as multi-packet reception and successive interference cancellation. We start from a theoretical analysis of the problem, with the aim of finding information-theoretic results that shed light on the best access strategy for this context. We will then move on to define practical access mechanisms, with the aim of maximizing the number of MTDs that can be served by a single base station with minimum energy expenditure.
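To give a flavor of how successive interference cancellation can boost massive random access, the following Python sketch simulates one frame of a CRDSA-like slotted ALOHA scheme: each device transmits two replicas of its packet in random slots, and the receiver iteratively decodes singleton slots and cancels the corresponding replicas elsewhere. This is an idealized collision-channel sketch for illustration only, not the access mechanism we propose:

```python
import random


def sic_slotted_aloha(n_devices, n_slots, replicas=2, seed=0):
    """Simulate one frame of slotted ALOHA with replica transmission and
    successive interference cancellation (SIC).

    Idealized model: a slot is decodable iff exactly one un-cancelled
    replica remains in it; decoding a device cancels all its replicas.
    Returns the number of devices successfully decoded.
    """
    rng = random.Random(seed)
    # slots[s] = set of devices that placed a replica in slot s
    slots = [set() for _ in range(n_slots)]
    for dev in range(n_devices):
        for s in rng.sample(range(n_slots), replicas):
            slots[s].add(dev)

    decoded = set()
    progress = True
    while progress:                      # iterate until no new decodings
        progress = False
        for s in slots:
            remaining = s - decoded      # replicas not yet cancelled
            if len(remaining) == 1:      # singleton slot: decode it,
                decoded |= remaining     # cancelling its other replica
                progress = True
    return len(decoded)
```

Running this with growing device populations shows the characteristic behavior of SIC-based access: far more devices are resolved per frame than with plain slotted ALOHA, until the load approaches the scheme's threshold and decoding stalls.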
In a nutshell, massive MIMO consists of using large arrays of antenna elements, with many more elements than typically used today, to provide diversity and compensate for path loss, thus making it possible to significantly increase the transmission capacity of the system as well as its spectral and energy efficiency. In addition, it provides many degrees of freedom, which can be exploited by means of beamforming when channel state information is available. Open issues in this area include the currently prohibitive cost, in terms of resource consumption, of channel estimation and feedback; the complex interplay of pilot contamination and the interference that massive MIMO suffers from other cells; and the lack of accurate channel models for massive MIMO systems.
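A minimal numerical sketch of the array gain behind these benefits, assuming perfect channel state information and i.i.d. Rayleigh fading (both idealizations): with maximum ratio transmission (MRT), the average received power grows roughly linearly with the number of antenna elements.

```python
import math
import random


def mrt_array_gain(n_antennas, trials=200, seed=0):
    """Estimate the average beamforming gain of maximum ratio transmission
    over i.i.d. Rayleigh channels.

    With MRT the unit-power precoder is w = h* / ||h||, so the received
    amplitude is h^T w = ||h|| and the gain is ||h||^2, whose mean equals
    the number of antennas. Idealized sketch: perfect CSI, no inter-cell
    interference or pilot contamination.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # i.i.d. CN(0, 1) channel vector h
        h = [complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
             for _ in range(n_antennas)]
        total += sum(abs(x) ** 2 for x in h)   # gain = ||h||^2
    return total / trials
```

Comparing, say, `mrt_array_gain(8)` with `mrt_array_gain(64)` shows the roughly eightfold increase in average received power; the open problem noted above is that harvesting this gain in practice requires channel estimates whose acquisition cost grows with the array size.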
- Alcatel-Lucent Bell Labs, NY, USA
- Telenor Group, Oslo, Norway