The SIGNET Group at the Department of Information Engineering, University of Padova, is mainly active in SIGnal and NETworking research. This page presents the research areas of the group.
Research on 5G cellular networks is ongoing to achieve unprecedented user-experienced bit rates, exploiting massive MIMO technology, machine learning for self-adapting audio/video streaming, and intelligent network policies for energy-harvesting base stations and devices.
The design of 5G systems is considered the next big challenge for the ICT community in the upcoming years. Besides increased bit rates and energy efficiency of the terminals and of the whole system, 5G aims at providing minimal latency for critical communications, seamless integration of IoT nodes, and support for massive Machine-to-Machine (M2M) communication, all without degrading the quality of experience for traditional services. Although the general requirements of 5G systems are progressively taking shape, also thanks to the industry-driven actions promoted by the H2020 framework of the EC, the technological issues raised by such a vision are still quite foggy. Nonetheless, general consensus has been reached on the importance of a few key approaches and technologies, including massive MIMO, millimeter-wave communication, machine learning for self-optimization of network parameters and policies, and energy harvesting mechanisms for base stations and devices. While our interests extend over all such topics, our research activity is currently focused on a selected number of relevant challenges, as described below.
As the demand for higher data rates increases, one of the solutions available to operators is to reduce the size of the cell, thus increasing the spectral efficiency by enabling higher frequency reuse, while reducing transmit power. Moreover, the indoor deployment of small cells may help improve the wireless coverage where signal reception from the macro base station is difficult, and may contribute to offloading traffic from the macro cells when required. Small cells come in different flavors, with low-power femtocells typically used in residential and enterprise deployments, and higher-power picocells used for wider outdoor coverage, or for filling macro cell coverage holes. The concurrent operation of different classes of base stations is known as a Heterogeneous Network (HetNet). This configuration is foreseen as the next generation of cellular network infrastructure. However, the coexistence of multiple types of access nodes, such as macro, pico and femto base stations, raises new challenges because of the complex interference conditions created by node densification and self-deployed access. Our current research activity focuses on the use of context information to optimize resource utilization in HetNets. We started by studying the handover process which, in a HetNet scenario, becomes particularly challenging due to the high variability of the coverage, transmit power, interference level, and traffic load of the cells. Current handover policies, which have served classical cellular networks excellently, reveal all their limits in this new environment, incurring outage periods when the handover is delayed for too long, or the ping-pong effect (i.e., quick transitions back and forth between cells) when the handover is triggered too early. To avoid these performance losses, the handover parameters need to be dynamically adapted to the context, i.e., the speed of the mobile users, the signal propagation coefficients for macro and femtocells, the location of the base stations, and so on. We are currently investigating these issues both mathematically and by using machine-learning techniques to infer the context parameters from the data available at the mobile user, i.e., its speed and direction, and the power of the beaconing signals transmitted by the base stations. We have also developed a mathematical model that makes it possible to derive the performance of a non-causal optimal handover strategy, which will be used as a benchmark to assess the performance of the practical schemes we will propose.
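To make the idea of context-adaptive handover parameters concrete, the following is a minimal sketch (not our actual scheme; all parameter values and the speed-to-parameter mapping are illustrative assumptions): a standard A3-style trigger fires when the target cell exceeds the serving cell by a hysteresis margin for a time-to-trigger (TTT) window, and both parameters shrink as the estimated user speed grows, trading ping-pong risk for lower outage.

```python
# Minimal sketch of a context-adaptive handover rule (illustrative only, not
# the scheme studied by the group). Hysteresis and TTT shrink with user speed.

def handover_params(speed_mps,
                    base_hyst_db=3.0, base_ttt_s=0.48,
                    min_hyst_db=0.5, min_ttt_s=0.04):
    """Map estimated user speed to (hysteresis [dB], TTT [s]). Assumed shape."""
    scale = 1.0 / (1.0 + speed_mps / 10.0)   # faster user -> more eager handover
    hyst = max(min_hyst_db, base_hyst_db * scale)
    ttt = max(min_ttt_s, base_ttt_s * scale)
    return hyst, ttt

def should_handover(serving_rsrp_db, target_rsrp_db, speed_mps, timer_s, dt_s):
    """Return (handover_now, updated_timer). Call once per measurement period."""
    hyst, ttt = handover_params(speed_mps)
    if target_rsrp_db > serving_rsrp_db + hyst:
        timer_s += dt_s                      # A3 condition holds: run the TTT timer
    else:
        timer_s = 0.0                        # condition broken: reset the timer
    return timer_s >= ttt, timer_s
```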
As telecommunication technology continues to evolve rapidly, fueling the growth of service coverage and capacity, new use cases and applications are being identified. Many of these new business areas (e.g., smart metering, in-car satellite navigation, e-health monitoring, smart cities) involve fully automated communication between devices, without human intervention. This new form of communication is generally referred to as Machine-to-Machine (M2M) communication, or Machine-Type Communication (MTC), while the devices involved in this type of communication are called Machine-Type Devices (MTDs), which include sensors, actuators, RF tags, smartphones, and so on. The M2M communication paradigm is expected to play a significant role in future networks, both because of the potentially huge number of MTDs that shall be connected to cellular networks and because of the characteristics of machine-type traffic. Indeed, differently from traditional broadband services, M2M communication is expected to generate, in most cases, sporadic transmissions of short packets. While the data rate of a single M2M link is extremely low, the potentially huge number of MTDs that shall gain connectivity through a single base station will raise a number of issues related to signaling and control traffic, which may become the bottleneck of the system. As a matter of fact, today's standards for cellular networks are not designed to support massive MTD access, and would collapse under the weight of the signaling traffic. In addition, although transmissions from machine devices are, in many cases, delay tolerant (smart metering, telemetry), there is also an important class of applications that require ultra-low latency (e-health, vehicular communications). Furthermore, most MTDs are expected to be severely constrained in terms of computational and storage capabilities, and energy capacity. This scenario, hence, raises a number of challenges that need to be addressed in the near future, including: control overhead, energy efficiency, coverage extension, heterogeneous QoS support, robustness to malfunctioning devices, security, and scalability. The biggest challenge is to embed this type of traffic in the overall 5G architecture, so that M2M traffic can coexist with broadband data traffic. In this scenario, we are investigating the problem of managing massive access from a huge number of simple MTDs to a common powerful base station, capable of performing advanced reception processes such as multi-packet reception, successive interference cancellation, and so on. We start from a theoretical analysis of the problem, with the aim of finding information-theoretic results that shed light on the best access strategy to be used in this context. Then, we will move forward to define practical access mechanisms, with the aim of maximizing the number of MTDs that can be served by a single base station with minimum energy expenditure.
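A toy experiment along these lines, meant only to illustrate why receiver capabilities matter for massive access (real analyses, and SIC-based schemes such as CRDSA/IRSA, are far more involved): compare plain slotted ALOHA against a receiver that can decode up to K simultaneous packets per slot.

```python
# Toy Monte Carlo: slotted ALOHA throughput with a K-packet multi-packet
# reception (MPR) receiver. Illustrative only; not the group's analysis.
import numpy as np

def throughput(n_devices, p_tx, mpr_k, n_slots=100_000, seed=1):
    """Average decoded packets per slot when each device transmits i.i.d.
    with probability p_tx and the receiver decodes slots with <= mpr_k packets."""
    rng = np.random.default_rng(seed)
    arrivals = rng.binomial(n_devices, p_tx, size=n_slots)
    ok = (arrivals > 0) & (arrivals <= mpr_k)      # decodable slots
    return arrivals[ok].sum() / n_slots

if __name__ == "__main__":
    n, p = 1000, 1.0 / 1000          # average load: one packet per slot
    for k in (1, 2, 4, 8):
        print(f"MPR capability K={k}: throughput ~ {throughput(n, p, k):.3f} pkt/slot")
```

With K=1 the classic slotted ALOHA limit of about 0.37 packets/slot appears; raising K quickly pushes the throughput toward the offered load.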
In a nutshell, massive MIMO consists in using large arrays of antenna elements, with many more elements than typically used today, to provide diversity and compensate for path loss, thus making it possible to significantly increase the transmission capacity of the system as well as its spectral and energy efficiency. In addition, it provides many degrees of freedom, which can be exploited by means of beamforming when channel state information is available. Open issues in this area include the currently prohibitive cost, in terms of resource consumption, of channel estimation and feedback, the complex interaction of pilot contamination and inter-cell interference that massive MIMO suffers from, and the lack of accurate channel models for massive MIMO systems.
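A small numerical illustration of the array gain at the heart of massive MIMO (a textbook property, not a result of ours): with maximum ratio transmission over an i.i.d. Rayleigh channel, the average post-beamforming SNR grows linearly with the number of antennas M.

```python
# Array gain of maximum ratio transmission (MRT) over i.i.d. Rayleigh fading:
# average receive SNR scales as M, the number of base station antennas.
import numpy as np

rng = np.random.default_rng(0)

def avg_mrt_snr(m_antennas, snr0=1.0, n_trials=10_000):
    """Average post-beamforming SNR, in units of the single-antenna SNR snr0."""
    # h ~ CN(0, I_M); the MRT precoder w = h*/||h|| yields |h^H w|^2 = ||h||^2
    h = (rng.standard_normal((n_trials, m_antennas)) +
         1j * rng.standard_normal((n_trials, m_antennas))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1)     # ||h||^2, with mean M
    return snr0 * gain.mean()

for m in (1, 8, 64, 256):
    print(f"M={m:4d} antennas -> average SNR gain ~ {avg_mrt_snr(m):.1f}x")
```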
Human Sensing
Example research results:
Modern wearable devices allow monitoring vital parameters such as heart or respiratory rate, electrocardiographic, photo-plethysmographic, or even video signals, and are being massively commercialized in the consumer electronics market. A common issue of wearable technology is that signal processing and transmission are power demanding and, as such, require frequent battery charges. In our research, we consider biometric signal compression as a means to boost the battery life of wearables, while still allowing for fine-grained and long-term monitoring applications. We have proposed a few algorithms based on different approaches: 1) online motif extraction and pattern identification, 2) online and subject-adaptive dictionaries, and 3) denoising autoencoders. These techniques are compared with other recent algorithms from the literature based on compressive sensing, discrete cosine and wavelet transforms, principal component analysis, and lightweight temporal compression. As we quantify in our performance evaluation, our algorithms allow reductions in signal size of up to 70 (dictionary-based) or 100 (autoencoders) times, and obtain similar reductions in energy demand, while keeping the reconstruction error within 4% of the peak-to-peak signal amplitude.
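To make the error metric concrete, here is a sketch of one of the literature baselines mentioned above (DCT-based lossy compression) on a synthetic trace; the signal, the 5% retention ratio, and the code are illustrative assumptions, not our evaluation setup.

```python
# DCT-based lossy compression baseline: keep only the largest-magnitude
# transform coefficients, reconstruct, and report the error as a percentage
# of the peak-to-peak signal amplitude. Synthetic toy signal.
import numpy as np
from scipy.fft import dct, idct

def dct_compress(signal, keep_ratio=0.05):
    """Zero out all but the largest-magnitude DCT coefficients."""
    coeffs = dct(signal, norm="ortho")
    k = max(1, int(keep_ratio * len(coeffs)))
    smallest = np.argsort(np.abs(coeffs))[:-k]   # indices of discarded coefficients
    coeffs[smallest] = 0.0
    return idct(coeffs, norm="ortho")

t = np.linspace(0, 10, 2000)
trace = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 8 * t)  # toy biosignal
rec = dct_compress(trace, keep_ratio=0.05)               # ~20x fewer coefficients
err = np.max(np.abs(trace - rec)) / np.ptp(trace) * 100
print(f"max reconstruction error: {err:.2f}% of peak-to-peak amplitude")
```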
We are investigating user identification techniques based on inertial signals from wearable devices, such as smartphones. The goal is to recognize a target user from their way of walking, using the accelerometer and gyroscope (inertial) signals provided by a commercial smartphone worn in the front pocket of the user's trousers. Our design features several innovations, including: a robust and smartphone-orientation-independent walking cycle extraction block, a novel feature extractor based on convolutional neural networks, a one-class support vector machine to classify walking cycles, and the coherent integration of these components into a multi-stage authentication system. To the best of our knowledge, our system is the first to exploit convolutional neural networks as universal feature extractors for gait recognition, and to fuse the classification results from subsequent walking cycles in a multi-stage decision-making framework. Experimental results show the superiority of our approach against state-of-the-art techniques, leading to misclassification rates (either false negatives or false positives) smaller than 0.15% in fewer than five walking cycles.
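A hedged sketch of the last two stages of such a pipeline, to show how the pieces fit: a one-class SVM scores per-cycle feature vectors, and scores from consecutive cycles are pooled before accepting or rejecting the user. The feature vectors below are synthetic stand-ins for CNN-extracted gait features, and the threshold and distributions are invented; this is not IDNet's actual code.

```python
# One-class SVM on per-cycle gait features, plus a simple multi-cycle decision.
# Synthetic features stand in for CNN embeddings; all numbers are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
dim = 16                                              # assumed feature dimension
target_train = rng.normal(0.0, 1.0, (200, dim))       # enrolled user's cycles
impostor = rng.normal(2.5, 1.0, (50, dim))            # a different walker

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(target_train)

def multi_stage_decision(cycles, accept_thr=0.0):
    """Accept iff the mean SVM score over several walking cycles is positive."""
    scores = clf.decision_function(cycles)   # > 0 means 'looks like the target'
    return scores.mean() > accept_thr

print("target accepted:  ", multi_stage_decision(rng.normal(0.0, 1.0, (5, dim))))
print("impostor accepted:", multi_stage_decision(impostor[:5]))
```

Pooling scores over several cycles is what drives the error rate down: a single borderline cycle is outvoted by its neighbors.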
The signals used to design and test IDNet are freely downloadable here below. Our dataset features accelerometer and gyroscope signals collected from fifty users through a number of different smartphones. Motion traces were acquired wearing the smartphone in the front pocket of the user's trousers. Multiple acquisition sessions were carried out for each user to account for different types of terrain and clothes. The collected traces were anonymized and organized in the “.tar.gz” archive below.
For further publications see human data analysis papers.
Internet of Things

The term Internet of Things (IoT) describes a number of technologies and research disciplines that will enable the Internet to reach out to the real world of physical objects. According to the IoT paradigm, physical objects will be equipped with communication capabilities, which will be exploited to coordinate their actions and the way these objects influence the surrounding physical space. For example, books in a library could be equipped with RFID tags: in this way, each book could be precisely located by a system deployed in the library and, by communicating this information through the Internet, it would be possible to know the physical location of the book (or of objects in general) from anywhere, by simply accessing these data.
A meaningful application is that of smart cities, where a smart lighting system could automatically turn street lights on and off according to the actual lighting conditions, or could be easily configured through specific applications; a smart parking system could suggest drivers where to find a spot for their car, while sensors in the streets could report traffic congestion. In homes and buildings, networks of sensors will be able to acquire room temperature and humidity and use this information to control heating and ventilation. Wearable devices (smartwatches, wristbands) permit the collection of biometric data (e.g., heart rate, oxygen level, respiration, blood pressure, etc.) that can be used to help address the individual health and fitness needs of the users. Agriculture and breeding farms could also benefit from sense-elaborate-control systems, for more efficient usage of water resources (e.g., based on soil moisture), control of production quality, and localization of animals and monitoring of their vital parameters.
Last but not least, IoT can be applied in industrial plants to realize some of the requirements at the basis of Industry 4.0: also in this case, only through a system that connects all the machines and production assets together will it be possible to accurately monitor production quality, regulate production processes (such as heating/cooling procedures and packaging), check the use of the machines, track storage and, through proper control, avoid the waste of energy and raw materials and improve safety conditions.
Many technologies are envisaged to face the diverse requirements of IoT, which can be specified for each particular application. Systems for sensing applications, for example, will need to host thousands of devices. Also, these nodes will likely be spread over a large area (e.g., a city, or a big farm), so their communication range should be large, in order to minimize the amount of infrastructure needed. To lower maintenance costs, IoT devices should also have a long lifetime. All this translates into the network requirements of scalability, long communication range, energy efficiency, and low device cost.
The so-called Low-Power Wide-Area Networks (LPWANs) are a family of technologies that share the features of long communication range and low energy consumption, features that make them the most suitable candidates for the IoT scenarios described above.
In our research group we are studying LPWAN technologies, analyzing how they perform in terms of scalability, energy efficiency, and communication reliability under different use cases. In particular, we are studying LoRaWAN, a widely deployed LPWAN technology using unlicensed frequency bands, and NB-IoT, a cellular technology enabling multiple IoT services offered by large network operators.
For the analysis of LoRaWAN, we also developed the lorawan module for the network simulator ns-3, which made it possible to gain insights into the network protocol, point out some weaknesses, and propose possible improvements. The software is publicly available and is helping the scientific community in LoRaWAN evaluations.
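A basic quantity in such evaluations is the time on air of a LoRa frame, which determines both channel occupancy (and thus scalability under duty-cycle limits) and energy consumption. The sketch below computes it from the formula in Semtech's SX127x documentation; it is a standalone illustration, not an excerpt from our ns-3 module.

```python
# Time on air of a LoRa frame, following Semtech's SX127x modem formula.
import math

def lora_time_on_air(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                     preamble_syms=8, explicit_header=True, crc=True):
    """Return frame duration in seconds. cr=1..4 maps to coding rates 4/5..4/8."""
    t_sym = (2 ** sf) / bw_hz                          # symbol duration
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0   # low data rate optimization
    ih = 0 if explicit_header else 1
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_syms + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

for sf in (7, 9, 12):
    print(f"SF{sf}: a 20-byte frame lasts {lora_time_on_air(20, sf=sf)*1000:.1f} ms")
```

The steep growth with the spreading factor (tens of milliseconds at SF7, over a second at SF12) is what makes spreading factor allocation central to LoRaWAN scalability studies.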

We are working on real-time processing of human data from wearable and portable IoT devices for e-health and wellbeing applications. We are designing algorithms and developing lightweight, integrated software for on-device, real-time processing of human data, so that these data can be effectively processed, stored in the limited memory of wearable IoT devices, and conveniently transmitted over their wireless interface.



Microfluidics
Droplet-based microfluidics, molecular networking, droplet-based communication. Droplet microfluidics refers to the manipulation and control of small amounts of fluids flowing through channels of micrometer-size scale. We are concerned with understanding microfluidic flow dynamics and their propagation characteristics. These concepts are then used to devise networking algorithms to route droplets in a controlled manner through complex microfluidic networks.
Microfluidics is a multidisciplinary field with practical applications to the design of systems, called Lab-on-a-Chip (LoC) systems, where tiny volumes of fluids are circulated through channels of sub-millimeter size and driven into structures where precise chemical/physical processes take place. At this scale, fluids may exhibit specific behaviors that are not observed at the macro scale. These properties are at the basis of a number of applications, ranging from inkjet printer heads to DNA sequencing chips, and have recently been exploited in the development of LoC systems or, more generally, Microfluidic Machines (MMs), which are currently used for different purposes, including the synthesis of particles for therapeutic delivery, drug discovery, biomolecule synthesis, diagnostic testing, and DNA sequencing.
Interest in microfluidics has been increasing over the last few years and, recently, droplet-based microfluidic circuits capable of performing simple logical operations have been proposed and experimentally tested, paving the way to a new research branch known as microfluidic networking. The overall objective of microfluidic networking is to realize switching and networking elements that make it possible to interconnect specialized LoCs in a flexible and modular system, possibly using only hydrodynamic properties. Such an architecture would unleash the still largely unexpressed potential of microfluidic technology, producing a leap forward in many fields, including the pharmaceutical, chemical, and medical sectors. This challenge calls for interdisciplinary competences, ranging from telecommunication engineering to physics and chemistry, and can open the way to a number of exciting research trails!
The first components of any network are the switches, and microfluidic networks are no exception. A possible solution to switch droplets across a microfluidic network consists in electro-hydrodynamic actuation which, however, requires specific and complex circuitry with a large number of electrical connections and a high-power source. Furthermore, the contact of electrodes with fluids may give rise to corrosion problems and unwanted reactions. Most of these problems are solved by adopting a purely hydrodynamic approach that relies only on the actuators (pumps and reservoirs) at the periphery of the chip (the boundary system) and on the channel geometries and hydrodynamic forces that act on the fluids to control the droplets in the network. The basic principle is that a droplet flows along the path with minimum instantaneous hydraulic resistance, meanwhile increasing the resistance of the channel it is crossing. Thus, an isolated droplet entering a T or Y junction through the inlet will proceed towards the outlet with minimum instantaneous hydraulic resistance, while a closely following droplet may be driven to the other outlet. It is therefore possible to steer a payload droplet through a series of junctions by modulating its distance from a certain number of control droplets. However, the flow behavior in complex microfluidic systems is affected by a number of interdependent factors that make the fluid flow at any one location depend on the properties of the entire system. Furthermore, the time behavior of a droplet-based microfluidic network is also difficult to predict, because the motion of the droplets across the channels continuously changes the hydraulic resistance of different parts of the network and, hence, the flow rates of the continuous phase in different channels, which in turn affect the speed of the droplets. All these interdependencies make it extremely difficult to figure out the effect of combining simple structures into a more complex system, and to assess the time dynamics of a network in the presence of multiple droplets. In this area, one interesting challenge concerns the comparison between switching mechanisms based on different physical quantities, such as the inter-distance between droplets, or their length.
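As a toy illustration of the junction principle described above (our actual models are more detailed, and all resistance values here are arbitrary), the following sketch routes a train of droplets through a T-junction: each droplet takes the branch with the lower instantaneous hydraulic resistance, and while it occupies a branch it raises that branch's resistance, so closely spaced droplets alternate between outlets.

```python
# Toy model of droplet routing at a T-junction via instantaneous hydraulic
# resistance. All values are arbitrary and purely illustrative.

BASE_R = {"A": 1.00, "B": 1.05}   # baseline hydraulic resistance of each branch
DROPLET_R = 0.30                  # extra resistance added by one droplet
TRANSIT_TIME = 5.0                # time a droplet takes to clear its branch

def route_droplets(arrival_times):
    """Return the branch ('A' or 'B') chosen by each arriving droplet."""
    in_branch = []                # (exit_time, branch) of droplets still in transit
    choices = []
    for t in arrival_times:
        in_branch = [(te, b) for te, b in in_branch if te > t]  # clear old droplets
        r = dict(BASE_R)
        for _, b in in_branch:
            r[b] += DROPLET_R     # occupied branches look more resistive
        chosen = min(r, key=r.get)
        choices.append(chosen)
        in_branch.append((t + TRANSIT_TIME, chosen))
    return choices

# Closely spaced droplets alternate; widely spaced ones all pick branch A.
print(route_droplets([0, 1, 2, 3]))      # -> ['A', 'B', 'A', 'B']
print(route_droplets([0, 10, 20, 30]))   # -> ['A', 'A', 'A', 'A']
```

Even this crude model shows how inter-droplet distance acts as a control signal, which is exactly what makes the switching mechanisms mentioned above comparable in the first place.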
Recently, physics researchers have advanced the idea of exploiting microfluidic systems to build tiny computing units, and the possibility of using them to realize simple Boolean functions has been experimentally proved. Even more interesting is the recent proposal of introducing communication notions in the microfluidic domain, e.g., encoding information in the distance between consecutive droplets. Inspired by these works, we carried out some experiments using real microfluidic devices with the intent of investigating a proper way to transmit information in a microfluidic channel. In particular, we exploited the governing laws of the T-junction droplet generator to modulate the length of the generated droplets (and, consequently, their inter-distance) in a sort of binary Pulse Amplitude Modulation (PAM) scheme. Results show that the noise that affects both droplet length and inter-distance can be modeled as a zero-mean normal random process whose variance, however, depends on the transmitted symbol, i.e., on the working point of the system. More specifically, droplet-length modulation exhibits stronger noise for symbols associated with relatively short droplets, while inter-distance modulation is noisier when droplets are more spaced apart. Overall, however, droplet inter-distance appears to be a more robust signal than droplet length for PAM modulation, though both techniques can achieve relatively low bit error probability in the considered scenarios. Although this analysis takes some initial steps toward the performance characterization of a microfluidic PAM transmission system, the road ahead is still long and unexplored. For example, the transient behavior exhibited by the droplet generation circuit when changing the volumetric flow rates at the input has not yet been analyzed, nor has the actual bit rate achievable with such a mechanism been considered. These and other interesting challenges are left for future work.
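The channel model described above (binary PAM with symbol-dependent Gaussian noise) is easy to explore numerically. The sketch below estimates the bit error rate by Monte Carlo; all levels and noise values are made up for illustration and are not our measured working points.

```python
# Monte Carlo BER for binary PAM with symbol-dependent Gaussian noise, as in
# the microfluidic channel model above. All numbers are illustrative only.
import random

LEVELS = {0: 1.0, 1: 2.0}         # nominal inter-distances (arbitrary units)
SIGMA  = {0: 0.15, 1: 0.30}       # symbol-dependent noise (larger when spaced apart)

def simulate_ber(n_bits=200_000, threshold=1.5, seed=7):
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        bit = rng.getrandbits(1)
        observed = rng.gauss(LEVELS[bit], SIGMA[bit])
        detected = 1 if observed > threshold else 0   # fixed midpoint threshold;
        errors += (detected != bit)                   # the ML threshold would differ
    return errors / n_bits

print(f"estimated BER: {simulate_ber():.4f}")
```

Because the two symbols have different noise variances, the midpoint threshold is suboptimal; shifting it toward the noisier level (or deriving the maximum-likelihood threshold) lowers the error rate, mirroring the symbol-dependent behavior observed in the experiments.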
Smart Energy Grids
Communications, market models and control. Our focus is on the use of Distributed Energy Sources (DES) in electricity grids. This includes the joint optimization of control, communication algorithms and market rules (energy trading and pricing). Our study also encompasses network elements equipped with energy harvesting capabilities and the interaction between Smart Electricity Grids and Mobile Networks.
In future energy grids, an intelligent and coordinated use of distributed generators (DGs) based on renewables holds the potential of boosting the grid efficiency, for example in terms of reduction of power losses, reactive power compensation, load balancing and peak shaving, while also relieving electricity production plants from some of the power load. This requires the addition of communication and smart metering capabilities to the power grid, and their careful orchestration. Our present focus is on the design of a set of control and communication algorithms to optimally manage the micro grid in terms of electrical performance and energy trading policies. These will exploit the computational and communication capabilities offered by the smart gateways that will be installed within each structure. In particular, we note that optimal pricing procedures are needed to ensure that end users equipped with DGs are willing to contribute to the enhancement of the power grid's performance. The joint investigation of pricing techniques combined with suitable communication and control aspects has not yet been the subject of an in-depth theoretical study, despite being of great importance for the successful design and deployment of the foreseen distributed smart grid infrastructure. Our project is rather ambitious and challenging, as it involves techniques from telecommunications, electrical engineering and control systems. Up to now, research has mostly been carried out in isolation within these fields, leading to control strategies that are efficient in terms of smart grid performance but either neglect important telecommunication details or are difficult to implement, as they do not suit current market models well. Our proposal is instead to tackle the distributed optimization of smart grids through a synergistic approach, where market (pricing) models are jointly modeled and designed together with communication and control algorithms.
Accurate statistical characterization of daily power demand for different types of buildings and users (for example, households, shops and factories) is key to the study of smart grids. In addition to energy consumption, we are also concerned with modeling the hourly energy price from real grids. These traces are instrumental to identify suitable statistical predictive models for the energy market. These models are being used within co-simulation tools and as the input of mathematical optimization frameworks.
We are tackling the design of optimal pricing policies based on metering and communication capabilities. These policies guarantee that, at any given time, the network is driven through an efficient operating point in terms of revenue for the users and the grid operator (market efficiency), while also assuring that the power grid is efficiently operated (electrical efficiency). In detail, the optimization algorithm will determine optimal discount factors (on the prices proposed by the energy producers) that will govern the local energy market, with the objective of driving the electricity grid’s performance toward selected (and pluggable) optimization goals.
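As a purely hypothetical illustration of the kind of optimization involved (the prosumer response curve, the objective, and all constants below are invented and are not our actual market model), the sketch selects a discount factor that balances the operator's revenue against a proxy for electrical efficiency:

```python
# Hypothetical discount-factor selection for a local energy market. The
# response model and objective are invented for illustration only.
import numpy as np

def local_trade_fraction(d):
    """Assumed prosumer response: more local trading as the discount grows."""
    return 1.0 - np.exp(-4.0 * d)

def objective(d, alpha=0.5):
    revenue = (1.0 - d) * local_trade_fraction(d)   # operator revenue proxy
    electrical = local_trade_fraction(d)            # loss-reduction proxy
    return alpha * revenue + (1.0 - alpha) * electrical

discounts = np.linspace(0.0, 0.5, 501)
best = discounts[np.argmax([objective(d) for d in discounts])]
print(f"best discount factor (toy model): {best:.2f}")
```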
We are addressing the design of distributed online algorithms to jointly drive the smart grid toward efficient working points in terms of power quality and market performance. This solution shall also account for an efficient use of the state of charge of the distributed storage devices, so that they remain, at any time, within their operating range.
We are also engaged in the implementation of a full-fledged smart grid simulator accounting for:
a) electrical power grid topology and dynamics: grid topologies will be generated according to random graph theory applied to real-world power grid topologies;
b) statistical characterization of photovoltaic electrical energy generation, accounting for the photovoltaic panel technology and size, geographical coordinates, azimuthal and tilt orientation parameters, and the month of the year selected for the simulation run;
c) statistical characterization of electrical power demand by end users, accounting for active and reactive power demand variations on a minute basis;
d) communication network architecture, topology and physical layer dynamics;
e) communication protocol stack;
f) real-time, plug-and-play grid optimization techniques;
g) dynamic energy pricing and market dynamics.
In the last few years, we have been carrying out an intense research activity on smart micro grids. Scientific collaborations have been set up with the DEI groups on Control Systems and Power Electronics. Industrial partners have been contacted for the preparation of national and EU proposals. Joint papers involving researchers from different disciplines have been published, and others are currently under submission (see our publications page). This collaboration has been carried out within the DEI Smart Grid group, which was established in 2009 by Dr. Michele Rossi and other professors at DEI. As part of this undertaking, we have developed an experimental lab hosting a full-fledged micro grid (with tunable electrical topology, generation, communication and control elements).
H2020 MSCA-ITN “SCAVENGE” (on energy harvesting mobile networks, GA no. 675891)
Networking protocols, experimentation, at-sea trials. We are involved in several research activities on underwater networking. There, our focus is on the design of networking protocols and on interfaces with commercial and prototype undersea devices. Various experimental activities are also under way.
Underwater networking poses serious challenges to network designers, as no straightforward translation exists between protocols for the terrestrial wireless radio environment and their underwater counterparts. The main reason is the different nature of the underwater acoustic channel: long propagation delays (the average propagation speed of an acoustic wave underwater is about 1500 m/s, nearly 200,000 times slower than that of radio waves in air), strong multipath effects, long-term channel variations and, last but not least, a much smaller available bandwidth (due to the use of acoustic frequencies, i.e., in the tens of kHz range), which translates into lower transmit bit rates. While radio and optical waves may also be an option for underwater communications, acoustic waves still attract a lot of interest, as they are currently the only means to reach distances longer than a few hundred meters. With the proper configuration of the transmission hardware parameters, acoustic waves can travel up to tens of kilometers, making long-range communication feasible, albeit at a possibly very low bit rate.
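To give a feel for these numbers, the sketch below computes the acoustic propagation delay over a given range and the frequency-dependent absorption according to Thorp's empirical formula, a classic first-order model of why usable bandwidth shrinks with distance (illustrative only; our work relies on more detailed channel models and measurements):

```python
# Back-of-the-envelope underwater acoustics: propagation delay at ~1500 m/s
# and Thorp's empirical absorption formula (f in kHz, result in dB/km).
SOUND_SPEED = 1500.0  # m/s, nominal value for seawater

def propagation_delay(range_m):
    return range_m / SOUND_SPEED

def thorp_absorption_db_per_km(f_khz):
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2 + 0.003)

print(f"one-way delay over 5 km: {propagation_delay(5000):.2f} s")
for f in (1, 10, 30, 100):
    loss = thorp_absorption_db_per_km(f) * 5     # over a 5 km path
    print(f"{f:3d} kHz -> absorption over 5 km: {loss:6.1f} dB")
```

A one-way delay of over three seconds across 5 km, and tens of dB of extra loss at higher frequencies, explain why MAC and routing protocols designed for radio networks behave so poorly when ported underwater.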
As there is no widely agreed-upon model for the channel, we seek new models that are sufficiently accurate yet simple enough to be plugged into a network simulator. This activity is carried out in cooperation with institutions and research centers that can provide real data from undersea measurement campaigns.
The underwater acoustic channel can also be simulated using ray tracing. We developed the WOSS framework to obtain realistic ray-tracing-based acoustic channel realizations to be used in underwater network simulators.
We are currently analyzing a number of MAC solutions by means of simulations and stochastic models, in order to discover which features make one protocol perform better than others, with the final objective of creating a novel protocol encompassing the best behaviors seen in other approaches.
Multihop underwater networks will require delay-tolerant routing protocols that work well in the presence of very long propagation delays. We are currently analyzing the relevant routing tradeoffs that make it possible, e.g., to save energy by wisely choosing which nodes will relay messages. We are also designing efficient broadcasting techniques based on Hybrid ARQ.
We collaborate with research institutions to define joint sea trial activities that allow for network protocol evaluation as well as channel characterization; this feeds novel insights into, and provides directions for, all of the above tasks.
Previous works can be found at this link
DESERT Underwater is a complete set of public C++ libraries that extend the NS-MIRACLE simulator to support the design and implementation of underwater network protocols.
WOSS is a multi-threaded C++ framework that permits the integration of any existing underwater channel simulator that expects environmental data as input and provides as output a channel realization. Currently, WOSS integrates the Bellhop ray-tracing program.
