During their participation in the WindMill Project, the Early Stage Researchers are also expected to publish articles regarding their findings. Here you can find the abstracts of these articles, as well as links to access their full versions.
One of the beyond-5G developments that is often highlighted is the integration of wireless communication and radio sensing. This paper addresses the potential of communication-sensing integration of Large Intelligent Surfaces (LIS) in an exemplary Industry 4.0 scenario. Besides the potential for high throughput and efficient multiplexing of wireless links, an LIS can offer a high-resolution rendering of the propagation environment. This is because, in an indoor setting, it can be placed in proximity to the sensed phenomena, while the high resolution is offered by densely spaced tiny antennas deployed over a large area. By treating an LIS as a radio image of the environment, we develop sensing techniques that combine computer vision with machine learning. We test these methods for a scenario where we need to detect whether an industrial robot deviates from a predefined route. The results show that LIS-based sensing offers high precision and has a high application potential in indoor industrial environments.
Sensing capability is one of the most highlighted new features of future 6G wireless networks. This paper addresses the sensing potential of Large Intelligent Surfaces (LIS) in an exemplary Industry 4.0 scenario. Beyond the attention LIS have received for their communication capabilities, an LIS can also offer a high-resolution rendering of the propagation environment. This is because, in an indoor setting, it can be placed in proximity to the sensed phenomena, while the high resolution is offered by densely spaced tiny antennas deployed over a large area. By treating the signal power received across the LIS as a radio image of the environment, we develop sensing techniques that leverage tools from image processing and machine learning. Once a radio image is obtained, a Denoising Autoencoder (DAE) network can be used for constructing a super-resolution image, leading to sensing advantages not available in traditional sensing systems. We also derive a statistical test based on the Generalized Likelihood Ratio Test (GLRT) as a benchmark for the machine learning solution. We test these methods for a scenario where we need to detect whether an industrial robot deviates from a predefined route. The results show that LIS-based sensing offers high precision and has a high application potential in indoor industrial environments.
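The GLRT benchmark mentioned in this abstract can be illustrated in its simplest form: detecting a deviation with a known spatial signature but unknown amplitude in white Gaussian noise. The signature, array size, noise variance, and amplitude below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def glrt_statistic(x, s, sigma2):
    """GLRT statistic for detecting a known spatial signature s with
    unknown amplitude in white Gaussian noise of variance sigma2.
    Under H0 (no deviation), T follows a chi-squared law with 1 degree
    of freedom, so a detection threshold can be set for a target
    false-alarm rate."""
    return float((s @ x) ** 2 / (sigma2 * (s @ s)))

rng = np.random.default_rng(0)
n = 256                       # number of LIS antenna elements (assumed)
s = rng.standard_normal(n)    # hypothetical deviation signature
sigma2 = 1.0                  # assumed noise variance

x_h0 = rng.standard_normal(n) * np.sqrt(sigma2)   # H0: nominal scene
x_h1 = x_h0 + 0.5 * s                             # H1: deviation present

t0 = glrt_statistic(x_h0, s, sigma2)
t1 = glrt_statistic(x_h1, s, sigma2)
```

Comparing the statistic against a chi-squared quantile then yields a detector whose false-alarm rate is controlled analytically, which is what makes the GLRT a natural benchmark for learned detectors.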
We introduce Pantomime, a novel mid-air gesture recognition system exploiting spatio-temporal properties of millimeter-wave radio frequency (RF) signals. Pantomime is positioned in a unique region of the RF landscape: mid-resolution mid-range high-frequency sensing, which makes it ideal for motion gesture interaction. We configure a commercial frequency-modulated continuous-wave radar device to promote spatial information over temporal resolution by means of sparse 3D point clouds, and contribute a deep learning architecture that directly consumes the point cloud, enabling real-time performance with low computational demands. Pantomime achieves 95% accuracy and 99% AUC in a challenging set of 21 gestures articulated by 41 participants in two indoor environments, outperforming four state-of-the-art 3D point cloud recognizers. We further analyze the effect of the environment across 5 different indoor environments, as well as the effects of articulation speed, angle, and the distance of the person up to 5 m. We have made the collected mmWave gesture dataset publicly available, consisting of nearly 22,000 gesture instances, along with our radar sensor configuration, trained models, and source code for reproducibility. We conclude that Pantomime is resilient to various input conditions and that it may enable novel applications in industrial, vehicular, and smart home scenarios.
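A practical detail when a model directly consumes sparse radar point clouds is that each frame contains a variable number of points, while batched inference wants fixed-size tensors. As a minimal sketch (the point budget and padding strategy here are illustrative assumptions, not Pantomime's published pipeline), each frame can be padded or subsampled to a fixed size:

```python
import numpy as np

def normalize_frame(points, n_points=64, rng=None):
    """Pad or randomly subsample one sparse radar frame of shape (k, 3)
    to exactly n_points rows, so frames can be stacked into a batch.
    Short frames are padded by repeating existing points; empty frames
    become all zeros."""
    rng = rng or np.random.default_rng()
    k = len(points)
    if k == 0:
        return np.zeros((n_points, 3))
    if k >= n_points:
        idx = rng.choice(k, n_points, replace=False)  # subsample
    else:
        idx = rng.choice(k, n_points, replace=True)   # pad by repetition
    return points[idx]
```

Repeating points to pad (rather than zero-padding) keeps the padded entries on the sensed surface, which avoids injecting artificial geometry into the cloud.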
The IEEE 802.11ad Wi-Fi amendment enables short-range multi-gigabit communications in the unlicensed 60 GHz spectrum, unlocking new interesting applications such as wireless Augmented and Virtual Reality. The characteristics of the Millimeter Wave (mmW) band and directional communications make it possible to increase the system throughput by scheduling pairs of nodes with low cross-interfering channels in the same time-frequency slot. On the other hand, this comes at the cost of significantly more signaling overhead. Furthermore, IEEE 802.11ad introduces a hybrid MAC characterized by two different channel access mechanisms: contention-based and contention-free access periods. The coexistence of both access period types and the directionality typical of mmW increase the channel access and scheduling complexity in IEEE 802.11ad compared to previous Wi-Fi versions. Hence, to provide the Quality of Service performance required by demanding applications, a proper resource scheduling mechanism is needed that takes into account both directional communications and the newly added features of this Wi-Fi amendment. In this paper, we present a brief but comprehensive review of the open problems and challenges associated with channel access in IEEE 802.11ad and propose a workflow to tackle them via both heuristic and learning-based methods.
In the context of wireless networking, it was recently shown that multiple DNNs can be jointly trained to offer a desired collaborative behaviour capable of coping with a broad range of sensing uncertainties. In particular, it was established that DNNs can be used to derive policies that are robust with respect to the information noise statistic affecting the local information (e.g. CSI in a wireless network) used by each agent (e.g. transmitter) to make its decision. While promising, a major challenge in the implementation of such a method is that information noise statistics may differ from agent to agent and, more importantly, that such statistics may not be available at the time of training or may evolve over time, making burdensome retraining necessary. This situation makes it desirable to devise a "universal" machine learning model, which can be trained once and for all so as to allow for decentralized cooperation in any future feedback noise environment. With this goal in mind, we propose an architecture inspired by the well-known Mixture of Experts (MoE) model, which was previously used for non-linear regression and classification tasks in various contexts, such as computer vision and speech recognition. We consider the decentralized power control problem as an example to showcase the validity of the proposed model and to compare it against other power control algorithms. We show the ability of the so-called Team-DMoE model to efficiently track time-varying statistical scenarios.
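The core MoE mechanism referred to here can be sketched generically: a softmax gate weighs the outputs of several experts, each of which could be a policy trained for one noise statistic. The experts, gate scores, and power-control interpretation below are illustrative assumptions, not the Team-DMoE architecture itself:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a vector of gate scores."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def moe_decision(x, experts, gate_scores):
    """Combine per-expert decisions with a softmax gate.
    experts: callables mapping the local observation x to a decision
    (e.g. a transmit power level); gate_scores: one unnormalized score
    per expert, which in a trained MoE would come from a gating network."""
    g = softmax(np.asarray(gate_scores, dtype=float))
    return float(sum(w * f(x) for w, f in zip(g, experts)))

# Hypothetical experts: policies tuned for low- and high-noise CSI regimes.
low_noise_policy = lambda csi: 0.9 * csi    # trust feedback, transmit high
high_noise_policy = lambda csi: 0.3 * csi   # back off when feedback is noisy
p = moe_decision(1.0, [low_noise_policy, high_noise_policy], [2.0, 0.0])
```

Because only the gate needs to adapt to the current noise statistic, the expert policies can be trained once, which is the appeal of this structure for time-varying scenarios.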
We address the realization of the Findability, Accessibility, Interoperability, and Reusability (FAIR) data principles in an Internet of Things (IoT) application through a data transfer protocol. In particular, we propose an architecture for the Message Queuing Telemetry Transport (MQTT) protocol that validates, normalizes, and filters the incoming messages based on the FAIR principles to improve the interoperability and reusability of data. We show that our approach can significantly increase the degree of FAIRness of the system by evaluating the architecture using existing maturity indicators in the literature. The proposed architecture successfully passes 18 maturity indicators out of 22. We also evaluate the performance of the system in 4 different settings in terms of latency and dropped messages in a simulation environment. We demonstrate that the architecture not only improves the degree of FAIRness of the system but also reduces the dropped-message rate.
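The validate/normalize/filter stage described in this abstract can be sketched as a small function applied to each incoming MQTT payload. The required metadata fields and the normalization rule below are illustrative assumptions, not the paper's actual FAIR schema:

```python
import json

# Assumed minimal metadata a FAIR-compliant reading must carry.
REQUIRED_FIELDS = {"id", "timestamp", "unit", "value"}

def process_message(payload):
    """Validate, normalize, and filter one incoming message payload.
    Returns the normalized message dict, or None if the message is
    dropped (malformed JSON or missing required metadata)."""
    try:
        msg = json.loads(payload)
    except json.JSONDecodeError:
        return None                                  # filter: not valid JSON
    if not isinstance(msg, dict) or not REQUIRED_FIELDS <= msg.keys():
        return None                                  # filter: metadata missing
    msg["unit"] = str(msg["unit"]).lower()           # normalize unit spelling
    return msg
```

Placing such a function between the broker and the subscribers means downstream consumers only ever see messages that already satisfy the metadata contract, which is where the interoperability and reusability gains come from.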
The recently proposed QUIC protocol has been widely adopted at the transport layer of the Internet over the past few years. Its design goals are to overcome some of TCP’s performance issues, while maintaining the same properties and basic application interface. Two of the main drivers of its success were the integration with the innovative Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control mechanism, and the possibility of multiplexing different application streams over the same connection. Given the strong interest in QUIC shown by the ns-3 community, we present an extension to the native QUIC module that allows researchers to fully explore the potential of these two features. In this work, we present the integration of BBR into the QUIC module and the implementation of the necessary pacing and rate sampling mechanisms, along with a novel scheduling interface, with three different scheduling flavors. The new features are tested to verify that they perform as expected, using a web traffic model from the literature.
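Stream multiplexing makes the choice of which stream sends next a policy decision, which is why a scheduling interface with interchangeable flavors is useful. As a minimal sketch of one such flavor (plain round-robin; this is an illustrative design, not the module's actual C++ interface), a scheduler only needs to cycle through the registered streams:

```python
from collections import deque

class RoundRobinScheduler:
    """One possible scheduling flavor: plain round-robin over the
    streams currently registered as having data to send. Other flavors
    (e.g. priority- or weight-based) would expose the same interface."""

    def __init__(self):
        self.streams = deque()

    def register(self, stream_id):
        """Add a stream to the scheduling rotation."""
        self.streams.append(stream_id)

    def next_stream(self):
        """Return the next stream allowed to send, or None if idle."""
        if not self.streams:
            return None
        sid = self.streams.popleft()
        self.streams.append(sid)     # rotate to the back of the queue
        return sid
```

Keeping the interface this small is what allows different flavors to be swapped in for experiments without touching the rest of the transport.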
In a Flying Ad-Hoc Network (FANET), Unmanned Aerial Vehicles (UAVs), i.e., drones or quadcopters, use wireless communication to exchange data, status updates, and commands between each other and with the control center. However, due to the movement of UAVs, maintaining communication is difficult, particularly when multiple hops are needed to reach the destination. In this work, we propose the Stochastic Multipath UAV Routing for FANETs (SMURF) protocol, which exploits trajectory tracking information from the drones to compute the routes with the highest reliability. SMURF is a centralized protocol, as the control center gathers location updates and sends routing commands following the Software Defined Networking (SDN) paradigm over a separate long-range low-bitrate technology such as LoRaWAN. Additionally, SMURF exploits multiple routes to increase the probability that at least one of the routes is usable. Simulation results show a significant reliability improvement over purely distance-based routing, and that just 3 routes are enough to achieve performance very close to oracle-based routing with perfect information.
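The benefit of sending over multiple routes can be made concrete with a standard reliability calculation: if hops fail independently, a route succeeds only when all of its hops do, and at least one of several disjoint routes succeeds with probability one minus the product of the per-route failure probabilities. A minimal sketch (the hop success probabilities are illustrative, not from the paper's evaluation):

```python
from math import prod

def route_reliability(hop_probs):
    """Reliability of one route: every hop on it must succeed."""
    return prod(hop_probs)

def multipath_reliability(routes):
    """Probability that at least one of several independent routes
    delivers the packet: 1 minus the chance that all routes fail."""
    return 1.0 - prod(1.0 - route_reliability(r) for r in routes)

# Two disjoint 2-hop routes with 0.9 success probability per hop.
single = route_reliability([0.9, 0.9])            # 0.81
combined = multipath_reliability([[0.9, 0.9], [0.9, 0.9]])
```

This is why a handful of routes is enough: each added disjoint route multiplies the residual failure probability by another small factor, so the gain saturates quickly.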
We address an actively discussed problem in signal processing: recognizing patterns in spatial data in motion. In particular, we suggest a neural network architecture to recognize motion patterns from 4D point clouds. We demonstrate the feasibility of our approach with point cloud datasets of hand gestures. The architecture, PointGest, directly feeds on unprocessed timelines of point cloud data without any need for voxelization or projection. The model is resilient to noise in the input point cloud through abstraction to lower-density representations, especially for regions of high density. We evaluate the architecture on a benchmark dataset with ten gestures. PointGest achieves an accuracy of 98.8%, outperforming five state-of-the-art point cloud classification models.