During their participation in the WindMill Project, the Early Stage Researchers are also expected to publish articles regarding their findings. Here you can find the abstracts of these articles, as well as links to access their full versions.
The IEEE 802.11ad Wi-Fi amendment enables short-range multi-gigabit communications in the unlicensed 60 GHz spectrum, unlocking interesting new applications such as wireless Augmented and Virtual Reality. The characteristics of the Millimeter Wave (mmW) band and directional communications make it possible to increase the system throughput by scheduling pairs of nodes with low cross-interfering channels in the same time-frequency slot. However, this comes at the cost of significantly more signaling overhead. Furthermore, IEEE 802.11ad introduces a hybrid MAC characterized by two different channel access mechanisms: contention-based and contention-free access periods. The coexistence of both access period types and the directionality typical of mmW increase the channel access and scheduling complexity in IEEE 802.11ad compared to previous Wi-Fi versions. Hence, to provide the Quality of Service performance required by demanding applications, a proper resource scheduling mechanism is needed that takes into account both directional communications and the newly added features of this Wi-Fi amendment. In this paper, we present a brief but comprehensive review of the open problems and challenges associated with channel access in IEEE 802.11ad and propose a workflow to tackle them via both heuristic and learning-based methods.
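The idea of reusing a time-frequency slot for links with low cross-interference can be illustrated with a toy greedy grouping. This sketch is an illustrative assumption, not the scheduling algorithm proposed in the paper; the function name, data structures, and threshold are invented for the example.

```python
def greedy_slot_grouping(links, interference, threshold=0.2):
    """Greedily group directional links into time slots so that any two
    links sharing a slot have pairwise cross-interference below `threshold`.

    links        -- list of link identifiers
    interference -- dict mapping frozenset({a, b}) -> cross-interference in [0, 1]
    """
    slots = []
    for link in links:
        placed = False
        for slot in slots:
            # A link may join a slot only if it interferes little with every member.
            if all(interference[frozenset({link, other})] < threshold for other in slot):
                slot.append(link)
                placed = True
                break
        if not placed:
            slots.append([link])  # open a new time slot for this link
    return slots

# Toy topology: links A and B barely interfere, C clashes with both.
pairs = {frozenset({"A", "B"}): 0.05,
         frozenset({"A", "C"}): 0.9,
         frozenset({"B", "C"}): 0.8}
print(greedy_slot_grouping(["A", "B", "C"], pairs))  # [['A', 'B'], ['C']]
```

A real scheduler would also weigh traffic demand and the contention-based/contention-free period split, which is precisely where the complexity discussed above comes from.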
In the context of wireless networking, it was recently shown that multiple DNNs can be jointly trained to offer a desired collaborative behaviour capable of coping with a broad range of sensing uncertainties. In particular, it was established that DNNs can be used to derive policies that are robust with respect to the information noise statistics affecting the local information (e.g. CSI in a wireless network) used by each agent (e.g. transmitter) to make its decision. While promising, a major challenge in the implementation of such a method is that information noise statistics may differ from agent to agent and, more importantly, that such statistics may not be available at the time of training or may evolve over time, making burdensome retraining necessary. This situation makes it desirable to devise a “universal” machine learning model, which can be trained once and for all so as to allow for decentralized cooperation in any future feedback noise environment. With this goal in mind, we propose an architecture inspired by the well-known Mixture of Experts (MoE) model, which was previously used for non-linear regression and classification tasks in various contexts, such as computer vision and speech recognition. We consider the decentralized power control problem as an example to showcase the validity of the proposed model and to compare it against other power control algorithms. We show the ability of the so-called Team-DMoE model to efficiently track time-varying statistical scenarios.
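The core MoE mechanism referenced above can be sketched as a gating network that softly weights the outputs of several experts, each of which could, for instance, be a policy trained under a different noise statistic. This is a minimal generic MoE forward pass, not the Team-DMoE architecture itself; the experts, gate matrix, and dimensions are illustrative assumptions.

```python
import numpy as np

def moe_output(x, experts, gate_weights):
    """Combine expert outputs with softmax gating: a minimal Mixture of
    Experts forward pass. `experts` are callables; `gate_weights` is a
    (num_experts, dim) matrix scoring input x for each expert."""
    scores = gate_weights @ x
    scores = scores - scores.max()                 # numerical stability
    gates = np.exp(scores) / np.exp(scores).sum()  # softmax over experts
    return sum(g * expert(x) for g, expert in zip(gates, experts))

# Two hypothetical "experts", e.g. policies trained under different noise statistics.
experts = [lambda x: 2.0 * x.sum(), lambda x: -1.0 * x.sum()]
x = np.array([1.0, 0.5])
W = np.array([[5.0, 5.0], [-5.0, -5.0]])           # gate strongly favors expert 0 here
print(moe_output(x, experts, W))                   # ≈ 3.0, i.e. expert 0 dominates
```

In a trained MoE, the gate learns to route each input to the expert best suited to it, which is what allows a single model to track a time-varying noise environment without retraining.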
We address the realization of the Findability, Accessibility, Interoperability, and Reusability (FAIR) data principles in an Internet of Things (IoT) application through a data transfer protocol. In particular, we propose an architecture for the Message Queuing Telemetry Transport (MQTT) protocol that validates, normalizes, and filters the incoming messages based on the FAIR principles to improve the interoperability and reusability of data. We show that our approach can significantly increase the degree of FAIRness of the system by evaluating the architecture using existing maturity indicators in the literature. The proposed architecture successfully passes 18 maturity indicators out of 22. We also evaluate the performance of the system in 4 different settings in terms of latency and dropped messages in a simulation environment. We demonstrate that the architecture not only improves the degree of FAIRness of the system but also reduces the rate of dropped messages.
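The validate-normalize-filter stage described above can be sketched as a single function applied to each incoming MQTT payload before it is republished. The required metadata fields and the normalization step below are hypothetical placeholders, not the metadata schema or maturity indicators used in the paper.

```python
def fair_filter(message, required_fields=("id", "timestamp", "license", "schema")):
    """Validate and normalize an incoming message before republishing it.
    Returns the normalized message, or None if the message should be dropped.
    The field names are illustrative assumptions."""
    # Validation: drop messages that lack the FAIR-relevant metadata.
    if not all(field in message for field in required_fields):
        return None
    # Normalization: one example step, lower-casing the identifier.
    normalized = dict(message)
    normalized["id"] = str(normalized["id"]).lower()
    return normalized

good = {"id": "Sensor-42", "timestamp": 1700000000, "license": "CC-BY-4.0",
        "schema": "temperature-v1", "value": 21.5}
bad = {"value": 21.5}                      # no metadata at all

print(fair_filter(good)["id"])             # sensor-42
print(fair_filter(bad))                    # None
```

In the actual architecture this logic would sit between the MQTT broker and subscribers, so that only messages carrying enough metadata to be findable and reusable propagate through the system.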
The recently proposed QUIC protocol has been widely adopted at the transport layer of the Internet over the past few years. Its design goals are to overcome some of TCP’s performance issues, while maintaining the same properties and basic application interface. Two of the main drivers of its success were the integration with the innovative Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control mechanism, and the possibility of multiplexing different application streams over the same connection. Given the strong interest in QUIC shown by the ns-3 community, we present an extension to the native QUIC module that allows researchers to fully explore the potential of these two features. Specifically, we describe the integration of BBR into the QUIC module and the implementation of the necessary pacing and rate sampling mechanisms, along with a novel scheduling interface with three different scheduling flavors. The new features are tested to verify that they perform as expected, using a web traffic model from the literature.
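Stream multiplexing implies deciding which stream's data fills the next packet, and one of the simplest scheduling flavors is round-robin. The class below is a language-agnostic sketch of that idea, not the ns-3 QUIC module's C++ interface; the class and method names are invented for illustration.

```python
from collections import deque

class RoundRobinScheduler:
    """Toy round-robin stream scheduler: each call to next_stream() returns
    the next stream with pending data, cycling fairly through all streams."""

    def __init__(self):
        self.streams = deque()

    def add_stream(self, stream_id):
        self.streams.append(stream_id)

    def next_stream(self):
        if not self.streams:
            return None
        stream = self.streams.popleft()
        self.streams.append(stream)        # rotate the stream to the back
        return stream

sched = RoundRobinScheduler()
for s in (0, 4, 8):                        # QUIC client-initiated bidirectional stream IDs
    sched.add_stream(s)
print([sched.next_stream() for _ in range(5)])  # [0, 4, 8, 0, 4]
```

Other flavors, such as strict priority or weighted fair scheduling, would replace the rotation step with a priority comparison while keeping the same interface.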
In a Flying Ad-Hoc Network (FANET), Unmanned Aerial Vehicles (UAVs, i.e., drones or quadcopters) use wireless communication to exchange data, status updates, and commands between each other and with the control center. However, due to the movement of UAVs, maintaining communication is difficult, particularly when multiple hops are needed to reach the destination. In this work, we propose the Stochastic Multipath UAV Routing for FANETs (SMURF) protocol, which exploits trajectory tracking information from the drones to compute the routes with the highest reliability. SMURF is a centralized protocol, as the control center gathers location updates and sends routing commands following the Software Defined Networking (SDN) paradigm over a separate long-range, low-bitrate technology such as LoRaWAN. Additionally, SMURF exploits multiple routes to increase the probability that at least one of them is usable. Simulation results show a significant reliability improvement over purely distance-based routing, and that just 3 routes are enough to achieve performance very close to oracle-based routing with perfect information.
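The benefit of multiple routes can be quantified with elementary probability: a multi-hop route succeeds only if every link succeeds, and a packet is delivered if at least one route succeeds. The sketch below assumes independent links and routes for simplicity, which is an idealization rather than the SMURF reliability model.

```python
from math import prod

def route_reliability(link_probs):
    """Reliability of a multi-hop route: every link must succeed."""
    return prod(link_probs)

def multipath_reliability(routes):
    """Probability that at least one of several (assumed independent)
    routes delivers the packet: 1 - product of route failure probabilities."""
    return 1.0 - prod(1.0 - route_reliability(r) for r in routes)

# Three hypothetical routes, each listed as per-link delivery probabilities.
routes = [[0.9, 0.9], [0.8, 0.95], [0.7, 0.7]]
print(round(route_reliability(routes[0]), 2))        # 0.81
print(round(multipath_reliability(routes), 4))       # 0.9767
```

The numbers illustrate why a handful of routes suffices: even with individually imperfect routes, the joint failure probability shrinks multiplicatively, consistent with the observation that 3 routes already approach oracle performance.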
We address an actively discussed problem in signal processing: recognizing patterns in spatial data in motion. In particular, we suggest a neural network architecture to recognize motion patterns from 4D point clouds. We demonstrate the feasibility of our approach with point cloud datasets of hand gestures. The architecture, PointGest, directly feeds on unprocessed timelines of point cloud data without any need for voxelization or projection. The model is resilient to noise in the input point cloud through abstraction to lower-density representations, especially for regions of high density. We evaluate the architecture on a benchmark dataset with ten gestures. PointGest achieves an accuracy of 98.8%, outperforming five state-of-the-art point cloud classification models.
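A key ingredient of networks that consume raw point clouds is a symmetric aggregation (e.g. max-pooling) over points, which makes the output invariant to the arbitrary ordering of points in the cloud. The snippet below demonstrates that property with a PointNet-style per-point map followed by max-pooling; it is a generic illustration, not the PointGest architecture.

```python
import numpy as np

def pointwise_features(points, W):
    """Shared per-point linear map with ReLU (stand-in for shared MLP layers)."""
    return np.maximum(points @ W, 0.0)

def global_descriptor(points, W):
    """Max-pool over the point dimension. Because max is a symmetric
    function, the descriptor does not depend on the ordering of points."""
    return pointwise_features(points, W).max(axis=0)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 4))          # 100 points with (x, y, z, t) coordinates
W = rng.normal(size=(4, 8))
shuffled = cloud[rng.permutation(100)]

# Same descriptor regardless of point order.
print(np.allclose(global_descriptor(cloud, W), global_descriptor(shuffled, W)))  # True
```

This permutation invariance is what allows such architectures to skip voxelization or projection and operate directly on unordered point sets.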