During their participation in the WindMill Project, the Early Stage Researchers are also expected to publish articles regarding their findings. Below you can find the abstracts of these articles, as well as links to access their full versions.
We are interested in deducing whether two user equipments (UEs) in a cellular system are at nearby physical locations from measuring the similarity of their channel state information (CSI). This becomes essential for fingerprinting localization as well as for channel charting. A channel chart is a low-dimensional (e.g., 2-dimensional) radio map based on CSI measurements only, which is created using self-supervised machine learning techniques. Analyzing CSI in terms of the angle-delay power profile (ADPP) takes advantage of the uniqueness of the multipath channel between the base station and the UE over the geographical region of interest. We consider super-resolution features in the angle and delay domains in massive multiple-input multiple-output (MIMO) systems and use the earth mover's distance (EMD) to measure the distance between two features. Simulation results based on the DeepMIMO data set show that the super-resolution ADPP features with EMD lead to a better-quality channel chart than other CSI features and distances from the literature.
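As a rough illustration of the distance computation described above, the sketch below compares two angle-delay power profiles with the earth mover's distance using the POT (Python Optimal Transport) library; the grid size, the Euclidean ground cost, and the random profiles are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: earth mover's distance (EMD) between two angle-delay power
# profiles (ADPPs), represented as normalized 2-D histograms over an
# illustrative angle x delay grid. Uses the POT (Python Optimal Transport) library.
import numpy as np
import ot  # pip install pot

def adpp_emd(adpp_a: np.ndarray, adpp_b: np.ndarray) -> float:
    """EMD between two ADPPs of identical shape (n_angles, n_delays)."""
    # Normalize the profiles so each sums to one (EMD compares distributions).
    wa = (adpp_a / adpp_a.sum()).ravel()
    wb = (adpp_b / adpp_b.sum()).ravel()
    # Ground cost: Euclidean distance between (angle-bin, delay-bin) coordinates.
    n_ang, n_del = adpp_a.shape
    grid = np.array([(i, j) for i in range(n_ang) for j in range(n_del)], dtype=float)
    cost = ot.dist(grid, grid, metric="euclidean")
    # ot.emd2 returns the optimal transport cost, i.e., the EMD.
    return float(ot.emd2(wa, wb, cost))

# Toy usage with random profiles on a small grid.
rng = np.random.default_rng(0)
d = adpp_emd(rng.random((8, 16)), rng.random((8, 16)))
```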
We consider a machine learning approach for beam handover in mmWave 5G New Radio systems, in which User Equipments (UEs) perform autonomous beam selection, conditioned on the Base Station (BS) beam in use. We develop a network-centric approach for predicting beam Signal-to-Noise Ratio (SNR) from Channel State Information (CSI) features measured at the BS, which consists of two phases: offline and online. In the offline training phase, we construct CSI features and dimensionality-reduced Channel Charts (CC). We annotate the CCs with per-beam SNRs for different combinations of a BS beam and the corresponding best UE beam, and train models to predict SNR from CSI features for different BS/UE beam combinations. In the online phase, we predict SNRs of beam combinations not currently in use. We develop a low-complexity out-of-sample algorithm for dimensionality reduction in the online phase. We consider k-nearest neighbors, Gaussian process regression, and neural network-based predictions. To evaluate the efficacy of the proposed framework, we perform simulations for a street segment with synthetically generated CSI. We investigate the complexity-accuracy trade-off for different dimensionality reduction techniques and different predictors. Our results reveal that nonlinear dimensionality reduction of CSI features with neural network prediction shows the best performance, and the performance of the best CSI-based prediction method is comparable to prediction based on the known physical location.
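A minimal sketch of the offline/online split described above is given below, with scikit-learn's Isomap standing in for the dimensionality reduction and k-nearest-neighbor regression for the SNR predictor; the feature dimensions and random data are placeholders, not the paper's simulation setup.

```python
# Minimal sketch: reduce CSI features to a 2-D channel chart offline, annotate
# chart points with measured per-beam SNR, then predict SNR for new CSI online.
# Isomap and kNN stand in for the paper's specific choices.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
csi_train = rng.standard_normal((500, 64))   # offline CSI features (illustrative)
snr_train = rng.standard_normal(500)         # measured SNR of one BS/UE beam pair

# Offline: learn the chart and an SNR predictor in chart coordinates.
chart = Isomap(n_neighbors=10, n_components=2)
coords = chart.fit_transform(csi_train)
predictor = KNeighborsRegressor(n_neighbors=5).fit(coords, snr_train)

# Online: out-of-sample embedding of new CSI, then SNR prediction.
csi_new = rng.standard_normal((10, 64))
snr_pred = predictor.predict(chart.transform(csi_new))
```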
This paper explores the potential of Large Intelligent Surfaces (LIS) in the context of radio sensing for 6G wireless networks. By capitalizing on arbitrary communication signals present in the environment, we employ direct processing techniques on the output signal from the LIS. This enables us to generate a radio map that accurately depicts the physical presence of passive devices, such as scatterers and humans, which effectively act as virtual sources due to signal reflections. To extract meaningful information from these radio maps, we evaluate the application of machine learning and computer vision methods, including clustering, template matching, and component labeling. To illustrate the effectiveness of this approach, we specifically focus on passive multi-human detection in indoor environments. Our results demonstrate the significant application potential of the proposed method, as it achieves a passive detection rate of approximately 98% for humans, even under challenging Signal-to-Noise Ratio (SNR) conditions.
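The component-labeling step mentioned above can be illustrated with a few lines of SciPy; the threshold and the random radio map below are placeholders for an actual LIS radio image.

```python
# Minimal sketch: detect passive "virtual sources" (e.g., humans) in an LIS
# radio map by thresholding and connected-component labeling.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
radio_map = rng.random((64, 64))             # stand-in for an LIS radio image
mask = radio_map > 0.98                      # keep strong reflections only (illustrative threshold)

labels, n_detected = ndimage.label(mask)     # one label per connected blob
centroids = ndimage.center_of_mass(mask, labels, range(1, n_detected + 1))
print(n_detected, centroids[:3])
```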
We leverage standards-compliant beam training measurements from commercial off-the-shelf (COTS) 802.11ad/ay devices for localization of a moving object. Two technical challenges need to be addressed: the beam training measurements are intermittent, due to beam scanning overhead control and contention-based channel-time allocation, and the underlying object dynamics must be exploited to assist the localization. To this end, we formulate the trajectory estimation as a sequence regression problem. We propose a dual-decoder neural dynamic learning framework to simultaneously reconstruct Wi-Fi beam training measurements at irregular time instances and learn the unknown dynamics over the latent space in a continuous-time fashion by enforcing strong supervision at both the coordinate and measurement levels. The proposed method was evaluated on an in-house mmWave Wi-Fi dataset and compared with a range of baseline methods, including traditional machine learning methods and recurrent neural networks.
RF sensing, the analysis and interpretation of movement- or environment-induced patterns in received electromagnetic signals, has been actively investigated for more than a decade. Since electromagnetic signals from cellular communication systems are omnipresent, RF sensing has the potential to become a universal sensing mechanism with applications in smart home, retail, localization, gesture recognition, intrusion detection, etc. Specifically, existing cellular network installations might be dual-used for both communication and sensing. Such communication and sensing convergence is envisioned for future communication networks. We propose the use of NR-sidelink direct device-to-device communication to achieve device-initiated, flexible sensing capabilities in beyond-5G cellular communication systems. In this article, we specifically investigate a common issue related to sidelink-based RF sensing, namely its angle and rotation dependence. In particular, we discuss transformations of mmWave point-cloud data which achieve rotational invariance, as well as distributed processing based on such rotation-invariant inputs at angle- and distance-diverse devices. To process the distributed data, we propose a graph-based encoder to capture spatio-temporal features of the data and propose four approaches for multi-angle learning. The approaches are compared on a newly recorded and openly available dataset comprising 15 subjects performing 21 gestures, recorded from 8 angles.
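One simple way to obtain the kind of rotational invariance discussed above is to re-express each point cloud in the frame of its own principal axes; the sketch below is an illustrative transform of this kind and not necessarily the one proposed in the article.

```python
# Minimal sketch: make a mmWave point cloud rotation invariant by centering it
# and aligning it to its principal axes (PCA) before further processing.
import numpy as np

def rotation_invariant(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) array of x, y, z coordinates."""
    centered = points - points.mean(axis=0)
    # Eigenvectors of the covariance define a canonical orientation.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    # Projecting onto the principal axes removes the dependence on the input
    # rotation (up to axis ordering and sign ambiguities).
    return centered @ vecs

cloud = np.random.default_rng(0).standard_normal((100, 3))
canonical = rotation_invariant(cloud)
```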
We present Tesla-Rapture, a gesture recognition interface for point clouds generated by mmWave radars. State-of-the-art gesture recognition models are either too resource-consuming or not sufficiently accurate for integration into real-life scenarios using wearable or constrained equipment such as IoT devices (e.g., Raspberry Pi), XR hardware (e.g., HoloLens), or smartphones. To tackle this issue, we developed Tesla, a Message Passing Neural Network (MPNN) graph convolution approach for mmWave radar point clouds. The model outperforms the state of the art on two datasets in terms of accuracy while reducing the computational complexity and, hence, the execution time. In particular, the approach is able to predict a gesture almost 8 times faster than the most accurate competitor. Our performance evaluation in different scenarios (environments, angles, distances) shows that Tesla generalizes well and improves the accuracy by up to 20% in challenging scenarios like a through-wall setting and sensing at extreme angles. Utilizing Tesla, we develop Tesla-Rapture, a real-time implementation using a mmWave radar on a Raspberry Pi 4, and evaluate its accuracy and time complexity. We also publish the source code, the trained models, and the implementation of the model for embedded devices.
The usage of Reconfigurable Intelligent Surfaces (RIS) in conjunction with Unmanned Aerial Vehicles (UAVs) is being investigated as a way to provide energy-efficient communication to ground users in dense urban areas. In this paper, we devise an optimization scenario to reduce the overall energy consumption in the network while guaranteeing a certain Quality of Service (QoS) to the ground users in the area. Due to the complex nature of the optimization problem, we provide a joint UAV trajectory and RIS phase design that minimizes the transmission power of the UAV and the Base Station (BS) while yielding good performance at low complexity. The proposed method uses Successive Convex Approximation (SCA) to iteratively determine a joint optimal solution for the UAV trajectory, the RIS phases, and the BS and UAV transmission powers. The approach is then analytically evaluated under different sets of criteria.
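Schematically, the SCA procedure referred to above repeatedly solves a convex surrogate of the non-convex problem around the current iterate; in the generic form below, x collects the optimization variables (UAV trajectory, RIS phases, transmit powers), and the surrogate objective and feasible set are assumed convex and tight at the current point. This is the standard SCA template, not the paper's specific surrogate construction.

```latex
% Generic SCA iteration (schematic): each subproblem is a convex approximation
% of the original problem that is tight at the current iterate x^{(i)}.
\mathbf{x}^{(i+1)} \;=\; \arg\min_{\mathbf{x} \in \mathcal{X}^{(i)}}
  \tilde{f}\bigl(\mathbf{x};\,\mathbf{x}^{(i)}\bigr),
\qquad
\tilde{f}\bigl(\mathbf{x}^{(i)};\,\mathbf{x}^{(i)}\bigr) = f\bigl(\mathbf{x}^{(i)}\bigr).
```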
In this paper, we apply a multi-agent reinforcement learning (MARL) framework allowing the base station (BS) and the user equipments (UEs) to jointly learn a channel access policy and its signaling in a wireless multiple access scenario. In this framework, the BS and UEs are reinforcement learning (RL) agents that need to cooperate in order to deliver data. The comparison with contention-free and contention-based baselines shows that our framework achieves superior performance in terms of goodput, even in high-traffic situations, while maintaining a low collision rate. We also study the scalability of the proposed method, a major challenge in MARL, and provide first results towards addressing it.
In this paper, we propose a new framework, exploiting the multi-agent deep deterministic policy gradient (MADDPG) algorithm, to enable a base station (BS) and user equipments (UEs) to come up with a medium access control (MAC) protocol in a multiple access scenario. In this framework, the BS and UEs are reinforcement learning (RL) agents that need to learn to cooperate in order to deliver data. The network nodes can exchange control messages to collaborate and deliver data across the network, but without any prior agreement on the meaning of the control messages. In such a framework, the agents have to learn not only the channel access policy, but also the signaling policy. The collaboration between agents is shown to be important by comparing the proposed algorithm to ablated versions where either the communication between agents or the central critic is removed. The comparison with a contention-free baseline shows that our framework achieves a superior performance in terms of goodput and can effectively be used to learn a new protocol.
Random access (RA) schemes are a topic of high interest in machine-type communication (MTC). In RA protocols, backoff techniques such as exponential backoff (EB) are used to stabilize the system and avoid low throughput and excessive delays. However, these backoff techniques show varying performance for different underlying assumptions and analytical models. Therefore, finding a better transmission policy for slotted ALOHA RA is still a challenge. In this paper, we show the potential of deep reinforcement learning (DRL) for RA. We learn a transmission policy that balances throughput and fairness. The proposed algorithm learns transmission probabilities using the previous action and a binary feedback signal, and it adapts to different traffic arrival rates. Moreover, we propose the average age of packet (AoP) as a metric to measure fairness among users. Our results show that the proposed policy outperforms the baseline EB transmission schemes in terms of throughput and fairness.
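One plausible reading of the average age-of-packet (AoP) metric mentioned above is sketched below: per user, the average number of slots a delivered packet spends between arrival and delivery; the exact definition used in the paper may differ.

```python
# Minimal sketch of an average age-of-packet (AoP) style fairness metric:
# for each user, average the time each delivered packet spent waiting between
# its arrival slot and its delivery slot (illustrative reading, not necessarily
# the paper's exact definition).
import numpy as np

def average_aop(arrival_slots, delivery_slots):
    """Both arguments: lists of slot indices, one entry per delivered packet."""
    ages = np.asarray(delivery_slots) - np.asarray(arrival_slots)
    return float(ages.mean()) if ages.size else 0.0

# Per-user AoP values can then be compared (e.g., their spread) to judge fairness.
aop_user1 = average_aop([0, 4, 9], [2, 7, 10])
aop_user2 = average_aop([1, 5], [12, 20])
```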
Grant-free random access (RA) techniques are suitable for machine-type communication (MTC) networks, but they need to be adaptive to the MTC traffic, which differs from human-type communication. Conventional RA protocols such as exponential backoff (EB) schemes for slotted ALOHA suffer from a high number of collisions and are not directly applicable to MTC traffic models. In this work, we propose to use a multi-agent deep Q-network (DQN) with parameter sharing to find a single policy, applied to all machine-type devices (MTDs) in the network, to resolve collisions. Moreover, we consider binary broadcast feedback common to all devices to reduce signalling overhead. We compare the performance of our proposed DQN-RA scheme with EB schemes for up to 500 MTDs and show that the proposed scheme outperforms EB policies and provides a better balance between throughput, delay, and collision rate.
This work takes a critical look at the application of conventional machine learning methods to wireless communication problems through the lens of reliability and robustness. Deep learning techniques adopt a frequentist framework, and are known to provide poorly calibrated decisions that do not reproduce the true uncertainty caused by limitations in the size of the training data. Bayesian learning, while in principle capable of addressing this shortcoming, is in practice impaired by model misspecification and by the presence of outliers. Both problems are pervasive in wireless communication settings, in which the capacity of machine learning models is subject to resource constraints and training data is affected by noise and interference. In this context, we explore the application of the framework of robust Bayesian learning. After a tutorial-style introduction to robust Bayesian learning, we showcase the merits of robust Bayesian learning on several important wireless communication problems in terms of accuracy, calibration, and robustness to outliers and misspecification.
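For context, robust Bayesian learning is commonly built around a generalized (Gibbs) posterior of the form below, in which the log-likelihood is replaced by a loss chosen to be robust to outliers and misspecification and tempered by a positive parameter; the specific robust loss adopted in the paper may differ.

```latex
% Generalized (Gibbs) posterior: the likelihood is replaced by a robustified,
% tempered loss \ell over the training data \mathcal{D}, with temperature \beta > 0.
q(\theta \mid \mathcal{D}) \;\propto\; p(\theta)\,
  \exp\!\Bigl(-\beta \sum_{x \in \mathcal{D}} \ell(\theta, x)\Bigr).
```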
Environmental scene reconstruction is of great interest for autonomous robotic applications, since an accurate representation of the environment is necessary to ensure safe interaction with robots. Equally important, it is also vital to ensure reliable communication between the robot and its controller. The Large Intelligent Surface (LIS) is a technology that has been extensively studied due to its communication capabilities. Moreover, due to the number of antenna elements, these surfaces emerge as a powerful solution for radio sensing. This paper presents a novel method to translate radio environmental maps obtained at the LIS into floor plans of the indoor environment, built from the scatterers spread across its area. A Least Squares (LS)-based method, a U-Net (UN), and conditional Generative Adversarial Networks (cGANs) were leveraged to perform this task. We show that the floor plan can be correctly reconstructed using both local and global measurements.
We are interested in deducing whether two user equipments (UEs) in a cellular system are at nearby physical locations from measuring the similarity of their covariance matrices at a base station (BS). This becomes challenging in multiple-input multiple-output mmWave channels, as the semi-optical nature of mmWave radio propagation gives rise to non-Kronecker correlation. Hence, the estimated BS covariance matrix depends on the UE pilot beamformer and, moreover, on the direction of movement in the radio environment. A coordinated UE pilot transmission approach is needed to make measured covariances spatially consistent. We formulate the UE pilot beamformer selection problem as an optimization problem aiming to preserve the spatial consistency of a set of UEs moving in the same large-scale radio environment. We use the collinearity matrix distance to measure the similarity of the BS covariance matrices of UEs in the radio environment. Covariance matrix and instantaneous channel state based UE pilot beamformers with different ranks are considered. Simulations are used to evaluate the spatial consistency provided by coordinated uplink precoding methods. Depending on the expected signal-to-noise ratio, there is an optimal rank for the UE pilot transmission, which maximizes the similarity between covariances estimated from transmissions of different UEs in the same large-scale fading environment.
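For reference, a collinearity-based distance between two covariance matrices R1 and R2 is commonly written as below (up to convention, the correlation matrix distance): it is 0 when the matrices are equal up to a scaling factor and 1 when they are orthogonal in the trace inner product. The exact form and normalization used in the paper may differ.

```latex
% Collinearity-based distance between covariance matrices (schematic form).
d(\mathbf{R}_1, \mathbf{R}_2) \;=\; 1 \;-\;
  \frac{\operatorname{tr}\{\mathbf{R}_1 \mathbf{R}_2\}}
       {\lVert \mathbf{R}_1 \rVert_F \,\lVert \mathbf{R}_2 \rVert_F}.
```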
We consider a machine learning algorithm to predict the Signal-to-Noise Ratio (SNR) of a user transmission at a neighboring base station in a massive MIMO (mMIMO) cellular system. This information is needed for Handover (HO) decisions for mobile users. For SNR prediction, only uplink channel characteristics of users, measured in a serving cell, are used. Measuring the signal quality from the downlink signals of neighboring Base Stations (BSs) at the User Equipment (UE) becomes increasingly problematic in forthcoming mMIMO Millimeter-Wave (mmWave) 5G cellular systems, due to the high degree of directivity required from transmissions and the vulnerability of mmWave signals to blocking. Channel Charting (CC) is a machine learning technique for creating a radio map based on radio measurements only, which can be used for radio-resource-management problems. A CC is a two-dimensional representation of the space of received radio signals. Here, we learn an annotation of the CC in terms of neighboring BS signal qualities. Such an annotated CC can be used by a BS serving a UE to first localize the UE in the CC, and then to predict the signal quality from neighboring BSs. Each BS first constructs a CC from a number of samples, determining the similarity of radio signals transmitted from different locations in the network based on covariance matrices. Then, the BS learns a continuous function for predicting the vector of neighboring BS SNRs as a function of a 2D coordinate in the chart. The considered algorithm provides information for handover decisions without UE assistance. Power-consuming neighbor measurements at the UE are not needed, and the protocol overhead for HO is reduced.
We consider a machine learning approach to perform best beam prediction in Non-Standalone Millimeter Wave (mmWave) systems utilizing Channel Charting (CC). The approach reduces communication overheads and delays associated with initial access and beam tracking in 5G New Radio (NR) systems. The network has a mmWave and a sub-6 GHz component. We devise a Base Station (BS)-centric approach for best mmWave beam prediction, based on Channel State Information (CSI) measured at the sub-6 GHz BS, with no need to exchange information with User Equipments (UEs). In a training phase, we collect CSI at the sub-6 GHz BS from sample UEs and construct a dimensionality-reduced representation of the sample CSI, called a CC. We annotate the CC with best beam information measured at a mmWave BS for the sample UEs, assuming an autonomous beamformer at the UE side. A beam predictor is trained based on this information, connecting any sub-6 GHz CSI with a predicted best mmWave beam. To evaluate the efficiency of the proposed framework, we perform simulations for a street segment with synthetic spatially consistent CSI. With a neural network predictor, we obtain 91% accuracy for predicting the best beam and 99% accuracy for predicting one of the two best beams. The accuracy of CC-based beam prediction is indistinguishable from that of beam prediction based on the true location.
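Viewed as a classification task, the beam predictor described above can be sketched with a small scikit-learn MLP mapping sub-6 GHz CSI features to a best-beam index; the feature size, codebook size, and random data below are illustrative placeholders.

```python
# Minimal sketch of the classification view of beam prediction: map sub-6 GHz
# CSI features (random placeholders here) to the index of the best mmWave beam.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
csi_sub6 = rng.standard_normal((1000, 32))        # training CSI features
best_beam = rng.integers(0, 16, size=1000)        # measured best mmWave beam index

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
clf.fit(csi_sub6, best_beam)

# Top-2 beams for new CSI: take the two most probable codebook entries.
# Columns of predict_proba follow clf.classes_.
proba = clf.predict_proba(rng.standard_normal((5, 32)))
top2 = np.argsort(proba, axis=1)[:, -2:]
```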
We consider a scalable User Equipment (UE)-side indoor localization framework that processes Channel State Information (CSI) from multiple Access Points (APs). We use CSI features that are resilient to synchronization errors and other hardware impairments. As a consequence, our method does not require accurate network synchronization among APs. Increasing the number of APs considered by a UE profoundly improves fingerprint positioning, at the cost of increased complexity and channel estimation time. In order to improve the scalability of the framework to large networks consisting of multiple APs in many rooms, we train a multi-layer neural network that combines CSI features and unique AP identifiers of a subset of APs in range of a UE. We simulate UE-side localization using CSI obtained from a commercial ray tracer. The considered method, processing frequency-selective CSI, achieves an average positioning error of 60 cm, outperforming methods that process received signal strength information only. The mean localization accuracy loss compared to a non-scalable approach with perfect synchronization and CSI is 20 cm.
We consider machine learning for intra-cell beam handovers in mmWave 5G NR systems by leveraging Channel Charting (CC). We develop a base-station-centric approach for predicting the Signal-to-Noise Ratio (SNR) of beams. Beam SNRs are predicted based on signals measured at the BS, without the need to exchange information with UEs. In an offline training phase, we construct a beam-specific dimensionality reduction of Channel State Information (CSI) to a low-dimensional CC, annotate the CC with beam-wise SNRs, and then train SNR predictors for different target beams. In the online phase, we predict target beam SNRs. K-nearest neighbors, Gaussian process regression, and neural network based prediction are considered. A handover can then be decided based on the SNR difference between the serving and target beams. To evaluate the efficiency of the proposed framework, we perform simulations for a street segment with synthetically generated CSI. An average SNR prediction root-mean-square error of less than 0.3 dB is achieved.
We propose a novel beam-tracking algorithm based on channel charting (CC) which maintains the communication link between a base station (BS) and a mobile user equipment (UE) in a millimeter wave (mmWave) mobile communications system. Our method first uses large-scale channel state information at the BS in order to learn a CC. The points in the channel chart are then annotated with the signal-to-noise ratio (SNR) of best beams. One can then leverage this CC-to-SNR mapping in order to track strong beams between UEs and BS efficiently and robustly at very low beam-search overhead. Simulation results in a mmWave scenario show that the performance of the CC-assisted beam tracking method approaches that of an exhaustive beam-search approach while requiring significantly lower beam-search overhead than conventional tracking methods.
Routing is a crucial component in the design of Flying Ad-Hoc Networks (FANETs). State-of-the-art routing solutions exploit the position of Unmanned Aerial Vehicles (UAVs) and their mobility information to determine the existence of links between them, but this information is often unreliable, as the topology of FANETs can change quickly and unpredictably. In order to improve the tracking performance, the uncertainty introduced by imperfect measurements and tracking algorithms needs to be accounted for in the routing. Another important element to consider is beamforming, which can reduce interference but requires accurate channel and position information to work. In this work, we present the Beam Aware Stochastic Multihop Routing for FANETs (BA-SMURF), a Software-Defined Networking (SDN) routing scheme that takes into account the positioning uncertainty and beamforming design to find the most reliable routes in a FANET. Our simulation results show that joint consideration of beamforming and routing can provide a 5% throughput improvement with respect to the state of the art.
Thanks to its capability to provide a uniform service rate to the User Equipments (UEs), Cell-Free (CF) massive Multiple-Input Multiple-Output (mMIMO) has recently attracted considerable attention in both academia and industry, and is considered one of the potential technologies for beyond-5G and 6G. However, the reuse of the same pilot signals by multiple users can create the so-called pilot contamination problem, which can prevent CF mMIMO from unlocking its full performance. In this paper, we address this challenge by formulating the pilot assignment as a maximally diverse clustering problem and propose an efficient yet straightforward repulsive clustering-based pilot assignment scheme to mitigate the effects of pilot contamination in CF mMIMO. The numerical results show the superiority of the proposed technique compared to other methods with respect to the achieved uplink per-user rate.
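A greedy illustration of the repulsive (maximally diverse) clustering idea is sketched below: each UE is assigned the pilot whose current users are farthest away from it, so that pilot-sharing UEs are well separated; the actual scheme and distance measure in the paper may differ.

```python
# Minimal sketch of a repulsive (maximally diverse) pilot assignment: each UE is
# greedily given the pilot whose current users are farthest away from it.
import numpy as np

def repulsive_pilot_assignment(positions: np.ndarray, n_pilots: int) -> np.ndarray:
    n_ues = positions.shape[0]
    assignment = -np.ones(n_ues, dtype=int)
    for ue in range(n_ues):
        best_pilot, best_sep = ue % n_pilots, -1.0
        for p in range(n_pilots):
            members = np.where(assignment == p)[0]
            if members.size == 0:
                sep = np.inf   # empty pilot group: no contamination at all
            else:
                sep = np.min(np.linalg.norm(positions[members] - positions[ue], axis=1))
            if sep > best_sep:
                best_pilot, best_sep = p, sep
        assignment[ue] = best_pilot
    return assignment

ues = np.random.default_rng(0).uniform(0, 1000, size=(40, 2))  # UE positions in meters
pilots = repulsive_pilot_assignment(ues, n_pilots=10)
```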
Since electromagnetic signals are omnipresent, Radio Frequency (RF) sensing has the potential to become a universal sensing mechanism with applications in localization, smart home, retail, gesture recognition, intrusion detection, etc. Two emerging technologies in RF sensing, namely sensing through Large Intelligent Surfaces (LISs) and mmWave Frequency-Modulated Continuous-Wave (FMCW) radars, have been successfully applied to a wide range of applications. In this work, we compare LIS and mmWave radars for localization in real-world and simulated environments. In our experiments, the mmWave radar achieves 0.71 Intersection Over Union (IOU) and 3 cm error for bounding boxes, while the LIS achieves 0.56 IOU and 10 cm distance error. Although the radar outperforms the LIS in terms of accuracy, the LIS offers communication capabilities in addition to sensing.
We address a timely and relevant problem in signal processing: the recognition of patterns from spatial data in motion through a zero-shot learning scenario. We introduce a neural network architecture based on Siamese networks to recognize unseen classes of motion patterns. The approach uses a graph-based technique to achieve permutation invariance and also encodes moving point clouds into a representation space in a computationally efficient way. We evaluated the model on an open dataset with twenty-one gestures. The model outperforms state-of-the-art architectures by a considerable margin in four different settings in terms of accuracy, while reducing the computational complexity up to 60 times.
A methodology to cluster multiple sets of Gaussian multivariate complex observations based on the alignment of their column spaces is presented. These subspaces are identified with points in the Grassmann manifold and compared according to a similarity measure drawn from a chosen manifold distance, which is proportional to the squared projection-Frobenius norm. In order to guarantee that distances between subspaces of different dimensions are comparable, we propose to normalise the corresponding decision statistics with respect to their asymptotic mean and variance, assuming that (i) the dimensions of both the observations and the involved subspaces are large but comparable in magnitude and (ii) both subspaces are generated by the same statistical law. A procedure is derived to estimate these normalisation parameters, leading to a new statistic that can be built exclusively from the observations. The method is applied to a MIMO wireless channel clustering problem, where it is shown to outperform conventional similarity measures in terms of classification performance.
One of the beyond-5G developments that is often highlighted is the integration of wireless communication and radio sensing. This paper addresses the potential of communication-sensing integration of Large Intelligent Surfaces (LIS) in an exemplary Industry 4.0 scenario. Besides the potential for high throughput and efficient multiplexing of wireless links, an LIS can offer a high-resolution rendering of the propagation environment. This is because, in an indoor setting, it can be placed in proximity to the sensed phenomena, while the high resolution is offered by densely spaced tiny antennas deployed over a large area. By treating an LIS as a radio image of the environment, we develop sensing techniques that leverage the usage of computer vision combined with machine learning. We test these methods for a scenario where we need to detect whether an industrial robot deviates from a predefined route. The results show that the LIS-based sensing offers high precision and has a high application potential in indoor industrial environments.
Sensing capability is one of the most highlighted new features of future 6G wireless networks. This paper addresses the sensing potential of Large Intelligent Surfaces (LIS) in an exemplary Industry 4.0 scenario. Beyond the attention LIS has received for its communication aspects, it can also offer a high-resolution rendering of the propagation environment. This is because, in an indoor setting, it can be placed in proximity to the sensed phenomena, while the high resolution is offered by densely spaced tiny antennas deployed over a large area. By treating an LIS as a radio image of the environment relying on the received signal power, we develop techniques to sense the environment by leveraging the tools of image processing and machine learning. Once a radio image is obtained, a Denoising Autoencoder (DAE) network can be used for constructing a super-resolution image, leading to sensing advantages not available in traditional sensing systems. Also, we derive a Generalized Likelihood Ratio Test (GLRT) as a benchmark for the machine learning solution. We test these methods for a scenario where we need to detect whether an industrial robot deviates from a predefined route. The results show that LIS-based sensing offers high precision and has high application potential in indoor industrial environments.
The spectral behavior of kernel matrices built from complex multi-variate data is established in the asymptotic regime where both the number of observations and their dimensionality increase without bound at the same rate. The result is an extension of currently available results for inner product based kernel matrices formed from real valued observations to the case where the input data is complex valued. In particular, assuming complex independent standardized Gaussian inputs and imposing certain conditions on the kernel function, it is shown that the empirical distribution of eigenvalues of this type of matrices converges almost surely to a probability measure in this asymptotic domain. Furthermore, the asymptotic spectral density can be obtained by solving a quartic polynomial equation involving its Stieltjes transform and some coefficients depending on the Hermite-like expansion of the kernel function. This is in stark contrast with the equivalent result for real valued observations, in which the underlying polynomial equation is cubic.
We introduce Pantomime, a novel mid-air gesture recognition system exploiting spatio-temporal properties of millimeter-wave radio frequency (RF) signals. Pantomime is positioned in a unique region of the RF landscape: mid-resolution, mid-range, high-frequency sensing, which makes it ideal for motion gesture interaction. We configure a commercial frequency-modulated continuous-wave radar device to promote spatial information over temporal resolution by means of sparse 3D point clouds, and contribute a deep learning architecture that directly consumes the point cloud, enabling real-time performance with low computational demands. Pantomime achieves 95% accuracy and 99% AUC on a challenging set of 21 gestures articulated by 41 participants in two indoor environments, outperforming four state-of-the-art 3D point cloud recognizers. We further analyze the effect of the environment in 5 different indoor environments, as well as the effect of articulation speed, angle, and the distance of the person up to 5 m. We have made the collected mmWave gesture dataset, consisting of nearly 22,000 gesture instances, publicly available along with our radar sensor configuration, trained models, and source code for reproducibility. We conclude that Pantomime is resilient to various input conditions and may enable novel applications in industrial, vehicular, and smart home scenarios.
This work proposes a hierarchical clustering method for wireless users based on a similarity measure between the subspaces spanned by their channel matrices. Specifically, the channel subspaces are seen as points in Grassmann manifolds and their similarity is measured in terms of the squared projection-Frobenius distance. The asymptotic (in the number of antennas) analysis of the first- and second-order statistics of the similarity measure provides a tool for effectively comparing Grassmann manifolds of different sizes and allows for a proper similarity measure between clusters with different numbers of users/antennas, as corroborated by numerical results.
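A minimal sketch of the subspace distance and the resulting hierarchical clustering is given below; the matrix sizes, random channels, and average linkage are illustrative assumptions rather than the paper's setup.

```python
# Minimal sketch: squared projection-Frobenius distance between channel subspaces
# and hierarchical clustering of users. Subspaces are represented by orthonormal
# bases (columns). Uses the identity
#   0.5 * ||U1 U1^H - U2 U2^H||_F^2 = (k1 + k2)/2 - ||U1^H U2||_F^2.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def subspace_basis(H: np.ndarray, k: int) -> np.ndarray:
    """Orthonormal basis of the k-dimensional dominant column space of H."""
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :k]

def proj_frobenius_sq(U1: np.ndarray, U2: np.ndarray) -> float:
    k1, k2 = U1.shape[1], U2.shape[1]
    return 0.5 * (k1 + k2) - np.linalg.norm(U1.conj().T @ U2, "fro") ** 2

rng = np.random.default_rng(0)
users = [subspace_basis(rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8)), 4)
         for _ in range(20)]

# Pairwise distances, then average-linkage hierarchical clustering into 4 clusters.
D = np.array([[proj_frobenius_sq(a, b) for b in users] for a in users])
Z = linkage(squareform(D, checks=False), method="average")
clusters = fcluster(Z, t=4, criterion="maxclust")
```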
The IEEE 802.11ad Wi-Fi amendment enables short-range multi-gigabit communications in the unlicensed 60 GHz spectrum, unlocking interesting new applications such as wireless Augmented and Virtual Reality. The characteristics of the Millimeter Wave (mmW) band and directional communications allow increasing the system throughput by scheduling pairs of nodes with low cross-interfering channels in the same time-frequency slot. On the other hand, this requires significantly more signaling overhead. Furthermore, IEEE 802.11ad introduces a hybrid MAC characterized by two different channel access mechanisms: contention-based and contention-free access periods. The coexistence of both access period types and the directionality typical of mmW increase the channel access and scheduling complexity in IEEE 802.11ad compared to previous Wi-Fi versions. Hence, to provide the Quality of Service performance required by demanding applications, a proper resource scheduling mechanism that takes into account both directional communications and the newly added features of this Wi-Fi amendment is needed. In this paper, we present a brief but comprehensive review of the open problems and challenges associated with channel access in IEEE 802.11ad and propose a workflow to tackle them via both heuristic and learning-based methods.
In the context of wireless networking, it was recently shown that multiple DNNs can be jointly trained to offer a desired collaborative behaviour capable of coping with a broad range of sensing uncertainties. In particular, it was established that DNNs can be used to derive policies that are robust with respect to the information noise statistic affecting the local information (e.g., CSI in a wireless network) used by each agent (e.g., transmitter) to make its decision. While promising, a major challenge in the implementation of such a method is that information noise statistics may differ from agent to agent and, more importantly, that such statistics may not be available at the time of training or may evolve over time, making burdensome retraining necessary. This situation makes it desirable to devise a “universal” machine learning model, which can be trained once and for all so as to allow for decentralized cooperation in any future feedback noise environment. With this goal in mind, we propose an architecture inspired by the well-known Mixture of Experts (MoE) model, which has previously been used for non-linear regression and classification tasks in various contexts, such as computer vision and speech recognition. We consider the decentralized power control problem as an example to showcase the validity of the proposed model and to compare it against other power control algorithms. We show the ability of the so-called Team-DMoE model to efficiently track time-varying statistical scenarios.
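A minimal sketch of a mixture-of-experts style decision model is given below in PyTorch: a gating network weighs several experts, each of which could specialize in a different feedback-noise statistic; the layer sizes and the scalar power output are illustrative and do not reproduce the Team-DMoE architecture.

```python
# Minimal sketch of a mixture-of-experts policy: a gating network softly selects
# among expert networks; the weighted combination yields a normalized power decision.
import torch
import torch.nn as nn

class MoEPolicy(nn.Module):
    def __init__(self, in_dim: int, n_experts: int = 4, hidden: int = 32):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_experts)
        )
        self.gate = nn.Sequential(nn.Linear(in_dim, n_experts), nn.Softmax(dim=-1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.gate(x)                                   # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], -1)  # (batch, 1, n_experts)
        # Gate-weighted combination of expert outputs, squashed to [0, 1].
        return torch.sigmoid((outputs * weights.unsqueeze(1)).sum(-1))

policy = MoEPolicy(in_dim=8)
p = policy(torch.randn(16, 8))   # one power decision per (noisy) local CSI vector
```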
We address the realization of the Findability, Accessibility, Interoperability, and Reusability (FAIR) data principles in an Internet of Things (IoT) application through a data transfer protocol. In particular, we propose an architecture for the Message Queuing Telemetry Transport (MQTT) protocol that validates, normalizes, and filters incoming messages based on the FAIR principles to improve the interoperability and reusability of data. We show that our approach can significantly increase the degree of FAIRness of the system by evaluating the architecture using existing maturity indicators from the literature. The proposed architecture successfully passes 18 out of 22 maturity indicators. We also evaluate the performance of the system in 4 different settings in terms of latency and dropped messages in a simulation environment. We demonstrate that the architecture not only improves the degree of FAIRness of the system but also reduces the rate of dropped messages.
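The validate/normalize/filter step described above can be sketched as below for JSON payloads; the required metadata fields and the unit conversion are illustrative stand-ins for the paper's FAIR-oriented checks.

```python
# Minimal sketch of the validate/normalize/filter stage applied to incoming MQTT
# payloads. Field names and units are illustrative placeholders.
import json

REQUIRED_FIELDS = {"id", "timestamp", "value", "unit", "license"}

def process_message(payload: bytes):
    """Return a normalized message dict, or None to drop the message."""
    try:
        msg = json.loads(payload)
    except json.JSONDecodeError:
        return None                      # filter: not machine-readable
    if not REQUIRED_FIELDS.issubset(msg):
        return None                      # filter: missing FAIR metadata
    if msg["unit"] == "degC":            # normalize: convert to one canonical unit
        msg["value"] = msg["value"] + 273.15
        msg["unit"] = "K"
    return msg

ok = process_message(b'{"id": "sensor-1", "timestamp": 1, "value": 20.0, "unit": "degC", "license": "CC-BY"}')
dropped = process_message(b'{"value": 20.0}')
```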
The recently proposed QUIC protocol has been widely adopted at the transport layer of the Internet over the past few years. Its design goals are to overcome some of TCP’s performance issues, while maintaining the same properties and basic application interface. Two of the main drivers of its success were the integration with the innovative Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control mechanism, and the possibility of multiplexing different application streams over the same connection. Given the strong interest in QUIC shown by the ns-3 community, we present an extension to the native QUIC module that allows researchers to fully explore the potential of these two features. In this work, we present the integration of BBR into the QUIC module and the implementation of the necessary pacing and rate sampling mechanisms, along with a novel scheduling interface, with three different scheduling flavors. The new features are tested to verify that they perform as expected, using a web traffic model from the literature.
In a Flying Ad-Hoc Network (FANET), Unmanned Aerial Vehicles (UAVs), i.e., drones or quadcopters, use wireless communication to exchange data, status updates, and commands between each other and with the control center. However, due to the movement of UAVs, maintaining communication is difficult, particularly when multiple hops are needed to reach the destination. In this work, we propose the Stochastic Multipath UAV Routing for FANETs (SMURF) protocol, which exploits trajectory tracking information from the drones to compute the routes with the highest reliability. SMURF is a centralized protocol, as the control center gathers location updates and sends routing commands following the Software-Defined Networking (SDN) paradigm over a separate long-range, low-bitrate technology such as LoRaWAN. Additionally, SMURF exploits multiple routes to increase the probability that at least one of the routes is usable. Simulation results show a significant reliability improvement over purely distance-based routing, and that just 3 routes are enough to achieve performance very close to oracle-based routing with perfect information.
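A minimal sketch of reliability-based multipath route selection is given below: with link weights of -log(p), the most reliable route becomes a shortest path, and the first k simple paths form the multipath set; the topology, link probabilities, and use of NetworkX are illustrative assumptions, not the SMURF implementation.

```python
# Minimal sketch: each link carries an estimated probability of being usable
# (e.g., derived from tracking uncertainty). With weights -log(p), the most
# reliable route is a shortest path; taking the first k simple paths gives a
# multipath set. Topology and probabilities are toy values.
import math
from itertools import islice
import networkx as nx

G = nx.Graph()
links = [("gcs", "uav1", 0.95), ("gcs", "uav2", 0.80), ("uav1", "uav3", 0.90),
         ("uav2", "uav3", 0.85), ("uav3", "dst", 0.99), ("uav1", "dst", 0.60)]
for u, v, p in links:
    G.add_edge(u, v, weight=-math.log(p), prob=p)

def route_reliability(path):
    return math.prod(G[u][v]["prob"] for u, v in zip(path, path[1:]))

# Three most reliable routes from the control center to the destination.
routes = list(islice(nx.shortest_simple_paths(G, "gcs", "dst", weight="weight"), 3))
reliabilities = [route_reliability(r) for r in routes]
```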
We address an actively discussed problem in signal processing, recognizing patterns from spatial data in motion. In particular, we suggest a neural network architecture to recognize motion patterns from 4D point clouds. We demonstrate the feasibility of our approach with point cloud datasets of hand gestures. The architecture, PointGest, directly feeds on unprocessed timelines of point cloud data without any need for voxelization or projection. The model is resilient to noise in the input point cloud through abstraction to lower-density representations, especially for regions of high density. We evaluate the architecture on a benchmark dataset with ten gestures. PointGest achieves an accuracy of 98.8%, outperforming five state-of-the-art point cloud classification models.