As a Marie Curie ITN project consortium, the WindMill team fully recognizes the essential role played by the scientific and career training of the Early-Stage Researchers (ESRs) involved in the project. This document reports on the outcomes of the various training events organized for the benefit of our ESRs, as well as the ESRs’ direct feedback on these events.
Machine learning and information theory are highly interconnected because they both turn statistical relations in data into actionable solutions. While information theory mainly deals with communications problems and has clear models for doing so, machine learning mostly relies on collected data. In this document, we review open issues in both machine learning and communications and how they are intertwined. We especially devote a chapter to transfer learning, which would be paramount for learning from data in communications networks. We also dedicate another chapter to Generative Adversarial Networks (GANs) and how they can be used to learn communications simulators from data. Finally, we revisit the need for machine learning in MIMO.
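As a minimal sketch of the information-theoretic side described above, where a clear channel model makes quantities computable in closed form, the snippet below computes the mutual information of a binary symmetric channel with a uniform input. The crossover probability and the helper function names are illustrative assumptions, not taken from the report.

```python
import math

def mutual_information(p_joint):
    """I(X;Y) in bits from a joint pmf given as a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in p_joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    # I(X;Y) = sum p(x,y) log2( p(x,y) / (p(x) p(y)) )
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in p_joint.items() if p > 0)

# Binary symmetric channel: uniform input, illustrative crossover probability eps.
eps = 0.1
joint = {(0, 0): 0.5 * (1 - eps), (0, 1): 0.5 * eps,
         (1, 0): 0.5 * eps,       (1, 1): 0.5 * (1 - eps)}

capacity = mutual_information(joint)  # equals 1 - H2(eps) for this channel
```

With a uniform input, this mutual information is the channel capacity, 1 - H2(eps); a machine-learning approach would instead have to estimate the same quantity from samples.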
Machine Learning (ML) approaches have attracted researchers in wireless communications, who employ and develop ML algorithms in order to exploit information about the radio channels. However, ML methods are data-hungry, and huge amounts of real-world measurements are needed to obtain reliable results. This problem can be circumvented by generating authentic synthetic data which mimics the behavior of realistic data. To do so, accurate simulators need to be designed and created based on mathematical models and measured data. In this report, synthetic-data use cases in three different applications are briefly discussed. An overview of channel models appropriate for Fifth Generation (5G) and Beyond Fifth Generation (B5G) systems is given. Channel model characteristics, including path loss, Large-Scale Fading, and Small-Scale Fading models, are discussed, and the differences between Millimeter-Wave and microwave models are explained. The QUAsi Deterministic RadIo channel GenerAtor (QuaDRiGa) and ray-tracing channel simulators are discussed. For ML use cases where radio characteristics are to be predicted from traces of user channel measurements, spatial consistency of the channel model is paramount. The spatial consistency of channels generated in QuaDRiGa is investigated here. As an example use of spatially consistent channel models, channels generated by a ray-tracing model are used in a channel-charting-based algorithm for handover, and the performance of the system is evaluated. One of the B5G developments that is often highlighted is the integration of wireless communication and radio sensing. The potential of communication-sensing integration of Large Intelligent Surfaces (LIS) is still under development. By treating a LIS as a radio image of the environment, sensing techniques that leverage the tools of image processing and computer vision combined with machine learning can be undertaken.
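The path-loss and large-scale fading characteristics mentioned above can be sketched with a standard log-distance path-loss model plus log-normal shadowing. This is a generic textbook model, not the report's simulators; the reference distance, intercept, exponent, and shadowing deviation below are illustrative assumptions.

```python
import math
import random

def path_loss_db(d_m, d0_m=1.0, pl0_db=32.0, n=3.0, sigma_db=4.0, rng=None):
    """Log-distance path loss in dB with optional log-normal shadowing:
    PL(d) = PL(d0) + 10 * n * log10(d / d0) + X_sigma, X_sigma ~ N(0, sigma^2) dB.
    All default parameter values are illustrative, not measured."""
    rng = rng or random.Random(0)
    shadowing = rng.gauss(0.0, sigma_db) if sigma_db > 0 else 0.0
    return pl0_db + 10.0 * n * math.log10(d_m / d0_m) + shadowing

# Deterministic part only (sigma_db=0): doubling the distance
# adds 10 * n * log10(2) dB, i.e. about 9 dB for n = 3.
pl_100 = path_loss_db(100.0, sigma_db=0.0)
pl_200 = path_loss_db(200.0, sigma_db=0.0)
```

Synthetic traces drawn from such a model are spatially independent unless the shadowing term is correlated across positions, which is exactly the spatial-consistency issue investigated for QuaDRiGa above.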
In this report, we discuss the joint modelling of wireless channels together with the hardware impairments of the radio front-end. In the first half, we discuss the necessity of such simulators and then proceed to an overview of the hardware impairments that affect performance the most. We also highlight the main disadvantages of existing impairment modelling methods with respect to channel modelling. In the core section, we propose a solution in the form of a Bayesian non-parametric tool: Gaussian processes. This choice is justified by their generative and universal function approximation properties, and by their interpretable way of model construction. The same chapter also includes an overview of more advanced techniques, such as automatic kernel construction and deep Gaussian processes. We close the report with a thorough overview of modern probabilistic modelling frameworks and possible future research directions.
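As a minimal sketch of the Gaussian-process tool proposed above, the snippet below computes the posterior mean of GP regression with a squared-exponential kernel, fitted to a toy nonlinear curve standing in for an impairment characteristic. The kernel parameters, the toy data, and the function names are illustrative assumptions; it uses a small pure-Python linear solver to stay self-contained.

```python
import math

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel; hyperparameters are illustrative."""
    return variance * math.exp(-0.5 * ((x1 - x2) / lengthscale) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(x_train, y_train, x_star, noise=1e-6):
    """Posterior mean of a zero-mean GP at x_star given noisy observations:
    mean(x*) = k(x*, X) (K + noise*I)^{-1} y."""
    n = len(x_train)
    K = [[rbf(x_train[i], x_train[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, y_train)
    return sum(rbf(x_star, x_train[i]) * alpha[i] for i in range(n))

# Toy impairment curve: observe y = x^2 at a few points, interpolate at x = 1.5.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]
mu = gp_posterior_mean(xs, ys, 1.5)
```

The generative side mentioned in the report corresponds to sampling functions from the posterior rather than only reading off its mean; the kernel choice is what makes the model construction interpretable.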
The term Radio Resource Management (RRM) describes the mechanisms used by wireless networks to conduct various operations. Today, those mechanisms are mostly the product of human minds, who have designed and optimized them over time. This document introduces some well-known as well as other novel challenges in RRM and proposes some innovative approaches to solve them using machine learning. The problems covered by this report include random access, mobility, routing, scheduling, and scalability. Reinforcement Learning approaches dominate most of the solutions proposed here. Finally, a framework for benchmarking solutions to wireless problems is also discussed.
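As a toy illustration of the Reinforcement-Learning flavour that dominates the solutions above, the snippet below runs an epsilon-greedy bandit that learns which of two channels gives the higher transmission success rate. This is the simplest RL instance, not any specific algorithm from the report; the success probabilities and hyperparameters are illustrative assumptions.

```python
import random

def epsilon_greedy_bandit(success_probs, steps=5000, eps=0.1, seed=0):
    """Learn per-channel success rates with an epsilon-greedy bandit.
    Returns the estimated success rate of each channel after `steps` trials."""
    rng = random.Random(seed)
    n = len(success_probs)
    q = [0.0] * n      # estimated success rate per channel
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(n)                    # explore a random channel
        else:
            a = max(range(n), key=lambda i: q[i])   # exploit the best estimate
        reward = 1.0 if rng.random() < success_probs[a] else 0.0
        counts[a] += 1
        q[a] += (reward - q[a]) / counts[a]         # incremental sample mean
    return q

# Two channels with (hidden) success probabilities 0.2 and 0.8.
q = epsilon_greedy_bandit([0.2, 0.8])
```

Full RRM problems such as scheduling or routing add state and delayed rewards on top of this, which is where full reinforcement-learning methods come in.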
Future wireless networks attempt to combine different types of communication for various applications, thereby leading to a high number of devices that need to be integrated and handled efficiently. Hence, network optimization in future wireless networks, such as 5G networks and beyond, has become very important, as a large number of devices requesting a certain data rate would need a large amount of bandwidth and advanced resource management algorithms to maintain an appreciable quality of service and quality of experience. In order to handle largely variable requirements, prediction and estimation of wireless network state parameters, such as channel state, user data requirements, and user mobility, are needed to calculate optimal network configurations for a certain application at a certain time for a certain user. So, in this document, we describe the different methodologies and schemes proposed for anticipatory or predictive networking, and the optimization schemes proposed based on the different types of anticipatory information from the wireless networks.
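The kind of predictor that anticipatory networking relies on can be sketched with a one-step linear (AR(1)-style) forecast of a network state parameter, fitted by least squares. The synthetic "channel quality" trace and all names below are illustrative assumptions, not a method from the document.

```python
def fit_ar1(series):
    """Least-squares fit of x[t+1] = a * x[t] + b over a scalar time series."""
    xs = series[:-1]
    ys = series[1:]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def predict_next(series):
    """One-step-ahead forecast of the next value of the series."""
    a, b = fit_ar1(series)
    return a * series[-1] + b

# Synthetic "channel quality" trace decaying toward 0: x[t+1] = 0.9 * x[t].
trace = [10.0]
for _ in range(20):
    trace.append(0.9 * trace[-1])
nxt = predict_next(trace)
```

In practice the predicted state (channel quality, demand, position) would feed a resource-allocation step that configures the network ahead of time rather than reactively.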