On Neural Architectures for Deep Learning-based Source Separation of Co-Channel OFDM Signals
Gary C.F. Lee (MIT); Amir Weiss (MIT); Alejandro Lancho (MIT); Yury Polyanskiy (MIT); Gregory W. Wornell (MIT)
We study the single-channel source separation problem involving orthogonal frequency-division multiplexing (OFDM) signals, which are ubiquitous in modern-day digital communication systems. Related efforts have been pursued in monaural source separation, where state-of-the-art neural architectures have been adopted to train end-to-end separators for audio signals (as 1-dimensional time series). In this work, through a prototype problem based on the OFDM source model, we assess---and question---the efficacy of using audio-oriented neural architectures to separate signals based on features pertinent to communication waveforms. Perhaps surprisingly, we demonstrate that in some configurations where perfect separation is theoretically attainable, these audio-oriented neural architectures perform poorly in separating co-channel OFDM waveforms. Yet, we propose critical domain-informed modifications to the network parameterization, based on insights from OFDM structures, that yield a performance improvement of about 30 dB.
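To make the problem setting concrete, the sketch below constructs a single-channel mixture of two co-channel OFDM waveforms, the kind of input a neural separator in this setting would receive. It is an illustrative sketch only, not the authors' code: the FFT size, cyclic-prefix length, QPSK constellation, and target signal-to-interference ratio (SIR) are all assumed values, not parameters taken from the paper.

```python
# Illustrative sketch (assumed parameters, not from the paper): build a
# co-channel mixture of two baseband OFDM waveforms.
import numpy as np

rng = np.random.default_rng(0)

def ofdm_waveform(n_symbols: int, n_fft: int = 64, n_cp: int = 16) -> np.ndarray:
    """Generate a baseband OFDM waveform from random QPSK subcarrier symbols."""
    # Random QPSK symbols on every subcarrier of every OFDM symbol.
    bits = rng.integers(0, 2, size=(n_symbols, n_fft, 2))
    qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)
    # OFDM modulation: inverse FFT per symbol, then prepend the cyclic prefix.
    time_symbols = np.fft.ifft(qpsk, axis=-1) * np.sqrt(n_fft)
    with_cp = np.concatenate([time_symbols[:, -n_cp:], time_symbols], axis=-1)
    return with_cp.reshape(-1)  # serialize into a 1-D complex time series

# Signal of interest and a co-channel OFDM interferer of equal length.
s = ofdm_waveform(n_symbols=40)
b = ofdm_waveform(n_symbols=40)

# Scale the interferer to an assumed target SIR, then form the mixture.
sir_db = 0.0
scale = np.sqrt(np.mean(np.abs(s) ** 2)
                / (np.mean(np.abs(b) ** 2) * 10 ** (sir_db / 10)))
y = s + scale * b  # single-channel mixture presented to the separator

print(y.shape, y.dtype)
```

The separator's task is to recover s from y alone; the cyclic-prefix and subcarrier structure visible in this construction is exactly the kind of OFDM-specific feature the paper argues should inform the network parameterization.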