EXPLOITING TEMPORAL CONTEXT IN CNN BASED MULTISOURCE DOA ESTIMATION
Alexander Bohlender, Nilesh Madhu, Ann Spriet, Wouter Tirry
SPS
In this work, we consider a previously proposed convolutional neural network (CNN) approach that estimates the directions of arrival (DOAs) of multiple sources from the phase spectra of the microphone signals. For speech specifically, the approach was shown to work well even when trained entirely on synthetically generated data. However, since each frame is processed separately, temporal context cannot be taken into account. We therefore consider two extensions of the CNN: the integration of a long short-term memory (LSTM) layer, or of a temporal convolutional network (TCN). To accommodate the incorporation of temporal context, the training data generation framework needs to be adjusted. To obtain an easily parameterizable model, we propose to employ Markov chains to realize a gradual evolution of the source activity at different times, frequencies, and directions throughout a training sequence. A thorough evaluation demonstrates that the proposed configuration for generating training data is suitable for both single- and multi-talker localization. In particular, we note that with temporal context, it is important to use speech, or realistic signals in general, for the sources. The experiments reveal that the CNN with the LSTM extension outperforms all other considered approaches, including the plain CNN and the TCN extension.
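The idea of using a Markov chain to drive a gradual evolution of source activity can be illustrated with a minimal sketch. The snippet below simulates a two-state (inactive/active) chain for a single source over a sequence of frames; the transition probabilities and function name are illustrative assumptions, not values from the paper.

```python
import random

def simulate_activity(num_frames, p_on=0.05, p_off=0.05, seed=0):
    """Simulate a two-state Markov chain of source activity per frame.

    p_on:  probability of switching inactive -> active in a frame
    p_off: probability of switching active -> inactive in a frame
    (Both probabilities are illustrative, not taken from the paper.)
    """
    rng = random.Random(seed)
    state = 0  # start inactive
    activity = []
    for _ in range(num_frames):
        if state == 0 and rng.random() < p_on:
            state = 1
        elif state == 1 and rng.random() < p_off:
            state = 0
        activity.append(state)
    return activity

# Small transition probabilities yield long runs of identical states,
# i.e. activity that changes only gradually across a training sequence.
seq = simulate_activity(200)
```

With small transition probabilities, the expected dwell time in each state is long (roughly 1/p frames), which is what makes the resulting training sequences exhibit the gradual, temporally coherent activity patterns that a recurrent or temporally convolutional model can exploit. Independent chains of this kind could be run per direction (and, analogously, per frequency) to parameterize the full data generation process.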