Analysis and transformation of voice level in singing voice
Frederik Bous (STMS - IRCAM, Sorbonne Université, CNRS); Axel Roebel (Ircam)
SPS
We introduce a neural auto-encoder that transforms the musical dynamics in recordings of singing voice via changes in voice level. Since most singing voice recordings are not annotated with voice level, we propose a means of estimating the voice level from the signal's timbre using a neural voice level estimator. We introduce the recording factor, a proportionality constant that relates the voice level to the recorded signal power. This unknown constant depends on the recording conditions and the post-processing and may therefore differ between recordings (while remaining constant within each recording). We provide two approaches to estimating the voice level without knowing the recording factor: the recording factor can be learned alongside the weights of the voice level estimator, or a special loss function based on the scalar product can be used to match only the contour of the recorded signal's power. The voice level models are used to condition a previously introduced bottleneck auto-encoder that disentangles its input, the mel-spectrogram, from the voice level. We evaluate the voice level models on recordings annotated with musical dynamics and by their ability to provide useful information to the auto-encoder. A perceptual test evaluates the perceived change in voice level in transformed recordings as well as the synthesis quality. The test confirms that changing the conditional input changes the perceived voice level accordingly, suggesting that the proposed voice level models encode information about the true voice level.
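The scalar-product loss mentioned in the abstract can be illustrated with a minimal sketch: because the recording factor scales the whole recording by one unknown positive constant, a loss that compares normalized contours is invariant to that constant. The function below is a hypothetical illustration of this idea (a cosine-similarity-style contour loss), not the paper's exact formulation; the names `contour_loss`, `v_pred`, and `power` are assumptions introduced here for clarity.

```python
import numpy as np

def contour_loss(v_pred: np.ndarray, power: np.ndarray) -> float:
    """Scale-invariant contour loss (illustrative sketch).

    v_pred: estimated voice level contour over time.
    power:  recorded signal power contour, assumed equal to the true
            voice level contour times an unknown positive constant
            (the 'recording factor').

    Normalizing both contours removes the unknown constant, so the
    loss depends only on the contour shapes, not their overall scale.
    Returns 0 when the contours are proportional, > 0 otherwise.
    """
    v = v_pred / np.linalg.norm(v_pred)
    p = power / np.linalg.norm(power)
    return 1.0 - float(np.dot(v, p))
```

For example, `contour_loss(v, 5.0 * v)` is zero for any contour `v`, so the estimator is never penalized for not knowing the recording factor, while contours with a different shape incur a positive loss.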