A Conditioned UNet for Music Source Separation
Abstract
In this paper we propose a conditioned UNet for Music Source Separation (MSS). MSS is generally performed by multi-output neural networks, typically UNets, with each output representing a particular stem from a predefined instrument vocabulary. In contrast, conditioned MSS networks accept an audio query related to a stem of interest alongside the signal from which that stem is to be extracted. Thus a strict vocabulary is not required, enabling more realistic MSS tasks. The potential of conditioned approaches for such tasks has been somewhat hidden by a lack of suitable data, an issue recently addressed with the MoisesDb dataset. A recent method, Banquet, employs this dataset with promising results on larger vocabularies. Banquet uses Bandsplit RNN rather than a UNet, and its authors argue that UNets are not suitable for conditioned MSS. We counter this argument and propose QSCNet, a novel conditioned UNet for MSS that integrates network conditioning elements into the Sparse Compressed Network for MSS. We find that QSCNet outperforms Banquet by over 1 dB SNR on two MSS tasks, while using less than half the number of parameters.
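The abstract describes conditioning a separation network on a query embedding rather than fixing an output vocabulary. A common way to inject such a query into a UNet's feature maps is feature-wise linear modulation (FiLM), where the query embedding produces per-channel scale and shift parameters. The sketch below illustrates this general idea in numpy; the layer sizes, weight names, and the use of FiLM itself are illustrative assumptions, not the specific conditioning mechanism of QSCNet.

```python
import numpy as np

def film_condition(features, query_emb, w_gamma, w_beta):
    """FiLM-style conditioning sketch: a query embedding is projected to
    per-channel scale (gamma) and shift (beta) terms, which modulate the
    separator's feature maps. Shapes are hypothetical.

    features:  (C, T, F) channel x time x frequency feature maps
    query_emb: (D,) embedding of the audio query
    w_gamma:   (D, C) projection to per-channel scales
    w_beta:    (D, C) projection to per-channel shifts
    """
    gamma = query_emb @ w_gamma              # (C,)
    beta = query_emb @ w_beta                # (C,)
    # Broadcast the per-channel terms over the time-frequency grid.
    return gamma[:, None, None] * features + beta[:, None, None]

# Hypothetical sizes: 8 channels, 16-dim query, 4x5 time-frequency grid.
rng = np.random.default_rng(0)
C, D, T, F = 8, 16, 4, 5
features = rng.standard_normal((C, T, F))
query_emb = rng.standard_normal(D)
w_gamma = rng.standard_normal((D, C))
w_beta = rng.standard_normal((D, C))

out = film_condition(features, query_emb, w_gamma, w_beta)
print(out.shape)  # (8, 4, 5)
```

Because the modulation is per-channel, the same separator backbone can be steered toward different stems simply by supplying a different query embedding, which is what removes the need for a fixed output vocabulary.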
Cite This Paper
O'Hanlon, K., Woods, B., Wang, L., Sandler, M. (2025). A Conditioned UNet for Music Source Separation. arXiv preprint arXiv:2512.15532.
Ken O'Hanlon, Basil Woods, Lin Wang, and Mark Sandler. "A Conditioned UNet for Music Source Separation." arXiv preprint arXiv:2512.15532 (2025).