TY - GEN
T1 - SoundCount: Sound Counting from Raw Audio with Dyadic Decomposition Neural Network
T2 - 38th AAAI Conference on Artificial Intelligence, AAAI 2024
AU - He, Yuhang
AU - Dai, Zhuangzhuang
AU - Trigoni, Niki
AU - Chen, Long
AU - Markham, Andrew
PY - 2024/3/24
AB - In this paper, we study an underexplored yet important and challenging problem: counting the number of distinct sounds in raw audio characterized by a high degree of polyphonicity. We do so by proposing a novel end-to-end trainable neural network (which we call DyDecNet, consisting of a dyadic decomposition front-end and a backbone network), and by quantifying how counting difficulty depends on sound polyphonicity. The dyadic decomposition front-end progressively decomposes the raw waveform dyadically along the frequency axis to obtain a time-frequency representation in a multi-stage, coarse-to-fine manner. Each intermediate waveform convolved by a parent filter is further processed by a pair of child filters that evenly split the parent filter’s frequency response, with the higher-half child filter encoding the detail and the lower-half child filter encoding the approximation. We further introduce an energy gain normalization to account for sound loudness variance and spectrum overlap, and apply it to each intermediate parent waveform before feeding it to the two child filters. To better quantify sound counting difficulty, we further design three polyphony-aware metrics: polyphony ratio, max polyphony, and mean polyphony. We test DyDecNet on various datasets to demonstrate its superiority.
UR - https://ojs.aaai.org/index.php/AAAI/article/view/29134
UR - http://www.scopus.com/inward/record.url?scp=85189618219&partnerID=8YFLogxK
DO - 10.1609/aaai.v38i11.29134
M3 - Conference publication
AN - SCOPUS:85189618219
T3 - Proceedings of the AAAI Conference on Artificial Intelligence
SP - 12421
EP - 12429
BT - AAAI-24 Technical Tracks 11
A2 - Wooldridge, Michael
A2 - Dy, Jennifer
A2 - Natarajan, Sriraam
PB - AAAI
Y2 - 20 February 2024 through 27 February 2024
ER -
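Note: as a minimal, hypothetical illustration of the mechanics the abstract describes, the sketch below splits each parent band into an approximation/detail child pair using fixed Butterworth filters, and computes one plausible reading of the three polyphony-aware metrics. All names and definitions here are assumptions for illustration; DyDecNet's actual front-end uses learnable filters with energy gain normalization, and the paper's exact metric formulations may differ.

```python
# Hypothetical sketch, NOT the paper's implementation: fixed Butterworth
# filter pairs stand in for DyDecNet's learnable dyadic front-end, and the
# metric definitions are plausible readings of the abstract's descriptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def dyadic_decompose(x, fs, depth=3):
    """Recursively split a waveform into 2**depth frequency sub-bands.

    Each parent band [lo, hi] is evenly split at its midpoint: the low-half
    child carries the approximation, the high-half child the detail.
    """
    bands = [(x, 0.0, fs / 2.0)]                 # (waveform, low_hz, high_hz)
    for _ in range(depth):
        children = []
        for wav, lo, hi in bands:
            mid = 0.5 * (lo + hi)                # evenly split the parent band
            sos_lo = butter(4, mid, btype="low", fs=fs, output="sos")
            sos_hi = butter(4, mid, btype="high", fs=fs, output="sos")
            children.append((sosfiltfilt(sos_lo, wav), lo, mid))  # approximation
            children.append((sosfiltfilt(sos_hi, wav), mid, hi))  # detail
        bands = children
    return bands

def polyphony_metrics(activity):
    """activity: (num_frames, num_events) boolean matrix of sound presence.

    Assumed definitions (the paper's exact formulations may differ):
      polyphony ratio -- fraction of active frames with >= 2 overlapping sounds
      max polyphony   -- largest number of simultaneous sounds in any frame
      mean polyphony  -- average overlap count over the active frames
    """
    counts = activity.sum(axis=1)
    active = counts > 0
    ratio = (counts >= 2).sum() / max(active.sum(), 1)
    mean_poly = counts[active].mean() if active.any() else 0.0
    return ratio, counts.max(), mean_poly

# Toy usage: a two-tone signal lands in the expected sub-bands.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
for wav, lo, hi in dyadic_decompose(x, fs, depth=2):
    print(f"{lo:6.0f}-{hi:6.0f} Hz  energy={np.sum(wav ** 2):10.1f}")

acts = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 0, 0]], dtype=bool)
print(polyphony_metrics(acts))  # -> (0.666..., 3, 2.0)
```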