R.I.P.
👻
Ghosted
Disentangling Speaker Traits for Deepfake Source Verification via Chebyshev Polynomial and Riemannian Metric Learning
March 23, 2026 · Grace Period · Interspeech 2026
Authors
Xi Xuan, Wenxin Zhang, Zhiyu Li, Jennifer Williams, Ville HautamΓ€ki, Tomi H. Kinnunen
arXiv ID
2603.21875
Category
eess.AS: Audio & Speech
Cross-listed
cs.CL,
cs.SD
Citations
0
Venue
Interspeech 2026
Abstract
Speech deepfake source verification systems aim to determine whether two synthetic speech utterances originate from the same source generator, often assuming that the resulting source embeddings are independent of speaker traits. However, this assumption remains unverified. In this paper, we first investigate the impact of speaker factors on source verification. We then propose a speaker-disentangled metric learning (SDML) framework incorporating two novel loss functions. The first leverages Chebyshev polynomials to mitigate gradient instability during disentanglement optimization. The second projects source and speaker embeddings into hyperbolic space, leveraging Riemannian metric distances to reduce speaker information and learn more discriminative source features. Experimental results on the MLAAD benchmark, evaluated under four newly proposed protocols designed for source-speaker disentanglement scenarios, demonstrate the effectiveness of the SDML framework. The code, evaluation protocols and demo website are available at https://github.com/xxuan-acoustics/RiemannSD-Net.
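For readers unfamiliar with the two mathematical ingredients the abstract names, the sketch below illustrates them generically: Chebyshev polynomials of the first kind evaluated via their three-term recurrence, and the Riemannian geodesic distance on the Poincaré ball model of hyperbolic space. This is a minimal NumPy illustration, not the paper's SDML losses; the function names and the choice of the Poincaré ball model (one common model of hyperbolic space) are assumptions made here for illustration.

```python
import numpy as np

def chebyshev_T(n, x):
    """Chebyshev polynomial of the first kind T_n(x), via the recurrence
    T_0(x) = 1, T_1(x) = x, T_k(x) = 2x*T_{k-1}(x) - T_{k-2}(x)."""
    t_prev, t_curr = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(2, n + 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between points u, v inside the unit Poincaré ball:
    d(u, v) = arcosh(1 + 2*||u-v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps))
```

In metric-learning settings such distances replace the Euclidean distance in the loss, so that embeddings are pulled together or pushed apart along the curved geometry rather than in flat space.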
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Audio & Speech
SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition
DiffWave: A Versatile Diffusion Model for Audio Synthesis
FastSpeech 2: Fast and High-Quality End-to-End Text to Speech
MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis