Segment Length Matters: A Study of Segment Lengths on Audio Fingerprinting Performance
Abstract
Neural audio fingerprinting performance varies with segment length: short (0.5-second) segments generally provide better retrieval accuracy, and large language models show promise in recommending optimal segment durations.
Audio fingerprinting provides an identifiable representation of acoustic signals, which can later be used in identification and retrieval systems. To obtain a discriminative representation, the input audio is usually segmented into shorter time intervals, allowing local acoustic features to be extracted and analyzed. Modern neural approaches typically operate on short, fixed-duration audio segments, yet the choice of segment duration is often made heuristically and rarely examined in depth. In this paper, we study how segment length affects audio fingerprinting performance. We extend an existing neural fingerprinting architecture to support various segment lengths and evaluate retrieval accuracy across different segment lengths and query durations. Our results show that short segment lengths (0.5-second) generally achieve better performance. Moreover, we evaluate the capacity of LLMs to recommend the best segment length; among the three LLMs studied, GPT-5-mini consistently gives the best suggestions across five considerations. Our findings provide practical guidance for selecting segment duration in large-scale neural audio retrieval systems.
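The segmentation step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the 0.5-second default (the duration the paper finds best), the hop duration, and the sample rate are all assumptions for demonstration purposes.

```python
import numpy as np

def segment_audio(signal: np.ndarray, sr: int,
                  seg_dur: float = 0.5, hop_dur: float = 0.25) -> np.ndarray:
    """Split a mono waveform into fixed-duration, possibly overlapping segments.

    seg_dur and hop_dur are in seconds; defaults are illustrative only.
    Returns an array of shape (num_segments, segment_samples).
    """
    seg_len = int(seg_dur * sr)
    hop_len = int(hop_dur * sr)
    if len(signal) < seg_len:
        return np.empty((0, seg_len))
    starts = range(0, len(signal) - seg_len + 1, hop_len)
    return np.stack([signal[s:s + seg_len] for s in starts])

# Example: 3 seconds of audio at 8 kHz, cut into 0.5 s segments with 50% overlap
audio = np.zeros(3 * 8000)
segments = segment_audio(audio, sr=8000)
print(segments.shape)  # (11, 4000)
```

Each row of the returned array would then be fed to the fingerprinting network to produce one embedding per segment.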
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Lightweight Resolution-Aware Audio Deepfake Detection via Cross-Scale Attention and Consistency Learning (2026)
- BEST-STD2.0: Balanced and Efficient Speech Tokenizer for Spoken Term Detection (2025)
- VIBEVOICE-ASR Technical Report (2026)
- DAME: Duration-Aware Matryoshka Embedding for Duration-Robust Speaker Verification (2026)
- Divide, then Ground: Adapting Frame Selection to Query Types for Long-Form Video Understanding (2025)
- LongSpeech: A Scalable Benchmark for Transcription, Translation and Understanding in Long Speech (2026)
- SpeechQualityLLM: LLM-Based Multimodal Assessment of Speech Quality (2025)