Audio Understanding Experiment · DOI: 10.57967/hf/8154
Voice Sample · 26 Mar 2026 · 20m 54s

Evaluating Audio Understanding in Multimodal AI

A systematic experiment testing Gemini 3.1 Flash Lite's ability to analyse a 20-minute voice recording across 137 structured prompts spanning speaker analysis, emotion detection, audio engineering, forensic audio, demographics, and more.

49 completed evaluations · 13 categories tested · 137 total prompts · Model: Gemini 3.1 Flash Lite · Date: 26 March 2026 · Audio: FLAC mono, 24 kHz, 20m 54s

Methodology

A single freeform voice recording was made by Daniel Rosehill on a OnePlus Nord 3 5G phone in HQ mode (WAV at 44.1 kHz, converted to FLAC mono at 24 kHz). The recording is unscripted, covering topics from voice cloning and TTS technology to personal background and current events. The speaker was fatigued from disrupted sleep, which provided a natural test of the model's ability to detect vocal state.

137 test prompts were designed across 22 categories. Of these, 49 were implemented with full prompt text and executed against the audio using Gemini 3.1 Flash Lite via the Google Generative AI API; each prompt was run independently with the full audio file as context. The remaining 88 prompts are catalogued as suggested extensions.
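The per-prompt evaluation loop could be sketched roughly as follows. This is a hypothetical sketch, not the experiment's actual harness: the prompt texts and category names below are placeholders, and it assumes the `google-generativeai` Python package with an API key in the environment. The model name string is taken from the write-up.

```python
# Hypothetical sketch of the evaluation loop: each implemented prompt is
# run as an independent call with the full audio file as context.
import os

# Placeholder prompts; the real test set has 49 implemented prompts.
PROMPTS = {
    "speaker_analysis": ["Estimate the speaker's age range and accent."],
    "audio_engineering": ["Describe the recording environment and noise floor."],
}

def flatten_prompts(prompts_by_category):
    """Return (category, prompt) pairs so each prompt gets its own API call."""
    return [(cat, p) for cat, plist in prompts_by_category.items() for p in plist]

def run_all(audio_path):
    """Run every prompt against the audio. Requires GOOGLE_API_KEY; not called here."""
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-3.1-flash-lite")  # name as cited above
    audio = genai.upload_file(audio_path)  # upload once, reuse across prompts
    results = {}
    for category, prompt in flatten_prompts(PROMPTS):
        resp = model.generate_content([audio, prompt])  # full audio as context
        results.setdefault(category, []).append(resp.text)
    return results

# One (category, prompt) pair per independent evaluation:
print(flatten_prompts(PROMPTS))
```

Uploading the file once and reusing the handle across calls keeps each prompt independent without re-sending ~20 minutes of audio per request.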

The Voice Sample

The recording features a male speaker in his late 30s with an Irish accent (Cork origin) who has lived in Jerusalem for ~11 years. Voice type: bass/low baritone (median F0 ~110 Hz). Speaking rate: ~169 WPM. Recorded in an untreated room while pacing. A timestamped transcript (97.4% confidence, AssemblyAI) and a detailed acoustic analysis (pitch, formants, signal levels) are included in the dataset.
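The reported speaking rate is straightforward to sanity-check: WPM is just the transcript word count divided by the recording length in minutes. A minimal sketch, where the word count is an assumed illustrative figure rather than a value from the dataset:

```python
# Back-of-envelope check of a reported speaking rate (words per minute).

def words_per_minute(word_count, duration_seconds):
    """Speaking rate = words / minutes of audio."""
    return word_count / (duration_seconds / 60.0)

duration_s = 20 * 60 + 54   # the 20m 54s recording
assumed_words = 3532        # hypothetical transcript word count, for illustration
wpm = words_per_minute(assumed_words, duration_s)
print(round(wpm))  # prints 169 with these assumed figures
```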

Cite This Work

Rosehill, D. (2026). Audio Understanding Test Set. Hugging Face. https://doi.org/10.57967/hf/8154

Dataset: danielrosehill/Audio-Understanding-Test-Set · Source: GitHub · DOI: 10.57967/hf/8154