A systematic experiment testing Gemini 3.1 Flash Lite's ability to analyse a 20-minute voice recording across 137 structured prompts spanning speaker analysis, emotion detection, audio engineering, forensic audio, demographics, and more.
A single freeform voice recording was made by Daniel Rosehill on a OnePlus Nord 3 5G phone in HQ mode (WAV, 44.1 kHz, later converted to mono 24 kHz FLAC). The recording is unscripted, covering topics from voice cloning and TTS technology to personal background and current events. The speaker was fatigued from disrupted sleep, providing a natural test of the model's ability to detect vocal state.
137 test prompts were designed across 22 categories. Of these, 49 were implemented with full prompt text and executed against the audio using Gemini 3.1 Flash Lite via the Google Generative AI API; each prompt was run independently with the full audio file as context. The remaining 88 prompts are catalogued as suggested extensions.
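The run loop described above (load prompts from JSONL, pair each one independently with the full recording, send to the model) can be sketched as follows. This is a minimal sketch, not the project's actual harness: the JSONL field names (`category`, `prompt`) and file names are assumptions, and the Gemini API call itself is shown only as a comment since it requires an API key.

```python
import json
from typing import Iterator

def load_prompts(jsonl_path: str) -> Iterator[dict]:
    """Yield prompt records from a JSONL file (one JSON object per line)."""
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def build_request(prompt: dict, audio_ref: str) -> dict:
    """Pair one prompt with the audio reference; each prompt is run
    independently, with the full recording as its only context."""
    return {
        "category": prompt.get("category"),
        "contents": [prompt["prompt"], audio_ref],
    }

# Actual execution with the google-generativeai SDK would look roughly like:
#   import google.generativeai as genai
#   genai.configure(api_key=API_KEY)
#   audio = genai.upload_file("recording.flac")
#   model = genai.GenerativeModel(MODEL_NAME)
#   for p in load_prompts("prompts.jsonl"):
#       response = model.generate_content([p["prompt"], audio])
```

Keeping request construction separate from the API call makes the prompt-pairing logic testable offline.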
The recording features a male speaker in his late 30s with an Irish accent (Cork origin) who has lived in Jerusalem for ~11 years. Voice type: bass/low baritone (median F0 ~110 Hz). Speaking rate: ~169 WPM. The audio was recorded in an untreated room while the speaker paced. A timestamped transcript (97.4% confidence, AssemblyAI) and a detailed acoustic analysis (pitch, formants, signal levels) are included in the dataset.
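Summary figures like the ~169 WPM speaking rate and ~110 Hz median F0 come from straightforward computations over the transcript and pitch track. A minimal sketch (function names and input shapes are illustrative, not the dataset's actual analysis code; F0 trackers commonly mark unvoiced frames as 0.0, which must be excluded before taking the median):

```python
from statistics import median

def speaking_rate_wpm(word_count: int, duration_s: float) -> float:
    """Overall words per minute across the recording."""
    return word_count / (duration_s / 60.0)

def median_f0(f0_hz: list[float]) -> float:
    """Median fundamental frequency over voiced frames only;
    unvoiced frames (conventionally 0.0) are dropped first."""
    voiced = [f for f in f0_hz if f > 0]
    return median(voiced)

# e.g. a 20-minute recording containing 3,380 transcript words
# gives speaking_rate_wpm(3380, 1200) -> 169.0
```

The median (rather than the mean) is the usual summary for F0 because pitch-tracker octave errors produce outliers that skew a mean badly.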
Full waveform player with transcript and acoustic profile.
All prompt-output pairs organised by category with the original prompts.
Cross-cutting analysis of model capabilities, limitations, and safety boundaries.
JSONL prompts, results, audio files, transcript, and acoustic analysis on Hugging Face.
Dataset: danielrosehill/Audio-Understanding-Test-Set · Source: GitHub · DOI: 10.57967/hf/8154