

Last Modified : Thu, 29 Jun 23

Auditory Models

A collection of demos, software, research, history, reflections, and speech data


Raspberry Pi information:


Dumb wonderful word stuff

xkcd 1007, xkcd 273, xkcd 730

Talk at Johns Hopkins on human phoneme recognition

The Role of the Cochlea in Human Speech Recognition, CLSP Seminar Series, Center for Speech and Language Processing, Johns Hopkins University, August 7, 2007. Jont Allen (UIUC) on phone recognition in hearing-impaired ears. Allen vimeo video
Extra: Summer workshop results: Modeling Mandarin with CASS (mp4)

Harvey Fletcher mp3 video (~28 mins long)

Harvey Fletcher video from c1963

Allen Video about modeling cochlear transduction (ca. 1997)


LDC isolated speech sounds

Video Demos of Cue-modified speech from Interspeech 2013

  1. Li, F. and Allen, J. B. (2011). "Manipulation of Consonants in Natural Speech," IEEE Trans. Audio, Speech and Language Processing (officially published July 2010; appeared March 2011), pp. 496-504. (pdf, Video-Demos)
    1. Interspeech 2013 Demo files: wav, mp4
    2. Interspeech 2013: All-files (talks, wav, video, demos) tgz

Interspeech 2013 Tutorial presentation

Video Demos of KunLun Software

Publications on Consonant manipulation

  • Support documentation that describes the basic speech perception research behind KunLun:
    1. Allen, J. B. and Li, F. (2009). "Speech perception and cochlear signal processing," IEEE Signal Processing Magazine (invited: Life Sciences), 26(4), pp. 73-77, July. (pdf, djvu)
    2. Li, F. and Allen, J. B. (2011). "Manipulation of Consonants in Natural Speech," IEEE Trans. Audio, Speech and Language Processing (officially published July 2010; appeared March 2011), pp. 496-504. (pdf)
    3. Li, F., Menon, A. and Allen, J. B. (2010). "A psychoacoustic method to find the perceptual cues of stop consonants in natural speech," J. Acoust. Soc. Am., April, pp. 2599-2610. (pdf)
    4. Li, F., Trevino, A., Menon, A. and Allen, J. B. (2012). "A psychoacoustic method for studying the necessary and sufficient perceptual cues of American English fricative consonants in noise," J. Acoust. Soc. Am., 132(4), October, pp. 2663-2675. (pdf)
  • AIgram source code (zip, txt). If you would like to download this code, ask me for the password.

Research Objectives and Summary of Speech Experiments

The research in the Human Speech Recognition group is directed at a fundamental understanding of speech perception in both normal-hearing (NH) and hearing-impaired (HI) ears. These are related problems; in fact they form a continuum, not two separate things. Most people are born with normal hearing, and within a few years we learn, without apparent effort, to understand human speech. How this happens is a mystery, but what happens is not. The research we have done over the past 10 years, documented in the section below, is a systematic study of how speech recognition fails under various conditions. Only by stressing the system, causing failure, can we hope to understand it. There are at least four levels of experimentation:

  1. The first level of experiments uses NH ears, with speech presented in noise.
  2. In the second level, filtering experiments, the speech is filtered before the noise is added.
  3. In the third series of experiments, the speech is truncated in time.
  4. Finally, small regions of the speech are modified by a few dB, or removed altogether.

Examples of such processing are given later on this page.
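The first-level manipulation, presenting speech in noise at a fixed signal-to-noise ratio, can be sketched in Python. This is a minimal illustration, not the lab's actual code; the function name `mix_at_snr` and the synthetic signals are hypothetical.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the noisy mixture (the level-1 manipulation above)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Target noise power satisfies p_speech / p_target = 10^(snr_db/10)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example with synthetic signals (a sine stands in for a speech token)
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
noisy = mix_at_snr(speech, noise, -2.0)  # -2 dB SNR, the level used in the ZE definition below
```

The same scaling applies whether the masker is white noise (WN) or speech-weighted noise (SWN); only the noise spectrum differs.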


We have found that speech perception is a discrete (binary) zero-error task (Singh and Allen, 2012). Working at the token level, we defined two groups: ZE and NZE. Zero-error (ZE) speech is speech that NH listeners never misidentify at or above -2 dB SNR. The non-ZE (NZE) sounds are all the rest. All of the CV speech sounds that we have tested contain many ZE tokens: most CV consonants consist of more than 80% ZE utterances.

The remaining 20% of the CVs may be broken down into medium-error (ME; 0% < error < 10%) and high-error (HE; error > 10%) groups. ME consonants are typically utterances mispronounced to varying degrees. HE consonants are typically heard as a different sound, with high probability (>20%). Based on the entropy across NH listeners, we view such sounds as mislabeled. These errors can typically be traced to a specific, easily identified flaw in the production of the sound.
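The ZE/ME/HE grouping above amounts to thresholding each token's NH error rate. A minimal sketch (the function `classify_token` is hypothetical, written only to restate the thresholds from the text):

```python
def classify_token(error_rate):
    """Group a consonant token by its NH error rate at/above -2 dB SNR,
    using the ZE / ME / HE thresholds described in the text."""
    if error_rate == 0.0:
        return "ZE"  # zero-error: never misheard by NH listeners
    elif error_rate < 0.10:
        return "ME"  # medium-error: 0% < error < 10%
    else:
        return "HE"  # high-error: >10%, often heard as a different consonant

# One token from each group
groups = [classify_token(e) for e in (0.0, 0.04, 0.25)]
```

In practice the error rate per token is estimated from many listener trials, so the ZE label means zero observed errors across the tested population.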

A chronological history of HSR papers

Summary of UIUC-HSR Experiments (Updated Mar 15, 2014)

| Year | Experiment | Students | Details; \(N_s\) = # subjects | Publications | .mat |
|------|------------|----------|-------------------------------|--------------|------|
| 2004 | MN64 (MN04SWN) | Phatak & Lovitt | Miller-Nicely in SWN with 4 vowels: f/a/ther, b/a/t, b/i/t, b/ee/t (not b/e/t), i.e., LaTeX tipa \textipa{@, \ae, E, i}; LDCbet: [a, xq, i, xi] ([a, Q, i, I]); \(V_{ldc}\) = /a, @, i, I/; \(N_s=18\) with 4 "bad subjects" | Phatak & Allen (2007) [PA07] pdf | MN64 |
| 2005 | Study | Allen, J. B. | Consonant recognition and the AI | JASA 117(4), pp. 2212-2223 (2005) pdf | |
| 2005 | MN16-R (MN05WN) | Phatak & Lovitt | Replicate MN04 (WN) | Phatak, Lovitt & Allen (2008) pdf | |
| 2005 | MN64R (MN05SWN) | Phatak & Lovitt | More MN64; 14 new subjects; SWN | Phatak, Lovitt & Allen (2008) pdf | MN64 |
| 2005 | HIMCL05 | Yoon & Phatak | CVs; 10 HI ears @MCL in WN | Phatak, Yoon, Gooler & Allen (2009) pdf | |
| 2006 | HINALR05 | Yoon | CVs; 10 HI ears; NALR @MCL in SWN | | |
| 2006 | Verification | Regnier | Modifications of /ta/ | Regnier & Allen (2008) pdf | |
| 2006 | CV06SWN | Phatak | \(C_{ldc}\) = /d,b,k,p,s,t,S,Z,z/, \(V_{ldc}\) = /o,E,u,R,Q,U,I,a/ | | cv06swn |
| 2006 | CV06WN | Regnier | 9C+8V WN /d, b, k, p, s, t, xs, xz, z/ | | cv06wn |
| 2007 | CV06 | Pan | Analysis of 9 vowels of CV06 | 2 unpublished MSs | |
| 2007 | HL07 | Li | High- and low-pass repeat of Fletcher | Li & Allen (2009) JASA pdf | |
| 2008 | TR07 | Li | Time truncation after Furui86 | Allen & Li (2009) ASSP Magazine pdf | |
| 2008 | TR08 | Li | Time truncation after Furui86? 3 vowels | ? | |
| 2009 | 3DDS | Li | 3DDS (i.e., MN64, HL07, TR07-8) | Li & Allen (2010) JASA pdf; Li & Allen (2010) IEEE TLSP; Li, Trevino & Allen (2012) JASA | |
| 2009 | Verification | Menon | Remove primary burst | | |
| 2009 | Verification | Abhinauv | Modify (\(\pm 6\) dB) + remove primary burst | Kapoor & Allen, 131(1), 2012 pdf | |
| 2009 | Verification | Cvengros | Modify burst + devoiced + voiced transition | | |
| 2009 | MN64(+R) | Singh | Full analysis of \(N_s=25\) of MN64+MN64R | JASA, April 2012 pdf | |
| 2010 | HIMCL10-I/III | Woojae Han | CVs; \(N_s=46\) HI ears with \(N_t=2\)/token/SNR | pdf | |
| 2010 | HI10NALR-II/IV | Woojae Han | CVs; \(N_s=17\) HI ears with \(N_t=10\)/token/SNR | pdf | |
| 2011 | HL11 | Trevino | High/low filter CVs of HI10 | | |
| 2013 | HI Exp2 Analysis | Trevino | Analysis of the individual variability of HI | Trevino & Allen pdf, pdf | |
| 2014 | MN64(+R) | Toscano & Allen | Extend Singh & Allen (2009) | pdf | |


Software of interest

Measurement systems

  • ARTA: measurement software to be used with your sound card; performance will vary.
  • QA400: inexpensive (≈ $200) USB box with Windows software, with a -140 dB noise floor and 110 dB dynamic range.

HSR Pictures (entertainment value only)

Historical Documents
