
VOICED AND UNVOICED SEPARATION IN SPEECH AUDITORY BRAINSTEM RESPONSES OF HUMAN SUBJECTS USING ZERO CROSSING RATE (ZCR) AND ENERGY OF THE SPEECH SIGNAL

Journal: International Journal of Engineering Sciences & Research Technology (IJESRT) (Vol.6, No. 9)

Publication Date:

Authors :

Page : 370-380

Keywords : ABR; ZCR; EEG; Auditory Evoked Potentials; Voiced; Unvoiced; Audiology;


Abstract

In speech signals, two activities, voiced and unvoiced, are prominently observed, both in "production speech" from the mouth and in "hearing speech" passed through the ears, brainstem, and brain. Speech signals are broadly categorized into these two regions: voiced regions are nearly periodic in nature, while unvoiced regions resemble random noise. For many speech applications it is important to distinguish between voiced and unvoiced speech. We collected speech Auditory Brainstem Responses (ABR) from healthy human subjects for a consonant-vowel stimulus, using single-electrode EEG, yielding brainstem speech evoked potential data containing combined voiced and unvoiced regions. For this speech ABR we propose two simple and effective approaches to detect the voiced and unvoiced regions in the EEG data: the first is the Zero Crossing Rate (ZCR), and the second is the Short-Time Energy (STE). We collected real-time data from 20 different healthy human subjects in an audiology lab at the University of Ottawa, at a sampling frequency of 3202 Hz (3.202 kHz). For this research article, we performed the voiced/unvoiced separation experiment on the Auditory Evoked Potential (AEP) data of 2 human subjects. We observed that, even for speech Auditory Brainstem Responses, the combined ZCR and STE algorithm is a very good solution for separating the voiced and unvoiced parts.
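To illustrate the general ZCR/STE technique named in the abstract (not the authors' exact implementation), both features can be computed frame by frame and combined with simple thresholds: voiced frames tend to have low ZCR and high energy, unvoiced frames the opposite. A minimal sketch in Python/NumPy; the frame length, hop size, and threshold values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames (rows of the result)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def zero_crossing_rate(frames):
    """Fraction of adjacent-sample pairs in each frame whose sign differs."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def short_time_energy(frames):
    """Mean squared amplitude of each frame."""
    return np.mean(frames ** 2, axis=1)

def classify_voiced(x, frame_len=64, hop=32, zcr_thresh=0.25, energy_thresh=None):
    """Label each frame True (voiced) or False (unvoiced).

    Voiced speech is nearly periodic: low ZCR, high energy.
    Unvoiced speech is noise-like: high ZCR, low energy.
    Thresholds here are illustrative; real data needs calibration.
    """
    frames = frame_signal(x, frame_len, hop)
    zcr = zero_crossing_rate(frames)
    ste = short_time_energy(frames)
    if energy_thresh is None:
        # Assumption: use half the mean frame energy as a data-driven cutoff.
        energy_thresh = 0.5 * ste.mean()
    return (zcr < zcr_thresh) & (ste > energy_thresh)

# Synthetic check at the paper's stated sampling rate of 3202 Hz:
# a low-frequency sine stands in for a voiced segment, white noise for unvoiced.
fs = 3202
t = np.arange(fs) / fs
voiced_like = np.sin(2 * np.pi * 100 * t)                      # nearly periodic
unvoiced_like = 0.3 * np.random.default_rng(0).standard_normal(fs)  # noise-like

labels_v = classify_voiced(voiced_like)    # mostly True
labels_u = classify_voiced(unvoiced_like)  # mostly False
```

The design choice of combining both features matters: energy alone misclassifies loud fricatives, and ZCR alone misclassifies quiet voiced segments, which is presumably why the abstract reports the combined algorithm working well.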

Last modified: 2017-09-27 19:27:00