Electronic Theses and Dissertations


Distributed Listening in Automatic Speech Recognition


Metadata Field             Value                                                     Language
dc.contributor.advisor     Gilbert, Juan
dc.contributor.author      McMillian, Yolanda
dc.date.accessioned        2010-06-30T13:35:49Z
dc.date.available          2010-06-30T13:35:49Z
dc.date.issued             2010-06-30T13:35:49Z
dc.identifier.uri          http://hdl.handle.net/10415/2192
dc.description.abstract    While speech recognition systems have come a long way in the last forty years, there is still room for improvement. Although readily available, these systems are sometimes inaccurate and insufficient. The research presented here outlines a technique called Distributed Listening, which demonstrates noticeable improvements over existing speech recognition methods. The Distributed Listening architecture introduces the idea of multiple, parallel, yet physically separate automatic speech recognizers called listeners. Distributed Listening also uses a piece of middleware, called an interpreter, which resolves the multiple interpretations using a phrase resolution algorithm. The subsequent experiments show that these components work together to increase the accuracy of transcriptions of spoken utterances, and that Distributed Listening is, at worst, as good as the best individual listener.    en
dc.rights                  EMBARGO_NOT_AUBURN                                        en
dc.subject                 Computer Science                                          en
dc.title                   Distributed Listening in Automatic Speech Recognition     en
dc.type                    dissertation                                              en
dc.embargo.length          NO_RESTRICTION                                            en_US
dc.embargo.status          NOT_EMBARGOED                                             en_US
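
The abstract above describes a listener/interpreter architecture that can be made concrete. The sketch below is a minimal, hypothetical illustration of that split: several parallel listeners each return a transcription hypothesis, and an interpreter merges them. The merge rule (position-wise, confidence-weighted voting) is an illustrative stand-in, not the dissertation's phrase resolution algorithm, which this record does not detail; the Hypothesis class, interpret function, and confidence values are all assumptions made for the example.

# Hypothetical sketch of the Distributed Listening pipeline from the abstract.
# Assumes equal-length hypotheses for brevity; a real interpreter would first
# align hypotheses of differing lengths (e.g., via dynamic-programming
# alignment as in ROVER-style system combination).
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Hypothesis:
    words: list[str]     # word sequence recognized by one listener
    confidence: float    # that listener's overall confidence in [0, 1]

def interpret(hypotheses: list[Hypothesis]) -> list[str]:
    """Resolve multiple listener hypotheses into a single transcription."""
    length = max(len(h.words) for h in hypotheses)
    resolved = []
    for i in range(length):
        votes: defaultdict[str, float] = defaultdict(float)
        for h in hypotheses:
            if i < len(h.words):
                votes[h.words[i]] += h.confidence   # weight vote by confidence
        resolved.append(max(votes, key=votes.get))  # keep best-supported word
    return resolved

# Example: three physically separate listeners hear the same utterance.
listeners_output = [
    Hypothesis("recognize speech with distributed listening".split(), 0.9),
    Hypothesis("recognize beach with distributed listening".split(), 0.4),
    Hypothesis("recognize speech with distributed listing".split(), 0.7),
]
print(" ".join(interpret(listeners_output)))
# -> "recognize speech with distributed listening"

Because the interpreter sits outside any single recognizer as middleware, a combination rule of this general shape can only add evidence on top of the strongest hypothesis, which is consistent with the abstract's claim that Distributed Listening is, at worst, as good as the best individual listener.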
