Continuous speech recognition using articulatory data.

dc.contributor.author: Wrench, Alan A.
dc.contributor.author: Richmond, Korin
dc.date.accessioned: 2018-06-29T15:52:34Z
dc.date.available: 2018-06-29T15:52:34Z
dc.date.issued: 2000
dc.description.abstract: In this paper we show that there is measurable information in the articulatory system which can help to disambiguate the acoustic signal. We measure directly the movement of the lips, tongue, jaw, velum and larynx and parameterise this articulatory feature space using principal components analysis. The parameterisation is developed and evaluated using a speaker-dependent phone recognition task on a specially recorded TIMIT corpus of 460 sentences. The results show that there is useful supplementary information contained in the articulatory data which yields a small but significant improvement in phone recognition accuracy of 2%. However, preliminary attempts to estimate the articulatory data from the acoustic signal and use this to supplement the acoustic input have not yielded any significant improvement in phone accuracy.
dc.description.eprintid: 2490
dc.description.faculty: casl
dc.description.ispublished: pub
dc.description.status: pub
dc.format.extent: 145-148
dc.identifier: ER2490
dc.identifier.citation: Wrench, A. & Richmond, K. (2000) Continuous speech recognition using articulatory data. In: Proceedings of the International Conference on Spoken Language Processing (ICSLP2000 China). pp. 145-148.
dc.identifier.uri: https://eresearch.qmu.ac.uk/handle/20.500.12289/2490
dc.relation.ispartof: Proceedings of the International Conference on Spoken Language Processing (ICSLP2000 China)
dc.title: Continuous speech recognition using articulatory data.
dc.type: article
dcterms.accessRights: public
qmu.centre: CASL
rioxxterms.type: article
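
The abstract describes parameterising the articulatory feature space (lip, tongue, jaw, velum and larynx movements) with principal components analysis. A minimal sketch of that kind of PCA parameterisation is shown below; the channel count (14) and component count (6) are illustrative assumptions, not the paper's actual configuration, and the data here is synthetic rather than real articulograph recordings.

```python
import numpy as np

def pca_parameterise(X, n_components):
    """Project frames X (T x D) onto the top n_components principal axes.

    Returns the low-dimensional features plus the components and mean
    needed to transform further data the same way.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # Covariance across the D articulatory channels (D x D matrix).
    cov = np.cov(Xc, rowvar=False)
    # eigh returns eigenvalues in ascending order for symmetric matrices,
    # so reverse to take the directions of greatest variance first.
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:n_components]]
    return Xc @ components, components, mean

# Synthetic stand-in for articulatory data:
# 200 frames of 14 movement channels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 14))
features, components, mean = pca_parameterise(X, n_components=6)
```

In a recognition setup like the one the abstract evaluates, the resulting `features` would be appended to (or combined with) the acoustic feature vectors frame by frame before training the phone recogniser.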

Files

Name: Continuous_speech.pdf
Size: 54.47 KB
Format: Adobe Portable Document Format
