A Multi-Channel/Multi-Speaker Articulatory Database for Continuous Speech Recognition Research.
dc.contributor.author | Wrench, Alan A. | |
dc.date.accessioned | 2018-06-29T15:51:35Z | |
dc.date.available | 2018-06-29T15:51:35Z | |
dc.date.issued | 2000 | |
dc.description.abstract | The goal of this research is to improve the performance of a speaker-independent Automatic Speech Recognition (ASR) system by using directly measured articulatory parameters in the training phase. This paper examines the need for a multi-channel/multi-speaker articulatory database and describes the design of such a database and the processes involved in its creation. | |
dc.description.eprintid | 2489 | |
dc.description.faculty | casl | |
dc.description.ispublished | pub | |
dc.description.status | pub | |
dc.description.volume | 5 | |
dc.format.extent | 1-13 | |
dc.identifier | ER2489 | |
dc.identifier.citation | Wrench, A. (2000) A Multi-Channel/Multi-Speaker Articulatory Database for Continuous Speech Recognition Research. Phonus, vol. 5, pp. 1-13. | |
dc.identifier.uri | https://eresearch.qmu.ac.uk/handle/20.500.12289/2489 | |
dc.relation.ispartof | Phonus. | |
dc.title | A Multi-Channel/Multi-Speaker Articulatory Database for Continuous Speech Recognition Research. | |
dc.type | article | |
dcterms.accessRights | public | |
qmu.centre | CASL | en |
rioxxterms.type | article |