Browsing by Person "Turk, Alice"
Now showing 1 - 8 of 8
Item: Coarticulation across morpheme boundaries: An ultrasound study of past-tense inflection in Scottish English (Elsevier, 2021-09-15)
Mousikou, Petroula; Strycharczuk, Patrycja; Turk, Alice; Scobbie, James M.
It has been hypothesized that morphologically-complex words are mentally stored in a decomposed form, often requiring online composition during processing. Morphologically-simple words, by contrast, can only be stored as wholes. The way a word is stored and retrieved is thought to influence its realization during speech production, so that when retrieval requires less time, the articulatory plan is executed faster. Faster articulatory execution could result in more coarticulation. Accordingly, we hypothesized that morphologically-simple words might be produced with more coarticulation than apparently homophonous morphologically-complex words, because the retrieval of monomorphemic forms is direct, whereas morphologically-complex forms might need to be composed online into full word forms. Using Ultrasound Tongue Imaging, we tested this hypothesis with nine speakers of Scottish English. Over two days of training, participants learned phonemically identical monomorphemic and morphologically-complex nonce words; on the third consecutive day, they produced them in two prosodic contexts. Two types of articulatory analyses revealed no systematic differences in coarticulation between monomorphemic and morphologically-complex items, although a few speakers idiosyncratically produced some morphological effects on articulation. Our work contributes to our understanding of how morphologically complex words are stored and processed during speech production.

Item: Development of cue weighting in children's speech perception (Temporal Integration in the Perception of Speech, 2002)
Mayo, Catherine; Turk, Alice; Watson, Jocelynne; Hawkins, S.; Nguyen, N.

Item: Dynamic Dialects: an articulatory web resource for the study of accents [website] (University of Glasgow, 2015-04-01)
Lawson, Eleanor; Stuart-Smith, Jane; Scobbie, James M.; Nakai, Satsuki; Beavan, David; Edmonds, Fiona; Edmonds, Iain; Turk, Alice; Timmins, Claire; Beck, Janet M.; Esling, John; Leplatre, Gregory; Cowen, Steve; Barras, Will; Durham, Mercedes
Dynamic Dialects (www.dynamicdialects.ac.uk) is an accent database containing an articulatory video-based corpus of speech samples from world-wide accents of English. Videos in this corpus contain synchronised audio, ultrasound-tongue-imaging video and video of the moving lips. We are continuing to augment this resource. Dynamic Dialects is the product of a collaboration between researchers at the University of Glasgow, Queen Margaret University Edinburgh, University College London and Napier University, Edinburgh. For modelled International Phonetic Association speech samples produced by trained phoneticians, please go to the sister site http://www.SeeingSpeech.ac.uk

Item: Morphemes, Phonetics and Lexical Items: The Case of the Scottish Vowel Length Rule (International Congress of Phonetic Sciences, 1999)
Scobbie, James M.; Turk, Alice; Hewlett, Nigel
We show that, in the Scottish Vowel Length Rule, the high vowels in the sequences /i#d/ and /u#d/ are 68% longer than in the tautomorphemic /id/ and /ud/ sequences, while /ai#d/ is only 28% longer than /aid/. There is no quality difference associated with /i/ and /u/, but long and short /ai/ do differ in quality. Spectral analysis of F1 and F2 trajectories indicates that the prime difference in the vowels due to the SVLR appears to be the timing of formant movements, not the location of the targets in formant space. In the longer vowel of sighed, the rise towards a high front position starts at about 75–100 ms into the vowel, whereas in the shorter vowel of side it is aligned nearer the start of the vowel. There are, moreover, genuine target differences which function as a marker of social class.

Item: Morphological effects on pronunciation (International Phonetic Association, 2015-08-15)
Mousikou, P.; Strycharczuk, Patrycja; Turk, Alice; Rastle, K.; Scobbie, James M.
Converging, albeit inconsistent, empirical evidence suggests that the morphological structure of a word influences its pronunciation. We investigated this issue using Ultrasound Tongue Imaging in the context of an experimental cognitive psychology paradigm. Scottish speakers were trained on apparently homophonous monomorphemic and bimorphemic novel words (e.g. zord, zorred) and tested on speech production tasks. Monomorphemic items were realised acoustically with shorter durations than bimorphemic items; however, this difference was not statistically significant. Progressive coarticulatory effects were also observed in the monomorphemic condition for some speakers. A dynamic analysis of the articulatory data revealed that the observed differences in the pronunciations of the two types of items could be due to factors other than morphological structure. Our results, albeit inconclusive, make a significant contribution to the literature in this research domain insofar as the presence or absence of morphological effects on pronunciation has important implications for extant theories of speech production.

Item: Recording speech articulation in dialogue: Evaluating a synchronized double Electromagnetic Articulography setup (Elsevier, 2013-08-28)
Geng, Christian C.; Turk, Alice; Scobbie, James M.; Macmartin, Cedric; Hoole, Philip; Richmond, Korin; Wrench, Alan A.; Pouplier, Marianne; Bard, Ellen Gurman; Campbell, Ziggy; Dickie, Catherine; Dubourg, Eddie; Hardcastle, William J.; Kainada, Evia; King, Simon; Lickley, Robin; Nakai, Satsuki; Renals, Steve; White, Kevin; Wiegand, Ronny; EPSRC
We demonstrate the workability of an experimental facility geared towards the acquisition of articulatory data from a variety of speech styles common in language use, by means of two synchronized electromagnetic articulography (EMA) devices. This approach combines the advantages of real dialogue settings for speech research with a detailed description of the physiological reality of speech production. We describe the facility's method for acquiring synchronized audio streams of two speakers and the system that enables communication among control-room technicians, experimenters and participants. Further, we demonstrate the feasibility of the approach by evaluating two problems inherent to this specific setup: the accuracy of temporal synchronization of the two EMA machines, and the severity of electromagnetic interference between them. Our results suggest that the synchronization method used yields an accuracy of approximately 1 ms. Electromagnetic interference was derived from the complex-valued signal amplitudes; this dependent variable was analyzed as a function of the recording status (on/off) of the interfering machine's transmitters, with the inter-machine distance varied between 1 m and 8.5 m. Results suggest that a distance of approximately 6.5 m is appropriate to achieve data quality comparable to that of single-speaker recordings.

Item: Seeing Speech: an articulatory web resource for the study of phonetics [website] (University of Glasgow, 2015-04-01)
Lawson, Eleanor; Stuart-Smith, Jane; Scobbie, James M.; Nakai, Satsuki; Beavan, David; Edmonds, Fiona; Edmonds, Iain; Turk, Alice; Timmins, Claire; Beck, Janet M.; Esling, John; Leplatre, Gregory; Cowen, Steve; Barras, Will; Durham, Mercedes
Seeing Speech (www.seeingspeech.ac.uk) is a web-based audiovisual resource which provides teachers and students of Practical Phonetics with ultrasound tongue imaging (UTI) video of speech, magnetic resonance imaging (MRI) video of speech, and 2D midsagittal head animations based on MRI and UTI data. The model speakers are Dr Janet Beck of Queen Margaret University (Scotland) and Dr John Esling of the University of Victoria (Canada). The first phase of this resource began in July 2011 and was completed in September 2013. Further funding was obtained in 2014 to improve and augment this resource (this version) and to develop its sister site Dynamic Dialects. The website contains two main resources: an introduction to UTI and MRI vocal tract imaging techniques, with information about the production of the articulatory animations; and clickable International Phonetic Association charts that link to UTI, MRI and animated speech-articulator video. This online resource is a product of collaboration between researchers at six Scottish universities (the University of Glasgow, Queen Margaret University, Napier University, the University of Strathclyde, the University of Edinburgh and the University of Aberdeen) as well as scholars from University College London and Cardiff University. For examples of various dialects of English, please go to the sister site http://www.dynamicdialects.ac.uk

Item: The Edinburgh Speech Production Facility DoubleTalk Corpus (International Speech Communication Association, 2013-08-25)
Scobbie, James M.; Turk, Alice; Geng, Christian; King, Simon; Lickley, Robin; Richmond, Korin
The DoubleTalk articulatory corpus was collected at the Edinburgh Speech Production Facility (ESPF) using two synchronized Carstens AG500 electromagnetic articulometers. The first release of the corpus comprises orthographic transcriptions aligned at phrasal level to EMA and audio data for each of six mixed-dialect speaker pairs. It is available from the ESPF online archive. A variety of tasks were used to elicit a wide range of speech styles, including monologue (a modified Comma Gets a Cure and spontaneous story-telling), structured spontaneous dialogue (Map Task and Diapix), a wordlist task, a memory-recall task, and a shadowing task. In this session we will demo the corpus with various examples.