Browsing by Person "Nakai, Satsuki"
Now showing 1 - 13 of 13
Item: A prerequisite to L1 homophone effects in L2 spoken-word recognition (2015-01). Nakai, Satsuki; Lindsay, S.; Ota, M.
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate other L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation of kettle, as L1 Dutch speakers perceptually map the vowels of the two English words to a single vowel phoneme in their L1. In an auditory word-learning experiment with Greek and Japanese speakers of English, we asked whether such cross-lexical activation in L2 spoken-word recognition necessarily involves inaccurate perception by the L2 listeners, or can also arise from interference from L1 phonology at an abstract level, independent of the listeners' phonetic processing abilities. Results suggest that spurious activation of L2 words containing L2-specific contrasts in spoken-word recognition is contingent on the L2 listeners' inadequate phonetic processing abilities.

Item: An explanation for phonological word-final vowel shortening: Evidence from Tokyo Japanese (2013-10). Nakai, Satsuki
This paper offers an account of the cross-linguistic prevalence of phonological word-final vowel shortening, in the face of phonetic final lengthening, which is also commonly observed across languages. Two contributing factors are hypothesized: (1) an overlap in the durational distributions of short and long vowel phonemes across positions in the utterance can lead to the misidentification of phonemic vowel length, and (2) the direction of bias in such misidentification is determined by the distributional properties of the short and long vowel phonemes in the region of the durational overlap.
Because short vowel phonemes are typically more frequent in occurrence and less variable in duration than long vowel phonemes, long vowel phonemes are more likely to be misidentified than short vowel phonemes. Results of production and perception studies in Tokyo Japanese support these hypotheses.

Item: Dynamic Dialects: an articulatory web resource for the study of accents [website] (University of Glasgow, 2015-04-01). Lawson, Eleanor; Stuart-Smith, Jane; Scobbie, James M.; Nakai, Satsuki; Beavan, David; Edmonds, Fiona; Edmonds, Iain; Turk, Alice; Timmins, Claire; Beck, Janet M.; Esling, John; Leplatre, Gregory; Cowen, Steve; Barras, Will; Durham, Mercedes
Dynamic Dialects (www.dynamicdialects.ac.uk) is an accent database containing an articulatory video-based corpus of speech samples from worldwide accents of English. Videos in this corpus contain synchronised audio, ultrasound-tongue-imaging video and video of the moving lips. We are continuing to augment this resource. Dynamic Dialects is the product of a collaboration between researchers at the University of Glasgow, Queen Margaret University Edinburgh, University College London and Napier University, Edinburgh. For modelled International Phonetic Association speech samples produced by trained phoneticians, please go to the sister site http://www.SeeingSpeech.ac.uk

Item: F1/F2 targets for Finnish single vs. double vowels (University of Glasgow: Glasgow, 2015-08-10). Nakai, Satsuki; Suomi, K.; Wrench, Alan A.
This paper explores why Finnish single (short) vowels tend to occupy less peripheral positions in the F1/F2 vowel space than their double (long) counterparts.
The results of two production studies suggest that the less extreme vowel quality of single vowels is best described as arising from undershoot of articulatory/acoustic targets due to their short durations, assuming single, context-free targets for phonemes.

Item: Helping children learn non-native articulations: The implications for ultrasound-based clinical intervention (International Phonetic Association, 2015-08-15). Cleland, Joanne; Scobbie, James M.; Nakai, Satsuki; Wrench, Alan A.
An increasing number of studies have examined the effectiveness of ultrasound as a visual biofeedback device for speech production training or therapy; however, no randomised control trials exist. We compared the success of typically developing children learning new articulations with and without ultrasound biofeedback. Thirty children aged 6-12 were randomly assigned to two groups: Group U were taught novel (non-English) consonants and vowels using ultrasound in addition to imitation, modelling, articulatory descriptions and feedback on performance. Group A were taught the same speech sounds, using the same methods but in the absence of ultrasound visual biofeedback. Results showed that both groups of children improved in their production of the novel sounds, with the exception of the high back vowels [u,]. No advantage for Group U was found, except for the palatal stop [c].

Item: LAURENCE LABRUNE, The phonology of Japanese. Oxford: Oxford University Press, 2012. Pp. xiii + 296. ISBN: 9780199545834 (Cambridge University Press, 2014-04). Nakai, Satsuki

Item: On the perceived quantity of young children's speech segments (Pacini Editore, 2014). Nakai, Satsuki; Kunnari, S.; Celata, C.; Costamagna, L.
This chapter considers why young children's speech segments are often perceived by adults as geminates, in light of two studies on the perception of phonological quantity in Finnish and Japanese.
Study 1 used stimulus continua created from the nonword keke, which varied orthogonally in the word-medial stop's absolute (raw) duration and in its durational ratios to the neighbouring vowels. For both Finnish and Japanese, the adults' perception of the phonological quantity of the word-medial stop was jointly affected by the two manipulated factors: the longer its absolute duration, the more likely the word-medial stop was to be perceived as a geminate, for any given set of durational ratios between the stop and the neighbouring vowels. Study 2 found the same effects in native-speaker adults' perception of Finnish and Japanese children's early words: the adults often judged the word-medial stop in the children's attempts at disyllabic words to be a geminate if it had a long absolute duration, even if its duration relative to the neighbouring vowels was short. We suggest that young children's slow articulation rate makes their speech segments prone to being perceived as geminates by adults.

Item: Onset vs. Coda Asymmetry in the Articulation of English /r/ (International Phonetic Association, 2015-08-15). Scobbie, James M.; Lawson, Eleanor; Nakai, Satsuki; Cleland, Joanne; Stuart-Smith, Jane
We describe an asymmetric categorical pattern of onset-coda allophony for English /r/, the post-alveolar rhotic approximant, drawing on published and unpublished information on over 100 child, teenage and adult speakers from prior studies. Around two thirds of the speakers exhibited allophonic variation that was subtle: onset and coda /r/ were typically both bunched (BB) or both tip-raised (RR), with minor within-speaker differences. The other third had a more radical categorical allophonic pattern, using both R and B types. Such variable speakers had R onsets and B codas (RB); the opposite pattern of allophony (BR) was extremely rare.
This raises the question of whether the asymmetry is accidental or motivated by models of the phonetic implementation of syllable structure.

Item: Recording speech articulation in dialogue: Evaluating a synchronized double Electromagnetic Articulography setup (Elsevier, 2013-08-28). Geng, Christian C.; Turk, Alice; Scobbie, James M.; Macmartin, Cedric; Hoole, Philip; Richmond, Korin; Wrench, Alan A.; Pouplier, Marianne; Bard, Ellen Gurman; Campbell, Ziggy; Dickie, Catherine; Dubourg, Eddie; Hardcastle, William J.; Kainada, Evia; King, Simon; Lickley, Robin; Nakai, Satsuki; Renals, Steve; White, Kevin; Wiegand, Ronny; EPSRC
We demonstrate the workability of an experimental facility geared towards the acquisition of articulatory data from a variety of speech styles common in language use, by means of two synchronized electromagnetic articulography (EMA) devices. This approach combines the advantages of real dialogue settings for speech research with a detailed description of the physiological reality of speech production. We describe the facility's method for acquiring synchronized audio streams from two speakers and the system that enables communication among control-room technicians, experimenters and participants. Further, we demonstrate the feasibility of the approach by evaluating two problems inherent to this specific setup: the first is the accuracy of temporal synchronization of the two EMA machines; the second is the severity of electromagnetic interference between the two machines. Our results suggest that the synchronization method used yields an accuracy of approximately 1 ms. Electromagnetic interference was derived from the complex-valued signal amplitudes. This dependent variable was analyzed as a function of the recording status (on/off) of the interfering machine's transmitters. The inter-machine distance was varied between 1 m and 8.5 m.
Results suggest that a distance of approximately 6.5 m is appropriate to achieve data quality comparable to that of single-speaker recordings.

Item: Seeing Speech: an articulatory web resource for the study of phonetics [website] (University of Glasgow, 2015-04-01). Lawson, Eleanor; Stuart-Smith, Jane; Scobbie, James M.; Nakai, Satsuki; Beavan, David; Edmonds, Fiona; Edmonds, Iain; Turk, Alice; Timmins, Claire; Beck, Janet M.; Esling, John; Leplatre, Gregory; Cowen, Steve; Barras, Will; Durham, Mercedes
Seeing Speech (www.seeingspeech.ac.uk) is a web-based audiovisual resource which provides teachers and students of practical phonetics with ultrasound tongue imaging (UTI) video of speech, magnetic resonance imaging (MRI) video of speech, and 2D midsagittal head animations based on MRI and UTI data. The model speakers are Dr Janet Beck of Queen Margaret University (Scotland) and Dr John Esling of the University of Victoria (Canada). The first phase of this resource began in July 2011 and was completed in September 2013. Further funding was obtained in 2014 to improve and augment the resource (this version) and to develop its sister site, Dynamic Dialects. The website contains two main resources: an introduction to UTI and MRI vocal-tract imaging techniques, with information about the production of the articulatory animations; and clickable International Phonetic Association charts linking to UTI, MRI and animated speech-articulator videos. This online resource is a product of a collaboration between researchers at six Scottish universities (the University of Glasgow, Queen Margaret University, Napier University, the University of Strathclyde, the University of Edinburgh and the University of Aberdeen), as well as scholars from University College London and Cardiff University. For examples of various dialects of English, please go to the sister site http://www.dynamicdialects.ac.uk

Item: The influence of babbling patterns on the processing of speech (Elsevier Inc, 2013-12). DePaolis, R. A.; Vihman, M. M.; Nakai, Satsuki
This study compared the preferences of 27 British English-learning and 26 Welsh-learning infants for nonwords featuring consonants that occur with equal frequency in the input but that are produced either with equal frequency (Welsh) or with differing frequency (British English) in infant vocalizations. For the English infants, a significant difference in looking times was related to the extent of production of the nonword consonants. The Welsh infants, who showed no production preference for either consonant, exhibited no such influence of production patterns on their response to the nonwords. The results are consistent with a previous study suggesting that pre-linguistic babbling helps shape the processing of input speech, serving as an articulatory filter that selectively makes production patterns more salient in the input.

Item: The VOT category boundary in word-initial stops: Counter-evidence against rate normalization in English spontaneous speech (Ubiquity Press, 2016-10-07). Nakai, Satsuki; Scobbie, James M.
Some languages, such as many varieties of English, use short-lag vs. long-lag VOT to distinguish word- and syllable-initial voiced vs. voiceless stop phonemes. According to a popular view, the optimal category boundary location between the two types of stops moves towards larger values as articulation rate becomes slower (and speech segments longer), and listeners accordingly shift the perceptual VOT category boundary. According to an alternative view, listeners need not shift the category boundary with a change in articulation rate, because the same VOT category boundary location remains optimal across articulation rates in normal speech, although a shift in optimal boundary location can be induced in the laboratory by instructing speakers to use artificially extreme articulation rates.
In this paper we applied rate-independent VOT category boundaries to word-initial stop phonemes in spontaneous English speech data, and compared their effectiveness against that of Miller, Green and Reeves's (1986) rate-dependent VOT category boundary applied to laboratory speech. The classification accuracies of the two types of category boundary were comparable when factors other than articulation rate were controlled, suggesting that perceptual VOT category boundaries need not shift with a change in articulation rate under normal circumstances. Optimal VOT category boundary locations for homorganic word-initial stops did, however, differ considerably depending on the following vowel when boundary location was assumed to be affected by the relative frequency of voiced vs. voiceless categories in each vowel context.

Item: Viewing speech in action: Speech articulation videos in the public domain that demonstrate the sounds of the International Phonetic Alphabet (IPA) (Taylor & Francis, 2016-04-11). Nakai, Satsuki; Beavan, David; Lawson, Eleanor; Leplatre, Gregory; Scobbie, James M.; Stuart-Smith, Jane
In this article, we introduce recently released, publicly available resources which allow users to watch videos of hidden articulators (e.g. the tongue) during the production of various types of sounds found in the world's languages. The articulation videos in these resources are linked to a clickable International Phonetic Alphabet chart (International Phonetic Association. 1999. Handbook of the International Phonetic Association: A Guide to the Use of the International Phonetic Alphabet. Cambridge: Cambridge University Press), so that the user can study the articulations of different types of speech sounds systematically. We discuss the utility of these resources for teaching the pronunciation of contrastive sounds in a foreign language that are absent in the learner's native language.