We combined magnetoencephalography (MEG) with magnetic resonance imaging and electrocorticography to separate, in anatomy and latency, 2 fundamental stages underlying speech comprehension. The first, acoustic-phonetic, stage is selective for words relative to control stimuli individually matched on acoustic properties. It begins ∼60 ms after stimulus onset and is localized to middle superior temporal cortex. It was replicated in another experiment, but is strongly dissociated from the response to tones in the same subjects. Within the same task, semantic priming of the same words by a related picture modulates cortical processing in a broader network, but this does not begin until ∼217 ms. The earlier onset of acoustic-phonetic processing compared with lexico-semantic modulation was significant in each individual subject. The MEG source estimates were confirmed with intracranial local field potential and high gamma power responses acquired in 2 additional subjects performing the same task. These recordings further identified sites within superior temporal cortex that responded only to the acoustic-phonetic contrast at short latencies, or only to the lexico-semantic contrast at long latencies. The independence of the early acoustic-phonetic response from semantic context suggests a limited role for lexical feedback in early speech perception.

Keywords: ECoG, MEG, N400, speech processing

Introduction

Speech perception can logically be divided into successive stages that convert the acoustic input into a meaningful word. Traditional accounts distinguish several stages: initial acoustic (nonlinguistic), phonetic (linguistic featural), phonemic (language-specific segments) and, finally, word recognition (Frauenfelder and Tyler 1987; Indefrey and Levelt 2004; Samuel 2011). The translation of an acoustic stimulus from a sensory-based, nonlinguistic signal into a linguistically relevant code presumably requires a neural mechanism that selects and encodes word-like features from the acoustic input. Once the stimulus is in this pre-lexical form, it can be sent to higher level brain areas for word recognition and meaning integration.

While these stages are generally acknowledged in neurocognitive models of speech perception, there is much disagreement regarding the role of higher level lexico-semantic information during early word-form encoding stages. Inspired by behavioral evidence for effects of the lexico-semantic context on phoneme identification (Samuel 2011), some neurocognitive theories of speech perception posit lexico-semantic feedback to at least the phonemic stage (McClelland and Elman 1986). However, others can account for these phenomena with a flow of information that is exclusively bottom-up (Marslen-Wilson 1987; Norris et al.). These models (and the behavioral data supporting them) provide important testable hypotheses to determine whether top-down effects occur during early word identification or late post-lexical processing. To date, neural evidence for or against feedback processes in speech perception has been lacking, partly because hemodynamic measures such as positron emission tomography and functional magnetic resonance imaging do not have the resolution to separate them temporally, and furthermore they find that all of these processes activate overlapping (but not identical) cortical locations (Price 2010). Temporal resolution combined with sufficient spatial localization thus provides essential additional information for untangling the dynamic interaction of the different processes contributing to speech understanding, as well as defining the role of feedback from later to earlier stages (Fig. 1B).

Fig. 1. (A) Trials present words (preceded by a congruous or incongruous picture visible for the duration of the trial) or a matched noise control sound. (B) Comparison of noise and word trials reveals acoustic-phonetic processing; comparison of congruous and incongruous trials reveals modulation of lexico-semantic processing.