10/17/2006

Language Disorders, Modularity, and Domain-General Mechanisms

Yesterday I discussed how domain-general mechanisms can explain several features of language acquisition, including phonology and some aspects of grammar. However, developmental disorders of language pose a slightly stronger challenge to domain-general theories of language.

Perhaps the strongest argument for a specialized grammar mechanism comes from grammatical specific language impairment (G-SLI), a condition in which a selective grammar deficit occurs alongside mutations in a single gene, while nonverbal, auditory, and articulation abilities remain intact (van der Lely, 2005). Children with G-SLI are specifically impaired at forming the past tense and the passive, and unlike typically developing children they show no advantage for regular forms. These problems are stable within individuals over time, and consistent across individuals who speak different languages.

Other forms of SLI may result from general auditory processing deficits, but members of the G-SLI subpopulation do not consistently share any deficits except those that define the disorder. This poses a problem for domain-general approaches to language (although see this account of multiple causality in language disorders). But until more tests of auditory and “exception” processing have been performed on this recently defined (and possibly heterogeneous) subpopulation, it seems likely that G-SLI, too, will be shown to result from the failure of one or more domain-general mechanisms.

This optimism is partly justified by the success of domain-general approaches in accounting for other language disorders. For example, connectionist networks are capable of simulating surface and deep dyslexia (Christiansen & Chater, 2001) without recourse to specialized computational mechanisms – instead, these models rely on the same basic components and learning algorithms used in simulations of a variety of other domains. Although such models typically include independent layers for semantics and phonology, this should not be taken as a strong theoretical claim: the end-state of learning may come to resemble modularity, but the learning process itself can still rely on homogeneous, domain-general mechanisms (Colunga & Smith, 2005).

One example of such apparent modularity comes from reports that selective deficits in semantic knowledge can occur for living and non-living things (Thompson-Schill et al., 1999). Yet this evidence does not necessarily support the idea that semantic representations are organized according to the “domains” of living and non-living things. For example, one might observe greater loss of knowledge about living things in a patient who sustains damage to visual areas, since visual information is more diagnostic of living than non-living things (i.e., non-living categories tend to be defined more by “purpose” than “appearance,” whereas categories of living things tend to be defined in the opposite way). Therefore, the appearance of modularity may actually reflect organization by modality rather than organization by domain.

Likewise, Broca’s and Wernicke’s aphasia also appear to reflect damage to language-specific regions. However, closer inspection of the neuroimaging and neuropsychological data suggests that a variety of regions are involved in language processing, in both semantics and grammar (Martin, 2003). Furthermore, Broca’s and Wernicke’s aphasics each manifest heterogeneous behavioral impairments, as one would expect if the damaged regions were involved in multiple domains of processing. A sensory-distributed view of Broca’s and Wernicke’s aphasia thus seems more compatible with the available data.

Another powerful demonstration of how domain-general mechanisms can explain semantic knowledge is Latent Semantic Analysis (LSA). LSA is a mathematical model of word meaning that passed the synonym portion of the TOEFL at a level sufficient for admission to many major universities (Landauer & Dumais, 1997). Although LSA is not a connectionist model, it is closely related in at least two ways: first, LSA is equivalent to a large three-layer connectionist network; second, LSA’s singular value decomposition algorithm is closely related to principal components analysis, and by extension, Hebbian learning (p. 122 of O’Reilly & Munakata, 2000). If a domain-general approach such as LSA demonstrates human-competitive performance, then why posit a domain-specific mechanism?
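To make the mechanism concrete, here is a minimal sketch of the LSA pipeline in Python. The toy corpus and counts below are my own invention, not data from Landauer & Dumais (who trained on a large encyclopedia corpus and applied log-entropy weighting before the SVD); the sketch only illustrates the core idea of building a word-by-document count matrix, reducing it with SVD, and comparing words by the cosine of their vectors in the reduced space.

```python
import numpy as np

# Toy word-by-document count matrix (invented for illustration).
words = ["doctor", "nurse", "hospital", "wheat", "farm", "harvest"]
counts = np.array([
    [4, 0, 0, 0],   # doctor   -- appears only in document 1
    [0, 4, 0, 1],   # nurse    -- never co-occurs with "doctor"
    [3, 3, 0, 0],   # hospital -- bridges documents 1 and 2
    [0, 0, 4, 0],   # wheat
    [0, 0, 3, 4],   # farm
    [0, 1, 0, 3],   # harvest
], dtype=float)

# Truncated singular value decomposition: the dimensionality reduction is
# what lets LSA infer similarity beyond raw co-occurrence.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
word_vectors = U[:, :k] * s[:k]   # each row: a word's coordinates in "semantic space"

def similarity(w1, w2):
    """Cosine similarity between two words' reduced-dimension vectors."""
    a, b = word_vectors[words.index(w1)], word_vectors[words.index(w2)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "doctor" and "nurse" share no document, yet the SVD places them closer
# together than "doctor" and "wheat", because "hospital" links their contexts.
print(similarity("doctor", "nurse") > similarity("doctor", "wheat"))
```

The key step is the truncation to k dimensions: in the reduced space, words from the same topic cluster together even when they never appear in the same document, which is how LSA can answer synonym questions about word pairs it has never directly seen paired.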

One possible answer is that LSA is non-referential; by this logic, LSA’s apparent knowledge of word meaning is a kind of “statistical mirage,” because real semantic knowledge requires being able to identify objects in the environment. Such real-world referential knowledge is sometimes thought to require specialized mechanisms in order to overcome the Gavagai problem – Quine’s puzzle of how a learner can know which aspect of a scene a novel word picks out. For example, one such proposed mechanism is word-learning biases (e.g., Smith et al., 2002).

However, these “biases” may not be specific to word learning. For example, some have argued that “uncertainty reduction” is a function common to statistical learning processes (Gomez, 2002), and one that may underlie learning in multiple domains. Biases may appear only because their use maximizes the reduction of uncertainty, and such “maximal error reduction” may be equivalent in some ways to the gradient descent algorithms featured in many connectionist models. Apparent “word-learning” biases may therefore be the result of maximal error reduction through more general statistical learning processes.
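As a toy illustration of that last point (my own sketch, not a model from any of the papers cited), here is error-driven learning in its simplest form: a single associative weight nudged, trial by trial, in the direction that most reduces squared prediction error. Nothing in the update rule is word-specific, yet the weight converges on the statistical structure of the input:

```python
import random

random.seed(0)          # deterministic toy run
target = 0.8            # true probability that a cue predicts an outcome
w = 0.0                 # learned association strength
rate = 0.1              # learning rate (step size of the gradient descent)

for _ in range(2000):
    # Sample one trial: the outcome occurs with probability `target`.
    outcome = 1.0 if random.random() < target else 0.0
    error = outcome - w
    # Delta-rule update: a gradient-descent step on squared error,
    # since d/dw (outcome - w)**2 = -2 * error.
    w += rate * error

# After training, w hovers near the true outcome probability (~0.8).
print(round(w, 2))
```

The connection to the “biases” discussed above: a learner driven only by error reduction will look biased whenever some cues reliably reduce more uncertainty than others, with no need for a dedicated word-learning mechanism.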

In conclusion, much evidence interpreted to support language-specific mechanisms may actually result from domain-general processes. As reviewed yesterday, characteristics of general-purpose auditory processing explain several aspects of language acquisition, in particular phonology. Likewise, priming effects on grammaticality suggest that grammar is deeply related to a diverse array of other cognitive processes that have also shown priming. There is reason to think that recursive or combinatorial operations are important both for other aspects of cognition and for behavior in non-human species. Disorders of language, both developmental and acquired, may reflect modality- as opposed to domain-specificity. And finally, semantic learning shares remarkable mechanistic similarities with other forms of cognition.

Perhaps the only “problem area” for such an account is the recently defined G-SLI disorder, but more research is needed before G-SLI can be considered strong evidence for either perspective.

Therefore, no unequivocal evidence from any of these domains suggests specialized mechanisms must exist to account for language; instead, language appears to emerge as an interaction of powerful but domain-general mechanisms.

References:

Christiansen MH, & Chater N. (2001). Connectionist psycholinguistics: capturing the empirical data. Trends Cogn Sci. 5(2):82-88.

Colunga, E., & Smith, L. B. (2005). From the Lexicon to Expectations About Kinds: A Role for Associative Learning. Psychological Review, Vol. 112, No. 2.

Gomez RL. (2002). Variability and detection of invariant structure. Psychol Sci. 13(5):431-6.

Hutzler F, Ziegler JC, Perry C, Wimmer H, & Zorzi M. (2004). Do current connectionist learning models account for reading development in different languages? Cognition. 91(3):273-96.

Landauer, T. K. & Dumais, S. T. (1997) A solution to Plato’s problem: The Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge. Psychological Review 104:211–40.

Martin RC. (2003). Language processing: functional organization and neuroanatomical basis. Annu Rev Psychol. 54:55-89.

O'Reilly, R.C. & Munakata, Y. (2000) Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain, MIT Press.

Premack D. (2004). Psychology. Is language the key to human intelligence? Science. 303(5656):318-20.

Smith LB, Jones SS, Landau B, Gershkoff-Stowe L, & Samuelson L. (2002). Object name learning provides on-the-job training for attention. Psychol Sci. 13(1):13-9.

Thompson-Schill SL, Aguirre GK, D'Esposito M, & Farah MJ. (1999). A neural basis for category and modality specificity of semantic knowledge. Neuropsychologia. 37(6):671-6.

van der Lely HK. (2005). Domain-specific cognitive systems: insight from Grammatical-SLI. Trends Cogn Sci. 9(2):53-9.



3 Comments:

Anonymous Anonymous said...

Wow, you covered a lot of territory here. What if G-SLI could be attributed to a cognitive deficit related to perception/coding of time, rather than seen just as a consistent language error? After all, many SLI folks can learn through intense repetition to parse the components of sounds to define the boundaries of phonemes more accurately. And they increase their auditory processing skills dramatically. What if that kind of training for visualizing time (past tense) or the agent relationship (passive) and then associating the correct verb categories could help? That would assume that it's not language specific and in fact domain-related.

10/18/2006 11:31:00 AM  
Blogger Chris Chatham said...

Hi again Sheryle - I think it's very possible that G-SLI is related to more domain-general problems, perhaps in time perception as you note. I have also been informed that, oddly, there appear to be non-human equivalents of G-SLI in songbirds. So that again would suggest that G-SLI is not due to a language-specific impairment, since birds clearly don't have language.

10/18/2006 12:24:00 PM  
