11/30/2006

Traumatic Brain Injury: Interventions and Treatment

Yesterday I discussed this article on theories and tests of executive function (EF) impairment in patients with traumatic brain injury (TBI). Cicerone, Levin, Malec, Stuss and Whyte also discuss some of the treatment options for patients with EF impairment.

The authors first distinguish between different "levels" of treatment. One may seek to directly improve executive function impairments, perhaps by pharmacological means. A second strategy is to provide devices that can be used to overcome or compensate for these impairments.

The next distinction is between the various levels of testing. One can attempt to measure underlying executive functions themselves (as in the Stroop or WCST paradigms, for example) or can instead measure "functional outcome" - i.e., a more naturalistic level of the patient's ultimate ability to function autonomously.

Cicerone et al. advocate the use of "crossover" experimental designs to investigate the effects of various treatment options. In these techniques, each subject is given either all or some of the available treatments; this serves to reduce variance between subjects and thus increase statistical power.
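
The variance-reduction logic behind crossover designs can be sketched in a toy simulation (all numbers and effect sizes here are hypothetical). Because each subject serves as their own control, the large between-subject spread in baseline ability drops out of the treatment comparison:

```python
import random
import statistics

random.seed(0)

# Hypothetical numbers: each subject has an idiosyncratic baseline ability,
# and the treatment adds a fixed benefit on top of it.
n_subjects = 50
baselines = [random.gauss(100, 15) for _ in range(n_subjects)]
control = [b + random.gauss(0, 5) for b in baselines]          # measurement noise only
treatment = [b + 10 + random.gauss(0, 5) for b in baselines]   # same subjects, +10 benefit

# Crossover (within-subject) analysis: subtracting each subject's own control
# score removes the between-subject baseline variance from the comparison.
within_diffs = [t - c for t, c in zip(treatment, control)]

between_sd = statistics.stdev(control)       # dominated by baseline spread (~15)
within_sd = statistics.stdev(within_diffs)   # only measurement noise remains (~7)
print(between_sd > within_sd)
```

A smaller standard deviation on the comparison of interest is exactly what "reduced variance and increased statistical power" means here: the same treatment effect stands out against far less noise.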

Unfortunately, current EF rehabilitation often focuses on compensatory techniques, and relies on the clinician to select the appropriate strategy for a given situation. Cicerone et al. note that a prominent feature of executive dysfunction is the failure to self-generate behavior, so in this case the rehabilitation program is simply not addressing this primary type of EF impairment! More ecologically-valid treatments are a particularly promising area of research.

Along these lines, "holistic" rehabilitation programs attempt to address a wide range of behavioral problems simultaneously, including problem-solving, behavioral and emotional regulation, working memory deficits, metacognitive functions (Cicerone et al. include planning, inhibition and self-monitoring under this rubric), and "activation." Each is covered in turn below.

Treatment for Problem-Solving Deficits

Cicerone et al. review several prospective, randomized studies of problem-solving interventions in patients with TBI or "cerebral insult." In one such study, an experimental group was taught to break down every problem into its constituent subgoals, including training in "problem orientation, problem definition and formulation, generation of alternatives, decision making and solution verification." This group demonstrated substantial gains in "awareness of cognitive deficits, goal-directed ideas, and problem-solving" as compared to a control group that underwent only "memory retraining."

Cicerone et al. also review the efficacy of "goal-management training," which involves training to "evaluate the current problem state ('What am I doing?') [...] specification of the relevant goals (the 'main task'), and partitioning of the problem-solving process into subgoals (the 'steps')" followed by training to improve retention of these subgoals and to monitor their outcomes. Assessment of this program's efficacy involved patient performance on naturalistic complex tasks (e.g., "room layout"); patients who had undergone 1 hour of goal-management training showed quicker completion times and fewer errors than a control group who had undergone motor skills training.

A third study reviewed by Cicerone et al. involved around 60 hours of "cognitive-behavioral training in problem-solving skills, a systematic process for analyzing real-life problems, and role-play of real-life examples of problem situations," whereas a control group was trained to "improve cognitive skills and support for coping with emotional reactions and changes after injury." Although both groups improved on some measures, only the experimental group improved on measures of ECF (although this training did not result in gains on more naturalistic measures of functional outcome, i.e., "community integration").

Working Memory Deficit Intervention

Cicerone et al. review the pharmacological use of bromocriptine (a D2 agonist) on patients with TBI-induced EF impairment, which improved dual- but not single-task performance. In other words, patients became more efficient at completing two tasks simultaneously but no better at completing either one individually. This seems to be a specific improvement in executive function.

Behavioral and Emotional Self-Regulation

Two studies involved providing situational cues that would remind EF-impaired TBI patients to reconsider the overall goal of their current task. In one study this involved pre-training on stimulus-response contingencies of the kind "if y, then I'll do x", whereas in the other study an auditory tone was played to remind patients to reconsider their goal.

Cicerone et al. briefly review other attempts to improve emotional self-regulation, including anger-management techniques. Unfortunately, Cicerone et al. suggest that emotional self-regulation impairments may be particularly resistant to treatment.

Activation

One typical symptom of frontal damage is lack of motivation or drive. Cicerone et al. suggest that very few studies have investigated how to address this problem, although simply cuing patients to initiate conversation shows some limited benefit. The D2 agonist bromocriptine also seems to have some benefit for increasing spontaneity.

Metacognitive Processes

Cicerone et al. review an attempt to teach one brain-damaged patient to "internally verbalize" the patient's current intent or goal. In three stages lasting 2-3 weeks apiece, this verbalization went from overt to completely covert; afterwards, the patient was trained for an additional 12 weeks to apply these strategies to everyday situations. This patient and subsequent patients given the same treatment seemed to show improvement in their inhibitory skills.


11/29/2006

Traumatic Brain Injury: Tests and Theories

One of the principal techniques used by cognitive scientists is reverse engineering - for example, a classic experiment by Donders established the field of mental chronometry by subtracting the time for execution of a motor response from total response time.

Reverse engineering "normal" brains can only take you so far. Another approach is to look at the effects of brain damage on various cognitive functions, and use reverse engineering methodologies on those patients to determine what functions the damaged regions might subserve. While MRI technologies have vastly improved our ability to localize damage in the frontal cortex, behavioral techniques for assessing the type, extent, and potential outcomes of frontal damage are still relatively crude.

As discussed by authors Cicerone, Levin, Malec, Stuss and Whyte, traumatic brain injury typically causes two types of damage: diffuse axonal injury (DAI) and focal cortical contusions (FCCs). The former involves destruction of capillaries and axons in frontal white matter and may contribute to frontal hypometabolism, which is known to be related to executive function impairment. The latter involves abrasions to the cortex by the skull, which can damage cortex directly (through herniation) or indirectly (through lack of oxygen and/or blood flow as a result of inflammation or DAI).

To determine the effects of brain injury on executive functions, the authors distinguish between 4 basic types of EF: cognitive, behavioral/self-regulatory, activation regulation, and metacognitive operations. Each is covered in turn below:

Executive Cognitive Functions: The authors suggest that these functions are dependent on the "control and direction [...] of lower level, more modular, or automatic functions" and that they are mediated by working memory and inhibition. Such functions seem to depend on the archicortical trend, and dlPFC in particular.

Executive cognitive functions are most frequently assessed with the Wisconsin Card Sorting Test, the Trail Making test, and verbal fluency measures. The California Verbal Learning Test is sometimes used to investigate damage to the associative or strategic processes that dlPFC may contribute to memory.

Interestingly, frontal damage does not seem to impair all executive cognitive functions equally. For example, Cicerone et al. review a 1998 study in which 30 brain-damaged patients performing a naturalistic task (packing a lunch) showed no more susceptibility to distractors and no greater dual-task performance decrement than normal controls (though of course they made more errors in general, raising the question of whether there were floor effects here). Performance on this task was not predictive of the degree of brain damage, but was "moderately correlated with a measure of functional outcome."

Behavioral self-regulatory functions: In contrast, orbitofrontal (aka ventromedial) regions of prefrontal cortex are closely connected with the limbic system and thus both reward and emotional processing. Tests of reversal learning are generally sensitive to damage in this region.

Activation Regulating Functions: Cicerone et al. suggest that more limited medial damage can result in apathy or abulia, and that this maps onto Stuss's concept of drive. They link these functions to the anterior cingulate and superior frontal cortex, pointing to slowed RTs in verbal fluency and Stroop tasks among patients with damage to this area.

Metacognitive Processes: Cicerone et al. point towards the frontal poles as the origin of metacognitive processes like personality, consciousness, self-evaluation and social cognition. Assessments of damage to the frontal poles involve "reactions to verbal and cartoon humor, visual perspective-taking tests, and comparison of performance on remember-know memory tasks."

All of the assessment techniques listed above have been criticized for a lack of specificity. It is also important to distinguish between the levels of analysis - impairment on laboratory tasks vs. impairment on real-world tasks involving those abilities. Such lack of specificity problematizes rehabilitation, which can involve pharmacological intervention, "direct remediation," or the use of compensatory technologies/devices.

11/28/2006

Eated Their Own Words: Symbolic Accounts of Overregularization Errors

Biopsychology News linked to an article by Hartshorne & Ullman about sex differences in over-regularization error rates - i.e., the formation of words like "holded." Such over-regularization errors have long been a topic of interest for at least two reasons: first, they seem to reveal the overuse of a mental "rule," providing support for theorists who consider cognitive processing to be rule-based or symbolic; second, these errors have a curious U-shaped developmental trajectory, being essentially absent among the youngest speakers, peaking in early childhood, and disappearing again among older children.

Although connectionist models have been successful in accounting for this U-shaped performance curve without the use of "mental rules," Hartshorne & Ullman note that such models do not predict any sex differences in overregularization errors - in fact, the authors argue that sex differences have been largely ignored in the past-tense domain.

In contrast, Ullman's "Declarative/Procedural" model of language processing does make predictions about sex differences. This theory posits that two distinct processes mediate past tense formation: first, a mental lexicon stores various instances of past-tense mappings and relies mostly on declarative memory, and second, a "mental grammar" of rules is subserved by the procedural memory system. Because females are generally better than males at verbal declarative memory tasks, Hartshorne & Ullman predicted that females may be more likely to "remember" regular past-tense forms (e.g., "walked") whereas males may be more likely to use the mental grammar mechanism to form these past-tense mappings anew each time. In other words, females may tend to use the mental lexicon, whereas males may tend to use the mental grammar mechanism.

According to this account, over-regularization errors happen when irregular past-tense forms - stored in the mental lexicon - are not successfully retrieved, and the irregular verb stem is thus mistakenly passed through the mental grammar rule system, where it is given a regular past-tense ending ("-ed"). Hartshorne & Ullman predicted that females should therefore make fewer overregularization errors than males, given their superior mental lexicon. To test this hypothesis, the authors analyzed more than 100,000 utterances from 15 boys and 10 girls in the CHILDES transcript database.

Contrary to their hypothesis, Hartshorne & Ullman discovered that girls actually made more than three times as many overregularization errors as boys! There were no significant sex differences in age, the size or contents of the transcript samples, social class, or the types of verb stems used by adult conversation partners in the CHILDES transcripts.

To account for this, Hartshorne & Ullman reconsidered their original hypothesis. Perhaps over-regularization errors are more prevalent among females because females are more likely to store regular past-tense forms in the mental lexicon, and thus more likely to apply those regular mappings inappropriately to irregular verb stems - not through the mental grammar system, but through an associative system within the lexicon. According to this new hypothesis, female overregularization errors should occur primarily for irregular verb stems that sound similar to a variety of regular verb stems.
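
The kind of neighborhood analysis this hypothesis calls for can be sketched in toy form (the word lists and the crude "shared ending" heuristic are my own invented stand-ins for the phonological similarity measures Hartshorne & Ullman actually used):

```python
# Invented stand-in for a phonological neighborhood measure: count regular
# stems that share an irregular stem's final letters ("rhyme neighbors").
regular_stems = ["blink", "wink", "link", "walk", "play", "need", "heed"]
irregular_stems = ["drink", "go", "feed"]

def regular_neighbors(stem, pool, k=3):
    """Count pool words ending with the stem's last k letters."""
    return sum(word.endswith(stem[-k:]) for word in pool)

for stem in irregular_stems:
    print(stem, regular_neighbors(stem, regular_stems))
```

On the associative account, a neighbor-rich stem like "drink" (surrounded by "blink"/"wink"/"link", all regular) should attract "drinked" from girls far more often than a neighbor-poor stem like "go".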

Hartshorne & Ullman confirmed this prediction with a sophisticated analysis of overregularization errors made by boys and girls, demonstrating that the tendency to overregularize verb stems with a high number of similar-sounding regular verb stems was significantly stronger among girls than boys. In fact, boys showed no such tendency. The authors also attempted to rule out alternative explanations for this result, including that boys' conversation partners in the CHILDES transcripts tended to use fewer similar-sounding regular verb stems, or that floor effects in boys' overregularization rates would "wash out" the correlation seen among girls (although this latter explanation still remains convincing to me, particularly given the small sample sizes here).

The authors note several limitations to their work, including that they did not demonstrate that boys' overregularization errors occur as a result of the rule-based system, that the connection between female over-regularization and associative computation is merely correlational rather than causal, and finally that the typical superiority of females in verbal memory tasks was not confirmed in this particular sample.

Hartshorne & Ullman conclude by suggesting that connectionist models of past-tense formation - which rely entirely on associative links between verb stems and past-tense forms, without recourse to symbolic rule structures - may actually explain more about past-tense formation than the current authors had given them credit for.

One shortcoming of symbolic accounts - such as Ullman's Declarative/Procedural model, or Pinker's "Words and Rules" theory - is the lack of plausible mechanisms by which "grammar rules" could be extracted from memorized verb stem/past-tense pairs (as pointed out by Tomasello). In contrast, cognitive computational mechanisms are well-specified by connectionist and PDP accounts of language learning, which reliably demonstrate the kind of "regularity extraction" (by way of Hebbian learning) that would be required for the formation of rule-like representations.
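
As a minimal sketch of what such regularity extraction might look like, here is a toy Hebbian learner (stems, features, and parameters are all invented for illustration) in which the connection from a stem feature to an "add -ed" output strengthens only when the two reliably co-occur:

```python
# Toy Hebbian regularity extraction: weights grow in proportion to the
# co-activation of a stem feature (pre) and the regular past tense (post).
features = ["ends_in_alk", "ends_in_ing", "is_go"]
corpus = [
    ({"ends_in_alk"}, 1),  # walk -> walked (regular)
    ({"ends_in_alk"}, 1),  # talk -> talked (regular)
    ({"ends_in_ing"}, 0),  # sing -> sang (irregular)
    ({"is_go"}, 0),        # go -> went (irregular)
]

w = {f: 0.0 for f in features}
learning_rate = 0.5
for _ in range(20):  # repeated exposure to the same toy corpus
    for active_features, regular in corpus:
        for f in active_features:
            w[f] += learning_rate * 1.0 * regular  # Hebbian: weight change ~ pre x post

# A rule-like "-alk stems take -ed" association emerges as a strong weight,
# with no symbolic rule anywhere in the system.
print(w["ends_in_alk"] > w["ends_in_ing"])
```

The point is only that graded associative weights can come to behave like a rule; full PDP models of the past tense are of course far richer than this.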

Related Posts:
Disentangling Two Debates: Domain-Specificity and Nativism
Word Learning in Feature Space

11/17/2006

DI on Hiatus

Developing Intelligence will be on hiatus until November 27th. Happy Thanksgiving!

11/13/2006

A Candidate Neural Mechanism for Cross-Frequency Phase Coupling

In a recent article from the Journal of Neuroscience, authors Palva, Palva & Kaila present compelling evidence for the idea that neural oscillations in various frequency bands are temporally multiplexed.

The authors begin with a brief synthesis of the previous literature on synchronized oscillations, suggesting that beta and gamma waves seem to be related to active maintenance of information, while theta and alpha waves are involved in top-down modulation of information held online. Cross-frequency phase coupling among these different oscillations is thought by some to underlie working memory.

Importantly, Palva et al distinguish between two types of cross-frequency phase coupling: n:m phase synchrony, which "indicates amplitude-independent phase locking of n cycles of one oscillation to m cycles of another oscillation," and "nested oscillations, which reflect the locking of the amplitude fluctuations of faster oscillations to the phase of a slower oscillation."
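
For concreteness, an n:m phase-synchrony index can be computed for synthetic signals as follows (a sketch with made-up sinusoidal phases; real MEG analyses would first estimate instantaneous phase, e.g. via the Morlet wavelets Palva et al. use):

```python
import cmath
import math
import random

random.seed(1)
fs = 1000                              # sampling rate (Hz)
ts = [i / fs for i in range(2 * fs)]   # 2 seconds of samples

# Synthetic instantaneous phases: 10 Hz (alpha) and 30 Hz (gamma), with the
# gamma phase jittered slightly to mimic noisy but genuine 1:3 coupling.
phi_alpha = [2 * math.pi * 10 * t for t in ts]
phi_gamma = [2 * math.pi * 30 * t + random.gauss(0, 0.3) for t in ts]

def nm_plv(phi_a, phi_b, n, m):
    """n:m phase-locking value: ~1 for perfect locking, ~0 for none."""
    vectors = [cmath.exp(1j * (n * a - m * b)) for a, b in zip(phi_a, phi_b)]
    return abs(sum(vectors) / len(vectors))

print(nm_plv(phi_alpha, phi_gamma, 3, 1))  # high: phases lock at a 1:3 ratio
print(nm_plv(phi_alpha, phi_gamma, 2, 1))  # low: no 1:2 relationship exists
```

Note that this index depends only on phase, which is what distinguishes n:m synchrony from nested oscillations, where the faster rhythm's amplitude envelope is what follows the slower phase.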

The authors performed MEG imaging on 17 subjects while they performed three tasks: the first two involved "active rest," in which subjects were told to actively clear their "visual and auditory fields" respectively, and the third task involved iterative mental arithmetic on two or three numbers. The MEG data were analyzed with Morlet wavelets, with the time-domain spread of the wavelet transforms further reduced by a "finite-impulse response" filtering technique. Frequencies were said to show "phase coupling" if their peak frequencies differed by a constant ratio and if the phases of those oscillations were not randomly distributed. The authors also analyzed amplitude relationships between coupled frequencies.

The results from the active rest conditions showed that n:m phase synchrony existed between all frequency bands, either at ratios of 1:2 (between alpha and beta) or 1:3 (primarily between gamma and alpha). The locations of these oscillations differed as well: alpha-beta phase coupling occurred widely throughout cortex, whereas gamma-alpha phase coupling occurred primarily over occipitoparietal and somatomotor regions.

Results from the mental arithmetic task showed increased gamma-beta, beta-alpha, and gamma-alpha phase coupling relative to rest. Furthermore, the use of 3 digits instead of 2 in the mental arithmetic task was associated with enhanced gamma-alpha phase coupling. The amplitudes of theta band oscillations were significantly increased in the 3 digit condition, compared to the 2 digit condition, but only over prefrontal regions; throughout the rest of cortex, theta oscillations were actually suppressed.

The authors conclude with speculation on how these phase-coupled oscillations may arise from neural circuits (probably the question several of you are asking yourselves). They suggest that one type of Layer V pyramidal neuron may be particularly suited to gamma/alpha phase coupling, because it has dendrites that span all cortical layers. Excitatory input to the proximal and basal dendrites comes primarily from thalamic nuclei and from regions "lower" in the cognitive hierarchy, whereas the excitatory input to distal apical dendrites comes primarily from regions higher in the cognitive hierarchy. Burst firing of these cells is evoked only when input from both the distal and proximal dendrites arrive within 5ms of one another.

Much of this excitatory input converges on gamma-band rhythms, but the mechanisms underlying action potentials in distal dendrites (involving calcium) are rather slow and cannot fire much faster than 10 Hz (alpha). Thus, Palva et al. suggest these neurons may act as "phase-couplers," which could be important for attention and binding. Consistent with this speculation is the fact that subcortical projections from these L5 cells target the pulvinar and superior colliculus, areas thought to be important for early attentional processes.

The authors also mention fast rhythmic bursting cells, which show bistable spiking patterns: they can fire single spikes, or change into a burst firing pattern of very fast (300 Hz - aka "high gamma") action potentials clustering at gamma rhythms. Palva et al. note that these cells are well represented in thalamocortical loops.

Related Posts:
The Argument for Multiplexed Synchrony
High Gamma Modulation in Cortex

11/12/2006

The Synapse, Issue 11

Welcome to the 11th issue of the Synapse!

Starting things off, Alpha Psy discusses evolutionary perspectives on the functions of shame (evoked by inappropriate behavior only if witnessed by others) and guilt (evoked by inappropriate behavior regardless of whether an audience witnesses it). Olivier then reviews a recent fMRI study demonstrating that ventrolateral and dorsomedial prefrontal cortex are sensitive to the presence of a witnessing audience after an inappropriate behavior, whereas other regions previously implicated in Theory of Mind tasks (such as the temporal-parietal junction and temporal poles) seem insensitive to audience.

So, what about the audience? The Neurocritic looks at the cognitive neuroscience of empathy, and its relationship with medial inferior frontal cortex, right anterior fusiform gyrus, and the right temporal pole.

Might an audience react differently to a speaker positioned on the left vs. the right of their visual field? Episteme reviews a new manuscript focused on this question, turning up some pretty interesting results. Whatever you might think of gross conclusions about laterality, this is worth a quick read.

According to simulation theorists, an audience may experience empathy by mentally simulating the situation - as though they are experiencing the situation directly. Another post from the Neurocritic bears on this question, in that patients with a congenital insensitivity to pain (yes, they can actually feel no pain) rate the perceived experience of pain the same as normal controls - but only if they are given access to visible or audible expressions of pain. Merely viewing the painful experiences themselves is apparently not enough, tentatively supporting simulation theory.

By the way, if you're interested in simulation theory, be sure to check out this post from Mind Hacks on the role of mirror neurons in autism.

Speaking of pain: HFXN discusses the cerebrospinal markers of axonal and glial damage found in amateur boxers one week after a rough match. Would we see similar indications of brain damage among other professionals who experience other instances of physical shock - for example, in soldiers using automatic weapons, or perhaps construction workers using jackhammers?


The Mouse Trap discusses how Bollywood and Hollywood have stigmatized mental illness, and one recent Bollywood film that seems to reverse this stereotype: in "Lago Raho Munnabhai," vivid hallucinations actually help the protagonist leave behind a life of crime and live according to Gandhian values.

BPS Research Digest mentions a new study showing that some social outgroups do not elicit the neural responses that usually occur whenever we think about other people or ourselves. Is "dehumanization" now something we can voxelize and quantify?



Drawing from a book called "The Legal Imagination," idealawg asks how thinking like a lawyer might affect your brain. What areas of the brain would we expect to see hyperactive among lawyers - and which relatively quiet - in comparison to someone of equivalent intelligence?

And for people with aspirations to go to law school, Sharp Brains offers some very helpful advice on techniques that can be used to improve memory. Also check out this summary of some recent work on adult neurogenesis.

Starting with Descartes and ending with Marvin Minsky, The Mouse Trap argues that circadian oscillators may be a critical omission from connectionist models of conditioning. After adding this mechanism, connectionist networks can begin to resemble Minsky's critic/selector architecture (it is interesting to note similarities here with the actor/critic method of temporal difference learning).

If you're interested in computational modeling, be sure to check out this video from Channel N: 3D computer simulations of over a million neurons!

Thanks for submitting, and stay tuned for the next issue of The Synapse at Dr. Deborah Serani's blog on November 26th!

11/10/2006

Connectionist Perspectives on "Late Talkers"

A slightly different view of the late-talkers’ predicament comes from connectionist perspectives on language development (nativist and interactionist views were covered previously). As a framework, connectionism emphasizes the graded, domain-general, and input-sensitive nature of cognition (McClelland & Patterson, 2002).

Connectionist simulations of language acquisition do not explicitly posit any innate mechanisms except those common to biological neural networks, nor do they ascribe any particular role to social interaction except simply providing linguistic input to the network. Nonetheless, such simulations accurately model a variety of detailed linguistic phenomena, including aspects of sentence reading, speech segmentation, and speech production (Christiansen & Chater, 2001).

Because connectionist principles are based on neural computation, and because language clearly results in some sense from neural computation, both nativist and interactionist predictions for effective intervention can usually be recast in the connectionist framework.

For example, reward conditioning of social interaction might benefit vocabulary acquisition by increasing the salience of linguistic input – one need not posit a special role for social cognition. Likewise, some developmental delays may simply “iron themselves out” not due to additional environmental triggers or the belated expression of a “shape-bias gene,” but rather because of individual differences in learning rate or the diversity of their early-life experiences.

Finally, children may appear to be sensitive to “intent” when in fact they are discriminating designed from non-designed objects on the basis of simple perceptual features (Colunga & Smith, 2005). Thus, predictions motivated by nativist and interactionist accounts can typically also be explained in terms of connectionist principles.

However, connectionism does suggest a few unique predictions for the absence of a shape bias among late-talkers. For example, attentional-learning accounts of word acquisition suggest that the associative links between names and perceptual features may become weighted with experience. Object features that more reliably correlate with naming patterns (such as shape for solid objects) ultimately become more salient with experience.
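
The reliability idea can be made concrete with a toy sketch (this miniature vocabulary and its features are invented; Colunga & Smith's actual simulations use far richer training sets): a feature's attentional weight is simply how often same-name objects match on it.

```python
from itertools import combinations

# Invented miniature vocabulary: (name, shape, material).
vocab = [
    ("ball", "round", "rubber"),
    ("ball", "round", "foam"),
    ("cup", "cylinder", "plastic"),
    ("cup", "cylinder", "ceramic"),
    ("goo", "blob", "slime"),
    ("goo", "puddle", "slime"),
]

def reliability(feature_index):
    """Fraction of same-name object pairs that also match on the feature."""
    same_name = [(a, b) for a, b in combinations(vocab, 2) if a[0] == b[0]]
    return sum(a[feature_index] == b[feature_index]
               for a, b in same_name) / len(same_name)

shape_weight = reliability(1)     # 2 of 3 same-name pairs match on shape
material_weight = reliability(2)  # only 1 of 3 match on material
print(shape_weight > material_weight)
```

Skew the vocabulary toward "goo"-like entries (same material, variable shape) and the inequality reverses - which is exactly the late-talker prediction developed in the next paragraph.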

According to this view, the experience of late-talkers may have included an abnormally large proportion of objects whose names cannot be differentiated on the basis of shape; the corollary of this view is that certain types of words may be over-represented in the small vocabulary of late-talkers. One would expect these over-represented words to relate to living things and nonsolid objects, since many living things and nonsolid objects can be very similar in shape but have different words (Colunga & Smith, 2005).

Other word types may be under-represented in late-talkers' vocabulary, such as items where shape is a diagnostic feature (for example, solids and possibly non-living things). These predictions could be verified through analysis of the words that late-talkers did know, as measured by the MacArthur Communicative Development Inventory (Jones, 2003) or carefully selected stimuli from the Peabody Picture Vocabulary Test.

If such words were over-represented, late-talkers might show a more pronounced material bias for nonsolid and for simply shaped solid objects (Colunga & Smith, 2005). There is already tentative support for this prediction: late talkers have shown nonsignificant trends towards the use of a material bias (Jones, 2003). Cross-linguistic differences in shape bias usage - and related connectionist simulations - have also shown that syntax can influence the shape bias (Colunga & Smith, 2005). Based on this work, late-talkers may also be less proficient in discriminating count and mass nouns.

If these predictions were verified, successful intervention techniques would likely involve training on categories well-organized by shape in an attempt to cultivate a shape bias, which has been shown to increase the rate of word learning (Smith et al., 2002). Another possible intervention is training on count vs. mass noun distinctions. It is even possible that extensive training on simple (and even non-verbal) shape matching tasks could instill a habit for late-talkers to attend to shape, which might ultimately translate into an acceleration of word learning. If verified, this latter prediction would be particularly compelling, because it would reflect an instance of far transfer of learning, one of the “holy grails” of training research.

References:

Colunga, E., & Smith, L. B. (2005). From the lexicon to expectations about kinds: A role for associative learning. Psychological Review, 112(2), 347-382.

Jones, S. S. (2003). Late talkers show no shape bias in a novel name extension task. Developmental Science, 6(5), 477-483.

McClelland, J. L., & Patterson, K. (2002). Rules or connections in past-tense inflections: What does the evidence rule out? Trends in Cognitive Sciences, 6(11), 465-472.

Smith, L. B., Jones, S. S., Landau, B., Gershkoff-Stowe, L., & Samuelson, L. (2002). Object name learning provides on-the-job training for attention. Psychological Science, 13(1), 13-19.

11/09/2006

Interactionist Perspectives on "Late Talkers"

Yesterday's post reviewed nativist perspectives on why children with abnormally small vocabularies might not show a "shape bias" in their naming of novel nouns. Today's post focuses on how "interactionist" perspectives on language might account for this finding, while tomorrow's post will focus on the predictions motivated by connectionist theories.

In contrast to nativist accounts, interactionists emphasize the importance of social cognition to language development, and suggest that humans learn language largely because they differ from non-human primates in their attunement to social cues like reaching, looking, and pointing (Tomasello et al., 2003). For example, chimpanzees are unlikely to attend to social cues unless the stimulus is intrinsically rewarding (e.g., food) or unless there is competition from other chimpanzees. Even in this context, however, chimpanzees still appear incapable of more subtle social cognition, such as that involved in theory of mind (ToM), which has a relationship to language learning in humans (Slade and Ruffman, 2005).

From this perspective, interactionists might account for a lack of shape bias among late-talkers as resulting from deficits in skills related to social cognition. In this case, successful intervention techniques might involve reward conditioning for engaging in social dialogue, and specifically for following the social cues of adults. If this training was successful, children might initially show improvement on ToM tasks, followed by subsequent gains in vocabulary.

A more specific prediction is motivated by interactionists that stress the importance of “intent” in children’s naming habits. Specifically, they claim that the use of the shape bias is conditional on the objects having similar intended purposes (Diesendruck, Markson, & Bloom, 2003), and thus, implicitly also dependent on children’s inferential abilities. This account would predict that late-talkers have inferential deficits.

This hypothesis might predict that late-talkers would perform abnormally on tasks involving social inference, perhaps as measured by simple versions of the Child’s Apperception Test (i.e. CAT, in which subjects must tell a story about an ambiguous picture). According to this interactionist perspective, successful intervention might involve explicit training on “intentional cues,” focusing not only on gestures and facial expressions but also on object affordances. Similar to the interactionist perspective discussed above, this version would also predict that children show initial improvement on CAT or ToM tasks, followed by changes in vocabulary.

References:
Diesendruck, G., Markson, L., & Bloom, P. (2003). Children's reliance on creator's intent in extending names for artifacts. Psychological Science, 14(2), 164-168.

Slade, L., & Ruffman, T. (2005). How language does (and does not) relate to theory of mind: A longitudinal study of syntax, semantics, working memory and false belief. British Journal of Developmental Psychology, 23, 1-26.

Tomasello, M., Call, J., & Hare, B. (2003). Chimpanzees understand psychological states - the question is which ones and to what extent. Trends in Cognitive Sciences, 7(4), 153-156.

11/08/2006

Nativist Perspectives on "Late Talkers"

In novel noun generalization tasks, children tend to show a “shape bias” by 2.5 years of age – that is, they will extend a newly learned object name to other objects of the same shape. More recent work has shown that children with abnormally small vocabularies do not demonstrate this shape bias (Jones, 2003); furthermore, this relationship appears to be causal, in that training children to show the shape bias drastically improves word learning even outside the laboratory (Smith et al., 2002).

Interactionist, connectionist, and nativist theories of language acquisition each make novel predictions about the mechanism underlying the lack of shape bias among so-called “late-talkers,” and about ways to improve their rate of word learning. Predictions motivated by the nativist account are discussed in today's post, while predictions motivated by the other two accounts will be discussed in tomorrow's and Friday's posts.

Nativist accounts suggest that language acquisition is guided by mechanisms that are innately specified. These mechanisms may be language-specific, as in the Chomskian concept of a “universal grammar” or in Pinker’s “words-and-rules” theory (Pinker & Ullman, 2002). Alternatively, these mechanisms may be relatively domain-general, such as the computations required for combinatorial (Spelke, 2003) or recursive cognitive processing (Hauser, Chomsky & Fitch, 2002). In either case, the assumption is that a bias to preferentially attend to shape is somehow innate, and therefore largely determined by intrinsic biological properties rather than developed through experience.

According to hard-line nativist accounts, the failure of late-talkers to demonstrate a shape bias might be purely genetic in origin. For example, late-talking could be associated with a maladaptive gene affecting a language-specific word-learning faculty, or one that codes for how general-purpose attention is distributed to objects in the environment. In the latter case, nativist accounts would predict that late-talkers show early deficits in a variety of other domains (for example, in ignoring distractors or in focusing attention), whereas the former case would predict only language-specific deficits. In either case, the prognosis and possibilities for intervention would be particularly bleak, since the relevant mechanisms are presumed to be innately specified and relatively unaffected by experience.

A less extreme nativist account would still suggest that word learning strategies like the shape bias are innately specified, but may become active only after certain “linguistic triggering experiences” (as cited by Tomasello, 2000). In this case, late-talkers may fail to show a shape bias simply because they have not yet encountered these environmental triggers, whether as a result of reduced language exposure or of deficits in auditory processing. Alternatively, late-talkers may simply be developmentally delayed. In either of these cases, the prognosis is much more favorable than under hard-line nativist accounts; intervention would simply involve continued exposure to natural language and, if auditory deficits are suspected, possibly hearing aids.

References:

Hauser, M.D., Chomsky, N., & Fitch, W.T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298(5598), 1569-1579.

Pinker, S., & Ullman, M.T. (2002). The past and future of the past tense. Trends in Cognitive Sciences, 6(11), 456-463.

Smith, L.B., Jones, S.S., Landau, B., Gershkoff-Stowe, L., & Samuelson, L. (2002). Object name learning provides on-the-job training for attention. Psychological Science, 13(1), 13-19.

Spelke, E. (2003). What makes us smart? Core knowledge and natural language. In D. Gentner & S. Goldin-Meadow (Eds.), Language in Mind: Advances in the Study of Language and Thought. Cambridge, MA: MIT Press.

Tomasello, M. (2000). The item-based nature of children's early syntactic development. Trends in Cognitive Sciences, 4(4), 156-163.

11/05/2006

Blogging on the Brain: 10/20 - 11/5

Recent highlights from the best in brain blogging:

First, check out the Synapse #10, and don't forget to submit to the next Synapse this week!

SharpBrains links to interviews with Eric Kandel, Liz Phelps, and Rebecca Saxe among others.

Kibra polymorphism: One genetic influence on individual differences in long-term memory ability

Genes and Intelligence: A cotwin study of gray and white matter densities, correlated with IQ

While we're on the topic, BrainEthics has a nice roundup of new articles in Science, Nature Neuroscience, and Cognition that focus on "cognitive genetics."

Paul discusses the most recent PNAS paper to make the mainstream news: are elephants self-aware? (Also here and here).

Brain and Body at Peripersonal Space

3D imaging of cell death in Alzheimer's

Is music a "visual language" to the brain?

A review of Hauser's new book, and the idea of a "moral grammar"

Have a nice weekend!

11/03/2006

Disentangling Two Debates: Conclusions

In the last few posts, I established that the debate over domain-general vs. domain-specific mechanisms is theoretically orthogonal to the debate over whether those mechanisms are innate or learned. Here, I illustrate how these two debates are typically confounded in studies of language development.

Despite the stereotype that nativists advocate domain-specific (DS) mechanisms and that empiricists advocate domain-general (DG) mechanisms, the evidence reviewed above establishes that the nativist/empiricist debate is orthogonal to the DS/DG debate. Some empiricists argue for DS mechanisms (at least at the end-state of learning), while some nativists advocate the DG mechanism of recursion. The assumption that issues of nativism necessarily bear on the DS/DG debate is thus frequently mistaken.

Unfortunately, this faulty assumption pervades the literature: evidence bearing on just one of these two distinct debates is often interpreted as bearing on both. For example, evidence on the use of the “one-to-one principle” from isolated deaf children and from speakers of Kannada is interpreted to suggest that “domain-specific grammatical knowledge guides linguistic development” (Lidz & Gleitman, 2004), when in fact no evidence is presented that the “one-to-one principle” is reflected only in linguistic tasks. In this case, the researchers have confounded the question of innateness with that of domain-specificity, assuming that evidence bearing on innateness (e.g., the use of the one-to-one principle in home sign language, and among speakers of a language whose grammar does not obey the one-to-one principle) also bears on domain-specificity (e.g., whether the “one-to-one” principle is specific to language or actually reflects cognition in general).

Confounding these two distinct debates is not limited to the nativists, however. For example, the fact that children’s early syntax appears to be usage-based rather than reflective of an innate grammar has been interpreted as evidence that “general cognitive and social skills” are used in the process of generalization (Tomasello, 2000). This conclusion is unwarranted, because it is entirely possible that the skills involved in generalization (such as analogy, which Tomasello explicitly mentions) are, or become, specific to language.

In a similar fashion, the Latent Semantic Analysis framework establishes that semantic learning need not be enabled by strong innate word-learning biases, but can instead occur through statistical learning (Landauer & Dumais, 1997). However, the authors speculate that similar algorithms may actually underlie “associative learning theory” without demonstrating that this is the case in any domain other than linguistic or symbolic tasks. Here again, evidence bearing on the innateness question is interpreted as bearing on the domain-specificity debate, without acknowledging that these debates are distinct and without mustering evidence that conclusively demonstrates domain-generality.
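To make the LSA mechanism concrete: the model builds a word-by-context co-occurrence matrix and compresses it with a truncated singular value decomposition, so that words appearing in similar contexts end up with similar vectors even when they never co-occur directly. The sketch below is a minimal illustration in Python; the words, contexts, and counts are invented for this example and are not data from Landauer and Dumais.

```python
import numpy as np

# Toy word-by-context co-occurrence matrix: rows = words, columns = contexts.
# All words and counts are invented for illustration.
words = ["dog", "puppy", "cat", "kitten", "truck"]
X = np.array([
    [2, 1, 0, 0],   # dog
    [1, 2, 0, 0],   # puppy
    [0, 0, 2, 1],   # cat
    [0, 0, 1, 2],   # kitten
    [1, 0, 1, 0],   # truck
], dtype=float)

# LSA: truncated singular value decomposition of the co-occurrence matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                            # keep only the k largest dimensions
word_vecs = U[:, :k] * s[:k]     # low-dimensional word representations

def similarity(w1, w2):
    """Cosine similarity between two words in the reduced space."""
    a, b = word_vecs[words.index(w1)], word_vecs[words.index(w2)]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words that occur in similar contexts end up close together in the
# reduced space -- the inductive step Landauer & Dumais describe.
print(similarity("dog", "puppy"), similarity("dog", "kitten"))
```

The point relevant to the argument above is that nothing in this procedure is language-specific: the same decomposition applies to any co-occurrence matrix, which is precisely why demonstrations on linguistic data alone cannot settle the domain-generality question.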

Likewise, evidence that bears only on the domain-specificity debate is frequently misinterpreted as bearing on the innateness debate. For example, some patients show selective deficits in knowledge for living and non-living things (apparently demonstrating domain-specificity for these categories). This has been hastily interpreted as demonstrating that the neural representation of these categories may be innately specified (Zaitchik & Solomon, 2001). Another study demonstrated selective deficits for living things in a patient who sustained brain damage at 1 day of age, and interpreted this as evidence that a “living things” module is genetically specified (Farah & Rabinowitz, 2003). However, the damage could have been localized to a more DG process, such as retrieval by visual associations, which would differentially impact knowledge of living things.

After a cursory review of the literature on language, it would be easy to think that nativists advocate innate DS mechanisms for language, and that empiricists advocate DG statistical learning mechanisms. However, this is an overly simplistic view; it is contradicted by examples of nativists arguing that language is enabled by a particular innate and yet DG mechanism, and also by other examples where language learning is subserved by statistical learning processes that are (or become) specific to language. Unfortunately, this simplistic view of these two debates is further reinforced by research where evidence on innateness is interpreted in the context of domain-specificity, and vice versa. It will be important for psycholinguists to acknowledge the independence of these debates rather than further confound them.

Note: This is the final post of a series on how to disentangle the domain-specificity debate from the nativism debate in cognitive studies of language.
Part I: Disentangling Two Debates: Introduction
Part II: Some domain-general mechanisms may be innate
Part III: Some domain-specific mechanisms need not be innate

References:

Farah, M.J., & Rabinowitz, C. (2003). Genetic and environmental influences on the organization of semantic memory in the brain: Is “living things” an innate category? Cognitive Neuropsychology, 20, 401-408.

Landauer, T.K., & Dumais, S.T. (1997). A solution to Plato’s problem: The Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104, 211-240.

Lidz, J., & Gleitman, L.R. (2004). Argument structure and the child's contribution to language learning. Trends in Cognitive Sciences, 8(4), 157-161.

Tomasello, M. (2000). The item-based nature of children's early syntactic development. Trends in Cognitive Sciences, 4(4), 156-163.

Zaitchik, D., & Solomon, G.E. (2001). Putting semantics back into the semantic representation of living things. Behavioral and Brain Sciences, 24, 496-497.

11/02/2006

Learned But Domain-Specific Mechanisms?

Continuing from yesterday's post: to demonstrate that the nativism debate in language is fully distinct from the domain-specificity debate, one needs to show that a position on one debate does not dictate one's position on the other. Yesterday's post examined theories that might be considered to advocate innate but domain-general capacities enabling language, a perspective not frequently represented in the literature. Below I discuss the flip side: are there theorists who advocate a learned but domain-specific capacity for language?

Thus, the second claim that needs to be substantiated is that domain-specific mechanisms could, in principle, be learned rather than innate. It is tempting to think that empiricist perspectives lack this theoretical flexibility, given that so many prominent empiricists argue for domain-general mechanisms. For example, “dumb attentional mechanisms” are sometimes thought sufficient to account for word learning biases (Booth & Waxman, 2002). Likewise, Bayesian models of syntax learning work partly because they choose “narrow hypotheses over broad ones – but this is not a language-specific constraint” (Regier & Gahl, 2004). Perhaps most tempting is the strong claim to domain-generality made by the connectionist framework (McClelland & Patterson, 2002), a firmly empiricist tradition.
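Regier and Gahl's observation reflects what Bayesian modelers sometimes call the "size principle": if examples are assumed to be sampled from the true hypothesis, then data consistent with both a narrow and a broad hypothesis are more probable under the narrow one. A minimal worked sketch, with hypothesis sizes and data invented for illustration:

```python
# Size principle: why a Bayesian learner favors narrow hypotheses.
# Hypotheses are sets of possible forms; examples are assumed to be
# sampled uniformly at random from the true hypothesis. (Numbers invented.)
narrow = set(range(10))     # hypothesis covering 10 possible forms
broad = set(range(100))     # hypothesis covering 100 possible forms

def likelihood(data, hypothesis):
    """P(data | h) under uniform sampling from the hypothesis."""
    if not all(d in hypothesis for d in data):
        return 0.0
    return (1.0 / len(hypothesis)) ** len(data)

data = [3, 7, 2]  # three observed forms, consistent with both hypotheses

# With equal priors, the posterior ratio is just the likelihood ratio:
ratio = likelihood(data, narrow) / likelihood(data, broad)
print(ratio)  # ~1000: each consistent example multiplies the advantage by 10
```

With equal priors, three consistent examples already favor the ten-times-narrower hypothesis by roughly a factor of a thousand – and, as Regier and Gahl note, nothing about this preference is specific to language.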

However, empiricist accounts of word learning can remain theoretically coherent and still posit apparently domain-specific mechanisms. For example, the end result of learning in connectionist networks can approximate modularity (Colunga & Smith, 2005), due to the tendency for certain parts of the network to become functionally specialized.

Research on syntactic development illustrates another example of how empiricist theories can be compatible with domain-specificity. Children’s language learning appears to occur by way of imitation, such that words are not initially used based on their part of speech (contrary to what one might expect if they were being placed into a “universal grammar”) but rather based on the contexts in which they are most frequently experienced (Tomasello, 2000). At some critical threshold of linguistic experience, children begin to generalize the constructions of language (on the basis of analogy, according to Tomasello). It is at this point that language becomes a kind of representational modality unto itself, where new linguistic forms are constructed by language-specific rules. Thus, the end result of learning here also approximates domain-specificity.

A third example of domain-specific yet learned mechanisms comes from research on linguistic relativism. For example, the use of grammatical gender in various languages shows that this learned aspect of language can bias adjective generation, “potency” judgments, and a variety of other linguistic tasks by speakers of those languages (Boroditsky, 2003). Although some have interpreted this as evidence that learned aspects of language can have DG effects (i.e., on “thinking-in-general” above and beyond “thinking-for-speaking”), it is difficult to clearly demonstrate this is the case, since language seems to be such a pervasive aspect of cognition.

For example, Boroditsky (2003) reviews evidence from cross-linguistic comparisons of similarity ratings – with an auditory shadowing dual task intended to “disable people’s linguistic faculties.” Although the auditory shadowing task did not interfere with the influence of grammatical gender on similarity ratings, one cannot conclude that this learned aspect of language affects thought more generally, both because similarity ratings are still an arguably linguistic task and because the auditory shadowing task might not have fully occupied the subjects’ linguistic faculties. Likewise, evidence on cross-linguistic differences in time perception clearly demonstrates differences between cultures, but since no articulatory suppression techniques were used, subjects may have covertly recruited language to assist in the task (Casasanto et al., 2004), making this seem like yet another domain-specific effect.

Tomorrow's post will conclude this series with a review of how these two distinct debates have been confounded in prominent cognitive research on language.

Note: This post is part III of a series on how to disentangle the domain-specificity debate from the nativism debate in cognitive studies of language.
Part I: Disentangling Two Debates: Introduction
Part II: Some domain-general mechanisms may be innate
Part III: Some domain-specific mechanisms need not be innate (this post)
Part IV: Dissociations from data and conclusions (coming tomorrow)

References:

Booth, A.E., & Waxman, S.R. (2002). Word learning is 'smart': Evidence that conceptual information affects preschoolers' extension of novel words. Cognition, 84(1), B11-B22.

Boroditsky, L., Schmidt, L., & Phillips, W. (2003). Sex, syntax, and semantics. In D. Gentner & S. Goldin-Meadow (Eds.), Language in Mind: Advances in the Study of Language and Thought. Cambridge, MA: MIT Press.

Casasanto, D., Boroditsky, L., Phillips, W., Greene, J., Goswami, S., Bocanegra-Thiel, S., Santiago-Diaz, I., Fotokopolou, O., Pita, R., & Gil, D. (2004). How deep are effects of language on thought? Time estimation in speakers of English, Indonesian, Greek and Spanish. Proceedings of the 26th Annual Conference of the Cognitive Science Society.

Colunga, E., & Smith, L.B. (2005). From the lexicon to expectations about kinds: A role for associative learning. Psychological Review, 112(2), 347-382.

McClelland, J.L., & Patterson, K. (2002). Rules or connections in past-tense inflections: What does the evidence rule out? Trends in Cognitive Sciences, 6(11), 465-472.

Regier, T., & Gahl, S. (2004). Learning the unlearnable: The role of missing evidence. Cognition, 93(2), 147-155.

Tomasello, M. (2000). The item-based nature of children's early syntactic development. Trends in Cognitive Sciences, 4(4), 156-163.

11/01/2006

Innate But Domain-General Mechanisms?

As mentioned in yesterday's post, cognitive studies of language tend to address two interrelated debates: the extent to which language relies on domain-general vs. domain-specific mechanisms, and the extent to which it relies on innate vs. learned mechanisms. To demonstrate that these are distinct debates, we need to show that a position on one does not dictate one's position on the other.

Perhaps the strongest challenge to the claim that these are distinct debates comes from the fact that it is relatively difficult to find theorists who advocate domain-general but innate mechanisms. After all, what domain-general and yet completely innate mechanism could possibly enable language?

One possible answer is recursion. Some have argued that recursion is the only aspect of language that is unique to humans (Hauser, Chomsky & Fitch, 2002), but this capacity is not domain-specific – as Spelke (2003) suggests, such “combinatorial” capacity could also underlie other uniquely human achievements like cooking, mathematics, and music. Therefore, at least two outspoken advocates of nativism have proposed an innate but domain-general mechanism.

Another DG mechanism thought by some to be responsible for language is theory of mind (ToM). Despite extensive training, chimpanzees have not demonstrated human-level performance on all ToM tasks (Tomasello et al., 2003), leading some to suggest that it may be a uniquely human adaptation (and thus in some sense innate). Although language and ToM performance are correlated, the capacity to understand the minds and intentions of others is not language-specific. Therefore, this view also advocates a domain-general but innate mechanism.

Note: This post is part of a series on how to disentangle the domain-specificity debate from the nativism debate in cognitive studies of language.
Part I: Disentangling Two Debates: Introduction
Part II: Some domain-general mechanisms need not be learned (coming soon)
Part III: Some domain-specific mechanisms need not be innate (coming soon)
Part IV: Dissociations from data and conclusions (coming soon)

References:

Hauser, M.D., Chomsky, N., & Fitch, W.T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298(5598), 1569-1579.

Spelke, E. (2003). What makes us smart? Core knowledge and natural language. In D. Gentner & S. Goldin-Meadow (Eds.), Language in Mind: Advances in the Study of Language and Thought. Cambridge, MA: MIT Press.

Tomasello, M., Call, J., & Hare, B. (2003). Chimpanzees understand psychological states - the question is which ones and to what extent. Trends in Cognitive Sciences, 7(4), 153-156.