8/31/2006

Molecular Basis of Memory Consolidation

Although memory erasure has long been a prominent theme in science fiction movies (I most recommend Paycheck or Eternal Sunshine), a new study circulating in the blogosphere demonstrates one molecular technique for inhibiting long-term memory. To understand the cognitive effects of this technique, you first need a quick background on memory processes:

Over the long term, recent memories encoded by the hippocampus are thought to be gradually transferred to neocortex. This process is often known as consolidation, and can be observed in at least a couple of ways. First, amnesics who have sustained damage to the hippocampus will not only show impairments in forming new memories, but will also show temporally-graded retrograde amnesia - in other words, they also lose access to memories formed shortly before the damage, while more remote memories are relatively spared. Second, sleeping (and even awake) rat hippocampi can be observed "replaying" recent activity patterns, as though interleaving these experiences for a relatively slow-learning neocortex to absorb. In humans, memories are strengthened after a good night's sleep as compared to an equivalent amount of time awake.

What mechanisms support this consolidation process? I've written previously about work from Harvard showing that destruction of the Armitage protein is important for protein synthesis at the synapse. Previous work has also shown that NMDA antagonists can block memory encoding, but not memory maintenance or consolidation. However, new research from SUNY has shown that inhibition of a different molecule - protein kinase M-zeta (PKMζ) - can selectively disrupt the storage of memories up to 1 day old, but not the encoding of new memories. This disruption of storage is thought to occur by downregulating AMPA receptors in the CA1 region of hippocampus, which figure prominently in late LTP.
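
To make the storage/encoding distinction concrete, here is a minimal toy simulation (my own sketch, not the model from the SUNY paper): a synaptic weight persists only while a PKMζ-like maintenance process is left intact, so a transient inhibitor erases an established trace without preventing later encoding. All names and numbers here are illustrative.

```python
def simulate(encode_times, inhibitor_window, steps=250, decay=0.9):
    """Toy late-LTP maintenance: a weight persists only while a
    PKMzeta-like process sustains it; a transient inhibitor erases
    stored weights but does not block later encoding."""
    w, trace = 0.0, []
    for t in range(steps):
        if t in encode_times:
            w = 1.0                         # encoding event (hypothetical learning trial)
        if inhibitor_window[0] <= t < inhibitor_window[1]:
            w *= decay                      # maintenance blocked: the trace decays away
        trace.append(w)
    return trace

trace = simulate(encode_times={10, 180}, inhibitor_window=(100, 150))
print(round(trace[90], 2))   # ~1.0: memory intact before inhibition
print(round(trace[170], 2))  # ~0.0: stored memory erased by the inhibitor
print(round(trace[240], 2))  # ~1.0: new encoding after washout persists normally
```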

Does this molecule work by disrupting consolidation processes within a specific time-window, or does it truly erase these memories?

Note: this post has been heavily edited/revised in response to reader comments.

Related Posts:
Molecular Basis of Memory
A role for Protein in Learning and Memory
A Role for MicroRNA in Learning and Memory

8/30/2006

Dynamic Gating in Long-Term Memory?

Some theories of working memory function posit that midbrain dopamine projections will trigger the updating of representations in prefrontal cortex via the thalamo-cortical loop. In other words, representations enter working memory by a phasic increase in dopamine, which "opens a gate" in the basal ganglia/prefrontal cortex circuit. Might similar gating processes exist in the hippocampus, thought to be responsible for many aspects of long-term memory?
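
In code form, the gating idea reduces to something like the following sketch (a toy illustration of the theory, not any published implementation; the threshold and signal values are invented):

```python
def gated_update(memory, new_input, phasic_dopamine, threshold=0.5):
    """Hypothetical gate: prefrontal contents are overwritten only when
    a phasic dopamine burst exceeds threshold; otherwise the current
    contents are robustly maintained against interference."""
    if phasic_dopamine > threshold:
        return new_input        # gate open: update working memory
    return memory               # gate closed: maintain current contents

wm = "goal A"
wm = gated_update(wm, "distractor", phasic_dopamine=0.1)  # gate stays shut
wm = gated_update(wm, "goal B", phasic_dopamine=0.9)      # burst opens the gate
print(wm)  # "goal B"
```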

In a new issue of TICS, Fernández & Tendolkar argue that ento- and peri-rhinal cortex might subserve exactly this function. The authors suggest that a gating function is necessary to prioritize the encoding of more important or unfamiliar events as they occur, and that rhinal cortex fits the bill in more ways than one: it appears to function on the basis of semantic or conceptual associations, it is known to be important in encoding, and it is situated right next to the hippocampus.

Experimental evidence also supports the claim that rhinal cortex may "gate" information into long-term memory. As the authors note, fMRI studies have shown that rhinal cortex activity decreases with familiarity. ERP studies have shown that an electrical wave called the "anterior medial temporal lobe N400" is generated by rhinal cortex and decreases with familiarity. Conversely, successful memory encoding is associated with increases in rhinal cortex activity as measured by electrophysiological, fMRI, and ERP techniques. [Incidentally, these results have been interpreted as the reason that spaced learning is better than blocked or massed learning.]

The authors conclude that familiarity detection in rhinal cortex may be an organizing principle of long-term memory encoding and retrieval. During encoding, a lack of familiarity may provide a signal that additional associative encoding should take place; during retrieval, lack of familiarity may signal the need for additional retrieval cues. The authors suggest that this dual role of familiarity detection may provide insight into the role of rhinal cortex and the way in which novelty can be used to optimally allocate encoding resources as well as provide cues for fast retrieval.
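
The proposed gate can be summarized in a few lines of code (my own sketch of the Fernández & Tendolkar proposal; the familiarity scores and threshold are entirely hypothetical):

```python
def rhinal_gate(stimulus, familiarity, novelty_threshold=0.6):
    """Toy familiarity gate: low familiarity yields a strong novelty
    signal that triggers deep associative encoding; high familiarity
    skips encoding and instead cues additional retrieval."""
    novelty = 1.0 - familiarity
    if novelty > novelty_threshold:
        return f"encode '{stimulus}' associatively (novelty={novelty:.1f})"
    return f"'{stimulus}' is familiar; bias toward retrieval instead"

print(rhinal_gate("new face", familiarity=0.1))
print(rhinal_gate("my own office", familiarity=0.9))
```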

Related Posts:
Familiarity vs. Recollection
Implicit vs Explicit Memory: Two Distinct Systems?
EEG Signatures of Successful Memory Encoding

8/29/2006

Symbols, Language and Human Uniqueness

David Premack wrote an interesting perspective piece in the January 2004 issue of Science concerning the uniqueness of human cognition, as distinct from other primates, with regard to our use of symbols. Before debating his perspective, I'll review it in detail.

According to Premack, six symbol systems are in use by humans: the genetic code, spoken language, written language, Arabic numerals, music notation, and Labanotation (a choreography coding scheme, oddly). Premack suggests that the first two "evolved" whereas the last four were invented. The rest of his article concerns the extent to which our use of symbols is unique in the animal kingdom. The author identifies six ways in which human symbol use is unique, discussed in brief below:
  • Voluntary Control of Motor Behavior. Premack argues that because both vocalization and facial expression are largely involuntary in the chimpanzee, chimps are incapable of developing a symbol system like speech or sign language.
  • Imitation. Because chimpanzees can only imitate an actor's actions on an object, but not the actions in the absence of the object that was acted upon, Premack suggests that language could not have evolved in chimpanzees.
  • Teaching. Premack claims that teaching behaviors are strictly human, defining teaching as "reverse imitation" - in which a model actor observes and corrects an imitator.
  • Theory of Mind. Chimps can ascribe goals to others' actions, but Premack suggests these attributions are limited in recursion (i.e., no "I think you thought he would have thought that.") Premack states that because recursion is a necessary component of human language, and because all other animals lack recursion, they cannot possibly evolve human language.
  • Grammar. Not only do chimps use nonrecursive grammars, they also use only words that are grounded in sensory experience - according to Premack, all attempts have failed to train chimps to use words with meanings grounded in metaphor rather than sensory experience.
  • Intelligence. Here Premack suggests that the uniquely human characteristics of language are supported by human intelligence. Our capacity to flexibly recombine pieces of sensory experience supports language, while the relative lack of such flexibility in other animals precludes them from using human-language like symbol systems.
Unfortunately, this cross-species comparison of symbolic systems seems flawed for a few reasons:
  1. There is no established test for what makes a given action voluntary, and so we cannot know that animal motor acts are truly involuntary.
  2. Voluntariness is irrelevant to the question of language. For example, if I am drunk out of my mind, and I involuntarily blurt out a recursive, grammatical and metaphorical verbalization, it is still human language, despite being involuntary.
  3. Imitation in the absence of an acted-upon object is also irrelevant to the question of language. Language is purely symbolic, meaning that it does not directly act upon anything, and so it's unclear why the physical circumstances of imitation should be so important.
  4. If one insists that physical imitation skills are critically relevant, Premack's claims still contradict evidence reviewed here suggesting that primates are capable of symbolic play without human instruction. And then there's some remarkable work from the Brazilian rainforest showing how primitive some human symbol use is.
  5. Teaching (even using Premack's narrow definition) is not a strictly human behavior: consider this evidence, reviewed by John Hawks, that meerkats modify the behavior of their errant meerkat pupils. Yet, as far as we know, meerkats do not have a human-like symbolic system. Thus, teaching of this sort seems irrelevant to the evolution of language.
The other sections have their flaws as well, but none as critical as those laid out above. Other lingering questions are more philosophical in nature, and often immaterial to Premack's point: for example, how does one distinguish symbols that evolved from those that were invented?

In summary, it seems like Premack has simply asked the wrong question. Instead of asking "is language the key to human intelligence?" (which forms the title of his piece), it may be more productive to ask "is intelligence the key to human language?"

8/28/2006

Encephalon - 5th edition

Welcome to the Fifth Edition of Encephalon, a neuroscience blog carnival.

Let's start with a topic of perennial interest: what really separates man from the animals? Neurontic presents one autistic's view of that distinction. Similar territory is covered in other posts from Neurontic, such as this one about that essential commonality across the animal kingdom - sex - and how modern neuropharmaceuticals may be altering sexual (and romantic?) experience. And then there's the flip-side: are there neurochemical benefits specific to unprotected sex?

If neurochemicals are your thing, you'll love The Story of NAAG. Cyberspace Rendezvous covers the possible source of the most important excitatory neurotransmitter - glutamate - in exquisite detail. Will it ever be possible to fully deconstruct the complicated machinery underlying neural computation?

The Neurophilosopher asks this question in a different way: to what extent will it be possible to reconstruct neural machinery with nanotechnology? The Neurophilosopher reviews the state of the art in nanomachinery, with a telling comparison to the complexity of potassium channel mechanics.

Coturnix's Blog Around the Clock takes the analysis of potassium channel mechanics a step farther: it's starting to look like voltage-gated potassium channels are not mere binary gates, with only simple on and off states. Instead, each ion channel may be capable of incredibly precise activity regulation - on the order of 1,000 different configurations - resulting in millions of different functional states.

Pure Pedantry connects this talk of potassium channels to the macroscale: some potassium channels may be critical for depression. As it turns out, mice without a TREK-1 potassium channel are "immune" to several established paradigms for inducing depression, and show other markers of non-depressed (or chemically anti-depressed) mice.

In an excellent post, Mind Hacks analyzes more macroscale issues of mental illness in the context of Philip K Dick's A Scanner Darkly (see also this post from the Neurophilosopher on the neural correlates of viewing rotoscope films, like the new film adaptation of A Scanner Darkly). Apparently the book was inspired partly by amphetamine use, and partly by Roger Sperry's work on split-brain patients.

Meanwhile, Thinking Meat takes a little swipe at such sci-fi views of the brain (i.e., the "brain as computer" metaphor) while introducing a new study on the role of nitric oxide in waking. Also, check out this post, an (appropriately) critical review of sex-differences research.

Along the same lines, the Mouse Trap has a tongue-in-cheek analysis of hemispheric processing asymmetries in males and females. There are several other nice posts from Sandy G, including this one about stage-like progressions in theory-of-mind development, as well as the principal components of personality that lead to "celebrity worship."

Finally, to end on a light note, here's a post at Omnibrain about some pernicious logical fallacies in computational neuroscience.

Retrospectacle will host the next edition of Encephalon on September 11th. Be sure to submit early!

8/22/2006

Developmental Change in Networks Supporting Visual Short-term Memory

In a recent issue of the Journal of Cognitive Neuroscience, authors Scherf, Sweeney, and Luna describe how a simple memory-guided visual saccade task can draw out developmental differences in the recruitment of a variety of frontal and parietal regions. In their task, 30 subjects (9 children, mean age 11.2; 13 teenagers, mean age 16; and 8 adults, mean age 29.5) crawled inside an fMRI scanner to complete a simple memory task, in which they viewed a display containing a target, returned their eyes to the center of the screen, and then - after a 5-second delay - gazed at the location previously occupied by the target. In baseline trials, subjects instead had merely to gaze at a target, so that no visual short-term memory was required.

By subtracting the BOLD signal in the baseline task from that in the memory-guided saccade task, the authors were able to isolate the regions specifically active in visual short term memory. (Note to the casual reader: you may wish to skip the italicized text below).

By focusing their analyses on 22 regions-of-interest (areas that have been implicated in visual short-term memory previously), the authors determined that all age groups recruited right DLPFC, right ACC, bilateral anterior insula, right superior temporal gyrus (STG), right interoccipital sulcus (IOS), and right basal ganglia. However, the amount of activation in each of these regions was related to age, as follows:
  • Both right basal ganglia (BG) as well as bilateral anterior insula (AI) activity declined with age, with BG activity decreasing most sharply between childhood and adolescence, and the AI activity decreasing most sharply between adolescence and adulthood. BG activation was also negatively correlated with performance.
  • Both right ACC and right IOS activity increased with age.
  • The right and left DLPFC showed a dissociation in their relationship to age, such that left DLPFC activity increased greatly with age (most sharply between adolescence and adulthood), whereas right DLPFC activity was elevated only in adolescents.
  • Right STG showed the opposite trend as right DLPFC, such that right STG was least activated in adolescents, relative to children and adults.
Yet other regions were activated only in older age groups. For example, children did not appear to use left DLPFC or left supramarginal gyrus, while the other age groups did (the inferior parietal activity in supramarginal gyrus, but not activity in DLPFC, was associated with better performance). The reverse trend was never found - in other words, children never recruited regions that were not also recruited to some extent by their older counterparts.

The authors describe how their results support a trend from the extensive use of basal ganglia in childhood to increasing reliance on prefrontal cortex with age. They also note their results are consistent with the idea that children rely more on ventral pathways than the dorsal stream - even in primarily spatial tasks like this one - perhaps as a result of the longer developmental timeframe for the dorsal pathway. Parts of the dorsal stream only become recruited by this task in adolescence.

DLPFC activity was highest among adolescents - a puzzling result, given the established role for DLPFC in working memory tasks as well as its relatively long developmental timeline. However, DLPFC activity became more focal in adults relative to adolescents, perhaps reflecting increased specialization and/or pruning.

Finally, the increasing use of ACC regions with age may reflect the increasing use of self-monitoring and error-checking practices among older subjects (in fact, activity in this region increased by 400% between adolescents and adults).

Related Posts:
Multiple Capacity Limitations for Visual Working Memory
Monitoring and Visual Working Memory
Functionally Dissociating Right and Left DLPFC
Developmental Change in the Neural Mechanisms of Risk and Feedback Perception
The Rules in the Brain

8/21/2006

Development of Visual Binding

In Mareschal & Bremner's chapter in the new volume of the Attention & Performance series, they describe how object location and object identity information may be bound together as a function of prefrontal development. Specifically, they show that human infants are incapable of paying attention to both object identity and object location information at once, and that this inability likely arises from lack of attentional capacity rather than more stimulus-driven causes. They then demonstrate how binding might be accomplished by a simple temporal synchrony mechanism in a neural network model.

The authors first describe how the dorsal and ventral visual processing streams have often been considered independently responsible for location and identity information, respectively, but increasing evidence supports the idea that they are integrated to some extent in human adults. For example, sensitivity to motion - what would generally be considered a "dorsal" task - has been observed in ventral areas. Nonetheless, an abundance of developmental data suggests that the two streams are indeed functionally segregated in the developing brain of infants.

To demonstrate this phenomenon, the authors extended some previous work by Mareschal and Johnson. (Note to the casual reader: you may wish to skip the italicized section below, which contains methodological details - the results are summarized below that).

The authors began by familiarizing infants to a display that contained either two female faces, two monochromatic asterisks, or two toys, one at either side of the display, and two white rectangles in the center. In each of these displays, infants were familiarized with the movement of two objects of the same type moving from the sides of the screen towards the center, and then back to the periphery. While the objects were located in the center of the screen, they were occluded by the two white rectangles.

After familiarizing infants with these displays, the authors then manipulated what happened after the objects became occluded. In baseline trials, the objects reappeared after occlusion as though nothing had changed; this is the only trial type in which a physically possible event occurred. In "location trials," both objects appeared from under the same white rectangle after occlusion - as though one of the objects had mysteriously changed location while occluded. In "identity trials," one of the objects was replaced by a novel object while occluded. Finally, in "binding trials," the objects switched locations: both the identities and the locations of objects in the display remained the same as before occlusion, but the specific combination of those two forms of information had changed - "object identity #1" now occupied "object location #2," and vice versa.

If infants could detect a difference between the possible and impossible events, one would expect a large increase in their mean looking time to the display - infants show a robust novelty preference in which impossible or otherwise surprising events show increased looking time relative to possible or otherwise familiar events.

As in previous work, the authors found that infants demonstrated sensitivity to either object location or object identity, but not both (i.e., there was no change in looking times to binding trials relative to baseline). However, in addition to these findings, the authors showed that infants tended to show sensitivity to object-identity information only for faces, and tended to show sensitivity to object-location information only for toys. Therefore, infants had not built up some kind of an attentional bias to process one or the other type of object information, but were instead flexibly maintaining object-identity or object-location information based on the type of object that had been presented.

The authors then describe how the binding together of identity and location information could be a later-developing capacity than representing information about either identity or location alone. They constructed a six-layer model with 400 visual input units, which project both to 5 object recognition units (a kind of "ventral" stream) and to 100 recurrently connected units (a kind of "dorsal" stream). These recurrent units projected to another layer of 75 hidden units. These hidden units, as well as the 5 object recognition units, both projected to an output layer of 100 units; in addition, the 75 hidden units also projected to a "predicted location" layer of 100 units. The ventral stream was trained through a self-organizing learning algorithm, while the dorsal stream was trained with backpropagation via the predicted location layer.
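
For readers who find this architecture hard to picture, here is a structural skeleton with the stated layer sizes. It is only a sketch: the weights are random, tanh is an arbitrary choice of nonlinearity, and the original model's self-organizing (ventral) and backpropagation (dorsal) training procedures are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes as described in the chapter
N_IN, N_OBJ, N_REC, N_HID, N_OUT, N_PRED = 400, 5, 100, 75, 100, 100

# Random matrices stand in for the trained connections
W_in_obj   = rng.normal(0, 0.1, (N_IN, N_OBJ))    # "ventral" stream
W_in_rec   = rng.normal(0, 0.1, (N_IN, N_REC))    # "dorsal" stream
W_rec_rec  = rng.normal(0, 0.1, (N_REC, N_REC))   # recurrent connections
W_rec_hid  = rng.normal(0, 0.1, (N_REC, N_HID))
W_hid_out  = rng.normal(0, 0.1, (N_HID, N_OUT))
W_obj_out  = rng.normal(0, 0.1, (N_OBJ, N_OUT))
W_hid_pred = rng.normal(0, 0.1, (N_HID, N_PRED))  # "predicted location" layer

def step(x, rec_prev):
    """One forward pass through both streams."""
    obj    = np.tanh(x @ W_in_obj)                        # object recognition units
    rec    = np.tanh(x @ W_in_rec + rec_prev @ W_rec_rec)
    hidden = np.tanh(rec @ W_rec_hid)
    out    = np.tanh(hidden @ W_hid_out + obj @ W_obj_out)
    pred   = np.tanh(hidden @ W_hid_pred)                 # trained by backprop originally
    return out, pred, rec

out, pred, rec = step(rng.normal(size=N_IN), np.zeros(N_REC))
print(out.shape, pred.shape)  # (100,) (100,)
```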

Unlike a previous network of identical design, which could only maintain information about a single object at one time, this network was made to accept pulsed firing (what they call "peaks") rather than rate codes. This resulted in the ability of the network to predictively track multiple objects simultaneously. To quote from the chapter itself: "Thus, for example, it is possible to encode the location of object 1 on peak 3 and that of object 2 on peak 1 down the dorsal stream, while encoding the identity information of object 1 on peak 2 and that of object 2 on peak 4 down the ventral stream." Weights were changed on the basis of all peaks, such that the connection weights came to represent general properties of objects. However, when binding is required, the prefrontal layer needed to "align" the proper peaks of activation so as to bind them with temporal synchrony.

The prefrontal layer accomplishes this by cycling among the different pulses from the two streams until it arrives at a feature-location pair which produces the least error. Although this may seem like a theoretical weakness, it does explain why binding tasks are frequently accompanied by a brief burst of gamma-band activity, during which the brain may be realigning representations in the dorsal and ventral streams. Furthermore, it also demonstrates how binding information might be more difficult to maintain than object identity or location information alone.

It is important to note that temporal synchrony mechanisms are still very controversial, particularly with regard to their possible role in accomplishing binding. Other mechanisms certainly exist which could support the same role, and which rely on other established properties of neural computation. Nonetheless, spike timing is known to carry information, and so temporal synchrony mechanisms seem plausible, at least.

8/18/2006

Disinhibition in The Gravity Error?

Increasingly flexible deployment of behavior is a hallmark of cognitive development - but even as adults, it can be difficult to overcome our habits. For children, this difficulty is even more pronounced. One classic demonstration of this difficulty comes from the gravity error, in which children are presented with an apparatus like the one pictured here; a ball may be dropped into one of three pipes, each of which "snakes around" before arriving at one of three locations at the bottom of the apparatus. The task is to find the location of a ball after it is dropped into one of the pipes. Successful performance requires that children overcome the prepotent tendency to assume the ball drops straight down, called the "gravity error," and instead to use their knowledge of the structure of the pipes to direct their searching.

Consistent with the idea that overcoming prepotent responses is a relatively late-developing ability, many children younger than 4 show robust "gravity errors" on this task, in which they mistakenly search in the location directly underneath the location where the ball was originally dropped, regardless of the pipe structures. In contrast, many children older than 4 are able to search in the correct location for the ball.

What mechanisms allow subjects to overcome the gravity error and ultimately search in the correct location? One account posits that increasing ability to inhibit competing representations is the major agent of change. According to this view, children who succumb to the gravity error are unable to inhibit searching directly below the location in which the ball was released.

Authors Freeman, Hood and Meehan set out to test this inhibition hypothesis with the following logic: if the youngest children who succeed at this task do so because they inhibit a tendency to search directly below the release location, they should also be more likely to avoid searching in that location if they're told to avoid finding the ball. Instead, these children should be more likely to search in a third, neutral location than in the location directly below the ball (hereafter the "gravity location"), even though either location would be a safe choice in a task where they had to avoid finding the ball. The authors predicted this pattern because searching in the "gravity location" would require disinhibition of the inhibition previously directed at that location.

Accordingly, Freeman et al. found that a majority of 4 year olds showed exactly this pattern. By the age of 5, however, a majority of children showed exactly the opposite pattern. And by 7 years of age, children showed no tendency to reach preferentially for either correct location in the avoidance task. What can be made of these results?

Freeman et al. argue that at 4 years, children must inhibit the gravity location in order to succeed at the original task. Then, when told to avoid finding the ball, they still avoid reaching to the gravity location even though that would be the safest place to search in an avoidance task. Instead, they search at a neutral location that does not require this effortful disinhibition. In contrast, the authors suggest that 5 year olds are aware that the gravity location is the "safest" choice in the avoidance task, and 7 year olds are aware that both locations are correct and thus there is no need to preferentially reach to one location or the other.

Are there alternative explanations?

One view is that because of the way the tasks were structured (continually alternating back and forth), the youngest children may actually have had difficulty in switching their behavior between avoiding the ball's true location and approaching it. According to this view, repeated task-switching could result in interference for both the gravity location and the ball's true location. This possible alternative could be addressed by using a blocked design, in which one would expect an even stronger trend in the same direction if disinhibition is difficult for 4 year olds.

8/17/2006

Selection Efficiency in Updating Working Memory

In Vogel, McCollough, and Machizawa's fascinating 2005 Nature paper, they describe how individual differences in short-term memory span appear to relate to the efficiency with which individuals can select the items to be maintained in memory. If this is truly the source of individual differences in working memory, then current "span measures" may be somewhat off the mark in their focus on capacity differences per se; likewise, even the relatively new "period measures" advocated by Towse et al., may only indirectly index the true source of memory differences.

Vogel et al. established the idea that selectivity is a primary source of differences in short-term memory with a relatively simple set of tasks. First, they had subjects view a display of colored and randomly rotated rectangles, and directed them to remember only the red rectangles in one particular half of that display. On any given trial, the items in the to-be-remembered half of the display consisted of either two red items, four red items, or two red and two blue items.

Previously, Vogel and Machizawa had identified a specific wave of electrical contralateral delay activity (CDA) which can be used as an index of an individual's working memory capacity. Specifically, this wave increases until it "maxes out" at an individual's capacity limit; the amount of increase in this wave between situations in which the subject is asked to remember 2 items, and when the subject is asked to remember 4 items, can be used as an index of WM capacity.
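
The logic of this index can be sketched in a few lines (a toy version with made-up amplitudes, not Vogel & Machizawa's actual analysis): estimate capacity as the set size beyond which the CDA stops growing.

```python
def capacity_from_cda(cda_by_setsize, plateau_tol=0.1):
    """Toy CDA logic: amplitude rises with the number of maintained
    items until it plateaus at the individual's capacity. Input maps
    set size -> mean CDA amplitude (hypothetical microvolts)."""
    sizes = sorted(cda_by_setsize)
    for prev, cur in zip(sizes, sizes[1:]):
        if cda_by_setsize[cur] - cda_by_setsize[prev] < plateau_tol:
            return prev               # no further rise: capacity reached
    return sizes[-1]

low_span  = {1: 0.5, 2: 1.0, 3: 1.05, 4: 1.05}  # plateaus near 2 items
high_span = {1: 0.5, 2: 1.0, 3: 1.5, 4: 2.0}    # still rising at 4 items
print(capacity_from_cda(low_span))   # 2
print(capacity_from_cda(high_span))  # 4
```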

In the current study, Vogel et al. were able to show that those subjects with high capacity were more likely to be able to ignore the blue items than the subjects with low capacity. In other words, the low-capacity subjects showed a CDA that increased to around four items when viewing a display that contained only two red and two blue items - suggesting they mistakenly updated WM representations with the distractor items. In contrast, high-capacity individuals showed no change in CDA between the two-red-and-two-blue display and the simple two-red display, suggesting they were able to more efficiently select the representations with which to update their short term memories.

In an extension to this work, the authors also showed that low-capacity individuals were somewhat better at selectively updating WM representations when the distractor items differed in terms of location, rather than simply color. This is to be expected, given that it intuitively seems easier to ignore items based on their location than on their color - you can simply avoid looking in their direction! Nonetheless, it underscores the importance of updating as one of the executive functions that subserves working memory.

If these results are to be believed, then it seems plausible that working memory span differences occur partly because of differences in capacity, but also partly because of differences in how efficiently individuals can use the capacity they have. But why should these traits be correlated? In other words, if these are truly separable functions, it seems likely that some individuals with high capacity could be impaired at updating. Likewise, it seems natural to expect that low-capacity individuals would optimize their WM updating, so as to make the most of what little capacity they have.

This mystery is still up for grabs - it's possible that a third variable influences both (such as genetics), but it's also possible that these two factors are causally related in a more direct way - for example, a by-product of increased selection efficiency may be more focused gating of representations, which could lead to a higher CDA asymptote. To illustrate, consider the case where neural oscillations in the thalamocortical circuit allow information to enter working memory. For individuals with high selection efficiency, these oscillations may "slosh around" in this circuit in a more precise way than they do in individuals with low selection efficiency. If we assume that increasing the focus of the oscillations increases the amount of neural activity directed at a particular representation, then it becomes clear that more highly-activated representations are more likely to cause an increase in the CDA wave, and thereby be maintained in working memory.

8/16/2006

The Argument for Multiplexed Synchrony

Although the spike-timing-dependent specificity of neural firing is an established phenomenon, many seem to doubt that more global forms of phase coding (i.e., "multiplexed synchrony," in which populations of neurons fire at specific phases of a particular firing rhythm) are neurally plausible. In contrast, authors such as Jensen, Lisman, and Idiart have championed the idea that brain functions like working memory can be understood as emerging from multiplexed theta and gamma oscillations in neocortex. Beyond the theoretical and computational possibility that such a mechanism exists, what empirical evidence supports this idea?

This question is addressed by John Lisman in a 2005 article from Hippocampus. He reviews several pieces of evidence in support of the multiplexing hypothesis, such as:

1) Both theta and gamma oscillations occur together in hippocampus, and furthermore, theta rhythms modulate the amplitude of gamma rhythms;

2) The frequency of the dominant oscillation within each of these bands is correlated with the other frequencies, such that a shift in one of them seems to be accompanied by a proportional shift in the other;

3) A phenomenon called "phase precession" is known to occur in awake rats that are exploring a radial arm maze; as they traverse the maze, hippocampal neurons with sensitivity to regions that were just visited fire just out of phase with neurons sensitive to regions that are about to be visited. This suggests that the mechanisms driving synchronous firing do have the temporal precision necessary to generate "multiplexed" phase relationships.

4) The average differences in phase between these differentially-tuned hippocampal neural ensembles always correspond to a particular fraction of theta; the actual position of a rat can be optimally reconstructed by analyzing the spike timing of these neurons in terms of where they fall within 5 or 6 "phase bins" - i.e., does a particular neural ensemble fire within the first 1/5 of a theta cycle, or the second 1/5? Using 5 or 6 phase bins resulted in a more accurate reconstruction of a rat's actual position than a similar analysis dividing spike timing into 4 or fewer bins, and was not significantly less accurate than dividing spike timing into 7 or more bins. (A toy version of this kind of phase coding appears after this list.)

5) Measurements of memory scanning from the Sternberg task suggest that memories in this task are searched sequentially and exhaustively, with memory scanning time corresponding roughly to the period of one gamma cycle.
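
To see how the multiplexing arithmetic works out, here is a toy calculation in the spirit of the Jensen & Lisman proposal (the frequencies are round numbers of my choosing): if each gamma subcycle within a theta cycle serves as a "slot" for one item, capacity falls out of the ratio of the two frequencies.

```python
theta_hz, gamma_hz = 7.0, 40.0

# Each gamma subcycle within one theta cycle is a "slot" for one item
slots = int(gamma_hz / theta_hz)
print(slots)  # 5 - close to classic estimates of short-term memory capacity

# Items are then distinguished by *when* they fire within the theta cycle
items = ["A", "B", "C", "D", "E"]
gamma_period_ms = 1000 / gamma_hz
for i, item in enumerate(items):
    print(f"item {item} fires ~{i * gamma_period_ms:.0f} ms into each theta cycle")
```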

Based on this evidence, it is unreasonable to insist that neural networks do not have the temporal precision to accomplish multiplexed synchrony. It's also unreasonable to claim that noise would have deleterious effects on these abilities - based on the paper reviewed in this post, we know that "noise" recorded in vivo shows properties that in some cases might actually increase the information capacity of the neural code. And if anyone should point to the fragility of this mechanism, let's remember that synchronous oscillations are seen to spontaneously emerge both in embodied computational models as well as in hippocampal culture.

Of course, it is still possible to claim that synchronous firing is merely a harmless correlate, rather than a cause, of cognitive functions. But such a claim would contradict evidence reviewed in this post on how recognition memory can be enhanced with slow-wave visual flicker - in other words, that an enhancement of a particular rhythm can enhance cognitive function. Evidence reviewed in this post also suggests a causal role for synchronous firing, in that presentation of auditory rhythms at harmonics of these neural oscillations seems to quicken reaction time. And we know from the paper reviewed in this post that synchrony probably does have a causal role in at least one aspect of cortical function - the rather important function of actually gating sensory information into cortex.

Unfortunately, it does not seem possible to disrupt synchronous firing selectively while leaving other cognitive functions intact. For example, muscimol and pentobarbital are known to disrupt gamma and theta rhythms, but clearly have cognitive effects as well. Exploring the effects on synchronous oscillations of muscarinic agonists, such as carbachol, and of other drugs like piracetam (which has been shown to elevate the density of muscarinic cholinoreceptors in frontal cortex) may also prove fruitful in further establishing the link between theta/gamma oscillations and cognition. Unfortunately, drugs like these can always be claimed to act on cognitive functions directly by some pathway that does not primarily involve synchrony; therefore pharmacology may be of limited use in establishing a causal role for synchrony.

In conclusion, a quick thought experiment: if a seagull is observed to flap its wings in synchrony and is able to fly, one might doubt that synchrony per se is important. Instead, one might point to something else that is accomplished in the process of synchronous wing-flapping, such as symmetric turbulent air flows. Even if every seagull ever observed must flap its wings synchronously in order to fly, it is still possible to doubt that synchronous wing-flapping per se is the feature that allows flight. And, in some sense this is true, because it is possible to construct flying machines that work without synchronous wing-flapping. Fundamentally, it is really the aerodynamics that matter, not synchronous wing-flapping. Nonetheless, it seems patently untrue to say that the synchrony of wing-flapping is merely a side effect of the ability of seagulls to fly.

Likewise, it is possible to construct computational models that work without synchronous oscillations. Fundamentally, one might argue that the information processing is what matters, not the specific implementation. Nonetheless, it seems unreasonable to say that synchrony of firing is merely a side effect of cognition.

8/15/2006

Valid Dimensions of Memory: Strength, Endurance, and Capacity?

Working memory is a central concept in the cognitive sciences, and is typically assessed through "span measures," which are frequently used as indices of individual differences in working memory. However, aspects other than capacity might also underlie individual differences in working memory, such as the endurance or strength of mental representations. In the most extreme case, one can even imagine issues of strength or endurance being orthogonal to issues of capacity.

Authors Towse, Hitch, Hamilton, Peacock and Hutton describe the dominant measures of working memory (i.e., span measures) as reflecting the "suitcase metaphor" of working memory: individuals differ in how much they can pack into these mental suitcases, a function of both "packing efficiency" (perhaps related to chunking?) and the size of the suitcases to begin with.

Towse et al. contrast this view of working memory with what they light-heartedly call a "thermos flask metaphor," in which individuals differ not only in flask size and "packing efficiency" but also in the degree to which they can maintain contents in their original state. Thus, they advocate investigation of what they call "working memory period," which is the longest interval over which information can be maintained during concurrent processing.

For example, in one of their tasks ("operation period") subjects had to remember the answers to a series of mathematical formulae. The number of formulae in any testing block always remained the same, but the length of the formulae changed. (Note that in the traditional measure of operation span, all the formulae are of the same length and blocks differ in the number of formulae).
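
A short sketch may make the span/period contrast concrete (the formulae below are hypothetical stand-ins for the actual stimuli):

```python
import random

def make_block(n_formulae, formula_length):
    """Generate one block of to-be-remembered sums. In a SPAN design,
    n_formulae varies across blocks; in a PERIOD design, n_formulae is
    fixed and formula_length varies, so storage load stays constant
    while the maintenance interval grows."""
    block = []
    for _ in range(n_formulae):
        terms = [random.randint(1, 9) for _ in range(formula_length)]
        block.append((" + ".join(map(str, terms)), sum(terms)))
    return block

for formula, answer in make_block(n_formulae=3, formula_length=4):
    print(f"{formula} = ?   (remember: {answer})")
```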

In a series of three experiments, the authors administered period tasks (along with standard span versions of each task, and various measures of scholastic aptitude) to 60 8-year-old children. The results showed that period measures have test-retest reliability similar to that of span measures, and that in some cases period measures appear to be a more isolated or controlled measure of one aspect of span tasks. For example, operation span did not account for differences in one measure of scholastic aptitude after controlling for operation period and processing speed.

The authors conclude that period measures may be a useful index of working memory abilities, and that their results tentatively support a task-switching view of working memory function, in which processing and maintenance occur sequentially, and in which additional processing increases the possibility that previously maintained items would decay.

Other theorists discuss concepts such as memory "strength," in the context of a perspective that memory consists of mental representations that are graded in their strength. Memory strength is assessed through measures like simple reaction time to a memory task, or Braver's (2001) working memory context index, based on the AX-CPT task. Memory strength may ultimately reflect the coherence of neural firing, and is hypothesized to positively correlate with task-switching abilities. However, one basic and as yet unanswered question is how these strength measures might correlate with the period measures above.

Which of these dimensions (strength, endurance, and capacity) account for unique variance in memory function, and what is the source of their uniqueness? Resource-sharing models of working memory are challenged by the results presented by Towse et al., in that processing efficiency did not decrease as a function of how many items were being maintained. In contrast, a task-switching view is both more compatible with these results, and more compatible with the idea that strength, endurance, and capacity might all be valid characteristics of the mechanisms subserving working memory. Furthermore, several working memory subfunctions have been proposed, and a task-switching view of WM is also more compatible with these theoretical advances.

Related Posts:
Monitoring and Visual Working Memory (Re: WM Subfunctions)
Don't Try This At Home: Working Memory and Convulsions (Re: Neural Coherence)
Working Memory Capacity: 7 +/- 2, around 4, or ... only 1? (Re: graded representations)
Multiple Capacity Limitations for Visual Working Memory (Re: WM subfunctions)
Memory Bandwidth and Interference (Re: WM subfunctions)
Mr Peanut and Working Memory (IQ's Corner)
Theta Frequency Reset in Memory Scanning (Re: WM Subfunctions)
Separate phases for encoding and retrieval in theta rhythms (Re: WM Subfunctions)

8/14/2006

Encephalon edition 4

The Neurocritic has posted the new edition of Encephalon, a neuroscience blog carnival! Be sure to head over there and check out all the links.

Because of a last minute change, the next edition of Encephalon will be hosted here on August 28th. Nominate blog posts (your own, or others') by sending a link to encephalon.host@gmail.com.

8/13/2006

Blogging on the Brain: 8/6 - 8/12

Some highlights from the week in brain blogging:

Cognitive Daily explains how you can train yourself to unintentionally confuse your colors.

Science Daily covers how to enhance peripheral vision with transcranial magnetic stimulation.

Another entry in the "adult neurogenesis" debate in the most recent issue of PNAS.

GNXP covers the narrowing IQ gap between blacks and whites.

OmniBrain links to videos of a new uni-roller bot (and no, that's not an established term).

The Splintered Mind asks what you see with your eyes closed...

8/11/2006

Monitoring and Visual Working Memory

Prefrontal cortex is known to be important for working memory processes, but there is debate surrounding whether it is important for the online maintenance of information, the online monitoring of information, or both. Some theories hold that prefrontal cortex (and dorsolateral PFC in particular) is important for the "executive" aspects of working memory, such as monitoring, evaluation, or retrieval/maintenance strategies, as opposed to maintenance itself.

A 2000 J Neurosci paper speaks to this point. Michael Petrides gave monkeys a visual working memory task in which either the delay between display and test or the number of items was varied. These monkeys had lesions of either mid-dorsolateral prefrontal (DLPFC) or anterior inferotemporal (aIT) regions, the latter being a part of the brain that is known to be involved in object processing and could potentially be the neural locus of the active maintenance of visual objects.

The author found a double dissociation between the effects of DLPFC and aIT damage, such that aIT damage affected the robustness of memory accuracy to increased delay, whereas DLPFC damage affected the robustness of memory accuracy to increasing set sizes. According to Petrides, this evidence suggests that aIT is more critical for maintenance, whereas DLPFC is more critical for the monitoring of multiple stimuli and responses. (Although Petrides claims that DLPFC representations are symbolic, the evidence presented here really can't speak to issues of representational format.)

One idea that may tie all of this evidence together is that representations become more abstract in more anterior regions, and that maintenance of a representation in an anterior region is used to select among representations in more posterior regions. According to this perspective, executive functions are accomplished by maintenance within more anterior regions, and the types of executive functions required in this task may involve deciding which of the items in each set size to attend to, or they may involve selectively biasing each of these items' representations so that they are maximally different from one another. Consistent with much previous work, this kind of activity would occur in dlPFC. In contrast, a more posterior region (such as aIT) might contain the object or item representations biased by more anterior regions; activity here should be particularly important for robustness across delays, since this is where the actual item information per se is being maintained.
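
A minimal formalization of this idea (my own toy sketch, not Petrides's model) treats top-down bias as extra input to matching posterior representations:

```python
import numpy as np

def select(posterior_activity, topdown_bias, gain=1.0):
    """Toy biased competition: a goal maintained anteriorly adds input
    to matching posterior item representations, and the most active
    item wins the competition."""
    return int(np.argmax(posterior_activity + gain * topdown_bias))

items = np.array([0.6, 0.5, 0.4])  # bottom-up salience of three items
bias  = np.array([0.0, 0.3, 0.0])  # PFC maintains "item 1 is relevant"
print(select(items, bias))          # 1: top-down bias overrides raw salience
```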

Several previous posts have discussed the anatomy that is relevant to visual short term memory. For example, Xu and Chun determined that inferior intraparietal sulcus activity is minimally sensitive to set size (pretty much regardless of object complexity), while superior intraparietal sulcus activity is sensitive to set size only for simple objects. Likewise, Vogel & Machizawa found parietal and lateral occipital areas were indicative of visual WM span (though these were ERP waves, and could certainly have their source in other regions of the brain).

On the other hand, at least one study has shown that differences in span may be accounted for by differences in "selection efficiency." This function certainly suggests a strong role for dlPFC, which is known to be involved in processes like selection and overcoming interference. And studies mentioned in this post, using TMS, found that disruption of neural activity in right & left PFC in humans resulted in lower visual recognition memory.

Related Posts:
Multiple Capacity Limitations for Visual Working Memory

8/10/2006

Reexamining Hebbian Learning

One of the fundamental ways that neurons compute is thought to be a form of learning called Hebbian learning, in which cells that "fire together, wire together." Other learning mechanisms, such as back-propagation, have proven useful in neural network simulations, but are often considered less biologically-plausible (although the evidence for some form of error-driven learning is accumulating). But given a few elaborations to the classic view of Hebbian learning, this simple rule can explain a wide variety of cognitive phenomena. These elaborations are the focus of McClelland's chapter in the new volume of the Attention and Performance series, summarized below.

McClelland begins his discussion with long-term potentiation, or LTP, in which the synaptic efficacy of "sending" neurons increases if the "receiving" neuron itself fires. In other words, the receiving neuron becomes more sensitive, or potentiated, to its input. Recent work has also established the importance of precise timing of this input: LTP is strongest when sending neurons fire just before the receiving neuron. However, there is also something called "heterosynaptic long-term depression," in which sending neurons that did not fire have their synaptic efficacy decreased. And then there is "vanilla" long-term depression, in which relatively weak activity in the receiving neuron actually results in a decrease, rather than an increase, in synaptic efficacy. Together, these phenomena describe a slightly more complicated and "non-monotonic" Hebbian learning curve, in which cells that fire together wire together, but those that fire just before the others become more strongly wired together ... and if the receiving cell does not fire (or fires only weakly), the sending cells "unwire." (More accurate, but definitely not as catchy.)
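
Here is one way to write that elaborated rule down (a sketch under my own parameterization; the thresholds, learning rate, and timing bonus are all illustrative):

```python
import numpy as np

def hebbian_update(w, pre, post, pre_leads=None, lr=0.1,
                   ltp_thresh=0.6, ltd_thresh=0.2):
    """Non-monotonic Hebbian rule as described above. pre: presynaptic
    activities in [0, 1]; post: postsynaptic activity; pre_leads: mask
    for inputs that fired just before the postsynaptic spike."""
    w = w.copy()
    if pre_leads is None:
        pre_leads = np.zeros_like(pre, dtype=bool)
    if post >= ltp_thresh:
        dw = lr * pre * post              # fire together, wire together
        dw[pre_leads] *= 1.5              # pre-before-post: extra LTP
        dw[pre < 0.1] = -lr * 0.5         # silent inputs: heterosynaptic LTD
        w += dw
    elif post >= ltd_thresh:
        w -= lr * pre * 0.5               # weak post activity: "vanilla" LTD
    return np.clip(w, 0.0, 1.0)

w = np.full(3, 0.5)
pre = np.array([0.9, 0.8, 0.0])           # the third input stays silent
w = hebbian_update(w, pre, post=0.9,
                   pre_leads=np.array([True, False, False]))
print(w.round(2))  # first input strengthens most; the silent one weakens
```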

McClelland next points out that the Hebbian learning rule, as frequently implemented, often seems incapable of learning certain types of problems; however, this perception can be traced to a few characteristics of hebbian algorithms - some of which accurately characterize human behavior, even if they don't make the ideal learning algorithm for non-linear classifiers in AI applications.

For example, McClelland considers the phenomenon of dystonia. Dystonia occurs when people who repetitively use the same muscle pairings (such as guitarists gripping a pick for hours on end) find that their muscles "enter into a state of chronic activation," perceived as a cramp. This could easily be explained as a result of hebbian learning, in which actions performed at the same time become progressively more associated, until one has difficulty activating one muscle to the exclusion of the others with which it was repeatedly paired.

McClelland also considers the case of phonological confusion in Japanese speakers with English as a second language; for this population, the English sounds /r/ and /l/ are notoriously difficult to distinguish. McClelland hypothesized that this difficulty arises from the fact that English /r/ and /l/ sounds actually correspond to the same phoneme in Japanese, and that every time an English speaker made either an /r/ or an /l/ sound, Japanese speakers would experience the activation of a single "r & l phoneme combination" representation. Through Hebbian learning, this would lead to /r/ and /l/ becoming further intertwined based on mere exposure alone.

Based on this reasoning, McClelland was able to design a procedure which could train Japanese speakers to perceptually discriminate /r/ and /l/ sounds - all without ever getting feedback on whether they were correctly guessing if a given sound was an /r/ or an /l/. The procedure was essentially the following: Japanese speakers began by listening to highly exaggerated /r/ and /l/ sounds, and classifying them as either "r's" or "l's." After getting several consecutive discriminations correct (but never being informed of this), the sounds were covertly replaced with slightly more similar /r/ and /l/ sounds.
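
In outline, this is an adaptive staircase without feedback. The sketch below is my reconstruction under stated assumptions: a simulated listener stands in for a real subject, and all parameters are invented.

```python
import random

def staircase_training(classify, n_trials=200, exaggeration=1.0,
                       step=0.1, criterion=3):
    """No-feedback training: present exaggerated /r/ vs /l/ tokens and,
    after `criterion` consecutive correct responses, covertly reduce
    the exaggeration. The listener is never told how they did."""
    streak = 0
    for _ in range(n_trials):
        truth = random.choice(["r", "l"])
        streak = streak + 1 if classify(truth, exaggeration) == truth else 0
        if streak >= criterion:                  # covert difficulty increase
            exaggeration = max(0.0, exaggeration - step)
            streak = 0
    return exaggeration

# Stand-in listener: more exaggerated sounds are easier to classify
def listener(truth, exaggeration):
    if random.random() < 0.5 + 0.5 * exaggeration:
        return truth
    return "l" if truth == "r" else "r"

print(staircase_training(listener))  # residual exaggeration after training
```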

This training procedure is thought to work for the following reasons. First, the exaggerated sounds activate distinct percepts by virtue of being exaggerated, instead of activating the "r & l phoneme combination" percept normally activated by any normal English pronunciation of /r/ and /l/. By repeatedly pairing /r/ and /l/ sounds with their respective percepts, the mappings between these representations would strengthen based on Hebbian mechanisms.

This may be one of the only examples in which training is not paired with feedback and is nonetheless completely successful. But even if there are other examples, this finding underscores just how pervasive Hebbian mechanisms may be in the neural computations underlying our everyday experiences.

Related Posts:

Towards A Mechanistic Account of Critical Periods

Neural Network Models of the Hippocampus

Learning Like a Child

8/09/2006

Don't Try This At Home Either: Perceptual Enhancement Among the Deaf

If febrile convulsions can confer benefits to learning and memory, then might other neurological disorders offer similar cognitive enhancement? As it turns out, an article in the newest issue of the Journal of Cognitive Neuroscience speaks to this very question, and turns up some fascinating results.

Authors Stevens and Neville first consider whether there might be some brain regions that are more plastic than others. If so, they ask, wouldn't these regions be the most likely to be disrupted by some developmental disorders, and yet also more likely to be enhanced in an attempt to compensate for yet other deficits?

As the authors discovered, this very pattern had been observed in the magnocellular visual pathway, an input to the brain's "dorsal stream" that is largely responsible for motion processing and the perception of low spatial frequencies. Stevens & Neville found a number of studies reporting motion processing deficits in dyslexics, autistics, and those with Turner or Williams syndrome, and yet they also found a completely separate literature describing enhanced motion processing in congenitally deaf populations.

Unfortunately, the techniques used to assess cognition in deaf and developmentally disordered individuals are often very different; given these differences, it was impossible to use the previous literature to make a definitive claim about the plasticity of the magnocellular pathway and its "double-edged" nature - in which an overly plastic area can be either selectively enhanced or deteriorated, according to circumstance.

So, in the first demonstration of both neurocognitive enhancement and deficit within the same paradigm, the authors tested motion and central field visual processing in 17 deaf adults, 15 dyslexic adults, and sex-, age-, handedness-, education-, video-game-use-, and socioeconomically-matched control subjects. The results showed that dyslexics could detect motion only in a much smaller field-of-view than the control group. In contrast, the deaf group could detect motion across a much larger field of view than control subjects. No group differences were found in the central field visual processing task, which is primarily sensitive to parvocellular (as opposed to magnocellular) function.

Stevens & Neville concluded that "motion processing is selectively modifiable," and that neuroplasticity thus has a double edge: highly plastic brain regions, such as the magnocellular pathway, are both more vulnerable to deterioration, and yet more promising for enhancement, than other brain regions.

8/08/2006

Don't Try This At Home: Working Memory and Convulsions

Febrile convulsions are a fairly common effect of childhood fever: by some estimates, 1 in 20 children will experience a fever-induced seizure, with this likelihood increasing significantly if one or both parents have a history of febrile convulsions. It would be easy to dismiss febrile convulsions as merely a harmful side effect of illness, if it weren't for one fascinating fact: children with a history of febrile convulsions (FC) have in some cases been reported to do better in school than their healthy peers.

Following up on work that showed a scholastic advantage for children with FC, Chang et al. interviewed over 4,300 Taiwanese families to identify 103 children with FC. These children were then brought into the lab for testing on a variety of memory and learning tasks. For comparison, an age- and sex-matched control group of 213 healthy children was selected from the Taiwanese school districts and run on the same tasks. There were no significant differences in parental socio-economic status between the two groups, and yet the two groups showed remarkable differences on several cognitive tests.

The group with a history of febrile convulsions performed significantly better on three tests of spatial memory than the control group. Only in terms of the number of errors on a "sequential learning" task did the FC group do significantly worse than controls; in all other cases the FC group did just as well - or significantly better.

These findings are compatible with those published previously by the same authors, showing that children with a history of FC perform better on a measure of scholastic aptitude. Because the actual tasks used in this specific study are unconventional, and are not adequately explained in the paper, it's hard to know exactly what cognitive functions may be improved in FC (although the authors claim their task dissociates mnemonic capacity from executive functions, I don't buy it). One message is clear, however: children with a history of febrile convulsions can show a cognitive advantage over their peers.

Why should a symptom of severe illness - full-body seizures - be related to these kinds of cognitive advantages? No one knows for sure, but one possibility raised by the authors is that those who experience FC have a slightly higher density of NMDA receptors, possibly in the hippocampus. According to their logic, this may result in a lower seizure threshold, but under normal circumstances could result in improved LTP or spatial learning.

This study should be taken with a grain of salt - many of the tasks used are unconventional, and this subject matter is unusual for the journal in which the study was published (although Neurology is a high-impact peer-reviewed journal). Some previous studies have not found any cognitive advantage among children with FC, but that may be due to their use of hospital populations, who tend to show a higher incidence of comorbidity, in particular with mental retardation. On the other hand, this line of research may provide a window onto one of the brain's many delicate balancing acts: how to maintain sensitivity and yet avoid hyper-excitability (such as might lead to convulsions).

Related Posts:
Smarter than the Average Primate: How Children and Chimps Sometimes Outperform Human Adults
Neural Oscillations and the Mozart Effect: Does Classical Music Really Improve IQ?
Video Games - Mental Exercise or Merely Brain Candy?
Enhancing Memory With Visual Flicker: Peripheral visual stimulation can enhance recognition memory

8/07/2006

Strength through Synchrony

Our connection to the external world occurs only through the thalamus, through which all sensory signals (except olfaction) must pass in order to gain access to the neocortex. As Alonso points out in his editorial on Bruno & Sakmann's 2006 Science paper, our sensory systems are incredibly sensitive - many observers can detect a single photon in complete darkness, and we'll notice an indentation on our skin of as little as 20 microns (about 3 times the diameter of a single red blood cell).

And yet, the neural connections between thalamus and cortex are incredibly weak, both in terms of the strength of their signal (0.5 mV, 30 times smaller than average intracortical connections) and in terms of the prevalence of synapses ("thalamocortical synapses account for less than 15% of all synapses onto L4 spiny neurons" in the cortex, according to Bruno & Sakmann). So how do we maintain such sensitivity in the absence of any substantial neural signals?

One theory holds that signals from the thalamus are amplified by recurrent connections in cortex. Another theory suggests that thalamic signals don't need a "cortical amplifier" to evoke an action potential in cortex, but merely need to synchronize their firing to take L4 neurons over threshold. Bruno & Sakmann found evidence that this may be the exact mechanism by which thalamocortical connections evoke action potentials in L4: sensory stimulation, but not sinusoidal electrical stimulation of the thalamus, evoked "synchronous discharge of thalamic neurons."

Further analyses showed that as few as 30 thalamic cells firing synchronously could suffice to evoke action potentials in cortex. Therefore, the authors argued that a cortical amplifier in L4 is unnecessary to account for how cortex receives inputs from thalamus, and thus, how we are able to perceive the outside world.
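To make the arithmetic concrete, here's a back-of-the-envelope sketch in Python - with membrane values I've assumed for illustration, not numbers taken from Bruno & Sakmann - showing why roughly 30 synchronous 0.5 mV EPSPs would suffice to bridge the gap between rest and spike threshold:

    # Toy calculation (assumed values, not taken from the paper): how many
    # ~0.5 mV thalamocortical EPSPs, arriving synchronously, are needed to
    # push an L4 neuron from rest to spike threshold?
    EPSP_MV = 0.5          # assumed single-synapse EPSP amplitude
    RESTING_MV = -65.0     # assumed resting membrane potential
    THRESHOLD_MV = -50.0   # assumed spike threshold

    needed = (THRESHOLD_MV - RESTING_MV) / EPSP_MV
    print(f"Synchronous EPSPs needed: {needed:.0f}")  # -> 30

Real EPSPs decay within milliseconds, which is precisely why synchrony matters: asynchronous inputs would dissipate before they could summate.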

8/04/2006

Blogging on the Brain: 7/30 - 8/4

Some highlights from the week in brain blogging:

Where are the genetics of IQ? The always excellent GenExp covers a recent article that comes up empty-handed in identifying the precise genes responsible for IQ's hereditary component.

How to Get Better Sleep: Mind Hacks points out an article in Science with advice about how to apply the latest in sleep research to your own nightlife.

Noticing Changes: Cognitive Daily finds a new demonstration of change blindness.

Sound Improves Learning: Dr. Jarrett covers a fascinating study in which compatible sound effects increased the speed of learning a perceptual task.

Rat Art: A petri dish of rat neurons uses robotic arms to paint. Wild!

Alex the Gray Parrot has object permanence! (How similar are parrots and children, really?)

Life Like a Movie: An oldie by Christof Koch in Sci Am, about whether consciousness is more like a series of snapshots or a continuous stream.

Have a nice weekend!

Localizing Executive Functions in Prefrontal Cortex

Continuing on from yesterday's post about the hypothesized functions of left and right dorsolateral prefrontal cortex ("strategy production" and "error checking" respectively), this 2004 Brain paper argues that the computational functions of left and right ventrolateral prefrontal cortex are the top-down control of task set and inhibition, respectively. Below I describe how the authors arrive at these conclusions based on their analyses of specific regions of ventrolateral prefrontal cortex (vlPFC; specifically: middle & inferior frontal gyrus, along with pars opercularis), and from this I extract some general lessons about the functional differences between right vs. left (and between ventrolateral vs. dorsolateral) prefrontal cortex.

Authors Aron, Monsell, Sahakian, and Robbins used MRI to determine the locus & extent of damage to a variety of prefrontal regions in 36 brain-damaged patients (17 with focal lesions to the left PFC, and 19 with focal lesions to the right). Each of these patients, along with 20 age- and IQ-matched controls, then performed a task-switching paradigm.

[methodological details follow in italics]

In one of the tasks, subjects had to respond based on the direction in which an arrow pointed; in the other task, subjects had to respond based on the direction indicated by a word written inside the arrow. The type of judgment to be made on any given trial (i.e., direction of the arrow, or of the word written inside the arrow) changed only on every fourth trial - in other words, the task structure was AAABBBAAABBBAAA, and the type of task to be performed next was indicated when the task switched. Additionally, the stimuli were displayed on an inverted Y framework so that subjects knew when the task would switch, based on a "thickened bar" (see the picture at the start of the article, and hopefully this will make sense). There were three types of stimuli: congruent (in which the word and the direction of the arrow were compatible; e.g., both indicating "left"), incongruent (in which word & arrow direction were incompatible; e.g., an arrow pointing left with "right" written inside it), or neutral (e.g., an arrow pointing left with "---" written inside it; or the word "left" written inside a rectangle). For each of 36 trials, the stimuli appeared either 1.5 seconds after the subject's last response (the response-stimulus interval, hereafter RSI) or 0.1 seconds after their last response; this then changed for the next block of 36 trials, for a total of 14 blocks (the first six of which were considered "practice" and made easier).
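For readers who find designs easier to parse as code, here's a minimal sketch of the trial structure as I understand it (the function and field names are mine, not the authors'):

    # My reconstruction of the design described above: tasks alternate in
    # runs of three (AAABBB...), and the RSI is fixed at either 1.5 s or
    # 0.1 s within each 36-trial block.
    def make_block(rsi, n_trials=36, run_length=3):
        trials = []
        for i in range(n_trials):
            trials.append({
                "task": "arrow" if (i // run_length) % 2 == 0 else "word",
                "switch": i > 0 and i % run_length == 0,  # first trial of a new run
                "rsi": rsi,
            })
        return trials

    # 14 blocks with the RSI alternating between blocks (the first six = practice)
    blocks = [make_block(rsi=1.5 if b % 2 == 0 else 0.1) for b in range(14)]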

What conclusions can be drawn from the results?

Not surprisingly, frontal damage makes you slower on trials where the task switches, and this is the case even when the RSI is long - that is, even when you have been given adequate time to prepare for the task switch. However, there's reason to believe this similar-looking deficit is caused by different computational mechanisms in patients with right- versus left-sided damage.

Right frontal regions are particularly important for resolving interference when the task switches (based on the fact that RF patients made more errors than any other group on trials where the task switched and the stimuli were incongruent). These patients' residual switch cost (the switch cost that remains even at the long RSI - i.e., the difference between switch and repeat trials when ample preparation time has been given) was strongly correlated with damage to the pars opercularis, as was a simple measure of inhibition (stop-signal reaction time).
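In case that measure is opaque, here's how a residual switch cost could be computed from trial-level data - a sketch continuing the hypothetical structure above, with an assumed "rt" field holding each trial's reaction time; this is not the authors' analysis code:

    from statistics import mean

    def residual_switch_cost(trials, long_rsi=1.5):
        # Residual switch cost: the switch cost remaining at the long RSI,
        # i.e., with ample time to prepare for the upcoming task.
        long_trials = [t for t in trials if t["rsi"] == long_rsi]
        switch_rts = [t["rt"] for t in long_trials if t["switch"]]
        repeat_rts = [t["rt"] for t in long_trials if not t["switch"]]
        return mean(switch_rts) - mean(repeat_rts)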

In contrast, left frontal regions are particularly important for endogenous, top-down control of task set - or in plainer terms, selecting a response under conditions of conflict (based on the fact that on task-repeat trials, LF patients show a larger difference both between incongruent and congruent trials, and between congruent and neutral trials). This pattern of results was strongly correlated with damage to the middle frontal gyrus in left-damaged patients, suggesting that this structure may be particularly crucial for these functions.

The authors speculate that after anterior cingulate detects task-related conflict, right pars opercularis or inferior frontal gyrus may execute the required "reactive suppression of task set." In contrast, ventrolateral prefrontal regions of the left hemisphere may be responsible for task set selection and maintenance.

How does this fit with the many posts on task-switching or prefrontal function that I've made previously?

First of all, Aron et al.'s theory of ventrolateral PFC function is very compatible with the Shallice chapter reviewed yesterday, in which dorsolateral PFC function is divided into procedure or strategy production (left) and error checking (right). One can easily imagine left DLPFC producing a new strategy and passing it to left VLPFC for maintenance, while right DLPFC checks for any errors and, if it finds them, activates right VLPFC for inhibition. This explains why patients with right VLPFC damage, such as those reviewed above, would only show deficits after a task switch, as opposed to the more general impairments of left VLPFC patients.

Secondly, this theory is also compatible with Tuesday's post about developmental change in risk perception. Leijenhorst et al. found that right vlPFC is more activated by negative than by positive feedback in children and adults alike, though some differences did emerge: adults activated right DLPFC, bilateral ACC, and right VLPFC more for high-risk than low-risk decisions, whereas children recruited only ACC and right VLPFC in the same comparison. According to the logic presented by Aron et al., high-risk trials carry more possibility for error than low-risk trials, so it makes sense that right prefrontal regions would be more activated here; both trial types have the same task-set requirements, so no difference is seen in left VL or DLPFC activity.

Thirdly, and most straightforwardly, Aron et al.'s perspective meshes nicely with Badre and Wagner's perspective on task switching, as reviewed here - that vlPFC is responsible for overcoming interference - and with their fMRI findings that left mid-VLPFC, left posterior VLPFC, and left DLPFC were all more highly activated by switch than by repeat trials. Unfortunately, these authors did not perform the congruent vs. incongruent comparison, which seems important for revealing right VLPFC activity in task switching.

This theory is also compatible with findings from imaging of lapses in attention on the global/local task, where right prefrontal (and ACC) regions dip in activity before stimulus onset, and then, after stimulus onset, right inferior frontal gyrus (the same region as pars opercularis) markedly increases in activation. Using Aron et al.'s logic, these abrupt shifts might reflect an initial failure to inhibit the irrelevant feature, followed by a rapid recovery.

However, this left/right distinction contrasts with other divisions of labor in prefrontal cortex, notably those reviewed in this presentation, where bilateral DLPFC is held to be in charge of error checking or results monitoring, and VLPFC in charge of strategy maintenance.

Related Posts:
The Rules in the Brain
Imaging Lapses of Attention
Task Switching in Prefrontal Cortex
Developmental Change in the Neural Mechanisms of Risk and Feedback Perception
Functionally Dissociating Right and Left dlPFC

8/03/2006

Functionally Dissociating Right and Left dlPFC

Shallice's chapter of Attention & Performance XXI focuses on two of the functions that are necessary for cognitive flexibility: the production of procedures that can be used to attain a goal, and the "error checking" that must be done to ensure that the produced actions are helping to attain the goal. He argues that these computations are primarily subserved by the left and right dorsolateral prefrontal cortex (DLPFC), respectively, and that together they fit well with the ways prefrontal cortex has been described (e.g., as involved in the on-line monitoring, maintenance, and manipulation of recent information).

Shallice reviews several studies that seem to support this hypothesis, such as:

1) Jahanshahi et al's 1998 study showing that repetitive transcranial magnetic stimulation (TMS) to the left DLPFC more strongly disrupted random number generation than TMS to the right DLPFC, suggesting that left DLPFC is involved in the random number generation process itself (though it seems to me that this particular result could just as well implicate lDLPFC as an error checker);

2) right but not left DLPFC damage resulting in twice as many perseverative responses in a free-recall task, thus suggesting that rDLPFC is normally involved in "editing" or "checking" processes (Stuss et al 1994);

3) Decreased accuracy in memory judgments is accompanied by increased rDLPFC activity, thus implicating rDLPFC in "error checking" - albeit unsuccessful checking in these studies (Henson et al., 1999 & Eldridge et al., 2000);

4) Increased rDLPFC (but not lDLPFC) activity with increasing proactive interference (Henson et al 2002);

5) An EEG wave appears directly centered on rDLPFC electrodes, at around 1 second after stimulus presentation, thus fitting the "temporal profile" of a region responsible for error checking (Wilding & Rugg 1996)

6) Repetitive TMS over rPFC during retrieval, or over lPFC during encoding, in a visual recognition memory task significantly decreased recognition accuracy; however, only rPFC TMS resulted in a significant lowering of the response criterion relative to a baseline "sham TMS" condition (Rossi et al 2001; see the sketch after this list for what a criterion shift means);

7) In a visuospatial version of the Wisconsin Card Sort (the Brixton Spatial Rule Attainment task), where subjects must determine the rule governing the varying location of blue dots on a series of cards while ignoring the rule governing the varying location of red dots on the same cards, patients with right lateral prefrontal damage, but not left, showed a tendency to make perseverative errors (Reverberi et al, 2004).
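As promised in point 6, here's a quick signal-detection refresher - my own illustration with made-up hit and false-alarm rates, not data from Rossi et al. The response criterion c indexes a subject's overall willingness to respond "old," and a lower (more negative) c means a more liberal criterion:

    # Standard signal-detection formula: c = -(z(hits) + z(false alarms)) / 2
    from statistics import NormalDist

    def criterion(hit_rate, fa_rate):
        z = NormalDist().inv_cdf
        return -(z(hit_rate) + z(fa_rate)) / 2

    print(criterion(0.80, 0.20))  # balanced responding: c = 0.0
    print(criterion(0.85, 0.40))  # says "old" more readily: c is about -0.39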

It should be noted that it is difficult to rule out an alternate explanation of how/when rDLPFC is recruited for tasks: perhaps rDLPFC is simply involved when cognitive effort is increased. Of course, this may be a complementary (but in my opinion, less concrete) form of the "error checking" hypothesis.

On the other hand, I have reviewed previous studies with findings that are compatible with this view. For example, the recent Weissman et al. paper in Nature Neuroscience showed rPFC and ACC reductions during lapses of attention. Earlier this week I wrote about another compatible finding, this time from Neuropsychologia, in which right ventrolateral regions of PFC were activated selectively by negative (not positive) feedback.

Related Posts:
Imaging Lapses of Attention
Reversing Time: Temporal Illusions
Developmental Change in the Neural Mechanisms of Risk and Feedback Perception

8/02/2006

Excitatory Reverberations in Hippocampus Culture

A recent article by Lau and Bi (PNAS, 2005) explores the mechanisms that support persistent reverberatory neural activity in rat hippocampal cultures, as a model for how persistent reverberatory activity may accomplish active maintenance in humans. The authors found that brief stimulation of just a single neuron was enough to evoke reverberatory activity for several seconds across a much larger network. Experimentation with AMPA- and NMDA-blockers strongly suggested that this up-state of activity depends on recurrent excitatory connections, and that it is enhanced if stimulation is paired-pulse, with interpulse intervals of 200-400 ms. Based on further manipulations in which they interfered with intracellular calcium storage mechanisms, the authors suggest that paired pulses facilitate reverberatory activity because they support the release of intracellular calcium.

Some cultured networks, however, failed to reverberate in response to electrical stimulation. The authors hypothesized, and later verified, that this was due to an imbalance of excitatory to inhibitory neurons. The authors also observed that reverberatory activity was typically partially synchronized, with a frequency similar to the theta band observed in some working memory research.

The idea that a critical balance between excitatory and inhibitory activity makes unique neural behaviors possible (such as these persistent reverberations) is not new, but it is important that this has been observed in real neurons (as opposed to merely in computational models). The authors found that networks with greater than 10-20% inhibitory neurons were impaired in their ability to persistently reverberate. However, the usual caveats apply: these were rat neurons, not human neurons; these were hippocampal neurons, not prefrontal neurons; and these were grown in a dish, not in a skull.
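To get an intuition for why excess inhibition would quench reverberation, here's a toy firing-rate simulation - my own construction with hand-picked weights, vastly simpler than a real culture - in which a brief pulse ignites persistent activity only when the inhibitory fraction stays low:

    # Toy rate model (mine, not the authors'): one recurrent population in
    # which a fraction of the feedback is inhibitory. A brief pulse either
    # ignites self-sustaining activity or dies out, depending on that fraction.
    import math

    def final_rate(inhib_frac, steps=400, dt=0.05):
        w_exc, w_inh = 2.0, 6.0   # hand-picked synaptic weights
        r = 0.0                   # population firing rate (arbitrary units)
        for t in range(steps):
            drive = 1.0 if t < 5 else 0.0                 # brief input pulse
            net = (w_exc * (1 - inhib_frac) - w_inh * inhib_frac) * r + drive
            r += dt * (-r + max(0.0, math.tanh(net)))     # leaky dynamics
        return r

    for frac in (0.05, 0.10, 0.20, 0.30):
        print(f"{frac:.0%} inhibitory -> persistent rate {final_rate(frac):.2f}")

With these particular numbers, activity persists at 5-10% inhibition but collapses at 20-30%, loosely echoing the 10-20% boundary reported above.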

Related Posts:
Sequential Order in Precise Phase Timing
Models of Active Maintenance as Oscillation
Neural Network Models of the Hippocampus

8/01/2006

Developmental Change in the Neural Mechanisms of Risk and Feedback Perception

Effective decision making involves diverse skills, including the estimation of risk, constant monitoring of feedback, and task-set maintenance - all of which undergo rapid developmental shifts even before adolescence. What aspects of decision making change during this time, and is it possible to localize these functions to their hardware components in the brain? This is the question explored in the first fMRI study of decision making in children younger than 12, by Leijenhorst, Crone and Bunge.

According to these authors, recent neuroimaging evidence supports the idea that risk estimation & anticipation can be localized to the orbitofrontal cortex (OFC) and the anterior cingulate cortex (ACC), while ventrolateral prefrontal cortex (vlPFC) is activated by negative performance feedback. (These results are surprising, given that ACC is typically considered a "conflict detector" or "error monitor"; one would expect ACC, and not necessarily vlPFC, to be engaged by negative feedback.)

One useful task for investigating decision making is the Iowa Gambling Task, in which healthy subjects learn to forego immediate gains for larger long-run rewards. The task itself usually consists of four decks of cards; if a card is drawn from two of these (A or B), the participant receives a large reward (e.g., $100), whereas a card drawn from the other two decks (C or D) provides only a small reward ($50). However, decks A and B each contain 10% penalty cards, on which the participant loses $1250; C and D contain similar cards, but only for $250. On the whole, then, decks C and D are "good" decks, while A and B are "bad" decks (as the quick calculation below shows); to succeed on this task, participants must be capable of maintaining their task set, monitoring feedback, and estimating risk.
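As a sanity check on why C and D are the "good" decks, the expected values per draw (using the simplified payoffs given above) work out as follows:

    # Expected value per draw, given the payoff structure described above
    def expected_value(reward, penalty, penalty_rate=0.10):
        return reward - penalty * penalty_rate

    print("Decks A/B:", expected_value(100, 1250))  # -25.0 per draw ("bad")
    print("Decks C/D:", expected_value(50, 250))    # +25.0 per draw ("good")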

Typically, subjects with OFC damage perseverate on the bad decks, and furthermore do not show the typical stress reaction (as measured by galvanic skin response) when hovering over the "bad" decks. One interpretation of this evidence is that OFC subserves the estimation of risk, and may be specifically engaged when "reversal learning" is required (i.e., changing responses on the basis of risk estimation). Other imaging evidence shows that DLPFC is engaged when subjects decide to forego immediate gain for future reward (perhaps because it maintains the choice to wait).

Given the different developmental trajectories of these brain regions, Leijenhorst, Crone and Bunge created a child-adapted version of the gambling task to differentiate risk estimation and feedback processing in 9-12 year olds and young adults. They used the "cake task," in which subjects are presented with a display containing 9 slices of cake and must decide which of two types of cake (chocolate or strawberry) is most likely to be selected by the computer (which chooses at random). The task is designed such that there are high-risk decisions (in which three or four pieces differ in flavor from the others) and low-risk decisions (in which only one or two pieces differ in flavor from the others), as well as positive feedback (when the subject's chosen flavor is the one randomly selected by the computer) and negative feedback (when it is not).
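Here's the risk structure in miniature (my own reconstruction from the description above; the helper name is mine): with 9 slices and a uniformly random computer choice, picking the majority flavor wins with probability (9 - k)/9, where k is the number of odd slices out.

    # Win probability for the majority-flavor choice, as described above
    from fractions import Fraction

    def p_win(odd_slices, total=9):
        return Fraction(total - odd_slices, total)

    for k in (1, 2, 3, 4):
        label = "low-risk" if k <= 2 else "high-risk"
        print(f"{k} odd slice(s): p(win) = {p_win(k)}  ({label})")

Note that even on high-risk trials the majority flavor wins only 2/3 or 5/9 of the time, so negative feedback is fairly common even when the subject chooses correctly - a point that matters for the feedback analyses below.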

[For the fMRI haters in the audience, I highly suggest reading Section 2.4 of this paper, which details every step of the fMRI data analysis - this particular study was quite well done. For now, I'll just say that they used an event-related design with over 120 trials for each of 26 subjects. The authors performed ROI analyses of OFC, VLPFC, DLPFC, medial PFC/ACC, and midbrain, by contrasting high-risk positive-feedback trials with low-risk positive-feedback trials, and high-risk positive-feedback trials with high-risk negative-feedback trials; the ~6% of trials on which subjects selected the wrong type of cake were removed from the analysis. The results are as follows.]

Both children and adults picked the most likely cake flavor over 90% of the time, and both groups also made more errors on high-risk than low-risk trials (though children were more prone to this mistake than adults). The risk contrast revealed more DLPFC and OFC activity in high-risk trials relative to low-risk trials, but there was no interaction with age, suggesting that children may engage these areas during risk estimation in much the same fashion as adults. Only in children were medial PFC & ACC more engaged during high-risk than low-risk trials, suggesting that age-related differences in risk estimation may have to do with the amount of perceived response conflict. Midbrain ROI analyses revealed nothing significant.

In contrast, medial and ventrolateral PFC were more active for negative feedback than positive feedback across ages, suggesting that these regions are not responsible for age-related differences in feedback processing. However, children engage lateral OFC more fully than adults during negative feedback relative to positive feedback trials. To rule out the possibility that this activity merely reflects the unexpectedness of negative feedback (since, statistically speaking, the subject actually made the most likely choice, and was wrong only by chance), the authors contrasted positive & negative feedback trials among low-risk trials alone. Consistent with their original interpretation, this contrast was not significantly different from the contrast performed only on high-risk trials. [The authors also report whole-brain analyses, which are inferior to ROI analyses in terms of statistical power, and are thus not discussed here.]

In summary, children and adults seem to differ both in risk processing (children engage medial PFC & ACC more under conditions of high risk) and in feedback processing (children engage lateral OFC more during negative relative to positive feedback), but they also share many similarities (both groups utilize DLPFC and OFC more in high-risk than low-risk trials, and both engage mPFC & vlPFC more during negative than positive feedback). The behavioral results indicate that children are more likely than adults to make risky decisions in high-risk situations. Based on the observation that the error-related negativity increases with age during adolescence, children may be activating ACC more highly because they experience greater response conflict than adults.

At the end of the article, the authors discuss other evidence suggesting that children may have difficulty in processing negative feedback because they tend to assume that even irrelevant negative feedback is relevant to their behavior, in contrast to other explanations that suggest children are merely less able to adjust their behavior based on negative feedback.

Related Posts:
Risk Taking and Intelligence
The Rules in the Brain (and the development of oPFC, dlPFC, rlPFC & vlPFC)
Softmax rule for exploration-exploitation (Neurodudes)