6/15/2006

The Rules in the Brain

Much of cognitive psychology has undergone a profound shift in emphasis during the last decade or so, moving from abstract information processing accounts of cognition towards a more embodied view, one which seeks to identify the brain regions that implement the cognitive processes studied for decades in cognitive science.

One cognitive process (or set of processes?) that has been particularly elusive is executive function: this "catch-all" phrase has been difficult to pin down either in information processing terms, or in terms of which brain regions implement it. Notable exceptions include some fascinating factor analyses that identify likely executive subfunctions, and a new paper by Bunge and Zelazo in a recent issue of Current Directions in Psychological Science.

In this paper, the authors argue that orbitofrontal cortex is responsible for the control of simple stimulus-reward rules, as might be required when the reward value of a stimulus undergoes rapid reversal. Accordingly, damage to oPFC results in impaired learning of reversed stimulus-reward associations.

In contrast, ventrolateral and dorsolateral PFC seem to represent "univalent conditional" rules, in which each stimulus is associated with a different response (none of which is intrinsically rewarding or unrewarding). In addition, dlPFC may be involved in representing bivalent rules, in which a single stimulus is associated with multiple different responses. These regions appear to be more highly activated for bivalent than univalent rules.

Finally, rostrolateral PFC seems to be involved in resolving interference between multiple different rules, and thus choosing between them. According to these authors, rlPFC thus manages hierarchical rules.

Several structural and functional imaging studies loosely support this account, showing that oPFC develops first, with dlPFC and rlPFC maturing much later. Furthermore, activity in children's oPFC comes to resemble adult oPFC activity before the same is true of dlPFC and rlPFC, though this is not surprising given that the structure of these later regions takes longer to reach an adultlike state.

What is truly interesting about this paper, however, is that it is an extension of earlier work by Zelazo on the development of executive function. Specifically, his perspective has always focused on the importance of rules and rule structure to success in tasks like the Dimensional Change Card Sort, or the Tower of Hanoi. This paper can be viewed as an attempt to firmly bind these abstract information processing theories to the brain regions responsible for implementing them.

However, it is not clear that rule use is the essential function of these regions. For instance, it is possible that these regions are merely responsible for maintaining information and biasing more downstream regions. One need not posit that these areas are intrinsically specialized for rule representation, but merely that complex rule use requires them to be active. In this alternative account, the most important function of prefrontal activity is active maintenance, which subserves rule use simply because complex rules are forgotten too quickly unless the "prefrontal machinery" is fully developed.
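To make this alternative concrete, here is a toy sketch in the spirit of guided-activation accounts; all weights and bias values are made up for illustration and nothing here comes from the Bunge & Zelazo paper. The point is simply that a maintained prefrontal representation need not be "rule-like" itself: it can just bias competition among downstream stimulus-response pathways.

```python
import numpy as np

# Toy sketch: a maintained PFC "rule" signal biases downstream competition.
# All weights and bias values are illustrative assumptions.

def respond(stimulus, rule_bias, weights):
    """Downstream response activation = stimulus-driven input + top-down bias."""
    drive = weights @ stimulus                  # bottom-up stimulus-response mapping
    return int(np.argmax(drive + rule_bias))    # winner-take-all response selection

# Two candidate responses to the same ambiguous stimulus (a "bivalent" case).
weights = np.array([[1.0, 0.4],   # response 0 is favored by habit
                    [0.7, 0.3]])  # response 1 is weaker by default
stimulus = np.array([1.0, 1.0])

# Without a maintained rule, the prepotent (habitual) response wins.
print(respond(stimulus, rule_bias=np.zeros(2), weights=weights))           # -> 0

# A maintained rule representation biases the competition toward response 1.
print(respond(stimulus, rule_bias=np.array([0.0, 0.6]), weights=weights))  # -> 1
```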

Related Posts:
Task Switching in Prefrontal Cortex
Under The Rug: Executive Functioning
The Transience of Memory
Models of Active Maintenance as Oscillation

6/14/2006

Separate phases for encoding and retrieval in theta rhythms

Several recent posts have highlighted the importance of the theta rhythm, including the idea that it may reset its phase according to stimuli in the external world, and that it may be particularly important for functions like spatial learning, known to take place in the hippocampus. In a 2002 issue of Neural Computation, Hasselmo and colleagues present a neural network model in which different phases of the theta rhythm serve separate functions. At the peak of the theta wave, synaptic output from region CA3 ("retrieval") is weak while synaptic input from entorhinal cortex ("encoding") is relatively strong. At the trough of the theta rhythm, this pattern reverses.
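Here is a minimal sketch of the idea - not Hasselmo et al.'s actual network, and with illustrative parameter values - showing how encoding and retrieval gains could be gated by theta phase so that they alternate within every cycle:

```python
import numpy as np

# Minimal sketch (not Hasselmo et al.'s model) of theta-phase-gated encoding
# and retrieval. Frequency and gain functions are illustrative only.
theta_freq = 7.0                       # Hz, within the typical theta band
t = np.linspace(0, 1, 1000)            # one second of simulated time
theta = np.cos(2 * np.pi * theta_freq * t)

# At the theta peak: entorhinal input (encoding) strong, CA3 output (retrieval) weak.
# At the trough: the pattern reverses.
encoding_gain  = np.clip(theta, 0, None)    # nonzero only around the peak
retrieval_gain = np.clip(-theta, 0, None)   # nonzero only around the trough

# The two gains alternate within each theta cycle and are never active together.
assert np.all(encoding_gain * retrieval_gain == 0)
```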

Although these phase-locked patterns of activity may be unfamiliar to many, each was identified in a series of experiments in the 1990s (referenced in Hasselmo et al.). It appears that the hippocampal theta rhythm is paced by input from the medial septum, via the fornix, and theta activity can be suppressed by destroying this pathway.

This phase-locked activity sets up a functional oscillation in which hippocampal "input" and "output" essentially alternate, like an alternating current. The researchers show how this account of hippocampal function explains several phenomena in the literature, including the phase locking of behavioral responses to theta rhythms, as well as the impairment of T Maze reversal learning in fornix-lesioned rats.

Related Posts:
Nature's Engineering
Theta Frequency Reset in Memory Scanning
Sequential Order in Precise Phase Timing
EEG Signatures of Successful Memory Encoding
Serial Oscillations and the Frequency Following Response

6/12/2006

The Attentional Zoom Effect

In their 1998 Vision Research paper, "The Zone of Focal Attention During Active Visual Search," Motter & Belky describe a new paradigm for determining whether attention operates in a serial or parallel fashion during visual search. What follows is a brief summary of their logic, their experiments, and their conclusions.

For those unfamiliar with the visual search literature, there has long been a heated debate about the relatively serial or parallel nature of visual search. In simple feature search, one often experiences "pop out," in which an item differing from the distractors along a single dimension can be found almost immediately, regardless of the number of distractors or lure items. But in conjunction search, where the target is defined by a combination of features and shares at least one feature with each distractor, the time it takes to find the target increases in a nearly linear fashion as the number of distractors increases. For this reason, some have proposed that attention must be deployed serially to each item in conjunction search, and the slope of the quasi-linear search time function is interpreted as the amount of time needed to scan each object.
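To illustrate that logic, here is a toy simulation of the standard serial, self-terminating search account - feature search flat across set sizes, conjunction search time growing roughly linearly with the number of items. The timing parameters are invented for illustration and are not values from Motter & Belky or any other study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rt(set_size, per_item_ms, base_ms=300, serial=True):
    """Toy RT model: serial self-terminating search scans items one by one until
    the target is found; 'parallel' (pop-out) search ignores set size.
    All timing parameters are illustrative assumptions."""
    if not serial:
        return base_ms + rng.normal(0, 20)
    items_scanned = rng.integers(1, set_size + 1)   # target found partway through
    return base_ms + per_item_ms * items_scanned + rng.normal(0, 20)

for n in (6, 12, 24, 48, 96):
    feature = np.mean([simulate_rt(n, 0, serial=False) for _ in range(500)])
    conjunction = np.mean([simulate_rt(n, 30) for _ in range(500)])
    print(f"set size {n:3d}: feature ~{feature:.0f} ms, conjunction ~{conjunction:.0f} ms")
```

Because the serial search terminates when the target is found, the expected slope of the mean RT function is about half the per-item scan time.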

This search time is also affected by several other factors, including the similarity between targets and distractors (i.e., their discriminability) and which stimulus dimensions differentiate the two. Of course, the attentional scanning process need not have a one-to-one relationship with eye movements; in fact, attention might scan surrounding areas covertly during a single eye fixation. The size of the region available to this covert attentional scanning process may depend on stimulus density.

To test these hypotheses, the authors trained two rhesus monkeys to search for targets within a set of distractors. The stimuli were red and green bars, some oriented at 0 degrees and others at 90 degrees. The number of items on a given trial was 6, 12, 24, 48, or 96, and the items were always distributed evenly across the field of view. Two types of search tasks were run: feature search (e.g., look for the red bar, or look for the horizontal bar) and conjunction search (e.g., look for the red horizontal bar, or look for the green vertical bar). Therefore, any given trial might be feature or conjunctive, with one of five array sizes, 44 possible target locations, and four possible target stimuli. Eye movements were tracked with scleral coils and video analysis.

The results showed that at small array sizes, feature search was equally fast for color and orientation. At larger array sizes, however, orientation search actually became faster, suggesting that orientation search is affected by set size. Furthermore, the search time by array size functions were not perfectly linear; their slopes became shallower at larger array sizes.

Statistical analysis showed no correlation between array size and fixation duration, suggesting that attentional scanning may not be serial within each fixation. Yet the authors knew that information must be extracted from the areas surrounding each fixation, because the number of fixations was always much less than half the array size. What if subjects were capable of processing the stimuli surrounding a fixated item in parallel with that item?

By determining the probability that a target was found given its distance from the current fixation, the authors were able to derive an empirical measure of the "conspicuity area," or to use the title of their paper, the zone of focal attention. In metaphorical terms, one might view this as the diameter of the attentional spotlight; according to these results, this "spotlight" does not encompass items that are more than 2 times the average nearest-neighbor interstimulus distance from fixation. Subsequent experiments suggest this finding is invariant to total display size, and that the results are essentially similar for more difficult conjunctive search tasks.
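As a rough illustration of how such a zone might be quantified, the sketch below computes the mean nearest-neighbor distance of a hypothetical display and counts the items falling within twice that distance of fixation. The display, field size, and fixation point are all invented for illustration; this is not the authors' stimulus set or analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_nearest_neighbor_distance(points):
    """Average distance from each item to its closest neighbor."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

# Hypothetical display: 48 items scattered over a 40 x 40 deg field.
items = rng.uniform(0, 40, size=(48, 2))
nn = mean_nearest_neighbor_distance(items)

# Empirical "zone of focal attention": targets are effectively detectable only
# within ~2x the mean nearest-neighbor distance of the current fixation.
fixation = np.array([20.0, 20.0])
in_zone = np.linalg.norm(items - fixation, axis=1) < 2 * nn
print(f"mean NN distance: {nn:.1f} deg; items inside the zone: {in_zone.sum()}")
```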

6/08/2006

Symbol Use and Play in Humans, Chimps, and Bonobos

Generally, this blog has focused on learning and memory processes in the course of individual development; today's post has a slightly different emphasis: the development of intelligence over evolutionary time. One feature of intelligent behavior is the capacity for abstract thought, and so we might expect that species showing a capacity for symbolic or representational play (summarized as "acting as if") might also be capable of higher-level cognition. To what extent is this capacity shared among the highest primates with a recent common ancestor, namely humans, chimps, and bonobos?

As described in Lyn, Greenfield & Savage-Rumbaugh's new paper in Cognitive Development (which appears to be loosely based on this talk), it does appear that all these species do have the capacity for such symbolic behavior. The authors then go on to investigate to what extent this capacity for symbolic thought is correlated with linguistic skills, by training some of the primates on "lexigrams," a form of interspecies symbolic communication.

While some previous studies had observed linguistically untrained primates spontaneously playing highly symbolic and abstract games, other studies found that these games were more likely among primates that had been exposed to human culture or specifically trained on symbol use.

Based on 99 hours of videotape of five primates - two bonobos and two chimpanzees, one of each species having learned to understand spoken English and communicate with lexigrams, while the other had not - the authors created the largest database of "pretend behaviors" in primates to date. Here's an example of the kinds of behaviors they observed:

This example details a ritual that developed around a toy snake. Panbanisha [a 3.5 year old bonobo] says “snake” at the keyboard and when Liz asks her where the snake is, she points toward the T-room (toy storage area). They head to the T-room and Liz again asks Panbanisha where the snake is. Panbanisha points toward cabinets and Liz opens them. Panbanisha initially seems tense. She holds onto her caregivers’ neck with both hands, hesitates before indicating a cabinet, and holds her hand up to the door of the cabinet as if to ward off what may come out. When the snake is discovered (a plastic snake normally kept in the T-room), Panbanisha holds more tightly to her caregiver’s neck and does not look at the cabinet as her caregiver slams the door shut on the “snake”, hits the door several times making human approximations of bonobo fear barks, and departs.

After analyzing many such episodes of play, the authors conclude that chimps and bonobos show the same capacity for pretend play as human children, but that this capacity develops considerably more slowly in the apes, and that training on symbol use is helpful but not necessary for its development. In conclusion, the authors remark that the common ancestor of humans, bonobos, and chimps, thought to have lived about 5 million years ago, was most likely capable of symbolic play as well.

Although this study has fairly obvious methodological limitations, it is an interesting foray into phylogenetic approaches to cognitive development.

6/07/2006

Towards A Mechanistic Account of Critical Periods

Almost everyone who has taken an introductory psychology class is familiar with the concept of a "critical period." The fact that children can learn second languages much more easily than adults is often taken as evidence for the existence of a critical or sensitive period, which simply refers to a time-limited window of increased sensitivity to a particular kind of input, whether linguistic, visual, or auditory.

A central theme of several articles in the May issue of Developmental Psychobiology is that future research must strive to explain the mechanisms that give rise to critical periods in development, rather than merely describing a relationship between plasticity and age. While some argue for the use of converging behavioral, neuropsychological, ERP, and fMRI techniques to achieve this goal, an article by Thomas & Johnson suggests that computational modeling is a particularly effective tool for any such attempt to causally link neural development with behavioral change.

The authors emphasize that computational simulations of critical or sensitive periods force theorists to become explicit about the precise nature of the representations in that problem domain or modality, and how those representations may change with age. Computational models also require that theorists indicate the exact kind of "input" required for a developing system to illustrate the critical period effect, as well as the frequencies with which those inputs are encountered.

As the authors note, computational implementation also requires theorists to consider the kinds of processing resources available to a developing system. Sometimes, sensitive period effects can seem to result from increased competition for mental resources. For example, some children who appear to recover from brain damage will not manifest any particular deficit, but will show an overall decrement in cognitive performance. According to Thomas & Johnson, this means that failure to recover from brain damage cannot be interpreted as reflecting a "closed" critical period, unless it can be demonstrated that the domain could have been successfully acquired with those reduced mental resources in the first place.

The learning algorithms of neural network simulations also suggest other ways in which sensitive periods might emerge. For example, the Hebbian learning algorithm can be summarized as "fire together, wire together," and based on this description, it becomes clear that a more "active" brain would manifest more plasticity. The authors go on to note that both electrical activity and brain metabolism appear to peak in early to mid childhood, and that children's hemodynamic responses tend to be more widespread than adults' for the same tasks.

In their article, the authors argue that critical periods are typically understood as genetically or experientially induced changes in functional plasticity. Furthermore, critical periods are often thought to "close" in one of three ways, as follows.

Self-Termination

In a simple model of imprinting, O'Reilly and Johnson illustrated how Hebbian learning can cause a self-organized termination of sensitivity to input. Their model was trained on Object A for 100 presentations, and then trained with 150 presentations of a very different looking object, Object D. Preference for an object was interpreted as the amount of activity on the output layer, and by the end of training, the network "preferred" Object D. However, if the network was trained on just 25 more presentations of A (bringing the total to 125), the network would never show a preference for Object D, despite over 900 presentations of that object. In this case, the connection weights within the network became entrenched as a result of a specific type and frequency of input, ultimately resulting in reduced sensitivity to further training. In other words, the "sensitive period" for this network closed between 100 and 125 presentations of Object A.
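The following toy simulation is a simplified stand-in for this idea rather than a reimplementation of the O'Reilly and Johnson model: Hebbian learning competes for a fixed budget of synaptic weight, and weight change occurs only when an object drives the output above a threshold. The learning rate, threshold, and feature vectors are tuned purely for illustration, but with these values the same qualitative pattern emerges at comparable presentation counts.

```python
import numpy as np

# Toy stand-in for self-terminating plasticity (not O'Reilly & Johnson's model).
# Hebbian learning competes for a fixed weight budget; sub-threshold drive
# produces no learning. All parameter values are illustrative assumptions.
object_a = np.array([1., 1., 0., 0.])      # hypothetical feature vectors
object_d = np.array([0., 0., 1., 1.])

def train(w, obj, n, lr=0.005, threshold=0.25):
    w = w.copy()
    for _ in range(n):
        out = w @ obj
        if out > threshold:                # sub-threshold drive -> no learning
            w += lr * out * obj            # activity-dependent Hebbian update
            w /= w.sum()                   # competition for a fixed weight budget
    return w

w0 = np.full(4, 0.25)                      # weight budget shared equally at first

# 100 presentations of A, then 150 of D: the network ends up "preferring" D.
w = train(train(w0, object_a, 100), object_d, 150)
print(w @ object_a, w @ object_d)          # ~0.38 vs ~0.62

# 125 presentations of A first: D now drives the output below threshold, so no
# amount of further exposure to D (even 900 presentations) reopens the window.
w = train(train(w0, object_a, 125), object_d, 900)
print(w @ object_a, w @ object_d)          # ~0.78 vs ~0.22
```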

Stabilization

Another way in which sensitive periods can seem to end is through a process of stabilization. A model by McClelland, Thomas, McCandliss, & Fiez demonstrated this phenomenon using the example of phoneme discrimination by monolingual Japanese speakers. The English sounds /r/ and /l/ are not distinguishable to monolingual Japanese speakers, who have only a single phoneme corresponding to a blend of those two sounds. Because these speakers developed in an environment in which sounds varying along that dimension were functionally categorized as the same, the neural system for phoneme recognition learned to blend the two sounds completely.

However, if a monolingual Japanese speaker is exposed to highly exaggerated /r/ and /l/ sounds - such that their overlap with the blended Japanese phoneme is minimal - he or she can begin to learn to discriminate the two sounds, even when they are spoken in a normal fashion. In this case, the "sensitive period" for phoneme discrimination appears to end because the "output" produced by the model (and presumably by the neural structures responsible for phoneme discrimination in monolingual Japanese speakers) becomes stabilized over time. To achieve increased sensitivity to the phonemes, the input must be strongly manipulated or exaggerated so as to decrease the chances that the output (i.e., the phonetic interpretation) will remain stabilized.

Endogenous Factors

Thomas & Johnson also describe how sensitive periods may seem to end as a result of endogenous factors, in which the potential for plasticity is reduced according to a strict developmental timeline. They used a three-layer model of past-tense acquisition, trained through backpropagation, with an input layer of 90 units, an output layer of 100 units, and a hidden layer of 100 units. Two pathways existed between input and output: a direct pathway from input to output, and an indirect pathway connecting the input to the output via the hidden layer.

The networks were damaged after 10, 50, 100, 250, 400, 450, or 490 training presentations by removing 75% of the connections in both pathways. After sustaining this damage, each network was trained for an additional 500 trials, and then tested on past-tense formation for both regular mappings (walk - walked) and three types of irregular mappings (run - ran, sleep - slept, go - went). Critically, the authors simulated reduced plasticity by including a small probability that any low connection weight would be destroyed after 100 presentations, roughly analogous to the developmental time course of synaptic overproduction and subsequent pruning throughout late childhood and adulthood.
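For readers who want a concrete picture of the setup, here is a skeletal sketch of the two-pathway architecture and the two manipulations (lesioning and endogenous pruning). The layer sizes follow the description above; the initialization, activation function, pruning probability and magnitude cutoff, and which connections get lesioned are all illustrative assumptions, and no past-tense training data or backpropagation code are included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Skeletal sketch of the two-pathway past-tense network and its manipulations.
# Layer sizes follow the text; everything else is an illustrative assumption.
n_in, n_hid, n_out = 90, 100, 100

W_direct  = rng.normal(0, 0.1, (n_out, n_in))    # direct pathway: input -> output
W_in_hid  = rng.normal(0, 0.1, (n_hid, n_in))    # indirect pathway: input -> hidden
W_hid_out = rng.normal(0, 0.1, (n_out, n_hid))   # indirect pathway: hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    hidden = sigmoid(W_in_hid @ x)
    return sigmoid(W_direct @ x + W_hid_out @ hidden)

def lesion(W, fraction=0.75):
    """Remove a random fraction of a pathway's connections (set them to zero)."""
    mask = rng.random(W.shape) >= fraction
    return W * mask

def prune_low_weights(W, p=0.01, magnitude=0.05):
    """Endogenous plasticity reduction: each low-magnitude connection has a
    small probability of being permanently removed on a given presentation."""
    doomed = (np.abs(W) < magnitude) & (rng.random(W.shape) < p)
    return np.where(doomed, 0.0, W)

# Example: lesion the direct pathway and both legs of the indirect pathway
# (exactly which connections were removed in the original simulations is an
# assumption here), then run a forward pass on a random input pattern.
W_direct, W_in_hid, W_hid_out = lesion(W_direct), lesion(W_in_hid), lesion(W_hid_out)
print(forward(rng.random(n_in)).shape)
```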

Although the results are difficult to describe verbally, the essential trend is that regular mappings did not show a substantial sensitive period effect (performance on regular past-tense formation was roughly equal regardless of when the network damage occurred), whereas the irregular mappings showed a strong sensitive period effect (damage late in training had a much more profound effect on the network's ultimate performance than damage early in training). The take-home point is that given endogenous changes to network functionality, "sensitive and critical periods can appear in some parts of the problem domain but not others."

6/06/2006

Meme Therapy Interview

Just wanted to give a heads up: the kind chaps over at Meme Therapy agreed to post some musings of mine on a variety of cognitive science topics as part of their very interesting Brain Parade series. Although I try to keep this blog very focused on scientific hypotheses regarding what might loosely be called "intelligence," and evidence bearing on those hypotheses, what I say over there is completely unconstrained by the scientific method (though still true, I think!)

Check out some of the other Brain Parade posts, many of which are focused on futurist or transhumanist ideas:

Back to the Future Shock (link)
Underrated SF Writers (link)
Blast Offs From the Past (link)
Underrated Tech (link)
Future Religion (link)
Science Fiction's Leaky Memes (link)
Stranded in Science Fiction (link)
Geek Rapture (link)
Science Fiction Places (link)

6/05/2006

EEG Signatures of Successful Memory Encoding

The subsequent memory effect (SME) refers to two characteristics of evoked potentials that correspond to successful memory encoding in the medial temporal lobe. Both negativity at 400 ms post-stimulus (in rhinal cortex) and positivity at 800 ms post-stimulus (in hippocampus) can be used to predict whether that stimulus will later be successfully recalled. But what role might synchronized oscillations have in this encoding process, and how would synchrony influence or be influenced by these SME effects?

Sederberg et al. set out to answer this question in a 2003 article in the Journal of Neuroscience. By presenting subjects with a sequential list of randomly selected words while recording from intracranial electrodes, and then asking them to recall the words in any order, the authors were able to correlate specific EEG activity at the time of encoding with successful recall later.

The dominant frequencies for SME effects were in the theta and gamma bands. Right temporal and frontal areas showed a lateralized, positive SME, such that increased theta-band power between 600 and 1300 ms post-stimulus correlated with better subsequent recall. Similar areas showed a positive SME for gamma-band oscillations, but this effect was not lateralized. Negative SMEs were found for alpha- and beta-band oscillations in the same regions, such that activity in those frequency ranges was negatively correlated with successful recall.
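For the analytically inclined, the logic of such a band-power SME analysis can be sketched as follows. The data here are simulated noise rather than real intracranial recordings, the sampling rate and recall rate are invented, and the band definitions are conventional ones rather than the study's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of a subsequent-memory band-power analysis on simulated data:
# compute band power for each encoding epoch and correlate it with later recall.
fs = 500                                         # sampling rate in Hz (assumed)
n_trials, n_samples = 200, fs                    # one 1-s epoch per studied word
eeg = rng.normal(size=(n_trials, n_samples))     # stand-in for intracranial EEG
recalled = rng.random(n_trials) < 0.4            # stand-in for later recall

def band_power(epochs, fs, low, high):
    """Mean spectral power within [low, high] Hz for each epoch."""
    freqs = np.fft.rfftfreq(epochs.shape[1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(epochs, axis=1)) ** 2
    band = (freqs >= low) & (freqs <= high)
    return power[:, band].mean(axis=1)

theta_power = band_power(eeg, fs, 4, 8)          # conventional theta band
gamma_power = band_power(eeg, fs, 30, 60)        # conventional gamma band

# A positive correlation with recall corresponds to a positive SME in that band.
print("theta SME:", np.corrcoef(theta_power, recalled)[0, 1])
print("gamma SME:", np.corrcoef(gamma_power, recalled)[0, 1])
```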

Interestingly, one aspect of these results can be interpreted as casting doubt on the Jensen & Lisman model of working memory capacity limits, in which the 7 +/- 2 capacity limit is thought to arise from the multiplexing of gamma cycles within theta cycles. The current results suggest that the areas showing positive theta SMEs and those showing positive gamma SMEs are only weakly correlated, indicating that the regions manifesting each oscillation are likely to be at least somewhat distinct from one another.

Related Posts:
Neuroindices of Memory Capacity
Entangled Oscillations
Enhancing Memory with Visual Flicker
Neural Correlates of Insight

6/02/2006

Theta Frequency Reset in Memory Scanning

In a 1998 article in the Journal of Neuroscience, Jensen & Lisman describe how the "7 plus or minus 2" capacity limit of short-term memory might arise from a specific type of oscillatory neural network in prefrontal cortex. Specifically, as I have mentioned before, items in working memory would become active serially, one per successive gamma cycle, and at the end of each theta cycle this process would repeat. The capacity of STM, then, depends on the interaction of theta and gamma oscillations.

For the skeptics in the audience, remember that recent evidence has shown that this kind of multiplexing "sequence compression" is known to occur in hippocampus, and so this form of neural computation is not as outlandish as it might otherwise seem.

In order to explain behavioral data from the Sternberg memory task (which I discussed earlier this week as being sensitive to variations in brainwave oscillations), a few complications to this basic account are necessary. First, memory scanning must be initiated at the trough of a theta cycle. Second, the theta period must increase with increasing memory load (so that more gamma cycles can fit within each theta cycle). In other words, dominant theta frequencies should decrease with increasing memory load, from a maximum of 10 Hz to a minimum of 6 Hz.
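The arithmetic behind this claim is simple enough to sketch directly. The gamma frequency used below (40 Hz) is an illustrative, conventional value rather than one fitted to any particular dataset.

```python
# Back-of-envelope sketch of the Jensen & Lisman idea: STM capacity equals the
# number of gamma cycles that fit inside one theta cycle, so a larger memory
# load requires a longer theta period (i.e., a lower theta frequency).
gamma_hz = 40.0   # illustrative gamma frequency

def capacity(theta_hz, gamma_hz=gamma_hz):
    """Items maintainable = gamma cycles per theta cycle."""
    return gamma_hz / theta_hz

def theta_needed(n_items, gamma_hz=gamma_hz):
    """Theta frequency required to fit n_items gamma cycles per theta cycle."""
    return gamma_hz / n_items

print(capacity(10.0))    # ~4 items at 10 Hz theta
print(capacity(6.0))     # ~7 items at 6 Hz theta (the classic 7 +/- 2 range)
print(theta_needed(7))   # ~5.7 Hz theta needed to hold 7 items
```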

Alternatively, the phase of theta oscillations can be reset rather than requiring theta frequencies to adapt with memory load. This means that whatever pattern generator creates the driving theta frequency would essentially change the phase of the theta rhythm at the start of a memory scanning process. This model requires that dominant theta frequencies fall within the range of 4 to 7 Hz.

A more recent article by Jensen & Tesche, from the European Journal of Neuroscience, suggests that this latter model is more likely to be correct, given that theta frequencies actually remain constant during the Sternberg task, and the amplitude increases with load. Other recent evidence also supports the idea that theta-clocked oscillations reset on the presentation of probe stimuli.

Related Posts:
Nature's Engineering

6/01/2006

Enhancing Memory with Visual Flicker

According to a new article in BMC Neuroscience, it's possible to improve some types of memory simply by watching something that turns on and off around 10 times per second. This "flicker frequency" of 10 Hz is thought to enhance recognition memory by amplifying rhythmic slow activity, which is known to be important in memory function.

Alpha band power, which includes 10 Hz, decreases both in Alzheimer's patients and in the healthy elderly, suggesting that alpha power may be related to the diminishing episodic memory of those populations. Long-term potentiation is also enhanced by such rhythmic slow activity (RSA). Drugs such as ACTH 4-10 and lipotropin, as well as brain stimulation techniques that affect RSA, also seem to affect memory.

However, many might argue that RSA is merely "epiphenomenal" - that is, a side effect of the actual memory processes rather than an integral mechanism of those processes. On this account, skeptics might claim that drugs and brain stimulation affect the underlying mechanisms themselves, which in turn give rise to altered EEG activity. Likewise, the different EEG activity seen in the elderly would be due to an age-related change in the underlying mechanism, not to alpha rhythms or RSA per se.

In contrast, this study and other recent work firmly establish a causal role for RSA in memory function. In this work, human subjects are given a list of words to study, and are later asked to identify which words they actually saw. If subjects are shown LEDs flickering at a frequency of 10Hz (or its harmonics) for 1 sec prior to the test phase, their recognition performance is significantly increased relative to those who experienced a flicker at 9 Hz, 11.5 Hz, or no flicker at all. Furthermore, this paradigm is capable of elevating the performance of elderly subjects to that of young adults.

Finally, this visual flicker is effective even if it is presented peripherally. This technique, as well as the related findings I posted about yesterday, have fascinating implications for non-pharmacological memory enhancement technology.