8.16.2010

Learning and Memory: location, location, location!

(Skip to near the end for my personal hypothesis about learning and memory.)

Part 4: Location, location, location!

Not long ago, we knew very little about learning and memory. Sure, we understood the basic concepts-- people could learn things, and then recall them later-- but they were black box processes, obfuscated by the complexity of the brain. Everyone knew what they were, but nobody could explain how they worked. We just sort of threw information at people and hoped it stuck.

Fast forward to 2010, and though we're getting better at the practicalities of learning and we can roughly describe the contours of the process, we still don’t really understand the internal mechanics of learning and memory to any fundamentally satisfying degree. These internal mechanics are still mysteries, of which we have no large-scale, generally predictive models. Ask a neuroscientist to explain exactly how you remember where you parked your car, and you’ll get a convoluted answer involving mnemonic association, episodic memory, grid cells, the cerebral cortex, and a few other things in our current neuro-ontology. The description you get may be true as far as it goes, but certainly not satisfying, as our descriptions mostly involve hand-waving at major concepts we think are important, rather than telling a specifically predictive story.

But we’re getting closer. We do know a lot more than we did.

What is Learning?

We've made a great deal of progress dissecting the fundamental nature of learning. The organizing principle of our current understanding of learning is, with a nod to Donald Hebb, "neurons that fire together, wire together." That is, the brain's wiring algorithms assume that neurons and neural networks which are often activated at the same time are likely related, and so form and strengthen connections between them. This simple strategy, applied with great nuance and within the diverse and hierarchical structures of the brain, helps the brain find, internalize, and leverage patterns, and drives the translation of conscious processes into subconscious aptitudes and habits of body and mind.
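To make that concrete, here's a toy sketch of a Hebbian update rule in Python-- the sizes, rates, and patterns are invented for illustration, not biologically meaningful:

```python
import numpy as np

# Toy sketch of the Hebbian rule: weights between units that are
# co-active get strengthened. All quantities here are illustrative.
rng = np.random.default_rng(0)
n = 8                      # number of units
w = np.zeros((n, n))       # connection strengths
lr = 0.1                   # learning rate

pattern = np.zeros(n)
pattern[[1, 4]] = 1.0      # units 1 and 4 repeatedly fire together

for _ in range(50):
    noise = (rng.random(n) < 0.1).astype(float)   # background activity
    x = np.clip(pattern + noise, 0, 1)
    w += lr * np.outer(x, x)                      # fire together -> wire together
np.fill_diagonal(w, 0)

print(w[1, 4], w[1, 2])    # the 1<->4 link dwarfs uncorrelated pairs
```

After fifty rounds, the connection between the two units that always co-fire towers over the connections formed by chance co-activation-- which is the whole trick.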

The Structure of Memory

A great deal of effort has been put into exploring the structures of memory. The result has been a set of fairly workable models, which have had some practical success at bringing various pathologies under the umbrella of theory and which conform well with folk psychology and the results of many thousands of memory experiments. They don’t handle all the edge cases well, and in many contexts they’re more descriptive than predictive, but they’re pretty good, as far as they go.

Perhaps the crown jewel of the consensus model is that our memory is more-or-less divided into short-term memory and long-term memory. Short-term memory is essentially our capacity to store static information in our brain without engaging the machinery of medium- and long-term memory. Many experiments peg its capacity at 7 ± 2 items; this can vary depending on the items' complexity and similarity, the mnemonic strategies in use, familiarity, and the person in question[1,2], but at any rate, it’s a very finite quantity. Most of the stuff that passes through short-term memory is ultimately lost (or severely compressed): people keep stuff there until they don’t need it, then it’s gone.
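One reason the capacity varies: the same data can count as many items or as few, depending on how it's chunked. A trivial sketch (the grouping below is arbitrary):

```python
# Chunking sketch: 10 digits blow the ~7 +/- 2 item budget, but three
# familiar chunks fit comfortably. The phone-number grouping is arbitrary.
digits = "4155552671"
chunks = [digits[:3], digits[3:6], digits[6:]]
print(len(digits), "items as digits ->", len(chunks), "items as chunks:", chunks)
```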

Long-term memory, on the other hand, is where stuff in your short-term memory goes if your brain’s heuristics decide it’s worth keeping. To drastically simplify things: if your brain decides something’s important, it sends it to the hippocampus; during sleep, your brain processes, consolidates, and compresses what’s in the hippocampus, then sends it off to relevant parts of the brain for long-term storage. Much of it ends up in the cerebral cortex, where it’s more-or-less organized into ordered and linked lists. We know these lists have a directional preference, which is why it’s so difficult to say the alphabet backward (your brain needs to make a new list for it).
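As an illustration of why directional lists make backward recall expensive, here's a minimal sketch of the analogy-- not a claim about how the cortex actually implements it:

```python
# A singly linked list only supports forward traversal; reciting it in
# reverse means building a second list, much as saying the alphabet
# backward feels like learning a new sequence. Purely an analogy sketch.
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

# Build a -> b -> c -> ... with forward links only.
head = None
for letter in reversed("abcdefg"):
    head = Node(letter, head)

def recite(node):
    while node:
        yield node.value
        node = node.next

print(list(recite(head)))          # forward: trivial, just follow the links

# Backward: no back-pointers exist, so we must construct a new chain.
rev_head = None
node = head
while node:
    rev_head = Node(node.value, rev_head)
    node = node.next
print(list(recite(rev_head)))      # the "new list" for backward recall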

The Ephemeral Quality of Short-term Memory

A key finding in recent research is the plastic nature of short-term memory: there's a window of time, before memories are translated into long-term storage, in which they behave like putty and accessing them will change them. During this window, a recalled memory can easily be reinforced, altered, or destroyed... depending primarily on what else is going on inside and around us, and on whether we're interrupted while recalling it. (The brain doesn't always have a reliable autosave function.)

This seems odd-- we think of memories as timeless records which may fade with age but are inherently stable. But the science does not back this up-- and given that we don't have memories of our memories, who are we to gainsay it? It appears that it can take roughly one to three nights of sleep to consolidate a memory into long-term storage, hardening the proverbial putty of short-term memory into a more lasting form.

... and of Long-term Memory

After memories are consolidated into long-term memory, they're not ephemeral-- but neither are they permanent, or even particularly stable. It appears that memories are similar to library books: you can 'check them out' from the recesses of your brain and use them, but if you alter the memory when it's in short-term storage, those changes get 'checked back in' and change the original. If you think about a given memory often, you are changing it, for better or worse.
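In programming terms, this is a read-modify-write with no copy-on-write protection. A toy sketch (all names and values invented):

```python
# Reconsolidation as read-modify-write: recalling a memory "checks it
# out", and whatever state it's in when re-stored overwrites the
# original. A toy illustration, not a model of any actual mechanism.
long_term = {"parking": "level 2, near the elevator"}

def recall(key):
    return long_term[key]                 # check the memory out

def restore(key, trace):
    long_term[key] = trace                # check it back in, as-is

trace = recall("parking")
trace += " (I think it was raining?)"     # recall mingles with context
restore("parking", trace)

print(long_term["parking"])               # the original is gone
```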

The Limits of our Memory Models

The consensus model has less to say about how the brain classifies and integrates different types of information into memory. We’ve established that different brain regions are strongly associated with certain functions, and presumably there’s some sort of elegant sorting mechanism the brain uses to direct information to appropriate regions. But we don’t really have ontologically firm concepts with which to speak about how the brain does sorting or (to some extent) classification. That said, a key result in recent years has been the identification of function-specific brain structures, such as grid and place cells. These functional structures are where a lot of the hottest research is happening, since their bounded contexts are accessible to experimentation and reductionism. Clearly, location and episodic memory must use grid cells in some fashion; clearly, mirror neurons must be deeply relevant to muscle memory and social learning. We just don’t know exactly how yet.

Unfortunately, if we push them hard in any specific direction our models of memory start to look like cardboard cutouts (much like memories themselves-- but we digress). They’re wonderful guides to what’s roughly going on, but they don’t have a great deal of depth or precision. If we apply Karl Popper’s evaluative lens that ‘inherent in any good explanation is a prediction, and inherent in any good prediction is an explanation’, we find our models of memory rather constrained: they’re not particularly specifically predictive over most of human experience.

They’re also much more detailed in some areas than in others. We know the limit of short-term memory, for instance, with much more clarity than we know the details of how the hippocampus works or even how information gets recalled once stored.

The Future of Memory Research: Models and Measurement

The limits of any science emerge from what it can and can’t measure, and neuroscience is no exception. We have lots of phenomenological information, which is helpful, but the capabilities we’re still missing include:

1. tagging and tracking information as it travels through the brain;
2. better quantifying how the brain splits information into chunks and ties them together;
3. measuring how different parts of the brain change information that passes through them (and likewise, how information changes the parts of the brain it passes through);
4. extracting deep functional data from activity scans;
5. designing roughly predictive digital models of brain subsections (other than certain exceptions like the cerebellum).

Progress in any of these areas would drive progress in the others. The productive frontiers here seem to include:

- improving and melding many sorts of brain scans together. The gold standard today for functional research is fMRI; the next gold standard will be a composite of, e.g., high-tesla fMRI, PET scans for gene-expression data, and EEG and MEG for better temporal resolution;
- better identifying the computational principles which fit the contours of various brain activities (as we’ve done somewhat with memory structure in the cerebral cortex);
- better reverse-engineering the algorithmic approaches taken by brain circuitry (as we’ve done with the visual and auditory cortices);
- charting out the ‘circuit diagram’ of brain subsections (as we’ve done with the cerebellum);
- simulating the brain.

Reverse-engineering and simulating the brain is a huge topic, one which I’ll cover in another post. Basically, though, once we have high-quality neural simulations which allow us to tag and track information as it travels through a virtual brain, we may be able to move from a fragmented understanding of memory to something more emergent, experimental, and predictive.

---------------------------------------------------

A Modest Proposal

So that's the current story on learning.

What I want to talk about specifically is something that's not in the current story. An implicit assumption running through the current consensus is that the physical location where people's brains happen to store information doesn't matter-- in other words, 1. there is very little variability in *where* similar sets of information get encoded in people's brains, and/or 2. when differences occur, they have only trivial functional implications.

I think these assumptions will be shown to be significantly false, and if we look underneath them there's a whole new realm of study waiting to be unlocked. In a nutshell, I’m arguing three things:

1. The regional localization of learned information can vary;
2. Regional localization of learned information commonly varies between individuals and learning approaches;
3. Differences in regional localization of learned information have practical significance in cognition and behavior.

I can solidly support (1): aside from the obvious example of right-brain-vs-left-brain lateralization of function, there are many examples of hemispherectomy-- the physical removal of half of the brain-- where patients fully recovered and exhibited no mental deficits.[3] The two significant variables seemed to be age and the speed of degeneration: young people did much better than old, and people who had a slow degenerative disease and thus gave their brains time to migrate information and function away from the diseased hemisphere did much better than those with quicker illnesses.

(2) is more arguable. We simply don’t have good ways to measure where people localize learned information. A lot of people who study the brain take localization invariance for granted-- but once we get the technology, experiments on, e.g., tracking information storage and retrieval in musicians and non-musicians as each is taught a song could be interesting. Differences in localization might arise from differences in aptitudes, genetics, some environmental cues, or just randomness.

(3) is still somewhat ambiguous, but I can appeal to the considerable functional significance of the brain’s computational topology, and some work on right vs left hemisphere specializations.[4]

The goal of this suggestion is to help us better quantify different ways of knowing, and to ground this in a functionally predictive context.

What could cause information to be encoded in one region and not another? How could this guide our behavior and/or treatments? It’s hard to say (yet).


Further Musings:
- A closely related issue is topological constraints on information linkages, where the brain is physically prevented from connecting any arbitrary node of information to any other node. Consider, e.g., two nodes that are in the same region but 2cm apart, where the regional neuronal configuration hinders attempts at making a strong connection. How functionally significant are these sorts of topological limitations? Are they responsible for mental blocks at the level of our experience, like not being able to connect two concepts together very well? Do such intra-regional topologies vary in interestingly distinct ways across individuals? (See the sketch after this list.)
- I have phrased this in terms of differences in "regional localizations". We can perhaps break this down into
- which brain region information gets encoded into;
- which part of each brain region information gets encoded into;
- what the intra-region encoding patterns are.
I don't think we know enough to estimate the relative contributions of each. But they all point toward the central concept I'm trying to convey, that the topology of information localization differs significantly between people and that this has functional implications.
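Here's the promised sketch-- a highly speculative toy model of the first musing, where achievable link strength falls off with physical distance and local wiring density. Every quantity is invented:

```python
import numpy as np

# Speculative sketch: treat concepts as nodes with physical coordinates
# (in cm), and cap achievable connection strength by distance and by
# how much local wiring can be devoted to the link. All values invented.
coords = {"conceptA": np.array([0.0, 0.0]),
          "conceptB": np.array([0.0, 2.0])}   # 2 cm apart, same region

def max_link_strength(a, b, wiring_density=0.3):
    """Achievable strength falls off with distance, scaled by the
    intervening tissue's wiring density."""
    dist = np.linalg.norm(coords[a] - coords[b])
    return wiring_density / (1.0 + dist)

print(max_link_strength("conceptA", "conceptB"))  # a 'mental block' if too low
```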

Edit, 10-3-10: Research into the learning process is really moving quite fast. Recommended links:
- Easier Way To Do Perceptual Learning: "20 minutes of training followed by 20 minutes of listening to a musical tone was just as effective as 40 minutes of training."
- Forget What You Know About Good Study Habits: "psychologists have discovered that some of the most hallowed advice on study habits is flat wrong."

Again, we're only able to see the outward-facing phenomena of learning and memory, not the internal mechanisms. But even this stuff is really interesting.

Edit, 10-28-10: Esquire has a particularly readable piece about how modern neuroscience research got its start. The point it makes is that, historically, we've been able to decipher basic brain region function by looking at what happens when that region gets damaged, through injury or surgery.

In 1848, an explosion drives a steel tamping bar through the skull of a twenty-five-year-old railroad foreman named Phineas Gage, obliterating a portion of his frontal lobes. He recovers, and seems to possess all his earlier faculties, with one exception: The formerly mild-mannered Gage is now something of a hellion, an impulsive shit-starter. Ipso facto, the frontal lobes must play some function in regulating and restraining our more animalistic instincts.

In 1861, a French neurosurgeon named Pierre-Paul Broca announces that he has found the root of speech articulation in the brain. He bases his discovery on a patient of his, a man with damage to the left hemisphere of his inferior frontal lobe. The man comes to be known as "Monsieur Tan," because, though he can understand what people say, "tan" is the only syllable he is capable of pronouncing.

Thirteen years later, Carl Wernicke, a German neurologist, describes a patient with damage to his posterior left temporal lobe, a man who speaks fluently but completely nonsensically, unable to form a logical sentence or understand the sentences of others. If "Broca's area," as the damaged part of Monsieur Tan's brain came to be known, was responsible for speech articulation, then "Wernicke's area" must be responsible for language comprehension.

And so it goes. The broken illuminate the unbroken.


Edit, 5-25-11: There's been some interesting research on using brain stimulation to aid learning: essentially using tiny amounts of electricity to induce changes in rats' brains that make them better learners. After the current is shut off, the rats' brains go back to normal, but they keep their learned skills. We don't know what the specific trade-offs may be, but between this approach and approaches which could mimic developmental neuroplasticity triggers, we may have the basis for a very desirable form of cognitive enhancement.

Here's "Scienceblog" on the a theory on how the brain picks which of its neural networks to use for a new skill:

The study by Reed and colleagues supports a theory that large-scale brain changes are not directly responsible for learning, but accelerate learning by creating an expanded pool of neurons from which the brain can select the most efficient, small “network” to accomplish the new skill.

This new view of the brain can be compared to an economy or an ecosystem, rather than a computer, Reed said. Computer networks are designed by engineers and operate using a finite set of rules and solutions to solve problems. The brain, like other natural systems, works by trial and error.

The first step of learning is to create a large set of diverse neurons that are activated by doing the new skill. The second step is to identify a small subset of neurons that can accomplish the necessary computation and return the rest of the neurons to their previous state, so they can be used to learn the next new skill.

By the end of a long period of training, skilled performance is accomplished by small numbers of specialized neurons not by large-scale reorganization of the brain. This research helps explain how brains can learn new skills without interfering with earlier learning.
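A cartoon of that two-step account-- recruit a big, diverse pool, keep the few units that do the job, and release the rest-- might look like this (the 'task' and scoring are invented for illustration):

```python
import random

# Cartoon of the expand-then-select account above: recruit a large,
# diverse pool of candidate units, keep the small subset best suited to
# the task, and release the rest for future learning. Task and scoring
# are invented, not from the study.
random.seed(1)

target = 0.8                                         # the "skill" to approximate
pool = [random.uniform(0, 1) for _ in range(1000)]   # step 1: expand the pool

pool.sort(key=lambda unit: abs(unit - target))
specialists = pool[:5]                               # step 2: select the efficient few
released = pool[5:]                                  # returned to the available pool

print(specialists)
print(len(released), "units freed for the next skill")
```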

Edit, 7-28-11: Scientists have traced the recall of a specific memory and found it partially activates other memories from around the same time. Unsurprising, given it's common to experience memories as strongly linked, but still good science, and perhaps it supports the viewpoint that all memory is ultimately episodic in some real sense.

Researchers have long known that the brain links all kinds of new facts, related or not, when they are learned about the same time. Just as the taste of a cookie and tea can start a cascade of childhood memories, as in Proust, so a recalled bit of history homework can bring to mind a math problem — or a new dessert — from that same night.

For the first time, scientists have recorded traces in the brain of that kind of contextual memory, the ever-shifting kaleidoscope of thoughts and emotions that surrounds every piece of newly learned information. The recordings, taken from the brains of people awaiting surgery for epilepsy, suggest that new memories of even abstract facts — an Italian verb, for example — are encoded in a brain-cell firing sequence that also contains information about what else was happening during and just before the memory was formed, whether a tropical daydream or frustration with the Mets.

The new study suggests that memory is like a streaming video that is bookmarked, both consciously and subconsciously, by facts, scenes, characters and thoughts.

...

“When you activate one memory, you are reactivating a little bit of what was happening around the time the memory was formed,” Dr. Kahana said[.]

8.15.2010

SS2010 Highlights: Day 1

SS2010 Highlights:

Day 1: The Future of Human Evolution

Michael Vassar: The Darwinian Method
A solid talk about the scientific method and rationality. People can be rational without being scientific; good organizational structures can protect against bias (though these may be eroding as the internet conjoins universities); there are different types of scientific method ("Scholarly science" (scholarly consensus) vs. "Enlightenment science" (testing)); the Scientific Method is amazing because non-geniuses can still contribute to scientific progress. No big surprises, but a good kickoff. Vassar seems pretty familiar with philosophy.

Gregory Stock: Evolution of Post-Human Intelligence
A light talk about the future, progress, and evolution. Interesting points were
1. when Stock veered off and talked about his company, Signum Biosciences. They have an Alzheimer's drug, based on compounds in coffee, just entering human trials.
2. Stock posed the question, "Why would love or human values survive?" -- presumably in AIs or post-human intelligences these things would be competitive handicaps and the ones burdened by human values would die out. I think it's a good point. Perhaps the point could be extended to any intelligence driven by emotion. Or perhaps even consciousness itself, given that something could somehow be intelligent yet nonconscious. Is the future owned by anhedonic, zombie AI?

Ray Kurzweil: The Mind and How to Build One
He teleconferenced in from vacation (boo). Nothing really new, but he has clearly spent a lot of time thinking about reverse-engineering the principles the brain uses. According to Kurzweil, the spatial resolution of our brain imaging doubles every year. Talked some about simulation progress projections (Markram of the Blue Brain Project says 2018; Kurzweil says the late 2020s, for a full human-brain simulation). Interesting points included that we've basically completely reverse-engineered the wiring of the cerebellum (essentially the same neuron structure is repeated 10 billion times); we're working on the cerebral cortex, and though it's a lot more complex, we're learning about its data structures (it functions much like LISP's linked lists). Likewise, we've deciphered that vision is essentially an amalgamation of 7 different low-resolution information streams. Progress.

A big problem in brain simulation, which Kurzweil mentioned, and Goertzel brought up when I spoke with him, is training brain simulations. Training will not only be difficult, but simulations will need to be trained before we can evaluate how good they are-- and even if we can raise them at 20x speed, it'll still take a year before we know enough to tell much.

Ben Goertzel: AI Against Aging
Goertzel's dream is a computer that can do biology better than people can. We're a long way off. He's using 'narrow' AI programs in order to narrow down promising drug targets from thousands to dozens, specifically in the context of longevity compounds. Smart datamining.

His view on the etiology of aging (contra de Grey):
Cross-species data analysis strongly suggests that most age-associated disease and death is due to "antagonistic pleiotropy" -- destructive interference between adaptations specialized for different age ranges. The result is that death rate increases through old age, and then stabilizes at a high constant rate in late life.
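The shape Goertzel describes-- a hazard that rises exponentially through old age and then flattens in late life-- is easy to sketch (parameters arbitrary, not fitted to any data):

```python
import math

# Sketch of the mortality pattern described above: death rate rises
# exponentially with age, then plateaus in late life. Parameters are
# arbitrary illustrations, not fitted values.
def death_rate(age, base=0.001, slope=0.09, plateau=0.4):
    return min(base * math.exp(slope * age), plateau)

for age in (30, 60, 90, 105, 120):
    print(age, round(death_rate(age), 3))   # rises, then flattens at 0.4
```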

Steven Mann: Humanistic Intelligence Augmentation and Mediation
Mann is known as the "first cyborg". Very into wearable computing. Wears a camera on his head and records everything. VR overlay capacity (he calls it "mediated reality"). Interesting in the context of technologies like Layar for the iPhone. He also designed a water-based instrument (to fill the orchestral gap between "solid" instruments like drums and strings and "air" instruments like woodwinds and brass).

Mandayam Srinivasan: Enhancing our bodies and evolving our brains
The father of haptic (touch feedback) technology. Talked about different 'levels' of haptic technologies-- everything from using haptic tech to interact with digital objects, to perhaps brain-computer interfaces where our brains grow our sense of self to encompass an artificial prosthesis (a third arm, say). Bottom line: the brain and our sense of self are very plastic, particularly given a feedback mechanism.

Brian Litt: The past, present and future of brain machine interfaces
Probably my favorite talk. Very grounded and accessible, but with speculative undertones. Talked about the neuroscience and engineering difficulties of BCIs. I'm posting some excerpts, because his talk was very content-rich:

different types of BCIs-
- one way vs two way (open or closed loop)
- invasiveness (non, partial, very) (influences bandwidth)
- spatial scale (topology, degrees of freedom)
- temporal scale (precision)

levels of organization- where to interact with the brain?
- neuron
- cortical column
- nuclei
- functional networks
- cortical regions

afferent BCIs (inject a signal)
- map the network
- choose 'connection' site
- inject a signal (MUST contain information)
- "neuroplasticity" helps interprets over time
- performance = f (information quality, accessibility, bandwidth…)

efferent BCIs (find signal, take it out)
- map the network
- find a recording site
- transduce a signal
- algorithms 'interpret'
- 'neuroplasticity' (but you get less help from the brain going out than going in)
- performance=f (resolution, signal quality, algorithms, information)
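As a toy version of that efferent pipeline-- record a noisy signal, transduce it, and let an algorithm 'interpret'-- here's a sketch using synthetic data and a plain linear decoder; nothing here reflects any real BCI stack:

```python
import numpy as np

# Minimal sketch of the efferent pipeline above: record a noisy,
# mixed neural signal across electrodes, then let an algorithm
# 'interpret' it. The linear decoder and synthetic data are stand-ins.
rng = np.random.default_rng(0)

T, n_channels = 500, 16
intent = np.sin(np.linspace(0, 8 * np.pi, T))          # the signal to extract
mixing = rng.normal(size=n_channels)                   # each electrode sees a blend
recordings = np.outer(intent, mixing) + 0.5 * rng.normal(size=(T, n_channels))

# 'interpret': fit a linear readout from the recordings back to intent
weights, *_ = np.linalg.lstsq(recordings, intent, rcond=None)
decoded = recordings @ weights

print(np.corrcoef(decoded, intent)[0, 1])   # signal quality bounds performance
```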

major challenges in BCIs:
- data dimensionality
- data rates-- up to 25 bits/min in 2000 (almost double now)
- biocompatibility
- tissue/electrode interface
- mapping circuits for meaningful injection/extraction points

state of the art for electrodes is bad…
12 million neurons get represented by 1 electrode. Likewise, electrodes don't measure the same neurons during different experiments.

Litt also talked about the technology behind cochlear implants and a bit about vision implants. The state of the art in cochlear implants is 22 1-dimensional channels, and a lot of useful information can be packed into this datastream if some audio filters and harmonic extractions are performed on the original sound.
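A crude sketch of that sort of channelization-- splitting audio into 22 bands and keeping each band's envelope-- using simple FFT masks as stand-ins for real filter chains:

```python
import numpy as np

# Rough sketch of cochlear-implant-style processing: split audio into a
# small number of frequency bands and keep only each band's envelope.
# 22 bands matches the channel count mentioned; the FFT masks here are
# crude stand-ins for clinical filter chains.
fs = 16000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t)

spectrum = np.fft.rfft(audio)
freqs = np.fft.rfftfreq(len(audio), 1 / fs)
edges = np.logspace(np.log10(100), np.log10(8000), 23)   # 22 log-spaced bands

envelopes = []
for lo, hi in zip(edges[:-1], edges[1:]):
    band = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
    band_signal = np.fft.irfft(band, len(audio))
    envelopes.append(np.abs(band_signal).mean())          # crude envelope

print([round(e, 4) for e in envelopes])   # most energy lands in two bands
```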

I was curious how plastic Litt thought brain structure was-- e.g., if you hooked up a cochlear implant system to the visual nerves, would you get sonar? He seemed sympathetic to this idea in correspondence. More speculatively, I found myself wondering whether there's any reason to believe we could coax the brain into productively utilizing, and relocating function to, a digital "third hemisphere" type prosthesis.

Demis Hassabis: Combining systems neuroscience and machine learning: a new approach to AGI
Hassabis essentially made the argument that there are three main ways to approach modeling intelligence, and that only two of these niches are being filled. He calls his third approach "systems neuroscience".

Marr, whom Hassabis refers to as the "father of computational neuroscience," identifies three levels of analyzing complex biological systems:
- computational - defining goals of the system (e.g., Opencog)
- algorithmic - how the brain does things - the representations and algorithms (Hassabis's approach)
- implementation - the medium - the physical realization of the system (e.g., Blue Brain, SyNAPSE)

So there are productive opportunities for people to try to reverse-engineer then formalize and/or reuse the brain's algorithms.

Hassabis also broke knowledge up into three categories that may(?) roughly correspond to these three levels: perceptual, conceptual, and symbolic. Analysis of perceptual knowledge is serviced by tools such as DBN, HMAX, HTM. Symbolic knowledge is serviced by logic networks. Conceptual knowledge is, according to H, not very well serviced.

It was an interesting talk, and I may need to watch it a second time to organize it into a coherent narrative. It felt content-rich and smart but somewhat conceptually disjoint.

Terry Sejnowski: Reverse-engineering brains is within reach
A smart but content-lite talk about extracting computational principles from the brain. More of a set-up to the debate than anything. Noted that our models of the brain come in many levels of abstraction, and progress in reverse-engineering brains will involve connecting these different maps (CNS, Systems, Maps, Networks, Neurons, Synapses, Molecules).

Dennis Bray: What Cells Can Do That Robots Can't
A smart but impenetrably dense survey of some complexities of cellular operation. Bray was brought in as a skeptical voice of the Old Guard Biological Establishment.

"As a card-carrying cell biologist, my loyalty lies with the carbon-based systems."

His focus seemed to be that cells are incredibly complex (there are 10^12 protein molecules in each of our cells) and impressively adaptable. It'd be very difficult to model the complexity or replace the adaptability, and the two are tightly linked.

Sejnowski/Bray debate: Will we soon realistically emulate biological systems?
An extremely polite and tame cage match between Sejnowski and Bray. They seemed to converge on the idea that in principle, we could emulate biological systems-- but our current models are Very Far from being realistic simulations. Sejnowski exhibited some tools (MCell, a Monte Carlo simulator of microphysiology) which apparently do a good job at modeling certain aspects of cell biology. Bray held out for full simulation, relating that Francis Crick once told him, "explanations at the molecular level have a unique power, because they can be verified in so many ways."

One important point that emerged was that cells are not stateless-- certain kinds of memory are embedded epigenetically, and such epigenetic memory has been shown to help restore memories in a damaged brain. In general, cells have a significant amount of memory/learning that helps them predict future conditions and primes them for future behavior… to get realistic neural behavior, presumably this memory will need to be modeled to a significant extent.

The short version: nobody knows how simple a model of a neuron we can get away with to realistically emulate a brain. However, it's safe to say we're not there yet.


All in all, a very interesting day. Not as many superstars as last year, and not a ton of diversity in topic (no 3d printing, no economics, etc), but a lot of good things were thought and said.

8.13.2010

SS2010

I'll be at the Singularity Summit this weekend in San Francisco. Look for M. Edward Johnson. I'll also have rocking sideburns.